Age-associated epigenetic drift: implications, and a case of epigenetic thrift? It is now well established that the genomic landscape of DNA methylation (DNAm) gets altered as a function of age, a process we here call ‘epigenetic drift’. The biological, functional, clinical and evolutionary significance of this epigenetic drift, however, remains unclear. We here provide a brief review of epigenetic drift, focusing on the potential implications for ageing, stem cell biology and disease risk prediction. It has been demonstrated that epigenetic drift affects most of the genome, suggesting a global deregulation of DNAm patterns with age. A component of this drift is tissue-specific, allowing remarkably accurate age-predictive models to be constructed. Another component is tissue-independent, targeting stem cell differentiation pathways and affecting stem cells, which may explain the observed decline of stem cell function with age. Age-associated increases in DNAm target developmental genes, overlapping those associated with environmental disease risk factors and with disease itself, notably cancer. In particular, cancers and precursor cancer lesions exhibit aggravated age DNAm signatures. Epigenetic drift is also influenced by genetic factors. Thus, drift emerges as a promising biomarker for premature or biological ageing, and could potentially be used in geriatrics for disease risk prediction. Finally, we propose, in the context of human evolution, that epigenetic drift may represent a case of epigenetic thrift, or bet-hedging. In summary, this review demonstrates the growing importance of the ‘ageing epigenome’, with potentially far-reaching implications for understanding the effect of age on stem cell function and differentiation, as well as for disease prevention. INTRODUCTION DNA methylation (DNAm) is a key epigenetic mark of regulatory potential (1) affecting mostly (but not exclusively) cytosines in a CpG context (2). The observation that DNAm in normal cells is altered as a function of age has a relatively long history, with early studies already reporting age-associated changes affecting a small number of individual gene loci, notably high CpG density promoters of important cancer genes such as IGF2 (3) and ESR1 (4; see also 5,6). Global (genome-wide) hypomethylation with age was also noted early on (7), with subsequent confirmation in a longitudinal study of global DNAm patterns (8). It was also observed that monozygotic (MZ) twins exhibit epigenetic divergence in DNAm patterns that increases with age and differences in lifestyle (9). The advent of novel biotechnologies, allowing highly accurate assessment of DNAm levels across at least tens of thousands of CpG sites (10), have allowed more recent studies to test and confirm these earlier observations. For instance, using Illumina Infinium 27K arrays (11), in which probes map mainly to gene promoters, three separate studies (12)(13)(14) have demonstrated that age-associated increases in DNAm happen preferentially at the promoters of key developmental genes, notably those bivalently marked in embryonic stem cells (15) and which are often also marked by the Polycomb Repressive Complex (PRC2) and thus generally referred to as PolyComb Group Targets (PCGTs) (16). Many of the bivalent genes/PCGTs encode known tumour-suppressors and transcription factors (TFs) necessary for differentiation. 
These data were derived from measuring DNAm in normal tissue, specifically in human whole-blood tissue (13,14), as well as in murine intestinal cells (12). Two more recent studies (17,18), using the more comprehensive and unbiased Illumina 450K arrays (19), have further confirmed that age-associated hypermethylation happens preferentially at high CpG density promoters, which often reside upstream of key developmental genes such as PCGTs. These studies have also confirmed that the majority of changes in the genome involve loss of methylation affecting CpG sites located in low CpG density regions, in line with the fact that most of these sites start out as methylated (17). Thus, it would appear that the machinery responsible for maintaining normal DNAm patterns becomes gradually deregulated with age, leading to deviations from a normal epigenetic state, a process we call epigenetic drift (18). Next, we briefly discuss the potential biological, functional, clinical and evolutionary significance of this age-associated epigenetic drift. TISSUE-SPECIFIC AND TISSUE-INDEPENDENT AGE-ASSOCIATED DNAm SIGNATURES One natural question that arises in the context of the observed epigenetic drift is whether this phenomenon is tissue-specific. Back in 2007, Issa and co-workers (20) already noted that age-related DNAm changes affecting certain genes were only seen in specific tissues. In trying to fully address this question, it is important to appreciate the complex nature of the profiled tissues, which often encompass a variety of different cell types. Thus, if the cellular composition of tissues alters with age, then this could lead to age-associated DNAm signatures which purely reflect these underlying changes in cell type. Indeed, a number of studies have now shown that some of the age-associated DNAm changes seen in whole-blood tissue, especially those involving hypomethylation, can be related to an age-associated skew in blood cell-type proportions, specifically in the relative proportion of myeloid to lymphoid cells (21)(22)(23). This is consistent with reports of global hypomethylation as being associated with commitment to the myeloid lineage (24). Thus, age-associated DNAm signatures reflecting changes in tissue composition are likely to be tissue-specific, and indeed, these generally do not validate in other tissue types (25). Therefore, changing cellular composition is a major confounder when assessing DNAm changes, and, in response to this, statistical methods aimed at dissecting this cellular heterogeneity have recently emerged (22,26). Application of these methods will be key since effect sizes associated with age and with other EWAS phenotypes could be small (27). Indeed, these methods have already been shown to have a dramatic impact on statistical inference and significance estimates (28). Interestingly, however, a number of recent studies have also demonstrated age-associated DNAm signatures that are largely independent of tissue type (12 -14,29). For instance, a DNA hypermethylation signature consisting of 69 CpGs mapping to promoters of PCGTs was validated not only in whole blood but also in normal tissue from the cervix, lung and even in ovarian cancer cells (13). A separate study derived a similar signature, enriched for bivalently marked genes, which was then validated in purified CD4+ T-cells and CD14+ monocytes, thus effectively discarding changing blood cell-type composition as the underlying reason for these signatures (14). 
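Regarding the statistical correction for cell-type composition mentioned above, one widely used strategy is reference-based deconvolution: a sample's methylation profile is projected onto methylation profiles of purified blood cell types, and the estimated proportions are then adjusted for. The sketch below is only an illustration of that general idea, not the exact published algorithms of the cited studies; the reference-profile matrix, function names and the residualisation step are assumptions.

```python
# Illustrative reference-based cell-composition adjustment (assumed, simplified).
import numpy as np
from scipy.optimize import nnls

def estimate_cell_proportions(beta_sample, reference_profiles):
    """Project one sample's beta values onto reference cell-type profiles.

    beta_sample        : (n_cpgs,) beta values at cell-type-discriminating CpGs
    reference_profiles : (n_cpgs, n_cell_types) mean beta values in purified cell types
    Returns non-negative proportions rescaled to sum to one.
    """
    w, _ = nnls(reference_profiles, beta_sample)
    return w / w.sum() if w.sum() > 0 else w

def adjust_for_composition(betas, proportions):
    """Residualise each CpG on estimated proportions before testing for age effects.

    betas       : (n_samples, n_cpgs) beta values
    proportions : (n_samples, n_cell_types) estimated cell proportions
    """
    X = np.column_stack([np.ones(len(proportions)), proportions])
    coef, *_ = np.linalg.lstsq(X, betas, rcond=None)
    return betas - X @ coef
```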
A recent meta-analysis focusing on brain and blood tissue also concluded that a large proportion of age-associated DNAm changes are common to both tissue types, with important implications for studying epigenetic effects in diseases like Alzheimer's disease (29). Another more recent study used a systems approach to identify interactome hotspots of age-associated differential methylation, which were found to target stem cell differentiation pathways and to be independent of tissue type (25) (Fig. 1A). These age-associated interactome modules, derived with Illumina 27K arrays, have also been validated in data generated using Illumina 450K arrays (Fig. 1B and C). In summary, although it is hard to absolutely discard age-associated changes in tissue composition as the underlying mechanism of these common tissue-independent signatures, following Occam's Razor the evidence does point towards the simplest explanation, which is that gradual age-associated accumulation of DNAm changes does occur independently of cell type. DNAm-BASED AGE PREDICTORS The observation that a number of age-associated DNAm signatures validate consistently across so many different tissue types is remarkable, given that analogous robust molecular signatures at the copy-number, mutational or transcriptomic levels have not been reported, or at least not at the same level of consistency as seen for DNAm. In fact, a meta-analysis of age-associated gene expression changes reported only marginally statistically significant agreement across studies, although, interestingly, it implicated genes with roles in metabolism and DNA repair (30). Telomere attrition and other molecular features such as T-cell DNA rearrangements can predict age, but the reported prediction accuracies are not high (31-34). More recently, a study reported mosaic copy-number changes with age, yet whether specific age-predictive copy-number-based signatures can be derived is unclear (35). In contrast, at least three separate studies have now reported DNAm-based age predictors (18,36,37). For instance, in Hannum et al. (18), a DNAm-based age signature derived in whole blood could predict the age of independent blood samples with a median absolute deviation of only ±5 years. The authors further noted that this signature was highly correlated with age in other tissue types, but that highly accurate absolute age estimates could be achieved only if the parameters were retrained (18). It therefore seems likely that the high predictive accuracy attained by tissue-specific signatures is driven partly, if not entirely, by tissue-specific effects. Mechanistically, if changes in cell-type composition with age are highly consistent and tissue-specific, then this would lead to correspondingly highly accurate tissue-specific age predictors. This seems particularly relevant in the haematopoietic system, where an age-associated skew towards the myeloid lineage has been observed (38-40). Thus, it remains to be seen whether tissue-independent age-associated DNA methylation signatures can achieve the predictive accuracy of tissue-specific DNAm signatures as reported in, e.g., Hannum et al. (18). Irrespective of the underlying biological mechanism, the high accuracy of some of the DNAm-based age predictors derived so far already promises exciting novel applications; for instance, in forensic science it has been suggested that they could be used to determine the approximate age of a suspect or victim from DNA samples collected at the crime scene (18,36).
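To make the idea of a DNAm-based age predictor concrete, the following toy sketch fits a penalised linear model of the general kind used in such studies to methylation beta values and reports the median absolute deviation of its predictions. It is a minimal illustration, not any of the published pipelines; the variable names, hyper-parameters and train/test split are assumptions.

```python
# Toy DNAm age predictor: penalised linear regression on beta values (illustrative only).
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

def fit_dnam_age_predictor(betas, ages):
    """betas: (n_samples, n_cpgs) methylation beta values; ages: (n_samples,) years."""
    X_train, X_test, y_train, y_test = train_test_split(
        betas, ages, test_size=0.3, random_state=0)
    model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_train, y_train)
    predicted = model.predict(X_test)
    # Error metric of the kind reported for such predictors.
    mad = np.median(np.abs(predicted - y_test))
    return model, mad
```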
However, before this or other applications may be considered, it will be important to validate existing and novel DNAm-based age predictors more extensively, using truly independent cohorts. In doing so, particular care must also be taken in relation to technical confounding factors (e.g. batch effects), as these could easily skew or bias results (41)(42)(43). Most of the age-predictive models reported so far are linear univariate or multivariate models, which assume that the rate at which age-associated DNAm changes accumulate is constant. This ageing rate has been shown to depend on genetic factors, and notably also on sex, with men exhibiting a faster rate (18). It remains to be seen, however, whether the rate of change is indeed constant, or whether instead a non-linear model could more accurately reflect how changes accumulate with age. For instance, a recent DNAm study performed on whole-blood samples from a paediatric population, consisting of boys aged between 3 and 17 years, showed that even at this young age, DNAm changes associated with age can be detected and, intriguingly, that these early changes account for most of the variation seen in the adult population (44). Thus, the authors of this study suggested a log-linear model to describe age-associated DNAm changes. However, not being a longitudinal study, and with the samples from the adult populations coming from different cohorts and generated by independent groups, this analysis could be subject to the usual caveats of confounding factors (41,43). A recent longitudinal twin study comparing buccal DNAm profiles of newborn twins with those at 18 months of age noted significant (3% differences at 12 months) age-associated DNAm changes (45). This is noteworthy in light of cross-sectional studies reporting typically 10% or at most 25% changes in DNAm across wider age ranges encompassing several decades (13,18). Thus, put together, the data from Alisch et al. (44) and Martino et al. (45) seem to suggest that age-associated epigenetic drift kicks in immediately after birth and may be particularly prominent during pre-puberty. These observations are surprising, yet also very interesting in light of studies proposing that the epigenome might be particularly sensitive to environmental stressors (e.g. nutrient deprivation) during pre-puberty (46,47). Thus, cumulative age-associated exposure to environmental factors during early life could be an important driver of epigenetic drift. IMPLICATIONS OF AGE-ASSOCIATED DNAm FOR AGEING, STEM CELL BIOLOGY AND REPROGRAMMING It is a striking observation that it is precisely the tissue-independent age DNAm signatures that also seem to validate in stem cell populations (13,25). For instance, an age-associated DNAm signature, derived from whole-blood tissue and enriched for PCGTs, was validated in bone marrow-derived mesenchymal stem cells (MSCs) from eight donors spanning a wide age range (13). A similar signature was also observed to be present in haematopoietic progenitor cells (HPCs) (48). Other tissue-independent age signatures have also been validated in MSCs and in HPCs (25). Thus, it would appear that the generic epigenetic drift observed across all tissue types may be driven by changes in underlying long-lived stem cells, thus explaining why these age-associated changes can be seen in differentiated cell populations with a high turnover rate such as the haematopoietic system.
If epigenetic drift does indeed occur in stem cells, then this drift could, over time, affect stem cell function. Supporting this possibility, a recent bioinformatics study (25) identified tissue-independent age-associated differential methylation interactome hotspots, specifically targeting a number of stem cell differentiation pathways (Fig. 1). One of these hotspots was enriched for stem cell TFs, including UTF1, SOX8 and SOX2, with their promoter CpGs gradually becoming hypermethylated with age. UTF1 is a TF necessary for the differentiation of human embryonic stem cells and has also recently been implicated as an important marker of reprogramming efficiency (49). Another age-associated network hotspot was found to be enriched in genes involved in WNT signalling, a pathway of key importance for normal differentiation of stem cells (50). Interestingly, age-associated changes in WNT-signalling activity are well documented (51,52), yet whether this is due to epigenetic deregulation is unclear. Interpretation is complicated by the fact that the promoters of both negative regulators and receptors of this pathway all become hypermethylated with age, hence the net effect of these changes is hard to predict. Importantly, a number of experimental studies have shown that the age-associated DNAm changes seen in stem cells may underlie the observed decline in stem cell function, for instance in the case of MSCs (53) and myogenic stem cells (54). Further important support for this hypothesis was recently provided by a study showing that DNAm changes associated with haematopoietic stem cell (HSC) ontogeny happen preferentially at PRC2 targets, and in particular at genes that would normally be expressed in differentiated progeny (55). In fact, this study also reported silencing (through DNA hypermethylation) of key TFs needed for cell-lineage specification in aged HSCs, although the overall genome-wide correlation between DNAm and gene expression was very low, suggesting that most DNAm changes at PRC2 targets in HSCs only affect their transcriptional competency when passed on to downstream progeny (55). In summary, age-associated DNAm changes in adult stem cells possibly underpins the observed decline in stem cell function (Fig. 2), including the observed myeloid skewing of the ageing haematopoietic system (55) and immunosenescence (56). Reprogramming of adult differentiated cells into induced pluripotent stem cells (iPSC) is accompanied by widespread changes in the DNAm landscape (57). Interestingly, loci undergoing age-associated DNAm changes have been observed to overlap significantly with those undergoing changes in reprogramming experiments (57,58). Thus, an intriguing and exciting possibility, suggested now by a number of studies and reviewed in Rando et al. (59), is that of 'epigenetic rejuvenation', whereby age-driven accumulation of DNAm changes could be rewound and reset to zero, thus resembling the DNAm patterns of embryonic stem cells. AGE-ASSOCIATED DNAm, DISEASE RISK AND CAUSALITY Age-associated DNAm targeting the promoters of key tumoursuppressor genes in normal tissue has long been noted (5,7). Moreover, age-associated epigenetic divergence has been observed in MZ twins, with the drift proportional to the environmental divergence as measured by differences in lifestyle and time spent apart (9), suggesting that epigenetic drift could underlie their observed disease discordancy (60,61). 
Using Illumina beadarrays, a recent study demonstrated a highly statistically significant overlap between genes undergoing age-associated hypermethylation in their promoters and gene promoters undergoing hypermethylation in cancer (13), both involving preferentially PRC2 targets (62)(63)(64). Furthermore, one also observes strong overlaps of these genes with gene promoters characterized by DNAm changes associated with specific cancer risk factors [e.g. smoking (65), inflammation (66)(67)(68), obesity (69) and viral oncoprotein expression (70)]. Thus, age, cancer and cancer risk factor DNA hypermethylation signatures all seem commonly enriched for bivalent marks and PCGTs. However, not all DNAm changes associated with known cancer risk factors correlate with those seen in ageing; for instance, this seems to be the case for sunlight/UV exposure (71). In contrast to DNAm, it is only more recently that the age-associated mutational burden in normal tissue has been assessed (72), and therefore it is as yet unclear if age- and cancer-associated mutational signatures overlap to the same degree as observed at the DNAm level. Given that (i) age-associated DNAm changes are seen in normal tissue, (ii) that these are shared with those associated with cancer risk factors and (iii) that age is the strongest demographic risk factor for cancer (73), it is thus entirely plausible that these DNAm changes could predispose to cancer and thus be used for early detection or risk prediction. Supporting this, an age-associated DNAm signature enriched for PCGTs was found to be aggravated in intraepithelial neoplasias of the cervix, a pre-invasive cancer lesion (13). In a subsequent study that used a novel statistical risk-prediction algorithm based on epigenetic variable outliers, DNAm profiles measured in cytologically normal cervical swabs collected 3 years in advance of morphological transformation were shown to predict the future risk of a high-grade cervical intraepithelial neoplasia (CIN2+) with a low, yet statistically significant, AUC of approximately 0.64 (74). Importantly, the CpG sites making up this risk classifier were shown to be associated with age in normal cervix and other normal tissue types including blood, as well as being highly differentially variable between individuals at different risk of developing CIN2+ (74).

Figure 2. Putative effects of epigenetic drift. In normal ageing, whereby an individual is not significantly exposed to disease risk factors and does not have an unfavourable genotype, the deregulation of DNAm happens only gradually and possibly in a linear fashion, as demonstrated by highly accurate age-predictive linear models (18). In contrast, an individual exposed to risk factors, either environmental or genetic, may experience an aggravated or premature ageing profile, characterized by an abnormally high deregulation of DNAm patterns, increasing the risk of age-related diseases like cancer or diabetes. One can further hypothesize that individuals with a favourable genotype (longevity genes) and with a healthy lifestyle may preserve a more intact epigenome and hence experience longevity. Reprogramming of aged cells into iPSCs and regeneration of differentiated cells may provide a mechanism for epigenetic rejuvenation. In addition to epigenetic drift, telomere shortening has been associated with ageing, age-associated stem cell dysfunction and disease risk factors.
Supporting this, inter-individual variable DNAm sites have also been shown to correlate with disease predisposition (75) and to be enriched for bivalent/ PCGT genes (76). The overlap between DNAm changes associated with age in normal tissue with those conferring risk of cervical cancer is intriguing. Interestingly, overexpression of an HPV-associated viral oncoprotein has recently been shown to lead to widespread DNA hypermethylation at promoters of PRC2 targets (70). This suggests that at least some of the DNAm changes associated with the risk of cervical cancer are likely to have been caused by HPV infection. In this regard, it is important to recall, however, that HPV infection, although necessary, is not a sufficient factor for cervical cancer. Hence, it is possible that age-associated epigenetic drift, possibly linked with a cumulative exposure to other risk factors, contributes to disease predisposition and that further HPV-induced epigenetic alterations then synergize with these to allow initiation of morphological transformation (74). The intriguing link between age-associated epigenetic drift and the changes seen in cancer and in precursor cancer lesions suggests a causal contributing role for DNAm in disease initiation and may also extend to other diseases. Indeed, a recent study analysing DNAm profiles of patients with Hutchinson -Gilford Progeria and Werner Syndrome, a premature ageing condition, concluded that DNAm changes may play a key causal or mediating role in these diseases (77). In fact, in those patients where the syndrome could be linked to genetic mutations in known causal genes (LMNA and WRN), aberrant DNAm profiles showed a remarkable overlap with those associated with age. Interestingly, DNAm changes, although distinct ones, were also observed in those patients not carrying the causal genetic mutation, suggesting that DNAm changes could play a causal role in these subsets of patients. Further support for a causal role of DNAm in mediating disease risk was provided by a recent EWAS study, which identified a number of methylation quantitative trait loci associated with rheumatoid arthritis (28). Finally, a number of epidemiological studies have also linked epigenetic changes with overall stress levels, itself a major risk factor for neurological diseases (78). AGE-ASSOCIATED EPIGENETIC DRIFT: FUTURE DIRECTIONS It is clear that the epigenome is altered by an age-associated epigenetic drift, whereby normal methylation patterns become deregulated with age. High CpG density promoters, and in particular those mapping to developmental genes, acquire methylation, whereas CpGs located outside these regions tend to lose methylation with age. However, a number of key questions require urgent attention. First, what is the precise biological mechanism (or mechanisms) leading to the deregulation of the normal DNAm patterns? While age-dependent expression of DNA methyltransferase genes has been reported (79), other epigenetic modulators (e.g. Sirtuins) may likely play an equally or even more important role (80)(81)(82). Methyl-binding domain proteins (e.g. MBD4) also seem implicated in modulating the rate of epigenetic drift (18). More fundamentally, it has been proposed that long-term deregulation of DNAm patterns occurs in response to spontaneous loss of histone modifications that happen on shorter timescales and in direct proportion to the number of cell divisions (55,83). 
To elucidate the mechanisms of regulation, it might help to investigate the degree of spatial stochasticity of age-associated DNAm changes. Besides CpG density, fairly little is known as to which other DNA sequence features may influence these age-related changes. Thus, it would be interesting to see whether age-associated methylation changes 'cluster' spatially, as is observed, for instance, in cancer (84,85), or if instead they implicate a higher proportion of 'singleton CpGs', i.e. those that exhibit solitary DNAm changes and which are, therefore, less likely to be of functional significance. Heyn et al. (17) reported an overall loss of spatial correlations in the DNA methylome of centenarians, yet another study did report finding extended age-associated DMRs (aDMRs) (45). How frequent aDMRs are and how they compare with cancer DMRs in terms of their spatial correlative patterns remain to be seen. The precise pattern of CG dinucleotides in sequences affected by age-associated DNAm may also point to which epigenetic enzymes might be implicated (86). Alternatively, one may ask whether there is preferential enrichment of specific TF motifs among the sites that acquire age-associated DNAm changes, which would then point to the importance of specific TFs in mediating this age-associated deregulation of DNAm patterns, analogous to the TF-mediated redistribution of DNAm patterns one observes in response to stem cell differentiation and disease (87,88). A second pressing question relates to the functional consequences of epigenetic drift, since it would appear that the association between age-driven DNAm and gene expression changes is, at the very best, only marginal (18,25,55). Moreover, it could well be that the weak association between DNAm and gene expression observed in whole blood (18) is entirely driven by underlying changes in blood cell-type composition, with no direct effect on cell function. Thus, it will be interesting to perform comprehensive paired DNAm and transcriptomic profiling of specific genes/pathways (e.g. the WNT-signalling pathway) undergoing age-associated DNAm changes in a large number of cell-purified samples to assess the functional impact of the epigenetic changes. It is very likely that only a very small fraction of the age-associated epigenetic drift is of functional consequence, with the few functional changes ultimately affecting key transcriptional regulators, thus compromising stem cell differentiation (55) or predisposing cells to neoplastic transformation (74). A third key outstanding question is the dissection of age-associated DNAm changes that grow with 'chronological age', reflecting the number of cell divisions of long-lived stem cell populations, from the age-associated DNAm changes that may result from cumulative exposure to environmental risk factors, as well as from the changes that may accumulate with age in response to underlying genetic risk factors (8,18) (Fig. 2). Two longitudinal studies, one on MZ twins (45) and another involving families from an Icelandic cohort (8), have shown the importance of genotype in influencing the DNAm changes seen with age. Consideration of these separate components thus leads to the notion of a 'biological' age, as measured by the overall deregulation of DNAm in the genome of an individual, and which may be indicative of an overall prospective disease risk (Fig. 2).
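The notion of 'biological' age above is often operationalised as the deviation of DNAm-predicted age from chronological age. The following minimal calculation illustrates that idea under stated assumptions; it is not taken from the cited studies.

```python
# Illustrative 'age acceleration' measures: raw gap and residual-adjusted gap.
import numpy as np

def age_acceleration(predicted_age, chronological_age):
    """Return (raw gap, residual after regressing predicted on chronological age).
    Positive values suggest older-than-expected (accelerated) epigenetic ageing."""
    predicted_age = np.asarray(predicted_age, dtype=float)
    chronological_age = np.asarray(chronological_age, dtype=float)
    raw_gap = predicted_age - chronological_age
    slope, intercept = np.polyfit(chronological_age, predicted_age, 1)
    residual = predicted_age - (slope * chronological_age + intercept)
    return raw_gap, residual
```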
Epigenetic studies in model organisms where (isogenic) animals can be kept under controlled environmental conditions, allowing, for instance, a sustained and constant exposure to risk factors, seem key in order to help dissect the relative contributions of the intrinsic and extrinsic 'epigenetic clocks' in determining the biological age of the organism. Although Beerman et al. (55) studied DNAm changes during HSC ontogeny and concluded that most of the age-associated hypermethylation at PRC2 targets was determined by the proliferative history of the HSCs (i.e. the intrinsic clock), these were not cells that had been exposed to the effects of environmental risk factors, as it might happen, for instance, under inflammatory conditions. A fourth key question concerns the relative importance of epigenetic drift in comparison with other age-associated biological effects, most notably the well-known shortening of telomeres with age (31,(89)(90)(91). Curiously, age-associated epigenetic drift and telomere shortening share many similar properties: both processes are influenced by genotype (18,92,93), both have been proposed to lead to stem cell dysfunction (55,94), both are aggravated in men compared with women (18,91,92), both are tissue-independent phenomena (13,95), and both have been linked to disease, disease risk and disease risk factors (13,18,74,(96)(97)(98)(99)(100). For instance, a recent study reported seven genetic variants associated with leucocyte telomere length (LTL), with inter-individual variation in LTL being associated with cancer and other age-related diseases (92). Thus, both age-associated epigenetic drift and telomere shortening have been proposed as markers of biological ageing (13,18,92,101). In this regard, it is important to note that although epigenetic drift seems to outperform telomere length as a predictor of chronological age, it is the estimated deviations between predicted (i.e. biological) and chronological age that are potentially of most interest and which may account for the observed variation in disease risk. Thus, it remains to be seen whether epigenetic drift or telomere attrition is a more relevant marker of biological ageing. Matched LTL and DNAm data for MZ twin pairs discordant for disease status or for exposure to environmental risk factors could elucidate the relative contributions of these two biological processes to the biological ageing and disease risk phenotype. EPIGENETIC DRIFT: A CASE OF EPIGENETIC THRIFT? Finally, it is of interest to discuss the potential evolutionary significance of age-associated epigenetic drift. One attractive framework in which to interpret epigenetic drift is in the context of evolutionary theories of ageing. One competing theory argues that ageing emerged early in evolution as a means of controlling population size (102,103). It is conceivable that during times of limited resources or famine, which would have been frequent in early living history, overpopulation could lead to resource depletion and severe risk of mass extinction. Thus, in our ancestral species, natural 'group' selection could have favoured genetic and epigenetic mechanisms that promote ageing, with the damaging effects of these mechanisms only kicking in after the reproductive period, thus allowing new improved gene pools to take over (104) and keeping overall populations at a stable and sustainable level (102). 
This viewpoint is supported by a related idea, grounded on evolutionary mathematical principles, and referred to as highly optimized tolerance (105). This evolutionary theory proposes that biological organisms, and multi-cellular species in particular, represent states of highly optimized tolerance, providing robustness to common perturbations, but simultaneously, and also inevitably, implicating costly trade-offs, such as an increase in fragility, as exemplified by the ageing phenotype. Thus, it is tempting to speculate that epigenetic drift is one possible mechanism contributing to the ageing phenotype (e.g. through a decline in stem cell function) and to an associated increased risk of disease and death (e.g. through increased predisposition to cancer or other age-related diseases, and possibly mediated by immunosenescence). As mentioned earlier, another mechanism could be telomere shortening (96), and so both telomere shortening and epigenetic drift may be seen as providing an evolutionary benefit to the species as a whole, by managing population dynamics through ageing and increased fragility. There are a number of other important observations which further support a role for epigenetic drift in human evolution. For instance, a recent study has shown that epigenetic drift does not happen randomly in the context of the human interactome, but that it preferentially affects genes of low connectivity and centrality (106). Thus, genes carrying out integral housekeeping and cellular functions, and which are generally of high connectivity and centrality, appear to be more protected from epigenetic drift. Since epigenetic drift kicks in straight after birth (45) and is prominent even in paediatric populations (44) (i.e. well before the reproductive period), it is tempting to speculate that drift affecting highly integral and essential genes would be weeded out by natural selection. In contrast, natural selection would not be able to efficiently weed out the epigenetic drift targeting the non-essential and less integral genes, since the main effects of drift at these genes only show up after the reproductive age. Indeed, as argued earlier, epigenetic drift targeting non-essential genes may even be selected for as a mechanism underlying ageing and increased fragility, which are necessary for population control. The observation that epigenetic drift is prominent in early life (44,45), and that associated epigenetic changes could be heritable (107-110), further suggests that epigenetic drift may represent another example of 'thrift', a term first coined by James Neel in the genetic context (111), whereby genes that would have conveyed an evolutionary advantage to our ancestral species would now lead to an opposing, apparently detrimental, effect in today's resource-rich society. Indeed, the genetic thrift hypothesis has recently been invoked to explain the current obesity and metabolic disease epidemics (112), according to which, genes favouring our ancestors, for instance in promoting fat storage in anticipation of possible famines, would now have detrimental effects. Interestingly, epigenetic (or phenotype) thrift has also been proposed as the underlying mechanism to explain the increased incidence of diabetes and cardiovascular disease among people born during the 1944 Dutch Winter Famine (113,114).
A compromised, resource-depleted in utero environment could lead to epigenetic deregulation of metabolic genes to promote a more favourable metabolic state, which in a food-rich environment, however, would only be detrimental (114). Thus, if epigenetic changes are heritable, this would allow epigenetic drift to quickly shape phenotypes and evolution. In summary, it is tempting to speculate that DNAm changes associated with epigenetic drift, which may be heritable, represent a case of epigenetic thrift, or perhaps even a case of evolutionary bet-hedging (83,105,115), contributing to both phenotypic diversity and the ageing phenotype, in a way that optimized evolutionary adaptation and species survival in the face of potential uncertain adversities. Conflict of Interest statement. None declared. FUNDING A.E.T. is supported by a Heller Research Fellowship. J.W. is supported by an EPSRC/BBSRC PhD studentship awarded to CoMPLEX. S.B. was supported by the Wellcome Trust (WT084071) and a Royal Society Wolfson Research Merit Award (WM100023). Funding to pay the Open Access publication charges for this article was provided by the Wellcome Trust (WT084071).
Explainability via Responsibility
Procedural Content Generation via Machine Learning (PCGML) refers to a group of methods for creating game content (e.g. platformer levels, game maps, etc.) using machine learning models. PCGML approaches rely on black box models, which can be difficult to understand and debug for human designers who do not have expert knowledge about machine learning. This can be even trickier in co-creative systems, where human designers must interact with AI agents to generate game content. In this paper we present an approach to explainable artificial intelligence in which certain training instances are offered to human users as an explanation for the AI agent's actions during a co-creation process. We evaluate this approach by approximating its ability to provide human users with explanations of the AI agent's actions and to help them cooperate more efficiently with the AI agent. Introduction In science and engineering, a black box is a component that cannot have its internal logic or design directly examined. In artificial intelligence (AI), "the black box problem" refers to certain kinds of AI agents for which it is difficult or impossible to naively determine how they came to a particular decision (Zednik 2019). Explainable artificial intelligence (XAI) is an assembly of methods and techniques to deal with the black box problem (Biran and Cotton 2017). Machine Learning (ML) is a subset of artificial intelligence that focuses on computer algorithms that automatically learn and improve through experience (Goodfellow, Bengio, and Courville 2016). The current state-of-the-art models in ML, deep neural networks, are black box models. Intuitively, it is difficult to cooperate with an individual when you cannot understand them. This is critical in co-creative systems (also called mixed-initiative systems), in which a human and an AI agent work together to produce the final output (Yannakakis, Liapis, and Alexopoulos 2014). There is a wealth of existing methods in the field of XAI (Adadi and Berrada 2018). For example, there are methods that draw comparisons between the input and the output of a model (Cortez and Embrechts 2011; Cortez and Embrechts 2013; Simonyan, Vedaldi, and Zisserman 2013; Bach et al. 2016; Dabkowski and Gal 2017; Selvaraju et al. 2017), or that analyze the output in terms of the model's parameters (Boz and Hillman 2000; García, Fernández, and Herrera 2009; Letham et al. 2015; Hara and Hayashi 2018). Alternatively, there is the strategy of attempting to simplify the model (Che et al. 2015; Tan et al. 2017; Xu et al. 2018). The major difference between our approach and these previous ones is that we present a method which makes it possible to explain an AI agent's action through a detailed inspection of what it has learned during the training phase. Questions we might want to ask an AI agent include "How did you learn to do that action?" or "What did you learn that led you to make that decision?" (Cook et al. 2019). We sought to develop an approach that could answer these questions. Thus, our approach needed to find explanations for the AI agent's decisions based on its training data. In this paper, we make use of the problem domain of a co-creative Super Mario Bros. level design agent. We use this domain since XAI is critical in co-creative systems.
We introduce an approach to detect the training instance that is most responsible for an AI agent's action. We can then present the most responsible training instance to the human user as an answer to how the AI agent learned to make a particular decision. To evaluate this approach we compare the quality of these responsible training instances to random instances as explanations in two experiments on existing data. Related Work Our problem domain is generating explanations for a PCGML co-creative agent. Therefore we separate the prior related work into three main areas: Procedural Content Generation via Machine Learning (PCGML), co-creative systems, and Explainable Artificial Intelligence (XAI). Procedural Content Generation via Machine Learning (PCGML) Procedural Content Generation via Machine Learning (PCGML) is a field of research focused on the creation of game content by machine learning models that have been trained on existing game content (Summerville et al. 2018). Prior work has generated such levels using a variety of models, including the sequence-based approach of Summerville, Philip, and Mateas (2015), Long Short-Term Memory Recurrent Neural Networks (LSTMs) (Summerville and Mateas 2016), Autoencoders (Jain et al. 2016), Generative Adversarial Neural Networks (GANs) (Volz et al. 2018), and genetic algorithms through learned evaluation functions (Dahlskog and Togelius 2014). In a recent work, Khalifa et al. proposed a framework to generate game levels using Reinforcement Learning (RL), though they did not evaluate it in Super Mario Bros. We also draw on reinforcement learning for our agent; however, our approach differs from this prior work in terms of focusing on explainability. Co-creative systems There are numerous prior co-creative systems for game design. These approaches traditionally have not made use of ML; instead they rely on approaches like heuristic search, evolutionary algorithms, and grammars (Smith, Whitehead, and Mateas 2010; Liapis, Yannakakis, and Togelius 2013; Yannakakis, Liapis, and Alexopoulos 2014; Deterding et al. 2017; Baldwin et al. 2017; Charity, Khalifa, and Togelius 2020). ML methods have only recently been incorporated into co-creative game content generation. Guzdial et al. proposed a Deep RL agent for co-creative Procedural Level Generation via Machine Learning (PLGML). In another recent work, Schrum et al. presented a tool for applying interactive latent variable evolution to generative adversarial network models that produce video game levels (Schrum et al. 2020). The major difference between our approach and previous ones is that it explains an AI partner's actions based on what it learned during training. It is important to note that we are not actually evaluating our approach in the context of co-creative interaction with a human subject study. We are only making use of data from prior studies in which humans interacted with ML and RL agents in co-creative systems. Explainable Artificial Intelligence (XAI) The majority of existing XAI approaches can be separated according to which of two general methods they rely on; one of these is visualizing the learned features of a model (Erhan et al. 2009; Simonyan, Vedaldi, and Zisserman 2013; Nguyen, Yosinski, and Clune 2015; Nguyen, Yosinski, and Clune 2016; Nguyen et al. 2017; Olah, Mordvintsev, and Schubert 2017). There are a few prior works focused on XAI applied to game design and game playing. Guzdial et al.
presented an approach to Explainable PCGML via Design Patterns in which the design patterns act as a vocabulary and mode of interaction between the human user and the model. There exist a few approaches to explain an RL agent's actions (Puiutta and Veith 2020). Madumal et al. presented an approach that learns structural causal models to derive causal explanations of the behavior of model-free RL agents (Madumal et al. 2019). Kumar et al. presented a deep reinforcement learning approach to control an energy storage system. They visualized the learned policies of the RL agent through the course of training and presented the strategies followed by the agent to users (Kumar 2019). Cruz et al. proposed memory-based explainable reinforcement learning (MXRL), in which an agent explains the reasons why some decisions were taken in certain situations using an episodic memory (Cruz, Dazeley, and Vamplew 2019). In another recent paper, an approach was presented that employs explanations as feedback from humans in a human-in-the-loop reinforcement learning system (Guan, Verma, and Kambhampati 2020). To the best of our knowledge, this is the first XAI work focused on the training data of a target ML model. Our approach differs from existing XAI work in its detailed inspection and alteration of the training phase. System Overview In this paper, we present an approach for Explainable AI (XAI) that aims to answer the question "What did the AI agent learn during training that led it to make that specific action?". As is shown in Figure 1, the general steps of the approach are as follows: First, while training a DNN, we detect the training instance (or instances) that maximally alters each neuron inside the network. Secondly, during testing, we pass each instance through the network and find the neuron that is most activated (Erhan, Courville, and Bengio 2010). Then, given the information from the first step, we can easily identify an instance (or instances) from the training data that maximally impacted the most activated neuron. We refer to this as "the most responsible training instance" for the AI agent's action. The intuition is that the user can take this explanation as something akin to the end goal of the agent taking that action. Our hope is that it will help the user decide whether to keep or remove an addition made by the AI. For example, in Figure 3, given the most responsible level as the explanation, the user might keep the lower of the two Goombas, despite the fact that it seems to be floating, if they can match it to the Goombas from the most responsible level. For this purpose, we pre-trained a Deep RL agent using data from interactions of human users with three different ML level design partners (LSTM, Markov Chain, and Bayes Net) to generate Super Mario Bros. levels. This is the same Deep RL architecture and data from prior work by Guzdial et al. (Guzdial, Liao, and Riedl 2018) for co-creative Procedural Level Generation via Machine Learning (PLGML), in which they made use of a level design editor that is publicly available online (https://github.com/mguzdial3/Morai-Maker-Engine). The agent is designed to take in a current level design state and to output additions to that level design, in order to iteratively complete a level with a human partner. Our training inputs are states and the outputs are the Q-table values for taking a particular action in the particular state. The input comes into the network as a state of shape (40x15x34). The 40 is the width and 15 is the height of a level chunk. At each x,y location there are 34 possible level components (e.g.
ground, goomba, pipe, mushroom, tree, Mario, flag, ...) that could be placed there. As is shown in the visualized architecture of the Convolutional Neural Network (CNN) in Figure 2, it has three convolutional layers and a fully connected layer followed by a reshaping function to make the output in the form of the action matrix, which is (40x15x32). The player (Mario) and the flag are level entities that cannot be counted as an action, so there are 32 possible action components instead of the 34 state entities. Our activation function is "Leaky ReLU" for every layer, the loss function is "Mean Squared Error" and the optimizer is "Adam", with the network built in Tensorflow (Abadi et al. 2016). We make use of this existing agent and data since it is the only example of a co-creative PCGML agent where the data from a human subject study is publicly available. During each training epoch we employ a batch size of one to track when each training instance passes through the network. We calculate and store the change of neuron weights between batches. After training, by summing over the changes of each neuron weight with respect to the training data, we are able to identify which training instance maximally results in alteration of a neuron. Since positive and negative values can counteract each other's effects, it is important not to look at the absolute values until the end of the training. We can then sum and store this information inside eight arrays of shape (4x4x34) for the first convolutional layer, 16 arrays of shape (3x3x8) for the second convolutional layer, and 32 arrays of shape (3x3x16) for the third convolutional layer. These are the shapes of the filters in each layer. We name these arrays Most Responsible Instance for each Neuron in each Convolutional layer (MRIN-Conv1, MRIN-Conv2, and MRIN-Conv3). These data representations link neurons to IDs representing a particular instance of a human user working with the AI in the co-creative tool. We can then search these arrays and find the ID of the training instance that is most responsible for changes to a particular weight. Our end goal is to determine the most responsible training instance for a particular prediction made by our trained CNN. To do that, we need to find out what part of the network was most important in making that prediction. We can then determine the most responsible instance for the final weights of this most important part of the network. The most activated filter of each convolutional layer is the filter that contributes to the slice with the largest magnitude in the output of that layer. Hence the most activated filter can be considered the most important part of the convolutional layer for that specific test instance (Erhan, Courville, and Bengio 2010). For example, we pass a test instance into the network. A test instance is a (40x15x34) state that is a chunk of a partially designed level. Since the first convolutional layer has eight 4x4x34 filters with 'same' padding, the output would be in the shape of (40x15x8). Then we find the (40x15) slice with the largest values. The most activated filter is the (4x4x34) array in our convolutional layer which led to the slice with the greatest magnitude. Finally, once we have the maximally activated filter we can identify the most responsible training instance (or instances) by querying the MRIN-Conv arrays we built during training.
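The bookkeeping described above can be sketched as follows. This is a simplified, hypothetical reconstruction rather than the authors' released code: the model definition, the tensor layout (height x width x components), and all function names are assumptions, and only the first convolutional layer's MRIN array is tracked to keep the example short.

```python
# Sketch: attribute each conv1 weight's changes to training instances, then look up
# the "most responsible" instance for a test state (illustrative reconstruction).
import numpy as np
import tensorflow as tf

INPUT_SHAPE = (15, 40, 34)   # assumed layout: height x width x level components
N_ACTIONS = 32

def build_model():
    """Rough stand-in for the described CNN: three conv layers, a dense layer, a reshape."""
    inp = tf.keras.Input(shape=INPUT_SHAPE)
    x = tf.keras.layers.Conv2D(8, 4, padding="same", name="conv1")(inp)
    x = tf.keras.layers.LeakyReLU()(x)
    x = tf.keras.layers.Conv2D(16, 3, padding="same", name="conv2")(x)
    x = tf.keras.layers.LeakyReLU()(x)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", name="conv3")(x)
    x = tf.keras.layers.LeakyReLU()(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(15 * 40 * N_ACTIONS)(x)
    out = tf.keras.layers.Reshape((15, 40, N_ACTIONS))(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def train_with_attribution(model, states, q_targets, epochs=1):
    """Train with batch size one, accumulating per instance the signed change it caused
    in every weight of the first conv layer, then build MRIN-Conv1 from the magnitudes."""
    conv1 = model.get_layer("conv1")
    deltas = np.zeros((len(states),) + conv1.get_weights()[0].shape)  # (n, 4, 4, 34, 8)
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(states, q_targets)):
            before = conv1.get_weights()[0].copy()
            model.train_on_batch(x[None], y[None])
            deltas[i] += conv1.get_weights()[0] - before
    # For every weight: the ID of the instance whose summed change has the largest magnitude.
    mrin_conv1 = np.abs(deltas).argmax(axis=0)
    return mrin_conv1

def most_responsible_instance(model, mrin_conv1, test_state):
    """Find the most activated conv1 filter for a test state, then return the training
    instance ID that appears most often in the corresponding slice of MRIN-Conv1."""
    probe = tf.keras.Model(model.input, model.get_layer("conv1").output)
    activations = probe(test_state[None].astype(np.float32)).numpy()[0]  # (15, 40, 8)
    top_filter = np.abs(activations).sum(axis=(0, 1)).argmax()
    ids, counts = np.unique(mrin_conv1[..., top_filter], return_counts=True)
    return int(ids[counts.argmax()])
```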
The most responsible training instance is the ID that is repeated most often in the MRIN-Conv array associated with the maximally activated filter. We chose the most repeated ID since it is the one that most frequently impacted the majority of the neurons in the filter during training. Evaluation In this section, we present two evaluations of our system. We call the first evaluation our "Explainability Evaluation" as it addresses the ability of our system to provide explanations that help a user predict an AI agent's actions. We call the second evaluation our "User Labeling Error Evaluation" as it addresses the ability of our system to help human users identify positive and negative AI additions during the co-creative process. Both evaluations approximate the impact of our approach on human partners by using existing data of AI-human interactions. Essentially, we act as though the pre-recorded actions of the AI agent were outputs from our Deep RL agent and identify the responsible training instances as if this were the case. Due to the fact that our system derives examples as explanations for the behavior of a co-creative Deep RL agent, a human subject study would be the natural way to evaluate our system. However, prior to a human subject study, we first wanted to gather some evidence of the value of this approach. Explainability Evaluation The first claim we made was that this approach can help human users better understand and predict the actions of an AI agent. In this experiment we use the most responsible level as an approximation of the AI agent's goal, in other words, the final level the AI agent is working towards. The most responsible level refers to the level at the end of a human user's interactions with an AI agent. We identify this level by finding the most responsible training instance as above and identifying the level at the end of that training sequence. This experiment is meant to determine if this can help a user to predict the AI agent's actions. To do this, we passed test instances into our network and found the most responsible training instances. We then compared the most responsible level for some current test instance to the AI agent's action in the next test instance. If the most responsible level is similar to the action, it would indicate that the most responsible level can be a potential explanation for the AI agent's action by priming the user to better predict future actions by the AI agent. In comparison, we randomly selected 20 levels from the training data and found their similarities to the AI agent's action in the next test instance. If our approach outperforms the random levels, it will support the claim that the responsible level is better suited to helping predict future AI agent actions compared to random levels. We used two different sets of test data: (A) Our first testset is derived from a study in which users interacted with pairs of three different ML agents as mentioned in our System Overview section. We used the same testset identified in that paper. (B) Our second testset is obtained from a study in which expert level designer users interacted with the trained Deep RL agent (Guzdial et al. 2019). If we find success with the first testset, that would indicate that our trained Deep RL agent is a good surrogate for the original three ML agents, since we would be in effect predicting the next action of one of these agents. Good results for the second testset would demonstrate the capability for prediction of the Deep RL agent's actions itself.
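A rough sketch of this comparison is given below. It is an assumed reconstruction, not the authors' code, and it implements the three-by-three non-empty patch similarity (the "local overlap ratio") that is defined in the next section; levels and actions are simplified here to 2D integer tile grids, with 0 assumed to mean an empty tile.

```python
# Sketch of the explainability comparison: most responsible level vs. random levels.
import random
import numpy as np

def nonempty_patches(grid, size=3):
    """All size x size patches of a (H, W) tile-ID grid containing at least one tile."""
    h, w = grid.shape
    patches = []
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            patch = grid[r:r + size, c:c + size]
            if np.any(patch != 0):                 # 0 = empty tile (assumption)
                patches.append(patch.tobytes())    # hashable, exact-match comparison
    return patches

def local_overlap_ratio(level, action):
    """Matched 3x3 non-empty patches, each counted once, normalised by the action's patch count."""
    level_patches = nonempty_patches(level)
    action_patches = nonempty_patches(action)
    if not action_patches:
        return 0.0
    matches, pool = 0, list(level_patches)
    for p in action_patches:
        if p in pool:
            pool.remove(p)                         # count each matched patch only once
            matches += 1
    return matches / len(action_patches)

def compare_to_random(responsible_level, next_action, training_levels, n_random=20):
    """Does the most responsible level overlap the agent's next action more than random levels?"""
    resp = local_overlap_ratio(responsible_level, next_action)
    rand = [local_overlap_ratio(lvl, next_action)
            for lvl in random.sample(training_levels, n_random)]
    return resp, float(np.mean(rand)), resp > np.mean(rand)
```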
Since the first convolutional layer is the layer that most directly reasons over the level structure, we decided to find the most responsible training instance of just the first convolutional layer. However, this setup puts our approach at a disadvantage, since we are going to compare only one most responsible level to 20 random ones. For comparing the most responsible level and the random levels to the actions, we needed to define a suitable metric. We desired a metric that detects local overlaps and represents the similarity between a level and action. We wanted to pick square windows which are not the same size as the first convolutional layer, to capture some local structures without biasing the metric too far towards our first convolutional layer. As a result, we found all three-by-three nonempty patches for both a given level and an action. Then we counted the number of exact matches of these patches on both sides, removing the matched ones from the dataset since we wanted to count the same patches only once. Finally, we divided the total number of the matched patches by the total number of patches in the action, since this was always smaller than the number from the level. We refer to this metric as the local overlap ratio. Explainability Evaluation Results We had 242 samples in the first testset and 69 samples in the second one. Since we wanted to compare instances in which the AI agent actually made some serious changes, we chose instances where the AI agent added more than 10 components in its next action. Thus we came to 38 and 46 instances from the first and second testsets, respectively. Our approach outperforms the random baseline in 78.94 percent of 38 instances for the ML agents data and 67.29 percent of 46 instances for the Deep RL agent data. The average of the local overlap ratios is shown in Table 1 (higher is better). The minimum value here would be 0 for zero overlap and the maximum value would be 1 for complete overlap between the action and the most responsible level or the random level. This normalization means that even small differences in this metric represent large perceptual differences. For example, a 0.04 difference in the local overlap ratio between the most responsible level and the random levels in Table 1 indicates the most responsible level has 20 more three-by-three non-empty overlaps. We expect that the reason that the Deep RL agent values are generally lower is that the second study made use of published level designers rather than novices and an adaptive Deep RL Agent, meaning that there was more varied behavior compared with the three ML agents. An example of explainability is demonstrated in Figure 3. As is shown in the figure, the AI agent made an action and added some components (e.g. goomba and ground) to the existing state. By looking at the chunk of the most responsible level, the user might realize that the AI agent wants to generate a level including some goombas as enemies and some blocks in the middle of the screen. The AI agent also added ground at the bottom and top of the screen, which the user could identify as being consistent with both their input to the agent and the most responsible level. User Labeling Error Evaluation For the second evaluation, we wanted to get some sense of whether this approach could be successful in terms of assisting a human user in better understanding good and bad agent actions during the co-creation process. 
To do this, we needed to identify specific instances where our tool could be helpful in the data we have available. We defined two such concepts, based on the interactions between users and the AI partner during level generation: (A) false-positive decisions are additions by the AI partner that the user kept at first but then deleted later; (B) false-negative decisions are additions by the AI partner that the user deleted at first but then re-added later. Given these concepts, if we could help the user avoid making these kinds of decisions, our approach could help a human user during level generation. We anticipated that one reason users made these kinds of decisions was a lack of context for the AI agent's action. Thus, given more context, the user might avoid initially keeping an addition they will later delete, or initially deleting one they will later re-add. To investigate this, we implemented an algorithmic way to determine false-positives and false-negatives among the two testsets described in the previous evaluation. In this algorithm, we first find all user decisions in terms of deleting or keeping an addition by the AI agent. Then we look at the level at the end of the user and the AI agent's interaction. If a deleted AI addition exists in the final level, it is counted as a false-negative example, and if a kept addition does not exist in the final level it is counted as a false-positive example. Once we discovered all false-negative and false-positive examples, we found the state before the example was added by the AI agent and named it the Introduction-state (I-state). We found the state in which false-positivity or false-negativity occurred (i.e. when a user re-added a false-negative or deleted a false-positive) and named it the Contradiction-state (C-state). Since some change between the I-state and the C-state led to the user altering their decision, we wanted to see some sign that presenting the most responsible level to the user could change their mind before they reached this point. Thus we compared these two states to find all the changes that the AI agent or the user made and named this the Difference-state (D-state). We compared each D-state with the final generated level derived from the most responsible training instance. We also compared each D-state with 20 other randomly selected levels from the existing data. For the comparison, we used the local overlap ratio defined in the previous evaluation. If our approach outperforms the random baseline, we will be able to say that there is some support for the responsible level helping the user avoid false-positives and false-negatives in comparison to random levels.

User Labeling Error Evaluation Results

The average local overlap ratio values were 0.2665 and 0.2328 for the most responsible level and the random levels, respectively. Again this represents a large perceptual difference of roughly 15 more non-empty 3x3 overlaps. Interestingly, our approach outperforms the random levels in all of the false-negative examples in the second dataset, compared with just 20 percent of false-negatives in the first dataset. Further, our approach performs around 1.5 times better than the random levels in 15 false-positive examples in the second dataset. These instances come from the study that used the same RL agent as we used to derive our explanations, which could account for this performance.

Discussion

In this paper, we present an XAI approach for a pre-trained Deep RL agent.
Our hypothesis was that our method could be helpful to human users. We evaluated it by approximating this process for two tasks using two existing datasets. These datasets were obtained from studies using three ML partners and an RL agent. Essentially, we used the XAI-enabled agent in this paper as if it were the agents used in these datasets. The results of our first evaluation demonstrate that our method is able to present examples as explanations that help users predict an agent's next action. The results of our second evaluation support our hypothesis and give us an initial signal that this approach could help human users cooperate more efficiently with a Deep RL agent. This indicates the ability of our approach to help human designers by presenting an explanation for an AI agent's actions during a co-creation process. A human subject study would be a more appropriate way to evaluate this system, since human users might be able to derive meaning from the responsible level that our similarity metric could not capture. Our approach performs better than our baseline of random levels in both evaluation methods, which provides evidence of its value for this task. However, we look forward to investigating a human subject study in order to fully validate these results. There could be other alternatives to a human subject study. For example, a secondary AI agent that predicts our primary AI agent's actions could play a human partner's role in the co-creative system. Making use of such a secondary AI agent to evaluate our system before running a human subject study might be a simple next step. It is important to mention that we only offer one most responsible level, from only the first convolutional layer, as an explanation. Providing a user with multiple responsible levels, or looking into the most responsible levels of the other layers, could be a potential way to further improve our approach. Our metric for determining the most responsible training instance is based on finding the most repeated instance inside the MRIN-Conv arrays associated with the most activated filter. We identified the most activated filter by looking at the absolute values. We plan to investigate other metrics, such as looking for the most activated neurons outside of the filters. In addition, considering negative and positive values separately in the maximal activation process could also lead to improved behavior. Negative values might indicate that an instance negatively impacted a neuron. It could then be the case that the filter is maximally activated because it is giving a very strong signal against some action. One quirk of our current approach is that the most responsible training instance depends on the order in which it was presented to the model during training. Thus, this measure does not tell us about any inherent quality of a particular training data instance, only its relevance to a particular model that has undergone a particular training regimen. In the future, we intend to explore how more general representations of responsibility, such as Shapley values, might intersect with this approach (Ghorbani and Zou 2019). Only the domain of a co-creative system for designing Super Mario Bros. levels is explored in this paper. Applying the method to other games will be required to ensure this is a general method for level design co-creativity. Beyond that, we anticipate a need to demonstrate our approach on different domains outside of games.
We look forward to running another study to apply our approach to human-in-the-loop reinforcement learning or other co-creative domains.

Conclusions

In this paper, we present an approach to XAI that provides human users with the most responsible training instance as an explanation for an AI agent's action. In support of this approach, we present results from two evaluations. The first evaluation demonstrates the ability of our approach to offer explanations and to help a human partner predict an AI agent's actions. The second evaluation demonstrates the ability of our approach to help human users better identify good and bad instances of an AI agent's behavior. To the best of our knowledge, this represents the first XAI approach focused on training instances.
2020-10-06T01:01:04.616Z
2020-10-04T00:00:00.000
{ "year": 2020, "sha1": "4d29697d613c2381ca96ec4612ca92d496cfae3f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4d29697d613c2381ca96ec4612ca92d496cfae3f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
264389824
pes2o/s2orc
v3-fos-license
Trans/plurilingual Pedagogies: A Multiethnography

ABSTRACT Trans/plurilingual theory and pedagogies have been generating extensive attention within global English language teaching and teacher education. This article responds to the burgeoning area of trans/plurilingual pedagogies, outlining diverse pedagogical practices and perspectives from a group of English language educators at a large, cosmopolitan Canadian university. Considering recent assertions that there are onto-epistemological differences between multilingualism, plurilingualism, and translingualism, this article looks to demonstrate how (or even if) these differences manifest in the pedagogical practices of diverse faculty within the same language teaching and teacher education programs. Drawing on multiethnographic data, this article concludes with a discussion of the potential and limitations of critical pedagogies, the affordances of multiethnography as an accessible methodology for use by researchers and pedagogues, and a call for greater bi-directional knowledge flow between language researchers and classroom instructors.

Introduction

Pluri-oriented theories and pedagogies have been front and centre in language (teacher) education during the seismic multi / plural shifts over the past decades (Kubota, 2016; 2020).6 Most recently, trans / plurilingual approaches, as the pedagogical soups du jour, have been generating extensive attention within English language teaching (ELT) and teacher education in Canada (e.g., Heng Hartse et al., 2018; Marshall et al., 2018; Payant & Galante, 2022) as well as globally (e.g., Cenoz & Gorter, 2020; García et al., 2017; Piccardo et al., 2021; Sun, 2022), including in Brazil (Rocha, 2019; Rocha & Maciel, 2015; Windle, 2019). This empirical and conceptual work, drawing on translingual and plurilingual theory, integrates sociolinguistic realities of language use, challenging entrenched monolingual theories and practices in language education, viewing additional languages as resources rather than problems. This recent body of work, under the banner of translanguaging / plurilingualism, has often trumpeted its critical or transformative potential, pointing to how such pluri-oriented approaches to language education may counter the hegemony of English, and thus unequal and inequitable relations of power (Canagarajah, 2013; Corcoran, 2019; García & Lin, 2017). Rarely, however, has this work described and situated such pluri-oriented practice to the extent that educators might adapt and adopt activities for implementation in global ELT classrooms while considering their theoretical underpinnings and transformative possibilities. Thus, our objective is to somewhat close the gap between trans/plurilingual theory, research, and classroom practice, stimulating discussion among language researchers, language teachers, and language teacher educators working in English medium of instruction (EMI) contexts. So, how are trans / plurilingual theories being taken up by scholar-practitioners in their respective post-secondary classrooms, and why might insight into such pluri-oriented pedagogies be important for global language researchers, instructors, and administrators interested in transformative educational practice? Finally, how might a novel research methodology - multiethnography - afford critical reflection that can elucidate connections between theory and classroom practice aimed at social transformation?
6. Recognizing the multitude, complexity, overlap, and contested nature of terms used to describe pedagogies emerging from the multi/plural turns, throughout this piece we use pluri-oriented as a blanket term to describe understandings and approaches that valorize and/or utilize speakers'/students' full language repertoires as resources for additional language learning.

Considering recent assertions that there are ontological and epistemological differences between multilingualism, plurilingualism and translingualism (e.g., García & Otheguy, 2020; Marshall, 2021), this article looks to highlight how (or even if) these differences manifest in the pedagogical practices of diverse faculty within the same language education context. For example, though often confused or conflated, multilingualism is generally conceived of as language policy and practice aimed at groups, whereas plurilingualism is better understood at the level of individual actors, highlighting the value of accessing and utilizing multiple linguistic resources for intercultural communication (Coste et al., 2009; Marshall & Moore, 2018; Piccardo et al., 2021). Translingualism, the newest -ism to the table, like plurilingualism, is oriented towards language use of individuals, but, unlike plurilingualism, explicitly challenges static cognitive and social (and socially-constructed) boundaries between languages, often proclaiming a more overtly critical orientation based on theoretical / conceptual criteria (García & Lin, 2017; Wei, 2018). But how do these theories get taken up at the pedagogical level? Beginning with an overview of the institutional context and author subjectivities, this article proceeds to articulate five examples of pluri-oriented pedagogies employed by practitioners in their English for academic purposes (EAP), Teaching English as a Second or Other Language (TESOL), and Graduate 7 Applied Linguistics classrooms. The authors briefly describe classroom activities, outlining how these activities are inspired by theory and research. Subsequent collegial commentary presents convergent and divergent pluri-oriented onto-epistemologies, i.e., ways of knowing and doing, of language teaching and teacher education, raising important questions for consideration by language educators working to support plurilingual students across English medium of instruction (EMI) post-secondary contexts. Paramount among these considerations is the notion of consequential validity, or the potential value of theory - in this case trans/plurilingual theory - for informing classroom language teachers' and teacher educators' practice (Cummins, 2021).
7. Also known as post-graduate.
Consequential validity assigns a preeminent role to teacher agency in the local enactment of language theory and policy.Irrespective of the acquisitional or transformative potential claimed by adherents for a given trans/plurilingual theory, consequence and relevance arise from situated practice and the central role of teachers as knowledge creators uniquely positioned to negotiate institutional constraints and possibilities on behalf of or alongside their students.It is in this close pedagogical work that theoretical inadequacies, complicities, and opportunities organically emerge, providing teachers with paradigmatic insights unavailable to outside experts.Indeed, following Cummins' validity criteria, we have chosen to foreground our own curricular responses and interventions with discussions of context and the conditions of possibility that have informed our pedagogical decision-making. Institutional Context, Participants, and Methodology Though each of us brings diverse ethnolinguistic identities and experiences to this collaboration -e.g., Colombian, Korean, Hong Kong, and Canadian-born instructors with experience teaching English in a range of geolinguistic contexts -we all completed our PhD studies at OISE/University of Toronto and we all currently work (though Brian has recently retired) as Assistant or Associate Professors of ESL and/ or Applied Linguistics at York University.In line with qualitative inquiry that forefronts author subjectivities, Figure 1 displays our convergent and divergent histories and experiences, and our personal and professional identities, which play an important part in guiding our classroom pedagogies, as seen in the following section.Prior to describing our classroom pedagogies, though, it is important to understand the institutional context in which we work, shown in Figure 1.York University, a large Canadian university serving over 50,000 students, and its bilingual (English-French) college campus, Glendon College, serving approximately 3,000 students, were some of the first post-secondary institutions in Canada to offer undergraduate, content-based, credit-bearing English for academic purposes (EAP) courses, and the distinct history of this EAP provision guides our instruction aimed at our culturally and linguistically diverse population of students (Mendelsohn, 2001).The key criterion for credit, as negotiated with the university senate more than twenty years ago, was that these EAP courses engage with academic content at a level of rigour comparable to other Social Sciences and Humanities courses.Not all the courses described in this article are specific to EAP (e.g., James' graduate English for specific purposes (ESP) and Marlon's language teacher education examples), but all contributors to this article teach in an EAP program, some exclusively.This somewhat unique content and language-integrated orientation to EAP informs our varied pedagogical practices outlined in the next section, including a strong emphasis on content related to Canadian Studies, broadly speaking, where issues of individual and collective identity negotiation are of particular interest to students and instructors alike. 
An emerging, critical methodological approach, duo / multiethnography is rooted in a qualitative paradigm that challenges accepted regimes of truth, instead seeking to better understand social realities and individual agency / identity through collective reflection and storytelling.Though still fighting for legitimacy in applied linguistics, multiethnography has been increasingly embraced in language education, not only in North American contexts (e.g., Ahmed & Morgan, 2021;De Costa et al., 2022;Valencia et al., 2020), but also in a range of global and transnational contexts (e.g., Adamson et al., 2019;Lowe & Lawrence, 2020).This novel methodological approach has been increasingly adopted in service of facilitating critical reflection on the part of language teacher educators and language teachers at different stages in their academic/ professional trajectories.In order to reflect upon and interrogate our individual classroom pedagogies, we have adopted this dialogic approach that allows for a critical juxtaposition of our voices as we work together to critique and question beliefs, discourses, ontologies, and epistemologies (Sawyer & Norris, 2015).Interestingly, in a multiethnography, the authorparticipants -in this case all language (teacher) educators -are both the researchers and the researched.In our case, we have found that engaging in this dialogic process has allowed us to trace connections between our life trajectories, engagement with theory and research, and conceptions and operationalizations of pluri-oriented pedagogies.In the following sections, we present brief activity descriptions displaying how we, as classroom instructors, enact our pluri-oriented pedagogies, followed by targeted discussion in a conversational format that invites the readers to consider not only our pedagogical beliefs and practices, but also their own. Our Stories: Pluri-oriented Pedagogies and Perspectives In this section, drawing upon our synchronous Zoom chats and asynchronous exchanges via email and OneDrive, which produced qualitative data that were coded into emergent, salient themes (Saldaña, 2021;Silverman, 2020), we present five distinct pluri-oriented practices from across EAP, TESOL, and graduate applied linguistics classrooms at our post-secondary institution, highlighting our theoretical underpinnings, and responding to each other's described classroom activities.This section concludes with an exchange on the affordances and limitations of our research methodology in the field of language (teacher) education. 
Classroom Activity #1: My metaphors Marlon: As a recently appointed professor in our Certificate in the Discipline of Teaching English as an International Language (D-TEIL), I was tasked with developing a new course to help EIL teachers embrace the learning and teaching of English grammar from a critical practitioner stance.As part of this course development, building on translanguaging's notion that plurilingual speakers possess a richer, summative lexico-grammatical repertoire (García & Otheguy, 2020), I invite language teacher candidates (TCs) to view grammar as a vehicle to convey meaning, evoke concepts, and construct language users' imaginaries in discourse across languages, rather than simply viewing it as a set of rules for standard English which teachers must guard and enforce.This translingual orientation, driven by my experiences as a plurilingual user of English as an additional language, is reflected in my approach, as evidenced in my pedagogical practice and syllabus construction, as well as in two activities that I use at the beginning of the course.First, as part of my community-building efforts, we create identity portraits, which are artistic representations of who my teacher candidates are.After modeling my own identity portrait that displays my multilingual identity, I give them a blank silhouette so they can use crayons, markers, and colors to visually represent their multiple identities.This activity is followed by a virtual gallery walk in which each TC explains their portraits and showcases their diverse linguistic identities.As the culturally and linguistically diverse TCs take in each other's identity portraits, they reflect upon the inextricable links between language, identity, and pedagogical practice (Cummins, 2009;Prasad, 2020). Heejin: I can imagine that this warm-up activity would instantly create a welcoming and safe learning environment to explore the intersectionality of class participants' evolving personal / academic identities in a creative way.I am curious to hear more about how you promote translingual perspectives among your TCs. Marlon: Well, I am happy you asked!Another example of this type of pluri-oriented practice is when we discuss metaphor use across languages, sharing metaphors in an online forum.In an asynchronous forum discussion, TCs are invited to share sayings in the languages that they spoke or knew about, explaining metaphors contained in such sayings.This leads to a prolific discussion of metaphors in English, Italian, Mauritian Créole, Punjabi, and Spanish, among other languages.This exercise allowed TCs to realize that there were common metaphors in romance languages like Spanish and Italian.One example I remember was: 'buono come il pane' in Italian or 'bueno como el pan' in Spanish, which literally mean 'as good as bread'.These metaphors are used to speak of someone who is kind and has a good character. 
Jacqueline: This translingual and transcultural way of exploring metaphor seemed to provide the conditions for your students to uncover new, exciting, and progressive ways of understanding and teaching English lexico-grammatical components.Might you share a tangible example of this activity?Marlon: Sure, below is one example of an exchange between students on the e-learning space: My Italian Metaphors by (Student name) -Friday, 18 September 2020, 12:31 PM Number of replies: 3 "Buono come il pane" -This is an Italian metaphor that translates to good as bread in English.This metaphor makes me think of and is similar to the English saying, "good as gold".I chose this metaphor since it is something my Italian parents used to say to me while growing up.They would use this expression as a way to describe someone that had a heart of gold.It can be used when explaining a kind and helpful person. "Fa un freddo cane!" -This is another Italian metaphor that is like the English saying, "its dog cold!" and can be used when it is extremely cold outside.My parents would use this metaphor when I was growing up, which makes it a saying that reminds me of my childhood.When my older brother and I would complain that it was freezing outside our parents taught us this metaphor, and it is now something that we still joke about to this day. James: I love how you are able to produce meaningful "play" with and across language(s) in this type of metaphor / translation activity, while promoting metalinguistic awareness, encouraging students to draw on and shuttle between available linguistic resources (Canagarajah, 2013).I can certainly see the influence of translingual theory in your pedagogical practices.I wonder, though, whether highlighting the distinctiveness of English, Portuguese, Spanish, etc. merely reifies the notion that there are distinct languages, rather than linguistic repertoires without artificial boundaries (García & Lin, 2017)?And I also wonder if these types of pedagogical practices are ever challenged by fellow faculty or students for diverting time and attention from the main task at hand, teaching English?Marlon: Those are interesting points.For me, the challenge is creating a space where theoretical and applied linguistics could safely intersect to nurture and inspire culturally and linguistically diverse TCs by presenting them with foundational concepts and tools to help them re-imagine grammar conceptually, not prescriptively.Thus, the intention is that TCs could reaffirm and embrace their rich plurilingual repertoires in order to help them unpack the notion of the monolingual native speaker as the prevalent model for grammar teaching and learning.I feel it is important to help TCs share metaphors in the multiple languages that they speak, so they notice how these languages influence the way they think and experience the world, thereby inspiring a form of critical intercultural rhetorical awareness.I do not think this reifies distinctions between 'named' languages, and I am not sure the students would appreciate the potential distinction made between plurilingual and translingual conceptualizations of linguistic repertoires.Also, no, I have experienced no pushback at all from colleagues or students!I am fortunate to collaborate with colleagues who are open to acknowledging TCs' plurilingual repertoires and their translanguaging practices.Moreover, TCs feel proud and more confident about their multiple languages, so they welcome these interactions. 
Brian: James, I know Marlon's teaching context very well, having taught there for 14 years.The belief in distinctive languages-aligned with distinctive methodologies and measures of proficiency-was, and for some, continues to be a foundational principle of the college where D-TEIL is housed.From the perspective of consequential validity (cf.Cummins), an argument could be made that longstanding frustration with this "distinctiveness" has been a causal factor in enabling the kinds of plurilingual/translingual teacher and student agency Marlon describes.A decade ago, pushback would have been more pronounced.Another enabling factor has been the sustained publication of plurioriented research, which provides a theoretical justification or "cover" for teachers who want to explore these ideas in their local practice. Heejin: Brian, that's a good point on the conditions that foster teacher agency.Towards that goal, Marlon, can you elaborate how your TCs were able to transform their critical understandings into practice within and/or beyond the classroom?Marlon: I think that activity frees my plurilingual TCs to look at grammar across languages and acknowledges their robust lexico-grammatical systems, which can be a first step in becoming more cognizant of how their future students may learn English grammar.This openness to translanguaging and transcultural practices was on display when they did virtual English as a foreign language class observations and a small intervention with Colombian students.They quickly made connections between theory and practice when they saw how resourceful their host language teachers were using English cognates when communicating with introductory level students.These 'real world' experiences TCs had during their practicum experiences demonstrate the potentially impactful, if not transformative nature of this type of language teacher education. 
Classroom Activity #2: Multimodal Terminology Corner Heejin: Guided by a curricular focus on social justice, the "Multimodal Terminology Corner" illuminates well my multimodal and multilingual orientations (Cummins, 2009;Cummins & Early, 2011).In this online activity, students are asked to review academic terms/ concepts critical to engaging with culturally sensitive course readings, adding their thoughts to a virtual bulletin board on Padlet (padlet.com), an online educational application.For this activity, students are guided through five steps: 1) quoting and locating key academic terms and concepts; 2) interpreting/meaning-making; 3) plurilingual and intercultural understanding; 4) applying the understanding; and 5) visualizing the concept and understanding (see Figure 2).Heejin: Prior to this activity, students had engaged in a series of class discussions related to systemic racism in Canada (e.g., Davis, 2018), where students were expected to report back their understandings of these course readings, highlight forms of racism using media resources, and share their lived experiences in social and academic settings through guided discussion prompts.The multimodal terminology corner activity is designed as a follow-up activity to these discussions, promoting a deeper understanding of the readings, and a growing ability to identify and engage with recurrent and newly emerging forms of social and racial marginalization related to many of my students' lives during the COVID 19 pandemic (e.g., anti-Asian sentiments, BIPOC movements).They explored how selected terms from the course readings might be used and understood across cultures, providing examples from their L1, and drawing on the multimodal resources available through the Padlet's image and video collection for further explanations 8 .Thus, multimodality with plurilingual and intercultural exploration is additive (Cummins, 2021), standing in stark contrast to an apolitical, transmission model wherein collaborative and creative knowledge co-construction is not possible.During this activity, students seemed to meaningfully engage and synthesize important terms such as systemic racism, microaggression, and xenophobia, which are fundamental for students' explication of their experiences during the pandemic.By the end of the course, it was evident that many students effectively integrated the lexical knowledge gained through these collective efforts as I witnessed the appropriate use of the terms in online discussions and final reports. Marlon: Heejin, I feel inspired by how your terminology corner provides students with an independent-study tool that is quite helpful to learn and retain new academic vocabulary, while simultaneously encouraging them to make connections across words, languages, concepts, and cultures.The multimodal nature of this activity grants students access to the mental imagery that they already associate with each concept in their other languages.Thus, one could say that this activity brilliantly combines languaging, or making meaning through linguistic production, and builds on learners' plurilingual lexical repertoires (Galante, 2020;Piccardo et al., 2021).I am looking forward to implementing these terminology corners in my EAP courses! 8. On Padlet, students could add a relevant video or picture to their posts through the Padlet's cloud-based collection of royalty-free public images or videos. 
James: In my estimation, there is great potential in using these types of lexical building opportunities to not only enhance students' academic literacies but also their critical language / cultural awareness, where they are prompted to consider the role of languages in promoting, maintaining, exacerbating, and / or challenging unequal social relations of power (Cummins, 2009).I can't help but wonder whether your pedagogies are influenced by your own linguacultural background and personal / professional journey? Brian: I'm wondering the same as it presents an opportunity to reflect on the role of (plurilingual) language teacher identity in shaping our curricular decisions and relationships with students. Heejin: I can surely sympathize with students who may experience different types of discrimination.Indeed, my own journey of identity construction and negotiation as a Korean speaker / instructor of English as an additional language has impacted everything I do in the classroom.As such, it was my sincere hope that learners engaged in this task would develop critical awareness of issues around identity, social injustice and inequity.In this way, this instructional practice then attempted to not only serve students' academic literacies (e.g., lexical knowledge; advanced reading skills; critical thinking skills) but also, perhaps just as importantly, instill in students a more social justice-oriented lens through which to see the world and, ultimately, equip them with action strategies to resist against unjust biases and discrimination outside the classroom. James: It is interesting to consider that the immediate language learning outcomes for these undergraduate students that you are working with may be secondary to the longer-term development of critical, intercultural and language awareness during their undergraduate degree programs.After completing this activity, I wonder if you noticed any changes in students' work or responses in their class engagement? Heejin: In completing this activity, students were able to connect sociopolitical and sociocultural factors with the use of certain terms in different cultural contexts, thereby raising not only their critical literacies but also their metalinguistic and intercultural awareness (Morgan & Vandrick, 2009).To illustrate, students often noted similar and differing patterns of prejudice and discrimination that exist in their own culture (e.g., boy/male favoritism and xenophobic patterns toward regional dialects and minoritized/ racialized ethnocultural groups in China; intergenerational difference in perceptions toward racism in different nations).Toward the end of the course, I could see how students were confidently and astutely able to integrate those terms to express their thoughts.The following excerpt, which is part of the student reflection on the class activity of allyship development, exemplifies how discussions and engagement with key terminology enabled their critical, reflective capacities: "Dr.Davis mentioned that we must commit to ending this pattern of systemic racism by learning and unlearning.Learning how to respond to racism, to call it out and to expand on a pedagogy that is inclusive to all beings etc. 
[…] it is our social responsibility to be mindful in the things we say and do and to contribute to the eradication of this evil called Racism from society."(Student entry from online discussion forum) Classroom Activity #3: Multilingual Identity Text Project Jacqueline: Many of my ELL (English language learners) students see themselves as deficient users of English.This fact resonates at a personal level with me, as a Hong Kong-born user / instructor of English as an additional language who engaged in a challenging transnational journey, where I had to negotiate my evolving, hybrid academic and professional identity.Thus, I am particularly interested in attending to students' transnational self-efficacy and academic success.In my EAP course, students design and create an Identity Text Project in which they draw upon their distinctive linguistic and cultural characteristics that represent them as unique individuals in a multilingual and multicultural society (Cummins & Early, 2011).In the "narrative, reflective space" afforded, students are excited to tell their personal stories (via Padlet and eClass, our e-learning space) that provide cultural insights and describe the impact of various factors (e.g., family, culture, institution, and peers) on their transnational identity construction and negotiation (Zaidi et al., 2016).The Identity Text Project effectively promotes ELL students' target language learning through making meaning of learned concepts, engaging in intercultural dialogues, and producing innovative multimodal work.These learning outcomes align with plurilingual pedagogy that maximizes available resources and opportunities to ensure language can be learned and used in meaningful ways as students take ownership of their learning process. Heejin: Jim Cummins' (2009) transformative multiliteracies pedagogy posits that when learners are given an opportunity to express their ideas with whatever modes of expression and whatever forms of literacies are available to them, learners will show increased engagement and investment in learning.In this sense, this is a very empowering activity for students, and I can see the influence of Jim's scholarship in your pedagogical practice!I am particularly interested to know more about how an identity text project is executed. Jacqueline: Yes, my work has been heavily influenced by Jim's scholarship.I would say the same for all of us that studied with him at OISE/University of Toronto, no? 
Anyway, let's proceed with a general explanation of the identity text project.First, drawing on their plurilingual and pluricultural competence (Coste et al., 2009), students produce a multilingual, multimodal "text" -i.e., a text that uses available semiotic resources, including audio, video, image-where they reflect on their linguistic and cultural identities via digital storytelling (Corcoran, 2017).The result of this project is two main artifacts that promote critical reflection on identity construction and negotiation, including an audio-recorded digital "text", as well as a multimodal poster that highlights their struggles and achievements in a new cultural context.Figure 3 showcases an example of one student's multimodal infographic poster of her transnational journey from Syria to Canada.Marlon: As an avid user of multimodal identity texts in the language classroom, I applaud your creative implementation of this Identity Text Project, which clearly welcomes and acknowledges students' pluri-competence and diverse experiences.I think inviting EAP students or TCs to narrate their personal life-stories can situate them simultaneously in their memories of the past, their present experiences, and their envisioned future (Ng, 2011;Valencia et al., 2020).Learning about your use of identity texts happily validates my identity-infused orientation to the teaching of English as an additional language!I suspect I already know your answer, Jacqueline, but I wonder if you ever get any pushback from administrators about incorporating a non-traditional assignment like this one in your credit-bearing EAP classroom?Jacqueline: Recognizing York University's teaching priorities with respect to eLearning, the Identity Text Project is complementary to traditional course work, promoting broad academic literacies, including critical thinking skills (Lea & Street, 2006).Even though students are occasionally hesitant to share their evolving identities in class, I see the urgency and importance of engaging them in a student-led learning process afforded by this activity that helps them construct and negotiate their identities and consider their subjectivities.From my students' learning outcomes and reflections, I am convinced of the efficacy of this multiliteracies pedagogy!Brian: That's such an interesting point on eLearning and its affordances for the kinds of innovative multimodal identity work you are doing.Regarding Marlon's question on pushback, administrators have gotten much more (i.e., your innovative project) and perhaps much less (i.e., fewer economic efficiencies and savings) from their exuberant rush to bring in more information technologies and computer-assisted learning into post-secondary education. 
James: I wonder if you noticed any changes in students' attitudes, perspectives and/or academic production in class following this project?Jacqueline: Yes, I did see significant changes in my students' self-perceptions and identity formation after they completed the project.When we initially discussed major course themes such as racism, discrimination, and power relations between the dominated and the oppressed, many ELL students identified themselves as socially and linguistically marginalized, drawing on ideas such as intersectionality (Collins & Bilge, 2020).I was surprised to find how the project has inspired students to reshape their identities and reposition their social status.For example, a student in my class was a Syrian refugee who had been living independently in Canada for a few months.In her reflection, she expressed that she used to be passive and silent in other courses and was not comfortable to talk about her background publicly.I was thrilled to see this student passionately share her transnational journey in her Identity Text Project and discuss how she has transitioned from a "deficient" ELL learner to a multicompetent user of English (Cook, 2016), from a minoritized, marginalized member to an active, engaged, and legitimate member in the new community, and from a refugee to a proud Syrian-Canadian.This student was invited to present her Identity Text Project at the York University Undergraduate Research Fair in 2021 and received an award for her academic achievement.This type of academic trajectory demonstrates that ELL students can actively engage in academic endeavors beyond the classroom level, claiming agency, and affirming their evolving, hybrid, transnational, multilingual identities (Nicolaides & Archango, 2019).To me, that is transformative!James: In my "English for Specific Purposes: Theory and Practice" course in our Linguistics and Applied Linguistics (LAL) graduate program, I go about de-centring English from the jump.During the first class, I invite my graduate students -all active language teachers with at least two years' experience -to contribute to a "classroom language landscape document" by writing short phrases (e.g., "what's up?", as seen in Figure 5) in the languages that they "know", leading to discussion of Englishes, languaging, heteroglossic variation, and language hierarchies (Lau & Van Viegen, 2020).The result of this activity is a permanent classroom product that can become a fixture on the classroom wall and / or e-learning space, and a constant reminder of our collective cultural and linguistic diversity.Heejin: Your "classroom language landscape" activity is an excellent instantiation of creating a welcoming and engaging dialogue among class participants, instigating critical reflection on English(es) and ELT throughout the course.I am curious to hear more about the discussions that this activity invoked. James: Discussions surrounding the classroom language landscape are often interesting and involve questions surrounding language varieties.In an effort to validate students' diverse and evolving linguistic repertoires, I often challenge students to consider more broadly their inter-and intralinguistic knowledge of "standard English vs. 
other Englishes" and other languages / dialects.Building on this initial activity, I pose a set of provocative questions about acronyms used in our course (e.g., ESP; EAP): What does it signify when English is forefronted in these terms?Is English the only language that is used for specific (academic/occupational) purposes?What are the cultural/ linguistic profiles of those who use English for specific purposes? What other examples of acronyms or language use in the ELT (aha, another one!)classroom elevate certain languages and language users at the expense of others?Again, this discussion serves to increase these scholar-practitioners' critical language awareness, drawing immediate attention to issues of classroom language use, power, and subjectivities (De Costa et al., 2017;Siqueira, 2021).Next, assuming a three-hour course -standard in our graduate program-the remainder of the class is focused on interrogating "common sense" ideologies of English language teaching (Cummins, 2007;Phillipson, 2016) such as, "English is the best language for intercultural communication", "Those who use English as an additional language draw upon only English when engaged in academic/ professional linguistic production", and "English instruction is most effectively carried out by native speakers of English".This set of Day 1 activities set the stage for critical reflection upon oft-obscured monolingual ideologies and ontoepistemologies underpinning the work we do as language teachers and teacher educators (Kubota, 2020;Moore et al., 2020;Watson & Shapiro, 2018).Jacqueline: Interesting!I also pose these types of questions in my EAP course.James, I think these types of awareness-raising practices about monolingual ideologies in ELT display a plurilingual approach that challenges coercive power relations between individuals and groups, something evident in both plurilingual and translingual approaches to language education (Cummins, 2021;Marshall, 2021). Heejin: It is noteworthy that you challenge the implicit colonial discourses and monolingual ideologies (e.g., named language hierarchies) often implicit in English teaching and learning.In this sense, would you say, at least to some extent, your pedagogy follows the spirit of translanguaging, as outlined by García and Otheguy (2020), Wei (2018), and others?James: Jacqueline, I do think my pedagogical practice in the area of English for specific purposes acts in the spirit of translanguaging and I certainly value the recent contributions of translingual theory and translanguaging pedagogy, especially as it applies to the teaching of writing in an additional language.For example, in addition to working with in-service language teachers in our Linguistics and Applied Linguistics graduate program at York University, I also work in international contexts (e.g., Mexico) where I design and teach English for research publication purposes courses (see Flowerdew & Habibie, 2021).However, as I continue to refine and operationalize my pedagogies for supporting plurilingual scholars' advanced literacies (see Englander & Corcoran, 2019), I am not sure it makes much difference whether I label these practices as translanguaging or plurilingual or multilingual, given their shared philosophical orientations and transformative objectives.I suppose I'll leave that for the readers to decide. 
Marlon: These insights certainly resonate with me, as I feel the boundaries between translanguaging and plurilingualism aren't clearly defined when it comes to pedagogical application.Wouldn't you say that teachers simply use the ideas they feel work best for their teaching contexts and needs regardless of theoreticians' big debates and divides (Kumaravadivelu, 2001)? Brian: I agree.The theoretical debates can seem overly dogmatic at times, trivializing the knowledge creation, expertise and experience of practitioners, which is why consequential validity (Cummins, 2021) and Kumaravadivelu's (2001) post-method parameters (i.e., particularity, practicality, possibility) are so important.I'd also add that given James' setting of a graduate course on ESP, an emphasis on teachers' assessment of local needs in adopting pluri-oriented pedagogies makes a lot of sense. James: Working with graduate students who have language teaching experience is always enjoyable and rewarding given the immediate connections they can often make between theory, research, and pedagogical practice.Importantly, I agree with Marlon that practitioners must be able to find meaningful connections between theory and practice, answering the "so what?" question that can sometimes get lost in esoteric and myopic debates between theoreticians and researchers.Ultimately, in order to pave the way for transformative pedagogical practice that promotes critical literacies, we need to make theory and research accessible to our students, and that can only be done when there are real connections that students make between scholarship and real-world experiences.These pedagogies should be situated within the sociohistorical and sociocultural contexts in which they are used -there is no such thing as a perfect, one-size-fits-all approach to language teaching/teacher education. Classroom Activity #5: Trans-Semiotic Culture Jamming Brian: James, I couldn't agree more on the need to make scholarship accessible and relevant to the real-world challenges students experience.That has been the guiding aspiration for a content-based EAP course titled, "Dealing with Viewpoint" I taught at Glendon College from 2008 to 2020.The course sought to develop advanced academic language skills while promoting critical multiliteracies related to citizenship (Windle & Morgan, 2020).Over the past ten years, I have integrated plurilingual / translanguaging elements in several course assignments, one of which I describe here.The "culture jamming" assignment explores a semiotic, multimodal orientation to L2 work, drawing on students' experiential familiarity with social media as well as their everyday encounters with the kinds of remixed, hybrid texts characteristic of Toronto's ethnolinguistic diversity.For this assignment, students select a spoof ad or culture jamming image for analysis, responding to the following prompts in approximately 400-500 words: How does it work as a parody or copy of the original ad or image?In your opinion, how effective is it in terms of "meme warfare"?How does it use image and texts to "jam" our consumer culture and/or act on our emotions? 
In class, I provide several examples of culture jamming images, discussing compositional strategies and techniques (e.g., visual parody, aesthetic imitation).I also focus attention on what I describe as examples of "emo-fishing"-how text designers integrate plurilingual and trans-semiotic elements such as digraphia (orthography/grapheme mixing) that engage with multilingual audiences in differentiated ways, for some, indexing strong emotional responses linked to postmemory, e.g., the intergenerational transfer of traumatic historical events for diasporic communities (Ahmed & Morgan, 2021).One example I like to show in class comes from a political rally held at the Ontario Legislature in December 2008 (see Figure 5).As I explain to students, for Ukrainian-Canadians the postmemory of the man-made famine in Ukraine under Stalin's rule (i.e., the Holodomor) would be indexed by this multimodal poster and its mixing of Cyrillic and Roman scripts, no doubt provoking strong feelings of opposition to the late Canadian parliamentarian, Jack Layton. For the next class, I ask students to bring in one or two assignment examples and ask them to do a short, informal presentation focused on reasons for their choice and emergent analyses based on course readings.The group discussions around this activity are especially useful for raising intercultural, plurilingual and trans-semiotic awareness.Jacqueline: Thank you for sharing this inspirational, empowering EAP classroom pedagogy, Brian, I wonder, are students allowed to select a multilingual culture jamming product/ad from their "home country"?Might that enrich the content of their presentation and enhance their plurilingual and trans-semiotic awareness? Heejin: I agree that this assignment is a powerful way to facilitate students' engagement with critical literacies.Particularly, it is meaningful as it speaks to unique sociocultural and sociolinguistic student populations reflecting the diverse ethnocultural and linguistic makeup of Toronto.As Jacqueline alluded to, it would be ideal if students brought their own spoof ads and utilized their cultural and linguistic knowledge to decode the trans-semiotic expressions. Brian: Absolutely.Jacqueline and Heejin, this has been an important addition to this assignment in recent years.When students bring in culture jamming items from other countries and in different languages/scripts, I encourage them to explain the requisite cultural and semiotic knowledge needed to understand the sign makers' rhetorical intentions (see e.g., Ferraz & Mezan, 2019). Methodological Affordances and Limitations James: As we come to the end of this project, I am curious as to how you all view your experiences with multiethnography as a research methodology.This is now the fifth time I have participated in such a project and while other experiences have undoubtedly been positive, our project has been unique in certain ways.For example, though one might argue that duo/multiethnography often works best when there are a range of contrasting perspectives, leading to analysis of convergent, divergent, dynamic author perspectives, working on this project has highlighted the undeniable synergies in pedagogical orientations and practices of our group. 
Heejin: Indeed!This project has certainly made me feel connected with colleagues who share similar pedagogical orientations and approaches despite differing teaching trajectories and expertise.This connectedness has been an important source of developing and maintaining relationships with colleagues, particularly during the pandemic, and greatly influenced how I teach.Through our continued back-and-forth dialogues via zoom and emails, I believe we became more comfortable with sharing our ideas, at times challenging, questioning, and confirming my perspectives, which impacted (and often validated) my pedagogical practices. Jacqueline: I absolutely agree that multiethnography can effectively enhance researchers' team building through exchange of professional knowledge, critical insights, and expertise.Despite all the uncertainties and challenges I have been encountering during these isolating pandemic times, this collaborative project certainly affirmed my (and others?) pedagogical orientations and classroom practices.More importantly, this emergent methodological approach recognizes our critical reflections as both instructors and researchers, providing the conditions for transformation of current conditions and construction of a new social future (New London Group, 2000). Marlon: Some photographers argue that you don't take photographs; rather, photographs are given to you, the result of the confluence of a series of fortuitous events.In a similar vein, I was given multiethnography during my doctoral studies when a colleague of mine and I found out how analogous our data collection journeys were despite our vastly different research contexts (Sri Lanka vs. the Americas).Therefore, out of curiosity and solidarity, we started a series of critical conversations.That was several years ago, and I'm happy to report that this is my ninth multiethnography.Successful multiethnographies can be a cathartic experience based on trust in a consensual and safe creative space.I say creative because you can start with some ideas, but you never know how the narratives will be weaved together until you come to the final stage of the article, writing it up as a type of script (Sawyer & Norris, 2015).However, because we mostly rely on narratives emanating from self-experience, multiethnography is constantly questioned as a legitimate research methodology among applied linguistics scholars (Lowe & Lawrence, 2020).James: Those are real concerns you raise about perceived legitimacy.As is often the case with novel methodological approaches, there are those who bristle at change, often raising issues of validity stemming from paradigmatic fundamentalism.Without completely dismissing those who question the validity of our approach, I think it is important to, first, suggest that there is a certain internal validity to the triangulation of perspectives and, second, highlight that ethnographic work should be viewed as an accessible and innovative form of research that can incorporate perspectives of those with differing levels of research expertise, providing a forum for debate and discussion that leads to critical reflection and change.The importance of accessibility should not be underestimated in the field, where both real and perceived vertical hierarchies all too often limit knowledge exchange between theoreticians, researchers, and practitioners (Liu et al., 2020).I look forward to seeing how this novel methodology is adopted for a variety of purposes, not only in the research community, but also at the 
pedagogical level, where there is growing evidence of its efficacy in language teaching and teacher education classrooms (e.g., Tjandra et al., 2020;Huang & Karas, 2020).I envision its adoption as a potentially game-changing tool for critically oriented language educators and researchers looking to impact social relations of power within and beyond the classroom walls. Brian: I would also add that the game-changing possibilities are reciprocated through our unique field-internal experiences of texts, genres, lexicogrammar, and interpersonal relationships characteristic of EAP settings.I hope duo / multiethnographers in other fields check out what we have to offer. Discussion and Concluding Thoughts Our conversations display a surprising amount of convergence with respect to adoption of pedagogical approaches that draw on extant theory and research.While we may put forth differing theoretical and related epistemological influences (multiliteracies; multilingualism; plurilingualism; translingualism), we largely agreed that the ways in which we take up theory in the classroom have similar transformative objectives of identity affirmation, promotion of linguacultural diversity, and challenging inequitable relations of power within and beyond the classroom.What is also apparent from our discussions is that the affirming nature of pluri-oriented pedagogies applied not only to plurilingual students using English as an additional language, but also their instructors, a point that should not be lost on global ELT, with its long history of monolingualism and native speakerism (Cook, 2016;Kiczkowiak & Lowe, 2021;Phillipson, 2016).But are these plurioriented perspectives and practices enough to meaningfully challenge systemic inequities in language education or are we, as some have suggested, merely creating plurilingual subjects for participation in advanced capitalist economies?(e.g., Flores & Rosa, 2015;Kubota, 2020) And, further, can pluri-oriented perspectives travel freely across global geolinguistic contexts, or are these modernist, emancipatory objectives housed in antiquated, critical ideals that do not adequately consider subjectivities and onto-epistemologies emanating from the global south?(Pennycook & Makoni, 2019;Sousa Santos & Meneses, 2019;Sugiharto, 2021).These remain open questions, with need for further exploration regarding the longer-term impact of pluri-oriented language teaching and teacher education on both practitioners and their students as they travel through their academic and professional trajectories.Surely, social transformation requires more than simply pluri-orienting our pedagogies; education is not a panacea for curing all social ills.That said, our multiethnographic conversations suggest the potential of pluri-oriented pedagogies to meaningfully impact culturally and linguistically diverse students' trajectories; for practitioners who share our transformative objectives, engaging in critical reflection on the connections between theory, research, and practice may be invaluable to personal and professional impact and self-efficacy. 
Finally, in considering the implications of our multiethnographic findings, the issue of consequential validity looms large. Drawing on our collective perspectives, it seems clear that our pluri-oriented pedagogies are enacted with a range of theoretical underpinnings that are often meaningful inasmuch as they are relevant and effective in our classrooms (Cummins, 2021). Translanguaging, for example, is yet another name in a growing list of pluri-oriented pedagogical approaches that allow language teachers and teacher educators to challenge monolingual ideologies, policies, and pedagogies. Thus, while we recognize some of the distinct onto-epistemological underpinnings claimed for different pluri-oriented theories (e.g., multiliteracies; multilingualism; plurilingualism; translingualism), these distinctions may be somewhat inconsequential to their enactment. Furthermore, asserting the inherent emancipatory or transformative qualities of one theory over another (translanguaging versus plurilingualism, for example) serves to disempower and marginalize practitioners as knowledge creators and change agents. This is not to say that engaging with theory is unimportant for TESL educators. Indeed, as evidenced by this article, dialogic engagement with pedagogies may afford meaningful reflection on synergies between educational philosophies, theories of language (learning), and pedagogical practices, a process facilitated, we argue, by multiethnographic investigations such as ours (indeed, any range of dialogic methodologies would facilitate such processes). Ultimately, we hope this pedagogically oriented piece has provided language teachers and teacher educators with ideas for pluri-oriented classroom practice while highlighting the connections between theory, research, and practice. We conclude this piece by adding our voices to a call for greater collaboration between researchers and practitioners, thereby challenging coercive relations of power and promoting a more equitable, bi-directional knowledge flow that holds central EAP teacher agency, knowledge, and experience.

Figure 3 - Student sample of Infographic Identity Text
2023-07-31T15:35:50.236Z
2023-10-20T00:00:00.000
{ "year": 2023, "sha1": "62e2e053e854ccf5a125cad450fad9343ee1ecca", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/delta/a/srXdnD5b7jns6zQYwGFxHxw/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Dynamic", "pdf_hash": "904e30e959d2fb63776986d9ad52a7d403729660", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [] }
89306759
pes2o/s2orc
v3-fos-license
Impact of controlled neonicotinoid exposure on bumblebees in a realistic field setting our colony census measures showed a more pronounced effect of exposure, with fewer adult workers and sexuals in treated colonies after 5 weeks. 5. Synthesis and applications. Pesticide-induced impairments on colony development and foraging could impact on the pollination service that bees provide. Therefore, our findings, that bees show subtle changes in foraging behaviour and reductions in colony size after exposure to a common pesticide, have important implications and help to inform the debate over whether the benefits of systemic pesticide application to flowering crops outweigh the costs. We propose that our methodology is an important advance to previous semi-field methods and should be considered when considering improvements to current ecotoxicological guideli-nes for pesticide risk assessment. Introduction Vast areas of crop monocultures have become common practice in modern agriculture, with a heavy reliance on chemical insecticides to prevent crop damage from insect pests. However, while insecticide application provides the obvious benefits of controlling insect pest populations, we understand less about the costs associated with inadvertent exposure to non-target organisms (Desneux, Decourtye & Delpuech 2007;Goulson 2013). Many non-target insect species provide an important pollinator service, with ca. 75% of agricultural crop species being (to some degree) dependent on pollination which represents an estimated global economic value of over €150 billion per *Correspondence author. E-mail: r.gill@imperial.ac.uk annum (Klein et al. 2007;Gallai et al. 2009), as well as maintaining healthy wild-flower populations (Ollerton, Winfree & Tarrant 2011). Hence, it is important we understand the potential risks posed to insect pollinators by stressors, such as insecticide exposure ). Indeed, concern over insect pollinator declines is growing (Kremen & Ricketts 2000;Biesmeijer et al. 2006;Brown & Paxton 2009;Cameron et al. 2011;Burkle, Marlin & Knight 2013) and insecticides have been implicated as a contributing factor (Desneux, Decourtye & Delpuech 2007;Vanbergen & Initiative 2013;Goulson 2015). Bees are considered to be the major contributor to insect pollination (Greenleaf & Kremen 2006;Klein et al. 2007;Winfree et al. 2008;Potts et al. 2010), and there is increasing evidence to support that insecticide exposure can lead to sublethal behavioural effects (Desneux, Decourtye & Delpuech 2007;Gill, Ramos-Rodriguez & Raine 2012;Gill & Raine 2014;Lundin et al. 2015), potentially increasing susceptibility to other stressors such as pathogens (Alaux et al. 2010;Fauser-Misslin et al. 2014). Furthermore, in social bees, pesticide-induced impairments to colony functions, such as foraging, could accumulate and eventually lead to a significant decrease in colony size or even collapse (Bryden et al. 2013;Perry et al. 2015). Yet we still have a limited understanding of whether, and how, exposure to pesticides in semi-field or field environments might impair foraging behaviour (Gill, Ramos-Rodriguez & Raine 2012;Henry et al. 2012;Schneider et al. 2012;Fischer et al. 2014;Gill & Raine 2014;Stanley et al. 2016). To date, however, there has been criticism surrounding many pesticide exposure studies highlighting that most do not represent true field scenarios (i.e. based in artificial laboratory or semi-field conditions), and may have used unrealistically high concentrations and/or doses (Raine & Gill 2015). 
Moreover, the majority of studies on bees have often concentrated on effects of acute exposure, yet we understand relatively little about chronic effects (Gill & Raine 2014; Stanley et al. 2016), the potential impacts on colony fitness when considering social bees, and whether the pollination services are altered when bees are sublethally impaired (Gill, Ramos-Rodriguez & Raine 2012; Whitehorn et al. 2012; Bryden et al. 2013; Perry et al. 2015; Rundlöf et al. 2015; Stanley et al. 2015). We conducted an experiment that bridged the gap between laboratory and field studies. We placed bumblebee colonies, Bombus terrestris audax (Harris, 1776), in a field setting, and exposed them to a commonly used neonicotinoid pesticide, clothianidin, at concentrations approximating field realistic levels (Table S1, Supporting Information). Globally, neonicotinoids are a widely used class of pesticide that are, due to their systemic properties, readily taken up by treated plants to provide protection across all tissues for an extended period of time (Elbert et al. 2008). However, residues are found in the nectar and pollen of treated/contaminated flowering plants, resulting in a direct route of exposure to foraging insect pollinators such as bees (Rortais et al. 2005). In recent years, clothianidin has been one of the most heavily used neonicotinoids (Goulson 2013), and calls for evidence on the acute and chronic risks that clothianidin, as well as other neonicotinoids, pose to insect pollinators have been issued (EFSA 2013a). This study undertook careful observations of B. t. audax colonies, providing detailed insights into the foraging behaviour of 20 colonies across 5 weeks when provisioned with sucrose solution spiked with clothianidin at five parts per billion (ppb), or a sucrose control solution, allowing us to address the potential chronic effects of exposure on: (i) foraging activity (rate of returning forager bees); (ii) pollen foraging performance; (iii) any effect of wind speed and temperature on foraging patterns; and (iv) a comparison of endpoint measures including colony brood weight and the production of eggs, larvae, pupae and adults.

Materials and methods

Each B. t. audax colony [mean (±SEM) workers per colony = 44.4 ± 1.67; range = 35-58] was placed inside a wooden nest box which was then placed within a 110-L plastic container for protection from weathering and predation. The containers were set in the grounds of Silwood Park campus (a 110-ha site of non-agricultural parkland) on 6 June 2013 (see Figs 1a and S1, and electronic supplementary material for a description of nesting boxes and land type). Colonies were assigned to ten pairs using a split block design to experimentally control for differences in initial colony size (Fig. 1b), and each pair was randomly assigned to either a control or treatment group (Table S2). We found no significant difference in colony size between the control and treatment group in either worker or pupae number (GLM: workers: Z = 0.738, P = 0.46; pupae: T = 1.221, P = 0.238). Colonies within a pair were located 8-10 m from each other, and pairs were placed a minimum of 25 m apart. Colonies were provided with sucrose solution three times per week, with the calculated volume provided deemed to be half that which colonies would typically consume over the course of the experiment (see electronic supplementary information and Table S3 for details), but we did not provide any pollen. All control colonies (n = 10) were fed untreated 40% v/v sucrose/water solution.
Treated colonies (n = 10) were fed sucrose solution containing a five ppb concentration of clothianidin, which approximates a field realistic concentration (range found in flowering agricultural crops: <1.0-14 ppb in nectar; see Table S1). Colonies remained in the field for 5 weeks (35 days) and were then frozen. A colony census was then conducted by recording colony structure weight (wax, pollen stores and nectar pots) and the number of eggs, larvae, pupae, workers and sexuals present. As B. t. audax is a native UK subspecies, we did not fit the colonies with queen excluders, but this meant we were unable to prevent the dispersal of gynes from the colonies; therefore, the number of gynes in the colony represents a snapshot of the colony at the end of the experiment. Prior to the start of the experiment, five pairs were assigned to observer 1 and the remaining five pairs to observer 2 (Fig. 1b). Both the order in which the colony pairs were observed each day and the order in which each individual colony within a pair was observed were randomised, and there was no significant difference in colony size between observer 1 and 2 in either worker or pupae number (GLM: workers: Z = 0.872, P = 0.383; pupae: T = -0.234, P = 0.817). Observers were positioned beside the plastic box at a distance of 0.5-1 m to view the transparent entrance tube. Any worker returning to the colony was assumed to be a forager, and observers collected common measurements that included: (i) counting the number of returning foragers (forager 'activity'); (ii) recording whether foragers were carrying pollen; and (iii) taking the mean of three temperature (°C) and wind speed (m s⁻¹) readings outside (1 m) of the plastic box. In addition, each observer carried out measurements exclusive to themselves.

Observer 1: pollen removal. When a forager returned with pollen, a plastic 'trap-door' was used to prevent the bee from entering the colony. The bee was then held with forceps, and the pollen load from one leg was removed with a spatula and stored at -20 °C. We only took pollen from one corbicula as removal from both may affect future forager motivation (Raine & Chittka 2007). As pollen is typically gathered into each corbicula evenly (Winston 1991), we assumed that the weight of the pollen load mass was half of the total collected pollen. At the end of the observation, each pollen load was weighed (accuracy: ±0.1 mg).

Observer 2: photographic method. A standardized photograph was taken of each returning forager [Nikon D3300 SLR (Tokyo, Japan) fitted with a remote shutter release and 18-55 mm f/3.5-5.6 AF-P non-VR lens] with the camera consistently placed 200 mm from the entrance tube with a 55 mm focal length. We then calculated the 2D surface area (Fig. S2) of the pollen load using the software package ImageJ (Rasband 1997-2015) relative to a 10-mm scale bar drawn on the side of the entrance tube. Model outputs are provided in Tables S5-S7. For foraging data, the spatial structure of paired colonies and the observation regime of the experiment were modelled by nesting the variable Colony within Pair within Observer as random effects, while we included a random slope by observation Day to account for temporal pseudoreplication. Fixed factors included Treatment, Time (either hour of day, or day in the experiment) and the Treatment × Time interaction.
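To make the photographic method concrete, here is a minimal sketch (not the authors' ImageJ workflow) of the pixel-to-area conversion it relies on: the 10-mm scale bar fixes the millimetres-per-pixel calibration, and a thresholded pollen blob is then converted from a pixel count to mm². The function name, threshold value and toy image are illustrative assumptions only.

```python
import numpy as np

def pollen_area_mm2(image_gray, scalebar_px, scalebar_mm=10.0, threshold=0.5):
    """Estimate the 2D area (mm^2) of a pollen load in a calibrated photograph.

    image_gray  : 2D numpy array, grayscale image normalised to [0, 1]
    scalebar_px : measured length of the 10-mm scale bar in pixels
    scalebar_mm : real length of the scale bar (10 mm in this set-up)
    threshold   : illustrative grey-level cut separating pollen from background
    """
    mm_per_px = scalebar_mm / scalebar_px      # calibration from the scale bar
    pollen_mask = image_gray < threshold       # assume the pollen blob is darker than the tube
    n_pixels = int(pollen_mask.sum())          # pixel count of the blob
    return n_pixels * mm_per_px ** 2           # convert pixel area to mm^2

# Toy example: a synthetic 200x200 image with a dark 40x40 'pollen load'
img = np.ones((200, 200))
img[80:120, 80:120] = 0.1
print(pollen_area_mm2(img, scalebar_px=100.0))  # 1600 px * (0.1 mm/px)^2 = 16 mm^2
```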
We initially fitted each model to include Temperature and Wind Speed as covariates, but these were only retained when their removal significantly decreased the fit of the model, determined by a significant likelihood ratio test. All models analysing colony census data included Treatment as a fixed effect and Pair nested in Observer as random effects.

FORAGING BY TIME OF DAY

Average foraging activity was higher in control relative to treated colonies, as shown by the significant main effect of treatment and a lack of a significant interaction between treatment and observation hour (GLMER: 'Treatment' Z = -2.542, P = 0.011, 'Treatment' × 'Time' Z = 0.510, P = 0.610; Fig. 2a, Table S5a); however, foraging activity in both groups declined as the day progressed (Z = -6.346, P < 0.001). In contrast, the proportion of foragers carrying pollen increased as the day progressed (GLMER: Z = 4.508, P < 0.001), with both treatment and control colonies responding similarly ('Treatment' and 'Treatment' × 'Time' interaction Z ≤ 1.501, P ≥ 0.33; Fig. 2b, Table S5b). The average pollen load weight increased as the day progressed (LMER: χ² = 11.523, P = 0.003, Fig. 2c) with no difference between foragers from control and treated colonies (χ² ≤ 2.055, P ≥ 0.357; Fig. 2c and Table S5c). However, the mean area of the pollen load (calculated from the photo images) from control colonies was initially smaller (LMER: χ² = 18.527, P < 0.001) and increased as the day progressed (χ² = 11.72, P = 0.003), whereas it remained relatively constant for foragers from treated colonies (χ² = 11.956, P < 0.001; Fig. 2d; Table S5d). Despite the minor effects of treatment detected by the photographic method, these did not translate into significant differences in either total weight or area of pollen brought back, as there was no significant main or interactive effect with treatment (LMER: χ² < 3.61, P > 0.165; Fig. 2e,f; Table S5e,f).

FORAGING ACROSS DAYS

We next investigated whether there were changes in daily foraging patterns across 5 weeks, aiming to elucidate any chronic effects caused by clothianidin treatment. Foraging activity over all colonies followed a parabolic pattern. In control colonies, we observed an average of 10 foraging bouts h⁻¹ on day 3, increasing to 21 h⁻¹ by day 19, followed by a decline to 14 h⁻¹ by day 33; a relationship best described by fitting a polynomial relationship between foraging activity and observation day (GLMER: 'Day': Z = 7.742, P < 0.001; 'Day²': Z = -8.513, P < 0.001; Fig. 3a; Table S6a). The lack of a treatment or treatment by time interaction indicated that treated colonies made comparable numbers of foraging bouts throughout the experiment (Treatment: Z = -1.658, P = 0.097; 'Treatment' × 'Day': Z = 1.255, P = 0.209; 'Treatment' × 'Day²': Z = -0.64, P = 0.522). The proportion of foragers observed carrying pollen also increased until approximately midway through the experiment before declining as the colony aged (GLMER: 'Day': Z = 6.527, P < 0.001; 'Day²': Z = -6.344, P < 0.001, Table S6b).
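As a simplified illustration of the quadratic day term described above, the sketch below fits a Poisson regression of hourly forager counts on day and day² with a treatment interaction. It omits the random-effect structure (Colony within Pair within Observer, random slope by Day) used in the actual GLMER analysis, and the simulated data and column names are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in for the observation data (NOT the study's data):
# forager counts that rise and then fall over the 35-day experiment.
rng = np.random.default_rng(1)
day = np.tile(np.arange(1, 36), 20)          # 35 days x 20 colonies
treatment = np.repeat([0, 1], 35 * 10)       # 10 control, 10 treated colonies
mu = np.exp(1.5 + 0.12 * day - 0.003 * day**2 - 0.1 * treatment)
df = pd.DataFrame({"count": rng.poisson(mu), "day": day, "treatment": treatment})

# Poisson GLM with linear and quadratic day terms, mirroring the fixed-effect
# part of the paper's model (Treatment, Day, Day^2 and their interactions).
model = smf.glm(
    "count ~ treatment * (day + I(day ** 2))",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())
```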
The significant main effect of treatment indicated that, early in the experiment, foragers from treated colonies returned carrying pollen more frequently compared to control colonies (Z = 2.425, P = 0.0153), while the significant treatment by day interactions ('Treatment' × 'Day', Z = -2.368, P = 0.017; 'Treatment' × 'Day²', Z = 2.357, P = 0.018; Table S6b) showed that both the rate of increase and rate of decline were significantly lower for treated colonies, resulting in less fluctuation in the proportion of foragers observed carrying pollen relative to control colonies (Fig. 3b). The mean weight of pollen loads showed a curved relationship, initially increasing then decreasing as the experiment progressed across colonies (LMER: 'Day': χ² = 28.72, P < 0.001; 'Day²': χ² = 25.73, P = 0.002; Fig. 3c). Foragers from treated colonies initially carried heavier pollen loads per foraging bout (χ² = 27.63, P = 0.001) and increased the mean weight of pollen loads at a higher rate than control colonies (χ² = 10.646, P = 0.005; Table S6c). Conversely, observer 2 found no effect of experimental day (LMER: 'Day' χ² = 3.571, P = 0.168; Table S6d), indicating that the mean area of pollen loads remained constant throughout the experiment (Fig. 3d). Observer 2 found that the mean area of pollen loads from foragers returning to treated colonies was significantly smaller than those returning to control colonies (χ² = 7.193, P = 0.007) and, as we were unable to detect an interaction between treatment and observation day (χ² = 3.024, P = 0.082), it remained so throughout the experiment. Neither observer detected any effect of treatment nor a treatment by time interaction on either the total weight or area of pollen brought back per hour. However, they did find an initial increase in total weight and area of pollen collected in the early stages of the experiment, followed by a decline as the colony aged, mirroring the combined effects of forager number and the proportion of pollen foraging workers (Weight: LMER: 'Day': χ² = 27.952, P < 0.001; 'Day²': χ² = 23.889, P < 0.001; Area: 'Day': χ² = 21.502, P < 0.001; 'Day²': χ² = 17.859, P < 0.001; Fig. 3e,f; Table S6e,f).

EFFECT OF TEMPERATURE AND WIND ON FORAGING BEHAVIOUR

We found similar effects both within and between days, with higher wind speeds associated with increases in foraging activity (LMER: within day; Z = 3.117, P = 0.002), proportion of foragers carrying pollen (GLMER: within day; Z = 4.979, P < 0.001; between days; Z = 4.157, P < 0.001), the total weight of pollen (LMER: between days; Z = 109.16, P < 0.001) and total area (combined area) of pollen (LMER: between days; χ² = 29.475, P < 0.001). Higher temperatures were associated with fewer foraging bouts (within day; Z = -3.067, P = 0.002; between days; χ² = -4.894, P < 0.001) and a lower proportion of foragers carrying pollen (within day: Z = -2.549, P = 0.01; between days; χ² = -2.047, P = 0.041); however, of the foragers observed carrying pollen, the average load size was larger at higher temperatures (within day: weight; χ² = 21.934, P < 0.001; between days: mean area χ² ≥ 11.094, P < 0.001).

BROOD COMPOSITION AND ADULT CENSUS AFTER 35 DAYS IN THE FIELD

All but one of the colonies increased in weight compared to the start, although we found no significant difference between control and treatment colonies in colony weight change (Table S7a).
To see whether treatment induced changes to the demographic structure of the brood in colonies, we analysed the number of eggs, larvae and pupae separately. We found some differences in each of the life stages, with treated colonies containing significantly fewer eggs, but significantly more larvae and pupae (All: Z ≥ 2.87, P < 0.004, Fig. 4a-c, Table S7b-d). However, the similarity in colony weight gain and the inconsistent effect of treatment on the number of brood at the three life stages make it difficult to determine with confidence what effect, if any, treatment had on brood development (see Discussion; for model outputs for colony weight and brood composition see Tables S2b and S7a-d). However, the effect of treatment on the number of adults showed a consistent pattern, with fewer workers, drones and gynes within treated colonies after 35 days (All: Z ≥ 2.31, P ≤ 0.02; Table S7e-g, Fig. 4d-f).

EFFECT OF CLOTHIANIDIN EXPOSURE

To date, few studies have investigated the effects of pesticide exposure on bee colonies under field settings. Our study, hence, contributes to this growing evidence base, but shows novelty by delivering known levels of pesticide exposure in a semi-field experiment while recording detailed information on foraging behaviour across time. In this experiment, clothianidin exposure initially increased the proportion of bees foraging for pollen in the early days of the experiment compared to control colonies; however, the proportion of control foragers returning with pollen increased rapidly to similar levels as treated colonies (Table S6b). Of these pollen foraging trips, while the pollen removal method found that treated returning foragers brought back heavier pollen loads, the photographic method found that treated returning foragers brought back marginally smaller pollen loads compared to control foragers (Table S6c,d). The results from the colony measurements at the end of the experiment were mixed; we found no difference in the weight gain of the colony and no clear pattern in the effects of clothianidin on the number of brood individuals (eggs, larvae and pupae) within the colony. However, by the end of the experiment treated colonies contained fewer workers, drones and gynes in comparison with control colonies. While we cannot determine if the reduction in the number of adults is due to the effect of direct exposure to clothianidin during development, indirect impairment to colony function (i.e. ability to rear brood) or loss of adult workers while foraging, it is interesting that our results are in agreement with previous semi-field (Gill, Ramos-Rodriguez & Raine 2012; Whitehorn et al. 2012; Moffat et al. 2016) and field studies (Goulson 2015; Rundlöf et al. 2015) in colonies exposed to a neonicotinoid. Our behavioural results contrast with similar studies investigating the effects of imidacloprid and thiamethoxam, where chronic exposure produces obvious differences in foraging activity through time in treated colonies while simultaneously reducing the rate of pollen collection (Feltham, Park & Goulson 2014; Gill & Raine 2014; Stanley et al. 2015, 2016). An issue with our study is that we could not distinguish between a lack of motivation to collect pollen or an impaired ability to collect pollen, as we did not measure nectar foraging (i.e. we could not tell whether bees returning with nothing had crops containing nectar).
However, we can still ask: why does the effect of clothianidin on bumblebee foraging behaviour apparently differ from the studies showing induced impairment from imidacloprid and thiamethoxam exposure? First, it is possible that environmental conditions during our five-week study did not impose strong enough constraints on foraging, and therefore the treated colonies could buffer any clothianidin-induced effects. However, similar semi-field studies have reported large effects on foraging behaviour apparently under similar environmental conditions (Gill, Ramos-Rodriguez & Raine 2012; Feltham, Park & Goulson 2014; Gill & Raine 2014; Stanley et al. 2016), although admittedly the complexity of the landscape, the availability of floral resources and the interactions with other stressors make direct comparisons difficult. Secondly, in wild pollinator communities, the level of neonicotinoid exposure will vary depending on floral resource availability and the level of pesticide contamination in the environment, affecting the acute and chronic doses that individuals receive over time. Using a method similar to Gill, Ramos-Rodriguez & Raine (2012), we tried to ensure our dosage was realistic by: (i) basing the concentration of clothianidin within the range reported from field samples (Table S1); (ii) allowing bees to forage on both provisioned sucrose and field nectar; and (iii) providing what we deemed to be half of the sucrose that the growing colonies required. Thirdly, the different neonicotinoids do not have a homogeneous mode of action on bumblebee physiology and resultant behaviour. Although all neonicotinoids function as nicotinic acetylcholine receptor agonists, there are sufficient structural differences between compounds to alter their toxicity to the same species of bee (Iwasa et al. 2004), and there is increasing evidence that different bee species/taxa, such as honeybees, solitary bees and bumblebees, vary in sensitivity to the same neonicotinoid (Goulson 2013; Arena & Sgolastra 2014; Laycock et al. 2014; Godfray et al. 2015; Rundlöf et al. 2015; Moffat et al. 2016). To date, studies have largely focused on imidacloprid exposure, resulting in comparatively less research on the other neonicotinoids, despite the use of thiamethoxam and clothianidin significantly increasing in recent years (UK: Godfray et al. 2015). Of the studies exposing bees to clothianidin and thiamethoxam at field realistic levels (≤11 ppb), results have been mixed, with some studies on honeybees and bumblebees reporting little or no effect on colony success (Honeybees: Cutler & Scott-Dupree 2007; Cutler et al. 2014; Bumblebees: Franklin, Winston & Morandin 2004; Laycock et al. 2014; Scholer & Krischik 2014). Our finding that clothianidin causes only subtle behavioural changes to foraging is perhaps encouraging; however, we do still observe reductions in the number of adults within a colony, indicating that caution should still be taken when applying clothianidin onto flowering crops that are attractive to bumblebees (also see Rundlöf et al. 2015).

BUMBLEBEE FORAGING ECOLOGY

Colonies maintained consistent levels of pollen collection throughout the day despite the foraging patterns showing more workers returning in the morning than in the afternoon. Our data show that any decrease in foraging activity later in the day is offset by an increase in both the proportion of foragers carrying pollen (also see Free 1955) and the average amount brought back per foraging bout.
While we did not measure nectar collection, previous research has shown that early morning bumblebee foraging activity concentrates more on gathering nectar (Free 1955;Peat & Goulson 2005), which is consistent with the pattern we observed. We further found that daily foraging activity showed a parabolic pattern over the course of the experiment mirroring the pattern of typical production of workers through a colony life cycle (Goulson 2010), and is what we might expect if the number of pollen foragers were a function of colony development stage. Due to the design and nature of the experiment, we could not appropriately look for tri-interactions between treatment, time and wind speed or temperature. But we could look to see how wind and temperature influenced overall colony foraging activity across all 20 colonies. Perhaps counter-intuitively, given that wind speed is likely to increase energetic demands of flying insects (Niitepõld et al. 2009), we found that higher wind speeds correlated with increased foraging activity and pollen collection. Moreover, higher temperatures were associated with decreases in foraging activity at a colony level but with greater amounts of pollen brought back by individuals. A possible explanation could be that pollen may be drier and easier to collect under such conditions (Peat & Goulson 2005). Alternatively, it may be due to differences in weight distribution between carrying pollen and nectar loads, concentrating on collecting pollen at higher wind speeds to increase foraging performance relative to nectar (Mountcastle, Ravi & Combes 2015) which presumably offsets increased energetic costs of flying in windy conditions. Given the mild conditions during our experiment, temperature is unlikely to have placed a lower limit on foraging activity considering that bumblebees are known to cope well with low temperatures (Peat & Goulson 2005); in fact, we found that higher temperatures actually constrained foraging activity, resulting in fewer foraging individuals with a lower proportion concentrating on pollen. A P P L I E D B E N E F I T S O F O U R S T U D Y While laboratory studies are invaluable tools to investigate causal effects, a common criticism is they represent unrealistic conditions. For instance, 'true' effects may be easily buffered if colonies are raised under ideal conditions (Godfray et al. 2014(Godfray et al. , 2015Macfadyen et al. 2014), or exacerbated if colonies are fed exclusively on contaminated foods or experience an intensified and targeted application. Although recent studies have been designed to incorporate multiple stressors, either in the laboratory, such as combining pesticide by parasite interactions (Alaux et al. 2010;Vidau et al. 2011;Baron, Raine & Brown 2014;Fauser-Misslin et al. 2014;Brandt et al. 2016) or by operating partially in the laboratory and in the field (Whitehorn et al. 2012), it is still difficult to simulate environmental variability in the laboratory. More rarely, studies are conducted at a landscape scale under a 'real-world' scenario by placing bee colonies next to treated or untreated fields of flowering crops. Such ambitious studies should be applauded given the geographic scale required to prevent bees foraging on neighbouring fields , but such an approach is expensive and logistically challenging. Furthermore, such studies can suffer from difficulty in controlling exposure to a single pesticide given the numerous chemicals applied in the environment with potential interactive effects (Thompson et al. 
2013;Rundl€ of et al. 2015) and providing appropriate replication is challenging (Pilling et al. 2013;Thompson et al. 2016). Although semi-field studies such as ours are not necessarily novel per se, they are underdeveloped for risk assessment (EFSA 2013b) and often rely solely on endpoint measurements. The incorporation of behavioural data into risk assessment is important for two reasons: (i) the influence of anthropomorphic stressors on pollinator behaviour could directly influence the ecosystem services that pollinators provide, although a recent paper showed that bumblebees chronically exposed to the neonicotinoid, thiamethoxam, did not reduce the pollination service provided compared to non-exposed bees (based on measures of fruit set and number of seeds; Stanley et al. 2015); (ii) behavioural changes may reveal the underlying mechanistic explanation behind changes in pollinator numbers or decreases in colony fitness. Here we employed two separate methods for assessing pollen load, both of which have advantages. The pollen removal method allowed us to collect complete pollen loads throughout the experiment, and considering that pollen loads are not perfectly spherical, taking the mass of each collected pollen load is likely to provide a better estimate of pollen foraging performance than relying on the 2D surface area calculated using the photographic method. However, the removal method relies on an observer to handle the bees and collect the pollena process which is relatively time-consuming and labour intensive. In contrast, the photographic method is quick and simple to implement in the field. Interestingly, despite the pollen collection method appearing to be more invasive, we found no significant difference in the behaviour of foraging bees based on which of these two collection methods was used, allowing us to pool the data to measure foraging activity and the proportion of foragers carrying pollen (Table S8a-d). In this study, the data collection involved minimal financial costs, but the collection regime was somewhat labour intensive, relying on the availability of two observers, which unfortunately limited the amount of time we could observe each colony. Furthermore, in our study, workers were not uniquely marked for identification, so we could not account for the degree of pseudoreplication (observing the same worker returning multiple times) unlike studies that individually tagged workers (Schneider et al. 2012;Feltham, Park & Goulson 2014;Gill & Raine 2014;Stanley et al. 2015;Thompson et al. 2016). However, given that our observations were carried out for one hour per day, the probability of counting the same individual more than twice is low, due to the time taken for a successful foraging bout (Gill, Ramos-Rodriguez & Raine 2012). Moreover, if we consider that the overall amount of pollen entering the colony is the most critical endpoint result, then regardless of which individuals are returning, it is likely that the total food income to the colony is what matters when focusing on colony growth (although see Perry et al. 2015). We propose that further development of our method towards automation, for example RFID and automated weighing scales (Feltham, Park & Goulson 2014) or the use of video or camera traps, could be utilized for behavioural assays to inform higher tier assessment of pesticides on social bees. 
Experiments like the one we present here provide a feasible and appropriate method to bridge the gap between laboratory and field experiments, allowing us to expose colonies with known levels of specific pesticides, in a comparable manner to laboratory studies, while exposing them to field realistic conditions to detect any colony level effects (Gill, Ramos-Rodriguez & Raine 2012). These data are important in aiding the conservation of social bee species and provide crucial insights into pesticide-induced changes to foraging behaviour; this is particularly important with the increasing need to mitigate threats to insect pollinator services ). Supporting Information Additional Supporting Information may be found in the online version of this article. Table S1. Selection of studies reporting mean levels and ranges of Clothianidin residues (in ppb) across a range of agricultural settings. Table S2. (a) Census for each experimental colony prior to the start of the experiment, and (b) census at the end of the experiment after five weeks in the field. Colonies were assigned into ten pairs based on colony size (assessed by the number of workers and the number of pupae), and each pair was assigned to one of two observers using either a pollen removal method (removal of one pollen load), or photographic method (photograph taken of pollen load). Table S3. Volume of provisioned sucrose consumed (to the nearest 0.5 mL) at the time of feeder replenishment (the volume of sucrose provisioned shows the volume provided two or three days prior to collection of the feeder). Table S4. Example of an observer's timetable for monitoring foraging behaviour for their 10 assigned colonies. Table S5. Model outputs for LMER or GLMER for foraging during the day: (a) forager activity; (b) proportion of foragers bringing back pollen; (c) mean weight of pollen; (d) mean area of pollen; (e) total weight of pollen and; (f) total area of pollen. Table S6. Model outputs for LMER or GLMER for foraging over the five weeks: (a) forager activity; (b) proportion of foragers bringing back pollen; (c) average weight of pollen; (d)average area of pollen; (e)total weight of pollen and; (f)total area of pollen. Table S7. Model output for colony census. All models were LMER or GLMER, using a Gaussian or Poisson distribution. Table S8. Model output for colony census including collection method as a fixed effect.
2019-04-01T13:15:50.266Z
2017-08-01T00:00:00.000
{ "year": 2017, "sha1": "bdaf01d00679f9bae66ee08a22b594e770fb2f3e", "oa_license": "CCBY", "oa_url": "https://besjournals.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1365-2664.12792", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "6b44da607ba811e777035cba8cf8fa6176e685a3", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
239049710
pes2o/s2orc
v3-fos-license
Motion of charged and spinning particle influenced by dark matter field surrounding a charged dyonic black hole We investigate the motion of massive charged and spinning test particles around a charged dyonic black hole spacetime surrounded by perfect fluid scalar dark matter field. We obtain the equations of motion and find the expressions for the four-velocity for the case of a charged particle, and four-momentum components for the case of a spinning particle. The trajectories for various values of electric $Q_{e}$ and magnetic $Q_{m}$ charges are investigated under the influence of dark matter field $\lambda$. We find the equations of motion of a spinning particle that follows a non-geodesic trajectory via Lagrangian approach in addition to the charged and non-spinning particles that follow geodesic motion in this set-up. We study in detail the properties of innermost stable circular orbits (ISCOs) in the equatorial plane. The study of ISCOs of a spinning massive particle is done by using the pole-dipole approximation. We show that, in addition to the particle's spin, the dark matter field parameter $\lambda$ and black hole charges ($Q_{m}\;\text{and}\;Q_{e}$) have a significant influence on the ISCOs of spinning particles. It is observed that if the spin is parallel to the total angular momentum $J$ (i.e. $\mathcal{S}>0$), the ISCO parameters (i.e.$r_{ISCO}, L_{ISCO}\;\text{and}\; E_{ISCO}$) of a spinning particle are smaller than those of a non-spinning particle, whereas if the spin is anti-parallel to total angular momentum $J$ (i.e. $\mathcal{S}<0$), the value of the ISCO parameters is greater than that of the non-spinning particle. We also show that for the corresponding values of spin parameter S, the behaviour of Keplerian angular frequency of ISCO $\Omega_{ISCO}$ is opposite to that of $r_{ISCO}, L_{ISCO}\; \text{and}\; E_{ISCO}$. I. INTRODUCTION Black holes have so far been predicted to be formed under gravitational collapse of a sufficiently compact massive object that has exerted all its resources to withstand gravity, and thus they have been very attractive objects with their remarkable geometric properties. Recent observational studies of stellar mass black holes in x-ray binaries and gravitational wave astronomy [1,2] and supermassive black holes through modern astronomical observations of the Event Horizon Telescope Collaboration and BlackHoleCam [3,4] have verified the presence of black holes in the Universe. Since much progress has been made in observations of the first image of a supermassive black hole at the center of the M87 galaxy, black holes have been currently considered the main laboratories in testing general relativity and modified theories of gravity. Because of these abovementioned observations, a qualitatively new stage has been opened to study the remarkable gravitational properties of astrophysical black holes and obtain precise constraints and measurement of * Electronic address: sanjar@astrin.uz † Electronic address: hukmipankaj@gmail.com ‡ Electronic address: sksiwach@hotmail.com black hole parameters. To understand more deeply and explore the properties of black holes, one needs to investigate the geodesics of test particles and photons as their motion would be a very potent tool in addressing a question in connection with the black hole's nature. 
In this respect, the test particle trajectories may play a decisive role in understanding the unexplored nature of other existing fields contributing to altering its motion as well as the behavior of the background spacetime in the astrophysical context. For example, the motion of test particles gets altered drastically as a consequence of the presence of external magnetic fields [5] and the presence of surrounding matter fields [6,7]. On the other hand, particle motion around black holes gives the possibility of exhibiting departures of the geometry of astrophysical black holes. There is an extensive body of research involving the effect of the electromagnetic field on the motion of a test particle in the black hole vicinity in various gravity models (see, for example, [8][9][10][11][12][13][14][15]). Furthermore, one of the other existing matters is the dark matter field that can exist in the environment of supermassive black holes. With this in mind, one is allowed to test the effect of the dark matter field on both the background geometry as well as the test particle motion in a realistic astrophysical context. Regardless of the fact that there is still no direct experimental detection that verifies the presence of dark matter, observations have provided strong evidence that dark matter can exist in the environment of giant elliptical and spiral galaxies [16]. Relying on theoretical analysis and astrophysical data, it is believed that dark matter contributes approximately up to 90% of the mass of the Galaxy, while the rest is the luminous matter composed of baryonic matter [17]. Astrophysical observations indicate that giant elliptical and spiral galaxies are embedded in a giant dark matter halo [3,4]. In the literature, there are several black hole solutions that involve the dark matter contribution in the background geometry; here we give the representative ones [18,19]. Of them, Kiselev derived a static and spherically symmetric black hole with a dark matter profile through a quintessential scalar field [18], according to which the new solution involved a term λ ln(r/r_q) in the metric coefficients. This solution, in turn, gave new prosperity in deriving further solutions. Later on, another interesting solution was derived by Li and Yang [19] containing the same term (r_q/r) ln(r/r_q), represented by a phantom scalar field under the assumption that the dark matter profile is formed from massive particles that must participate in the weak interaction (i.e. weakly interacting massive particles with the equation of state ω ≈ 0). There is a large amount of work that considers the dark matter field in the background spacetime in various situations (see, e.g. [20][21][22][23][24][25][26][27]). In an astrophysical scenario, black holes can be characterized by at most three parameters, i.e. black hole mass M, rotation a and electric charge Q. Note that the last two parameters are constrained by the black hole mass. Black holes can also be regarded as charged black holes, referred to as the Reissner-Nordström black hole solution with mass and electric charge. For example, the detailed analysis of the neutral and charged particle motion around a Reissner-Nordström black hole has been discussed in [28,29]. There are several astrophysical mechanisms for a black hole to possess charge.
A black hole can have a positive net electric charge due to the balance between the Coulomb and gravitational forces for charged particles near the surface of the compact object [30,31] and due to matter which gets charged by irradiating photons [32]. It is also worth noting that, by the Wald mechanism [33], an induced charge can be produced by magnetic field lines getting twisted through the frame-dragging effect. However, in all the mentioned cases the black hole charge can be regarded as much weaker (see, for example, [34]). Also, based on an exact rotating Schwinger dyon solution, a black hole can be endowed with four parameters: mass M, the electric charge Q_e, magnetic charge Q_m, and rotation parameter a [35]. This black hole solution is an interesting case of a charged black hole even if there exists no rotation parameter. In this context, there are several other ways that a black hole can include electric and magnetic charges, and the properties of these black hole solutions have been tested by different astrophysical processes [36,37]. The analysis of particle motion is a powerful tool to reveal the geometric properties of compact objects and astrophysical processes in the environment surrounding black holes [38][39][40][41][42][43]. In a realistic scenario, the background spacetime geometry cannot be regarded as vacuum. Thus, in this paper, we consider a charged dyonic black hole placed in the perfect fluid dark matter (PFDM) field. For this spacetime geometry, we study the influence of PFDM on the dynamical motion of magnetically and electrically charged particles moving around this charged dyonic black hole. This may lead us to understand the nature of the gravitational interaction between the charged black hole spacetime and the PFDM field. In addition to the motion of a charged particle, we also investigate the motion of a spinning particle moving in the vicinity of a charged dyonic black hole surrounded by PFDM. For spinning particles, we mainly discuss the properties of the innermost stable circular orbit (ISCO) parameters r_ISCO, L_ISCO, E_ISCO, and Ω_ISCO and bring out the effect of the dark matter field in addition to the black hole charges. The main motivation to study the ISCOs is that they can be treated as the initial condition of the final merger of a binary system of compact objects. Obviously, the motion of a massive test particle is influenced by black hole parameters like charge, mass and spin (see Refs. [44][45][46][47][48][49][50][51][52]). The study of the ISCOs of the spinning test particle started with the works of Suzuki and Maeda [44] and Jefremov et al. [53]. In [44], the ISCOs of a spinning test particle are explored for a Kerr black hole. Later, in [53], the approximate analytical solutions of the ISCOs for a spinning particle are presented for both Schwarzschild and Kerr black holes within the small spin limit. Thereafter, the study of ISCOs for spinning particles in various black hole backgrounds has been carried out [54][55][56][57][58][59][60][61][62][63], and the motion of a spinning particle has been studied in various other contexts [64][65][66][67][68][69][70][71][72][73][74][75]. Still, there are not many studies of the ISCOs of a spinning particle when a black hole with a surrounding medium is considered. Here, we are interested in seeing how the surrounding medium (say, PFDM in our case) will affect the motion of a spinning particle besides the black hole parameters.
It is well known that when the test particle reaction effects (such as self-force corrections) are taken into account, the test particle will follow a nongeodesic trajectory [76][77][78]. Additionally, if the test particle has spin, an extra force due to spin-curvature coupling acts on the particle, which also results in nongeodesic motion of the spinning test particle [79,80]. In this paper, we consider only the spin-orbit coupling via the "pole-dipole" approximation and discard the reaction of the particle with the black hole background in order to numerically investigate the properties of the ISCO of a spinning particle with an arbitrary value of the spin S moving in the charged dyonic black hole immersed in the PFDM. To do so, we use the Tulczyjew spin-supplementary condition (TSSC). We investigate the effect of λ, Q_m, Q_e, and S on the ISCO parameters r_ISCO, L_ISCO, E_ISCO, and Ω_ISCO of a spinning test particle together with the superluminal constraint (for a brief discussion on the equations of motion of the spinning particle in curved spacetime, the TSSC and the superluminal constraint, see Appendix A and references therein). It is worth noting that here we are considering the spin-orbit coupling only up to first order and discard the higher order spin correction terms, known as the 1/c²-approximation (or quadratic spin corrections) [81,82]. The paper is organized as follows: In Sec. II, we describe briefly the charged black hole metric, which is followed by the study of charged particle dynamics in Sec. III. We investigate the motion of a spinning particle around a charged black hole submerged in a perfect fluid dark matter environment in Sec. IV. In Sec. V, we offer a summary and emphasize the important findings. To make the article self-contained, we include a brief introduction to the Lagrangian theory of the spinning particle in appendix A. In addition, the explicit form of the equations required to identify the ISCOs and the explicit form of the equation expressing the superluminal constraint are provided in appendix B. Throughout this work, we use a system of units in which G = c = 1. Greek indices are taken to run from 0 to 3, while Latin ones run from 1 to 3.

II. THE BLACK HOLE METRIC

We consider the Lagrangian density of Einstein-Maxwell gravity in the presence of the perfect fluid scalar dark matter field [19,83], with the electromagnetic field tensor, where *G^µν refers to the Dirac string term [83]. Note that V(Φ) is related to the phantom field potential, while L_DM and L_I, respectively, refer to the dark matter Lagrangian density and the Lagrangian representing the interaction between the dark matter and the phantom field. Further, L_I can be regarded as negligible because of the small interaction between the phantom field and the dark matter. Let us then write the Einstein field equation for Einstein-Maxwell gravity in the presence of PFDM, where the energy-momentum tensor T_µν is defined with T^DM_µν representing the energy-momentum tensor for the PFDM. We consider the static, spherically symmetric metric ansatz for the black hole as ds² = −e^{A(r)} dt² + e^{B(r)} dr² + r²(dθ² + sin²θ dφ²), (6) and then the Einstein equations with the Maxwell field can be written in the form given in [19]. It is worth noting that the θθ and φφ components of the Einstein field equations are identical.
From the above Einstein equations, the black hole metric in Boyer-Lindquist coordinates x^α = (t, r, θ, φ) is then given by the line element of Eqn. (10), where F(r) is the metric function, with M being the black hole mass, Q_e and Q_m the electric and magnetic charges, and λ the scale parameter characterizing the PFDM (an assumed explicit form of F(r) is sketched below). Thus, the static and spherically symmetric black hole solution is endowed with three parameters, i.e. the mass M, the electric charge Q_e and the magnetic charge Q_m. Note that the parameters M, Q_e,m and λ are dimensionful quantities, and hence their dimensions are taken to be L¹ having set G = c = 1. However, for further analysis we shall for simplicity normalize these parameters as Q_e,m/M and λ/M, respectively, in order to define them as dimensionless quantities having set M = 1. For Q_e = Q_m = λ = 0, the black hole spacetime metric surrounded by the PFDM field, i.e., Eq. (10), reduces to the Schwarzschild metric, while for Q_m = λ = 0 the spacetime metric reduces to the Reissner-Nordström metric. Similarly, in the case of vanishing Q_e and Q_m, it reduces to the Schwarzschild black hole surrounded by the PFDM field. In the case of a nonvanishing dark matter distribution, i.e. λ ≠ 0, the energy-momentum tensor is that of an anisotropic perfect fluid distribution, with the density and the radial and tangential pressures given in Eqn. (13). We shall only consider positive values of the dark matter profile, λ > 0, for further analysis, which refers to a positive energy density and represents attractive behavior. The horizon of the black hole is determined as a solution of the nonlinear equation F(r) = 0, Eqn. (14). In the case of vanishing λ the horizon radius takes the simple Reissner-Nordström-like form r_± = M ± (M² − Q_e² − Q_m²)^{1/2}, corresponding to the outer and inner horizons. If the two outer and inner horizons coincide, i.e. r_+ = r_−, the solution is interpreted as the extremal charged dyonic black hole, for which the condition M² = Q²_{e+m} ≡ Q_e² + Q_m² is satisfied, while the black hole horizon no longer exists in the case of M² < Q²_{e+m}, exhibiting a naked singularity. In the case of nonvanishing λ, it becomes complicated to solve the nonlinear Eqn. (14) analytically for the black hole horizon. It is, however, possible to reach an approximate analytic form by imposing the condition λ ≪ M. In doing so, one can obtain the horizon radius in the corresponding approximate form. Hereafter, we only use the black hole's outer horizon r_+, popularly known as the event horizon, and use the notation r_h instead of r_+, unless otherwise specified. In Fig. 1, we show the limits on the parameters Q_e, Q_m, and λ. As can be seen from Fig. 1, the shaded region corresponds to the permitted region for which a black hole exists, while the unfilled region corresponds to a naked singularity. To understand this a bit more clearly, in Fig. 2 we show the dependence of the horizon radius r_h on the PFDM parameter λ. From Fig. 2, the black hole horizon radius decreases as the parameters Q_e, Q_m, and λ increase up to some particular value of λ, after which it starts to grow. This clearly shows that the parameter λ has the physical effect of shifting the black hole's outer horizon toward smaller r. This is in agreement with the physical meaning of the parameter λ as a repulsive gravitational charge, similarly to the black hole's electric and magnetic charges Q_e,m. However, it can be seen from Fig. 2 that the radius of the horizon grows in the limit of large λ. This happens because the radial pressure p_r (see Eqn. (13)) has a repulsive nature, and thus it turns out that the dark matter field ρ can be repulsive in the limit of large λ. Hence, we further focus on small λ in order to obtain a more realistic model for the dark matter distribution.
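As a hedged numerical illustration only: the display equations for F(r) and the horizon radius did not survive extraction, so the sketch below assumes the metric function commonly used in the literature for a dyonic black hole in PFDM, F(r) = 1 − 2M/r + (Q_e² + Q_m²)/r² + (λ/r) ln(r/λ), and solves F(r_h) = 0 numerically. If the paper's exact form differs, only the function body needs changing; parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def F(r, M=1.0, Qe=0.3, Qm=0.3, lam=0.1):
    """Assumed metric function for the dyonic black hole in PFDM (G = c = 1)."""
    return 1.0 - 2.0 * M / r + (Qe**2 + Qm**2) / r**2 + (lam / r) * np.log(r / lam)

def event_horizon(M=1.0, Qe=0.3, Qm=0.3, lam=0.1):
    """Outermost root of F(r) = 0, i.e. the event horizon r_h."""
    f = lambda r: F(r, M, Qe, Qm, lam)
    # Scan inward from a large radius and bracket the first sign change.
    radii = np.linspace(4.0 * M, 0.05 * M, 2000)
    for r_hi, r_lo in zip(radii[:-1], radii[1:]):
        if f(r_hi) * f(r_lo) < 0:
            return brentq(f, r_lo, r_hi)
    raise ValueError("no horizon found: naked singularity for these parameters")

print(event_horizon())                           # PFDM + dyonic charge
print(event_horizon(Qe=0.0, Qm=0.0, lam=1e-9))   # ~2M in the Schwarzschild limit
```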
In the following sections, the motion of magnetically and electrically charged particles, as well as that of neutral spinning particles, will be studied. It is worth noting that we use the Hamiltonian technique to investigate the motion of magnetically and electrically charged particles simply because of its convenience. On the other hand, to investigate the motion of a spinning particle, there are mainly two well-known approaches: the Mathisson-Papapetrou (MP) approach [84,85] and the Lagrangian approach [79,86,87,88]. For the spinning particle, we use the Lagrangian technique rather than the MP approach in this article. The rationale for adopting the Lagrangian approach over the MP approach is explained in appendix A.

III. CHARGED PARTICLE DYNAMICS

First, we focus on charged particle motion around the charged black hole in the presence of perfect fluid dark matter. We assume that the test particle is endowed with rest mass m, electric charge q, and magnetic charge q_m. In general, the Hamilton-Jacobi equation of the system is then expressed as in [89], in terms of the action S, the spacetime coordinates, the components of the vector and dual vector potentials, and the electric and magnetic charges q and q_m. The components of the associated vector potentials of the electromagnetic field take the form of Eqn. (19). As one can see, the spacetime metric Eqn. (10) and the components of the vector potential Eqn. (19) are independent of the coordinates (t, φ), which leads to two conserved quantities, namely the energy E and the angular momentum L of the charged particle measured at infinity. It is known that the Hamiltonian is regarded as a constant, H = k/2, with k = −m², where m is the mass of a particle having electric and magnetic charges. Following the Hamilton-Jacobi equation for charged particle motion, we write the action S as a sum in which S_r and S_θ, respectively, refer to the radial and angular functions of only r and θ. Using Eqn. (20), one can easily obtain the Hamilton-Jacobi equation in the form of Eqn. (21), which is fully separable into radial and angular parts. Hereafter, performing simple algebraic manipulations, one can show that a separation constant K, the Carter constant of motion, appears, so that E, L, k, and K are independent constants of motion. The fourth one is related to the latitudinal motion and is caused by the separability of the action. If we focus on equatorial motion (i.e., θ = π/2), we can further eliminate the fourth constant of motion [89]. We shall for convenience introduce suitable notations and, following Eqns. (21)-(23), we obtain the radial equation of motion for charged particles, where V_eff(±)(r) are the positive and negative solutions of the effective potential V_eff(r) describing the radial motion, given by Eqn. (26), and where the charge coupling parameters σ_e and g_m are defined accordingly; in further calculations the radial coordinate and the dark matter parameter are normalized as r → r/M and λ → λ/M, respectively. As seen from Eqn. (25), we have either E > V_eff(+)(r) or E < V_eff(−)(r), since ṙ² ≥ 0 always. However, we select V_eff(+)(r) as the effective potential, i.e. V_eff(r) = V_eff(+)(r).
From Eqn. (26), if we remove all parameters except the black hole mass M, we simply recover the effective potential of the Schwarzschild spacetime. Also, we note that we have written the radial motion for a charged test particle in the equatorial plane (i.e., θ = π/2) in the form given by Eqn. (26). In an astrophysical scenario, it is believed that the environment surrounding black holes cannot be regarded as vacuum as a consequence of the existence of matter and fields. Therefore, it becomes increasingly important to take into account the repulsive and attractive effects due to the matter distribution in the nearby environment of the black hole. For that purpose, we analyze the radial profile of the effective potential V_eff(r), which reflects the trajectories of particles for various combinations of the charge coupling parameters (σ_e and g_m) and the black hole parameters, involving the black hole charges (Q_e,m) and the dark matter parameter λ, that describe the background geometry. As can be seen from Fig. 3, the left panel in the top row reflects the role of σ_e and g_m in the radial profile of the effective potential while keeping the black hole charges Q_e, Q_m and the dark matter parameter λ fixed, whereas the right panel reflects the role of the parameter λ for the case when the parameters Q_e, Q_m, σ_e, and g_m are fixed. The panel in the bottom row of Fig. 3 shows the impact of the parameters Q_e and Q_m on the radial profile of V_eff(r) for the case when the parameters λ, σ_e and g_m are fixed. From Fig. 3, the height and strength of V_eff(r) increase, and its shape shifts toward the event horizon r_h, as a consequence of the impact of the black hole parameters Q_e, Q_m, and λ. Further examination of Fig. 3 shows that the inclusion of the parameters σ_e and g_m reduces the maximum of V_eff(r) and pushes it away from the horizon r_h. Also, one can infer that the combined effect of the parameters Q_e,m and λ on the profile of V_eff(r) is balanced by the parameters σ_e and g_m. As was stated earlier, we restrict the motion to the equatorial plane, i.e., θ = π/2. We then further find particle trajectories that help us to understand qualitatively how a charged test particle behaves around the black hole under the combined effect of electromagnetic and gravitational forces. In Fig. 4, we show particle trajectories restricted to the equatorial plane of the charged dyonic black hole immersed in PFDM. As seen in Fig. 4, various orbits occur, i.e. bound, captured and escaping orbits, depending upon the parameter choices. It is obvious from the particle trajectories that orbits are captured by the black hole when the magnetic charge q_m of the particle increases, while they turn into escaping orbits for a larger electric charge q_e of the particle. However, it turns out that these orbits become bound as a consequence of an increase in the value of the PFDM parameter λ for fixed particle energy and angular momentum, as shown in the subplots of Fig. 4. Now we turn to the study of the circular orbits of electrically and magnetically charged test particles around the charged dyonic black hole in the presence of PFDM. As we know from the theory of geodesic motion, a test particle needs to satisfy the following conditions simultaneously in order to move in circular orbits:
• the radial velocity should vanish: ṙ = 0;
• the radial acceleration should also vanish: r̈ = 0.
These two conditions determine both types of circular orbits, i.e. stable and unstable (explicit neutral-particle expressions are sketched below).
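For orientation, and as a hedged reconstruction rather than the paper's own Eqns. (30)-(31), the two conditions above give, in the neutral-particle limit (σ_e = g_m = 0) of a static, spherically symmetric metric with lapse function F(r) and equatorial effective potential V_eff²(r) = F(r)(1 + L²/r²), the standard circular-orbit expressions

\begin{equation*}
L^{2} = \frac{r^{3} F'(r)}{2F(r) - rF'(r)}, \qquad
E^{2} = \frac{2F^{2}(r)}{2F(r) - rF'(r)},
\end{equation*}

which exist only where 2F(r) > rF'(r); the divergence of L at 2F(r) = rF'(r) marks the photon sphere r_ph, consistent with the limit quoted in the text. The full charged-particle expressions additionally involve the coupling parameters σ_e and g_m.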
The above equations lead us to determine the specific angular momentum L and energy E for electrically and magnetically charged particles on the circular orbits where refers to a derivative with respect to r. It is worth noting that in the case g m = σ e , we obtain L for the Reissner-Nordström black hole case. We now analyze the radial profiles of the specific angular momentum for charged particles so as to understand the behavior of the circular orbits around the black hole. In Fig. 5, we show the radial profile of the specific angular momentum L for charged particles at the circular orbits around a charged dyonic black hole immersed in PFDM. As shown in Fig. 5, the left panel reflects the impact of the charge coupling parameters g m and q e on the radial profile of L for fixed values of black hole charges Q e and Q m and PFDM parameter λ, while the right panel reflects the impact of the dark matter while keeping the black hole charge and charge coupling parameters fixed. From Fig. 5, one can easily see that circular orbits of electrically and magnetically charged particles move at larger radii as we increase the charge coupling parameters g m and σ e (see left panel) In contrast, circular orbits move toward smaller radii and become close to the black hole horizon r h with increasing PFDM parameter λ. Additionally, what we observe from the above analysis is that both black hole charges Q e,m and PFDM parameter λ bring the radii of the circular orbits closer to the r h of the black hole, thus behaving as a repulsive nature as a result On the other hand, another limit of the existence of circular orbits is described by the radius for unstable circular photon orbits r ph , which is determined by the divergence of either the specific angular momentum L or 31), one can easily obtain the limit on the existence of circular orbits that always exist at r > r ph , which gives the radii of the photon orbits. We shall, for simplicity, consider λ 1 to determine the approximate expression for the photon orbit r ph as From the above expression one can easily see that it explicitly reduces to the Schwarzschild case, i.e., r ph = 3 in the limit λ, Q e,m → 0. However, it is certain from Eqn. (33) that r ph decreases as we increase both the parameters λ and Q e,m . Next, in order to determine the ISCO we need to find limiting value of orbital angular momentum L for which V ef f (r) still has an extremum. The ISCO lies at the position where the maximum and minimum of effective potential V ef f (r) merge. Hence, one more condition is needed beside conditions Eqns. (28)- (29), which reads • the second derivative of V ef f (r) with respect to radial coordinate r should vanish: For being somewhat more quantitative, in Table I we show the numerical values of ISCO parameters L ISCO , E ISCO and r ISCO for various combinations of Q e,m , σ e , g m and λ. As shown in Table I λ have similar effect that decreases the r ISCO . However, the r ISCO increases as a consequence of the increasing the charge coupling parameters g m and σ e . Furthermore, one can see that the combined effect of both Q e,m and λ respectively decreases L ISCO and increases E ISCO , while the opposite is the case as we increase the charge coupling parameters, as seen in Table I. Let us then consider the orbital and angular velocity of an electrically and magnetically charged particle moving around the charged dyonic black hole surrounded by PFDM, measured by a local observer [89,90]. 
For that, we first write the coordinate velocity components for the particles: where the Carter constant parameter K becomes irrelevant when we restrict motion to the equatorial plane, i.e., θ = π/2. For the particle energy, its classical form can be written as follows: where and we have defined v = υ/c. As can be easily seen from Eqs. (35-37) vθ = vφ = 0 always, while the radial velocity is given by vr = 1 very near the black hole horizon, i.e. F (r) = 0. Similarly, one can determine the particle linear velocity at the ISCO radius (see, for example [28]). The radial and latitudinal velocities vanish, i.e. v r = v θ = 0 at the ISCO, yet the particle orbital velocity takes the form where Ω = −∂ r g tt /∂ r g φφ corresponds to the orbital angular velocity of test particle, i.e., the so-called Keplerian angular frequency. Equation (39) then yields v = 1 2 It is worth noting here that the v ISCO for the Schwarzschild black hole (i.e. v ISCO = 1/2) may be readily recovered from the preceding equation when the black hole parameters Q e,m and λ both disappear simultaneously. We now analyze the radial profile of orbital velocity v for a test particle at ISCO in order to have detailed information about the background geometry of the charged dyonic black hole surrounded by PFDM as compared to the one for the Schwarzschild black hole case. In Fig. 6, we show the radial profile of the orbital velocity v for test particles orbiting at the ISCO radius. From Fig. 6, one can easily see that the orbital velocity of test particle around a charged dyonic black hole immersed in PFDM decreases, and its shape shifts down to smaller v as a consequence of an increase in the value of both black hole charges, i.e., Q e and Q m and PFDM parameter λ. However, this may not be the case in general. Thus, we need to explore it numerically to understand the behavior of the orbital velocity better. The behavior of the results on the orbital velocity demonstrated in Fig. 6, is explained in detail in Table I. From Table I, one can infer that the orbital velocity v decreases when increasing the PFDM parameter λ, while it increases due to the effect of black hole charges Q e and Q m . IV. SPINNING PARTICLE DYNAMICS In this section, we focus on the study of motion of a spinning particle moving around a charged dyonic black hole in the presence of PFDM. Corinaldesi and Papapetrou [91] first studied the motion of a spinning particle moving in the vicinity of a Schwarzschild black hole. Later, Hojman [86] used the Lagrangian approach to find the equation of motion for the spinning particles. In this section, we use the Lagrangian approach to study the motion of a spinning particle (for a thorough explanation of why we chose the Lagrangian approach over the Mathisson [84] and Papapetrou [85] approach, (see appendix A). As the metric (10) is independent of time, we can find the four constants of motion via a simple calculation using the above Killing vectors. For simplicity we are focusing only on the spinning particle motion in the equatorial (i.e., θ = π/2) plane. First, we find the constant of motion (i.e., conserved energy E and total angular momentum J perpendicular to the θ = π/2 plane) for the spinning particle using the Killing vectors and Eqn. (A13) where the symbol ( ) represents the derivative with respect to radial coordinate "r". Using the conservation Eqns. (A6) and (A7), we find another two constants of motion (i.e., mass and spin): On the other hand, the TSSC Eqn. 
(A8) for our case reads S tr P r F (r) + r 2 S tφ P φ = 0, F (r)S tr P t + r 2 S rφ P φ = 0, The four-momentum Eqn. (A4) for the metric (10) reduces to the following equations: and the equations of motion for the spin calculated from Eqn. (A5) turn out to bė where the over-dot sign (˙) indicates the derivative with respect to coordinate time. Using these 13 equations [i.e., Eqns. (42)-(54)], one can completely determine the motion of a spinning particle in vicinity of a charged dyonic black hole in PFDM. Using Eqns. (46) and (47), one can write Now, using conservation of mass and spin Eqns. (44) and (45) together with Eqn. (55), the expression for S tr comes out as Using Eqns. (42), (43) and (47), the relation between the conserved E and J is which can be solved explicitly for P φ after substituting the value of S tr from Eqn. (56). The expression for P φ reads as where Q 2 e+m = Q 2 e + Q 2 m , and J = J/m, S = ∓S/m and E = E/m are the total angular momentum, spin and energy per unit mass, respectively. Here, S (negative) positive means that the spin of the particle is (anti-) parallel to J. It is important to note here that total angular momentum per unit mass J is equal to the sum of spin angular momentum per unit mass ∫ and orbital angular momentum per unit mass L. Similarly, using Eqns. (42), (56) and (58), we find the time component of the four-momentum which further leads to the explicit expression of P r by using Eqn. (58) and conservation of mass Eqn. (44) It is worth noting here that in the limits S → 0 and λ → 0, the Eqns. (58)-(60) reduce to the nonzero components of the four-momentum for the Schwarzschild black hole obtained in [92]. Next, we are interested in the explicit form of the radial and azimuthal components of coordinate velocity (i.e., r andφ), since they will help us later to confine the motion of the spin particle to the subluminal zone, which is the region where the particle's four-velocity is timelike. With this aim in mind, we need to first find the unknown components of the spin tensor (i.e., S tφ and S rφ ) in terms of known four-momentum components. By using the first [Eqn. (46)] and second [Eqn. (47)] TSSC together with Eqn. (56) it is easy to show that the components S tφ and S rφ of the spin tensor can be written as and To determineṙ first, we multiply the factor ±(Sr) to the TSSC Eqn. (48) and subtract Eqn. (52) together with the help of Eqns. (61) and (62), we get This, when combined with Eqns. (51) and (56) along with Eqns. (61) and (62), allows us to find the precise symbolic expression asṙ and now, to calculateφ, we utilize Eqns. (56), (58), (61) and (64) in Eqn. (52). With a little algebra, we obtaiṅ It is also observed that the spatial component of the spin perpendicular to the θ = π/2 plane is which in the asymptotic limit (i.e., r → ∞) becomes S z /m = SE. It is also worth mentioning that, as demonstrated in [92,93], one may simply bypass the four-momentum and four-velocity relation Eqn. (A12) and work with coordinate velocities solely in the case of static spherically symmetric spacetimes. As a result, we directly found the necessary coordinate velocities in Eqns. (64) and (65) (i.e.ṙ andφ, respectively) instead of the four-velocity of the spinning particle. A. 
Effective potential It is well known from Newtonian mechanics that the equations of motion can be solved in terms of the radial coordinate [94] to study the motion of a test particle in a central force field, and that in general relativity, the "effective potential" method is widely used to study the dynamics of a test particle in the black hole background. Because the radial velocity u r is parallel to the radial component P r of the four-momentum P µ [53], the effective potential V ef f of the spinning particle moving in the background of a charged dyonic black hole in PFDM can be computed using the P r Eqn. (60). Since this equation is quadratic in E, factorizing it separates the energy component from the radial component, giving us (67) where, A, B and the V ef f (±) are were defined, with The first-look analysis of Eqn. (67) confirms that P r is real, and therefore u r if and only if the motion of the spinning particle is constrained in such a way that E > V ef f (+) or E < V ef f (−) , as mentioned in previous section as well together with the condition with A > 0, which is an extra condition coming solely because of the spin-orbit coupling. In Table II, we discover the values of the radial parameter (i.e. r min ) for various combinations of λ, Q e and Q m , as well as the corresponding range of spin parameter per unit mass S for which the condition A > 0 is fulfilled. For a consistency check, we demonstrated in Table II that when Q e and Q m vanish and the PFDM parameter is extremely tiny (i.e., λ = 0.0000001), the exact Schwarzschild limiting values (i.e., r min = 3 and −5.19615 < S < 5.19615) as found in [92] are recovered. Furthermore, in the limit Q m → 0 and λ → 0, the V ef f obtained in Eqn. (68) reduces to that of the Reissner-Nordström black hole (BH). However, if Q e → 0 in addition to Q m → 0 and λ → 0, V ef f (+) matches that of the Schwarzschild BH [92]. The effective potential V ef f (+) for both direct (i.e. L = J − S > 0) as well as retrograde (i.e. L = J − S < 0) trajectories of a spinning particle is plotted in Figs. 7-9, for various com-binations of Q e , Q m , λ and S. A brief summary of the results obtained after analyzing the plots in Figs. 7-9 of V ef f (+) are as follows: (i) In Fig. 7, for the direct trajectories, we observe that when the parameter S grows, the maximum of V ef f (+) first drops and then climbs again, as well as moves closer to the BH's event horizon r h (see left panel). In contrast, the maximum of V ef f (+) goes away from the r h as the parameter S increases for retrograde trajectories (see right panel). (ii) In Fig. 8, we fix the parameters Q e , Q m and S while varying the PFDM parameter λ. The maximum of V ef f (+) grows as the parameter λ increases for the direct orbits, as shown in the left panel. The variation of r h for the corresponding values of λ is shown in the inset plot; we find that increasing the parameter λ decreases the size of the event horizon r h . A similar behavior is observed for the retrograde case of the V ef f (+) in the right panel when the parameter λ increases; the only difference is that the maximum of V ef f (+) is at higher values in comparison to the direct case for the equivalent values of λ. (iii) Similarly, in Fig. 9, the behavior of V ef f (+) for both direct (see left column) as well as retrograde trajectories (see right column) is plotted as a function of r/M is shown for various combinations of the parameters Q m , λ and S while varying the electric charge parameter Q e . 
The maximum of V ef f (+) is shown to grow when the parameter Q e rises for both direct and retrograde trajectories. Additionally, it is seen from the inset plot in the top row, left panel that the event horizon r h decreases with an increase in parameter Q e . It is also seen that the maximum of V ef f (+) occurs at a higher value for the retrograde trajectories when S > 0 (see top row of Fig. 9), while for the case when S < 0, it occurs for the direct trajectories (see bottom row of Fig. 9). Now, by analyzing Eqn. (67) further, one finds that the divergence of (P r /m) 2 when B = 0 means that it gives the location r div . As it is not easy to find the analytical value r div , we thus numerically analyze it, and the results are presented in Figs. 10 and 11. In Fig. 10, we plot r div as a function of the parameter S for the fixed values of Q e = 0 and 0.5, and λ = 0.0001 and vary the magnetic charge parameter Q m . We find that r div increases as the parameter ±S increases. Similarly, in Fig. 11, we plot the r div as a function of the parameter S but here we choose different combinations of Q e and Q m , and vary the λ. A similar behavior r div is observed with the parameter ±S as in Fig. 10. It is vital to notice that the parameters Q e and Q m were chosen in such a way that a black hole horizon should exist. However, the PFDM parameter λ is selected in such a way that it must satisfy the requirement λ M , as stated in Sec. II. Whereas the parameter S is selected in such a manner that it must be smaller than M (known as the Møller limit [95,96]), for the cases like the intermediate mass ratio inspirals, there is no such constraint on the choices of parameter S. As a result, we picked both scenarios where the parameter S < M and S > M . B. ISCO of a spinning particle moving in the background of a charged dyonic black hole immersed in PFDM In this subsection, our primary interest is to study the behavior of the parameters r ISCO , L ISCO , E ISCO , and Ω ISCO of a spinning particle numerically, and also, due to the complexity of the equations of motion, it is more convenient to define the effective potential similar to [53]. Hence, we redefine the effective potential using Eqn. (60) as Now, in order to find the ISCOs of the spinning par-ticle, the analogy of Eqns. (28) and (29) implies that we need to solve the following two equations simultaneously: and together with the condition [coming from the analogy of Eqn. (34)] The evolution of r ISCO , L ISCO , and; E ISCO , which are thus dependent on the black hole parameters Q e , Q m , M , PFDM parameter λ, and the particle's spin S, is thus analyzed in 12 using these three equations, which constitute a closed system. It is worth mentioning here that for the spinning particle we use the total angular momentum J which is the sum of orbital momentum L and spin S (i.e. J = L + S) [97] [103] In addition, for r ISCO , L ISCO and; E ISCO there is one more important quantity related to the circular motion of the particle that is its orbital angular velocity Ω, also known as orbital angular frequency or Keplerian angular frequency. This Keplerian frequency Ω is identified via the relation Ω ≡φ. To find the Ω ISCO for a spinning particle, we use the analogy of a geodesic particle, and hence substitute the values of r ISCO , L ISCO , and; E ISCO in Eqn. (78) found via Eqns. (75)- (77). We analyze the behavior of Ω as a function of S at the ISCO, as seen by an observer at infinity in Fig. 12 together with other ISCO parameters. 
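Since the spinning-particle effective potential of Eqn. (74) is not reproduced here, the sketch below only illustrates the generic numerical procedure described above: for a user-supplied V_eff(r; J, S), the ISCO conditions of Eqns. (75)-(77) amount to solving dV_eff/dr = 0 and d²V_eff/dr² = 0 simultaneously for the radius and the angular momentum, after which E_ISCO = V_eff(r_ISCO) and the Keplerian frequency follow. The placeholder potential used below is the neutral, non-spinning one from the earlier sketch, so the printed numbers are purely illustrative.

import numpy as np
from scipy.optimize import fsolve

def f(r, M=1.0, Q2=0.18, lam=0.1):
    # Same assumed PFDM lapse as before (Q2 = Qe^2 + Qm^2)
    return 1.0 - 2.0*M/r + Q2/r**2 + (lam/r)*np.log(r/lam)

def fprime(r, h=1e-6):
    return (f(r + h) - f(r - h)) / (2.0*h)

def v_eff(r, L):
    # Placeholder; substitute the spinning-particle V_eff(+) of Eqn. (74) here
    return np.sqrt(f(r) * (1.0 + (L/r)**2))

def d1(g, x, h=1e-3):
    return (g(x + h) - g(x - h)) / (2.0*h)

def d2(g, x, h=1e-3):
    return (g(x + h) - 2.0*g(x) + g(x - h)) / h**2

def isco_conditions(p):
    r, L = p
    if r < 2.5:                                # keep the root finder away from the horizon region
        return [1e3, 1e3]
    V = lambda rr: v_eff(rr, L)
    return [d1(V, r), d2(V, r)]                # dV/dr = 0 and d^2V/dr^2 = 0

r_isco, L_isco = fsolve(isco_conditions, x0=[6.0, 3.5], epsfcn=1e-6)
Omega = np.sqrt(fprime(r_isco) / (2.0*r_isco))  # Keplerian frequency for a static metric
print("r_ISCO ~ %.3f, L_ISCO ~ %.3f, E_ISCO ~ %.3f, Omega_ISCO ~ %.4f"
      % (r_isco, L_isco, v_eff(r_isco, L_isco), Omega))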
Further, while studying the motion of a spinning particle in a curved spacetime it is important to take into consideration the superluminal constraint. As we know from the literature [44,53], for a spinning particle moving in curved spacetime, its four-momentum P^µ and four-velocity u^µ are no longer parallel to each other. Hence, the four-velocity can be timelike (i.e., u^µ u_µ < 0) as well as spacelike (i.e., u^µ u_µ > 0). The timelike (physical) four-velocity of a spinning particle is known as subluminal, whereas the spacelike (unphysical) four-velocity is known as superluminal. Therefore, to ensure that the motion of the spinning particle is subluminal (physical), we impose a constraint known as the superluminal constraint. The explicit form of U^2 is given in Appendix B as Eqn. (B4). The behavior of U^2 as a function of the parameter S, in addition to the behavior of the parameters r_ISCO, L_ISCO, E_ISCO, and Ω_ISCO, is shown in Fig. 12 for various values of the black hole parameters λ, Q_e and Q_m. V. SUMMARY AND CONCLUSIONS In this paper, we have investigated the motion of two different classes of test particles: (i) a massive test particle with charge, and (ii) a massive particle with spin S, moving in the spacetime of a charged dyonic black hole immersed in PFDM. To this end, we first described the line element of the black hole using Eqns. (10) and (11), and obtained the explicit expressions for the horizons (i.e. the Cauchy horizon r_− and the event horizon r_+ = r_h) in the limit λ ≪ M. The bounds on the charge parameters Q_e and Q_m and the PFDM parameter λ for which a black hole exists are also obtained (see the shaded region in Fig. 1). It is observed that the horizon of the black hole is very sensitive to the PFDM parameter λ: the shaded region for which a black hole exists (i.e. the event horizon r_h) first shrinks and then grows again as λ rises (see Figs. 1 and 2). We followed the Hamiltonian approach to study the dynamics of a charged particle moving around a charged dyonic black hole immersed in PFDM. To begin with, we numerically analysed the behaviour of V_eff(r) for various combinations of the black hole parameters Q_e,m and λ in addition to the charge coupling parameters σ_e and g_m. It is observed from the top right and bottom panels of Fig. 3 that the maximum of V_eff(r) increases as well as shifts towards the event horizon r_h under the effect of the parameters Q_e,m and λ. Contrary to this, when the parameters Q_e,m and λ are fixed and the coupling parameters σ_e and g_m vary, the maximum of V_eff(r) decreases and shifts away from r_h (see top left panel of Fig. 3). We also plotted the trajectories of the charged particle in Fig. 4; it is found that a charged particle has three different kinds of orbits, namely bound, captured and escaping orbits. A captured orbit occurs when the magnitude of the magnetic charge q_m increases, while an escaping orbit is observed for larger values of the electric charge q. It is further observed that these orbits become bound as a consequence of increasing the PFDM parameter λ. From Fig. 5, it is found that circular orbits of charged particles shift toward larger radii as the coupling parameters g_m and σ_e increase. However, the opposite behavior is observed when the PFDM parameter λ increases.
Finally, we presented the values of the ISCO parameters for charged particles in Table I; it is found that the ISCO radius r_ISCO decreases with the rise in the PFDM parameter λ, while it increases with the charge coupling parameters g_m and σ_e. The orbital angular momentum at the ISCO, L_ISCO, decreases with increasing λ when the charge parameters Q_e,m and the coupling parameters σ_e and g_m of the black hole are kept fixed, whereas the opposite behaviour is observed for E_ISCO. Contrary to this, L_ISCO and E_ISCO behave oppositely when the parameters σ_e and g_m are increased, keeping the black hole parameters Q_e,m and λ fixed. Additionally, it is found that the orbital velocity v at the ISCO decreases as the values of both the charge parameters Q_e,m and the PFDM parameter λ increase (see Fig. 6). To study the motion of a massive spinning particle in the equatorial plane of the charged dyonic black hole immersed in PFDM, we used the Lagrangian approach rather than the MP approach, for the reasons stated in Appendix A. We explicitly obtained the expressions for the non-zero components of the four-momentum P^µ and the coordinate velocity v^µ. For various combinations of the parameters Q_e,m, λ, and S, we numerically analysed the nature of the effective potential V_eff(+) in detail (see Figs. 7-9) for both direct (i.e. L = J − S > 0) and retrograde (i.e. L = J − S < 0) trajectories. When comparing Figs. 7 and 8, it was found that V_eff(+) is very sensitive to the particle spin and the PFDM parameter λ, and shows different behaviour when one parameter is fixed while the other is varied. Further, the Mathisson and Papapetrou Eqns. (A4) and (A5) can lead to superluminal motion (i.e. spacelike behaviour) of the spinning particle, as they neglect the "multipole" moments and consider only the "spin-dipole" moment. Therefore, in Sec. IV B we analysed the ISCO parameters r_ISCO, L_ISCO, E_ISCO and Ω_ISCO of the spinning particle in detail, in order to find the region where the spinning particle motion is timelike and to bring out the effect of the PFDM parameter λ, together with the black hole charge parameters Q_e,m, on the ISCOs. It is seen that the parameters r_ISCO and L_ISCO become smaller as the PFDM parameter λ increases for the corresponding values of the particle spin parameter S and the black hole charge parameters Q_e,m, which is consistent with the already established fact in [27,36] that the PFDM can lead to an attractive force, whereas the parameters E_ISCO and Ω_ISCO increase (compare the left and right columns in Fig. 12). Analyzing the left versus the right column of Fig. 12 further, it is interesting to observe that when the dark matter parameter λ grows, the spinning particle enters the superluminal region (i.e. U^2 > 0, spacelike) for smaller values of spin, contrary to the fact established in previous works that, in the absence of the dark matter field, the superluminal region is reached only for larger values of the spin S [60]. When one moves down the columns in Fig. 12, the values of r_ISCO, L_ISCO, E_ISCO and Ω_ISCO decrease as the sum (Q_e + Q_m) of the black hole charges increases. However, the limiting value of the particle spin for which the motion is subluminal decreases only slightly, contrary to what is observed when one moves along a row by varying the PFDM parameter λ keeping Q_e,m fixed. Finally, it is concluded that when the PFDM parameter λ is very tiny (say 0.00001) and the black hole charges vanish, Q_e = Q_m = 0, the ISCO parameters r, L, E, and Ω displayed in the top left panel of Fig.
12 matched the Schwarzschild black hole scenario (Fig. 2 of [60]), implying that for very small values of the PFDM parameter λ the charged dyonic black hole immersed in PFDM is identical to the Schwarzschild black hole. It is also worth noting that our findings for a charged dyonic black hole immersed in PFDM reproduced exactly the same conclusion as Jefremov et al. [53] in the Schwarzschild limit. On the other hand, in the case Q_m = 0 and λ = 0.00001, our results (see the middle panel of the left column of Fig. 12) coincided with Zhang et al. [57] for the Reissner-Nordström black hole. These observations on the PFDM parameter λ imply that for small enough values of λ (say, λ ≤ 0.00001), the charged dyonic black hole immersed in PFDM acts identically to its counterparts without PFDM. However, for large enough λ values (say, λ ≥ 0.1), the ISCO parameters r_ISCO and L_ISCO fall dramatically (as seen by comparing the left and right columns of Fig. 12), showing that small ISCOs are conceivable for large λ values. Thus, the work done here for charged particles and spinning particles moving around the charged dyonic black hole submerged in PFDM is novel and intriguing, since the PFDM effect on the spinning particle has not yet been documented in the literature. Also, in the near future it will become possible to detect and accurately measure gravitational waves emitted from extreme-mass-ratio inspirals as well as from intermediate-mass-ratio inspirals with the help of advanced space-based gravitational wave detectors like TianQin, Taiji, and LISA (Laser Interferometer Space Antenna). As a consequence, with the advancement of technology, we will be able to acquire information about the ISCOs and to test our results, including the PFDM effects. As a result, the work done here on ISCOs will be important for better understanding the nature and initial conditions of binary black hole mergers and their surroundings. As usual, by varying the action I, the non-geodesic equations of motion for a spinning particle moving in curved spacetime are obtained [86,87], where D/Dτ ≡ u^µ ∇_µ and R^µ_ναβ are the covariant derivative along u^µ and the Riemann tensor, respectively. Interestingly, these results hold for an arbitrary function L as well, and it is found that the dynamical variables, the four-momentum P^µ and the spin tensor S^µν, are the generators of the Poincaré group. It is also shown in [88] that for a spinning particle moving in curved spacetime both its mass m and spin S are conserved quantities, defined in Eqns. (A6) and (A7). One can see that the conservation of S^2 follows solely from the equation of motion (A5), by contracting Eqn. (A5) with the covariant components of the spin tensor S_µν. It is worth emphasizing here that the conservation of the particle spin comes naturally in the Lagrangian theory [86], in contrast with the extended MP formulation [100], which requires an extra assumption. Furthermore, simply glancing at Eqns. (A4) and (A5) reveals that there are more unknown variables than there are equations. To overcome this difficulty, the TSSC [98] P_µ S^µν = 0 (A8) is used. The above Eqn. (A8) helps us set three of the six independent components of the spin tensor to zero (i.e. S^{0i} = 0) for a particular frame of reference. Hence, it gives the freedom to fix the worldline describing the path of a spinning particle.
The components S^{0i} are associated with the mass dipole moment of the spinning particle, so setting them equal to zero in some particular frame of reference fixes the center of mass of the spinning particle in that frame (for a detailed discussion of SSCs, we refer the reader to [101] and the references therein). Also, it is shown in [102] that Eqn. (A8) defines a unique worldline of the spinning particle in a curved spacetime. In addition to the four reasons stated earlier for using the Lagrangian theory to find the equations of motion, it is worth noting here that if the function L is chosen wisely, one can derive the TSSC from the Lagrangian [88], which is another motivation for using the Lagrangian approach. Now, one can define a normalized four-momentum V^µ = P^µ/m, such that it satisfies the normalization relation V^µ V_µ = −1. Since the four-velocity u^µ is not a conserved (normalized) quantity for a spinning particle, one needs to constrain u^µ, in addition to imposing the SSC Eqn. (A8), so that the spinning particle has a timelike motion. For convenience, one may make the choice pointed out in [97]. Now, using Eqns. (A4)-(A11), a relation between u^µ and the momentum V^µ can be established, which reads as Eqn. (A12). Additionally, more conserved quantities can be found, depending on the geometry of the spacetime, by using Eqn. (A13), where ξ^µ is a Killing vector that satisfies the relation 2ξ_(µ;ν) = 0. In order to make this paper self-contained, we have presented a brief overview of the Lagrangian theory. The equations of motion of the spinning particle moving in the vicinity of a charged dyonic black hole immersed in PFDM are determined using Eqns. (A2)-(A13).
2021-10-22T01:15:30.870Z
2021-10-20T00:00:00.000
{ "year": 2021, "sha1": "d19f052a63c607c717d822a77ee1563bda35de09", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8da172ebda7d8a725855583b4942dd39baf8022a", "s2fieldsofstudy": [ "Physics", "Education" ], "extfieldsofstudy": [ "Physics" ] }
231728642
pes2o/s2orc
v3-fos-license
Deep Generative SToRM model for dynamic imaging We introduce a novel generative smoothness regularization on manifolds (SToRM) model for the recovery of dynamic image data from highly undersampled measurements. The proposed generative framework represents the image time series as a smooth non-linear function of low-dimensional latent vectors that capture the cardiac and respiratory phases. The non-linear function is represented using a deep convolutional neural network (CNN). Unlike the popular CNN approaches that require extensive fully-sampled training data that is not available in this setting, the parameters of the CNN generator as well as the latent vectors are jointly estimated from the undersampled measurements using stochastic gradient descent. We penalize the norm of the gradient of the generator to encourage the learning of a smooth surface/manifold, while temporal gradients of the latent vectors are penalized to encourage the time series to be smooth. The main benefits of the proposed scheme are (a) the quite significant reduction in memory demand compared to the analysis based SToRM model, and (b) the spatial regularization brought in by the CNN model. We also introduce efficient progressive approaches to minimize the computational complexity of the algorithm. INTRODUCTION The quest for high spatial and temporal resolution is central to several dynamic imaging problems, ranging from MRI, video imaging, to microscopy. A popular approach to improve spatio-temporal resolution is self-gating, where cardiac and respiratory information is estimated from navigator or central k-space using bandpass filtering or clustering, followed by binning and reconstruction [1,2]. Several authors have also introduced smooth manifold regularization, which models the images in the time series as points on a high dimensional manifold [3,4,5]. This approach may be viewed as an implicit soft-gating alternative to self-gating methods. Manifold methods including our smoothness regularization on manifolds (SToRM) approach has been demonstrated in a variety of dynamic imaging applications with good performance [3,4,5]. Since the data is not explicitly binned into a specific phase, manifold methods are not vulnerable to potential errors in clustering the time series based on navigators. Despite the benefits, a key challenge with current manifold methods is the high memory demand. Unlike self-gating methods that only recover the specific phases, manifold schemes recover the entire time series. This approach restricts the extension of the framework to higher dimensional problems. The high memory demand also makes it difficult to use additional spatial and temporal regularization. The main focus of this work is to exploit the power of deep convolutional neural networks (CNN) to introduce an improved and memory efficient generative/synthesis formulation of SToRM. Unlike current manifold and self-gating methods, this approach does not require k-space navigators to estimate the motion states. Besides, unlike traditional CNN based approaches, the proposed scheme does not require extensive training data, which is challenging to acquire in free-breathing applications. We note that current manifold methods can be viewed as an analysis formulation. Specifically, a non-linear injective mapping is applied on the images such that the mapped points of the alias-free images lie on a low-dimensional subspace. 
When recovering from undersampled data, the nuclear norm prior is applied in the transform domain to encourage the non-linear mappings of the images to lie in a subspace. Unfortunately, this analysis approach requires the storage of all the image frames in the time series. In this work, we model the images in the time series as non-linear mappings ρ_t = G_θ(z_t), where the z_i are vectors that live in a very low-dimensional subspace. The dimension of the subspace can be very small (e.g., 2-4) in practical applications. We represent the non-linear mapping using a convolutional neural network with weights θ. The memory footprint of the algorithm depends on the number of parameters in θ and z, which is orders of magnitude smaller than that of traditional manifold methods. We propose to jointly optimize the network parameters θ and the latent vectors z such that the cost Σ_i ‖A_i(G_θ(z_i)) − b_i‖² is minimized during image reconstruction. The smoothness of the manifold generated by G_θ(z) depends on the gradient of G_θ with respect to its input. To obtain a smooth manifold, we penalize the norm of the gradient of the mapping, ‖∇_z G_θ‖². Similarly, the images in the time series are expected to vary smoothly in time. Hence, we also use a Tikhonov smoothness penalty on the latent vectors z_t to further constrain the solutions. Unlike traditional CNN methods that are fast during testing/inference, the direct application of this scheme to the dynamic MRI setting is computationally expensive. We use a three-step progressive-in-time approach to significantly reduce the computational complexity of the algorithm. (Fig. 1 caption: the analysis SToRM model [4,6] in (a) minimizes the nuclear norm of the nonlinear mappings ϕ(x_i) of the images x_i to encourage them to be in a subspace. By contrast, the proposed formulation expresses the images as non-linear mappings G_θ(z_i) of the low-dimensional latent vectors z_i. The main benefit of the generative model is its ability to compress the data, thus offering a memory efficient algorithm.) Specifically, we grow the number of frames in the dataset during the optimization process. The latent vectors from the previous stage are linearly interpolated to initialize the latent vectors of the next stage. We observe that the use of the progressive-in-time approach significantly reduces the computational complexity of the algorithm. The proposed approach is inspired by deep image prior (DIP) [7], which was introduced for static imaging problems. We note that the extension of DIP to dynamic imaging was considered in [8]. The key difference of the proposed formulation from the above work is the joint optimization of the latent variables z, unlike the above method, which chooses z as random vectors or interpolated versions of random vectors. Another key distinction is the use of regularization priors on the network parameters and latent vectors, which ensures that the scheme learns meaningful latent vectors and that the performance of the network does not degrade with iterations, as in traditional DIP methods. METHODS Smooth manifold methods model the images x_i in the dynamic time series as points on a smooth manifold. In SToRM, the exponential (injective) mappings ϕ(x_i) of the alias-free images are assumed to lie on a low-dimensional subspace. See Fig. 1.(a). The joint recovery of the images, denoted by the matrix X = [x_1, .., x_N], from undersampled data is posed as a nuclear norm minimization problem. To overcome the challenges with the above analysis scheme, we propose to model the images in the time series as x_i = G_θ(z_i), where G_θ is a non-linear mapping.
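A minimal PyTorch-style sketch of this joint optimization is given below (the CNN realization of G_θ is described next). The generator G, the per-frame forward operators A_i, and the weights lam1/lam2 are placeholders rather than the actual architecture and multi-coil non-Cartesian operator used in the paper, and the finite-difference probe is only one convenient proxy for ‖∇_z G_θ‖²; the sketch is meant to show how the data-consistency term, the generator smoothness penalty, and the temporal smoothness penalty on the latent vectors combine, and how both θ and z receive gradients.

import torch

def storm_loss(G, z, A_ops, b, lam1=1e-3, lam2=2.0, eps=1e-2):
    # z: (N, d) latent vectors stored as a trainable leaf tensor; G maps latents to frames
    x = G(z)                                                     # generated image frames
    # (1) data consistency: sum_i || A_i(G(z_i)) - b_i ||^2
    dc = sum(((A(xi) - bi).abs() ** 2).sum() for A, xi, bi in zip(A_ops, x, b))
    # (2) manifold smoothness: finite-difference proxy for || grad_z G ||^2
    v = torch.randn_like(z)
    v = eps * v / v.norm()
    net_reg = ((G(z + v) - x) ** 2).sum() / eps ** 2
    # (3) temporal smoothness of the latent vectors: sum_t || z_{t+1} - z_t ||^2
    lat_reg = ((z[1:] - z[:-1]) ** 2).sum()
    return dc + lam1 * net_reg + lam2 * lat_reg

# Both the generator weights and the latent vectors are updated jointly, e.g. with ADAM:
# z = torch.randn(n_frames, 2, requires_grad=True)
# opt = torch.optim.Adam(list(G.parameters()) + [z], lr=1e-3)

In practice a random subset (mini-batch) of frames can be used in the data-consistency sum at each step, which keeps the memory footprint low, as described below.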
We realize G θ using a deep convolutional neural network, inspired by the extensive work on generative image models. Here, z i are latent vectors that lie in a low-dimensional subspace. As z i vary in the subspace, their non-linear mappings vary on the image manifold. The mapping G θ may be viewed as the inverse of the injective mapping ϕ considered in analysis SToRM; rather than mapping the images to a low-dimensional subspace as in classical SToRM methods we now propose to express the images as non-linear functions of latent variables living in a low-dimensional subspace. See Fig. 1.(b). The smoothness of the manifold is determined by the gradient of the non-linear mapping, denoted by ∇ z G θ . A mapping with high gradient values can result in very similar latent vectors being mapped to very different images. To minimize this risk, we propose to penalize the 2 norm of the gradients of the network, denoted by ∇ z G θ 2 . We term this prior as network regularizer. We expect the adjacent time frames in the time series to be similar; we propose to add a temporal smoothness regularizer on the latent vectors. The parameters of the network θ as well as the low-dimensional latent vector z are estimated from the measured data by minimizing with respect to z and θ. We initialize the network parameters and latent vectors to be random variables. We use ADAM optimization to determine the optimal parameters. Note that the first and the second term in the expression is separable over i. To keep memory demand of the algorithm low, we propose to choose mini-batches consisting of random subset of frames. A key benefit of this framework over conventional neural network schemes is that it does not require any training data. Note that it is often impossible to acquire fully-sampled training data in dynamic imaging applications. The main benefit of this model is the compression offered by the representation; the number of parameters of the model in (2) is orders of magnitude smaller than the number of pixels in the dataset. The dramatic compression offered by the representation, together with the mini-batch training provides a memory efficient alternative to analysis SToRM [3,4]. Although our focus is on establishing the utility of the scheme in 2-D settings in this paper, the approach can be readily translated to higher dimensional applications. Another benefit is the implicit spatial regularization brought in by the generative CNN. Specifically, CNNs are ideally suited to represent images rather than noise-like alias artifacts [7]. Progressive in time training While the generative SToRM approach significantly reduces the memory demand, a challenge with this approach is the increased computational complexity. To minimize the complexity, we propose to use a progressive optimization strategy. Specifically, we solve for a sequence of vectors z 0 , z 1 ,.., z M each corresponding to increasing number of time frames. For instance, in this work we choose z 0 to be a 2×1 vector, where we consider the recovery of an average image G θ (z 0 ) = x 0 from the entire data. We solve for the optimal θ 0 and z 0 by minimizing (1). Since we are solving for a single image, this optimization is fast. Following convergence, the latent vector {z 0 } is linearly interpolated to the size of z 1 and used along with θ 0 as initialization, while solving for {θ 1 , z 1 }. This approach significantly reduces the computational complexity as seen from our experiments 3. 
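The progressive-in-time strategy described above can be sketched as follows; the frame schedule, the interpolation mode, and the epoch counts are illustrative choices rather than values taken from the paper, and storm_loss refers to the loss sketch given earlier.

import torch
import torch.nn.functional as F

def progressive_fit(G, A_ops, b, latent_dim=2, schedule=(1, 30, 150), epochs=500):
    # Stage 0 fits a single latent vector (an "average" frame); each later stage
    # initializes its latent vectors by linearly interpolating those of the previous stage.
    z = (0.1 * torch.randn(schedule[0], latent_dim)).requires_grad_(True)
    for n_frames in schedule:
        if z.shape[0] == 1 and n_frames > 1:
            z = z.detach().repeat(n_frames, 1).clone().requires_grad_(True)
        elif z.shape[0] != n_frames:
            zi = F.interpolate(z.detach().t().unsqueeze(0), size=n_frames,
                               mode="linear", align_corners=True)
            z = zi.squeeze(0).t().clone().requires_grad_(True)
        opt = torch.optim.Adam(list(G.parameters()) + [z], lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            loss = storm_loss(G, z, A_ops[:n_frames], b[:n_frames])
            loss.backward()
            opt.step()
    return G, z

Because each stage starts from a warm initialization of both the network weights and the latent vectors, the later stages need fewer updates, which is consistent with the runtime savings reported in the experiments.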
EXPERIMENTS Dataset and imaging experiments All the experiments in this paper are based on a whole-heart multi-slice dataset collected in the free-breathing mode using a golden angle spiral trajectory. The acquisition of the data was performed on a GE 3T scanner. The sequence parameters were: TR = 8.4 ms, FOV = 320 mm x 320 mm, flip angle = 18 degrees, slice thickness = 8 mm. (Fig. 3 caption: impact of network regularization and latent variable regularization. The SER vs epoch plots are shown above, while two of the reconstructed images, their time profiles, and recovered latent variables are shown. We note that the blue curve captures respiratory motion, while the orange one captures cardiac motion.) Results were generated using an Intel Xeon CPU at 2.40 GHz and a Tesla P100-PCIE 16GB GPU. Results in §4.2, §4.3 were based on the first slice in the dataset, and results in §4.4, §4.5 were based on the second slice in the dataset. We binned the data from six spiral interleaves, corresponding to 50 ms temporal resolution. The entire dataset corresponds to 522 frames. We omitted the first 22 frames and used the remaining 500 frames for the SToRM reconstruction, which is used as the ground truth for comparisons. In all the studies, we assumed the latent variables to be two dimensional, since the main sources of variability in the data correspond to cardiac and respiratory motion. Benefit of progressive in time approach In Fig. 2, we demonstrate the quite significant reduction in running time offered by the progressive training strategy described in Section 2.1. Here, we consider the recovery from 150 frames with and without the progressive strategy. We plot the reconstruction performance, measured by the Signal-to-Error Ratio (SER), with respect to the running time. The results show that the proposed scheme can offer good reconstructions in ≈ 200 seconds, which is better than the direct approach that takes more than 2000 seconds. Impact of regularization priors We study the impact of the network regularization prior in Fig. 3.(a), where we show the reconstruction performance with respect to the number of epochs. The recovered latent variables are also shown in the plots. We chose λ_2 = 2 in this experiment. We note that, unlike the case without network regularization, the SER of the regularized reconstruction increases with iterations. The case without regularization will start to fit to the noise with iterations, as in the case of deep image prior. We note that with regularization, the latent variables capture cardiac (orange curve) and respiratory (blue curve) motion, even though no explicit priors or additional information (e.g. navigators) about cardiac or respiratory rates were used. Without network regularization, we observe increased mixing of the cardiac and respiratory patterns in the latent vectors. In the cost function (3), we also have the temporal smoothness regularization of the latent variables. We compare λ_2 = 2 against λ_2 = 0, while λ_1 was fixed at 0.001. Similar to the network regularization setting, we observe that the performance of the un-regularized algorithm falls with iterations, while the performance of the regularized approach increases or plateaus with iterations. We also observed significant mixing between the cardiac and respiratory patterns in the latent variables when no regularization is used. Comparison with existing methods We compare the proposed generative SToRM approach with analysis SToRM [6] and the time-dependent deep image prior (Time-DIP) algorithm [8].
We use the k-space data of 150 frames for the reconstructions. The reconstruction results are shown in Fig. 4. The results show that the generative SToRM approach is able to reduce noise and alias artifacts compared to analysis SToRM, offering around 1 dB improvement in performance. We attribute the improved performance to the spatial regularization offered by the CNN generator, which is absent in the analysis SToRM formulation. The reconstruction times of both algorithms are comparable. The Time-DIP scheme, which assumes the latent variables to be fixed as random values, results in increased artifacts and blurring of motion details. We note that, unlike the analysis schemes, the proposed scheme does not use k-space navigators to estimate the motion states; the latent variables are estimated from the measured k-space data itself. CONCLUSION We introduce a generative manifold representation for the recovery of dynamic image data from highly undersampled measurements. The deep CNN generator is used to lift low-dimensional latent vectors to the smooth image manifold, and the proposed scheme does not require fully-sampled training data. We jointly optimize the CNN generator parameters and the latent vectors based on the undersampled data. We also proposed the progressive-in-time training approach to minimize the computational complexity of the algorithm. During the training, the norm of the gradients of the generator is penalized to encourage the learning of a smooth surface/manifold, while the temporal gradients of the latent vectors are penalized to encourage the time series to be smooth. Comparisons with existing methods suggest the utility of the proposed scheme in dynamic imaging.
2021-02-01T02:16:02.194Z
2021-01-29T00:00:00.000
{ "year": 2021, "sha1": "cb4a98a2ac09550a5ed360f38f0e0b457e0789ef", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2101.12366", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cb4a98a2ac09550a5ed360f38f0e0b457e0789ef", "s2fieldsofstudy": [ "Computer Science", "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Engineering", "Computer Science" ] }
51680434
pes2o/s2orc
v3-fos-license
The Improving Global Health fellowship: a qualitative analysis of innovative leadership development for NHS healthcare professionals Background The importance of leadership development in the early stages of careers in the NHS has been highlighted in recent years and many programmes have been implemented which seek to develop leadership skills in healthcare professionals. The Improving Global Health (IGH) Fellowship scheme is one such programme, it provides a unique leadership development opportunity through an overseas placement with a focus on quality improvement work. This evaluation examines the impact of completing an IGH Fellowship on the career and leadership development of participants, who are referred to as Fellows. Methods Fellows who had returned from overseas placement between August 2008 and February 2015 were invited to complete an anonymised online questionnaire, which collected information on: demographic details, motivations for applying to the programme, leadership development and the impact of the IGH Fellowship on their career. Fifteen semi-structured interviews were conducted to further explore the impact of the programme on Fellows’ leadership development and career progression. Interview transcripts were manually coded and underwent thematic content analysis. Results The questionnaire had a 67% (74/111) response rate. The number of fellows who self-identified as a leader more than doubled on completion of the IGH Fellowship (24/74 pre-fellowship versus 58/74 post-fellowship). 74% (55/74) reported that the IGH Fellowship had an impact upon their career, 35 of which reported that the impact was “substantial”. The themes that emerged from the interviews revealed a personal development cycle that consolidated the fellows’ interests and values whilst enhancing their self-efficacy and subsequently impacted positively upon their career choices. Three interviewees expressed frustration at the lack of opportunity to utilise their new skills on returning to the United Kingdom (UK). Conclusions The IGH Fellowship successfully empowered healthcare professionals to self-identify as leaders. Of the 45/74 respondents who commented on the impact of the IGH Fellowship on their career, 41/45 comments were positive. The fellows described a process of experiential learning, reflection and evolving cultural intelligence, which consolidated their interests and values. The resultant increase in self-efficacy empowered these returned fellows in their choice of career. Background In the twenty-first century the NHS faces unprecedented challenges that differ from those it faced when it was first conceived in the 1940s [1,2]. It is acknowledged that in order to be effective and responsive to these challenges there need to be people with established leadership skills embedded within the NHS at all levels [1,2]. In his 2008 report, "High-quality care for all: NHS next stage review", Lord Darzi emphasised the importance of fostering clinical leadership in order to create the right environment for high-quality care [1]. Due to the complex nature of leadership development, the challenge lies in integrating these skills effectively into NHS staff training and empowering leadership at a local level. Leadership development needs to be encouraged and accessible for all members of the multidisciplinary healthcare team [1]. 
In recent years there have been a number of initiatives to increase leadership capabilities across the NHS [3], an example of which is the NHS Healthcare Leadership Model, developed in 2013. This model aims to empower healthcare professionals of all backgrounds in developing their leadership behaviours [4]. Until recently the integration of medical leadership and management (MLM) in undergraduate and postgraduate medical training was limited [3,5]. With increasing awareness of the importance of MLM, and of embedding good leadership practice early in the careers of healthcare professionals [1], medical schools have begun to integrate this teaching into their curricula [5]. The effectiveness of different interventions in developing leadership skills has been explored, however there is limited high-level evidence supporting them [3]. Giving potential leaders challenging assignments, with mentor support, has been shown to be an effective way of developing leadership skills [6,7]. In his paper on the future of leadership development, Nick Petrie describes how individuals develop faster when they are given responsibility for their own development and provides a strong argument for increasing emphasis on vertical development (which describes development earned for oneself ) [8]. Assignment-based leadership development links with Kolb's experiential learning process in which the individual moves from concrete experiences to reflection, abstract conceptualisation and, finally, active experimentation which is known to be a powerful form of adult learning [9]. The Improving Global Health (IGH) programme was developed with three key objectives: firstly, 'to support the delivery of sustainable improvements in health and healthcare, in collaboration with overseas partners in their community in resource poor settings' , secondly, 'to provide an unparalleled personal and leadership development experience for participants (fellows) who are recruited as volunteers on the programme' and thirdly, 'to create a cadre of leaders with service improvement skills who are able to make a real difference to the NHS on their return to the UK' [10]. Central to the programme is the opportunity for fellows to lead a quality improvement project with an overseas partner thus learning leadership skills and behaviours. During this assignment fellows live in a resource-poor country for a period of three to nine months. The placement forms the fulcrum of the fellow's experiential learning cycle as they act and reflect upon their leadership performance within the new environment. Throughout the programme fellows are supported by UK-based mentors who help them to identify their learning needs and support them through the development process. There is also a period of formal leadership training prior to overseas placement to optimise assignment-based learning. During this process the NHS Healthcare Leadership Model [4], and the models that preceded it, are used to define leadership and aid the Fellow in identifying and addressing their learning needs. The purpose of this evaluation was to understand the impact of the IGH Fellowship on the leadership development of returned fellows and on their subsequent careers. In particular, we wanted to understand the process of personal development in order to further improve the aspects of the IGH Fellowship that facilitated leadership development. 
Participants All fellows who completed an IGH Fellowship between August 2008 and February 2015 were identified and invited to complete an online questionnaire regarding their experience. Questionnaire The questionnaire was designed to elicit information regarding the individuals' motivation for applying for a fellowship, the development of fellows' leadership skills and the impact on the fellow's careers after completion of the programme. In addition, it identified respondents' demographic and professional background. The questionnaire used a mixture of structured and non-structured questions to collect data and explore the respondents' ideas. Prior to dissemination, the survey was beta-tested by senior IGH Fellows and revised following recommendations. The survey was implemented online through SurveyMonkey© between the 17th August 2015 and the 5th October 2015. Microsoft Excel® proprietary software was used to analyse respondents' demographic and professional details. The non-structured free text responses were explored through an inductive approach utilising thematic content analysis. The text initially underwent open coding which was subsequently refined to develop a final coding framework for each of the key questions posed. Semi-structured interview Fifteen respondents were identified through matrix selection and invited to undergo a semi-structured interview to explore the main themes that emerged from the questionnaire results. The interviews were conducted by one of two individuals, were tape recorded and transcribed. At the beginning of each interview the purpose of the evaluation was explained and confidentiality assured. After clarifying demographic and professional details, the interviews explored the general impact of participation in the programme, the skills acquired and the impact of the programme on the interviewee's current and future career aspirations. The interview transcripts underwent open coding that was refined into a final coding framework as seen in Table 1. Each transcript was coded independently by two researchers; in the event of conflict, a third researcher coded the response to resolve the disagreement. The transcripts were analysed within this framework and grouped together by theme. All transcripts and quotations were anonymised prior to analysis to ensure confidentiality. Questionnaire Respondents The questionnaire was distributed to 111 returned IGH fellows and was fully completed by 74 (response rate: 67%). Sixty-four respondents (86%) were female, reflecting the gender split of those applying for the fellowship. The greatest number of respondents were aged between 31 and 35 years (31 respondents) followed by those aged between 26 to 30 years (25 respondents). This reflects the programme's focus on healthcare professionals who are at the early stages of their careers. In order to participate in the fellowship 23 respondents (31%) took a career break, 25 (34%) planned a gap in their post-graduate training, eleven (15%) waited for the end of their contract, eleven (15%) successfully planned the IGH Fellowship as a secondment from their employment and four (5%) resigned from their jobs. The opportunity for overseas experience and personal development were the most important motivations cited for applying for the IGH Fellowship (Table 2). Leadership development was the third most frequently cited motivation for application to the programme. 
Placements Sixty-seven respondents (91%) completed one placement and seven respondents (9%) completed two, resulting in a total number of 81 placements. All placements were completed between 2008 and 2015. The majority of first placements were between four and six months in length (56/74), with 15% (11/74) undertaking placements of less than three months and 9% (7/74) undertaking placements lasting over six months. Thirty-five respondents (47%) undertook their first placement in Cambodia, 29 (39%) at one of three locations in South Africa, ten (14%) in Tanzania, two in Kenya and one in Zambia. These numbers may reflect the longevity and nature of the partnerships with organisations in these countries. The work undertaken by respondents while on the overseas placements spanned many themes, the most commonly reported were quality improvement (85%), public health (77%) and education (72%). Leadership development Twenty-four respondents (32%) considered themselves leaders prior to their participation in the programme compared with 58 (78%) following it (Fig. 1). An analysis of the non-structured free text responses revealed that the most commonly cited reasons for the shift in self-perception was a change in the understanding of leadership and a development of personal qualities that empowered leadership. For example: 'I have seen how it is possible to lead from any role within an organisation, not just those traditionally seen to be "at the top".' (Respondent 39) 'I recognise the qualities of my personality that are positive when I am in a leadership role; I also recognise those aspects of my character that are a barrier to good leadership. Insight into both aspects has helped me going forward.' (Respondent 66) Ten respondents reported that they did not know whether they considered themselves to be leaders after completion of the programme, six of these had not considered themselves leaders prior to the programme. The majority of this group stated that their ability to act as a leader was context-dependent and a result of the opportunities that arose. For example: 'I think this depends on the situation -I have definitely further developed my leadership skills through the project but I think it is probably a little more complex than simply being a leader or not being a leader.' (Respondent 47) Career impact When asked whether they had been able to use the skills learned from their project work since completing the Fellowship, 53 respondents (72%) reported that they had been able to utilise these whereas thirteen respondents (18%) had not and six respondents did not know. Those who believed that they had not utilised these skills mostly cited that they had not yet had the opportunity to do so. One respondent expressed frustration: 'Partially I have been able to use skills learned, however my employer failed to see that I could do In contrast, those who had been able to use the skills that they had developed on the programme were very positive about the opportunities they had encountered. 'It has resulted in my completely changing my career direction. For a long time I had been unhappy with my job and had planned to leave the UK to work in Australia long-term. 
Since [the programme] I have been able to find a career that I find more fulfilling and I have become much more committed to remaining in the NHS long term.'(Respondent 6) Of the respondents who did not feel that the programme had an impact on their careers, some felt that it was too early to be able to assess the true impact of the programme, whereas others expressed frustration at the lack of opportunities available to them. For example: 'Whilst very much needed, I do not think that the NHS at the moment is fostering an appetite amongst its employees to embrace positive change and improvement.' (Respondent 56) Semi-structured interview Participants Fifteen participants were selected from the questionnaire respondents. Thematic content analysis of the semi-structured interviews identified the process by which the fellowship influenced the fellow's career and leadership development. In the qualitative section of this report, the term 'most' relates to ideas expressed by more than 75% of interviewees, 'many' relates to ideas expressed by more than 50% of interviewees and 'some' relates to ideas expressed by more than 25% interviewees. For analytical purposes, the themes were subdivided into concrete experience factors that instigated development, abstract personal development factors which resulted from personal reflection and the subsequent internal and external outcomes (as described in the Methodology Table 1). Experience factors that instigate development Core to the IGH programme was the overseas placement in which the fellow had the opportunity to lead a quality improvement project under the direction of the in-country partner. Three aspects were identified by interviewees as being important to personal development: the experience of practicing leadership skills, facing challenges and adapting to new roles; the exposure to different professions, cultures and attitudes; and personal awareness of effective and ineffective leadership styles. Experience The benefits that interviewees described as a result of working on an overseas quality improvement project may be broadly grouped as those relating to the experience of working within a foreign healthcare system and those relating to the opportunity of practicing leadership through taking the lead on an assignment. Leaving the UK and facing the challenge of working within a foreign healthcare system gave the fellow a new perspective on healthcare: "…it makes you look at the wider picture as you turn up knowing nothing." (Interviewee 3) Many of the interviewees reflected positively on the experience of working abroad. They described qualities that they developed as a result of being immersed in a foreign culture, such as flexibility, adaptation and self-reliance: '…it made me more responsive to the environment that I'm in…' (Interviewee 9) '…you are quite self-reliant and you are leading this project so you have to make decisions and get people on board. That kind of attitude can be quite helpful.' (Interviewee 6) Most of the interviewees stated that that the opportunity to lead a quality improvement project allowed them to practice their leadership skills. It was commonly believed that such skills could not be taught in the classroom: 'It takes practice to [lead], it's not something you can be taught.' (Interviewee 10) Interestingly, many of the interviewees stated that it allowed them to practice skills that would not have been possible for them in the UK. 
'…you are sprung into a leadership role when you're working on the programme and you're doing things like training staff, capacity building and leading projects. These are things you would never have a chance to do at that level of seniority in the UK.' (Interviewee 2) All of the interviewees agreed that the experience was a valuable one and was positive in developing their leadership skills. The local expectations and the high level of supported autonomy contributed to the perceived effectiveness of the fellow's development: …you are suddenly the lead on a project, it is very much in your court in terms of how you go about that project. In [XXXX], people were looking to you to be the leader and to come up with the direction and strategy for it. (Interviewee 7) Exposure The IGH programme exposed the fellows to other cultures, challenges and health professionals that the interviewees did not believe they would have experienced in their roles within the NHS. It was noted by the interviewees that were not working face-to-face with patients in the NHS that there was great value in working on projects relating to the practical delivery of healthcare. This allowed them to develop an understanding and appreciation of the challenges and demands from a different perspective: 'I was given the time and space to work through with the clinical team what they needed and how I could add value to the clinical pathways…' (Interviewee 1) Many of the interviewees cited that these interactions with professionals from different backgrounds influenced the way that they interacted with their colleagues on returning to work within the UK. The focus of these changes included an increase in empathy and willingness to understand another healthcare professional's perspective, and an ability to adapt to other ways of working. For example: 'I'm probably more empathic since I've been back when people have had concerns and I was probably a bit more judgemental before I went…' (Interviewee 3) 'The way I relate to others is different now…you were forced to rely on other people to share decisions and I've brought that function into my work at the moment…' (Interviewee 15) Most of the interviewees asserted the value of exposure in aiding them with their development and their careers: '[The IGH programme] has definitely helped [my career]…Lots of things: I've been exposed to different types of people, styles of work, teams, yes, absolutely helped.' (Interviewee 6) '...being exposed to people, that was definitely something very beneficial and that I wouldn't have had without the Fellowship.' (Interviewee 8) Leadership styles Repeatedly cited by many of the interviewees as an important aspect of the fellowship was the fellow's exposure to different styles of leadership. Most interviewees stated that they were able to observe different leadership styles and reflect upon their effectiveness. Leadership behaviours were observed not only in the overseas partners, but also in the UK-based IGH programme directors and the other IGH Fellows. Some interviewees discussed how they reflected on the leadership styles embraced by the individuals they encountered. The following comment does not explicitly state whether the leaders referred to are part of the overseas partnership or the UK-based programme: 'Our big boss had a very unusual, interesting leadership style. Our local boss had a very different leadership style. 
They both had strengths and weaknesses and I think observing and working with them you could see what worked in a very different context.' (Interviewee 2) Some of the strengths that were highlighted related to motivation and energy, communication and the ability to unify a team. One interviewee noted: 'These people I'm thinking of managed to unify people so nobody was upset or offended and got things back on track. It was great to see that that it is possible -how you can navigate situations and keep everyone happy whilst still moving things forward.' (Interviewee 8) Though the leadership styles were not always thought to be positive, many of the interviewees explained that the learning they gained was still valuable. For example: 'I could see different ways leaders went about it and there is something to be learnt both positive and negative'. (Interviewee 7) Many of the interviewees expressed how their reflections on seeing different styles of leadership influenced the way that they viewed their own leadership abilities and skills. 'I didn't grasp the concept that it might be much more collaborative and that you could still be a leader but an integral part of the team I had my eyes opened to different styles of leadership…' (Interviewee 5) Through their reflections most of the fellows were able to consider the ways in which they could influence their effectiveness as leaders and develop their own style. 'I am more aware of the leadership styles of others and how that may impact on the desired results that they would like. Equally, I am much more aware of how I come across to others and reflecting on when things maybe haven't gone so well.' (Interviewee 3) Personal development factors As a result of the experience factors described above, the interviewees described areas in which they had undergone significant personal development. Over the course of our interviews, three of these areas were identified: interests, relating to the evolving interests and values of the fellow; perspective, relating to their professional development and their role within the NHS, and insight, reflecting the fellow's self-awareness. Interests Interviewees described how they developed their interests and aspirations as a result of their experiences on placement. Common areas that evolved included interest in leadership roles, developing the service, improving patient care and global health. Interviewees reported that the development of these interests and values influenced their subsequent choices in careers. One interviewee stated how the programme equipped her to pursue the career of her choice: 'The programme taught me how to project myself, how to run a project and gave me an idea or concept of how to do this and it has made me more able to carve my career niche… which is very useful for me.' (Interviewee 12) Many interviewees reported that they had established new ideas about roles that they would like to work in and empowered them to pursue their career choices. For many, it enabled them to understand how they would be able to achieve such a goal: Perspective The opportunity to work on a placement outside of the NHS allowed many of the interviewees to gain valuable perspective on both their careers and the way that their local NHS system functioned when they returned. 
Reflecting on how they felt prior to commencing their fellowship some of the interviewees expressed frustration at working within the NHS: 'It's very easy when you're working in such a big system to feel powerless, as if you can't change anything…' (Interviewee 2) 'An NHS hospital can be quite a difficult place, things don't work the way you want them to and you can get frustrated and want to change everything, and, before I went away I got really frustrated by all these things, wanting things to be better and things to work better.' (Interviewee 12) There were both positive and negative attitudes towards the NHS expressed by the interviewees following the fellowship. Many explained that the experience of working in a foreign system allowed them to gain an appreciation for working within the NHS: 'We are really lucky in the system we have. Yes, there are lots of frustrations in the NHS but when you've seen other healthcare systems that are so basic and so different it makes you very grateful.' (Interviewee 7) 'We have nice structured clinical times. We have access to everything we would need. Our patients have good levels of comprehension. We're well paid for what we do…' (Interviewee 3) The other areas that interviewees found a new appreciation for included the evidence-based approach to medicine and the protocols designed to standardise patient care. On returning to the NHS, some of the interviewees explained how this new perspective empowered them to seek opportunities to help develop the service: 'When you work with IGH you realise how easy it is to get involved with things and change things and work at quite a high level, it gives you confidence to do that…' (Interviewee 2) 'I am much more knowledgeable now who needs to endorse changes and, if I see something not working very well, who I need to speak to for change. Now, I have the confidence to move higher up the chain to speak to people if I think something could be done better. Now, I'm more likely to say let's not just talk amongst ourselves, let's go out there and raise it higher up and get something done.' (Interviewee 11) Conversely, some of the interviewees took a more negative view of their new perspective on working in the UK. The majority of these views were due to frustration. For example: 'When I first came back and people were complaining, I was quite irritable and thought, "you don't have any idea, you should just get on with it"…I got really angry once…' (Interviewee 3) Many of the interviewees explained how their new perspective allowed them greater scope in considering the challenges of the NHS and aided them with their approach to dealing with these challenges. 'I am able to look at the bigger picture much more…' (Interviewee 7) 'I definitely have better appreciation of the fact that there is more than one way to do things, just because somebody disagrees with me doesn't mean I'm wrong and just because I disagree with someone it doesn't mean they're wrong…' (Interviewee 15) 'It's just stepping outside and seeing that things can function perfectly well using different models, that's been a valuable experience' (Interviewee 15) Insight In addition to gaining perspective on their careers and the NHS, many interviewees believed that they had developed personal insight through reflective practice whilst completing the fellowship. 
In some instances this was expressed as a new awareness of qualities that they already possessed, and in other instances it involved recognising a gap between their perception of self and reality. Some interviewees reported that the programme empowered them to gain confidence in skills they had not previously recognised: 'I don't think I viewed myself as a leader before I went away so I think it has changed even seeing myself as a leader, my ability to lead projects or come up with ideas... it gives you more understanding of yourself.' (Interviewee 12) For others, the insight they developed allowed them to focus on certain areas of their development: 'Before the programme I think I misjudged significantly how good I was or how bad I was at leadership skills in certain areas and I was surprised about this…[it] made me realise actually I do need to develop in some areas and I was wrong in my perception of where I was…' (Interviewee 14) Similarly, some interviewees reported that the improved self-perception allowed them to have greater insight into how they acted and were perceived by others. For example: 'I present and manage myself better… overall I think I'm much better at working with others and within teams than I was before…' (Interviewee 11) Internal and external outcomes As a result of the fellow's personal development through the programme the interviewees identified various outcomes. These may be considered internal, in that they have influenced the behaviour of the fellow, or external, in that they influenced the perception of the fellow from outside sources. Internal outcomes included the development of personal qualities; change in practice and change in career choice. External outcomes included the perception of the fellow, the availability of opportunities and the overall impact of the fellowship. Personal qualities The interviewees all reported on the qualities that they brought back with them to their work in the UK as a result of the IGH programme. Most commonly mentioned by the interviewees was confidence: 'I'm more confident, happier to direct people, listen to their feedback and give them feedback…' (Interviewee 10) Many related this new confidence with the ability to effectively work as a leader: 'I'm much more confident with leaderships-style tasks.' (Interviewee 3) Others directly related the confidence that they had gained to their choice to pursue jobs that they would not have applied for prior to the fellowship. One interviewee explained that the confidence she had gained had made her more articulate and enhanced her abilities to deal with conflict: 'I'm not scared to be fair at the risk of upsetting somebody, which I think has changed.' (Interviewee 3) In addition to confidence, other qualities that interviewees developed included a positive change to a more proactive attitude: 'I am more likely to think "what can I do to change that?" 
rather than "oh that was someone else's fault" or "it couldn't have been helped"…' (Interviewee 6) Similarly, others described the positive effects resulting from their ability to take the initiative: 'I'm far more likely to approach someone though before I may have lacked the confidence to do that…' (Interviewee 5) Adaptability and flexibility were additional qualities that were highlighted by the interviewees as new ways that they had learned to change their approach: 'I am more likely to change my behaviour, maybe to behave in ways I wouldn't normally want to in order to try to work successfully with someone…' (Interviewee 14) Change in practice During each interview, the interviewees explored the ways that their development had influenced their subsequent approach to professional practice. This included practical changes, such as a fellow in a managerial position deciding to spend sessions working clinically in order to develop connections with the staff involved in practically delivering healthcare, and abstract changes in which the fellow adapted the way that they interacted with their colleagues. One interviewee explained how choosing to spend some time working with clinicians had enhanced her practice as a manager in the NHS: 'I think that they have a lot more respect for me as I feel they think I've taken the time to understand their role and frustrations… my personal experience working so closely with clinicians has made me a much stronger advocate…'(Interviewee 1) Most commonly cited as a change in practice was the change to the interviewees' interactions with colleagues in a team. 'Overall, I think I'm much better at working with others and within teams than I was before…' (Interviewee 11) When discussing changes in their approach to leadership, opinion was divided regarding how their behaviour had evolved. Most cited that their behaviours had changed considerably: 'It has proved a really good experience for knowing how to work with others in a team, how to psyche yourself to work with others, the importance of working as a team, sitting down, thrashing out ideas, taking other people point of view and bringing those on board to reach objectives…' (Interviewee 11) Whereas others reported that they did not believe that their leadership behaviour had changed although they were able to reflect upon leadership styles: 'I don't think it necessarily changed how I behaved as a leader but it definitely gave me a chance to reflect on the leadership styles I knew.' (Interviewee 9) Change in career choice Many interviewees described the differences between their career path prior to the fellowship compared to their career path following it, with 5/15 interviewees reporting a complete change in career direction. When discussing careers within the NHS, it was generally expressed that the structure for career progression was rigid and prescriptive. Whilst reflecting on this, one interviewee explained: Many interviewees revealed that their values had been clarified through their experience in regards to the type of organisation that they wanted to work for and in regards to the work-life balance that they would like to achieve. For example: 'I probably would have been more focussed on the job role, the banding and the status of that role, whereas now I am more bothered by the value set in the organisation, what its trying to achieve…' (Interviewee 1) Many of the interviewees explained that their career choices had changed direction following the fellowship. 
For some this meant applying for roles that were more senior than they would have prior to the fellowship, for some it involved applying for additional roles with a leadership or management aspect and for some it involved completely changing the course of their career. Many of them described it as a useful talking point for interviews and for their CV that set them apart from their peers: '…so few people have had this experience, particularly in a management career in the NHS and quite early on in my career, it was part of my unique selling point.' (Interviewee 1) The experience allowed them to have solid evidence as a foundation for their credibility. For example: '…if people do question it, I feel I am able to give them some examples of quite significant leadership I've done that most GPs haven't had the opportunity to do at all…' (Interviewee 7) The majority reported that this was positive for their careers, however one interviewee explained: 'My midwifery career it probably hasn't helped as I feel ready to move on from that.' (Interviewee 8) This reflects the frustration that was expressed by some of the interviewees in regards to the rigid and traditional clinical roles within the NHS. Some interviewees expressed new aspirations for their careers: '…I purposely chose my role in xxxx Trust to be more strategic. It was a conscious effort having enjoyed the opportunity to step back from direct service provision; this was something I really valued and looked for in my current role.' (Interviewee 1) 'I can't think of just going back to doing ward work without wanting to do something more exciting and challenging.' (Interviewee 8) When asked whether they intended to remain working within the NHS the majority of the interviewees were very positive that they would remain. Most reported a great affinity for the NHS despite the frustrations and difficulties they associated with working in it: 'I think the positives to our careers outweigh the negatives despite everything that's going on.' (Interviewee 11) There were a few interviewees who were more pragmatic in their responses and highlighted a new awareness of their own transferable skills and the opportunities globally for healthcare: 'I have a real love for the NHS… and the opportunity to stay within the NHS is something I'd definitely welcome, but I am open minded enough to know that health is becoming more global.' (Interviewee 9) 'I wouldn't be afraid to leave medicine or hospitals and work for the NHS in terms of public health or work for another organisation. I do think that having had a high quality education as doctors, we are able to apply our skills to different areas and we do have skills that can be offered, not only in clinical work, and knowing that broadens your horizons a bit.' (Interviewee 15) Perception Through the course of the study, interviewees explored the ways that perceptions about them had changed as a result of the fellowship. There were mixed reactions reported with some colleagues expressing scepticism about the value of the fellowship and others expressing interest. Discussing these reactions, one interviewee explained: Most of the interviewees said that their colleagues' perception of them had changed following the fellowship, with some reporting positively the results of greater confidence, taking the initiative and being proactive. However, some were unsure whether perceptions about them had changed as they had not been established within a role for a long enough period of time to have received feedback. 
Opportunity On returning to their careers in the UK, some of the interviewees reported that the fellowship had given them a unique platform from which further opportunities had developed. As one interviewee explained: '…[the IGH Fellowship] enabled me to build a very great CV, very good networks and contacts and I don't think I would have been able to have got to where I have without having some of the IGH education…'(Interviewee 2) For many it developed their ideas regarding their career choices and provided opportunities for the fellows to explore. 'It led on to other things and opened my eyes to other opportunities…'(Interviewee 7) In contrast, others expressed frustration at the lack of opportunities to utilise their skills or develop further in their NHS roles. When talking about opportunities for further development through performing audits and quality improvement in her clinical work, one interviewee explained: '…it was a shame there wasn't more use made of me. It's a bit of a waste when you have someone come back who is very motivated to do the job which others aren't particularly eager to do…'(Interviewee 8) Similarly, other interviewees expressed their frustration: '…there is more I'd like to do but I've found it difficult to have the opportunity.' (Interviewee 3) '…I became increasingly frustrated as I didn't see any opportunities in my role as a locum GP or in the wider NHS workforce…' (Interviewee 7) This frustration prompted Interviewee 7 to leave their current posts and seek personal development opportunities elsewhere in order to find a fulfilling role: '…when I came back and got the job working for the CCG (Clinical Commissioning Group) I felt then that I did have opportunities for leadership back in the UK.' (Interviewee 7) Most of the interviewees expressed a desire to continue to develop and explore challenges that arose, with many of them actively seeking new opportunities. Impact of fellowship When asked about the impact of completing the fellowship on their career, the majority reported a positive impact. Some interviewees highlighted the qualities that they had developed on the programme: '…I would definitely use those qualities and use those skills I learnt in the future for leadership and clinical leadership.' (Interviewee 6) Others discussed the motivation and enthusiasm they gained from the programme: '…I came back refreshed and renewed, motivated; all very positive type feelings to change. Yes, I think it was a great thing to do.' (Interviewee 3) Some of the interviewees explained that the skills and attitudes that they had developed through the programme would be increasingly important as they progressed through their careers, discussing their roles as consultants, GP partners and members of the CCG. One of the interviewees explained that it was too soon to understand the true impact of the programme on their career, the majority expressed that it had been very helpful: 'It's immensely helped. There is no way I'd be where I am without it, its been absolutely crucial.' (Interviewee 2) Discussion For this group of respondents successfully completing an IGH Fellowship empowered them to view themselves as leaders, developing and embedding qualities, behaviours and skills that not only enhanced their leadership abilities in their current roles but also encouraged them to pursue fulfilling career paths. In this discussion we ask why the programme was successful and try to delineate which elements encouraged the development of leadership skills and career choice. 
Answering these questions was essential to understanding how to develop leadership in healthcare for the future so that the IGH programme could optimise its effectiveness. It is important to note that the individuals who undertook the IGH Fellowship self-selected through a desire to apply for the fellowship. Whilst leadership development was not the most frequently stated motivation for applying to the programme it was an important factor and so it is likely that the individuals undertaking the IGH Fellowship had some interest in leadership prior to commencing the programme. The experience of completing an overseas quality improvement project was an important factor in enabling the fellow's personal development. The cycle of experiential learning, through the process of experience, reflection, conceptualisation and experimentation was at the heart of the fellowship [9]. Creating an environment for experiential learning, through which knowledge may be created from reflection on experience, is known to be a successful form of adult learning [11]. In order for learning from experience to be effective, there needs to be a period of active reflection [12]. The opportunity for reflection was embedded in the structure of the IGH programme and was aided by individual mentors who supported the fellows in identifying their learning needs and developing strategies to address them. The uniqueness of the experience provided by the fellowship was made up of two elements: its overseas nature, involving significant cross-cultural work within a foreign healthcare system, and the opportunity to lead a quality improvement project at an early stage of the individual's career. The capability to work effectively across cultures is dependent upon an individual's cultural intelligence [13]. This relates to the ability to adapt to the nuances of different cultures and consists of three core elements: metacognition and cognition (described as thinking, learning and strategizing), motivation (described as efficacy and confidence, persistence, value congruence and affect for the new culture) and behaviour (described as social mimicry and behavioural repertoire) [14]. In order to be successful in their overseas placements whilst working within a foreign culture, the IGH Fellows would have been required to utilise and enhance their cultural intelligence. It has been postulated that employing cultural intelligence enhances the likelihood that individuals will engage in experiential learning and this will then guide the direction of their development [15]. As such, the integrated nature of the overseas placement is likely to have provided a fertile environment for significant personal development. The second aspect of the overseas placement that provided a unique experience for the fellow was the high degree of responsibility for a project that they would not have had the opportunity to lead on whilst in their role prior to the fellowship. In order to be successful in a new environment and whilst undertaking a new task requires an individual to be highly adaptable, versatile and tolerant of uncertainty [16]. This adaptability is being recognised as an increasingly important trait for new leaders [16,17]. The interviews reflect the process by which the fellows recognised the requirement to adapt in order to perform successfully on placement. As a result of this they enhanced their self-awareness, recognised gaps in their capabilities and developed qualities that enhanced their self-efficacy on returning to the UK. 
By taking an active role in their own development, the fellows were able to own their progress and success [8]. The majority of the fellows (58/74) who undertook the IGH Fellowship viewed themselves as leaders after completing the programme. In part, this was due to a new awareness of what they believed being a leader entailed, often moving away from a top-down, 'heroic' form of leadership to a more integrated, 'shared' leadership. The fellows were also able to undertake practical experience in leadership, which has been shown to be highly valuable in developing leadership skills [3]. In addition to developing an active awareness of leadership as a concept, and practicing their own leadership skills, the fellows were able to actively observe different leadership styles. This type of observational learning is also known to be influential in developing concepts of leadership [18]. The process of experiential learning empowered the fellows to explore and establish their interests, gain a valuable perspective on the NHS and develop their self-awareness. These factors allowed the fellows to cultivate personal qualities in relation to leadership and subsequently adapt their professional practice. In addition, it encouraged them to establish what they required from a fulfilling career. Following the fellowship many of the fellows reported that their career choice had changed. This was often regarded as a result of a change in their perception of self-efficacy, referring to an individual's belief in their capabilities to perform a task. Self-efficacy is a dynamic set of self-beliefs with complex interactions that, in combination with outcome expectations (anticipating the consequences of particular actions and behaviours) and the formation of interests, influence an individual's choices of activity [19,20]. Self-efficacy is dynamic, and is influenced by personal performance accomplishments, vicarious learning, social persuasion and physiological states [19]. Personal accomplishments are believed to be the most influential of these and so personal success experiences will raise efficacy [19]. This may explain the increase in confidence reported by the majority of the interviewees following completion of the programme and their corresponding increase in perceived self-efficacy in relation to leadership roles and therefore their subsequent career choices. The fellows who found or sought out new opportunities within their careers on returning to the NHS reported success and satisfaction. In both the questionnaire and the interviews, those who reported a lack of opportunity to utilise their new skills expressed frustration. This is an important point in terms of retaining staff within the NHS. Studies have shown that creating an employability culture, through stimulating employees, encouraging self-development and providing challenging work assignments, encourages staff retention [21]. In addition, for an organisation such as the NHS, which increasingly requires adaptability, change and leadership at a local level, it is essential to create opportunities to develop these skills for staff [1]. The results highlight that despite widespread agreement in the literature that NHS staff should be supported and encouraged to embrace leadership roles within their organisations [1][2][3], there remain some barriers to this at a local level. This poses the risk that empowered individuals who are keen to utilise their new skills will seek out fulfilling job roles outside of the NHS. 
It is interesting to note that despite developing their confidence in leadership, the individuals who expressed frustration at the lack of opportunities on returning to the NHS had not managed to successfully develop these opportunities themselves. A systematic review of leadership development programmes for physicians in 2014 concluded that the majority of these programmes resulted in an increase in self-reported knowledge and expertise, as has been revealed in our study [22]. There are two aspects of the IGH Fellowship which contrast with the programmes evaluated by Frich et al. Firstly, the IGH Fellowship purposefully integrated the development of non-physician and physician professionals in the programme, which was found to have important developmental impact through shared experiences. Secondly, the primary learning method employed by the IGH Fellowship was through participation and experience rather than more common methods, such as lectures, seminars and group work. Quantitatively and qualitatively evaluating the impact of the IGH Fellowship provides an example of a novel programme which addresses some of the important gaps identified by Frich et al. This evaluation has some limitations: 1) despite having a good response rate to the online questionnaire (67%) the absolute number of questionnaire respondents is small due to a limited overall number of fellows, 2) a correspondingly small number of in-depth interviews were conducted, though this made up 20% of all questionnaire respondents and accurately reflected the diversity of the individuals undertaking the fellowship. 3) As the purpose of the questionnaire and the interview were both explained in order to inform consent to participate, this may have introduced responder bias in which the respondents' answers were influenced by this knowledge. 4) Though there are few similarly-structured programmes (i.e. focussing on an overseas placement as the catalyst for leadership development) with which to triangulate our findings, this study adds to the literature available for the development of leadership within healthcare [22] and to the development of leadership within the realm of global health. It is important to note that despite completing a programme with a strong global health aspect, the respondents and interviewees did not report pursuing further opportunities in global health; it is unclear whether this was due to desire for such opportunities or the lack of opportunities. This formal evaluation of the leadership development aspect of the IGH Programme will enable the programme team to better understand the programme so that it can be further improved. Conclusions The IGH Fellowship successfully fosters healthcare professionals to recognise and develop their leadership skills and behaviours and subjectively has a positive impact on their career development. The process involves experiential learning, reflection and evolving cultural intelligence, which in turn helps to develop self-awareness, increases self-efficacy and subsequently leads to positive changes in career choice. Three interviewees reported that they felt that their skills were not recognised on returning to the UK, suggesting a disparity in the perceived opportunities to utilise these skills on returning to the NHS. In order for the NHS to face the significant challenges of the twenty-first century, it is essential that healthcare professionals are supported in taking on leadership roles in the early stages of their careers. 
For the 41/74 fellows who self-reported positive career impacts through finding or creating opportunities to utilise the skills that they developed on the programme, valuable outcomes in terms of engaging with quality improvement work and leadership have been reported. It is essential that continuing evaluation is conducted into the effectiveness of leadership development programmes and the perceptions of such programmes to pave the way for innovative leadership within the NHS, whose survival is dependent upon the strengths of its staff and its ability to adapt to the changing environment.
Technological assurance of the quality of processing of products from aluminum alloys with a complex geometric shape using magnetic abrasive processing In the presented article, the issue of the implementation of magnetic abrasive processing is considered in order to ensure the quality of surfaces of complex shapes of parts made of corrosion-resistant aluminum alloys. The implementation was carried out through theoretical and experimental research. In a theoretical study, the features of processing corrosion-resistant aluminum alloys, existing and possible schemes for magnetic-abrasive processing of surfaces of complex geometric shapes, including a combination of various working movements of the workpiece and pole pieces, are considered. In an experimental study, the dependence of the quality of the processed surface (roughness) on the size of the working gap between the workpiece and the working pole was determined. The result of the research is the determination of the optimal treatment schemes for surfaces with a complex geometric shape, as well as the derived exponential dependence of the change in surface roughness on the size of the working gap. Introduction Nowadays, ensuring the required surface or part quality of a product is a key requirement in machining. Therefore, product finishing becomes the most important part of the part manufacturing process. The final processing of the product significantly affects the technical, production-technological and performance indicators of the quality of the product, since there is a direct relationship between the accuracy of processing, surface roughness and the service life of the product [1-2,7-9]. Aluminum alloys and products Today, in such industries as power engineering, aircraft construction, space engineering, etc., aluminum and its alloys are used for the manufacture of most of the products (up to 90% of the total mass of products) [3][4][5]. Aluminum can be obtained in its primary state (pure aluminum) or in the form of alloys with other chemical elements. Alloys can be formed by combining aluminum with elements such as zinc, manganese, magnesium, iron, etc., which give the primary aluminum new properties. Aluminum alloys have been used in aircraft construction since 1930, mainly alloys of grades 1100, 2014, 2017, 2024, 3003, 6061, 7075. These alloys are widely used in the aerospace and automotive industries, as they have a high specific strength and can replace steel and cast iron [25,27]. The main areas of application of aluminum alloys are: • Manufacturing of car disks, panels and structures from A356 alloy; • Pistons, brake discs, drums and piston liners from SiCp or Al2O3p composites or aluminum-silicon alloys with Si content up to 20%; • Aircraft structures made of T7451 aluminum alloy; • Anchors, gears and shafts from T6 aluminum alloy; • Plating (skin) of aircraft from T3 aluminum alloy. The remaining applications of aluminum and aluminum alloys are in civil engineering, in the electrical, electromechanical, electronic and packaging industries, and in the production of nanostructures with high mechanical strength and thermal stability (aluminum alloy 6061-T6). Surface roughness The body parts of the gas-insulated transformer are made of the difficult-to-machine AMts aluminum alloy. At the same time, high requirements for roughness, Ra = 0.8 ... 1.6 μm, are imposed on the surfaces of complex-shaped products, which is due to the specifics of the operation of a gas-insulated transformer. 
Insufficient cleanliness of the surface leads to electrical breakdowns, burning of the working surface of the part, an accident, failure of the gas-insulated transformer [28][29][30]. Form requirements In addition to the high requirements for the roughness of the working surfaces, high requirements for alignment and roundness are imposed on the parts of the SF6 transformer, which are due to the positioning of the parts in the assembly. This causes the opening of the holes of the parts directly in the assembly, which also complicates the technological process of manufacturing a gas-insulated transformer, since some holes have a complex geometric shape, must have a low surface roughness Ra = 0.8 microns [31][32][33]. Obtaining such holes is another problem facing technologists, since the inner surface of the shield of an gas-insulated transformer ( Fig. 1, a) cannot be subjected to grinding, since it has large overall dimensions. Manufacturers can finish the working surface of the case using an angle grinder with vulcanite discs. Mechanical processing of aluminum alloys Let's consider the problem of mechanical processing of aluminum alloys with the peculiarities of their processing. Aluminum and aluminum alloys are some of the most commonly used lightweight metallic materials because they offer a range of distinct mechanical and thermal properties: low specific gravity, high ductility, corrosion resistance, high electrical conductivity. In addition, they are relatively easy to shape, especially by machining. In fact, aluminum and its alloys are considered to be easily machinable materials compared to other light materials such as titanium and magnesium alloys [6,10]. However, it is known that alloying aluminum with various elements such as magnesium, manganese, copper, silicon, etc., changes the machinability of the alloy. This change is described by such negative factors associated with the high toughness of aluminum alloys, such as the formation of a buildup on the front surface of the cutting tool, the appearance on the treated surface of a layer with increased microhardness (work hardening), which makes it difficult to achieve a given surface quality, leads to overheating, jamming, breakage of the cutting tool [17][18][19]. The magnitude of the build-up and work-hardening is influenced by the cutting conditions, the forces arising during the cutting process, the geometry of the cutting tool, and its durability. Based on the listed features of processing aluminum alloys, various methods of abrasive processing were considered as a final processing method. With the current technological progress, a wide range of enterprises have an urgent need for such a method of finishing the surface of the workpiece, which will provide a low surface roughness Ra (up to 0.01 microns). There are a number of methods for finishing the surface of the workpiece associated with the use of abrasives: abrasive processing, processing using a magnetic rheological fluid as an abrasive, and magnetic abrasive processing using a magnetic abrasive powder [9,11,12]. Magnetic-abrasive processing Particular attention should be paid to the process of magnetic abrasive processing, as it is gradually gaining popularity among manufacturers of highprecision products, since it is able to provide a surface roughness Ra of 0.01 to 0.4 microns, and it can be used for processing both magnetic and nonmagnetic materials. 
Magnetic abrasive processing is based on the phenomena of a magnetic field and magnetic induction, which is a force characteristic of a magnetic field. The magnetic field can be created by permanent magnets or electromagnets, which are located relative to each other in an arrangement depending on the method of magnetic abrasive processing used [24,29]. The magnetic field is necessary for the formation and retention of the magnetic-abrasive brush at the working poles of the magnetic-abrasive installation and for the action of magnetic induction on the magnetic particles of the magnetic-abrasive brush, due to which the tangential Ft and normal Fn forces acting on the abrasive particles are formed [16,18,26]. When an abrasive particle, held by magnetic particles, comes into contact with the surface of the workpiece, it is pressed into the surface by the normal force Fn and, driven by the tangential force Ft, removes material. Thus, a high-quality surface layer of the part is formed (Fig. 2). Magnetic-abrasive brush The mechanism of formation of a magnetic-abrasive brush is a complex system; therefore, when calculating the forces acting on an abrasive particle, researchers of the process of magnetic-abrasive processing usually adhere to certain assumptions [21,23]. First, all abrasive and magnetic particles are spherical objects, equally oriented relative to the surface of the workpiece. Second, the particle diameter is the same and constant for all particles of the same type, and the particles are located relative to each other without voids. Third, the magnetic flux density is distributed evenly throughout the workspace [33][34][35]. Magnetic flux It should be noted that the magnetic flux density and the uniformity of the magnetic field are among the key factors affecting the formation of a high-quality surface layer. The distribution of the magnetic field in the processing area determines the shape and rigidity of the magnetic abrasive brush. The characteristics of the magnetic field in the working gap are the density and magnitude of the magnetic flux F and its gradient [20,22]. They depend on the shape, size and material of the working poles, the voltage or current supplied to the coils, and the position of the working poles relative to each other [6,8]. The magnetic flux acts on the abrasive particle by applying a magnetic force to the magnetic particles that hold the abrasive particle. Due to this effect, the normal force Fn and the tangential force Ft are formed, which are responsible for the indentation of the abrasive particle into the workpiece and the removal of material [13][14][15]. 
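A commonly used first-order estimate for the magnetic force on a single spherical magnetizable particle in a non-uniform field is F ≈ μ0·V·χ·H·(dH/dx). The article does not give this expression itself, so the sketch below is offered only as an illustration; the particle size, effective susceptibility, field strength and field gradient are assumed values, not measurements from this study.

```python
import math

# First-order estimate of the magnetic force on one spherical magnetic particle
# in a non-uniform field: F ≈ mu0 * V * chi * H * dH/dx.
# All numerical values are illustrative assumptions, not data from the article.
mu0 = 4 * math.pi * 1e-7       # vacuum permeability, H/m
d = 150e-6                     # particle diameter, m (assumed 150 um powder)
V = math.pi * d ** 3 / 6       # particle volume, m^3
chi = 2.0                      # effective susceptibility of the particle (assumed)
H = 1.5e5                      # field strength in the working gap, A/m (assumed)
dH_dx = H / 3e-3               # crude gradient: field decaying over a ~3 mm gap (assumed)

F = mu0 * V * chi * H * dH_dx  # force component along the field gradient, N
print(f"Estimated force on one particle: {F:.2e} N")
```

Summed over the many particles in contact with the part, forces of this kind produce the normal and tangential components Fn and Ft described above; the expression also makes clear why a larger working gap, which reduces both H and its gradient, weakens the brush and lowers material removal.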
Schematically, the magnetic fluxes that arise in the working gap during the processing of a spherical surface with a flat inductor consisting of successively alternating permanent magnets and pole pieces are shown in Fig. 3, together with the scheme of the movement of the abrasive grain and the change in the depth of work hardening, where: 1 - permanent magnets; 2 - poles; 3 - blank; 4 - layer with increased microhardness (peening); h1, h2 - work hardening depth; δ1, δ2 - working gap; α - angle of incidence; β - angle of reflection [compiled by the authors]. Distribution of magnetic flux when processing a spherical workpiece An analysis of the magnetic flux distribution scheme during processing of a spherical workpiece on an installation with a flat inductor shows that an increase in the working gap δ negatively affects the quality of the processed surface. Magnetic flux passes from one permanent magnet to another, bypassing the workpiece; therefore, the values of the forces Fn and Ft decrease, the removal of material from the surface of the workpiece decreases, and the depth of the layer with increased microhardness (work hardening) decreases. The simulation of the distribution of magnetic fluxes in the working gap between the spherical workpiece and the flat inductor was carried out in the ANSYS Maxwell software environment (Fig. 4) and confirmed the above scheme. Analysis of the modeling shows that the effective use of magnetic abrasive machining in order to achieve a given surface quality of a spherical workpiece is possible only if a constant working gap is ensured. The opposite will lead to a forced increase in processing time. Schemes of magnetic abrasive processing of a spherical blank In further theoretical studies, possible schemes for processing a workpiece with a complex geometric shape were proposed (Fig. 5). The fillet of the screen of a gas-insulated transformer, which has several generatrices, is taken as the workpiece. The scheme with two flat-shaped pole pieces does not solve the problem described above, since the use of a flat pole piece when machining a shaped surface does not ensure a constant working gap during machining. The advantage of using flat pole pieces is the possibility of combining two movements: rotation of the workpiece Vw and rotation of the pole pieces Vp. The use of a single shaped tip that follows the shape of the workpiece to maintain a consistent working gap during machining will result in uneven surface roughness. A finger-shaped pole-piece design ensures a consistent working gap and the desired surface quality. The combination of three movements (rotation of the pole pieces Vp, reciprocating movement of the pole pieces Sp and reciprocating movement of the workpiece Sw) will also have a positive effect on the surface quality: the number of abrasive grains that come into contact with the work surface will increase. However, the low productivity of this method of magnetic abrasive processing will complicate the introduction of the technology into production and may make it economically inexpedient [4-6,8]. The most successful option for ensuring a constant working gap during processing of a shaped part is the one with two shaped tips. The only drawback of this method is that it allows only rotational movements of the workpiece and pole pieces. Magnetic-abrasive device After carrying out theoretical studies of the influence of the size of the working gap on the surface quality when processing spherical products made of aluminum alloys, experimental studies were carried out. 
The experiments were carried out on an installation for magnetic abrasive machining with CNC, the 3D model and design of which are shown in Figure 7. The device for magnetic abrasive machining with CNC includes a base 8 with retaining elements 6, posts 7, core bodies 3, cores 4, pole pieces 5, adjusting screws 1 and solenoid coils 2. Posts 7 are installed on the base 8, and the core bodies 3 are attached to them. The core bodies 3 serve, on the one hand, to guide the reciprocating movement of the cores 4, which is carried out by means of the adjusting screws 1, and, on the other hand, to fix the electromagnetic coils 2 on them; in order to reduce the load on the core bodies 3, the electromagnetic coils 2 are also supported on the supporting elements 6 (Figure 7 a, b). Magnetic-abrasive machining parameters The processing of spherical billets from the AMts aluminum alloy was carried out with the following parameters of magnetic-abrasive processing (Table 1). Figure 8 shows photographs of the surfaces of parts made of AMts aluminum alloy, processed with the above processing parameters with a working gap of 1.5 mm (Fig. 8, a) and 4.5 mm (Fig. 8, b). The photographs show the difference in the quality of the obtained surfaces. The surface obtained during processing with the minimum working gap of 1.5 mm is distinguished by the uniformity of its structure and the absence even of the grooves left by the preceding turning operation. Also shown are profilograms of the processed surfaces obtained with the above processing parameters and working gaps of 1.5 mm (Fig. 8, a) and 4.5 mm (Fig. 8, b). The roughness Ra of the raw surfaces of the workpieces was 1.6-1.8 μm. According to the profilograms, magnetic abrasive processing with the minimum working gap of 1.5 mm gave ΔRa = 1-1.2 μm, an intermediate working gap gave ΔRa = 0.9-1.1 μm, and the 4.5 mm gap gave ΔRa = 0.6-0.8 μm. Conclusion Thus, it can be concluded that the quality of the surface layer of spherical parts made of aluminum alloy deteriorates as the working gap δ in magnetic abrasive processing increases. Based on the results obtained, a graph and a mathematical dependence of ΔRa on δ can be constructed (Fig. 9); the dependence is exponential in form (an illustrative fit is sketched below). In further studies, it is necessary to determine which parameters of magnetic abrasive machining can be changed to compensate for an increased working gap. At the moment, after the theoretical studies performed, it appears that an increase in the magnetic induction B and in the rotation frequency of the workpiece n can provide the required surface quality at an increased working gap, since an increase in the rigidity and performance of the magnetic abrasive brush has a positive effect on the quality of the resulting surface. It should be determined when this increase will be effective and appropriate. In addition, the most successful option for ensuring a constant working gap during machining of a shaped part is the option with two shaped tips. The only drawback of this method is that it allows only rotational movements of the workpiece and pole pieces. The combination of different motions in magnetic abrasive machining significantly increases the machining efficiency and, therefore, can solve the problem of ensuring an optimal working gap.
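The explicit fitted function is not reproduced above, so as an illustration only, the sketch below shows how a law of the form ΔRa = a·exp(b·δ) could be recovered by log-linear least squares from measurements like those reported. The data points are the midpoints of the quoted ΔRa ranges, and the assignment of the intermediate value to a 3.0 mm gap is an assumption; the resulting coefficients are illustrative and are not the authors' published values.

```python
import numpy as np

# Midpoints of the reported ΔRa ranges; the 3.0 mm intermediate gap is an assumption.
gap = np.array([1.5, 3.0, 4.5])   # working gap δ, mm
dRa = np.array([1.1, 1.0, 0.7])   # roughness improvement ΔRa, μm (range midpoints)

# Fit ΔRa = a * exp(b * δ) via linear least squares on ln(ΔRa) = ln(a) + b * δ
b, ln_a = np.polyfit(gap, np.log(dRa), 1)
a = np.exp(ln_a)
print(f"ΔRa(δ) ≈ {a:.2f} * exp({b:.3f} * δ)  [ΔRa in μm, δ in mm]")
print(f"Predicted ΔRa at δ = 2.5 mm: {a * np.exp(b * 2.5):.2f} μm")
```

With these illustrative points the fitted exponent b comes out negative, reproducing the trend stated in the conclusion: the larger the working gap, the smaller the achievable improvement in Ra.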
How I do it? Biportal endoscopic spinal surgery (BESS) for treatment of lumbar spinal stenosis Background Prevalent endoscopic spine surgeries have shown limitations especially in spinal stenosis (Ahn in Neurosurgery 75(2):124–133, 2014). Biportal endoscopic surgery is introduced to manage central and foraminal stenosis with its wide range of access angles and clear view. Methods The authors provide an introduction to this technique followed by a description of the surgical anatomy, with discussion of its indications and advantages. In particular, tricks to avoid complications are also presented. Conclusions Effective circumferential and focal decompression were achieved in most cases without damage to the spinal structural integrity, with preservation of muscular and ligamentous attachments. Biportal endoscopic spinal surgery (BESS) may be safely used as an alternative minimally invasive procedure for lumbar spinal stenosis (Figs. 1 and 2). Electronic supplementary material The online version of this article (doi:10.1007/s00701-015-2670-7) contains supplementary material, which is available to authorized users. Relevant surgical anatomy Many minimally invasive procedures, including various endoscopic procedures, have been introduced to maintain the overall spinal structures (Figs. 1 and 2). The multifidus muscle is very important in its function as a stabilizer of the spine and in locomotor action. Even minimally invasive surgeries, including various endoscopic procedures, might damage the medial multifidus, which is innervated by the medial branch of the dorsal ramus and, unlike the other paraspinal muscles, has no supplementary segmental nerve supply [4]. This approach through the spatium intermusculare with a biportal endoscope and small cannula can protect the erector spinae from injury caused by over-distraction (Fig. 3). Furthermore, variable access angles permit a wider and deeper view of the contralateral side. The paraspinal extraforaminal approach with this technique gives a wider view of the foraminal lesion while avoiding injury to the exiting nerve and radicular artery (Fig. 4). With the proper biportal endoscopic surgical technique, injuries to these structures can be avoided. With this procedure, we could treat all kinds of spinal stenosis, including central, lateral recess, and foraminal stenosis. Description of the technique Instruments The standard arthroscopic facilities and conventional spine instruments such as Kerrison rongeurs, pituitary forceps, curettes, and high-speed diamond burrs are used. Room setup and patient positioning The operating room is set up with the fluoroscopy unit and the video equipment for the endoscope. The procedure is performed under general or epidural anesthesia. The patient is placed in the prone position with the abdomen free over the radiolucent chest frame in a flexed position to open the interlaminar space and foramen. Endoscopic portal placement Under image intensification, fluoroscopic confirmation of the level is made with a spinal needle inserted at the target area. Skin entry points are determined according to the lesion site and the patient's anatomical variation. Two standard entry points are made at 1 cm above and below the disc space for a posterior approach (Fig. 5) and at the foramen level for the posterolateral approach (Fig. 6). The fascia is opened approximately 7 mm with a No. 15 blade scalpel along the skin crease, followed by a blunt muscle-splitting technique [2,3] with a serial dilator touching the lamino-facet joint junction. Position is confirmed with biplanar fluoroscopy. 
Insertion of the endoscope and preparation of the surgical field The posterior approach is accomplished via two portals through the intermuscular septum separating the erector spinae and multifidus muscles, using serial dilators. The multifidus muscle is preserved by detaching it from the lamina with a blunt dissector, without injury, to prepare a working space. This technique offers benefits over other techniques, such as the microendoscopic procedure, by creating a potential fatty space beneath the multifidus muscle in which crushing injury from over-retraction is avoided. We achieve a clear visual field with saline irrigation in this working space. Laminotomy/medial facetectomy/ligamentum flavum removal Thereafter, via a working cannula, conventional surgical instruments, such as the burr, punch, curette, and pituitary forceps, can be used freely at various access angles. Depending on the pathology, ipsilateral decompression is performed first by performing a hemilaminotomy with a drill and a Kerrison rongeur until the superior edge of the deep part of the ligamentum flavum is exposed. Hypertrophied facet joints and the lamina are undercut by drilling; then a blunt hook dissector is used to identify the plane between the ligament and the dura, ensuring that it is free from adhesions, and a curette and a punch can be used to peel off the ligaments and relieve the neural structures. If bilateral decompression is required, the midline of the spinal canal must first be confirmed by resecting the base of the spinous process with a high-speed drill. The scope can then be adjusted medially. Usually the base of the spinous process obstructs the placement of the scope; therefore, it may need to be partially resected to secure sufficient working space. Once exposed, the ligamentum flavum can be detached from the contralateral lamina and then undercut with a burr. The entry to the contralateral side is performed dorsal to the dura with the ligamentum flavum left intact for protection. Bony decompression is performed again using cranial and caudal laminotomy. Medial partial facetectomy of the contralateral superior articular process is performed to preserve the facet joint integrity. After bony decompression, the thickened ligamentum flavum is resected with a curette to fully relieve the neural structures [5,7]. The use of Kerrison rongeurs, a high-speed drill, and an ultrasonic bone cutter enables the lateral recess to be enlarged while keeping the facet joint intact. The endpoint of decompression is the outer edges of the bilateral nerve roots [6]. Continuous saline irrigation at 25 to 30 mmHg maintains a clear surgical view and protects the epidural fat and vessels from damage that may occur during microendoscopic decompression surgery. This technique also avoids increased epidural hydrostatic pressure and a subsequent increase in intracranial pressure. Laminotomy and flavectomy are performed in a similar fashion to microscopic surgery, but bleeding is more effectively controlled by the radiofrequency bipolar system under continuous irrigation. In cases of foraminal stenosis, a working space around the foramen is achieved by meticulous dissection with a blunt dissector under clear vision with a variable-angle view. First, landing on the superior articular process is one of the important keys to the operation. The procedure begins at the safe extraforaminal area. 
Initial decompression of the superior articular process is performed sufficiently so that the exiting nerve is decompressed more safely without much manipulation. Out-in decompression around the nerve under a wide view is of paramount importance [1]. Indications Moderate-to-severe spinal stenosis, including central, lateral recess, and foraminal stenosis; moderate-to-large HNP, with or without mild instability; and grade I spondylolisthesis. In order to prevent air embolism during the procedure, be careful to clear air bubbles from the irrigation pump line. 3. In case of severe stenosis, there may be a dense adhesion of the ligamentum flavum to the dura. In that case, frequent gentle tractions of the ligamentum flavum away from the dura with the punch and pituitary forceps are helpful for spontaneous detachment. The careful insertion of a blunt hook over the dura will prevent tears in the dura, which leads to spontaneous adhesiolysis by saline irrigation into the epidural space between the dura and the overlying ligamentum flavum. If there is a dense adhesion between the dura and the ligamentum flavum, the outer layer is peeled off and the densely adherent area is left in place, keeping the dura intact. Preoperative considerations A preoperative CT scan and a foraminal-view MRI can be studied to assess the feasibility of this technique in an individual patient. Specific intraoperative considerations 1. The saline irrigation pump is monitored and kept between 25 and 30 mmHg, depending on the patient's condition, to prevent an increase in epidural hydrostatic pressure and ICP from the infusion of saline into the epidural space. 2. Dural tears can be prevented by frequent piecemeal detachments of the ligamentum flavum from the adherent dura, with saline irrigation into the potential space. Postoperative considerations Surgical drains are inserted and kept for 24 h after surgery until spontaneous bleeding is controlled. In case of foraminal stenosis, initially decompress the superior articular process and then decompress the herniated disc and bony spurs around the foramen. Be careful not to damage the exiting nerve. 4. Preserve the multifidus muscles by going through the intermuscular septum without crushing or over-retraction injury. 5. Determine the biportal entry points to obtain a wide-angle view with a variable access angle according to the lesion. 6. Clear vision can be obtained under continuous saline irrigation. The hydrostatic pressure of the irrigation pump is monitored and kept between 25 and 30 mmHg to prevent an increase of epidural hydrostatic pressure and a subsequent increase of ICP. 7. Free handling of surgical instruments such as the burr and punch, as in open microsurgery. 8. An easy learning curve for surgeons acquainted with microscopic surgical anatomy. 9. Broad indications: moderate-to-severe spinal stenosis with or without HNP; mild instability. 10. Preservation of epidural fat and vessels: epidural fat can be preserved by meticulous bleeding control with continuous saline irrigation and radiofrequency bipolar coagulation, without using the suction tip over the epidural fat. Compliance with ethical standards This study protocol was approved by the research ethics board of our hospital. All patients signed a written informed consent form for enrollment in this study. Conflicts of interest None. Funding No funding was received for this research. 
Abscisic acid and the antioxidant system are involved in germination of Butia capitata seeds Seed germination is an important step for plants without vegetative propagation and is a physiological process that begins with specific environmental cues resulting in biochemical responses. Breaking-dormancy is necessary to study germination in dormant seeds with asynchronous germination. We investigated the processes of breaking dormancy and germination of Butia capitata (Arecaceae) seeds, in which germination is slow and asynchronous, by operculum removal. This treatment increased germination of B. capitata to 90 %. Embryos of dry, imbibed, 24-hours post-operculum removal and early-germinated seeds were collected for biochemical analysis of the following: quantification of abscisic acid (ABA) and hydrogen peroxide (H2O2), activities of antioxidant enzymes (catalase – CAT, superoxide dismutase – SOD, glutathione reductase – GR) and histolocalization of superoxide anion (O2). Decreases in H2O2 and ABA were recorded 24 hours post-operculum removal. Increased GR and SOD activities during imbibition, and CAT upon germination, indicate a role in controlling reactive oxygen species. Interestingly, the accumulation of O2 on the haustorium upon imbibition seems to be involved in germination, instead of H2O2. For B. capitata seeds, signaling from the removal of the operculum probably resulted in ABA catabolism mediated by O2, which thus promoted seed germination. Seed germination is one of the most important processes in the life cycle of a plant, and its success depends on environmental conditions and appropriate physiological and biochemical responses (Bewley et al. 2013).Reactive oxygen species (ROS) are continuously produced in metabolically active cells of seeds (Gomes & Garcia 2013).During metabolic recovery following imbibition, ROS production increases as germination progresses (Bailly 2004).The production of ROS stimulates the antioxidant system to scavenge toxic levels of those same chemicals as they interact with phytohormones and germination signaling pathways (Diaz-Vivancos et al. 2013).Reactive oxygen species, especially hydrogen peroxide (H 2 O 2 ), are known to be involved in gibberellin (GA) biosynthesis and abscisic acid (ABA) catabolism during germination (Liu et al. 2010;Gomes & Garcia 2013).The balance between these two phytohormones induces germination or the maintenance of dormancy (Bicalho et al. 2015;Vieira et al. 2017). Short communication 1 Laboratório de Fisiologia Vegetal, Departamento de Botânica, Universidade Federal de Minas Gerais, 31270-901, Belo Horizonte, MG, Brazil Palms (Arecaceae) are restricted to the tropics (Tomlinson 2006), but can be found in varying environments (Svenning 2001) with many species producing seeds with primary dormancy and very slow seedling recruitment over time (Pérez et al. 2008;Baskin & Baskin 2013;Bicalho et al. 2015).In the cerrado (Brazilian savanna), there are interesting and useful palm species that contribute to the survival and development of local communities.One such palm is Butia capitata (Mart.)Becc., popularly known as coquinho-azedo, which naturally occurs in environments with seasonal water deficit and whose fruits are used as food.The germination of B. capitata seeds is considered slow and asynchronous due to the physiological, or morphophysiological (see Baskin & Baskin 2014), dormancy of this species, which is related to the difficulty the embryo has in overcoming restrictions imposed by adjacent tissues (Oliveira et al. 
2013).Overexploitation and slow natural regeneration by seeds call for conservation efforts regarding the species.However, the physiological changes, as well as the underlying signaling, involved in the process of seed germination of B. capitata are not well understood. We thus performed an experiment to better understand the relationship between abscisic acid (ABA), ROS and the antioxidant system of B. capitata embryos during germination.To facilitate the experiment and to optimize the synchrony of germination, we employed operculum removal for overcoming dormancy since this method is known to significantly increase germination of palm species (Spera et al. 2001;Fior et al. 2011;Oliveira et al. 2013;Bicalho et al. 2016).The operculum comprises the region with micropilar endosperm and the tegument that is pushed by the embryo during natural germination.However, the natural removal of the operculum by the embryo (visible germination) in B. capitata (and other palm species, see Segovia et al. 2003, andBicalho et al. 2015) is very slow and asynchronous (Oliveira et al. 2013), which would make the present experiment impracticable.Artificial removal of the operculum exposes the embryo to the environment, which may trigger some kind of signaling, but information regarding this is scarce.Thus, our main objective was to better understand the dynamics of ABA and ROS following operculum removal that leads to germination. Fruits of B. capitata were collected in the municipality of Mirabela, state of Minas Gerais.The region is tropical with hot summers and very dry winters; the mean annual precipitation is approximately 900 mm with a seasonal water deficit, and the mean annual temperature is 22 °C with a thermal range of approximately 15 °C (Cwa by the Köppen system) (Köppen 1900;Alvares et al. 2013). Approximately 4000 B. capitata fruits were collected from at least 100 trees.Four replicates of 15 seeds were sowed in distilled water and used to determine imbibition time by weighing the seeds daily until they reached a constant weight.Four replicates of 25 seeds were used for preliminary germination experiments.These seeds were surface sterilized with 5 % NaClO for 15 minutes and then rinsed three times with distilled water.The seeds were then held in distilled water for three days and subsequently transferred to transparent germination boxes lined with a double layer of filter paper moistened with distilled water, which were kept in a germination chamber at 30 °C for 30 days under a 12 h -photoperiod (Phillips, 40 µmol photons m -2 s -1 ). The opercula of the seeds were removed three days after immersion in distilled water (when imbibition stabilized).After this dormancy-breaking treatment, the germination experiment was carried out in the germination chambers as previously described with germination percentages being evaluated daily.The criterion used for visible germination was the protrusion of the cotyledonary petiole. Embryos were extracted from seeds at different phases of the germination process: pre-imbibition (D, dry), postimbibition (I), 24 hours after operculum removal (24 h), and early germination (G, visible germination).The sampled embryos were used for the following biochemical analyses: quantification of ABA and H 2 O 2 ; enzymatic activity of catalase (CAT), superoxide dismutase (SOD), glutathione reductase (GR); and localization of superoxide anion O 2 -.Germination phases were independently sampled with the samples consisting of four replicates of 200 embryos. 
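As a side note on the bookkeeping behind the germination counts, the cumulative percentages of the kind plotted for the dormancy-breaking treatment can be tallied with a few lines of Python; the daily counts used here are hypothetical placeholders, and the replicate layout (four replicates of 25 seeds, germination scored daily) follows the description above.

```python
import numpy as np

# Hypothetical daily counts of newly germinated seeds for four replicates
# of 25 seeds each (criterion: protrusion of the cotyledonary petiole).
daily_counts = np.array([
    [0, 2, 5, 8, 4, 2, 1],   # replicate 1
    [0, 1, 6, 9, 5, 1, 0],   # replicate 2
    [0, 3, 4, 7, 6, 2, 0],   # replicate 3
    [0, 2, 5, 6, 5, 3, 1],   # replicate 4
])
seeds_per_replicate = 25

# Cumulative germination percentage per replicate and per day.
cumulative_pct = 100.0 * np.cumsum(daily_counts, axis=1) / seeds_per_replicate

# Mean and standard error across replicates (as would be plotted with error bars).
mean_pct = cumulative_pct.mean(axis=0)
se_pct = cumulative_pct.std(axis=0, ddof=1) / np.sqrt(cumulative_pct.shape[0])

for day, (m, s) in enumerate(zip(mean_pct, se_pct), start=1):
    print(f"day {day}: {m:.1f} +/- {s:.1f} %")
```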
Abscisic acid extraction was performed following Weiler (1980), with some modifications.Four replicates of 100 mg of embryos were powdered using a mortar and pestle with liquid nitrogen and then homogenized in 80 % methanol with 0.5 mg mL -1 ascorbic acid and 10 mg L -1 BHT.The samples were then added to pH 7.0 TBS buffer with the resulting solution reaching less than 10 % organic solvents.Quantification of ABA (controls, standard curve, samples and calculations) was performed strictly following the recommendations of Phytodetek® ABA Test Kits (Agdia Incorporated, Indiana, USA).Since water content increased throughout imbibition, the values of ABA quantification obtained for I, 24 h and G phases were corrected based on D phase moisture content. Quantification of H 2 O 2 was performed following Velikova et al. (2000).Four replicates of 100 mg of embryos were powdered using a mortar and pestle with liquid nitrogen, homogenized with 0.1 % trichloroacetic acid (TCA), and centrifuged at 12000 g for 15 minutes.The extract was reacted with 10 mM of pH 7.0 phosphate buffer and 1 M KI.The absorbance was recorded at 390 nm and the H 2 O 2 content subsequently calculated from the standard curve.The values of H 2 O 2 were also corrected as for ABA quantification. Antioxidant enzyme extract was prepared following Gomes et al. (2013).Four replicates of 100 mg of embryos were pulverized in liquid nitrogen and homogenized in 100 mM of pH 7.8 phosphate buffer with 100 mM EDTA and 5 % PVP.The extract was then centrifuged at 12000 g (4 °C) and subsequently used to evaluate enzymatic activity and protein content, the latter using Bradford's method (Bradford 1976). Catalase activity was determined following Aebi (1984) in 67 mM phosphate buffer (pH 7.0) with 10 mM H 2 O 2 and adequate amounts of the enzyme extract.Consumption of H 2 O 2 was recorded at 240 nm for 2 minutes (Ɛ = 0.0394 mM −1 cm −1 ).Superoxide dismutase activity was determined in a reaction buffer containing 50 mM phosphate buffer (pH 7.8), 13 mM L-methionine, 0.1 mM EDTA, 0.002 mM riboflavin, and 0.075 mM nitroblue tetrazolium (NBT) (Giannopolitis & Ries 1977).Blue formazan concentrations were read at 575 nm, and one unit of SOD was considered as the amount of enzyme necessary to reduce 50 % of the O 2 -.Glutathione reductase activity was determined in media containing 100 mM phosphate buffer (pH 7.8), 50 mM oxidized glutathione, 5 mM NADPH, and adequate amounts of the enzymatic extract, with NADPH oxidation being monitored at 340 nm (Ɛ = 6.2 mM -1 cm -1 ) (Foyer & Halliwell 1976). Histolocalization of superoxide anion (O 2 -) was performed strictly following Oracz et al. (2012).Twenty embryos of each phase were immersed in 10 mM of TRIS buffer (pH 7.0) with 1 mM of a nitroblue tetrazolium (NBT) solution for 10 minutes.The embryos were then washed three times in 10 mM TRIS buffer (pH 7.0) and photographed. The data for ABA, H 2 O 2 and antioxidant enzyme activities were tested for normality and homoscedasticity using Cochran and Levene´s test, respectively.Normal and homoscedastic or transformed data (arcsine root square) were analyzed by ANOVA followed by the Tukey test for comparisons of all means at 5 % level of probability, using Statistica 7 StatSoft® software. The germination of B. capitata seeds was 88 % at 72h post-operculum removal (Fig. 1A) with great concentrations of ABA prior to imbibition (Fig. 
1B).Biosynthesis of ABA has been shown to be a response to water deficit (Bray 1997), a common condition in plants growing in localities with well-defined dry seasons (Liu et al. 2005), as is the case for B. capitata.The initially high levels of ABA observed for B. capitata embryos reduced during imbibition (Fig. 1B).Reductions in ABA levels during imbibition were also observed by Bicalho et al. (2015) for macaw palm embryos, and by Dias et al. (2017) in the cotyledonary petiole of B. capitata seeds, both of which were related to leakage.In this work the reduction in H 2 O 2 levels was found only 24 h after operculum removal (Fig. 1C) and the presence of H 2 O 2 has been shown to induce ABA catabolism in pea seeds, thereby allowing germination (Barba-Espin et al. 2010).It is interesting to point out that in the present work ABA levels decreased continuously from I to early-germination phase (Fig. 1B).Concentrations of H 2 O 2 were also observed to be maintained in the early-germination phase relative to 24 h post-operculum removal (Fig. 1C). As shown by purple coloration indicating a positive reaction with NBT, O 2 -was intensely produced in the haustorium region of B. capitata embryos since imbibition (Fig. 2).Interestingly, the presence of O 2 -was indicated only in haustorium region, which is rich in reserves for early seedling growth.These results suggest that the haustorium has actions that go beyond reserve mobilization beginning with the first events of imbibition (see the increasing color intensity from D to G embryos of B. capitata; Fig. 2).The haustorium can also be assumed to play a role as a source of signaling mediated by O 2 -formation.An interesting relationship was observed here between ABA and ROS: reductions of both ABA and H 2 O 2 levels following operculum removal (Fig. 1B), with an increase in O 2 -formation (Fig. 2).These results indicate that the dynamics of ROS and ABA are related to operculum removal.This leads to the suggestion that the exposure of the embryo to O 2 due to operculum removal (dormancy-breaking process) induced metabolic changes in hormonal and oxidant components.Reductions in H 2 O 2 levels and increasing O 2 -formation in embryos of after operculum removal have also been observed in other palm species (TRS Santos unpubl.res.).The formation of ROS during germination is a natural process due to the reactivation of metabolism and increased respiration (Diaz-Vivancos et al. 2013), and can serve as a window of signaling, or the cause of damage if overproduced (El-Maarouf-Bouteau & Bailly 2008).Therefore, the activity of antioxidant enzymes during germination of B. capitata seeds was investigated. Among the enzymes investigated here, SOD and GR were observed to have increased activity relative to D seeds as soon as imbibition occurred (Fig. 1E, F), while CAT activity was higher in the early-germination phase than in phases I and 24 h post-operculum removal (P < 0.05; Fig. 1D).The SOD and GR results indicate that imbibition in B. capitata seeds seems to induce or stimulate the activity of antioxidant enzymes.Indeed, that imbibition leads to increased respiration metabolism, which generates ROS and stimulates antioxidant pathways to keep redox homeostasis under control, has been widely discussed (Foyer & Noctor 2005;El-Maarouf-Bouteau & Bailly 2008;Diaz-Vivancos et al. 2013;Gomes & Garcia 2013).It is important to point out that CAT activity was at least 10fold higher than that of SOD (Fig. 
1D, E), suggesting that H 2 O 2 was maintained at non-toxic levels in germinating seeds (higher CAT activity). Another interesting observation is that the increased SOD activity upon imbibition did not eliminate O 2 - , which leads to the conclusion that the presence of this molecule is essential for the process of seed germination. Moreover, since O 2 - is present only in the haustorium, it participates in germination itself and not only in reserve mobilization (a post-germinative event). Altogether, as germination of B. capitata seeds progresses successfully, the activities of CAT, SOD and GR can be assured and are essential for keeping ROS under control during the recovery of metabolism, thus allowing germination to occur. The present study showed the involvement of ROS and antioxidant enzymes, in addition to ABA, during the germination process of the palm species Butia capitata. The involvement of H 2 O 2 and O 2 - during the entire process was evident. Interestingly, it seems that the superoxide anion is more closely related to germination signaling in B. capitata seeds than the expected H 2 O 2 , by interacting with ABA after operculum removal and allowing germination itself to proceed. The activities of GR and SOD had important roles in ROS scavenging during the entire process, while CAT activity must be related to the initial development of seedlings due to its increased activity in early-germinated seeds, as shown here. Taking all the enzymes together, their role in ROS scavenging seems to be secondary to their role in seed germination signalling. The marked presence of O 2 - was essential for demonstrating that the haustorium is important for palm seed germination from the beginning of the germination process. Actually, the role of the haustorium during palm seed germination itself has been neglected in previous works and needs to be investigated more deeply in B. capitata and in other palm species. Figure 1. Germination percentage and biochemical parameters for embryos of Butia capitata during phases of germination. Graphics show germination percentage from the dormancy-breaking process (A); levels of ABA (B) and H 2 O 2 (C); and CAT (D), SOD (E) and GR (F) activities during germination phases. Dots or bars are means ± standard error of four replicates. Means followed by the same letters do not differ according to the Tukey test at 5 % probability. ABA, abscisic acid; H 2 O 2 , hydrogen peroxide; CAT, catalase; SOD, superoxide dismutase; GR, glutathione reductase (y-axis); D, dry; I, imbibed; 24h, 24 hours after operculum removal; G, early-germinated (x-axis).
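For completeness, the arithmetic behind enzyme activities of the kind shown in Figure 1D-F is ordinary Beer-Lambert bookkeeping; the sketch below uses the extinction coefficients quoted in the methods (0.0394 mM^-1 cm^-1 for CAT at 240 nm and 6.2 mM^-1 cm^-1 for GR at 340 nm), while the absorbance rates, reaction volumes and protein amounts are hypothetical.

```python
# Minimal sketch of the enzyme-activity arithmetic described in the methods.
# Absorbance rates, path length, volumes and protein contents are hypothetical.

def activity_umol_per_min_per_mg(delta_abs_per_min, extinction_mM_cm,
                                 path_cm, reaction_vol_ml, protein_mg):
    """Convert an absorbance change rate into umol min^-1 mg^-1 protein."""
    # Beer-Lambert: delta_C (mM) = delta_A / (epsilon * path length)
    delta_mM_per_min = delta_abs_per_min / (extinction_mM_cm * path_cm)
    umol_per_min = delta_mM_per_min * reaction_vol_ml  # mM * mL = umol
    return umol_per_min / protein_mg

# CAT: consumption of H2O2 followed at 240 nm (epsilon = 0.0394 mM^-1 cm^-1).
cat = activity_umol_per_min_per_mg(delta_abs_per_min=0.045,
                                   extinction_mM_cm=0.0394,
                                   path_cm=1.0, reaction_vol_ml=1.0,
                                   protein_mg=0.05)

# GR: oxidation of NADPH followed at 340 nm (epsilon = 6.2 mM^-1 cm^-1).
gr = activity_umol_per_min_per_mg(delta_abs_per_min=0.012,
                                  extinction_mM_cm=6.2,
                                  path_cm=1.0, reaction_vol_ml=1.0,
                                  protein_mg=0.05)

print(f"CAT: {cat:.2f} umol min^-1 mg^-1 protein")
print(f"GR:  {gr:.3f} umol min^-1 mg^-1 protein")
```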
Measuring nebular temperatures: the effect of new collision strengths with equilibrium and kappa-distributed electron energies In this paper we develop tools for observers to use when analysing nebular spectra for temperatures and metallicities, with two goals: to present a new, simple method to calculate equilibrium electron temperatures for collisionally excited line flux ratios, using the latest atomic data; and to adapt current methods to include the effects of possible non-equilibrium '{\kappa}' electron energy distributions. Adopting recent collision strength data for [O iii], [S iii], [O ii], [S ii], and [N ii], we find that existing methods based on older atomic data seriously overestimate the electron temperatures, even when considering purely Maxwellian statistics. If {\kappa} distributions exist in H ii regions and planetary nebulae as they do in solar system plasmas, it is important to investigate the observational consequences. This paper continues our previous work on the {\kappa} distribution (Nicholls et al. 2012). We present simple formulaic methods that allow observers to (a) measure equilibrium electron temperatures and atomic abundances using the latest atomic data, and (b) to apply simple corrections to existing equilibrium analysis techniques to allow for possible non-equilibrium effects. These tools should lead to better consistency in temperature and abundance measurements, and a clearer understanding of the physics of H ii regions and planetary nebulae. Introduction Fundamental to all methods of measuring temperatures and abundances in gaseous nebulae are the atomic data for the ionized nebular species. In particular, an accurate knowledge of the collision strengths for the excitation of ionized nebular species is critical to obtaining reliable information on the conditions in these plasmas. Unfortunately, computing these collision strengths is a lengthy and complex process, placing considerable demands on computational power. Many current nebular abundance analysis methods make use of atomic data computed over 20 years ago. In this work we assemble the best available modern data to investigate the effects on temperature and abundance measurement. We find that the latest data makes a considerable difference to the answers obtained. All previous approaches have used "effective collision strengths", where the detailed computed collision strengths are convolved with Maxwell-Boltzmann electron energy distributions at fixed temperatures. In this work, we use the detailed collision strengths whose energy dependence has not been convolved with an electron energy distribution. Our approach has enabled us to build simple formulas which will allow the observer to calculate (equilibrium) electron temperatures, based on the most recent atomic data. We also return to the subject of our previous paper (Nicholls et al. 2012, hereafter, NDS12), the non-equilibrium κ electron energy distribution. These distributions have been widely detected in solar system plasmas (Pierrard & Lazar 2010), and Tsallis et al. (1995) have explained from entropy considerations why and how such distributions can occur. As previous analyses have assumed equilibrium energy distributions in H ii regions and planetary nebulae, we revisit our reasons for considering non-equilibrium electron energy distributions in these objects. In this paper we take the exploration of the κ distribution further. Using the unconvolved collision strengths, we explore in detail the effects of the κ distribution. 
We derive formulae to simplify calculating the effects of a κ distribution from conventional equilibrium results. In this way, observers can investigate the effect of any κ-type divergence from equilibrium electron energies. Our aim is to provide the observer with a set of tools to (a) take advantage of the latest atomic data for equilibrium calculations; and (b) using the κ electron energy distribution, to correct apparent temperatures measured from temperature sensitive line ratios or recombination continua for subsequent abundance analyses. The paper is organised as follows. In section 2, we present a rationale for considering non-equilibrium electron energy distributions in gaseous nebulae. In section 3, we describe the κ distribution for electron energies and compare the collisional excitation rates for the κ and Maxwell-Boltzmann (M-B) distribution. In section 4, we discuss the factors involved in obtaining accurate collisionally excited line (CEL) equilibrium electron temperatures from theoretical collision strengths. In particular we point out the errors resulting from inaccuracies in the collision strengths used as the bases for most current direct electron temperature techniques. We show that for non-equilibrium electron energy distributions, it is necessary to use detailed collision strengths for atomic species of interest, as distinct from the thermally averaged effective collision strengths that are usually published; we discuss the effect of premature truncation of collision strength computation at high energies; and list the sources for the collision strength data we have used. In section 5 we explore the effect of the κ distribution on recombination processes, and how to calculate the effects of a κ distribution on the apparent temperature and density of the recombining electrons; and we calculate the degree to which recombination lines are enhanced by the κ distribution. In section 6, we describe in detail the effect of the κ distribution on collisionally excited lines. Using a general expression for the collisional excitation rate ratio between the κ and M-B distributions, we derive the relative intensity enhancements for different atomic species, and detailed equations for temperature sensitive line flux ratios. We show typical flux ratio vs. kinetic temperature plots for [O iii] and [S iii] for a range of values of κ, based on direct calculation from recently published collision strengths. Section 7 is the main focus of the paper, where we present a new, simple method for calculating equilibrium electron temperatures from line flux ratios using the most recent collision strength data, including density corrections; tools for measuring true (kinetic) CEL electron temperatures, using conventionally calculated equilibrium electron temperatures as a starting point; and a simple linear equation for converting between conventional measurement results and κ-corrected temperatures. In section 8 we discuss briefly the effect of κ on strong line methods. In section 9 we present ways to determine κ and point out the need for and progress with implementing κ effects in photoionization modelling codes. In section 10 we summarize our conclusions. 
In Appendix 1 we list the temperature-sensitive lines for the most common atomic species found in H ii regions and PNe; the transition probabilities for these transitions; and the various factors appearing in the formulae in Section 5 which allow the temperature-sensitive line ratios to be computed for any internal energy temperature and value of κ. Rationale for considering non-equilibrium electron energies. It has long been held that the electrons in H ii regions and planetary nebulae are in thermal equilibrium. Analytical calculations of electron velocity distributions in gaseous nebulae were presented by Bohm & Aller (1947). Their work led them to state that the velocity distribution is "very close to Maxwellian". Spitzer (1962, Ch.5) also examined the thermalization process for electron energies in plasmas and found that electron energies equilibrate rapidly through collisions. This early work has lead later authors to assume the electrons in gaseous nebulae are always in thermal equilibrium. However, Spitzer's analysis showed that the equilibration time of an energetic electron is proportional to the cube of the velocity, so even using equilibrium theory, plasmas with very high energy electrons take much longer to equilibrate than those excited by normal UV photons from stars found in H ii regions. In more recent times, the electron energies in solar system plasmas have been measured directly by satellites and space probes. This began with Vasyliunas (1968), who found that the electron energies in the Earth's magnetosphere departed substantially from the Maxwellian, and resembled a Maxwellian with a high energy power law tail. He showed that this distribution could be well described by what he called the "κ distribution". Since then, κ distributions have been widely detected in solar system plasmas and are the subject of considerable interest in solar system physics. 1 They have been detected in the outer heliosphere, the magnetospheres of all the gas-giant planets, Mercury, the moons Titan and Io, the Earth's magnetosphere, plasma sheet and magnetosheath and the solar wind (see references in Pierrard & Lazar (2010)). There is also evidence from IBEX observations that energetic neutral atoms in the interstellar medium, where it interacts with the heliosheath, exhibit κ energy distributions ). In solar system plasmas, the κ distribution is the norm, and the MB distribution is a rarity. So we are confronted with the fact that despite the early theoretical work suggesting that the electrons in such plasmas should be in thermal equilibrium, they are almost always not. Initially, κ distributions were used as empirical fits to observed energies, and were criticized as lacking a theoretical basis. Subsequently, the distribution has been shown to arise naturally from entropy considerations. See, for example, Tsallis et al. (1995);Treumann (1999); Leubner (2002), and the comprehensive analysis by Livadiotis & McComas (2009). They have explored "q non-extensive statistical mechanics" and have shown that κ energy distributions arise as a consequence of this entropy formalism, in the same way as the Maxwell-Boltzmann distribution arises from Boltzmann-Gibbs statistics. The requirement for this to occur is that there be macroscopic interactions between particles, in addition to the shorter-range Coulombic forces that give rise to Maxwell-Boltzmann equilibration. Tsallis statistics provide a sound basis for the overtly successful use of the κ distribution in describing solar system plasmas. 
κ distributions appear to arise whenever the plasma is being pumped rapidly with high energy non-thermal electrons, so that the system cannot relax to a classical Maxwell-Boltzmann distribution. Collier (1993) has also shown that κ-like energy distributions can arise as a consequence of normal power-law variations of physical parameters such as density, temperature, and electric and magnetic fields. It is plausible that such conditions are also present in H ii regions and PNe-solar system plasma parameters span the many of the conditions found in gaseous nebulae, and, as in the solar system, H ii region plasmas can be magnetically dominated (Arthur et al. 2011;Nicholls et al. 2012)-so it is important to investigate the effects of non-equilibrium energy distributions with high-energy tails in occurring in gaseous nebulae, should they occur. Such non-Maxwellian energies may occur whenever the population of energetic electrons is being pumped in a timescale shorter than, or of the same order as the normal energy re-distribution timescale of the electron population. Suitable mechanisms include magnetic reconnection followed by the migration of high-energy electrons along field lines, the development of inertial Alfvén waves, local shocks (driven either by the collision of bulk flows or by supersonic turbulence), and, most simply, by the injection of high-energy electrons through the photoionization process itself. Normal photoionization produces supra-thermal electrons on a timescale similar to the recombination timescale. However, energetic electrons can be generated by the photoionization of dust (Dopita & Sutherland 2000), and X-ray ionization can produce highly energetic (∼ keV) inner-shell (Auger process) electrons (e.g. Shull & Van Steenberg (1985); Aldrovandi & Gruenwald (1985); Petrini & Da Silva (1997), and references therein). These photoionization-based processes should become more effective where the source of the ionizing photons has a "hard" photon spectrum. Thus, the likelihood of the ionized plasma having a κ electron energy distribution would be high in the case of either photoionization by an Active Galactic Nucleus (AGN), or the case of PNe, where the effective temperature of the exciting star could range up to ∼ 250, 000K. So we have no shortage of possible energy injection mechanisms capable of feeding the energetic population on a timescale which is short compared with the collisional redistribution timescale. The rate of equilibration falls rapidly with increasing energy, and we would expect there to be a threshold energy above which any non-thermal electrons have a long residence time. These can then feed continually down towards lower energies through conventional collisional energy redistribution, thus maintaining a κ electron energy distribution. In addition to the energy injection mechanisms capable of maintaining the excitation of suprathermal distributions, several authors and references therein; Shizgal (2007);Treumann (2001)) have investigated the possibility that the κ distribution may remain stable against equilibration longer than conventional thermalization considerations would suggest. In particular, distributions with 2.5 κ > 1.5-detected, for example, in Jupiter's magnetosphere-appear to have the capacity, through increasing entropy, of moving to values of lower κ i.e. away from (Maxwell-Boltzmann) equilibrium. 
While the physical application of this aspect of κ distributions remains to be explored fully, it suggests that where q non-extensive entropy conditions operate, the suprathermal energy distributions produced exist in "stationary states" where the behaviour is, at least in the short term, time-invariant (Livadiotis & McComas 2010a). These states may have longer lifetimes than expected classically. This is consistent with the numerous observations in solar system plasmas, that κ electron and proton energy distributions are the norm. It is likely, therefore, that photoionized plasmas in gaseous nebulae will show departures from a Maxwell distribution to some degree. The key questions are, is this important, and does it produce observable effects in the nebular diagnostics which we have relied upon hitherto? The answer to both questions appears to be 'yes'. For several decades, systematic discrepancies have plagued abundance measurements derived from observations of emission lines and emission continua in H ii regions and PNe. In particular, abundances determined from collisionally excited lines (CEL) for different ions differ from one another, and temperatures determined from Hydrogen and Helium bound-free continuum spectra are consistently lower than those obtained from CELs. As a consequence, chemical abundances determined from the optical recombination lines (ORL) are systematically higher than those determined from CELs. These discrepancies are often referred to as the "abundance discrepancy problem" and are sometimes even parameterized as the "abundance discrepancy factor" (ADF). The problem was first observed 70 years ago and has been discussed regularly in the literature for 40 years. See, for example, Wyse (1942); Peimbert (1967); Liu et al. (2000); Stasińska (2004); García-Rojas & Esteban (2007). A number of attempts have been made to explain these differences. The earliest attempt appears to be by Peimbert (1967), who proposed small temperature inhomogeneities through the emitting regions as the cause. Later, Liu et al. (2000) suggested the presence of a two-phase "bi-abundance" structure, where the emitting regions contain cool, metal-rich, hydrogen poor inclusions. However, neither explanation appears to be fully satisfactory: the temperature fluctuation model often requires large fluctuations to explain the observed discrepancies, without suggesting how these fluctuations could arise. The bi-abundance model requires proposing inhomogeneities where, in some cases, none are observed, or where the physical processes militate against the stability of such inhomogeneities. The reader is referred to the detailed discussion by Stasińska (2004). Further, in neither of these mechanisms is the discrepancy between different CEL species explained. More recently, Binette et al. (2012) have suggested that shock waves may contribute to the apparent discrepancies, but they state that the mechanism needs to be explored further before it can be considered an explanation. A common feature of all these approaches is that they assume the electrons involved in collisional excitation and recombination processes are in thermal equilibrium. In our previous paper (NDS12) we showed that a non-equilibrium κ electron energy distribution is capable of explaining both the ORL/CEL discrepancy, and the differences between electron temperatures obtained using different CEL species. The mechanism has been shown, for example, to provide an explanation in the case of [O iii] and [S iii] CEL lines (Binette et al. 
2012). It is interesting to note that extreme departures from an equilibrium electron energy distribution are not required to accomplish this, and if there is pumping of electron energies by mechanisms clearly likely to occur in gaseous nebulae, such distributions may not be difficult to achieve. In this paper, we continue to explore the implications of κ energy distributions, using recently published collision strength data for key nebular species to model the effects the κ distribution will have, if present, on the physics of H ii regions and PNe. The κ distribution The κ distribution resembles the M-B distribution at lower energies but has a high energy power law tail. Expressed in energy terms, the κ distribution is (NDS12): f_κ(E) dE = N_e (2/√π) (k_B T_U)^{-3/2} [Γ(κ+1)/((κ − 3/2)^{3/2} Γ(κ − 1/2))] E^{1/2} [1 + E/((κ − 3/2) k_B T_U)]^{-(κ+1)} dE (1). The parameter κ describes the extent to which the energy distribution differs from the M-B. Its values lie in the range [3/2, ∞]. In the limit as κ → ∞, the energy distribution reduces to the equilibrium M-B distribution: f_MB(E) dE = N_e (2/√π) (k_B T_U)^{-3/2} E^{1/2} exp(−E/k_B T_U) dE (2), where T_U is the "kinetic" or "internal energy" temperature, defined in terms of the energy density of the system, as per NDS12, equation 5; N_e is the electron density; and k_B is the Boltzmann constant. For a M-B energy distribution, T_U is simply the thermodynamic temperature. Thus the Maxwell-Boltzmann distribution is a special case of the κ distribution, where there is no long-range pumping of electron energies at timescales similar to the collisional relaxation time. It can readily be shown by integration with respect to energy between the limits [0, ∞] that the area under the curves given in equations (1) and (2) is N_e, the electron density, in both cases, and in the case of κ → ∞ the internal energy temperature is identically equal to the classical electron temperature. As shown by NDS12, the collisional excitation rate from level 1 to level 2 for an M-B distribution is given by R^MB_12 = N_e N_1 (2π)^{1/2} ħ² m_e^{-3/2} g_1^{-1} (k_B T_U)^{-3/2} ∫_{E_12}^∞ Ω_12(E) exp(−E/k_B T_U) dE (3), and for a κ-distribution, the corresponding rate is R^κ_12 = N_e N_1 (2π)^{1/2} ħ² m_e^{-3/2} g_1^{-1} (k_B T_U)^{-3/2} [Γ(κ+1)/((κ − 3/2)^{3/2} Γ(κ − 1/2))] ∫_{E_12}^∞ Ω_12(E) [1 + E/((κ − 3/2) k_B T_U)]^{-(κ+1)} dE (4), where Ω_12 is the collision strength for collisional excitations from level 1 to level 2, E_12 is the energy gap between levels 1 and 2, N_1 is the density of ions in the lower level, g_1 is the statistical weight of the lower state, and Γ is the gamma function. As a first order approximation, we can assume that the collision strength for excitations from level 1 to 2, Ω_12, is independent of energy. For this case the ratio of the rates of collisional excitation from level 1 to level 2 for a κ distribution can be expressed analytically (NDS12) as R^κ_12/R^MB_12 = [Γ(κ)/(Γ(κ − 1/2) (κ − 3/2)^{1/2})] exp(E_12/k_B T_U) [1 + E_12/((κ − 3/2) k_B T_U)]^{-κ} (5). Detailed plots and values for this equation for a range of values of κ are given in NDS12, Figure 5 and Table 1. Electron temperatures are generally measured using the line ratio of two emission lines with well-separated excitation energies, of which the best known is the λλ4363/5007 ratio for [O iii]. As shown in NDS12, equations 12 and 13, for a M-B electron energy distribution, considering a simplified three-level atom, the ratio of the collisional excitation rate from level 1 to level 3 to the rate from level 1 to level 2, for the constant Ω case, is given by the well-known formula R_13/R_12 = (Ω_13/Ω_12) exp(−(E_13 − E_12)/k_B T_U) (6), where the collision strengths are once again considered to be independent of energy. For a κ electron energy distribution, again for the constant Ω case, the collisional excitation rate ratio is given by R^κ_13/R^κ_12 = (Ω_13/Ω_12) [(1 + E_13/((κ − 3/2) k_B T_U)) / (1 + E_12/((κ − 3/2) k_B T_U))]^{-κ} (7), where T_U is the kinetic or internal energy temperature. Collision strength considerations 4.1. "Non-averaged" and effective collision strengths Equations (3) and (4) emphasise the importance of a knowledge of the collision strength over all energies. 
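To make the scale of these corrections concrete, the sketch below evaluates the constant-Ω rate-ratio expression of equation (5) for two representative [O iii] excitation thresholds; the threshold energies (roughly 2.51 and 5.35 eV for the 1D2 and 1S0 levels) and the temperature are illustrative values, and the implementation is our own rather than code distributed with NDS12.

```python
import numpy as np
from scipy.special import gammaln

K_B_EV = 8.617333e-5  # Boltzmann constant in eV / K

def kappa_to_mb_rate_ratio(e12_ev, t_u_kelvin, kappa):
    """Ratio of kappa to Maxwell-Boltzmann collisional excitation rates for
    a transition with threshold energy e12_ev, assuming a collision strength
    that is independent of energy (constant-Omega approximation)."""
    kt = K_B_EV * t_u_kelvin
    x = e12_ev / ((kappa - 1.5) * kt)
    # Gamma-function prefactor, evaluated in log space for numerical safety.
    prefactor = np.exp(gammaln(kappa) - gammaln(kappa - 0.5)) / np.sqrt(kappa - 1.5)
    return prefactor * np.exp(e12_ev / kt) * (1.0 + x) ** (-kappa)

# Illustrative [O III] thresholds (~2.51 eV for 1D2, ~5.35 eV for 1S0)
# at an internal energy temperature of 10,000 K.
for kappa in (5, 10, 20, 50):
    r_1d2 = kappa_to_mb_rate_ratio(2.51, 1.0e4, kappa)
    r_1s0 = kappa_to_mb_rate_ratio(5.35, 1.0e4, kappa)
    print(f"kappa={kappa:3d}:  1D2 ratio={r_1d2:.3f}   1S0 ratio={r_1s0:.3f}")
```

As expected, both ratios tend to unity as κ grows large, while the higher-threshold (auroral) excitation is boosted much more strongly at small κ than the lower-threshold one.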
In all the current literature, a M-B distribution has been assumed, and the effective collision strengths used are the collision strengths averaged over M-B energy distributions at different temperatures. It should be noted that this averaging process is calculated for a fixed population of electrons, N e . Thus the full equation for deriving the effective collision strengths, Υ 12 , from the collision strengths, Ω 12 , for collisional excitations from level 1 to level 2 is: where E 12 is the threshold energy for excitation from level 1 to level 2. In the case of a κ-distribution, the weighting with energy in the integral is quite different, c.f. Equation (4), and a knowledge of the behaviour of the collision strength at high energy becomes much more important. It is therefore necessary to use the raw (non-energy averaged) collision strengths. While effective collision strengths have been published for almost all atomic species relevant to H ii regions and PNe, the raw collision strength data are much harder to find. For this work we have collated modern computed "raw" collision strength data for O i, N ii, O iii, S iii, and O ii, and older or limited data for S ii, Ne iii, Ar v, Ne iv, Ar iv, and Ne v. We have no raw collision strength data for N i. Our data sources are listed in Table 1. An example of the complexity of the raw collision strength data is shown for the 1 D 2 and 1 S 0 levels of O iii in Figure 1, where the data is taken from from Palay et al. (2012, hereafter, PNPE12). Note the numerous resonances and edges, and the systematic variation with energy seen in the 3 P − 1 D 2 transition. The calculation of raw collision strengths is a very complex exercise, involving the coupling of many electrons, relativistic corrections, and a host of other computational issues. In general, there has been a steady improvement in the techniques of computation, so we need to be careful in using data from older sources. Given that an accurate knowledge of collision strengths is essential for determining electron temperatures and elemental abundances in nebulae, the errors that may be present in published data sets is a concern. In the following sub-sections we consider the possible effects of truncation of the energy range of the computed collision strengths, errors in the computed excitation energies, and absolute errors in the computed collision strengths on the collisional excitation rates. Note the numerous resonances and edges, and the variation with energy in the 3 P − 1 D 2 transition. Errors in computed collision strengths Our knowledge of the absolute value of the collision strengths feeds directly into measurements of electron temperatures and elemental abundances. Because of the complexity of calculating the collision strengths and the wide range of atomic species for which they are needed, these parameters are frequently only available at present from a single source, if at all. An exception to this is O iii, but even for this important species, they have only been computed four times in the past two decades, and only once in the past decade (Aggarwal 1993;Lennon & Burke 1994;Aggarwal & Keenan 1999;Palay et al. 2012). Further, non-averaged collisions strengths (i.e., not convolved with M-B distributions) are difficult for the end user to obtain. See Table 1 above for details of the sources used. These computations vary considerably in their details, the upper energy limit of the computations (the truncation energy), and what physics is taken into account. 
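For readers who wish to reproduce the averaging step themselves, equation (8) reduces to a one-dimensional quadrature once Ω(E) is tabulated. The sketch below assumes the standard Maxwellian average over the scattered-electron energy in units of k_B T; the Ω(E) table is a constant placeholder rather than real atomic data.

```python
import numpy as np

K_B_EV = 8.617333e-5  # eV / K

def effective_collision_strength(energy_ev, omega, e12_ev, t_kelvin):
    """Maxwellian-averaged (effective) collision strength Upsilon_12(T).

    energy_ev : incident electron energies at which omega is tabulated (eV)
    omega     : collision strength Omega_12(E) on that grid (dimensionless)
    e12_ev    : excitation threshold energy (eV)
    t_kelvin  : electron temperature (K)
    """
    kt = K_B_EV * t_kelvin
    # Energy of the scattered electron after the excitation.
    e_final = energy_ev - e12_ev
    mask = e_final >= 0.0
    x = e_final[mask] / kt
    integrand = omega[mask] * np.exp(-x)
    return np.trapz(integrand, x)

# Placeholder Omega(E) table: a constant collision strength of 2.0 between
# a 2.5 eV threshold and a 100 eV truncation energy.
grid = np.linspace(2.5, 100.0, 2000)
omega_tab = np.full_like(grid, 2.0)

for t in (5.0e3, 1.0e4, 2.0e4):
    print(f"T = {t:8.0f} K  ->  Upsilon ~ {effective_collision_strength(grid, omega_tab, 2.5, t):.3f}")
```

For a constant Ω the effective collision strength should come out essentially equal to Ω itself at nebular temperatures, which is a convenient sanity check on any implementation.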
For O iii, the most recent computations by PNPE12 appear much the most reliable, as they take into account relativistic effects and have a much higher truncation energy (178.2eV, c.f. 43.5eV for Aggarwal (1993) and 54.4eV for Lennon & Burke (1994)). For this reason the currently used values (see, e.g., Osterbrock & Ferland (2006)) for calculating line flux ratios, and resultant electron temperatures, need to be revised, independently of any κ-distribution considerations. We use the PNPE12 data and detailed numerical integration as the baseline. This became available only after the finalization of our earlier paper. The differences between these and earlier computations can lead to considerable differences in electron temperatures computed from CEL flux ratios, even for M-B equilibrium electron energy distributions. Figure 2 shows that use of the earlier data sources leads to systematic overestimates of [O iii] electron temperatures for temperatures between 5,000 and 30,000K. The IRAF 2.14 results were obtained using the nebular/temden routine, which for the 11/2008 release adopts the Lennon & Burke (1994) effective collision strengths 3 . Figure 2 has a profound impact upon all previous abundance analyses of PNe and H ii regions, even before taking into account the effect of non-equilibrium κ electron energy distributions. Wherever the T e + ionisation correction factor (ICF) method has been used, the overestimate in T e will result in a significant under-estimate in the chemical abundance. The strong line techniques are also liable to revision, as the collision strength for the [O iii] 3 P − 1 D 2 transition is enhanced by about 30% over the previous estimates. The effect on the strong line methods is discussed briefly in section 7, below, but these and other strong line effects will be the subject of a later paper. Errors in computed excitation energies Also critical to the accurate estimation of collision strength effects are errors in the computed threshold energies of the excited states. In some computations (e.g., Aggarwal (1993); Aggarwal & Keenan (1999) for O iii), there are non-trivial differences between the computed and the observed energies. PNPE12 note that although their computed energies for O iii were quite close to the experimentally determined values, errors in effective collision strengths can arise from threshold energy discrepancies for low temperature excitations dominated by near-threshold resonances. They minimize these by adjusting the threshold energies to match the observed excitation energies. In the case of κ distribution, where we integrate the raw collision strengths directly, it is essential that the threshold energies used in the integration (E 12 and E 13 in equation (7)) correspond exactly to the values expressed in the collision strength data. Using a threshold energy from a standard source that differs from the threshold indicated by the particular collision strength computations, can introduce errors in the excitation rate ratios, and, therefore, in the abundances determined assuming M-B equilibrium and the enhancement effects of a κ distribution. Truncation of collision strength computations Finally, we need to consider the effect of truncating computations of collision strengths at high energies. Collision cross sections are calculated between the species excitation threshold energy and a computationally mandated upper limit. 
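The practical consequence of this upper limit can be gauged numerically. The sketch below compares a truncated integral of a constant model collision strength, weighted by a κ = 2 energy distribution, against its untruncated closed form; the 3 eV threshold, the temperature, and the constant Ω = 1 are illustrative assumptions only.

```python
import numpy as np

K_B_EV = 8.617333e-5  # eV / K

def kappa_kernel(e_ev, t_kelvin, kappa):
    """Energy dependence of the kappa distribution used to weight Omega(E)."""
    a = (kappa - 1.5) * K_B_EV * t_kelvin
    return (1.0 + e_ev / a) ** -(kappa + 1.0)

def excitation_integral(e_thresh, e_trunc, t_kelvin, kappa, n=200000):
    """Integral of a constant (Omega = 1) collision strength over the kappa
    kernel, from the excitation threshold up to the truncation energy."""
    e = np.linspace(e_thresh, e_trunc, n)
    return np.trapz(kappa_kernel(e, t_kelvin, kappa), e)

def analytic_untruncated(e_thresh, t_kelvin, kappa):
    """Closed form of the same integral with the upper limit at infinity."""
    a = (kappa - 1.5) * K_B_EV * t_kelvin
    return (a / kappa) * (1.0 + e_thresh / a) ** -kappa

e_thresh, kappa, t = 3.0, 2.0, 30000.0
full = analytic_untruncated(e_thresh, t, kappa)
for e_trunc in (20.0, 50.0, 100.0, 200.0):
    frac = excitation_integral(e_thresh, e_trunc, t, kappa) / full
    print(f"truncate at {e_trunc:5.0f} eV: {100 * (1 - frac):.2f} % of the rate is lost")
```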
For the cross sections of O iii published in the past 20 years, this upper limit has ranged between 43.5eV (Aggarwal 1993) and 178.2eV (Palay et al. 2012). Effective collisions strengths are computed by convolving the raw collision strengths with a M-B distribution, as in equation (8). For temperatures typically found in H ii regions and PNe, the population in the M-B distribution at high energies is sufficiently small that the truncation point for the raw collision strengths has little effect on the value of the effective collision strength. However, κ distributions can have significant populations at higher energies compared to the M-B, and the effect of truncating the collision strength computation can become much more apparent. To demonstrate this effect, using an extreme case with κ=2, we adopt a simple model collision cross section: Ω = zero below the excitation threshold, Ω constant (=1) up to the truncation energy, and zero above that. Specifying an excitation threshold energy allows us to explore the effect of truncating the upper energy bound for the collision strength. In this case we use 3.0eV, which sets the temperature of the point where ∆E/k B T =1.0 to T exc =34,814K. We compare the computed truncated solution with the untruncated analytical solution, equation (5), in which Ω is constant to ∞. Figure 3 shows the percentage difference between the computed values and the analytical value at low values of the parameter ∆E/k B T (i.e., at high temperatures), truncating at 20, 50, 100 and 200eV. The effect is minor at low temperatures; for truncations above ∼50eV and temperatures typically found in H ii regions and PNe; and for values of κ 10. In the EUV and in some supernova remnants, and for extreme values of κ 1.5, the effect may need to be considered, both for κ and M-B energy distributions. -Effect on the excitation rate of truncating the collision strength computations at a range of energies, for a κ=2 distribution. The Effect of κ on recombination processes In this section, we examine first the effect of the κ-distribution on the recombination process. This links directly to the the shape of the the Bound-Free continuum which is used to determine recombination temperatures of H and He, and to the observed intensity of the recombination lines of heavy elements, which are used to determine chemical abundances. Recombination line effects One major consequence of adopting a κ distribution for electron energies arises when comparing abundances determined using optical recombination lines (ORLs) and CELs. In the vast majority of H ii regions and PNe, the ORL abundance is systematically higher than the abundance derived from CEL measurements, the so called "abundance discrepancy factor", or ADF. This has been known for decades and not satisfactorily explained (see, e.g., Stasińska 2004). As NDS12 have pointed out, the κ distribution provides a simple and automatic explanation of the abundance "discrepancy". The reason for this can be understood by comparing the form of the κ distribution to that of the M-B distribution. The key characteristics of the κ distribution, compared to a M-B distribution of the same internal energy, are that the peak of the distribution moves to lower energies; at intermediate energies there is a population deficit relative to the M-B distribution; and at higher energies the "hot tail" again provides a population excess over the M-B. (See Figures 1-3 of NDS12). 
The κ distribution behaves as a M-B distribution at a lower peak temperature, but with a significant high energy excess. The two distributions peak at different values of the energy, E. The peak of the Maxwell distribution (for the energy form of the distribution) is at E = (1/2) k_B T_U. For the κ distribution, the peak occurs at (1/2) k_B T_U (2κ − 3)/(2κ + 1) (NDS12). Thus, for all valid values of κ (3/2 < κ < ∞), the κ distribution peaks at a lower energy than the M-B. This is illustrated in Figure 4, for κ = 2. For recombination, or any other physical process that is primarily sensitive to the low energy electrons, the critical point to note is that the form of the κ distribution at lower energies (up to and just past the peak energy) is very similar indeed to a M-B distribution. This is shown in Figure 4, where a M-B distribution (blue solid curve) has been peak-fitted to a κ = 2 distribution (red, dashed curve), adjusting the M-B temperature to T_core = T_U (1 − 3/(2κ)) and matching peak heights. Figure 4 shows three curves: the κ = 2 distribution (red, dashed); the "core" M-B distribution fitted to the κ peak (blue, solid); and a M-B distribution with the same internal energy as the κ distribution (black, dash-dot). The areas under the red (dashed) and black (dash-dot) curves (i.e., the total electron densities) are equal, and greater than the area under the blue curve. The total area under the M-B "core" is less than the area under the κ curve. For any physical process that involves mainly the low energy electrons, such as recombination line emissions, reactions "see" the cool M-B core distribution. In other words, any physical property sensitive to the region of the electron energy distribution around or below the distribution peak will interact with a κ electron energy distribution as if it were a M-B distribution at a lower temperature than the M-B with the same kinetic temperature and electron density as the κ distribution, and with a slightly lower total internal energy than the κ-distribution. So how does this impact recombination line abundances and temperatures? In order of importance, the first effect of the kappa distribution on ORLs is the difference between the apparent temperature of the low-energy part of the energy distribution, which is what determines the intensities of the recombination lines, and the true internal energy temperature. The second effect arises from the population of electrons in the energy peak of a kappa distribution, compared to the total population. The third effect is the slight difference in shape between the peak of a kappa distribution and the best fit M-B distribution. Correcting the recombination temperature First, the most obvious effect of a kappa distribution is that it shifts the peak of the energy distribution to lower energies, compared to a M-B distribution with the same kinetic temperature. The recombination rate falls off strongly with increasing energy; for hydrogen below the photoionization threshold, the recombination rate depends on ν −3 (e.g., Osterbrock & Ferland 2006). This means that the low energy electrons play the dominant role in recombination processes. Recombination processes experience the κ distribution as a M-B distribution at a temperature T_core. 
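These relations are simple enough to check numerically. The sketch below evaluates the peak energies and the fitted core temperature directly from the expressions quoted above (peak at (1/2) k_B T_U for the M-B, (1/2) k_B T_U (2κ − 3)/(2κ + 1) for the κ distribution, and T_core = T_U (1 − 3/(2κ))); the choice of T_U = 10,000 K is illustrative.

```python
K_B_EV = 8.617333e-5  # eV / K

def mb_peak_energy_ev(t_u):
    """Peak of the Maxwell-Boltzmann energy distribution."""
    return 0.5 * K_B_EV * t_u

def kappa_peak_energy_ev(t_u, kappa):
    """Peak of the kappa energy distribution (NDS12)."""
    return 0.5 * K_B_EV * t_u * (2.0 * kappa - 3.0) / (2.0 * kappa + 1.0)

def core_temperature(t_u, kappa):
    """M-B temperature that peak-fits the kappa distribution."""
    return t_u * (1.0 - 1.5 / kappa)

t_u = 10000.0
for kappa in (2, 5, 10, 20, 50):
    print(f"kappa={kappa:3d}: E_peak(kappa)={kappa_peak_energy_ev(t_u, kappa):.3f} eV, "
          f"E_peak(M-B)={mb_peak_energy_ev(t_u):.3f} eV, "
          f"T_core={core_temperature(t_u, kappa):.0f} K")
```

For κ = 10, for example, T_core sits 15% below T_U, which is the same correction expressed by the factor κ/(κ − 3/2) discussed below.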
Thus, in using recombination temperatures in the presence of κ distributions to estimate the kinetic or internal energy temperature, T_U, we need to increase the apparent recombination line temperature by a factor: T_U = [κ/(κ − 3/2)] T_core, where T_core is the temperature inferred from the recombination spectrum. The difference between the distributions is visually slight for higher kappa values (smaller deviation from thermal equilibrium), but even minor deviations from equilibrium can be sufficient to explain the "abundance discrepancy factor".

Correcting the electron density

Second, we need to apply a correction to the apparent electron density. The reason for this is that a M-B distribution at a temperature T_core and with the same total energy as the κ distribution with a kinetic temperature T_U will have a peak at a higher value of n(E) than the κ. To fit the M-B distribution to the κ (in other words, to simulate what recombination processes react to when they meet a κ distribution), it is necessary to reduce the total electron density by a factor that depends on κ. We can calculate the electron density correction analytically by equating the peak of the M-B electron energy distribution n(E) at a temperature T_core to the peak value of the κ distribution at a temperature T_U. It is relatively straightforward to show that the effective (apparent) electron density, N_e(eff), is related to the actual electron density N_e by equation (10). For values of κ ≳ 10, this factor is close to unity, and in most conditions likely to be found in H ii regions and PNe (NDS12) it is unlikely to substantially affect the physics. The correction factor is shown in Figure 5 as a function of κ. The recombination process "sees" a lower electron density for all values of κ, but for typical values ∼10, the difference between effective and true electron densities is less than 10%. For computational purposes, the curve can be fitted with a simple power law (reciprocal), also shown in Figure 5.

The third effect is that the shape of the "fitted" M-B distribution differs slightly from the peak of the κ distribution. Figure 6 shows the difference in recombination electrons as a function of ∆E/k_B T_U, using a weighting factor of 1/E to account for a typical energy dependence of the recombination process, and normalised so that the total numbers of electrons at the distribution peaks are the same. It shows that for a typical value of κ of 10, the difference between the κ distribution and the fitted M-B leads to an error of less than 2%.

The recombination rate (in s^−1 cm^−3) for hydrogen ions combining with electrons is given by N_e N_p α, where N_e and N_p are the densities of electrons and protons and α is the recombination rate coefficient, which for an electron energy distribution f(E)dE is given by equation (12), where σ(E) is the recombination cross section. σ(E) is related via the Milne Relation to the ionization cross section a_ν, where g_1,2 are the statistical weights of the lower and upper levels, h is the Planck constant, m_e is the electron mass, c is the speed of light and ν is the photon energy above the threshold (expressed as a frequency). For hydrogen, a_ν can be expressed approximately as a_ν ≈ a_T (ν_T/ν)^3, where a_T is the threshold value of the ionization cross section and ν_T is the threshold frequency.
Inserting these values into equation (12) and gathering the energy-independent components outside the integral, we can then calculate the ratio of the recombination rates for a κ distribution to a M-B distribution by substituting the appropriate forms for f(E). This simplifies to a form similar to the analytical expression for collisional excitation with a constant collision strength from equation (5), but in this case with E_12 = 0 (equation 17). This implies the hydrogen ion recombination rates are enhanced, but for a typical value, κ=10, only by 4.3%. Typical values for the recombination rate ratios are given in Table 2.

Recombination lines: summary

In summary, when interpreting a κ distribution as if it were a M-B distribution: (1) apparent recombination temperatures need to be increased by a factor κ/(κ − 3/2); (2) apparent electron densities need to be divided by the correction factor in equation (10), to get the true electron densities and kinetic temperatures; (3) the "shape" correction is sufficiently small that it can be neglected; and (4) recombination rates are slightly enhanced, as per Table 2 and equation (17). Note that the corrections to the recombination rate are only applicable to recombination of ions with recombination coefficients similar to hydrogen.

Effect on CEL Intensities

In Figure 7 we show, for κ = 10 and a kinetic (internal energy) temperature T_U = 10,000K, the collisional excitation rate relative to a M-B distribution as a function of T_exc/T_U for the [O iii] λ4363 auroral line, computed using the detailed collision strengths for O iii from PNPE12. Any other CEL would produce a similar curve, so Figure 7 provides a generic description of the effects of a κ-distribution on CEL intensities. Note that for a fixed kinetic temperature T_U, positions along the x-axis correspond to values of the CEL excitation temperature in units of 10^4 K. The axis could equally well be looked at by scaling the kinetic temperature for a fixed excitation temperature, but here we want to differentiate the effects of κ on lines with different excitation temperatures at a fixed kinetic temperature.

Figure 7 caption: The collisional excitation rate for κ=10 compared to a M-B distribution, plotted as a function of the excitation threshold energy (expressed as an equivalent temperature) divided by the kinetic temperature T_U.

Setting the kinetic temperature T_U to a typical nebular temperature of 10^4 K allows us to locate the excitation temperature of the O iii ¹S₀ level. It is marked by the vertical dashed line. Where this intercepts the κ curve shows the enhancement of the excitation rate ratio (and therefore, of the population in that level, relative to the M-B population). This illustrates the generic behaviour of all CELs. When T_exc/T_U is low, such as for transitions in the IR and FIR, the emission line intensities are slightly enhanced (dark grey area). In the central (light-gray) region, typical of transitions giving rise to lines at optical wavelengths, a mild reduction in line intensity is expected. For T_exc/T_U ≳ 3, appropriate to UV or "auroral" lines, the line intensities are either enhanced or strongly enhanced. So what does this mean for different atomic species, energy levels and radiative transitions? Figure 7 can be divided into three parts, marked in different shades. The left-most dark gray segment corresponds to fine-structure levels with low excitation energy. These typically correspond to far infrared lines.
For such levels, the population rate is slightly enhanced, leading to slightly higher line fluxes. The middle section (mid-gray) corresponds to the excitation of the strong visible transitions, with excitation energies of a few eV. An example would be the [S ii] lines at 6731Å and 6716Å, with excitation temperatures of ∼21,400K (for T_U = 10,000K, this corresponds to x = 2.14 in Figure 7). The collisional excitation rates for these lines are mildly reduced in a κ-distribution compared to a M-B distribution. The third, right-most section shows the excitation energies where the population rate will be enhanced or strongly enhanced by the κ distribution, compared to the M-B. This region is appropriate to either highly-excited UV lines, or the "auroral" lines in the visible spectrum. Examples include the [O iii] UV lines at 2321, 2331Å and the auroral line at 4363Å, with an excitation temperature of ∼62,000K, corresponding to x = 6.2 in Figure 7.

In summary then, for a κ distribution the far-IR transitions are slightly enhanced, and the strong emission lines used in the optical to obtain CEL abundances will be mildly reduced. However, we expect the UV lines, such as the important C ii or C iii intercombination lines, to be strongly enhanced, and the "auroral" lines used in temperature diagnostics also to show strong enhancements in more metal-rich H ii regions.

The relative effect of κ at different metallicities is interesting to consider. Plasmas with higher metallicities cool faster than plasmas with low metallicities. If we set the kinetic temperature for Figure 7 to 20,000K, i.e., to a lower metallicity, the excitation temperatures are now scaled in units of 2 × 10^4 K. Thus for the O iii ¹S₀ level, the excitation temperature occurs at x ∼ 3.1, and at this point on the curve, the excitation enhancement by the κ distribution is much lower, ∼1.1 cf. ∼2.6. The precise effect on the line flux ratio used to measure the electron temperature depends as well on the relative enhancement of the 5007Å and 4959Å lines, which will also fall with lower metallicities. The process is not simple because of the interconnected effects, and is best explored with photoionization models that take the κ effects into account. We have extensively updated the MAPPINGS photoionization code to take into account both the κ effects and the latest atomic data. We explore these effects in a subsequent paper (Dopita et al. 2013) using this code. In the following section we explore the explicit effects of the κ distribution on line flux ratios.

Temperature-sensitive line ratios

Collisionally excited line ratios are central to the measurement of electron temperatures in H ii regions and PNe. Most frequently, the ratio of the optical forbidden lines of O iii at 5007Å, 4959Å to the "auroral" transition at 4363Å has been employed. However, many others can be used when bright lines are observed (see, e.g., Peimbert 2003). The measurement of electron temperatures depends on having two well-separated excited fine-structure energy levels for which an equation of the form of Equation (6) or (7) applies. An idealised three energy level arrangement is shown in the first panel of Figure 8, which illustrates the transitions involved in the formation of temperature-sensitive line ratios. Among the species actually employed to measure electron temperatures, there are two principal energy level structures.
The first of these are the p² ions, such as O iii, and the p⁴ ions, such as O i, which have a very similar fine-structure level configuration, as shown in the second panel of Figure 8 (case a). The second group consists of the p³ ions, such as O ii, which have a doublet structure in the excited states, as shown in the third panel of Figure 8 (case b). These ions are most frequently used to determine electron densities, since the closely spaced excited states have different transition probabilities and undergo collisional de-excitation at different densities. The p² and p⁴ ions have a triplet ground state (³P₀, ³P₁, ³P₂) and singlet upper states, ¹D₂ (lower) and ¹S₀ (upper). Examples include N ii, O iii, S iii, Ne v and Ar v (p² configuration) and O i, Ne iii, and Ar iii (p⁴ configuration). The p³ ions have a single ground state (usually ⁴S°₃/₂) and a pair of closely spaced doublet upper states, usually ²D°₃/₂, ²D°₅/₂ (lower) and ²P°₁/₂, ²P°₃/₂ (upper). Examples of this form include N i, O ii, S ii, Ar iv, and Ne iv.

To calculate the flux ratio, we must take into account the branching ratio for transitions from the uppermost state, the summed transition probabilities for transitions to multiple ground states, and the transition-probability-averaged energies for the multiple optical lines. For the p² or p⁴ ions (case a), the general expression for the ratio of the flux of the auroral line from level 3 to 2 to the fluxes of the optical lines from level 2 to 1b and 2 to 1c (ignoring the doubly forbidden line from level 2 to 1a) is given by a generalised inverse of equation 5.1 in Osterbrock & Ferland (2006), equation (18), where E_23 is the energy gap between the two singlet states; Υ_12 and Υ_13 are the (mean) effective collision strengths for collisional excitation from the triplet ground states to the lower and upper singlet states, given by equation (19); g_1a, g_1b and g_1c are the statistical weights of levels 1a, 1b, and 1c; and ΣA_u is the total transition probability for the transitions between the upper singlet state (3) and the triplet ground states (1a, 1b and 1c). In practice, one of the transitions from the singlet upper state to one of the triplet ground states is doubly forbidden and its transition probability is negligible. The term in equation (18) in the square brackets is the branching ratio, i.e., the fraction of atoms excited to level 3 that decay to level 2, and the term following that is the energy weighting for the transition probabilities. For the p³ ions, the expression for the flux ratio, equation (21), is similar to equation (18), with ΣA_u and Υ_13 defined analogously. For each of the ions we consider here, the values of the various constants entering in these equations are listed in the tables in the Appendix. As shown in Figure 8, the transitions 'o1' and 'o2' are the "optical" transitions, from the two middle levels to the ground state; transitions 'a1', 'a2', 'a3' and 'a4' are the four "auroral" lines from each of the upper levels to each of the middle levels; and transitions 'u1' and 'u2' are the "UV lines" from the upper two levels to the ground state (frequently in the optical, not the UV, spectrum). Σj_λa is the total flux of the (four) auroral transitions, and j_λo1 and j_λo2 are the fluxes of the two optical lines. In some cases, where the wavelengths of the auroral lines are not well placed, it is more convenient to use the UV lines in combination with the optical lines to measure temperature-dependent flux ratios.
Examples where this is used in the IRAF/temden routine are S ii, Ne iv and Ar iv. However, in principle, UV lines can be used equivalently to auroral lines. This can be useful at higher redshifts. As the UV and auroral lines both originate from the uppermost of the levels (3 or 3a, 3b in Figure 8), their relative fluxes are related via the branching ratio and the energies of the transitions, and may be expressed in general as a ratio. However, most of the p³ ions are also strongly density sensitive, so flux ratios using these lines (auroral or UV) will only give useful temperatures at densities ≲ 10^5 cm^−3.

Excitation rate ratios

Equations (6) and (7) can be generalised to the energy-dependent Ω case, for the M-B electron distribution and for the κ-distribution respectively. We can now generalize the expression for the flux ratio for variable Ωs, using equations (18) and (27), where Ω_13 and Ω_12 are the statistical-weight-averaged Ωs, defined analogously to equation (19). This equation allows us to calculate the line ratios for any of the relevant atomic species and for any value of κ. The values of the parameter f_1(A, λ) for several atomic species are given in Table 7 in the Appendix. Similarly, equation (21) can be generalized for non-M-B populations for the p³ ions. The values of the parameters for the p³ ions are also given in Table 8 in the Appendix. Tables 7 and 8 also show the values of f_1(A, λ) and f_2(A, λ) using the UV lines instead of the auroral lines.

Plotting the temperature-sensitive line ratios

The simplest way to determine the electron temperature T_e from the line ratios is to use the IRAF/STSDAS/nebular/temden routine (Shaw & Dufour 1995), or the more recent PyNeb code (Luridiana et al. 2012). Alternatively, one can use Osterbrock & Ferland (2006, Figure 5.1), reading off the temperature from the line ratio graph, using the inverse of equation (12) above. This does not take into account that the collision strengths (and even the effective collision strengths) are not constant with temperature, but in general are complex functions of the energy above the threshold (see Figure 1). For a M-B distribution, one can use the effective collision strengths for each temperature, leading to a more accurate function of line ratio vs. temperature. A further improvement to this process was used by Izotov et al. (2006), who derived an iterative formula to obtain T_e from the line ratio measurements. However, current methods only apply where there is thermal equilibrium, and in the non-equilibrium κ distribution case, it is necessary to calculate the integrals in equation (21) numerically, using the original collision strength data (not the thermally averaged values). This leads to a graph similar to that presented in Osterbrock & Ferland (2006), with a series of curves for each value of κ required. The result is simple to determine. As noted for equation (18), in this paper (except in section 6.1) we break with tradition and invert the equation, as it is easier to understand the correlation between an increasing upper state flux (j_43) and increasing electron temperature (T_e), and the plot is closer to a linear form. Figure 10 is the same as Figure 9, but for the [S iii] transitions. It differs noticeably from the [O iii] case, owing to the lower excitation energy of the upper state of the 6312Å auroral line.
The implication is that in extremely low metallicity, high electron temperature plasmas, above ∼20,000K, the effect of the κ distribution is to increase the kinetic temperature above the value suggested by assuming a M-B distribution, rather than the reverse, which applies to [O iii] transitions for similar metallicity and temperature environments. Although it occurs at different temperatures for different atomic species, this crossover point in the line ratio flux plots appears to be a universal phenomenon: a point on the electron temperature scale where the collisional excitation generates a line flux ratio which is the same for any value of the parameter κ, including the M-B distribution.

There are three methods commonly used to measure electron temperatures from CEL flux ratios. The first is to use the simple exponential expression (equation 5.4, et seq., Osterbrock & Ferland 2006), or the equivalent, using the flux ratio/temperature graphs, e.g., in Figure 10 or the inverse graphs given in Osterbrock & Ferland (2006). The second, in the case of O iii, is to use the iterative process described by Izotov et al. (2006). The third is to use the IRAF STSDAS/nebular/temden routine (Shaw & Dufour 1995) or PyNeb (Luridiana et al. 2012). If we assume the electrons exhibit a M-B energy distribution, the accuracy of these methods depends (inter alia) on the accuracy of the collision strengths used, and all of these methods make use of older values for the effective collision strengths. For example, IRAF/temden by default uses O iii data from Lennon & Burke (1994) and O ii energy levels dating from 1960. In many cases, more recent and more accurate atomic data are available, and should be used in preference to older data. To illustrate the differences that arise from using older data, for O iii, we calculate the flux ratios using the M-B averaged detailed collision strengths from PNPE12 for a range of equilibrium temperatures and an electron density of 100 cm^−3, and then use these flux ratios as input to the methods mentioned above. The results are given in Figure 12. The differences are considerable and point out the errors inherent in using old data. In this section we present a simple method for calculating equilibrium electron temperatures directly from observed line flux ratios, using the most recent atomic data. (Cf. Figure 2, which uses published effective collision strengths, rather than the numerically integrated collision strengths and resultant flux ratios used here.)

Flux ratios of temperature-sensitive collisionally excited lines have been used for many years to measure electron temperatures. Most frequently used is the ratio of the [O iii] nebular and auroral lines, but line flux ratios of several other species have been used. Table 3 lists line flux ratios (for which detailed collision strength data are available) that have been or can be used to estimate electron temperatures. Some species, for example S ii and O ii, can be used to estimate both electron densities and temperatures. The most accurate method for calculating the equilibrium electron temperature from line flux ratios is to compute the flux ratios as a function of temperature by convolving the collision strengths with the M-B distribution, using equation (8). However, based on these calculations, a much simpler approach is possible, which allows the observer to calculate the M-B temperature directly from the line flux ratio measurements.
This involves fitting a simple power law to the computed flux ratio vs. equilibrium temperature curves. An expression involving the flux ratio R, of the form given in equation (32), gives equilibrium temperatures accurate to within 0.5% of the computed collision strength values, where the flux ratio used in equation (32) is as defined in Tables 3 and 4, and is the inverse of the ratio used in Osterbrock & Ferland (2006, eqn. 5.4). The observer simply uses equation (32) with the observed line flux ratio to calculate the electron temperature. The equation coefficients a, b, and c are given in Table 3. This method has the advantage that equilibrium electron temperatures can be calculated directly from the observed data, while making use of the latest collision strengths. The p² and p⁴ ions in Table 3 are those normally used for electron temperature measurement. It is quite feasible to use p³ ions, but most of these are also strongly density sensitive, so flux ratios calculated simply from collision strength data for these lines (auroral or UV) will only give useful temperatures at densities ≲ 10^5 cm^−3. All ratios listed here increase in value as the electron temperature increases (the inverse of the conventional approach). However, a more sophisticated approach is possible using the MAPPINGS IV photoionization code, which makes use of the latest collision strength and effective collision strength data, and takes into account densities. We discuss this in the following section.

All line ratios are ultimately dependent upon both temperature and density. For temperature-sensitive ratios, a number of attempts have been made to account for the effect of electron density on the temperatures measured using CEL ratios: see, for example, Osterbrock & Ferland (2006, equations 5.4 through 5.7); the IRAF/temden routine also provides a multi-level approach for the commonly used ions. Again, these procedures are approximations and/or are based on older atomic data (footnote 5: see footnote 3; PyNeb is a revised and extended Python-based version of the IRAF nebular/temden routines, developed by Luridiana et al. 2012). Here we have used the newly revised MAPPINGS IV photoionization code to explore how electron density affects computed temperatures. MAPPINGS IV takes into account the multi-level nature of the atomic species involved in generating the emission lines whose ratios are used to compute electron temperatures. The code uses the latest detailed collision strengths (see Table 2) or the latest available atomic data for effective collision strengths where detailed collision strengths are not available. It also uses a consistent set of transition probabilities (Dopita et al. 2013). Figure 13 shows the effect of density on the ratio of auroral to optical line fluxes for [S iii], [N ii], and [O iii], calculated using MAPPINGS IV, for a M-B temperature of 10,000K. Figure 14 shows what temperature these ratios would imply without any density correction. It is apparent that, for most ions, without correction, substantial errors will be made in the estimated M-B temperature, even at moderate densities. In a more comprehensive approach to determining the M-B temperatures from ion flux ratios in the presence of changing densities, we have computed the temperature behaviour for several important and widely used line ratios, using MAPPINGS IV, at a range of densities, from 1 to 10^4 cm^−3, and have derived simple linear fits as per equation (32). The line ratios and the results of these fits are presented in Table 4.
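As an illustration of the curve-fitting step described above, the sketch below fits a three-parameter form T_e = a + b·R^c to a ratio-temperature curve. Two caveats: the underlying curve here is the older low-density textbook approximation for the [O iii] (4363)/(4959+5007) ratio (Osterbrock & Ferland 2006, eqn. 5.4, inverted to the ratio convention used in this paper), not the detailed collision-strength calculations behind Tables 3 and 4; and the functional form and fitted coefficients are stand-ins for illustration, not the published equation (32) and its tabulated values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stand-in ratio-temperature relation: the low-density textbook approximation
# for [O III], with the ratio inverted to R = F(4363) / F(4959 + 5007).
def ratio_from_temperature(T):
    return np.exp(-3.29e4 / T) / 7.90

T_grid = np.linspace(7_000.0, 20_000.0, 60)       # equilibrium temperatures, K
R_grid = ratio_from_temperature(T_grid)

# Assumed three-parameter fitting form (a stand-in for the published equation (32)).
def fit_form(R, a, b, c):
    return a + b * R**c

coeffs, _ = curve_fit(fit_form, R_grid, T_grid, p0=(5_000.0, 1e5, 0.5), maxfev=20_000)
a, b, c = coeffs

resid = fit_form(R_grid, a, b, c) - T_grid
print("fitted a, b, c:", a, b, c)
print("worst residual over the grid:",
      f"{np.max(np.abs(resid)):.0f} K ({100 * np.max(np.abs(resid / T_grid)):.2f}%)")

# Usage: convert an observed flux ratio directly to an equilibrium temperature.
print("T_e for R = 0.01:", fit_form(0.01, a, b, c), "K")
```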
Note that most of these ratios use the brightest and most spectroscopically convenient lines likely to be observed in nebular spectra. In general, we use simpler ratios than those in Table 3, to make use of bright nebular lines and those least sensitive to density effects. However, the full ratio for [O iii] is also presented for comparison with Table 3. These fits differ from those in Table 3, and show the effects of fully modelling excitation balances using multi-level atoms, rather than the simpler approach taken for Table 3; they should be used in preference to Table 3. For all ions, with the exception of N ii and S iii, the density effect can be accommodated by the inclusion of a term which quantifies the collisional de-excitation of the middle level. This takes the form used by Osterbrock & Ferland (2006), where d is a constant related to the critical density for the transition, n_crit = T^(1/2)/d. R is the "corrected" value of the observed ratio R_obs, such that the calculated temperature is the true M-B temperature. Because the density effects are complex, it is necessary in some cases to use two different values of the parameters for different density ranges. Table 5 shows the values of d for different density ranges, and for different species. For N ii and S iii a more complex form must be chosen, since the collisional readjustment of the ³P levels with increasing density causes the peculiar behaviour seen in Figure 13. For N ii an excellent fit can be obtained with two separate values of a, b and d, applicable over different density ranges, as indicated in the footnote to Table 5.

Calculating κ dependence

In the above approach, we assume that the electron energies are in thermal equilibrium. No insight is given into the effects of non-equilibrium electron energies. To take the effects of a κ distribution into account, we can use Figures 9 and 10 to measure graphically the true kinetic temperature from the [O iii] and [S iii] CEL flux ratios for a range of values of the parameter κ. Similar graphs may be derived for other CEL species. However, an easier method is to derive a simple linear equation from the graph, which expresses the kinetic temperature in terms of the temperature measured using conventional M-B methods, such as the formula in equation (32). This is based on the near-linearity of the curves in Figures 9 and 10 for temperatures between 4,000K and 25,000K. For the range of temperatures (4,000 < T_U < 25,000K) encountered in H ii regions and many PNe, the relationship between the apparent (M-B) electron temperature T_e and the kinetic temperature T_U can be expressed to very good accuracy as a linear equation, T_U = a T_e + b, with parameters that are quadratic functions of 1/κ, for all values of κ: a = a_1 + a_2/κ + a_3/κ² and b = b_1 + b_2/κ + b_3/κ², where T_e is derived from conventional equilibrium methods such as equation (32). The equation coefficients can be derived for any CEL species for which non-averaged collision strengths are available. For the [O iii] CELs, this equation is illustrated graphically for a range of values of κ in Figure 15. The parameters a_1, a_2, a_3, b_1, b_2, and b_3 are given in Table 6, for several nebular atomic species. Using the revised [O iii] atomic data and a κ of 10, we see that an apparent [O iii] electron temperature of 15,000K derived via the IRAF/temden routine (with old atomic data) corresponds to a kinetic (internal energy) temperature of ∼11,000K.
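A minimal sketch of how such a correction would be applied in practice is given below. The linear form and its 1/κ-quadratic coefficients follow the description above, but the coefficient values used here are invented placeholders, chosen only so that the κ → ∞ limit returns T_U = T_e; the calibrated, species-specific values are those of Table 6.

```python
def kinetic_temperature(T_e, kappa, coeff_a, coeff_b):
    """Convert an apparent (M-B) electron temperature T_e into the kinetic
    (internal energy) temperature T_U, using T_U = a*T_e + b with a and b
    quadratic in 1/kappa (the form described in the text; real coefficients
    are tabulated per species in Table 6)."""
    a = coeff_a[0] + coeff_a[1] / kappa + coeff_a[2] / kappa**2
    b = coeff_b[0] + coeff_b[1] / kappa + coeff_b[2] / kappa**2
    return a * T_e + b


# Placeholder coefficients, NOT the published Table 6 values.
EXAMPLE_A = (1.0, -2.5, 1.0)        # a1, a2, a3 (dimensionless)
EXAMPLE_B = (0.0, 1500.0, -800.0)   # b1, b2, b3 (kelvin)

for kappa in (5.0, 10.0, 20.0, 1e6):   # a very large kappa approximates the M-B limit
    t_u = kinetic_temperature(15_000.0, kappa, EXAMPLE_A, EXAMPLE_B)
    print(f"kappa = {kappa:>8g}:  apparent T_e = 15000 K  ->  T_U ~ {t_u:.0f} K")
```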
Strong line techniques

Numerous methods have been developed using ratios of the strong lines in nebular spectra, which are important in the absence of direct electron temperature diagnostic lines. See, for example, Kewley & Ellison (2008) and Kewley & Dopita (2002). These methods make use of the line fluxes from a range of different atomic species, usually selected because they are readily measurable with low noise in most nebular spectra. However, the impact of a κ distribution on these methods is not simple, as each species is affected to a different extent by the distribution. It is necessary to calculate and model each strong line index separately as a function of temperature. Initial investigations suggest that several of the methods will not be strongly affected by κ distributions; in particular, measurements comparing [S ii] 6716Å, 6731Å and [N ii] 6548Å, 6583Å are not significantly affected, as the fluxes of both species are changed to a similar extent by a κ distribution. As a simple illustration, we can consider the strong line ratio "R_23". This flux ratio is given by R_23 = ([O ii] λ3727 + [O iii] λλ4959, 5007)/Hβ. The excitation temperatures for the [O ii] and [O iii] lines are ∼38,600K and ∼29,000K, respectively. For a κ value of 10 and a kinetic temperature of 10,000K, it is apparent from Figure 7 that the [O ii] lines are enhanced by ∼20%, the [O iii] lines are not significantly affected, and, from the discussion earlier, Hβ is enhanced by ∼4%. Thus the overall R_23 ratio is slightly enhanced. A detailed analysis of strong line methods is best tackled using photoionization models that take into account κ effects. While a detailed analysis of the impacts of κ distributions and new atomic data on strong line methods is beyond the scope of the present paper, it is explored in our next paper in this series (Dopita et al. 2013), which develops new strong line diagnostics that give significantly more consistent results when compared to direct T_e methods. The subject will be addressed further in subsequent papers. Nonetheless, it is apparent from this simple example that the effect of changes in the collision strengths and κ on derived strong line abundances is relatively small, but not insignificant.

Estimating κ

The kappa distribution uses a single parameter to describe the deviation from thermal equilibrium in electron energies. In any one temperature or abundance measurement, there is no unique way to estimate the value of κ, although a value of ∼10 appears consistent with many of the observed spectra (NDS12). When more than one measurement is available (for example, electron temperatures obtained using different CEL species, or CEL- and ORL-derived abundances), the value of kappa can be estimated by the requirement that the discrepancies be minimized. When several different methods are available, such as in bright nebulae, it is possible to iterate to an optimum value of kappa and to estimate errors and variance. Figure 11 shows that measuring apparent (M-B) electron temperatures for [S iii] and [O iii] allows one to estimate both κ and the kinetic (internal energy) temperature. Needless to say, in real nebulae there are likely to be kappa distributions spanning a range of values of κ, so specifying a single value is not always meaningful, but the concept can help to avoid the large discrepancies that arise using equilibrium methods, and can complement other contributing factors such as temperature and abundance inhomogeneities.
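Setting those caveats aside, a minimal sketch of the basic minimisation idea follows: given apparent (M-B) temperatures measured from two species and the per-species linear κ corrections described in the previous subsection, scan κ and keep the value that brings the implied kinetic temperatures into best agreement. The coefficient values and the two "measured" temperatures below are invented for illustration; in practice the Table 6 coefficients, or full photoionization models, would be used.

```python
import numpy as np


def kinetic_temperature(T_e, kappa, coeff_a, coeff_b):
    """T_U = a*T_e + b, with a and b quadratic in 1/kappa (per-species coefficients)."""
    a = coeff_a[0] + coeff_a[1] / kappa + coeff_a[2] / kappa**2
    b = coeff_b[0] + coeff_b[1] / kappa + coeff_b[2] / kappa**2
    return a * T_e + b


# Placeholder coefficients for two species -- NOT the published Table 6 values.
SPECIES_COEFFS = {
    "[O III]": ((1.0, -2.5, 1.0), (0.0, 1500.0, -800.0)),
    "[S III]": ((1.0, -1.8, 0.6), (0.0, 900.0, -400.0)),
}
# Apparent M-B temperatures "measured" for each species (illustrative numbers).
T_APPARENT = {"[O III]": 14_000.0, "[S III]": 12_500.0}

best = None
for kappa in np.arange(2.0, 100.0, 0.1):
    t_u = [kinetic_temperature(T_APPARENT[s], kappa, *SPECIES_COEFFS[s])
           for s in SPECIES_COEFFS]
    spread = max(t_u) - min(t_u)
    if best is None or spread < best[1]:
        best = (kappa, spread, sum(t_u) / len(t_u))

kappa_best, spread, t_u_best = best
print(f"best-fit kappa ~ {kappa_best:.1f}, implied T_U ~ {t_u_best:.0f} K "
      f"(residual spread {spread:.0f} K)")
```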
There will seldom be a single answer for temperature, abundances and κ for any real nebula, and using photoionization models to explore the complex physics is critically important. For this reason, we have revised the MAPPINGS III photoionization code (Allen et al. 2008) to version IV, to incorporate both non-equilibrium κ effects and the most accurate available collision strengths and other atomic data. This work is the subject of our next paper (Dopita et al. 2013), where we use it to investigate the effect of κ distributions on temperatures and abundances estimated using the strong line methods, and to develop a revised set of strong line diagnostics. The code development has been undertaken independently of the work on MAPPINGS Ie (Binette et al. 2012), with which it shares a common origin but which has had a separate development.

Conclusions

In this paper we have explored further the ideas put forward in NDS12, where the non-equilibrium κ electron energy distribution widely encountered in solar system plasmas was found to explain the long-standing abundance discrepancy problem that arises when temperatures and abundances are measured using spectra from different atomic species. We have discussed the factors involved in obtaining accurate CEL temperatures from theoretical collision strengths. We have also shown that significant errors in electron temperatures can arise unless one has access to the best possible collision strength data. We have examined the effects of the κ distribution on recombination processes, in particular how the κ distribution is able to resolve the long-standing discrepancy between ORL and CEL abundances. We show that a typical κ distribution leads to a small enhancement of hydrogen recombination lines. We have examined in detail how κ and newly available collision strength data affect the measurement of electron temperatures using collisionally excited lines. We compare these effects on the forbidden lines of S iii and O iii. In the main thrust of the paper, we present simple techniques for calculating equilibrium electron temperatures from line flux ratios using the most up-to-date atomic data, and for using these equilibrium temperatures to derive the actual kinetic (internal energy) temperatures resulting from non-equilibrium electron energy distributions. We outline future work on adapting photoionization modelling programs and strong line methods to take into account the effects of the κ distribution.

A. Supplementary Data Tables

A.1. Temperature-sensitive line ratio data

The tables in this appendix give the wavelengths, transition probabilities and line ratio multipliers (equations 29 and 31) for transitions of the p², p⁴ and p³ lines of nebular interest. For the meanings of the wavelength and transition probability symbols, see sections 5.2 and 5.3.

Table 7: Line wavelengths (Å) (in air), line strengths and line ratio multipliers for the p² and p⁴ ions. The final line in this table shows the f_2(A,λ) line ratio multiplier for the UV-to-optical line ratios. They are related to the auroral-to-optical line ratios via the wavelength-weighted branching ratios.

Table 8: Line wavelengths, line strengths and line ratio multipliers for the p³ ions. Alternative flux ratios can be used for the p³ ions, using the UV lines in place of the "auroral" lines. This is done, for example, in the IRAF/temden routine for S ii, Ne iv and Ar iv. The final line in this table shows the f_2(A,λ) line ratio multiplier for the UV-to-optical line ratios. They are related to the auroral-to-optical line ratios via the wavelength-weighted branching ratios.
2013-06-09T14:30:42.000Z
2013-06-09T00:00:00.000
{ "year": 2013, "sha1": "fe2317a588a135ca044a05de328d1e6452b19c84", "oa_license": null, "oa_url": "https://openresearch-repository.anu.edu.au/bitstream/1885/73670/2/01_Nicholls%20_Measuring_nebular_2013.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "fe2317a588a135ca044a05de328d1e6452b19c84", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
9115870
pes2o/s2orc
v3-fos-license
Long‐term bosutinib for chronic phase chronic myeloid leukemia after failure of imatinib plus dasatinib and/or nilotinib

Bosutinib is an Src/Abl tyrosine kinase inhibitor (TKI) indicated for adults with Ph+ chronic myeloid leukemia (CML) resistant/intolerant to prior TKIs. This long‐term update of an ongoing phase 1/2 study evaluated the efficacy and safety of third‐/fourth‐line bosutinib in adults with chronic phase (CP) CML. Median durations of treatment and follow‐up were 8.6 (range, 0.2–87.7) months and 32.7 (0.3–93.3) months, respectively. Cumulative confirmed complete hematologic response (cCHR) and major cytogenetic response (MCyR) rates were 74% (95% CI, 65–81%) and 40% (31–50%), respectively; Kaplan–Meier (K–M) probability of maintaining cCHR or MCyR at 4 years was 63% (95% CI, 50–73%) and 69% (52–81%). Cumulative incidence of on‐treatment disease progression (PD)/death at 4 years was 24% (95% CI, 17–33%); K–M 4‐year overall survival was 78% (68–85%). Baseline Ph+ cells ≤35 vs. ≥95% was prognostic of MCyR and CCyR by 3 and 6 months, increased baseline basophils was prognostic of PD/death, and no prior response to second‐line TKI was prognostic of death. Common adverse events included diarrhea (83%), nausea (48%), vomiting (38%), and thrombocytopenia (39%). Bosutinib demonstrates durable efficacy and a toxicity profile similar to previous bosutinib studies in CP CML patients resistant/intolerant to multiple TKIs, representing an important treatment option for patients in this setting. This trial is registered at www.clinicaltrials.gov (NCT00261846). Am. J. Hematol. 91:1206–1214, 2016. © 2017 The Authors American Journal of Hematology Published by Wiley Periodicals, Inc.

This report describes the long-term (48 months) efficacy and safety of third- and fourth-line bosutinib therapy in an ongoing phase 1/2 trial in patients with CP CML resistant/intolerant to imatinib plus dasatinib and/or nilotinib. Exploratory analyses assessing baseline predictors of long-term outcomes are also reported.

Methods

Patients and study design. This analysis includes adults (≥18 years) enrolled prospectively in an ongoing 2-part, phase 1/2 study [12,14] with a confirmed diagnosis of Ph+ CP CML who had received imatinib followed by dasatinib and/or nilotinib. Additional eligibility criteria are described in the Supporting Information. The present analysis was based on patients with CP CML that was imatinib-resistant (≥600 mg/day) or imatinib-intolerant (any dose) plus one of the following: resistant to dasatinib ≥100 mg/day (IM + D-R), intolerant to any dose of dasatinib (IM + D-I), resistant to nilotinib ≥800 mg/day (IM + N-R), or intolerant to any dose of nilotinib or resistant/intolerant to dasatinib and nilotinib (IM + N ± D). Dose escalation to bosutinib 600 mg/day was allowed for lack of efficacy (no complete hematologic response [CHR] by week 8 or no complete cytogenetic response [CCyR] by week 12) unless treatment-related grade ≥3 AEs occurred. Bosutinib treatment continued until disease progression/death, unacceptable toxicity, or withdrawal of consent. The study protocol was approved by each site's ethics board and conducted in accordance with the principles of Good Clinical Practice and the Declaration of Helsinki.

Assessments. Responses were assessed as described previously [12,14]. Hematologic response was defined as achievement of a confirmed CHR (cCHR) or a baseline cCHR that was maintained for ≥5 weeks.
Cytogenetic response was defined as one newly achieved during treatment or, if present at baseline, maintained for ≥4 weeks. Evaluable patients received ≥1 dose of bosutinib and had a valid baseline assessment for the respective endpoint. Duration of response (DOR) was evaluated among responders from the first response date until confirmed loss of response, treatment discontinuation due to progressive disease (PD)/death, or death within 30 days after the last dose; patients without events were censored at their last assessment visit. Disease progression was defined as described previously [12,14]. Time from first dose to (1) PD/death and (2) transformation to accelerated phase (AP)/blast phase (BP) CML were evaluated through 30 days after the last dose; patients without events were censored at their last assessment visit. Overall survival (OS) was evaluated for up to 2 years after treatment discontinuation (per protocol) and included data from patients enrolled in an ongoing extension study (Clinicaltrials.gov ID: NCT01903733) [23]; patients not deceased were censored at the last known alive date. The safety analysis included all patients who received ≥1 bosutinib dose. AEs were reported up to 30 days after the last dose and graded according to National Cancer Institute Common Terminology Criteria for Adverse Events Version 3.0. Treatment-emergent AEs (TEAEs) were assessed overall and by year of first occurrence.

Statistical analyses. Time-to-event distributions and probabilities were estimated using the Kaplan–Meier (K-M) method (DOR, OS) or cumulative incidence adjusting for the competing risk of treatment discontinuation without the event (response, PD/death, transformation). Two-sided 95% confidence intervals (CIs) for response rates and K-M quartiles were based on the exact binomial and Brookmeyer–Crowley linear transformation methods, respectively; 2-sided 95% CIs for K-M and cumulative incidence yearly probability estimates were based on Greenwood's formula and Gray's method, respectively. Retrospective backward elimination (criterion: P = 0.20) multivariable analyses evaluated baseline characteristics as predictors of (1) major cytogenetic response (MCyR) or CCyR by 3 or 6 months or cumulatively, using logistic regression; (2) progression-free survival (PFS) distribution (truncated at 4 years), using a Cox proportional, cause-specific hazard model; (3) OS distribution, using a Cox proportional hazard model; and (4) time to first diarrhea AE (grade 3/4) or liver-related AEs (any grade and grade 3/4), using a Cox proportional cause-specific hazard model. Results are presented as odds and hazard ratios (95% CI); P values were not adjusted for multiple testing.

Compared with patients aged <40 or 40–64 years, those aged ≥65 years were more frequently intolerant to prior imatinib, more often had worse Eastern Cooperative Oncology Group performance scores, and more frequently had medical history events of hypertension, pleural effusion, peripheral edema, osteoarthritis, and tonsillectomy (Supporting Information Table III). Of the 53 patients with a dose reduction to 400 mg/day due to an AE, 21 attained/maintained MCyR: 15 (28%) first achieved MCyR after dose reduction, 4 (8%) achieved MCyR before and maintained it after dose reduction, and 2 (4%) lost MCyR after dose reduction.
Of 22 patients with a dose reduction to 300 mg/day, 9 attained/maintained MCyR, including 4 (18%) first achieving an MCyR, 4 (18%) maintaining MCyR, and 1 (5%) losing MCyR after dose reduction (Supporting Information Table VI). Among patients with a dose reduction, the median cumulative number of days on 400 mg/day was smaller than with 300 mg/day (Supporting Information Table II). Median duration of attained/maintained MCyR (non-K-M) was slightly longer for patients with a dose reduction to 400 mg/day vs. 300 mg/day (21 days longer in patients with a first MCyR after dose reduction and 24 days longer in patients with an MCyR before and after dose reduction).

At 4 years, the cumulative incidence of on-treatment PD (including AP/BP transformation) or death was 24% (95% CI, 17-33%) overall (IM + D-R, 24%; IM + N ± D, n = 1). Deaths were most commonly because of PD (n = 12 [10%]) or AEs (n = 11 [9%]); 3 deaths were due to unknown causes. Ten of the AEs resulting in death were considered by the investigator to be unrelated to treatment (intracranial hemorrhage, internal bleed secondary to kidney infarction, myocardial infarction, respiratory insufficiency, internal hemorrhage, advanced gastric cancer, mesenteric ischemia, pneumonia, severe heart failure, and acute myocardial infarction [n = 1 each]); 1 was considered treatment-related by the investigator (IM + D-I cohort; lower gastrointestinal bleeding occurring at 78 days on therapy, in the setting of grade 4 thrombocytopenia, within 30 days of the last bosutinib dose). Cumulative incidence of on-treatment AP/BP transformation at 4 years was similar across age groups, whereas K-M-estimated OS at 4 years was lower in older patients. Cumulative incidence of on-treatment PD/death at 4 years appeared higher in younger vs. older patients (Supporting Information Table V); however, this is likely a function of more older patients discontinuing prior to PD/death.

Factors affecting long-term response and survival outcomes

Significant baseline predictors of attaining/maintaining an MCyR or CCyR are summarized in Table I. (Table I notes: the sensitivity of all other mutations is unknown. If patients had >1 mutation with different sensitivities, they were categorized based on the following hierarchy: bosutinib-insensitive, unknown sensitivity, and bosutinib-sensitive [15,28]. The effect of (1) insensitive mutations was not estimable because there were no CCyRs by month 3 in the insensitive mutation groups, and (2) prior response to D/N was not estimable because there were no CCyRs in the no prior response to D/N group. Two patients with missing values for baseline covariates were not included in any of the predictor analyses.)

Overall incidence of newly occurring TEAEs (MedDRA preferred terms not reported for the same patient previously, for those on treatment during a given year) was most frequent in year 1 (99%) and somewhat lower in years 2 (74%), 3 (64%), and 4 (72%; Fig. 2; see also Table IX). Increased blood creatinine and pleural effusion, which occurred in 13% and 17% of patients overall, had a higher incidence of first occurrence in year 4 (13% and 16%, respectively) vs. years 1, 2, and 3. AEs led to treatment discontinuation (Table II) most commonly because of thrombocytopenia (n = 7/119 [6%]; all 7 were IM + D-I patients, with 6 discontinuing in year 1). Across cohorts, the median (range) time to discontinuation due to an AE was 170 (15-2171) days.
Among 34 patients who discontinued bosutinib due to an AE (including 1 patient who discontinued due to PD as the primary reason), 14 (41%) discontinued without attempting dose reduction to <500 mg/day for that AE. There were 5 deaths within 30 days of the last dose, 2 due to PD and 3 due to AEs (myocardial infarction, acute myocardial infarction, and lower gastrointestinal bleeding).

Liver-related AEs (increased ALT or AST, increased conjugated or blood bilirubin, hepatic enzyme increased, liver function test abnormal, increased transaminases, and hyperbilirubinemia) occurred in 22 (18%; grade 3, 7%; no grade 4 or serious AEs) patients. Significant baseline predictors of time to first grade 3/4 liver-related AE were duration of <6 months (with no prior IFNα) vs. ≥6 months from diagnosis to IM initiation or IFNα treatment anytime before IM (P = 0.0366) and increased basophils (P = 0.0285) (see Supporting Information text for a list of all factors assessed). No significant baseline predictors of all-grade liver-related AEs were identified.

Seven (6%; grade 3, 4%; serious, 4%) patients reported vascular TEAEs (IM + D-R, n = 1; IM + D-I, n = 5; IM + N-R, n = 1; IM + N ± D, n = 0), none of which were considered treatment-related by the investigator. Among these 7 patients, 4 had a history of vascular events. One patient had a vascular TEAE that led to treatment discontinuation (IM + D-R cohort: myocardial infarction) and 2 patients had events that led to death (IM + D-I cohort: myocardial infarction and acute myocardial infarction). Hypertension-related events occurred in 9 (8%; grade 3, 2%; no grade 4 or serious AEs) patients (IM + D-R, n = 2; IM + D-I, n = 6; IM + N-R, n = 1; IM + N ± D, n = 0), none of which were considered treatment-related by the investigator; 2 of these 9 patients (both IM + D-I) had a history of hypertension. One grade 2 event of peripheral arterial occlusive disease was reported in an IM + N-R patient with prior nilotinib exposure, which was considered serious, was unrelated to bosutinib, and resolved within 10 days.

(Footnote: Newly occurring TEAEs were defined as those MedDRA preferred terms (PTs) not experienced by the same patient previously, for patients on treatment during a given year. *Includes all PTs under the high-level group terms cardiac arrhythmias, pericardial disorders, and heart failures under the cardiac disorders system organ class (SOC), and the following PTs: cardiac death, sudden cardiac death, sudden death, decreased ejection fraction, abnormal electrocardiogram QT interval, prolonged electrocardiogram QT, congenital long QT syndrome.)

Cross-intolerance

Of the 119 patients who received third-/fourth-line bosutinib, 35 were intolerant (defined as permanent discontinuation due to an AE) to prior imatinib (1 additional patient did not have AEs reported), 50 were intolerant to prior dasatinib (2 additional patients did not have AEs reported), and 3 were intolerant to prior nilotinib (Supporting Information Table X). Among the 35 imatinib-intolerant patients, 7 (20%) were cross-intolerant to bosutinib (i.e., discontinued due to the same AE that led to prior imatinib discontinuation): thrombocytopenia (IM + D-I, n = 3) and bone marrow failure (IM + D-I, n = 3; IM + D-R, n = 1). Among the 50 dasatinib-intolerant patients, 12 (24%) were cross-intolerant to bosutinib (all IM + D-I patients), most commonly due to thrombocytopenia (n = 4), pleural effusion (n = 2), and bone marrow failure (n = 2).
One of the 3 nilotinib-intolerant patients discontinued bosutinib due to the same AE that led to prior nilotinib discontinuation (pleural effusion; IM + N ± D cohort). No deaths occurred on bosutinib due to the same AE that led to intolerance to any of the 3 prior TKIs.

Discussion

The cumulative rates of newly attained/maintained cCHR or newly attained cCHR in this 4-year update of the ongoing phase 1/2 study (data snapshot: May 23, 2014) were similar to those reported at year 1 (data snapshot: March) [14]. Cytogenetic responses were also similar; here, cumulative rates of newly attained/maintained MCyR and CCyR were 40% (n = 45/112) and 32% (n = 36/112), respectively, compared with 39% (n = 42/108) and 31% (n = 33/108) in the 1-year update [14]. Although the cohorts have changed since year 1, these findings suggest that a substantial number of patients receiving third-/fourth-line bosutinib may attain clinical benefit and that response is most likely to occur within the first year of treatment. Responses were also durable, with 1-year vs. 4-year K-M estimates of maintaining CHR, MCyR, and CCyR of 73 vs. 63%, 72 vs. 69%, and 58 vs. 54%, respectively. These rates were lower in the IM + D-R cohort (57, 43, and 17%) vs. the IM + D-I (70, 87, and 66%) and IM + N-R (62, 78, and 63%) cohorts, respectively (NE for IM + N ± D). However, estimates of long-term outcomes may be biased due to early discontinuation of patients for reasons such as unacceptable toxicity or inadequate response, and because survival was only followed for up to 2 years after treatment discontinuation.

Analyzing predictive factors for outcome is important for identifying patients most likely to benefit from treatment. Consistent with the present observations, a previous study of third-line dasatinib or nilotinib in patients with CP CML found that age was not a significant predictor of cytogenetic response [26]. Furthermore, in the previous study, best response of at least a minimal CyR (MiCyR; ≤95% Ph+ cells) with prior second-line dasatinib or nilotinib was associated with subsequent cytogenetic response and improved survival on third-line nilotinib or dasatinib [26]. In this analysis, having had at least an MiCyR with prior dasatinib and/or nilotinib was a significant predictor of survival, and a lower Ph+ ratio at baseline (≤35%) was a significant predictor of cytogenetic response. However, marginally significant predictors should be interpreted with caution, as P values were not adjusted for multiple testing.

Gastrointestinal toxicities remained the most commonly reported AEs, with similar percentages of patients reported to have gastrointestinal AEs in the 4-year vs. 1-year updates (e.g., diarrhea [83 vs. 81%, respectively], nausea [48 vs. 43%], and vomiting [38 vs. 32%]), indicating the early incidence of these events. Although diarrhea was common, grade 3 events were experienced by only 9% of patients and no grade 4 events were reported. Cumulative rates of grade 3/4 hematologic laboratory abnormalities were also similar in the 4-year vs. 1-year [14] updates (e.g., thrombocytopenia [26 vs. 25%], anemia [8 vs. 8%], and neutropenia [18 vs. 19%]). Except for elevated blood creatinine and pleural effusions, newly occurring AEs generally decreased in frequency after the first year of treatment. However, most discontinuations due to AEs (64%) occurred during year 1 and only 29 patients remained on treatment after year 4; thus, these results should be interpreted with caution.
The observed low rates of cross-intolerance between bosutinib and prior imatinib, dasatinib, or nilotinib treatment suggest that most patients intolerant to prior TKI therapy may be successfully treated with bosutinib. In conclusion, high response rates were observed in CP CML patients receiving long-term bosutinib as third- or fourth-line treatment. Most responses (attained or maintained) were observed within the first year of treatment and, among responders, the likelihood of maintaining a response was high at 4 years. Overall, 4 years after last enrollment, bosutinib continues to demonstrate durable efficacy and manageable toxicity for the majority of patients with CP CML resistant or intolerant to multiple prior TKIs who are able to remain on treatment after the first year, representing an important treatment option for patients in this setting.

(Footnote: Includes discontinuations due to AEs occurring after 4 years and 1 discontinuation due to disease progression as the primary reason.)
2018-04-03T00:05:39.989Z
2016-09-15T00:00:00.000
{ "year": 2016, "sha1": "7d93ce09566f784af9a1deb416ea50ac09885775", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ajh.24536", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7d93ce09566f784af9a1deb416ea50ac09885775", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247975553
pes2o/s2orc
v3-fos-license
“Living with COVID”—implications for immunosuppressed and immunocompromised

* A. Nune arvind.nune@nhs.net

The UK Government announced its "Living With COVID" plan to remove the remaining legal restrictions while safeguarding people most vulnerable to COVID-19 and maintaining resilience on 21 February 2022 [1].
A recent significant drop in COVID-19 infections has been cited as the reasoning behind this decision. The easing of these restrictions began in July 2021, with most legal limits on social contact being removed and the final closed sectors of the economy, including sports stadiums and nightclubs, reopened. This has raised concern in certain quarters, especially among people with compromised immunity and a clinical risk group [2]. Many people are worried that limiting COVID-19 testing and isolation too soon would be detrimental. Patients undergoing treatment for cancer and those suffering from autoimmune diseases who are taking immunosuppressive drugs are especially vulnerable, with at least 500,000 people in England deemed immunocompromised as a result [3]. Current evidence suggests that those who are immunosuppressed and immunocompromised may not receive the same level of protection from the COVID-19 vaccinations as the general population, compounding the problem [4]. Protection of vulnerable children, e.g. cystic fibrosis under the age of 5 years, who remain unvaccinated is still unclear and a source of concern for parents. Among immunocompromised and immunosuppressed patients with systemic autoimmune rheumatic diseases (SARD), a substantially higher risk of COVID-19 infection or reinfection is possible when rituximab and corticosteroids are used. A study described that those treated with anti-CD20 therapy, including rituximab and ocrelizumab, are at a considerably increased risk of developing severe outcomes from COVID-19, with risk ratios ranging from 1.7 to 5.5 being reported [5]. This has been a cause of considerable anxiety as the protective effect of COVID-19 vaccination is probably compromised by concurrently administered biologic drugs such as rituximab, hindering the most viable strategy to fight this pandemic [6]. More recently, prednisone > 7.5 mg/day was negatively related to the presence of neutralising antibodies following COVID-19 vaccination in patients with rheumatoid arthritis [7]. Although COVID-19 reinfections are uncommon, as immunity from SARS-CoV-2 vaccinations wears off, immunocompromised individuals are more likely to contract COVID-19 [8]. Furthermore, given the vaccine inefficiency in this population, paired with the emergence of new COVID-19 variants, the risk of contracting this disease is increased. The most notable variant of 2021 was the globally dominant variant D614G, which possessed significantly higher transmissibility without increasing disease severity. Since then, multiple other variants have been described, with these numbers expected to rise further [9]. It is also true that many nations have not fully vaccinated their populations, which will add additional fears for vulnerable patients owing to international travel and a lack of barriers such as facemasks. A recent study demonstrated that if an uninfected individual is exposed to someone who has SARS-CoV-2 infection while wearing a mask, the uninfected person's risk of contracting the infection is reduced by half [10]. Even though no definitive solution has emerged to eradicate the COVID-19, the mass information currently available must be supplemented with factual data. Consequently, strategies can be undertaken to reassure patients based on their clinical risk during the implementation of "living with COVID plan" in the UK and the wider world. 
A shared decision-making process, guidance on the possibility of additional booster programmes, an appraisal of the role of regular monitoring of antibody titres in such patients and enhanced guidelines from rheumatology societies may be the way forward. This will reassure patients and alleviate their anxiety and concerns. Author contribution: AN conceptualised the work and wrote the initial draft. KPI, RB, BB and CM were involved in data curation, review and editing. All authors read, validated and authorised the final version of the manuscript. Ethics approval: not required. Disclosures: none.
2022-04-07T02:41:23.680Z
2022-04-05T00:00:00.000
{ "year": 2022, "sha1": "5c65c1cdf2547e5aa8aef913486d5c9d6982f0b2", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s10067-022-06160-9.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "67bcc877d8a447823f847810442c774ff581cfe3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234143609
pes2o/s2orc
v3-fos-license
Uniqueness, existence and solution of the direct boundary heat exchange problem for a weakly non-linear temperature conductivity coefficient. This work is a preliminary result needed for the formulation and solution of the inverse heat conduction problem for a weakly quasilinear equation; nevertheless, the article has independent significance. For the direct heat conduction problem a solution is constructed in the form of a series in a small parameter, the existence and uniqueness of the solution are proved, and a number of estimates are obtained, in particular the rate of decrease of the solution at large times. The solution is built for a medium, part of which has a piecewise constant coefficient of thermal conductivity, while part of the medium has a weakly non-linear thermal diffusivity. The construction of a classical solution is impossible because of the discontinuity of the thermal diffusivity; therefore, matching conditions are set on the interfaces between the media: the condition of continuity of the solution and the condition of continuity of the heat flux. The solution of the weakly quasilinear equation is constructed explicitly as a series in a small parameter, for which uniform convergence is proved. The solution is shown to decrease at large times as time raised to the negative third power.
Introduction. In modern technology there are devices whose components are exposed to thermal loads. The properties of the materials in such parts may change as a result, which in turn leads to failure of the device [3]. The surface temperature can be monitored in order to protect components from excessive heat, and since the temperature may be too high for direct measurement [4], it is advisable to be able to solve the inverse problem of thermal diffusivity. A protective coating can also be applied. The coating will protect the device for a while, but it makes the inverse problem, now posed for a composite material, more difficult to solve. Non-linear properties of the materials add further complexity to the solution of the inverse problem [2]. This complexity is conveniently taken into account either with the small-parameter method, as in the present paper, or with an iteration method. The above difficulties in stating and solving the mathematical problem can be overcome, for example, as follows: the thermal diffusivity of one of the regions is treated separately as a weakly non-linear function. A classical solution is available within each of the regions. It is impossible to build an overall classical solution because of the discontinuities of the thermal diffusivity; therefore, the solutions that are classical within each region are combined into a common one by imposing matching conditions on the interfaces between the media. It is easy to see that constructing a solution even of the direct problem is cumbersome and requires considerable preliminary work. We propose to divide the solution of the inverse quasilinear problem into: (i) constructing the solution of the direct problem, proving the convergence of the method to the solution and the uniqueness of the solution thus constructed, and substantiating a number of estimates that will be required to solve the inverse heat conduction problem itself; (ii) constructing the solution of the inverse problem itself, with the necessary estimates and proofs [5], [6], [7], [8], [9]. We consider the first part of the problem in this article.
Statement of the direct problem. Consider the region in which we solve the problem.
Let x ∈ [0; +∞) be the spatial coordinate and t ∈ [0; +∞) the temporal coordinate. Let the spatial region be divided into four parts. We denote the temperature in each of the spatial regions by u_1(x, t), u_2(x, t), u_3(x, t), u_4(x, t). In the first part, x ∈ [0; x_0], the medium is heated, the heating function is F(x, t), and the thermal diffusivity k_1(x, t) = a_1^2 is constant. In the second region the protective layer is heated from the wall; the third region contains the material itself, whose thermal diffusivity coefficient depends weakly non-linearly on the temperature. The thermal diffusivity is constant, k_4(x, t) = a_4^2, in the fourth region x ∈ [x_2; +∞). Here a_1, a_2, a_3, a_4 are real numbers and 0 < ε ≪ 1 is a small parameter. Let u(x, t), Δ_4 and the heat flux operator D_4, x ∈ (x_2; +∞), t ∈ (0; +∞), be defined as in (1). Then the problem is stated as (2). Definition. The function u(x, t) is a solution of problem (2) if it satisfies the conditions stated there. Choose the function F(x, t) in (2) accordingly. Therefore, we have to prove: (i) the uniform convergence of (4) for x > 0, t > 0; (ii) the uniform convergence of the term-by-term derivatives (of first and second order) of the series (4) for x > 0, t > 0; (iii) the uniform convergence of (7) and of the term-by-term derivatives of the series (7). 3.1. Uniform continuity of the series. Lemma 3.4. Let the stated conditions be satisfied; then the function u_3(x, t), t ∈ [0; +∞), has the properties stated in the lemma. Proof. To prove the first part we calculate the first inner integral in (10). Passing to the limit as t → 0 + 0, we get zero. Differentiate J_3(x, t) with respect to t: the integrand is continuous for every τ ∈ [0; t), and we again get zero passing to the limit as t → 0 + 0. Note that, owing to the finite interval of integration over t and the continuity of the integrand for all τ ∈ (0; t), this derivative is uniformly bounded, that is, it is a continuous function for x ∈ (s_2; s_3), t ≥ 0. Repeating the differentiation, we conclude that the function J_3(x, t) has derivatives of every order in t. It is easy to see that similar considerations hold for J_4(x, t). Thus, the first two statements of the lemma are true. Let us prove the third statement of the lemma. To do this, we estimate the derivatives obtained, for sufficiently large t. Select the first term in the expression for J_3(x, t) and denote its kernel for the integral over t by J_10(x, t). Then for its derivative of order k we obtain the following: if t → +∞, the largest term is (2k + 1)!!/2^k, because 0 ≤ τ ≤ t_0. Therefore we obtain an upper bound; repeating the argument for the second term in J_3 and then for J_4, we conclude that the expression for t > 0 is uniformly bounded, and the third statement is true.
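Since the displayed formulas are only referenced above, the following LaTeX sketch illustrates the general kind of piecewise problem being described (four regions, constant diffusivities in regions 1, 2 and 4, a weakly non-linear diffusivity in region 3, and matching conditions at the interfaces). The form of the non-linearity φ and the interface point x_1 are assumptions made for illustration only, not the paper's exact equations.

% Illustrative sketch only: a piecewise heat problem of the type described above.
% The non-linearity \varphi and the interface point x_1 are assumptions.
\begin{align*}
&\partial_t u_1 = a_1^2\,\partial_x^2 u_1 + F(x,t),        && x\in(0;x_0),\\
&\partial_t u_2 = a_2^2\,\partial_x^2 u_2,                  && x\in(x_0;x_1),\\
&\partial_t u_3 = a_3^2\bigl(1+\varepsilon\,\varphi(u_3)\bigr)\,\partial_x^2 u_3, && x\in(x_1;x_2),\\
&\partial_t u_4 = a_4^2\,\partial_x^2 u_4,                  && x\in(x_2;+\infty),\\
&u_i = u_{i+1},\qquad k_i\,\partial_x u_i = k_{i+1}\,\partial_x u_{i+1}
   && \text{at } x = x_0,\,x_1,\,x_2,\\
&u_3 = u_3^{(0)} + \varepsilon\,u_3^{(1)} + \varepsilon^2 u_3^{(2)} + \dots
   && \text{(series in the small parameter).}
\end{align*}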
2021-01-07T09:06:53.597Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "8385d37bc598dc83dde7ba606fe92bf57abe998d", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1715/1/012050", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "be1ce670cf7c239eb0c074b4b52997e7427ec8ca", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
91060281
pes2o/s2orc
v3-fos-license
Activation of Syringomycin and Syringopeptin, Two Major Toxins of Pseudomonas syringae pv. syringae, by Three Cherry Cultivars. Pseudomonas syringae pv. syringae (Pss) is a gram-negative plant-pathogenic bacterium that causes severe disease on more than 150 plant species. Pss is among the most frequent bacterial pathogens of crops and causes high economic damage in Iran; studies have shown that this bacterium causes significant yield losses in the main crops of Iran. During infection of host plants by this organism, damage to plant cells occurs through the production of virulence factors. This pathovar produces two cyclic lipodepsipeptide phytotoxin families, the syringomycins (SR) and the syringopeptins (SP). The strains of Pss used here were isolated from apricot, cherry, peach and wheat in southwest Iran. To find out whether the production of syringomycin and syringopeptin by different isolates is affected by plant material, the inhibitory effect of these toxins on Geotrichum candidum and Bacillus megaterium was studied. When no plant extracts were added to the SRM medium, the most intense inhibitory effect on G. candidum was observed for isolate C1. The highest production of syringomycin was achieved by this strain in the presence of Takdane cultivar leaf extract. The production of syringomycin and syringopeptin by all isolates was significantly higher in the presence of plant extracts on SRM and PDA agar media. Due to its geographic location and climate, Iran is one of the major producers of stone fruits in the world. One of the ways to increase orchard production is to fight the pests and diseases of these plants. One of the most dangerous diseases of stone fruits is bacterial canker caused by Pseudomonas syringae pv. syringae (Pss). This disease reduces the quality and quantity of the product and shortens the life of the orchard, especially in cherries, and can damage young and old orchards, sometimes by up to 85 percent (Bultreys & Kaluzna, 2010). Currently, bacterial canker of stone fruits damages apricots, plums, peaches and cherries in most parts of the world. In Iran, bacterial canker disease was reported for the first time from apricot trees in Isfahan, and P.
syringae was reported as the causal agent (Bahar et al., 1982). It seems that the concept of pathovar is not applicable to this bacterium, because it infects more than 150 different plant species (Kennelly et al., 2007). Epidemiologic studies indicate that the disease has two phases, epiphytic and endophytic. The bacteria can therefore multiply safely in the leaves of host plants throughout the growing season. During infection, Pss uses various secretion systems to transfer proteins into plant cells (Feil et al., 2005). This bacterium produces many pathogenicity factors and can infect a wide range of plants. Many strains of Pseudomonas syringae pv. syringae are known to produce cyclic lipodepsipeptides (CLPs) such as syringomycin and syringopeptin as secondary metabolites. The CLPs are considered to be plant virulence factors and antifungal agents (Quigley & Gross, 1994). They affect plant membrane activities and induce necroses at relatively high concentrations (Guenzi et al., 1998), but the relationship of these effects to plant diseases has not been clearly established. Both toxins have strong antibacterial and antifungal activity. It has been shown that these secondary metabolites help the bacteria colonize the host and promote bacterial growth in the intercellular space (Lu et al., 2002). Apparently, syringomycin production is unique to Pss strains and has not been reported for other Pseudomonas spp. The aim of this study was to investigate the effect of leaf extracts of three different cherry cultivars (cultivated in Iran) on the production of syringomycin and syringopeptin by Pseudomonas syringae pv. syringae. Bacterial strains. The bacterial isolates used in the study are listed in Table 1. The strains of Pss were isolated from apricot, cherry, peach and wheat in Fars and Kohgiluyeh and Boyer-Ahmad provinces. The isolates were examined for gram reaction, catalase, oxidase, ultraviolet fluorescence, arginine dihydrolase, levan production, gelatin liquefaction, aesculin hydrolysis, hypersensitive reaction on tobacco leaves and pectolytic capability (Schaad et al., 2001). All isolates were stored in water suspensions (10^6 cells/ml) at 4°C and subcultured on King's medium B (Little et al., 1998). Virulence testing on sweet cherry fruitlets. Freshly collected immature sweet cherry fruits cv. 'Takdane' were disinfected by dipping in 50% ethanol for 3 min and then rinsed three times in sterile distilled water. Afterwards, the fruitlets were wrapped in paper towel to remove excess water. Each fruitlet was inoculated by pricking it in two places to a depth of 2 mm with a sterile needle previously immersed in a suspension of each strain. After inoculation each fruitlet was immediately placed on moist filter paper in a sterile Petri dish and incubated at 24°C for four days. The reference strains and sterile distilled water were included as positive and negative controls, respectively. Ten fruitlets were used for testing each strain. Bioassays for syringomycin production. Different strains of Pss were used to determine the amount of syringomycin and syringopeptin produced in the absence of leaf extract and in the presence of leaf extracts of the cherry cultivars Takdane, Surati and Ghaheri. The strains were evaluated for syringomycin production on SRM (syringomycin minimal) medium using the standard bioassays previously described by Scholz-Schroeder and associates (2001). 15 ml of this culture medium was transferred into each Petri dish.
10 µl of a 10^7 CFU/ml suspension of each bacterium was placed in the centre of each SRM plate and kept at 23°C for five days. Petri dishes were then sprayed with a suspension of the fungus Geotrichum candidum, and after 24 to 48 hours the diameters of the inhibition zones were measured (Schaad et al., 2001). For the detection of syringopeptins, Bacillus megaterium should be used as the toxin indicator strain, since G. candidum is insensitive to syringopeptins. In order to investigate the effect of leaf extract on toxin production, 10 g of leaf tissue was completely dissolved in 50 ml of sterilized distilled water. The extract was then sterilized through a 0.45 µm pore-size filter. For each 15 ml of SRM medium, 5 ml of the plant extract was used. Bioassays for syringopeptin production. PDA (potato dextrose agar) medium was used to study the production of syringopeptin. P. fluorescens was also used as a negative control. For each 15 ml of culture medium, 5 ml of leaf extract of the different cherry cultivars was added. In each Petri dish, about 15-20 ml of medium was used. 10 µl of bacterial suspension was placed in the centre of each Petri dish and stored for six days at 23°C. After six days, a suspension of Bacillus megaterium was sprayed onto the Petri dishes. After 24-48 hours, the inhibition zone diameter was measured. Statistical analysis. The experiment was carried out in a completely randomized design; data were subjected to ANOVA and means were separated according to Duncan's multiple range test. Data were analyzed with SAS software; an illustrative open-source sketch of this analysis is given at the end of this article. RESULTS AND DISCUSSION. The strains of Pss were isolated from apricot, cherry, peach and wheat in Fars and Kohgiluyeh and Boyer-Ahmad provinces. All P. syringae pv. syringae strains used in this study were negative for oxidase, potato rot and arginine dihydrolase, but positive for levan production and the hypersensitive response on tobacco. All strains caused symptoms around the wounds on fruitlets. Twenty-four hours after fruit inoculation with strains of Pss, deep black-brown necroses were observed, while the strain of P. fluorescens did not induce any symptoms. To find out whether the production of syringomycin and syringopeptin by the different isolates is affected by plant material, the inhibitory effect of these toxins on G. candidum and B. megaterium was studied. When no plant extracts were added to the SRM medium, the strongest inhibitory effect on G. candidum was observed for isolate C1 (Fig. 1). The highest production of syringomycin was achieved by this strain in the presence of Takdane cultivar leaf extract, and most of the isolates produced more syringomycin in the presence of Takdane leaf extract. Syringomycin inhibition was not significantly different between the three strains A1, W1 and P1 on the media containing leaf extract of the Ghaheri cultivar (Fig. 1). For all isolates except P2, the difference in inhibition in the presence of leaf extract of all three cherry cultivars was significant in comparison with the absence of leaf extract. Leaf extract of the Surati variety had a smaller effect on the increase of syringomycin and syringopeptin production (Fig.
1 and 2). After the Takdane cultivar, the Ghaheri and Surati cultivars increased toxin production, in that order. Accordingly, the Takdane, Ghaheri and Surati cultivars were, respectively, the more susceptible to bacterial canker of stone fruits. The significant increase in the production of syringomycin and syringopeptin in the presence of leaf extract, compared with conditions without leaf extract, indicates the strong effect of plant signal molecules on the expression of the pathogenicity genes of Pss. The production of syringomycin and syringopeptin is significantly related to the amount of plant signal molecules, especially phenolic and sugar compounds (Wang et al., 2006). Among the isolates tested, inhibition zones were observed for all strains of P. syringae pv. syringae (Fig. 1 and 2). The largest inhibition zones were obtained with strains A1 and C1 in the presence of plant extracts. The zones of inhibition were generally larger on medium supplemented with extracts of the cherry cultivar Takdaneh, but the sizes of the inhibition zones varied by strain. Strains A1 and C1 produced substantially more syringomycin and syringopeptin on SRM and PDA agar media in the presence of all plant extracts. In SRM medium without any extracts, strain A1 formed 7-mm zones of inhibition of G. candidum, compared with 14-mm zones on medium supplemented with Takdaneh extracts and 11-mm zones on medium with Ghaheri extracts (Fig. 1). Similar results were observed on PDA agar, used specifically for the production of syringopeptin under defined culture conditions (Fig. 2). Syringomycin and syringopeptin production by the strains was relatively sensitive to plant extracts: production by all isolates was significantly higher in the presence of plant extracts on SRM and PDA agar media. On SRM agar medium, strains A1 and C1 produced zones of antifungal activity whose diameters were almost twice those produced in the absence of the plant extracts. Likewise, on PDA agar medium, strains W1, A1 and C1 produced zones of antibacterial activity whose diameters were almost twice those produced in the absence of the plant extracts. There is growing evidence that virulence genes in bacteria respond to environmental stimuli (Mo et al., 1995), and Pss is no exception. Because virulence determinants are not constitutively expressed in most bacteria, activation of virulence genes upon perception of a specific chemical or physical stimulus imparts order and balance to pathogenesis that optimize the bacterium's chances for long-term survival. The induction of toxigenesis in P. syringae pv. syringae by specific plant signal molecules reflects an ability of the bacterium to adapt to a dynamic plant environment. It has been reported that syringomycin production is activated by specific plant signal molecules in diverse strains of P. syringae pv. syringae (Quigley & Gross, 1994), and it was recently established that phytotoxin production by Pss is modulated by the perception of specific plant metabolites (Mo & Gross, 1991). Certain phenolic glucosides, such as arbutin, serve as signals that induce the production of syringomycin, a cyclic lipodepsinonapeptide toxin that causes necrotic symptoms in host plants.
In addition, the syrB gene, which is conserved in toxigenic strains of Pss and is predicted to encode a synthetase (Quigley & Gross, 1994), is activated by phenolic signal compounds. Cherry tissues appear to contain plant signal molecules that are perceived by Pss, based on evidence that the syrB-lacZ fusion is transcriptionally activated in an environment containing plant constituents (Mo & Gross, 1991). Inoculations of immature cherry fruits demonstrated that there is rapid and strong expression of syrB in situ. When SRM medium was not amended with the cherry leaf extracts, strains failed to increase syringomycin production. This demonstrated that the cherry leaf extract contained a constituent or signal that was sensed by the bacterium, eventually leading to activation of the genes responsible for toxin induction. Studies by Krzesinska et al. (1993) indicated that the susceptibility of cherry genotypes to bacterial canker is correlated with signal activity. Extracts from the stems of 12 cherry genotypes were tested for syrB-inducing activity, and the genotypes most susceptible to bacterial canker contained higher signal activities than resistant genotypes. Consequently, both the quantity and the quality of plant metabolites with signal activity may have an acute effect on disease development. Because high amounts of flavonoid glycoside signals occur in cherry leaves, one can speculate that their sudden release would be quickly sensed by the bacterium and lead to activation of genes, such as syrB, necessary for phytotoxin production. In addition, it appears that a broad spectrum of Pss strains would be capable of recognizing the flavonoid glycoside signals from cherry. This is based on evidence that most strains of Pss attack a wide range of plant hosts and that they recognize phenolic signal molecules found in the leaves, bark and flowers of many plant species (Mo & Gross, 1991; Quigley & Gross, 1994). Host resistance as observed, for example, in cherry cultivars (Krzesinska et al., 1993) may reflect qualitative and quantitative differences in signal molecules or in the balance of plant substances that antagonize induction by plant signals. In conclusion, it was found that the cultivars Takdaneh, Ghaheri and Surati, in that order, induce more toxin production by Pss strains. These cherry cultivars have the largest cultivation area in southwest Iran. In susceptible cultivars, the pathogenicity factors of Pss are produced at a higher level than in resistant cultivars. Fig. 1. Effects of plant extracts on syringomycin production by different strains of Pseudomonas syringae pv. syringae; the amounts of syringomycin produced were measured as the diameters of inhibition zones of the fungus G. candidum on SRM agar medium. Fig. 2. Table 1. Characteristics of bacterial strains used in this study.
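As a rough, open-source illustration of the statistical workflow described in the Statistical analysis section above (completely randomized design, one-way ANOVA, followed by a mean-separation step), the sketch below uses hypothetical inhibition-zone diameters. The treatment labels and all numbers are made up, and Tukey's HSD is used only as a stand-in because common Python libraries do not provide Duncan's multiple range test.

# Illustrative only: one-way ANOVA on hypothetical inhibition-zone data,
# followed by a pairwise mean-separation step (Tukey HSD as a stand-in for
# Duncan's multiple range test, which SciPy/statsmodels do not provide).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical inhibition-zone diameters (mm) for one strain on SRM medium
# amended with different cherry leaf extracts (all values invented).
zones = {
    "none":     [7.0, 7.5, 6.8, 7.2],
    "Takdaneh": [14.1, 13.8, 14.5, 13.9],
    "Ghaheri":  [11.0, 10.6, 11.3, 10.9],
    "Surati":   [9.2, 9.5, 8.9, 9.1],
}

# One-way ANOVA across the four extract treatments.
f_stat, p_val = stats.f_oneway(*zones.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise comparison of treatment means.
values = np.concatenate(list(zones.values()))
groups = np.repeat(list(zones.keys()), [len(v) for v in zones.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))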
2019-04-02T13:11:38.147Z
2017-09-30T00:00:00.000
{ "year": 2017, "sha1": "53914e93b5299fd345ffa90a0dff1aba0ecb96e8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.22207/jpam.11.3.09", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6ac62e3433ed1f27f49f4ed2a7fd9b2226a40c5f", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
14961059
pes2o/s2orc
v3-fos-license
Two-electron self-energy contribution to the ground state energy of heliumlike ions. The two-electron self-energy contribution to the ground state energy of heliumlike ions is calculated both for a point nucleus and for an extended nucleus over a wide interval of Z. All the two-electron contributions are compiled to obtain the most accurate values for the two-electron part of the ground state energy of heliumlike ions in the range Z = 20-100. The theoretical value of the ground state energy of heliumlike uranium, based on currently available theory, is evaluated to be -261382.9(8) eV, without the higher-order one-electron QED corrections. Introduction. The recent progress in heavy-ion spectroscopy provides good prospects for testing quantum electrodynamics in the region of strong electric fields. In [1,2] the two-electron contribution to the ground-state energy of some heliumlike ions was measured directly by comparing the ionization energies of heliumlike and hydrogenlike ions. In such an experiment the dominant one-electron contributions are completely eliminated. Though the accuracy of the experimental results is not high enough at present, it is expected [2] that the experimental accuracy will be improved by up to an order of magnitude in the near future. This will allow testing of the QED effects of second order in α. In this paper we calculate the ground state two-electron self-energy correction of second order in α in the range Z = 20-100. Calculations of this correction were previously performed for some ions for the case of a point nucleus by Yerokhin and Shabaev [3] and for an extended nucleus by Persson et al. [4,5]. In contrast to our previous calculation of this correction, the fully covariant scheme, based on an expansion of the Dirac-Coulomb propagator in terms of interactions with the external potential [6,7], is used in the present work. This technique was already applied by the authors to calculate the self-energy correction to the hyperfine splitting in hydrogenlike and lithiumlike ions [8,9]. The paper is organized as follows. In Sec. 2 we give a brief outline of the calculation of the two-electron self-energy contribution. In Sec. 3 we summarize all the two-electron contributions to the ground state energy of heliumlike ions. Relativistic units (ħ = c = 1) are used in the paper. Self-energy contribution. The two-electron self-energy contribution is represented by the Feynman diagrams in Fig. 1. The formal expressions for these diagrams can easily be derived by the two-time Green function method [10]; such a derivation was discussed in detail in [3]. The diagrams in Fig. 1a are conveniently divided into irreducible and reducible parts. The reducible part is the one in which the intermediate-state energy (between the self-energy loop and the electron-electron interaction line) coincides with the initial-state energy; the irreducible part is the remainder. The contribution of the irreducible part can be written in the same form as the first-order self-energy, where Σ_R(ε) is the regularized self-energy operator, ε_a is the energy of the initial state a, and |ξ⟩ is a perturbed wave function defined in terms of the operator I(ω) of the electron-electron interaction. The calculation of the irreducible part is carried out using the scheme suggested by Snyderman [6] for the first-order self-energy contribution. The reducible part is grouped with the vertex part (Fig. 1b).
For the sum of these terms a formal expression is obtained in which the first term corresponds to the vertex part and the second to the reducible part. G(ε, x, z) is the Coulomb Green function, α is the fine structure constant, α^µ = (1, α), α are the Dirac matrices, a and b are the 1s states with spin projection m = ±1/2, and P is the permutation operator. According to the Ward identity, the counterterms for the vertex and reducible parts cancel each other and, therefore, the sum of these terms, regularized in the same covariant way, is ultraviolet finite. To cancel the ultraviolet divergences analytically we divide ∆E_vr into two parts, ∆E_vr = ∆E_vr^(0) + ∆E_vr^many. The first term is ∆E_vr with both bound-electron propagators replaced by free propagators. It does not contain the Coulomb Green functions and can be evaluated in the momentum representation, where all the ultraviolet divergences are explicitly cancelled using a standard covariant regularization procedure. The remainder ∆E_vr^many does not contain ultraviolet divergent terms and is calculated in coordinate space. The infrared divergent terms are handled by introducing a small photon mass µ; after these terms are separated and cancelled analytically, the limit µ → 0 is taken. In practice, the calculation of the self-energy contribution is made using the shell model of the nuclear charge distribution. Since the finite nuclear size effect is small enough even for high Z (it constitutes about 1.5 percent for uranium), the error due to the incompleteness of such a model is negligible. The Green function for the case of the shell nucleus in the form presented in [11] is used in the calculation. To calculate the part of ∆E_irred with two and more external potentials, we subtract from the Coulomb-Dirac Green function the first two terms of its potential expansion numerically. To obtain the second term of the expansion it is necessary to evaluate the derivative of the Coulomb Green function with respect to Z at the point Z = 0; we handle it using algorithms suggested in [12]. The numerical evaluation of ∆E_vr^many is the most time-consuming part of the calculation. The energy integration is carried out using Gaussian quadratures after rotating the integration contour onto the imaginary axis. To achieve the desired precision it is sufficient to calculate 12-15 terms of the partial-wave expansion; the remainder is evaluated by fitting the partial-wave contributions to a polynomial in 1/l. A contribution arising from the intermediate electron states that have the same energy as the initial state is calculated separately using the B-spline method for the Dirac equation [13]. The same method is used for the numerical evaluation of the perturbed wave function |ξ⟩ in equation (1). Table 1 gives the numerical results for the two-electron self-energy contribution to the ground state energy of heliumlike ions, expressed in terms of the function F(αZ). To the lowest order in αZ, F = 1.346 ln Z − 5.251 (see [3] and references therein). The results for a point nucleus and an extended nucleus are listed in the third and fourth columns of the table, respectively. In the second column the values of the root-mean-square (rms) nuclear charge radii used in the calculation are given [14,15]. In the fifth column the results for an extended nucleus, expressed in eV, are given for comparison with those of Persson et al. [4] listed in the last column of the table.
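For orientation only, the quoted lowest-order behaviour F(αZ) ≈ 1.346 ln Z − 5.251 can be tabulated as in the short sketch below; this is merely the leading term of the αZ expansion and is not a substitute for the all-order values collected in Table 1.

# For orientation only: the quoted lowest-order asymptotic of the two-electron
# self-energy function, F(alpha*Z) ~ 1.346*ln(Z) - 5.251.  This is just the
# leading term of the alpha*Z expansion, not the all-order values of Table 1.
import math

def f_lowest_order(z: int) -> float:
    return 1.346 * math.log(z) - 5.251

for z in (20, 30, 40, 54, 66, 74, 83, 92, 100):
    print(f"Z = {z:3d}:  F_lowest_order(Z) = {f_lowest_order(z):7.3f}")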
A comparison of the present results for a point nucleus with those from [3] reveals some discrepancy for the contribution that corresponds to the Breit part of the electron-electron interaction. This discrepancy results from a small spurious term arising in the non-covariant regularization procedure used in [3]. 3. The two-electron part of the ground state energy. In Table 2 we summarize all the two-electron contributions to the ground state energy of heliumlike ions. In the second column of the table the energy contribution due to one-photon exchange is given. Its calculation is carried out for the Fermi model of the nuclear charge distribution with the rms charge radii listed in Table 1. Following [14], the parameter a is chosen to be a = 2.3/(4 ln 3) fm. The parameters c and N are, to good precision, given by the standard relations (see, e.g., [16]). Except for Z = 83, 92, the uncertainty of this correction is obtained by a one percent variation of the rms radii. In the case Z = 92 (⟨r²⟩^(1/2) = 5.860(2) fm [17]), the uncertainty of this correction is estimated by taking the difference between the corrections obtained with the Fermi model and with a homogeneously charged sphere of the same rms radius. For Z = 83, the uncertainty comes both from a variation of the rms radius by 0.020 fm (which corresponds to the discrepancy between the measured rms values [14]) and from the difference between the Fermi model and the homogeneously charged sphere model. The energy contribution due to two-photon exchange is divided into two parts. The first one ("non-QED contribution") includes the non-relativistic contribution and the lowest-order (∼ (αZ)²) relativistic correction, which can be derived from the Breit equation. This is given by the first two terms of the αZ expansion [18,19,20] and is presented in the third column of Table 2. The second part, which we refer to as the "QED contribution", is the residual and is given in the fourth column of the table. The data for the two-photon contribution for all Z, except Z = 92, are taken from [21], with interpolation where needed; for Z = 92 the data are taken from [22]. In the fifth column of the table the results of the present calculation of the two-electron self-energy contribution are given. The two-electron vacuum polarization contribution, taken from [23], is listed in the sixth column. In the seventh column the "non-QED part" of the energy correction due to exchange of three and more photons is given. This correction is evaluated by summing the Z⁻¹ expansion terms for the ground state energy of heliumlike ions beginning from Z⁻³. The coefficients of this expansion are taken to zeroth order in αZ from [18] and to second order in αZ from [20]. The QED correction from three and more photons has not yet been calculated; we assume that the uncertainty due to omitting it is of the order of magnitude of the total second-order QED correction multiplied by a factor Z⁻¹, and it is given in the eighth column of the table. The two-electron nuclear recoil correction is estimated by reducing the one-photon exchange contribution by the factor (1 − m/M). Such an estimate corresponds to a non-relativistic treatment of this effect and takes into account that the mass-polarization correction is negligible for the (1s)² state [20]. This correction and its uncertainty, which is taken to be 100% for high Z, are included in the total two-electron contribution. The two-electron nuclear polarization effect is expected to be negligible for the ground state of heliumlike ions.
In the last column the total two-electron part of the ground state energy of heliumlike ions is given. In Table 3 our results are compared with the experimental data [1,2] and with the results of previous calculations based on the unified method [20], the all-order relativistic many-body perturbation theory (RMBPT) [24], the multiconfiguration Dirac-Fock treatment [25], and RMBPT with the complete treatment of the two-electron QED correction [4,5]. The data in the third column of the table are taken from [4] for Z = 54, 92 and from [5] for the other Z. The one-electron contribution from [15] is subtracted from the total ionization energies presented in [24,20] to obtain the two-electron part. In Table 4 we present the theoretical contributions to the ground state energy of ²³⁸U⁹⁰⁺, based on currently available theory. The uncertainty of the one-electron Dirac-Coulomb value comes from the uncertainty of the Rydberg constant (we use hcR∞ = 13.6056981(40) eV, α = 1/137.0359895(61)). The one-electron nuclear size correction for the Fermi distribution with ⟨r²⟩^(1/2) = 5.860 fm gives 397.62(76) eV. The uncertainty of this correction is estimated by taking the difference between the corrections obtained with the Fermi model and with a homogeneously charged sphere of the same rms radius [26]. The nuclear recoil correction was calculated to all orders in αZ by Artemyev et al. [27]; the uncertainty of this correction is chosen to include the deviation from the point-nucleus approximation used in [27]. The one-electron nuclear polarization effect was evaluated by Plunien and Soff [28] and by Nefiodov et al. [29]. The values of the first-order self-energy and vacuum polarization corrections are taken from [30] and [31], respectively. The two-electron corrections are quoted from Table 2. The higher-order one-electron QED corrections are omitted in this summary since they have not yet been calculated completely; we expect they can contribute within several electron volts.
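The Fermi nuclear charge distribution enters both the one-photon exchange term (Table 2) and the one-electron nuclear size correction quoted above; since its defining relations are only cited, one common parameterization consistent with the quoted choice a = 2.3/(4 ln 3) fm is sketched below in LaTeX. The approximate expressions for c and N are the standard ones and are stated here as assumptions, not as formulas quoted from this paper.

% Sketch: a common parameterization of the two-parameter Fermi distribution.
% The approximate relations for c and N are standard but are assumptions here.
\begin{align*}
\rho(r) &= \frac{N}{1+\exp\!\bigl[(r-c)/a\bigr]}, \qquad
a = \frac{t}{4\ln 3},\quad t \simeq 2.3~\text{fm},\\[2pt]
c &\simeq \sqrt{\tfrac{5}{3}\langle r^{2}\rangle - \tfrac{7}{3}\pi^{2}a^{2}},\qquad
N \simeq \frac{3Ze}{4\pi c^{3}}\Bigl(1+\frac{\pi^{2}a^{2}}{c^{2}}\Bigr)^{-1},
\end{align*}
% with N fixed by the normalization \int \rho(r)\, d^{3}r = Ze.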
2014-10-01T00:00:00.000Z
1997-07-09T00:00:00.000
{ "year": 1997, "sha1": "c07e7fbd7e6af7eab34408ee40105484f94ccece", "oa_license": null, "oa_url": "http://arxiv.org/pdf/physics/9707009", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ab9e4d7b401b5b553a34cbc1f73334e78e1d2744", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
117744703
pes2o/s2orc
v3-fos-license
Existence of A Rigorous Density-Functional Theory for Open Electronic Systems We prove that the electron density function of a real physical system can be uniquely determined by its values on any finite subsystem. This establishes the existence of a rigorous density-functional theory for any open electronic system. By introducing a new density functional for dissipative interactions between the reduced system and its environment, we subsequently develop a time-dependent density-functional theory which depends in principle only on the electron density of the reduced system. electron density function of the reduced system, and can be used to simulate the transient electronic response. It is often implicitly assumed that the electron density function of any real physical system is real analytic [16]. This is reflected in many existing first-principles methodologies. In practical quantum mechanical simulations, real analytic functions such as Gaussian functions, Slater functions and plane wave functions are adopted as basis sets, which results in real analytic electron density distribution. However, analyticity is not guaranteed for all presently used basis functions. For instance, in the linearized augmented plane wave (LAPW) method [17], radial functions times spherical harmonics span the inside of muffin-tin spheres whereas plane waves are used for the interstitial regime. By this approximation the electron density cannot be analytically continued across the interface of the two disconnected parts. It is thus interesting to ask whether the electron density function of any real system is in principle real analytic. This question was settled recently for time-independent systems. Fournais et al. proved that the electron densities of atomic and molecular eigenfunctions are real analytic in r-space away from the nuclei [18]. Their proof is based on the fact that the time-independent Schrödinger equation, HΨ = EΨ, is an elliptic partial differential equation, and its eigenfunctions can always be real analytic (except isolated points) and quadratically integrable. Their rigorous proof implies that the electron density function of a real physical system is real analytic (except at nuclei) when the system in its ground state, any of its excited eigenstates, or any state which is a linear combination of finite number of its eigenstates. In their derivation [18], nuclei are treated as point charges, and this leads to nonanalytic electron density at these isolated points. Note that the isolated points at nuclei can be excluded for the moment from the physical space that we consider, so long as the space is connected. Later we will come back to these isolated points, and show that their inclusion does not alter our results. We show now that for a time-independent real physical system the electron density distribution function in a sub-space determines uniquely its values on the entire physical space. This is nothing but analytic continuation of a real analytic function. The proof for univariable real analytical functions can be found in textbooks, for instance, Ref. [19]. The extension to multivariable real analytical functions is straightforward. Lemma : The electron density function ρ(r) is real analytic in a connected physical space U . D ⊆ U is a sub-space. If ρ(r) is known for all r ∈ D, ρ(r) can be uniquely determined on the entire space U . for all x ∈ D and γ ∈ (Z + ) 3 . 
Taking a point x 0 ∈ D and at or infinitely close to the boundary of D, we may expand ρ(x) and ρ(x ′ ) Assuming that the convergence radii for the Taylor expansions of ρ(x) and ρ ′ (x) at x 0 are both larger than a positive finite real number b, we have thus We have thus proven that ρ can be uniquely determined in U once it is known in D. With the above Lemma we are ready to prove the following theorem: Theorem 1: A connected time-independent real physical system is in the ground state. The electron density function ρ(r) of any finite subsystem determines uniquely all electronic properties of the entire system. Proof: Assuming the physical spaces spanned by the subsystem and the connected real physical system are D and U , respectively. D is thus a sub-space of U , i.e., D ⊆ U . According to the above lemma, ρ(r) in D determines uniquely its values in U , i.e., ρ(r) of the subsystem determines uniquely ρ(r) of the entire system. Inclusion of isolated points, lines or planes where ρ(r) is non-analytic into the connected physical space does not violate the above conclusion, so long as ρ(r) is continuous at these points, lines or planes. This can be shown clearly by performing analytical continuation of ρ(r) infinitesimally close to them. Therefore, ρ(r) of any finite subsystem determines uniquely ρ(r) of the entire physical system including nuclear sites. Hohenberg-Kohn theorem [1] states that the ground state electron density distribution of any system determines uniquely all its electronic properties. Hence we conclude that ground state ρ(r) of any finite subsystem determines all the electronic properties of the entire real physical system. As for time-dependent systems, the issue is less clear. Although it seems intuitive that the electron density function of any time-dependent real physical system is real analytical (except isolated points in space-time), it turns out quite difficult to prove the analyticity rigorously. Fortunately we are able to establish a one-toone correspondence between the electron density function of any finite subsystem and the external potential field which is real analytical in both t-space and r-space, and thus circumvent the difficulty concerning the analyticity of time-dependent electron density function. For timedependent real physical systems, we have the following theorem: Theorem 2: The electron density function of a real physical system at t 0 , ρ(r, t 0 ), is real analytic in r-space, and the corresponding wave function is Φ(t 0 ). The system is subjected to a real analytic (in both t-space and rspace) external potential field v(r, t). Let D be a finite rsubspace. The time-dependent electron density function on the subspace D, ρ(r, t) with r ∈ D, has thus a one-toone correspondence with v(r, t) and determines uniquely all electronic properties of the entire time-dependent system. Proof: Let v(r, t) and v ′ (r, t) be two real analytical potentials in both t-space and r-space which differ by more than a constant at any time t t 0 , and their corresponding electron density functions are ρ(r, t) and ρ ′ (r, t), respectively. Therefore, there exists a minimal nonnegative integer k such that the k-th order derivative differentiates these two potentials at t 0 : Following exactly the Eqs. (3)-(6) of Ref. [3], we have where Due to the analyticity of ρ(r, t 0 ), v(r, t) and v ′ (r, t), ∇ · u(r) is also real analytic in r-space. It has been proven in Ref. [3] that it is impossible to have ∇ · u(r) = 0 on the entire r-space. 
Therefore it is also impossible that ∇ · u(r) = 0 everywhere in D (otherwise based on the Lemma proven earlier, ∇·u(r) can always be analytically continued from the particular finite r-subspace where its values are 0 to the entire r-space, and this would lead to ∇ · u(r) = 0 everywhere in the entire r-space). We have thus for r ∈ D. This confirms the existence of a one-to-one correspondence between v(r, t) and ρ(r, t) with r ∈ D. ρ(r, t) on the subspace D thus determines uniquely all Note that if Φ(t 0 ) is the ground state, any excited eigenstate, or any state as a linear combination of finite number of eigenstates of a time-independent Hamiltonian, the prerequisite condition in Theorem 2 that the electron density function ρ(r, t 0 ) be real analytic is automatically satisfied, as proven in Ref. [18]. As long as the electron density function at t = t 0 , ρ(r, t 0 ), is real analytic, it is guaranteed that ρ(r, t) on the subsystem D determines all physical properties of the entire system at any time t if the external potential v(r, t) is real analytic. According to Theorem 1 and 2, the electron density function of any subsystem determines all the electronic properties of the entire time-independent or timedependent physical system. This proves in principle the existence of a rigorous DFT-type formalism for open electronic systems. All one needs to know is the electron density of the reduced system. The challenge that remains is to develop a practical first-principles formalism. Fig. 1 depicts an open electronic system. Region D containing a molecular device is the reduced system of our interests, and the electrodes L and R are the environment. Altogether D, L and R form the entire system. Taking Fig. 1 as an example, we develop a practical DFT formalism for the open systems. Within the TDDFT formalism, a closed equation of motion (EOM) has been derived for the reduced single-electron density matrix σ(t) of the entire system [20]: where h(t) is the Kohn-Sham Fock matrix of the entire system, and the square bracket on the right-hand side (RHS) denotes a commutator. The matrix element of σ is defined as σ ij (t) = a † j (t) a i (t) , where a i (t) and a † j (t) are the annihilation and creation operators for atomic orbitals i and j at time t, respectively. Fourier transformed into frequency domain while considering linear response only, Eq. (5) leads to the conventional Casida's equation [21]. Expanded in the atomic orbital basis set, the matrix σ can be partitioned as: where σ L , σ R and σ D represent the diagonal blocks corresponding to the left lead L, the right lead R and the device region D, respectively; σ LD is the off-diagonal block between L and D; and σ RD , σ LR , σ DL , σ DR and σ RL are similarly defined. The Kohn-Sham Fock matrix h can be partitioned in the same way with σ replaced by h in Eq. (6). Thus, the EOM for σ D can be written as where Q L (Q R ) is the dissipative term due to L (R). With the reduced system D and the leads L/R spanned respectively by atomic orbitals {l} and single-electron states {k α }, Eq. (7) is equivalent to: where m and n correspond to the atomic orbitals in region D; k α corresponds to an electronic state in the electrode α (α = L or R). h nkα is the coupling matrix element between the atomic orbital n and the electronic state k α . The current through the interfaces S L or S R (see Fig. 1) can be evaluated as follows, i.e., the trace of Q α . At first glance Eq. (8) is not self-closed since the dissipative terms Q α remain unsolved. 
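The displayed equations (5)-(8) are referenced but not reproduced above; the LaTeX sketch below is a plausible reconstruction of the block partitioning and of the device-block equation of motion, based solely on the verbal description (diagonal blocks σ_L, σ_D, σ_R, off-diagonal coupling blocks, dissipative terms Q_L and Q_R, and the current given by the trace of Q_α). It should be read as an illustration under those assumptions, not as a verbatim copy of the paper's equations.

% Sketch (reconstruction from the surrounding text, not a verbatim copy of
% Eqs. (5)-(8)): block partitioning of the reduced single-electron density
% matrix and the resulting equation of motion for the device block D.
\begin{align*}
i\,\dot{\sigma}(t) &= \bigl[\,h(t),\,\sigma(t)\,\bigr], \qquad
\sigma=\begin{pmatrix}
\sigma_{L} & \sigma_{LD} & \sigma_{LR}\\
\sigma_{DL} & \sigma_{D} & \sigma_{DR}\\
\sigma_{RL} & \sigma_{RD} & \sigma_{R}
\end{pmatrix},\\[4pt]
i\,\dot{\sigma}_{D}(t) &= \bigl[\,h_{D}(t),\,\sigma_{D}(t)\,\bigr]
- i\sum_{\alpha=L,R} Q_{\alpha}(t), \qquad
J_{\alpha}(t) \propto \operatorname{Tr}\bigl[\,Q_{\alpha}(t)\,\bigr].
\end{align*}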
According to Theorem 1 and 2, all physical quantities are explicit or implicit functionals of the electron density of the reduced system D, ρ D (r, t). Note that ρ D (r, t) = ρ(r, t) for r ∈ D. Q α is thus also a universal functional of ρ D (r, t). Therefore, Eq. (8) can be recast into a formally closed form, Neglecting the second term on the RHS of Eq. (11) leads to the conventional TDDFT formulation in terms of reduced single-electron density matrix [20] for the isolated reduced system. The second term describes the dissipative processes between D and L or R. Besides the exchange-correlation functional, an additional universal density functional, the dissipation functional Q α [r, t; ρ D (r, t)], is introduced to account for the dissipative interaction between the reduced system and its environment. Eq. (11) is the TDDFT EOM for open electronic systems. Burke et al. extended TDDFT to include electronic systems interacting with phonon baths [15], they proved the existence of a one-to-one correspondence between v(r, t) and ρ(r, t) under the condition that the dissipative interactions (denoted by a superoperator C in Ref. [15]) between electrons and phonons are fixed. In our case since the electrons can move in and out the reduced system, the number of the electrons in the reduced system is not conserved. In addition, the dissipative interactions can be determined in principle by the electron density of the reduced system. We do not need to stipulate that the dissipative interactions with the environment are fixed as Burke et al.. And the only information we need is the electron density of the reduced system. In the frozen DFT approach [22] an additional exchangecorrelation functional term was introduced to account for the exchange-correlation interaction between the system and the environment. This additional term is included in h D [r, t; ρ D (r, t)] of Eq. (11). An explicit form of the dissipation functional Q α is required for practical implementation of Eq. (11). Admittedly Q α [r, t; ρ D (r, t)] is an extremely complex functional and difficult to evaluate. As various approximated expressions have been adopted for the DFT exchange-correlation functional in practical implementations, the same strategy can be applied to the dissipation functional Q α . Work along this direction will be published elsewhere [23]. Given Q α [ρ] how do we solve Eq. (11) in practice? Again take the molecular device shown in Fig. 1 as an example. We may integrate Eq. (11) directly by satisfying the boundary conditions at S L and S R . The only boundary condition we need is the potentials at S L and S R . We need thus integrate Eq. (11) together with a Poisson equation for Coulomb potential. And the Poisson equation is subjected to the boundary condition determined by the potentials at S L and S R . It is important to point out that although in principle its physical span can be small, in practice the reduced system is to be chosen so that Eq. (11) can be solved readily with convenient boundary conditions. For instance, for the molecular electronic device depicted in Fig. 1, the reduced system D contains not only the molecular device itself, but also portions of the left and right electrodes. In this way the Coulomb potential at the boundary take approximately the values of the bulk leads. 
To summarize, we have proved rigorously the existence of a first-principles method for both time-independent and time-dependent open electronic systems, and developed a formally closed TDDFT formalism by introducing a new dissipation functional. This new functional Q α depends only on the electron density function of the reduced system. With an explicit form for the universal dissipation functional Q α , the time evolution of an open electron system in external fields is fully characterized by Eq. (11). In practical calculations, we need thus focus only on the reduced system with appropriate boundary conditions. This work greatly extends the realm of density-functional theory.
2019-04-14T03:21:52.128Z
2006-05-11T00:00:00.000
{ "year": 2006, "sha1": "b9db4434ca70e966c2a74dc0a885dbda05b44220", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1a9ac94163351a08cf7e6334c8a145e6e9a0c44d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
136051722
pes2o/s2orc
v3-fos-license
Relation between nonlinearity and semiconducting characteristics of SrCoO3 additive in ZnO varistors sintered in a reducing atmosphere. Air-sintered SrCoO3 is strongly reduced at 1020°C in P_O2 = 1.6 × 10⁻¹⁹ MPa and then post-annealed at various temperatures in air. The conductive and structural characteristics are examined and compared with those of the air-sintered material. Although annealing in a reducing atmosphere breaks SrCoO3 down into several compounds, SrCoO3 is re-synthesized by post-annealing above 800°C. When post-annealed at 1000°C, the oxidation state of SrCoO3 is believed to reach the same high level as that of the material sintered in air. The result shows that enhancement of p-type carriers in SrCoO3 at grain boundaries leads to an improvement of the nonlinearity of SrCoO3-doped ZnO varistors sintered in reducing atmospheres. Introduction. ZnO-based ceramic varistors are used to protect integrated circuits (ICs) in various electronic devices against overvoltage surges, owing to their excellent nonlinear V-I characteristics. 1) Because of their protection performance combined with small size, multilayer ceramic varistors (MLCVs) are frequently employed, mainly in mobile equipment. 2) Recent advanced ICs have low stability against electrostatic discharge (ESD) and tend to be easily destroyed. 3) It is therefore strongly required that MLCVs enhance their ESD protection performance as an important characteristic. Improvement of ESD suppression is generally achieved by lowering the varistor voltage (V_1mA), i.e., the breakdown voltage. However, since degradation of ESD stability and of nonlinearity usually occurs with a decrease in V_1mA, improvement by control of V_1mA is limited by the characteristics required of practical devices. Conventional Pr- and Bi-based MLCVs thus have lower V_1mA limits of 6.88 and 12 V, respectively. 3) Recently, the authors found that MLCVs using SrCoO3-doped ZnO ceramics exhibit a lower V_1mA of 5.6 V than the two conventional types, while maintaining both high nonlinearity and ESD stability. 4) Additionally, cost reduction is also strongly needed for MLCVs. Conventional MLCVs were co-fired with precious metals (Ag, Pd, Au and Pt) as internal electrodes, and the internal electrode cost is a large proportion of the overall cost. 5),6) Previous studies show that Cu-MLCVs offer the possibility of considerable cost reduction as well as enhanced performance. 7) MLCVs using SrCoO3-doped ZnO are reported to possess both practical nonlinearity and ESD stability when sintered with base metals such as Cu in a reducing atmosphere. 7) The nonlinearity of SrCoO3-doped ZnO varistors sintered in a reducing atmosphere is found to arise from a post-annealing effect above 600°C. 7) As stated in our previous paper, the nonlinear characteristics of SrCoO3-doped ZnO originate from barriers comprising n-p (ZnO/SrCoO3) and p-n (SrCoO3/ZnO) heterojunctions. 4) It is believed that ZnO grains keep n-type conductivity regardless of post-annealing. 8) On the other hand, because SrCoO3 at grain boundaries has a certain degree of oxygen deficiency as a perovskite-like compound SrCoO3−δ, it should be extremely sensitive to annealing conditions. 9),10) This suggests that the formation of potential barriers is closely related to variations in the oxygen non-stoichiometry. It therefore seems likely that SrCoO3−δ brought to a higher oxidation state by post-annealing would provide the barriers at grain boundaries, causing enhanced nonlinearity.
However, the mechanism of barrier formation giving rise to non-ohmic behavior in these ZnO varistors, produced by the combination of sintering in a reducing atmosphere and post-annealing, has not yet been clarified. To reveal it, the semiconducting characteristics (p/n type) of pre-reduced SrCoO3−δ post-annealed at various temperatures are examined. This study is intended to report the mechanism of barrier formation causing the nonlinearity of SrCoO3-doped ZnO varistors produced by reduction sintering and air annealing. Experimental All SrCoO3 samples used in this research were produced by the solid-state reaction method, using reagent-grade powders of SrCO3 and Co3O4. SrCO3 and Co3O4 were first weighed in the stoichiometric ratio. These raw materials were ball-milled by wet blending in polyethylene bottles with zirconia balls and water for 20 h. The dried mixed powders were granulated with an organic binder, passed through a mesh screen, and then pressed into cylindrical shapes by uniaxial pressing. The compacted disks were sintered at 1120°C for 10 h in air to synthesize SrCoO3−δ. They were then reduced in PO2 = 1.6 × 10⁻⁹ MPa for 5 h and post-annealed at 400-1000°C. The crystal structure of the samples was identified by powder X-ray diffraction (XRD) analysis. XRD measurements were conducted from 10 to 80° in 2θ with CuKα radiation. The Seebeck coefficient was determined from the thermal electromotive force and the temperature difference between two points on the surface of the specimens, produced by heating one side. 11) Resistivity was measured by the four-terminal method (MCP-T 700, Mitsubishi Chemical Analytech Co., Ltd.). Au electrodes without contact resistance were formed by sputter deposition on the contact area to perform the Seebeck and resistivity measurements. V-I properties of the SrCoO3-doped ZnO varistors were evaluated using DC current, and the voltage at a current of 1 mA (defined as V1mA) is termed the varistor voltage. α10μA is the nonlinearity index defined from V1mA/V10μA. Results and discussion 3.1 Crystal structure of reduced and post-annealed SrCoO3−δ XRD measurements were carried out for reduced and post-annealed SrCoO3−δ to analyze the crystal structures within ZnO varistors sintered in a reducing atmosphere. The XRD patterns are shown in Fig. 1. The structure when sintered in air is confirmed to be a single phase of SrCoO2.52. 10) On the contrary, it is evident from the XRD analysis that annealing in a reducing atmosphere breaks SrCoO3−δ down into compounds such as Sr3Co2O6.13, Co3O4, CoO, Sr3Co2O5.82 and SrO. 12)-15) However, SrCoO3 phases with at most three different structures are found to appear again in place of these compounds upon annealing above 800°C in air, whereas they are not produced below 800°C. The re-synthesized SrCoO3−δ comprises a hexagonal matrix phase (SrCoO2.52) and a small amount of two extra structures (orthorhombic and tetragonal SrCoO2.5) that decrease with post-annealing temperature. 16),17) Eventually, SrCoO3−δ annealed at 1000°C demonstrates the same XRD pattern, with only the highly oxidized structure, as that sintered in air. This phase relation indicates that oxidation of the reduced SrCoO3 proceeds with increasing annealing temperature. The weight variation accompanying this oxidation was therefore evaluated, and could be detected easily after air annealing. Figure 2 shows the rate of change of the SrCoO3−δ weight before and after post-annealing at various temperatures.
Post-annealing up to 700°C does not change the weight change rate from that of the as-annealed (reduced) state, showing constant values. In contrast, SrCoO3−δ is found to show an increase in weight with temperature above 800°C. As the variations in the weight and crystal phases occur simultaneously at 800°C, a higher oxidation state in SrCoO3−δ is believed to be produced by the air-annealing effect. From these results, the same level of oxidation state as in the air-sintered material is likely to be achieved in reduced SrCoO3−δ as well, when post-annealed at 1000°C. Electrical properties of post-annealed SrCoO3−δ SrCoO3-doped varistors do not show nonlinearity after sintering in a reducing atmosphere. 7) This suggests that electrical barriers of the n-p-n junction should not be formed at the grain boundaries without post-annealing. First, to analyze the carrier type and characteristics of SrCoO3−δ reduced in PO2 = 1.6 × 10⁻⁹ MPa for 5 h and post-annealed at various temperatures, we evaluated the electromotive force and determined the Seebeck coefficient. Illustrated in Fig. 3 is the dependence of the Seebeck coefficient of SrCoO3−δ on post-annealing temperature. The values of the Seebeck coefficient are positive, indicating p-type conductivity. While showing lower levels (<170 μV/K) in the temperature range below 700°C, the Seebeck coefficient increases drastically around 800°C, to more than double the values for samples post-annealed at 400 to 700°C. From these results and the XRD analysis, the p-type carrier concentration is found to increase at relatively higher temperatures (above approximately 800°C), simultaneously with the formation of the perovskite-like hexagonal compound with the formula SrCoO2.52. On the basis of the heterojunction model comprising n-p (ZnO/SrCoO3) and p-n (SrCoO3/ZnO) junctions, the conductivity of SrCoO3−δ, governed by its p-type carrier density, should play an important role in the formation of electrical barriers (n/p/n) between the n-type ZnO grains. Figure 4 gives the relation between the conductivity of SrCoO3−δ and the post-annealing temperature. It is evident that the conductivity increases with post-annealing temperature. The conductivity of SrCoO3−δ starts to increase gradually, owing to enhanced p-type carriers, when post-annealed at 600°C, reaching 3.1 S/cm at 1000°C. In particular, the re-synthesis of SrCoO3−δ occurring between 800 and 1000°C, as presented in Fig. 1, involves a large variation of conductivity (1.7 × 10⁻² to 3.1 S/cm), about 10 to 1000 times more than for samples post-annealed below 600°C. As a result, the conductivity at 1000°C in Fig. 4 reaches approximately the same enhanced level as that of the air-sintered material (= 4.0 S/cm, data not shown in Fig. 4), corresponding to a single hexagonal structure with a relatively higher oxidation state (i.e., SrCoO2.52). Moreover, as described above, the Seebeck coefficient also reveals its highest value of 350 μV/K around 800°C. Consequently, it can be seen from these results that annealing over the temperature range of 800 to 1000°C enhances the p-type carriers through oxidation of SrCoO3−δ. Role of the SrCoO3 additive in the nonlinearity of SrCoO3-doped ZnO varistors sintered in a reducing atmosphere The role of the SrCoO3 additive in the nonlinearity of SrCoO3-doped ZnO varistors sintered in a reducing atmosphere is examined on the basis of the bulk-disk properties described above. It can be understood that the increase in nonlinearity produced by post-annealing in air is probably due to an increment of p-type carriers in SrCoO3. Presented in Fig. 5 are the varistor characteristics (V1mA/mm and α10μA) as a function of post-annealing temperature. Here, V1mA/mm is obtained from V1mA (i.e., the varistor voltage at 1 mA) normalized by the unit thickness of the bulk bodies. As shown in Fig. 5, an increase of V1mA/mm and an improvement of α10μA are obtained through the post-annealing effect. The nonlinearity of SrCoO3-doped ZnO varistors is currently understood in terms of the potential barrier model stated in our previous papers, comprising n-p (ZnO/SrCoO3) and p-n (SrCoO3/ZnO) junctions between ZnO grains. 4),7) It is thus conceivable from the above bulk properties of SrCoO3 that the nonlinearity improvement by post-annealing is attributable to an enhancement of the conductivity of SrCoO3−δ with its p-type semiconducting characteristics. However, from detailed analysis of the data in Figs. 4 and 5, a slight difference, 600 versus 800°C, is found between the post-annealing temperatures at which the two enhancements, of SrCoO3−δ conductivity and of varistor nonlinearity, occur. Specifically, the nonlinearity improvement of the SrCoO3-doped varistors is shifted toward lower temperature (by about 200°C) relative to the increase in SrCoO3−δ conductivity. In general, diffusion along grain boundaries proceeds more easily and faster than in bulk grains because the diffusion coefficient in grain boundaries is several orders of magnitude higher than in the volume. 18),19) Based on this general understanding of diffusion behavior, the oxygen diffusion rate into SrCoO3−δ located at grain boundaries should be higher than that into SrCoO3−δ as a bulk body. Besides, diffusion along grain boundaries is greatly affected by impurities and non-stoichiometry, as reported in the previous study. 18) It is thus likely that the shift to lower temperature, from 800 to 600°C, is caused by the difference in diffusion coefficient between grain boundaries and grains. However, this is not yet fully resolved at present. Even so, this research has revealed that the nonlinearity of SrCoO3-doped ZnO varistors is enhanced with increasing p-type conductivity of SrCoO3−δ by post-annealing. Hence, the characteristic variation of SrCoO3 within the varistors probably affects the nonlinear characteristics of the device. To clarify the detailed process of barrier formation in the varistors, electrical analysis of single junctions is considered extremely effective for grain boundaries with n-p (ZnO/SrCoO3) and p-n (SrCoO3/ZnO) heterojunctions in the microstructure. It is thus believed that further studies characterizing the electrical variation of single junctions, using a direct measurement technique for individual grain boundaries, are required for understanding the barrier states within the varistors. 20) Conclusions In this paper, we have examined the variation in the electrical properties of SrCoO3−δ, strongly reduced after air-sintering, upon post-annealing at various temperatures in air. The influence of these conductive characteristics on the nonlinearity of SrCoO3-doped ZnO varistors was analyzed on the basis of the results. Our findings are listed below: (1) After being broken down by annealing in a reducing atmosphere, SrCoO3 is re-synthesized by post-annealing above 800°C and possesses the same oxidation-state level in the hexagonal structure as the air-sintered material. (2) The conductivity of SrCoO3−δ, with its p-type carriers, starts to increase gradually around a post-annealing temperature of 600°C, then shows a considerable increment (1.7 × 10⁻² to 3.1 S/cm) from 800 up to 1000°C.
(3) The nonlinearity improvement of SrCoO3-doped ZnO varistors by post-annealing after reduction sintering is probably attributable to an enhancement of p-type carriers in the SrCoO3−δ between the n-type ZnO grains. However, the annealing behavior of SrCoO3−δ is demonstrated to differ slightly between the bulk and the grain boundary, in that the nonlinearity of the varistors improves at a lower temperature than the conductivity of SrCoO3−δ increases. Therefore, further investigation is needed to clarify the reason for this temperature shift in the varistors; it may focus on the characteristics of SrCoO3−δ at the grain boundaries. Nevertheless, at present, we can suggest that the key to enhancing the performance of SrCoO3-doped ZnO varistors is to control the oxygen deficiency in SrCoO3. Thus, homogeneous control of it within the varistors is considered intrinsically important for the improvement of MLCVs.
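As a compact reference for the quantities evaluated above, the sketch below (Python, illustrative only) collects the simple relations used in the paper: the Seebeck coefficient obtained from the thermal EMF and the temperature difference (a positive sign indicating p-type conduction), the thickness-normalized varistor voltage V1mA/mm, and the nonlinearity index α10μA taken from V1mA/V10μA as stated in the Experimental section. The function names and example numbers are our own and are not measured values from the paper.

```python
def seebeck_coefficient_uV_per_K(thermo_emf_uV, delta_T_K):
    """Seebeck coefficient from the thermal EMF between the two contacts and their
    temperature difference; with the paper's sign convention, S > 0 means p-type."""
    return thermo_emf_uV / delta_T_K

def carrier_type(seebeck_uV_per_K):
    return "p-type" if seebeck_uV_per_K > 0 else "n-type"

def varistor_voltage_per_mm(v_1mA_volts, thickness_mm):
    """V1mA normalized by specimen thickness, as plotted against annealing temperature."""
    return v_1mA_volts / thickness_mm

def nonlinearity_index(v_1mA_volts, v_10uA_volts):
    """alpha_10uA obtained from V1mA / V10uA, following the paper's wording.
    (A common alternative convention is alpha = log10(1 mA / 10 uA) / log10(V1mA / V10uA)
     = 2 / log10(V1mA / V10uA).)"""
    return v_1mA_volts / v_10uA_volts

# Illustrative numbers only:
s = seebeck_coefficient_uV_per_K(thermo_emf_uV=700.0, delta_T_K=2.0)  # 350 uV/K
print(carrier_type(s))                      # "p-type"
print(varistor_voltage_per_mm(5.6, 1.0))    # 5.6 V/mm for a 1 mm thick disk
print(nonlinearity_index(5.6, 4.0))         # ratio per the paper's definition
```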
Accessibility and contribution to glucan masking of natural and genetically tagged versions of yeast wall protein 1 of Candida albicans Yeast wall protein 1 (Ywp1) is an abundant glycoprotein of the cell wall of the yeast form of Candida albicans, the most prevalent fungal pathogen of humans. Antibodies that bind to the polypeptide backbone of isolated Ywp1 show little binding to intact yeast cells, presumably because the Ywp1 epitopes are masked by the polysaccharides of the mannoproteins that form the outer layer of the cell wall. Rare cells do exhibit much greater anti-Ywp1 binding, however, and one of these was isolated and characterized. No differences were seen in its Ywp1, but it exhibited greater adhesiveness, sensitivity to wall perturbing agents, and exposure of its underlying β-1,3-glucan layer to external antibodies. The molecular basis for this greater epitope accessibility has not been determined, but has facilitated exploration of how these properties change as a function of cell growth and morphology. In addition, previously engineered strains with reduced quantities of Ywp1 in their cell walls were also found to have greater β-1,3-glucan exposure, indicating that Ywp1 itself contributes to the masking of wall epitopes, which may be important for understanding the anti-adhesive effect of Ywp1. Ectopic production of Ywp1 by hyphae, which reduces the adhesivity of these filamentous forms of C. albicans, was similarly found to reduce exposure of the β-1,3-glucan in their walls. To monitor Ywp1 in the cell wall irrespective of its accessibility, green fluorescent protein (Gfp) was genetically inserted into wall-anchored Ywp1 using a bifunctional cassette that also allowed production from a single transfection of a soluble, anchor-free version. The wall-anchored Ywp1-Gfp-Ywp1 accumulated in the wall of the yeast forms but not hyphae, and appeared to have properties similar to native Ywp1, including its adhesion-inhibiting effect. Some pseudohyphal walls also detectably accumulated this probe. Strains of C. albicans with tandem hemagglutinin (HA) epitopes inserted into wall-anchored Ywp1 were previously created by others, and were further explored here. As above, rare cells with much greater accessibility of the HA epitopes were isolated, and also found to exhibit greater exposure of Ywp1 and β-1,3-glucan. The placement of the HA cassette inhibited the normal N-glycosylation and propeptide cleavage of Ywp1, but the wall-anchored Ywp1-HA-Ywp1 still accumulated in the cell wall of yeast forms. Bifunctional transformation cassettes were used to additionally tag these molecules with Gfp, generating soluble Ywp1-HA-Gfp and wall-anchored Ywp1-HA-Gfp-Ywp1 molecules. The former revealed unexpected electrophoretic properties caused by the HA insertion, while the latter further highlighted differences between the presence of a tagged Ywp1 molecule (as revealed by Gfp fluorescence) and its accessibility in the cell wall to externally applied antibodies specific for HA, Gfp and Ywp1, with accessibility being greatest in the rapidly expanding walls of budding daughter cells. These strains and results increase our understanding of cell wall properties and how C. albicans masks itself from recognition by the human immune system. The 199 bp partial sequence of GFP (nucleotides 2176-2374, as numbered above) omits the last 5 codons and stop codon of GFP, so that the final Gfp that is inserted internally into the endogenous protein lacks the C-terminal 5 amino acids of Gfp (i.e., -DELYK.) 
This segment is unnecessary for creation of a stable, fluorescent Gfp [3]. Instructions for Use A PCR amplicon encompassing the GFP-URA3-GFP segment serves as the transfecting DNA for ura3 strains of yeast (which require uracil or uridine for growth because their URA3 is defective or missing). Transformants grow in the absence of exogenous uracil or uridine upon stable integration of the GFP-URA3-GFP cassette into their genome by homologous recombination. PCR primers are designed to target the insertion to the point of interest (within the coding sequence of a gene). Subsequent growth of transformants in the presence of 5-FOA selects for rare individual cells that have lost their inserted URA3 through recombination of the homologous flanking GFP sequences (which share 199 bp of identical sequence); this fuses the upstream GFP coding sequence with the downstream coding sequence, resulting in a protein with an internal Gfp insertion. PCR primer design: 80-nucleotide DNA primers have been used routinely and successfully for these transformations. The 5' 60 or so nucleotides should match the genomic insertion target, and the 3' 20 or so nucleotides should match the ends of the GFP-URA3-GFP cassette; the amplicon will therefore have about 60 base pairs of target sequence at each end, with any length of sequence between those two targets in the genomic DNA (even none, if a lossless insertion is desired rather than a replacement). The upstream primer sequence is an in-frame fusion of the target coding sequence and the GFP coding sequence (possibly bypassing the first codon or two of GFP, if desired). The downstream primer sequence is an in-frame fusion of the complement of the target sequence and the complement of the end of the GFP gene (with or without part or all of the optional linker segment, which encodes SSASPSGS). Success has routinely been achieved using primers from IDT that have simply been desalted, not gelpurified. If your PCR polymerase is likely to add an untemplated "A" to the 3' ends of the amplicons, make sure this addition will match the target sequence (i.e., the target should have a "T" immediately upstream from the first [5'] base of your primers). For the most efficient amplification of the GFP-URA3-GFP cassette from the pGFP-URA3-GFP plasmid, pre-digest the plasmid with Hind III and Apa I. The liberated 2.33 kbp insert can also be gel purified for amplification if you anticipate multiple uses of this cassette. This system has been used successfully in each attempt with six different strains of Candida albicans (all derivatives of strains CaI4 and BWP17, which are both derivatives of strain SC5314); it has not yet been attempted in other strains, species or genera. So far, it has been tried only for a GPI-anchored cell wall protein (Ywp1); the initial transformants secreted the Ywp1-Gfp fusion protein into the culture medium, as the chimera lacked the C-terminal anchor of Ywp1; the 5-FOA survivors incorporated Ywp1-Gfp-Ywp1 into the cell wall, as no amino acids were altered or lost from Ywp1 in this assembly and the inserted Gfp did not prevent transport to that destination. Alternatives A similar plasmid template (pMG2082 = pGUG = pGFP-URA3-GFP = pGF-URA3-FP) has been described by Gerami-Nejad et al. [4]. That plasmid consists of URA3 flanked by partial GFP sequences, so that fluorescent Gfp is observed only after recombinative excision of the URA3. 
In contrast, the current pGEM-GFP-URA3-GFP plasmid has a full upstream GFP that allows initial transformants to be confirmed visually or spectroscopically prior to 5-FOA selection, and creates a fusion protein (with Gfp comprising the C-terminal moiety) that may also be useful for study; the PCR amplicon used for transfection is also more streamlined, being almost 1 kbp shorter than the one generated from pMG2082. Gerami-Nejad et al. also constructed a plasmid template similar to pMG2082, but with URA3 replaced by NAT1, which confers resistance to nourseothricin [5]; this allows positive selection of transformants, and thus does not rely on ura3 auxotrophs as hosts. Correct insertion verification requires PCR analysis of the initial transformants; loss of NAT1 and generation of Gfp fluorescence then requires visual or flowcytometric screening for identification.
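As a rough illustration of the primer-design rules described above (about 60 nt of genomic homology at the 5' end plus about 20 nt matching the cassette ends at the 3' end, with the downstream primer built from reverse complements), here is a minimal Python sketch. All sequences are short placeholders invented for illustration; they are not real YWP1 or pGEM-GFP-URA3-GFP sequence, and real primers would use the full ~60 nt of target homology.

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence (A/C/G/T only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

# Placeholder sequences, shortened for readability (hypothetical, not the real loci).
target_up   = "ATGGCTTCTCAAGCTGTTGCT"  # sense-strand target coding sequence ending at the insertion point (in frame)
target_down = "GCTCCAGCTGAAACTGCTTCT"  # sense-strand target coding sequence starting at the insertion point
cassette_5p = "ATGTCCAAGGGAGAGGAATT"   # first ~20 nt of the GFP-URA3-GFP cassette (start of the upstream GFP)
cassette_3p = "GGTGATGGTCCAGTCTTGTT"   # last ~20 nt of the cassette (3' end of the downstream GFP)

# Upstream primer: 5' target homology followed by a 3' tail that anneals to the cassette start.
upstream_primer = target_up + cassette_5p

# Downstream primer: reverse complement of the downstream target homology (5' portion),
# followed by the reverse complement of the cassette's 3' end (3' portion that anneals to the cassette).
downstream_primer = revcomp(target_down) + revcomp(cassette_3p)

print(upstream_primer)
print(downstream_primer)
# Note from the text: if the polymerase adds an untemplated 3' "A", the genomic base
# immediately 5' of each primer's first base should be a "T".
```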
Influence of transformational leadership on innovative work behavior and task performance of individuals: The mediating role of knowledge sharing This research investigates the dynamic link between transformational leadership (TFL) in higher education institutions (HEIs) and followers' outcomes, innovative work behavior (IWB) and task performance (TSP), through knowledge sharing (KNS) in Pakistan. Using a quantitative design, an adopted construct was used to obtain responses on the behavior of HEI leaders and employees. The obtained information was analyzed through the structural equation modeling (SEM) technique via Smart PLS. The results depict a direct link between university transformational leadership and employees' innovative work behavior as well as task performance. The results further show that KNS mediates the relationship between transformational leadership and employees' TSP in the context of HEIs. Surprisingly, KNS was not evidenced as a mediating variable strengthening the relationship between transformational leadership and employees' IWB in the HEI sector of Pakistan. In addition to enhancing the theoretical comprehension of higher education leadership, the outcomes of this article show that promoting a knowledge-sharing culture is a valuable asset for both existing and future HEI leaders seeking to promote a culture of innovation and creativity. Although recent studies have investigated the role of KNS as a mediator, the current study uses KNS as a contemporaneous intervening variable for IWB and task performance for the first time. The study also confirms the theoretical underpinning of the social exchange mechanism in strengthening the leader-member relationship. Introduction In the contemporary landscape, democratic leadership has gained prominence as a compelling substitute for the conventional authoritarian method, acknowledging the central importance of employees as key assets within organizations [1]. In contemporary leadership paradigms, there is a deliberate effort to devolve roles, authority, and responsibilities to team members, with a commitment to involving employees in decision-making activities [2]. The novelty and innovative spirit demonstrated by workers have become integral aspects in defining an organization's competitive edge [3]. Employee involvement in organizational culture is a catalyst for creativity and eventually increases an organization's effectiveness, as highlighted by numerous studies such as [4,5], and [6]. Transformative leadership, or TFL, has drawn a lot of attention lately [7] and is acknowledged as a major force behind innovation, particularly in the field of education [8,9]. Empirical evidence consistently demonstrates a causal relationship between TFL and innovative work behavior (IWB) [10,11]. TFL inspires people to go above the call of duty and use creative approaches to solve challenging problems. TFL practitioners encourage trust and drive beyond job duties, pushing staff members to reach higher goals [12]. As [13] demonstrates, this leadership style fosters a sense of respect and belonging among staff members. The ability of an organization's personnel to demonstrate IWB is a crucial component in determining its competitive edge, as highlighted by Refs. [14,15]. IWB, according to Ref.
[16], is the process by which workers at all organizational levels develop, market, and put into practice worthwhile innovations.It is essential for dealing with developing issues brought on by global competition, increasing customer demands, and altering market dynamics [10]. One of the best ways to promote innovation is to make use of workers' creative abilities to guarantee long-term success [9].In order to improve work processes, goods, and services and support organizational performance, innovative work behavior (IWB) are developed, shared, and put into practice [17].Consequently, it is critical for employers to recognize and support elements that improve IWB across their workforce.In light of the intricate nature of contemporary difficulties, it is imperative for workers to engage in collaborative efforts and leverage their collective experience to uncover novel solutions [18]. Despite this expectation, the connection between TFL and followers' IWB has yielded conflicting findings in previous research [19,20], with meta-analytic results indicating a wide range of associations.Researchers like [21,22], have considered different mediators and moderators that focus on the notion that transformational leaders could encourage IWB to design the atmosphere of work culture that also promotes Knowledge Sharing (KNS).According to Ref. [23] KNS can be expressed by way of "the provision of task-related information and knowledge to benefit others" and is validated as a mediating mechanism in order to align leaders-members relationship.In accord with the study of [24], KNS acts as a mediator between TFL, employees' stress, and IWB among higher education sector.Many other researchers [11,[25][26][27][28][29] try to align the relationship between TFL and KNS as well as TFL and Task Performance (TSP) and IWB individually in different work sectors.Similarly, the mediational role of KNS was investigate in different work contexts.For instance the study of [30] depicts that KNS mediate the relationship between transformational leadership and innovation among manufacturing sector.Similar result are validated by the findings of [31] that support the notion of mediational importance of knowledge sharing among supply chain firms.Recent research work of [32] also support the notion that KNS attributes mediates the relationship between transformational leadership and frugal functionality.One of the most influential recent study [33] indicate the importance of mediational role of KNS to connect the relationship between Transformational leadership and product as well process innovation through organization support. However, there is a gap in the existing research literature that addresses the validation of the mediation mechanism of KNS in connecting the relationships between TFL and employee IWB, as well as TFL and TSP, especially within the context of higher education institutions, all within a single research model.Most of the previous studies although validate the mediational role of KNS with transformational leadership and various outcomes i.e. 
innovation, frugal functionality, and IWB [11,[25][26][27][28][29][30]33], however there in no evidence that connect the dynamic link between transformational leadership and employees IWB and TSP through KNS as a mediational tool.Most importantly transformational leadership foster the culture of higher task performance through individual stimulation and provide opportunities to the follower to me with innovative work process.At the same time, the knowledge sharing process between leader and follower also promote the culture of win-win situation from both ends.Therefore, this study aims to offer a comprehensive understanding of the intricate connection between leadership and employee behavioral patterns within the framework of KNS in higher education institutions.The insights gained will contribute significantly to our pursuit of fostering a healthier educational environment.As currently global trends indicate that HEIs focusing on knowledge sharing attributes enhance employees as well as student's creativity, as a result the new vistas of opportunity with developed countries through joint research group enhance the capabilities of developing countries HEI segment. Social Exchange Theory Social Exchange Theory (SET), a sociological framework, seeks to illuminate human behavior in the context of interpersonal interactions, focusing on the benefits and drawbacks inherent in these relationships [34].At its core, SET suggests that rational decision-making hinges on an individual's assessment of the advantages and disadvantages associated with engaging in these interactions [35].People are more inclined to sustain relationships when they believe that the benefits outweigh the costs [36]. One of the foundational works in the realm of SET is George Homan's article, "Social Behavior as Exchange," published in the American Journal of Sociology in 1958 [37].In this groundbreaking study, Homans put out the theory that social conduct may be viewed as an exchange in which people interact in a give-and-take fashion, giving and getting rewards.In addition, he presented the idea of "outcome," which denotes the whole effect of a transaction that ultimately decides whether it is beneficial or detrimental for a person to continue a specific connection. Peter Blau, whose 1964 book "Exchange and Power in Social Life" was released, made another essential contribution to SET.Blau added to Homans' groundwork by introducing the idea of "social power," which broadened the theory.He made the case that people with more social influence are better able to manage the resources in their connections.Blau's thesis states that people with more social N. Saif et al. power can gain more from their connections, which strengthens their overall authority and influence [38]. A useful framework for comprehending the dynamics of interpersonal connections and decision-making processes is provided by the Social Exchange Theory [39].It establishes the groundwork for investigating the ways in which human conduct inside organizations-especially those of higher education institutions-is influenced by the concepts of costs, rewards, and social power.This study intends to contribute to a deeper understanding of how TFL and KNS combine to produce a healthy atmosphere within such institutions by exploring the implications of SET in this particular scenario. 
Transformational leadership and innovative work behavior The complex relationship between TFL and IWB has been the subject of recent investigations, which have shown the moderating and mediating mechanisms at work.For example [40], looked at psychological capital's impact as a moderating variable and discovered that higher psychological capital levels among employees were associated with a bigger TFL impact on IWB.This shows that psychological capital enhances the beneficial effects of TFL on IWB by acting as a useful resource in social exchanges with employers.The mediating function of intrinsic motivation in the TFL-IWB link was examined in another study [41].The study's findings demonstrated the significant moderating influence that intrinsic motivation had, suggesting that TFLs encourage employees to participate in IWB by meeting their basic needs.These results are consistent with the tenets of Social Exchange Theory (SET), which holds that people connect with others when they believe the advantages exceed the disadvantages [42].According to SET, employees are more likely to show IWB and react favorably to TFL in the workplace when they feel their employer recognizes and appreciates their efforts. Numerous studies have examined the complex relationships among TFL, IWB, and SET [15,24,28], and [43].Furthermore [44], investigated how perceived organizational support (POS) mediated the relationship between IWB and TFL.While many aspects of the inventiveness of the employees were determined to be unimportant, POS stood out as a crucial component.These results imply that transformational leaders can support IWB by fostering a cooperative work environment that values and honors contribution from staff members. Notably [45], investigated the mediating roles of perceived support for innovation and innovation readiness in the relationship between TFL and IWB.They found that creativity and self-efficacy served as significant mediators, with a stronger association observed among employees engaged in more social interactions. In summary, these studies collectively point to the interrelated nature of TFL, IWB, and SET.Transformational leaders who create a supportive work environment, value employee contributions, and reward innovation are more likely to inspire and encourage employees to engage in IWB.Moreover, the mediating role of KNS in the relationship between TFL and IWB has been highlighted in the higher education sector (Rafique et al., 2022b).Employees tend to participate more actively in IWB when they perceive that their contributions are acknowledged and valued, thus reinforcing the positive impact of TFL. Based on this literature, we propose the following hypothesis: H1. Transformational Leadership is positively related to IWB. Transformational leadership and task performance Empirical evidence from studies [46,47] consistently demonstrates that TFL is positively associated with various outcomes, including enhanced employee task completion, increased organizational engagement, and higher levels of employee job satisfaction.These findings align with the core principles of Social Exchange Theory (SET), which posits that individuals enter into relationships with the expectation of receiving reciprocal rewards. 
Within the context of leadership, SET principles suggest that leaders who exhibit transformative behaviors, such as giving individualized attention and stimulating cognitive abilities, are likely to establish high-quality social exchanges with their teams [48].These positive social interactions can lead to improved TSP among subordinates [49].Research has consistently supported the notion that TFL positively influences TSP, with SET principles serving as a mediating factor in this relationship [50]. Moreover [51], provided evidence that TFL has a positive impact on organizational commitment and employee job satisfaction.Similarly [52], demonstrated that TFL is associated with increased levels of Leader-Member Exchange (LMX), which, in turn, positively affects both TSP and job satisfaction. Furthermore [5], found that transformative leadership plays a central role in enhancing emotional organizational engagement and task completion among employees through the intermediary of employee engagement.In another study by Ref. [53], TFL was positively linked to job satisfaction and showed a favorable correlation with TSP. According to the existing literature, SET principles serve as a foundational framework connecting TFL and TSP.Leaders who understand and apply these principles can create a productive workplace characterized by high-quality relationships, heightened employee enthusiasm, and improved productivity. Based on this literature, we propose the following hypothesis: H2. Transformational Leadership is positively related to TSP. Mediating role of knowledge sharing between transformational leadership and innovative work behavior Several research studies have consistently supported the mediating role of KNS in the relationship between TFL and IWB.For instance Ref. [9], conducted research in the context of Iraqi public universities and discovered a significant link between TFL and KNS, with KNS acting as a partial mediator in the connection between TFL and IWB.They also found that the effectiveness of Leader-Member Exchange (LMX) mediated the association between TFL and KNS. Similarly [18], discovered that KNS partially moderated the association between Organizational Justice (OJ) and IWB in a study involving 345 workers in a Chinese telecom business.Their research showed a stronger relationship between OJ and KNS when workers showed higher degrees of affective commitment. Result from the study of indicate that KNS mediate the relationship between TFL and IWB among Indonesian workforce.Similar results were validated by the study of while depicting the relationship between TFL and IWB through 3 different mediational variables [54]. KNS was found to perform an intervening role in the relationship between TFL and innovation capability, specifically influencing both product and process innovation, in another study by Ref. [33] that included 394 participants from 88 Chinese businesses. These results support the SET tenets and imply that KNS mediates the relationship between TFL and IWB.By encouraging improved social interactions and KNS among employees, TFL can help workers with their IWB at work. Drawing from this body of literature, we propose the following hypothesis: H3. Knowledge Sharing mediates the relationship between TFL and IWB. 
Mediating role of knowledge sharing between transformational leadership and task performance Numerous studies have explored the relationship between TFL, KNS, and TSP, consistently suggesting that KNS plays a mediating role in this dynamic.For example [55], found that KNS partially influenced the interaction between TFL and TSP, with higher job satisfaction associated with a stronger connection between TFL and KNS.Similarly, a study [24], done at Higher Education institutions in Pakistan unveiled a significant correlation between TFL and KNS and IWB.Moreover, the study found that KNS had a moderating role in the association between Pandemic Job Stress (PJS) and IWB, as well as partially mediated the link between TL and IWB. Furthermore [9], demonstrated that KNS served as a strong mediator between TFL and innovation in the context of higher education institutions in Iraq.In another study conducted in an Indonesian work context [56], proactive KNS was found to mediate the relationship between TFL and TSP.Collectively, these findings suggest that TFL can enhance TSP within organizations by fostering robust social interactions and promoting KNS among subordinates.By applying the principles of SET and cultivating a positive work environment, leaders can further enhance employee KNS, ultimately leading to improved TSP. Based on this body of literature, we propose the following hypothesis: H4. Knowledge Sharing mediates the relationship between TFL and TSP. Theoretical framework As references [38,57], and [58] demonstrate, the present study emphasizes the importance of the SET (Social Exchange Theory) method in explaining the dynamic interaction between leaders and employees in organizational situations.SET is a theoretical framework that clarifies how people participate in reciprocal and mutually beneficial social interactions.Fundamentally, SET asserts that workers view advantages based on the inputs they provide while performing their jobs. The SET approach's focus on the reciprocal nature of social exchanges in the workplace makes a natural connection between it and the dynamic interaction between leaders and employees.Leaders are essential in starting and maintaining these conversations because they give their staff members tools, encouragement, and development opportunities. Moreover, the SET approach underscores the importance of trust, fairness, and mutual respect in shaping the quality of social exchanges between leaders and employees.Leaders who cultivate positive interpersonal relationships through knowledge sharing and demonstrate genuine concern for their employees' well-being are more likely to elicit favorable responses and outcomes from their workforce IWB and affective TSP (see Fig. 1). The present study provides vital insights into how leaders may effectively manage and utilize the dynamics of social exchange to improve employee task performance and IWB by embracing the fundamental principles of SET.It emphasizes how interactions N. Saif et al. between managers and employees are mutually beneficial and how important it is to foster a supportive environment at work that is characterized by justice, trust, and reciprocity.All things considered, the SET approach provides a solid framework for understanding and optimizing interactions between supervisors and employees in work settings. 
Method Following the principles of pragmatism and drawing on previous studies with similar topics, this research utilized a straightforward convenience sampling approach, a non-probability sampling technique that selects samples from readily available study groups. Specifically, questionnaires served as the instrument for data collection in this study, distributed to Higher Education Institutions (HEIs) in KP. MBA students, who were enrolled in the master's program at the university where one of the authors worked, facilitated the distribution process. A cover letter accompanied the questionnaires to inform respondents that the study had received management approval and that strict confidentiality was guaranteed. It was also explained that the results would be limited to academic study to enhance comprehension of leadership dynamics in the workplace. Participants in the study were not compensated in any way, and the questionnaires were given out while they were at work. The study employed an item-to-response criterion, where each item in the questionnaire required ten responses (21 × 10), resulting in a sample size of 210 participants [59]. The participants were approached on a referral basis. To mitigate the Common Method Bias (CMB) problem, data were collected from employees and their immediate managers in two successive assessments [60]. During the initial wave, 210 employees were contacted to gather information about their top management's transformational leadership and KNS. Of the contacted employees, 190 responded, yielding a 90.47 % response rate. During the follow-up wave, the immediate bosses of the same 190 employees were contacted to provide feedback on the employees' TSP and IWB, resulting in a response rate of 88.13 %. However, only 185 responses were considered for the final analysis. To ensure confidentiality, each response in the first wave was assigned a unique three-digit code, which was matched with the responses gathered from the respective managers [61]. The study found that 91 % of the respondents were male, reflecting the male-dominated workforce in Khyber Pakhtunkhwa. The current study obtained information through an adopted construct; hence, approval from the Ethical Committee (for ULM research) of DBM ULM was obtained via Reference No. (DBM-ULM-1014, 2021). Additionally, appropriate consent was obtained before receiving responses from the target sample. Transformational leadership (TFL) The Multi-Factor Leadership Questionnaire (MLQ), originally created by Ref. [62], was employed in this study in a modified form, which included 8 items adapted from Ref. [24]. Additionally, for more comprehensive information, 2 extra items were incorporated from previous research [63]. On a Likert scale with a maximum score of 5, respondents were requested to rate their responses from 1 (not at all) to 5 (frequently). The internal consistency of the scale was determined to be 0.79. An illustrative item from the questionnaire is "My Leader expresses confidence that goals will be achieved." Task performance (TSP) The research utilized a seven-item questionnaire originally developed by Ref. [64] to assess workers' TSP. Participants' responses were gathered using a five-point Likert scale, ranging from 1 (not at all) to 5 (frequently). The scale's internal consistency was determined to be 0.78. An illustrative item from the questionnaire is, "The employee fulfills the responsibilities outlined in their job description."
Innovative work behavior (IWB) Using a set of 6 items modified from the construct devised by Ref. [65], the study assessed employees' IWB. This questionnaire has recently been validated in the Chinese workplace by Ref. [24]. The response options for the items remained aligned on a five-point Likert scale, covering the range from "never" (1) to "always" (5). The scale exhibited a calculated internal reliability of 0.79, and after pilot testing, only four items were retained. An illustrative item from the questionnaire was "Overall, I consider myself a creative member of my team in this department." Knowledge sharing (KNS) Employees' KNS attribute was measured by adapting eight items modified from the construct created by Ref. [66]. This construct was recently validated in the Higher Education Institution (HEI) sector by Ref. [24]. Participants' responses were gathered using a five-point Likert scale, with the options ranging from 1 (not at all) to 5 (frequently). The internal reliability of the scale was calculated to be 0.75, and following pilot testing, six items were retained to obtain responses from the target audience. An example item from the questionnaire was, "When I have learned something new, I tell my colleagues about it." Data analysis procedure Before moving to evaluate the structural path relationships, construct means and standard deviations were calculated through SPSS (see Table 1). To evaluate the research hypotheses, we used the partial least squares approach to structural equation modeling via Smart PLS (4.0) [67]. Structural equation modeling can be approached in one of two ways: covariance-based SEM, which requires normally distributed data, and variance-based SEM, which does not impose such requirements [68]. We followed a two-step approach: we first validated our measurement model and then proceeded to test our hypothesized model through structural analysis [69]. For both tasks, measure validation and testing of the hypothesized model, we employed the Smart PLS 4.0 software.
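As a minimal illustration of the internal-consistency and convergent-validity indices reported in the next subsection, the Python sketch below computes composite reliability (CR) and average variance extracted (AVE) from standardized outer loadings using the standard formulas, rather than SmartPLS itself. The example loadings are illustrative values chosen within the range reported for IWB, not the study's raw data.

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (Joreskog's rho) from standardized outer loadings,
    assuming uncorrelated measurement errors."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1.0 - lam ** 2)))

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

# Illustrative loadings only, within the 0.707-0.833 range reported for IWB below.
iwb_loadings = [0.71, 0.76, 0.80, 0.83]
print(round(composite_reliability(iwb_loadings), 3))       # ~0.86, close to the reported CR of 0.867
print(round(average_variance_extracted(iwb_loadings), 3))  # ~0.60, close to the reported AVE of 0.621
```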
Measurement model assessment The measuring model was constructed of five crucial latent constructs, namely TRF leadership, KNS, TSP, and IWB.The evaluation of the reflective measurement model involved assessing its reliability and validity issues [69]concerning the latent constructs [70].This entails examining the connection between the latent constructs and their observed indicators.To evaluate our model's internal consistency reliability and convergent validity, we employed composite reliability (CR) and average variance extracted (AVE).The table displays the measuring model's results for various constructs, including (IWB), (KNS), (TFL), and Task Performance (TSP).Each construct is assessed based on its items loading, Cronbach's Alpha, CR, and AVE.For (IWB), four items were used (IWB3, IWB4, IWB5, and IWB6), with their respective loadings ranging from (0.707-0.833).The α for IWB is (0.795), CR is (0.867), and the AVE is (0.621).KNS is measured using five items (KNS1, KNS2, KNS3, KNS4, and KNS5), with their loadings ranging from 0.626 to 0.981.The α for KNS is (0.958), with CR (0.958), and AVE (0.843).Eight items (TFL1 through TFL9) were used to measure transformational leadership (TFL), and their loadings ranged from 0.711 to 0.989.TFL's α is (0.974), CR is (0.978), and AVE is (0.851).The six items (TSP1 through TSP6) used to measure Task Performance (TSP) have loadings ranging from 0.680 to 0.866.TSP's α is (0.875), whereas CR is (0.905) and AVE is (0.615).In general, the table provides useful details about the validity and reliability of the measurement model for every construct, assisting in the evaluation of the precision and caliber of the data derived from the items pertaining to the latent variables (see Table 2). In order to verify that each dimension in a model for measurement is unique and different than the remaining characteristics in the model, discriminant validity is an essential component of validation.The table's Heterotrait-Monotrait (HTMT) values show how different pairings of certain constructs interact with one another.More discriminant validity, or that each of the concepts are significantly separate, is indicated by a lower HTMT score.Table (2) analysis reveals that there are differences in the HTMT values between the construct pairs.As an example, the HTMT score of 0.521 indicates a moderate discriminant validity between the two notions of knowledge sharing and IWB.Similarly, the HTMT value between IWB and TFL is 0.524, also suggesting moderate discriminant validity.However, the HTMT value between IWB and Task Performance is 0.804, indicating a higher level of discriminant validity between these two constructs.Furthermore, the HTMT, KNS and TFL as well as KNS and TSP are 0.443 and 0.486, respectively, both indicating moderate discriminant validity between these pairs.Lastly, the HTMT value between TFL and TSP is 0.418, again suggesting moderate discriminant validity (see table − 3).In conclusion, The HTMT values provide useful information about the distinctness of the latent constructs in the study.These values help researchers assess the validity of their measurement model and ensure that each construct is adequately differentiated from the others, strengthening the overall credibility of the research findings. To evaluate the path coefficient between variables, beta values are presented in table ( 4) and represented via (Fig. 
2).The beta value of (0.316) indicates a positive relationship for (KNS -> IWB).This suggests that as KNS increases, it also leads to increase in IWB.The T statistics value of 2.444 is associated with a (p = 0.015; <0.05).Thus, this relationship is considered statistically significant and is "Accepted," indicating that there is evidence to support the notion of positive association between KNS and IWB (see Fig. 2).The beta value of 0.353 indicates a positive relationship between (KNS -> TSP).This means that as KNS increases, it also enhances TSP.The T statistics value of 3.291 is associated with a (P = 0.001,<0.05).As a result, this relationship is considered statistically significant and is "Accepted," providing evidence to support the existence of a positive association between (KNS -> TSP).The beta value of 0.325 indicates a positive relationship between (TFL -> IWB).This suggests that as the level of TRF increases, it also enhances innovative work behavior.The T statistics value of 3.271 is associated with a very low P value of 0.001, indicating that the relationship is statistically significant.Thus, the hypothesis is "Accepted," providing evidence to support the existence of a positive association between (TFL -> IWB).The beta value of 0.426 indicates a positive relationship between (TFL -> KNS).This means that as the level of TRF increases, it also enhances KNS.The T statistics value of 2.714 is associated with a P value of 0.007, which is below then (0.05).As a result, this relationship is considered statistically significant and is "Accepted," providing evidence to support the existence of a positive association between (TFL -> KNS).The beta value of 0.249 indicates a positive relationship between (TFL -> TSP).This means that as the level of TRF increases, it also enhances TSP.The T statistics value of 2.432 is associated with a P value of 0.015, which is lower than (0.05).As a result, this relationship is considered statistically significant and is "Accepted," providing evidence to support the existence of a positive association between (TFL -> TSP). The beta value of 0.151 indicates a positive relationship (TFL -> KNS -> TSP).However, it's important to note that this relationship is weaker compared to the direct relationships.The T statistics value of 2.074 is associated with a P value of 0.038, which is less than the significance level of 0.05.Therefore, this relationship is considered statistically significant and is "Accepted," providing evidence to support the existence of a positive association (TFL -> KNS -> TSP).The beta value of 0.135 indicates a positive relationship between (TFL -> KNS -> IWB).However, this relationship is relatively weak compared to the direct relationships.The T statistics value of 1.827 is associated with a p value of 0.068, which is marginally above the significance level of 0.05.Therefore, this relationship is considered statistically non-significant, and the hypothesis is "Rejected," indicating that there is not enough evidence to support the existence of a significant association between (TFL -> KNS -> IWB). 
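The significance tests for the indirect paths above (TFL -> KNS -> TSP and TFL -> KNS -> IWB) rest on resampling the estimated indirect effect. As a simplified, regression-based analogue of the bootstrap procedure SmartPLS performs on the full model, the sketch below computes a percentile confidence interval for an indirect effect a*b; the variable names and data are placeholders, not the study's data.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in X -> M -> Y.
    a: slope of M on X; b: partial slope of Y on M, controlling for X."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    n = len(x)
    indirect = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                       # a path: M regressed on X
        design = np.column_stack([np.ones(n), xb, mb])     # intercept, X, M
        b = np.linalg.lstsq(design, yb, rcond=None)[0][2]  # b path: coefficient on M
        indirect[i] = a * b
    lo, hi = np.percentile(indirect, [2.5, 97.5])
    return lo, hi  # mediation is supported when the interval excludes zero

# Placeholder data: 185 simulated respondents with a built-in indirect effect.
rng = np.random.default_rng(1)
tfl = rng.normal(size=185)
kns = 0.4 * tfl + rng.normal(size=185)
tsp = 0.35 * kns + 0.25 * tfl + rng.normal(size=185)
print(bootstrap_indirect_effect(tfl, kns, tsp))
```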
In the context of PLS-SEM, the Stone-Geisser Q2 is an index used to evaluate the predictive relevance of the model for its endogenous constructs. A Q2 value greater than zero indicates that the model has predictive relevance for the construct in question, with larger values indicating greater relevance, whereas a value of zero or below indicates a lack of predictive relevance. The value of 0.032 for the variable "Knowledge Sharing" and the values for the other endogenous variables (0.201 for IWB and 0.108 for TSP) are all above zero, suggesting that the model has predictive relevance for each of them, albeit only modest relevance for Knowledge Sharing (see Table 4). The SRMR (Standardized Root Mean Square Residual) is a fit index used in Structural Equation Modeling (SEM) to assess the goodness of fit of a model. It provides a measure of how well the proposed model reproduces the observed covariance matrix of the data, quantifying the average discrepancy between the observed and the model-implied correlation matrices. The SRMR value lies between 0 and 1, where a value close to 0 (conventionally below 0.08) suggests that the model fits the data well, while a value closer to 1 indicates poor fit. In the given context, the SRMR value of 0.042 is reported for the structural model being evaluated (see Table 3). Since this value is well below the conventional threshold, it suggests that the proposed model fits the observed data reasonably well. This means that the relationships among the observed and latent variables in the model are consistent with the actual relationships present in the data.
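For completeness, one common formulation of the SRMR discussed above is the root mean square of the residuals between the observed and model-implied correlation matrices. The sketch below implements that formula on small placeholder matrices; it is illustrative only and is not the computation SmartPLS reports for this study's data.

```python
import numpy as np

def srmr(r_observed, r_implied):
    """SRMR for standardized (correlation) matrices: root mean square of the
    unique residuals, including the diagonal (p(p+1)/2 elements)."""
    p = r_observed.shape[0]
    rows, cols = np.triu_indices(p)
    resid = r_observed[rows, cols] - r_implied[rows, cols]
    return float(np.sqrt(np.mean(resid ** 2)))

# Tiny placeholder matrices (3 indicators), not the study's correlation structure.
r_obs = np.array([[1.00, 0.52, 0.44],
                  [0.52, 1.00, 0.48],
                  [0.44, 0.48, 1.00]])
r_imp = np.array([[1.00, 0.50, 0.46],
                  [0.50, 1.00, 0.47],
                  [0.46, 0.47, 1.00]])
print(round(srmr(r_obs, r_imp), 3))  # small value -> close reproduction of the observed correlations
```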
In addition, the results also demonstrate the significant path relationship between knowledge sharing attribute among higher education workers and their IWB in universities of selected sample in Pakistan.Similar results were evident by the study of [24] in HEIs sector of Pakistan while managing faculty stress during their work performance.Similar results were quoted by Ref. [26] among the Romanian workers.Similar results are evident by the findings of [59] while understanding the dynamic link between ICT workers IWB and KNS through self-efficacy.These results are based on the arguments that sharing knowledge offer novel opportunities to worker in order to solve organizational complex issues.Likewise [15], recognized KNS as the primary factor nurturing employees' IWB.Alternatively, the findings from Ref. [57] indicate insignificant relationship between these attributes among Kazakhstan.One of the major reason behind the positive relation of KNS and IWB in HEIs sector of Pakistan is based upon advancing one's knowledge and capabilities through sharing updates research, curriculum advancement and adaptation of ICT for R&D purpose. Task performance is the ability of individual to achieve organizational and individual objective with in stipulated time frame.Findings from previous studies indicate that TRF promote employees task performance abilities through encouragement, motivation, inspiration and articulating vision.Results of [58] indicate that TRF promote nurses Task performance ability while performing their job at health care institutions.Similar results are recorded by Ref. [11] while investigating the behavioral aspect of academic workers.The findings of [75]also depict the same picture while analyzing the behavioral pattern of employees among hospitality work sector.Similar results are quoted by Ref. [76] among Chinese worker by confirming the importance of social capital theory among insurance sector employees. Earlier studies depict the Significant evidence in connecting the path between TRF and employees KNS during [24]leader member exchange process.Once the employees perceive that top management share information, knowledge and facts with them, which induce the felling of respect, and inspiration of top management, that ultimately enhance worker performance capabilities [11,58,75] and promote Innovative work behavior [71,72].The contemporary research confirms the mediational role of KNS between universities top management TFL behavior and employees task Performance by enhancing (Faculty members) capabilities to achieve organization vision as well as personal goal of promotion and career growth.The investigation by Ref. 
[24] validates KNS's mediating function in the relationship between TFL and IWB among higher education employees, while the results of [76] confirm an indirect relationship between TFL and IWB through social media as a mediator. Similarly, the results of (Lan & Chen, 2020) indicate that career adaptability is a major attribute strengthening the relation between TFL and TSP. The findings of [11] show that Bass's leadership styles did not evidence a direct relation with employees' task performance among academic staff of Malaysian universities; however, the results confirm the mediating role of PSW in strengthening the impact of TFL on employees' TSP. In this regard, the current study pioneers a novel mechanism by connecting HEI top management's TFL behavior with employees' IWB and TSP through the mediating mechanism of knowledge sharing. Interestingly, the intervening role of KNS is not supported between TFL and IWB among followers in the selected KP universities. One of the major reasons behind the insignificant path (TFL -> KNS -> IWB) is that responses were obtained from a mix of public and private sector universities. It is generally perceived that in government sector universities in developing countries, promotion and career growth depend upon faculty members' relations with top management, as well as on their organizational politics skills and their active role in the faculty association. In such an environment innovation becomes a nightmare, and everyone focuses on performing their assigned tasks. In contrast, private sector universities develop a culture of joint venture that supports innovative ideas and promotes faculty on the basis of research productivity, and such an environment is further boosted through continuous support from top management in the form of transformational leadership behavior.

Conclusion

The current study delves into the dynamic interaction between TFL and employee behavior, in the form of IWB and TSP, among higher education faculty members in Pakistan, using KNS as a mediator. The results confirm that TFL acts as a direct and significant attribute shaping employees' IWB and task performance, as well as enhancing individuals' KNS behavior. However, the mediational analysis showed that although KNS mediates the relation between TFL and TSP, no evidence was found for a mediational role of KNS in linking TFL and IWB among higher education sector employees of Pakistan. The current study is pioneering research that introduces the dynamic association between the selected variables in the HEI sector from a developing-country perspective.

Theoretical implication

The current investigation establishes the value of the SET approach [38,77,78] in connecting the dynamic interaction between leaders and employees. Social Exchange Theory (SET) is founded on the premise that employees perceive benefits in response to their input in the exchange process while performing their jobs. Hence, if employees perceive that top management is loyal in sharing knowledge, information and facts, workers respond by achieving organizational objectives and performing their assigned tasks appropriately. On the other side, once employees gain confidence through their leaders' articulation of vision and inspirational guidance, this also infuses feelings of IWB, spurring them to solve work- and organization-related problems.
Managerial implications

Several significant management implications result from the favorable mediating function of knowledge sharing between transformational leadership and innovative work behavior and task performance among Pakistani higher education personnel.

First, it emphasizes how important it is to support transformational leadership cultures in higher education. Leaders who encourage and inspire their staff to embrace innovation and share information facilitate higher task performance. As a result, the HEC and administrators ought to fund leadership development initiatives that highlight transformative traits such as charisma, vision, and consideration for each individual.

Second, promoting knowledge-sharing initiatives becomes crucial for enhancing creative task performance and professional conduct. The Higher Education Commission can facilitate the creation of online discussion boards, training sessions, seminars, and other cooperative learning environments where staff members can exchange knowledge, best practices, and lessons learned. This expands the stock of collective knowledge while also improving the creativity and problem-solving abilities of staff members.

Rewarding and praising employees who take the initiative to share knowledge and behave creatively can also help to promote desirable outcomes. Systems for evaluating employee performance must be designed to recognize and reward actions that support the organization's objectives of promoting creativity and attaining high job performance.

Finally, through regular feedback channels and performance indicators, the HEC should actively monitor and evaluate the efficacy of leadership approaches, knowledge sharing efforts, and innovation processes. This enables constant improvement and adaptation to shifting conditions, thereby guaranteeing sustainable growth and competitiveness within Pakistan's higher education system. In higher education institutions, leveraging the mediational role of knowledge sharing within the framework of transformational leadership can greatly augment organizational efficacy and help accomplish strategic goals.

Limitation of the work and direction for future researchers

Like other studies, the current research comprises several limitations that affect its generalizability to other work contexts. Participants' responses were gathered from three selected private sector universities and eight government sector universities, which may raise questions about response validity across the two sectors due to cultural and administrative variation. Hence, future researchers may obtain a similar number of responses, as well as a similar number of HEIs, from both sectors; this will enrich the information available for gaining a comprehensive understanding of KNS's mediating role. Similarly, the data were cross-sectional in nature, as responses were collected at a single point in time, which also limits validity; future researchers may collect data from the selected HEIs at specific time intervals and compare the findings to obtain a more accurate picture of the underlying phenomena. In the current study, only KNS is used as a potential mediator to align the leader-member relationship; future researchers may use HEI culture, organizational justice perception, psychological attributes and LMX as potential mediators. Similarly, employees' behavioral outcomes can be understood by adding employees' commitment, OCB, satisfaction and work engagement among teaching faculty members of schools, colleges and universities.
Practical implications of the study

The results of the study highlight how important transformational leadership is in helping staff members in Pakistani higher education to develop a culture of creativity and task performance. Academic leaders can encourage staff members to share information and take on creative work practices by modeling visionary traits and offering tailored support. This emphasizes how crucial it is to fund leadership development initiatives in order to foster an atmosphere that encourages creativity and group learning.

The study also highlights the necessity of organizational activities that encourage staff members to share their knowledge. Formal methods that promote the sharing of ideas and best practices, such as seminars and online forums, should be established with the ultimate goal of improving task performance. Acknowledging and rewarding employee contributions to innovation and information sharing strengthens the collaborative culture and creates a positive work atmosphere in which staff members feel appreciated and empowered to perform well in their positions. All things considered, these practical implications provide managers and administrators with actionable tactics to improve organizational effectiveness and foster innovation in Pakistani higher education institutions.

Table 1: Mean and standard deviation for all items.
Table 2: Measurement model results.
Table 4: Results of the hypothesis testing.
Compositional Languages Emerge in a Neural Iterated Learning Model

The principle of compositionality, which enables natural language to represent complex concepts via a structured combination of simpler ones, allows us to convey an open-ended set of messages using a limited vocabulary. If compositionality is indeed a natural property of language, we may expect it to appear in communication protocols that are created by neural agents in language games. In this paper, we propose an effective neural iterated learning (NIL) algorithm that, when applied to interacting neural agents, facilitates the emergence of a more structured type of language. Indeed, these languages provide learning speed advantages to neural agents during training, which can be incrementally amplified via NIL. We provide a probabilistic model of NIL and an explanation of why the advantage of compositional languages exists. Our experiments confirm our analysis, and also demonstrate that the emergent languages largely improve the generalization ability of neural agent communication.

INTRODUCTION

Natural language understanding (NLU), which is exemplified by challenging problems such as machine reading comprehension, question answering and machine translation, plays a crucial role in artificial intelligence systems. So far, most of the existing methods focus on building statistical associations between textual inputs and semantic representations, e.g. using first-order logic (Manning et al., 1999) or other types of representations such as abstract meaning representation (Banarescu et al., 2013). Recently, grounded language learning has gradually attracted attention in various domains, inspired by the hypothesis that early language learning was focused on problem-solving (Kirby & Hurford, 2002). While related to NLU, it focuses on the pragmatics (Clark, 1996) of learning natural language, as it implies learning language from scratch, grounded in experience. This research is often practiced through the development of neural agents which are made to communicate with each other to accomplish specific tasks (for example, playing a game). During this process, the agents build mappings between the concepts they wish to communicate about and the symbols used to represent them. These mappings are usually referred to as 'emergent language'. So far, an array of recent work (Havrylov & Titov, 2017; Mordatch & Abbeel, 2018; Kottur et al., 2017; Foerster et al., 2016) has shown that in many game settings, the neural agents can use their emergent language to exchange useful coordinating information. While the best way to design games to favor language emergence is still open to debate, there is a consensus on the fact that we should gear these emergent languages towards sharing similarities with natural language. Among the properties of natural language, compositionality is considered to be critical, because it enables representation of complex concepts through the combination of several simple ones. While work on incorporating compositionality into emergent languages is still in its early stage, several experiments have already demonstrated that by properly choosing the maximum message length and vocabulary size, the agents can be brought together to develop a compositional language that shares similarities with natural language (Li & Bowling, 2019; Lazaridou et al., 2018; Cogswell et al., 2019).
In a different body of language research literature, evolutionary linguists have studied the origins of compositionality for decades (Kirby & Hurford, 2002; Kirby et al., 2014; 2015). They proposed a cultural evolutionary account of the origins of compositionality and designed a framework called iterated learning to simulate the language evolution process, based on the idea that the simulated language must be learned by new speakers at each generation, while also being used for communication. Their experiments show that highly compositional languages may indeed emerge through iterated learning. However, the models they introduced were mainly studied by means of experiments with human participants, in which compositional languages are favored by the participants because the human brain favors structure. Hence, directly applying this framework to grounded language learning is not straightforward: we should first verify the existence of a preference for compositional language in the neural agent, and then design an effective training procedure for the neural agent to amplify such an advantage. In this project, we analyze whether and how the learning speed advantage of highly compositional languages exists in the context of communication between neural agents playing a game. Then we propose a three-phase neural iterated learning algorithm (NIL) and a probabilistic explanation of it. The experimental results demonstrate that our algorithm can significantly enhance the topological similarity (Brighton & Kirby, 2006) between the emergent language and the original meaning space in a simple referential game (Lewis, 1969). Such highly compositional languages also generalize better, because they perform well on a held-out validation set. We highlight our contributions as:

• We discover the learning speed advantages of languages with high topological similarity for neural agents communicating in order to play a referential game.
• We propose NIL based on those advantages, which is quite robust compared to most related works.
• We propose a probabilistic framework to explain the mechanisms of NIL.

BACKGROUND

2.1 REFERENTIAL GAME

We analyze a typical and straightforward object selection game, in which a speaking agent (Alice, or speaker) and a listening agent (Bob, or listener) must cooperate to accomplish a task. In each round of the game, we show Alice a target object x selected from an object space X and let her send a discrete-sequence message m to Bob. We then show Bob c different objects (x must be one of them) and use c_1, ..., c_c ∈ X to represent these candidates. Bob must use the message received from Alice to select the object that Alice refers to among the c candidates. If Bob's selection ĉ is correct, both Alice and Bob are rewarded. The objects are shuffled and candidates are randomly selected in each round to avoid the agents recognizing the objects by their order of presentation. In our game, each object in X has N_a attributes (color and shape are often used in the literature), and each attribute has N_v possible values. To represent objects, similarly to the settings chosen in (Kottur et al., 2017), we encode each attribute as a one-hot vector and concatenate the N_a one-hot vectors to represent one object. The message delivered by Alice is a fixed-length discrete sequence m = (m_1, ..., m_{N_L}), in which each m_i is selected from a fixed-size meaningless vocabulary V.
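A minimal sketch of this object representation follows; the concrete sizes N_a = 2 and N_v = 8 are illustrative assumptions only, not the experiments' hyper-parameters, which are listed in Appendix A.

```python
import numpy as np
from itertools import product

N_A = 2   # number of attributes per object (illustrative choice)
N_V = 8   # number of possible values per attribute (illustrative choice)

def encode_object(values):
    """Concatenate one one-hot vector per attribute, as described above."""
    vec = np.zeros(N_A * N_V)
    for attr_idx, value in enumerate(values):
        vec[attr_idx * N_V + value] = 1.0
    return vec

# The full object space X: every combination of attribute values.
object_space = [encode_object(v) for v in product(range(N_V), repeat=N_A)]
print(len(object_space))        # 64 objects for this choice of N_A and N_V
print(encode_object((2, 5)))    # a single object as a concatenated one-hot vector
```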
NEURAL AGENT STRUCTURES

Neural agents usually have separate modules for speaking and listening, which we name Alice and Bob. Their architectures, shown in Figure 1, are similar to those studied in (Havrylov & Titov, 2017) and (Lazaridou et al., 2018). Alice first applies a multi-layer perceptron (MLP) to encode x into an embedding, then feeds it to an encoding LSTM (Hochreiter & Schmidhuber, 1997). Its output goes through a softmax layer, which we use to generate the message m_1, m_2, .... Bob uses a decoding LSTM to read the message and uses an MLP to encode c_1, ..., c_c into embeddings. Bob then takes the dot product between the hidden states of the decoding LSTM and the embeddings to generate a score s_c for each object. These scores are then used to calculate the cross-entropy loss when training Bob. When Alice and Bob are trained using reinforcement learning, we can use p_A(m|x; θ_A) and p_B(ĉ|m, c_1, ..., c_c; θ_B) to represent their respective policies, where θ_A and θ_B contain the parameters of each of the neural agents. When the agents are trained to play the game together, we use the REINFORCE algorithm (Williams, 1992) to maximize the expected reward under their policies, and add an entropy regularization term to encourage exploration during training, as explained in (Mnih et al., 2016). The gradients of the objective function J(θ_A, θ_B) are

∇_{θ_A} J = E[ R(ĉ, x) · ∇_{θ_A} log p_A(m|x; θ_A) ] + λ_A ∇_{θ_A} H[ p_A(m|x; θ_A) ],
∇_{θ_B} J = E[ R(ĉ, x) · ∇_{θ_B} log p_B(ĉ|m, c_1, ..., c_c; θ_B) ] + λ_B ∇_{θ_B} H[ p_B(·|m, c_1, ..., c_c; θ_B) ],

where R(ĉ, x) = 1(ĉ, x) is the reward function, H is the standard entropy function, and λ_A, λ_B > 0 are hyperparameters. A formal definition of the agents can be found in Appendix C.

MEASURING COMPOSITIONALITY

Compositionality is a crucial feature of natural languages, allowing us to use small building blocks (e.g., words, phrases) to generate more complex structures (e.g., sentences), with the meaning of the larger structure being determined by the meaning of its parts (Clark, 1996). However, there is no consensus on how to quantitatively assess it. Besides a subjective human evaluation, topological similarity has been proposed as a possible quantitative measure (Brighton & Kirby, 2006). To define topological similarity, we first define the language studied in this work as a mapping L(·) : X → M. Then we measure the distance d_X(x_i, x_j) between every pair of objects, and similarly compute the corresponding quantity d_M(L(x_i), L(x_j)) for the associated messages, where d_M is a distance in M. The topological similarity (i.e., ρ) is then defined as the correlation between these two sets of quantities across X. Following the setup of (Lazaridou et al., 2018) and (Li & Bowling, 2019), we use the negative cosine similarity in the object space and Levenshtein distances (Levenshtein, 1966) in the message space. We provide an example in Appendix B to give a better intuition about this metric.

NEURAL ITERATED LEARNING MODEL

The idea of iterated learning requires the agent in the current generation to be partially exposed to the language used in the previous generation. Even though this idea has proven effective in experiments with human participants, directly applying it to games played by neural agents is not trivial: for example, it is not obvious where the preference for high-ρ languages would come from in neural agents. Besides, we must carefully design an algorithm that can simulate the "partially exposed" procedure, which is essential for the success of iterated learning.

LEARNING SPEED ADVANTAGE FOR THE NEURAL AGENTS

As mentioned before, a preference for high-ρ languages by the learning agents is essential for the success of iterated learning (a small sketch of how ρ can be computed in practice is given below).
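This sketch uses negative cosine similarity between concatenated one-hot object encodings, Levenshtein distance between messages, and a Pearson correlation; using Pearson (rather than, e.g., Spearman) is an assumption for illustration, not something fixed by the text above.

```python
import numpy as np
from itertools import combinations

def levenshtein(a, b):
    """Edit distance between two message strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def topo_similarity(objects, messages):
    """Correlation between pairwise object distances and message distances."""
    d_obj, d_msg = [], []
    for i, j in combinations(range(len(objects)), 2):
        x, y = np.asarray(objects[i], dtype=float), np.asarray(objects[j], dtype=float)
        cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
        d_obj.append(-cos)                              # negative cosine similarity as distance
        d_msg.append(levenshtein(messages[i], messages[j]))
    return np.corrcoef(d_obj, d_msg)[0, 1]

# Tiny illustration: 2 attributes x 2 values, encoded as concatenated one-hots.
objs = [[1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 0, 1]]
print(topo_similarity(objs, ["aa", "ab", "ba", "bb"]))   # compositional mapping: high rho
print(topo_similarity(objs, ["aa", "bb", "ab", "ba"]))   # scrambled mapping: much lower rho
```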
In language evolution, highly compositional languages are favored because they are structurally simple and hence are easier to learn (Carr et al., 2017). We believe that a similar phenomenon applies to communication between neural agents:

Hypothesis 1: High topological similarity improves the learning speed of the speaking neural agent. We speculate that high-ρ languages are easier to emulate for a neural agent than low-ρ languages. Concretely, this means that Alice, when pre-trained with object-message pairs describing a high-ρ language at a given generation, will be faster to successfully output the right message for each object. Intuitively, this is because the structured mapping described by a language with high ρ is smoother, and hence has a lower sample complexity, which makes the resulting examples easier to learn for the speaker agent (Vapnik, 2013).

Hypothesis 2: High topological similarity allows the listening agent to successfully recognize more concepts using fewer samples. We speculate that high-ρ languages are easier for a neural agent to recognize. That means that Bob, when pre-trained with message-object pairs corresponding to a high-ρ language, will be faster to successfully choose the right object. Intuitively, the lower the topological similarity, the more difficult it is to infer unseen object-message pairs from seen examples. The more complex mapping of a low-ρ language implies that more object-message pairs need to be provided to describe it. This translates into an inability for the listening agent to generalize the information obtained from one object-message pair of a low-ρ language to other examples. Thus, the general performance of Bob on any example improves much faster when trained with pairs corresponding to a high-ρ language than with a low-ρ language. We provide experimental results in Section 4.1 to verify our hypotheses, and a detailed example in Appendix D to illustrate our reasoning.

NEURAL ITERATED LEARNING AND PROBABILISTIC ANALYSIS

We design the NIL algorithm to exploit these advantages in a robust manner, as detailed in Algorithm 1. The algorithm runs for I generations: at the beginning of each generation i, both agents are reinitialized. As Alice and Bob have different structures, they are then pre-trained differently (see lines 5-7 for Alice and lines 8-12 for Bob): this is the learning phase. Alice is pre-trained via categorical cross-entropy, using the data generated at the previous generation, which we denote D_i. Bob is pre-trained with REINFORCE, learning from the pre-trained Alice. We denote by I_a and I_b their respective numbers of pre-training iterations. By Hypothesis 1, the expected ρ of the language spoken by Alice should be higher than that of D_i. Meanwhile, Bob should be more "familiar with" a language with a higher ρ than D_i, as stated by Hypothesis 2. Alice and Bob then play the game together for I_g rounds in the interacting phase, in which both agents are updated via REINFORCE. In this phase, the languages used by them are filtered to be more unambiguous: their language must deliver information accurately to accomplish the task. Finally, in the transmitting phase, we feed all objects to Alice and let it output the corresponding messages, which are stored in D_{i+1} for the learning phase of the next generation (a simplified sketch of a single interacting-phase round is given below).
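This sketch replaces the MLP+LSTM agents with plain softmax look-up tables, omits the entropy bonus, and uses illustrative sizes; it demonstrates the REINFORCE update rule rather than reproducing the paper's architecture or hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N_OBJ, N_MSG, N_CAND, LR = 8, 8, 4, 0.1

theta_A = np.zeros((N_OBJ, N_MSG))   # speaker logits: one row of message logits per target object
theta_B = np.zeros((N_MSG, N_OBJ))   # listener logits: one row of object scores per received message

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def play_round():
    target = rng.integers(N_OBJ)
    p_m = softmax(theta_A[target])                   # speaker samples a message
    msg = rng.choice(N_MSG, p=p_m)
    distractors = rng.choice([o for o in range(N_OBJ) if o != target],
                             size=N_CAND - 1, replace=False)
    candidates = rng.permutation(np.concatenate(([target], distractors)))
    p_c = softmax(theta_B[msg][candidates])          # listener scores the shuffled candidates
    pick = rng.choice(N_CAND, p=p_c)
    reward = float(candidates[pick] == target)
    # REINFORCE: gradient of the log-probability of the sampled action is (one-hot - probs).
    grad_A = -p_m
    grad_A[msg] += 1.0
    theta_A[target] += LR * reward * grad_A
    grad_B = -p_c
    grad_B[pick] += 1.0
    theta_B[msg, candidates] += LR * reward * grad_B
    return reward

rewards = [play_round() for _ in range(20000)]
# Accuracy typically rises well above the 1/N_CAND chance level as a protocol emerges.
print("accuracy over the last 1000 rounds:", np.mean(rewards[-1000:]))
```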
To better understand how NIL enhances the expected ρ of the languages generation by generation, we propose a probabilistic model for NIL in Appendix C, as well as a probabilistic analysis of the role played by Alice and Bob in every phase. Intuitively, at the beginning of each generation, the expected ρ of the language used by Alice (denoted by E_L[ρ(L)]) is quite low because of the random initialization. Then, during the learning phase, Alice learns from D_i and would be expected to have the same E_L[ρ(L)] as D_i if it learned that data set perfectly. However, as high-ρ languages are favored by the neural agent during training, the E_L[ρ(L)] of the weakly pre-trained Alice should be higher than that of D_i. A similar thing may happen when pre-training Bob. Then, in the interacting phase, as the game performance has no preference between languages with different ρ, E_L[ρ(L)] does not change in this phase. Finally, in the transmitting phase, D_{i+1} is sampled based on the language with the current E_L[ρ(L)], which is expected to be higher than that of D_i. In other words, E_L[ρ(L)] should increase generation by generation (the details of the derivations are provided in Appendix C):

E_{L∼P_{i+1}(L|D_{i+1})}[ρ(L)] ≥ E_{L∼P_i(L|D_i)}[ρ(L)],    (3)

where P_i denotes the distribution over languages at generation i.

Algorithm 1: The NIL algorithm. I_a, I_b and I_g are the numbers of iterations used to pre-train Alice, pre-train Bob, and play the game at each generation.

    Randomly initialize D_1
    for i = 1, 2, ..., I do
        Re-initialize Alice and Bob, obtaining Alice_i and Bob_i
        // ======= Learning Phase =======
        for i_a = 1, 2, ..., I_a do
            Randomly sample an example pair from D_i and use it to update Alice_i with cross-entropy training
        end for
        for i_b = 1, 2, ..., I_b do
            Alice_i generates a message based on an input object
            Bob_i receives the message and selects the target
            Bob_i updates its parameters if rewarded
        end for
        // ======= Interacting Phase =======
        for i_g = 1, 2, ..., I_g do
            Alice_i generates a message based on an input object
            Bob_i receives the message and selects the target
            BOTH Alice_i and Bob_i update their parameters if rewarded
        end for
        // ======= Transmitting Phase =======
        for i_s = 1, 2, ..., I_s do
            Generate object-message pairs by feeding objects to Alice_i and save them to data set D_{i+1}
        end for
    end for

EXPERIMENTS AND DISCUSSIONS

In this section, we first verify Hypotheses 1 and 2 by directly feeding languages with different ρ to Alice and Bob. Then we examine the behavior and performance of the neural agents, as well as the expected ρ of the languages, at each generation. We conduct an ablation study to examine the effect of pre-training Alice and Bob separately. We then investigate more thoroughly the advantages brought by high-ρ languages, and highlight the 'interval of advantage' in pre-training rounds, which could help in selecting reasonable I_a and I_b. Finally, we conduct a series of experiments on a held-out validation set to highlight the positive effect of high-ρ languages on the neural agents' generalization ability, which shows the potential of iterated learning for NLU tasks. Details about our experimental setup and our choice of hyper-parameters can be found in Appendix A. More experiments on the robustness of NIL are presented in Appendix E.

LEARNING SPEED ADVANTAGES

We first use the experimental results in Figure 2 to verify Hypotheses 1 and 2. In these experiments, we randomly initialize one Alice and feed languages with different expected ρ for it to learn (and repeat the same procedure for Bob).
We generate a perfect high-ρ language (ρ = 1) using the method proposed in (Kirby et al., 2015), and randomly permute the messages to generate a low-ρ language with ρ = 0.21. The other languages are intermediate languages generated during NIL. Note that there is no interacting or transmitting phase in the experiments in this subsection: we only test the learning behavior of a randomly initialized Alice (or Bob) separately. From Figure 2-(a) and (b), we see that high-ρ languages indeed have a learning speed advantage on both the speaker and the listener side. One important finding is in Figure 2-(c), which records the expected ρ, i.e., E_L[ρ(L)], during Alice's learning. From this figure, we find that when learning a language with low expected ρ, the value of E_L[ρ(L)] first increases and finally converges to the ρ of D. This phenomenon, caused by the learning speed advantage, makes weak pre-training the essential design element for the success of NIL: if I_a is correctly chosen, we may expect a higher E_L[ρ(L)] than that of the data set Alice learns from.

PERFORMANCE OF NIL

In this part, we record the game performance (i.e., the rate of successful object selections) and the mean ρ of the object-message pairs exchanged by the neural agents every 20 rounds. We run the simulation 10 times, with a different random number seed each time. Although the results differ, they all follow the same trend. In these experiments, I = 80 and I_g = 4000, with all other hyper-parameters following Table 3. In this first series of experiments, we compare the following 4 different methods:

• Iterated learning, resetting both Alice and Bob at the beginning of each generation.
• Iterated learning, only resetting Alice at the beginning of each generation.
• Iterated learning, only resetting Bob at the beginning of each generation.
• No iterated learning: neither Alice nor Bob is reset during training.

From Figure 3-(a), we can see that for the 3 displayed variants of the procedure, the neural agents can play the game almost perfectly after a few generations. The curve of the no-reset method converges directly, while the curves of the other two iterated learning procedures show a loss of accuracy at the beginning of each generation. That is because one or both agents are reset, and are not able to completely re-learn from the data kept from the previous generation during the pre-training phase. However, at the end of each generation, all these algorithms ensure a perfect game performance. While the use of NIL has little effect on the game performance, given a sufficient number of rounds, these procedures have a clear positive effect on topological similarity. In Figure 3-(b), we can see that the no-reset case has the lowest average ρ, while the iterated learning cases all have higher (and increasing) means. We provide extra experiments in Appendix E, which demonstrate the robustness of NIL under different scenarios. The discussion of the specific impact of each agent, and of why the reset-Alice and reset-Bob variants behave differently, is in Section 5.

HIGH TOPOLOGICAL SIMILARITY AND INTERVAL OF ADVANTAGE

In this section, we explore further the phenomena caused by the learning speed advantage in NIL. From the discussion in Section 3.1 and the experimental results in Section 4.1, we know that I_a and I_b play an important role in NIL: they should not be too large nor too small.
Intuitively, if I_a is too small, Alice will learn nothing from the previous generation, and hence NIL amounts to playing only one interacting phase. If I_a is too large, from the trend in Figure 2-(c), we may expect the increase of the expected ρ to be small in each generation, because Alice will learn D_i perfectly and hence have a ρ similar to its predecessor's. We therefore speculate that the value of I_a has a "bottleneck" effect, i.e., values that are too large or too small will both harm the performance of NIL. A similar argument also applies to the selection of I_b. To verify this argument, we run NIL with different values of I_a and I_b, examining the behavior of the following 3 quantitative metrics:

• E[r_{71:80}]: the average reward of the last ten generations (game performance);
• E[ρ_{1:10}]: the average value of ρ for the first ten generations (converging speed);
• E[ρ_{71:80}]: the average value of ρ for the last ten generations (converged ρ).

From the results presented in Table 1, we can see the importance of the number of pre-training rounds being neither too large nor too small. The suitable I_a and I_b are shown in bold. Furthermore, combining Figure 2 and Table 1, the interval of suitable I_a lies between 1000 and 2000, while it lies between 100 and 200 for I_b, which provides an effective way of choosing these hyper-parameters.

TOPOLOGICAL SIMILARITY AND VALIDATION PERFORMANCE

In this last series of experiments, we explore the relationship between topological similarity and the generalization ability of our neural agents, which can also indirectly reflect the expressivity of a language. We measure this ability by looking at their validation game performance: we restrict the training examples to a limited number of objects (i.e., the training set), and look at how good the agents are at playing the game on the remaining ones (i.e., the validation set). Figure 4-(a) demonstrates the strength of the iterated learning procedure in a validation setting. To illustrate the relationship between ρ and validation performance, we randomly choose I_a ∈ [60, 4000] and I_b ∈ [5, 200] and conduct a series of experiments. Those for which I_a and I_b are not in their optimal range yield a worse performance on both the validation test and topological similarity. In Figure 4-(b), we record the results from different experimental settings and plot the zero-shot performance against the topological similarity of the emergent language. This shows the linear correlation between these two metrics, and a significance test confirms it: the correlation coefficient is 0.928, and the associated p-value is 3.8 × 10^{-104}. Hence, under various experimental settings, the validation performance and the topological similarity are strongly correlated. Table 2 shows that when the size of the validation set increases, using iterated learning always improves the validation performance: in all cases, the both-reset algorithm yields the best performance. The fact that the Alice-reset setting performs better than the Bob-reset setting also matches our analysis well.

DISCUSSION: A PARALLEL WITH LANGUAGE EVOLUTION

We can observe an interesting phenomenon in Figure 3-(b): the topological similarity of the emergent language always increases at first, whether we use iterated learning or not.
This is akin to the effect apparent for ρ in Figure 2-(c). This comparison allows us to address one important difference between our neural iterated learning algorithm and the original version: our speaking and listening agents are not identical. Actually, the speaking and listening modules of humans are also not identical, but work on traditional iterated learning does not pay much attention to such differences. From Figure 3-(b) and Figure 4-(a), it is clear that Alice and Bob are affected differently by the generational resets, and thus do not offer the same contribution to the final performance. From this parallel, we retain that iterated learning is also linked to the emergence of a certain form of compositionality when applied to neural agents. Besides, we believe that the correlation between topological similarity and validation performance that we highlight in Section 4.4 is another argument in favor of a relationship between compositionality and generalization, which has recently been explored (Kottur et al., 2017; Choi et al., 2018; Andreas, 2019).

CONCLUSION

In this paper, we identify and articulate the learning speed advantages offered by high topological similarity, based on which we propose the NIL algorithm to encourage the dominance of highly compositional languages in a multi-agent communication game. We show that our procedure, which consists of resetting neural agents playing a referential game and pre-training them on data generated by their predecessors, can incrementally advantage emergent languages with high topological similarity. We demonstrate its value by obtaining large performance improvements in a validation setting, linking compositionality and the ability to generalize to new examples. The robustness of the algorithm is also verified in various experimental settings. Finally, we hope the proposed probabilistic model of NIL can inspire the application of NIL in more complex neural-agent-based systems.

ACKNOWLEDGEMENT

We show our sincere gratitude to Kenny Smith, Ivan Titov, Stella Frank and Serhii Havrylov for their helpful discussion and comments that greatly improved the manuscript. We would also like to thank the members of Prof. Jun Zhao's team at the Institute of Automation, Chinese Academy of Sciences, e.g. Dr. Kang Liu, Xiang Zhang and Xinyu Zuo, for sharing computing resources to run some experiments as well as sharing their pearls of wisdom with us during the course of this research, and we thank the 3 anonymous reviewers for their insights and comments.

APPENDIX A: PARAMETER SETTINGS

Unless specifically stated, the experiments mentioned in this paper use the hyper-parameters given in Table 3.

APPENDIX B

To better understand how topological similarity can measure the compositionality of a language, and to give some intuition on what languages with different ρ would look like, we provide and illustrate a toy example in this appendix. In this example, the object space is X = {blue box, blue circle, red box, red circle} and the message space is M = {aa, ab, ba, bb}. Any set of mappings from the four distinct objects to four messages (not necessarily distinct, i.e. the same message could correspond to different objects) forms a language. Hence, there exist 4^4 = 256 possible languages in this toy example. Following the principles provided in (Kirby et al., 2015), we define the following concepts for describing a language:

• Unambiguous language. A type of language that can unambiguously describe all objects in X. In other words, the mappings between X and M are bijective.
In this example, there exist 4 × 3 × 2 × 1 = 24 such languages.

• Compositional language. A type of unambiguous language that exhibits systematic compositional structure when forming messages. Such languages can use different symbols to represent different attributes of meaning and combine these symbols in a systematic way to form a message, such that the meaning of the whole message is formed from a simple combination of the meanings of its parts. For example, following the rules S → XY, with X: blue → a; X: red → b; Y: box → a; Y: circle → b, we can derive a compositional language like the example in Table 4.

Note that the number of unambiguous languages is usually much smaller than that of ambiguous languages, and the number of compositional languages is usually smaller than that of holistic languages. Using permutations and combinations, we can calculate the numbers of all possible languages, unambiguous languages, compositional languages and holistic languages; in particular,

# holistic languages = # unambiguous languages − # compositional languages.    (7)

From the above equation, it is easy to see that the gap between the number of compositional languages and the number of holistic languages becomes larger as N_v, N_a, N_L and |V| increase. Further, this means that it becomes even harder to pick a compositional language when randomly sampling a language. That could explain why the expected topological similarity of the emergent language may increase when smaller N_L and |V| are applied, as illustrated in (Lazaridou et al., 2018; Cogswell et al., 2019). Besides the numbers of different languages, another key difference among these languages is the topological similarity (i.e., ρ), as illustrated in Section 2.3. As the language studied in this paper is defined as a mapping function from a meaning (i.e., an object) to a message, a compositional language must ensure that the meaning of a message is a function of the meanings of its parts. In other words, compositional languages are neighborhood related: nearby meanings tend to be mapped to nearby signals. That is, nearby meanings that share similar attributes are likely to share similar message symbols (Brighton & Kirby, 2006). Thus, as differences between messages are measured by edit distance, compositional languages will have a higher ρ than holistic ones. However, the existence of degenerate components also changes the value of ρ: the ρ of a degenerate language might be higher than that of a holistic language. From the above discussion, we find that making highly compositional languages dominate is a challenging task: they occupy a really small portion of all possible languages, and topological similarity alone cannot tell them apart from languages that are highly degenerate. However, the proposed algorithm can solve this problem almost perfectly: it uses the learning speed advantage caused by high topological similarity to increase the posterior probability of high-ρ languages, and uses the interacting phase to rule out the degenerate components. The details of how the probability of languages changes in the different phases of our algorithm are illustrated in Appendix C.

APPENDIX C: PROBABILISTIC MODEL OF THE SYSTEM

Probabilistic Model of Emergent Languages: In Section 2.3, we define a language as a mapping function from the object space X to the message space M, i.e., L(·) : X → M. Here we discuss how to describe the probability of a specific language, i.e., P(L).
Suppose that we have N different objects x_1, ..., x_N in X, so that a language L is specified by the messages m_1, ..., m_N it assigns to them, with

P(L) = ∏_{n=1}^{N} P(m_n|x_n).    (8)

Assume that messages are uniformly sampled from M, whose size is M = |V|^{N_L}; we then have P(m_n|x_n) = 1/M, ∀n ∈ {1, 2, ..., N}. Hence the initial probability (or prior probability) of any possible language is 1/M^N. We define the posterior distribution of languages as the distribution after our neural iterated learning algorithm (NIL), i.e. P(L|NIL). Then, our goal is to enhance the posterior probability of the high-ρ languages, which is equivalent to enhancing the expectation of ρ, i.e. E_{L∼P(L|NIL)}[ρ(L)] = Σ_L ρ(L) P(L|NIL). It is obvious that E_L[ρ(L)], the expected topological similarity of languages under the prior probability, is quite low, as the high-ρ languages only occupy an extremely small fraction.

Definition of the Agents: Following the structure provided in Figure 1, we define the speaking agent (Alice) and the listening agent (Bob) formally here. Alice is a set of neural networks that can map any input object x to a discrete message m, so we define it as m = h(x), h : X → M. As Alice generates discrete messages with softmax layers, the probability distribution over the words of m can be obtained. In the example provided in Figure 1, we can read P(m_1|x) and P(m_2|x, m_1) from the softmax layers. In more general cases, we can obtain P(m_l|x, m_{l−1}, m_{l−2}, ...) following the same method. Thus, we can directly calculate the probability of a specific m given x for Alice as follows:

P(m|x) = ∏_{l=1}^{N_L} P(m_l|x, m_1, ..., m_{l−1}).    (9)

If we feed all possible x to Alice and calculate the corresponding P(m|x), we can then calculate the probability distribution over all languages after training Alice, following equations (8) and (9). Then, we can state our goal as obtaining a high E_{L∼P(L|NIL)}[ρ(L)] by using NIL to update the parameters of the neural network. In our setting, the posterior probability of languages is decided by Alice with its softmax layers. Bob plays the role of an assistant, ensuring the robustness of NIL, which will be further illustrated in Appendices D and E. From Figure 1, we can see that the inputs of Bob are a discrete message m and c different objects. As Bob calculates a score s_c for each candidate c_c, we can denote its function as s = f(m, x), f : M × X → R.

Probabilistic Description of Language Evolution in NIL: To avoid confusion, we specify all the probabilities involved in NIL in the left corner of Figure 6. In the figure, the shaded regions with different colors represent the three phases of NIL in ONE generation. Thus, one generation of NIL can be described as:

1. Initialization: At the beginning of generation i, the initial probability of Alice[i] is P_0(L), which is the same as the prior probability P(L) mentioned before, as Alice[i] is always randomly initialized. The initial function of Bob[i] is represented as f_0(m, x).

2. Learning Phase: Following Algorithm 1, Alice[i] is pre-trained using the data sampled from the previous generation, i.e. D_i. The pre-trained probability of languages is defined as P_i(L|D_i). Bob[i] is then pre-trained on samples generated by P_i(L|D_i) using the REINFORCE procedure, after which its function becomes f_i(m, x).

3. Interacting Phase: The pre-trained Alice[i] and Bob[i] then interact and update their parameters together following the REINFORCE procedure described in Section 3.2. In each round of the game, Alice[i] first uses argmax to select the m with the highest probability given a randomly selected object x; both agents then update their parameters if R = 1, i.e.
the data pair ⟨m, x⟩ could help them accomplish the referential game successfully. We argue that this process has the same effect as the following procedure: we first sample a data set D* ∼ P_i(L|D_i), and then delete the data pairs that cannot unambiguously deliver information, forming a refined data set R_i. Then, the interacted probability of Alice[i] can be represented by P_i(L|D_i, R_i). As Bob also updates its parameters in this phase, we define its interacted function as f_i*(m, x).

4. Transmitting Phase: Finally, in the transmitting phase, we sample D_{i+1} ∼ P_i(L|D_i, R_i) by: i) randomly feeding x_n to Alice[i]; ii) sampling a message m_n ∼ P_i(m|x_n, D_i, R_i). Note that Bob[i] is not involved in this phase.

From all the sections above, we argue that Alice plays an important role in all the phases of NIL, while Bob only helps to make the languages effective during the interacting phase. As we discuss the roles of Alice and Bob in further detail in Appendix E, we only provide an intuition of how the language changes in NIL in the following paragraphs. Overall, the objective of our NIL design is to ensure that the expected topological similarity of emergent languages increases over generations, as expressed by equation (3). As languages with higher ρ are learned faster, as stated in Hypothesis 1, we can expect the high-ρ languages to have a higher pre-trained probability under P_i(L|D_i) than in D_i, i.e.

E_{L∼P_i(L|D_i)}[ρ(L)] ≥ ρ(D_i),    (11)

where ρ(D_i) denotes the topological similarity of the data set D_i. (Note that this inequality is not a strict corollary, but it is very likely to hold as long as we have an appropriate I_a. In the worst case, we could choose an extremely large I_a to make Alice learn D_i perfectly. However, the experimental results, as well as the explanation in Appendix D, verify that weak pre-training can indeed help us achieve a higher expected ρ.) Then, in the interacting phase, we may expect

E_{L∼P_i(L|D_i,R_i)}[ρ(L)] ≈ E_{L∼P_i(L|D_i)}[ρ(L)],    (12)

as compositional languages and holistic languages are both unambiguous and the game performance cannot tell them apart. Finally, during the transmitting phase, we have D_{i+1} ∼ P_i(L|D_i, R_i). Assuming that we sample enough of D_{i+1} to ensure it has a distribution very similar to P_i(L|D_i, R_i), it is reasonable to have

E[ρ(D_{i+1})] ≈ E_{L∼P_i(L|D_i,R_i)}[ρ(L)].    (13)

Summing up the above, equation (3) can be obtained by combining equations (11)-(13).

APPENDIX D: MORE ON THE LEARNING SPEED ADVANTAGE

The amplifying mechanism and the learning speed advantage are the two main elements behind the success of NIL. The former is elaborated in Section 3.2 and Appendix C, under the assumption that the learning speed advantage of high-ρ languages indeed exists. In this section, we explain why such an advantage exists, using experimental results and a toy example.

Figure 7: Illustration of learning a high-ρ language and a low-ρ language.

Example Supporting Hypothesis 1: This hypothesis claims that a high-ρ language is learned faster than a low-ρ one on the speaker side. As we can directly represent the posterior probability of any language from Alice's perspective, the assertion "learned faster" can be converted to "the posterior probability increases faster". We use a toy example, i.e. the two languages in Table 4, to demonstrate how such an advantage emerges and how it works. To make the notation concise, we use "BB, RB, BC, RC" to represent "blue box, red box, blue circle, red circle" respectively. The probability of the compositional language and of the holistic language in Table 4 can each be represented as a product of conditional probabilities of the form P(m_l = · | object), where C denotes the common factor shared by both languages.
As we are using a stochastic gradient descent algorithm to update the parameters of Alice, it is straightforward to see that the gradient update from one point will 'pull up' the neighbourhood of the function h, as shown in the left panel of Figure 7. Then, we can speculate that when a data sample belonging to both languages arrives, e.g. ⟨ab, BC⟩, the probabilities P(m_1=a|BC), P(m_1=a|BB) and P(m_1=a|RC) increase at the same time, as these inputs are similar to BC (only one attribute changes). As the conditional probabilities must sum to 1, the following probabilities decrease: P(m_1=b|BC); P(m_1=b|BB); P(m_1=b|RC). Thus, when Alice learns the data sample ⟨ab, BC⟩, P(L_cmp) may have two terms increased, i.e., terms 5 and 1. For P(L_hol), however, the decrease of term 1 harms the increase of term 5, hence P(L_hol) increases more slowly than P(L_cmp) (the fact that term 7 decreases on both sides does not change our deduction).

Example Supporting Hypothesis 2: We can use a similar explanation for the advantage on Bob's side. Recall that Bob is defined as a mapping function f from M × X to R. Following the principle mentioned above, if Bob learns ⟨ab, BB⟩, a set of function values increases, i.e. f(ab, BB), f(aa, BB), f(bb, BB), f(ab, BC), and f(ab, RB), as they are all close to each other in the input space. Then it is easy to find that two terms in the compositional language in Table 4 are increased, while only one term increases in the holistic language. That is, the score of the high-ρ language increases faster.

We can also think about Hypothesis 2 in the following way. With the intuition that a language with higher ρ tends to be smoother and to have fewer inflection points than one with lower ρ, the learning speed advantage given by highly compositional languages can be illustrated by the example provided in Figure 7. In the example, a language is considered to be a one-dimensional mapping function, which is represented by the dotted lines in Figure 7. The object-message pairs, which are represented by the cross marks, are the points that satisfy the mapping function. The solid line represents the mapping function of the learning agent. If the target output (i.e. the third cross mark in each figure) is larger than the predicted output (i.e. the circle mark), the optimizer updates the parameters of the neural network following the direction of the gradient, as illustrated by the bold arrows in the figure. Such an update also pulls the neighbouring parts of the function up, as illustrated by the smaller arrows on the solid curve. The smoothness of high-ρ languages implies that the MSE at neighbouring positions is also reduced by this update, while the MSE of neighbours would be increased in the case of a low-ρ language. Such a trend is represented by the blue arrows and red crossed arrows in Figure 7: the blue ones indicate a decrease of the MSE at the specific position, while the red ones indicate an increase of the MSE. In other words, for a high-ρ language, an update corresponding to one data sample is likely to have a larger positive effect on other data samples, and hence ensures a higher learning speed. Meanwhile, for a low-ρ language, one data sample would have both positive and negative effects on its neighbours and thus lead to a lower learning speed.

APPENDIX E: ROBUSTNESS OF NIL

In this section, we provide experimental results to demonstrate the robustness of the proposed method. The influence of hyperparameters (e.g.
vocabulary size, message length), as well as the role played by Alice and Bob, are both elaborated.

Robustness to Hyperparameters of the Message Space: The message space is determined by the vocabulary size |V| and the message length N_L. Thus, we first run experiments to see the effects of different |V| and N_L on E_{L∼P(L|NIL)}[ρ(L)]. From the discussion in Appendix B, we know that when |V| and N_L are large, making high-ρ languages dominate in the posterior probability is very hard, as compositional languages only occupy an extremely small portion. Such a trend can also be found in Table 5, as the finally converged expectation of topological similarity becomes lower with larger |V| or N_L. Our algorithm, however, is very robust to different values of |V| and N_L. Comparing different columns in Table 5, E_{L∼P(L|NIL)}[ρ(L)] decreases very slowly as |V| and N_L increase. An extreme example is that the converged ρ can still be roughly 0.8 with |V| = 72. The validation accuracy seems even more robust when |V| and N_L change: NIL always obtains more than 80% accuracy, compared to the no-reset case (roughly 15%). Furthermore, compared with |V|, N_L has a stronger impact on performance in terms of all metrics but the validation performance, as Table 5 shows that the performance with N_L = 3 is significantly lower than its counterpart with N_L = 2. One possible explanation is that increasing N_L brings an exponential change to the message space. However, no matter how |V| and N_L change, E_{L∼P(L|NIL)}[ρ(L)] is always significantly higher than the compositionality of emergent languages given by the baseline model, i.e. 0.3.

Table 5: Values of the 4 metrics as |V| and N_L change. The metric G_{0.85} denotes the first generation at which the average ρ of the previous three generations exceeds 0.85. The notation "-" means the agents never satisfy the requirement.

Robustness to Degenerate Components: From the discussion in Appendix B, we know that the ρ of a language that has many degenerate components will also be high, and hence such a language can be learned faster by Alice in the learning phase. Thus, it is necessary to check whether our algorithm can avoid the mode collapse caused by the degenerate components. Intuitively, the degenerate components can be filtered out during the interacting phase, as the REINFORCE algorithm ensures that the parameters of the agents are only updated when R = 1, i.e. when the language is effective and thus unambiguous. To verify this hypothesis, we first observe how the number of message types, i.e. the number of different messages used to describe all 64 objects, changes during NIL. It is straightforward to see that a language without any degenerate component would have 64 different message types. As shown in Figure 8, all methods achieve high numbers of message types, which indicates that the REINFORCE algorithm always filters out the degenerate components efficiently. Furthermore, we design two challenging tasks for NIL:

1. Degenerate initialized: We let Alice learn from a purely degenerate language at the beginning of each generation, before it learns from D_i.
2. Degenerate mixed: We mix data pairs generated by a purely degenerate language into D_i and ensure the proportion of degenerate pairs is more than 50%, which makes it easier for Alice to collapse to a degenerate language during the learning phase.

We then compare the performance, i.e. the expected ρ and the validation accuracy, of the agents on these different tasks.
The results shown in Figure 9 demonstrate that NIL is very robust to the influence of degenerate components, as both E_{L∼P(L|NIL)}[ρ(L)] and the validation score are much higher than the no-reset baseline's performance.

Figure 9: Two corner-case tests. "NIL with degenerate initialized" means Alice is initialized with a degenerate language at the beginning of each generation. "NIL with degenerate mixed" means Alice is initialized with a degenerate language, AND D_i is mixed with I_s degenerate language pairs.

The Role of Bob's Pre-training: From the discussion above, it is easy to understand why E_{L∼P(L|NIL)}[ρ(L)] gradually increases in NIL and how the REINFORCE procedure applied in the interacting phase can filter out the degenerate components. However, the role played by Bob, especially in the learning phase where Bob only updates its own parameters, is less straightforward. In short, the pre-training of Bob makes the algorithm more robust, especially at the beginning of the interacting phase. We record the value of E_{L∼P(L|NIL)}[ρ(L)] every 20 iterations during the learning phase and the interacting phase, and plot the results for two generations in Figure 10. In this figure, the x-axis is the iteration index. With I_a = 1000, I_b = 400, and I_g = 1600, we split (by dotted lines) each generation into three parts: Alice pre-training, Bob pre-training, and the interacting phase. The blue lines are generated by NIL with the pre-training of Bob, while the red lines are generated when Bob is not pre-trained (here I_g = 2000 to make a fair comparison). For the blue lines, E_{L∼P(L|NIL)}[ρ(L)] does not change while Bob is pre-training (beginning at the 1000th iteration), because Alice does not update its parameters at that time. However, for the red lines, E_{L∼P(L|NIL)}[ρ(L)] begins to decrease at the 1000th iteration. That is because, when Bob is not pre-trained, the language learned by Alice may be disrupted by playing with a completely fresh Bob at the beginning of the interacting phase. That is why the pre-training of Bob makes NIL more efficient and robust: if Bob is pre-trained on the data generated by Alice in the current generation, Bob is more "familiar" with Alice's language, which ensures a more stable interacting phase.

Looking at the Emergent Languages: From the discussion above, we know that NIL can ensure a high expected ρ of the emergent language, and a high validation performance. Here we show the evolution of the distributions of emergent languages to provide better intuition on how NIL works. We first provide two examples of converged languages (i.e., the language generated by Alice in the last generation) using the no-reset method and the resetting-both method in Tables 6 and 7, respectively. In these examples, both languages can almost unambiguously represent all 64 different types of objects in X, and hence they can help Alice and Bob to play the game successfully. However, the language generated using iterated learning has a clear compositional structure: the first position of the message represents the color, and the second position represents the shape. Such a structure is quite similar to what humans do, e.g., combining an adjective and a noun to represent a complex concept. To better illustrate the posterior probability of emergent languages as a function of the corresponding value of ρ and the generation, we provide 3D views of P(ρ(L)|D_i, R_i) over 80 generations in Figures 12 and 13. The heat-map provided in Figure 11 can be considered as the top view of these 3D illustrations.
In these two figures, the x-axis and y-axis represent the generation index and the topological similarity, and the z-axis represents the probability of languages with a specific value of ρ in a specific generation. To make the figures easier to read, we smooth the distribution of ρ in each generation using linear interpolation (Boyd & Vandenberghe, 2004). Figures 14-(a) and (b) compare the posterior distributions at some typical generations, which can also be regarded as side views of the 3D illustration along the x-axis. In these figures, we find that the initial distribution of ρ is not flat. That is because, even though the prior probability of each language is uniform, languages with extremely high or extremely low ρ make up only a small portion of all possible languages, as stated in (Brighton, 2002). Hence the initial distribution of ρ(L) is no longer uniform but bell-shaped, similar to a Gaussian distribution. One new trend revealed by these figures is that, in the none-reset case, the width of the curves does not change much across generations, while in the resetting-both case the curves gradually narrow (i.e., become more peaked). Such a trend means that when iterated learning is applied, the language tends to converge to high-ρ types. Figures 15-(a) and (b) track the ratio of languages with different values of ρ, which can also be regarded as side views of the 3D illustration along the y-axis. In these figures, we divide all possible languages into five groups based on their topological similarity, i.e., languages with ρ ≤ 0.2, 0.2 < ρ ≤ 0.4, 0.4 < ρ ≤ 0.6, 0.6 < ρ ≤ 0.8, and 0.8 < ρ. We plot the ratio of these five groups of languages at the end of each generation. From Figure 15-(a), we can see that the high-ρ languages, represented by the bold curve, always occupy a small portion; the topological similarity of the dominant languages is below 0.4. However, in the resetting-both case, as illustrated in Figure 15-(b), the portion of high-ρ languages increases significantly, which further verifies that iterated learning can gradually make high-ρ languages dominate the posterior.
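The five-group bookkeeping used in Figure 15 can be reproduced in a few lines of NumPy. The per-generation ρ samples below are synthetic (drawn from a drifting Gaussian purely for illustration) and stand in for the languages sampled from the posterior during NIL.

```python
import numpy as np

# Hypothetical per-generation samples of rho; in practice these would come
# from languages sampled from P(L | D_i, R_i) during NIL.
rng = np.random.default_rng(0)
generations = 80
rho_samples = [np.clip(rng.normal(loc=0.3 + 0.006 * g, scale=0.12, size=500), 0, 1)
               for g in range(generations)]

bins = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0 + 1e-9]      # the five rho groups used in Figure 15
labels = ["rho<=0.2", "0.2-0.4", "0.4-0.6", "0.6-0.8", "rho>0.8"]

# ratios[g, k] = fraction of sampled languages in group k at generation g;
# plotting each column against g gives curves of the kind shown in Figure 15.
ratios = np.array([np.histogram(s, bins=bins)[0] / len(s) for s in rho_samples])

for k, lab in enumerate(labels):
    print(f"{lab}: start {ratios[0, k]:.2f} -> end {ratios[-1, k]:.2f}")
```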
2019-12-11T20:45:13.171Z
2020-02-04T00:00:00.000
{ "year": 2020, "sha1": "6e482052ac6f73d225387b27ccbfb94375c04ec8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7237d6c7d778bafcb627cf279b15b3d22d1ba0b2", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
259939093
pes2o/s2orc
v3-fos-license
Factors associated with the practice of and intention to perform female genital mutilation on a female child among married women in Abakaliki Nigeria Background Female Genital Mutilation (FGM), also known as Female Genital Cutting or Female Circumcision is the harmful excision of the female genital organs for non-medical reasons. According to WHO, approximately 200 million girls and women have been genitally mutilated globally. Its recognition internationally as human rights violation has led to initiatives to stop FGM. This study investigated factors associated with the practice and intention to perform FGM among married women. Methods A cross-sectional study was conducted among 421 married women from communities in Abakaliki Nigeria. The participants were selected through multistage sampling. Data were collected through the interviewer’s administration of a validated questionnaire. Data were analyzed using IBM-SPSS version 25. Chi-square and logistic regression tests were employed to determine factors associated with the practice and intention to perform FGM at a p-value of ≤ 0.05 and confidence level of 95%. Results The mean age of respondents is 40.5 ± 14.9 years. A majority, 96.7% were aware of FGM. On a scale of 15, their mean knowledge score was 8.1 ± 4.3 marks. Whereas 50.4% of the respondents were genitally mutilated, 20.2% have also genitally mutilated their daughters, and 7.4% have plan to genitally mutilate their future daughters. On a scale of 6, their mean practice score was 4.8 ± 1.2 marks. The top reasons for FGM are tradition (82.9%), a rite of passage into womanhood (64.4%), suppression of sexuality (64.4%), and promiscuity (62.5%). Women with at least secondary education are less likely to genitally mutilate their daughters (Adjusted Odds Ratio [AOR] = 0.248, 95% Confidence Interval [CI] = 0.094–0.652). Women who are genitally mutilated are more likely to genitally mutilate their daughters (AOR = 28.732, 95% CI = 6.171–133.768), and those who have previously genitally mutilated their daughters have greater intention to genitally mutilate future ones (AOR = 141.786; 95% CI = 9.584–209.592). Conclusions Women who underwent FGM have a greater propensity to perpetuate the practice but attaining at least secondary education promotes its abandonment. Targeted intervention to dispel any harboured erroneous beliefs of the sexual, health, or socio-cultural benefits of FGM and improved public legislation with enforcement against FGM are recommended. Background Female Genital Mutilation (FGM), also known as Female Genital Cutting or Female Circumcision is the partial or total removal of external female genitalia or other injuries to the female genital organs for non-medical reasons [1]. The World Health Organization (WHO) classifies FGM into four types: type I (clitoridectomy), type II (excision), type III (infibulation) and type IV which comprises all other harmful procedures performed on the female genitalia like pricking, piercing, incising, scraping, and cauterization [2]. The WHO estimates that more than 200 million girls and women alive have undergone FGM and more than 3 million girls are at risk of it annually, in about 30 countries in Africa, the Middle East, and Asia where FGM is rampant [1,3,4]. Despite that FGM prevalence has fallen in many countries among younger girls aged 15-19 compared to older women aged 45-49 [5,6], the incidence remains high in many countries where it is practiced. 
For instance, a previous report of national FGM prevalence among women aged 15-49 years in West African countries showed high rates of 96.9%, 89.6%, and 82.7% respectively in Guinea, Sierra Leone, and Mali [5]. According to the Nigeria Demographic Health Survey 2018 (NDHS 2018), the national prevalence of FGM among women aged 15-49 years is 20% and the practice is on the rise among girls aged 0-14, the rates having risen from 16.9% in 2013 to 19.2% in 2018. [7] The regional prevalence in the three major tribes of Yoruba, Igbo and Hausa stands at 34.7%, 30.7% and 19.7% respectively and in terms of the geopolitical zones, the prevalence is highest in the South East (35%) followed by the South West (30%), and lowest in the North East (6%) zones. In Ebonyi State in the South East zone, the prevalence is 53.2%, seconding Imo State with the highest prevalence of 61.7% in the zone [7]. Previously, Lawani et al. reported that majority (66.3%) of the primigravid women in their survey in Abakaliki, the Ebonyi State Capital, had undergone FGM [8]. The practice of FGM has been defended upon sociocultural, traditional, political, religious, and economic backgrounds [9][10][11]. Some of the socio-cultural reasons include cultural identity, female cleanliness, protection of virginity, improvement of fertility, prevention of immorality, better marriage prospects, and greater pleasure for the husband [1,5,9,10]. FGM is often believed to reduce a woman's libido and this is considered to help her resist illicit sexual intercourse and preserve marital fidelity [1]. FGM is also often seen as a necessary ritual for initiation into womanhood and is linked to cultural ideals of femininity and modesty [9,10]. Further, the practice is seen in many cultures as a requirement for marriage, social inclusion, and approval [5,9]. Consequently, family pressure to conform to the practice is a strong motivation to continue with FGM, and women who depart from the societal norm may face condemnation, harassment, and rejection [1,12]. FGM has no health benefits, and it harms girls and women in so many ways [1] due to immediate and longterm physical, mental and psychosocial health consequences [2,3,6,10]. Girls who undergo FGM face short-term complications such as severe pain, shock, excessive bleeding, infections, and difficulty in passing urine. In a long term, the victims are significantly at risk of adverse gynaecologic and obstetric outcomes like dyspareunia, prolonged labour, perineal tears, caesarean section, postpartum haemorrhage, episiotomy, extended maternal hospital stay, resuscitation of the infant, inpatient perinatal death and dysuria and the risks are greater with more extensive FGM [8,[13][14][15][16]. The treatment of FGM in 27 high-prevalence countries has been estimated to cost 1.4 billion United States Dollars (USD) per year and is projected to rise to 2.3 billion USD by 2047 if no action is taken [1]. Female genital mutilation is internationally recognized as a violation of the human rights of girls and women [1,2,5,6]. The United Nations strives for the full eradication of the practice by 2030, with its inclusion as target 5.3 in the Sustainable Development Goal (SDG) [17]. In Nigeria, FGM is criminalized; offenders are punishable with a prison sentence of up to 4 years and or a fine of up to 200,000 Naira (US$480) or both [5,18]. Similarly, FGM is prohibited in Ebonyi State with a State Law [19] that prescribes the same penalties for offenders as obtained in the federal law. 
Despite the enactment of these laws and the growing disapproval globally, the incidence of FGM remains high, raising questions about the factors behind the perpetuation of the practice. This study, therefore, investigated factors associated with the practice of and the intention to perform FGM on a female child among married women, intending to inform strategic interventions to curtail the harmful practice. beliefs of the sexual, health, or socio-cultural benefits of FGM and improved public legislation with enforcement against FGM are recommended. Keywords Female genital mutilation, Female genital cutting, Female circumcision, Harmful practice, Violation of human rights, Female sexual rights, Abakaliki Nigeria Study area and design The study was conducted in Abakaliki Local Government Area (LGA) using a cross-sectional analytic design. Abakaliki is one of the 13 LGAs in Ebonyi State in the South East of Nigeria. The LGA, with a projected population of 198,793 [20], is made up of seven communities including six which are majorly rural and one which is majorly urban, accommodating part of Abakaliki, the capital of Ebonyi State. The people of the LGA engage in diverse occupations with a significant proportion of farmers. The population comprises predominantly Igbo ethnic group who practice Christianity. According to the 2015 Nigeria Education Data Survey, Ebonyi State had a female literacy rate of 40.3%, with only 11.1% of the females who completed secondary education and 4.3% who had more than secondary education. [21]. In terms of healthcare, there are 61 formal health facilities in Abakaliki LGA comprising 29 public and 32 private facilities, [22] and a majority of these facilities offer a wide range of maternal health services. The public health facilities include 2 tertiary health facilities (Alex Ekwueme Federal University Teaching Hospital Abakaliki and National Obstetric Fistula Centre Abakaliki) which offer specialist obstetric and gynaecologic services, one general hospital, 20 health centres and 6 health posts. However, the service delivery is very poor especially in the primary and secondary levels because of poor facilities, inadequate manpower and poor logistics. [22]. Study population, sample size, and sampling technique The study involved married women. A sample size (n) of 422, including anticipated non-response rate of 10%, was estimated using the Cochran formula (n = Z α 2 pq/d 2 ) for sample proportion [23], with a standard normal deviate (Z α ) of 1.96, a prevalence of FGM (p) of 49.6% reported in an earlier study [24], and a precision (d) of 5%. The respondents were selected using multistage sampling method. Firstly, four out of the seven communities including Nkaliki, Mgbabor, Agbaja-Unuhu, and Inyimagu-Unuhu were selected by balloting. Using estimated populations for each of the community, the sample size was allocated proportionately to the four communities, coming up to 95, 105, 109, and 113 respectively. Households were selected from the communities using systematic sampling technique. Firstly, a list of the households in each of the communities was obtained from the Disease Surveillance and Notification Unit of the Health Department of Abakaliki LGA. A sampling interval (nth interval) was determined by dividing the number of households in each community by the sample size estimated for that community. Strategic locations in the communities were identified with the help of resource persons who were members of the communities. 
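As an aside before the household-selection details, the sample-size arithmetic quoted in the methods (Cochran's n = Z²pq/d² with Zα = 1.96, p = 0.496, d = 0.05, plus a 10% allowance for non-response) can be checked in a few lines; this is only a numerical illustration of the reported figure of 422.

```python
def cochran_sample_size(p, d=0.05, z=1.96, nonresponse=0.10):
    """Cochran sample size for a proportion, then inflated for anticipated non-response."""
    q = 1.0 - p
    n0 = (z ** 2) * p * q / (d ** 2)
    return n0, n0 * (1.0 + nonresponse)

n0, n_adj = cochran_sample_size(p=0.496)
print(f"raw n = {n0:.1f}, with 10% non-response allowance = {n_adj:.1f}")
# raw n ~ 384.1; inflating by 10% gives ~422.5, i.e. the roughly 422 participants targeted.
```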
These strategic locations included a village square, two schools, and a market. In each of the communities, a pen was spun at the location, and the street closest to and in the direction of the tip of the pen was followed to locate the first house on either the right or left side of the street, where the first household and first respondent were selected for interview. The spinning of the pen was done to make a random start. After the random start, subsequent households were selected using the nth interval. Where a street divided into two or more, the one on the right-hand side, which was predetermined, was followed. All married women encountered in the houses visited were screened for eligibility to participate. The inclusion criterion was being legally married, whether living together with the husband or not, while the exclusion criteria were being divorced or widowed. However, only one woman was selected per household. In a polygamous family with more than one woman who met the inclusion criterion, one of the women was selected by balloting. This was done to ensure better representation of the dispositions of the spouses of the married women towards FGM. Using the same technique in all the communities, participants were selected until the sample size for each community was achieved. Data collection and analysis Data were collected with a pre-tested, structured questionnaire, which was administered by the researchers. The questionnaire had four sections: (1) sociodemographic characteristics, (2) knowledge of FGM with 15 questions, (3) practices about FGM with six questions, and (4) reasons for performing FGM with ten questions. A mix of English and Igbo languages was used during the data collection. The choice of language was left to the discretion of the participants; for those who were literate, English was mainly used, while those who were not literate and could neither understand nor speak English were interviewed in Igbo. All the research team members who participated in the data collection are fluent in both English and Igbo, and they carried out pre-data-collection rehearsals of the questions after the pretesting of the tool. Data collection lasted for eight days, at two days per community. Data were entered into IBM-SPSS version 25, cleaned, and transformed into variables of interest by recoding. Descriptive analyses were carried out using frequencies, means and standard deviations. For inferential statistics, chi-square tests were applied at a p-value of ≤ 0.05, and logistic regression of the independent variables that showed a significant relationship with the outcome variables in the bivariate analyses was carried out at a 95% confidence level. Measurement of variables Knowledge of FGM was assessed using fifteen variables (Table 1). A correct answer for each of the variables attracted a score of 1 while an incorrect answer was scored 0, giving a maximum knowledge score of 15 marks. Good knowledge of FGM was defined as scoring up to the mean mark (8.1 ± 4.3), while those who scored below the mean were categorised as having poor knowledge. The categorisation of knowledge into good and poor was done arbitrarily to enable a test of the relationship of knowledge with the practice of and the intention to perform FGM among the respondents. Practices regarding FGM were assessed using six variables, which included having genitally mutilated a female child (Table 2).
A correct answer for each of the variables, denoting a behaviour in support of the abolition of the harmful practices regarding FGM, attracted a score of 1 while an incorrect answer, which showed support for perpetuation of the harmful practice was scored 0, giving a total score of 6 marks. A mean score was estimated as a measure of the overall practice about FGM. Factors associated with the practice of FGM were assessed by cross-tabulating independent variables with having ever genitally mutilated a female child or not and having the intention to perform FGM on a future daughter or not. Results Four hundred and twenty-one (99.8%) out of 422 women interviewed responded. The mean age of the respondents is 40.5 ± 14.9 years and majority of them are of the Igbo tribe (97.4%) and practice Christianity (92.2%, Table 3). The majority, (67.2%) were schooled up to the secondary level including 25.2% who had tertiary education. Up to 29.5% of the respondents are unemployed. Similarly, majority (67%) of their husbands attained secondary education including 28% that attained tertiary level and majority (94%) are employed. The majority of the women have female children (83.4%) and approximately half (49.9%) of them belong to the high socioeconomic class with an average monthly family income of N52,881 (USD$119.1). However, 27.6% of them earn less than the N30,000 (< USD$67.6) national minimum wage in Nigeria. Regarding knowledge of FGM, the mean score of the respondents, on a scale of 15, is 8.1 ± 4.3 marks and 51.3% of them achieved average marks (good knowledge). Table 1 shows that the majority of the respondents are aware of FGM (96.7%), that FGM is a violation of the fundamental human rights of girls and women (59.9%), and that it has no health benefits (56.5%). Further, majority of them are aware that women who are genitally mutilated are at risk of tetanus and sexually transmitted diseases (STDs) such as HIV (53.2%), but only a few of them know (Table 2). Also, majority (91.7%) do not support discrimination against girls who have not been genitally mutilated. Whereas 50.4% of the respondents were genitally mutilated and 20.2% have genitally mutilated their daughters, 7.4% of them have plans to genitally mutilate their future daughters. The mean practice score of the respondents was 4.8 ± 1.2 marks on a scale of 6. Figure 1 shows that FGM is commonly performed by Traditional Birth Attendants (TBAs, 52.7%), mainly during childhood (58.2%). Among reasons adduced for performing FGM, tradition (82.9%), rite of passage of girls into womanhood (64.4%), suppression of women's sexuality (64.4%), reduction of sexual pleasure and promiscuity (62.5%) and increased chances of marriage and family honour (53.2%) were topmost (Fig. 2). The factors associated with having genitally mutilated a female child are presented in Table 4. Age, respondent and husband education, employment status, number of female children, socio-economic class, being genitally mutilated, and knowledge of FGM are significantly associated with having genitally mutilated a female child. Women who attained secondary education have four times less genitally mutilated their female children compared to those who did not have formal education (Adjusted Odds Ratio [AOR] = 0.248, 95% Confidence Interval [CI] = 0.094-0.652). 
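Adjusted odds ratios like the one just quoted (AOR = 0.248, 95% CI 0.094–0.652) are obtained by exponentiating logistic-regression coefficients and their confidence limits. The sketch below shows that workflow with statsmodels on a synthetic data set; the variable names and data are hypothetical stand-ins, since the study's own analysis was run in IBM-SPSS version 25.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis data set of 421 respondents (random values for illustration only).
rng = np.random.default_rng(1)
n = 421
df = pd.DataFrame({
    "mutilated_daughter": rng.integers(0, 2, n),     # outcome: ever genitally mutilated a daughter
    "secondary_education": rng.integers(0, 2, n),    # exposure of interest
    "respondent_cut": rng.integers(0, 2, n),         # covariate: respondent's own FGM status
    "age": rng.normal(40.5, 14.9, n),
})

X = sm.add_constant(df[["secondary_education", "respondent_cut", "age"]])
model = sm.Logit(df["mutilated_daughter"], X).fit(disp=0)

# Adjusted odds ratios and 95% CIs = exponentiated coefficients and confidence limits.
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```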
On the other hand, women who are employed had over four-and-a-half times higher odds of having genitally mutilated their female children compared with those who are unemployed (AOR = 4.723, 95% CI = 1.881-11.859), and women who were themselves genitally mutilated had approximately 29 times higher odds of having genitally mutilated their female children compared with those who were not (AOR = 28.732, 95% CI = 6.171-133.768, Table 4). Table 5 shows the factors associated with the intention to genitally mutilate a future daughter. Age, respondent and husband education, employment status, number of female children, socio-economic class, being genitally mutilated, knowledge of FGM, and having genitally mutilated a female child are significantly associated with the intention to genitally mutilate a future daughter (Table 5). Discussion This study sought to determine the factors associated with the practice of and the intention to perform FGM and found that half of the respondents were genitally mutilated, one fifth of them had genitally mutilated their daughters, and some of them still intend to do the same to their next female children. The study further showed that whereas education promotes abandonment of FGM, women who underwent FGM have a greater propensity to perpetuate the practice. A previous survey in Nigeria showed a similar pattern whereby women who are genitally mutilated are more likely to believe that FGM should be continued [7]. Also, in Egypt, the majority of survey respondents who were genitally mutilated did not perceive any harm from having been genitally mutilated [25]. This observation suggests a victim-precipitated action and highlights a need for targeted interventions to dispel any perceived sexual or health benefits of FGM. Further, the reasons for this seemingly greater support for FGM among this group of women need to be carefully investigated. The reasons adduced by our respondents for performing FGM align with previous reports [1,5,9-11], which revolve around beliefs in the socio-cultural, sexual and health benefits of FGM. These reasons, however, do not justify subjecting girls and women to the painful consequences of FGM. Strategies need to be carefully designed to respect the cultural values within the communities where FGM is performed while upholding the human rights of girls and women [9]. The prevalence of FGM in our study is slightly lower than that reported for Ebonyi State in the NDHS 2018 [7] and much lower than the earlier report from Abakaliki by Lawani et al. [8]. The discrepancy between our findings and previous ones may be due to improvement in the abandonment of the practice, or to the fact that Lawani et al. surveyed women in specialist obstetric facilities in the Abakaliki metropolis; those women may have visited the facilities from far and wide to access specialist gynaecologic and obstetric services because of complications, including those that may have arisen from FGM. Our finding is much higher than the 27.1% reported in Bayelsa State of Nigeria [26] and the 14.7% among Egyptian medical students [25], but much lower than the 91.7% in Ethiopia [15] and the 70% among African women who had migrated to Canada [27]. These discrepancies may be explained by changing trends in the practice of FGM or by differences in the socio-cultural, traditional, or religious backgrounds on which the practice has been defended [9-11]. The poor knowledge of some long-term complications of FGM among the respondents is a cause for concern.
Similarly, it was reported in NDHS 2018 that only 61% of women in Nigeria have heard of FGM [7]. This finding suggests the need for public education to increase knowledge of FGM and its consequences and positively influence behaviours towards its eradication. Notwithstanding that FGM remains prevalent in certain countries where laws against it exist [3], it is interesting that majority of our respondents know that it is not just a violation of fundamental human right, it is now a punishable offence in Nigeria, prescribed in existing federal and state laws banning it [5,18,19]. These laws are expected to serve as a deterrent to the proponents of FGM. The fact that Ebonyi State Violence Against Persons (Prohibition) Law established the FGM Monitoring Committee (FGMMC) at the State, LGA, and Community levels [19] is praiseworthy; however, a challenge that may not be disputed is that public awareness of the legislations is not optimal. This suggests the need for expanded enlightenment campaigns about the existing laws and the penalties specified against FGM. In line with our findings, such campaigns should be designed to target women who were genitally mutilated and those who have genitally mutilated their daughters or supported someone to engage in the harmful practice, since they have a greater propensity to perpetuate the practice. Such targeted interventions will require experts in gender-sensitive matters like FGM who will not only be able to identify the victims of FGM but also be empathic enough towards them and able to positively influence their disposition on the matter, helping them to dispel any harboured erroneous beliefs about FGM. The fact that FGM is commonly performed by TBAs has previously been reported in Nigeria [2,28]. Consequently, there is a need for regulation of informal health service providers and traditional medicine practitioners who are at the forefront of the harmful practice, including placing and enforcing sanctions against any offenders. Though our study shows that only 5% of FGM are performed by doctors and nurses, a procedure known as medicalization [1], reports of medicalization abound in Nigeria [26]. Experts estimate that only 18% of women who have been subjected to FGM had the procedure performed by trained healthcare personnel [29]. The prevalence of medicalization in our study is slightly lower than the 7.6% reported in a study done in Gambia [30]. Medicalization had been advanced to reduce the complications of FGM because trained medical personnel are skilled to carry out the procedure in a sterile environment and condition to avoid infections and manage any immediate consequences such as bleeding. However, there is the risk of immediate and long-term consequences even when the procedure is performed in a sterile environment by a healthcare provider [31][32][33]. Further, FGM has no medical justification, violates the code of medical ethics, and the rights to health, life, physical integrity, and non-discrimination, and the rights to be free from cruel, inhuman, or degrading treatment [31,32]. On a larger note, medicalization may confer a sense of legitimacy to FGM or give the impression that it is without health consequences, which can undermine global efforts towards its eradication [32]. These facts make it imperative for the Medical and Dental Council and the Nursing and midwifery Council of Nigeria to enforce sanctions against medical and nursing and midwifery practitioners who perform FGM, in a bit to discourage the harmful practice. 
Similar to our finding, previous studies have shown that women who attained secondary education are less likely to genitally mutilate their daughters [34] or support FGM [6,27]. Education is an important mechanism to increase awareness of the dangers of FGM, foster questioning and, discussion and provide opportunities for individuals to take on social roles that are not dependent on the practice of FGM for acceptance [34]. Based on this fact, the design of interventions towards the abandonment of FGM must take the role of women's education into cognizance, making girl child education a priority in all states of Nigeria and countries in Africa where FGM is rampant. The NDHS 2018 showed that the prevalence of FGM declined by 5% compared to 2013 [7]; this rate is small. There is a need for sustained enforcement of the legislations about FGM to ensure that Nigeria meets the SDG 5 target 5.3 by 2030 [17]. The efforts to stop FGM must focus on human rights, gender equality, sexual education, and attention to the needs of women and girls who suffer from its consequences [35]. Conclusion The practice of FGM remains high and is worse among women who underwent FGM; they also have a greater propensity to perpetuate the harmful practice into their next female children. This finding suggests a victim-precipitated action due to harboured erroneous beliefs in the socio-cultural, sexual, and health benefits of FGM. The reasons behind the seeming greater support for FGM among victims of the harmful practice needs to be carefully investigated. In the meantime, interventions must target these victims to change their perception and help them to dispel such erroneous beliefs. Although the reasons for practice of FGM have persisted, they do not justify the subjection of girls and women to the pains of FGM and the negative long term consequences. Education is an important factor for the abandonment of FGM; any interventions must take the role of women's education into cognizance, making girl child education a priority in societies where it is practiced. Further, the need for improved legislation with enforcement of sanctions against FGM to serve as a deterrent to offenders cannot be overemphasized. Limitations of the study Female genital mutilation is a gender-sensitive matter associated with emotional trauma, stigmatization and discrimination in the victims. Further, there has been increasing legislations against FGM with prescriptions of punitive measures against perpetuators and supporters of the harmful practice. As a result of these, victims of FGM and individuals who perpetuate the practice are most likely to be secretive and biased against reporting their experiences and practices about FGM. Consequently, inadequate responses from our participants could have affected the conclusions drawn from this study. However, the researchers were mindful of these potential limitations and remained civil and empathic while interviewing the respondents during the data collection. The researchers also assured the respondents of the confidentiality of the information they provided.
2023-07-18T13:12:34.756Z
2023-07-17T00:00:00.000
{ "year": 2023, "sha1": "a7007e58651bd0a6efdb8ca5c3396293d3d5915a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "0c454fcb5139b1635c6779a443450c0e923524d3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231943722
pes2o/s2orc
v3-fos-license
Bisphosphonate nephropathy: A case series and review of the literature From rat studies, human case reports and cohort studies, bisphosphonates seem to impair renal function. However, when critically reviewing the literature, zoledronate and pamidronate are more frequently involved in renal deterioration than other bisphosphonates. When bisphosphonate nephropathy occurs, zoledronate more frequently induces tubular toxicity whereas pamidronate typically induces focal segmental glomerulosclerosis. Thus, although bisphosphonates are highly effective in preventing complications for patients with osseous metastases and are highly effective in preventing fractures for patients with osteoporosis, renal function should be monitored closely after initiation of these drugs. | INTRODUCTION Bisphosphonates are a class of drugs inhibiting bone resorption through several mechanisms, which are used in various skeletal disorders, such as osteoporosis, malignancy-associated bone disease and Paget's disease. 1,2 In 2018 in the Netherlands, bisphosphonates were prescribed for 197.765 users, thus including >1% of the Dutch population. 3 Generally, treatment with bisphosphonates is considered effective and safe. 4,5 However, in some cases, bisphosphonates may induce renal impairment. Here, we report on 3 cases of zoledronate- A 71-year-old male was referred to the emergency department of our hospital for a severe acute kidney injury. Two years before this event, hypertension and chronic kidney disease (CKD) stage 3b were diagnosed, with serum creatinine at approximately 100 μmol/L. The CKD was ascribed to previously undiagnosed hypertension. One year before admittance to the emergency room, prostate carcinoma was diagnosed with hydronephrosis of the right kidney. No clinically relevant deterioration of renal function was observed. Despite the initiation of hormonal therapy, osseous metastases were found a few months later. Treatment with docetaxel and prednisone was started, which resulted in remission 6 months later. Again, no clinically relevant deterioration of CKD was present at that time. Hereafter, hypercalcaemia developed, for which monthly zoledronate was initiated (3 doses had been administered at the time of admission). Three weeks before admittance to the emergency room, the patient noticed a decline in general well-being and reduced exercise tolerance. On admission, serum creatinine was 2000 μmol/L, potassium 5.4 mmol/L, bicarbonate 12 mmol/L and phosphate 4.1 mmol/L. Urinary analysis showed proteinuria and erythrocyturia. Kidney ultrasound showed a small right kidney of 9 cm (similar to earlier findings) and a normalsized left kidney with a dilatation of the pyelocalyceal system of 8-9 mm. Serum anti-GBM and ANCA autoantibodies were negative. Fluid resuscitation did not improve renal function. Dialysis was initiated and a renal biopsy was performed ( Figure 1). The biopsy included 15 glomeruli, of which 2 were globally sclerosed. No segmental glomeruloscelerosis was found. Tubular damage was observed, consisting of vacuolization of the tubular epithelial cells, dilatation of tubules and desquamation of the tubular epithelial cells, compatible with drug toxicity. In addition, the biopsy showed a mild interstitial nephritis with presence of eosinophil granulocytes. Routine immune fluorescence (IF) showed no deposits. Unfortunately, despite cessation of zoledronate, renal function did not improve, which necessitated chronic haemodialysis. 
A few months later, the patient decided to discontinue dialysis treatment. He died shortly after. | Case 2 An 82-year-old man was referred to the outpatient nephrology clinic for evaluation of a slowly declining renal function. His medical history included prostate carcinoma with osseous metastases diagnosed 2 years before admittance, for which he was treated with goserelin and monthly zoledronate. In the first 8 months of treatment with these drugs, his serum creatinine increased from 95 to 120 μmol/L. In | Case 3 A 60-year-old female was diagnosed with breast carcinoma with osseous metastases, for which zoledronate once a month was started 6 months before referral to our outpatient clinic. At presentation, serum creatinine was 142 μmol/L. Renal biopsy included 19 glomeruli of which 4 were globally sclerosed. No segmental glomerulosclerosis was observed. Furthermore, a mild interstitial nephritis with subtle signs of tubular injury and accumulation of Tamm-Horsfall protein were observed. Routine IF showed no deposits. After cessation of zoledronate, her renal function normalized. In the 3 renal biopsies, nonsclerosed glomeruli were without histological abnormalities and immunostainings for IgA, IgG, IgM, κ and λ light chains, C3c, and C1q showed no specific deposits. | DISCUSSION The basic structure of all bisphosphonates is P-C-P with 2 side chains (R1 and R2), as shown in Figure 3 and described in Table 1. Most bisphosphonates have an OH-molecule at R1, which enhances bone affinity. The R2 chain may contain a nitrogen (N) atom, which highly increases the potency of the bisphosphonate when compared to bisphosphonates with a non-nitrogen containing chain. Non-nitrogen containing bisphosphonates inhibit adenosine triphosphate (ATP)dependent enzymes that are necessary for activity of osteoclasts. Nitrogen-containing bisphosphonates bind and stabilize calcium phosphate in bone matrix and thus prevent dissolution. 1 Furthermore, these agents inhibit the mevalonate pathway, essential in posttranslational lipid modification and anchoring of guanosine triphosphates in the cell membrane, which plays a role in various cell functions, such as apoptosis and ATP-dependent metabolic pathways. 7,8 Lastly, bisphosphonates disrupt the cytoskeleton of cells by inhibition of actin assembly. 9 3.1 | Bisphosphonate nephropathy Bisphosphonate nephropathy has been described since the 1980s 10,11 and the presence of limited renal tolerability of this class of agents has been further explored in the 1990s. 12,13 In 2008, the available literature on the nephrotoxic effects of pamidronate, zoledronate and ibandronate was extensively reviewed. 2 However, since then, additional information has become available. Furthermore, more bisphosphonates, such as alendronate, risedronate and clodronate, are presently available. Therefore, we provide an updated and more extensive review of the literature on bisphosphonate nephropathy of these 6 drugs. Renal biopsy showed a mild increase of mesangial cells and matrix, but no tubular or interstitial abnormalities. Proteinuria had completely disappeared within 40 days after cessation of alendronate. 49 In a cohort of 5227 elderly patients, treatment with alendronate or risedronate was not associated with more adverse renal effects when compared to no treatment. 50 Lastly, a randomized trial in 127 osteoporotic or osteopenic patients comparing alendronate with risedronate or raloxifene showed no deterioration in renal parameters in any of these drugs during 12 months of follow-up. 
51 3. 8 which affects multiple cellular processes including apoptosis. As stated previously, these effects are also mechanisms through which bisphosphonates exert its therapeutic effect. Furthermore, bisphosphonates may disrupt cytoskeleton assembly and impair cellular energy. 2,9 Lastly, activation of the immune system may play a role. 57 The possible different pathogenesis and prevalence of nephrotoxicity between the various bisphosphonates may be the result of different pharmacokinetic properties due to different side chains on R1 and R2, which are shown in Table 1. 58 Additional research is warranted to further investigate the abovementioned findings. For example, the determinants of patients at high risk of bisphosphonate nephropathy should be identified. | CONCLUSION In conclusion, predominantly zoledronate and pamidronate may induce a deterioration in renal function. The recent finding that 40% of cancer patients continue nephrotoxic drugs after an impairment of renal function 59 highlights the importance of being aware of the potential nephrotoxic effects of bisphosphonates, and the importance of assessing and monitoring renal function when prescribing these drugs. ACKNOWLEDGEMENT We are grateful to Maartje Korver and Ben Millard-Martin for critically reviewing the language used in the manuscript. COMPETING INTERESTS There are no competing interests to declare.
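The cases above are framed in terms of serum creatinine (μmol/L) and CKD stage. As a side note on how such staging is usually derived for monitoring purposes, the sketch below converts creatinine to mg/dL and evaluates the 2009 CKD-EPI creatinine equation to estimate GFR, then maps the result to a KDIGO GFR category. This is an illustrative calculation and is not part of the reported case series.

```python
def ckd_epi_2009(scr_umol_l, age, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation."""
    scr = scr_umol_l / 88.4                        # convert umol/L to mg/dL
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr / kappa, 1.0) ** alpha
            * max(scr / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def ckd_stage(egfr):
    """KDIGO GFR category."""
    for stage, lo in [("G1", 90), ("G2", 60), ("G3a", 45), ("G3b", 30), ("G4", 15)]:
        if egfr >= lo:
            return stage
    return "G5"

# Hypothetical patient: 75-year-old woman with serum creatinine 180 umol/L.
egfr = ckd_epi_2009(180, age=75, female=True)
print(f"eGFR {egfr:.0f} mL/min/1.73 m^2, stage {ckd_stage(egfr)}")
```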
2021-02-18T06:17:05.197Z
2021-02-17T00:00:00.000
{ "year": 2021, "sha1": "1df1127c0f98e9689d7ef963aaa684723b8a3d16", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/bcp.14780", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1f7f91a2b40e650e69aee0af53c32e4d9dc66d89", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
199639203
pes2o/s2orc
v3-fos-license
UHRF1 Promotes Proliferation of Human Adipose-Derived Stem Cells and Suppresses Adipogenesis via Inhibiting Peroxisome Proliferator-Activated Receptor γ Once the adipose tissue is enlarged for the purpose of saving excess energy intake, obesity may be observed. Ubiquitin-like with PHD and RING Finger domains 1 (UHRF1) is helpful in repairing damaged DNA as it increases the resistance of cancer cells against cytocidal drugs. Peroxisome proliferator-activated receptor γ (PPARγ), an important nucleus transcription factor participating in adipogenesis, has been extensively reported. To date, no study has indicated whether UHRF1 can regulate proliferation and differentiation of human adipose-derived stem cells (hADSCs). Hence, this study aimed to utilize overexpression or downregulation of UHRF1 to explore the possible mechanism of proliferation and differentiation of hADSCs. We here used lentivirus, containing UHRF1 (LV-UHRF1) and siRNA-UHRF1 to transfect hADSCs, on which Cell Counting Kit-8 (CCK-8), cell growth curve, colony formation assay, and EdU proliferation assay were applied to evaluate proliferation of hADSCs, cells cycle was investigated by flow cytometry, and adipogenesis was detected by Oil Red O staining and Western blotting. Our results showed that UHRF1 can promote proliferation of hADSCs after overexpression of UHRF1, while proliferation of hADSCs was reduced through downregulation of UHRF1, and UHRF1 can control proliferation of hADSCs through transition from G1-phase to S-phase; besides, we found that UHRF1 negatively regulates adipogenesis of hADSCs via PPARγ. In summary, the results may provide a new insight regarding the role of UHRF1 on regulating proliferation and differentiation of hADSCs. Introduction Obesity may lead to a series of serious metabolism diseases, such as hypertension, diabetes, cardiovascular disease, and dyslipidemia [1]. An increase in adipocytes number (hypertrophy) and size (hyperplasia) may significantly induce obesity [2]; thus deep understanding of the mechanism of adipocyte's proliferation and adipogenesis is of great importance. Overexpression of ubiquitin-like with PHD and RING Finger domains 1 (UHRF1) in a variety of haematological and tumors was noted beforehand, as well as a significant association of its remarkable expression with attenuated expression of a number of tumor susceptibility genes (TSGs). Besides, UHRF1 includes four structural domains: a ubiquitin-like (UBL) domain, a plant homeodomain (PHD) domain, a SRA (SET and RING-associated) domain, and a RING domain [11]. UHRF1 is able to regulate DNA-methylation via different DNA-binding proteins, such as histone H3 lysine 9 (H3K9), histone deacetylase 1 (HDAC1), DNA methyltransferase 1 (DNMT1), proliferating cell nuclear antigen (PCNA), and euchromatic histone-lysine N-methyltransferase 2 (EHMT2) [12,13]. An another important function of UHRF1 is to promote cell proliferation, which has been extensively reported [14][15][16]. However, a number of studies indicated that UHRF1 may play different roles in proliferation of different cells. For instance, in tumor cells, the expression of UHRF1 may be easily noted [17], while in some terminal differentiation cells, e.g., UHRF1 is hardly expressed in skeletal muscle cells [18]. At G1/S transition, previous researches demonstrated the efficacy of downregulation of UHRF1 for cell cycle arrest, in which a p53/p21Cip1/WAF1-dependent DNA-damage checkpoint plays a substantial role if that would be activated [19,20]. 
UHRF1 inhibitors have possessed precious therapeutic influences in form of being anticancer, in addition to restoration of normal gene expression [21,22]. However, based on a previous study, UHRF1 can control the self-renewal of HSC via regulation of the cell-division modes epigenetically [23]. Expression of UHRF1 was noted beforehand in early phase of the lineage, while accompanied with other consequences in later phases of survival and neuronal differentiation [24]. Moreover, UHRF1 can colocalize with the maintenance DNMT1 protein throughout S-phase [12]. In this study, we attempted to explore whether UHRF1 can regulate proliferation and differentiation of human ADSCs (hADSCs). Our results demonstrated that UHRF1 could promote proliferation hADSCs after overexpression of UHRF1, whereas proliferation of hADSCs was decreased through downregulation of UHRF1. In addition, we found that UHRF1 negatively regulated adipogenesis of hADSCs via PPAR . Patients and Methods . . Patients and Clinical Tissue Specimens. Three male patients with peptic ulcer were recruited in this study. The patients had no acute inflammation, diabetes, malignant tumors, smoking, and mental illness. The abdominal subcutaneous adipose tissues (SATs) were separated from the subjects via a surgical method. This study was approved by the Ethics Committee of The Third Xiangya Hospital of Central South University (Changsha, China). All the subjects signed the written informed consent form. . . Isolation, Cultivation, and Differentiation of hADSCs. Here, SAT (0.010 kg) was washed four times with phosphatebuffered saline (PBS), and then, SAT was cut and digested with collagenase I (Sigma-Aldrich, St Louis, MO, USA) at 37 ∘ C for 90 min. Next, 10 ml DMEM/F12 (Life Technologies, Carlsbad, CA, USA) was added into centrifuge tube to terminate digestion, and then the medium was filtered by a nylon mesh and was subsequently centrifuged at 150×g for 10 min. After that, the supernatant was gently poured out; 3 ml erythrocyte lysate was added into tube (Beyotime Institute of Biotechnology, Shanghai, China) and centrifuged at 150 g for 10 min; the supernatant was gently poured out again and washed by D-Hank's solution one time and again centrifuged at 150 g for 10 min. The pelleted cells were seeded in DMEM/F12 containing 10% fetal bovine serum (FBS; Life Technologies, Carlsbad, CA, USA). In addition, 4-6 passage cells were used for the next experiments. The specific cell surface markers of hADSCs were detected using a flow cytometer (Muse EasyCyte, Merck Millipore, Germany) with CD73, CD44, CD45, and CD105 (all purchased from eBioscience, Inc., San Diego, CA, USA) and CD90 and CD34 (BioLegend, San Diego, CA, USA). Here, the applied method was according to Wu et al. 's research [25]. Besides, the differentiation protocol of hADSCs was based on our previous study [26]. . . RNA Extraction and Quantitative Reverse Transcription Polymerase Chain Reaction (RT-qPCR). Total RNA was extracted by TRIzol reagent (Life Technologies, Carlsbad, CA, USA), and cDNA synthesis was performed with a reverse transcription kit (Promega, Madison, WI, USA). The RT-qPCR was applied by a Mastercycler5ep real-time PCR (Eppendorf, Hamburg, Germany). The relative gene expression was calculated by 2 -ΔΔCT . These experiments were carried out for three times. Primer sequences used for RT-qPCR are listed in Table 1. . . Transfection of hADSCs with Lentivirus. 
Human lentivirus-UHRF1 (LV-UHRF1) and lentivirus negative control (LV-NC) sequences were constructed by GeneChem Co. Ltd. (Shanghai, China) and transfected into hADSCs according to the protocol. The cells were divided into LV-UHRF1 and LV-NC groups. The expression vector (GV341) contained whole coding sequence of UHRF1. After hADSCs reached confluency of 40-50%, hADSCs were transfected by LV-UHRF1 or LV-NC with 2 mg/ml polybrene (GeneChem Co. Ltd., Shanghai, China) in serum-free medium. After 16 h, the medium was abandoned and replaced with a fresh medium. . . Small Interfering RNA (siRNA). In this phase, hADSCs were seeded at 1 × 10 5 cells/well and cultured in six-well plates. After 24 h, cells were transfected with 40 nM siRNA-UHRF1 or siRNA-negative control (si-NC). Lipofectamine 3000 was used as transfection reagent (Life Technologies, Carlsbad, CA, USA), and cells were divided into siRNA-UHRF1 group and si-NC group. Three sequences of siRNA-UHRF1 were synthesized, and siRNA-UHRF1 was tested by Western blotting. . . Western Blot Analysis. The cells were lysed with radioimmunoprecipitation assay (RIPA) buffer (Sigma-Aldrich, St Louis, MO, USA) and protein concentrations were quantified by bicinchoninic acid (BCA) assay (Beyotime Institute of Biotechnology, Shanghai, China). The proteins were separated by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and electroblotted onto a polyvinylidene difluoride (PVDF) membrane (Millipore, Billerica, MA, USA). The PVDF membrane was then blocked in 5% skimmed milk and 0.1% Tween-20 at room temperature for 1.5 h and subsequently was incubated in primary antibody . . Colony Formation Assay. Here, 1,000 cells were seeded in six-well plates and cultured for 10 days. Then, each well was washed with PBS for three times, subsequently fixed with 75% ethanol for 10 min, and stained with 0.1% crystal violet for 30 min. The colonies were observed and counted under a light microscope (Olympus, Tokyo, Japan). . . Cell Cycle Analysis. After hADSCs were transfected by siRNA-UHRF1 and LV-UHRF1 for 72 h, respectively, the two groups were harvested and washed with PBS and then fixed by 70% ice-cold ethanol at 4 ∘ C overnight. The cells were incubated in PBS with 10 mg/mL RNase and 1 mg/mL propidium iodide (PI; Beyotime Institute of Biotechnology, Shanghai, China) for 1 h at room temperature. The cell cycle was tested using a flow cytometer (Muse EasyCyte, Merck Millipore, Germany) and was analyzed with EasyCyte software according to the standard procedure. . . Statistical Analysis. The results were presented as mean ± standard deviation (SD). Two groups were compared by the unpaired Student's t-test, and multiple groups were analyzed by one-way analysis of variance (ANOVA). Statistical significance was defined as a P-value < 0.05. Results . . UHRF Regulates Proliferation of hADSCs. We first detected the identification and characterization of hADSCs, and our results showed that the typical surface marker of mesenchymal stem cells (MSCs) was expressed in hADSCs. Besides, the hADSCs were positive for the mesenchymal markers (CD44, CD73, CD90, and CD105) and were negative for hematopoietic and endothelial markers (CD34 and CD45) (Figure 1(a)). Next, we further explored the expression of UHRF1 after LV-NC, LV-UHRF1, si-NC, and siRNA-UHRF1 were transfected into hADSCs. We found that UHRF1 was significantly upregulated after overexpression of UHRF1 ( * P < 0.05 compared with LV-NC group; Figures 1(b) and 1(c)). 
On the contrary, UHRF1 was significantly downregulated after knockdown of UHRF1 with siRNA ( ** P < 0.01 compared with siRNA-NC group; Figures 1(d) and 1(e)). To investigate the effects of UHRF1 on proliferation of hADSCs, LV-NC, LV-UHRF1, si-NC, and siRNA-UHRF1 were transfected into hADSCs, respectively. CCK-8 was used to assess proliferation of hADSCs; the results showed that proliferation in the LV-UHRF1 group was significantly increased, while that in the siRNA-UHRF1 group was notably decreased, compared with the LV-NC group and si-NC group after the cells had been transfected for 72 h ( * P < 0.05 compared with LV-NC group; Figure 1(f)). Next, the cell growth curve was used to further assess proliferation of hADSCs: after transfection, the number of cells was counted daily with a cell counting instrument (Countess II FL Automated Cell Counter; Invitrogen, Carlsbad, CA, USA). The number of cells in the LV-UHRF1 group was markedly increased compared with the LV-NC and si-NC groups after 5-7 days, while it was significantly decreased in the siRNA-UHRF1 group compared with the LV-NC and si-NC groups ( * P < 0.05 compared with LV-NC group; Figure 1(g)). These results indicated that overexpression of UHRF1 may promote proliferation of hADSCs, whereas downregulation of UHRF1 may inhibit it. UHRF1 Accelerates Colony Formation of hADSCs. To further explore whether UHRF1 affects colony formation of hADSCs, UHRF1 was upregulated or downregulated in hADSCs; after 10 days, the colonies were stained with 0.1% crystal violet and observed under a microscope. The findings demonstrated that upregulation of UHRF1 notably promoted colony formation of hADSCs, while downregulation of UHRF1 significantly suppressed it ( * P < 0.05 compared with LV-NC group; Figures 2(a) and 2(b)). Next, the EdU proliferation assay was used to assess proliferation of hADSCs; we found that overexpression of UHRF1 markedly increased proliferation of hADSCs, whereas knockdown of UHRF1 remarkably decreased it (△P < 0.05 compared with LV-NC group; Figures 2(c) and 2(d)). UHRF1 Promotes G1- to S-Phase Transition and Regulates Expression of Cell Cycle-Related Proteins in hADSCs. To determine how UHRF1 affects proliferation of hADSCs, flow cytometry was carried out to investigate alterations in the cell cycle distribution of hADSCs. The results indicated that the proportion of cells in G1 phase in the LV-UHRF1 group was significantly decreased compared with the LV-NC group, while the proportion of cells in S phase was markedly increased; the S-phase proportion was significantly decreased in the siRNA-UHRF1 group ( * P < 0.05 compared with LV-NC group; Figures 3(a) and 3(b)). Furthermore, the expression levels of Cyclin D1 and PCNA were detected by Western blotting; these levels were markedly increased in the LV-UHRF1 group compared with the LV-NC group, but notably decreased in the siRNA-UHRF1 group ( * P < 0.05 compared with LV-NC group; Figures 3(c), 3(d) and 3(e)). These results indicated that UHRF1 may initiate S-phase entry by upregulating the expression of cell cycle-related proteins. UHRF1 Regulates Adipogenesis via PPARγ. To determine whether UHRF1 can regulate adipogenesis, the hADSCs were transfected with LV-NC, LV-UHRF1, siRNA-UHRF1, and siRNA-NC, respectively; the cells were then cultured for 8 consecutive days, and Oil Red O staining was used to evaluate cellular lipid droplets in each group.
The findings showed that overexpression of UHRF1 could significantly inhibit adipogenesis (Figure 4(a)). It was also revealed that the expression of UHRF1 mRNA was gradually downregulated during adipogenesis ( △ P < 0.01 compared with 0th day; Figure 4(b)). At 8th day, RT-qPCR was carried out to detect the expression of PPAR , C/EBP , and fatty acid binding protein 4 (FABP4) mRNA in each group. It was disclosed that the expression of PPAR , C/EBP , and FABP4 mRNA was significantly downregulated in overexpressed UHRF1 group, while it was upregulated in downregulated UHRF1 group ( * P < 0.05, △ P < 0.01 compared with LV-NC group; Figures 4(c), 4(d) and 4(e)). Next, we analyzed the expression of PPAR and C/EBP mRNA after LV-NC, LV-UHRF1, siRNA-UHRF1, and siRNA-NC were transfected into hADSCs for 3 days, respectively, and we found that overexpression of UHRF1 could inhibit expression of PPAR , whereas downregulation of UHRF1 could promote expression of PPAR ( * P < 0.05, △ P < 0.01 compared with LV-NC group; Figures 4(f) and 4(g)). Discussion In this study, we demonstrated that UHRF1 is a critical factor to regulate proliferation and differentiation of hADSCs. Although a number of previous studies reported that UHRF1 did not affect proliferation in certain stem cells [23,24]; however, UHRF1 may play a different role in proliferation of hADSCs. Some studies have shown that UHRF1 plays a major role in proliferation of cells. Besides, UHRF1 has been extensively studied in tumor pathogenesis [27][28][29], and UHRF1 can maintain methylation status of tumor suppressor genes. Once UHRF1 is upregulated, the expression of those tumor suppressor genes is downregulated, which may cause tumorigenesis [21]. On the other hand, UHRF1 promotes or does not affect proliferation of cells, especially in high proliferation capacity of tumor cells [17], and overexpression of UHRF1 notably actives proliferation, while downregulation of UHRF1 blocks proliferation. However, a number of studies have reported that increase of UHRF1 can block contact inhibition [30,31]. In addition, UHRF1 cannot affect proliferation and terminal differentiation of certain stem cells [12,18,23,24]. However, no study has indicated whether UHRF1 can affect proliferation and terminal differentiation of hADSCs. Our results showed that UHRF1 can regulate proliferation of hADSCs, and increase or decrease of UHRF1 may enhance or inhibit proliferation of hADSCs. In order to indicate whether the mechanism of UHRF1 may affect proliferation of hADSCs, we detected cycle changes in hADSCs after overexpression or silencing of UHRF1. The majority of previous studies have illustrated that UHRF1 can regulate proliferation of cells through transition from G1-phase to S-phase, enforce cell cycle from G1/S-to S-phase, in addition to increase cell proliferation [21,22]. A previous study tested the expression of UHRF1 by immunohistochemistry in specimens of esophageal squamous cell carcinoma (ESCC) patients who treated with radiotherapy, in which it was revealed that UHRF1 was significantly overexpressed in ESCC specimens [32]. The results of the present study showed that UHRF1 controls proliferation of hADSCs through transition from G1-phase to S-phase, which is consistent with those reported previously [19,20]. At G1 and G2/M phases, we found that expression of novel NP95 was suppressed in normal thymocytes, while it was remarkably expressed in mouse T cell lymphoma cells [33]. 
To date, no study has indicated whether UHRF1 can affect differentiation of hADSCs. The overexpression or downregulation of UHRF1 was used to explore the role of UHRF1 in adipogenesis, in which our results showed that UHRF1 negatively regulates adipogenesis. It was previously shown that UHRF1 negatively regulates PPAR and increases proliferation, migration, and clonal formation in colorectal cancer cells lines, and the molecular mechanism revealed that UHRF1 recruits PPAR promoter and accelerates DNA methylation and repressive histone modification [34]. In the present study, we found that UHRF1 was gradually downregulated during adipogenesis, and also overexpression of UHRF1 might downregulate PPAR in hADSCs, while downregulation of UHRF1 might increase expression of PPAR . In addition, PPAR , as a nucleus transcription factor, was found to negatively regulate cell proliferation, in which upregulation of PPAR significantly decreased proliferation in human breast cancer cells or colon cancer cells, [34,35]. In contrast, reduced expression of PPAR could increase proliferation in smooth muscle cells (SMCs), and nesfatin-1 could stimulate vascular SMCs (VSMCs) thorough inhibiting PPAR [36,37]. Taken together, our results indicated that UHRF1 can promote proliferation of hADSCs and suppress adipogenesis thorough inhibiting PPAR , and this study may provide a new insight for effective treatment of obesity and related metabolic diseases. Data Availability All data can be presented by the corresponding author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this article. Authors' Contributions Ke Chen and Zhaohui Mo designed the experiments; Zhaohui Mo overviewed the study and provided technical guidance; Ke Chen performed the experiments; Zi Guo, Yufang Luo, and Jingjing Yuan partly performed experiments; Ke Chen drafted the manuscript. All authors reviewed the manuscript.
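As a methods footnote, the relative expression values behind the RT-qPCR results (the 2^-ΔΔCT method) and the two-group comparisons (unpaired Student's t-test) described above can be illustrated as follows. The Ct values, gene names and group labels are invented for the example and are not data from the study.

```python
import numpy as np
from scipy import stats

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method (target normalised to a reference gene)."""
    dd_ct = (np.mean(ct_target) - np.mean(ct_ref)) - (np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl))
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values: PPARg normalised to GAPDH, LV-UHRF1 vs LV-NC.
ppar_lv_uhrf1, gapdh_lv_uhrf1 = [27.9, 28.1, 28.0], [17.1, 17.0, 17.2]
ppar_lv_nc, gapdh_lv_nc = [26.0, 25.9, 26.1], [17.0, 17.1, 16.9]

fold = fold_change_ddct(ppar_lv_uhrf1, gapdh_lv_uhrf1, ppar_lv_nc, gapdh_lv_nc)
print(f"PPARg fold change (LV-UHRF1 vs LV-NC): {fold:.2f}")      # < 1 means downregulation

# Unpaired Student's t-test on the per-replicate delta-Ct values of the two groups.
dct_uhrf1 = np.array(ppar_lv_uhrf1) - np.array(gapdh_lv_uhrf1)
dct_nc = np.array(ppar_lv_nc) - np.array(gapdh_lv_nc)
t, p = stats.ttest_ind(dct_uhrf1, dct_nc)
print(f"t = {t:.2f}, p = {p:.4f}")                               # p < 0.05 -> significant difference
```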
A Review of Histogram Equalization Techniques in Image Enhancement Application Image enhancement can be considered one of the fundamental processes in image analysis. The goal of contrast enhancement is to improve the quality of an image so that it becomes more suitable for a particular application. To date, numerous image enhancement methods have been proposed for various applications, and efforts have been directed towards further increasing the quality of the enhancement results while minimizing computational complexity and memory usage. In this paper, image enhancement methods based on Histogram Equalization (HE) are studied. The paper presents an exhaustive review of these studies and suggests a direction for future developments of image enhancement methods. Each method has its own advantages and drawbacks. In the future, this work will give direction to other researchers proposing new, advanced enhancement techniques. Introduction The purpose of image enhancement is to make a digital picture more appealing to the eye, for example by making the image smoother or sharper. This is an important topic in digital image processing: it helps humans and computer vision algorithms obtain accurate information from the enhanced images. Visual quality and certain image properties, such as brightness, contrast, signal-to-noise ratio, resolution, edge sharpness, and color accuracy, are improved through the enhancement process [1], [2]. Recently, many image enhancement methods have been developed based on various digital image processing techniques and applications. They can operate in the spatial domain or the spatial-frequency domain. The enhanced image provides useful information for post-processing, especially at the segmentation stage. This paper is organized as follows: Section 2 describes related work on HE-based enhancement and Section 3 gives the conclusion of the work. Histogram Equalization Many researchers have argued that Histogram Equalization (HE) is a simple and easy method to enhance contrast and improve image quality [15]-[17]. In 1997, Kim [18] raised several concerns about the contrast problem and proposed Brightness preserving Bi-Histogram Equalization (BBHE) to enhance contrast while preserving brightness. The average intensity value was used as the separating point to differentiate between the dark and bright areas of the histogram. This choice was challenged by Wang et al. [19], who argued that the median intensity value is a more accurate separating point than the average intensity. These results were in turn contradicted by a study suggesting that choosing the separating point that minimizes the mean brightness difference between the original and output images is more specific and accurate than both BBHE and Dualistic Sub-Image Histogram Equalization (DSIHE) [20]. Research by Ooi and Isa [21] proposed a new improvement to histogram equalization known as Quadrant Dynamic Histogram Equalization (QDHE). The first step in this technique is to divide the histogram into four sub-quadrant histograms based on the median value of the original image; after normalizing each sub-histogram, the image is equalized. A major advantage of QDHE is that it enhances the image without intensity saturation, noise amplification, or over-enhancement. In 2010, Ooi et al. [22] presented a new method based on a plateau-level equation, namely Bi-Histogram Equalization with a Plateau Level (BHEPL).
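For orientation, the following is a minimal NumPy sketch of the BBHE idea described above: the histogram is split at the mean intensity and each sub-histogram is equalized over its own range, which keeps the output brightness close to the input. Function and variable names are illustrative and not taken from the reviewed papers.

```python
import numpy as np

def bbhe(img: np.ndarray) -> np.ndarray:
    """Sketch of Brightness-preserving Bi-Histogram Equalization (BBHE).

    The grey-level histogram is split at the mean intensity; each half is
    equalized independently and mapped back into its own intensity range,
    so the overall mean brightness is roughly preserved.
    """
    img = img.astype(np.uint8)
    mean = int(img.mean())

    def equalize_range(values: np.ndarray, lo: int, hi: int) -> dict:
        # Histogram and normalised CDF restricted to the grey levels [lo, hi].
        hist, _ = np.histogram(values, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = hist.cumsum()
        if cdf[-1] == 0:          # no pixels fall in this half
            return {}
        cdf = cdf / cdf[-1]
        # Map each input level in [lo, hi] to an output level in [lo, hi].
        return {lo + i: int(round(lo + cdf[i] * (hi - lo))) for i in range(hi - lo + 1)}

    lut = np.arange(256, dtype=np.uint8)   # identity look-up table as a starting point
    mapping = {**equalize_range(img[img <= mean], 0, mean),
               **equalize_range(img[img > mean], min(mean + 1, 255), 255)}
    for src, dst in mapping.items():
        lut[src] = dst
    return lut[img]
```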
The main objective of BHEPL is to improve the BBHE technique in terms of processing time; the method combines mean-brightness-preserving histogram equalization with clipped histogram equalization. Interestingly, this is in contrast to a study conducted by Sengee et al. [23], who suggested an extension of BBHE based on a Neighbourhood Metric. This method involves a few steps: first, a large histogram is divided into sub-regions using the Neighbourhood Metric; second, based on the mean, the histogram of the original image is separated into two sub-regions and each is processed independently. The results enhanced the local contrast and preserved the brightness of the original image. A performance comparison is illustrated in Figure 1, which shows the outputs of HE, BBHE [23], BHEPL [22], and QDHE [21]. In a different study, Salah et al. [24] described a new approach to solving the illumination problem in face images using Histogram Equalization (HE). The technique is based on the combination of gamma correction and the Retinal filter's compression function, namely GAMMA-HM-COMP. The Retinal filter is a newer enhancement method, and the result was effective compared with three conventional enhancement methods: histogram equalization [1], gamma correction [25] and log transformation [1]. In another study, Tan et al. [26] proposed a Background Brightness Preserving Histogram Equalization (BBPHE) method based on non-linear histogram equalization. Using background and non-background level techniques, the original image was separated into three interval histograms: (1) low grey level, (2) medium grey level, and (3) high grey level. The objective of this method is to enhance object contrast while maintaining the background brightness. Similarly, Moniruzzaman et al. [27] proposed a modification of Brightness Preserving Bi-Histogram Equalization (BPBHE) using edge pixel data. To demonstrate its effectiveness, the Average Mean Brightness Error (AMBE) was calculated and the result was presented in Table 1; the lowest AMBE indicates the best brightness preservation and hence the best-performing technique. Hashemi et al. [28] proposed a novel enhancement method based on a Genetic Algorithm using a simple chromosome structure and corresponding operators; the method was tested on images with a low dynamic range. In 2013, Chaudhary and Patil [29] also suggested a simple method based on a Genetic Algorithm. The advantages of both methods are fast processing time, efficiency, and high-quality output images. They also produced a comprehensive comparison between BBHE, DSIHE, MMBBHE, MPHE, and RMSHE, with the analyses based on PSNR and contrast ratio. In another study, Shome et al. [30] examined a method using Contrast Limited Adaptive Histogram Equalization (CLAHE) to normalize the contrast variation in retinal images. CLAHE is an adaptive extension of histogram equalization followed by thresholding (clipping), which helps dynamically preserve the local contrast features of an image. This proposed method used a non-mean-based approach to improve the quality of diabetic retinopathy (DR) images while preserving the sharpness and the minute details; it also increased the local contrast. This view was further supported by Sundaram et al. [31], who suggested a slightly modified CLAHE technique to enhance mammogram images (a baseline CLAHE usage sketch is given below).
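To make the CLAHE-based approaches above concrete, here is a small sketch using OpenCV's CLAHE implementation together with the AMBE measure mentioned above; the file name and parameter values (clip limit, tile grid) are illustrative assumptions, not settings taken from the cited studies.

```python
import cv2
import numpy as np

def ambe(original: np.ndarray, enhanced: np.ndarray) -> float:
    """AMBE: absolute difference between the mean brightness of the input
    and output images (lower means better brightness preservation)."""
    return abs(float(original.mean()) - float(enhanced.mean()))

# Hypothetical input file; any 8-bit greyscale image works here.
gray = cv2.imread("retina.png", cv2.IMREAD_GRAYSCALE)

# CLAHE: local histogram equalization on tiles, with a clip limit that
# bounds contrast amplification (and hence noise) within each tile.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

print("AMBE, global HE:", ambe(gray, cv2.equalizeHist(gray)))
print("AMBE, CLAHE    :", ambe(gray, enhanced))
```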
Sundaram et al.'s modified method is known as Histogram Modified Contrast Limited Adaptive Histogram Equalization (HM-CLAHE). An optimization technique is used to adjust the level of strong contrast and the local details for more relevant interpretation. Based on the Enhancement Measure (EME) results, this method performed better than HE, unsharp masking (USM), and CLAHE. Tan et al. [26] introduced a modification of HE based on a non-linear technique. The aim is to enhance image contrast while preserving the background brightness for images with a well-defined background brightness. The original image is divided into three sub-images by the proposed algorithm, and only the problem region is normalized. A correction process based on a sub-image technique was also supported by Shanmugavadivu and Balasubramanian [32], who proposed a new method called Thresholded and Optimized Histogram Equalization (TOHE), in which the histogram is divided using Otsu thresholding. Based on the reported performance, this approach is more successful than HE, BBHE, and Range-Limited Bi-Histogram Equalization (RLBHE). Figure 2 shows the comparison between TOHE and several histogram-based methods; the circles in Figure 2 highlight the improvements and drawbacks of each method. Lin et al. [33] suggested a new HE method for colour images known as Averaging Histogram Equalization (AVHEQ). This technique also separates the original image into sub-images that are equalized independently, and a new mathematical algorithm was proposed to determine the optimal averaging threshold. The results are better than those of conventional methods such as BBHE, DSIHE, and BHEPL. In summary, the Histogram Equalization (HE) technique is popular because it is easy to implement and fast to process [34], [23], [35]. However, it has several drawbacks: it can add noise to the output image, increase the contrast of the background, and distort the signal [15]. HE may also produce over-enhancement and saturation artefacts due to the stretching of the grey levels over the full grey-level range. In addition, many variants of HE are global techniques, and such global processing techniques have been found to be insufficient to overcome variations due to illumination changes [36]; variation under non-uniform illumination remains difficult to deal with using global processing [37], [38]. Conclusion One major area of digital image processing is image enhancement. The main objective of image enhancement is to improve the quality of images, emphasize wanted features, and make them less obscured. This study has provided an overview of the background and related work in the area of image enhancement using HE. The HE technique is simple and easy to apply, and many modifications of HE have recently been presented in search of the best normalization technique. In the future, research on the mathematical algorithms underlying HE should be explored further.
The Dark Side of Seizure: Case Series of a T2 Dark-Through Pattern of Peri-ictal Diffusion Restriction Analogous to T2 shine-through, T2 hypointensity can "dark-through" on diffusion weighted imaging (DWI) and mask diffusion restriction. In such cases, diffusion restriction is evident only by apparent diffusion coefficient (ADC) hypointensity, which is often subtle and easily missed without corresponding DWI hyperintensity. Because diffusion restriction may be present in the setting of seizure, avoiding this pitfall can aid in seizure diagnosis for patients without any other magnetic resonance imaging (MRI) findings. We present a case series of nine patients in the peri-ictal period with T2 dark-through on MRI to raise awareness of this finding and its potential clinical role. Introduction T2 dark-through describes diffusion restriction in the setting of T2 hypointensity. Just as T2 hyperintensity can "shine through" on diffusion weighted imaging (DWI), T2 hypointensity can "dark through" on DWI. The DWI signal is influenced by multiple factors including the T2 signal, as demonstrated by the relation DWI signal intensity ≈ k × PD × exp(−TE/T2) × exp(−b × ADC), where k is a constant, PD is the proton density, TE is the echo time, T2 is the transverse relaxation time, b is the b-value, and ADC is the apparent diffusion coefficient [1]. Understanding T2 dark-through is essential for detecting diffusion restriction in T2-hypointense processes, where subtle apparent diffusion coefficient (ADC) hypointensity may be missed without corresponding DWI hyperintensity. Cases of Hyperglycemia A 6-year-old male presented with 3 episodes of focal right-sided shaking and severe hyperglycemia (414 mg/dL). MRI showed left parietal T2 dark-through (Figure 1). The patient improved following treatment of the hyperglycemia. A 52-year-old female presented with episodic confusion in the setting of hyperglycemia; MRI showed T2 dark-through (Figure 2). Electroencephalogram (EEG) showed right intermittent rhythmic delta activity and background slowing. The patient improved following treatment of the hyperglycemia. A 56-year-old male presented with episodic left lower quadrant visual changes 50 times per day and severe hyperglycemia (451 mg/dL). Initial MRI showed right parietal T2 dark-through (Figure 3). The patient improved following treatment of the hyperglycemia, and the findings were less prominent on 3-month follow-up MRI. Case of Venous Thrombosis A 32-year-old male presented with generalized tonic-clonic convulsions lasting 1 minute and superior sagittal sinus thrombosis. MRI showed left parietal T2 dark-through, venous thrombosis, and leptomeningeal enhancement, the latter thought to be related to venous stasis (Figure 4). The patient was diagnosed with protein C deficiency and improved following anticoagulation. Case of Hyperglycemia and Venous Thrombosis A 62-year-old female presented with left upper extremity coordination difficulties over the course of 1 week with 2 episodes of shaking lasting 2-3 minutes, severe hyperglycemia (505 mg/dL), and right transverse sinus thrombosis. Initial MRI showed right parietal T2 dark-through, venous thrombosis, and leptomeningeal enhancement, the latter thought to be related to venous stasis (Figure 5). The patient improved following treatment of the hyperglycemia and anticoagulation; the T2 dark-through had resolved on 1-month follow-up MRI. A further case, illustrated in Figure 6, was lost to follow-up. Cases of Meningitis A 16-year-old male presented with left upper extremity and left facial twitching for 2 minutes in the setting of meningitis. MRI demonstrated right frontal T2 dark-through, sulcal FLAIR hyperintensity, and leptomeningeal enhancement (Figure 7).
The patient was lost to follow-up. Case of Sturge-Weber A 2-year-old male with a history of Sturge-Weber presented with seizure of unspecified semiology. MRI demonstrated right cerebral T2 dark-through and stigmata of Sturge-Weber including an asymmetrically smaller right cerebral hemisphere with pial angiomatosis (Figure 8). CT showed subcortical calcification only in the posterior right cerebral hemisphere, despite diffuse T2 dark-through throughout the right cerebral hemisphere. The patient underwent right hemispherectomy due to uncontrolled seizures. Case of Subdural Hematoma A 78-year-old female presented with one episode of right upper extremity tonic-clonic movement and intermittent right-sided weakness lasting 1-5 minutes followed by sleepiness in the setting of subdural hemorrhage. MRI showed left frontal T2 dark-through (Figure 9). The patient improved following hematoma evacuation and seizure management. Discussion T2 dark-through is an under-recognized pattern of diffusion restriction where T2 hypointensity "darks through" on DWI and masks signal changes. When this happens, restricted diffusion can only be identified by ADC hypointensity, which may be subtle and easily missed. We present a case series of peri-ictal T2 dark-through, including the first documented cases in the setting of venous thrombosis and subdural hemorrhage, to raise awareness of this finding and its potential clinical role. The T2 hypointensity component of T2 dark-through has been attributed to the susceptibility effects of paramagnetic substances like deoxyhemoglobin and free radicals [1]. These materials can accumulate from (1) oxygen demand exceeding oxygen supply despite compensatory increased blood flow, such as in the setting of seizure and superimposed stressors, or (2) insufficient venous drainage, such as in Sturge-Weber, venous thrombosis, or mass effect from adjacent subdural hemorrhage. In the case of Sturge-Weber, axonal hypermyelination and other structural white matter abnormalities have also been proposed as mechanisms for T2 hypointensity [6]. The ADC hypointensity component of T2 dark-through may be related to a combination of hypoxia and excitatory neurotransmitters leading to intracellular edema. In the setting of hypoxia, sodium/potassium-ATPase pump failure results in an intracellular shift of sodium and water [11]. Excitatory neurotransmitters released during seizure activate ion-channel-coupled receptors, which allow the influx of sodium and water and can activate the cell death cascade [12,13]. Although the precise combination and role of these mechanisms are still unclear, it is believed that intracellular edema stemming from hypoxia and over-excitation in the peri-ictal state may be responsible for diffusion restriction. Patients who received follow-up imaging did not have evidence of encephalomalacia. Plausible explanations for this include: (1) T2 dark-through is reversible if the underlying seizure and stressors are treated promptly; (2) T2 dark-through may cause mild tissue injury evident only on histology; (3) our follow-up imaging did not allow sufficient time for the full extent of the T2 dark-through event to manifest. Although imaging was obtained within 24 hours of seizure, the precise time-course is unknown. It is possible many cases of T2 dark-through are resolved by the time patients are scanned. Prompt imaging in the acute setting and long-term follow-up imaging may increase the sensitivity for T2 dark-through and more reliably define its time course and reversibility.
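As a purely illustrative numerical sketch of the signal relationship given in the Introduction (all parameter values are assumed for illustration and are not patient-derived), the following shows how a short T2 can keep the DWI image dark even when diffusion is restricted, while the calculated ADC remains low:

```python
import numpy as np

# Assumed, illustrative acquisition and tissue parameters.
TE = 0.1          # echo time (s)
b = 1000.0        # b-value (s/mm^2)
adc = 0.5e-3      # restricted diffusion (mm^2/s), identical for both tissues
k, pd = 1.0, 1.0  # scanner constant and proton density (arbitrary units)

for label, t2 in [("normal T2 (0.10 s)", 0.10), ("short T2 (0.04 s, dark-through)", 0.04)]:
    s_b0 = k * pd * np.exp(-TE / t2)       # b = 0 (T2-weighted) signal
    s_dwi = s_b0 * np.exp(-b * adc)        # DWI signal, further attenuated by diffusion
    adc_calc = np.log(s_b0 / s_dwi) / b    # the ADC map is independent of the T2 term
    print(f"{label}: DWI signal = {s_dwi:.3f}, "
          f"calculated ADC = {adc_calc * 1e3:.2f} x10^-3 mm^2/s")
```

In the short-T2 case the DWI value stays low despite restricted diffusion, so only the correspondingly low ADC value reveals the abnormality.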
Although there is a correlation between T2 dark-through and seizure, a relationship of causality is yet to be determined. Regardless, recognizing T2 dark-through may have important implications for seizure localization and managing seizures with potentially reversible underlying etiologies. Our case series is limited by retrospective review of data, small sample size, and qualitative image analysis. Future prospective studies with a larger sample size and quantitative T2 dark-through analysis can determine the ADC and T2 thresholds for certain clinical outcomes, and better characterize the patient population who will exhibit T2 dark-through and benefit from a more comprehensive workup for comorbidities. Conclusion T2 dark-through is an under-recognized presentation of diffusion restriction occurring in the peri-ictal period. This finding should prompt the radiologist to look for superimposed stressors such as metabolic and vascular abnormalities. Although there is a correlation between T2 dark-through and seizure, a relationship of causality is yet to be determined. With further study, T2 dark-through may have a clinical role in identifying and managing seizure.
Differentiation of malignant from benign pleural effusions based on artificial intelligence Introduction This study aimed to construct artificial intelligence models based on thoracic CT images to perform segmentation and classification of benign pleural effusion (BPE) and malignant pleural effusion (MPE). Methods A total of 918 patients with pleural effusion were initially included, with 607 randomly selected cases used as the training cohort and the other 311 as the internal testing cohort; another independent external testing cohort with 362 cases was used. We developed a pleural effusion segmentation model (M1) by combining 3D spatially weighted U-Net with 2D classical U-Net. Then, a classification model (M2) was built to identify BPE and MPE using a CT volume and its 3D pleural effusion mask as inputs. Results The average Dice similarity coefficient, Jaccard coefficient, precision, sensitivity, Hausdorff distance 95% (HD95) and average surface distance indicators in M1 were 87.6±5.0%, 82.2±6.2%, 99.0±1.0%, 83.0±6.6%, 6.9±3.8 and 1.6±1.1, respectively, which were better than those of the 3D U-Net and 3D spatially weighted U-Net. Regarding M2, the area under the receiver operating characteristic curve, sensitivity and specificity obtained with volume concat masks as input were 0.842 (95% CI 0.801 to 0.878), 89.4% (95% CI 84.4% to 93.2%) and 65.1% (95% CI 57.3% to 72.3%) in the external testing cohort. These performance metrics were significantly improved compared with those for the other input patterns. Conclusions We applied a deep learning model to the segmentation of pleural effusions, and the model showed encouraging performance in the differential diagnosis of BPE and MPE. WHAT IS ALREADY KNOWN ON THIS TOPIC ⇒ The limitations of the gold standard in the diagnosis of benign pleural effusion (BPE) and malignant pleural effusion (MPE) suggest opportunities for more convenient, highly sensitive and non-invasive methods to improve diagnostic performance. ⇒ Although many previous studies have explored other examinations to help diagnose pleural effusion, no available studies have focused on the differential diagnosis of pleural effusion based on thoracic CT image analysis using deep learning algorithms. WHAT THIS STUDY ADDS ⇒ The artificial intelligence (AI) model proposed in this study showed encouraging performance in the segmentation of pleural effusion areas and differential diagnosis of BPE and MPE based on thoracic CT images. HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY ⇒ In the present study, we proposed an AI model to implement the segmentation of pleural effusion regions. It would be worthwhile to apply these segmentation and classification deep learning models to other sorts of effusions. INTRODUCTION Effusions, including pleural effusions, ascites, pericardial effusions and abscesses, are commonly observed in many diseases, such as infections and various cancers. The most common effusions are malignant pleural effusions (MPEs) caused by lung cancer, breast cancer, lymphoma and so on, and benign pleural effusions (BPEs) caused by Mycobacterium tuberculosis infection, heart failure, parapneumonic infections and so on. [1][2][3] The most common conditions leading to ascites are liver disease, cirrhosis and cancer. 4 Because pleural effusions are representative effusions, we chose MPE and BPE as our study objects. The gold standard in the diagnosis of MPE and BPE depends on pleural effusion pathogenic/cytological examinations and thoracentesis with pleural biopsy. 5 6 However, the low positivity rates for pathogenic diagnosis, the invasiveness and high costs of pleural biopsy, and the risk of complications represent the limitations of these gold-standard techniques, although their high specificity is their most important advantage. 7 8 These limitations suggest opportunities for more convenient, highly sensitive and non-invasive methods to improve the diagnostic performance of BPE and MPE. Thoracic CT is an appropriate method for the further assessment of pleural effusion. 9 Because the features extracted from images by radiologists are limited, artificial intelligence (AI) deep learning algorithms are helpful tools for automatically analysing complex medical images thanks to their strong feature-learning ability. 10
However, no available studies have focused on the differential diagnosis of pleural effusion based on thoracic CT image analysis using deep learning algorithms. U-Net, a convolutional neural network, has become an increasingly important basis for many deep learning models in medical image analysis. 11 It can achieve remarkable generalisation performance when trained with a limited number of images, which makes it especially suitable for our research. 12 In previous studies, U-Net was used for the segmentation of solid organs and lesion regions, such as pancreas segmentation, 13 3D cardiac segmentation 14 and automatic ground-glass nodule detection. 15 In our study, we applied U-Net to the segmentation of effusion regions, and specifically pleural effusions. We combined the 3D spatially weighted U-Net with the 2D classical U-Net for pleural effusion segmentation in thoracic CT images to obtain fine masks. The high precision of pleural effusion segmentation identifies predictive features which can subsequently be used to train deep learning models for lesion classification. We thus proposed a deep learning algorithm, based on the global and partial analysis of thoracic CT image features, to diagnose BPE and MPE, which can potentially play a critical role in improving patients' clinical prognosis. MATERIALS AND METHODS Patients and study design In this consecutive study, 918 pleural effusion cases retrospectively collected from Wuhan Union Hospital between January 2016 and December 2021 were enrolled, with 311 cases randomly selected as the internal testing cohort and the other 607 as the training cohort. Another independent cohort including 362 patients with pleural effusion collected from Renmin Hospital of Wuhan University between January 2020 and May 2022 was used as the external testing cohort. Patients who met the following inclusion criteria were enrolled: (1) diagnosed with pleural effusion by CT scan of the chest and (2) underwent pleural effusion pathogenic/cytological examinations and diagnostic thoracentesis with or without pleural biopsy. The exclusion criteria were (1) pleural effusion whose cause could not be determined, (2) age under 18 years and (3) unavailable clinical information. The diagnostic criteria for MPE and BPE adopted in this study are based on our previous studies, 16 17 and the criteria are further described in the online supplemental methods. Two professional physicians, HX and WC, collected clinical information, including demographic characteristics, radiological features and laboratory testing results of the enrolled patients, from electronic medical records.
The volume of pleural effusion was classified as mild (<500 mL), moderate (500-1000 mL) or severe (>1000 mL). The parameters of the CT scanner are presented in online supplemental methods. Architecture of the pleural effusion segmentation model (M1) The pleural effusion segmentation model (M1) is a cascaded two-step deep-learning model ( figure 1A). The initial coarse segmentation results are obtained from the spatial attention information based on the 3D spatially weighted U-Net. We used a 3D spatial attention mechanism to capture large-scale contextual information, thus enhancing the representative ability of the model ( figure 1C). However, owing to the huge amount of information per sample for 3D U-Net, the region of interest needs to be cropped into small image patches to be used as inputs. Using this method, natural contour information may be lost to some degree. Therefore, the model's learning of natural contour information was enhanced through the 2D classical U-Net. The concatenation of 3D spatially weighted U-Net and 2D classical U-Net helps obtain a fine segmentation of pleural effusion. Details about the algorithms of the coarse and fine segmentation models are shown in online supplemental methods. Architecture of the pleural effusion classification model (M2) We developed a 3D deep convolutional neural network to identify patients with MPE from thoracic CT volumes. As shown in figure 1B, this classification model (M2) uses CT volume and its 3D pleural effusion masks as inputs (details in online supplemental methods). The 3D pleural effusion fine masks are obtained by assembling 2D fine masks generated by the fine segmentation model. This component takes advantage of both stacked bottleneck blocks and squeeze-and-excitation (SE) blocks. The bottleneck block is introduced to extract deeper features from the CT volumes and to solve the problem of degradation in the network training process. The SE block is introduced to improve the representational power of the network by enabling it to perform dynamic channel-wise feature recalibration (figure 1D; online supplemental methods). Inputting the 3D fine masks of pleural effusion according to the 3D thoracic CT volume helps reduce the effects of background information and improves the classification of BPE and MPE. Details about the training process of the pleural effusion segmentation and classification model are shown in online supplemental methods. Quantitative assessment indicators For the pleural effusion segmentation model, the Dice similarity coefficient (DSC) and Jaccard coefficient were used to evaluate the spatial overlap between the model-generated contour (M) and the ground truth contour (G). In our study, G means the sets defined by these boundaries of pleural effusion area drawn by a professional radiologist (15 years of experience) on CT images; M means the sets defined by these boundaries of pleural effusion area generated by the AI model. Precision and sensitivity measure the detection capability for identifying the correct regions. The Hausdorff distance 95% (HD95) and average surface distance (ASD) measure the boundary similarity between the model-generated contour and the ground truth contour. Details about the above indicators are described in online supplemental methods. The area under the receiver operator characteristic (ROC) curve (AUC), sensitivity and specificity were used to evaluate the predictive performance of M2 as a pleural effusion classification model. 
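For concreteness, a minimal NumPy sketch of the voxel-overlap indicators described above is given here (the boundary measures HD95 and ASD are omitted, and the function and array names are illustrative rather than the authors' implementation):

```python
import numpy as np

def overlap_indicators(pred_mask: np.ndarray, gt_mask: np.ndarray) -> dict:
    """Overlap between the model-generated mask M and the ground-truth mask G,
    both boolean arrays of identical shape (e.g., one 3D CT volume)."""
    m, g = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(m, g).sum()
    union = np.logical_or(m, g).sum()
    return {
        "DSC": 2.0 * intersection / (m.sum() + g.sum()),   # Dice similarity coefficient
        "Jaccard": intersection / union,
        "precision": intersection / m.sum(),                # share of predicted voxels that are correct
        "sensitivity": intersection / g.sum(),              # share of true voxels that are recovered
    }
```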
Statistical analysis Implementation details are described thoroughly in the online supplemental methods. Statistical analyses were performed using SPSS Statistics (V.22). Comparisons were performed using the Mann-Whitney U test for continuous variables and χ2 or Fisher's exact test for categorical variables, as appropriate. ROC curves were generated to evaluate the classification performance. Statistical significance was defined as a two-sided p value <0.05. Baseline characteristics of patients The baseline characteristics of the enrolled patients are presented in table 1. We compared the distribution of sex, age, volume of pleural effusion (mild/moderate/severe) and unilateral/bilateral pleural effusion between the two groups (BPE vs MPE). There was no significant difference between the two groups in any of the three cohorts in terms of age and volume of pleural effusion. However, the distribution of gender in patients with MPE and BPE was significantly distinct (p<0.001) in all three cohorts. Lung cancer was the leading cause of MPE (24.2% in the training cohort, 24.7% in the internal testing cohort, 40.3% in the external testing cohort), while parapneumonic infection was the leading cause of BPE (18.3% in the training cohort, 19.3% in the internal testing cohort, 24.3% in the external testing cohort; table 2). 3D spatially weighted attention mechanism in coarse segmentation for pleural effusion area discovery To highlight the importance of the attention mechanism inserted in 3D U-Net, figure 2 depicts the pleural effusion areas delineated by 3D U-Net and 3D spatially weighted U-Net, respectively, and displays heatmaps to indicate the importance of each part of the pleural effusion areas. The cut-off value used to acquire the high-response area was 0.5. It can be clearly observed that the high-response areas were mainly concentrated at the pleural effusion boundary when using 3D spatially weighted U-Net, while they were gathered in the inner part of the pleural effusion when using 3D U-Net, showing that the attention mechanism significantly improved the accuracy of the pleural effusion area segmentation. Compared with 3D U-Net segmentation, the results of 3D spatially weighted U-Net segmentation better fit the ground truth. Comparison among 3D U-Net, 3D spatially weighted U-Net and segmentation deep learning model (M1) For visual demonstration, representative pleural effusion area segmentation results of M1 (two-step method: 3D spatially weighted U-Net and 2D classical U-Net) are compared with the results of 3D spatially weighted U-Net (one-step method). Figure 3A1-C1 show an example of the radiologist's ground truth contours at three different CT slices, with the outline of contours shown by red lines. Figure 3A2-C2 show the contours (yellow lines) obtained using only 3D spatially weighted U-Net, while figure 3A3-C3 show the contours (blue lines) obtained using M1. For 3D illustration, figure 3D1-D3 show the 3D views of the pleural effusion area delineated by the radiologist, the one-step method and the two-step method, respectively. Compared with the one-step method, the segmentation results of the two-step method better fit the ground truth. For quantitative assessment of the segmentation results, six indicators were used to evaluate the similarity, difference and segmentation performance of 3D U-Net, 3D spatially weighted U-Net and M1. The average DSC, Jaccard coefficient, precision, sensitivity, HD95 and ASD indicators for M1 were 87.6%, 82.2%, 99.0%, 83.0%, 6.9 and 1.6, respectively.
(Figure 2 caption: Pleural effusion area discovery. The cut-off value used to acquire the high-response area in the heatmaps was 0.5. The second and third rows show the heatmaps of 3D U-Net and the 3D spatially weighted U-Net, respectively. The fourth and fifth rows show the pleural effusion area discovered by 3D U-Net and the 3D spatially weighted U-Net, respectively.) (Figure 3 caption: Comparison of the segmentation results of a patient using the ground truth contour, 3D spatially weighted U-Net (one-step method) and M1 (two-step method: 3D spatially weighted U-Net and 2D classical U-Net) in 2D and 3D views. The first column shows the images in three different CT slices (A-C) and the 3D (D) view of a patient's thoracic CT scan. The second column (a1, b1, c1 and d1) shows the corresponding experts' ground truth contour (pleural effusion outline highlighted in red). The third column (a2, b2, c2 and d2) shows the pleural effusion segmentation results (outline highlighted in yellow) using 3D spatially weighted U-Net. The fourth column (a3, b3, c3 and d3) shows the pleural effusion segmentation results (outline highlighted in blue) using M1.) Diagnostic validation of the classification deep learning model (M2) The proposed M2 model with volume concat mask as input consistently achieved the highest accuracy across the internal and external testing cohorts (figure 4A,B). In addition, the classification score indicated a notable distinction between BPE and MPE with different input patterns in the internal and external testing cohorts (all p<0.001). The input with volume concat mask showed the most significant distinction between BPE and MPE in both the internal and external testing cohorts, as revealed by the violin plots (figure 4C,D). AUC, sensitivity and specificity were used as the main indicators for evaluating the diagnostic performance of M2. The three indicators for input with volume concat mask were 0.883 (95% CI 0.841 to 0.916), 78.4% (95% CI 71.6% to 84.2%) and 86.2% (95% CI 79.0% to 91.6%) in the internal testing cohort, and 0.842 (95% CI 0.801 to 0.878), 89.4% (95% CI 84.4% to 93.2%) and 65.1% (95% CI 57.3% to 72.3%) in the external testing cohort, which were significantly improved compared with those for input with only volume and input with volume multiply mask (table 4). The similar AUC values of the internal and external testing cohorts suggested an encouraging level of generalisability of M2 for diagnosing BPE and MPE in new patients. The input with the volume concat mask significantly improved the classification performance of M2, while, notably, the decrease in the speed of the network compared with the other two input patterns was negligible. Comparison of the heatmaps between typical MPE and BPE Comparison of the activation heatmaps generated by M2 between two randomly selected patients with MPE (one with lung cancer, one with breast cancer) and two randomly selected patients with BPE (one with tuberculous pleuritis, one with heart failure) is shown in figure 5. The activation heatmaps indicated the importance of different parts of the pleural effusion regions and suggested that different areas drew the attention of M2 to various degrees. The important areas found by M2, which were considered closely associated with the nature of the pleural effusion (BPE or MPE), varied between patients. The difference in features between high-importance pleural effusion areas and other pleural effusion areas requires further research.
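As a brief aside on how such diagnostic indicators are typically computed from a model's output scores (a scikit-learn-based sketch; the 0.5 threshold is an assumption rather than the operating point used in the study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def diagnostic_indicators(y_true, scores, threshold=0.5):
    """AUC, sensitivity and specificity for a binary MPE-vs-BPE classifier.
    y_true: 1 for MPE, 0 for BPE; scores: predicted probability of MPE."""
    y_true = np.asarray(y_true)
    pred = (np.asarray(scores) >= threshold).astype(int)
    tp = int(np.sum((pred == 1) & (y_true == 1)))
    tn = int(np.sum((pred == 0) & (y_true == 0)))
    fp = int(np.sum((pred == 1) & (y_true == 0)))
    fn = int(np.sum((pred == 0) & (y_true == 1)))
    return {
        "AUC": roc_auc_score(y_true, scores),
        "sensitivity": tp / (tp + fn),   # true-positive rate among MPE cases
        "specificity": tn / (tn + fp),   # true-negative rate among BPE cases
    }
```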
DISCUSSION In this study, we proposed a new architecture for the differential diagnosis of BPE and MPE based on pleural effusion segmentation of thoracic CT images. This deep learning architecture was trained using 607 CT images, and its performance was validated in an internal testing cohort (311 pleural effusion cases) and an external testing cohort (362 pleural effusion cases) from Wuhan Union Hospital and Renmin Hospital of Wuhan University. The encouraging diagnostic performance of the deep learning model was shown in both the internal (AUC 0.883, 95% CI 0.841 to 0.916) and external (AUC 0.842, 95% CI 0.801 to 0.878) testing cohorts. In addition, we combined this AI model with some clinical data, including gender, age, unilateral/bilateral pleural effusion and volume of pleural effusion, to predict BPE and MPE. The results showed that combining clinical indicators could improve the AUC in all three cohort (training cohort: 0.903 vs 0.896, internal testing cohort: 0.895 vs 0.882, external testing cohort: 0.868 vs 0.842) (online supplemental figure S1). This deep learning model discovered suspect pleural effusion areas and produced fine segmentations in the first step, then identified BPE and MPE by holistically and partially analysing thoracic CT image features, revealing that the features of thoracic CT images were closely related to the nature of pleural effusions. Our study provides an alternative, easy-to-use method to achieve non-invasive and efficient diagnosis of BPE and MPE from original CT images without human assistance. Previous studies have demonstrated that thoracic CT image features, such as fluid loculation, pleural lesions, pleural nodules and extrapleural fat, can help discriminate MPEs from BPEs. 18 19 Pleural nodules and nodular pleural thickening were reported to be associated to MPE, while circumferential pleural thickening was more common in tuberculous pleural effusion (TPE). 18 20 Zhang et al revealed that spectral CT imaging features combined with patient age and disease history could differentiate BPEs from MPEs with a sensitivity of 100% and a specificity of 71.4%, as well as an AUC of 0.933. 21 Pleural effusions can be divided into transudates and exudates. Although no CT feature can accurately distinguish transudates from exudates, Abramowitz et al indicated that fluid loculation and pleural thickening were more common in exudates than in transudates. 22 Discrimination of a pleural effusion as transudate or exudate is important for further evaluation and treatment. Some causes of BPE, such as heart failure and cirrhosis, generate transudates. However, some causes of BPE, such as infections and pulmonary embolism, generate exudates, as does MPE. 3 23 The immunological microenvironment and inflammatory responses in MPE are two important factors that lead to the production of different components. In addition to neoplastic cells, cytokines and chemokines produced by immune cells, signalling molecules generated by tumour-associated macrophages, and fibroblasts are the main components of the surviving environment of tumour cells in pleural effusions. 24 In early-stage TPE, lymphocyte predominance characterises a large proportion of the fluid; in the meantime, a higher mycobacterial burden appears in effusions that have loculations. 25 Li et al identified different peptide profiles between BPE and MPE through proteomic analysis and established a model to discriminate between BPE and MPE. 
26 The different pleural effusion components of BPE and MPE may be a crucial cause of the different thoracic CT image features, making it feasible to classify BPE and MPE using a deep learning model based on thoracic CT image features. In the present study, we proposed an AI model to implement the segmentation of pleural effusion regions. The deep learning model for segmentation proposed in our study successfully integrated 3D spatially weighted U-Net and 2D classical U-Net. Our results showed that the cascaded segmentation architecture combining 3D spatially weighted U-Net with 2D classical U-Net (M1) was superior to the other two segmentation methods (only 3D U-Net and only 3D spatially weighted U-Net). Applying the spatial attention mechanism to 3D U-Net not only focuses the deep learning model on the regions of interest in the input thoracic CT images, avoiding the interference of background information, but also extracts both shallow-level and deep-level attention information, which can improve the feature extraction ability of the model. 15 27 However, in order to reduce the cubically growing number of network parameters caused by 3D convolution, using patches (cropping the region of interest into small image patches) as input may lose some natural contour information. A 3D spatially weighted U-Net cascaded with a 2D classical U-Net is conducive to supplementing natural contour information and excluding most erroneous information about the pleural effusion region. In addition, in the deep learning model for pleural effusion classification, we input the holistic thoracic CT image and the fine segmentation region of pleural effusion generated by M1 at the same time. On the one hand, this approach stresses the features within the pleural effusion areas; on the other hand, it does not neglect the related information in the areas outside the pleural effusion. It has been reported that the primary tumour cannot be found in approximately 10% of MPEs. 24 Therefore, it is of vital importance to identify MPEs of unknown origin in a timely and non-invasive manner. It would be worthwhile to apply these segmentation and classification deep learning models to other sorts of effusions. Since the causes of ascites and abscess vary depending on the type of tumour and pathogen infection, a single AI model able to determine which type of cancer or bacteria is the reason for effusion production would represent remarkable progress. Further research and efforts are required to achieve this goal. (Table note: Data are presented as % (95% CI). AUC, area under the receiver operating characteristic curve.) (Figure 5 caption: Comparison of the activation heatmaps generated by the pleural effusion classification model between four randomly selected patients with lung cancer, breast cancer, tuberculous pleuritis and heart failure, respectively.) Although the proposed deep learning model of pleural effusion segmentation and classification showed encouraging performance, our study has several limitations. First, the data were derived from only two hospitals, which may limit the generalisability and robustness of the deep learning model. Second, high model interpretability of deep learning networks is considered valuable, 28 but the association between the imaging representations and the nature of pleural effusions cannot be fully understood in our study because of the end-to-end learning strategy.
Third, despite the advantages of the proposed model, which uses exclusively thoracic CT images, in terms of convenience and time-saving, the predictive performance may be improved by combining this model with other clinical models; however, this point was not clarified in this study. Future large-scale external validations from multiple centres are necessary to provide convincing evidence of the generalisability of the deep learning model proposed in this study. In conclusion, our research proposed an original deep learning model: a combination of 3D spatially weighted U-Net and 2D classical U-Net were used for the segmentation of pleural effusion. Subsequently, a deep learning model was established for the differential diagnosis of BPE and MPE based on thoracic CT images with masks. The non-invasiveness and high efficiency of the segmentation and classification models suggest their potential clinical utility. Our work shows the potential of AI to assist radiologists in identifying malignant disease and thereby improving patient care.
Using electronic health records and Internet search information for accurate influenza forecasting Background Accurate influenza activity forecasting helps public health officials prepare and allocate resources for unusual influenza activity. Traditional flu surveillance systems, such as the Centers for Disease Control and Prevention’s (CDC) influenza-like illnesses reports, lag behind real-time by one to 2 weeks, whereas information contained in cloud-based electronic health records (EHR) and in Internet users’ search activity is typically available in near real-time. We present a method that combines the information from these two data sources with historical flu activity to produce national flu forecasts for the United States up to 4 weeks ahead of the publication of CDC’s flu reports. Methods We extend a method originally designed to track flu using Google searches, named ARGO, to combine information from EHR and Internet searches with historical flu activities. Our regularized multivariate regression model dynamically selects the most appropriate variables for flu prediction every week. The model is assessed for the flu seasons within the time period 2013–2016 using multiple metrics including root mean squared error (RMSE). Results Our method reduces the RMSE of the publicly available alternative (Healthmap flutrends) method by 33, 20, 17 and 21%, for the four time horizons: real-time, one, two, and 3 weeks ahead, respectively. Such accuracy improvements are statistically significant at the 5% level. Our real-time estimates correctly identified the peak timing and magnitude of the studied flu seasons. Conclusions Our method significantly reduces the prediction error when compared to historical publicly available Internet-based prediction systems, demonstrating that: (1) the method to combine data sources is as important as data quality; (2) effectively extracting information from a cloud-based EHR and Internet search activity leads to accurate forecast of flu. Electronic supplementary material The online version of this article (doi:10.1186/s12879-017-2424-7) contains supplementary material, which is available to authorized users. Background Influenza causes about 500,000 death per year worldwide and about 3000 to 50,000 per year in the United States (US) [1]. Accurate and reliable forecasting of influenza incidence can help public health officials and decision makers prepare for unusual influenza activity, including promoting timely vaccine campaigns, improving risk assessment and communication, and improving hospital resource allocation during influenza (flu) outbreaks [2]. Traditional flu surveillance tracks flu activity through patients' clinical visits; in the US the Centers for Disease Control and Prevention (CDC)'s influenza-like illness (ILI) reports track the percentage of patients seeking medical attention with ILI symptoms. ILI symptoms are defined by the CDC as having temperature of 100°F (37.8°C) or greater and a cough and/or a sore throat without a known cause other than influenza [3]. Owing to the time needed for processing and aggregating clinical information, CDC's ILI reports lag behind real time by one to 2 weeks, which is far from optimal for decision making. Technological advances in the last two decades have changed the way in which health information is accessed, modified, and distributed. First, a large portion of the general public gains access to health information through Internet searches [4][5][6][7][8]. 
Second, many hospitals and medical centers have adopted electronic health records (EHR) to give clinicians faster and easier access to retrieve, enter and modify patient information. These sources of digital information offer the possibility for real-time flu surveillance and forecast, as previous studies have suggested [9][10][11][12][13][14][15][16][17][18]. However, it is the community consensus that further improvements are needed for these forecasting methods to be reliably used for policy making purpose [19,20]. Our paper presents one of such improvements. We study two questions in this article. (a) How much information can these digital sources provide? (b) Is there an efficient way to extract/combine information from these digital sources to produce accurate flu forecasts? Our contribution consists of rigorously adapting and expanding an existing statistical method to combine information from (i) near real-time aggregated patient visits via EHR and (ii) population wide flu-related Google searches with (iii) flu activity levels contained in CDC's historical ILI reports, to produce national flu forecasts for the US up to 4 weeks ahead of CDC's ILI reports. Our prediction target is the percentage of patients seeking medical attention with ILI symptoms as represented and reported by CDC's ILI activity level, an established public health surveillance tool to track flu activity [2,16,17,[21][22][23][24]. A collection of methods aimed at predicting the same target have emerged in response to the recent CDC-organized flu-prediction contest (https://predict.phiresearchlab.org/) and are documented, for example, in [19]. Some of the methodologies studying digital disease detection include for example, empirical Bayes framework [25], Susceptible-Exposed-Infected-Recovered (SEIR) epidemiological mechanistic model, SEIR-based models coupled with data-assimilation Kalman filters [24,[26][27][28], linear regression models with Twitter in addition to shortterm lagged ILI activity level [29], ensemble models with several data sources [30], SEIR models combined with Wikipedia-based nowcast [31], and Gaussian process on Google query logs combined with autoregressive moving average time series model on historical ILI activity level [8,18]. It is important to note that some of the aforementioned methods pursue different forecasting targets: for instance, [25] and [31] focused on the influenza season onset, peak and intensity in national level; [24,[26][27][28] aimed at predicting the number (or proxies) of labconfirmed influenza cases in multiple sub-regions and cities of the US; [30] predict ILI case counts for 15 Latin American countries. As a consequence, the predictive performance of our method and all of the aforementioned methods cannot be directly compared in this study. We primarily compare our forecasts with results in [11] since their historical flu estimates for the four time-horizons for the 2013-2016 time period studied here are publicly available. We also compare our results to other mathematical models and estimates produced in [18,29]. Our forecasts show a significant improvement in accuracy among the existing Internet-based prediction system targeting CDC's ILI activity level. Our method is named ARGO, which stands for AutoRegression with General Online data. It was previously proposed in [10] for the real-time estimate of flu activity level using flurelated Google search data alone. 
We extend the ARGO methodology to use information from both EHR data and flu-related Google search data for flu forecasting; furthermore, we extend it to produce flu forecasts up to 3 weeks ahead of current time, not only real-time estimate. The extended ARGO method dynamically selects the appropriate set of variables from both the EHR data and Google search data to produce accurate flu estimates for every time horizon of forecast, i.e., real-time, one, two, and 3 weeks ahead of current time, and automatically identifies which variables are important in the predictions in every week. We assess the accuracy of our forecasts using multiple metrics, including root mean squared error (RMSE), for the flu seasons from 2013 to 2016 based on the availability of data. For the retrospective time period of July 2013 to February 2015, ARGO reduces the RMSE of the best available method by 33, 20, 17 and 21%, for the four time horizons: real-time, one, two, and 3 weeks ahead, respectively. Moreover, such accuracy improvements are statistically significant at the 5% significance level. Our real-time estimates correctly identified the peak timing and magnitude of the three flu seasons. As a further validation, we conduct strict out-of-sample testing by applying ARGO to the 2015-2016 flu season (from February 2015 to July 2016), where ARGO reduces the RMSE of the best available method by 36,8,28, and 10%, respectively, for the four time horizons. Our result demonstrates: (1) the method used to combine information sources is equally as important as the quality of the information source; (2) effectively extracting and combining information from the EHR and Internet search activity leads to accurate forecasts of flu. We expect that our approach can be potentially extended to finer geographic regions and the forecasting of other infectious diseases. Study Design We used our method, ARGO, to produce retrospective forecasts of flu activity for the time period of July 6, 2013 through February 21, 2015 based on the availability of EHR data. The CDC's weekly ILI unweighted activity level is our prediction target. At every week of prediction we only used information that would have been available at that time. Data used in our prediction include the historical unrevised original CDC ILI reports, online flurelated search query volumes data from Google Trends, and EHR data obtained from athenahealth. At the ending Saturday of each week, we produced the estimate for the current weekly ILI activity as well as the forecasts 3 weeks into the future. We then compared our forecasts to the subsequently revealed ILI activity level as reported by CDC weeks later. We also compared the performance of ARGO with other available methods. To further assess our method and to reduce the possibility of overfitting, we used the ARGO method to produce flu forecasts for the 2015-2016 time period (February 28, 2015 to July 2, 2016). These forecasts provide strict out-of-sample validation since all the settings of our model are determined without ever touching the data from February 28, 2015 and onward. Data Collection We used the weekly revised unweighted ILI activity level published by CDC as our prediction target (gis.https:// gis.cdc.gov/grasp/fluview/fluportaldashboard.html; date of access: July 9, 2016). In a given week, the most recent CDC's ILI reports typically reflect the ILI activity of the previous week. These reports are often subsequently revised to reflect updates and consistency checks. 
The historical CDC reports and their revised versions, including the timing of their release can be found on CDC's website. For example, original ILI report for week 7 of season 2015-2016 is available at www.cdc.gov/flu/weekly/ weeklyarchives2015-2016/data/senAllregt07.html. Google publishes weekly search query volumes through Google Trends (www.google.com/trends) in real time. The Google Trends website provides weekly relative search volume of query terms specified by a user. Specifically, the number provided by Google Trends is that week's search volume of a particular search query term divided by the total online search volume of that week, normalized to integer values from 0 to 100, where 100 corresponds to the maximum weekly search within the time period of January 2004 to present. The query terms that we used were identified from Google Correlate (www.google.com/trends/correlate), which gives the top 100 most highly correlated search terms with a time series specified by a user. We identified 129 flu-related Google search terms in total (see Table S1 in the Additional file 1) by supplying Google Correlate with CDC's unweighted ILI activity level for two different time periods: (a) January 2004-March 2009 (prior to the H1N1 pandemic) and (b) March 2009-May 2010, and removing search terms unrelated to flu. We did not use ILI activity level after 2011 on Google Correlate to avoid using any forward-looking information in the selection of search terms. The EHR data that we used are from athenahealth, a provider of cloud-based services and mobile applications for medical groups and health systems (www.athenahealth.com). It covers over 78,000 healthcare providers nationwide. We used historical values of four nationally aggregated weekly counts: total patient visit counts, flu visit counts, ILI visit counts, and unspecified viral or ILI visit counts. These aggregated data of a given Sunday-to-Saturday week are typically available on the following Monday, implying that athenahealth's data are available at least 1 week ahead of the publication of CDC's ILI reports. The EHR data are available in real time starting from July 2009. Further details about the EHR data collected from athenahealth were described in Santillana et al. [12]. Statistical Formulation We combined online search volume data, EHR data, and historical flu information to produce flu forecasts for four time-horizons: real-time, one, two, and 3 weeks ahead. We rigorously expand ARGO for forecast by mathematically deriving the induced multivariate linear regression model based on the underlying assumptions of ARGO. Our independent variables included CDC's historical ILI values, flu related search volumes of 129 selected query terms from Google Trends, and three flurelated ratio variables derived from athenahealth's visit counts: (flu visit counts)/(total patient visit counts), (ILI visit counts)/(total patient visit counts), and (unspecified viral or ILI visit counts)/(total patient visit counts). We used a rolling two-year window to train the multivariate linear regression model of ARGO to capture dynamic changes in people's online search pattern over time. This two-year training window was used in earlier work [10], and we adopted it here. Therefore, we avoid the potential of overfitting because the length of the training period is predetermined before we even touched the data for this study (as opposed to tuning it from the data). 
As we have more independent variables (52 historical ILI terms, 129 search query terms, and 3 EHR terms) than observations (104 in total, corresponding to the 104 weeks in 2 years) in the training window, we utilized regularized multivariate linear regression by minimizing (a) the sum of squared errors plus (b) the sum of absolute values of the regression coefficients (part (b) is referred to as regularization [32]). Please see the Additional file 1 for the detailed mathematical formulation. For a given time window and a forecasting target, the regularized multivariate linear regression used by ARGO automatically selects the most relevant variables for forecasting by zeroing out regression coefficients of terms that contribute little to the prediction. This stabilizes the estimation and leads to interpretable results by identifying which variables are important for prediction in each week. Our method naturally extends the previous method by Yang et al. [10], which tracks flu in real time using only flu-related Google search terms. We intentionally extend ARGO with minor adaptation in order to take advantage of the robustness of the original ARGO model and to minimize the possibilities of overfitting. All analyses were performed with the R statistical software.

Comparative Analyses We compared ARGO's retrospective forecasts for the four time horizons to the ground truth, the finalized (i.e., revised) CDC ILI activity level, for the time period of July 6, 2013 to February 21, 2015. For strict out-of-sample validation, we also used ARGO to produce flu forecasts for the time period of February 28, 2015 to July 2, 2016. For context, we compared our method with three other predictive methods for the period of July 6, 2013 to February 21, 2015. These methods are: (a) an ensemble prediction approach that combines multiple data sources (Google searches, Twitter microblogs, EHR data, participatory mobile surveillance data), which represents the top Internet-based flu forecasts as described in Santillana et al. [11], (b) an autoregression model (autoregression with 4 time lags) using the CDC's ILI alone, and (c) a baseline "naive" prediction, which simply uses the prior week's ILI activity level as the prediction for ILI activity of the current week and one, two, and 3 weeks later. We note that the same assessment period of July 6, 2013 to February 21, 2015 is studied in the benchmark ensemble method of Santillana et al. [11]. For the validation test (covering February 28, 2015 to July 2, 2016), where all the settings of ARGO are determined without ever touching the data from February 28, 2015 onward, we compared ARGO forecasts with (a) the predictions produced and recorded in the Healthmap Flu Trends system (http://www.healthmap.org/flutrends/), which uses a modified approach that incorporated two additional methodological improvements [10,12] into the original method of Santillana et al. [11], (b) the autoregression model with 4 time lags using the CDC's ILI alone, and (c) the baseline "naive" prediction. Four accuracy metrics: root mean squared error (RMSE), mean absolute error (MAE), root mean squared percentage error (RMSPE), and mean absolute percentage error (MAPE), as well as the correlation, were used to assess the performance of each method. RMSE is the square root of the sample average of the squared prediction error. MAE is the sample average of the absolute prediction error. RMSPE is the square root of the sample average of the squared relative prediction error, where the error is taken relative to the target. MAPE is the sample average of the absolute value of the relative prediction error. For their mathematical definitions, please see Table 1.
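For readers who want a concrete picture of the model and metrics just described, the following is a minimal, self-contained sketch added here for illustration; it is not the authors' code (the paper states that all analyses were performed in R). It fits an L1-regularized ("lasso") linear regression on a rolling 104-week window of lagged ILI values, Google Trends volumes, and EHR ratios, and then evaluates forecasts with the four accuracy metrics defined above. The synthetic inputs, variable names, the fixed penalty value, and the neglect of CDC reporting delays are simplifying assumptions; in practice the penalty would be chosen by cross-validation within each training window.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Illustrative stand-ins for the real weekly inputs.
n_weeks = 260
ili = rng.gamma(2.0, 1.5, n_weeks)      # CDC %ILI-like series (prediction target)
gt = rng.random((n_weeks, 129))         # 129 Google Trends query volumes
ehr = rng.random((n_weeks, 3))          # 3 athenahealth visit-count ratios

def build_design(t):
    """Predictors available at week t: 52 autoregressive ILI lags
    plus the contemporaneous exogenous Google Trends and EHR terms."""
    lags = ili[t - 52:t][::-1]
    return np.concatenate([lags, gt[t], ehr[t]])

def argo_forecast(t, horizon, train_window=104, alpha=0.1):
    """Fit an L1-penalized regression on the trailing two-year window and
    forecast the ILI level `horizon` weeks after week t."""
    X = np.array([build_design(s) for s in range(t - train_window, t)])
    y = np.array([ili[s + horizon] for s in range(t - train_window, t)])
    # The L1 penalty zeroes out coefficients of weakly informative predictors;
    # here alpha is fixed, whereas cross-validation would be used in practice.
    model = Lasso(alpha=alpha, max_iter=50_000).fit(X, y)
    return float(model.predict(build_design(t)[None, :])[0])

# Accuracy metrics used in the paper.
def rmse(p_hat, p):  return np.sqrt(np.mean((p_hat - p) ** 2))
def mae(p_hat, p):   return np.mean(np.abs(p_hat - p))
def rmspe(p_hat, p): return np.sqrt(np.mean(((p_hat - p) / p) ** 2))
def mape(p_hat, p):  return np.mean(np.abs((p_hat - p) / p))

preds = np.array([argo_forecast(t, horizon=1) for t in range(156, 208)])
truth = ili[157:209]
print(rmse(preds, truth), mae(preds, truth), rmspe(preds, truth), mape(preds, truth))
```

The sketch only illustrates the structure of the estimator (one regularized regression per forecasting horizon, refit each week on the trailing two years); the naive and autoregressive benchmarks described above would be evaluated with the same metric functions.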
We calculated the error reduction of ARGO compared to the best available method in the study period (together with a 95% confidence interval based on the stationary bootstrap [33]) and in the validation period.

Results For the period of July 6, 2013 to February 21, 2015, ARGO reduces the RMSE of the best available method by 33%, 20%, 17%, and 21%, for the four time horizons: real-time, one, two, and 3 weeks ahead, respectively. See Table 1, which reports the ratio of the error of a given method to that of the naive method; the raw error of the naive method is given in parentheses. Likewise, ARGO reduces the MAE of the best available method by 19%, 27%, 24%, and 28%; reduces the RMSPE by 32%, 30%, 23%, and 33%; and reduces the MAPE by 23%, 35%, 31%, and 38%, respectively, for the four time horizons. Thus, uniformly across all evaluation metrics, ARGO reduces the forecasting error by about 20-35%. Table S3 in the Additional file 1 gives the raw error of each method in each horizon.

Table 1 (caption). The evaluation metrics between the prediction p̂_t and the target p_t include RMSE = sqrt((1/n) Σ_t (p̂_t − p_t)²), MAE = (1/n) Σ_t |p̂_t − p_t|, RMSPE = sqrt((1/n) Σ_t ((p̂_t − p_t)/p_t)²), MAPE = (1/n) Σ_t |(p̂_t − p_t)/p_t|, and the Pearson correlation. The benchmark models include the ensemble method by Santillana et al. [11], an autoregression model with 4 lags, and a naive model, which uses the prior week's ILI level as the prediction for the current week as well as the next 3 weeks. Boldface highlights the best method for each metric in each forecasting time horizon. RMSE, MAE, RMSPE, and MAPE are relative to the error of the naive method, i.e., the numbers are the ratio of the error of a given method over that of the naive method; the absolute error of the naive method is given in the round bracket. Table S3 in the Additional file 1 gives the absolute error of all methods. For each forecasting time horizon and each evaluation metric, the error reduction of ARGO over the best alternative method is given in the second half of the table, together with 95% confidence intervals (in the square bracket) constructed using the stationary bootstrap [33] with a mean block size of 52 weeks.

A close look at the first panel of Fig. 1 shows that ARGO's real-time estimation captures the timing and intensity of all the peaks of the flu seasons. In addition, we compared our real-time (nowcast) results with real-time estimates obtained by the method that combines autoregressive information with flu-related Twitter microblogs [29] and with the method that combines Google searches with autoregressive information [18] in different time periods. ARGO provides about 20% more MAE reduction from the time-series baseline model compared to that of [29] for all 4 forecasting horizons (MAE reductions of 29.6%, 27.5%, 24.5%, and 22.0% for the nowcast and the 1-, 2-, and 3-week forecasts were reported in [29]), and has about 10-15% more MAE and MAPE reduction from the AR model compared to those of [18] for the nowcast (an MAE reduction from the AR model of 43.9% and a MAPE reduction of 30.5% were derived from the numbers reported in [18]). ARGO's additional error reduction is likely attributable to the joint modeling of multiple information sources. One caveat that we do want to point out is that [29] reported results for the period 2011-2014 and [18] for the period 2009-2013, which are not exactly the same as the time period of this study. These error reductions are statistically significant at the 5% significance level in that the 95% confidence intervals of the error reduction relative to the best alternative, produced using the stationary bootstrap method [33], are all strictly above zero.

See Table 2, which reports the corresponding results for the validation period as the ratio of the error of a given method to that of the naive method; the raw error of the naive method is given in parentheses. For most error metrics and forecasting horizons, ARGO reduces the forecasting error by about 20-35%. The similarity of the results between the validation period and the first test period shows the robustness of our method and greatly reduces the possibility of overfitting. Table S4 in the Additional file 1 gives the raw error of each method in each horizon.

Table 2 (caption). The evaluation metrics are defined in Table 1. The benchmark methods are the same as in Table 1, except that the ensemble method of Santillana et al. [11] is replaced by a refined version broadcast by the Healthmap Flu Trends system. Boldface highlights the best method for each metric in each forecasting time horizon. RMSE, MAE, RMSPE, and MAPE are relative to the error of the naive method, i.e., the numbers are the ratio of the error of a given method over that of the naive method; the absolute error of the naive method is given in the round bracket. Table S4 in the Additional file 1 gives the absolute error of all methods. For each forecasting time horizon and each evaluation metric, the error reduction of ARGO over the best alternative method is given in the second half of the table.

A video showing the performance of ARGO can be found in the Additional file 2; Additional file 3 provides the cover image of this video. We plan to broadcast the real-time performance of ARGO online at http://www.healthmap.org/flutrends.

Fig. 1 (caption). Forecasting results. The four panels show the forecasted ILI activity levels for real-time and 1 to 3 weeks into the future from ARGO (thick red), the method of Santillana et al. [11] (blue), the Healthmap Flu Trends system (green), and the autoregression model with 4 lags (grey), compared to the true CDC ILI activity level (thick black), which became available weeks later. The plot at the bottom of each panel shows the estimation error, namely the estimated value minus the true CDC ILI activity level.

Discussion Our results demonstrate that the digital information contained in EHRs and in Internet users' online search activity can be effectively used to produce accurate and reliable forecasts of flu activity up to 4 weeks ahead of the publication of traditional flu tracking reports from the CDC's ILINet. Our method ARGO reduces the error of previous publicly available Internet-based flu prediction systems by about 20-35% across multiple error metrics, which makes it one of the most accurate flu forecast methods in the literature. The improvement of ARGO over previous methods is even more pronounced given that the ensemble method by Santillana et al. [11] used two more data sources than ARGO in the estimation, namely Twitter microblogs [29,34] and participatory mobile surveillance data (from Flu Near You) [35], in addition to the data that ARGO had access to. The accuracy improvement in ARGO's forecasts emerges from its capability to simultaneously optimize the role of different data sources (and all independent variables) in the predictive model. In contrast, previous approaches [11] used different data sources to produce independent predictive models and subsequently took each model's output into a meta-model.
Therefore, while previous studies [11] have shown the utility of multiple data sources over a single one, our results show that a unified method that transparently accounts for how each data source contributes to the prediction in each time horizon leads to significant performance improvement. Furthermore, as our method also takes the seasonality into account, it is able to produce reliable flu forecasts 3 to 4 weeks into the future. We note that while the CDC's %ILI is only a proxy for flu activity in the population, since it is calculated as the percentage of visits to healthcare facilities that present influenza-like illness symptoms, successfully estimating it can help officials allocate resources in preparation for potential surges of patient visits to healthcare facilities. A more detailed discussion about the importance of other indicators of flu incidence in the population can be found in [2,17,21]. Our proposed digital surveillance system, by accurately tracking and forecasting flu activity, could potentially help promote timely vaccine campaigns, improve risk assessment and communication, and improve hospital resource allocation during flu outbreaks.

Conclusions Novel approaches that use digital data to predict disease incidence, ahead of traditional clinical-based methods, have emerged in recent years [5, 10-12, 16, 25, 29, 35-39]. Slowly, these approaches are gaining acceptance in the public health decision-making process. For instance, Internet users' online search activity has proved to be capable of providing helpful information to public health officials and the general public [10,16,40,41]. As the emergence of Internet-based data and EHRs offers the potential for real-time disease surveillance and forecasting, augmenting traditional syndromic disease surveillance, an often overlooked question is which statistical methods/models are capable of efficiently extracting information from these digital data sources and aggregating it to produce accurate and reliable forecasts. It can be argued that well-tested methods delivering accurate disease estimates are critically needed. For instance, Google Flu Trends was criticized [9, 10, 42-45] not because people questioned the value of online search data [27,46], but because Google Flu Trends produced misleading forecasts in both 2009 and 2012 when it was needed most, due to its sub-optimal method of processing the valuable information [44]. In contrast, our model, ARGO, demonstrates that effectively extracting and combining information from the EHR and Internet search activity, based upon rigorous statistical reasoning, can lead to accurate flu forecasting. We expect that our approach can potentially be extended to finer geographic regions and to the forecasting of other infectious diseases.
Are we ready to fight the Nipah virus pandemic? An overview of drug targets, current medications, and potential leads

Nipah virus (NiV) is a high-lethality RNA virus from the family Paramyxoviridae and genus Henipavirus, classified as a Biosafety Level-4 (BSL-4) pathogen due to the severity of its pathogenicity and the lack of medications and vaccines. Direct contact with infected animals or their body fluids is the major route of NiV transmission. As it is not an airborne infection, the transmission rate is relatively low. Still, mutations of NiV in the animal reservoir over the years, followed by zoonotic transfer, could multiply the deadliness of the virus in coming years. Therefore, the possibility of a pandemic after COVID-19 cannot be denied, considering the severe pathogenicity of NiV, and that is why we need to be prepared with possible drugs in the coming days. Considering the time constraints, computer-aided drug design (CADD) is an efficient way to study the virus, perform drug design, and advance hits to leads experimentally. Therefore, this review focuses primarily on NiV target proteins (covering both NiV and human targets), details of experimentally tested repurposed drugs, and the latest computational studies on potential lead molecules that can be explored as drug candidates. Computationally identified drug candidates, including their chemical structures, docking scores, amino acid-level interactions with the corresponding proteins, and the platforms used for the studies, are thoroughly discussed. The review offers a one-stop resource for what has been done and what can be done in the CADD of NiV.

Introduction Nipah virus is a single-stranded, negative-sense RNA virus [1] which belongs to the genus Henipavirus and the family Paramyxoviridae [2]. The first appearance of Nipah virus (NiV) was reported in 1998 in Malaysia among pig farmers. Subsequently, the virus was detected in Bangladesh [3], Singapore [4], India [5], and the Philippines [6]. All cases reported by the WHO up to December 2022, together with the corresponding mortality ratios and strain types, are listed in Table 1. Fruit bats of the Pteropodidae family, particularly Pteropus bat species, are considered the primary or natural reservoir of NiV. Supporting this claim, NiV was isolated from urine samples of Pteropus lylei [7] in Cambodia and of Pteropus hypomelanus and Pteropus vampyrus in Malaysia [8]. According to various reports, transmission occurs from bats to humans and pigs, from sick pigs or contaminated pig tissue to humans, and even from human to human, as well as through the consumption of contaminated food such as raw date palm juice contaminated with the saliva or urine of infected bats. Initial research indicated that pigs are a primary source of human infections [9], while some studies pointed out that dogs and cats can also be infected by this virus [10]. A report from the World Health Organization (WHO) dated 30 May 2018 indicated that the fatality rate can range from 40 to 75%, and that the rate of transmission as well as mortality can vary depending on the outbreak, the epidemiological scrutiny, and the clinical management by the governing authorities of the place [11]. A NiV-infected person can experience a range of clinical manifestations, from subclinical asymptomatic infection to acute respiratory infection and fatal encephalitis.
Fever can be uniformly present, as indicated by the probable case definition, followed by altered mental state, headache, severe weakness, cough, trouble breathing, diarrhea, myalgia, and dizziness [12]. Around 20% of patients have experienced residual neurological consequences like seizures, mental disorders, and personality changes. Some recovered patients developed delayed-onset encephalitis, as per WHO reports. In one study from 2009, the R0 value of NiV (the average number of people to whom one infected person can pass the illness) in rural Bangladesh was estimated to be 0.48 [13]. NiV thus tends to show a low R0 value but high mortality. So long as all spillover virus strains have an R0 less than 1 and do not evolve within their human hosts, each spillover will result in only sporadic person-to-person transmission chains. However, if a strain with an R0 higher than 1 were to spread, or if a strain infecting a person were to acquire an R0 higher than 1, humanity might confront its most destructive pandemic [14]. The last outbreak of NiV occurred in Kozhikode, Kerala, South India, in 2018, with 18 cases and 88.8% mortality [15]. In the same year, NiV was considered in the 2018 Annual Review of Diseases prioritized under the Research and Development Blueprint proposed by the WHO [16]. However, there is no approved specific drug or vaccine for NiV to date. NiV encodes six structural proteins, of which the glycoprotein (NiV-G) and fusion protein (NiV-F) are surface proteins. The remaining four proteins are inner proteins, comprising the matrix protein (NiV-M), phosphoprotein (NiV-P), nucleoprotein (NiV-N), and the large protein or RNA polymerase protein (NiV-L). Meanwhile, the P gene also encodes the P, W, V, and C proteins. The NiV structure, with the six major targeted proteins and their nucleotide lengths, is illustrated in Fig. 1. The detailed function of each protein will be explained in the next section. Facing the real possibility of a NiV outbreak, serious attention should be given to finding potential drugs that can treat NiV without further delay. Over the years, computer-aided drug design (CADD) has enabled scientists to reduce the amount of time spent on synthetic and biological testing [17], speeding up the drug design and discovery process in many instances. The classic function of CADD in drug development is to screen large chemical databases and/or libraries down to subsets of predicted active compounds, allowing for the optimization of lead compounds by enhancing their biological properties (such as affinity and ADMET profiles) and the construction of chemotypes from a nucleating site by incorporating fragments with optimized function [18]. There is no denying that a rational combination of CADD and experimental analysis is the best possible approach to finding small-molecule drugs for NiV. This review discusses in detail all the NiV protein targets that can be explored through CADD and discovery efforts to fight NiV. Although there is no approved medication available for NiV, we have tried to summarize the major medications used over the years to treat or alleviate the symptoms and severity of the disease, along with their suggested doses and administration routes. This is followed by a detailed discussion of the present status of computational drug research and future directions for NiV drug discovery, to offer researchers a complete picture of the current state of the art in Nipah research.
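Before turning to the protein targets, a brief illustrative aside on the reproduction-number argument made earlier in this introduction (this sketch is an addition for illustration, not part of the original review): if each case infects on average R0 new people, then a spillover with R0 below 1 seeds a transmission chain whose expected total size is 1/(1 − R0), about 1.9 cases for the estimated R0 = 0.48, whereas an R0 above 1 allows chains to grow without bound. The small simulation below, which assumes a simple Poisson branching process, makes this contrast concrete.

```python
import numpy as np

rng = np.random.default_rng(42)

def chain_size(r0, max_cases=2_000):
    """Total human cases from one spillover, assuming each case infects a
    Poisson(r0) number of new people (a simple branching-process model)."""
    cases, active = 1, 1
    while active > 0 and cases < max_cases:
        new = rng.poisson(r0, active).sum()
        cases += new
        active = new
    return cases

for r0 in (0.48, 0.9, 1.5):
    sizes = np.array([chain_size(r0) for _ in range(5_000)])
    print(f"R0 = {r0}: mean chain size = {sizes.mean():.2f}, "
          f"chains reaching 100+ cases = {(sizes >= 100).mean():.2%}")
# For R0 = 0.48 the mean is close to 1/(1 - 0.48) ≈ 1.9 and large chains are
# essentially absent; for R0 = 1.5 many chains keep growing until the cap,
# so the printed mean is an underestimate of the true (unbounded) growth.
```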
Potential protein targets for NiV drug discovery In this section, we have discussed major structural features and characteristics of all available target proteins for the drug discovery of NiV covering target of NiV and human target. All available PDB ID for all NiV proteins is discussed and listed in Table 2. Nipah attachment glycoprotein (NiV-G) NiV attachment glycoprotein (G Protein) was a critical virulent factor in charge of the host cell receptor attachment [19] including 602 amino acids [20]. The NiV-G protein, unlike most paramyxoviruses, lacks hemagglutinating and neuraminidase activity and does not attach to carbohydrate moieties [20,21]. Research also indicated that the tyrosine 28/29 was critical for correct targeting [22]. Another work on NiV-G was proposed in 2020 [25]. The set of ligands included in this study belonged to The Pathogen Box Medicine for Malaria Venture. The Autodock-Vina was used to perform virtual screening with a 60 × 60 × 60 centered in (XYZ dimension of 27.77, 6.268, 85.037) grid. Autodock performed rigid-flexible molecular docking between NiV-G protein (PDB ID:3D11) and ligands. In the GROMACS software and PRODRUG server, Gro-mos9643a1 was applied for molecular dynamics computation. PyMOL was used to determine the receptor-ligand interaction. Cys240 and Arg236 were proposed as two more critical amino acids for NiV-G binding pocket. Nipah fusion glycoprotein (NiV-F) During NiV fusing to human cells, NiV-G and NiV-F must work together [26]. NiV-F was another critical protein target for potential NiV inhibitor-based drug discovery in this case. The fusion protein was considered type I transmembrane protein, which includes 546 amino acids [27]. Before integration into new virions, the F 0 precursor is broken into disulfide-linked components F 1 and F 2 . The F 1 cleavage product formed from F 0 contains multiple functions and is accountable for fixing the F protein in lipid membranes [28]. Approximately 20 hydrophobic amino acids comprise fusion peptides at the N-terminal region of the F1 subunit. The fusion peptide is highly conserved across all paramyxoviruses and is essential for the biological activity of the F protein, which is inserted into the target membrane to initiate the fusion process [20,29]. Nipah matrix protein (NiV-M) NiV (NiV-M) matrix protein activates many cellular machineries to facilitate and regulate virion budding at the plasma membrane [30]. Matrix protein exploits cellular trafficking and ubiquitination mechanisms to achieve transitory nuclear localization. During the initial 16-20 h following infection, NiV-M was observed to be confined to the nucleus and nucleolus [31,32]. It indicates that paramyxovirus M proteins have crucial nonstructural functions at early stage of the viral replication cycle. Still, their major and best-characterized function coordinates the assembly and budding of offspring viruses at the plasma membrane [30]. Various host pathways, including the ubiquitination mechanism, nuclear export apparatus, and vesicle sorting and trafficking components, can be used as prospective therapeutic targets by the detailed studies of key NiV-M interactions with host proteins for future drug discovery [30]. Nipah phosphoprotein (NiV-P) The multimeric phosphoprotein (P) of the NiV attaches the viral polymerase to the nucleocapsid [33] which plays a significant role in the genome replication [34,35]. 
The NiV-P multimerization domain consists of a long, parallel, tetrameric, coiled coil with an N-terminal cap and a hydrophobic core [33]. NiV-G has a very close connection and interaction with NiV-N, which will be mentioned in the NiV-N section. The NiV-P gene encodes 4 nonstructural proteins, i.e., P, W, V, and C proteins. Nipah nucleocapsid protein (NiV-N) The association of Nipah virus nucleocapsid protein (NiV-N) with NiV-P during nucleocapsid construction is a crucial step in the virus replication, as only the encapsulated RNA genome can be used for multiplication [36]. Their experimental result also proved that NiV N-P colocalization should be the critical step for the normal functioning of N in viral replication, as mutant N proteins lacking the capacity to form N-P complex cannot operate in minigenome tests [36]. In 2018, Ranadheera et al. proposed their experimental result for the interaction between the NiV-N and NiV-P virus replication regulations [37]. Their result indicated that viral translation and virion generation become dysfunctional in the presence of elevated levels of recombinant NiV N-HA including one or both NiV P binding domains. Furthermore, removal of both NiV P binding sites from NiV N rendered its effects null and void, indicating that contact between NiV N and NiV P is crucial for viral replication inhibition [37]. Nipah large protein or RNA polymerase protein (NiV-L) The structure of the L protein of NiV was not identified up to date. However, research indicated that after performing the BLAST search, the L protein of Human Parainfluenza virus 5 (HPIV-5) with PDB ID: 6V85 gained the highest score of homology sequence [38]. Around 32.39% identity of the NiV-L protein sequence can be pairwise with HPIV-5. Human cell surface receptor ephrin-B2/ephrin-B3 Several scientific articles have extensively studied and documented the targeting of ephrin-B2 and ephrin-B3 by the NiV. As a method of entrance and invasion into the host, the NiV targets ephrin-B2 and ephrin-B3 receptors in host cells [39][40][41][42]. Ephrin is a membrane-bound ligand that plays a significant role in various biological processes, including cell migration and differentiation [43]. The NiV attaches to these receptors via its spike glycoprotein (G). Once the virus connects to the ephrin-B2 and ephrin-B3 receptors, it undergoes endocytosis, which is absorbed into the host cell. This permits the virus to multiply and disseminate to other cells in the host, resulting in the symptoms of NiV infection. It has been demonstrated that soluble ephrin-B2 or ephrin-B3 proteins, as well as soluble NiV-G proteins, inhibit virus entry and cell-cell fusion (Fig. 2). However, the interference with ephrin-B function and the antigenicity of NiV-G itself restrict the practical value of these molecules as antivirals [44]. Importin alpha 3 Importin alpha 3 (Imp3) is utilized by the NiV to reproduce and disseminate within host cells. Imp3 is a karyopherin that transports viral proteins and replication machinery into the nucleus of the host cell [45,46]. According to studies, the NiV infects host cells by employing Imp3 as a mediator for its entrance into the cell nucleus [47]. NiV hijacks Imp3 upon infection and exploits it to carry its replication machinery into the host cell nucleus, where it replicates and spreads (Fig. 2). 
Simultaneously, research indicated that, compared to all other importin molecules, the unique ARM7 and ARM8 conformation of Imp3 provides increased binding interface availability for NLS interactions [48]. Significantly, additional proteins relying on nuclear transport by Imp3 and Imp4 are outcompeted by W, resulting in decreased transport and nuclear activity [49,50]. Heparan sulfate proteoglycans NiV can utilize an attachment receptor called heparan sulfate proteoglycans to adhere to nonpermissive circulating leukocytes, facilitating viral spread inside the host [51]. Heparan sulfate proteoglycans (HSPG) consist of unbranched, negatively charged heparan sulfate (HS) polysaccharides that are linked to a variety of cell surface or extracellular matrix proteins [52]. Cells lacking heparan sulfate were unable to mediate Henipavirus trans-infection and showed decreased susceptibility to infection.

Present medication status for NiV Currently, no US FDA-approved drug or vaccine can be prescribed for NiV treatment. According to the Centers for Disease Control and Prevention (CDC) [53], the treatment of NiV is limited to managing the symptoms that occur. Based on research to date, ribavirin [54], chloroquine [55], remdesivir [56], and favipiravir [57] have been evaluated experimentally in either African green monkeys or Syrian hamsters. Ribavirin [54] was reported to increase overall survival (by 36%) when tested in Nipah encephalitis patients using both oral and i.v. administration. However, no NiV-specific drug can effectively treat patients. Although the CDC has noted that the monoclonal antibody m102.4 passed phase 1 clinical studies and was administered for compassionate use in 2020, m102.4 has not been approved to date [53]. Due to the lack of specific drugs for NiV at present and the possibility of a NiV outbreak in the future, CADD, in combination with experimental efforts, is without doubt an integral part of accelerating drug development against Nipah. In Table 3, we have compiled the drugs experimentally tested to date, mostly existing drugs repurposed to check their effectiveness against NiV, along with monoclonal antibodies (for example, Table 3 notes that intranasal administration of Q-GRFT offers substantial protection against a lethal intranasal NiV-B challenge [64]).

Computational drugs and drug derivatives under investigation Computational modeling through high-throughput screening and virtual screening is highly significant for selecting the best drug candidates for NiV. To speed up the traditional drug discovery process, CADD is a proven method in combination with experimental efforts. Over the years, scientists worldwide have tried to find a possible cure for NiV employing molecular modeling tools like quantitative structure-activity relationships (QSARs), docking, pharmacophore modeling, homology modeling, molecular dynamics (MD) simulations, binding energy calculations, and ADMET profiling approaches. We have tried to summarize their efforts in this section, which can be a helpful resource for researchers who want to invest their time in the drug discovery of NiV.

Nipah G (NiV-G) protein target and computational drugs Pathania et al. came up with potential antivirals for NiV [65]. The authors used AutoDock Tools to prepare the NiV-G crystal structure (PDB ID: 2VSM). A pool of 2327 US FDA-approved drugs was collected from the DrugBank database for virtual screening. The Open Babel program was used for ligand energy optimization with the MMFF94 force field. Molecular docking was performed with QuickVina, and molecular image rendering was carried out with PyMOL. Considering a binding affinity of ≤ −10 kcal/mol as the threshold, 17 potential inhibitors were selected. OSIRIS Property Explorer, together with a chemical-protein interaction network analysis, was used to assess the risk of side effects for all these molecules, and nilotinib, deslanoside, and acetyldigitoxin were identified as the top three drug candidates (Fig. 3). Key contacts were formed with Tyr581, Ala558, Ala532, and Pro488, along with hydrophobic interactions with Gln490, Trp504, Leu305, His281, and Lys560 and a salt bridge involving Lys560 and His281. Since two of these US FDA-approved drugs (deslanoside and acetyldigitoxin) are cardiac glycosides, they were expected to help regulate cardiac inflammation during NiV infection treatment.
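As a purely illustrative aside (not the pipeline of any of the cited studies), the screening funnel described above can be sketched in a few lines: dock a library, keep hits at or below a binding-affinity cutoff, and then apply a simple drug-likeness filter before deeper ADMET and MD analysis. The ligand names, SMILES strings, and scores below are placeholders, and the use of RDKit for the rule-of-five check is an assumption for illustration only.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

# Hypothetical (ligand name, docking score in kcal/mol, SMILES) tuples, e.g.
# parsed from AutoDock Vina / QuickVina output logs.
docked = [
    ("ligand_A", -10.4, "CC(=O)Oc1ccccc1C(=O)O"),
    ("ligand_B", -9.2,  "c1ccc2c(c1)ncc(n2)N"),
    ("ligand_C", -11.1, "CCN(CC)C(=O)c1ccc(N)cc1"),
]

AFFINITY_CUTOFF = -10.0  # kcal/mol, as in the screen described above

def passes_lipinski(mol):
    """Rule-of-five check: MW <= 500, logP <= 5, H-bond donors <= 5, acceptors <= 10."""
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

hits = []
for name, score, smiles in docked:
    if score > AFFINITY_CUTOFF:          # less negative than the cutoff: discard
        continue
    mol = Chem.MolFromSmiles(smiles)
    if mol is not None and passes_lipinski(mol):
        hits.append((name, score, round(Descriptors.TPSA(mol), 1)))

# Surviving candidates would then proceed to ADMET profiling, MD simulation, etc.
print(hits)
```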
Virtual screening was performed on the NiV G-protein (PDB ID: 3D11) by Naeem et al. [19], employing 2118 blood-brain barrier negative (BBB −) and 189 BBB + compounds from the gold and platinum Asinex libraries, selected from a total pool of 211,620 molecules. Molecular docking, density functional theory (DFT), and MD simulations were performed. The two compounds in Fig. 4 had the highest performance in each group. For compound 4 (from the BBB − group), a binding energy of −10.4 kcal/mol was obtained, and His281 and Tyr508 were found to form conventional hydrogen bonds with distances of 2.26 Å and 2.07 Å, respectively. Carbon-hydrogen bonds were also found with Cys282 and His281 at distances of 3.22 Å and 3.41 Å, respectively. Meanwhile, hydrophobic interactions existed between compound 4 and Trp504, Phe458, Pro41, and Tyr508, while Leu305, Tyr351, Gly506, Val507, Pro353, and Gly352 showed van der Waals interactions. Compound 5 was from the BBB + group, and a binding energy of −9.2 kcal/mol was reported. Hydrogen bonds existed between compound 5 and Lys560 and Tyr351, with reported distances of 2.79 Å and 1.91 Å, respectively. Simultaneously, hydrophobic and van der Waals interactions were found with 12 amino acids: Tyr508, Asp219, Ser241, Cys240, Arg236, Thr218, Asp302, Ile304, Phe458, His281, Cys282, and Gly352. Ribavirin acted as a control for the study, with a reported binding energy of −6.3 kcal/mol. E_LUMO and E_HOMO were also studied: the BBB − compound 4 had an energy gap of 0.14382 eV, while the BBB + compound 5 had an energy gap of 0.15387 eV. Both had low energy gaps, indicating good chemical reactivity and low kinetic stability. Computational studies of favipiravir and its derivatives were also performed to check their effectiveness against the NiV-G protein [66]. Favipiravir also shows relatively good performance in experimental work [57]. Based on the computational results of Lipin et al., piperazine-substituted derivatives were found to be worth further study [67]. In that work, the structure-property relationship of favipiravir was examined. With the help of DFT calculations, the geometries of all the designed molecules were optimized. Simultaneously, ADMET profiling and molecular docking were performed to assess the chemical properties and the affinity for the NiV G-protein. In the ADMET study, most of the designed favipiravir derivatives showed a higher pIC50. The study indicated that piperazine-substituted favipiravir derivatives have a higher ability to bind than the original favipiravir structure. James et al. studied 22 favipiravir derivatives created by incorporating heterocyclic moieties (such as pyrazole, imidazole, and pyrazino groups) into favipiravir [66].
All derivatives were subjected to molecular docking against the Nipah G-protein (PDB ID: 3D11) target, followed by physical property evaluations and ADMET studies. The results indicated that three compounds performed relatively better than the favipiravir control, as shown in Fig. 5. Among these three compounds, compound 6 has the strongest binding energy at −6.16 kcal/mol, followed by compounds 7 and 8, which have scores of −5.50 and −5.38 kcal/mol, respectively. Meanwhile, all the designed compounds followed Lipinski's rule of five, suggesting no issues with oral bioavailability. Strong interactions between compound 6 and the G-protein were found (hydrophobic interactions with Tyr309, Ile304, Ile401, Phe369, Tyr401, Ile408, and Leu409, and polar interactions with Thr308, Ser307, Ash306, Ser405, and Hid406). Hydrogen bonds were also reported with amino acid residues Thr308 and Hid406. The results indicated that incorporating heterocyclic moieties into favipiravir can increase the effectiveness of the designed drugs against NiV compared to favipiravir itself. Kalbhor et al. proposed potential inhibitors for NiV by screening large antiviral-specific chemical libraries with multi-step molecular docking and MD simulations [68]. In this study, 8722 chemical entities were collected from the Asinex antiviral library, 3700 compounds from the Enamine antiviral library, and 67,470 molecules from the ChemDiv antiviral library. LigPrep was used to prepare all the collected molecules for docking. NiV-G (PDB ID: 2VSM) was prepared with the Protein Preparation Wizard module. Multi-step docking was performed on all 79,892 molecules. At the same time, the Prime MM-GBSA method was used for binding free energy computation. From the total pool of chemicals, 299 compounds met the stipulated thresholds of a −6.00 kcal/mol docking score and a −40.00 kcal/mol MM-GBSA score. Subsequently, 207 molecules were selected based on an ADME study using the SwissADME web server. Based on a TOPKAT toxicity prediction assessment, 14 non-toxic compounds resulted. Combining the results of the whole study, five molecules were selected, as shown in Fig. 6. Compound 9 showed the strongest binding energy and the best interactions with the studied NiV-G protein. It made H-bonds with amino acid residues Trp504 and Tyr508 and formed hydrophobic interactions with Pro441, Pro448, Val507, and Lys560, together with pi-stacking with Phe458 and Trp504. The TPSA scores for compounds 9 to 13 were between 61.44 and 137.32 Å², which indicated that all the mentioned compounds could potentially be orally active.

NiV fusion glycoprotein (NiV-F) target drugs During the process of NiV fusion with human cells, NiV-G and NiV-F must work together [26]. NiV-F is therefore another critical protein target for potential NiV inhibitor discovery. Niedermeier et al. designed 18 quinolone derivatives targeting NiV-F (PDB ID: 1WP7) [69]. The Schrödinger Protein Preparation Wizard was used to prepare the protein before docking was performed. LigPrep was used for ligand preparation, and molecular docking was performed with Schrödinger Glide XP. Compound 14 in Fig. 6 showed the highest score in both the computational and experimental studies. An experimental anti-NiV EC50 of 1.5 μM was identified. Computational modeling revealed that compound 14 fits well into a specific protein cavity on the NiV-F protein that is essential for the fusion process.
Simultaneously, hydrophobic interactions were found between the ligand (marked in green in Fig. 7) and residues I474, L481, and V484 in the hydrophobic pocket on the protein surface. Besides the computational design, experimental work was also carried out by the authors for compound 14, checking the inhibition of NiV envelope protein-induced cell fusion, followed by a cytotoxicity evaluation. With an anti-NiV EC50 of 1.5 μM, a CC50 of more than 20 μM, and an SI (CC50/EC50) of more than 13, compound 14 showed the highest activity and the lowest cytotoxicity among the twenty-three compounds. Verma et al. proposed three phytochemicals, natural 6-gingerol (compound 15), 4-hydroxypanduratin A (compound 16), and luteolin (compound 17), as potential NiV-F inhibitors [70]. 4-Hydroxypanduratin A is a food component in many Asian countries, 6-gingerol is an active component of Zingiber officinale, and luteolin is a component of traditional Chinese medicine. The .sdf files of these three molecules were converted to PDB format with Open Babel. AutoDock4 was used for molecular docking and computing binding scores, while BIOVIA Discovery Studio was used to visualize the major interactions between the ligands and NiV-F. The best binding energy score of −4.83 kcal/mol was reported for compound 16, while compounds 17 and 15 (Fig. 8) showed binding energies of −4.58 and −3.98 kcal/mol, respectively.

NiV multi-protein target drugs Randhawa et al. [71] proposed a potential multi-target computational drug discovery study in which three databases were curated to find potential inhibitors for NiV-G, NiV-F, and NiV-N. A total pool of 1231 compounds was considered, of which 142 plant-derived molecules were retrieved from SerpentinaDB, 868 molecules from Phytochemica, and 221 ligands from the Phytochemical and Drug Target Database (PDTDB). All the molecules were optimized with the Merck Molecular Force Field (MMFF94) in Open Babel v2.4.0. For NiV-G, NiV-F, and NiV-N, the following structures were chosen from the PDB for the molecular docking study: 2VSM, 5EVM, and 4CO6, respectively. Based on the docking binding energy profiles, 8 potential inhibitors were selected, with the requirement that they bind all three proteins with a binding affinity more negative than −7.8 kcal/mol for each protein. Remdesivir was used as a control drug, with a reported binding energy of −7.5 kcal/mol to NiV-G, −6.6 kcal/mol to NiV-F, and −5.6 kcal/mol to NiV-N. Using ADMETlab, ADME and pharmacokinetic properties were computed for the selected compounds to check their PK/PD profiles. A Z-score was used to evaluate the pharmacokinetic properties, and the top three compounds, 18, 19, and 20 (Fig. 9), reported positive scores. Simultaneously, gene expression induction was also assessed by the authors for the top identified molecules, and it was found that only a small portion of proteins could be induced. MD simulations of the three molecules were performed with the Groningen Machine for Chemical Simulations (GROMACS). Compounds 18 and 19 were stable in all three proteins, with average RMSDs of ~0.12 nm in NiV-G, ~0.35 nm in NiV-F, and ~0.12 nm in NiV-N. Meanwhile, the docked complexes were found to be more stable than the free apo-proteins. Compound 20 had a stable performance in NiV-G and NiV-F, with RMSD values similar to those of compounds 18 and 19. Sen et al. studied all the targeted proteins of NiV and proposed 4 putative peptide inhibitors and 146 small-molecule inhibitors [72]. A total of 22,685 ligands from the ZINC database and the NiV Malaysian strain AY029768.1 were used in this study.
ModPipe was used for the homology modeling pipeline, and MODELLER was used for building the multimeric complexes. Using both sequence-sequence and profile-sequence search strategies, the homology modeling templates were identified by the authors. AutoDock4 and DOCK 6.8 were used for the molecular docking studies; meanwhile, AMBER was used to assign the charges on the protein atoms during this process. Active pockets and binding sites were searched for all the proteins using the DEPTH server. The docking results indicated that ZINC04829362, a known drug molecule of potential use for depression and Parkinson's disease, could bind the N protein. All the ligands and proteins used in this study, along with their rankings, binding energies, and the RMSD between the AutoDock and DOCK poses, can be accessed at http://cospi.iiserpune.ac.in/Nipah.

New proposed strategies for drug discovery of NiV A new strategy named the "drug-target-drug network-based approach" was proposed in 2021 [73]. Nipah virus and 13 other viruses were tested against existing US FDA-approved drugs using drug-target-drug network analysis. All the US FDA-approved drugs were evaluated with a confidence score through the computational study, following a repurposing rationale. The authors found 16 drugs that could be repurposed for both hepatitis E virus (HEV) and NiV. Further, molecular docking was used to validate the strategy's credibility and to identify the best possible candidates for the specific viruses. Rajput et al. offered the "anti-Nipah" web resource in 2019 [74]. The data bank contained 313 chemicals, and a QSAR model was used to predict their inhibitory effect. Based on a model integrating recursive regression, the inhibitory activity of any untested or unknown compound can be predicted through the web service with a Pearson's correlation coefficient (PCC) of 0.82, which is highly robust.

Conclusion NiV is a highly lethal but weakly transmissible virus, classified as BSL-4 [75] due to its characteristics. The current state of preparedness against a possible NiV outbreak is not optimistic. The world has not developed any US FDA-approved drugs that can be used in humans, and some drugs showed excellent in vitro suppression of the virus but did not work in animals. In this situation, computational drug discovery is an excellent way to accelerate drug development. Combining our survey of available protein structures and the existing papers on computational drug development, we observed that most drug development focuses on the glycoprotein and the fusion protein. However, four other proteins (NiV-P/M/N/L) are also critical to NiV. Future scientific research can gradually fill in the gaps in drug development for these other potential protein targets.
Communicating amounts in terms of commonly used budgeting periods increases intentions to claim government benefits Significance In response to rising child poverty, in 2021, the Biden administration sent direct cash transfers to families through the expanded child tax credit (CTC). However, millions of low-income families did not automatically receive their CTC and needed to actively claim it. Policymakers have tried to reach out to this low-income population, highlighting that they could receive up to $3,600 per year for each child. The current work demonstrates that this messaging strategy may be suboptimal. Using common budgeting periods (e.g., $300 a month) to describe the CTC benefit amounts increased CTC claiming intentions relative to the status quo. Millions of eligible families did not claim their 2021 expanded child tax credit (CTC), collectively forgoing billions of dollars. To address this problem, many policymakers focused on increasing awareness of the CTC by highlighting that families could receive up to $3,600 a year per child. However, people rarely budget on a yearly basis. We propose that communicating the CTC benefit amount in terms of commonly used budgeting periods (e.g., $300 a month) instead of uncommonly used budgeting periods (e.g., $3,600 a year) could increase interest in claiming the CTC. Two large-scale field experiments (n = 16,696) among low-income individuals support this account. Using common (vs. uncommon) budgeting periods to describe CTC benefit amounts increased CTC claiming intentions by 16 to 26%. A third large-scale field experiment (n = 14,178) demonstrated that encouraging people to consider different budgeting periods moderated these effects. These results suggest that communicating amounts in terms of common budgeting periods is a simple, cost-effective way to stimulate interest in claiming government benefits. budgeting | income | government benefits | public policy Child poverty in the United States increased from 15.7 to 17.5%, resulting in 1.2 million more children living in poverty from 2019 to 2020 (1). In response to this worrying trend, the Biden administration expanded the child tax credit (CTC), automatically sending most families with children direct unrestricted cash transfers. As a result, the CTC is credited with helping more than 65 million children (2). However, as of December 2021, an estimated four million children and their families had not received their CTC (3). These were primarily low-income families who were not required to file taxes due to their low incomes. Therefore, the Internal Revenue Service (IRS) did not have the required information to send these families their payments automatically. Thus, the families who would benefit the most from the CTC payments were the least likely to receive them. Instead, these low-income families had to actively claim their CTC with the IRS. Policymakers tried to persuade this low-income population to claim their CTC by increasing awareness of the program. Messaging campaigns often highlighted that families could receive up to $3,600 a year per child. For example, a post from the White House Instagram page (Fig. 1) described the benefit amount on a yearly basis. Prior research suggests that communicating amounts on a yearly basis might be an effective strategy for increasing interest in claiming the CTC. Expenses described on a yearly basis tend to be perceived as larger than the same amounts expressed across shorter time frames (4). 
Consistent with this prior finding, a recent examination of annuity payments shows that large lump sum payments are often perceived as larger than their monthly equivalents (5). However, we propose that describing CTC amounts on a yearly basis might not be optimal. Instead, people may be more interested in income streams described in terms that match their budgeting periods. Therefore, we hypothesize that describing the CTC benefit amount in terms of commonly used budgeting periods (e.g., $300 a month) instead of uncommonly used budgeting periods (e.g., $3,600 a year) will increase interest in claiming the CTC. Budgeting is a fundamental part of how people organize their financial lives. Over 65% of Americans report using a budget to help manage their finances (6). However, fewer than 9% of individuals in a nationally representative study of US households reported budgeting on a yearly basis (6). In contrast, over 85% report budgeting on a weekly or monthly basis. Consistent with the general population, lower-income populations also do not budget annually. Indeed, in a pre-registered pilot study we conducted among government benefit recipients (n = 499), only 0.6% reported budgeting on a yearly basis. Instead, most people reported budgeting on a weekly or monthly basis (28.3% and 39.5%, respectively) (see SI Appendix for more details). Two related factors, the ease of budget creation and the timing of expenses, may help explain individuals' distaste for yearly budgeting. First, people have more difficulty creating yearly (vs. monthly) budgets (7). As a result, individuals have lower confidence in their yearly budget estimates than their monthly budget estimates. Second, people incur expenses on a frequent basis, with the average person incurring 70 expenses per month (8). The tendency to create budgets across relatively short periods (e.g., weekly or monthly vs. yearly) may reflect the complexity of people's financial lives. We propose that people prefer income streams when they are described in terms that match their common budgeting periods, because this match may help them simplify, plan, and effectively manage their resources. In line with this notion, De La Rosa and Tully (9) demonstrated that matching the timing of people's income and expenses led people to report greater ease and confidence in predicting their resource sufficiency. In contrast, standard economic theory would suggest that people should prefer to receive their income upfront, regardless of their commonly used budgeting period. As an initial exploration of these competing hypotheses, in the same pilot study mentioned above, participants were asked to think about earning $60,000 a year. Participants were then asked whether they would prefer to receive $60,000 on January 1 or $5,000 on the first of every month for a year.
Consistent with our theorizing, the vast majority of participants (84.8%) preferred to receive the monthly income stream (SI Appendix). One explanation of this result could be that people perceive monthly income streams as larger than yearly income streams or large lump sums. Indeed, the perceived relative size of objective amounts can differ as a function of how they are described (4,5). However, because all participants read that they were earning a $60,000 a year salary in the experiment's instructions, differences in perceptions of the salary amount were unlikely to drive these preferences. Instead, consistent with the budgeting tendencies described above, 71.4% of those who preferred a monthly income stream noted that they chose the monthly income stream because it would help them budget better (SI Appendix). We build on these insights to develop an intervention to increase applicants' interest in the CTC. Specifically, we hypothesize that describing the CTC benefit amount in terms of commonly used budgeting periods (e.g., $300 a month) instead of the currently used uncommon budgeting period ($3,600 a year) will increase people's intentions to claim the CTC. We test this proposition across three large-scale field experiments among government benefit applicants. In these experiments, Code for America, a nonprofit aimed at improving how the government serves the public, randomly selected participants from an internal list of individuals likely to be eligible for the CTC. CTC eligibility was estimated based on participants' household income and composition as reported on a prior government benefit application. Participants received a personalized text message from Code for America, including their name, their expected CTC benefit amount, and a link to a Code for America website created to help their clients go through the CTC claiming process. These participants were used to receiving text messages from Code for America, as they had all previously applied for the Supplemental Nutrition Assistance Program using Code for America's services. Participants could start the CTC claiming process immediately after receiving the text message or contact Code for America with any questions. Thus, participants received helpful, targeted, and personalized text messages. We examine participants' CTC claiming intentions as measured by their likelihood of clicking on the CTC claims website link included in the text message. We compare the click-through rates when benefits are described using common budgeting periods (i.e., weekly or monthly) to the status quo in which benefits are described on a yearly basis (an uncommon budgeting period). We first report two large-scale field experiments (n = 16,696), which find that describing benefit amounts in terms of common (vs. uncommon) budgeting periods increases CTC claiming intentions. Experiment 1 provides an initial demonstration of the main effect by comparing the efficacy of messages describing the CTC in terms of a common budgeting period (weekly) versus an uncommon budgeting period (yearly). Experiment 2 examines the generalizability of the effect found in experiment 1 by comparing a different common budgeting period, monthly, to the yearly control. Finally, experiment 3 (n = 14,178) provides evidence for the proposed conceptual model. It demonstrates that actively prompting people to consider less common budgeting periods moderates these effects. 
Taken together, these results suggest that describing government benefit amounts in terms that map onto commonly used budgeting periods is a simple, cost-effective way to stimulate interest in claiming government benefits. Results We pre-registered our hypotheses, study designs, and planned analyses for all studies in the paper. All data and pre-registrations are available on ResearchBox (ResearchBox 530; https:// researchbox.org/530). Experiment 1. Experiment 1 serves as an initial test of our hypothesis that describing benefit amounts in terms of common budgeting periods (vs. the uncommon yearly control) increases CTC claiming intentions. Participants were randomly assigned to one of two budgeting period conditions (common vs. control). In the common budgeting period condition, the benefit amount was described on a weekly basis (e.g., $69 a week). In the control condition, the benefit amount was described on a yearly basis (e.g., $3,600 a year). The specific amounts displayed varied as a function of the number and ages of children in each household. Messages were successfully sent via text to a random sample of 8,448 US residents from Code for America's user base that were likely CTC eligible. As predicted, binary logistic regressions revealed that participants were more likely to visit the website (common: 34.9% vs. control: 27.6%; B = 0.34, SE = 0.05, z = 7.18, P < 0.001) and click "File your simplified return now" (common: 16.6% vs. control: 12.2%; B = 0.36, SE = 0.06, z = 5.76, P < 0.001) in the common budgeting period condition compared to the control. Experiment 2. Experiment 1 demonstrated that describing benefit amounts on a weekly basis (a common budgeting period) increased CTC claiming intentions compared to the control. Experiment 2 aims to expand the generalizability of this effect to another common budgeting period (monthly). Experiment 2 also examines whether the effects found in experiment 1 were a result of the specificity of the amount shown, instead of the budgeting period. For example, translating the yearly amounts into weekly amounts often resulted in nonround numbers (e.g., $3,600 a year vs. $69 a week). Prior research has demonstrated that people react differently to round vs. nonround numbers (10,11). To account for this alternative explanation, in experiment 2, we compare responses to descriptions of the CTC amount communicated on a monthly basis, which always resulted in round numbers ending in zero (e.g., $300 a month, $550 a month). Participants were randomly assigned to one of two budgeting period conditions (common vs. control). In the common budgeting period condition, the benefit amount was described on a monthly basis. In the control condition, the benefit amount was described on a yearly basis. Messages were successfully sent to 8,248 participants via text using the same sampling methodology as in experiment 1. We replicated the findings from experiment 1. Binary logistic regressions revealed that participants were more likely to visit the website (common: 31.7% vs. control: 27.4%; B = 0.21, SE = 0.05, z = 4.29, P < 0.001) and click "File your simplified return now" (common: 14.8% vs. control: 11.9%; B = 0.25, SE = 0.07, z = 3.82, P < 0.001) in the common budgeting period condition compared to the control. Experiment 3. Experiment 3 provides an additional test of the proposed conceptual model, which specifies that people are more responsive to income streams that match the budgeting period they are considering. 
Most people naturally consider budgets in weekly or monthly terms (6). Thus, describing the CTC benefit amount in these terms should increase their intentions to claim the CTC, as we found in experiments 1 and 2. This theorizing would further suggest that prompting individuals to actively consider the budgeting period that matches the description of the CTC benefit amount should moderate these effects. This moderation would provide additional evidence that people seek out income streams that help them budget effectively. Additionally, this moderation would make a number of alternative explanations less plausible. Experiment 3 tests this conceptual model directly. The messages varied across two factors. First, we varied whether participants were encouraged to think about their budgets on a monthly or yearly basis (the budget period prompt). Second, we varied whether the CTC benefit amount was described on a monthly or yearly basis (the benefit amount description). Participants were randomly assigned to one of four conditions in this 2 × 2 between-subject experiment design ( The text in brackets varied as a function of the person receiving the message (name and amount) and their experimental condition (monthly vs. yearly). Messages were successfully sent to 14,178 participants via text. As pre-registered, we analyzed participants' likelihood of clicking on the link to the website. Consistent with our theoretical model, logistic regressions revealed a significant interaction between the two factors (budget period prompt and benefit amount description) (B = −0.07, SE = 0.02, z = −3.59, P < 0.001). Specifically, when people were encouraged to think about their monthly budgets, describing the CTC on a monthly basis outperformed the yearly control (common: 25.5% vs. control: 22.0%; B = 0.19, SE = 0.05, z = 3.52, P < 0.001). This was not the case when people were encouraged to think about their yearly budgets (common: 19.4% vs. control: 21.0%; B = −0.10, SE = 0.06, z = −1.66, P = 0.097; Fig. 2). * While there was no main effect of amount description (B = 0.02, SE = 0.02, z = 1.13, P = 0.257), there was a main effect of budget period prompt (B = −0.10, SE = 0.02, z = −5.08, P < 0.001). Prompting yearly budgets decreased CTC claiming intentions relative to prompting monthly budgets. Although this result is not central to our theorizing, it is consistent with the premise that people do not naturally think about their budgets on a yearly basis. Consequently, prompting individuals to consider an uncommon budgeting period may have led to disfluency, which, in turn, reduced their likelihood of clicking on the message link. Discussion This large-scale field investigation systematically examined the effect of describing benefit amounts across different budgeting periods on people's interest in claiming government benefits. Specifically, this work demonstrates that describing the CTC benefit amount in terms of common (vs. uncommon) budgeting periods increases interest in claiming the CTC. These findings suggest that describing benefit amounts in terms that match people's budgeting periods is a cheap and simple intervention that can be rapidly deployed to help low-income families. These text-based interventions focused on encouraging people to visit the claiming website and ended once they clicked on the message link. Thus, the primary dependent variables focused on people's likelihood of visiting the website. 
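As a concrete illustration of the primary analysis behind these click-through comparisons, the 2 × 2 design in experiment 3 can be analyzed with a binary logistic regression that includes the prompt-by-description interaction. The sketch below is a hypothetical reimplementation on synthetic data with made-up variable names; it is not the authors' analysis code or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 14178  # roughly the experiment 3 sample size

# Synthetic 2 x 2 design: budget-period prompt and benefit-amount description
df = pd.DataFrame({
    "prompt_monthly": rng.integers(0, 2, n),    # 1 = monthly prompt, 0 = yearly prompt
    "describe_monthly": rng.integers(0, 2, n),  # 1 = monthly amount, 0 = yearly amount
})

# Made-up click probabilities that build in a prompt/description matching effect
p_click = 0.21 + 0.04 * (df["prompt_monthly"] == df["describe_monthly"])
df["clicked"] = (rng.random(n) < p_click).astype(int)

# Main effects plus the interaction term that carries the matching effect
model = smf.logit("clicked ~ prompt_monthly * describe_monthly", data=df).fit()
print(model.summary())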
Future research should examine whether this initial message intervention would lead more people to receive government benefits. In addition, an intervention that includes ongoing descriptions of the benefit amounts throughout the claiming process should also be tested. The experiments in this work focused on varying income descriptions across weekly, monthly, and yearly budgeting periods. Future research should explore reactions to amounts described across other budgeting periods. For example, construal level theory would suggest that shorter time horizons would outperform longer time horizons since shorter periods are perceived as more concrete and less abstract (12). However, our theorizing would suggest that common budgeting periods like weekly or monthly would outperform uncommon budgeting periods such as daily or yearly. We encourage future researchers to explore these accounts further. In addition to investigating a wider range of budgeting periods, it would be worthwhile to examine whether the dollar amounts of the CTC payments considered alter the effects demonstrated in this work. While we did not find evidence for this moderation across the three field studies, these effects may be moderated at certain amounts. Our theorizing rests on the premise that people prefer income streams that can help them budget and plan their financial resources effectively. Thus, there may be floor or ceiling effects such that, at extremely low or high amounts, people's budgeting concerns maybe too small or too large for similar interventions to have an impact. All of the field experiments in this paper focused on measuring people's intentions to claim the CTC, a specific government benefit. This research may also be extended to analyze other types of government benefits. For example, roughly 20% of eligible individuals do not claim the Earned Income Tax Credit (EITC), one of the largest poverty alleviation programs in the United States (13). Researchers and policymakers have focused on increasing interest in claiming the EITC by raising awareness, highlighting a sense of urgency, or increasing the psychological ownership of these benefits (14,15). Given that the EITC is currently described in annual terms, our work suggests an additional path to help increase take-up. Beyond government benefits, communicating amounts in terms of common and uncommon budgeting periods might impact other important income streams. To gain insight into this possibility, we conducted a pre-registered follow-up study with 600 government benefit recipients. Participants were asked to think about $15,000 either as a salary (i.e., regular income), government benefit, or lottery winning (i.e., windfall). Participants then selected whether they would want to receive the income on a yearly basis ($15,000 on January 1) or on a monthly basis ($1,250 on the first of every month for a year). Consistent with the pilot study, the majority of participants in each condition preferred to receive their income on a monthly versus a yearly basis. However, people's payment frequency preferences varied as a function of the type of income considered. The strong preference for a monthly income stream was the same across the salary and benefits conditions (81.1% vs. 
81.3%, B = 0.01, SE = 0.26, z = 0.06, P = 0.955) but was significantly lower among those in the lottery condition (59.2%, The similarity in preferences for payment streams across the government benefits and salary conditions suggests that people may mentally account for government benefits as regular income rather than as a windfall gain. We encourage researchers to build on this work to understand how payment descriptions might impact other important financial decisions like retirement contributions and withdrawals from retirement accounts. A core insight of this work is demonstrating the impact of helping people map income streams onto their budgets. Future research should examine alternative paths to facilitate this mapping. For example, researchers could explore communicating amounts in terms of expenses people frequently budget for, like rent or groceries (6). To the extent that rent and groceries are salient budget categories for most people, communicating amounts in terms of these commonly budgeted expenses might have similar effects to communicating amounts in terms of commonly used budgeting periods. Beyond income descriptions, an alternative path would be to analyze the impact of the actual payment frequency on people's consumption. Recent research has highlighted how payment frequency impacts people's overall spending and consumption patterns (9,16). While the CTC is often described on a yearly basis, it is typically distributed on a monthly basis. As our pilot study showed, participants overwhelmingly preferred a monthly (vs. a yearly) payment frequency because they believed it would help them budget better. Thus, participants actively chose a more distributed income stream as a self-control mechanism to help them spend less and stick to their budgeting goals. Future research should explore how payment schedules impact people's adherence to their budgets. This work demonstrates that describing income in terms of common (vs. uncommon) budgeting periods increases claiming intentions. However, the optimal income description may vary depending on a communicator's goal. For example, instead of aiming to increase claiming interest in a program, a policy maker might aim to increase perceptions of the size of the program to garner broad public support. To examine this possibility, we conducted a second pre-registered follow-up study (n = 195) where participants considered whether to describe a new benefits program as giving recipients $300 a month or $3,600 a year. Participants were randomly assigned to report either which income description would increase their interest in claiming the benefit or which income description would make it seem like the government was giving away more money. Consistent with our theorizing and the results from the field experiments, 78.1% of participants responded that the monthly (vs. yearly) income description would increase their interest in claiming the benefit. In contrast, only 42.4% of participants responded that the monthly (vs. yearly) income description would make it seem as though the government was giving away more money (78.1% vs. 42.4%, B = −1.58, SE = 0.32, z = −4.94, P < 0.001) (SI Appendix). This finding suggests that policymakers should consider leveraging different messaging strategies when targeting different goals and audiences. In conclusion, three large-scale field experiments demonstrated that using common (vs. 
uncommon) budgeting periods to describe government benefit amounts increased intentions to claim these benefits by 16 to 26%. The results from this simple and nearly costless intervention suggest that policymakers and researchers should consider how people naturally manage their finances when designing interventions to improve financial well-being. Materials and Methods All of the field experiments (experiments 1 through 3) were launched in October 2021. We collected click-through rates for a period of 7 d after the messages were sent. Anonymized data and pre-registrations for all experiments are available on ResearchBox (ResearchBox 530; https://researchbox.org/530). Human Subject Protections. Before this project commenced, the field experiments were reviewed by the institutional review board (IRB) of the University of Chicago. This IRB determined that these experiments were exempt from the regulations at 45 CFR 46. All other experiments were approved by the IRB of the University of Chicago, and all subjects provided informed consent. No identifying information about experiment participants was ever shared with the researchers. Experiment 1. Code for America randomly generated a pool of 10,000 participants from an internal list of likely CTC-eligible individuals who had recently used Code for America's GetCalFresh website. These participants had previously opted to receive text messages from Code for America and noted that English was their preferred language. Eligibility was estimated using participants' household income and composition. Specifically, selected participants had annual household incomes lower than $12,000 and had at least one child in the household under the age of 6 y and no children above the age of 12 y. We focused on participants with annual incomes lower than $12,000, as these households are typically not required to file taxes. Thus, it was likely that the IRS would not have the required information to automatically send CTC payments to these individuals. This eligibility criterion was applied across experiments 1 and 2. Due to bounced text messages, a total of 8,448 individuals received a message. Specifically, participants received one of two messages that were tailored to include their names and Code for America's dollar estimates of their CTC: week." The expected CTC amount for each person was calculated based on the number and age of the children in the household. The expected CTC amounts ranged from $3,600 to $27,600 a year (95% of participants had an expected CTC amount of less than or equal to $10,800). One week after the messages were sent, we compared, by condition, participants' likelihood of visiting the website and clicking the "File your simplified return now" button on the website. We focused on these two dependent variables, as these were the two dependent variables for which Code for America had the most reliable tracking measures. For this and all other experiments, Code for America could not track final IRS filing data. Experiment 2. Code for America randomly generated a pool of 10,000 participants based on the same sampling methodology used in experiment 1. Due to bounced text messages, a total of 8,248 individuals received a message. Specifically, participants received one of two messages that were tailored to include their names and Code for America's dollar estimates of their CTC: 1) control message: "Hi [First Name], this is Gwen from GetCalFresh. 
You may have a child tax credit for up to $[amount] per year, which can be used to pay for any expenses, including childcare. Visit [Link] to claim your tax credit of up to $[amount] per year" or 2) common budgeting period message: "Hi [First Name], this is Gwen from GetCalFresh. You may have a child tax credit for up to $[amount] per month, which can be used to pay for any expenses, including childcare. Visit [Link] to claim your tax credit of up to $[amount] per month." The expected CTC amount for each person was calculated based on the number and age of the children in the household. The expected CTC amounts ranged from $3,600 to $36,600 a year (95% of participants had an expected CTC amount less than or equal to $12,600). One week after the messages were sent, we compared, by condition, participants' likelihood of visiting the website and clicking the "File your simplified return now" button on the website. Experiment 3. Code for America randomly generated a pool of 40,000 participants using a broader sampling frame than in experiments 1 and 2. In experiment 3, the sample consisted of those who preferred English and had at least one child in the household under 18 y of age, regardless of income. A total of 14,178 individuals received a message, due to a new spam filter implemented by cellphone carriers which blocked some of the messages. The messages varied two factors: 1) whether participants were encouraged to think about their budgets on a monthly or yearly basis and 2) whether the benefit amount was shown on a monthly or yearly basis. Specifically, participants received one of four messages that were tailored to include their names and Code for America's dollar estimates of their CTC: "Hi [First Name], this is Gwen from GetCalFresh. Think about your [monthly/yearly] budget. You may have a child tax credit for up to $[amount] per [month/year], which can go towards your [monthly/yearly] budget. Visit [Link] to claim your tax credit." The expected CTC amount for each person was calculated based on the number and age of the children in the household. The expected CTC amounts ranged from $3,000 to $29,400 a year (95% of participants had an expected CTC amount of less than or equal to $13,200). One week after the messages were sent, we compared, by condition, participants' likelihood of visiting the website. Due to an implementation error, 2,153 participants in this experiment were also messaged in prior experiments. These participants were randomized and counter balanced across the four conditions. Data, Materials, and Software Availability. Anonymized data and preregistrations for all experiments are available on ResearchBox (ResearchBox 530; https://researchbox.org/530) (17).
2022-09-08T06:16:37.792Z
2022-09-06T00:00:00.000
{ "year": 2022, "sha1": "92db230d7e60e413932c90a29d2dcfd718b838da", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1073/pnas.2205877119", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a6f037357b55096a24f9ba30b99122c434c55b6f", "s2fieldsofstudy": [ "Economics", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
236613012
pes2o/s2orc
v3-fos-license
Evaluating boreal summer circulation patterns of CMIP6 climate models over the Asian region Our confidence in future climate projection depends on the ability of climate models to simulate the current climate, and model performance in simulating atmospheric circulation affects its ability of simulating extreme events. In this study, the self-organizing map (SOM) method is used to evaluate the frequency, persistence, and transition characteristics of models in the Coupled Model Intercomparison Project Phase 6 (CMIP6) for different ensembles of daily 500 hPa geopotential height (Z500) in Asia, and then all ensembles are ranked according to a comprehensive ranking metric (MR). Our results show that the SOM method is a powerful tool for assessing the daily-scale circulation simulation skills in Asia, and the results will not be significantly affected by different map sizes. Positive associations between each two of the performance in frequency, persistence and transition were found, indicating that a good ensemble of simulation for one metric is good for the others. The r10i1p1f1 ensemble of CanESM5 best simulates Z500 in Asia comprehensively, and it is also the best of simulating frequency characteristics. The MR simulation of the highest 10 ensembles for the Western North Pacific Subtropical High (WNPSH) and the South Asia High (SAH) are far better than those of the lowest 10. Such differences may lead to errors in the simulation of extreme events. This study will help future studies in the choice of ensembles with better circulation simulation skills to improve the credibility of their conclusions. Introduction The general circulation of the atmosphere is a key factor affecting climate variation, whether on global or regional scales, because it drives the circulation of energy and water vapor (Maidens et al. 2021;Zhao et al. 2019a, b). With the intensification of global warming, various extreme events have occurred frequently and have had a great impact on society and the environment (Gu et al. 2016;Kong et al. 2020;Sales et al. 2018). Some extreme events, such as extreme high temperature and extreme precipitation, are seriously affected by the atmospheric circulation (Boschat et al. 2014;Fischer and Schär 2010;Gibson et al. 2017;Liu et al. 2015;Loikith et al. 2019;Lu et al. 2020;Ohba and Sugimoto 2020). The circulation at 500 hPa is of great importance because it presents a strong relationship between higher level circulation and surface variables (Gao et al. 2019;Horton et al. 2015;Mioduszewski et al. 2016). Global climate models (GCMs) are powerful tools for simulating the current climate and projecting future climate . The Sixth Coupled Model Intercomparison Project Phase 6 (CMIP6) was initiated by the World Climate Research Programme (WCRP), with the purpose of answering new scientific questions facing the field of climate change and providing data support to achieve the scientific goals established by the WCRP "Grand Challenge" 1 3 plan. The CMIP6 includes about 112 climate models from 33 institutions around the world participating in 23 subprograms. These data will support the next 5-10 years of global climate research (Eyring et al. 2016). The evaluation of CMIP6 climate models is an important requirement for further research on downscaling and projection. However, most of the current assessments of CMIP6 are for surface variables, such as temperature and precipitation (Almazroui et al. 2020). Assessments of the circulation in Asia are rare. 
There are many methods for evaluating model circulation, but their main purposes are all downscaling. The mainstream downscaling methods include principal component analysis (PCA), empirical orthogonal function (EOF) analysis, K-means clustering, and the self-organizing map (SOM) method. Previous studies have shown that the PCA and EOF methods are not accurate enough and not very intuitive when evaluating model circulation patterns . K-means clustering is an effective circulation classification method (Agel et al. 2017), but it can only represent some discrete atmospheric systems and cannot organize them into a continuum (Gao et al. 2019). However, the actual atmospheric circulation develops continuously. The SOM method solves these issues. The SOM is an unsupervised neural network algorithm that maps highdimensional input data to two dimensions. When iterating, it not only updates the winning node, but also updates its neighboring nodes according to the weight. The SOM method was first proposed by Kohonen (1998), and first applied by Hewitson and Crane (2006) in the field of climate downscaling, and has since been widely used in the field of climate research. It can organize long time sequences of atmospheric circulation into a continuum, so that not only can the characteristics of a specific circulation type be seen, but also how this type might develop, because one particular node tends to change from its neighboring nodes. This method is effective in connecting abnormal atmospheric circulation patterns and extreme high temperature and precipitation events (Agel et al. 2017;Loikith et al. 2017). Some previous studies have used the SOM method to evaluate the ability of CMIP5 models to simulate weatherscale circulation patterns (Cassano et al. 2007;Wang et al. 2015). The circulation simulation capabilities of models can be ranked by comparing the correlation coefficient or root mean square error (Mioduszewski et al. 2016) between models and reanalysis products for the frequency, persistence, and transition metrics (Gibson et al. 2016). Models with better simulation capabilities for one characteristic (such as frequency) tend to be better for other characteristics (persistence and transition; Gibson et al. 2016). These research results provide a basis for studying the causes of extreme events and scenario projections. The Western North Pacific Subtropical High (WNPSH) is the most important circulation system that affects the summer in Asia at 500 hPa. Its position, shape, and strength dominate the climate of Asia (Zhang et al. 2020;Zhao et al. 2019a, b). Monsoon and typhoon activities over the western Pacific are closely related to the WNPSH . For example, water vapor from the tropical ocean is transported to China around the western ridge of the WNPSH. The convergence of warm humidity and cold air from high latitudes means that rainfall often occurs on the northwestern edge of the WNPSH (Liu et al. 2014;Preethi et al. 2017). Besides, the ensemble performance on SAH (South Asian High) and the westerlies are also tested, which are also prominent circulations in JJA that have impacts on Asian weather in addition to WNPSH. This study has two main objectives. First, to use the SOM method to evaluate all ensembles of climate models of CMIP6 based on three metrics for frequency, persistence, and transition and to obtain rankings from the performance of these different aspects and a comprehensive ranking metric (MR). 
Second, to check whether the top-ranked ensembles also give better simulations of the prominent circulations in summer Asia. This paper is organized as follows: Sect. 2 introduces the data and methodology. Section 3 presents the rankings of CMIP6 ensembles from the performance of different metrics and checks the relationship between SOM-based ensemble rankings and the performance of prominent circulations in summer Asia. A discussion is presented in Sect. 4 and the conclusion in Sect. 5. Reanalysis data In this study, the geopotential height at 500 hPa (Z500) of the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5) is mainly used, and data from the Japanese Meteorological Agency (JMA) Reanalysis (JRA-55) is also used to test whether different reanalysis products will lead to different results (Ebita et al. 2011). ERA5 is based on the Integrated Forecasting System (IFS) Cy41r2 and has benefited from long-term development in model physics, core dynamics and data assimilation (Hersbach et al. 2020). Compared with the previous generation JMA reanalysis, a more advanced data assimilation scheme is used for JRA-55, with increased model resolution, new variational bias correction for satellite data, and several additional observational data sources (Ebita et al. 2011). we use absolute Z500 instead of Z500 anomaly because absolute Z500 has a clearer physical meaning and is more intuitive, whereas using Z500 anomaly may mix different circulation regimes (An and Zuo 2021). When the study started, ERA5 was available from 1979, and considering the ending time of most CMIP6 models in historical experiment, we choose 1979-2014 as the research period. We consider only boreal summer (June-August) and calculate the daily average for every 6 hours [0000, 0600, 1200, and 1800 coordinated Universal Time (UTC)], so the total time range is 3312 days. The analysis domain extends from 40° to 180° E, and 0° to 60° N, following the suggestion from Gao et al. (2019) that China is mainly affected by the circulation of this region. Climate model data We analyze the absolute Z500 of 140 ensembles from 23 selected climate models participating in CMIP6 (Table 1). Compared with CMIP5, CMIP6 has more institutions and models participating, and also has more sub-experiments. CMIP6 aims to provide a scientific basis for the Intergovernmental Panel on Climate Change (IPCC) 6th Assessment Report (AR6). The Historical experiment is started from the model state of the piControl experiment at a certain time but is driven by various external forcings based on observations that have changed over time since 1850 (Zhou et al. 2019), and the piControl experiment is an experiment that maintains the external forcing (e.g., greenhouse gas, solar radiation, aerosol, land use) at the level of 1850 to drive the global coupled model for long-term integration of more than 500 years. Each model of CMIP6 may have multiple members in the form of "r*i*p*f*", where "r", "i", "p" and "f" represent initial conditions, initialization methods, physical processes, and forcing conditions, respectively, and "*" will be replaced by different numbers. Methodology In this study, the SOM method is used to classify Asian Z500 characteristics. Each node classified represents a type of circulation pattern. 
The input data are daily absolute Z500, which are interpolated onto an Equal-Area Scalable Earth-type (EASE) grid at a spatial resolution of approximately 2.5° × 2.5° to ensure that the correct distance is calculated by the SOM procedure (Gibson et al. 2017). Firstly, each node initializes its parameters (i.e., its weight vector), whose dimensions are the same as those of the input data. Then, the closest node is found according to the distance function (such as Euclidean distance), and the node with the smallest distance is regarded as the "winning node". After finding the "winning node", the weight vectors of the winning node and its neighboring nodes are updated according to the gradient descent method and the neighborhood weight coefficient, respectively. The algorithm iterates until it converges or meets the termination condition set by the user. This study uses the second version of the SOM toolbox for MATLAB developed by Helsinki University of Technology (http://www.cis.hut.fi/projects/somtoolbox/). The "hexagonal" lattice and "sheet" map shape are set for the topological structure, and other settings are kept at their defaults except for random initialization (Liu et al. 2006). Gibson et al. (2017) have shown that, for classifying a larger number of different weather patterns, it is suitable to let the neighborhood radius decrease from 5 to 1 over the first 50% of iterations and keep it at 1 for the last 50%. In this study, 1000 iterations have been performed during SOM training. The neighborhood radius of the first 500 iterations decreased from 5 to 1, and that of the last 500 was kept at 1. One of the most important steps in the SOM approach is to determine the number of nodes. Too few nodes may cause one node to mix different circulation features, and too many nodes will cause different nodes to share similar features (Ford and Schoof 2017; Loikith et al. 2019; Mioduszewski et al. 2016). To determine the most suitable map size for this study, we have tested 11 experiments of different map sizes, including 2 × 3, 3 × 4, 4 × 4, 4 × 5, 5 × 5, 5 × 6, 5 × 7, 6 × 6, 6 × 7, 7 × 7, and 7 × 8 (Fig. 1). For each configuration, we have calculated the spatial correlation coefficient between the actual value of each day and the assigned node to quantitatively evaluate the difference between different map sizes. We call the SOM result obtained from ERA5 the "Master SOM" (Fig. 2), and then map each day onto the "Master SOM" for each ensemble. Mapping is accomplished by finding the node in the "Master SOM" which is closest to (i.e., has the smallest squared distance from) the daily state of the ensemble (Cassano et al. 2007).
Fig. 1 (caption): "Violin" plots showing pattern correlations under different node configurations between each daily absolute 500 hPa geopotential height field and the SOM node it was assigned to, displaying the mean (black line), median (red line), and probability distribution (gray violin).
Fig. 2 (caption): Master 20-node (4 × 5) SOM derived from ERA5 daily absolute 500 hPa geopotential height (shading and black isobars every 40 gpm) in summer (June-August) over 1979-2014. Node "a1" refers to the node in the top-left corner of the SOM plane and node "e4" to the node in the bottom-right corner.
Three metrics were used to evaluate CMIP6 from the SOM-derived nodes: the frequency, persistence and transition metrics (Gibson et al. 2016). Node frequency refers to the probability of occurrence of a node, and the frequency metric is defined by dividing the number of days the node appears by the total number of days. The 95% confidence level for the node frequency of occurrence is given by p ± 1.96 √[p(1 − p)/l], where l represents the number of days, and p represents the node frequency for a randomly derived data set. In other words, p = 1/N for an SOM with N nodes. Node frequency is considered significant if it exceeds these limits. For the master SOM with 20 nodes, the expected frequency of occurrence for each node is 0.05 with a 95% confidence interval of ±0.74%. Node persistence represents the continuity of a node. In this study, the persistence metric is expressed by dividing the number of events that lasted more than two days at this node by the number of events that lasted only one or two days. The transition metric refers to the transition probability from one node to another. Each node can transit to nodes other than itself, so the SOM with N nodes can have N(N − 1) kinds of transitions in total.
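To make these three metrics concrete, the following is a minimal Python/NumPy sketch of how a daily node-assignment series can be derived from a trained master SOM and then summarized into the frequency (with its binomial confidence bounds), persistence, and transition metrics defined above. It is an illustrative reimplementation, not the MATLAB SOM-toolbox code used in the study; the array shapes, function names, and the 1.96 normal-approximation factor behind the ±0.74% bound are assumptions, and a full implementation would also avoid counting persistence runs and transitions across the gap between successive summers.

import numpy as np

def assign_to_nodes(z500_days, node_weights):
    """Assign each daily Z500 field to the master-SOM node with the
    smallest squared Euclidean distance (best-matching unit).

    z500_days    : (n_days, n_gridpoints) array of daily absolute Z500
    node_weights : (n_nodes, n_gridpoints) array of master-SOM node patterns
    returns      : (n_days,) array of node indices
    """
    d2 = ((z500_days[:, None, :] - node_weights[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def frequency_metric(nodes, n_nodes, conf_z=1.96):
    """Node occurrence frequencies plus the 95% bounds around the
    frequency expected by chance (p = 1/N)."""
    n_days = nodes.size
    freq = np.bincount(nodes, minlength=n_nodes) / n_days
    p = 1.0 / n_nodes
    half_width = conf_z * np.sqrt(p * (1.0 - p) / n_days)
    return freq, p - half_width, p + half_width

def persistence_metric(nodes, n_nodes):
    """Ratio of events lasting more than two days to events lasting one
    or two days, per node (an 'event' is a run of consecutive days
    assigned to the same node)."""
    long_ev = np.zeros(n_nodes)
    short_ev = np.zeros(n_nodes)
    run_node, run_len = nodes[0], 1
    for n in nodes[1:]:
        if n == run_node:
            run_len += 1
        else:
            (long_ev if run_len > 2 else short_ev)[run_node] += 1
            run_node, run_len = n, 1
    (long_ev if run_len > 2 else short_ev)[run_node] += 1
    return long_ev / np.maximum(short_ev, 1)  # avoid division by zero

def transition_metric(nodes, n_nodes):
    """Probability of moving from node i to node j (i != j),
    conditioned on leaving node i."""
    counts = np.zeros((n_nodes, n_nodes))
    for a, b in zip(nodes[:-1], nodes[1:]):
        if a != b:
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)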
The performance of different ensembles on different metrics is described by calculating the Pearson correlation coefficient between different ensembles and ERA5. For example, the frequency performance of a particular ensemble is represented by the correlation coefficient of the frequency metrics between this ensemble and ERA5 over all nodes. The higher the correlation coefficient, the better the frequency performance of the ensemble, and the higher the frequency ranking. After ranking according to the different metrics, the comprehensive rankings can be obtained according to the ranking metric (MR), which is defined as MR = 1 − (1/(n × m)) Σ rank_i, where the sum runs over the n metrics, rank_i is the rank of the ensemble for the i-th metric, m is the number of ensembles, and n is the number of metrics. The closer the MR value is to 1, the higher the comprehensive performance of the ensemble (Li et al. 2015). In order to describe the WNPSH, we calculated the western ridge point index, the northern boundary position index, the area index and the intensity index of ERA5 and each ensemble. The western ridge point index refers to the longitude of the westernmost position of the 5880-gpm isobar at the height of 500 hPa in the domain 10°-60° N, 90°-180° E. The northern boundary position index refers to the average latitude of the 5880-gpm isobar on the north side of the subtropical high along each meridian at the height of 500 hPa in the same domain. The area index refers to the sum of the areas of all grid points with values greater than 5880 gpm at 500 hPa in the domain north of 10° N, 110°-180° E. The intensity index refers to the sum, over all grid points with values greater than 5880 gpm, of the difference between the grid-point value and 5870 gpm (Liu et al. 2019). The SAH intensity index and the westerlies index are calculated to describe the SAH and the westerlies. The SAH intensity index is calculated by summing, over all grid points at the 100-hPa level with values greater than 1660 gpm, the difference between the grid-point value and the 1660-gpm value. The westerlies index is obtained by calculating the difference in geopotential height between 40° N and 60° N at the 250-hPa level (Gong and Wang 2002). Determination of SOM map size In order to determine the optimal number of nodes, we have tested 11 sets of SOM configurations with different map sizes, and have calculated the spatial correlation coefficient between the actual Z500 and the assigned nodes, as shown in Fig. 1. As the number of nodes increases, the spatial correlation coefficient also increases.
However, a larger number of nodes may produce nodes that share similar circulation characteristics and a smaller number of nodes may confuse different circulation characteristics. According to most researches about Asia, the number of SOM nodes ranges from 12 to 20 (Gao et al. 2019;Li et al. 2020b;Liu et al. 2015;Wang et al. 2015). If we search to minimize the quantization error within groups and to maximize the topographic error between groups, 20 (4 × 5) nodes are optimal (Li et al. 2020b;Yin et al. 2010). Figure 2 shows the SOM map obtained using the daily absolute Z500 from ERA5 with the aforementioned SOM settings. In this "Master SOM", node "a1" refers to the node in the top-left corner of the SOM plane and node "e4" refers to the node in the bottom-right corner. The orange shading in nodes a3, a4, b3, b4, c3, c4, d3, d4, e2, e3, and e4 to the east highlights the area with geopotential height exceeding 5880 gpm, where the 5880-gpm isobar is considered the boundary of the WNPSH (Liu et al. 2019). The orange shading of nodes d2, e1, a3, a4, b3, b4, c3, c4, d3, d4, e2, e3, and e4 to the west indicates the North Africa High (NAH), which often changes with the movement of the WNPSH (La et al. 2002). According to the "Master SOM", neighboring nodes share similar circulation characteristics, and the difference along the diagonal between node a1 and node e4 is the largest. The circulation pattern shown at node a1 is characterized by the East Asian Trough (EAT) near the Kamchatka Peninsula in middle and high latitudes and the subtropical high represented by the 5840-gpm isobar at middle and low latitudes. Cold advection from the west of the EAT transports cool air from the north to Asia, alleviating the high temperature in summer. The circulation pattern shown at node e4 is more conducive to generating high temperature. The WNPSH extends westward to southern China, and its intensity reaches 5880-gpm, whereas the westerly jets in the middle to high latitudes are stable and block the cold air from northern latitudes from moving south, leading to the development of high temperatures in Asia. If this type of circulation pattern lasts for a long time, coupled with unsupported regional soil moisture conditions, persistent high temperatures will occur (Boschat et al. 2014;Ding and Qian 2011). From node a1 to e4, the EAT gradually weakens, but the WNPSH continues to expand westward and southward, which gradually has a greater impact on Asian weather. The NAH also gradually intensifies from node a1 to e4, but its position does not change much. In addition, the existence of the strong WNPSH of node e4 is conducive to the transport inland of water vapor from the Pacific Ocean, which favors heavy rainfall in southern China, Japan and South Korea. "Master SOM" The node frequency of the "Master SOM" shows the frequency of occurrence of each circulation pattern of ERA5 in all 3312 days (Fig. 3a); node frequencies that are statistically (at the 95% level) above or below those expected by chance are shown in red. The frequencies of nodes a1 and e4 are the highest, both exceeding 10%. The frequencies of nodes a2, b2, b3, d2, and e3 are the lowest, only ~ 2%, suggesting these are transition circulation patterns that appear less frequently. It shows that the SOM method is effective in identifying weather patterns that occur less frequently. The frequencies of nodes a3, a4, b1, b4, c1, d4, and e1 are close to the random probability of 5% and are not statistically significant at the 95% level. 
Fig. 3 (caption): Contour plots of node frequencies corresponding to (a) the Master SOM from ERA5, (b) the highest ranked ensemble for frequency performance, and (c) the lowest ranked ensemble for frequency performance. The highest/lowest ranked ensembles for frequency performance are defined here based on the correlation coefficients of frequency metrics between ensembles and ERA5. Node frequencies statistically (at p < 0.05) above or below those expected by chance are shown in red. Node locations correspond to those in Fig. 2. The correlation coefficient between ERA5 and the highest ranked ensemble for frequency performance is 0.73, while that between ERA5 and the lowest ranked ensemble is 0.21.
The node persistence represents the duration of each node, and the solid black line in Fig. 4 shows the continuity of each node in the "Master SOM". The persistence metrics of most nodes are greater than 1 except nodes a2, b2, b3, d2, and e3, suggesting that, for most nodes, the number of events lasting more than two days is greater than the number of events lasting two days or less. Nodes a2, b2, b3, d2, and e3 have more events with a duration of two days or less. These nodes also have the lowest frequencies (~ 2%; Fig. 3a), and on average appear once or twice in summer each year. Nodes a1 and e4 are the most persistent, and the frequencies of these two nodes are the highest (Fig. 3a), indicating that the circulation patterns corresponding to these two nodes are the two dominant circulation patterns in summer from 1979 to 2014.
Fig. 4 (caption): Line graphs of the persistence metric corresponding to ERA5 (black line), the highest ranked ensemble for persistence performance (red line), and the lowest ranked ensemble for persistence performance (blue line). The highest/lowest ranked ensembles for persistence performance are defined here based on the correlation coefficients of persistence metrics between ensembles and ERA5. Node locations correspond to those in Fig. 2. The correlation coefficient between ERA5 and the highest ranked ensemble for persistence performance is 0.97, while that between ERA5 and the lowest ranked ensemble is 0.39.
Node transition is represented by the probability of a node transitioning to another. Figure 5a shows the transition probability of each node in the "Master SOM". Generally speaking, a node tends to transfer to its neighboring nodes. This is why the high probability values are concentrated near the diagonal from a1 to e4. The transition probabilities of the different nodes and the circulation pattern corresponding to each node in the "Master SOM" (Fig. 2) can be combined to analyze the physical explanation of the changes between different circulation patterns. Node e4 has the highest probability of transferring to nodes d4, e2, and e3, with a total probability of 0.86, which shows that the position of the WNPSH gradually retreats eastward and northward at low and middle latitudes. However, the probability of node e4 transferring to other nodes is almost zero. The transition probability of node d4 to nodes c4 and e3 is higher, at 0.31 and 0.25, respectively. This means that if the WNPSH is the same shape as in node d4, it is easier for it to shrink (to nodes c4 and e3), but almost never to expand (to node e4). Interestingly, the transition probability of node e4 to d4 is much higher than that of d4 to e4.
This shows that, although a system is not static and may change in intensity, the probabilities of changing in opposite directions are not symmetric, at least for the most important system (the WNPSH) in this study region. Among the transition metrics of all nodes, that from node b4 to a4 is the largest, with a value of 0.47, followed by the transition metric from node a4 to a3, at 0.43. The transition metrics of node b3 to other nodes (nodes a2, a3, a4, b2, b4) are similar but not high (~ 0.1 in each case). Ensemble ranking of different metrics We have calculated the correlation coefficients between all ensembles shown in Table 1 and ERA5 for the frequency, persistence, and transition metrics to evaluate the performance skills of each ensemble on these three different metrics, and three sets of rankings for all ensembles have been obtained (Table 2). Positive associations between any two of frequency, persistence and transition performance have been found, indicating that good simulation skill for one kind of characteristic is related to skills in others (Fig. 6). There are, however, a few exceptions. For example, the persistence performance of the r23i1p1f1 ensemble of IPSL-CM6A-LR is good, with a correlation coefficient of 0.97 with ERA5 and a ranking of 2 for the persistence metric (Table 2). However, the transition performance of this ensemble is relatively weaker, with a correlation coefficient of 0.40 with ERA5 and a ranking of 121 for the transition metric (Table 2). Figure 6 shows that the relationship between frequency performance and transition performance is the strongest, with a correlation coefficient of 0.57, while the correlation coefficient between persistence and transition performance is 0.50, and 0.45 between frequency and persistence performance. All of them passed the 99% significance test. As a consequence, we have calculated the MR metric considering all three different metrics, and the final ensemble rankings are based on the MR metric. Frequency performances for the highest and lowest ranked ensembles are given in Fig. 3b, c. The r10i1p1f1 ensemble of CanESM5 is the best ensemble for frequency performance. The frequencies of diagonal nodes a1 and e4 are very high, while some transitional nodes in the middle, such as nodes b3, c3, d3, and e3, have very low frequencies. The correlation coefficient between this ensemble and ERA5 reaches 0.73. However, this ensemble also obviously overestimates the frequency of some nodes, such as a3, a4, e1, and e2, and underestimates others, such as b1 and b2. What is interesting is that nodes with a more westward and southward WNPSH appear more frequently in the ensemble that has the highest frequency performance. The r1i1p1f1 ensemble of INM-CM4-8 has the lowest frequency performance, and the correlation coefficient between this ensemble and ERA5 is only 0.21. From Fig. 3c, almost all of the daily Z500 fields of this ensemble are assigned to nodes a1, b1, c1, d1, and e1, indicating that this ensemble has a systematic negative bias of Z500, which causes the Z500 of this ensemble to be allocated to those nodes with weaker Z500 circulation. Figure 4 shows the highest and lowest ranked ensembles for persistence performance. The r1i1p1f1 ensemble of NorESM2-MM is the highest performing ensemble for persistence simulation, with a correlation coefficient of 0.97 with ERA5. The evolution of the persistence metric across nodes in this ensemble is almost exactly the same as that of ERA5, but the persistence metrics of all nodes except d4 are lower than those of ERA5. This shows that in the r1i1p1f1 ensemble of NorESM2-MM, there are fewer events with a duration of more than two days, and more events with a duration of two days or less, for each node.
Fig. 5 (caption): Transition probability plots (grayscale indicates probability) for (a) ERA5, (b) the highest ranked ensemble for transition performance, and (c) the lowest ranked ensemble for transition performance. The highest/lowest ranked ensembles for transition performance are defined here based on the correlation coefficients of transition metrics between ensembles and ERA5. Node locations correspond to those in Fig. 2. The correlation coefficient between ERA5 and the highest ranked ensemble for transition performance is 0.86, while that between ERA5 and the lowest ranked ensemble is 0.37.
The above results show that the highest performing ensembles for node frequency, persistence and transition in CMIP6 are the r10i1p1f1 ensemble of CanESM5, the r1i1p1f1 ensemble of NorESM2-MM and the r8i1p1f1 ensemble of MPI-ESM1-2-LR, respectively. The lowest performing ensemble for frequency and persistence is the r1i1p1f1 ensemble of INM-CM4-8, and the lowest performing ensemble for transition is the r6i1p1f1 ensemble of IPSL-CM6A-LR. To better describe the ability of CMIP6 to simulate Z500 in the Asian region, the MR metric is used hereafter. According to the MR metric (Table 2), the top-ranked ensemble is the r10i1p1f1 ensemble of CanESM5, which is also the top-ranked ensemble for frequency performance, and the second-ranked ensemble is the r1i1p1f1 ensemble of NorESM2-MM, which is also the top-ranked ensemble for persistence performance. The lowest ranked ensemble is the r1i1p1f1 ensemble of INM-CM5-0. The r1i1p1f1 ensemble of INM-CM4-8 is ranked last for frequency and persistence performance, but due to its relatively high ranking for transition performance, it is ranked second last rather than last by the MR metric. Ensemble ranking and prominent circulations in Asia The WNPSH, SAH and westerlies are key circulation systems that control the summer monsoon and typhoon activities, and are important indicators of summer weather in Asian countries. In order to test whether the top-ranked ensembles can better represent the actual climate in Asia, we calculated the daily WNPSH indexes, the SAH intensity index and the westerlies index for ERA5 and all ensembles shown in Table 1. The probability density functions (pdf) of the highest 10 and lowest 10 ensembles are then calculated and compared with ERA5. Figure 7a, b shows the pdf distributions, after fitting to a normal distribution, of the western ridge point index and the northern boundary position index of ERA5 and the highest and lowest 10 ensembles, respectively. The average western ridge point index of ERA5 is around 130° E, and that of the highest 10 ensembles is around 125° E, which is about 5° west of ERA5.
The pdf distribution of the lowest 10 ensembles shows that the western ridge point index is around 160° E on average, which is about 30° east of that of ERA5 and even 10° east of the average of the CMIP5 simulations (Zhao et al. 2019a, b). This may be because only the CMIP5 r1i1p1f1 ensembles were evaluated there, whereas here we evaluated all ensembles. The average northern boundary position index of ERA5 is around 31° N, and the average of the highest 10 ensembles is around 29° N, about 2° to the south. The average northern boundary position index of the lowest 10 ensembles is around 40° N, which shows that the lowest 10 ensembles shift the northern boundary of the WNPSH northward by about 10°. The results are similar whether choosing the highest/lowest 5 or 20 ensembles (Supplementary Fig. 1). As extreme events like heatwaves, droughts, and typhoons are greatly affected by the location of the WNPSH (Choi and Kim 2019; Zhao et al. 2019a, b), such differences will inevitably lead to errors in the simulation of extreme events. Figure 7c, d shows the pdf distributions of the WNPSH area index and intensity index of ERA5 and the highest 10 and lowest 10 ensembles after fitting to a gamma distribution. It can be seen that the pdf of the highest 10 ensembles is very close to that of ERA5, but the simulated area and intensity are both larger. For the lower-ranked ensembles, most of the ensembles simulate a very small and weak WNPSH, and the indexes are almost near 0, which means that the lowest ensembles can hardly simulate the real area and strength of the WNPSH.
Fig. 7 (caption, fragment): ..., area index (c) and intensity index (d) of ERA5 (black) and the highest 10 (red) and lowest 10 ensembles (blue) after fitting to normal (a, b) and gamma (c, d) distributions. The black and red curves correspond to the left axis, while the blue curve corresponds to the right axis (c, d).
Fig. 8 (caption): Probability density function distributions of the SAH intensity index (a) and the westerlies index (b) of ERA5 (black) and the highest 10 (red) and lowest 10 ensembles (blue) after fitting to gamma (a) and normal (b) distributions.
Figure 8 shows the pdf distributions of the SAH intensity index and the westerlies index of ERA5 and the highest 10 and lowest 10 ensembles after fitting to gamma (Fig. 8a) and normal (Fig. 8b) distributions. The results show that the ensembles which simulate the WNPSH well also have better simulation capabilities for the SAH, and vice versa. The eastward shift of the SAH influences the short-term variation of the WNPSH at the lower level through dynamical and thermodynamical mechanisms. First, the descending flow forced by the negative vorticity advection at the upper level contributes to the vorticity development of the WNPSH. Second, the adiabatic warming caused by the descent is also conducive to the maintenance of the warm WNPSH at the lower level (Ren et al. 2007). The westerly jet is strongest near 200 hPa, but considering the available model output data, we have calculated the westerlies index at 250 hPa. The simulation is very close to that of ERA5 for both the highest and lowest 10 ensembles. Discussion In order to test whether different reanalysis data give different results, we generate another "Master SOM" (Supplementary Fig. 2) using JRA-55 reanalysis data (Ebita et al. 2011; Kobayashi and Iwasaki 2016), and then apply the same procedure to rank all ensembles listed in Table 1. The rankings (Table omitted) are similar to those shown in Table 2, which is obtained from ERA5 reanalysis data.
For example, the 25 ensembles of CanESM5 are still ranked high, and the 32 ensembles of IPS-CM6A-LR are still ranked low, although there are slight differences in the specific ranking numbers. Therefore, we consider that the SOM results obtained from these two different reanalysis datasets are consistent. The simulation capability of a model is affected by many factors, such as dynamic core, parametric scheme, resolution and so on. Our research results show that the model's ability of simulating Z500 does not seem to have much to do with resolution. The r10i1p1f1 ensemble of CanESM5, which is the top one ranked ensemble for the MR metric, has a relatively coarse spatial resolution of 500 km, while the r1i1p1f1 ensemble of NorESM2-MM, the second ranking ensemble, has a relatively fine horizontal resolution of 100 km. The lower-ranking ensembles are mainly from the IPSL-CM6A-LR and INM-CM5-0 models, with resolutions of 250 and 100 km, respectively. However, we have only four different resolutions here, 500, 250, 100, and 50 km, and there is only one ensemble with 50-km resolution, so we cannot draw a significant conclusion. Subsequent work needs to add more samples with different resolutions to analyze the relationship between circulation simulation skills and model resolution. In addition, the influence of different dynamic cores and parameterization schemes on the circulation simulation is worth further analysis. But it is beyond doubt that the simulation skills for Z500 of CMIP6 is improved compared with that of CMIP5. Zhao et al. (2019a, b) evaluated the ability of CMIP5 to simulate the atmospheric circulation of East Asia and showed that there's a bias of about 20 hPa between CMIP5 ensemble mean and reanalysis of the WNPSH climatology. However, the bias is reduced to 10 hPa and the northern boundary is much closer to that of the reanalysis (figures omitted). Extreme events have seriously affected the whole world in recent years (Kong et al. 2020;Lee et al. 2020;Li et al. 2020a;Lu et al. 2020;Meehl and Tebaldi 2004). Many studies have pointed out the close connections between circulation and extreme weather events (Boschat et al. 2014;Ding and Qian 2011;Horton et al. 2015;Li and Ruan 2018;Liu et al. 2015;Pezza et al. 2011;Raymond et al. 2017). As mentioned above, the circulation pattern of node e4 favors the occurrence of extreme high temperature and heavy rainfall events in summer. Also, node e4 has a long duration (Fig. 4), which will be more likely to lead to long lasting extreme high temperature events and even compound extreme events (Faranda et al. 2020). Our next work will focus on the extent to which extreme events in Asia, such as extreme high temperatures and heavy precipitation, are affected by the circulation, and how the circulation affects extreme events. If these issues can be studied thoroughly, it will help improve the accuracy of projections of extreme events and reduce the economic losses and casualties in Asia. Conclusions The Asian summer climate is significantly affected by the large-scale atmospheric circulation. The ability of models to simulate circulation characteristics is one of the most important factors affecting the future progress of Asian regional climate research. This paper uses the SOM method to evaluate the ability of CMIP6 in simulating Z500 in Asian region. Our results show that the r10i1p1f1 ensemble of CanESM5 is the best ensemble for frequency performance, and it is also the top ensemble for comprehensive performance, measured as the MR metric. 
The r1i1p1f1 ensemble of NorESM2-MM is the best ensemble for persistence performance, and the r8i1p1f1 ensemble of MPI-ESM1-2-LR is the top ensemble for transition performance. The r1i1p1f1 ensemble of INM-CM4-8 is the lowest ensemble for frequency and persistence performance. The r6i1p1f1 ensemble of IPSL-CM6A-LR has the lowest transition ranking. The r1i1p1f1 ensemble of INM-CM5-0 is the ensemble with the lowest of MR metric. Generally speaking, the rankings of different ensembles from the same insititution are relatively close. For example, the MR rankings of the 25 ensembles of CanESM5 are relatively high, whereas the MR rankings of the 32 ensembles of IPSL-CM6A-LR are relatively low. In addition, pairwise correlation coefficients between the frequency performance, persistence performance and transition performance are all around 0.5 and significant, which means that good simulation skill for one type of circulation characteristic (like frequency) is related to skills in simulating other types of circulation characteristics (persistence and transition). Judging from the pdf distributions of different circulation indexes, the top-ranked ensembles indeed simulate the WNPSH and the SAH better, indicating that the rankings based on the SOM method are credible. The evaluation of the circulation indexes supports the rankings based on the SOM method.
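As a compact illustration of how the comprehensive ranking summarized above is assembled, the sketch below turns per-metric correlation scores into per-metric ranks and then into the MR value defined in the Methodology. It is an illustrative Python/NumPy sketch with made-up example numbers, not the values of Table 1 or Table 2, and ties between ensembles are ignored.

import numpy as np

def mr_ranking(correlations):
    """Comprehensive MR metric from per-metric correlation scores.

    correlations : (m_ensembles, n_metrics) array of Pearson correlations
                   with the reanalysis (frequency, persistence, transition).
    returns      : (m_ensembles,) array of MR values; larger is better.
    """
    m, n = correlations.shape
    # Rank 1 = best (highest correlation) for each metric; ties ignored
    order = (-correlations).argsort(axis=0)
    ranks = np.empty_like(order)
    for j in range(n):
        ranks[order[:, j], j] = np.arange(1, m + 1)
    # MR = 1 - (1 / (n * m)) * sum of the ensemble's ranks over all metrics
    return 1.0 - ranks.sum(axis=1) / (n * m)

# Three made-up ensembles (rows) and their frequency/persistence/transition correlations
corr = np.array([[0.73, 0.95, 0.80],
                 [0.50, 0.97, 0.60],
                 [0.21, 0.39, 0.37]])
print(mr_ranking(corr))  # the first ensemble obtains the largest MR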
2021-08-02T00:06:40.441Z
2021-04-30T00:00:00.000
{ "year": 2021, "sha1": "5ae7700d2a8cabbd23eedc0b0f8ef2d9848887a1", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-447304/latest.pdf", "oa_status": "GREEN", "pdf_src": "Springer", "pdf_hash": "de1fc016b27409704a167e68a6eeb8bacb2af7e6", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
221355050
pes2o/s2orc
v3-fos-license
Viruses harness YxxØ motif to interact with host AP2M1 for replication: A vulnerable broad-spectrum antiviral target Host AP2M1 protein is exploited by diverse pathogenic viruses via their YxxØ protein motif during viral replication cycle. Viruses live briefly but perpetually. They invade cells and manipulate host machinery to replicate, transmit, and cause disease. Host signal transduction in response to virus invasion activates transcription factors that determine the gene expression characteristics and signaling mechanisms of the cell fate. These signaling mechanisms have been well studied in the fields of embryonic development and cancer biology but less well studied in the context of virus infection (5). A small set of evolutionarily conserved signaling pathways is associated with the cell fates of apoptosis, differentiation, or virus elimination during virus-host interactions. They include the transforming growth factor- (TGF-)/SMAD, Wnt/-catenin, Notch, and phosphatidylinositol 3-kinase/thymoma viral proto-oncogene (PI3K/AKT) signaling pathways. Here, we show that TGF- signaling is commonly altered by vastly different viruses. Modulating the TGF- pathway is primarily mediated by mislocalization of TGF- cytokines, its receptors TGF-R, and SMAD transcriptional factors, which are largely executed through membrane and intracellular trafficking pathways (6). Therefore, we hypothesize that one approach to determine the cell fate, to die or to support virus replication, is through virusmanipulated membrane and intracellular trafficking. In this study, we performed integrative chemical and genetic screens that identified host AP2M1 protein as a critical player affecting TGF- signaling and facilitating intracellular trafficking of different viruses. AP2M1 is the mu (2) subunit of AP2 adaptor complex, which functions as the major heterotetramer (, 2, 2, and 2 subunits) that orchestrate clathrin-mediated endocytosis (CME) (7). The direct binding between TGF-R and AP2 has been demonstrated in vitro and in vivo (8). Functionally, AP2M1 recognizes the YxxØ sorting motifs present in the cytosolic tail of different cargo proteins, whereas x refers to any amino acid and Ø indicates hydrophobic residues including L/M/F/I/V (9). Here, we identify a previously unrecognized role of AP2M1, which is an intracellular cargo molecule that cotraffics with different internal viral proteins for proper subcellular localizations, in addition to its role in endocytosis. The cotrafficking is mediated through protein-protein interaction (PPI) between host AP2M1 and specific viral proteins harboring a YxxØ motif. Our initial pharmacological screening identifies a tool compound, N-(p-amylcinnamoyl)anthranilic acid (ACA), that disrupts AP2M1/YxxØ interaction without affecting 1 the AP2M1 phosphorylation. ACA exhibits broad-spectrum antiviral efficacy in cell cultures and mouse models. Substitutions made in the influenza A nucleoprotein YxxØ motifs affect viral fitness in vitro and in vitro, indicating a critical role of AP2M1/YxxØ interaction during virus life cycle. Our study reveals AP2M1/YxxØmediated intracellular trafficking of diverse virus families, which represents a previously unidentified intervention target for a broad spectrum of emerging viral diseases. 
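Because the YxxØ signal described above has a fixed form (a tyrosine, any two residues, then a bulky hydrophobic residue L/M/F/I/V), candidate motifs can be flagged with a simple pattern scan. The following is an illustrative Python sketch on a made-up toy sequence; it is not the authors' analysis, and a real search would further require the motif to lie in a cytosol-exposed region of the viral protein.

import re

def find_yxxo_motifs(protein_seq):
    """Return (position, motif) pairs for every candidate YxxO sorting signal.

    The lookahead makes overlapping matches visible; positions are 1-based.
    The O (hydrophobic) position is restricted to L, M, F, I or V.
    """
    hits = []
    for m in re.finditer(r"(?=(Y..[LMFIV]))", protein_seq.upper()):
        hits.append((m.start() + 1, m.group(1)))
    return hits

# Toy sequence (made up, not a real viral protein)
print(find_yxxo_motifs("MKTAYQDLVNYSGLEEK"))  # -> [(5, 'YQDL'), (11, 'YSGL')]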
TGF- signaling is commonly altered upon virus infection To determine the virus-induced signaling associated with cell fate pathways, we examined whether virus infection could potentiate host cell signaling by TGF-, Wnt, Notch, and PI3K/Akt pathways. Reporter gene assays with high multiplicity of infection (MOI) and time-course monitoring of luciferase activity were performed in gene-transfected human primary cells including the influenza A pdmH1N1-infected human primary bronchial/tracheal epithelial cells (HBTECs), ZIKVinfected human primary fibroblasts HFL1 and EV-A71-infected human neural progenitor cells. TGF- signaling exhibited distinct patterns of marked changes when compared with the marginally changed Wnt, Notch, and PI3K/Akt pathways (Fig. 1A). Infection by either pdmH1N1 or ZIKV triggered TGF- activation, whereas EV-A71 infection suppressed its signaling, indicating that TGF- signaling is commonly exploited in the cell fate determination pathway by different RNA viruses. TGF--modulating membrane and intracellular trafficking is vulnerable to antiviral intervention Proper control of TGF-R via membrane and intracellular trafficking is documented to mediate TGF- signaling (10). To explore the dependence of viral fitness on these host trafficking genes, we performed a loss-of-function screen to determine the degree of pdmH1N1 replication after individual gene silencing. Among the 142 cellular trafficking genes, knockdown of 17 genes reduced virus titers by more than one-log 10 unit ( Fig. 1B and fig. S1). Of the 17 influenza A virus (IAV) screen hits, 4 had been previously implicated in HIV-1 assembly, release, and budding: Adaptor related protein complex 1 subunit mu 1 (AP1M1), Caveolin 1 (CAV1), Ras-related protein Rab-5A (RAB5A), and Rho-associated protein kinase 1 (RCOK1) (11); depletion of COPA or COPB2 profoundly restricted human cytomegalovirus replication (12); COPG and AP2M1 were demonstrated to be important for IAV production in a genome-wide small interfering RNA (siRNA) screening (13). The results suggested that the selected membrane trafficking genes may serve as proviral factors with broad relevance to a number of virus infections. In view of the vulnerability of intracellular trafficking within virus life cycle, we screened a small-molecule compound library of trafficking inhibitors for finding drug hits with promising antiviral potency, which may also serve as a tool compound enabling the identification and characterization of putative targets within intracellular trafficking pathways. The library consists of 420 inhibitors targeting membrane transporters such as P-glycoprotein, Exportin 1, and ion channels including cystic fibrosis transmembrane conductance regulator, proton pump, calcium pump, etc. Multiple rounds of selection were performed using cell protection (supplementary data sheet) and viral load reduction assays, which identified 20 hits that suppressed virus replication for >3 logs at 10 M and another 5 candidates achieving >2 logs inhibition at 1 M (table S1). To prioritize these five compounds, we evaluated their antiviral efficacy against other emerging viruses and identified ACA as the only inhibitor that exhibited a broad-spectrum antiviral effect against influenza A H1N1, ZIKV, HIV-1, SARS-CoV-2, EV-A71, human adenovirus 5 (AdV5), and severe fever with thrombocytopenia syndrome virus (SFTSV) (Fig. 1C and fig. S2A). 
The tool compound ACA is a broad-spectrum antiviral in vitro, ex vivo, and in vivo The 50% cytotoxic concentration of ACA ranged from 20 to 120 μM in different cell lines, while its half-maximal effective concentration (EC50) was at or below micromolar levels (fig. S2A). ACA (10 μM) potently suppressed SARS-CoV-2 replication by >2 logs in both supernatants and cell lysates in Caco2 cells (EC50 = 0.59 μM), indicating good therapeutic potential for the current COVID-19 pandemic (fig. S2B). Because ACA displayed the highest selectivity index of 219 against pdmH1N1 infection, ACA was tested against different IAV subtypes in the subsequent antiviral evaluations. Flow cytometry showed that the percentage of pdmH1N1-infected Madin-Darby canine kidney (MDCK) cells after 10 μM ACA treatment decreased by 83.2% at 24 hours post-infection (hpi) (Fig. 2A). ACA exhibited cross-protection against H5N1, H7N7, H7N9, and H9N2 in a dose-dependent manner (Fig. 2B). Notably, ACA treatment reduced supernatant viral titer by >4 logs in HBTECs (Fig. 2C). Using our previously established proximal differentiated three-dimensional (3D) human airway organoids (AOs) for predicting the infectivity of influenza viruses in humans (14), we confirmed that ACA reduced virus replication by >4 logs (Fig. 2D), with markedly decreased expression of viral nucleoprotein (NP) antigen (Fig. 2E). Collectively, ACA robustly inhibited IAV replication in vitro and ex vivo. To determine ACA's toxicity, we intraperitoneally inoculated the maximal phosphate-buffered saline (PBS)-soluble dose of ACA (6.75 mg/kg) into BALB/c mice for 10 days. No body weight loss or decreased activity was observed during consecutive monitoring periods of 14 and 28 days, respectively (fig. S2C). To evaluate the in vivo antiviral protection of ACA, we challenged mice with 1000 plaque-forming units (PFU) of mouse-adapted H1N1 virus. All mice given a single intranasal (i.n.) dose of ACA (0.2 mg/kg) survived (n = 8), whereas all dimethyl sulfoxide (DMSO)-treated and zanamivir-treated (2 mg/kg) mice died (Fig. 2F). Using the same experimental regimen, ACA conferred substantially better survival against avian IAV H7N9 (100% versus 0%; Fig. 2F), notably less body weight loss (Fig. 2G), and undetectable lung tissue virus titers at days 2 and 4 after challenge (Fig. 2H). At 4 days post-infection (dpi), histopathologic examination showed substantially less pulmonary alveolar damage and interstitial inflammatory infiltration in ACA-treated mice (Fig. 2I). Together, ACA effectively protected mice challenged by two IAV subtypes by reducing virus replication and pneumonia. The broad-spectrum antiviral activity of ACA in cell cultures warrants further evaluation in other virus disease models (Fig. 3). Type I interferon receptor-deficient A129 mice were infected with ZIKV-PR (a strain of Zika virus originally isolated from a traveler to Puerto Rico) and treated with either ACA (1 mg/kg) or DMSO by subcutaneous administration. Mice receiving one dose of ACA therapy showed a remarkably better survival rate (100% versus 0%) and mean body weight (Fig. 3, A and B). Moreover, ZIKV titer was undetectable in the brains of the ACA-treated group, whereas that of the DMSO group was generally 4 logs higher (Fig. 3C). Fewer histopathologic changes of meningoencephalitis and less ZIKV-NS1 antigen expression were observed (Fig. 3D).
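Dose-response metrics such as the EC50 and the selectivity index (CC50/EC50) quoted in this section are typically obtained by fitting a sigmoidal curve to viral-load or viability data. The sketch below fits a four-parameter logistic model with SciPy; the concentrations, inhibition values, and CC50 are invented for illustration and are not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve (% inhibition rising with concentration)."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Hypothetical % viral-load reduction at each concentration (uM); illustrative only
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
inhibition = np.array([2.0, 8.0, 22.0, 48.0, 75.0, 92.0, 97.0])

(bottom, top, ec50, hill), _ = curve_fit(four_pl, conc, inhibition, p0=[0, 100, 0.3, 1.0])
cc50 = 120.0  # hypothetical 50% cytotoxic concentration (uM)
print(f"EC50 ~ {ec50:.2f} uM; selectivity index CC50/EC50 ~ {cc50 / ec50:.0f}")
```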
Furthermore, intranasal ACA (0.2 mg/kg, i.n.) provided good protection against lethal challenge with 500 PFU of MERS-CoV in human dipeptidyl peptidase 4 (hDPP4)-transgenic mice, whereas all DMSO-treated mice died on or before day 8 after challenge (100% versus 0%; Fig. 3E). Body weight loss in the DMSO-treated mice began at 5 dpi, while that of the ACA-treated group continued to increase (Fig. 3F). An approximately 2-log lower lung tissue virus titer was detected in the ACA group (Fig. 3G). Inflammatory infiltration and MERS-CoV-NP antigen expression in the lung tissues were substantially reduced after ACA treatment (Fig. 3H). Thus, ACA also exhibited broad-spectrum antiviral efficacy in vivo. (Fig. 1 legend) Small-molecule compound library screening identified ACA as a broad-spectrum antiviral in vitro. A membrane transporter/cellular trafficking library was primarily screened in pdmH1N1-infected Madin-Darby canine kidney (MDCK) cells (MOI of 0.01 and 48 hpi) through cell protection assays (box 1, blue dots indicate >90% cell viability), followed by secondary screening using viral load reduction assays (box 2, magenta dots indicate >3 logs of viral load reduction at a 10 μM drug concentration) and tertiary screening (box 3, yellow dots indicate >2 logs of viral load reduction at a 1 μM drug concentration). ACA was prioritized because of its broad-spectrum antiviral efficacy. Shown in the last panel are the antiviral effects against six different viruses as indicated. Shown are the plaque-forming unit (PFU) or 50% tissue culture infectious dose (TCID50) or OD450 (optical density at 450 nm) values at the indicated concentrations relative to controls in the absence of compound (%). ACA targets host AP2M1 protein ACA is known as a phospholipase A2 (PLA2) inhibitor and transient receptor potential (TRP) channel blocker (15), but these mechanisms were excluded as the basis of its suppression of viral replication (fig. S3). To explore its actual mechanism of antiviral action, we performed time-of-addition assays to investigate how ACA interferes with different phases of the viral replication cycle. ACA did not inactivate virus particles nor affect virus attachment to the host cell surface (fig. S4). To dissect the post-attachment steps, we quantified three types of influenza virus RNA [vRNA, complementary RNA (cRNA), and mRNA] at different time points after virus internalization. Distinct dynamics of vRNA, cRNA, and mRNA synthesis were observed in the control groups, while all viral RNA types in the ACA groups remained at baseline levels within 0 to 5 hpi of ACA addition (Fig. 4A). The result suggested that ACA functions within 1.5 hours after internalization, before cRNA synthesis, and may therefore inhibit virus endocytosis and nuclear import or block viral ribonucleoprotein (vRNP) activity directly. Because a LysoTracker Red assay indicated that the acidification of endosomal compartments was not affected, the uncoating of vRNA for release into the cytoplasm upon virus fusion was unlikely to be blocked by ACA (fig. S3C). Hence, we directly tracked the location of the vRNA within the incoming vRNPs (Fig. 4B). At the indicated time points, cells were processed and stained for the negative-stranded NP vRNA of PR8 using a specific RNA probe set (red). In the DMSO-treated cells, dominant nuclear import of vRNA was detected by 1 hpi, followed by a predominantly cytosolic location between 1.5 and 2 hpi. By comparison, the nuclear vRNA signal was rarely observed after ACA addition throughout the time course. Therefore, ACA may inhibit IAV replication by blocking vRNA nuclear import.
To ascertain the target of ACA, we developed an unbiased drug target-elucidating platform that integrates click-chemistry, WaterLOGSY, and protein identification by liquid chromatography–tandem mass spectrometry (LC-MS/MS), namely CWID (Fig. 4, C and D). First, click-chemistry was applied to introduce an azido group to ACA (azido-ACA) without compromising its antiviral efficacy (Fig. 4C, middle). With an azido-reactive fluorescent dye (DyLight 488), azido-ACA but not ACA was visualized in both the cytosol and nucleus (Fig. 4C, right). Next, WaterLOGSY, a ligand-binding determination method based on observation of the nuclear Overhauser effect, was applied for primary nuclear magnetic resonance (NMR) screening of cellular fractions that exhibited high binding affinity to ACA. Virus-infected MDCK cells were fractionated by gel filtration chromatography, followed by WaterLOGSY to capture the ACA-interacting signals in each fraction individually (Fig. 4D, middle). Iterative rounds of subfractionation and WaterLOGSY were performed to obtain the maximally separated fraction with detectable ACA signals. Subsequently, the bound fraction-azido-ACA mixture was stained with DyLight 488 and subjected to electrophoresis in a native polyacrylamide gel. (Fig. 3 legend) (C) Five mice in each group were euthanized at 6 dpi, and brain tissues were harvested for viral titer determination by plaque assay. The dotted line indicates the lower detection limit of the plaque assay (***P < 0.001; for the purpose of statistical analysis and clarity, a value of 10 to 15 PFU/ml was assigned for any titer below the detection limit). (D) Histopathologic and immunohistochemistry (IHC) analyses of the brain samples at 6 dpi indicated less severe meningitis (by H&E, ×200) and fewer virus-infected cells as indicated by ZIKV NS1 antigen staining (red arrows, by IHC, ×200 magnification) after ACA treatment. (E to H) ACA protected human dipeptidyl peptidase 4 (hDPP4) transgenic mice from MERS-CoV infection. The hDPP4 mice were intranasally inoculated with 500 PFU of MERS-CoV and intranasally treated with ACA or 0.5% DMSO for one dose starting 1 hpi. Shown are (E) survival rate, (F) mean body weight, (G) lung viral titer at 2 dpi (n = 5 per group), and (H) representative lung tissues stained by H&E and anti-MERS-CoV-NP immunofluorescence. The staining suggested less inflammatory cell infiltration (by H&E, ×200 magnification) and fewer virus-infected cell antigens (by immunofluorescence (IF) staining, green fluorescence) in ACA-treated mouse lungs. Results are presented as mean values ± SD. Differences in survival rates were compared using log-rank (Mantel-Cox) tests and viral titers by Student's t test. ***P < 0.001, **P < 0.01, *P < 0.05. (Fig. 4 legend) Cells were fixed at the indicated time points, hybridized with RNA probes against the IAV negative-stranded NP vRNA (red), stained for DNA (blue), and examined by confocal microscopy. Images are representative of three independent experiments. Scale bars, 10 μm. (C and D) Click chemistry/WaterLOGSY/protein ID (CWID) platform for identification of drug-binding targets. (C) Click-chemistry: chemical structure of azido-ACA showing the location of the azido group (green circle) on ACA. The cellular distribution of azido-ACA is shown (green), whereas ACA was used as a negative control owing to its lack of a phosphine-reactive azido group. Scale bars, 50 μm. (D) WaterLOGSY-guided cellular fractionation was subjected to analysis for ACA-featured NMR spectra.
"*" and "***" indicate mild and strong binding signals, respectively. The native polyacrylamide gel electrophoresis gel photo shows the selected cell fraction as detected by a fluorescent image analyzer. Red arrow indicates the specific azido-ACA-binding fragment. (E) Mutagenesis analysis of AP2M1 to rescue pdmH1N1 virus replication against ACA. Full-length AP2M1 (full), longin-like domain (LLD), MHD, and mutant AP2M1 were transfected to MDCK cells before virus infection and ACA treatment. Oneway ANOVA. **P < 0.01; n.s, not significant. (F) Partial sequence alignment of human, mouse, and dog AP2M1 is shown. N217 and K410, the key residues for ACA binding, are highlighted with a box. The predicted interaction surfaces on AP2M1 (red) are shown, while ACA (green) is displayed in stick and mesh representation. of 15 protein-compound complex was visualized in the azido-ACA group, while it was absent in the ACA or cell lysate-only group (Fig. 4D, right). Gel plug of the target band was collected for LC-MS/MS, which identified eight candidate proteins that were physically associated with azido-ACA. To validate their biological function besides binding, individual open reading frame (ORF) clone was overexpressed to overcome the inhibition of virus replication by ACA. Apparently, only ectopic expression of AP2M1 notably rescued pdmH1N1 growth despite the presence of ACA, indicating AP2M1 being one of the most likely targets that accounts for ACA's mode of action ( fig. S5). AP2M1 consists of an N-terminal (~150 amino acids) longin-like domain (LLD) and a C-terminal (170 to 434 amino acids) mu homology domain (MHD). Functionally, we demonstrated that overexpression of either full-length AP2M1 or MHD enhanced pdmH1N1 replication for about 1.5 logs despite adding ACA, whereas overexpressed LLD did not antagonize ACA's antiviral activity (Fig. 4E, left bar charts). Thus, MHD harbors sites of ACA interaction. Amino acid residues of MHD are conserved across the human, dog, and mouse AP2M1, which is in line with the broad-spectrum antiviral coverage of ACA in different species of cells and mouse models. Molecular docking predicts that ACA interacts mainly with AP2M1 through four amino acids, M216, N217, K400, and K410 (Fig. 4F). Our mutational experiment showed that substitution in N217A or K410A failed to antagonize the ACA's antiviral activity when compared with that of the M216A, K400A, or wild-type (WT) AP2M1 (Fig. 4E, right bar charts). Collectively, ACA targets host AP2M1 by interacting with its N217 and K410 residues. Host AP2M1 is exploited by multiple families of virus via their proteins harboring YxxØ binding motif The antiviral spectrum of ACA spans across enveloped (ZIKV) and nonenveloped (EV-A71), retro (HIV-1), and nonretro (IAV), as well as DNA (AdV-5) and RNA (MERS-CoV) viruses ( fig. S2A). Thus, AP2M1 must be broadly exploited during life cycles of many viruses. First, we excluded the possibility that ACA affected phosphorylation of AP2M1 ( fig. S6), which has been approved to be antiviral effective by Bekerman et al. (16). Subsequently, to find the specific virus protein interacting with host AP2M1, we performed an immunoprecipitation (IP) screening of viral ORF clones. Previous host-IAV interactome suggested M1, hemagglutinin (HA), PB2, and NP as the potential AP2M1-binding proteins without individual validation (17). 
After coexpression of each viral gene and HA-tagged AP2M1 in human embryonic kidney (HEK) 293T cells, only NP could be detected in the IP complex, and adding ACA markedly diminished the target NP band (fig. S7A). Using the same approach, we identified ZIKV-NS3, MERS-CoV-NP, and EV-A71-2C as interacting partners of AP2M1 (fig. S7, B to D). AP2M1 is known to mediate sorting of cargo proteins harboring YxxØ or dileucine motifs (18). Using bioinformatic methods, we found that the YxxØ motif, but not the dileucine motif, was consistently present in the implicated AP2M1-binding viral proteins. The YxxØ motifs are highly conserved in specific proteins across many virus families, including the NP of Orthomyxoviridae, Gag of Retroviridae, NP of Bunyaviridae, NS3 of Flaviviridae, NP of Coronaviridae, 2C of Picornaviridae, and Core Protein V of Adenoviridae (Fig. 5A). To determine whether ACA blocked the AP2M1 YxxØ-binding site, we developed a competitive enzyme-linked immunosorbent assay (ELISA). As positive controls, addition of either the nonlabeled YxxØ motif peptide DYQRLN or the low-affinity mutant D176A AP2M1 (19) resulted in a diminished binding signal during binding of the biotin-YxxØ substrate to immobilized His-AP2M1. ACA pretreatment notably reduced the AP2M1/biotin-YxxØ interaction (Fig. 5B). To explore whether the YxxØ-binding pocket is a druggable target for antiviral therapy, we also tested tyrphostin A23, which blocks the tyrosine-binding pocket of AP2M1 (20). At nontoxic concentrations, tyrphostin A23 suppressed a panel of viruses including pdmH1N1, MERS-CoV, EV-A71, and ZIKV (Fig. 5C). Last, the effect of AP2M1 gene depletion on viral replication was investigated. AP2M1 knockout led to dramatic viral load reductions in pdmH1N1-infected 293T cells, EV-A71-infected rhabdomyosarcoma (RD) cells, ZIKV-infected Huh7 cells, and MERS-CoV-infected Huh7 cells (Fig. 5D). Together, the AP2M1/YxxØ interaction is a critical step in the viral replication cycle and is druggable for antiviral intervention. AP2M1/YxxØ interaction facilitates virus trafficking beyond endocytosis to promote viral replication Next, we asked what functional roles the AP2M1/YxxØ interaction plays during viral replication cycles, using IAV, EV-A71, and ZIKV as three representative viruses. Nuclear import of IAV NP through the nuclear pore complex is a prerequisite for efficient vRNP translocation and subsequent genome replication, and our data clearly showed that ACA impaired vRNA nuclear import and AP2M1/NP binding (Fig. 4B and fig. S7A). Thus, we postulated that AP2M1 facilitates NP import from the cytoplasm to the nucleus by recruiting the NP-YxxØ motif. To quantify the retardation of NP import in AP2M1−/− 293T cells, cells were infected with IAV at an MOI of 10, and cycloheximide (CHX) was added to inhibit protein synthesis. Crude cell lysate was separated into nuclear (Nuc) and cytoplasmic (Cyto) fractions at 2 hpi (Fig. 5E). In WT cells, AP2M1 protein was mainly detected in the cytoplasm, whereas considerably more viral NP was found in the nucleus. In AP2M1−/− cells, however, viral NP was predominantly found in the cytoplasmic fraction instead. Since the only source of NP protein after CHX treatment was the incoming vRNPs, this finding suggested that AP2M1 facilitates the nuclear import of incoming vRNPs. To provide direct evidence for a role of AP2M1 in mediating intracellular NP trafficking beyond endocytosis, we monitored the cotrafficking of NP-green fluorescent protein (GFP) with AP2M1-mCherry using live cell imaging.
GFP/mCherry signal was found largely in the host nucleus (movie S1). Upon ACA treatment, NP resided predominantly in the cytosol, and the mobility of the AP2M1-associated NP puncta (yellow) was remarkably reduced, suggesting diminished NP trafficking via AP2M1 (Fig. 5F and movie S2). Nevertheless, most positive-stranded RNA viruses replicate their genomes on the cytoplasmic endoplasmic reticulum (ER) membrane without entering the nucleus. For example, the EV-A71-2C and ZIKV-NS3 proteins induce the formation of the viral RNA replication complex by trafficking predominantly to the ER membrane (21). In the context of synchronized viral infection, we demonstrated by confocal imaging the extensive colocalization of IAV-NP with the nucleus, and of EV-A71-2C and ZIKV-NS3 with the ER, respectively. After ACA treatment, however, reduced rates of colocalization were revealed for all three representative viruses (89% versus 12% for IAV-NP/nucleus; 63% versus 21% for EV-A71-2C/ER; 95% versus 56% for ZIKV-NS3/ER; Fig. 5G). In summary, these results confirmed an important role of AP2M1 in trafficking viral proteins beyond endocytosis. To bridge AP2M1-mediated trafficking and virus replication, we rescued recombinant IAV with a series of point mutations at two NP-YxxØ locations, i.e., Y296-V299 (YSLV) and Y385-I388 (YWAI) (fig. S8). (Fig. 5 legend) (E) AP2M1−/− and WT 293T cells were treated with CHX before virus infection (MOI of 10). Nuclear (Nuc) and cytoplasmic (Cyto) fractions were separated and analyzed at 2 hpi by Western blotting. (F) A549 cells transfected with GFP-influenza-NP and mCherry-AP2M1 were incubated with DMSO or ACA for 24 hours. Live cell imaging was performed, and motile AP2M1/NP puncta were tracked (movies S1 and S2). Shown is the average velocity of trackable puncta over the overall distance traveled. Student's t test. (G) AP2M1 facilitates viral protein localization. Synchronized infections were used throughout the experiments. Colocalization was quantified using the ImageJ (JACoP) colocalization software and Manders' colocalization coefficients (MCCs). Bar charts indicate mean MCC values represented as percent colocalization (the fraction of green intensity that coincides with blue intensity in the case of IAV-NP/nucleus, and the fraction of green intensity that coincides with red intensity in the case of EV-A71-2C/ER and ZIKV-NS3/ER) ± SD (error bars, n = 10 to 15). Scale bars, 10 μm. ***P < 0.001 by Student's t test. The mutagenesis included two Y/A substitutions (Y296A and Y385A), two Ø/A substitutions (V299A and I388A), and two Ø/L substitutions (V299L and I388L). The growth of the Y296A, Y385A, and Ø299A mutant viruses was attenuated by >1 log at each time point, while the Ø388L mutant virus exhibited replication kinetics similar to those of the WT (Fig. 6A). These results not only indicated the critical role of NP-YxxØ in determining virus fitness but also showed that the Ø position is functionally interchangeable among homologous Ø residues (i.e., the I388L substitution). However, we were unable to rescue recombinant viruses containing the Ø299L or Ø388A mutation in three independent experiments. To validate whether the reduction of IAV replication was due to decreased NP nuclear entry, a step beyond AP2M1-mediated endocytosis, the successfully rescued WT and mutant IAVs were used to infect A549 cells, followed by measurement of NP in both the cytosol and nucleus at 2 hpi.
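Colocalization in these experiments is reported as a Manders' colocalization coefficient (MCC), i.e., the fraction of one channel's intensity that falls on pixels positive in the other channel, as computed by tools such as ImageJ/JACoP. A minimal NumPy sketch of that calculation is shown below; the two tiny arrays are made-up intensities used only to illustrate the arithmetic.

```python
import numpy as np

def manders_m1(signal, reference, thr_signal=0.0, thr_reference=0.0):
    """Fraction of above-threshold 'signal' intensity located on pixels where
    'reference' is above threshold (Manders' M1-style coefficient)."""
    s = np.where(np.asarray(signal, float) > thr_signal, np.asarray(signal, float), 0.0)
    r = np.asarray(reference, float) > thr_reference
    total = s.sum()
    return s[r].sum() / total if total > 0 else float("nan")

# Hypothetical 3x3 intensity "images" (viral protein channel vs. compartment marker channel)
viral = np.array([[0, 5, 9], [0, 7, 8], [0, 0, 3]])
marker = np.array([[0, 0, 9], [0, 9, 9], [9, 0, 0]])
print(f"percent colocalization ~ {100 * manders_m1(viral, marker):.0f}%")  # ~75% for this toy example
```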
NP was not detectable in the host nucleus after infection with IAVs carrying the Y296A, Y385A, or Ø299A NP substitutions, while the Ø388L mutant virus exhibited an amount of nuclear NP similar to that of the WT (Fig. 6B). As expected, influenza polymerase luciferase reporter activity from both Ø/L mutants (Ø299L and Ø388L) was similar to that of the WT, whereas all Y/A (Y296A and Y385A) and Ø/A (Ø299A and Ø388A) mutants displayed reduced polymerase activity. Moreover, AP2M1 gene depletion reduced the polymerase activity by >75% (Fig. 6C). These results suggested that the amino acid residues of the NP-YxxØ motif determine IAV polymerase activity by modulating NP nuclear import. Furthermore, we extended the analysis in vivo and selected the most attenuated substitution, Y296A, for a full comparison with the WT. BALB/c mice were challenged with two doses of WT-H5N6-GFP (WT) and Y296A-H5N6-GFP (Y296A) viruses, respectively. Survival of the Y296A-challenged (10^5 PFU) group (100% versus 0% versus 60%) was markedly better than that of the WT-challenged (10^5 PFU) and WT-infected (10^4 PFU) groups (Fig. 6D). Mice in the 10^5 PFU Y296A group displayed weight loss similar to that of the 10^4 PFU WT-infected group but rebounded after 7 dpi, whereas the 10^4 PFU Y296A group displayed mild body weight loss (<5%) throughout the infection (Fig. 6E). Taking advantage of the GFP reporter feature of the recombinant IAV for in vivo dynamic analysis, we examined lung and brain samples from infected mice at different dpi. Spread of the WT virus deeper into the bronchioles and possibly the alveoli was detected from 3 dpi, while the GFP signal from the Y296A mutant virus was confined to regions around the initial sites of infection near the trachea and bronchi, indicating limited virus spread (Fig. 6F, left). In line with the reported occurrence of neurological symptoms in animals infected with highly pathogenic H5 viruses, extensive GFP signals could be visualized in WT virus-infected mouse brain as early as 3 dpi. In contrast, the Y296A group showed reduced GFP intensity at all time points (Fig. 6F, right). These data provided evidence that a single Y296A substitution in the NP-YxxØ motif notably restricted IAV replication in vivo. Together, the host AP2M1/virus YxxØ interaction is critical for intracellular virus trafficking to the sites of replication/transcription beyond the step of endocytosis, thereby facilitating viral replication (Fig. 6G). DISCUSSION Viruses exploit distinct receptors to facilitate cell entry and to evade a hostile extracellular environment that would otherwise abrogate infection. Within the intracellular setting, we demonstrated a conserved host AP2M1/virus YxxØ interaction that is commonly harnessed by different viruses during intracellular trafficking beyond the well-defined CME process (Fig. 6G). Using ACA as a tool compound, we developed the CWID platform and identified the YxxØ-binding pocket of host AP2M1 as a previously unidentified target for broad-spectrum antiviral development, which differs from the previous antiviral strategy of blocking AP2M1 phosphorylation (fig. S6). Although AP2M1 knockout is lethal in mice (22), the AP2M1−/− cell line is not susceptible to infection by IAV, EV-A71, MERS-CoV, and ZIKV (Fig. 5D). On the virus side, the YxxØ motif determines the capacity for NP nuclear translocation and therefore affects IAV fitness (Fig. 6, A to F). The outcome of a viral infection with respect to cellular fate is a fundamental aspect of viral biology.
We find that TGF- signaling is a cell fate determinant pathway with broad relevance of multiple viruses, and these viruses use AP2M1 as a common intermediate for intracellular trafficking but beyond endocytosis: First, by using live IAV infection, vRNA was visualized in the perinuclear region but within the cytosol, indicating that the IAV entry process has not been affected by ACA (Fig. 4B); after endocytosis, however, NP were predominantly excluded from the nucleus of AP2M1 −/− cells, suggesting that AP2M1 was indispensable for IAV-NP nuclear localization (Fig. 5E). The role of AP2M1 associated with hepatitis C virus (HCV) entry and assembly has been defined (23,24). Our study further demonstrated the versatility and broadness of AP2M1 to cotraffic with several other internal viral proteins for the completion of their replication cycles. Represented by IAV (enveloped and negativestranded), EV-A71 (nonenveloped), and ZIKV (enveloped and positivestranded), we demonstrated that recruitment of YxxØ-harboring IAV-NP, EV-A71-2C, and ZIKV-NS3 proteins by AP2M1 was functionally related in their correct localization as to facilitate viral genome replication. Strategically, AP2M1 was harnessed by viral NP for efficient nuclear entry, thereby promoting polymerase activity. Substitutions introduced in the viral YxxØ motif as Y to A or Ø to A greatly diminished virus growth in vitro and in vivo. Since the virulence of NP Y296A mutant virus was attenuated, strategic usage of IAV strains containing this or other mutants as vaccines might be evaluated in the future. In the case of EV-A71 and ZIKV, AP2M1 was required for efficient transportation of 2C and NS3 proteins to the ER membrane so that their genome replication can occur. Furthermore, the essentiality of AP2M1 during virus replication was validated in the context of multiple viruses including pdmH1N1, EV-A71, ZIKV, and MERS-CoV infections (Fig. 5D). Together, AP2M1 might be universally exploited by different viruses to complete their replication cycle after cell entry. The architecture of AP2 has to undergo a large conformational change from a "closed," cargo-inaccessible state to an "open" structure so as to expose the YxxØ binding site, which is regulated by AP2associated protein kinase 1 and cyclin G-associated kinase (7). Although it has been reported that inhibitors of this kinase (e.g., sunitinib and erlotinib) can inhibit RNA viruses including dengue, Ebola, and HCV, the in vivo antiviral potency of ACA (100% survival in IAV, ZIKV, and MERS-CoV mouse models) is much better than that of sunitinib [37% protection in dengue virus (DENV) and 30% in ebola virus (EBOV) mouse models] or erlotinib (conferring no protection in either the DENV or EBOV mouse model) (16). Targeting directly the YxxØ-binding pocket instead of the T156 AP2M1 phosphorylation site, our study not only extends the therapeutic window beyond the conformational change of AP2M1, which is transient and mediated by kinase activity, but also opens up another synergistic antiviral approach by combining ACA with other inhibitors including sunitinib or erlotinib. A major roadblock to translating protein kinase inhibitors into clinical development is the doubt about their poor selectivity, which is largely a consequence of the highly conserved ATP-binding site shared by all protein kinases (25). Disruption of PPIs, as exemplified in our study, however, is usually highly specific against the binding interface, which has less concern about cytotoxicity. 
Occupation of the AP2M1 YxxØ-binding pocket by tyrphostin A23 blocked the replication of multiple viruses (Fig. 5C). Because ACA interacts with N217 and K410, which lie outside the YxxØ-binding cavity formed by residues F174, D176, K203, and R423 (26), an allosteric mechanism of AP2 modulation may exist (Fig. 4E). Besides AP subunit genes (AP1M1, AP2A1, AP2M1, and AP3S2), several syntaxin-related genes (STX5, STX10, and STXBP2) were identified as playing a proviral role (Fig. 1B and fig. S1). Four pharmacological inhibitors, including imperatorin, Pyr6, ZD7288, and ethosuximide, exhibited potent anti-influenza activity with EC99 values in the nanomolar range (Fig. 1C). Further characterization of their underlying mechanisms is warranted. Establishment of the CWID platform enables identification of drug-binding targets in an unbiased manner, which addresses a common difficulty that challenges all phenotypic forward chemical screening (Fig. 4, C and D). This platform may be adopted for studies using host-targeting strategies to accelerate the progress of drug target discovery. Overall, we demonstrate that the AP2M1/YxxØ interaction is a druggable target for broad-spectrum antiviral therapy, and viral YxxØ mutants could be tested as attenuated vaccines. These approaches may provide additive and possibly synergistic antiviral effects when ACA or its analogs are combined with other antiviral agents for tackling emerging viral infections. Experimental design The main goal of this study was to identify a next-generation broad-spectrum antiviral with an elucidated mechanism. First, we comprehensively evaluated the antiviral potency of the selected drug ACA in cell culture, ex vivo, and in vivo models. Second, we established an integrative platform, named CWID, to identify, in an unbiased manner, the host AP2M1/YxxØ interaction as the ACA drug target. Third, we investigated the biological importance of the AP2M1/YxxØ interaction in the viral replication cycle using three representative viruses. Last, we characterized the effect on virulence of substitutions in the influenza A NP YxxØ motif. Loss-of-function screen To identify host genes essential for pdmH1N1 replication, an RNA interference (RNAi)-based screen was performed using a commercially available library targeting 142 cellular membrane-trafficking genes (Ambion Silencer, A30138). Briefly, 1.5 × 10^4 A549 cells per well were seeded in 96-well plates overnight, followed by siRNA transfection once daily for two consecutive days using the Lipofectamine RNAiMAX reagent. At 48 hours after the primary siRNA transfection, A549 cells were infected with virus at an MOI of 0.1. One hour later, the infectious inoculum was aspirated and replaced with fresh DMEM containing 2% BSA and N-tosyl-L-phenylalanine chloromethyl ketone (TPCK)-treated trypsin (2 μg/ml). The cell culture supernatant of each well was collected after another 48 hours, and the viral load was titrated by RT-qPCR. Wells exhibiting poor cell viability (<80%) after gene silencing but before virus infection were excluded. Chemical library screen A small-molecule compound library of 420 candidates (MedChemExpress), targeting membrane transporters and ion channels, was used for pharmacological screening of antiviral activity. The primary screen for pdmH1N1 inhibitors was based on inhibition of the cytopathic effect (CPE), as we previously established (30).
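The viral loads in both the loss-of-function screen and the viral-load-reduction assays are read out by RT-qPCR, where cycle-threshold (Ct) values are converted to copy numbers via a standard curve before computing the log10 reduction used as the hit criterion. The sketch below illustrates that conversion; the slope, intercept, and Ct values are placeholders, not the assay's actual calibration.

```python
import numpy as np

def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Convert a qPCR Ct to copy number using a standard curve Ct = slope*log10(copies) + intercept.
    A slope of about -3.32 corresponds to ~100% amplification efficiency (placeholder values)."""
    return 10.0 ** ((float(ct) - intercept) / slope)

# Hypothetical Ct values for a scrambled-siRNA control well and a gene-knockdown well
ct_control, ct_knockdown = 18.5, 24.3
reduction = np.log10(copies_from_ct(ct_control)) - np.log10(copies_from_ct(ct_knockdown))
print(f"viral load reduction ~ {reduction:.1f} log10")  # >1 log10 would count as a screen hit
```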
Viability of MDCK cells after virus infection at an MOI of 0.01 and compound treatment (10 μM) was determined at 48 hpi using the CellTiter-Glo luminescent cell viability kit (Promega). Secondary screening was performed with a viral load reduction assay. Briefly, the cell culture supernatant at 24 hpi, after compound treatment (10 and 1 μM, respectively), was collected for viral copy number quantification by RT-qPCR. Favipiravir (50 μg/ml) was used as a positive control throughout the screening process. WaterLOGSY, a ligand-observed NMR technique, was used to screen for ACA-interactive cellular fractions (31). To fractionate the pdmH1N1-infected cell lysate, 1 ml of MDCK cells (10^7 cells, MOI of 1, 6 hpi) was ultrasonicated three times for 10 s on ice and then centrifuged. The clarified supernatant was applied to fast protein liquid chromatography (ÄKTAexplorer, GE Healthcare) using a 320-ml HiLoad 26/600 Superdex 200 preparative size exclusion chromatography column to harvest each protein fraction with ultraviolet 260-nm signals. ACA WaterLOGSY experiments were conducted on a 600-MHz Bruker Avance spectrometer using a 5-mm PASEI probe. The pulse scheme used for the WaterLOGSY experiment was "ephogsygpno.2", with water suppression using excitation sculpting with gradients (32). All experiments were conducted at 298 K using 5-mm-diameter NMR tubes with a sample volume of 500 μl supplemented with 17 μM ACA, and spectra were recorded using 4K scans. All samples were dissolved in 95% H2O and 5% D2O with a final concentration of 70 μM trimethylsilylpropanoic acid as the internal standard. A control spectrum was recorded under the same conditions without the cellular fraction to confirm the absence of self-aggregated ACA macromolecules. Click chemistry/WaterLOGSY/protein ID To identify the ACA-binding protein targets (protein IDs), the cellular fraction was lyophilized, resuspended in 10 μl of H2O, and incubated with 70 μM azido-ACA for 1 hour. The mixtures were further incubated with 100 μM DyLight 488-Phosphine for 3 hours to allow linkage of the fluorescent dye to the azido group, before loading onto a 10% nondenaturing native gel for electrophoresis. Subsequently, a Typhoon FLA 9500 laser scanner with an AlexaFluor 488 filter was used to visualize the fluorescent bands associated with azido-ACA but not ACA. The photomultiplier tube voltage was set at 300 to 350 V and the resolution at 50 μm. Target bands were excised and subjected to LC-electrospray ionization MS/MS analysis by Q Exactive as previously established (Shanghai Applied Protein Technology Co. Ltd.) (33). Samples with or without ACA were also incubated with DyLight 488-Phosphine to act as controls. Gain-of-function validation To validate the target protein essential for the ACA-dependent mode of action, the eight protein IDs revealed by the CWID platform were overexpressed individually and screened for their capacity to antagonize ACA's antiviral activity. The ORF clone of each protein was obtained from the Mission TRC3 human ORF collection (Merck). In a 24-well plate, MDCK cells were transfected with 500 ng of each plasmid and incubated for 48 hours before pdmH1N1 infection. Cell culture supernatants of the infected cells, with or without ACA (5 μM), were collected for viral titer determination by standard plaque assay.
Virus infection in human AOs Under a protocol approved by the Institutional Review Board (UW 13-364) of the University of Hong Kong/Hospital Authority Hong Kong West Cluster, normal human lung tissue from a patient was obtained surgically. Informed consent was obtained from the human participant, and the experiments were performed in compliance with the approved standard operating procedures. The anti-influenza activity of ACA was also evaluated in 3D human AOs, as we previously reported (14). Briefly, the 3D AOs were sheared mechanically to expose the apical surface to the virus inoculum. The sheared organoids were then incubated with viruses at an MOI of 0.01 for 2 hours at 37°C. After washing, the inoculated organoids were re-embedded in Matrigel and cultured in medium containing ACA (10 μM) or DMSO (0.1%). At the indicated times, AOs were harvested for quantification of intracellular viral load or fixed for immunofluorescence staining. Mouse experiments BALB/c mice, hDPP4-transgenic C57BL/6 mice, and interferon α/β receptor knockout (IFNAR−/−) A129 mice were kept in biosafety level 2 or 3 housing and given access to standard pellet feed and water ad libitum, as we previously described (34-36). All experimental protocols were approved by the Animal Ethics Committee (CULATAR 4057-16, 4371-17, 4511-17) of the University of Hong Kong and were performed according to the standard operating procedures of the biosafety level 2 or 3 animal facilities. To evaluate the cross-subtype anti-influenza virus efficacy of ACA in vivo, BALB/c mice (20 mice per group) were inoculated intranasally with 200 PFU of influenza A H7N9 virus or 1000 PFU of mouse-adapted influenza A H1N1 virus in 20 μl of PBS. Treatment was given 1 hour after challenge by intranasal administration. One group of mice was inoculated with 20 μl of ACA (0.2 mg/kg). A second group was treated with 20 μl of intranasal zanamivir (2 mg/kg) (30). A third group was given intranasal 0.5% DMSO in PBS as an untreated control. Animal survival and clinical disease were monitored for 14 days or until death. Lung tissues (five mice per group) were collected for viral load detection and hematoxylin and eosin (H&E) histopathologic analyses on days 2 and 4 after challenge, respectively. To evaluate the anti-ZIKV efficacy of ACA in vivo, 4- to 6-week-old A129 mice were randomly divided into two groups to receive ACA treatment or sham treatment through the subcutaneous route (n = 13 per group). The mice were inoculated subcutaneously with 1 × 10^6 PFU (in 100 μl of PBS) of ZIKV-PR under anesthesia. Each mouse then received one dose of subcutaneously administered ACA (1 mg/kg) or 0.5% DMSO in PBS at 1 hpi. The mice were monitored daily for body weight change and clinical signs of disease. Five mice in each group were euthanized at 6 dpi, and brain tissues were harvested for viral titers and for histopathologic and immunohistochemistry analyses. The survival of the other mice was monitored until 14 dpi. To examine the anti-MERS-CoV activity of ACA, a total of 26 mice (n = 13 per group) were evaluated. After anesthesia, mice were intranasally inoculated with 20 μl of virus suspension containing 500 PFU of MERS-CoV. Intranasal therapeutic treatment was initiated at 1 hpi. One group of mice was inoculated with 20 μl of ACA (0.2 mg/kg). The other group was given intranasal 0.5% DMSO in PBS as an untreated control. Animal survival and clinical disease were monitored for 14 days or until death.
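Survival outcomes from challenge experiments such as these are typically summarized with Kaplan-Meier curves and compared with the log-rank (Mantel-Cox) test, as cited in the figure legends above. Below is a small sketch using the lifelines package; the day-of-death values are invented to illustrate the call pattern, not the experimental results.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical days to death; 14 means the mouse survived to the end of monitoring (censored)
days_treated = np.array([14, 14, 14, 14, 14, 14, 14, 14])
days_vehicle = np.array([6, 7, 7, 8, 8, 9, 9, 10])
event_treated = days_treated < 14   # True = death observed
event_vehicle = days_vehicle < 14

kmf = KaplanMeierFitter()
kmf.fit(days_treated, event_observed=event_treated, label="ACA")  # Kaplan-Meier estimate

result = logrank_test(days_treated, days_vehicle,
                      event_observed_A=event_treated, event_observed_B=event_vehicle)
print(f"log-rank (Mantel-Cox) p-value: {result.p_value:.4g}")
```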
Five mice in each group were euthanized randomly at 2 dpi. Mouse lungs were collected for virus titration and for H&E histopathologic and immunofluorescence staining. Ex vivo imaging The whole-organ lungs and brains of the GFP virus-infected mice were excised at the indicated times after challenge. After fixation in 4% paraformaldehyde overnight, images were acquired on a PE IVIS Spectrum in vivo imaging system fitted with GFP excitation/emission filters. Live cell imaging Live imaging was used to visualize the cotrafficking of AP2M1 and influenza A NP according to a previous report, with some modifications (37). Time-lapse images were taken using a Nikon Ti2-E widefield microscope with a 100×/1.46 oil objective in a heated (37°C) chamber. GFP-labeled NP protein and mCherry-fused AP2M1 protein were tracked by sequential imaging every 2 s with 50- and 200-ms exposures for each channel, respectively. Run lengths and transport velocities of individual colocalized puncta were calculated using the Track Points function of the MetaMorph analysis software, measuring the distance traveled (in any direction) between frames for each punctum. The movies were made using the Stack function of the MetaMorph analysis software by assembling the relevant frames in order. Reporter gene assays Luciferase reporter plasmids reflecting the up- or down-regulation of cell fate determination pathways were used. Experimentally, each reporter plasmid (100 ng), together with a transfection efficiency control plasmid (pNL1.1.TK, Promega; 5 ng), was cotransfected into the indicated cells for 24 hours. Subsequently, each virus at an MOI of 10 was used to infect the transfected cells, followed by luminescence detection at 0, 1, 3, 6, or 12 hpi according to the manufacturer's protocol (Dual-Luciferase Reporter Assay System, Promega). Transfected cells with mock infection were taken as the baseline control for normalization. Influenza A virus mini-genome reporter assays were performed as described previously, with some modifications (39). RNP complexes composed of PA, PB1, PB2, and NP or their mutants were mixed with a luciferase reporter plasmid (50 ng each) and the pNL1.1.TK construct (5 ng) and then cotransfected into HEK293T cells. Luminescence was determined at 24 hours after transfection. YxxØ motif binding assay The assay was designed to detect AP2M1/YxxØ binding. Free or biotin-labeled YxxØ motif peptide, with the sequence DYQRLN, was synthesized. To expose the AP2M1 binding site, calyculin A, an inhibitor of AP2M1 dephosphorylation that "locks" AP2M1 in its YxxØ-binding active conformation, was added according to a previous report (24). HEK293T cells in six-well plates were transfected with His-AP2M1 or its low-binding-affinity mutant D176A for 48 hours. Next, 10 μg per well of transfected cell lysate containing the overexpressed proteins was incubated in a Ni-NTA HisSorb 96-well plate (Qiagen) overnight at 4°C. After washing, drugs were added 1 hour before incubation, followed by addition of the biotin-YxxØ probe (10 μg/ml) and detection of the binding signal using horseradish peroxidase-conjugated streptavidin (Thermo Fisher Scientific, N100) and tetramethylbenzidine (TMB) substrate (Thermo Fisher Scientific, N301). In this experiment, the unlabeled peptide DYQRLN was used as a positive control binding inhibitor, while mock-transfected cell lysate was taken as a background control. A washing and dilution buffer consisting of 100 nM calyculin A, PBS, and 0.1% Tween 20 was used throughout the assay.
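The live-imaging analysis above reduces each tracked AP2M1/NP punctum to a run length and an average transport velocity (distance traveled between frames divided by elapsed time). A small Python sketch of that calculation is given below; the frame interval of 2 s matches the protocol, but the pixel size and coordinates are made-up placeholders.

```python
import numpy as np

def mean_velocity(track_xy, dt_s=2.0, um_per_px=0.065):
    """Average speed of one tracked punctum: total path length / elapsed time.
    dt_s is the frame interval (2 s per the imaging protocol); um_per_px is a placeholder pixel size."""
    xy = np.asarray(track_xy, dtype=float)
    step_um = np.linalg.norm(np.diff(xy, axis=0), axis=1) * um_per_px  # distance per frame (um)
    return step_um.sum() / (dt_s * (len(xy) - 1))                      # um per second

# Hypothetical pixel coordinates of one punctum over five consecutive frames
track = [(10, 10), (12, 11), (15, 13), (19, 13), (22, 15)]
print(f"mean transport velocity ~ {mean_velocity(track):.3f} um/s")
```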
Molecular docking analysis The 3D structure of ACA (PubChem CID: 5353376) was downloaded from the PubChem database. LeadFinder version 1804 was used to perform ligand-receptor docking (40). Extra precision mode (-xp) was applied to search the ligand conformational space more thoroughly. The energy grid map was generated according to the binding pose of the YxxØ motif in the AP2M1/YxxØ complex structure (Protein Data Bank code: 2XA7) (7). The default grid map spacing of 0.375 Å was set as a good trade-off between accuracy and performance. Bond orders were assigned, hydrogens were added, and cap termini were included with the Protein Preparation Wizard module as implemented in Maestro. Protonation states of side chains were predicted using the PROPKA3.1 server. Partial charges over all atoms were assigned within the AMBER99 force field scheme as implemented in AmberTools. The top-ranked pose was visualized using PyMOL, while 2D intermolecular interactions were visualized with LigPlot+. Statistical analysis Data were analyzed using GraphPad Prism 7 (GraphPad Software, San Diego, CA, USA). The values shown in the graphs are presented as means ± SD of at least three independent experiments. Statistical differences between groups were analyzed using one-way analysis of variance (ANOVA) with Dunnett's multiple comparisons test or two-tailed unpaired t tests. Colocalization rates were quantified using the ImageJ (JACoP) colocalization software and Manders' colocalization coefficients (MCCs) as previously described (23). P < 0.05 was considered statistically significant. SUPPLEMENTARY MATERIALS Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/sciadv.aba7910/DC1
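For readers who reproduce such comparisons outside Prism, the two workhorse tests named above (the two-tailed unpaired Student's t test for pairwise comparisons and one-way ANOVA for multi-group comparisons) are available in SciPy; Dunnett's post hoc test is provided as scipy.stats.dunnett in recent SciPy releases (assumed to require SciPy ≥ 1.11). The values below are hypothetical log10 titers used only to show the call pattern.

```python
import numpy as np
from scipy import stats

# Hypothetical lung viral titers (log10 PFU/ml), three independent experiments per group
dmso      = np.array([6.8, 7.1, 6.9])
aca       = np.array([4.2, 4.5, 4.0])
zanamivir = np.array([6.0, 6.3, 6.1])

# Pairwise comparison: two-tailed unpaired Student's t test
t_stat, p_pair = stats.ttest_ind(dmso, aca)
print(f"DMSO vs ACA: t = {t_stat:.2f}, p = {p_pair:.4f}")

# Multi-group comparison: one-way ANOVA (Dunnett's test against DMSO would follow as post hoc)
f_stat, p_anova = stats.f_oneway(dmso, aca, zanamivir)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```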
Multiscale Model Identifies Improved Schedule for Treatment of Acute Myeloid Leukemia In Vitro With the Mcl‐1 Inhibitor AZD5991 Anticancer efficacy is driven not only by dose but also by frequency and duration of treatment. We describe a multiscale model combining cell cycle, cellular heterogeneity of B‐cell lymphoma 2 family proteins, and pharmacology of AZD5991, a potent small‐molecule inhibitor of myeloid cell leukemia 1 (Mcl‐1). The model was calibrated using in vitro viability data for the MV‐4‐11 acute myeloid leukemia cell line under continuous incubation for 72 hours at concentrations of 0.03–30 μM. Using a virtual screen, we identified two schedules as having significantly different predicted efficacy and showed experimentally that a “short” schedule (treating cells for 6 of 24 hours) is significantly better able to maintain the rate of cell kill during treatment than a “long” schedule (18 of 24 hours). This work suggests that resistance can be driven by heterogeneity in protein expression of Mcl‐1 alone without requiring mutation or resistant subclones and demonstrates the utility of mathematical models in efficiently identifying regimens for experimental exploration. Study Highlights WHAT IS THE CURRENT KNOWLEDGE ON THE TOPIC? ✔ During the past few years, considerable progress has been made in the discovery of Mcl-1 inhibitors, with four compounds currently undergoing evaluation in phase I clinical trials. Studies in vitro have shown that the duration of treatment has a critical impact on overall cell viability. In addition, mouse xenograft studies have shown that tumor regression lasting some weeks can be achieved using a single dose or repeated doses delivered daily or weekly. However, the optimal administration schedule for Mcl-1 inhibitors (i.e., to maximize efficacy) is currently unknown. WHAT QUESTION DID THIS STUDY ADDRESS? ✔ This study investigates the concentration-time profile for the Mcl-1 inhibitor AZD5991 that is able to elicit maximum cell killing in vitro. WHAT DOES THIS STUDY ADD TO OUR KNOWLEDGE? ✔ Our study uses a multiscale mathematical model incorporating elements of Mcl-1 pharmacology, cell cycle, cell division, and importantly, cellular heterogeneity to investigate and prioritize treatment schedules for experimental investigation. In the specific case of AZD5991, this study suggests that the repeated inhibition of Mcl-1 for a short period of time each cycle may be more efficacious for the longer term than inhibition during a longer portion of each cycle. HOW MIGHT THIS CHANGE DRUG DISCOVERY, DEVELOPMENT, AND/OR THERAPEUTICS? ✔ A critical aspect of drug discovery is to determine the dose and schedule that elicits the maximum intended effect with minimal toxicity. This information is needed to support the selection of a drug candidate and regulatory filing that precede clinical evaluation. The possibility of using a mathematical model to prioritize experiments for validation could significantly accelerate these late preclinical activities, enabling a better understanding of the impact of dose and schedule before initiating clinical trials. DIABLO) translocate to the cytosol and initiate activation of caspases, 4 an event that represents an irreversible commitment to apoptosis. The B-cell lymphoma 2 (Bcl-2) family of proteins are major regulators of the intrinsic apoptotic pathway characterized by the presence of at least one of four Bcl-2 homology (BH) domains. 
5 They comprise several groups: (i) prosurvival proteins (Bcl-2, Bcl-XL, Bcl-W, Mcl-1, and Bfl1), (ii) multi-BH domain cell-death effector proteins (Bax, Bak, and Bok), and (iii) the BH3-only apoptosis initiator proteins (Bim, Puma, Noxa, and Bad). Interaction among Bcl-2 proteins involves binding of the BH3 domain of the proapoptotic protein to a groove on the surface of the prosurvival proteins. The relative expression of prodeath and prosurvival Bcl-2 proteins and their interaction with each other control commitment to apoptosis. 6 (See ref. 7 for a useful introduction to this complex pathway.) Maintenance of the balance between apoptosis and proliferation is crucial in hematological cells given their high turnover rate. 8 The intrinsic apoptotic pathway is most commonly dysregulated in lymphoid malignancies 9 through one of the following mechanisms: (i) Reduction in expression of the BH3-only proapoptotic proteins, (ii) loss of Bax and/or Bak, or (iii) enhancement of the expression of the prosurvival Bcl-2 proteins. Among prosurvival Bcl-2 proteins, myeloid cell leukemia 1 (Mcl-1) is of particular interest given that its overexpression was found to promote the development of acute myeloid leukemia (AML), 10 multiple myeloma 11 and other hematological malignancies. 12 AZD5991, a novel BH3 mimetic, 13,14 is a selective Mcl-1 inhibitor that triggers mitochondrial outer membrane permeabilization and causes a potent and selective apoptotic response in multiple myeloma and AML cell lines in vitro. Administration of a single dose of AZD5991 has been shown to be sufficient to cause rapid tumor regression in vivo in multiple mouse xenograft models. 14 Although regression is observed, tumors may regrow in the same site, indicating that a fraction of cancer cells too small to be detected immediately following treatment survived the treatment. Fractional kill of the cell population was also seen in cell viability studies in vitro in cell lines treated continuously during the course of 72 hours. Similar observations have been reported in cells treated with TRAIL. 15 Interestingly, if the surviving population is treated once again after allowing for recovery, the same fractional killing is observed following another round of treatment with TRAIL. A major challenge with anticancer therapies is to determine the optimal regimen, maximizing efficacy while minimizing toxicity. Although some approaches have been empirical (e.g., the "3 + 7" induction regimen used for AML 16 ), others have applied optimal-control techniques. 17,18 Such approaches tend to be phenotypic, focusing on efficacy and toxicity end points without offering much insight into the mechanism underlying the disease. A particular concern heard from clinicians is that drug-free intervals may offer a path for cancer cells to develop resistance. 19,20 We aimed to build a model to allow us to interrogate a wide range of schedules and identify those with predicted maximal longer term efficacy for further experimental exploration. Our goal was to triage potential schedules for validation and then use the model to interrogate the differences to build hypotheses as to the underlying mechanisms. Taken together, our analyses suggest that studying the role of Mcl-1 protein, and its evolution under treatment, can shed light on the optimal choice of regimen. MATERIAL AND METHODS Multiscale systems pharmacology model We developed a multiscale model using different techniques to describe the biology relevant at each scale (or "level"). 
At the foundation is the physicochemical model, which describes the pharmacology of the AZD5991 molecule and its interaction with molecules in the intrinsic apoptotic pathway. A step up from that is the cell cycle model, which operates at a phenotypic level rather than a molecular level. Finally, the agent-based model framework is used to bring in stochasticity in protein expression, enabling a collection of cells to be modeled for a timespan of several cell cycles. This approach has been used in the past to model the spatial heterogeneity of tumor cells, particularly in reference to "diffusibles": primarily drug concentration and oxygen supply. 21,22 Because our cells do not form a solid tumor but are suspended in liquid media in vitro, we assume that diffusion is not a limiting factor, and instead use the agent-based model as a framework on which to build a model of heterogeneity in protein expression within a population of genetically identical cells. Each agent of the agent-based model is running its own physicochemical and cell-cycle models based on the pharmacology of AZD5991 and behavior of the MV-4-11 cell line. These models are described in the Supplementary Information and the sections that follow. Model development and calibration In the Supplemental Methods, we discuss in more detail the construction of the physico-chemical, cell-cycle, and agent-based models as well as the in vitro cell culture and washout experiments. We provide more details on the inheritance rules for protein expression and how a distribution can "relax" to its initial form even when individual cells lack such a mechanism. We also define the "normalized slope" parameter, which we use to compare regimens, and the statistical test used to assess significance. We show how the model can recapitulate a wide range of phenotypic outcomes using only protein expression as input. Simulations are compared against not only MV-4-11 (an AML cell line) but also a range of multiple myeloma cell lines. Description of training data by a multiscale model To better understand the effect of scheduling in maximizing activity against cancer cells, we built a mathematical model to simulate the time course of cell survival at various concentrations. We calibrated this model against the experimental cell-kill curves under constant incubation to 72 hours. Figure S3 illustrates how this model provides qualitative agreement with the data in a number of key areas. As highlighted in Figure S3A, the pharmacology of AZD5991 in AML cell lines exhibits a number of notable features. First, the rate of cell death is concentration dependent and saturable; second, it is fractional; and third, it is biphasic. That it is concentration dependent is easily seen because as concentration increases, the rate of loss in viability increases. It is saturable because, beyond a certain concentration, that rate of loss no longer increases. By "fractional" we mean that the absolute rate of cell death depends on the number of cells remaining: When plotted on a log-linear scale, such behavior is a straight line. When we say that the loss in viability in a population of cells is biphasic, we simply mean that some cells die rapidly, whereas a smaller fraction manage to survive for a comparatively long time. In each of the two phases, the rate of cell kill is exponential (i.e., log-cell-kill is linear) in time, although clearly on a timescale that covers both phases, the effect will be nonlinear. 
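A toy version of the agent-based idea described above can make the biphasic, fractional kill intuitive: if each simulated cell carries its own Mcl-1 level and its per-hour death probability under drug falls with that level, the high-expressing tail of the distribution survives as a slowly dying second phase. The sketch below is purely illustrative, with invented parameters, no cell cycle or division, and no connection to the calibrated model in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_viability(n_cells=5000, hours=72, drug=5.0, kill_scale=0.02):
    """Toy heterogeneous-population simulation: log-normally distributed Mcl-1 per cell,
    hourly death probability decreasing with Mcl-1 and increasing with drug exposure."""
    mcl1 = rng.lognormal(mean=0.0, sigma=0.6, size=n_cells)  # per-cell Mcl-1 level (arbitrary units)
    alive = np.ones(n_cells, dtype=bool)
    viability = np.empty(hours)
    for t in range(hours):
        p_death = 1.0 - np.exp(-kill_scale * drug / (1.0 + mcl1))  # Mcl-1-high cells die more slowly
        alive &= rng.random(n_cells) > p_death
        viability[t] = alive.mean()
    return viability

v = simulate_viability()
print(f"viability at 24 h: {v[23]:.2f}, at 72 h: {v[71]:.2f}")  # fast initial kill, then a resistant tail
```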
In certain cell lines, it appears that these persister cells may be regrowing (see Figure S5A, AMO-1 at 0.1 µM for an example). Figure S1B illustrates how our model can qualitatively capture most aspects of this cellular behavior. Virtual washout screen and experimental validation Once the multiscale model was in qualitative agreement with the in vitro viability experiments, a series of virtual washout screens was done to explore the effect of schedule differences on cell-kill profiles. We looked at incubation durations of various lengths, assuming a 24-hour treatment cycle because it fits both drug-discovery workflows and clinical practice. We use a shorthand to describe the regimens: "6ON" refers to a schedule in which we incubate with drug for 6 hours and then wash out and incubate in drug-free medium for 18 hours, whereas "18ON" describes incubation with drug for 18 hours followed by washout and drug-free incubation for 6 hours. Other regimens are named by analogy, for example, 3ON, 12ON, etc. We examined a range of cycles with the incubation period increasing in 3-hour increments from 3 to 21 hours. We describe in the Supplemental Methods the design, execution, and statistical analysis of the experimental washout screens. RESULTS In these studies, we rely heavily on the concept of representative cell lines, by which we mean a pattern of protein expression that elicits a phenotypic response, such as being "sensitive" or "resistant" to treatment. We call them "representative" because they illustrate particular phenotypic responses on the spectrum of possibilities. (See Figure S5 for experimental data and simulated profiles.) Differential schedule sensitivity in vitro Virtual schedule screening. Using a model enables us to rapidly screen many possible regimens. The in vitro doubling time of this model is around 40 hours, and because our model included the cell cycle in a multiscale fashion, we could simulate durations long enough to see the effects of selective pressure on a heterogeneous population of cells over a few cell cycles. We started by simulating a variety of experimentally tractable schedules (e.g., 3ON, 6ON, 12ON, 18ON), with the goal of predicting schedules that most effectively kill tumors by restraining their ability to develop resistance. Figure 1 shows simulated cell-kill curves during a 15-day period. As shown in Figure 1a, at sufficiently low drug concentrations (e.g., 1 arbitrary unit), there is no difference between the 6ON and 18ON schedules; most cells grow and divide with no significant cell death. Also, at high concentrations (20 arbitrary units), there is no difference between schedules because the majority of cells die quickly. However, at intermediate concentrations (5 arbitrary units), the simulations suggest that 6ON can achieve more cell kill than an 18ON schedule during the same length of treatment (15 days). It does this by maintaining the cell-kill effect for multiple cycles, whereas on the 18ON schedule this effect wears off. Validation of schedule sensitivity. To validate this prediction, in vitro washout tests for the MV-4-11 cell line were performed for four 24-hour cycles (Figure 1b; described in detail in the Supplementary Methods). The major difference between schedules during this shorter duration, as predicted by modeling, is not in the final survival end point, but in the rate of cell kill and the extent to which it is conserved over repeated cycles of exposure to AZD5991.
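To give a flavor of what such a virtual screen computes, here is a heavily simplified stand-in for the schedule simulation (not the authors' agent-based model): a single population updated hourly on a repeating 24-hour cycle, with a net kill rate during drug exposure that erodes as a crude proxy for selection of high-Mcl-1 cells. All rate constants below are assumptions chosen only to be roughly consistent with the magnitudes quoted in this paper.

```python
import numpy as np

def simulate(on_hours, days=15, dt=1.0, growth=0.017, kill0=0.06, adapt=0.001):
    """Relative cell number after `days` of a repeating 24-h cycle with drug
    present for `on_hours` per cycle. While drug is present the net rate is
    (growth - kill) and the kill rate erodes slowly (toy resistance); while
    drug is absent the population grows at roughly a 40-hour doubling time."""
    n, kill = 1.0, kill0
    for step in range(int(days * 24 / dt)):
        if (step * dt) % 24.0 < on_hours:            # drug present
            n *= np.exp((growth - kill) * dt)
            kill = max(kill - adapt * dt, growth)    # erosion of cell kill under exposure
        else:                                        # drug-free part of the cycle
            n *= np.exp(growth * dt)
    return n

for schedule in (3, 6, 12, 18):
    print(f"{schedule}ON: relative cell number after 15 days = {simulate(schedule):.3g}")
```

With these invented parameters the shorter exposures preserve more of their per-cycle kill, which is the qualitative behavior the real model predicts at intermediate concentrations; the numbers themselves carry no quantitative meaning.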
As predicted by the model (and shown in Figure 1c), the rate of cell kill during the "ON" phase is expected to diminish in the second cycle compared with the first on both schedules. Thereafter, it is expected to be largely maintained for successive cycles on both schedules; however, the relative magnitude of the loss in cell-kill rate is predicted greater for the 18ON than for the 6ON schedule. We assessed the experimental data using a linear mixed effect model (description, model code, and output in the Supplementary Methods). Comparing the effect on cell kill of both schedules and all four cycles, we found a significant (P = 4.48 × 10 −7 ) effect of schedule on the initial slope, in which the initial value is approximately 0.035 hour −1 (for the "18ON" schedule) but is almost twice that (i.e., 0.060 hour −1 ) on the 6ON schedule. This is visually evident in Figure 1d and expected from how the cell-kill slope is defined. From that initial value, the absolute decline is highly significant and similar for each subsequent cycle, in the range of 0.020-0.024 hour −1 (P = 6.17 × 10 −7 , 9.74 × 10 −7 , 2.41 × 10 −5 for cycles 2, 3, and 4, respectively). This means that the slope for 18ON drops from 0.035 hour −1 to 0.011-0.015 hour −1 (a drop of 57-68%), whereas for 6ON it drops from 0.06 hour −1 to 0.036-0.040 hour −1 (a drop of 33-40%). In other words, the higher baseline of the 6ON schedule means that it suffers a smaller relative loss in its ability to maintain cell kill. There is a significant interaction (P = 0.043) only between schedule and cycle 2, which indicates that the change from cycle 1 to cycle 2 is similar in magnitude for both schedules and that (broadly speaking) both schedules maintain their respective new slope for successive cycles. This clearly shows that, although both schedules show an absolute loss in cell kill on the second and subsequent doses, the relative loss is significantly less with the 6ON schedule compared with 18ON. During the course of 15 days, the cumulative effects could be expected to lead to enhanced overall cell kill on the 6ON schedule compared with the 18ON schedule. Sensitivity analysis. As shown in Figure S5C,D, several cell lines show an initial response followed by regrowth during the course of 72 hours of treatment. To study the effects of scheduling on different cell lines, we focused on two representative cell lines: Sensitive and resistant. Their differences are highlighted in Figure 3. We define a hypothetical cell line as "sensitive" when all drug concentrations tested result in complete cell kill after a 72-hour treatment period (Figure 3b). By contrast, a hypothetical cell line is called "resistant" if even a high concentration of the drug was not able to eliminate the cells and a survivor population emerges during 72 hours of continuous treatment ( Figure 3e). As expected from a biological perspective, a sensitive cell line will have lower average levels of Mcl-1 than a resistant cell line (compare Figure 3a and Figure 3d). Once these two representative cell lines were selected, we evaluated whether differential sensitivity on schedule has any impact on cell killing. As shown in Figure 3c, no benefit from scheduling was demonstrated in the sensitive cell line as all schedules tested resulted in complete cell kill. 
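The linear mixed-effects comparison described above (fixed effects for schedule, cycle, and their interaction, with the full description and code in the Supplementary Methods) could be set up along the following lines in Python; the synthetic slope table and the random-intercept structure are assumptions for illustration, not the authors' actual data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the slope table: one fitted "ON"-phase log-viability
# slope (1/hour) per replicate x schedule x cycle. Numbers are invented but
# shaped like the values quoted in the text (0.060 vs 0.035 1/h baselines,
# an absolute drop of roughly 0.02 1/h after the first cycle).
rng = np.random.default_rng(0)
rows = []
for rep in range(4):
    for sched, base in (("6ON", 0.060), ("18ON", 0.035)):
        for cycle in range(1, 5):
            drop = 0.022 if cycle > 1 else 0.0
            rows.append({"replicate": rep, "schedule": sched, "cycle": cycle,
                         "slope": base - drop + rng.normal(0.0, 0.003)})
df = pd.DataFrame(rows)

# Fixed effects for schedule, cycle and their interaction; random intercept per replicate.
fit = smf.mixedlm("slope ~ C(schedule) * C(cycle)", data=df, groups=df["replicate"]).fit()
print(fit.summary())
```

The coefficients of interest are the schedule main effect on the initial slope and the schedule-by-cycle interactions, mirroring the quantities reported above.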
However, in a more resistant cell line (Figure 3f), there is a potential benefit from using the 6ON schedule compared with the 18ON schedule as the former achieves complete cell kill, whereas the latter develops resistance at the concentrations tested. Resensitization of a survivor population. Because nongenetic markers of sensitivity, such as Mcl-1 protein expression, can be labile for a short timescale, 15 an important follow-on question in designing a treatment strategy is the following: What is the relaxation time? In other words, once a state of increased protein expression has been attained and drug treatment is stopped, how long does it take for the distribution of protein expression to return to the original baseline condition? Our model does not contain an explicit "relaxation" process; rather, it arises through the process of population turnover as cells with higher expression are replaced by those with expression matching the "founding population" distribution. The current model allowed us to mechanistically study this question based on the distribution of Mcl-1 protein expression and its movement over time under the effect of the drug. Figure 4a shows the predicted distribution of Mcl-1 expression in a simulated experiment in which cells are first incubated for 72 hours, after which time the drug is withdrawn. At first, during the incubation phase, starting after 24 hours, there is a rightward shift in expression (as shown in Figure 3), so that by 72 hours, the distribution of Mcl-1 has shifted significantly toward higher values. At this point, the drug is withdrawn, and gradually the distribution returns to its baseline. The process is largely complete by 120 hours. A more detailed look at Mcl-1 protein distribution in the time interval between 96 and 120 hours is shown in Figure 4b. DISCUSSION In this study, we modeled resistance in cancer cells to treatment with a proapoptotic compound as a shift in the distribution of protein expression. Drawing on the evolutionary similarities between mitochondria and bacteria, 23 we reasoned that a model considering only binding kinetics, cellular heterogeneity, and cell proliferation could describe many phenomena observed in vitro with an agent that induces the intrinsic mitochondrial apoptotic pathway, e.g., fractional cell kill, steep dose response, and heterogeneity-based resistance. After model calibration, we identified schedules with differential predicted activity and showed experimentally that indeed they have a significantly differential impact on the ability of AZD5991 to maintain its cell killing ability. Similar to all models, ours makes a number of simplifications, not the least of which are assumptions around the role of Mcl-1 protein levels in defining sensitivity to AZD5991 and identical binding kinetics of AZD5991 to Mcl-1 regardless of proteins already associated with Mcl-1. The apoptotic pathway is complex, encompassing numerous proteins with various levels of binding specificity toward their potential partners. Because the Mcl-1 expression level has been shown to drive resistance to cell-death agents, 24 we focus on the role of this protein in particular in driving resistance to a cell-death agent, although certainly others will also play a role. Resistance to treatment is a concern across many fields. 
Bacterial resistance 25,26 is often described using the concepts of "sensitive," "resistant," and "quiescent" populations without further explanation at the molecular level of the processes that give rise to these phenotypes. In oncology, it is commonly understood in terms of discrete mutations conferring a reduced response to particular targeted treatments by inhibiting binding to the target domain, causing constitutive (i.e., ligand independent) activation of the target pathway or other means. 27 However, recent work on resistance in bacterial populations has shown that it is not necessary to invoke arbitrary distinctions between sensitive and resistant cell populations. 23,28 Rather, a combination of chemical binding kinetics and cellular heterogeneity can explain observations such as a postantibiotic effect (duration of response after drug has washed out) or inoculum effect (fractional killing in larger populations) among others. Mitochondria were once free-living organisms that became symbiotically incorporated into eukaryotic cells in the distant past yet retain their own genomes and ribosomes and show strong genomic similarity to intracellular bacteria. 29,30 Furthermore, it has been shown experimentally that stochastic fluctuations in protein expression can yield rare cells in an otherwise genetically identical population, which are rendered resistant to treatment with anticancer drugs. 31 The work of ref. 31 shows that, at least for melanoma treated with kinase inhibitors, mutation is not the only pathway to resistance and that certain rare cells can become transiently preresistant, with further treatment initiating a kind of cellular reprogramming that leads to what the authors term "burn in" of a resistant phenotype. Although they do not speculate as to the mechanism of burn in, the authors allude to the idea that brief drug exposure might be insufficient for the process to complete, allowing cells to revert to the drug-sensitive state. We sought to bring together these three ideas: mitochondria as bacterial analogs, cell death limited by chemical kinetics, and nongenetic resistance mediated through exposure to anticancer agents. Given that cell-death agents such as AZD5991 target the intrinsic mitochondrial apoptotic pathway, we reasoned that similar methods might apply to cancer cells dependent on BH3 family proteins. We investigated whether fractional killing could be driven by cellular heterogeneity in protein abundance as distinct from genetic alterations. 32 In human cells, this abundance follows a long-tailed, right-skewed distribution, 33,34 commonly modeled as log-normal: a small number of cells (in the tail) display expression of specific proteins at a much higher level than the bulk of the population, allowing them to respond differently to treatment, ultimately leading to a phenotypically resistant population. To our knowledge, it has not yet been established how plastic this expression might be.
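As a small numerical illustration of how a long-tailed, log-normal spread in Mcl-1 abundance can by itself produce fractional kill and a steep dose response, consider the following sketch; the distribution parameters and the dose-to-threshold mapping are invented for illustration and are not taken from this study.

```python
import numpy as np

rng = np.random.default_rng(0)
mcl1 = rng.lognormal(mean=0.0, sigma=0.6, size=100_000)   # relative Mcl-1 level per cell

def surviving_fraction(dose, scale=3.0):
    """Fraction of cells whose Mcl-1 level exceeds a dose-dependent survival
    threshold after a fixed treatment window; the saturating threshold is a
    toy stand-in for drug binding, not a mechanistic model."""
    threshold = scale * dose / (1.0 + dose)
    return float(np.mean(mcl1 > threshold))

for dose in (0.1, 1.0, 10.0):                              # arbitrary dose units
    print(f"dose = {dose:5.1f}  surviving fraction ~ {surviving_fraction(dose):.4f}")
```

Even at the highest dose, a few percent of cells sit far enough out in the right tail to survive, which is exactly the kind of phenotypically resistant residue that the main text attributes to expression heterogeneity rather than mutation.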
It has been shown that protein half-life follows a wide distribution centered (in log-space) around approximately 100 hours, with a tendency toward longer stability in the mitochondria compared with other cellular compartments, 35 but to alter expression levels would require a change in either synthesis or degradation rates, neither of which is well characterized as yet. With a half-life on the order of an hour or so, Mcl-1 is anomalous even among Bcl-2 family proteins 36 and might be more susceptible than most to rapid changes in protein expression. In our model, the abundance of proteins in a given cell is fixed at the time of birth; we do not include a means by which cells can alter their protein expression during the cell cycle. Changes in the overall population distribution occur because of the "inheritance" rules, which govern the determination of protein expression at the time of cell division. Importantly, and in line with the work of Shaffer et al., 31 we never found it necessary to assume two distinct a priori populations (sensitive and resistant). Furthermore, we did not find it necessary to assume a separate resistance mechanism, such as through a compensating pathway. Rather, we found that a resistant phenotype emerges following drug treatment through the combination of survival of resistant, high-expressing cells and stochastic inheritance of that same resistant high-expressing phenotype by the daughter cells. Higher expression increases the chance of survival, but inheritance of higher expression is stochastic, with a finite chance that the daughter cells may have expression either lower or higher than the parental cell. Over successive generations, this will tend to widen the distribution of Mcl-1 expression within a population for which the selective pressure is mostly present (e.g., 18ON; see Figure 2). Yet, because the number of cells has declined over time, the distribution must simultaneously become flatter. Because cells with higher Mcl-1 expression are less sensitive, this is consistent with reduced cell kill on successive cycles. This suggests a mechanism by which cell populations may rapidly become resistant to therapy under near-constant pressure. Although we did not directly measure Mcl-1 expression in these studies, the role of increased Mcl-1 protein expression as a means of resistance to anticancer therapy is well supported. 37,38 Certainly there could be other explanations, such as a change in the affinity of the binding partners (e.g., Bim). However, our goal here was to provide a model for how such nongenetic resistance could arise and be maintained through little more than the known heterogeneity of cells and simple rules around inheritance upon cell division, and furthermore to illustrate how such a model could enable rapid triage of possible dose regimens for experimental validation. According to model predictions, the time it takes for a resistant population to revert to a population similar to the initial population is about one cell cycle (40 hours; Figure 4b). This observation is to be expected from the assumptions made in the model. We assumed that Mcl-1 expression levels in a given cell do not change over time (synthesis and degradation rates are constant at all times).
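A toy version of these inheritance rules makes the two key behaviors (the rightward shift in Mcl-1 under sustained drug pressure and the drift back to the founder distribution once the drug is removed) easy to see. Everything here is an assumption for illustration: the founder distribution, the survival threshold, the inheritance noise, and in particular the drug-free rule of redrawing daughters from the founder distribution, which is only a crude stand-in for the drug-dependent inheritance rules detailed in the Supplementary Information.

```python
import numpy as np

rng = np.random.default_rng(1)
FOUNDER_MU, FOUNDER_SIGMA = 0.0, 0.5                 # founder log-normal parameters (assumed)

def generation(pop, drug_on, noise=0.25, kill_threshold=1.2, cap=50_000):
    """One cell-cycle step. Under drug, low-expressing cells die and each
    surviving cell's daughters inherit the parental level with multiplicative
    noise; without drug, daughters are simply redrawn from the founder
    distribution, so turnover alone relaxes the population back to baseline."""
    if drug_on:
        survivors = pop[pop > kill_threshold]
        daughters = np.repeat(survivors, 2) * rng.lognormal(0.0, noise, size=2 * survivors.size)
    else:
        daughters = rng.lognormal(FOUNDER_MU, FOUNDER_SIGMA, size=2 * pop.size)
    return daughters[:cap]

pop = rng.lognormal(FOUNDER_MU, FOUNDER_SIGMA, size=50_000)
print(f"founder median Mcl-1: {np.median(pop):.2f}")
for _ in range(3):                                   # ~3 generations under drug
    pop = generation(pop, drug_on=True)
print(f"after selection:      {np.median(pop):.2f}")
for _ in range(2):                                   # drug-free turnover
    pop = generation(pop, drug_on=False)
print(f"after washout:        {np.median(pop):.2f}")
```

Because relaxation in this picture happens only through cell division, its timescale is set by the cell-cycle time, consistent with the roughly one-cell-cycle (40-hour) relaxation described above.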
Because the tumor models used in this work exhibit rapid doubling times in vitro (e.g., the 40 hours we use in this work) and in vivo (the clinical doubling time of AML may be as short as 3 days 39 ), the lack of a mechanism to alter expression during the lifetime of a single cell does not seem to us to be a severe limitation. As a result, founder cells start with a particular protein level at birth. However, upon division, the daughter cells have characteristics that are determined both by their parents (as the baseline for expression) and by the environment (the inheritance rules depend on the presence of drug). Consequently, a resistant population that is generated by selective drug pressure needs to be replaced because its members do not evolve. When that pressure is removed, their replacement by daughter cells will cause the distribution of protein expression to drift back to the baseline "founder" distribution. In this way, the relaxation time is an emergent property of the cell cycle time and the degree of change from the founding distribution. It is apparent that on a 24-hour cycle, once the resistant phenotype emerges, there will never be sufficient time for protein expression to relax, highlighting the importance (under our model) of avoiding it in the first place through short exposure. The scope of this work has been limited to in vitro systems and to the timescale of a few days, over which such systems are viable. An ideal outcome of our experimental work would have been to show a durable, statistically significant interaction between schedule and cycle because that would clearly indicate increased divergence between the efficacy of the two schedules. What we have found is that there is a statistically significant difference between the two schedules on the second dose, where the 6ON schedule shows a relatively smaller loss in cell killing than the 18ON schedule, and that this difference is maintained (but not augmented) on successive cycles. This finding supports our hypothesis, although further work is needed for full validation. A logical next step would be to investigate whether such differences also hold in vivo, enabling studies to run to a longer timescale. This would likely need to be carried out in disseminated models rather than traditional subcutaneous xenograft models because the physico-chemical properties required of a compound to disrupt the protein-protein interactions can also lead to long retention time in solid tumors, which would make it hard to test the hypothesis. The shorter duration (6 hours) could be achieved through intravenous infusion of a short half-life compound (so that exposure does not persist much beyond the end of infusion). Although ref. 14 does not quote a half-life, it is readily seen from Figure S6C of that article that the plasma half-life is a matter of hours, which makes such short exposures achievable; the compound described in ref. 40, by contrast, can be delivered orally with a half-life in excess of 12 hours. In clinical situations where maximal control of duration and degree of exposure is required, intravenous administration is routine. Infusions are also possible in rodents 41 using commercially available solutions. In summary, this study of the role of scheduling in BH3-mimetic-induced cancer cell death provides a novel framework on which to build understanding of fractional cell killing of cancer cells in vitro when treated with an Mcl-1 inhibitor. It may be applied to other targets in the intrinsic apoptosis pathway, such as Bcl-2 and Bcl-xL.
Specifically, after building a model including cell cycle, the intrinsic apoptotic pathway, drug pharmacology, and reasonable assumptions about how protein expression is inherited across generations, we identified two schedules showing differential cell kill rates. A linear mixed effects analysis of three separate experiments, which show statistically significant differences in retention of cell kill rates over time, provided supporting evidence for these counterintuitive predictions. Not only were these scheduling predictions first made using the model but also once validated, the model provided a platform on which to interrogate and build hypotheses for mechanisms underlying these observations. Together, this study highlights the value of mechanistic multiscale models in the discovery of optimal regimens for control of resistance. These findings enhance our understanding of how best to use BH3 mimetics, in particular how scheduling can affect the emergence of resistance in cancer cells as a consequence of selection within a heterogenous population. To the best of our knowledge, this is the first model to incorporate systems pharmacology of an Mcl-1 inhibitor into the intrinsic apoptotic pathway. Supporting Information. Supplementary information accompanies this paper on the CPT: Pharmacometrics & Systems Pharmacology website (www.psp-journal.com).
2020-08-31T13:04:43.790Z
2020-08-29T00:00:00.000
{ "year": 2020, "sha1": "b100cfaf587b8af09218bae694293a237cc8558c", "oa_license": "CCBYNCND", "oa_url": "https://ascpt.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/psp4.12552", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "61bb90cdb5928fbb2a96e2cd17093fd9f6d659f8", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233858877
pes2o/s2orc
v3-fos-license
Influx potential and sequestration of CO2 in cement-based material towards establishing a proactive measure for combatting structural deformations The effect of carbon dioxide sequestration by sandcrete structures was investigated as a means of reducing CO2 emissions. Experiments were conducted to determine the concentration of trioxocarbonate IV (CO3^2-) in sandcrete samples as well as its effect on soil pH. The results showed that the presence of CO3^2- in sandcrete samples evidenced the process of carbonation in sandcrete structures and other cement-based materials, which provides an alternative means of CO2 sequestration. Also, the concentration of CO3^2- varied with height and horizontal distance within sections of the sandcrete structure. The soil close to some sections of the sandcrete structure was acidic, with pHs of 6.14-6.35, which gave room for enhanced leaching that seriously undermines the strength of the sandcrete structure. The percentage concentration differential in the horizontal and vertical directions was found to vary widely, from 5 to 46%. This constitutes a potential danger, i.e. cracking and possible collapse of the structure. The results of calculated diffusivity values and concentration gradients of the carbonate ion conformed with results obtained using a predictive model, which helped in monitoring the migration patterns of CO3^2- in the sandcrete structure. Introduction In developing nations such as Nigeria, ninety percent of buildings, including industrial structures, comprise sandcrete blocks [1]. Currently, efforts to tackle climate change are on the increase globally [2]. Anthropogenic operations (gas flaring, bush burning and agricultural activities) within such proximities release CO2 into the atmosphere, which affects these structures as well as causing global warming [3][4][5]. Cement-based structures consist of materials such as sand, stones and gravel, which use cement as a binder to create a structure suitable for application in construction works. However, it has been proven that cement-based structures have the potential for atmospheric carbon dioxide sequestration [6]. Upon hydration, cement produces hydroxides that serve as binding agents which help to maintain the strength of cement-based structures [7]. An increase in atmospheric CO2 leads to ocean acidification, which may impose drastic consequences on marine ecosystems; according to the literature, carbon mitigation is one of the newest strategies in the world's energy transition schemes [8]. Carbon dioxide sequestration is a geo-engineering technique for long-term storage of CO2 and other forms of carbon [6]. CO2 sequestration includes the storage and capture of carbon and its compounds. This refers to large-scale, permanent artificial capture and sequestration of industrially produced CO2 using subsurface saline aquifers, reservoirs, ocean water and aging oil fields [9]. Pre- and post-combustion systems for capturing CO2 were previously discussed [6,10]. Carbonation reactions are exothermic reactions involving simple reactions between binary oxides such as MgO and CaO [11]; carbonates have a lower energy state than CO2. According to Huntzinger et al. [12], mineral carbonization is one of the only forms of carbon sequestration that is permanent/stable. The disadvantage of other forms, such as ocean, geologic and terrestrial sequestration, is largely the potential for leakage over a long period of time.
This makes the sequestration process ineffective if the trapped CO2 rapidly escapes back into the atmosphere. Several models were discussed by Ramezaniapour et al. [13] and Odigure [7] on the carbonation of cement-based structures. However, most of these works focused on determining the depth of carbonation and the physical as well as chemical changes of the cement-based structures. In an earlier investigation, Odigure [7] presented a diffusion model for determining the diffusion of gaseous pollutants in sandcrete blocks. The shortcoming of the model is that other reactions were not taken into consideration (i.e. the formation of calcium bicarbonate from CaCO3 and its subsequent leaching from the wall to the ground). Theory Carbonation is a chemical reaction in which solid products of cement hydration, primarily calcium hydroxide or calcium silicate hydrates (CSH), calcium aluminate hydrates and calcium sulfoaluminate hydrates (mainly ettringite) in cement-based materials, react with carbonic acid to form carbonates [6]. A technique on how to sequester carbon dioxide was developed by Klaus Lackner [14]. This involves the combination of carbon dioxide with Fe- and Mg-rich serpentine rocks to produce CO3^2- by the reaction in (1). The chemical reactions involved in this process are as given in reactions (2) and (3). The formed cement hydrates are brought about by hydration in sandcrete structures when they come in contact with acidic gases such as CO2 and SO3 in the atmosphere. Hydrated cement contains calcium hydroxide, calcium silicate hydrate, calcium aluminate hydrate, hydrated ferrites and calcium monosulphoaluminate. The rate is dependent on the concentration of these gases in the atmosphere and their relative atmospheric humidity. Long exposure of sandcrete structures to aggressive acids, salts and alkalis enhances their physicochemical as well as their mechanical deterioration. Therefore, this research investigates the possibility of carbon dioxide sequestration in sandcrete structures as well as the effects of carbonation on sandcrete structures. Products arising from carbonation reactions induce important physicochemical changes in these sandcrete structures [9]. One major effect of carbonization in sandcrete structures is the loss of aesthetic value. Carbonized spots are not localized but spread across the structures of interest. An increase in atmospheric CO2 contributes significantly to climate change and global warming as well as the formation of acid rain, which enhances corrosion of reinforced steel in concretes and roofing sheets [15,16]. The potential of this process is yet to be exploited, perhaps due to a lack of understanding of the nature and possibility of CO2 sequestration in cement-based materials [7]; hence the motivation for this research, which entails the investigation of the mechanism of carbonation of sandcrete structures in relation to the deterioration of sandcrete structures. Cement hydrates formed as products of hydration in sandcrete structures can react with acidic gases such as CO2 and SO3 present in the atmosphere. This reaction occurs internally and may have significant adverse effects. The rate of reaction is dependent on the concentration of these atmospheric gases and their relative humidity. Long exposure of sandcrete structures to aggressive media such as acids, salts and alkalis enhances physicochemical and mechanical deterioration of the structures [17,18].
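As a simple point of reference for the sequestration argument (standard stoichiometry only, not a result of this study), the carbonation of portlandite, Ca(OH)2 + CO2 -> CaCO3 + H2O, fixes a readily computable mass of CO2 per unit mass of hydrate:

```python
# Molar masses in g/mol (standard values)
M_CAOH2, M_CO2, M_CACO3 = 74.09, 44.01, 100.09

def co2_uptake(mass_caoh2_g):
    """Mass of CO2 (g) bound and CaCO3 (g) formed when a given mass of Ca(OH)2
    carbonates completely, assuming the 1:1 reaction Ca(OH)2 + CO2 -> CaCO3 + H2O."""
    mol = mass_caoh2_g / M_CAOH2
    return mol * M_CO2, mol * M_CACO3

co2_g, calcite_g = co2_uptake(100.0)
print(f"100 g Ca(OH)2 can bind ~{co2_g:.0f} g CO2, forming ~{calcite_g:.0f} g CaCO3")
```

Real cement-based walls carbonate only partially and also through the calcium silicate hydrate phases, so this upper-bound figure is merely indicative of the sequestration potential discussed above.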
Consequently, this research investigates the carbon dioxide sequestration potential and its effects on the chemical and mechanical properties of sandcrete structures at Covenant University by using a model approach. Also, an estimation of the percentage concentration of CO3^2- in sandcrete samples from cracked walls of equal height and depth at a mapped location (the Covenant University fence) was carried out in order to predict the levels of structural deterioration at such sites. The concentrations of CO3^2- and calcite present in each of the samples, as well as the pH of the soil at different sections of the sandcrete structure, were also determined. The acid concentration in the pulverized sandcrete was calculated using: concentration of HCl (g/L) = molar mass of HCl x molar concentration of HCl (8); mass of HCl that reacted with the pulverized sandcrete (g) = volume of HCl (L) x concentration of HCl (3.65 g/L) (9); mass of calcite (g) = mass of HCl used x molar mass of CaCO3 (g/mol) / molar mass of HCl (10). The mass of CO2 adsorbed (g/L) was calculated as m(CO2) = m(HCl) x M(CO2) / M(HCl), where M(CO2) is the molar mass of CO2 (g/mol) and M(HCl) = 35.5 g/mol. Equation 6 was used to determine the parameters for the carbonation process: k^2 - Pk - q = 0 (11), where P (the concentration gradient) = u/(De)L, u = z/t with z = 0.94 m and t = time (h), and q = (De/(De)L) ln r. Solving equation (11) as a quadratic expression gives the possible values of k. Methods Sandcrete samples were obtained from six sections of a fence at Covenant University, Ota, Ogun State, Nigeria. The samples were obtained from long-exposed sandcrete blocks of the fence by striking a hammer into the blocks that make up the fence at the top (2.82 m section), middle (1.88 m section) and ground level (0.94 m section); the particles falling off were collected in containers. These samples were taken from six sections of the sandcrete fence at equal height intervals and at equal depth (i.e. 20 mm). By means of a measuring tape, the horizontal, middle and ground-level points of each section of the fence were marked out. Sample preparation Each of the sandcrete samples discussed in section 3.2 was ground to fines and sieved with a 300 µm mesh to obtain very fine particles. In order to avoid aeration or absorption of moisture, the samples obtained were stored in transparent airtight moisture-proof containers prior to testing. Estimation of calcite and CO3^2- concentration by quantitative measurement To estimate the concentration of carbon dioxide and calcite in the samples, back titration was employed. 0.1 M hydrochloric acid was prepared by dissolving 8.33 mL of concentrated HCl in 1 L of distilled water in an amber bottle. 0.5 M NaOH was prepared by dissolving 20 g of NaOH pellets in 1 L of distilled water in a reagent bottle. 3 g of the pulverized carbonated sandcrete sample was transferred into a conical flask and 20 mL of 0.1 M HCl was added. The mixture was shaken thoroughly and observed for CO2 release. 3 drops of phenolphthalein indicator were then added. At the end of CO2 release, 0.5 M NaOH was measured out and titrated against the carbonated sandcrete solution so as to neutralize the HCl therein. Titration of the samples was done thrice in order to abate inaccuracies. The appearance of a pale permanent pink color signified the end point and the titre values were recorded. Soil pH at the wall sections of the sandcrete fence 20 g of each soil sample was weighed into a beaker. 30 mL of distilled water was added and the mixture was stirred properly.
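Before turning to the pH measurements, the arithmetic implied by the back titration and by equation (11) can be sketched as follows. The NaOH titre volume and the concentration gradient used here are invented example numbers, and the calcite conversion uses the usual 2:1 HCl:CaCO3 stoichiometry (CaCO3 + 2HCl -> CaCl2 + H2O + CO2), so this is an illustration of the style of calculation rather than a reproduction of equations (8)-(10) as printed.

```python
import numpy as np

M_HCL, M_CACO3, M_CO2 = 36.46, 100.09, 44.01     # molar masses, g/mol

def back_titration(v_hcl_ml=20.0, c_hcl=0.1, v_naoh_ml=2.5, c_naoh=0.5):
    """HCl consumed by the carbonate = HCl added - HCl left over (found from the
    NaOH back-titre); the 2.5 mL titre here is an invented example value."""
    mol_hcl_used = v_hcl_ml / 1000 * c_hcl - v_naoh_ml / 1000 * c_naoh
    mass_calcite = mol_hcl_used / 2 * M_CACO3     # CaCO3 + 2 HCl -> CaCl2 + H2O + CO2
    mass_co2     = mol_hcl_used / 2 * M_CO2       # CO2 bound as carbonate
    return mass_calcite, mass_co2

calcite_g, co2_g = back_titration()
print(f"calcite in the 3 g sample ~ {calcite_g*1000:.0f} mg, bound CO2 ~ {co2_g*1000:.0f} mg")

# Equation (11): k^2 - P*k - q = 0, solved as an ordinary quadratic in k.
P, q = 1.5, 0.0        # example gradient (g/L.m); q = 0 as assumed in the paper
print("possible k values:", np.roots([1.0, -P, -q]))
```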
The measuring probe of the pH meter, which was connected to an electronic meter, was then inserted in the mixture. Data were read off from the panel of the pH meter. Results At the 0 and 8.3 m points along the horizontal axis of the fence, the concentration of CO3^2- is highest at the middle section of the fence, with values of 1.38 and 1.38 g/L respectively (Figure 1). At the 3.8 m point along the horizontal, the highest CO3^2- concentration (1.34 g/L) was observed at the top section of the fence, while the ground-level section had the lowest concentrations of carbonates at the aforementioned sections, with values of 1.03, 1.05 and 0.83 g/L at the 0, 3.8 and 8.3 m sections respectively. At the 10.48, 14.25 and 17.21 m points along the axis of the fence, the CO3^2- concentrations were highest at ground level, with corresponding values of 0.63, 1.38 and 1.5 g/L respectively, and the lowest carbonate concentrations at such points were 0.26 (ground level), 0.22 (top level) and 1.43 g/L (top section of the fence) above its horizontal axis. The high, medium and low carbon contents recorded at several sections of the fence are a result of the low influx and dissolution of carbon in precipitation or moisture in the atmosphere, which may have been transported as acid water or trioxocarbonate IV acid via diffusion into the walls of the fence. Also, the ease with which carbonation occurred in the fence is a measure of the type of cement used in the construction of the fence: the manufacturer may have produced blended Portland cement using carbonaceous materials like limestone as an additive. This may also result from differences in the rate of leaching, which is connected to the porosity of the blocks. Furthermore, the production of CaCO3 by carbonation can result in the gradual clogging of the pores of the sandcrete [6]. Leaching occurred faster at more porous sections than at less porous sections of the fence; hence, the extent of block porosity influenced the rate of carbonation. This indicates that porosity is not the same at all points in the sandcrete wall. Figure 2 gives an illustration of the soil pH at different sections of the fence. The highest pH (7.63) was recorded at the 3.8 m section of the soil holding the fence in place. The soil pH was found to be lowest, i.e. 6.24, at the 0 and 8.3 m sections, with 6.75 at the 17.21 m section, 6.35 at the 10.48 m section and 7.5 at the 14.24 m section. This is an indication that the soil pH was either weakly acidic or weakly alkaline. Since carbonic acid is a weak acid, it justifies the recorded pHs in the acidic range; however, owing to the presence of minerals such as K, Ca, P, etc., the salts of these elements may have resulted in the basic pHs recorded above the pH of 7 (neutrality). Owing to the presence of minerals in the soil and conversion of the dissolved carbonate to salts, pH variation possibly set in, which gave rise to soil pH values > 7. The migration pattern of CO3^2- horizontally showed that there are sections that tend to allow more transfer of the carbonate to the soil. This might be due to differences in the soil chemistry. If the pH of the surrounding soil is acidic, leaching is enhanced, and if it is alkaline, the rate of leaching is reduced, thereby retaining more of the carbonates in the sandcrete wall. CO3^2- concentration gradient and diffusivity in the sandcrete wall CO3^2- concentration gradient The estimated carbonate concentration gradients at selected sections/heights above the ground level of the fence are shown in Table 1.
The highest carbonate concentration gradient was obtained at the lowest height above ground level. This shows that the carbonate concentration gradient, or driving force, is highest towards the bottom of the fence, giving a value of 1.5 g/L.m at the 1.88 m height above the ground level/soil holding the fence in place. The lowest carbonate concentration was seen at the highest of all three heights considered at the different (six) horizontal sections; hence, this justifies the fact that the carbonate concentration gradient dropped from the 0 to 14.24 m points whereas, beyond the 17.21 m point, the value increased significantly. A close look at the extreme values of Table 1 shows that the terminals/extremes of the fence stored more carbonate than any other part of the fence, because the values are higher at both ends (i.e. the 0 and 17.21 m points) than at any other point on the fence. This is due to those areas being closed ends where the carbonate radical diffuses and thus gets stored because there are no extensions/points beyond those regions. Estimated CO3^2- diffusivity Based on Fick's law of diffusion, concentration gradient is inversely proportional to diffusivity; hence, the extreme points of highest concentration gradients have the least diffusivities (Table 2). The extent of carbonation was investigated by calculating diffusivity (De)L values according to the mathematical model. (De)L represents the dispersion of the dissolved CaCO3 as Ca(HCO3)2 in the block-pore volume. It also represents the effective diffusivity of CO3^2- ions into the sandcrete fence. It was approximately determined from the concentration gradient "P". The estimated negative values are indicative of a decrease in the concentration of carbonates in the reference sections of the sandcrete fence. The higher the diffusivity, the lower the expected mechanical strength of the sandcrete fence [7]. A high rate of carbonate diffusivity is aided by the temperature of the environment; hence, an environment of high temperature increases the rate of diffusion. Also, sections with high diffusivity gave lower CO3^2- concentration gradients, or lower diffusion rates of the carbonate ion. Furthermore, the diffusivity of CO2 is 10,000 times smaller in water than in air. This means that diffusion is faster in air than in water; therefore, carbonation occurs faster in air than in water. According to Odigure [14], this parameter gives the extent of reaction between the cement mineral hydrates and CO2 and their leaching from the structure. De represents the actual amount of CO2 molecules that diffused into and reacted with the cement minerals in the sandcrete matrix. The parameter r represents depth. This varies, but for this research work q was assumed to be zero, since samples were collected at equal height and at a constant depth of 20 mm. The depletion of CO3^2- within the sandcrete will lead to a fall in the binding properties of any formed Ca(OH)2 and may cause a defect or crack in the structure. The observed horizontal changes in carbonate concentration differ across sections of the fence, which will exhibit a variation in pH and in the percentage composition of their carbon constituents. Furthermore, these changes can lead to internal stress within and along the fence wall, which results in crack formation in the fence structure. Based on the investigation carried out by Odigure [14], the results obtained in this study in terms of calculated diffusivities and concentration gradients of CO3^2- are in close agreement.
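A minimal sketch of the kind of arithmetic behind Tables 1 and 2 is given below; the two concentrations and the exposure time are invented placeholders (only the sampling heights are taken from the text), so the numbers produced have no quantitative connection to the study's results.

```python
# Example carbonate concentrations (g/L) at two sampling heights (m) of one wall section.
c_low, c_high = 1.38, 1.03          # invented example values
z_low, z_high = 0.94, 2.82          # sampling heights used in the study (m)

gradient = (c_low - c_high) / (z_high - z_low)        # g/L per metre of height
print(f"carbonate concentration gradient ~ {gradient:.2f} g/L.m")

# Following the paper's working (eq. 11 definitions): u = z/t and P = u/(De)L,
# so an effective dispersion term can be back-calculated from an assumed gradient.
t_hours = 5 * 365 * 24              # assumed exposure time of 5 years, in hours
u = 0.94 / t_hours                  # m per hour, with z = 0.94 m as in the text
De_L = u / gradient                 # units follow directly from the paper's definitions
print(f"(De)L ~ {De_L:.2e} (m^2.L)/(g.h)")
```

Because P and (De)L are inversely related in this working, sections with steep measured gradients necessarily come out with small effective diffusivities, which is the Fick's-law argument made above.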
Figure 3 is an illustration of the irregular/random variation of soil pH with height across the fence. At the 1.88-2.88 m height above the ground, within the soil pH range of 6.24-7.5, the diffusivity of CO3^2- is lowest, with corresponding values in the range of -6.56 to 5.46 m^2/s, and hence, based on Fick's second law of diffusion, the concentration of the carbonate is highest; it is lowest at pHs of 6.35 and 7.63, where the resultant diffusivities are 6.69 and 5.46 m^2/s. However, within the 0.94-1.88 m height along the fence, the pHs are 6.35 and 7.5 with corresponding diffusivities of -1.65 and 57.46 m^2/s respectively, which are the least recorded values; hence, the concentrations of CO3^2- are expected to be highest here. Also, within the measured pHs of 6.14 and 7.63, the estimated diffusivities are 1.73 and 3.31 respectively, which are expected to give the lowest CO3^2- concentrations. Based on the results, the soil pH range is weakly acidic to weakly alkaline, with varying values which influence the estimated diffusivities. Conclusion The presence of CO3^2- in sandcrete samples taken from the fence confirms the effect of carbonation in the sandcrete structure, which gives a clue to the reason for structural failures in fences. The CO3^2- concentration varied with height and horizontal distance within sections of the sandcrete fence. pH ranges were judged to be in the weakly acidic and weakly alkaline ranges, thus encouraging leaching of the soil and weakening of the fence, which resultantly undermines the strength of the fence. The estimated percentage carbonate concentration differential in the horizontal and vertical directions was found to vary widely, from 5 to 46%, which constitutes a potential danger to the fence in terms of subsequent cracking and possible collapse. The predictive model gave diffusivity values comparable with those obtained in the literature. Furthermore, at the 1.88-2.88 m height above the ground, the concentration of the carbonate was highest, whereas it was lowest at pHs of 6.35 and 7.63, with resultant diffusivities of 6.69 and 5.46 m^2/s respectively, while at the 0.94-1.88 m height along the fence, the recorded pHs were 6.35 and 7.5 with corresponding diffusivities of -1.65 and 57.46 m^2/s respectively; these represent the least recorded diffusivities and are expected to be points having the highest concentrations of CO3^2-. Recommendations Based on the results of this study, it is necessary to carry out routine CO2 measurements around Covenant University fences in order to ascertain their deformation potentials; this is aimed at developing adequate carbon capture strategies (adsorption systems/CO2 traps) for CO2 mitigation. Also, in the year 2019, a collapse of one of the Covenant University fences occurred, and this is clear evidence that the fence had been weakened at some of its sections owing to the influx of acid rain and CO2. Furthermore, an investigation of the mechanism and kinetics of the carbonation process of cement needs to be carried out in order to clearly understand the sequestration process of cement or sandcrete structures. Analysis involving the use of Scanning Electron Microscopy, Thermal Gravimetric Analysis and X-ray Diffraction can be carried out to investigate the carbonation profile, the mineralogy and the morphology of the studied sandcrete structure. The effect of temperature on cement carbonation should also be investigated.
Analysis of the deposition and penetration rates of other gaseous pollutants such as SO2 and NO2 may be considered in order to ascertain their influence on the carbonation processes of sandcrete structures. The carbonation of other cement-based materials such as concrete can be investigated and the results compared with those of this study. The use of an Atomic Absorption Spectrophotometer (AAS) is necessary so as to ascertain the types of metal ions, especially Ca2+ ions, in sandcrete samples, as this helps to predict the extent of carbonation.
2021-05-07T00:04:32.509Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "eaa8851700558f593c6636112a8043552ce38ff0", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/1036/1/012047", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6040e707892d1d4a00b6b2c816432ca3abbfa508", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
208321413
pes2o/s2orc
v3-fos-license
High-sensitivity C-reactive protein as an indicator of ischemic stroke in patients with isolated acute vestibular syndrome Abstract Ischemic strokes presenting with isolated acute vestibular syndrome (AVS) are not rare and still are challenging for diagnosis. In this retrospective study, we aimed to investigate the association of high-sensitivity C-reactive protein (hs-CRP) with stroke in patients with isolated AVS. A total of 217 patients with isolated AVS within 3 days of symptom onset were included. Serum hs-CRP levels were assessed within 24 hours of admission. The relationship between hs-CRP levels and stroke in patients with AVS were analyzed using univariate and multivariate models. The results showed that hs-CRP levels were significantly higher in infarction patients than that in noninfarction group. The stroke occurrence was increased with increasing quartile levels of hs-CRP. The highest quartile level of hs-CRP was associated with a higher occurrence of stroke compared with the lowest quartile group (adjusted odds ratio [OR], 4.099; 95% confidence interval [CI], 1.272–13.216; P = .018). We also found that male gender (adjusted OR, 5.635; 95% CI, 2.212–14.352; P < .001) and increased low-density lipoprotein cholesterol (LDL-C) (adjusted OR, 2.543; 95% CI, 1.175–5.505; P = .018) were independently associated with stroke in patients with AVS. In addition, using the receiver operating characteristic curve analysis, our study yielded a threshold value of hs-CRP at 1.82 mg/L, and demonstrated that combining hs-CRP with LDL-C improved the discriminatory ability to identify stroke patients with AVS (area under the curve of the combined model: 0.753; 95% CI = 0.684–0.821; P < .001). Hs-CRP may be a useful indicator of stroke in patients with AVS. More attention should be paid to the patients with elevated hs-CRP level. Introduction Acute dizziness or vertigo, accompanied or not by nausea or vomiting, and gait unsteadiness, is characteristic of acute vestibular syndrome (AVS). [1] The origins of AVS are briefly ascribed to 2 parts, the peripheral and central disorders involving the vestibule, cerebellum, and their connective nerve fibers. [2] AVS with focal neurologic deficits are strongly indicative of posterior circulation stroke. [3] However, it may be challenging for diagnosis when isolated AVS present as onset symptoms of stroke. It has been reported that approximately 25%, more often than previously believed, of the posterior circulation stroke patients present with isolated AVS. [4] Furthermore, early magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI) are occasionally false-negative in those stroke patients with AVS. [5] The short time interval from the symptom onset to MRI scans and relatively small lesions might explain the false-negative results. [6] Another useful tool to identify stroke patients with AVS is HINTS (head impulse test, nystagmus pattern, and test of skew), a bedside examination which has been proved to be more sensitive than MRI scan for the diagnosis of stroke with AVS. [7] However, not all examiners are qualified for the HINTS evaluation, and some patients with transient vertigo have recovered from nystagmus before HINTS examination or some other patients with severe and persistent vertigo usually cannot complete the examination. [8] Therefore, the limitations of neuroimaging and HINTS assessment highlight the importance of seeking a readily available biomarker to enhance early recognition and diagnosis of stroke in patients with AVS. 
Inflammation is widely recognized to play a crucial role in initiation and progression of atherosclerosis. High-sensitivity Creactive protein (hs-CRP) has emerged as one of the important inflammatory markers associated with cerebrovascular disease. [9] Previous large studies have indicated that elevated CRP was associated with increased risk of first-ever ischemic stroke [10] and recurrence of stroke. [11] Furthermore, hs-CRP is predictive of functional outcome and future mortality in stroke patients. [12] However, the association of hs-CRP with the stroke in patients with isolated AVS has not been studied. Thus, we would aim to explore the hs-CRP level in patients with AVS and analyze the relationship between hs-CRP and ischemic stroke in such special patient population, in an attempt to provide a useful indicator for early recognition and diagnosis. Subjects We retrospectively selected patients with isolated AVS hospitalized in the neurology department of Shijingshan Hospital from September 2016 to January 2018. The study was approved by the Ethics Committee of Shijingshan Hospital and the written informed consent was obtained from the patients or their legal proxies. The inclusion criteria were as follows: age ≥18 years; acute onset of vestibular syndrome (admitted within 3 days after symptom onset); lack other symptoms or signs of focal neurological impairment. The exclusion criteria were as follows: vertigo caused by benign paroxysmal positional vertigo (BPPV), aural or ophthalmic disease, medication/drug intoxication, psychological disorder, or systemic diseases such as infection, hypoglycemia, and cardiac insufficiency. Research methods The following demographic and clinical data were collected: age, gender, past medical history, smoking or drinking status, serum hs-CRP, and other common laboratory index. Fasting blood samples were collected from each patient within 24 hours of admission. Level of hs-CRP was assessed using an immunoturbidimetric assay in the certified clinical laboratory in Shijingshan hospital. A cerebral computerized tomography (CT) scan was conducted for each patient within an hour of admission to the hospital. Patients underwent a cerebral MRI (including T1 weighted image, T2 weighted image, DWI, and fluid attenuated inversion recovery sequence) within 48 hours of hospitalization, and patients with the contraindication to MRI received a repeat cerebral CT examination. The presence of acute ischemic stroke was confirmed by high DWI signal and low apparent diffusion coefficient signal combined with other MRI sequences or low density on CT scan according to World Health Organization criteria. Patients were divided into infarction group and noninfarction group. Strokes were etiologically categorized according to the Trial of ORG10172 in Acute Stroke Treatment (TOAST) classification. Statistical analysis Data were expressed as number (n) and percentage for categorical variables, and median (interquartile ranges) for continuous variables. The Chi-squared and the Mann-Whitney U test were performed for categorical and continuous variables respectively to compare the 2 groups. Multivariate logistic regression model was used to analyze the association of hs-CRP with stroke in patients with AVS, after adjustment for the possible confounders and variables with P < .10 in the univariate analysis. Receiver operating characteristic (ROC) curves analysis was used to test the discriminatory ability of the hs-CRP. P < .05 was considered statistically significant. 
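As a rough illustration of the quartile-based multivariate logistic regression described in this section, the workflow might look like the sketch below. The cohort here is synthetic stand-in data, and the column names, covariates and effect sizes are assumptions for illustration, not the study's data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in cohort (n = 217) illustrating the analysis structure.
rng = np.random.default_rng(1)
n = 217
df = pd.DataFrame({
    "stroke": rng.binomial(1, 0.26, size=n),          # 1 = infarction, 0 = no infarction
    "age":    rng.normal(65, 11, size=n),
    "male":   rng.binomial(1, 0.56, size=n),
})
df["hscrp"] = rng.lognormal(0.2 + 0.5 * df["stroke"], 0.7)           # mg/L (invented)
df["ldl_high"] = rng.binomial(1, 0.3 + 0.2 * df["stroke"], size=n)   # LDL-C > 2.6 mmol/L

# hs-CRP quartiles with the lowest quartile (Q1) as reference, then adjusted odds ratios.
df["hscrp_q"] = pd.qcut(df["hscrp"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
fit = smf.logit("stroke ~ C(hscrp_q, Treatment('Q1')) + age + male + ldl_high", data=df).fit()
print(np.exp(fit.params).rename("adjusted OR"))
print(np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"}))
```

Exponentiating the logistic coefficients gives the adjusted odds ratios and confidence intervals of the kind reported in the Results below.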
Statistical analysis was conducted with SPSS 24.0 (SPSS Inc, Chicago, IL). Patient demographics and baseline characteristics Of 243 patients with AVS hospitalized during the study period, a total of 217 patients were included in the final analysis, after excluding patients with BPPV (12 patients), psychological disorder (3 patients), infection (5 patients), and hypertension (6 patients). There were 56 patients diagnosed with cerebral infarction, with a mean age of 68.8 ± 11.0 years (range, 49-85 years), of whom 45 were men, a higher percentage compared with the noninfarction group (80.4% vs 47.8%, P < .001). Moreover, the infarction cases had a higher prevalence of hypertension history (62.5% vs 47.2%, P = .049) and a higher level of hs-CRP compared with the noninfarction group (2.71 [1.32-4.09] vs 1.26 [0.64-2.46] mg/L, P < .001). The levels of leukocyte, neutrophil, low-density lipoprotein cholesterol (LDL-C), and HbA1C were also significantly higher in stroke patients than in those without stroke (P < .05). There was no statistically significant difference between the 2 groups regarding other baseline characteristics. According to the TOAST classification, the small artery occlusion (SAO) subtype accounted for a substantial part of the infarction group (Table 1). Demographics and characteristics of patients according to hs-CRP quartiles Hs-CRP values were divided into 4 levels by quartiles as follows: quartile 1 (Q1), <0.73 mg/L; quartile 2 (Q2), 0.73 to 1.50 mg/L; quartile 3 (Q3), 1.50 to 3.06 mg/L; and quartile 4 (Q4), >3.06 mg/L. Occurrence of stroke increased with increasing hs-CRP quartiles (P < .001). Besides this, histories of diabetes mellitus and levels of erythrocyte sedimentation rate (ESR), LDL-C, and HbA1C were significantly higher in patients with higher hs-CRP levels, compared with those with lower hs-CRP levels. The level of neutrophil tended to increase with higher hs-CRP levels, except within quartile 4. History of hyperlipemia and prior stroke and the level of leukocyte were unequally distributed among the hs-CRP quartiles. There were no statistically significant differences among the quartiles regarding other characteristics (Table 2). Univariate and multivariate logistic regression analysis of hs-CRP and other factors for stroke The univariate analysis showed that the third and fourth quartiles of hs-CRP were significantly associated with stroke in patients with AVS (P < .05), and multivariate logistic regression analysis indicated that the fourth quartile of hs-CRP remained markedly associated with stroke (adjusted odds ratio [OR], 4.099; 95% confidence interval [CI], 1.272-13.216; P = .018), independent of gender, age, and confounding factors with a P value <.10 in the univariate analysis. We found that male gender (adjusted OR = 5.635; 95% CI, 2.212-14.352; P < .001) and a higher level of LDL-C (>2.6 mmol/L) (adjusted OR = 2.543; 95% CI, 1.175-5.505; P = .018) were independently associated with stroke in patients with AVS (Table 3). Discussion To our knowledge, the present study is the first to report the significant association of hs-CRP with ischemic stroke in patients with AVS. This association was independent of other well-known influential factors of stroke such as age, gender, hypertension, and diabetes. Hs-CRP is a sensitive indicator of inflammation and has evolved as an important marker of atherosclerosis.
In our study, we firstly found that stroke patients with AVS had significantly higher level of hs-CRP than nonstroke patients, reflecting the potential involvement of inflammatory response. On the contrary, a recent study found no difference of CRP levels between stroke and nonstroke patients with AVS. [13] Notably, it was CRP, instead of hs-CRP, that was assessed in that study. Lower sensitivity of CRP compared with hs-CRP might to a large extent explain the discrepancy. Furthermore, our study demonstrated that stroke occurrence in patients with AVS increased in pace with increasing quartile levels of hs-CRP, supporting the correlation between hs-CRP and stroke. The following multivariate logistic regression analysis showed that higher level of hs-CRP remained an independent factor associated with stroke after adjusting for the confounders such as LDL-C or HbA1C, suggesting the distinctive role of hs-CRP in pathophysiological changes of stroke. Firstly, acute ischemic stroke in the early stage immediately evokes a quick inflammatory cascade which could be a consequence of Toll-like receptors activation following atherosclerotic plaque rupture and thromboembolic events. [14] Hs-CRP, as an acute phase reactant, is rapidly upregulated paralleled by changes in the levels of other cytokines such as interleukin 6 and tumor necrosis factor. [15] Moreover, hs-CRP levels were positively related to infarct volumes, [16] therefore, it may aid in the evaluation of severity of acute stroke as well as identification of stroke. Additionally, gradual development of atherosclerotic plaque is also strongly linked to inflammation. Growing evidence showed that elevated hs-CRP predicted first-ever ischemic stroke and recurrence of stroke. In our study hs-CRP level was assessed after symptom onset, hence the possibility that stroke patients with AVS had long-lasting higher level of hs-CRP before stroke onset, and were at a higher risk of stroke than nonstroke patients could not be excluded. With regard to the role of CRP, it is still elusive whether CRP is involved in the pathogenesis of atherosclerosis or is elevated as a response to atherosclerosis. [17] Besides hs-CRP, we found that male gender and increased LDL-C were also significantly associated with stroke in patients with AVS. Similarly, some prior studies have showed that male patients with AVS were at higher risk for stroke than female. [18,19] However, the gender effect on the stroke risk was markedly attenuated by some confounders in another research. [20] One possible explanation for the discrepancies might be the difference in sample sizes and patients selection criteria. With regard to LDL-C, in contrast to study from Zuo et al, [13] our finding indicated that higher LDL-C was correlated with stroke, it may be attributed to that elevated LDL-C could contribute to endothelial dysfunction, elicit inflammation, promote atherosclerosis, and exacerbate plaque burden. In previous literatures, the association between cholesterol and stroke risk is still controversial. No association was found between the cholesterol and the occurrence of stroke in the Framingham study, [21] whereas Glasser et al [22] revealed that LDL-C is significantly associated with incident ischemic stroke in REasons for Geographic And Racial Differences in Stroke study. The discordance of results could be due to the difference of the target population as well as the medication history of lipidlowering drugs. 
Thus, the role of LDL-C in predicting stroke with AVS warrants intensive investigation in future. Additionally, we found that hs-CRP had a great diagnostic value at 1.82 mg/L for stroke patients with AVS by ROC curve analysis. Combining hs-CRP with LDL-C improved the discriminatory ability. Interestingly, the threshold value of hs-CRP obtained from our study was almost identical with the data from a previous study which showed hs-CRP level at 1.72 mg/L in patients with SAO stroke subtype. [23] Meanwhile, consistent with a recent research, [20] the TOAST classification of stroke in patients with AVS in our study was primarily due to SAO. In view of the fact that hs-CRP levels differed in different stroke subtypes, [24] therefore, it is necessary to find the specific target value of hs-CRP for stroke patients with AVS. Some limitations of our study should be noted. First, the present study was a retrospective study at a single center, and the sample population does not represent the general population. Future prospective studies with multicenter data and large sample size are needed to validate our findings. Second, although the time of admission for patients was restricted to 3 days after symptom onset, fluctuation of hs-CRP level could not be completely excluded. However, in stroke patients with mild outcome, the temporal profile of CRP during the first week after symptom onset was proved to be relatively stable. [25] Conclusion To our knowledge, this is the first study to report that hs-CRP was independently associated with stroke in patients with AVS. Elevated level of hs-CRP (≥1.82 mg/L) may be a useful indicator of stroke in patients with AVS.
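For readers unfamiliar with how a cutoff such as 1.82 mg/L is typically extracted from an ROC curve, the following minimal sketch derives an optimal threshold with the Youden index. It is a generic illustration under assumed inputs, not the study's actual computation: `y_true` (1 = stroke) and `hs_crp` are placeholder arrays.

```python
# Hypothetical sketch of deriving an hs-CRP cutoff from an ROC curve.
# `y_true` (1 = stroke) and `hs_crp` are placeholder arrays, not the study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(y_true: np.ndarray, hs_crp: np.ndarray):
    fpr, tpr, thresholds = roc_curve(y_true, hs_crp)
    auc = roc_auc_score(y_true, hs_crp)
    # Youden's J = sensitivity + specificity - 1; choose the threshold maximizing J.
    j = tpr - fpr
    best_threshold = thresholds[np.argmax(j)]
    return best_threshold, auc
```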
GC-MS analysis of the solvents contained in C60 nanowhiskers The solvent molecules contained in C60NWs affect not only the shape but also the crystal structure of C60NWs. The solvents contained in C60NWs were found to be toluene and IPA by the GC-MS analysis. The most abundant solvent was toluene and the quantity of residual toluene in the as-prepared C60NWs was about 7 mass%. The quantity of impurity solvents is reported for the as-prepared, air-dried, vacuum-dried and heat-treated C60NWs. isopropyl alcohol (IPA, 99.8% Wako Ltd.) was gently added to the solution to form a liquid-liquid interface. After the addition of IPA, the solution was manually mixed for a few times and was kept in an incubator (SANYO MIR-153) at 20 °C for 24 h. Drying and heat treatment of samples The C 60 NWs grown in solutions were dispersed in IPA and collected by the centrifugal method. The C 60 NWs were treated by three methods before the measurement by GC-MS. The first method is air drying, the second method is vacuum drying in a vacuum furnace and the third method is the heat treatment in Ar gas. The samples were dried in vacuum of a range of 10 -2 and 10 -4 Pa for 5 ~ 720 min. In the heat treatment in Ar, the air in the vacuum furnace was substituted by pure Ar (99.999%), and the C 60 NWs were heated at 50 °C to 300 °C for 30 min. The C 60 NWs were put in disposable containers (Eco-cap, Frontier Laboratories Ltd.) and heat-treated in Ar. Measurement conditions by GC-MS A thermal desorption (TD) method was used to measure the solvents contained in C 60 NWs. Since C 60 sublimates at about 300 °C in vacuum [9], the heating temperature was set at 350 °C. It was expected that the solvent molecules contained in the matrix of C 60 NWs were instantaneously detached at 350 °C, where the TD system with Double-Shot Pyrolyzer (Frontier Laboratories Ltd. PY-2020iD) was used. The gases from C 60 NWs were analyzed by GC-MS system (SHIMADZU GCMS-2010) using a split rate of 1:100. The gas chromatography (GC) was performed at an injection temperature of 330 °C, using a helium carrier gas pressure and a column composed of 5% diphenyl and 95% dimethylpolysiloxane (Ultra ALLOY+, Frontier Laboratories Ltd.). The oven was heated from 40 °C to 300 °C at a heating rate of 30 °C min -1 . The mass analysis was done by using the EI+ ionization mode at an interface temperature of 330 °C. Concentration of toluene (mass ppm) Figure 1. The calibration curve for the quantitative analysis of toluene. The horizontal axis expresses the concentration of toluene (mass ppm). The vertical axis expresses the peak area ratio of toluene versus tetracene standard. About 300 ~ 550 μg of C 60 NWs were collected by centrifugation and used in each measurement by GC-MS. Before the GC-MS analysis, the sample cases containing the C 60 NWs other than air drying were dried in vacuum for 5 min at room temperature in order to remove the solvents adsorbed on the surface of C 60 NWs. Tetracene was used as a standard material. 0.0025 g of tetracene was dissolved in 50 ml of hexane. 100 μL of tetracene-hexane solution was put into a sample case and heated at 50 °C on a heater in order to evaporate only hexane and about 5 μg of tetracene remained in the sample case. The C 60 NWs dried in vacuum for 5 min were put into the same sample case containing 5 μg tetracene and subjected to the GC-MS analysis. The other samples were also similarly treated and subjected to the GC-MS analysis. 
A calibration curve for the quantitative analysis of toluene was made to measure the amount of residual toluene contained in C 60 NWs as shown in Figure 1. The horizontal axis expresses the concentration of toluene (mass ppm). The vertical axis expresses the peak area ratio of toluene versus the tetracene standard. 1, 2, 5 and 10 mg of toluene were diluted with 10 ml of hexane. The calibration curve was made by using 2 μL of the hexane solutions of toluene with the concentrations of 1, 2, 5 and 10 μg μL -1 , and 5 μg of tetracene for GC-MS at the same time. The amount of residual toluene contained in C 60 NWs was measured with the calibration curve. Figure 2 shows a GC-MS analysis of C 60 NWs after drying in vacuum for 5 minutes. In figure 2(a), there are four peaks labeled at 0.925, 1.067, 2.33 and 10.09 min. In Figure 2(a), the peak of 0.925 min is consistent with the ions originated from CO 2 , N 2 and water. Hence, the surface of C 60 NWs must have adsorbed water, CO 2 and N 2 . But the peak may contain the air introduced into the GC-MS system at the time of sample injection. All of the measured CO 2 , N 2 and water may not have been from the surface of C 60 NWs. Figure 2(b) shows the mass spectra for the peak labeled at 2.33 min with the major ions at 91 and 92 Da. The molecular weight of toluene is 92, and the peak of 91 Da shows the fragments formed by the elimination of hydrogen atoms from toluene molecules in the electron ionization chamber. Figure 2(c) shows the mass spectra for the peak labeled at 1.067 min with the major ions at 45, 59 and 43 Da. The peak of 45 Da shows the fragments of isopropyl alcohol molecules formed by the elimination of -CH 3 groups and the peak of 43 Da shows the fragments formed by the elimination of -OH groups. The peak labeled at 10.09 min is due to the tetracene standard. The amount of residual toluene is more abundant than IPA. This is consistent with the fact that C 60 is hardly dissolved in polar solvents like alcohol. Next, the most abundant toluene contained in the matrix of C 60 NWs is quantitatively analyzed by use of the peak area ratio of toluene versus tetracene. The amount of residual toluene contained in the vacuum-dried C 60 NWs is roughly in proportion to the sample weight of C 60 NWs. Figure 3 shows the amount of toluene measured by using the analytical curve of Figure 1 as a function of the weight of C 60 NWs. Figure 3 shows that the amount of residual toluene in each sample of C 60 NWs increases linearly with increasing the weight of C 60 NWs. Sample weight of C 60 NWs (μg) Figure 3. The amount of residual toluene contained in the C 60 NWs measured by use of the calibration curve of Figure 1 as a function of the sample weight of C 60 NWs dried in vacuum for 5 min at room temperature. Results and discussion Next, the change of the toluene content in the C 60 NWs (mass %) subjected to the air drying and that of the C 60 NWs dried in vacuum at room temperature are shown in Figure 4. The C 60 NWs were dried for 30, 60, 120 and 240 min in air, respectively. The toluene content for the as-prepared C 60 NWs is also shown. The content of residual toluene in the C 60 NWs shows similar values for the drying time of 30 to 240 min. The as-prepared C 60 NWs show about 7 mass % of toluene content. It has been reported that hexagonal solvated structures of C 60 NWs rapidly change into fcc structures upon drying through the evaporation of solvent molecules [6], corresponding to the present decrease of toluene in the C 60 NWs. 
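A minimal sketch of the calibration and back-calculation step described above is given below: a straight line is fitted to the toluene/tetracene peak-area ratios, inverted to recover the toluene amount in a sample, and expressed as mass % of the C60NW sample weight. The calibration points and the measured ratio in the example are assumed placeholder numbers, not the values from Figure 1.

```python
# Hedged sketch of the toluene calibration curve and back-calculation.
# Calibration amounts and peak-area ratios below are placeholders, not Figure 1 data.
import numpy as np

toluene_ug = np.array([2.0, 4.0, 10.0, 20.0])      # assumed standard amounts (ug)
area_ratio = np.array([0.35, 0.71, 1.80, 3.55])    # assumed toluene/tetracene ratios

# Least-squares straight line through the calibration points.
slope, intercept = np.polyfit(toluene_ug, area_ratio, deg=1)

def residual_toluene_mass_percent(sample_ratio: float, sample_weight_ug: float) -> float:
    """Invert the calibration line and express toluene as mass % of the C60NW sample."""
    toluene_in_sample_ug = (sample_ratio - intercept) / slope
    return 100.0 * toluene_in_sample_ug / sample_weight_ug

# Example: a 450 ug C60NW sample giving a peak-area ratio of 2.5
print(residual_toluene_mass_percent(2.5, 450.0))
```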
Figure 4 also shows the data of C 60 NWs dried in vacuum at room temperature for 5, 120 and 720 min, respectively. The quantity of residual toluene decreased to about 1 % from 3.2 % in the C 60 NWs that were dried in vacuum at room temperature for 120 min. But the residual toluene was not completely removed even for the elongated vacuum drying of 720 min. Hence, part of the toluene molecules contained in the matrix of C 60 NWs must be tightly trapped by some special sites of C 60 crystal lattices. As-prepared C 60 NWs Filled rectangles ( ■ ) show the residual toluene contained in the C 60 NWs after air drying at room temperature for 30, 60, 120 and 240 min, respectively, and that of the as-prepared C 60 NWs. Filled diamonds (◆) show the residual toluene contained in the C 60 NWs dried in vacuum at room temperature for 5, 120 and 720 min, Figure 5 shows the residual toluene content of the C 60 NWs heated at 50, 100, 200 and 300 °C for 30 min in Ar gas and that of the C 60 NWs dried in vacuum for 5 min at room temperature. The toluene content in the C 60 NWs decreased to about 0.2 % from the initial content of 3.2 % rapidly by the heat treatment at 100 °C. Since the C 60 NWs heat-treated at high temperatures turn to porous structures [12], This result also suggests that part of the contained toluene molecules are tightly bound to some special sites of C 60 lattices and cannot be easily removed by the heat treatment. Heat-treated C 60 NWs Vacuum-dried C 60 NWs Figure 5. Change of the residual toluene content in the heat-treated or vacuum-dried C 60 NWs expressed as a function of heating temperature. The C 60 NWs were heated in Ar gas at each temperature for 30 min. As-prepared C 60 NWs Figure 6. Raman shifts of A g (2) mode in the as-prepared C 60 NWs, the C 60 NWs dried in vacuum for 5 min at room temperature and the C 60 NWs heated in Ar gas at 100 -300 °C for 30 min. The small amount of residual toluene molecules may alter the mechanical and electrical properties of C 60 NWs from those of pure C 60 crystals. In the Raman spectroscopy of C 60 , it is known that the position of A g (2) peak is very sensitive to the polymerization of C 60 [10,11]. In the C 60 nanotubes prepared by use of a C 60 -saturated pyridine solution and IPA, a remarkable difference in the peak position of A g (2) mode was observed between the as-prepared solvated C 60 nanotubes and the C 60 nanotubes heat-treated at 100 to 500 °C [12]. The position of A g (2) peak was also measured in the present experiment for the as-prepared and heat-treated C 60 NWs as shown in Figure 6. The average value of the Raman shifts of the C 60 NWs heat-treated at 100 to 300 °C and that of the C 60 NWs dried in vacuum at room temperature is 1466.2 ± 0.25 cm -1 , while the as-prepared solvated C 60 NWs show the Raman shift of 1464.1 ± 1.7 cm -1 . From the above results of GC-MS spectrometry, it is found that the difference in the Raman shift between the as-prepared C 60 NWs and the heat-treated C 60 NWs is caused by the difference in the amount of solvent molecules contained in the C 60 NWs. The weak peak shift of A g (2) mode in the as-prepared C 60 NWs from 1469 cm -1 of pristine C 60 [11] must be owing to a slight distortion of C 60 molecules from their spherical icosahedral symmetry that is caused by the interaction between the C 60 molecules and the surrounding solvent molecules. Conclusions The present research can be summarized as follows. (1) Toluene was most abundantly contained in the C 60 NWs. 
(2) The content of toluene was 7 % in the as-prepared C 60 NWs and rapidly decreased upon the heat treatment in vacuum or drying in the air. (3) The down shift of A g (2) peak observed in the as-prepared solvated C 60 NWs is assumed to be caused by the weak interaction between C 60 molecules and the residual toluene molecules that deforms the spherical icosahedral symmetry of C 60 . In order to understand the mechanism of structural stability of solvated C 60 NWs, it is necessary to study the time evolution between the amount of residual IPA and toluene contained in the C 60 NWs and the crystal structural change of the C 60 NWs upon drying. We are going to investigate this theme further.
Mechanism of Enhanced Cardiac Function in Mice with Hypertrophy Induced by Overexpressed Akt* Transgenic mice with cardiac-specific overexpression of active Akt (TG) not only exhibit hypertrophy but also show enhanced left ventricular (LV) function. In 3–4-month-old TG, heart/body weight was increased by 60% and LV ejection fraction was elevated (84 ± 2%, p < 0.01) compared with nontransgenic littermates (wild type (WT)) (73 ± 1%). An increase in isolated ventricular myocyte contractile function (% contraction) in TG compared with WT (6.1 ± 0.2 versus 3.5 ± 0.2%, p < 0.01) was associated with increased Fura-2 Ca2+ transients (396 ± 50 versus 250 ± 24 nmol/liter, p < 0.05). The rate of relaxation (+dL/dt) was also enhanced in TG (214 ± 15 versus 98 ± 18 μm/s, p < 0.01). L-type Ca2+ current (ICa) density was increased in TG compared with WT (-9.0 ± 0.3 versus 7.2 ± 0.3 pA/pF, p < 0.01). Sarcoplasmic reticulum Ca2+ ATPase 2a (SERCA2a) protein levels were increased (p < 0.05) by 6.6-fold in TG, which could be recapitulated in vitro by adenovirus-mediated overexpression of Akt in cultured adult ventricular myocytes. Conversely, inhibiting SERCA with either ryanodine or thapsigargin affected myocyte contraction and relaxation and Ca2+ channel kinetics more in TG than in WT. Thus, myocytes from mice with overexpressed Akt demonstrated enhanced contractility and relaxation, Fura-2 Ca2+ transients, and Ca2+ channel currents. Furthermore, increased protein expression of SERCA2a plays an important role in mediating enhanced LV function by Akt. Up-regulation of SERCA2a expression and enhanced LV myocyte contraction and relaxation in Akt-induced hypertrophy is opposite to the down-regulation of SERCA2a and reduced contractile function observed in many other forms of LV hypertrophy. Besides these well characterized functions of Akt, transgenic mice with cardiac-specific overexpression of constitutively active Akt have increased base-line contractility, namely an elevated LV 1 dP/dt max (11). However, the cellular mechanism responsible for increased myocardial function by Akt activation remains unknown. In this study, we examined the correlation of in vivo cardiac function with in vitro intrinsic myocyte contraction and relaxation. To determine the cellular mechanism of the enhanced cardiac function in TG, we examined L-type Ca 2ϩ channel function and Ca 2ϩ -handling proteins, including sarcoplasmic reticulum (SR) Ca 2ϩ ATPase 2a (SERCA2a), phospholamban (PLB), calsequestrin, the ␣-subunit of the Ltype Ca 2ϩ channel (␣ 1c ), and Na ϩ /Ca 2ϩ exchanger (NCX), as well as myocardial ryanodine receptor binding. Our results suggest that activation of Akt enhances Ca 2ϩ transients and facilitates both contraction and relaxation in isolated cardiac myocytes, which is associated with enhanced Ca 2ϩ influx and increased protein levels of SERCA2a. These features indicate a novel function of Akt in the mouse heart. EXPERIMENTAL PROCEDURES Transgenic Mice-The generation of transgenic mice with cardiacspecific overexpression of constitutively active Akt (E40K Akt) (TG) has been described (11). We have also generated transgenic mice with * This study was supported in part by National Institutes of Health Grants HL33065, HL33107, HL59139, HL61476, HL62442, HL65182, HL65183, HL67724, HL69020, and AG14121; American Heart Association Grants 0030125N and 9950673N; and Fondi Ministero Salute, Fondi Italia-USA, and Associazione Italiana per la Ricerca sul Cancro. 
The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 cardiac-specific overexpression of kinase-inactive Akt (Tg-KI-Akt). Tg-KI-Akt did not exhibit an obviously abnormal cardiac phenotype, and cardiac function was apparently normal. In this study, 3-5-month-old TG, Tg-KI-Akt, and nontransgenic littermate mice (WT) were used. Animals used in this study were maintained in accordance with the Guide for the Care and Use of Laboratory Animals (37). Myocytes were field-stimulated at 1 Hz, and contraction was measured using a video motion edge detector (VED103, Crescent Electronics) as described previously (19). The external solution contained 1 mM Ca 2ϩ . The following contractile properties were calculated from length data: % contraction and the rate of shortening (ϪdL/dt). Relaxation properties (ϩdL/dt, the rate of relengthening; t R 70%, the time for 70% relaxation) were also assessed. SR Ca 2ϩ reuptake function was examined in response to thapsigargin (10 Ϫ10 , 10 Ϫ9 , and 10 Ϫ8 M) (Sigma) in both WT and TG. Cells were loaded with 5 M Fura-2 AM (Sigma), and Ca 2ϩ transients were measured using the Photoscan dual beam spectrofluorophotometer (Photon Technology) (19). Western Blot Analysis for SERCA2a, PLB, Calsequestrin, ␣ 1c , and NCX-Left ventricular tissue was obtained and immediately frozen in liquid nitrogen (Ϫ80°C). On the day of the experiment, tissue samples were homogenized in 0.75 M NaCl, 10 mM histidine (pH 7.5) with protease, phosphatase, and kinase inhibitors. Equal amounts of protein were dissolved in 1% SDS, 100 mmol/liter Tris-HCl (pH 6.5), 10% glycerol, 0.05% bromphenol blue, 5% 2-mercaptoethanol, separated by 12.5% SDS-PAGE, and transferred onto polyvinylidene difluoride membranes. Blots for SERCA2a and PLB were incubated for 30 min at room temperature with a 1:20,000 dilution of rabbit anti-SERCA2a polyclonal antibody (generous gift from Dr. Frank Wuytack, Leuven, Belgium) or 1:20,000 mouse anti-PLB monoclonal Ab (Affinity Bio-Reagents Inc., Golden, CO) in Tris-buffered saline containing 0.1% Tween 20 and 5% nonfat milk. Blots for calsequestrin were incubated overnight at 4°C with a 1:25,000 dilution of rabbit anti-canine cardiac calsequestrin polyclonal antibody (Upstate Biotechnology, Lake Placid, NY) in the same buffer. Blots for NCX were incubated overnight at 4°C with a 1:1,000 dilution of monoclonal anti-NCX (Swant) in Tris-buffered saline and 1% milk. Blots for ␣ 1c were incubated overnight at 4°C with a 1:500 dilution of anti-␣ 1c antibody (Affinity BioReagents) in Trisbuffered saline and 1% milk. Intensities of the bands were evaluated by densitometric scanning using a Personal Densitometer SI with Image-QuaNT software (Amersham Biosciences) and normalized for protein loading. Alkaline Phosphatase Treatment-Cell lysates (15 g) were treated with a one-ninth volume of 10ϫ alkaline phosphatase buffer (Promega, Madison, WI) at 30°C for 10 min and then treated with 30 units of calf intestine alkaline phosphatase at 30°C for 10 min. 4ϫ SDS-PAGE loading buffer was added, and the samples were separated by 8% SDS-PAGE. Then, SERCA2a protein was processed for Western blotting as described above. Ryanodine Receptor Binding Studies-Ryanodine receptor binding studies were conducted as described previously (20). 
Assays were performed using 6 concentrations of [ 3 H]ryanodine (1 to 30 nmol/liter) and 100 g of membrane protein in HEPES buffer (with 0.3 mmol/liter CaCl 2 ) in a total volume of 150 l. Unlabeled ryanodine (10 mol/liter) was used to determine nonspecific binding. Incubation was at 37°C for 90 min. All assays were performed in triplicate and terminated by rapid filtration through Whatman GF/F filters washed with 4 ml of cold buffer (150 mmol/liter KCl, 10 mmol/liter Tris HCl, pH 7.4). Filters were vortexed in 5 ml of Ecoscint and counted in a beta scintillation counter for 5 min. All assays were standardized by the protein content. Protein concentrations were determined using the BCA method. A linear regression was performed on the amount bound versus bound/free ligand. Gene Expression by Reverse Transcription-PCR-Under anesthesia, the mouse heart was removed, frozen in liquid nitrogen, and stored at Ϫ80°C. The tissue was homogenized in a solution containing guanidine isothiocyanate in the presence of 12 g of Escherichia coli rRNA as a carrier, and total RNA was isolated by the CsCl gradient technique. After ultracentrifugation at room temperature in a TL100 (Beckman) at 48,000 ϫ g for 19 h, the supernatant was removed and the RNA pellet was dissolved in water. Sodium acetate (pH 5.3) was added to the solution to a final concentration of 3 M, and the RNA was precipitated with absolute ethanol. The RNA was then rinsed in 70% ethanol, vacuum-dried, and re-dissolved in water. cDNA was synthesized from 2 g of total RNA by using random primers and reverse transcriptase according to the manufacturer's instructions (Roche Applied Science). (17). The PCR products were fractionated on 2% agarose and visualized by ethidium bromide staining. Preparation of Adult Cardiac Myocyte Culture and Adenovirus Transfection-Adult rat cardiac myocyte cultures were prepared as described previously (22) to determine whether overexpression of Akt enhances the expression of SERCA protein as a mechanism responsible for the enhanced myocyte contractile and relaxation function. Adenovirus harboring either Akt (multiplicity of infection 10) or LacZ (multiplicity of infection 10) was applied 2 h after myocyte isolation. Methods for adenovirus transfection have been described (7). Statistical Analysis-All data are expressed as mean Ϯ S.E. A comparison of the data among the groups was made by analysis of variance or by Student's t test, and statistical significance was taken at p Ͻ 0.05. The myocyte data were averaged for each animal for statistical comparison. RESULTS Echocardiography-Heart weight/body weight ratio was increased (p Ͻ 0.01) in TG (11.5 Ϯ 1.4 mg/g) compared with WT (7.2 Ϯ 0.4 mg/g). The results of echocardiographic measurements are summarized in Table I. Heart rate was similar between TG and WT. The end diastolic wall thickness was increased (p Ͻ 0.01) in TG (0.84 Ϯ 0.06 mm) compared with WT (0.62 Ϯ 0.03 mm). Although end diastolic dimensions were similar, end systolic dimensions were significantly reduced by 15% in TG compared with WT (p Ͻ 0.05). The LV ejection fraction was significantly enhanced (p Ͻ 0.01) in TG (84 Ϯ 2%) compared with WT mice (73 Ϯ 1%). These results suggest that TG have concentric hypertrophy with increased systolic function. Myocyte Contraction and Ca 2ϩ Transients- Fig. 
1 shows representative contraction/relaxation and Ca 2ϩ transient record- To examine whether the enhanced contraction and relaxation in TG myocytes are caused by changes in intracellular Ca 2ϩ levels, we measured Ca 2ϩ transients. As shown in Fig. 1 and Table II, the amplitude of Ca 2ϩ transients in TG myocytes was significantly increased (396 Ϯ 50 versus 250 Ϯ 24 nmol/ liter, p Ͻ 0.05). There were no significant differences in levels of diastolic free Ca 2ϩ concentration (181 Ϯ 11 versus 171 Ϯ 14 nmol/liter, p ϭ not significant). The time for 70% Ca 2ϩ reuptake was also accelerated in TG myocytes compared with WT myocytes (78 Ϯ 6 versus 141 Ϯ 8 ms, p Ͻ 0.05). Thus, both peak Ca 2ϩ levels and Ca 2ϩ uptake are increased in myocytes isolated from TG. Myocytes isolated from Tg-KI-Akt did not show any significant changes in peak Ca 2ϩ levels and Ca 2ϩ uptake. These results indicate that changes in contractile function and Ca 2ϩ transients are dependent upon Akt activity. L-type Ca 2ϩ Channel Currents-Because Ca 2ϩ influx through L-type Ca 2ϩ channels plays an essential role in excitation-contraction coupling, we examined L-type Ca 2ϩ currents (I Ca ) in LV myocytes isolated from TG and WT (Fig. 2). LV myocyte size estimated by cell capacitance was significantly larger in TG myocytes compared with WT myocytes (Fig. 2B), consistent with the heart weight/body weight ratio data. The traces in Fig. 2 show typical I Ca recorded from TG and WT myocytes. In both groups, I Ca was activated around Ϫ30 mV and reached its maximum near ϩ10 mV. However, the peak I Ca amplitude, normalized by the cell capacitance (pA/pF) was significantly larger (p Ͻ 0.01) in TG myocytes (9.0 Ϯ 0.3 pA/pF) compared with WT myocytes (7.2 Ϯ 0.3 pA/pF). The inactivation kinetics of I Ca at ϩ10 mV was significantly faster in TG myocytes compared with WT myocytes (Fig. 2B). In mouse ventricular myocytes, I Ca inactivation is controlled by Ca 2ϩ in the sarcolemmal subspace (19,23,24). To examine whether the difference in I Ca inactivation rate between WT and TG myocytes was due to the contribution of Ca 2ϩ released from the SR, the SR Ca 2ϩ content was depleted by ryanodine (10 M). Following the application of ryanodine, the rate of I Ca inactivation as measured by t1 ⁄2 was significantly increased in both WT and TG myocytes, and no significant difference in the inactivation rate was detected between two groups (31.4 Ϯ 2.3 ms, n ϭ 6 versus 29.5 Ϯ 2.1 ms, n ϭ 6). The results indicate that the faster I Ca inactivation observed in TG myocytes might be related to enhanced SR Ca 2ϩ release (See below). SERCA2a, PLB, Calsequestrin, ␣ 1c , NCX, and Ryanodine Receptor Protein-Because both enhanced amplitude and relaxation of the Ca 2ϩ transient may be caused by changes in SR Ca 2ϩ -handling proteins, the expression of these proteins was determined by immunoblot analyses. The protein expression of Akt in myocardium was increased Ϸ25-fold in TG compared with WT mice. SERCA2a protein levels were increased significantly (p Ͻ 0.05) by 6.6-fold in TG compared with WT mice. In contrast, protein levels of PLB, calsequestrin, ␣ 1c , and NCX were not significantly different between TG and WT mice (Fig. 3, A and B). The affinity and number of ryanodine receptors were similar between WT and TG (K D ϭ 7.65 versus 6.95 nM; B max ϭ 499 versus 511 fmol/mg in WT and TG mice). Semi-quantitative reverse transcription-PCR measurements indicated that there was no significant difference in SERCA2a mRNA levels between TG and WT (Fig. 3C). 
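The ryanodine receptor affinity and density quoted above (KD of roughly 7 nM, Bmax of roughly 500 fmol/mg) follow from the linear regression of amount bound versus bound/free ligand described in the binding methods, in which the slope gives -KD and the intercept gives Bmax. The sketch below illustrates that Scatchard-type calculation with simulated single-site binding data; the numbers are placeholders chosen only to resemble the reported magnitudes.

```python
# Hedged sketch of the Scatchard-type analysis: regressing bound ryanodine on
# bound/free gives slope = -KD and intercept = Bmax. Data below are simulated.
import numpy as np

def scatchard(free_nM: np.ndarray, bound_fmol_per_mg: np.ndarray):
    bound_over_free = bound_fmol_per_mg / free_nM
    slope, intercept = np.polyfit(bound_over_free, bound_fmol_per_mg, deg=1)
    kd_nM = -slope        # dissociation constant
    bmax = intercept      # maximal binding (fmol/mg)
    return kd_nM, bmax

# Simulated single-site binding with KD = 7 nM and Bmax = 500 fmol/mg
free = np.array([1.0, 3.0, 5.0, 10.0, 20.0, 30.0])
bound = 500.0 * free / (7.0 + free)
print(scatchard(free, bound))   # ~ (7.0, 500.0)
```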
Thus, the increase in SERCA2a protein in TG occurs post-transcriptionally. Overexpression of Akt Increases Expression of SERCA2a in Adult Ventricular Cardiac Myocytes-To determine whether transient overexpression of Akt is sufficient to increase SERCA2a protein expression in cardiac myocytes, we conducted adenovirus-mediated overexpression of Akt in primary cultured adult rat ventricular cardiac myocytes. Forty-eight hours after transfection, overexpression of Akt was confirmed by immunoblot analyses. Immunoblotting of the same filters with anti-SERCA2a antibody indicated that SECA2a expression was enhanced by 4.8-fold (n ϭ 4, p Ͻ 0.05) compared with control virus-transfected cardiac myocytes (Fig. 4). Interestingly, although SERCA2a was detected in two bands in control virus-transfected myocytes, only a slower migrating form, possibly a phosphorylated form of SERCA2a, was detected in Akt virus-transfected myocytes. In fact, the slower migrating band disappeared after phosphatase treatment, consistent with the notion that it represents a phosphorylated form of SERCA2a (data not shown). Up-regulation of SERCA2a Increases Contractile and Relaxation Function in Isolated Cardiac Myocytes from TG-We examined the effect of thapsigargin, a specific inhibitor of SERCA2a, on contraction of isolated myocytes. The dose of thapsigargin inhibiting % contraction by half (IC 50 ) was 2.6fold higher in myocytes isolated from TG than in those from WT (Fig. 5). Enhanced relaxation function in TG myocytes was also abolished after thapsigargin (10 Ϫ10 M). These results are con-sistent with the notion that enhanced SERCA2a plays an important role in mediating increased cardiac myocyte contraction and relaxation in TG. DISCUSSION Akt plays a central role in glucose metabolism, cell growth, angiogenesis, transcription, apoptosis, and protein synthesis (25). We found that overexpression of constitutively active Akt 2. A, whole-cell I Ca recorded in WT and TG myocytes. Traces show currents elicited from a holding potential of Ϫ50 mV to the indicated test potentials. TG myocytes had larger I Ca amplitudes and faster inactivation kinetics. B, average cell capacitance, I Ca density, and half-maximal decay (t1 ⁄2 ) of I Ca obtained from WT and TG myocytes. Data points are mean Ϯ S.E. The numbers correspond to total number of cells measured. An asterisk indicates that the mean values are significantly different (p Ͻ 0.01) from respective WT controls. in the mouse heart increased both LV ejection fraction, in vivo, and isolated myocyte contraction in vitro. A recent study using the same TG model demonstrated increased LV dP/dt with a nonsignificant increase in LV ejection fraction (11). The failure to observe significantly increased LV ejection fraction in the prior study (11) may be due to heart rate, which was significantly lower in TG than WT, whereas the heart rates were similar in the two groups in the present study. Interestingly, the improved LV function occurred in the presence of significant LV hypertrophy, which is thought to reduce LV function. The goals of the present study were: 1) to investigate whether constitutively active Akt has direct effects upon contractility and Ca 2ϩ handling in isolated cardiac myocytes; and if so, 2) to identify the downstream mechanism of the enhanced cardiac myocyte function by Akt activation. We have demonstrated that the enhanced cardiac function in TG in vivo is associated with increased intrinsic contractile and relaxation function in isolated cardiomyocytes in vitro. 
Moreover, the increased contraction and relaxation in myocytes isolated from TG paralleled the increased peak systolic amplitude and more rapid diastolic decay in intracellular Ca 2ϩ transients. The studies in isolated myocytes also demonstrated increased Ca 2ϩ channel currents. These functional changes in isolated myocytes were not observed in Tg-KI-Akt, suggesting that they are dependent upon the kinase activity of Akt. This indicates that activation of Akt directly affects contraction and relaxation through changes in Ca 2ϩ handling in individual cardiac myocytes. It should be noted that the enhanced cardiac contractility was not observed in TG mice with cardiac-specific overexpression of constitutively active PI3K␣, an upstream regulator of Akt (26), or in other forms of constitutively active Akt (T308D, S473D-Akt, and myr-Akt) (10,27) distinct from the one used in this study (E40K-Akt), despite the fact that all mice exhibited cardiac hypertrophy. Furthermore, phosphatidylinositol 3-kinase-␥ activated by inhibition of PTEN (phosphatase and tensin homologue deleted on chromosome 10) negatively regulates cardiac contractility (28). Although myr-Akt is constitutively localized at the plasma membrane, both wild type Akt and E40K-Akt undergo growth factor-dependent membrane translocation (29). These results suggest that cardiac contractility may be tightly regulated by activation of a specific component of the phosphatidylinositol 3-kinase pathway and/or by subcellular localization or the substrate specificity of the Akt mutants used. Thus, it is possible that E40K and wild type Akt may share downstream targets for the enhanced cardiac function. To identify the downstream mechanisms of Akt responsible for enhanced myocyte contraction and relaxation, we examined mechanisms known to control intracellular Ca 2ϩ transients. We found that the amplitude of I Ca was enhanced in Akt myocytes by Ϸ20% compared with that in WT myocytes, which may be at least in part responsible for the enhanced cellular Ca 2ϩ transients. Akt mediates insulin-like growth factor-induced potentiation of I Ca in neuronal cells (30). Therefore, to determine whether the increased I Ca was due to altered Ca 2ϩ channel expression, we measured the expression level of the ␣-subunit of the cardiac L-type Ca 2ϩ channel (␣ 1c ) and found no significant differences in ␣ 1c protein expression between WT and TG. These results indicate that the enhanced I Ca amplitude observed in Akt myocytes may be related to the altered channel regulation, i.e. an increase in open probability or con- ductance of L-type Ca 2ϩ channels through increased phosphorylation, rather than protein expression. Ca 2ϩ entry through L-type Ca 2ϩ channels is a critical first step in the Ca 2ϩ -handling cascade. The cytosolic Ca 2ϩ transients and contraction elicited by membrane depolarization are strongly influenced by the amount of Ca 2ϩ influx through the channel. Thus, an increase in Ca 2ϩ entry through the L-type Ca 2ϩ channels is likely to contribute to the increase in cellular Ca 2ϩ transients observed in TG myocytes (31). It is also possible that Akt could alter the Ca 2ϩ dependence of the open probability of the ryanodine receptor. Our results, however, suggest that neither the affinity nor the total number of ryanodine receptors was affected by activation of Akt. In normal cardiac myocytes isolated from rats or mice, Ϸ90% of cytosolic Ca 2ϩ is removed through SERCA (32). 
Overexpression of SERCA2a or the enhanced SERCA activity by phosphorylation of PLB accelerates Ca 2ϩ transients and cardiac relaxation (33,34). In the present study, SERCA2a protein levels were up-regulated in TG mice. Interestingly, we found that higher doses of thapsigargin are required to reduce % contraction in cardiac myocytes isolated from TG and the enhanced relaxation function in TG myocytes was abolished in the presence of thapsigargin. These results strongly suggest that increased levels of SERCA2a lead to enhanced Ca 2ϩ transients and contraction as well as accelerated relaxation in cardiac myocytes. Our results indicate that up-regulation of SERCA2a expression by Akt is mediated by post-transcriptional mechanisms. The amino acid sequence of mouse SERCA2a contains potential phosphorylation sites by Akt. Our experiments using cultured adult ventricular cardiac myocytes indicated that adenovirus-mediated transient overexpression of Akt increased expression of SERCA2a. Furthermore, on the SDS-PAGE, only a slower migrating form of SERCA2a appeared after Akt overexpression. This slower migrating form disappeared after phosphatase treatment, and thereafter only the faster migrating form was observed (data not shown), consistent with the concept that SERCA2a is phosphorylated, when Akt is overexpressed. Whether SERCA2a is a physiological substrate of Akt, and if so, whether activation of Akt enhances translation or stability of SERCA2a, remains to be elucidated. We and others have shown that activation of Akt causes cardiac myocyte hypertrophy both in vitro and in vivo (7,10,11,27). Because ventricular hypertrophy is usually accompanied by decreases in SERCA2a, which lead to impairment of Ca 2ϩ handling, up-regulation of SERCA2a and acceleration of Ca 2ϩ transients despite the induction of hypertrophy in this transgenic model is unique, particularly in view of the well known effects of hypertrophy in interfering with Ca 2ϩ handling and myocyte contraction (35,36). Because Akt prevents cardiac myocyte death in response to pathologic stimuli (16), stimulation of Akt might be an ideal method of enhancing the contractility of the heart and improving Ca 2ϩ handling in response to pressure overload. In summary, transgenic overexpression of protein kinase Akt enhances myocyte contractility and relaxation through acceleration of intracellular Ca 2ϩ transients. The underlying cellular mechanism(s) include potentiation of L-type Ca 2ϩ channel function and up-regulation of SERCA2a.
Pile defect quality control analysis on construction company in East Java A production process has a possibility of product defects; the types of defects commonly found in pile products are porous concrete defects, dented joint plate defects, pile dimensions deviation defects, etc. The defect of the product needs to be controlled because it can cause losses to the company. Therefore, in this study, data processing was conducted on pile defect products using statistical process control. An analysis was then performed using the Pareto Chart to determine the types of defects that occur most often. After that, identifying the problems causes that occur in terms of man, machine, method, materials, money, and environment by using fishbone diagrams; so that recommendations for improvement can be given to the company. The results showed that, based on the Pareto principle, 80% of the defects types that occur include the type of porous concrete defects, the type of dented sheath plate defects, and types of joint plate dimension defects. The type of defect with the highest frequency is a porous concrete defects type. Recommendations for improvement that can be given include providing training to workers, maintaining machinery, improving product quality, and providing alternative electrical energy sources. Introduction A company engaged in the field of construction has one of the products produced, namely piles. Pile foundation is part of the structure used to receive and transfer (distribute) the load from the upper structure to the supporting soil, located at a certain depth [1]. A production process has a possibility of product defects, which need to be controlled because it can cause losses. Defective products that occur require a process of repair, and this drives a greater cost [2,3]. When defective products reach consumers, most quality costs increase replacement costs due to complaints made [4]. The innovation of the product is also needed to maintain company capability in the competition [5]. Returned products are usually reworked or modified. This will also result in product sales, which will be influenced by the company's quality costs [6]. Therefore, in this study, data processing was carried out on pile defect products using Statistical Process Control. Statistical Process Control (SPC) is a method that can be used to control a sustainable production process and identify any unusual issue in the production process [7][8][9]. Some method like brainstorming to catch the cause of the problem for the critical to the product's IOP Publishing doi: 10.1088/1757-899X/1034/1/012118 2 quality is essential to build any improvement plan [10,11]. The results of the analysis conducted by SPC in the future can be used as a basis for measuring the products or services' current quality and analyzing the existence of a variable in the production of goods or services experiencing changes that will affect quality [12,13]. Another use of SPC is to collect and analyze the sample's monitoring results during the product's quality control activities [14]. Some tools used to support SPC include the Pareto chart to determine the types of defects that occur most often. After that, identifying the causes of problems that occur is conducted using fishbone diagrams so that later improvements can be given to the company. Material This research was conducted at a construction company located in East Java. 
The company has several production lines, namely lines 1 to 6, and pile products are produced in lines 1, 2, and line 5. The data used are data of pile product defect produced at production line I. The collection of the data had the purpose of finding the possible causes of the problem. Brainstorming and field analysis is the way used to generate information related to the problem. Method Statistical Quality Control using the SPC method can use seven main statistical tools that can be used to assist the quality control process. Quality control tools include check sheets, histograms, control charts, Pareto diagrams, fishbone diagrams, scatter diagrams, and process diagrams [14]. In this study, the quality control tools used are Pareto diagrams, control charts, and fishbone diagrams. The steps performed in this study are as follows: a. Identify the visual types of defects on the defect pile, and brainstorm with the technical and quality responsible for Plant I. b. Construct a Pareto chart. Pareto chart is a bar chart that illustrates data with the highest percentage value to the lowest. Pareto charts are methods of organizing defectives, problems, or defects to assist focus on problem-solving efforts. Through the Pareto char, we can find out the types of dominant defects that occur. An example of a Pareto chart is shown in Figure 1 below. Figure 1. Pareto Diagram Constructing a control chart, the p-chart is used to determine how big the proportion of defectives or defects in the sample or sub-group, or each time observations are performed [16]. The p-chart is used to determine whether the defects of the pile product produced are still within the required proportions or not. The following is the formula used in processing the P-chart [17]. Possible causes for a problem /problems that occur [18]. In this study, fishbone diagrams are used to find the root causes of specific causes of defects in piles, which consist of 5M and 1E analysis, namely Manpower, Machine, Method, Materials, Money, and Environment. This stage is the final step to determine improvement recommendations given to companies based on an analysis of the causes of the previous defects. The figure of the fishbone diagram is shown in Figure 2. Result The result obtained from the analysis is about the types of defects. The following are types of defects that can occur in pile products, and descriptions of each type of defect are shown in table 1. Strengthen the analysis, and there are also provided with the pictures of each defective product. Analyzing Description Thick waste is a condition where a dimension is measured on the inside of the pile, the inside diameter of the pile is greater than the diameter in the joint plate. This is because the waste is not completely disposed of. 11 Porous edge of the sheath plate Description Porous edge of the sheath plate is a condition where porous concrete on the edge of the sheath plate 12 Broken heading Description Broken Heading is a condition where the head of the PC bar is disconnected from the PC bar. This is because the length of the pc bar is cut unevenly so that the shorter part receives a greater tensile load. Description Thick concrete is a condition where the diameter of the pile exceeds the diameter of the joint plate. This is because the volume of the slurry is excessive or because of worn out dart root. 
14 Porous concrete -Description Porous concrete is a condition where the concrete parts that do not blend well or porous on the inside, it is because the slurry is too dry when poured into the framework. 15 Porous fins at segment joint -Description Porous fins at segment joint due to uneven framework lip or it is not tightly closed. The pile produced by the company is 71,423 units. Throughout the year, several types of product defects occur. The following is a recapitulation of defective product data in the company's pile production process shown in table 2. Based on table 2, it can be known the types of defects that often occur in the process of producing piles in the company. To determine the type of defect that has a significant effect, a Pareto chart is made. Pareto charts are also used to show the problem from the highest priority to the lowest priority to determine the problem that must be addressed first. Problems that must be immediately addressed are problems from highest to lowest frequency and reaching 80% of the existing problems. The following is a Pareto chart of pile product defects types in the company shown in figure 3. Figure 3, it can be seen that the most dominant type of defect to fulfill the Pareto chart principle is 80% due to 20% caused. There are three types of defects, including porous concrete, dented sheath plate, and pile dimensions. The type of porous concrete defects consists of 53%; the type of dented plate defect consists of 20%, and the type of pile dimensions defects consist of 13%. Thus, the three types of defects are added to 86%. This type of defect is a type of defect that must be tackled first to avoid any loss due to defective products. Control chart analysis a. Porous Concrete Defects Type The proportion of porous concrete defects in November is above the control proportion because the value exceeds the upper control proportion of 0.000524, and the lower control proportion is IOP Publishing doi:10.1088/1757-899X/1034/1/012118 8 0, while the value of the defect proportion is 0.00118, so the data needs to be deleted and recalculate the UCL, CL and LCL values to revise control chart. After the calculation is revised, the UCL, CL, and LCL values are obtained. UCL value after revision is 0.00045, CL value after revision is 0.00009 and LCL value after revision is 0. After that, a control chart is made with the revised control limit so that the proportion of defect data from January to December (after November is deleted) is within the control limits, which means there is no data out of control. The influence of common causes fluctuations or variations in data. The control chart of porous defect proportions before and after revision can be seen in Figure 4. In the dented joint plate defects type, the data proportion of defects in January to December is within the control limit. The plate defect control limits include a UCL value of 0.00031, CL value of 0.00004, and LCL value of 0. Data on the proportion of defects is between the upper and lower control limits, which means there is no out of control data. The influence of common causes fluctuations or variations in data shown on the control chart of Dented Joint Plate Defect Proportion can be seen in Figure 5. Fishbone diagram A Fishbone diagram is constructed to determine the root causes that cause defective products. The following is a fishbone diagram of a pile defect in the company. The following is a fishbone diagram of a pile product defect in the company that can be seen in Figure 7. 
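Before turning to the fishbone analysis, the sketch below illustrates the control-chart step described above. The p-chart formula cited earlier was lost in extraction; what follows is the conventional three-sigma limit for a proportion chart, UCL/LCL = p-bar plus or minus 3 times the square root of p-bar(1 - p-bar)/n, which is presumably of the kind underlying the UCL, CL, and LCL values quoted, although that is an assumption. The monthly counts are placeholders.

```python
# Hedged sketch of conventional three-sigma p-chart limits for the monthly
# proportion of defective piles. Monthly counts passed in are placeholders.
import numpy as np

def p_chart_limits(defects: np.ndarray, produced: np.ndarray):
    """Return (CL, UCL, LCL) with UCL/LCL = p_bar +/- 3*sqrt(p_bar*(1-p_bar)/n)."""
    p_bar = defects.sum() / produced.sum()          # centre line (CL)
    sigma = np.sqrt(p_bar * (1.0 - p_bar) / produced)
    ucl = p_bar + 3.0 * sigma
    lcl = np.clip(p_bar - 3.0 * sigma, 0.0, None)   # a proportion cannot be negative
    return p_bar, ucl, lcl

def out_of_control(defects: np.ndarray, produced: np.ndarray) -> np.ndarray:
    """Flag months whose defect proportion falls outside the control limits."""
    p = defects / produced
    _, ucl, lcl = p_chart_limits(defects, produced)
    return (p > ucl) | (p < lcl)                    # e.g. months removed before revision
```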
The fishbone analysis mentions four major causes of defective problems: man, machine, method, and material. Man, perspective gives information that human morals become a significant factor influencing the product defect. Careless, ignore the procedure, and fatigue is the factor that influences how the human does the job. Machine perspective mention that the failure of the process caused by the lifetime of the component. This cause informs that there are need some improvements related to maintenance or component replacement procedure. Last, for the method and material perspective, there are need to add some standard operating procedure to ensure that the material used in the process fulfill the requirement. Conclusion Based on the results of the analysis conducted in this study, several conclusions were obtained, namely: 1. Considering historical data of pile defect products in the company, can be seen that 80% of the defects types that occur include porous concrete defects, dented plate defects, and dimensional plate defects. The type of defect with the highest frequency is a type of porous concrete defects. 2. Recommendations for improvements that can be given is giving training programs for workers, checking every production process, maintaining machine maintenance periodically, ensuring the length of the PC bar cuts according to specifications, matching the slurry composition according to company capability and needs, slurry composition refers to predetermined specifications, tightening incoming quality inspections, carrying out work following work instructions, collaborating raw materials following customer requests, providing generators for anticipating power outages and designing a comfortable work environment.
Nowhere to escape – Diversity and community composition of ferns and lycophytes on the highest mountain in Honduras Abstract IPCC predictions for Honduras indicate that temperature will increase by up to 3–6°C and precipitation will decrease by up to 7–13% by the year 2050. To better understand how fern and lycophyte communities might be affected by climate change, we comprehensively surveyed the community compositions of ferns and lycophytes at Celaque National Park, the highest mountain in Honduras. We surveyed a total of 80 20 × 20 m2 plots along an altitudinal gradient of 1249–2844 m a.s.l., identifying all species and estimating their abundances. We recorded a total of 11,098 individuals from 160 species and 61 genera. Community composition was strongly influenced by changes in altitude, precipitation and the abundance of bryophytes (a proxy for air humidity). Of the 160 species, 63 are expected, under a RCP2.6 scenario for the year 2050, to shift their range fully or partially above the maximum altitude of the mountain. Of these, 65.1% are epiphytes. We found that species with narrow altitudinal ranges at high altitudes were more at risk. Our study indicated that conservation efforts should prioritise higher altitudinal sites, focusing particularly on preserving the vulnerable epiphytic fern species, which are likely to be at greater risk. Introduction Mountains are ideally suited to study the effect of climate change on species distributions due to their rapid variability of climate over short altitudinal distances (Kessler et al. 2016;Rogora et al. 2018). In addition, these geographic features often harbour a very diverse and unique assemblage of fauna and flora and form regional biodiversity hotspots of high conservation importance (Lomolino 2001). Many of these species have discrete altitudinal distributions, determined partially by their biology and the historical distribution of each species, amongst other factors . Current evidence suggests that plant species ranges have seen an average increase of approximately 30-36 m upwards along altitudinal gradients over the last 10 years, an affect that can be attributed to climate change (Jump et al. 2012;Lenoir et al. 2008;Morueta-Holme et al. 2015). Projections suggest that under a 1.5°C increase scenario, we can anticipate further upward shifts in altitude and a loss of >50% of the geographic range of 8% of plant species by the year 2030 (IPCC 2018). Tropical locations, in particular, are believed to show exacerbated effects of climate change on altitudinal distribution patterns, largely due to the narrow optimal temperature ranges of tropical species (Feeley & Silman 2010), with beneficial effects for some species and detrimental results for others (Gibson-Reinemer & Rahel 2015). Upslope shifts have potentially negative implications for future diversity, by increasing the risk of extinction for species that occupy high-altitude sites and that have a narrower range size (Colwell et al. 2008). As such, altitudinal distribution patterns have been studied for several decades, with particular focus on tropical forest vegetation (Cardelus et al. 2006;Ibisch et al. 1996;Kessler 2001;Kidane et al. 2019;Krömer et al. 2005;Rahbek 1995;Richards 1952;Wolf 1993;Zhou et al. 2019). 
However, many Central and South American studies have mostly focused on countries such as Costa Rica (Stroud & Feeley 2017), whilst other areas, including Honduras, have been largely neglected, making generalisations on the effect of climate change on species altitudinal distributions difficult. In particular, the limited attention that Honduras has received has also been restricted to a small number of taxonomic groups. The greatest concentration of these studies in Honduras has focused on birds (Jones et al. 2020;Neate-Clegg et al. 2018), with fewer studies investigating invertebrates (Anderson & Ashe 2000) and plants (Imbach et al. 2013). Ferns and lycophytes are especially vulnerable to increased temperatures and decreased precipitation, which are both predicted under future climate change, and their responses to these conditions will likely differ between terrestrial and epiphytic species (Mandl et al. 2010). As a result, this climate sensitive, globally distributed and diverse group of plants has received substantial attention in the literature on global altitudinal distribution pattern studies; both directly (Kessler et al. 2001;Kluge & Kessler 2011;Mandl et al. 2010;Watkins et al. 2006) and indirectly (Sánchez-González et al. 2010). However, there is still a severe lack of available distribution data for ferns and lycophytes from some Central American countries such as Honduras, and there is currently no specific distributional data available for epiphytic ferns and lycophytes from anywhere in Honduras. For example, epiphytes until now have only been exclusively studied in Honduras in the context of disturbance events (Batke & Kelly 2015) and biogeographical comparisons (Batke et al. 2016). This is a concerning realisation when considering that Honduras contains a high percentage of vascular epiphytes relative to the overall flora of the country (e.g. >30% of 908 vascular plant species in Cusuco National Park) and compared to other Central America countries (Batke et al. 2016). In contrast to the geographical limitations of plant altitudinal distribution research in Honduras, the theory behind the migration of plants upwards along altitudinal gradients has been well established elsewhere. It is believed that climate warming offers more optimal conditions that favour the establishment and survival of plant species at the upper limits of their temperature ranges (Adams & Kolb 2005), effectively resulting in an upslope 'march'. Other theories have also been used to explain upslope plant shifts, such as the synchronous 'lean' response, although these hypotheses are not mutually exclusive and may occur in sequence or combination (Breshears et al. 2008). However, the individual response of particular plant groups has been shown to vary greatly (Grau et al. 2007(Grau et al. , 2011Wolf et al. 2016). For example, epiphytes, which are restricted to life in the canopy, are often separated from the terrestrial soil environment and have been suggested to therefore respond very differently compared to terrestrial plants (Nervo et al. 2019); particularly as epiphytes are also highly sensitive to changing climate conditions (Ellis 2013;Ellis & Coppins 2007Hsu et al. 2012;Zotz & Bader 2009). Thus, the lack of altitudinal distribution data on terrestrial and epiphytic ferns and lycophytes from Honduras currently prevents us to compare plant distributional responses to predicted changes in future climate to other biodiversity hotspots (Marchese 2015;Myers et al. 2000). 
To improve our understanding of fern community assemblages across the greatest altitudinal range in Honduras, in this study, we (1) investigated for the first time how species richness, diversity, and community composition patterns of ferns and lycophytes changes along an altitudinal gradient on the highest mountain in Honduras, (2) tested whether there are differences within these patterns between epiphytes and terrestrial species, (3) attempted to identify the underlying environmental factors that drive these patterns, and (4) identified which species are likely to be at greater risk under predicted changes in climate. It is hoped that the data from this study can help us to better understand and generalise the effect of future changes in climate on plant distributions in tropical mountain forests. Study site Celaque Mountain National Park (14°32 0 08″N, 88°42 0 26″W) is located within the western region of Honduras, between the departments of Copán, Lempira, and Ocotepeque ( Figure 1). The term 'Celaque' comes from the Lenca word 'Celac', which means 'cold water' or 'ice water' and is a reference to the large quantity of flowing water in the park (Flores et al. 2012). The protected area contains the highest mountain in Honduras, with an altitude of 2849 m above sea level (a.s.l.). The topography in Celaque is rugged with sandy and shallow soils (Archaga 1998). The vegetation community classification has not been well defined, but it has been broadly described as Pinus-Quercus (pine-oak) forest at lower altitude and transitional mixed broad-leaf/pine montane forest at middle to upper altitude. Above 2200 m, the transitional forest gives way to mainly broadleaved species (Archaga 1998;Southworth et al. 2004). Celaque is believed to be one of the most biologically important sites for plants in Honduras due to its high degree of endemism and diversity (Hermes et al. 2016;ICF 2016). With 217 species recorded to date, ferns species are particularly abundant in Celaque. It is believed to be the most species-rich nature reserve in the country for this group (Chávez et al. 2020;Reyes-Chávez et al. 2018;Rojas-Alvarado 2012, 2017, with two of the seven known Honduran endemic fern species occurring there. Plot selection We surveyed a total of 80 20 × 20 m 2 (400 m 2 ) plots between August 2018 and July 2019, along an altitudinal gradient of 1595 m (1249-2844 m a.s.l) ( Figure 1). Every 100 m in altitude, we selected five plots using a stratified random design, focusing on the most representative forest types including ravines and riparian zones, but excluding canopy gaps, landslides, or other highly disturbed areas where possible. Between 2200 and 2400 m, the topography of Celaque was very steep (an approximate slope of 60%), which made it unsafe to sample plots at 2300 m. In each plot, we surveyed fern and lycophyte richness and abundance (by counting every individual in each plot) following Kessler & Bach (1999) and Karger et al. (2014). For species with long rhizomes, individuals were counted by identifying clumps, which most likely represented genets. We collected epiphytes by searching for low hanging individuals or fallen branches, as well as a visual search using binoculars from a suitable vantage point. We identified all ferns and lycophytes to species. Where necessary, we collected voucher specimens for further analysis and verification. In the case of the genus Elaphoglossum Schott ex J. 
Sm., we collected a sample of each morphospecies for closer laboratory examination and counted the number of each type found in each plot. For each plot, we measured inclination using a clinometer and estimated the amount of soil covered by plants or rocks and the total cover of bryophytes on canopy branches as a proxy for air humidity (Karger et al. 2012). Percentage soil covered by plants or rocks and total bryophyte cover were visually estimated in the field to the nearest 5%. All estimations were carried out by the same individual.

Data analysis

A digital elevation model (DEM) of the park was created using a 50-m contour map. The model was created in ArcGIS 10.8 (ESRI 2020). The community data were visualised using non-metric multidimensional scaling (NMS), and Simpson diversity was calculated with the R 'vegan' package (R Development Core Team 2020). To identify the most important explanatory variables affecting Simpson diversity and fern/lycophyte community composition in Celaque, the Simpson diversity and NMS community scores were regressed against all explanatory variables in a random/mixed-effects meta-regression model. We used the 'glmulti' package in R for this analysis (R Development Core Team 2020). We fitted the meta-regression model separately for NMS axes 1 and 2. In addition, Simpson diversity was also fitted separately for epiphytic and terrestrial species. The relative model-averaged importance of each variable was plotted and the best-fit model selected using Akaike's information criterion (AIC) (Batke & Kelly 2014). We used a 0.8 cut-off to differentiate between important and less important variables (Calcagno & de Mazancourt 2010). In order to assess the richness distribution of terrestrial and epiphytic species along the altitudinal gradient, a spline regression was fitted with a series of polynomial segments using R (Bruce et al. 2020; R Development Core Team 2020). We extracted current temperature and precipitation data for Celaque from Karger et al. (2017), and climate predictions for temperature and precipitation for western Honduras for the years 2050 and 2100 under RCP2.6 and RCP8.5 from the Fifth Assessment Report (IPCC 2014). To assess altitudinal shifts, as expected from warming and decreases in precipitation, we calculated lapse rates following Burt & Holden (2010). For each species, we used the rearranged fitted linear equations for the temperature projections and the quadratic equations for the precipitation projections (i.e. solving for x) to calculate altitudinal changes for temperature and precipitation under each climate scenario and year, respectively. We then calculated the number of species that lost all or some of their altitudinal range for each year and climate change scenario. A full loss of range was defined as occurring when the minimum altitude of a given species exceeded that of the highest point of the mountain (i.e. 2849 m).
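To make the range-shift procedure just described concrete, the sketch below reproduces its temperature component in simplified form. It is an illustration only: the lapse rate (roughly 0.6 °C per 100 m), the species names and the range limits are placeholder values chosen for this example, whereas the study itself solved its own fitted linear (temperature) and quadratic (precipitation) equations for altitude.

```python
# Illustrative sketch (not the authors' code) of the temperature-driven
# range-shift calculation described under 'Data analysis'. The lapse rate,
# species names and range limits are placeholders; the study solved its own
# fitted climate-altitude equations for altitude rather than assuming a rate.

SUMMIT_M = 2849          # highest point of Celaque (m a.s.l.)
LAPSE_C_PER_M = 0.006    # assumed lapse rate: ~0.6 degC per 100 m (illustrative)

# Hypothetical observed altitudinal ranges (min, max) in m a.s.l.
species_ranges = {
    "hypothetical high-altitude epiphyte": (2450, 2840),
    "hypothetical mid-altitude terrestrial fern": (1800, 2300),
}

def shifted_range(z_min, z_max, warming_c, lapse=LAPSE_C_PER_M):
    """Shift a range upslope by the altitudinal equivalent of the warming."""
    dz = warming_c / lapse               # e.g. 3 degC / 0.006 degC per m = 500 m
    return z_min + dz, z_max + dz

def fraction_of_range_lost(z_min, z_max, warming_c, summit=SUMMIT_M):
    """Fraction of the altitudinal extent pushed above the summit."""
    new_min, new_max = shifted_range(z_min, z_max, warming_c)
    if new_min >= summit:                # entire range shifted off the mountain
        return 1.0
    lost = max(0.0, new_max - summit)
    return lost / (new_max - new_min)

for name, (lo, hi) in species_ranges.items():
    for warming in (3.0, 6.0):           # warming range projected for western Honduras
        loss = fraction_of_range_lost(lo, hi, warming)
        print(f"{name}: +{warming} degC -> {loss:.0%} of range lost")
```

Under this illustrative lapse rate, the 3–6 °C of warming projected for western Honduras corresponds to an upslope shift of roughly 500–1000 m, so any species whose current lower limit lies above about 2350 m (low scenario) or 1850 m (high scenario) would lose its entire range on a 2849 m mountain. This is the arithmetic behind the greater exposure of the high-altitude, largely epiphytic, flora reported in the Results.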
Results

We recorded a total of 11,098 individual ferns and lycophytes from 160 species and 61 genera (Supplementary Material, Table S1). Of the 11,098 individuals, 7,036 were epiphytes (78 species) and 4,062 were terrestrial plants (82 species). Epiphyte species richness was highest at high-altitude sites, whereas terrestrial species had their highest richness around ~2000 m, showing a hump-shaped relationship with altitude (Figure 2A). Current altitudinal range sizes did not differ significantly between epiphytes and terrestrial plants (p > 0.05). However, range sizes were proportionally smaller at low- and high-altitudinal sites compared with middle-altitudinal sites (not shown).

Community composition in Celaque National Park was strongly influenced by changes in altitude. Higher-altitude sites were floristically different from low-altitude sites. An NMS ordination (stress = 0.19) clearly illustrated a transitional change in community similarity along axis 1 (Figure 2B), which was strongly driven by altitude, bryophyte cover, and precipitation (Figure 3A & B; Table 1). Similarly, Simpson diversity for epiphytes was positively correlated with a high abundance of bryophytes, low cover of ground vegetation and low temperatures; it should be noted that although ground vegetation cover was an important model factor, it was non-significant in the best-fit model (Figure 3C; Table 1). Terrestrial species diversity, on the other hand, was positively correlated with high rainfall, high bryophyte cover and low canopy height; however, only precipitation was statistically significant in the best-fit model for terrestrial species (Figure 3D; Table 1). Bryophyte cover was positively correlated with altitude (F = 14.22, R²-adj = 0.55, p < 0.01).

Based on IPCC predictions for western Honduras, a temperature increase of between 3°C and 6°C and a precipitation decrease of between 7% and 13% are expected (Figure 4). Of the 160 species identified, between 7 and 32 species are expected to shift their ranges above the maximum altitude (2849 m) of the highest mountain in Honduras (Supplementary Material Table S1; Table 2; Figure 5). Generally, epiphytes were more negatively affected at high-altitudinal sites than terrestrial species, due to their narrower range sizes at high altitude and their negative association with higher air temperatures (Table 1; Figure 3C). The percentage mean altitudinal range lost was between 10% and 18% higher in epiphytes than in terrestrial ferns. For example, of the eight known Hymenophyllaceae Mart. (filmy ferns) epiphytes found in this study, four would lose 100% of their suitable habitat range, whereas another two would lose between 9% and 87% of their range.

Figure 3. Relative model-averaged importance of terms calculated using a random/mixed-effects meta-regression model for NMS axis 1 (A), axis 2 (B) and Simpson epiphyte (C) and terrestrial diversity (D). The importance for a predictor is equal to the sum of the weights for the models in which the variable appears. The vertical red line is drawn at 0.8 and denotes the cut-off to differentiate between important and less important variables. The model results that are shown for each of the first three variable terms are the best-fit models following AIC selection. The plus and minus symbols denote the direction of the relationships.

Discussion

There has been limited research into the altitudinal distribution patterns of epiphytic and terrestrial ferns and lycophytes along mountain ranges, especially in the context of climate change. To our knowledge, our study is the first to explore these changing patterns in Honduras. Understanding plant distribution patterns and identifying the most vulnerable species under future predicted changes in climate along altitudinal gradients is important, as it has been shown that high-altitude species are particularly vulnerable under rising atmospheric temperatures (Freeman et al. 2018). Increased atmospheric temperatures and decreased water availability from changes in precipitation and cloud formation have been suggested to exacerbate species losses in high-altitudinal sites (Still et al.
1999), due to a loss in suitable habitat conditions for those species that have a small-high altitudinal range. These changes in climate are particularly relevant to mountain systems, which exhibit rapid changes in environmental conditions across an altitudinal gradient (Rogora et al. 2018), relative to their specific geographic region (Kessler et al. 2016), with evidence to suggest that mountains offer an 'elevator to extinction' for highelevation species (Freeman et al. 2018). Previous studies that investigated the effect of climate change on plant distributions in mountains have often focused on nontropical mountain biomes, including temperate (Allen & Lendemer 2016;Janssen et al. 2019), Mediterranean (Di Nuzzo et al. 2021, alpine (Saiz et al. 2021) and subtropical localities (Song et al. 2012). Fewer studies have specifically focused on tropical locations (Acevedo et al. 2020;Hsu et al. 2014;Pouteau et al. 2016), and with even less data are available for biodiversity hotspots in Central or South America (Acevedo et al. 2020). In addition, the altitudinal distribution of selected groups of epiphytes in these understudied tropical montane regions, specifically for epiphytic ferns and lycophytes, remain vastly underexplored (Pouteau et al. 2016), making comparisons difficult between Honduras and other localities. We document here, for the first time, the altitudinal distribution patterns of epiphytic and terrestrial ferns in Honduras along the highest mountain in this country. Our study shows that epiphytes along this mountain exhibit small-high altitudinal ranges. This narrow range has important implications for epiphyte survival, resulting in a greater risk of extinction under future predicted changes in climate, as the ranges of some of these species are likely to shift beyond the maximum elevation of the mountain. For instance, we found that, although species of both epiphytic and terrestrial life forms with narrow range sizes are at high risk in Celaque NP under future IPCC predictions for Honduras, epiphytes were more vulnerable. This is attributed to the higher species richness and abundance of epiphytes at high-altitude plots (ca. 2466-2866 m) under current climate conditions, compared to terrestrial species, which had a higher abundance and richness at mid-altitude. As a result, of the 63 species identified to be at risk (partial or total loss of range) under RCP2.6 for the year 2050, 65.1% were epiphytic taxa, despite epiphytes making up less than 50% of all species recorded. The higher richness in epiphytes at high-elevation sites is thus likely to make them more vulnerable to change in climate conditions, due to their differences in response to environmental conditions compared to terrestrial species (Benzing 1990) and their closer range proximity to the maximum elevation of the mountain. Similar results were reported from studies on other vascular and non-vascular species (Zotz & Bader 2009). For instance, many epiphytic ferns are anchored in the forest canopy with no direct connection to the terrestrial soil environment, relying on dead organic canopy matter for nutrients and rain or atmospheric water vapour for moisture input (Benzing 1998;Foster 2001;Hsu et al. 2014;Zotz & Bader 2009). Terrestrial species on the other hand are intimately connected to the forest soil through their root system and thus rely much less on atmospheric moisture and canopy organic substrata for their water requirements and nutrient uptake. 
Our study demonstrated that 7-31 species of lycophytes and ferns are likely to lose 100% of their range between 2050 and 2100. Epiphytic ferns, however, are likely to have a higher loss of species compared to terrestrial ferns due to their higher predicted range loss (i.e. 10-18% more than terrestrial species). Global simulation of 2°C increase in temperature by 2100 has been predicted to result in the loss of over half the range of 16-57% of plant species (Smith et al. 2018;Warren et al. 2018), suggesting that our findings are for some species above the global average. We found that particularly, epiphytic ferns that require a continuous water supply, such as species of the genus Hymenophyllum Sm. (Hymenophyllaceae), are predicted to be of greater risk. Hymenophyllum species are found abundantly in humid tropical forests and have been characterised as shade plants, which are well adapted to low light but require ample water supply (Evans 1964;Richards & Evans 1972). These species are considered good indicators of high atmospheric humidity (Hietz & Hietz-Seifert 1995) and due to their dependency on moist habitats, they are extremely sensitive to water loss because of their single-layer cell structure and lack of a well-developed cuticle and stomata (Proctor 2003). The higher species richness of epiphytes at a higher altitude in Honduras is likely the result of increased precipitation and more continuous water supply (McAdam & Brodribb 2012;Nervo et al. 2019). Epiphytic species that are sensitive to water availability appeared to favour higher altitudinal sites, with lower-temperature conditions, increased cloud formation and a supply of fine and frequent precipitation compared to low-altitudinal sites (Bhattarai et al. 2004;Frahm & Gradstein 1991). This was demonstrated by the change in community composition along the altitudinal gradient, with a higher prevalence of epiphytic bryophytes at higher-altitudinal plots in our study. Thus, future predicted changes in climate may alter the suitability of these conditions for climate-sensitive epiphytes in Honduras, both directly by changes in climate and indirectly by likely decreases in moisture availability through the bryophyte branch communities. Bryophytes, specifically, can be important for the survival of epiphytic ferns, as increased bryophyte cover facilitates epiphyte establishment (Winkler et al. 2005) as well as water interception and storage (Ah-Peng et al. 2017;Oishi 2018). In addition, water availability is an important aspect in the fern life cycle as well as for the survival of mature plants, which have less specific stomatal control than angiosperms (McAdam & Brodribb 2013). Comparisons with previous studies of altitudinal distribution patterns in relation to climate change are challenging due to the complete lack of studies within Honduras and limited studies that investigated tropical epiphytic ferns and lycophytes. Interestingly, we found that epiphyte richness was particularly high at highelevation sites, which we believed was one of the key driving factors for epiphytes exhibiting a higher range loss compared to terrestrial species under future predicted changes in climate. In comparison, other studies that investigated vascular epiphyte richness along mountains often found a mid-elevation peak in species richness (Hsu et al. 2014;Pouteau et al. 2016). 
Therefore, it is likely that the underlying distribution patterns of ferns and lycophytes at a given site will ultimately determine the severity of climate change impacts on the ranges of specific life forms (e.g. epiphytes vs. terrestrial species). In conclusion, higher temperatures under future predicted climate change may contribute to increases in total canopy evapotranspiration (Calanca et al. 2006; Jung et al. 2010), particularly at higher altitudinal sites. With climate change forecasts predicting rising global temperatures and decreases in precipitation (IPCC 2014), tropical montane forests are likely to experience reductions in cloud immersion due to a shift in cloud layers (Foster 2001; Karmalkar et al. 2011; Lawton et al. 2001; Still et al. 1999). These indirect effects of changing climatic conditions have the potential to exacerbate upward range shifts of epiphyte species in the tropical montane forests of Honduras (Nadkarni & Solano 2002), as demonstrated in our study. To minimise the potential negative effects of these upward range shifts under future changes in climate, at least at a local and regional level, current conservation strategies in Honduras would require drastic interventions (e.g. assisted migration and ex situ conservation methods) in order to ensure the survival of many of these high-altitude species. However, a lack of robust information on the distribution of ferns across most of Honduras exacerbates the problem. This issue must be addressed, as climate change-induced species responses will ultimately affect plant community composition and distributions in Honduras and elsewhere. The highest mountain in Honduras, studied here for the first time, has provided, and will continue to provide, insight into how quickly plant communities respond to changes in climate. Our study already indicates that high-altitude fern communities in Celaque, in particular, will change and/or disappear, and it is likely that similar responses threaten species elsewhere.
Colon cancer metastasis mimicking intraductal papillary neoplasm of the extra-hepatic bile duct

Introduction

An accurate diagnosis of the primary cancer in cases with metastatic lesions is quite important because misdiagnosis may lead to the selection of incorrect adjuvant therapy and worse long-term outcomes after surgery. The metastatic sites associated with the dissemination of colon cancer are well known and normally predictable, and include the lymphatic, haematogenous, and peritoneal routes, while other locations are quite rare. Here, we present a case of colon cancer with an unusual metastatic pattern mimicking an intraductal papillary neoplasm of the bile duct (IPNB) in the extra-hepatic bile duct.

Presentation of case

A 65-year-old woman underwent right transverse colectomy for the treatment of moderately differentiated colon cancer (stage II) with no lymphatic or vascular infiltration or lymph node metastasis. After 7 years, enhanced computed tomography (CT) revealed a large (7-8 cm in diameter) metachronous liver metastasis in the right lobe. The serum carcinoembryonic antigen level was high (11 ng/mL) compared with the normal level of <5.0 ng/mL, whereas the levels of carbohydrate antigen 19-9 and α-fetoprotein were within the normal range. As analysis of the primary colonic lesion revealed wild-type K-ras, four courses of systemic chemotherapy consisting of S-1 and oxaliplatin plus cetuximab were given as part of a clinical trial; thereafter, a right lobectomy was performed as a curative measure. The serum carcinoembryonic antigen level decreased postoperatively, and no apparent recurrence was observed. After 10 months, the patient presented with jaundice and elevated levels of alkaline phosphatase (408 IU/mL) and γ-glutamyl transpeptidase (1361 IU/mL); enhanced CT revealed soft tissue in the extra-hepatic bile duct with biliary dilatation (Fig. 1a and b). Magnetic resonance cholangiography and endoscopic retrograde cholangiography displayed a papillary tumour in the middle to distal portion of the bile duct (Fig. 1c and d). A blush appearance indicated a class V tumour. Based on the preoperative diagnosis of intraductal papillary neoplasm of the bile duct (IPNB), subtotal stomach-preserving pancreaticoduodenectomy with lymph node dissection of the hepatoduodenal ligament surrounding the common hepatic artery was performed. Macroscopic examination of the resected specimen revealed a soft papillary tumour at the middle to distal portion of the bile duct with intact surrounding epithelial tissue (Fig. 2a). Microscopic findings using haematoxylin-eosin (HE) staining revealed the presence of an intraductal papillary tumour with a fibrovascular core (Fig. 2b). Histological analysis showed a moderately differentiated tubular adenocarcinoma; vascular/lymphatic infiltration and lymph node metastasis were not detected. Further examination using immunohistochemical staining showed that the lesion was a metastasis arising from the colonic carcinoma (Fig. 3). The neighbouring biliary epithelium showed a cytokeratin (CK)-7-positive and CK-20-negative profile (pancreatobiliary type), while the tumour displayed a CK-7-negative and CK-20-positive profile (intestinal type). The expression pattern was similar to that seen at the primary site and the liver metastatic site (Supplementary Figs. S1 and S2).
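The cytokeratin logic used above to separate the two possibilities can be summarised as a simple decision rule. The sketch below is our simplification of that reasoning, not part of the original work-up, which naturally also weighs morphology, additional markers and the clinical history.

```python
# Simplified sketch (ours) of the CK-7/CK-20 reasoning described in the case:
# a pancreatobiliary-type epithelium is typically CK-7 positive / CK-20 negative,
# whereas an intestinal-type (colorectal) tumour is CK-7 negative / CK-20 positive.

def ck_profile_interpretation(ck7_positive: bool, ck20_positive: bool) -> str:
    if ck7_positive and not ck20_positive:
        return "pancreatobiliary-type profile (consistent with native bile duct epithelium / primary IPNB)"
    if ck20_positive and not ck7_positive:
        return "intestinal-type profile (raises suspicion of a colorectal origin)"
    return "indeterminate profile - further markers and clinical correlation required"

# In the present case: surrounding biliary epithelium vs. the intraductal tumour.
print(ck_profile_interpretation(ck7_positive=True, ck20_positive=False))
print(ck_profile_interpretation(ck7_positive=False, ck20_positive=True))
```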
The tumour was ultimately diagnosed as a colon cancer metastasis mimicking IPNB, after which adjuvant chemotherapy for colon cancer was initiated. Discussion Accurate diagnosis of the primary cancer in cases of colon cancer with metastatic lesions is quite important, as treatment using effective chemotherapy regimens including molecular targets is currently available. Colon cancer-associated metastatic sites are normally predictable because of its three well-known metastatic patterns of dissemination to the lymphatic, haematogenous, and peritoneal regions. Indeed, the most common metastatic sites of colorectal cancer by order of frequency consist of the regional lymph nodes, the liver via the portal circulation, the lungs, the peritoneum, and the ovaries. Although other localizations of colon cancer metastasis occur very rarely, such metastases occasionally mimic the primary cancer. Here, we show a case of colon cancer with an unusual metastatic pattern mimicking IPNB. While there are a few reports of colon cancer metastasis mimicking IPNB at the intra-hepatic bile duct [1][2][3][4], this is the first report of colon cancer metastasis mimicking IPNB at the extra-hepatic bile duct. In the present case, it was difficult to distinguish between primary IPNB and colon cancer metastasis using radiological findings and HE staining. Conversely, immunohistochemical staining using CK-7 and CK-20 successfully helped to determine that the ductal tumour was a metastatic lesion arising from a colon carcinoma. Therefore, in patients with a history of colon cancer, careful examinations including immunohistochemical staining are needed to obtain accurate diagnosis of IPNB because misdiagnosis may lead to administration of the incorrect adjuvant therapy and worse long-term outcomes after surgery. In such cases, if the immunohistochemical staining profile (CK-7 and CK-20) of a bile duct tumour can be determined using the biopsy sample, then bile duct resection may be a suitable curative procedure. The mechanism of extra-hepatic biliary metastasis of colon cancer is not very clear; however, possible mechanisms include vascular invasion via the lymph vessels, blood vessels, or biliary tract, and implantation of malignant cells to the bile duct epithelium via the bile stream. In the case of this patient, there was no clear evidence of vascular invasion of the lymph and blood vessels in the primary or metastatic sites. In addition, direct biliary involvement of colorectal liver metastasis, which frequently shows intraductal growth connected with the primary intraparenchymal tumour, was not detected. Furthermore, the intraductal tumour was limited within the bile duct epithelium of the extra-hepatic bile duct. Therefore, vascular invasion to the lymph vessels, blood vessels, or biliary tract was unlikely in the present case. To explain the present metastatic pattern to the extra-hepatic bile duct, the other mechanism of metastasis via the bile stream might be possible. Okano et al. [5] reported that bile duct invasion in patients with colorectal liver metastasis was histologically detectable in 62 (42%) of 149 patients. Although there was no clear evidence of microscopic or macroscopic biliary invasions in the metastatic sites in the present case, it is unclear whether biliary invasion was present at the metastatic sites prior to the systemic chemotherapy. Implantation of malignant cells to the bile duct epithelium via the bile stream prior to chemotherapy or during the surgical procedures is possible [4]. 
Further studies are required to confirm the presence of metastatic patterns via the bile stream.

Conclusion

This is a rare case of colon cancer metastasis mimicking IPNB at the extra-hepatic bile duct. Immunohistochemical staining with CK-7 and CK-20 is useful for distinguishing between primary IPNB and colon cancer metastasis. Our findings also suggest that there may be an additional, fourth metastatic route via the bile stream.

Conflict of interest

The authors have declared that no competing interests exist.

Funding

None.

Author contribution

Takanobu Yamao and Hiromitsu Hayashi wrote the draft. Takaaki Higashi, Hideaki Takeyama, Takayoshi Kaida, Hidetoshi Nitta, Daisuke Hashimoto, and Akira Chikamoto managed the clinical treatment and collected the data. Toru Beppu provided critical revisions to the draft. Hideo Baba organized the paper and approved the final version to be published.

Consent

Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Validation of AshTest as a Non-Invasive Alternative to Transjugular Liver Biopsy in Patients with Suspected Severe Acute Alcoholic Hepatitis Background/Aims According to guidelines, the histological diagnosis of severe alcoholic steatohepatitis (ASH) can require liver biopsy if a specific treatment is needed. The blood test AshTest (BioPredictive, Paris, France) has been initially validated for the non-invasive diagnosis of ASH in a large population of heavy drinkers. The aim was to validate the AshTest accuracy in the specific context of use of patients with suspected severe ASH, in order to reduce the need for transjugular biopsy before deciding treatment. Methods The reference was liver biopsy, performed using the transjugular route, classified according to its histological severity as none, minimal, moderate or severe. Biopsies were assessed by the same experienced pathologist, blinded to simultaneous AshTest results. Results A total of 123 patients with severe clinical ASH (recent jaundice and Maddrey function greater or equal to 32) were included, all had cirrhosis and 80% had EASL histological definition of ASH. 95% of patients received prednisolone; and the 2-year mortality was 63%. The high AshTest performance was confirmed both for the binary outcome [AUROC = 0.803 (95%CI 0.684–0.881)] significantly higher than the AST/ALT AUROC [0.603 (0.462–0.714); P<0.001], and for the severity of ASH-score system by the Obuchowski measures for [mean (SE) 0.902 (0.017) vs. AST/ALT 0.833 (0.023); P = 0.01], as well as for the diagnosis and severity of ballooning, PMN and Mallory bodies. According to attributability of discordances, AshTest had a 2–7% risk of 2 grades misclassification. Conclusion These results confirmed the diagnostic performance of AshTest in cirrhotic patients with severe clinical ASH, in the specific context of use of corticosteroid treatment. AshTest is an appropriate non-invasive alternative to transjugular liver biopsy. Introduction In patients with suspected alcoholic liver disease, recent guidelines recognized that the precise indications of liver biopsy are not well established in routine practice due to significant morbidity/mortality related to liver biopsy. There is however a consensus that transjugular liver biopsy "should be considered" for the diagnosis of severe alcoholic steatohepatitis (ASH) that is amenable to specific therapy such as corticosteroids, both in the EASL and AASLD guidelines, [1,2] These guidelines have also recommended studies validating diagnostic algorithms including liver biopsy and non-invasive tests. [1,2] In 2006 we published the first accuracy validation study of a non-invasive biomarker called AshTest (BioPredictive, Paris, France). [3] Two hundred and twenty-five heavy-drinker patients were included: 70 in the training group, 155 in the validation groups, and 299 controls. AshTest was constructed using a combination of the six components of FibroTest-ActiTest plus aspartate aminotransferase (AST). The AshTest area under the ROC curves (AUROC) for histologically moderate to severe ASH was 0.90 in the training group, and 0.88 to 0.89 in the validation groups. One limitation of this study was the relatively small number of patients with severe histological ASH (n = 53). 
The specific aim of the present study, in comparison with the first validation, was to focus on the performance of AshTest in the real context of use, which is a patient with suspected severe clinical and histological ASH who requires a specific treatment such as corticosteroids. The following performances of AshTest as a surrogate were assessed: the diagnosis of mild, moderate and severe histological steatohepatitis; the binary diagnosis of histological ASH according to different definitions; and the correlation with the ASH score.

Patients and Methods

AshTest was prospectively assessed in consecutive patients admitted to the Hepatology department's intensive care unit at "Groupe Hospitalier Pitié Salpêtrière" for suspected severe ASH, clinically defined as severe liver dysfunction in the context of excessive alcohol consumption with the exclusion of other causes of acute and chronic liver disease, [1,2] such as advanced hepatocellular carcinoma and other etiologies of cirrhosis. Severe liver disease was defined as jaundice in the past 3 months, a Maddrey discriminant function (DF) ≥32 at admission and a total serum bilirubin level >50 µmol/L. During hospitalization, patients with clinical complications, such as ascites, spontaneous bacterial peritonitis, renal dysfunction, overt hepatic encephalopathy, or gastrointestinal bleeding associated with portal hypertension, were treated according to current international guidelines. [1,2] This non-interventional study was exempt from IRB review after institutional IRB review (Ethical committee of "Comité de Protection des Personnes of Paris-Ile-de-France", FIBROFRANCE project, CPP-IDF-VI, 10-1996-DR-964, DR-2012-222, and USA-NCT01927133). All data were analyzed anonymously. All clinical investigations were conducted according to the principles expressed in the Declaration of Helsinki. All patients in our unit are asked to sign an informed consent form for this type of use of blood samples. All co-authors had access to the study data and reviewed and approved the final manuscript.

Histology

Liver biopsies were fixed, paraffin-embedded and stained with hematoxylin-eosin-safran and Masson's trichrome or picrosirius red for collagen. A single, experienced pathologist (FC), unaware of the patient characteristics, including biomarker results, analyzed the histological features using a specific scoring system, at ×200 and ×400 magnification. Scoring procedures that focus on the main "independent" elementary lesions, as proposed for chronic viral hepatitis [4] or non-alcoholic steatohepatitis (NASH) [5,6], could be readily adapted for use in histological ASH. [7] We previously used such scoring systems in patients with ASH by accumulating the grades of the elementary ASH lesions. [3,8] For consistency with these recommendations, we used two primary endpoints: one binary (the presence or absence of ASH) and one non-binary (ordinal, according to a score). Histological ASH was defined as the presence of steatosis, ballooning and PMN (EASL definition). [2] In sensitivity analyses, the following other definitions of ASH were also used: the pathologist's main conclusion; the most sensitive definition, i.e. the presence of at least one elementary activity feature (ballooning, PMN or Mallory bodies); and the most specific, i.e. the presence of all three features. The non-binary (ordinal) primary endpoint was calculated in the same fashion as the NAS score.
The sum of the three elementary lesion grades (none, minimal, moderate, severe; from 0 to 3), resulting in a 4-grade severity score: H0, no ASH; H1, minimal ASH (score 1-2); H2, moderate ASH (score 3-5); and H3, severe ASH (score 6-9). In sensitivity analyses, the severity of histological ASH, also given by the pathologist in four grades in his conclusion, was also used to evaluate the test performance. Steatosis was scored from 0 to 100 according to the percentage of hepatocytes with macroor microsteatosis. Fibrosis was staged with a scoring system adapted from the METAVIR score using a scale from F0 to F4. [10] Serum biochemical biomarkers AshTest was performed according to the analytical recommendations and analyzed using the same cutoffs as in the previous studies. AshTest (BioPredictive, Paris, France), combined the six components of the FibroTest-ActiTest, [GGT, ALT, total bilirubin (BILI), alpha2-macroglobulin (A2M), apolipoprotein A1 (APOA1), and haptoglobin (HAPTO)] with the serum AST activity and specific algorithms adjusted for age and gender. AshTest scores range from 0 to 1.00, with higher scores indicating a greater probability of significant lesions. FibroTest and SteatoTest (BioPredictive, Paris, France; FibroSURE LabCorp, Burlington, NC, USA) were determined using published recommended pre-analytical and analytical procedures. [3,11] GGT, ALT, AST and BILI were measured by a Hitachi 917 analyzer and Roche Diagnostics reagents (both Mannheim, Germany), A2M, APOA1, and HAPTO were measured using a Modular analyzer (BNII, Dade Behring; Marburg, Germany). All coefficient of variation assays were lower than 10%. Analyses of discordances Discordance between biopsy and biomarker tests for the prediction of histological ASH was analyzed according to their respective risk factors of failure. [3,11] Risk factors of AshTest failure were hemolysis, Gilbert's disease, acute inflammation and extra hepatic cholestasis, together with extreme values of the respective components (haptoglobin, bilirubin, GGT). Patented AshTest as FibroTest, includes algorithms which automatically exclude profile of components at high-risk of false positive/negative, the test being classified as not reliable. [11] According the obvious risk of commercial bias we performed several analyses of association between risk of false positive/negative and specific conditions of severe ASH, which could interact with AshTest components. We compare the value of AshTest and its components between patients with or without sepsis at admission to identify if infected patients had an inflammatory profile with increasing haptoglobin (risk of false negative) or A2M (risk of false positive) in comparison without sepsis. High risk factors of biopsy failure were specimen length and fragmentation: less than or equal to 15 mm long, or between 15 and 19 mm but with more than 10 fragments. Failure was attributable to biopsy when there was no high risk of AshTest failure and if the biopsy specimen was at high risk of failure. Due to the risk of commercial bias another analysis was made as the worse scenario for AshTest and AST/ALT, assuming that biopsy was a perfect reference without any false positive/negative. Hemodynamic study After fasting overnight, patients were placed in the supine position, and the wedged and free hepatic venous pressure gradients [HVPG] were measured by two experienced operators using an 8F catheter (Cordis SA, Miami, FL, USA) inserted into the right hepatic vein. 
Two similar consecutive values of the HVPG were used. PHT (portal hypertension) was defined as an HVPG ≥5 mmHg, and severe PHT as an HVPG ≥12 mmHg. [12]

Statistical analyses

The two primary outcomes (binary and non-binary) were predetermined. The binary outcome was the presence of ASH, the standard definition being steatosis, ballooning and PMN. [2] The AUROCs were estimated by the empirical (non-parametric) method of DeLong et al. and compared using the paired method [13]. The ordinal (non-binary) AshTest cutoffs were predetermined (US patent 7856319 B2): ≤0.1700 (no ASH), ≤0.5535 (minimal), ≤0.780 (moderate), and >0.780 (severe ASH). The non-binary diagnostic performances of AshTest (or standard tests) were assessed using the Obuchowski measure to prevent the risk of a spectrum effect and to reduce the risk of multiple testing. [14] The Obuchowski measure allows two biomarkers to be compared with a single test, obviating the need for a correction of the type I error when comparing two biomarkers across different stages or grades. (S2 File) The Obuchowski measure is a multinomial version of the AUROC. The overall Obuchowski measure is not equivalent to a usual AUROC curve, as the measurements are weighted according to the distance between grades. Sensitivities, specificities, and positive (PPV) and negative predictive values (NPV) were assessed according to predetermined cutoffs. Sensitivity analyses compared AUROCs according to biopsy specimen length and the number of fragments, and according to several definitions of histological ASH based on combinations of elementary lesions. False positive and false negative results of AshTest were defined using the previously published cutoff of 0.50 for the binary endpoint and analyzed with AUROCs. There is no reference blood test, and the diagnostic performance of AshTest was therefore compared with the standard AST/ALT ratio, a non-patented quantitative score; its prognostic performance was compared with the Maddrey DF, the MELD score, and FibroTest. For the discordance analyses, the weighted quadratic kappa (wKappa) and the maximum-adjusted kappa with observed marginal totals were assessed to identify possible variability factors, such as biopsy specimen length or fragmentation. The following methods were used when appropriate: the chi-squared test and Fisher's exact test for qualitative comparisons, and Student's t-test, the Z-test and the Mann-Whitney U test for quantitative comparisons. For survival analyses, time-dependent Kaplan-Meier analysis for survival curves, the log-rank test for univariate comparisons and the Cox proportional hazards model for multivariate analysis were performed. For all analyses, two-sided statistical tests were used; a P-value of 0.05 or less was considered significant. Statistical analyses were performed using Number Cruncher Statistical Systems 2012 software (NCSS, Kaysville, Utah, USA) and R software [library(nonbinROC) and library(ROCR)]. [15,16]

Results

Between 2004 and 2013, of 157 eligible patients, 123 were included and 34 were not. In 31 patients, AshTest was not performed because the prescription was overlooked or the blood sample was too small. Among the 123 included patients, AshTest and biopsy were obtained on the same day, except in one case in which they were obtained 3 days apart (Fig 1). There were no significant differences in the main patient characteristics.
(Table 1) According to the definition, the prevalence of histological ASH varied from 78% (ballooning and PMN, and Mallory), 80% for the EASL definition (Ballooning and PMN and steatosis), to 91% for at least one activity lesion (Ballooning or PMN or Mallory).(S1 Table) All patients had cirrhosis both observed at histology and presumed by FibroTest (median 0.98; range 0.80-1.00); one FibroTest was not interpretable which can be considered as a failure in intention to diagnose. No AshTest with high-risk of false positive/negative was identified by algorithms. No case of elevated unconjugated bilirubin or extra-hepatic cholestasis was identified at the ultrasonography performed in all patients and no severe hemolysis was identified. The proteins included in AshTest and possibly associated with acute or chronic inflammation were not associated with sepsis at baseline, such as haptoglobin (mean (SD)), and alpha2-macroglobulin: 0.89 g/L (0.12) and 1.84 g/L (0.74) among the 23 patients with sepsis vs 0.73 (0.28) and 1.99 (0.70) in 100 patients without sepsis at baseline (P = 0.28 and P = 0.29) respectively. Only the MELD prognostic index was associated with sepsis, as well as a dramatic decrease in apoA1 as previously observed. [3](S2 Table) Biopsy specimens had a risk of failure in 84/123 (68%) patients, with medians of 10 mm in length and 10 fragments. A severe adverse event, possibly associated with transjugular biopsy, occurred in one patient (0.8%). Acute respiratory distress syndrome (desaturation and bilateral lung opacity) occurred one hour after the biopsy together with a generalized seizure. The patient was intubated and treated successfully in the ICU with nitric oxide, intravenous corticosteroid and antibiotics. No hematoma, pneumothorax or capsular perforation was observed; transthoracic ultrasonography was normal, and no infection was identified. Ventricular arrhythmia was not ruled out as a possible cause. Diagnostic performance of AshTest Using the EASL binary endpoint, AshTest performance was confirmed in this severe patient population. Using the predetermined cutoff of 0.50 for AshTest, the sensitivity of AshTest was 88.8% and its specificity was 48.0%; the likelihood ratio for AshTest was 1.67 (95%CI 1. Table) The main differences were observed for non-ASH vs. minimal ASH and for moderate vs. severe ASH. (Table 3) Using pathologist conclusion as endpoint, AshTest performances were similar than using EASL endpoint.(S4 Table) AshTest had also higher performance for histological ASH scores than MaddreyDF and MELD score. (Fig 2)(S5 Table). Discordance analyses Concordance between the histological scores and AshTest scores was significant (P<0.0001) and was rated as moderate to fair (wKappa = 0.48; 95% CI, 0.34-0.63). Discordance was minimal, and a difference of only one grade was found in 60/123 (49%) patients, and of 2 grades or more in only 9/123 (7.3%) patients.(S6 Table) Two or three grades discordance was attributed to failure of biopsy (false negative) in five (4%) cases (including the only "three-grade discordances"); the lengths ranged from 10 to 14 mm, and there were 2 to 12 fragments. Discordance was attributable to AshTest (false negative) in three cases, as the biopsy was 20 mm or longer, and the ApoA1 was not decreased, as is usually observed in histological ASH. In the remaining cases, the cause of discordance could not be determined, though cardiac insufficiency, cirrhosis and sinusoid dilatation, as well as minimal PMN and ballooning were suspected. 
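The grade-by-grade comparison underlying this discordance analysis rests on two four-level scales defined in the Methods: the predetermined AshTest cutoffs, and the histological ASH score obtained by summing the three elementary activity lesions (ballooning, PMN and Mallory bodies, each graded 0-3). The sketch below is a minimal illustration of how each grade is derived from the cutoffs and score bands quoted above; it reflects our reading of that description (treating the first three cutoffs as upper bounds of their grades) and is not BioPredictive's implementation.

```python
# Minimal sketch (ours, not the patented implementation) of the two four-grade
# scales whose concordance is analysed above: the AshTest cutoffs and the
# histological ASH score bands are those quoted in the Methods.

ASHTEST_CUTOFFS = [          # (upper bound of the score, grade)
    (0.1700, "no ASH"),
    (0.5535, "minimal ASH"),
    (0.7800, "moderate ASH"),
    (1.0000, "severe ASH"),  # i.e. AshTest > 0.780
]

def ashtest_grade(score: float) -> str:
    """Map an AshTest score (0-1.00) onto the predetermined four-grade scale."""
    for upper_bound, grade in ASHTEST_CUTOFFS:
        if score <= upper_bound:
            return grade
    raise ValueError("AshTest scores range from 0 to 1.00")

def histological_grade(ballooning: int, pmn: int, mallory: int) -> str:
    """Sum three elementary activity lesions (each graded 0-3) into H0-H3."""
    total = ballooning + pmn + mallory   # ranges from 0 to 9
    if total == 0:
        return "H0 (no ASH)"
    if total <= 2:
        return "H1 (minimal ASH)"
    if total <= 5:
        return "H2 (moderate ASH)"
    return "H3 (severe ASH)"

# Worked example: moderate ballooning, minimal PMN and moderate Mallory bodies
# on biopsy, against a simultaneous AshTest score of 0.85.
print(histological_grade(ballooning=2, pmn=1, mallory=2))  # -> H2 (moderate ASH)
print(ashtest_grade(0.85))                                 # -> severe ASH
```

In this worked example the two scales disagree by a single grade, which is the kind of minor discordance that accounted for roughly half of the cases in this series.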
Univariate comparisons between the 54 concordant cases and the 69 discordant cases identified the associated factors as the clinical non-severity of ASH (P<0.0001), the number of fragments (P = 0.02) and the inferior vena cava pressure (P = 0.03). In multivariate analysis the only factor associated with discordance was the clinical non-severity of ASH (P<0.001).(S7 Table) In a "worse scenario" for AshTest, assuming no failure for biopsy, the failure rate due to the test was 49% for one ASH-score grade and 7% for two grades. In a best scenario focusing on 2 + grades discordances and assuming high risk of failure for biopsy, the failure attributable rate of AshTest was 2.4% (3/123), 4.1% (5/123) for biopsy and unknown in the remaining case (0.8%). Finally, due to the severity of liver disease all these patients were considered for treatment by corticosteroids. Based on AshTest only, or based on biopsy only, 109 (89%) and 88 (72%) patients would have been treated respectively.(S3 File)(S7 Table) Prognostic performance In this population that includes only severe end-stage liver diseases, only the Maddrey discriminant function had a significant prognostic value in multivariate analysis [Risk ratio = 1.01 (95%CI 1.006-1.020) P = 0.0005]. AshTest, AST/ALT, MELD, AHHS and hemodynamic measurements had no prognostic value.(S8 Table). There was no significant association between AshTest and gradient, VCI and OD pressure.(S9 Table). Discussion The results confirmed the diagnostic performance of AshTest in the appropriate context of use, patients with the most severe form of alcoholic steatohepatitis, on a greater number of patients and using both a binary and four-grade scoring system. [3] Therefore the guidelines that recommended transjugular liver biopsy to prove the presence of histological ASH before starting treatment such as corticosteroids need to be challenged. [1,2] As for other tests, the benefitrisks of AshTest versus transjugular biopsy will be discussed according to their respective advantages and limitations. [17,18] Limitations of AshTest The three main limitations of AshTest are the non-independent status of the authors with possible conflicts of interest, the relatively low availability of the test compared with non-patented tests such as the AST/ALT ratio, and the relatively small number of patients included. We acknowledge that a fully independent validation of AshTest is still missing. Contrary to FibroTest, which has been validated by several independent studies in alcoholic liver disease [19,20,21,22,23] since the first publication in 2006, no studies have re-validated AshTest. The lack of funding in alcoholic liver disease is certainly one explanation. [24] We therefore decided to perform another validation in the specific context of use of suspected severe histological ASH in order to support the treatment decision. The relative non-availability of AshTest/AshFibroSure is increasingly less of a limitation, as AshTest uses the same components as FibroTest/FibroSure (plus AST), which is now prescribed and available in more than 50 countries. FibroTest also enables confirmation of the fibrosis stage with 100% positive predictive value in the present study for cirrhosis, including in the case of cardiovascular insufficiency. 
Advantages of biopsy The recognized advantages of transjugular liver biopsy are its status as the gold standard; its direct assessment of elementary lesions; the simultaneous assessment of the HVPG, which was previously validated as a prognostic index in cirrhotic patients; and its potential ability to diagnose associated causes, such as nodular hyperplasia or heart failure. [24,25,26] The utility of HVPG in our population was unclear, as its prognostic value was not significant and was lower than that of the Maddrey DF (S8 Table). Biopsy was useful in only one patient, showing mixed liver lesions including sinusoidal dilatation, cirrhosis, minimal PMN and minimal ballooning. The pathologist diagnosed cardiac-related liver disease without obvious histological ASH. This patient probably had a mixed cause of cirrhosis, heart failure with an elevated gradient (23 mmHg) together with a high (19 mmHg) inferior vena cava pressure (IVC), and minimal histological ASH. Interestingly, we observed that 31 out of 97 patients (32%) had IVC pressures greater than 15 mmHg, suggesting frequent undiagnosed right heart failure in these patients. These possible relationships between clinical or histological ASH and alcoholic cardiovascular disease should be explored. Limitations of biopsy Transjugular biopsy has a 0.18% probability of resulting in mortality, with an additional 1.27% risk of serious adverse event, as observed in our population (0.8%). [26] In the severe clinical ASH context of use, the other major biopsy limitations were the small specimen length, as well as the large number of fragments. Indeed, the mean number of fragments was significantly higher in discordant versus concordant patients (11 vs. 8). The analyses of severe discordance (2 grades or more) suggested that low quality specimens were associated with 5 false negatives of biopsy and 3 false negatives of AshTest. Therefore, and as observed in NAFLD [27] and chronic hepatitis C, [17,28] liver biopsy is far from being a perfect reference test in patients with severe clinical ASH [8]. This risk of false negatives due to sampling error was also illustrated by one of our patients who was still an active drinker, and who had undergone two biopsies 9 months apart, one without histological ASH (22 mm, 12 fragments) and one with severe histological ASH (26 mm, 4 fragments). Sampling error should be at least equal to those we describe for NASH. [27] Interobserver variability is another cause of false positives or false negatives. Even in 392 drinkers without severe histological ASH, we observed moderate or fair concordance (binary outcome) between two observers using intercostal biopsy specimens, which were of better quality than those from the transjugular route: only 17% were fragmented and 88% were longer than 10 mm. [8] In a recent prognostic study of severe histological ASH, similar limitations of biopsy were observed, including a relatively limited number of subjects despite the international multicenter design (121 in the first set, and 96 in the updated set), small specimen length (median 6 mm) and substantial interobserver variability (non-weighted kappa concordance coefficient ranging from 0.46 to 0.65) compared to the usual coefficient of variation of blood tests lower than 10%. [9,17] Costs The cost of transjugular biopsy is estimated at £1,500 (€1900), and requires an overnight stay and possible transportation costs. 
The cost of AshTest/Ash-FibroSure varied according to countries and health care systems, from 30 euros to 487 dollars, but 50 Euros for FibroMax (which combines FibroTest, ActiTest, SteatoTest, AshTest and NashTest) is the competitive median price of patented blood liver tests. [29] Advantages of AshTest The four advantages of AshTest compared with biopsy are its absence of adverse events, its rapid assessment, its lower variability and lower cost. All components of AshTest can be measured on a single analyzer, and the algorithms, including the security algorithm directly integrated in the biochemistry routine process, enable the results to be obtained in less than 2 hours in our department. We acknowledge that there is no recognized reference blood test for ASH, and that AST/ ALT ratio is considered to be reflection of alcoholic liver disease in general and not specific for diagnosis of alcoholic hepatitis. However AST/ALT ratio was the only "routine" biochemical test discussed in the diagnostic paragraph of EASL guidelines. [2] One original result of the present study was to demonstrate that the performance AshTest was still higher than AST/ ALT ratio in this severe population with Maddrey-DF >32. AshTest versus non-patented tests, such as the AST/ALT ratio, has a higher diagnostic performance for the binary and ordinal severity scores of histological ASH. Its improved performance was confirmed for the overall histological diagnosis, as well as for the elementary lesions, particularly for PMN ( Table 2). This overall greater performance was mainly due to the ability to discriminate between moderate and minimal histological ASH (S3 Table). The advantage of the AST/ALT ratio is its cost and availability. Another advantage of the AshTest versus the AST/ALT ratio is the possibility of simultaneously assessing the stage of fibrosis using FibroTest, and the steatosis grade using SteatoTest in the FibroMax combination using the same sample. A combination of white blood cell and platelet counts had significant performance for the binary diagnosis of histological ASH, in one study. [30] However these results should be interpreted with cautious due to retrospective design, the small sample size with only 58 patients, of which 43 had ASH as confirmed with a liver biopsy; therefore the specificity was assessed in only 5 patients without histological ASH. [30] Limitations of the study design In our first AshTest validation, we recognized that the small number of patients with severe histological ASH (53 out of 225 patients and 299 controls), was a limitation. [3] Therefore this second validation provides, important additional evidence based on the same outcomes and cutoffs, and in the appropriate context of use, a new population of 123 patients who were predetermined as candidates for corticosteroid treatment. We focused on the diagnosis of histological ASH and not on its prognostic performance, as the AshTest was developed from its inception for diagnosis. We acknowledge that the presence of megamitochondria was not prospectively predetermined with the appropriate recommendations, such as the high magnitude power field (600 vs400). [9] Since all of the patients had cirrhosis with clinically severe acute alcoholic hepatitis, the population may not show sufficient variation to appropriately evaluate prognostic significance. Patients were consecutive suspected severe ASH, clinically defined, and all of them had cirrhosis, which could be also viewed as a limitation. 
The first study validated AshTest in a population with a broad spectrum of clinical ASH. In this second validation, the population consisted of patients with cirrhosis and severe ASH, which is precisely where the unmet need lies: a non-invasive biomarker is required when a treatment such as corticosteroids is being considered and a transjugular biopsy would otherwise be needed. Patients with cirrhosis represented 82% of patients with suspected ASH in a recent multicenter study, with the higher odds ratio for short-term survival. [9]

Better validation of biomarkers with a better definition of ASH

Finally, given the limitations of biopsy, it seems fair to simplify the definition of ASH by separating activity (necro-inflammatory) features from the non-activity features of ALD (steatosis, fibrosis), as was achieved for the elementary histological features of chronic viral hepatitis 21 years ago with the METAVIR scoring system, [4] and recently for NAFLD with the SAF score. [6] In ALD, new tests should be developed to predict activity independently of steatosis and fibrosis, covering not only the binary diagnosis but also the ordinal diagnosis of severity stages. We believe that the EASL definition is no longer appropriate, as it combines steatosis with activity features (ballooning and PMN).

Conclusion

These results confirm the performance of AshTest in 123 new patients with suspected severe clinical ASH and within the specific context of use of corticosteroid treatment. AshTest has limitations, including a 2-7% risk of misclassification by two grades, but it is an appropriate non-invasive alternative to transjugular liver biopsy, allowing an estimate of histological activity grades. It could be included in updated algorithms for the diagnosis and treatment of histological and clinical ASH. [1,2,25,29]

Supporting Information

S1 File. Histological methods and details of elementary lesion scores and different ASH definitions and grades.
Review of"Knots"by Alexei Sossinsky, Harvard University Press, 2002, ISBN 0-674-00944-4 This paper is a review of the book"Knots"by Alexei Sossinsky. The review includes a short personal history of knot theory at the end of the twentieth century. Introduction This is a brilliant and sharply written little book about knots and theories of knots. Listen to the author's preface. "Butterfly knot, clove hitch knot, Gordian knot, hangman's knot, vipers' tangle -knots are familiar objects, symbols of complexity, occasionally metaphors for evil. For reasons I do not entirely understand, they were long ignored by mathematicians. A tentative effort by Alexandre-Th[e]ophile Vandermonde at the end of the eighteenth century was short-lived, and a preliminary study by the young Karl Friedrich Gauss was no more successful. Only in the twentieth century did mathematicians apply themselves seriously to the study of knots. But until the mid 1980s, knot theory was regarded as just one of the branches of topology: important, of course, but not very interesting to anyone outside a small circle of specialists (particularly Germans and Americans). Today, all that has changed. Knots -or more accurately, mathematical theories of knots -concern biologists, chemists, and physicists. Knots are trendy. The French "nouveaux philosophes" (not so new anymore) and postmodernists even talk about knots on television, with their typical nerve and incompetence. The expressions "quantum group" and "knot polynomial" are used indiscriminately by people with little scientific expertise. Why the interest? Is it a passing fancy or the provocative beginning of a theory as important as relativity or quantum physics?" Sossinsky continues for 119 pages in that quick, vivid, irreverent way, telling the story of knots and knot theory from practical knot tying to speculations about the relationship of knots and physics. I will first give a personal sketch of some history and ideas of knot theory. Then we will return to a discussion of Sossinsky's remarkable book. A Sketch of Knot Theory Knot theory had its most recent beginnings in the nineteenth century due to the curiosity of Karl Friedrich Gauss, James Clerk Maxwell and Peter Guthrie Tait, and the energy of Lord Kelvin (Sir William Thomson). The latter had the popular physical theory of the day with the hypothesis that atoms were three dimensional knotted vortices in the all pervading ether of space. Thomson enlisted the aid of mathematicians Tait, Little and Kirkman to produce the first tables of knots, with the hopes that these tables would shed light on the structure of the chemical elements. Eventually, this theory foundered, first on the vast prolixity of knotted forms and later in the demise of the etheric point of view about the nature of space. But at the same time, mathematical concepts of manifolds and topology were coming forth in the hands of Gauss, Riemann and later Poincare. With these tools it became possible to analyze topological phenomena such as knotting and the properties of three dimensional manifolds. It was Poincare's fundamental group (of a topological space) that became the first significant tool in knot theory. From the properties of the fundamental group of the complement of the knot, Max Dehn (in the early 1900's) was able to prove the knottedness of the trefoil knot and its ineqivalence to its mirror image. In this way the deep question of detecting the chirality of knots was born. In the 1920's James W. 
Alexander of Princeton University discovered a polynomial invariant of knots and links [1] that enabled many extensive computations. Alexander's polynomial could not distinguish knots from their mirror images, but it was remarkably effective in other ways, and it was based upon ideas from the fundamental group and from the structure of covering spaces of the knot complement. In fact, Alexander based the theory of his polynomial invariant of knots on the newly discovered Reidemeister moves, expressing the topological equivalence relation for isotopy of knots and links in terms of a language of graphical diagrams. (Reidemeister's exposition of the moves can be found in his book [25].) Alexander's version of his polynomial was expressed by the determinant of a matrix that one can read directly from the knot diagram. Invariance of the polynomial is proved by examining how this determinant behaved under the Reidemeister moves. Reidemeister wrote the first book on knot theory and based it upon his moves. It would take another sixty years to realize the power inherent in Reidemeister's approach to knot theory. Topology evolved from the 1920's onward with the seminal work of Seifert [27] on knots and three manifolds and the rapid evolution of algebraic topology. In the early 1950's the precise role of the fundamental group in knot theory was made clear by the work of Ralph Fox [9] and his students. Fox showed how one could extract the analogues of Alexander polynomials directly from the presentations of the groups by a remarkable algebraic technique (derived from the theory of covering spaces) called the free differential calculus. It was a non-commutative and discrete version of Newtonian calculus, adapted to the needs of algebraic topology and combinatorial group theory. Then in the late 1960's John Horton Conway published a startling paper [7] in which he showed how to compute Alexander polynomials without any matrices, free calculi or determinants. His method relied on a recursive formula that expressed the Conway version of the Alexander polynomial in terms of simpler knots and links. The Conway skein theory was met by puzzlement on the part of topologists. It took about ten years for knot theorists to start thinking about the Conway approach, and the first thoughts became proofs of various sorts that the Conway method was valid and that it resulted in a normalized version of the Alexander polynomial. The author of this review was one of those captured souls, who puzzled about the Conway magic. He found two approaches to it. The first [12] went back to techniques of Seifert. The second [13] went back to Alexander's original paper. The second approach rewrote and normalized Alexander's determinant, converting it to a sum over combinatorial states of the link diagram. This sum over states made it easy to prove that the resulting polynomials satisfy Conway's identities and that they are invariant under the Reidemeister moves. The state sum produced a fully combinatorial (graph theoretic) way to understand Alexander's original determinant. The state summation is analogous to certain sums in graph theory and to partition functions in statistical mechanics. The full significance of this analogy was not apparent in 1980/81 when these relations were discovered. At this point it is worth making a digression about the Reidemeister moves. 
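For readers meeting it for the first time, the recursive formula referred to above can be written in one line. The following is the standard textbook statement of the Conway skein relation (our notation, not a quotation from the works under discussion):

```latex
% L_+, L_- and L_0 denote three links whose diagrams are identical except at one site,
% where they have a positive crossing, a negative crossing, and the oriented smoothing.
\nabla_{L_+}(z) \;-\; \nabla_{L_-}(z) \;=\; z\,\nabla_{L_0}(z),
\qquad \nabla_{\mathrm{unknot}}(z) \;=\; 1 .
```

Repeated use of the relation unknots and unlinks any diagram, which is why the computation needs no matrices, determinants or free calculus.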
In the 1920's Kurt Reidemeister proved an elementary and important theorem that translated the problem of determining the topological type of a knot or link to a problem in combinatorics. Reidemeister observed that any knot or link could be represented by a diagram where a diagram is a graph in the plane with four edges locally incident to each node, and with extra structure at each node that indicates an over-crossing of one local arc (consisting in two local edges in the graph) with another. See Figure 1. The diagram of a classical knot or link has the appearance of a sketch of the knot, but it is a rigorous and exact notation that represents the topological type of the knot. Reidemeister showed that two diagrams represent the same topological type (of knottedness or linkedness) if and only if one diagram can be obtained from another by planar homeomorphisms coupled with a finite sequence of the Reidemeister moves illustrated in Figure 2. Each of the Reidemeister moves is a local change in the diagram that is applied as shown in this Figure The first move is special for a number of reasons. One can permute the performance of the first move with the other moves. So one can save up the doings of the first Reidemeister move until the very end of a process, if one so desires. And from a physical point of view, there are good reasons to not use the first Reidemeister move. The move is designed to remove a curl in the line, but a curl in a rope does not go away, it just gets hidden as a twist when you pull on the rope. The reader should try this with a bit of string or a rubber band. Form a curl as in Figure 3 and then pull on the ends of the string or band. You will find that the curl is transmuted into a twist, and if you relax the string or band, the curl can reappear. For this reason, it is useful to consider just the equivalence relation generated by the second and third Reidemeister moves. This relation is called regular isotopy. Regular isotopy was first defined by Bruce Trace in [32] and has turned out to be a useful companion to the full equivalence relation defined by all three Reidemeister moves. One way of thinking about regular isotopy is that one is talking about a framed knot or link where, by framing one means that there is an embedding of a band (i.e. the cross product of a circle with the unit interval) for each component of the link, so that the bands do not intersect one another. The bands can twist and this twisting models the twisting of a physical rope or a rubber band. In order to model embedded bands with curly diagrams we actually do not just remove the first Reidemeister move, but we replace it by the move illustrated in Figure 3. Note that there are two ways, shown in this Figure, to make a curl out of a full twist in a band. These two curls are equated with one another and this becomes the replacement for the first Reidemeister move. With this replacement, one refers to the equivalence classes as framed links in the blackboard framing (since these diagrams can be drawn on a blackboard). We shall speak about "measuring the framing" when speaking about the curls in a diagram taken up to regular isotopy because of this connection with framed links. Figure 3 -Framing Equivalence In 1984 Vaughan Jones dropped a bombshell [10] from which knot theory and indeed modern mathematics has yet to recover. 
By following an analogy between the structure of the Artin braid group and certain identities in a class of von Neumann algebras, Jones discovered new representations of the braid group and used these representations to produce an entirely new Laurent polynomial invariant of knots and links. On top of this, Jones showed that his new (one variable) invariant satisfied a skein relation that was almost the same as the relation for the Conway polynomial. Only the coefficients were changed. This was shocking. On top of that, the Jones polynomial could distinguish many knots from their mirror images, leaving the Alexander polynomial in the dust. Jones' invariant was quickly generalized to a two-variable polynomial invariant of knots and links that goes by the acronym "Homflypt" polynomial after the people who discovered it: Hoste, Ocneanu, Millett, Freyd, Lickorish, Yetter, Przytycki and Traczyk. There were collaborations, with Millett and Lickorish working together, Freyd and Yetter working together and Przytycki and Traczyk working together. This generalization was proved to have its properties in a number of different ways: direct induction on the properties of knot and link diagrams, algebras related to skeins of knots and links, representations of Hecke algebras (generalizing the von Neumann algebras used by Jones), and some category theory in the bargain. A few months went by, and then Brandt, Millett and Ho discovered a new one-variable invariant of unoriented knots and links with a different skein relation. This invariant did not detect the difference between knots and their mirror images, but there was a new idea in it, namely that one could smooth an unoriented crossing in two different ways (as shown in Figure 4). When the author of this review received a copy of the announcement of this invariant, he was inspired and astonished to realize that their invariant could be generalized to a two-variable polynomial invariant L_K(z, a) of knots and links that did detect the difference between many links and their mirror images. The key idea for this generalization is to regard the original invariant as an invariant of framed links, adding an extra variable that measures the framing. The reviewer had earlier found an analogous way to understand the Homflypt polynomial. This was not published until later in [14,15].

Figure 4 - Smoothings

The invariant L_K satisfies the following formulas, where the small diagrams (written here symbolically) represent parts of larger diagrams that are identical except at the site indicated in the diagram:

L_{χ} + L_{χ̄} = z ( L_{≍} + L_{)(} ),
L_{γ} = a L_{⌣},   L_{γ̄} = a^{-1} L_{⌣}.

We take the convention that the letter chi, χ, denotes a crossing where the curved line is crossing over the straight segment. The barred letter denotes the switch of this crossing, where the curved line is undercrossing the straight segment. In this formula we have used the notations ≍ and )( to indicate the two new diagrams created by the two smoothings of a single crossing in the diagram K. That is, the four diagrams differ at the site of one crossing in the diagram K. Here γ denotes a curl of positive type as indicated in Figure 5, and γ̄ indicates a curl of negative type, as also seen in this figure. The type of a curl is the sign of the crossing when we orient it locally. Our convention of signs is also given in Figure 5. Note that the type of a curl does not depend on the orientation we choose. The small arcs on the right hand side of these formulas indicate the removal of the curl from the corresponding diagram.
The polynomial L_K(z, a) is invariant under regular isotopy and can be normalized to an invariant of ambient isotopy by the definition F_K(z, a) = a^{-w(K)} L_K(z, a), where we chose an orientation for K, and where w(K) is the sum of the crossing signs of the oriented link K. w(K) is called the writhe of K. The convention for crossing signs is shown in Figure 5. This use of regular isotopy is a key ingredient in defining the polynomial L_K, and it turns out to be important in defining many other knot polynomials as well. In the case of the reviewer's research, discovering L_K opened a doorway to finding a remarkably simple model of the original Jones polynomial: the bracket state sum model [16,17]. The bracket state sum is also an invariant of regular isotopy, and can be normalized just like L_K by using the writhe. The idea behind the bracket state sum is the notion that one might form an invariant of knots and links by summing over "states" of the link diagram in analogy with summations over states of physical systems (called partition functions) that occur in statistical mechanics. The reviewer had earlier discovered a model for the Alexander-Conway polynomial in this form [13] and was convinced that such models should exist for the new knot polynomials. The first case of such a model occurs with the bracket state summation. The bracket polynomial, < K > = < K >(A), assigns to each unoriented link diagram K a Laurent polynomial in the variable A, such that

1. If K and K′ are regularly isotopic diagrams, then < K > = < K′ >.
2. If K ∐ O denotes the disjoint union of K with an extra unknotted and unlinked component O, then < K ∐ O > = δ < K >, where δ = -A^2 - A^{-2}.
3. < K > satisfies the following formulas, where the small diagrams represent parts of larger diagrams that are identical except at the site indicated in the bracket:
< χ > = A < ≍ > + A^{-1} < )( >,
< χ̄ > = A^{-1} < ≍ > + A < )( >.

We take the same conventions as described for the L-polynomial. Note, in fact, that it follows from the bracket formulas that

< χ > + < χ̄ > = (A + A^{-1}) ( < ≍ > + < )( > ),

making the bracket polynomial a special case of the L-polynomial. It is easy to see that Properties 2 and 3 define the calculation of the bracket on arbitrary link diagrams. The choices of coefficients (A and A^{-1}) and the value of δ make the bracket invariant under the Reidemeister moves II and III (see [16]). Thus Property 1 is a consequence of the other two properties. In order to obtain a closed formula for the bracket, we now describe it as a state summation. Let K be any unoriented link diagram. Define a state, S, of K to be a choice of smoothing for each crossing of K. There are two choices for smoothing a given crossing, and thus there are 2^N states of a diagram with N crossings. In a state we label each smoothing with A or A^{-1} according to the left-right convention discussed in Property 3 (see Figure 4). The label is called a vertex weight of the state. There are two evaluations related to a state. The first one is the product of the vertex weights, denoted < K | S >. The second evaluation is the number of loops in the state S, denoted ||S||. Define the state summation, < K >, by the formula

< K > = Σ_S < K | S > δ^{||S|| - 1},

where the sum runs over all the states S of the diagram. It follows from this definition that < K > satisfies the equations

< χ > = A < ≍ > + A^{-1} < )( >,
< K ∐ O > = δ < K >,
< O > = 1.

The first equation expresses the fact that the entire set of states of a given diagram is the union, with respect to a given crossing, of those states with an A-type smoothing and those with an A^{-1}-type smoothing at that crossing. The second and the third equation are clear from the formula defining the state summation. Hence this state summation produces the bracket polynomial as we have described it at the beginning of the section.
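To make the state summation concrete, here is a small self-contained sketch (our own illustration, not code from the paper under review) that evaluates < K > by brute force from a planar-diagram (PD) code. Each crossing is entered as a 4-tuple of edge labels read counterclockwise, starting from an edge of the under-strand; the pairing rules used below for the A- and A^{-1}-smoothings are one standard choice of convention, so results from another source may differ from this one by the substitution A ↦ A^{-1} (that is, by a mirror image).

```python
from itertools import product

def mul(p, q):
    """Multiply Laurent polynomials in A, stored as {exponent: coefficient} dicts."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

DELTA = {2: -1, -2: -1}   # delta = -A^2 - A^(-2), contributed by each extra loop

def power(p, n):
    out = {0: 1}
    for _ in range(n):
        out = mul(out, p)
    return out

def loops(pairings, edges):
    """Number of closed loops in a state: connected components of the edge-pairing graph."""
    parent = {e: e for e in edges}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in pairings:
        parent[find(a)] = find(b)
    return len({find(e) for e in edges})

def bracket(pd):
    """Kauffman bracket <K> of the diagram given by a PD code (a list of 4-tuples)."""
    edges = {e for x in pd for e in x}
    result = {}
    for choice in product((0, 1), repeat=len(pd)):        # one smoothing per crossing
        weight_exp, pairings = 0, []
        for (a, b, c, d), s in zip(pd, choice):
            if s == 0:          # A-type smoothing: reconnect (a,b) and (c,d)
                weight_exp += 1
                pairings += [(a, b), (c, d)]
            else:               # A^(-1)-type smoothing: reconnect (a,d) and (b,c)
                weight_exp -= 1
                pairings += [(a, d), (b, c)]
        term = mul({weight_exp: 1}, power(DELTA, loops(pairings, edges) - 1))
        for e, c in term.items():
            result[e] = result.get(e, 0) + c
    return {e: c for e, c in result.items() if c != 0}

# Trefoil knot with PD code X(1,4,2,5), X(3,6,4,1), X(5,2,6,3):
print(bracket([(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]))
# exponents {7: 1, 3: -1, -5: -1}, i.e. <T> = A^7 - A^3 - A^(-5) for this diagram
```

The enumeration is exponential in the number of crossings, but that is perfectly adequate for the small examples discussed in this review.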
In computing the bracket, one finds the following behaviour under Reidemeister move I:

< γ > = -A^3 < ⌣ >,   < γ̄ > = -A^{-3} < ⌣ >,

where γ denotes a curl of positive type as indicated in Figure 5, and γ̄ indicates a curl of negative type, as also seen in this figure. The type of a curl is the sign of the crossing when we orient it locally. Our convention of signs is also given in Figure 5. Note that the type of a curl does not depend on the orientation we choose. The small arcs on the right hand side of these formulas indicate the removal of the curl from the corresponding diagram. The bracket is invariant under regular isotopy and can be normalized to an invariant of ambient isotopy by the definition f_K(A) = (-A^3)^{-w(K)} < K >, where we chose an orientation for K, and where w(K) is the writhe of K. By a change of variables one obtains the original Jones polynomial, V_K(t), for oriented knots and links from the normalized bracket: V_K(t) = f_K(t^{-1/4}). The bracket model for the Jones polynomial is quite useful both theoretically and in terms of practical computations. One of the neatest applications is to simply compute f_K(A) for the trefoil knot T and determine that f_T(A) is not invariant under the substitution A ↦ A^{-1}, the substitution that corresponds to taking the mirror image of the diagram. This shows that the trefoil is not ambient isotopic to its mirror image, a fact that is quite tricky to prove by classical methods. To this day, it is still an open problem whether there are any non-trivial classical knots with unit Jones polynomial. After the bracket polynomial model demonstrated the idea of a state summation directly related to the knot diagram there came lots of other examples of state summations, using algebraic machinery from statistical mechanics and Hopf algebras (see [18] for an exposition of some of this development). The subject grew rapidly and then underwent another change in the late 1980's when Edward Witten showed how to think about such invariants in terms of quantum field theory [34]. Witten's methods indicated that one should be able to construct invariants of three-manifolds, and in fact David Yetter [35,36] had shown a similar pattern earlier using categories, and Reshetikhin and Turaev [26] showed explicitly how to accomplish this goal using the algebra of quantum groups (Hopf algebras). Witten's approach brought gauge field theory into the subject of knot invariants. In Witten's approach there is a Lie algebra valued connection (a differential one-form) defined on three dimensional space. One can measure the trace of the holonomy of this connection around a knotted loop K. This measurement is called the Wilson loop W_K(A). The Wilson loop is not itself a topological invariant of K, but a suitable average of W_K(A) over many connections A should be a (framed) invariant. Witten suggested the specific form of this averaging process, and indeed that average works just as advertised at a formal level. The formalism leads to the idea of a topological quantum field theory due jointly to Witten and Atiyah [2], a concept that has been highly influential since that time. Witten's techniques are highly suggestive of fully rigorous combinatorial approaches. His work continues to act as a catalyst for new approaches to the structure of the invariants, and as a connection of the subject to mathematical physics. One of the most exciting aspects of the Wilson loop formulation is that it leads to connections between knot theory and theories of quantum gravity [28,29] and string theory [22,21]. Other approaches to link invariants grew out of Witten's work, most notably the theory of Vassiliev invariants [33].
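Returning to the sketch above, the writhe normalization and the trefoil chirality check just described take only a few more lines; the writhe w = -3 is supplied by hand here (it is the value for a standard table diagram of one trefoil under the usual sign convention), since the unoriented PD code alone does not record crossing signs.

```python
def normalize(br, w):
    """f_K(A) = (-A^3)^(-w) <K>: shift exponents by -3w and adjust the overall sign."""
    sign = (-1) ** (w % 2)                    # (-1)^(-w) equals (-1)^w
    return {e - 3 * w: sign * c for e, c in br.items()}

f_T = normalize(bracket([(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]), w=-3)
print(f_T)                                    # exponents {16: -1, 12: 1, 4: 1}:
                                              # f_T(A) = -A^16 + A^12 + A^4 (one chirality)
mirror = {-e: c for e, c in f_T.items()}      # the mirror image corresponds to A -> A^(-1)
print(mirror == f_T)                          # False: f_T is not mirror-symmetric,
                                              # so the trefoil differs from its mirror image
```

Substituting A = t^{-1/4} in this output gives the familiar Jones polynomial of the trefoil, -t^{-4} + t^{-3} + t^{-1}.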
Vassiliev invariants were first seen as coming from considerations of the topology of the space of all knots and singular knots, but were then quickly connected to combinatorial topology [6] on the one hand, and to Witten's work [3,4] on the other. In [4] there is an excellent account of the approach to Vassiliev invariants via the Kontsevich integrals and in [18,19] the reader will find a heuristic account telling how the Kontsevich integrals arise in the Feynman diagram expansion of Witten's functional integral. The result of this evolution has been a very clear view of just how it is that Lie algebraic structures are related to invariants of knots and links. The reader should recall that a Lie algebra is a linear algebra with a non-associative multiplication (here denoted ab) such that ab = -ba for all a and b in the algebra, and such that a(bc) = (ab)c + b(ac) (the Jacobi identity) for all elements a, b, c in the algebra. A diagrammatic version of the Jacobi identity can be seen just beneath the surface of the Reidemeister moves, when one takes the Vassiliev point of view, and it is this transition through diagrammatic (or categorical) algebra that makes a deep connection between knot theory, algebra and mathematical physics. Three recent developments are worth mentioning. First is the discovery of a generalization (categorification) of the bracket polynomial by Misha Khovanov [11,5] that writes the original Jones polynomial as an Euler characteristic of a complex whose graded cohomology leads to new invariants and to invariants of surfaces in four-dimensional space. One extraordinary outgrowth of the Khovanov work is the deep work of Ozsváth and Szabó [23] finding a categorification of the Alexander polynomial that is related to the state sum in [13]. The second is the discovery by Morwen Thistlethwaite [31] of examples of non-trivial links that have the same Jones polynomial as trivial links, and the extension of this result by Eliahou, Kauffman and Thistlethwaite [8] to infinite families of links with this property. The third is the discovery of virtual knot theory [20], a generalization of classical knot theory to knots in abstract surfaces where there exist infinitely many nontrivial virtual knots with unit Jones polynomial. We do not yet know if any of these virtual counterexamples will yield classical knots with unit Jones polynomial. The fundamental problem remains open.
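As a side remark of our own (not part of the review's text), it is a two-line exercise to check that the Leibniz-style identity quoted above, combined with antisymmetry, is the same thing as the more familiar cyclic form of the Jacobi identity:

```latex
% Assume ab = -ba and the identity a(bc) = (ab)c + b(ac).  Then
0 = a(bc) - (ab)c - b(ac)
  = -(bc)a - (ab)c - (ca)b    % using a(bc) = -(bc)a and b(ac) = -b(ca) = (ca)b,
% which is the usual cyclic form of the Jacobi identity:
(ab)c + (bc)a + (ca)b = 0 .
```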
Sossinsky's book describes this beginning for knot theory and then continues with a discussion of topological equivalence, wild knots, braids and the braid group and a proof of the basic theorem of Alexander that any knot can be represented as a closed braid. Sossinsky then goes on to describe a modern algorithm due to Pierre Vogel for transforming a knot into a braid, and a new algorithm of Dehornoy for finding a minimal representation of a braid. Chapter given by Sossinsky uses a technique known to topologists as "the method of infinite repetition". This method of proof is probably more astonishing than the Theorem! The chapter ends with a discussion of the factorization of knots into prime knots. Here is the beginning of a fruitful analogy of knots and numbers that is really just beginning to be explored. Chapter 5 is packed with a number of things. There is a discussion of Conway's approach and reformulation of the Alexander polynomial. Conway's approach uses oriented diagrams and involves two operations that can be performed at a diagrammatic crossing. One can switch the two strands, or one can smooth the crossing by reattaching the strands, preserving orientation and removing the crossover in the process. These two operations are reminiscent of the operations of topoisomerase enzymes and of combinatorics of DNA recombination (as was observed some years after Conway devised his methods). Sossinsky takes the opportunity to make a quick digression into the subject of DNA topology. He then discusses how one calculates with Conway's polynomial, and how natural it is to generalize (with 20/20 hindsight!) this construction of Conway to the 2-variable Homflypt polynomial. Chapter 6 returns the discussion to the original Jones polynomial and constructs the bracket polynomial state summation in some detail, including its relationship with the Potts model in statistical mechanics. The reviewer is pleased with this exposition of the bracket. The reader will find some speculation by Sossinsky on how this construction was discovered. In this review the author of the review has given his version of this story in section 2. Chapter 7 goes on to discuss the Vassiliev invariants. The discussion of this advanced topic is accomplished neatly, with an emphasis on the ideas and the diagrammatic context. In fact, Sossinsky discusses the so-called four term relation for the Vassiliev invariants, and in this way comes right up to the fundamental relationship between knot invariants and Lie algebras. This is an example of the daring nature of this popular exposition. Chapter 8 treats relationships of knot theory with physics, including a bit about statistical mechanics, quantum field theory and quantum gravity. This book is a tour de force. It is a great read and, I believe that it is a book that can actually be understood by a very wide readership. When large scale topics are discussed, Sossinsky deftly, sometimes ironically, places key ideas before the reader. A myriad of details will have to be dealt with by one who wants to master these topics but here the ideas are laid bare. I would recommend this book as a first book on knot theory to anyone who asks. There are a few slips and misprints. I will tell them here to the best of my knowledge. This is a book that deserves many editions and it is to be assumed that the author will correct these few stumbling blocks in the very next edition of the book. 
Page 43, Figure 3.4 shows a diagram that actually can be reduced to the unknot by Reidemeister moves without making the number of crossings increase (contrary to the claim in the book). Diagrams with that property are not hard to construct, but this is not one of them. On page 70 in Figure 5.7, A is a figure eight knot, contrary to the book's assertion, and B is a trefoil knot. On page 71 it is claimed that the Conway polynomial of the trefoil knot and the figure eight knot are the same. This is not the case. In the book's notation we have 1 + x^2 for the Conway polynomial of the trefoil knot and 1 - x^2 for the Conway polynomial of the figure eight knot. It is important for the next edition to correct this error, since a beginner in calculating these polynomials could become mighty confused by such a claim. On page 73 it is said that Louis Kauffman is at the University of Chicago. In fact, he is at the University of Illinois at Chicago. I cannot resist ending this review with one more quote from the book. Sossinsky expresses throughout his amazing enthusiasm for mathematics. In this quote he is speaking of the last touches, completing the construction and proof of invariance of the bracket polynomial. He says "God knows I do not like exclamation points. I generally prefer Anglo-Saxon understatement to the exalted declarations of the Slavic soul. Yet I had to restrain myself from putting two exclamation points instead of just one at the end of the previous section. Why? Lovers of mathematics will understand. For everyone else: the emotion a mathematician experiences when he encounters (or discovers) something similar is close to what the art lover feels when he first looks at Michelangelo's Creation in the Sistine Chapel. Or better yet (in the case of a discovery), the euphoria that the conductor must experience when all the musicians and the choir, in the same breath that he instills and controls, repeat the "Ode to Joy" at the end of the fourth movement of Beethoven's Ninth." I could not agree more. Read the book!
Capital structure in family firms: the role of innovation activity and institutional investors

Purpose – There is still an ongoing debate on the value relevance of capital structure and its determinants. Recently the issue has been explored in family firms after being explored in mature firms. This paper investigates the role of institutional investors and the firm's innovation activity in influencing the firm's decision and ability to acquire debt capital.
Design/methodology/approach – A large sample of 700 privately-held family firms in Italy from 2010 to 2019 is analysed. Two analysis techniques are used: panel analysis and path analysis. The value of debt and the debt ratio are used as leverage measures. The value of patents (as a proxy for innovation) and institutional investors are the explanatory variables.
Findings – The results show that institutional investors have no relationship with financial leverage measures except when controlling for an interaction variable (Institutional investors × Lombardy region). The patent value is positively correlated with debt; however, the ratio patent-to-asset is negatively related to financial leverage, indicating higher risk exposure. The nonlinearity test demonstrates a turning point when the relationship between patent value and debt inverts.
Practical implications – Firms should monitor their innovation activity since excessive innovation increases risk exposure and affects financing opportunities and value. The involvement of institutional investors does not always enhance value.
Originality/value – Existing literature focuses separately on family firm innovations and financial leverage as outcome variables, emphasizing the role of institutional investors in both fields by adopting agency theory and the socioemotional wealth framework. In this study, the authors go further by merging both relationships, investigating the dynamics of the institutional-family firm innovation relationship in influencing the firm's capital structure. The authors contribute to the ongoing debate by providing original findings on capital structure, governance and innovation, supported by rigorous methods to enhance family firms' decision-making.

Introduction

Family businesses are vital for a healthy economy. Globally they are the most diffused form of business (Gersick et al., 1997), contributing to 40-60% of GDPs and 35-70% of job generation (Van Gils et al., 2008). Family firms have received much attention regarding governance structure and financial performance in the past decades. The family business has been studied from different perspectives and approaches (Astrachan and Jaskiewicz, 2008; Faccio and Lang, 2002; Fernando et al., 2013; Vazquez and Rocha, 2018; Zahra and Sharma, 2004). Different issues have been dealt with separately, focusing only on one single theory or framework. Institutional, agency and socioemotional wealth theories have been trying to explain the dynamics of governance structure and innovations in family firms and the potential agency conflicts (managers vs owners, large vs small shareholders, shareholders vs credit holders). Yet, there are still some gaps that need to be explored. Due to institutional, ownership and financial factors, family firms are seen to have a different financial structure that is mainly composed of owners' equity and bank financing. Stockmans et al. (2010) state that family owners would be willing to forego economic performance to preserve the family's Socioemotional Wealth (SEW). Blanco-Mazagatos et al.
(2007) show that family firms will increase their debt capital even though raising equity from outside shareholders may be more efficient to preserve family control. Financial performance and capital structure in family firms are not linear but a complex decision depending on various considerations. One of the gaps in family business literature concerns the research investigating innovation activity. Family firms' innovation has been explored in the last decade as it is highly relevant from a theoretical and practical point of view. Still, no studies approach this topic that investigate the relationships among institutional investors, capital structure and innovation activity. Considering the current theoretical and practical relevancy of innovation in family businesses, this paper focuses on the relationships among institutional investors, capital structure and innovation activity to fill the literature gap and develop a future research agenda aimed at suggesting major research avenues to guide future theoretical and empirical research toward a better understanding of innovation in the context of family firms. In the same regard, the research on the dynamics of the capital structure in family firms has not captured all factors affecting it, and no conclusive findings are provided. Therefore, in this study, we establish a framework in which the dynamics of leverage in family firms could be explained by two prominent factors highly relevant to family firms: institutional ownership and innovation activity. The choice of the two factors is stemmed from their importance in family and nonfamily firms and their impact on firms' financial performance. Furthermore, the selected factors could not be entirely exogenous in family firms' capital structure (institutional investors could affect leverage through innovation channels). Previous studies show that institutional investors could be attracted by a firm's innovation activity that might affect the firm's capital structure by enhancing debt negotiations and opening more financing channels. Given the shape of the capital structure in family firms and their limited access to external finance, investigating the relationship between innovation and debt structure in family firms essential and helps bridge the gap in family firms' innovation in attracting new external finance. Accordingly, this paper contributes to the ongoing debate by providing original linkage and findings on leverage, governance and innovation, supported by rigorous methods to enhance family firms' decision-making. We provide an up-to-date analysis of institutional investors' role and innovation in determining a firm's leverage; we use a large panel of 700 firms for ten years. Additionally, we provide path analysis showing how each variable would affect leverage individually and jointly with mediation. Based on the introduction of the firm's capital structure and its relationship to innovation and institutional investors, we establish a framework to understand how both factors (innovation and institutional investors) could affect leverage in family firms, separately and jointly. (1) Through investigating the impact of ownership structure and innovation on the firm's capital structure, this study aims to: (2) Investigate whether the involvement of institutional investors in the firm affects the debt levels in the firm's capital structure. (3) Examine whether a firm's innovation activity is a means to attract external debt finance. 
(4) Map the potential interactions between innovation, institutional investors and financial variables in determining a firm's leverage. Responding to the above points, starting from our theoretical framework, this paper aims to integrate divergent theories within a broad framework to help the doctrinal debate investigating the role of institutional investors and the firm's innovation activity in determining the firm's capital structure. We find that institutional investors' existence has no impact on the capital structure except when we control an interaction variable (institutional investors in the Lombardy region). Patent value is positively correlated with Debt, but the ratio of patent-to-asset is not. Moreover, the nonlinearity test demonstrates a turning point when the relationship between patent value and capital structure is inverted. Path analysis also provides consistent findings, emphasizing the paths through which governance and innovations could affect leverage in family firms. Financial variables such as EBITDA, Tangibles and Intangibles explain a firm's capital structure. The rest of the paper is organized as follows: section two covers the related literature on innovation and institutional investors; section three describes data and the methodological approach; section four presents the results and discussion; and finally, in section five, conclusions, implications and suggestions for future research are presented. 2. Literature review Family firms are described in the literature as having a different financial structure from other quoted companies. Literature provides several studies dedicated to the family firms' financial structures (Fit o et al., 2013;Pindado et al., 2015). Some studies have found that family firms adopt highly conservative strategies. Zellweger (2007) describes how family firms have a longer time horizon than nonfamily companies. Zellweger confirms that the possibility of having a long-term investment horizon is a distinctive feature of the family firms and a clear competitive advantage. L opez-Gracia and S anchez-And ujar (2007) for instance, considering the case of Spain, confirm that nonfamily firms approach their level of optimal debt more slowly than do family firms, suggesting that being a family firm reduces agency costs and gives the firm more opportunities to gain access to lender resources (Poza et al., 2004). Other studies, concerning different national frameworks, have confirmed the distinct financial behaviors of family firms, indicating that a business's family nature significantly affects its debt level. Gallo et al. (2004), using a sample of the top Spanish family firms, concluded that family firms follow a particular financial logic. In the study, family firms demonstrated a specific resistance to risk, apparent in their more restricted use of full-time permanent personnel and their considerably lower level of debt than nonfamily firms. Zata Poutziouris (2001) uses a univariate statistical analysis on empirical evidence to show that family firms are systematically more dependent on internally generated funds. Mah erault (2000), comparing the "investment functions" (debt, profit and liquidity) among 49 French family firms listed on the stock market and 46 French family firms that were not listed for several years, found that a third of the businesses preferred to forego development rather than lose autonomy. 
Gallo and Vilaseca (1996), using a sample of 104 Spanish family firms, found that the smaller firms followed less complex financial practices and had low debt ratios. Finally, studies such as those of Memili et al. (2010) and Zahra (2005) refer to risk-taking in family firms and argue that risk plays a role in a firm's image and performance. If debt affects risk, it is essential to highlight the effects of inserting a debt recording regulation into firms' balance sheets. The impact of ownership type on a firm's strategy and performance has long been debated in the literature regarding the potential effect on business decisions (Landry et al., 2013; Capital structure in family firms Miller et al., 2011). Since Baumol (1962), studies have compared owner-controlled to managercontrolled firms based on the premise that owner control reduces agency costs and fosters growth (Jensen and Meckling, 1976a;Morck et al., 1988), but this gives rise to entrenchment and conservatism (Le Breton-Miller and Miller, 2008;Morck et al., 2005;Volpin, 2002). According to Romano et al. (2001), it is possible to observe how a comprehensive review of the existing interdisciplinary literature at the international level highlights the existence of a complex array of factors able to influence the financial decision of small-medium enterprises (SMEs) and family firms (Florackis and Ozkan, 2009;Frank and Goyal, 2009;Gonz alez and Gonz alez, 2008;Harasheh and De Vincenzo, 2022;De Miguel and Pindado, 2001;Setia-Atmaja et al., 2009). Innovation is essential for all firms' growth and survival (Cefis and Marsili, 2006;Schumpeter, 1934;Wolfe, 1994). Family businesses are one of the most complex forms of business (Neubauer and Lank, 1998), and thus innovation researchers have to deal with this specific group separately. Limited aspects of innovation in the family business have been so far investigated, such as the impact of organizational culture, the influence of human-related antecedents on innovation capacity, institutional corruption and innovation and the role of human capital (Dana et al., 2014). Given the shape of the capital structure in family firms and their limited access to external finance, investigating the relationship between innovation and debt structure in family firms essential and helps bridge the gap in family firms' innovation in attracting new external finance. 2.1 Capital structure: finance theories An essential issue in corporate finance concerns the optimal capital structure to maximize firm value. The choice between debt and equity is strategic for corporate managers (Graham and Harvey, 2001). Most studies' theoretical background on the financial structure is based on trade-off (TOT) and pecking order theories (POT), which have a complementary approach. Both theories explain the financing variations among enterprises with their corporate characteristics (Fama and French, 2002;King and Peng, 2013). Trade-off theory is constructed according to the model of Modigliani and Miller (1958), which was later revised to incorporate financial frictions such as taxes (Modigliani and Miller, 1963) and the cost of financial distress (Kraus and Litzenberger, 1973). In its static version, Burgstaller and Wagner (2015) observed that TOT assumes an optimal leverage level at which managers balance the costs and benefits of debt. 
At such an optimal level, the firm's value is maximized when the marginal benefits from tax savings equal the marginal cost of financial distress (Fama and French, 2002;Jensen and Meckling, 1976b;Myers, 2001;Stulz, 1990). Following Burgstaller and Wagner (2015), dynamic aspects enter the model by considering adjustment costs (Fischer et al., 1989), which induce an additional trade-off between deviating from the target capital structure and the costs of adjusting toward it. Thus, deviations from the target debt ratio are only gradually corrected over time (Frank and Goyal, 2007). In short, trade-off theory suggests, as Benkraiem and Gurau (2013) observed, that the capital structure results from rational decisions that attempt to balance the costs and advantages of leverage. Corporate managers try to maximize firm value by reaching an optimal debt ratio, in which the marginal value of debt benefits exactly offsets the costs of issuing more debt (Myers, 2001). This trade-off choice considers three financial elements: tax shields, bankruptcy costs and agency costs. An increase in debt will reduce the firm's tax liability and increase the after-tax payments to capital providers. According to POT Myers (1984) and Myers and Majluf (1984), due to asymmetric information among different market players (managers, debtholders and stockholders), firms rely firstly on internal funds from free cash flows and retained earnings and then on external funds (debts then equity) after the exhaustion of the internal funds. POT has no assumptions regarding an optimal leverage ratio (Degryse et al., 2012;Fama and French, 2002). Therefore, capital structure is driven by the firm's ability to generate profits and capital needs. In this regard, POT assumes that more profitable firms are less indebted, which means a negative association between profitability and leverage, empirically confirmed, e.g. by Rajan and Zingales (1995) and Fama and French (2002). In short, this theory suggests that firms have a particular preference order for financing decisions. Since corporate managers are generally better informed than external investors about the firm's true value and perspectives, the costs of finance will vary according to different financing decisions. Applying this strategy based on various financial sources' pecking order enables the manager to maintain corporate control and operational independence. Pecking order theory can explain why the less profitable firms generally have more debt: they lack internal funds and debt costs less than external equity. Unlike trade-off theory, pecking order theory does not suggest a target leverage ratio. As Benkraiem and Gurau (2013) state, these two theories are complementary. The pecking order theory is based on information asymmetry problems, representing a part of the costs considered in the trade-off theory. Consequently, the usefulness of these theories in explaining financing decisions depends on the importance of information asymmetry problems. When these problems dominate other costs, the pecking order theory prevails and vice versa. However, the relative importance of a theory does not render the second one useless. Indeed, recent studies emphasize the possibility of their coexistence and compatibility (Berggren et al., 2000;Holmes and Kent, 1991;Watson and Wilson, 2002;Zoppa and McMahon, 2014). Based on this theoretical background, POT and TOT could be linked through the agency theory approach. 
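As a worked illustration of the static trade-off logic summarized above (a standard textbook formulation, not an equation taken from the papers cited), the value of the levered firm and the first-order condition for the optimal debt level can be written as:

```latex
% Levered value = unlevered value + tax shield - present value of expected distress costs:
V_L(D) \;=\; V_U \;+\; \tau_c D \;-\; \mathrm{PV}\!\left[\text{distress}(D)\right]
% At the optimum D^{*}, the marginal tax benefit equals the marginal expected distress cost:
\tau_c \;=\; \left.\frac{\partial\,\mathrm{PV}\!\left[\text{distress}(D)\right]}{\partial D}\right|_{D=D^{*}}
```

Pecking order theory, by contrast, imposes no such first-order condition: leverage simply absorbs whatever financing deficit remains once internal funds are exhausted.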
Given that agency conflict could be less marked in small and family businesses (due to the aggregation of management and ownership), equity financing and internal free cash flows become less relevant in this context (Ang, 1992;Fama and French, 2002;Jensen, 1986;Jensen and Meckling, 1976b). However, agency conflicts between owners and debtholders could emerge when debt financing is introduced, creating a considerable agency cost that affects the cost of debt (Jensen and Meckling, 1976b;Myers, 1977). Others focus on cash holdings and their relationship to SMEs' performance, suggesting a further extension to family firms due to the particular governance structure (Dimitropoulos et al., 2020). Moreover, such theoretical frameworks help define the variables explaining leverage dynamics in SMEs and family firms. Degryse et al. (2012), L opez-Gracia and Sogorb-Mira (2008), Michaelas et al. (1999) and Sogorb-Mira (2005), for example, favor the POT. However Mc Namara et al. (2017) and Sardo and Serrasqueiro (2017) favor TOT in explaining capital structure dynamics in family firms. Harasheh and De Vincenzo (2022) support TOT in a sample of Italian SMEs with an innovative approach using the dose-response function and show that SMEs are underleveraged. In this sense, the applicability of traditional theories in explaining family firms' leverage is obscured by specific financing constraints and motivations not present for larger firms. The role of institutional investors Generally, two main factors might explain the lower institutional investor participation in family-controlled businesses. First, a high level of family ownership prevents institutional investors from investing in family-controlled businesses. Second, institutional investors may avoid family-controlled businesses; institutional investors prefer large firms with high liquidity (Gompers and Metrick, 2001). Institutional investors looking for liquidity and taking significant control positions are unattractive (Coffee, 1991). Several studies concerning institutional investors' role in a firm's decisions include agency cost (Jensen, 1986;Stulz, 1990) and information asymmetry (Myers and Majluf, 1984). The studies are based on the fact that an increase in the importance of long-term institutional investors to a firm will improve monitoring and information disclosures. Cleary and Wang (2017) demonstrated that firms with a relatively larger long-term institutional investor base have lower investment outlays and higher dividends while maintaining lower cash and higher debt levels than firms with a more transient shareholder base. According to Eaton et al. (2014), Gaspar et al. (2005) and Hao (2014), the reduction of agency costs is only associated with improved monitoring or improved information asymmetry or both (Attig et al., 2013;Elyasiani and Jia, 2010). There is no evidence that the existence of institutional investors has an impact on a firm's leverage. In the context of family firms, institutional investors' existence enhances reducing asymmetric information but with no clear-cut conclusion about the impact on capital structure. However, others argue that, based on agency theory, family firms do not like external ownership, which reduces agency costs. The institutional theory might also play a role in family firms, in which institutional investors help family firms to absorb regulatory shocks. In many cases, institutional investors avoid investing in family firms (Colot and Bauweraerts, 2016;Fernando et al., 2013). 
Cronqvist and Nilsson (2003) and Young et al. (2008) highlight the complexities in family firms when institutional investors are engaged. More generally, ownership structure and capital structure have been studied in SMEs at different ownership levels; studies find that ownership and leverage have no linear relationship. There is a causality between the two variables (Wellalage and Locke, 2015). In the same context, Badrinath et al. (1996) and Skinner (1989) found that higher leverage is associated with institutional investors' higher involvement in family firms, while Fernando et al. (2013) found the opposite. The following hypothesis is formulated accordingly.

H1. There is a positive relationship between institutional investors (INST) and leverage in family firms.

In this hypothesis, we test whether the involvement of INST in family firms changes the capital structure; in particular, we investigate to what extent the participation of INST may open new debt channels with banks since family firms become backed by INST.

The role of innovation

Some family firms tend not to take risks, conditioning their innovative actions for this reason (Casprini et al., 2017; Zahra and Sharma, 2004). On the other hand, some argue that family firms are not necessarily less innovative than nonfamily businesses. Over time, they may become more innovative and aggressive in their markets than nonfamily businesses (Filser et al., 2016). The question of how family businesses manage innovation and avoid path dependency traps remains largely unanswered (Dieleman, 2019; Núñez-Cacho and Lorenzo, 2020). Firms no longer depend merely on their internal activities to innovate. Instead, they are accessing external resources or even, in some cases, adopting an open innovation (OI) model. This situation requires firms to manage the inbound and outbound flows of knowledge necessary to develop innovation, reducing risks and costs (Chesbrough and Schwartz, 2015). Giudice and Maggioni (2014) shed light on pluralism as a driving force in the knowledge economy, pushing firms into a cumulative process of adaptation and re-creation through innovative means of social interaction in global environments. Abdulkader et al. (2020) focus on value co-creation by integrating OI principles and mechanisms of value systems. Orlando et al. (2021) confirm that open innovation fosters patenting activity. Casprini et al. (2017) studied the challenges in acquiring and transferring knowledge in OI processes and developed two distinctive capabilities, labeled imprinting and fraternization, that helped the family firm (case study) overcome the barriers to knowledge acquisition and transfer. Regarding R&D, Dong et al. (2021) show that family ownership has an inverted U-shaped relationship with cooperative R&D and that political ties moderate the relationship, which is steeper in firms with more political ties than in firms with fewer political ties. Thrassou et al. (2018) introduced the concept of Agile Innovation in family firms, indicating that these organizations have an inherent disposition toward agile innovation, with multicultural management acting as the agent of equilibrium. Carnes and Ireland (2013) theorize that resource pooling is a mediator between family and innovation. Penney and Combs (2013) add that family structure affects how different families are involved in grouping processes that help or slow down innovation. From the above discussion, it appears that no conclusions have been drawn on the motives, uses and benefits of innovations in family firms.
On the other hand, some authors underline several aspects that make debt unsuitable for financing innovation (David et al., 2008;Hall and Lerner, 2010). These authors highlight that debt, even though it is one of the most popular forms of financing among small technology companies and large companies, is not appropriate for innovation because innovation is intrinsically risky, generates firm-specific assets and is non-re-deployable to different uses. Hall and Lerner (2010), L€ o€ of (2009) and O'Brien (2003) affirm that debt financing is negatively correlated with the probability of being an innovative company. However David et al. (2008) recognize that debt plays an essential role in innovation. It acts as a disciplinary mechanism, and the effects of debt on innovation depend on the monitoring mechanism adopted by debt holders. Ibrahim (2010) suggests that debt is not an uncommon funding source for many companies engaged in innovation. Choi et al. (2016) show that debt plays an important role in innovative activity by encouraging exploitation. Such studies do not draw a clear direction between innovation and debt in which bank debt can be channeled to already innovative family firms. In this study, we establish that innovation attracts more bank capital. Other studies show that family ownership discourages R&D investments, likely hindering innovation (Chen and Hsu, 2009;Chin et al., 2009;Chrisman and Patel, 2012;Craig and Moores, 2006). Moreover, firms with high family ownership may increase R&D when the CEO-chair roles are separated or when more independent outsiders are included in the board (Chen and Hsu, 2009). Even if extensive literature analyzed the impact of innovation on performance variables, there is no evidence that innovation affects a firm's leverage. In this hypothesis, we investigate whether family firms with stronger innovation activity enjoy higher debt levels; in other words, do firms with more innovation activity attract more debt, or that innovation impedes access to debt?. Thus, the following hypothesis is developed: H2. There is a positive relationship between a firm's innovation activity and leverage in family firms. The development of the third hypothesis is challenging and novel since most studies focus on family firm innovations and R&D as outcome variables; whereas, in this study, we focus primarily on financial leverage through bank financing, focusing on the roles of institutional investors and innovations in family firms individually and jointly. First, we begin with the relationship between institutional investors and innovation in family firms. Previous literature finds no conclusive evidence, adopting several theories and frameworks such as agency theory and SEW. Therefore, depending on the context and the framework employed, institutional investors might positively or negatively influence innovation activity in family firms ( (Douma et al., 2006;La Porta et al., 1999). It is also not clear how the motivation and willingness of institutional investors can help family businesses understand the need to innovate (Boh et al., 2020). Since the institutional ownership-family firm innovation nexus is established, and the institutional ownership-capital structure is also established, we go further by merging both relationships, investigating the dynamics of the institutional-family firm innovation relationship in influencing the firm's capital structure. Institutional investors are attracted to companies with growth potential. 
Notwithstanding, institutional investors are less inclined to be engaged in family firms to avoid conflicts with family members; they might be interested in family firms with potential growth through innovations. When banks are reluctant to lend to family firms to finance their innovations, they might lend them for the same purpose when institutional investors back family firms. Therefore, we assume indirect relationships between institutional investors, innovation and leverage in family firms. This hypothesis investigates how institutional investors and innovations determine firms' leverage through path analysis. In other words, institutional investors are more inclined to influence leverage when the firm is innovative, or innovation is more important in determining leverage when institutional investors back the firm up. Thus, the following hypothesis is formulated: H3. Institutional investors and innovation influence leverage through mediation effects. 3. Data and methodological approach 3.1 Data Data for 700 Italian non-traded family firms were collected from 2010 to 2019, whose ownership belongs to one person or family members with more than 50% of the shares. Their sales revenues are between EUR 5 million and EUR 50 million [1]. Our final dataset is 7,000 firm-year observations. The sample firms belong to three macro sectors, primary (agriculture), secondary (industrial) and third (services), distributed among 20 regions of Italy. Italy represents an interesting context for studying family firms, given the most widespread form of business; more than 780,000 family businesses, making up 85% of companies and contributing to 70% of employment. The Italian context is in line with that of the main European economies such as France (80%), Germany (90%), Spain (83%) and the UK (80%), while the factor that sets Italy apart from these countries is the lesser recourse of family businesses to external managers: 66% of Italian family businesses are fully managed by family members, while this applies to only 26% of French family businesses and just 10% in the UK. Using AIDA [2] database, we collect the following related data (see also Table 1): (1) Dependent variable: This study uses two debt measures: total debt and the total debt ratio. Both measures capture the capital structure dynamics in family firms ( € Ohman and Yazdanfar, 2017;Pacheco and Tavares, 2015) and (Burgstaller and Wagner, 2015;Fernando et al., 2013;Mc Namara et al., 2017). Debt is considered a funding channel for family firms to finance innovative activities. (2) Control Variables: We collect financial indicators (total assets, EBITDA, tangibles and intangibles); these variables are used as proxies for the firm's financial situation extracted from the balance sheet and the income statement. EBITDA is considered an appropriate financial performance indicator. It shows assets' ability to generate earnings distributed to both capital providers and can also be a proxy of operating cash flows. As evidenced in the previous studies, Tangible Assets are considered a direct object as debt collateral when granting new loans. Such variables have been used as control variables in capital structure studies in family firms (3) Independent variables: Institutional investors' existence in family firms represents a proxy for the governance and ownership aspects (Bushee and Noe, 2000; Fernando et al., 2013). It is a binary variable that takes a value of 1 (no institutional investors) and 0 otherwise. 
Previous literature shows no conclusive sign for the relationship between capital structure and institutional investors' existence, so the sign can be positive or negative. Institutional investors are related to family firms through agency theory: their existence might improve monitoring and thus performance, or they can be a burden to the original family owner, creating a type of agency conflict. Patent value is taken as a proxy for the firm's level of innovation activity (Chin et al., 2009). Again, there is no solid conclusion regarding the role of patents and innovation in attracting debt (changing the capital structure). However, excessive innovation activity relative to the firm's size would increase the firm's specific risk and impede banks from financing such risky firms. We consider the value of patents as an aggregate proxy for innovation activity without distinguishing between various types of innovations (input, process or output).

(4) Differences among regions and sectors are considered by inserting dummies for the industries and regions, to see whether a specific region or sector behaves significantly differently from the others.

Financial variables used as independent variables are: Assets, as a measure of a firm's size; EBITDA, a clean measure of profitability and a proxy of operating cash flow not influenced by the current debt level; Tangibles, the value of a firm's physical assets, closely evaluated as direct collateral for debt issuance; and Intangibles, which represent the firm's ability to create patents, goodwill, brands and other intangible assets and can also be viewed as a broad proxy of innovation. The variables appear in two forms: the natural logarithm of the value and the ratio to total assets.

Theoretical framework

We are aware that no single theory explains the relationship among the variables in our framework. Therefore, we explain such relationships through agency theory, informational asymmetry and institutional theory. We assume that there is a bidirectional relationship among the variables of concern. Leverage is supposed to be influenced by two independent variables, institutional investors and innovation activity. The linkage between institutional investors and leverage is expected to be positive, since institutional investors may be considered a backup for firms' debt or may help open new debt channels through banks by enhancing informational efficiency. Concerning the leverage-innovation relationship, positive or negative signs could be obtained; the accumulation of existing innovations can be considered either collateral (positively) or an additional firm-specific risk factor (negatively). In the same context, the two independent variables can also be related in a bidirectional relationship; according to agency theory, the existence of institutional investors helps firms to be more innovative by adding an external element of control and enabling firms to find innovative solutions, or innovative firms attract institutional investors. In this context, some studies are in favor of the second direction, through which institutional investors look for already innovative firms that meet their objectives; this is also supported by a recent related study on the relationship between innovation activity and institutional investors in Italy (Harasheh et al., 2018). A firm's financial performance is considered an essential element in determining leverage and is used as a control variable.
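To make the two variable forms concrete, the following minimal pandas sketch shows how the logged values, the asset-normalized ratios and the INST × LOMB interaction could be constructed from a firm-year panel. The CSV file and the column names are hypothetical placeholders, not the authors' actual AIDA extract.

```python
import numpy as np
import pandas as pd

# Hypothetical firm-year panel with the raw fields described above;
# column names are illustrative, not the authors' actual AIDA export.
panel = pd.read_csv("family_firms_2010_2019.csv")  # firm_id, year, debt, assets, ebitda,
                                                   # tangibles, intangibles, patent_value,
                                                   # inst (0/1), region, sector

# Level (log) form used in models (1) and (3)
for col in ["debt", "assets", "ebitda", "tangibles", "intangibles", "patent_value"]:
    panel[f"ln_{col}"] = np.log1p(panel[col].clip(lower=0))  # log1p guards against zeros

# Ratio form used in models (2) and (4): normalize by total assets
panel["lev"] = panel["debt"] / panel["assets"]
panel["ebitda_a"] = panel["ebitda"] / panel["assets"]
panel["tang_a"] = panel["tangibles"] / panel["assets"]
panel["intang_a"] = panel["intangibles"] / panel["assets"]
panel["innov_a"] = panel["patent_value"] / panel["assets"]

# Interaction terms discussed in the models section
panel["lombardy"] = (panel["region"] == "Lombardia").astype(int)
panel["inst_x_lomb"] = panel["inst"] * panel["lombardy"]
```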
Figure 1 shows the theoretical framework.

Models for panel analysis

Following Booth et al. (2001), we address the hypotheses using the following regression models applied to panel data. Models (1) and (2) test hypothesis one. In model (1), the natural logarithm is used: DEBT is the value of total debt; INST is a binary variable for the existence of institutional investors in the firm; ASST is the value of the firm's assets; EBITDA is earnings before interest, taxes, depreciation and amortization. In model (2), we normalized the variables by the value of total assets to use the leverage ratio and eliminate any size bias in the sample regarding innovation activity; innovation activity is therefore considered with reference to total assets. LEV is the firm's leverage, which represents the ratio of debt to total assets; EBITDA_A is a profitability ratio that equals EBITDA/Assets; TANG_A is the ratio of tangible assets to total assets; INTANG_A is the ratio of intangible assets to total assets. In models (1) and (2), we also added an interaction variable (INST × LOMB); this variable allows us to test the relationship between INST and the debt measures for institutional investors located in the Lombardy region, since Lombardy alone captures about one-third of the firms and institutional investors in the sample.

Models (3) and (4) test hypothesis two. Like model (1), in model (3) we apply the natural logarithm of all variables, so INNOV is the natural logarithm of a firm's innovation activity represented by patent value; IND × LOMB is an interaction variable for the industrial sector in the Lombardy region; the definitions of the rest of the variables remain unchanged. Model (4) is like model (2), in which values are normalized by asset value: INNOV_A is the ratio of the value of innovation activity (patent value) over total assets, and the definitions of the rest of the variables remain unchanged.

SEM analysis

The theoretical setup shows that there could be dynamic relationships among the variables of interest with a mediation effect. We chose mediation rather than moderation because the mediators are not exogenous and can be affected by the independent variable. Using path analysis in structural equation modeling is one technique to test such relationships and deal with possible endogeneity issues. Like the panel models, we construct the following path models to test the relationship between institutional investors, innovation and the firm's leverage:

(1) Impact of institutional investors on debt (INNOV as a mediator)
(2) Impact of institutional investors on debt (EBITDA as a mediator)
(3) Impact of institutional investors on leverage (INNOV/ASST as a mediator)
(4) Impact of institutional investors on leverage (EBITDA/ASST as a mediator)
(5) Impact of patent on debt (INST as a mediator)
(6) Impact of patent on debt (EBITDA as a mediator)
(7) Impact of patent-to-asset on leverage (INST as a mediator)
(8) Impact of patent-to-asset on leverage (EBITDA/ASST as a mediator)

Table 2 presents the descriptive statistics of the variables of interest. Panel-a reports the monetary values of the selected variables. In terms of CV, EBITDA and Tangibles turn out to be the least risky variables, demonstrating how Tangibles are considered plausible collateral from the creditors' perspective, as does EBITDA as a broad measure of operating cash flows. However, as a proxy for innovation activity, Patent exhibits the highest variance in terms of CV, which makes it less desirable as debt collateral.
On the other hand, Panel-b reports the ratios of all variables to total assets. The average debt ratio is 76%, but in some cases it can exceed the value of assets, resulting in a negative equity value; patents to assets can reach 42%, which may signal a risky business activity. The correlation matrix in Table 3 shows the correlation coefficients among the two panels' variables (values in panel-a and ratios in panel-b). We notice that the capital structure measures positively correlate with the financial variables and innovation activity. The value of patents has a stronger correlation with the debt measures than patents to assets. The financial variables are correlated with each other; however, this does not introduce estimation bias due to multicollinearity, since the VIF values are within the accepted threshold of 10 (and of 2.5 as a more conservative threshold).

Panel findings

In both models used to test H1, we find no significant relationship between the existence of institutional investors and either debt measure (DEBT and LEV); in this case, institutional investors do not influence the dynamics of capital structure in family firms, and they do not seem to contribute to reducing the informational asymmetries surrounding family firms. Consequently, H1 is not accepted, as we find no evidence of a relationship between institutional investors and the firm's debt measures. This differs from Fernando et al. (2013), who find that leverage is negatively associated with institutional ownership. Additionally, the results could also concur with Blanco-Mazagatos et al. (2007), who suggest that the debt choice in family firms is not related to the availability of other external equity sources to preserve family control. In this case, agency theory could provide a rationale for such a relationship, in which family firms try to avoid institutional investors as much as possible to avoid any potential conflict. However, this might contradict Badrinath et al. (1996) and Skinner (1989), who show that higher leverage is related to institutional ownership in family firms. Results are presented in Table 4.

In Panels A and B of Table 4, assets and EBITDA are strongly correlated with the leverage measures, showing how firm size and profitability are considered substantial factors from the lender's perspective when issuing new debt to family firms. These results do not support the Pecking Order Theory, since firms with high profitability also ask for external financing through debt. We also find that tangibles and intangibles are significant in explaining family firms' debt levels, with no statistical differences in their effect, whereas the literature emphasizes the tangible asset's value as a backup for a firm's debt. Dummies for regions and sectors show no statistical differences among different sectors and regions. It is worth noting that, in both equations of H1, the interaction variable (INST × LOMB) turns out to be significant; this means that in the Lombardy region, institutional investors can have a positive influence on the firm's debt levels by opening new debt channels. Even though the general model shows no significant relationship between institutional investors and the firm's debt, the Lombardy region turns out to behave differently; this is because Lombardy represents a substantial and competitive platform for the interaction between firms and institutional investors, with about 30% of institutional investors investing in family firms operating in Lombardy.
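As an illustration only, specifications in the spirit of models (1) and (4), including the INST × LOMB interaction and the region and sector dummies, could be estimated as sketched below with pooled OLS and firm-clustered standard errors; the authors' exact panel estimator is not reported here, and the variable names extend the hypothetical panel built earlier.

```python
import statsmodels.formula.api as smf

# Model (1)-style specification: log of total debt on the institutional-investor dummy,
# logged financials, the INST x Lombardy interaction, and region/sector dummies.
# This is pooled OLS with firm-clustered standard errors, not necessarily the exact
# estimator used by the authors.
m1 = smf.ols(
    "ln_debt ~ inst + ln_assets + ln_ebitda + ln_tangibles + ln_intangibles"
    " + inst_x_lomb + C(region) + C(sector)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})

# Model (4)-style specification: leverage ratio on the patent-to-assets ratio and
# the asset-normalized financial controls.
m4 = smf.ols(
    "lev ~ innov_a + ebitda_a + tang_a + intang_a + C(region) + C(sector)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})

print(m1.summary())
print(m4.summary())
```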
In the following table, we present the results of testing H2. Panel-A concerns the relationship between the value of patents and debt; we find that patent value is a strong determinant of a family firm's debt level, indicating that creditors (banks) consider the value of a family firm's innovation activity when granting credit, which is consistent with previous literature such as David et al. (2008) and Ibrahim (2010), who find support for a positive relationship between innovation and debt. We also confirm the importance of a firm's financial performance proxies, such as total assets and EBITDA, as determinants of a firm's debt level, even in the presence of innovation activity. Model (4) tests the relationship between leverage and patent value, where values are normalized by total assets. It shows that the firm's financial characteristics (profitability ratio, tangibility ratio and intangibility ratio) are considered essential factors in determining leverage in family firms, as in non-family firms. Interestingly, we find that the variable of concern, "Patent/Asset," is negatively correlated with the family firm's leverage, which contradicts the positive relationship between patent value and debt value in the previous test. This would mean that the importance of innovation value diminishes as its value relative to total assets grows. This finding is consistent with corporate risk theory, in which firm-specific risk increases as the firm's operations come to depend mainly on high-risk activities such as patenting and innovations; in such cases, family firms tend to finance their risky activities either internally (EBITDA) or by looking for venture capitalists, who have a higher tolerance for firm-specific risk. These findings are not uncommon and are consistent with Hall and Lerner (2010), Lööf (2009) and O'Brien (2003), who conclude that excessive innovation might be considered a firm-specific risk and that debt is not appropriate for financing innovations. Additionally, our findings are consistent with the approach of Chen and Hsu (2009): debt can act as a disciplinary mechanism, but the extent of the relationship between debt and innovation depends on the monitoring mechanisms adopted by debt providers. In this regard, H2 is supported: there is a relationship between a family firm's innovation activity and its capital structure, and the direction and extent of the relationship depend on the intensity of innovation activity.

4.1.1 Robustness test [3]

In this context, we assume that innovation attracts firms' debt, being considered a form of guarantee. However, when a firm's innovation activity reaches a high level with respect to the firm's general activity, it becomes a specific risk factor from the banks' perspective. We test this argument by running a nonlinear regression of patent activity against the debt measures; we find that the relationship between patent activity and debt level is strongly positive, but only up to a certain level of innovation activity. This points to an optimal level of a firm's innovation activity that can be considered collateral by creditors; however, uncertainty regarding the firm's operations starts to emerge when the level of innovation activity becomes high.

Path analysis

Path analysis allows us to study the impact of the independent variables (institutional investors and patents) on the leverage variables using the mediation effect.
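For intuition, a stripped-down version of one such path (patent value acting on debt partly through EBITDA, as in path model (6)) can be approximated with two regressions and the product-of-coefficients rule. The authors estimate the full set of paths jointly within a structural equation model, so this is only a sketch of the mediation logic, reusing the hypothetical variables defined earlier.

```python
import statsmodels.formula.api as smf

# Illustrative two-equation mediation in the spirit of path model (6):
# patent value -> EBITDA (mediator) -> total debt, all in logs.
# A full SEM would estimate both equations jointly; here the indirect
# effect is approximated as the product of coefficients.
med = smf.ols("ln_ebitda ~ ln_patent_value + ln_assets", data=panel).fit()
out = smf.ols("ln_debt ~ ln_patent_value + ln_ebitda + ln_assets", data=panel).fit()

a = med.params["ln_patent_value"]        # patent -> mediator
b = out.params["ln_ebitda"]              # mediator -> debt, holding patent fixed
direct = out.params["ln_patent_value"]   # direct effect of patent on debt
indirect = a * b                         # mediated (indirect) effect
total = direct + indirect

print(f"direct={direct:.3f}, indirect={indirect:.3f}, total={total:.3f}")
```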
Sections A and B report the path models of the institutional investor variable on the debt measures, with innovation and EBITDA (separately) as mediators. On the other side, sections C and D present the path models of the innovation variables on the debt measures, with institutional investors and EBITDA (separately) as mediators. In section C, the patent is strongly related to debt both in a direct path and through both mediators. However, in section D, the ratio of patents to assets is negatively related to leverage with EBITDA as a mediator, whereas it is positively associated with leverage with the institutional investor variable as a mediator. In conclusion, the path analysis supports the panel findings, providing further insights into how the different paths might affect the family firms' capital structure. Figure 2 demonstrates the path analysis for the eight models presented earlier; the arrows show the coefficients of the direct effects, whereas the total effects are shown in Table 6.

Conclusions and implications

Firms continuously look for new value-creation opportunities; family firms are no exception. Given the role of leverage in value creation, family firms try to optimize their debt levels as a source of external financing and a value creation tool. Most previous studies focus on family firm innovations and R&D as outcome variables; in contrast, few studies focus on financial leverage factors in family firms. Therefore, in this study we focus primarily on financial leverage through bank financing, examining the roles of institutional investors and innovations in family firms individually and jointly. The role of institutional investors in a firm's capital structure and innovation is well investigated, with no conclusive evidence, adopting several theories and frameworks such as agency theory and SEW. Therefore, we go further by merging both relationships, investigating the dynamics of the institutional-family firm innovation relationship in influencing the firm's capital structure. Institutional investors are attracted to companies with growth potential. In particular, we study the role of two factors in the firm's leverage, a classical one (institutional investors) and a non-classical factor (innovation activity), filling the current gap in the literature by establishing a framework connecting the three variables of concern. We study whether institutional investors help family firms open new debt channels since institutional investors back them up. Independently, we also examine the role of a family firm's innovation activity on leverage. We test whether more innovative family firms attract more debt, and we examine the interaction between the variables in jointly determining capital structure through path analysis. We study a large sample of 700 family firms in Italy from 2010 to 2019 using panel regressions, path analysis and a nonlinearity test. The results show that the institutional investors' variable has no relationship with the leverage measures except when we control for an interaction variable (INST × LOMB), suggesting that institutional investors do not help family firms in establishing new debt channels except in the Lombardy region, which is considered the biggest platform for innovation, banks and institutional investors. The value of patents is positively correlated with debt; this shows that a firm's patent activity can be used to guarantee new bank loans. However, the ratio of patents to assets is negatively related to the firm's leverage, suggesting that innovation could be a double-edged tool.
This finding is consistent with the risk-return trade-off: when the level of innovation activity exceeds a certain point relative to the firm's overall activity, banks consider it an idiosyncratic, risky activity. Therefore, creditors either increase the cost of financing or refuse to grant new debt. Moreover, the nonlinearity test demonstrates a turning point at which the relationship between patent value and capital structure inverts; this confirms the above argument on risky activity. Financial variables such as EBITDA, Tangibles and Intangibles explain the family firm's capital structure, which shows less consistency with the pecking order theory of internal financing preferences. The path analysis confirms the panel findings; we provide more details on the possible paths through which each independent variable affects the debt variables.

Figure 2. Path analysis

Implications: This research provides practical implications for different players in the market. At the firm level, institutional investors do not seem to help firms negotiate new debt; they just exploit existing opportunities. Furthermore, since innovation activity is a driver of financial leverage, creating a national platform showing all family and small businesses' innovation activity would enhance visibility, improve the informational efficiency of the market and improve decision-making by fund providers. Firms should also watch their involvement in innovation activities, since excessive activity increases risk exposure, reduces financing opportunities and ultimately affects value creation.

Limitations: This research is not limitation-free; we focus only on Italy, which might affect the generalizability of the findings. We consider the value of patents as an aggregate proxy for innovation activity without distinguishing between various types of innovations (input, process or output). Other measures could also be considered to provide a more comprehensive view of the relationship in question. Moreover, we mainly focus on bank debt, without distinguishing additional capital-raising channels for family firms and without concentrating on specific case studies that would provide more profound insights into the family firm-bank relationship. Such limitations leave space for further and future investigations to answer unsolved issues.

Note 3: Results of this test are omitted for space reasons, but they are available upon request.
Hair Growth Promoting Effects of 15-Hydroxyprostaglandin Dehydrogenase Inhibitor in Human Follicle Dermal Papilla Cells

Prostaglandin E2 (PGE2) is known to be effective in regenerating tissues, and bimatoprost, an analog of PGF2α, has been approved by the FDA as an eyelash growth promoter and has been proven effective in human hair follicles. Thus, to enhance PGE2 levels while improving hair loss, we found dihydroisoquinolinone piperidinylcarboxy pyrazolopyridine (DPP), an inhibitor of 15-hydroxyprostaglandin dehydrogenase (15-PGDH), using DeepZema®, an AI-based drug development program. Here, we investigated whether DPP improved hair loss in human follicle dermal papilla cells (HFDPCs) damaged by dihydrotestosterone (DHT), which causes hair loss. We found that DPP enhanced wound healing and the expression level of alkaline phosphatase in DHT-damaged HFDPCs. We observed that DPP significantly down-regulated the generation of reactive oxygen species caused by DHT. DPP recovered the mitochondrial membrane potential in DHT-damaged HFDPCs. We demonstrated that DPP significantly increased the phosphorylation levels of AKT/ERK and activated Wnt signaling pathways in DHT-damaged HFDPCs. We also revealed that DPP significantly enhanced the size of the three-dimensional spheroid in DHT-damaged HFDPCs and increased hair growth in ex vivo human hair follicle organ culture. These data suggest that DPP exhibits beneficial effects on DHT-damaged HFDPCs and can be utilized as a promising agent for improving hair loss.

Introduction

Hair loss is a prevalent occurrence among individuals of varying ages [1]. Given the significant association between hair and cultural identity, the onset of hair loss can profoundly impact one's overall well-being and quality of life [2,3]. Androgenetic alopecia (AGA) is the predominant hair loss condition among men, characterized by the gradual replacement of thick terminal hair with fine vellus hair in genetically predisposed areas of the scalp, notably the frontal and vertex regions [4]. The principal pathological changes observed in AGA involve alterations in hair follicle dynamics, marked by a progressive shortening of the anagen phase. The pivotal role of 5α-reductase in dermal papilla cells is central to these changes, facilitating the conversion of testosterone (T) into DHT. Subsequently, strong binding of DHT to androgen receptors (AR) triggers a cascade of signaling events that disrupt normal hair growth, ultimately leading to the miniaturization of affected hair follicles [5,6]. Despite these insights, the precise mechanisms underlying AGA remain incompletely elucidated.

HFDPCs are a mesenchymal type of cell that connects to capillaries under the hair follicle [7,8]. They have also been found to possess stem cell capabilities and are expected to play an important role in preventing hair loss and promoting hair growth. Cell division and migration around the dermal papilla are closely related to hair growth. During the growth phase, new hair is created from the dermal papilla. Cells are activated by various cytokines and hormones, resulting in cell migration to the dermal papilla and affecting hair growth [7-9].
Numerous studies have indicated a correlation between the expression levels of prostaglandins (PGs) and hair growth [10][11][12].Prostanoids, such as PGs and thromboxane A2 (TXA2), constitute a lipid-derived group of autacoids that influence numerous physiological systems and pathological conditions [13,14].They are produced through the sequential metabolism of arachidonic acid by cyclooxygenase, resulting in the formation of PGH 2 , which is subsequently transformed into Prostaglandin D2 (PGD 2 ), prostaglandin I2 (PGI 2 ), PGE 2 , and prostaglandin F2 alpha (PGF 2α ) and TXA2 by their respective synthases.These four primary PGs are spatially and temporally expressed within hair follicles, potentially playing a role in hair loss [10][11][12]15]. Many studies suggest that PGE 2 has been highlighted for its significant contribution to tissue regeneration by promoting the activation of adult stem cells across various injured organs [16][17][18][19][20].In addition, PGE 2 was reported to mitigate radiation-induced hair loss in mice [21].Recent research indicated the involvement of PGF 2α in regulating hair growth.Notably, the PGF 2α analog, bimatoprost, is approved by the Food and Drug Administration (FDA) in the US, is commonly employed to promote the growth of human eyelashes, and has exhibited effectiveness in cultured human hair follicles [22]. Thus, we found DPP, an inhibitor against 15-PGDH, and examined whether DPP ameliorated DHT-damaged HFDPCs and was a potential therapeutic agent for improving hair growth. The Binding Modeling of the Interaction between DPP and 15-PGDH We originally identified 15-PGDH inhibitor hits by screening a selected set of compounds at Korea Chemical Bank (http://www.chembank.org(accessed on 7 July 2021)) and then discovered DPP by optimizing the lead structure.This was guided by protein-ligand docking with GNINA, a molecular docking software equipped with built-in capabilities to score and refine ligands, leveraging convolutional neural networks.The predicted binding mode of the 15-PGDH and DPP complex is depicted in Figure 1.The binding is predominantly driven by hydrogen-bonding interactions with Ser138 and Tyr151, with additional contribution from the pi-pi interaction involving Phe185.Hydrophobic interactions with Tyr217 also play a role in enhancing the overall binding affinity. DPP Enhanced the Migration of DHT-Damaged HFDPCs We performed an MTT analysis to examine the potential cytotoxicity of DPP on HFDPCs.As shown in Figure 2, DPP (0.1, 1, 5 µM) showed no cytotoxicity in HFDPCs (Figure 2).In addition, DPP increased cell viability compared to the control group, implying that DPP may promote the cell proliferation of HFDPCs. To investigate the wound healing efficacy of DPP in DHT-damaged HFDPCs, a wound healing assay was performed.HFDPCs were scratched in the center of cell culture dishes and treated with 2 µM DHT, DPP (0.1, 1, 5 µM), and 1 µM minoxidil (MIX) as a positive control, respectively.After 24 h, cell growth inhibition was observed in the DHT-treated group, whereas the increase in cell migration significantly occurred in the DHT-treated group added with MIX.Interestingly, we found that DPP significantly enhanced the wound healing efficiency of DHT-damaged HFDPCs in a concentration-dependent manner (Figure 3). 
DPP Increased the Expression Level of Alkaline Phosphatase in DHT-Damaged HFDPCs Alkaline phosphatase (ALP) is essential for cellular physiological processes and tissue regeneration.Its activity is mainly observed in cells with elevated metabolic rates or in tissues undergoing remodeling [28].Hair growth is significantly linked to alkaline phosphatase activity [29].This enzyme is crucial for initiating the transition of hair follicles from the telogen to the anagen phase, thus promoting the hair follicle cycle [30].As expected, treatment with 1 µM MIX enhanced the expression level of ALP in DHT-damaged HFDPCs.We observed that 5 µM DPP treatment significantly increased the expression level of ALP in DHT-damaged HFDPCs compared to the only DHT-treated group (Figure 4). DPP Reduced the Reactive Oxygen Level in DHT-Damaged HFDPCs Depending on internal or external stimuli like ultraviolet radiation and particulate matter, reactive oxygen species (ROS) are chemical compounds generated within cells, and their excessive accumulation leads to cellular damage and various diseases [31,32].Exposure of HFDPCs to ROS degrades proteins and DNA in hair follicles, triggering inflammation of adjacent tissues.These mechanisms can disrupt hair growth and ultimately result in hair loss [33,34].A previous report revealed that DHT caused the generation of intracellular ROS in HFDPCs [35].To measure the levels of ROS within HFDPCs, the DCF-DA assay is performed.As expected, we also found that the ROS level was increased in the only DHT-treated group, whereas the ROS level was lower in the DHT-treated group added with MIX.We also observed that 5 µM DPP significantly downregulated ROS levels generated in DHT-damaged HFDPCs, similar to the control group, meaning that DPP has antioxidant properties (Figure 5). DPP Recovered the Mitochondrial Potential in DHT-Damaged HFDPCs Mitochondria play a pivotal role in cellular energy production, and dysfunction in mitochondrial function is linked to cellular impairment [36][37][38].The JC-1 assay assesses changes in mitochondrial membrane potential, offering an indirect measurement of mitochondrial function [39].The accumulation of JC-1 dye within mitochondria, dependent on membrane potential, emits green fluorescence around 529 nm for its monomeric state, transitioning to red fluorescence at 590 nm due to the formation of red fluorescent J-aggregates.While a damaged membrane potential is indicated by green fluorescence, a healthy mitochondrial membrane potential is indicated by red fluorescence.We performed a JC-1 assay to evaluate changes in the mitochondrial membrane potential in DHT-damaged HFDPCs treated with DPP.As shown in Figure 6, DHT exhibited green fluorescence compared to the control group.Interestingly, treatment with 5 µM DPP showed red fluorescence in DHT-damaged HFDPCs, implying that DPP restored mitochondrial potential in DHT-damaged HFDPCs. 
DPP Decreased the mRNA Expression Level of Dickkopf-Related Protein 1 (DKK-1), and Interleukin 6 (IL-6) in DHT-Damaged HFDPCs To verify the expression of IL-6 and DKK-1 in DHT-damaged HFDPCs, we conducted realtime qRT-PCR analysis.DKK-1 is an antagonist of the Wnt signaling pathway, which is vital for hair growth.Downregulation of DKK-1 expression promotes hair survival and proliferation, whereas its increased expression is strongly correlated with hair growth impairment and the initiation of apoptosis [40,41].Inflammation induced by IL-6 affects hair growth.Decreasing IL-6 expression can alleviate the effects of stress and inflammation on hair follicles, consequently promoting hair growth [42].As expected, the mRNA expression levels of DKK-1 and IL-6 were upregulated in the DHT-treated group.We demonstrated that DPP significantly reduced the expression levels of DKK-1 and IL-6 in the DHT-damaged HDFPCs (Figure 7).Therefore, we assumed that the DPP increased the expression of factors beneficial for hair growth, promoted Wnt signaling, and exhibited anti-inflammatory effects. DPP Increased the Phosphorylation Levels of ERK, AKT β-Catenin, and GSK-3β in DHT-Damaged HFDPCs The ERK and AKT signaling pathways are key regulators of cell proliferation [43,44].The ERK signaling pathway plays a crucial role in cell proliferation, whereas AKT is indispensable for transmitting survival signals [44,45].The Wnt signaling pathway is essential for hair formation and the hair cycle [46,47].GSK-3β serves as a crucial kinase for regulating catenin stability.β-catenin contributes to cell cycle regulation and activation of genes involved in hair formation, playing a crucial role in ensuring the healthy growth and maintenance of hair [46][47][48]. To explore the underlying mechanisms of DPP, the effects on phosphorylation ERK, AKT, GSK-3β, and β-catenin were evaluated using Western blot analysis.As expected, the phosphorylation levels of p-ERK, p-AKT, GSK-3β, and the expression level of β-catenin were reduced by DHT-damaged HFDPCs.However, DPP treatment enhanced the phosphorylation levels of p-ERK, p-AKT, GSK-3β, and the expression level of β-catenin in DHT-damaged HFDPCs.Especially, DPP exhibited a significant reduction at a concentration of 5 µM (Figure 8).We suggested that DPP promoted cell proliferation through the activation of AKT/ERK /Wnt signaling pathways in DHT-damaged HFDPCs.HFDPCs possess self-renewal properties like stem cells, enabling three-dimensional culture [49].A three-dimensional spheroid is utilized due to the three-dimensional interaction and tissue formation capability of cells [50,51].This model allows us to understand complex cell-cell interactions compared to 2D cell cultures.Interestingly, we demonstrated that 5 µM DPP significantly increased the spheroid size in DHT-damaged HFDPCs compared to the only DHT-treated group (Figure 9), exhibiting a correlation with the improved wound healing efficiency shown in Figure 2. 
DPP Stimulated Hair Growth Ex Vivo The hair follicle organ culture model introduced an exceptionally accessible method to evaluate the three-dimensional interactions among epithelial, mesenchymal, and neuroectodermal cells [52].Additionally, this assay system enables the assessment of how various natural and pharmacological agents influence the growth modulation of complex tissues.As a final step, we investigated whether DPP enhanced human hair growth in an ex vivo human hair follicle organ culture model.As expected, MIX exhibited remarkable hair growth compared to the control group.Interestingly, DPP significantly increased hair growth compared with non-treated control (Figure 10).On day 8 of culture, DPP increased the length of the hair shaft by 10.7% (0.5 µM) and 13.1% (1 µM) compared with non-treated control. Discussion As the demand for hair loss treatments targeting hair loss increases among numerous patients, the hair loss solution market is also expanding [53].AGA often causes social and psychological problems for individuals, leading to hair loss due to decreased anagen and increased telogen phases [54].Considering the side effects of finasteride and MIX, commonly used for AGA and approved by the FDA [55], the development of safer drugs is needed to improve hair loss. DHT, a testosterone derivative, mainly triggers this AGA.DHT interacts with the androgen receptor in hair follicle cells to create the AR-DHT complex, which undergoes dimerization and nuclear translocation.This complex, along with AR co-activators, binds to DNA sequences.Moreover, the AR-DHT complex promotes the transcription of TGF-β1/-β2, and DKK-1, resulting in hair follicle miniaturization, shortened anagen phase, and eventual follicle regression leading to alopecia [56]. Prostaglandins play a role in regulating hair growth and differentiation [57].PGE 2 has been shown to stimulate hair growth following depilation.Thus, to enhance the PGE 2 level in HFDPCs, we decided to develop an inhibitor of 15-PGDH using our AIbased new drug development program.Our study investigated that DPP increased hair growth in DHT-damaged HFDPCs.Treatment with DPP stimulated cell migration in DHT-damaged HFDPCs (Figure 2).Additionally, it enhanced the activity of alkaline phosphatase, a biomarker for hair follicle cells (Figure 3).ROS is one of the major factors causing hair loss.We found that DPP exhibited significant suppression of ROS compared to the DHT-damaged group (Figure 4).Dysfunction in mitochondria leads to disruption in energy metabolism balance, resulting in ROS production [58].Interestingly, DPP restored mitochondrial function to levels similar to the control group (Figure 5). 
DKK1 is recognized as a secreted protein that acts as an inhibitor of Wnt signaling, exerting a negative regulatory role [59].Thus, we demonstrated that DPP antagonized the expression level of DKK-1 elevated in DHT-damaged HFDPCs (Figure 6).The activation of the Wnt signaling pathway increases the expression of genes involved in cell cycle progression, thereby enhancing hair follicle formation and regeneration [60].β-catenin forms a complex with GSK3β, APC, CK1, and Axin in the cytoplasm.Upon Wnt ligands activation (Wnt ON state), the ligands bind to the receptors, leading to β-catenin translocation into the nucleus and inducing the expression of genes related to hair proliferation.We verified that treatment with DPP enhanced the expression level of β-catenin.DPP promoted cell growth by inhibiting β-catenin degradation through ERK and AKT and by facilitating its nuclear translocation, implying that DPP could effectively prevent hair loss (Figure 8). The 3D spheroid model generally represents the cell-cell interactions and signaling pathways in dermal papilla cells, which typically exhibit stemness characteristics [61].This model is instrumental in mimicking realistic conditions of cellular interactions, rendering it a crucial tool for investigating hair follicle formation and regeneration.Interestingly, we revealed that DPP exhibited a significant enlargement in spheroid size, implying that DPP can potentially modulate growth in a tissue-mimicking environment (Figure 9).We finally confirmed that DPP increased hair growth in ex vivo hair follicle organ culture (Figure 10). In conclusion, we suggest that DPP can enhance hair growth and serve as a promising component for hair loss improvement by activating ERK/AKT/Wnt signaling pathways. Modeling of DPP We utilized GNINA (version 1.1) for conducting protein-ligand docking, incorporating support for scoring and optimizing ligands through convolutional neural networks (CNN) [62].The X-ray structure of the 15-PGDH and ligand complex (PDB 8CWL) was obtained from the RCSB Protein Data Bank.The protein was preprocessed using ProDy to eliminate non-protein components like water, cofactors, and ligands [63].The 3D structure of DPP was formatted in SD using OpenBabel [64].Subsequently, docking simulation was conducted with GNINA using the prepared protein and DPP.The binding site was specified to be in the vicinity of the reference ligand, which was extracted from the original PDB.Default settings were applied for all other parameters.The resulting binding poses were ranked based on CNNscore, which varies from 0 (poorest) to 1 (optimal).Finally, we selected the most favorable binding mode by considering the CNNscore and manually examining the binding interactions. Cell Culture HFDPCs were obtained from PromoCell (Heidelberg, Germany).The cells were cultured in follicle dermal papilla cell growth medium with a supplement pack and 1% penicillin-streptomycin at 37 • C in a 5% CO 2 incubator.Every 3 days, 80-90% confluence, cells were detached using a detach kit and transferred to a new 75 mm flask. 
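Returning to the docking step described at the start of this methods section, a minimal sketch of how the GNINA run might be scripted is shown below. The file names are placeholders, and the flag names follow the smina-style command-line interface that GNINA inherits, so they should be checked against the locally installed version.

```python
import subprocess

# Hypothetical reproduction of the docking step described above.
# File names are placeholders for the prepared 15-PGDH receptor (from PDB 8CWL),
# the DPP ligand in SD format, and the reference ligand used to define the box.
subprocess.run(
    [
        "gnina",
        "-r", "15pgdh_receptor.pdb",                 # protein cleaned of waters/cofactors
        "-l", "dpp.sdf",                             # DPP 3D structure exported with OpenBabel
        "--autobox_ligand", "reference_ligand.sdf",  # box around the co-crystallized ligand
        "--exhaustiveness", "8",
        "-o", "dpp_docked.sdf",
    ],
    check=True,
)
# Poses in dpp_docked.sdf can then be ranked by the CNNscore field (0 = poorest,
# 1 = optimal) and inspected manually, as described in the text.
```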
Cell Viability Assay Cell viability was assessed by using the EZ-cytox cell base assay kit.HFDPCs were seeded at 2 × 10 4 cells/well density in a 96-well plate.After incubation at 37 • C in a 5% CO 2 incubator for 24 h, HFDPCs were treated with DPP (0.1, 1, and 5 µM) for 24 h.After incubation, the culture medium was aspirated from each well.Then, each well was treated with MTT labeling reagent 10 µL with follicle dermal papilla cell growth medium 100 µL was treated in each well in the dark for 1 h at a 37 • C incubator.Absorbance was measured at 450 nm. Wound Healing Assay HFDPCs were seeded at 4 × 10 4 cell density in a 6-well dish.Subsequently, incubated at 37 • C in a 5% CO 2 incubator for 24 h, the density of cells reached 80%.Then, a pipette tip (200 µL) was used to create a scratch in a horizontal line at the bottom of the dish, crossing through the confluent cells in the middle.After removing the culture medium, the new culture medium was added and treated with 2 µM DHT, DPP (0.1, 1, and 5 µM), and 1 µM MIX, respectively, and incubated for 24 h at 37 • C in a CO 2 incubator.To compare images 0 h and 24 h, all dishes are photographed using a microscope. Alkaline Phosphatase Staining (ALP) Assay HFDPCs were plated in a 24-well plate at a density of 3.125 × 10 4 cells/well and incubated for 24 h.Subsequently, the cells were treated with 1 µM DHT, 5 µM DPP, and 1 µM MIX, respectively, for 24 h at 37 • C in a CO 2 incubator.After treatment, the cells were washed with PBS-T and fixed with a fixing solution for 2 min.The cells were stained with an alkaline phosphatase staining solution for 24 h.And then it was washed with DPBS.The purple-stained colonies were counted and compared to the colorless colonies using a light Nikon microscope (Tokyo, Japan). DCF-DA ROS Assay HFDPCs were seeded in a confocal dish at 2.5 × 10 4 cells/mL density and incubated for 24 h.The cells were treated with 1 µM DHT, 5 µM DPP, and 1 µM MIX, respectively, for 24 h at 37 • C in a CO 2 incubator.Subsequently, the cells were washed with DPBS 2 times and stained with 10 µM 2,7-Dichlorofluoroscin diacetate (DCF-DA) for 15 min.Then, the cells were washed with DPBS and added DPBS 200 µL for the measure.The fluorescence was measured with a fluorescence microscope purchased from Nikon (Tokyo, Japan). Measurement of Mitochondrial Membrane Potential HFDPCs were seeded in a confocal dish at 2.5 × 10 4 cells/mL density and incubated for 24 h.Then, the cells were treated with DHT 1 µM, 5 µM DPP, and MIX 1 µM, respectively, for 24 h at 37 • C in a CO 2 incubator.Subsequently, the cells were washed with DPBS 2 times and were stained with 5 µM JC solution for 15 min.The cells were washed with DPBS and added DPBS 200 µL for the measure.The fluorescence was measured with a Nikon fluorescence microscope (Tokyo, Japan). 
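The viability and wound healing readouts described above are typically reduced to simple percentages; a small sketch of those calculations is given below, using purely illustrative numbers rather than data from the study.

```python
import numpy as np

def viability_percent(abs_treated, abs_control, abs_blank=0.0):
    """Viability relative to untreated cells from 450 nm absorbance readings."""
    return 100.0 * (np.asarray(abs_treated) - abs_blank) / (abs_control - abs_blank)

def wound_closure_percent(area_0h, area_24h):
    """Percentage of the initial scratch area closed after 24 h of migration."""
    return 100.0 * (area_0h - area_24h) / area_0h

# Illustrative values only (not data from the study)
print(viability_percent([0.82, 0.85, 0.88], abs_control=0.80))
print(wound_closure_percent(area_0h=1.00, area_24h=0.35))
```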
Quantitative Real-Time Polymerase Chain Reaction HFDPCs were plated in a 6-well dish at a density of 7.0 × 10 4 cells/mL.Then, the cells were cultured for 24 h at 37 • C in a CO 2 incubator.Subsequently, the cells were treated with 2 µM DHT, DPP (0.1, 1, and 5 µM), and 1µM MIX after 24 h of incubation.Following this, the cells underwent two washes with DPBS and were utilized for RNA extraction.RNA was purified using TRIzol reagent, and the purified total RNA (4 µg) was synthesized into cDNA using the RevertAid First Strand cDNA synthesis kit.Subsequently, the assays were conducted utilizing TaqMan Universal Master Mix II, with UNG, for quantitative real-time polymerase chain reaction (qRT-PCR).The reaction mixture (total volume 20 µL) comprised 6 µL of DEPC water, 10 µL of TaqMan Universal Master fast Mix II, 3 µL of cDNA, and 1 µL of Assay primers. Western Blot Analysis HFDPCs were seeded at density 5 × 10 4 cells/well in a 100 mm dish and incubated for 24 h at 37 • C in a CO 2 incubator.The cells were treated with DHT 2 µM, DPP (0.1, 1, and 5 µM), and 1 µM MIX for 24 h.The cells were washed two times with DPBS and lysed using RIPA buffer.Then, the cell lysates were sonicated for 10 min.Total protein concentration was measured using a BCA assay with bovine serum albumin as standard.The protein of 30 µg/µL for phospho-AKT, AKT, phospho-ERK, ERK, phospho-GSK-3β, and β-catenin were separated on a sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto polyvinylidene difluoride (PVDF) membranes.The membranes were blocked by 3% non-fat dry milk for 1 h.Primary antibodies against p-AKT, AKT, p-ERK, and ERK were diluted at 1:1000 in a blocking solution.Primary antibodies against p-GSK-3β and β-catenin were diluted at 1:500 in a blocking solution.These primary antibodies were incubated for 1 h at room temperature and then washed 3 times with TBS-T.Horseradish peroxidase-conjugated secondary antibodies were used at a 1:5000 concentration for 1 h at room temperature and then washed 3 times with TBS-T.Finally, the ECL reagent detected the immunoreactive bands, and images were visualized with Invitrogen iBright 1500 (Waltham, MA, USA).The results were analyzed by Fiji Image J (Win 64-bit) software. Three-Dimensional Spheroid Culture of HDPCs HFDPCs were seeded in 96-well clear round-bottom ultra-low attachment multiple microplates at a density of 5 × 10 4 cells/well and incubated for 24 h.Subsequently, the cells were treated with 1 µM DHT, DPP (0.1, 1, and 5 µM), and 1 µM MIX, respectively, at 37 • C in a CO 2 incubator.The diameter of the spheroids was measured using a light Nikon microscope (Nikon, Japan). Human Hair Follicle Organ Culture Human scalp skin was obtained from nonbalding areas of patients undergoing hair transplant surgery with written consent and approval by the Institutional Review Board of Dankook University Hospital (DKUH 2021-12-005).Human hair follicles were isolated by microdissection under the microscope.Anagen VI hair follicles were chosen for the study.Each treatment group consisted of 6 hair follicles, and the experiments were repeated 3 times.Isolated hair follicles were maintained in William's E medium supplemented with 10 µg/mL insulin, 10 ng/mL hydrocortisone, 2 mM L-glutamine, and 10 U/mL penicillin, 100 ug/mL streptomycin, and 25 µg/mL amphotericin B. All cultures were incubated at 37 • C in an atmosphere of 5% CO 2 and 95% air. 
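The relative mRNA expression reported for DKK-1 and IL-6 would commonly be derived from TaqMan Ct values with the 2^-ΔΔCt method; the authors do not spell out the calculation, so the following is only a generic sketch, and the reference gene and Ct values are hypothetical.

```python
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """2^-ddCt relative quantification of a target gene versus a reference gene,
    normalized to the control (untreated) condition."""
    d_ct_sample = np.asarray(ct_target) - np.asarray(ct_reference)
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values only; the study's actual Ct data are not reported in the text.
print(relative_expression(ct_target=[26.1, 25.8], ct_reference=[18.0, 18.1],
                          ct_target_ctrl=27.5, ct_reference_ctrl=18.0))
```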
Statistical Analysis

All results are expressed as the mean ± standard deviation (SD) of three independent experiments. All statistical analyses were performed using GraphPad Prism 5.0 software (San Diego, CA, USA) through a one-way analysis of variance (ANOVA) followed by Tukey's Multiple Comparison Test. Statistical significance between the groups was accepted for p values < 0.05.

Figure 1. Predicted binding mode of the 15-PGDH and DPP complex. The binding is primarily driven by hydrogen-bond interactions with Ser138 and Tyr151, supplemented by a pi-pi interaction involving Phe185. Hydrophobic interactions with Tyr217 also contribute to enhancing the overall binding affinity. The yellow color indicates key residues in the binding site of 15-PGDH.

Figure 2. The cell viability of DPP in HFDPCs. The MTT assay of DPP. Cell viability was calculated as the percentage (%) of viable cells relative to untreated cells. All data are expressed as mean ± SD (n = 4). * p < 0.05 compared with the control group.

Figure 3. The wound healing effect of DPP in HFDPCs. A wound healing assay was performed on HFDPCs damaged by 1 µM DHT. DPP (0.1, 1, and 5 µM) and 1 µM MIX were applied for 24 h, respectively. The wound healing images were taken under a phase contrast microscope. This is a representative image of three independent experiments.

Figure 4. The ALP assay of HFDPCs stimulated by DHT. The ALP assay was conducted on HFDPCs damaged by 1 µM DHT. 5 µM DPP and 1 µM MIX were applied for 24 h, respectively. (A) The ALP images were taken under a phase contrast microscope. This is a representative image of three independent experiments. (B) Quantification of the images using ImageJ. All data are expressed as mean ± SD (n = 3). * p < 0.05, ** p < 0.01 compared with the DHT-treated group.

Figure 5. Effects of DPP on ROS levels in DHT-damaged HFDPCs. The DCF-DA ROS assay was conducted on HFDPCs damaged by 1 µM DHT. 5 µM DPP and 1 µM MIX were pretreated for 24 h, followed by DHT damage. (A) DCF-DA images were captured using a fluorescence microscope, and the intensity of the green fluorescence represents the ROS concentration. This is a representative image of three independent experiments. (B) Quantification of the images using ImageJ. All data are expressed as mean ± SD (n = 3). *** p < 0.001 compared with the DHT-treated group.

Figure 6. Effects of DPP on mitochondrial membrane potential in DHT-damaged HFDPCs. The JC-1 assay was conducted on HFDPCs damaged by 1 µM DHT. 5 µM DPP and 1 µM MIX were applied for 24 h. (A) JC-1 images were captured using a fluorescence microscope. Red puncta, hyperpolarized mitochondria; green puncta, depolarized mitochondria. This is a representative image of three independent experiments. (B) Quantification of the images using ImageJ. All data are expressed as mean ± SD (n = 3). ** p < 0.01 compared with the DHT-treated group.

Figure 9. Effects of DPP on the formation of 3D spheroids in DHT-damaged HFDPCs. Treatments with 1 µM DHT, DPP (0.1, 1, 5 µM), and 1 µM MIX were administered at 2-day intervals, and images were taken after 21 days of culture. (A) Each image of a 3D spheroid was taken under a phase contrast microscope. This is a representative image of three independent experiments. (B) Quantification of the images using ImageJ. All data are expressed as mean ± SD (n = 3). * p < 0.05 compared with the DHT-treated group.
Figure 10. Effects of DPP on hair growth in human hair follicle organ culture. To evaluate the effect of DPP, anagen human hair follicles were prepared and cultured for 8 days. DPP was applied at concentrations of 0.5 and 1 µM. (A) The cultured hair follicles were photo-documented on days 4, 6, and 8. (B) The hair shaft growth was analyzed. MIX was used as a positive control. The data represent the mean ± SD of eighteen follicles. p-values were obtained using the Mann-Whitney U test. Significantly different compared with the control group (* p < 0.05, ** p < 0.01).
Empowering tomorrow's leaders: the impact of the 15th Network of Young Researchers in Andrology (NYRA) meeting on male reproductive health and interdisciplinary collaboration

ABSTRACT

The 15th Network of Young Researchers in Andrology (NYRA) meeting, held at the Palace de Caux, Switzerland, served as a valuable platform to disseminate cutting-edge research and facilitate interactions among early-career researchers and trainees in andrology from around the world. Preceding the 22nd European Testis Workshop, the 2-day event brought together participants from a variety of countries to discuss a range of topics pertaining to men's reproductive health and biology. Specific focuses included piRNAs in mammalian reproduction, biomolecules enhancing sperm physiology, advances in in vitro spermatogenesis, reproductive strategies across species, and career development. A dedicated 'scientific speed-dating' social event also stood out, encouraging cross-disciplinary collaborations and strengthening ties within the scientific community. The high participation rate of the meeting highlighted its value in connecting the andrology community. Finally, the announcement of NYRA's merger with the European Academy of Andrology (EAA) marked a pivotal moment, enabling NYRA to support young researchers while collaborating with the EAA to advance andrology research. The 15th NYRA meeting played a crucial role in enhancing knowledge dissemination and andrology research, empowering young researchers, and addressing key challenges in male infertility.

Introduction

The prevalence of male infertility, affecting approximately 9-16% of men, has prompted a need for improved diagnostic methods and interventions (Barratt et al., 2017). The conventional method of diagnosis involves examining semen parameters (WHO, 2023); sperm count has decreased by over 50% in the last 40 years, with a continuous decline of 2.6% per year among Western men (Barratt et al., 2017; Levine et al., 2023). This decline not only impacts fertilization rates but also leads to increased disease burden and healthcare costs (Hubens et al., 2018; Kasman et al., 2020; Levine et al., 2017). The root causes of male infertility are complex and diverse, including (epi)genetics, psychology, lifestyle, pathogens, and environmental insults (such as xenobiotics, oxidative stress, heat stress and chemotherapeutic agents), all of which require further research to better understand their contributions (Aitken, 2022). However, for a significant number of male patients, the exact etiology remains unknown, as it can be assigned to only 30-50% of cases. Indeed, the poor understanding of male reproductive health hinders progress in the research, diagnosis, and treatment of male infertility (Barratt et al., 2018; Duffy et al., 2021; Gumerova et al., 2023).
The integration of cutting-edge omics technologies, alongside the continuous emergence of novel analytical and visualization tools coupled with machine learning, promises to revolutionize our understanding of male infertility. Together, these approaches will aid in elucidating key infertility biomarkers as well as the underlying intricacies of cellular heterogeneity within the male reproductive system at an unprecedented level, providing a faster and more comprehensive understanding. Promising candidates are emerging as novel biomarkers associated with male infertility as well as with recurrent pregnancy loss, often overlooked by traditional analyses. Together with novel cell culture methodologies (Richer et al., 2020), testicular organoids will allow for mechanistic studies to address and understand (sub)cellular and (epi)genetic biomarkers of testis development and functions (i.e. androgenesis and spermatogenesis) (Alves-Lopes et al., 2017). These techniques can be used in scaled and controlled experimental settings, possibly leading to effective solutions that prevent or treat disease and/or improve male reproductive diagnosis and care.

The widespread societal impact and the slow medical and regulatory advancements underline the importance of prioritizing the challenge of male infertility. Therefore, initiatives like the Network for Young Researchers in Andrology (NYRA) have been established. NYRA aims to play a pivotal role in engaging younger stakeholders, raising awareness, and advancing research in andrology and reproductive biology. Collaborative efforts within research consortia are crucial for harnessing the full potential of emerging technologies in advancing male reproductive research.

The 15th NYRA meeting was held preceding the 22nd European Testis Workshop (ETW), which was co-organized by COST Action Andronet (CA20119) (Oliva et al., 2022). In total, 65 attendees representing 15 different countries (Algeria, Australia, Belgium, Croatia, Czech Republic, Denmark, Finland, France, Germany, the Netherlands, Poland, Spain, Serbia, Türkiye, and the UK) joined the meeting, mostly comprised of early-career researchers and trainees (Fig. 1A-C). The scientific program featured five prominent plenary speakers who covered diverse topics in andrology, ranging from molecular aspects of male fertility to in vitro spermatogenesis and sperm physiology. In addition, the board organized a career development session focused on entrepreneurship. By disseminating cutting-edge research and facilitating interactions among young researchers, the 15th NYRA meeting facilitated knowledge exchange and networking opportunities, while strengthening ties within the scientific community. This was exemplified by the scientific speed-dating event, where participants, including the invited senior speakers and members of the NYRA board, engaged in dynamic discussions about their research and careers. Finally, the merger between NYRA and the EAA was announced, marking a significant milestone for the andrology field and enhancing opportunities for young trainees through collaborative efforts.

At the time of the 15th NYRA Meeting, the NYRA Board was formed by Dr Alberto de la Iglesia (President), Dr Dorte Egerberg (Secretary), Daniel Marcu (Acting Treasurer), Guillaume Richer, Dr Brendan Houston, Emily Delgouffe, Gülizar Saritas, Omar Ammar and the local organizers Lydia Wehrli and Dr Cyril Djari (Fig. 1D).
Scientific highlights of the 15th NYRA Meeting

Associate Professor Pei-Hsuan Wu (Genetic Medicine and Development, University of Geneva, Switzerland) delivered the inaugural plenary lecture entitled 'PIWI-interacting RNAs in mammalian reproduction' (Fig. 2A). Professor Wu provided us with an insight into a type of non-coding RNAs known as PIWI-interacting (pi)RNAs and their corresponding precursors in mice. The role of specific piRNA clusters was discussed through the elimination of mouse piRNAs using CRISPR/Cas technologies. The functions of the pi6 cluster, particularly beyond gamete production, were revealed to extend into post-testicular sperm maturation.

Professor Christophe Arnoult (Institute for Advanced Biosciences, University of Grenoble Alpes, France) delivered a lecture entitled 'In search of (bio)molecules that improve sperm physiology' (Fig. 2B). Cryopreservation may have adverse effects on sperm competence, potentially triggering capacitation, provoking acrosome reactions, diminishing sperm motility, and thereby reducing the numbers of fertilization-competent spermatozoa. Considering the limited efficacy observed in assisted reproductive technologies (ART) and the cryopreservation-induced impacts on sperm physiology, Professor Arnoult presented a series of promising results centered on elucidating sperm functionality. These studies identified specific sperm proteins of interest as prospective targets for biomolecular interventions. Furthermore, he delved into the potential efficacy of directing therapeutic strategies toward these sperm-specific proteins to enhance fertilization rates within ART.

Professor Jan-Bernd Stukenborg (NORDFERTIL Research Lab Stockholm, Childhood Cancer Research Unit, Department of Women's and Children's Health, Karolinska Institutet, Sweden) shifted the focus from sperm physiology to the production of testicular organoids, mini-testes in Petri dishes (Fig. 2C). During his lecture, titled 'In vitro spermatogenesis: from mission impossible to mission incomplete', he discussed the advancements made in the development of testicular organoids, which aim to create a physiological milieu capable of supporting in vitro spermatogenesis. Notably, the important role of Sertoli cells in supporting this complex process was highlighted.

The final plenary lecture, delivered by Professor Brett Nixon (Priority Research Centre for Reproductive Science, School of Environmental and Life Sciences, University of Newcastle, Australia) under the title 'What makes the best swimmers?' (Fig. 2D), provided insights into reproductive strategies and fertility across various species, with a focus on epididymal sperm maturation. Reptiles, birds and monotremes such as the platypus display distinct differences in sperm structure and maturation compared to eutherians and humans. For example, female snakes are able to store functional sperm for up to 5 years before the sperm are used for fertilization resulting in pregnancy. Moreover, the lecture showcased adaptations of platypus sperm, which form bundles inside the female reproductive tract to bolster their motility and aid in ascension of the oviduct.

Career development session

As part of the traditional activities organized by NYRA, the board arranged a workshop focused on showcasing the process of launching a research-oriented startup company. For this, we welcomed Dr Thomas Darde, former NYRA board member and the current founder of SciLicium, based in Rouen, France (Fig. 2E).
SciLicium focuses on redefining the field of bioinformatics by connecting individuals skilled in data analysis with those possessing expertise in data interpretation. They offer a personalized approach to assist their customers in optimizing their abilities in analyzing multi-omics data, providing the computational workflows and tools required to address the challenges of modern scientific research. Additionally, SciLicium actively supports and encourages its clients to develop strong working relationships and partnerships with experts in their respective fields of study. Dr Darde provided valuable insights into the intricacies of initiating a company following the completion of doctoral studies. During his presentation, he recounted his personal journey shifting from academia to founding SciLicium. He delved into the challenges that come with starting a company, highlighting the diverse roles he had to fulfill along the way, including those of bioinformatician, accountant, and sales and marketing manager, among others. To excel in such a multifaceted role, he emphasized the importance of being present and adaptable, with a constant focus on error reduction. Moreover, Dr Darde elaborated on the complexity of being a leader and the responsibilities this entails within a team of collaborators, underscoring the utilization of skill sets acquired throughout his academic career and during the pursuit of his PhD studies. Finally, he stressed the necessity of dataset sharing through collaborative efforts. The ReproGenomics Viewer exemplifies this collaborative approach as a multi- and cross-species genomic database designed for the visualization, mining, and comparison of published omics data sets, specifically tailored to serve the reproductive science community.

Scientific speed-dating

One of NYRA's popular and highly anticipated networking activities is the 'scientific speed-dating' event (Fig. 2F). It was an incredibly rewarding experience that allowed NYRA members to connect on a meaningful level. The event was organized in a dynamic format: attendees were asked to randomly disperse across tables in the courtyard, and they had five minutes to provide a brief overview of their research and current career stage to a fellow participant before moving on to meet someone new. In total, each individual had the opportunity to engage with 10 different people during this event, breaking down barriers and fostering potential collaborations and discussions.

NYRA history and announcement of the merger with the EAA

Dr Alberto de la Iglesia, NYRA president, delivered an insightful presentation that traced the history of NYRA, from its foundation in 2006 during the 14th ETW up to the present day (Fig. 3A), highlighting the valuable contributions and achievements of previous NYRA board members. Notably, he announced NYRA's latest milestone, the merger between NYRA and the European Academy of Andrology (EAA), with NYRA becoming the 'young arm of the EAA' while preserving its independent identity (Fig. 3B). The opportunity to share this merger announcement with the audience was a moment of immense satisfaction for all NYRA board members. As highlighted by Dr de la Iglesia, now also ex officio member of the EAA Executive Council, this development is a significant breakthrough for the andrology field, aiming to enhance opportunities for early-career researchers, including trainees, through collaborative efforts.
Fig. 1. Participants of the 15th NYRA meeting. 15th NYRA delegates distributed by (A) country and (B) career stage. (C) Early-career scientists and trainees in andrology from around the world. (D) NYRA board members and local organizers.
A New Era of Food Transparency Powered by Blockchain

We talk about the food supply chain, but in reality it isn't a chain at all. The food system today—that is, the way we get our food from farm to table—has evolved into a complex network that is interdependent on many entities. And while there is no question that today's food system provides consumers with a more diverse, convenient, and economical source of food, it also presents new challenges. For example, in today's food system, the output from one ingredient producer could end up in thousands of products on a grocery store shelf. We saw evidence of this during the peanut butter Salmonella outbreak in 2008 and the E. coli illnesses caused by contaminated flour in 2016. Today there is no widely adopted industry standard for how each segment of the food system (farmer, processor, distributor, retailer, etc.) tracks and records data for food traceability purposes. Many simply record their data on paper, and while some are using digital methods, these methods do not enable communication with other parties in the food system. Thus, the system is limited to traceability capabilities that are often described as "one step forward and one step back." Piecing together traceability data by sifting through hundreds or even thousands of documents during a foodborne outbreak can be slow and complicated, and it all too often is not an effective way of identifying and informing action through lessons learned to prevent future outbreaks. These inefficiencies and complexities are one of many reasons we at Walmart were looking for a technological solution to help us achieve enhanced food traceability and transparency. An outbreak of E. coli O157:H7 in the United States back in 2006 that was caused by contaminated spinach was an example, and a warning, of the need for better traceability capabilities within the food system. As I am writing this, another multistate outbreak of E. coli involving leafy greens, this time romaine lettuce, is being investigated, and it is clear that not much has changed in regard to food traceability since 2006. The FDA stated in its update on the recent romaine outbreak that "FDA scientists and investigators are working with federal and state partners and companies as quickly as possible to collect, review and analyze hundreds of records in an attempt to trace back the source of the contaminated romaine lettuce." 1 In the same update, the FDA claimed that people fell ill beginning on March 13. However, the CDC did not issue the first public advisory informing consumers and retailers not to consume or sell romaine lettuce from the Yuma, Arizona, region until April 13. In 2006, it took the FDA approximately two weeks to trace the issue back to the source. As of this writing, it has been over two months since the first CDC advisory but a definitive source or sources of the illness has not yet been identified. The current E. coli outbreak suggests that, in the 12 years since the spinach outbreak, our food system's traceability capabilities have not significantly improved or kept up with the digital modernization that has happened in the world around us. These statements are not a criticism of the good work our nation's health officials do on a daily basis; they are, however, a critique of the food system's ability to track
and trace food, and an urgent call to action.

ABOUT THE AUTHOR
Frank Yiannas is Vice President of Food Safety for Walmart. In that capacity he oversees all food safety, as well as other public health functions, for the world's largest food retailer, which serves over 240 million customers around the world on a weekly basis. Yiannas is the 2007 recipient of the NSF International Lifetime Achievement Award for Leadership in Food Safety and the 2015 Industry Professional Food Safety Hero Award from STOP Foodborne Illness. He is also a past president of the International Association for Food Protection, a past vice-chair of the Global Food Safety Initiative, and an adjunct professor in the Food Safety Program at Michigan State University. Yiannas is the author of the books Food Safety Culture: Creating a Behavior-based Food Safety Management System and Food Safety = Behavior: 30 Proven Techniques to Enhance Employee Compliance, both published by Springer Scientific.

Further illustrating the need for improvement is the fact that more and more food product recalls in recent years have been caused by a single ingredient. These ingredient-driven recalls can last a month or more, due to inefficient and disparate traceability systems that do not have set standards and do not communicate with each other. The 2009 outbreak, which was caused by peanut paste produced by the Peanut Corporation of America (PCA), lasted for months as suppliers slowly became aware that their products contained PCA's peanut paste. In the end, nearly 4,000 food items were recalled. A digital, transparent food traceability system could have identified where PCA's ingredients had been used in much less time. Creating a digitized farm-to-fork industry standard enabled by blockchain would likely enhance, accelerate, and optimize food traceability throughout the entire food supply.

BLOCKCHAIN TECHNOLOGY AND FOOD
Blockchain is a technology that enables the creation of a decentralized, distributed, and trusted digital ledger that can be used to record transactions from multiple entities across a complex network. A record on a blockchain cannot be altered retroactively without the alteration of all subsequent blocks and the consensus of the network. How to enhance food traceability and transparency for our customers is one challenge that Walmart has been working on. Blockchain is often associated with cryptocurrency, but it is being looked at more and more as a solution to food-supply problems that will enhance trust and transparency. Walmart believes that using blockchain could usher in a new era of food traceability, and that it could benefit areas beyond food safety, such as improving sustainability by reducing waste and lowering costs by eliminating food system inefficiencies. Moreover, using blockchain could enable the capture of data beyond mere traceability attributes (where and when), including those that promote greater transparency (How was a food produced? Was it sustainably grown?). Having worked in the food profession for more than 30 years, I have to be candid. I've been pursuing better traceability systems for many years, and when I first heard about blockchain and considered the role it might play in enhancing food traceability, I was a bit skeptical. It was only after I started learning more about blockchain, such as how it digitizes information, and more about its unique features (immutability, consensus, etc.) that I started to change my mind. After we successfully piloted the technology, I moved from being a blockchain skeptic to a blockchain believer.
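To make the immutability idea above concrete, here is a minimal Python sketch of the hash-chaining that underpins a blockchain ledger. It is purely illustrative and is not the production system described in this article, which runs on a permissioned enterprise network with consensus across many parties; the lot names and events are invented.

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    """Hash the block's contents (excluding its own hash) with SHA-256."""
    payload = json.dumps(
        {k: v for k, v in block.items() if k != "hash"}, sort_keys=True
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def append_block(chain: list, data: dict) -> None:
    """Append a new block that commits to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)


def verify(chain: list) -> bool:
    """Recompute every hash and check each block's link to its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


ledger: list = []
append_block(ledger, {"event": "harvested", "lot": "MANGO-001", "site": "farm A"})
append_block(ledger, {"event": "packed", "lot": "MANGO-001", "site": "packing house"})
append_block(ledger, {"event": "received", "lot": "MANGO-001", "site": "store 42"})

print(verify(ledger))   # True: the chain is internally consistent

# Tampering with an earlier record breaks every later link.
ledger[0]["data"]["site"] = "farm B"
print(verify(ledger))   # False: the alteration is detected
```

Because each block commits to the hash of the one before it, quietly editing an old record becomes evident to every participant holding a copy of the chain, which is the property that makes a shared ledger tamper evident.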
While you usually hear about blockchain technologies that are implemented for cryptocurrencies like Bitcoin, the truth is that enterprise-level blockchain networks are starting to emerge that have a wide range of use cases in the private and public sectors. For example, financial institutions are evaluating blockchain to improve the tracking and tracing of real currencies; transportation and logistics industries can use it to improve tracking of containers or packages; and regulatory agencies can use it to improve import control efficiency. Blockchain also can free up capital flows, improve efficiencies, reduce costs, and build trust across a broad range of stakeholders and ecosystems. Why do I believe that it is time for a technology like blockchain to transform the food sector and usher in a new era of transparency? First, I know it can improve our ability to definitively link foodborne outbreaks to their causative food vehicle, which could result in fewer and smaller outbreaks and fewer people harmed. It could allow for more efficient analysis to determine the root cause of an outbreak, which also would inform future prevention efforts. The U.S. Centers for Disease Control and Prevention estimates that 48 million consumers get sick from foodborne illnesses each year. The global estimates by the World Health Organization are even more concerning. Moreover, the economic impact of such outbreaks in the United States alone ranges from $55 billion to $93 billion (Scharf, 2012). The inability to track and trace food efficiently back to the source of the contamination is one main factor contributing to these statistics. Food fraud is another growing concern across the global food industry. From counterfeit olive oil to adulterated milk, it has been estimated that food fraud incidents cost the industry between $10 billion and $15 billion annually (Johnson, 2014).

WHY BLOCKCHAIN?
A blockchain-based ledger is a shared digital ledger used to record the history of transactions, which cannot be altered. In a typical transactional relationship, multiple parties are involved in the transactions along a supply chain, and every party typically has their own version of the truth. This environment is fraught with errors, duplication, and redundancies that create inefficiencies along the supply chain. This is especially true in the food sector, where there are many small to mid-sized enterprises that even today maintain paper-based records. A single shared ledger that is tamper evident alleviates many of these inefficiencies and allows all parties participating in the series of transactions to have a view into one version of the truth. There are elements that are unique to blockchain networks that make the technology a game-changer in terms of promoting greater trust and transparency in food. These elements are:

Decentralized: In a blockchain network, multiple nodes hold a copy of the same data, which eliminates the risk of a single point of failure in the network. This is a key difference between a blockchain network and a centralized repository (or authority) of data.

Immutable: By using cryptographic hashes and encryption, data is written onto the blockchain in a way that cannot be altered without detection. Not only does this increase confidence in the data itself, it also incentivizes all stakeholders responsible for putting their data on the blockchain to ensure the accuracy of that data the first time and every time it is uploaded.
Consensus: To write data onto the blockchain requires consensus from all parties involved in a transaction. This ensures that a single entity does not control the blockchain and also allows for the permissioning of data to meet the business needs of the blockchain participants.

Democratic: The governance of the blockchain can be implemented and enforced in a democratic and transparent manner, whereby a diverse group of stakeholders participating in the blockchain network have an equal voice on issues such as data ownership, rights, data sharing, and protection. In addition, as opposed to a central governing authority benefiting from the insights, all participants in a blockchain system can get smarter together, thereby creating what we refer to as shared value.

One reason for this is that the supply chain is only as secure as its weakest link, and cargo theft is on the rise. One of the reasons unscrupulous suppliers are willing to commit food fraud is because they do not fear being caught, due to the anonymity of how food is produced and where it comes from. Having a digital, real-time ability to monitor and trace food as it flows from farm to store will be a strong deterrent for such fraudulent activities, as it will create a digital footprint that leads back to a fraudster's door. Food-safety regulations are becoming more stringent across the globe. For example, the U.S. Food Safety Modernization Act established additional record-keeping requirements, including a section on enhancing the tracking and tracing of foods. This will inevitably raise the bar on the minimum expectations for food traceability in the coming years. From a sustainability perspective, greater traceability and transparency would likely allow food system participants to optimize supply chains and reduce food waste. The current estimate is that nearly one-third of the food produced globally goes to waste. In the U.S., the amount of food waste each year equals $161.1 billion (EPA). We know we can do better. We know we must do better. By having more targeted recalls, we can both reduce the amount of unaffected food we discard and protect public health more effectively. By having longer shelf-life and providing the consumer with clearer messages about the safety and quality of food, we can reduce post-purchase consumer waste. Ultimately, blockchain-enabled traceability will create greater food transparency, which will lead to greater accountability and incentivize every stakeholder in the food system to do the right thing every time. Greater accountability will in turn encourage stakeholders to take greater responsibility for food safety, which will promote greater trust within the supply chain. Consumers are already demanding this, and it's up to the industry to step up to meet this challenge.

WALMART'S PROOF OF CONCEPTS
In October 2016, Walmart and IBM announced two proof of concepts (POCs) to demonstrate that blockchain technology provides a viable way to trace and authenticate food from farm to store with speed and precision. The POCs focused on two elements of the blockchain solution: traceability and authenticity.

Mangoes and Traceability
Consider a typical mango supply chain, starting with the seedling that takes five to eight years to mature. Once the mango is harvested, it is sorted and containerized before being loaded on a truck and shipped, often across borders. The mango then gets further processed: cleaned, sometimes sliced, and put into a clamshell, before being palletized, put on a Walmart truck, and shipped to a Walmart store.
At the store, our customer will pick up the mango, check out, and take the fruit home to enjoy. Even for a fairly simple food product like a mango, you can see that it's a long and complicated supply chain with many stakeholders involved.

Figure 1. Life of a Mango

I wanted to find out what it would take to identify the grower of a single package of sliced mangoes offered at one of our stores. So, I bought a package of mangoes at a local Walmart store in northwestern Arkansas, and during a meeting with my leadership team I asked them to find out which farm those particular mangoes came from. I told them that I was going to time them. They began by calling our immediate supplier to see if this information was readily available. In today's regulatory environment, the expectation is that stakeholders maintain records for one step up and one step down. This means that each stakeholder in the mango supply chain had to work with the next node in the chain to identify the provenance of my mangoes. It took us 6 days, 18 hours, and 26 minutes to identify the farm that harvested those mangoes! While this was pretty good by industry standards, where an average traceback can take weeks or even months, in today's digital age, where information is available at our fingertips, this was unacceptable for Walmart. One of the key advantages of blockchain that Walmart and IBM wanted to prove was its ability to provide product visibility from farm to store. The spinach outbreak was evidence that alerting stores in a matter of seconds that they needed to remove a product would be invaluable to public safety. To prove this feature, Walmart and IBM used blockchain technology to trace mangoes from farms in Mexico to two stores in North America. For this test, each stakeholder in the supply chain, including farms, packing houses, transportation companies, importers/exporters, processing facilities, distribution centers, and stores, put data on the blockchain. The blockchain then linked the datasets together to tell the story of the journey this mango took from farm to store. The result was a steep reduction in the time it took to trace mangoes, from 7 days to 2.2 seconds! That is what I have referred to as "food traceability @ the speed of thought." We used mangoes for two reasons. First, the produce supply chain is one of the most complicated in the food system. Second, even though the produce supply chain is very safe when one considers per-capita consumption rates, when a foodborne illness does occur, produce is one of the most frequent causes. The ability to pinpoint and remove a product from the shelves immediately after becoming aware of a food safety issue could prevent illnesses, and also reduce the likelihood that the wrong sources will be erroneously implicated. The amazing power of this innovation is that, once the foundational infrastructure enabling greater transparency and traceability is built, it is relatively easy to leverage the same infrastructure to collect additional data about the food, from time-temperature tracking for improved freshness to tracking certificates for food safety audits. Another benefit we observed during the mango POC was increased visibility into the speed at which food flows through the supply chain. For example, it is easy to blame the farmer for the poor quality or lower shelf life of foods that reach our stores.
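As an illustration of why a shared, linked record makes that trace-back nearly instantaneous, the short Python sketch below shows the query pattern in miniature: once every node keys its events to the same lot identifier on a common ledger, provenance becomes a single lookup instead of a week of phone calls up and down the chain. All lots, parties, and dates here are invented for illustration, and the real event model, which builds on GS1 standards, captures far richer data.

```python
from datetime import date

# Hypothetical traceability events, as each supply-chain node might record them
# on a shared ledger (fields loosely modeled on GS1-style event data).
events = [
    {"lot": "MANGO-001", "step": "harvested", "party": "Farm A, Sinaloa",        "date": date(2017, 4, 2)},
    {"lot": "MANGO-001", "step": "packed",    "party": "Packing house 7",        "date": date(2017, 4, 3)},
    {"lot": "MANGO-001", "step": "imported",  "party": "Border / importer",      "date": date(2017, 4, 7)},
    {"lot": "MANGO-001", "step": "processed", "party": "Slicing facility",       "date": date(2017, 4, 8)},
    {"lot": "MANGO-001", "step": "received",  "party": "Distribution center 12", "date": date(2017, 4, 9)},
    {"lot": "MANGO-001", "step": "shelved",   "party": "Store 42",               "date": date(2017, 4, 10)},
]


def trace_back(lot: str, ledger: list) -> list:
    """Return the full farm-to-store history of a lot, oldest event first."""
    history = [e for e in ledger if e["lot"] == lot]
    return sorted(history, key=lambda e: e["date"])


history = trace_back("MANGO-001", events)
print("Origin:", history[0]["party"])   # the node that first recorded the lot
for e in history:
    print(f'{e["date"]}  {e["step"]:10s}  {e["party"]}')
```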
During this POC, however, we learned that the mangoes sat at border control for four days before reaching our direct supplier. That's four additional days of shelf life we could give back to our customers, resulting in better quality products and less food waste. Using blockchain will allow us to identify where in our supply chain we can improve efficiencies and to do more "fact-finding" than "fault-finding" when issues do arise.

Pork and Authenticity
Food fraud is being identified increasingly and consumers are aware of this trend. Proving that blockchain could be used to build confidence in the authenticity of our products was as important as proving traceability. We wanted to demonstrate that blockchain could be used to do more than trace food, and we wanted to engage our international partners as well. So, in addition to our "mango POC," we conducted a POC to trace pork from farms in China to a Walmart store, also in China. We began by collecting information about the animals at the farm in China, then at the slaughterhouse, and on through their transport and the Animal Product Quarantine Certificate Exchange.

Figure 2. Pork Supply Chain in China

Prior to conducting the POC, a label was placed on each case of pork at the processing facility, but it contained minimal information. For the POC we added a QR code (barcode) to each case, which allowed any trusted user to verify the traceability and authenticity of the product at any point between packaging the product into the case and its arrival at our stores. Our Walmart associates could scan the label in the distribution center to digitally view the purchase order and shipment details, and thereby verify that the product was flowing through the correct distribution center. Our associates previously had to verify the details on paper, which made us vulnerable to errors. Even more exciting is that the veterinary certificates were scanned to the blockchain, rather than physical copies being handed to a truck driver. The certificates were stored on the blockchain as an immutable digital copy that was accessible to any trusted user on the network. Permissioned food safety professionals in our organization had instant access to the veterinary certificates at any time, which eliminated having to chase down paper records. In the POC we reduced the amount of time it took to access the certificates and increased confidence in those records. In China, where trust and authenticity are serious issues, we brought trust to the food system.

FOUNDATION PROGRAM
Over the summer of 2017, after demonstrating that blockchain was a viable way to trace and authenticate food, we realized that if we sought to build a proprietary system unique to Walmart and our supply chains, we would fail. The power of blockchain networks comes from collaborative ecosystems that enable a diverse group of stakeholders to participate in the network. No single retailer or single food company, regardless of size, can (or should?) do this alone. We understood that these were still the early days of blockchain technologies being applied in an enterprise environment and that we needed to encourage innovation in this space rather than stifle it. Therefore, our CEO, Doug McMillon, contacted the leaders of some of the world's most influential food companies to inform them of our accomplishments and ask them to participate in additional testing and scaling of the solution.
Partnership was critical to creating an open, collaborative solution that would work for everyone. If each company attempted to create solutions in isolation, we would end up right back where we started on this journey, with systems that don't talk to each other and datasets that cannot be linked across the supply chain. We recognized early on that pre-competitive collaboration was essential in this space if we were to deliver a safer, smarter, and more transparent food system to our customers. For this reason, we invited even our competitors to join us in scaling this innovation. Moreover, it would have been cost-prohibitive for each supplier to implement and participate in a separate blockchain network for each retailer with whom they conduct business. Today we have a coalition of ten Foundation Partners comprised of both suppliers and retailers, which include Walmart, Kroger, Wegmans, Tyson, Driscoll's, Nestle, Unilever, Danone, McCormick, and Dole. It was equally important to ensure that the self-governance structure we built around the blockchain network to resolve issues such as data ownership, privacy, and access rights was developed collaboratively with the Foundation Partners, in order to prevent any one stakeholder, like Walmart or our solution provider IBM, from having unequal authority to make decisions.

GUIDING PRINCIPLES FOR THE DEVELOPMENT OF BLOCKCHAIN SOLUTIONS
To promote a more collaborative approach to blockchain food applications, reduce duplication of effort, and promote more efficient and interoperable solutions, Walmart has developed the following guiding principles on how we believe blockchain solutions should develop for food.

Solve for a Business Case: When pursuing a new technology application, we believe there should be a clear business case for doing so. Don't chase blockchain; consider it when it is deemed more effective than existing technologies or approaches. This also means that one should begin with the business problem being addressed in mind, not the technology.

Collaborate: As the food system is complex and interdependent on numerous stakeholders, we believe the development of blockchain solutions should be a collaborative effort, as no single company or sector can digitize the food system alone. Working together, we can reduce duplication of effort and redundancies, and gain food system efficiencies by promoting more effective and interoperable solutions.

Interoperate: A collaborative, digital traceability network does not exist today, because companies have digitized their areas of the food system in isolation and created digital information silos. To prevent a repeat of the past, we believe blockchain food networks must be designed so that they are interoperable with legacy systems as well as other blockchain networks, and are based on existing standards, such as GS1.

Create Shared Value: One beneficial feature of blockchain is that it democratizes information. In order to take advantage of this feature, blockchain solutions should be designed and operated in a way that provides benefits and adds value to all stakeholders in the food continuum (farmers, processors, distributors, retailers, etc.). Making sure that all stakeholders benefit is critical to creating a blockchain ecosystem that participants want to join and participate in voluntarily. It will also allow the entire food system to get better together, rather than individual entities getting better alone.
Leverage: Whenever possible, blockchain solutions within an organization should utilize and build on existing technologies, processes, standards (such as GS1), and investments already made in digitizing the food system. This will allow blockchain solutions to develop in a cost-effective and less disruptive manner.

Establish Strong Governance: As blockchain solutions are less reliant on a central authority, blockchain networks must clearly establish rules for self-governance, including membership, data ownership, rules of conduct, and privacy. Blockchain is about trust.

Make It Affordable: During these challenging economic times, consumers worldwide are trying to make their food dollar go farther so they can feed their families and loved ones. As advocates for the customer, we believe blockchain solutions should always undergo a thorough cost-benefit analysis to ensure that they truly deliver benefits while protecting against ineffectiveness and unnecessary cost. In addition, we believe blockchain solutions should have little to no cost to use, and that any amount users pay to participate is proportional to the benefits they derive from the solution.

Scaling Trust
Walmart, IBM, and the Foundation Partners have moved rapidly to scale, test, and implement blockchain-enabled traceability on a number of strategically selected SKUs, including both private and supplier brands. As of May 2018, Walmart has already tracked nearly two dozen SKUs involving 2.6 million food packages across 166,000 traceability events on the blockchain. Furthermore, we have achieved this in a production environment, that is, beyond proof of concepts or pilots. After hearing about the Foundation's successes to date, many companies across the globe have reached out to us saying that they want to learn about our approach and participate in our initiative. I truly believe that blockchain could enable a level of transparency in the food supply chains that has not existed before. It will allow us to move away from a food system that has operated with a lot of anonymity and create an environment of accountability that enables and scales trust. For Walmart, the end goal is to leverage this transparency to create a safer, smarter, and more sustainable food system that benefits people and the planet. Ultimately, this will benefit our customers, whether it helps them make a better decision while shopping at our stores or allows them to scan a QR code on the package to learn everything they would like to know about the food they are purchasing, from the farmer or fisherman through each step it took in the journey from farm to store.

Potential Impact of the Innovation
A digital food system enabled by blockchain could enable more than just traceability and safety. It could lay the groundwork for benefits such as the following:

Transparency: Transparency is the food system's desired state, one in which a food's attributes are easily accessible to all stakeholders, including consumers, so that decisions made at every level can be more informed.

Enhanced Food Flow: Blockchain will enable instant access to large amounts of data that were not previously available. This means that the best decisions about how food flows from farm to our stores will be not only possible but automated.

Reduced Food Waste: One outcome of using blockchain could be a vast reduction in food waste.
This aligns with Walmart's commitment to achieve zero waste to landfill in key markets by 2025, and to sell more sustainably produced products while maintaining the low prices customers expect.

Deterring Food Fraud: Enhanced transparency will shine a light on each actor in the food system, which we believe will discourage unscrupulous behaviors and deter food fraud.

New Model for Scaling Trust in Food: Given recent and well-publicized food scares and data scandals, some consumers have lost trust in both private and public institutions and large central authorities. This trend is not unique to food and it has broader societal implications. Some believe that a new form of distributed, digital trust that is dependent on better checks and balances is emerging. Because blockchain protocols are based on decentralization and consensus, they could help food system stakeholders restore and scale consumer confidence in food, and in the institutions that are part of the nation's food supply.

Walmart and the Foundation Partners have moved rapidly to scale, test, and implement blockchain-enabled traceability on a number of strategically selected SKUs. Right now we are on the verge of demonstrating complete farm-to-store traceability for many more. As our global community becomes smaller, the business of moving food from the farm to the dinner table has become increasingly complex. Food is being distributed farther than ever before, sometimes from one distant country to another, and foodborne disease outbreaks could become more widespread. As long as foodborne outbreaks exist somewhere in the world, they can exist anywhere in the world. Today's food system requires more interdependence on multiple stakeholders than ever before. In fact, collaborative solutions have never been as important as they are today. No single food retailer can mandate better food traceability, food manufacturers in one country can't do it alone, nor can any single country's regulatory agencies. Better food traceability requires collaboration, and it must be people-led and technology-enabled. While we are starting with food traceability, our ultimate goal is greater food transparency, which will benefit all food system stakeholders. It will benefit food producers along the entire food continuum. It will benefit regulatory officials and NGOs. But, ultimately, enhanced food transparency will benefit our customers. By getting rid of the anonymity that exists in the current food system, blockchain technology will shine a light along every step of the way in the life of our food products to help create a safer, smarter, and more sustainable food system so that our customers can save money and live better.
Summary of the NACI Statement on Seasonal Influenza Vaccine for 2017–2018

Background: Influenza is a respiratory infection caused primarily by influenza A and B viruses. Vaccination is the most effective way to prevent influenza and its complications. The National Advisory Committee on Immunization (NACI) provides recommendations regarding seasonal influenza vaccines annually to the Public Health Agency of Canada (PHAC).
Objective: To summarize the NACI recommendations regarding the use of seasonal influenza vaccines for the 2017–2018 influenza season.
Methods: Annual influenza vaccine recommendations are developed by NACI's Influenza Working Group for consideration and approval by NACI, based on NACI's evidence-based process for developing recommendations. The recommendations include a consideration of the burden of influenza illness and the target populations for vaccination; efficacy and effectiveness, immunogenicity and safety of influenza vaccines; vaccine schedules; and other aspects of influenza immunization. These recommendations are published annually on the Agency's website in the NACI Advisory Committee Statement: Canadian Immunization Guide Chapter on Influenza and Statement on Seasonal Influenza Vaccine (the Statement).
Results: The annual statement has been updated for the 2017–2018 influenza season to incorporate recommendations for the use of live attenuated influenza vaccine (LAIV) that were contained in two addenda published after the 2016–2017 statement. These recommendations were 1) that egg-allergic individuals may be vaccinated against influenza using the low ovalbumin-containing LAIV licensed for use in Canada and 2) to continue to recommend the use of LAIV in children and adolescents 2–17 years of age, but to remove the preferential recommendation for its use.
Conclusion: NACI continues to recommend annual influenza vaccination for all individuals aged six months and older, with particular focus on people at high risk of influenza-related complications or hospitalization, people capable of transmitting influenza to those at high risk, and others as indicated.

Introduction
Influenza and pneumonia are ranked among the top 10 leading causes of death in Canada (1). Although the burden of influenza can vary from year to year, it is estimated that in a given year, there are an average of 12,200 hospitalizations related to influenza (2) and approximately 3,500 deaths attributable to influenza (3). The National Advisory Committee on Immunization (NACI) provides recommendations regarding seasonal influenza vaccines annually to the Public Health Agency of Canada (PHAC). The objective of this article is to summarize the NACI recommendations for the use of seasonal influenza vaccine for the 2017-2018 influenza season. Complete details can be found in the Statement on Seasonal Influenza Vaccine for 2017-2018 (4).
Methods
In the preparation of the 2017-2018 seasonal influenza vaccine recommendations, NACI's Influenza Working Group (IWG) identified and reviewed evidence regarding the administration of live attenuated influenza vaccine (LAIV) in egg-allergic individuals and the vaccine effectiveness of LAIV and inactivated influenza vaccine (IIV) in children and adolescents 2-17 years of age. Following the review and analysis of this information, the IWG proposed updated recommendations for vaccine use to NACI, based on NACI's evidence-based process for developing recommendations (5). NACI critically appraised the available evidence and approved the specific recommendations brought forward. Complete details of the literature review, rationale and relevant considerations for the updated recommendations can be found in the Addendum - LAIV Use in Egg Allergic Individuals (6), the Addendum - LAIV Use in Children and Adolescents (7), and the Canadian Immunization Guide Chapter on Influenza and Statement on Seasonal Influenza Vaccine for 2017-2018 (4).

For the review of LAIV use in egg-allergic individuals, data were obtained from three prospective cohort studies in the United Kingdom (UK) and Canada (8)(9)(10). Post-licensure safety data from the Canadian Adverse Events Following Immunization Surveillance System (CAEFISS) were analyzed to seek reports of adverse events in influenza vaccine recipients who describe a history of allergy to eggs. Data on LAIV vaccine effectiveness in children and adolescents were obtained primarily from American studies using the test-negative design: the United States Influenza Vaccine Effectiveness Network (US Flu VE Network) (2010-2016) (11)(12)(13)(14), the Influenza Clinical Investigation for Children (ICICLE) study (2013-2014 through 2015-2016 influenza seasons) (15)(16)(17) and the US Department of Defense (DoD) (2013-2014 and 2015-2016 influenza seasons) (13,18). The American Household Influenza Vaccine Effectiveness (HIVE) study derived vaccine effectiveness data using an alternative household cohort design (2012-2013 and 2013-2014 seasons) (19,20). Data on LAIV vaccine effectiveness from outside of the United States of America came from the Canadian Sentinel Practitioner Surveillance Network (SPSN) (2013-2014 and 2015-2016 seasons) (21,22), Germany (2012-2013 season) (23), the UK sentinel surveillance network (2013-2014 through 2015-2016 seasons) (24)(25)(26), and Finland (2015-2016 season) (27). These studies used the test-negative design (21)(22)(23)(24)(25)(26), with one prospective cohort study (27) and two cluster randomized trials (28,29). This article also presents information not provided in the published addenda or statement: figures summarizing the LAIV vaccine effectiveness data from the cited studies, by influenza season and influenza strain, as well as LAIV vaccine effectiveness data used to inform NACI's decision that were not publicly available when the Addendum was finalized, but have subsequently been published (30,31).

New for the 2017-2018 influenza season
There were two changes in NACI recommendations for the use of seasonal influenza vaccine for the 2017-2018 influenza season. Both changes related to updated recommendations on the use of LAIV.
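Most of the observational studies cited above use the test-negative design, in which vaccine effectiveness is derived from the odds ratio of vaccination among influenza-positive cases versus influenza-negative controls. The short sketch below illustrates only the basic logic of that calculation, using invented counts; the published estimates are adjusted for factors such as age, study site and calendar time, so this crude version serves merely to show why small cell counts translate into wide confidence intervals.

```python
import math


def vaccine_effectiveness(vacc_cases, vacc_controls, unvacc_cases, unvacc_controls):
    """Crude VE from a test-negative design: VE = (1 - OR) * 100,
    with a Wald 95% CI on the log odds ratio.

    Note: the adjusted estimates reported in the cited studies additionally
    control for covariates; this unadjusted 2x2 version is illustrative only."""
    odds_ratio = (vacc_cases * unvacc_controls) / (vacc_controls * unvacc_cases)
    se_log_or = math.sqrt(1 / vacc_cases + 1 / vacc_controls +
                          1 / unvacc_cases + 1 / unvacc_controls)
    or_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    or_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    # A higher odds ratio means lower effectiveness, so the CI bounds flip.
    return (1 - odds_ratio) * 100, (1 - or_high) * 100, (1 - or_low) * 100


# Hypothetical counts: influenza-positive (cases) and influenza-negative
# (controls) children, by vaccination status.
ve, ci_low, ci_high = vaccine_effectiveness(40, 180, 120, 200)
print(f"VE = {ve:.0f}% (95% CI {ci_low:.0f}% to {ci_high:.0f}%)")
```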
LAIV is safe for egg-allergic individuals
All influenza vaccine products authorized for use in Canada are manufactured from influenza virus grown in chicken eggs, which may result in the vaccines containing trace amounts of residual egg protein. The formulation of LAIV licensed for use in Canada contains a low amount of residual ovalbumin (less than 0.24 µg/dose) (written communication from AstraZeneca), which is comparable to the amounts in IIVs available in Canada. At the time of publication of the Canadian Immunization Guide Chapter on Influenza and Statement on Seasonal Influenza Vaccine for 2016-2017 (32), NACI did not recommend LAIV use in egg-allergic individuals due to a lack of data available to support this practice. However, the safety of LAIV in egg-allergic individuals has now been studied in more than 1,100 children and adolescents (2-18 years of age) in the UK and Canada (8)(9)(10). After careful review of recently published studies, NACI concludes that egg-allergic individuals may be vaccinated against influenza using the low ovalbumin-containing LAIV licensed for use in Canada. The full dose of LAIV may be used without a prior vaccine skin test and in any setting where vaccines are routinely administered. LAIV also appears to be well tolerated in individuals with a history of stable asthma or recurrent wheeze; however, it remains contraindicated for individuals with severe asthma (defined as currently on oral or high-dose inhaled glucocorticosteroids or active wheezing) or for those with medically attended wheezing in the seven days prior to immunization. The use of LAIV in egg-allergic individuals is a change from previous NACI statements. Complete details of the literature review, rationale and relevant considerations for the updated recommendations can be found in the Addendum - LAIV Use in Egg Allergic Individuals (6).

Removal of the preferential recommendation for LAIV use in children and adolescents 2-17 years of age
In the Canadian Immunization Guide Chapter on Influenza and Statement on Seasonal Influenza Vaccine for 2016-2017 (32), NACI recommended the preferential use of LAIV in children and adolescents 2-17 years of age who did not have contraindications to the vaccine. This recommendation was based upon randomized placebo-controlled studies and post-marketing safety data that showed LAIV to be safe, efficacious and immunogenic in children and to provide better protection against influenza than trivalent IIV, especially in young children (less than six years of age), with weaker evidence of superior efficacy in older children (33).

The adjusted vaccine effectiveness estimates for LAIV and IIV against any influenza in children and adolescents (2-17 years of age) are summarized by study for the 2010-2011 through 2014-2015 (Appendix Figure 1) and 2015-2016 (Appendix Figure 2) influenza seasons. Summaries of adjusted vaccine effectiveness estimates by study and vaccine type are also provided for influenza A(H1N1)pdm09 (Appendix Figure 3), influenza A(H3N2) (Appendix Figure 4) and influenza B (Appendix Figure 5) for these same influenza seasons. (Note: In some influenza seasons, sample sizes were too small to derive vaccine effectiveness estimates for all influenza strains.)
Based upon the US Flu VE Network data showing that LAIV provided no protective benefit during the influenza A(H1N1)-dominant 2015-2016 influenza season and no evidence of effectiveness against the dominant circulating strains in the two prior influenza seasons (2013-2014 and 2014-2015), the American Advisory Committee on Immunization Practices (ACIP) recommended during its June 2016 meeting that LAIV should not be used during the 2016-2017 influenza season (34). LAIV continued to be recommended for use in children in the UK and Finland for the 2016-2017 season (35). Studies conducted in both of these countries and in Canada found a statistically significant overall protective effect of LAIV in children for 2015-2016, although sample sizes limited the precision of those estimates (22,24,27). The United States Food and Drug Administration (US FDA) also determined that specific regulatory action for LAIV was not necessary at the time, following a review of manufacturing and clinical data supporting licensure and the totality of evidence presented at the June 2016 ACIP meeting, and continues to find that the benefits of quadrivalent LAIV outweigh any potential risks (36). Quadrivalent LAIV remains licensed for use in the US. The FDA's determination was made taking into account the limitations of observational studies in estimating vaccine effectiveness and the seasonal variability of influenza vaccine effectiveness.

After careful review of available studies from the last several influenza seasons, NACI concludes that the current evidence is consistent with LAIVs providing comparable protection against influenza to that afforded by IIV in various jurisdictions and has revised its recommendations on the use of influenza vaccine in children and adolescents 2-17 years of age:

1. In children and adolescents without contraindications to the vaccine, any of the following vaccines can be used: quadrivalent LAIV, quadrivalent inactivated influenza vaccine (QIV) or trivalent inactivated influenza vaccine (TIV).
2. The current evidence does not support a recommendation for the preferential use of LAIV in children and adolescents 2-17 years of age.

Given the burden of influenza B disease in children and the potential for lineage mismatch between the predominant circulating strain of influenza B and the strain in a trivalent vaccine, NACI continues to recommend that a quadrivalent formulation of influenza vaccine be used in children and adolescents 2-17 years of age. If a quadrivalent vaccine is not available, TIV should be used.

The observational study data reviewed highlight the challenge in interpreting the vaccine effectiveness of LAIV and IIV when point estimates by influenza subtype are derived from small sample sizes associated with wide confidence intervals. Therefore, in making its recommendations, NACI recognizes the need to continue to closely monitor the data on the vaccine effectiveness of LAIV by influenza subtype and the relative effectiveness of LAIV compared to IIV. NACI has also identified the need for further research to address current knowledge gaps:

3. NACI strongly encourages further multidisciplinary (e.g. epidemiology, immunology, virology) research to investigate the reasons for the discordant 2015-2016 vaccine effectiveness estimates between studies and explanations for the poor LAIV effectiveness against A(H1N1)pdm09 reported in some studies.
4. NACI strongly recommends that sufficient resources be provided to enhance influenza-related research and sentinel surveillance systems in Canada to improve the evaluation of influenza vaccine efficacy and effectiveness, to provide the best possible evidence for Canadian influenza vaccination programs and recommendations.

Complete details of the literature review, rationale and relevant considerations for the updated recommendations can be found in the Addendum - LAIV Use in Children and Adolescents (7) and the Canadian Immunization Guide Chapter on Influenza and Statement on Seasonal Influenza Vaccine for 2017-2018 (4).

Summary of NACI recommendations for the use of influenza vaccines for the 2017-2018 influenza season
NACI continues to recommend influenza vaccination for all individuals aged six months and older who do not have contraindications to the vaccine, with particular focus on people at high risk of influenza-related complications or hospitalization, people capable of transmitting influenza to those at high risk of complications, and others as indicated in Table 1.

Table 1: Groups for whom influenza vaccination is particularly recommended
1 The risk of influenza-related hospitalization increases with length of gestation (i.e. it is higher in the third than in the second trimester)

Abbreviations: LAIV, live attenuated influenza vaccine (quadrivalent formulation); N/A, not applicable; QIV, quadrivalent inactivated influenza vaccine; TIV, trivalent inactivated influenza vaccine
1 Influvac® 18 years and older, Fluviral® 6 months and older, Agriflu® 6 months and older, Vaxigrip® 6 months and older, Fluzone® 6 months and older
2 Flulaval® Tetra 6 months and older, and Fluzone® Quadrivalent 6 months and older
3 This information differs from the product monograph. Published and unpublished evidence suggest moderate improvement in antibody response in infants, without an increase in reactogenicity, with the use of full vaccine doses (0.5 mL) for unadjuvanted inactivated influenza vaccines (37,38). This moderate improvement in antibody response without an increase in reactogenicity is the basis for the full dose recommendation for unadjuvanted inactivated vaccine for all ages. For more information, refer to the Statement on Seasonal Influenza Vaccine for 2011-2012 (39)
4 Children 6 months to less than 9 years of age who have never received the seasonal influenza vaccine require two doses of influenza vaccine, with a minimum interval of four weeks between doses. Eligible children less than 9 years of age who have properly received one or more doses of seasonal influenza vaccine in the past should receive one dose per influenza vaccination season thereafter

Appendix
1 For each study in the forest plot, the black circle represents the vaccine effectiveness point estimate and the vertical bar represents the corresponding 95% confidence interval. The 95% confidence interval lower limits are truncated at -40%
2 The Finland national cohort study reported vaccine effectiveness in children two years of age
3 The Canadian SPSN reported wide and overlapping 95% confidence intervals (exact values not publicly available at time of writing)

Appendix continued
1 For each study in the forest plot, the black circle represents the vaccine effectiveness point estimate and the vertical bar represents the corresponding 95% confidence interval. The 95% confidence interval lower limits are truncated at -40%
2 The ICICLE study reported vaccine effectiveness against influenza B/Yamagata for the 2013-2014 influenza season
3 The Finland national cohort study reported vaccine effectiveness in children two years of age

Figure 1: Adjusted vaccine effectiveness estimates against any influenza by study and vaccine type for the 2010-2011 through 2014-2015 influenza seasons in children and adolescents 2-17 years of age 1
Figure 2: Adjusted vaccine effectiveness estimates against any influenza by study and vaccine type for the 2015-2016 influenza season in children and adolescents 2-17 years of age 1
Figure 3: Adjusted vaccine effectiveness estimates against influenza A(H1N1)pdm09 by influenza season, study and vaccine type in children and adolescents 2-17 years of age for A(H1N1)pdm09-dominant seasons since 2009 1
Figure 4: Adjusted vaccine effectiveness estimates against influenza A(H3N2) by influenza season, study and vaccine type in children and adolescents 2-17 years of age for A(H3N2)-dominant seasons since 2009 1

Recommended influenza vaccine options by specific age and risk groups and by dosage and route of administration by age are summarized in Table 2 and Table 3, respectively.
2 These include seizure disorders, febrile seizures and isolated developmental delay in children, and neuromuscular, neurovascular, neurodegenerative, neurodevelopmental conditions and seizure disorders in adults, but exclude migraines and neuropsychiatric conditions without neurological conditions

Table 2: Choice of influenza vaccine for selected age and risk groups (for persons without a contraindication to the vaccine) 1
1 Updated recommendations noted in bold
Table 3: Recommended influenza vaccine dosage and route, by age, for the 2017-2018 influenza season
The ILAE consensus classification of focal cortical dysplasia: An update proposed by an ad hoc task force of the ILAE diagnostic methods commission

Abstract
Ongoing challenges in diagnosing focal cortical dysplasia (FCD) mandate continuous research and consensus agreement to improve disease definition and classification. An International League Against Epilepsy (ILAE) Task Force (TF) reviewed the FCD classification of 2011 to identify existing gaps and provide a timely update. The following methodology was applied to achieve this goal: a survey of published literature indexed with ((Focal Cortical Dysplasia) AND (epilepsy)) between 01/01/2012 and 06/30/2021 (n = 1349) in PubMed identified the knowledge gained since 2012 and new developments in the field. An online survey consulted the ILAE community about the current use of the FCD classification scheme with 367 people answering. The TF performed an iterative clinico-pathological and genetic agreement study to objectively measure the diagnostic gap in blood/brain samples from 22 patients suspicious for FCD and submitted to epilepsy surgery. The literature confirmed new molecular-genetic characterizations involving the mechanistic Target Of Rapamycin (mTOR) pathway in FCD type II (FCDII), and SLC35A2 in mild malformations of cortical development (mMCDs) with oligodendroglial hyperplasia (MOGHE). The electro-clinical-imaging phenotypes and surgical outcomes were better defined and validated for FCDII. Little new information was acquired on clinical, histopathological, or genetic characteristics of FCD type I (FCDI) and FCD type III (FCDIII). The survey identified mMCDs, FCDI, and genetic characterization as fields for improvement in an updated classification. Our iterative clinico-pathological and genetic agreement study confirmed the importance of immunohistochemical staining, neuroimaging, and genetic tests to improve the diagnostic yield. The TF proposes to include mMCDs, MOGHE, and "no definite FCD on histopathology" as new categories in the updated FCD classification. The histopathological classification can be further augmented by advanced neuroimaging and genetic studies to comprehensively diagnose FCD subtypes; these different levels should then be integrated into a multi-layered diagnostic scheme. This update may help to foster multidisciplinary efforts toward a better understanding of FCD and the development of novel targeted treatment options.
K E Y W O R D S brain, classification, epilepsy, focal cortical dysplasia, genes, seizure | INTRODUCTION In 1957, Crome first described a different form of "ulegyria" with largely irregular "nerve cells and stout tortuous processes." 1 In 1971, David Taylor coined the term "focal cortical dysplasia" based on irregular dysmorphic neurons and enlarged ballooned cells in the setting of microscopically discernable architectural disorganization of the neocortex in patients with focal epilepsies. 2 Since then, focal cortical dysplasia (FCD) has been associated with medically intractable epilepsy 3 that carries a less favorable prognosis for a seizure-free outcome following surgical resection than hippocampal sclerosis and developmental brain tumors. 4,5 However, imaging techniques have enabled the presurgical detection and increased awareness of the incidence and importance of FCD as a common pathological cause of medically intractable epilepsy. 6 These electro-clinical observations led to multiple attempts to classify FCD 7,8 with pathological subdivisions that correlate with relevant clinical, electroencephalographic, and imaging features and directly affect management of epilepsies associated with FCD and their postsurgical outcomes. From a histopathological standpoint, a category of frequently encountered architectural abnormalities of the neocortex but no cytopathology features was introduced 7 and later assigned to FCDI in the Palmini classification. 8 In addition, the Palmini classification made the first attempt toward a clinico-pathological correlation and formally classified FCD into two subtypes-FCDI and FCDII-and two additional subtypes for each one of these groups. Subsequent studies showed that the microscopic hallmarks for a reliable and consistent histopathological diagnosis of FCDI remained poor. 9 These challenges were addressed in the first international FCD consensus classification of 2011. 10 The International League Against Epilepsy (ILAE) classification expanded Palmini type I into three subtypes with reference to architectural abnormalities and lack of any other principal lesion ( Figure 1B and C). ILAE type II and Palmini type II subtypes remained identical. However, FCDIII and its four subtypes were newly introduced and defined as the presence of architectural abnormalities in association with another "principal" lesion: hippocampal sclerosis (FCDIIIa, Figure 1E), low-grade developmental brain tumors (FCDIIIb), vascular malformations (FCDIIIc, Figure 1F), or any other lesion acquired during early life (FCDIIId, Figure 1G and H). | Meetings of the task force on FCD and manuscript generation During its term (2017-2021), the Task Force (TF) met in person at the annual American Epilepsy Society meetings in Washington, D.C. (2017), New Orleans (2018), and Baltimore (2019); at the International Epilepsy Congress in Bangkok, Thailand in 2019; and during the Cleveland Clinic FCD Summit in 2019. In addition, the TF met online in December 2020. The discussions during the meetings included: (1) a review of the current state of knowledge since the first ILAE classification was published in 2011 11,12 ; (2) design, execution, and analyses of the findings of an expert survey of the current use and challenges of the FCD classification; and (3) a discussion of the results of an iterative histopathological agreement and genetic study. 
13 The summary of the literature review, the results of the survey and the agreement study, and the recommendations for a first | The iterative histopathology agreement study As recently reported, the TF initiated an iterative histopathological agreement trial completed by 20 neuropathologists (of 38 invited) from 16 countries using a consecutive series of 196 surgical tissue blocks obtained from 22 patients with epilepsy at a single center. 13 In addition, five independent genetic labs performed screening or validation sequencing of FCD relevant genes, that is, the FCD gene panel, in paired brain and blood samples from the same patients. All study results were discussed comprehensively and published in a peerreviewed journal. 13 3 | RESULTS | New knowledge and challenges in the first ILAE classification The new knowledge includes the characterization of new diagnostic entities, 14 either by anatomo-clinicopathological studies in FCDII located at the bottom of sulcus 15,16 or a persistent genotype-phenotype pattern in mMCD with oligodendroglial hyperplasia and epilepsy (MOGHE) with SLC35A2 brain mosaicism. 17 in addition, new knowledge gathered in the neurophysiology of FCD, advanced neuroimaging findings, postsurgical outcome studies, progress in studying brain somatic mosaicism, and DNA methylation of human FCD tissues is reviewed and recognized in the FCD classification update. Key findings are described below. Bottom of sulcus (BOS) focal cortical dysplasia FCD that is restricted in its anatomic location and extent to the bottom of a sulcus has been identified repeatedly as a surgically remediable pathology with clear implications both on the surgical approach, management, and postoperative surgical outcome. 15,16 These lesions are identified mainly on magnetic resonance imaging (MRI). They tend to localize in the depth of frontal lobe sulci (superior frontal sulcus, inferior frontal sulcus, and central sulcus) and less frequently in the parietal or temporal lobes. Direct intralesional, intraoperative, or extraoperative depth electrode electroencephalography (EEG) recordings identify a characteristic rhythmic spiking pattern in the depth of sulcus lesion. 16,18 The complete resection of the anatomic lesion achieves seizure freedom in most patients. From a histopathological standpoint the lesions show cellular and architectural patterns of either FCDIIb (Figure 2) or, less commonly, FCDIIa. A germline frameshift insertion in DEPDC5 has been identified in one patient, 16 and another study identified somatic pathogenic variants in mechanistic Target Of Rapamycin (MTOR) in six patients and heterozygous pathogenic germline variants in two (DEPDC5 and NPRL3), 19 thus assigning this syndrome to the spectrum of mTORopathies. Mild malformations of cortical development with oligodendroglial hyperplasia in epilepsy (MOGHE) An increase in oligodendroglia and heterotopic neurons in the white matter has been described as a new epilepsyassociated histopathological entity in young children with frontal lobe epilepsy. 20 MOGHE was also documented in patients with temporal lobe epilepsy. 21,22 A subsequent series of 12 patients, including children (25%) and adults (75%), showed MOGHE lesions circumscribed to the frontal lobe in 6 (50%), the temporal lobe in 3 (25%), and multiple lobes in the remaining 3 patients (25%), with MRI findings like that of FCDIIa. 23,24 Somatic brain mosaicism in the UDP-galactose transporter gene SCL35A2 is a major etiological factor. 
13,17 These results argue for the inclusion of MOGHE as a distinct pathological entity that preferentially affects the white matter of patients with early-onset epilepsy and is amenable to epilepsy surgery. 25 3.1.2 | Neurophysiology FCDII subtypes became much better characterized as clinical entities with well-defined EEG signatures in FCDIIa and IIb subtypes. The specificity of interictal patterns such as focal continuous rhythmic discharges and repetitive spiking has been suggested as a possible predictor of the ictal-onset zone and of favorable postresection seizure outcome. 3,18,[26][27][28][29] Previous studies have shown that intrinsic epileptogenicity might not overlap with the MRI-observed abnormality. 30,31 Correlations between histopathological and neurophysiological studies, that is, intracerebral depth electrode recordings, also provided evidence for a contribution of dysmorphic neurons to interictal spikes, fast gamma activity, and ripples. 32 Furthermore, seizure onset and phase-amplitude coupling in areas with dysmorphic neurons suggested preserved connectivity and were related to seizure initiation. Balloon cells showed no such association. 32 3.1.3 | Neuroimaging MRI techniques have provided a noninvasive window for the characterization of some FCD. On the other hand, the strength of the magnet, the imaging protocol, the correlation with clinical semiology and EEG findings, and the examiner's experience are crucial for planning subsequent management. 33,34 A negative study undertaken on a low-field MRI without using an epilepsy-dedicated protocol suggests an inadequate imaging acquisition. This is further demonstrated using high-field MRI. 35 These observations highlight the need for adequate imaging studies that may transform an MRI-negative into an MRI-positive study and may fundamentally change the surgical approach, minimize the use of additional highly expensive and morbid mapping studies, and result in significantly better postsurgical seizure outcomes. 33,34 Positive MRI changes have been described for FCDI lesions in 20% to 100% of the cases in the various publications since 2011, but the type of changes has rarely been specified further. [36][37][38][39][40] In two pediatric series, one reporting on FCDIa 41 and another on FCDIa and Ib, 38 subtle increases in white matter signal in T2 and fluid-attenuated inversion-recovery (FLAIR) were reported, with reduction of the volume of the white matter in FCDIa. 41 MRI abnormalities in FCDII include abnormal gyration patterns indicated by a cortical dimple, cortical thickness changes, signal increase (mainly in FLAIR) both in the lesion and in the adjacent white matter, and gray-white matter blurring. 6,34 The transmantle sign, a linear or triangular shaped high T2/FLAIR signal extending from the lesion toward the ventricle, most likely indicates FCDIIb. 42 Although most patients with FCDII show focal MRI abnormalities, almost one-third remain MRI negative, some of which could be due to inadequate imaging, but even 3 T imaging can be negative. 6,43 It is tempting to speculate that MR-negative FCDII lesions belong mainly to the spectrum of FCDIIa. 44 MRI postprocessing using a morphometric analysis program (MAP) has identified structurally abnormal subtle FCD lesions. 45 In addition, a major benefit of 7 T high-field MRI with postprocessing was reported for detection of subtle FCD lesions in patients with focal epilepsies and nonlesional 3 T MRI, 34 with subtle FCD lesions detected in 22% of patients with focal epilepsies and previous negative 3 T MRI. 35 Additional functional imaging modalities, such as interictal fluorodeoxyglucose-positron emission tomography (FDG-PET) and subtraction of ictal/interictal single-photon emission computed tomography (SPECT) and its co-registration with structural MRI, may add important information in patients with subtle lesions that helps to increase the confidence of the structural MRI diagnosis. 43
Figure 2 (caption): Multichannel-immunofluorescence whole slide imaging of a bottom-of-sulcus focal cortical dysplasia (FCDIIb). Dysmorphic neurons are labeled with anti-nonphosphorylated neurofilament H-specific antibodies and were concentrated at the bottom of a sulcus (orange arrow; sulcal surface indicated by small white arrowheads in the upper right). Vimentin-positive balloon cells (in green color) aggregated in the underlying white matter (green arrow). In addition, vascular myocytes expressing smooth muscle actin were visualized in magenta pseudo-color and all cell nuclei in blue color. Scale bar = 2 mm. Modified from ref. 16.
3.1.4 | Presurgical evaluation, surgical management, and postsurgical seizure outcome Our literature survey revealed that almost half of the studies addressed surgical approaches and postoperative seizure outcomes. 11 These reports highlighted the difficulties in approaching FCDI: Even the use of the most invasive evaluation techniques fails to localize the epileptogenic zone (EZ) and subsequently results in no resections or failed surgical resections in many patients. These failures could also be due to more widespread epileptogenic pathology, as reported in all patients of the rare group of children with subtle unilateral hypoplasia of the posterior quadrant and FCDIa. 41 On the other hand, the presurgical evaluation of patients with suspected FCDII has become more streamlined, and in some instances (FCDIIb or bottom of sulcus FCD), EZ localizations, mapping, and surgical resections with excellent results have been achieved without extraoperative invasive EEG evaluations. [35][36][37] Surgical outcome studies clearly established the successes and challenges facing the current FCD classification. Excellent seizure outcomes were associated with surgical resections involving FCDII. 5 But nonfavorable outcomes have been reported following resections of FCDI, 41 with the outcomes of FCDIII depending mainly on the principal lesion associated with FCD. 46 | Genetics of FCD Over the last decade, there has been growing evidence that brain mosaicism plays a major role in the etiology of FCD. Pathogenic variants were discovered initially in resected tissue of large cortical malformations such as megalencephaly and hemimegalencephaly (HME) by bulk DNA copy number assessment and targeted sequencing of genes of the PI3K-AKT3-mTOR pathway. [47][48][49] Subsequent studies revealed that smaller cortical malformations, such as FCDII, are also mosaic disorders caused by pathogenic variants in the same pathway, occurring in early neuroprogenitor cells and evolving into a mutated clonal cell population. [50][51][52][53][54][55][56][57][58][59][60] Currently, two distinct pathomechanisms are anticipated: (1) the glycosylation-related gene SLC35A2 in MOGHE, 13,17 and (2) genes belonging to the mTOR pathway (AKT3, DEPDC5, NPRL2, NPRL3, PIK3CA, RHEB, MTOR, TSC1, TSC2) in FCDII and HME. 
[50][51][52][53][55][56][57][58][59] In addition, there is recent evidence that a single hit (i.e., gain-of-function variant) in activators of the mTOR pathway (e.g., PIK3CA, AKT3, RHEB) or in MTOR itself is sufficient to cause the FCDII. 56 The dysregulation of the mTOR signaling pathway provides the rational mechanistic basis for a direct link between gene mutation and brain pathology involving dysmorphic neurons, balloon cells, oligodendrocytes, and astrocytes. 12,14,61 In contrast, a double hit with a germline and somatic lossof-function variant in repressors of the pathway (i.e., DEPDC5, NPRL2, NPRL3, TSC) is necessary for the expression of the brain lesion. Definite somatic second-hit events, either single nucleotide variants 19,60,62,63 or lossof-heterozygosity (LOH) 51,57 of the second allele leading to biallelic gene inactivation of DEPDC5 have now been reported, validating the two-hit model for mTORpathway repressor genes. Even among somatic variants, the number of DNA fragments that carry the mutation in a sequencing experiment is expected to serve as a surrogate marker for the number of mutated cells in a resected tissue. Accordingly, the so-called "variant allele fraction gradient" is correlated with a "dysmorphic neuron density gradient," with the highest variant load detected in the seizure-onset zone. 19,63,64 Another study reported a synergistic effect of two mosaic variants in mTOR pathway activators (RPS6 and MTOR) in a patient with HME. 65 In all studies, the mosaic fraction of the variants correlated with the lesion type, with greater mosaicism in HME reflecting the earlier timing of occurrence of the mutational event. 61 Analysis of pools of microdissected cells demonstrated that dysmorphic neurons and balloon cells carry the pathogenic variants leading to hyperactivation of mTOR. 19,51,64 These discoveries offer the opportunity to reshape the genetic landscape of FCD, distinguishing mTOR and non-mTOR-related FCD toward a new integrated genotypephenotype classification. 12,13,66 The current challenge is whether genetic findings can predict surgery outcome, the extent of the lesion, and the presence of multiple or bilateral lesions. 67 Overall mTOR-related MCD with germline or germline and somatic variants have a better surgical outcome than MCD caused by mutations in ion channel and synaptic transmission genes. 68 Two proof-of-principle studies recently reported that brain mutations can be detected in the circulating cell-free DNA obtained from cerebrospinal fluid. 69,70 If substantiated, this finding may allow for a genetic diagnosis before surgery, or when brain tissue is not available. Although the role of genetic testing in selecting surgical candidates and predicting surgical outcome are still debated, these findings point to the merit of including genetic testing results in the proposed integrated classification scheme update of FCD. | Emerging role of epigenetics in epilepsy There is compelling evidence that dysfunctional epigenetic processes are involved in the pathobiology of neurologic diseases and may serve as molecular indices for integrating the effects of inherited and acquired etiological factors and thus for modulating the clinical manifestations of a specific disease. 61 Indeed, studies assessing DNA methylation provide evidence for a role in epilepsy. 71,72 Genome-wide DNA methylation profiling in three different preclinical animal models identified a seizure-and etiology-specific epigenetic signature. 
73 Furthermore, differential hierarchical cluster analysis of DNA methylation studies in resected human brain samples distinguished patients with epilepsy from controls and further classified the histopathological entities associated with a seizure phenotype. 72,74,75 These studies not only provide evidence for disease-specific methylation signatures in focal epilepsies, but also emphasize the potential role of DNA methylation to distinguish FCD subtypes, and support the development of an integrated clinico-pathologic and molecular classification system of FCD subtypes. 14 Methodological approaches aside, due consideration of clinically significant thresholds for methylation is warranted. | Challenges identified in the first ILAE classification scheme Whereas FCDIa is hitherto confirmed in a series of 19 children with early seizure onset, subtle unilateral hemispheric hypoplasia, global developmental delay, and drug resistance from seizure onset, 41 a consistent clinico-pathological characterization of the patient cohort with FCDIb and FCDIc is still lacking and convincing examples are scarce in the current literature. [36][37][38][39] In addition, Figure 2C from the original ILAE publication in 2011 showed loss of layer 4 neurons in a young boy with focal epilepsy as an example of FCDIb with horizontal dyslamination 10 ; however, upon review, this should be classified as FCDIIId, since there is evidence that loss of layer 4 neurons results from early (perinatal) hypoxicischemic injury in the occipital lobe, predominantly in boys ( Figure 1H). 76 This kind of confusion raises the issue of whether cortical architectural abnormalities other than the bona fide dyslamination of FCDIa in patients with diffuse unilateral lesions mentioned above 41 truly represents "dysplastic" abnormalities or simply variable architectural changes. Furthermore, histopathology of FCDIc was never described before the ILAE classification in 2011, and it quickly developed into a "wastebasket" of cases clinically suspected as FCD with no or very subtle MRI findings. 11 It is important to note that FCDI subtypes also lack comprehensive publications beyond isolated reports in very small patient series that characterize their molecular genotype. 77 Although FCDIII and its four subtypes acknowledged the role of the abnormal architectural organization of the neocortex in the immediate vicinity of congenital epileptogenic lesions, such as developmental brain tumors, vascular malformations, or pre-and perinatal infarction, its significance in hippocampal sclerosis and postnatally acquired brain lesions was also addressed by comments in the 2018 ILAE survey. FCDIII patterns were classified initially as FCDI with architectural disorganization in patients with hippocampal sclerosis or developmental tumors following the Palmini classification scheme. Our current literature review did not detect increased scientific engagement into these FCDIII entities. In contrast, imaging features suspected as FCD in temporal lobe epilepsy, that is gray-white matter blurring and temporopolar atrophy, were shown to represent secondary alterations in white matter, without FCD. 74,78 The diffuse and infiltrative behavior of many epilepsy-associated glioneuronal tumors can mimic FCDIIIb. Systematic histopathological reviews using refined panels of immunohistochemical markers, that is, CD34, BRAFV600E, and microtubule associated protein 2 (MAP2), [79][80][81] did not support any specific FCDIIIb patterns. 
Less-conflicting results were published for FCDIIIc and FCDIIId phenotypes. 82,83 Sturge-Weber syndrome almost always shows histopathological signs of complex architectural dysplasia consisting of radial and vertical disorganization of the neocortex, that is, FCDIIIc. 82,84 This FCD subtype is less frequently detected with cavernomas and arteriovenous malformations. 84 However, hypertrophic neurons can often be encountered in affected cortices but should not be confused with dysmorphic neurons in FCDIIa. 82 Perinatal hypoxemia, bleeding, and inflammatory disorders are the most common principal lesions associated with FCDIIId. These data strongly suggest progressive alterations of postmigratory plasticity as the cause of associated FCD phenotypes. 85 Notwithstanding these considerations, the true dysplastic nature of all FCDIII subtypes needs to be further elucidated based on new scientific developments in the coming years. This issue will also benefit from careful correlational studies indicating whether resection of the abnormally laminated cortex associated with the "principal lesion" impacts on surgical outcome-or if the latter is related mostly to resection of the principal lesion, that is, hippocampal sclerosis, tumor, or vascular malformation. 46 | The 2018 online survey of the ILAE task force A total of 367 members of the international epilepsy community responded to the ILAE online survey. Details of the survey results can be found in the Appendix S1. Thirty-two percent of the respondents identified themselves as neuropathologists; 38% as neurologists; and 46% as epileptologists (with multiple assignments possible). Most of the responders (75.1%) stated that they were using the ILAE classification in their clinical practice or research. The newly suggested FCD type (FCDIII) in the 2011 classification was used by more than 82% of the respondents. The responses highlighted three main areas for potential improvement: genetics, mMCD, and FCDI. More than one third (35%) of respondents were using genetic testing from blood and brain tissue for the diagnosis of FCD. More than 60% of the respondents suggested an incorporation of genetics in the workup of patients with suspected FCD (60%). The survey found that the diagnosis for mMCD remains open to subjective interpretation and may vary from center to center due to the lack of universally adopted criteria, and more than half of the respondents suggested the addition of mMCD to a revised classification proposal. The survey respondents (48%) identified the need for a better histopathology definition of FCDI subtypes and their differentiation from normal human neocortical architecture. | Results of the histopathology and genetic agreement study 2018-2020 As reported in published literature, the agreement study showed that the histopathological identification of FCD subtypes could be improved using a selected immunohistochemistry protocol. 13 Consistent with previous ILAE recommendations, the proposed antibodies include neuronal nuclear antigen (NeuN), nonphosphorylated neurofilament, vimentin, Olig2, CD34, and MAP2 antibodies. 79 NeuN immunostaining was most helpful in studying homotypic or heterotypic patterns of the human neocortex compared to architectural dysplasia in FCDI. Antibodies directed against nonphosphorylated neurofilament (SMI32) are sensitive markers of dysmorphic neurons in all FCDII subtypes. Olig2 antibodies were helpful for recognizing the cases with MOGHE. 
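As a compact way to capture which marker supports which diagnostic question in the panel just described, the following sketch encodes the antibody-to-purpose mapping stated in this report as a simple lookup table. It is illustrative only, written in Python for convenience, and is not an official TF data format; the helper function is hypothetical.

```python
# Illustrative lookup of the recommended immunohistochemistry panel and the
# diagnostic question each marker addresses, as summarized in the agreement study.
IHC_PANEL = {
    "NeuN":     "cortical architecture; homotypic/heterotypic lamination vs FCDI dysplasia",
    "SMI32":    "non-phosphorylated neurofilament; dysmorphic neurons in FCDII subtypes",
    "Vimentin": "balloon cells in FCDIIb",
    "Olig2":    "oligodendroglial hyperplasia; recognition of MOGHE",
    "CD34":     "exclusion of epilepsy-associated glioneuronal tumors (FCDIIIb workup)",
    "MAP2":     "heterotopic neurons / neuropil in white matter (mMCD workup)",
}

def markers_for(question: str) -> list[str]:
    """Return antibodies whose stated purpose mentions the given keyword."""
    q = question.lower()
    return [ab for ab, purpose in IHC_PANEL.items() if q in purpose.lower()]

print(markers_for("balloon"))   # ['Vimentin']
print(markers_for("MOGHE"))     # ['Olig2']
```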
In addition, the interobserver agreement increased further to a kappa value of 0.65 (good) with the availability of all genetic testing results, that is, 7 of 22 cases revealed brain somatic mutations in MTOR, AKT3, or SLC35A2, or germline mutations in DEPDC5 and NPRL3. 15 Of interest, the agreement study highlighted cases where "no FCD" was concluded by most reviewers after all the immunostainings and negative gene testing results were made available. Acknowledging a "no definite FCD on histopathology" option in the FCD classification update may reduce, therefore, the tendency of neuropathologists to "overdiagnose" FCDI subtypes. 13 The "no definite FCD on histopathology" category should be used only in cortical epilepsy with a clinical suspicion of FCD, and when there is: (1) an abnormality of cortical organization that remains ambiguous and histopathological findings are not compatible with FCDI, FCDII, or FCDIII; or (2) there is incomplete surgical removal or sampling of the tissue. 13 On the other hand, the results confirmed the challenge in differentiating FCDI and FCDIII subtypes from normal variations in cortical architecture. The study further revealed that lentiform heterotopias in the white matter of the temporal lobe, that is, the superior temporal gyrus, which were classified as FCDIIIa in the 2011 classification scheme, represent a normal anatomic feature of the claustrum. 13 All of this new knowledge indicated that the unidimensional nature of the current ILAE classification scheme will not unequivocally allow for the integration of an ever-increasing and clinically relevant, multifaceted pool of information. The TF proposes an update for the FCD classification, therefore, that includes: (1) a panel of immunohistochemical staining 2 ; (2) two additional histological categories: white matter lesions and "no definite FCD on histopathology" (Table 1); and (3) a multi-layered classification scheme (Table 2) adding the level of genetic and neuroimaging findings to obtain a comprehensive, reliable, and integrative genotype-phenotype diagnosis. | Consensus proposal for a pathology update and the creation of a multilayered classification of FCD The proposed histopathology update to the ILAE classification of FCD (Table 1) and the multilayered classification scheme of FCD (Table 2) were achieved following multiple iterative discussions during the various meetings of the TF (as above) until unanimous agreement was reached on all items. | Update of the histopathology-based classification scheme of FCD FCDI remains a specific histopathological category characterized by architectural disorganization of the neocortex due to compromised developmental maturation, and without evidence of any additional principal epileptogenic lesion in the brain (as confirmed by MRI or histopathology). This definition will not deviate from the 2011 classification scheme. 10 FCDIa is histopathologically defined by an abundance of neuronal "microcolumns" that predominate in any low-power objective microscope magnification, to be confirmed by immunohistochemical staining with antibodies directed against NeuN ( Figures 1B and 3). Heterotopic neurons in the white matter also invading the area of U fibers are additional hallmarks of the disease. 86 A clinicopathological correlation has been established in a series of 19 children with severe drug-resistant posterior quadrant epilepsy. 
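The interobserver agreement reported above is expressed as a kappa statistic. As a hedged illustration of how such a multi-rater agreement coefficient can be computed, the sketch below implements Fleiss' kappa for a ratings matrix (cases x diagnostic categories, each cell counting the raters who chose that category). The small example matrix is invented and does not reproduce the TF study data (20 raters, 22 patients); the TF publication should be consulted for the exact statistic and software used.

```python
# Minimal sketch of Fleiss' kappa for multi-rater categorical agreement.
# Rows = cases, columns = diagnostic categories; each cell counts the raters
# who assigned that category. The example matrix is purely illustrative.

def fleiss_kappa(ratings: list[list[int]]) -> float:
    n_raters = sum(ratings[0])          # raters per case (assumed constant)
    n_cases = len(ratings)
    n_categories = len(ratings[0])
    n_total = n_cases * n_raters

    # Per-category proportion of all assignments.
    p_j = [sum(row[j] for row in ratings) / n_total for j in range(n_categories)]

    # Per-case observed agreement.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]

    p_bar = sum(p_i) / n_cases          # mean observed agreement
    p_e = sum(p * p for p in p_j)       # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Invented example: 5 cases rated by 4 raters into 3 categories
# (e.g., FCDII / MOGHE / no definite FCD).
example = [
    [4, 0, 0],
    [3, 1, 0],
    [0, 4, 0],
    [1, 1, 2],
    [0, 0, 4],
]
print(f"Fleiss' kappa = {fleiss_kappa(example):.2f}")
```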
41 This FCDIa presentation is, however, a lessfrequent disease condition representing only 4% of 500 operated children in this study and 14.6% of all FCD cases. Reported mutations in the SLC35A2 originally assigned to FCDIa 51,60,87,88 were reviewed and re-assigned to MOGHE in all cases 17 (see below). The classification of neuronal microcolumns as ILAE FCDIa follows the microscopic guidelines described in the 2011 classification scheme and should always be confirmed by immunohistochemical staining using NeuN ( Figure 1A). A second histopathology feature is the excess of heterotopic neurons in the white matter, as defined by Mühlebner and colleagues, 44 and should be confirmed using MAP2 immunohistochemistry. Dysmorphic neurons or balloon cells or other principal histopathology lesions will exclude this diagnosis. DNA methylation array analysis from routine formalinfixed paraffin-embedded (FFPE) tissue may support the diagnostic yield in the near future. 41,74 Accordingly, the coexistence of an excessive microcolumnar organization with heterotopic neurons in the white matter and a DNA methylation class distinct from other FCD subtypes was convincing enough for the TF to not abandon the FCDIa category. Similar patterns of microcolumnar organization of the neocortex were also described in children with genetic defects or inborn metabolic diseases, although with more widespread distribution. 89 The TF noted that such microcolumnar organization resembles neuronal radial migration streams during corticogenesis 90 The TF recommends applying immunohistochemical staining for the detection of architectural abnormalities and FCD subtypes, i.e., antibodies directed against neuronal nclear antigen (NeuN), neurofilaments, vimentin, microtubule associated protein 2 (MAP2), CD34, OLIG2, glial fibrillary acid protein (GFAP), or alpha B-crystallin. The diagnostic term of "not otherwise specified (NOS)" shall be used if the microscopic diagnosis is not based on appropriate immunohistochemical staining, e.g., FCD type I (NOS). b Mild malformations of cortical development (mMCD): not associated with any other principal lesion, such as hippocampal sclerosis, brain tumor, or vascular malformation. c Although mild malformations of cortical development with oligodendroglial hyperplasia (MOGHE) is primarily a white matter abnormality, abnormal cortical folding can be seen on MRI, and the combination of the two is often interpreted as FCD. d No definite FCD on histopathology: a descriptive report is recommended to highlight anatomic ambiguities in clinically suspected cases of FCD. from delayed or arrested maturation at mid-gestation ( Figure 3). FCDIb or FCDIc: Until now, there are no specific clinico-pathological correlations reported for patients with FCDIb or FCDIc. The TF recommends maintaining these subtypes in the classification update with the hope that future research would establish clinically meaningful phenotypes. Nonetheless, FCDIb shall microscopically reveal the disruption of the six-layered anatomical organization, that is, horizontal architectural dysplasia ( Figure 1C). The diversity of Brodmann areas in the human homotypic and heterotypic neocortex must be taken into consideration, however. Findings reminiscent of FCDIb shall therefore be confirmed by immunohistochemistry. In cases without these stainings being available, no further subtyping is recommended, and the diagnosis should read as FCDI (NOS -not otherwise specified). 
The same applies for FCDIc, which is characterized by a mixture of horizontal and vertical layer abnormalities. These patterns can more often be identified in FCDIIIc and FCDIIId (see below) and associated principal lesions must be excluded in the differential diagnostic workup, including MRI inspection of brain regions not included in the surgical resection sample. FCDII (Figure 4) are the most common MCD in epilepsy surgery case series representing ~9% of all cases, and 51% of histopathologically confirmed cases are localized to the frontal lobe. 4 This assessment is not different from the 2011 classification scheme. Seizure onset starts at a mean of 5 years of age. FCDII are characterized by the presence of dysmorphic, often cytomegalic neurons. 44 Their shortest diameter is above 25 μm and significantly larger than any typical pyramidal cell in age-and locationmatched postmortem controls. 44 Although glia are not part of the histopathological definition of FCDII, glial cells also are dysmorphic and often enlarged. FCDIIb is further distinguished from FCDIIa by the additional presence of balloon cells and a compromised oligodendroglial cell population. 44 Layer 1A: Histopathology diagnosis b Brief description of architectural and/or cytoarchitectural histopathology findings using H&E and appropriate immunostainings Layer 1B: ILAE histopathological subtype b Assign histopathology findings to the ILAE classification update (see Table 1) Layer 2: Genetic findings c Describe genetic findings, methodology used, and tissue source, i.e., fresh-frozen brain tissue and paired peripheral blood samples or formalin-fixed-paraffinembedded (FFPE) tissue only. If genetic testing is not available, please indicate it as "not available (NA)" This layer refers to the reporting of a neuropathologist experienced in the field of epilepsy surgery. c This layer refers to the reporting of a geneticist experienced in the field of epilepsy surgery. The integrated diagnosis should be assembled, e.g., during a postsurgical patient management conference led by the epileptologist in charge of the patient following a comprehensive multidisciplinary review of all available diagnostic reports. T A B L E 2 Integrated multi-layered FCD classification scheme a Balloon cells are of mixed lineage, expressing both neuronal and glial protein transcription products, and they often circumvent the area with accumulated dysmorphic neurons ( Figure 2). Dysmorphic neurons are the source of abnormal electrical activity, whereas balloon cells are not. 32,91 FCDII often presents with additional architectural dysplasia, that is, loss of homotypic six layers when admixed with normal pyramidal cells ( Figure 1D). The affected neocortex also has a reduced cell density, which is more significant in FCDIIb than in FCDIIa. MRI-negative FCDII lesions are likely to belong to the FCDIIa subtype, as abnormalities in cortical thickness, cell density, myelination, and oligodendroglial cell population are often subtle or remain intact. 44 Sixty percent of FCDII present with brain somatic mutations in the mTOR pathway, mostly in the MTOR gene in the FCDIIb subtype. 51 Loss-of-function germline mutations have been detected mostly in FCDIIa with a second hit mutation, that is, loss of heterozygosity, inactivating the second allele of DEPDC. 19,55,63 Of patients with FCDII, 67.4% are free from disabling seizures 5 years after surgery, and 39.4% of patients also have discontinued antiseizure medications. 
5 FCDIII represents abnormal architectural organization of the neocortex in the immediate vicinity of epileptogenic lesions, such as hippocampal sclerosis (FCDIIIa), developmental brain tumors (FCDIIIb), vascular malformations (FCDIIIc), or any other lesion acquired during early life (FCDIIId), that is, pre-or perinatal infarction, bleeding, and inflammation. This assessment has not been changed from the 2011 classification scheme. Architectural abnormalities are predominantly horizontal in FCDIIIa, defined by loss of layer 2 and 3 neurons in patients with long-term epilepsy and hippocampal sclerosis 92 ( Figure 1E). A mixed phenotype with horizontally and vertically compromised cortical layering is often encountered in FCDIIIc, that is, Sturge-Weber syndrome 83,84 (Figure 1F). FCDIIId with loss of layer 4 is observed predominantly in boys with perinatal hypoxemic brain injury of the occipital lobe 76 ( Figure 1H). Dysmorphic neurons are not a feature of FCDIII subtypes. Enlarged pyramidal neurons can be detected microscopically, however, in many cases. Their retained anatomic orientation qualifies them as hypertrophic rather than dysmorphic neurons. 76,82,83 The diagnosis of FCDIIIb is rare and requires the immunohistochemical assessment to exclude glioneuronal tumor tissue infiltrating the neocortex. 10,79,81,93 There is no known genetic cause for FCDIII. Postsurgical seizure outcome is similar to that for patients with the same principal lesions irrespective of the presence or absence of associated FCDIII. 46 Mild malformations of cortical development (or mMCD; Figure 5) is microscopically recognized by an excess of heterotopic neurons in the white matter-above 30 neurons per mm 2 ( Figure 4B)-and not being associated with any other principal lesion. Densities of <30/ mm 2 were shown to be unlikely to be mMCD in a study using automated quantitation of normal white matter NeuNpositive neurons in 142 epilepsy resections compared to controls that confirmed densities. 94 mMCD was first defined in the Palmini classification, 8 also included in the 2011 ILAE scheme, and its definition will not be changed or modified herein, due to lack of consensus on their diagnostic features and on their potential epileptogenicity. mMCD can be detected in about 3% of (mainly adult) patients according to a large surgical case series. 4,5 MAP2 immunohistochemistry identifies increased neuropil of the white matter above 10% ( Figure 4B), which likely represents synaptic plexi. 44,86 Persisting neurons in cortical layer 1, that is, mMCD type I of the Palmini classification scheme, have not been confirmed in surgical case series and will not be included herein. mMCDs are reported mainly as MRI negative 95 but this is not the case in all reports. 96,97 Reported postsurgical outcomes for mMCDs are highly variable, ranging from 0 to 75% seizure freedom. 51,60,95,97,98 However, a large European-wide epilepsy cohort of 9147 cases reported 45% of patients achieving seizure freedom at 2 years postresection of mMCD. 5 DNA methylation array analysis from routine FFPE tissue may increase the diagnostic yield in the near future. 41 Building on the most recent scientific advances, the TF proposes to include lesions compromising the white matter as new F I G U R E 3 Histopathological hallmarks of FCDIa. An 18-year-old female patient. Cognitive decline with onset of daily and medically intractable seizures at age 10 years. 
Arrows: note the multiple regions with abundant microcolumnar organization of the neocortex, which is partially also thinned (<2.5 mm). Asterisks: Abundant heterotopic neurons in the white matter of the same gyri. Neuronal nuclear antigen immunohistochemistry of a 4-μm thin FFPE section diagnostic categories, that is, mMCD and MOGHE, as specified below. Mild malformations of cortical development with oligodendroglial hyperplasia in epilepsy (or MOGHE) is defined by an increase in heterotopic neurons in the white matter and oligodendroglial cell densities above 2200 Olig2-immunoreactive cells per mm 2 20,22-24,99-101 ( Figure 5D). Reported cases involve young children with frontal lobe epilepsy, or temporal plus epilepsy, with a median seizure onset at age 2 years (range 0.3-13 years). 20 In a retrospective clinical study of 20 patients with MOGHE, postoperative seizure outcome depended largely on the extent of the resection, with a good Engel class I outcome reported for all patients with large resections. 24,25 MOGHE represents a distinct mMCD subtype, with 45%-100% of studied patients harboring SLC35A2 somatic variants. 13,17 One study also showed that SLC35A2-mutated brain tissue had an aberrant pattern of glycosylation. 88 Most pathogenic SCL35A2 variants are nonsense or frameshift variants leading to loss-of-function of the protein in the mutated cells, that is, oligodendroglia and heterotopic neurons in the white matter. These findings demonstrated that somatic brain-only variants in the UDP-galactose transporter gene SCL35A2 are a major etiological factor and may be linked to the pathogenesis of MOGHE. No definite FCD on histopathology The TF suggests adding "no definite FCD on histopathology" as a new category to the updated histopathologybased classification scheme when the anatomic orientation and organization of the surgical specimen remains ambiguous, and an abnormality cannot be evidenced by strict histopathology measures, for F I G U R E 4 Histopathology findings in ILAE FCDIIa and IIb. A, A 42-year-old female patient with frontal lobe epilepsy since age 5 years and histopathologically confirmed FCDIIa. The arrow points to the sharp border between the cortical FCD and the normal-appearing white matter (WM). Normal six-layer neocortex (NCx). Neurofilament-immunohistochemistry, scale bar = 2,5 mm (applies also to B). B, A 19-year-old female patient with frontal lobe epilepsy since age 9 years, and histopathologically confirmed FCDIIb at a bottom-ofsulcus (BOS). The boundary toward the white matter is less well pronounced (arrow). C, Hematoxylin and eosin (H&E) staining at higher magnification of FCDIIb with opalesque balloon cells (BCs), enlarged dysmorphic neurons (DNs), and normal appearing pyramidal cells (PZs). D, Nueonal nuclear antigen immunohistochemistry demonstrating clusters of anatomically abnormally positioned dysmorphic neurons next to pyramidal cells (on the left) in FCDII. E, Balloon cells frequently stain with antibodies directed against vimentin, but also pS6 or alpha B-crystallin (not shown). Scale bar = 100 μm, applies also to C and E (A) (B) example, the resemblance with homotypic or heterotypic Brodmann areas, an oblique plane of sectioning, implantation of intracerebral depth electrodes, or perioperatively introduced tissue artifacts. Notably, the use of IHC staining is mandatory to confirm the absence of any FCD, that is, NeuN and MAP2. The TF further recommends describing any anatomic ambiguities in the pathology report. 
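The quantitative criteria given above (heterotopic neurons above 30 per mm2 of white matter for mMCD, and Olig2-immunoreactive cell densities above 2200 per mm2 for MOGHE) lend themselves to a simple screening rule. The following sketch applies those published thresholds to automated cell counts; the block names and count values are invented, and the output is only a flag for histopathological review, not a diagnosis.

```python
# Minimal sketch: flag white-matter findings against the density criteria cited
# in the text (>30 heterotopic neurons/mm^2 for mMCD; >2200 Olig2+ cells/mm^2
# for MOGHE). Counts below are invented; real values come from automated
# quantitation of immunostained sections (NeuN/MAP2, Olig2).

MMCD_NEURON_DENSITY = 30      # heterotopic neurons per mm^2 of white matter
MOGHE_OLIG2_DENSITY = 2200    # Olig2-immunoreactive cells per mm^2

def screen_white_matter(neurons_per_mm2: float, olig2_per_mm2: float) -> str:
    """Return a screening flag; final classification always requires histopathology."""
    if olig2_per_mm2 > MOGHE_OLIG2_DENSITY and neurons_per_mm2 > MMCD_NEURON_DENSITY:
        return "consistent with MOGHE criteria"
    if neurons_per_mm2 > MMCD_NEURON_DENSITY:
        return "consistent with mMCD criteria"
    return "below mMCD/MOGHE density thresholds"

# Hypothetical measurements from three resection blocks.
for block, (neu, olig) in {"A1": (52, 2450), "A2": (41, 900), "A3": (18, 750)}.items():
    print(block, "->", screen_white_matter(neu, olig))
```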
| An integrated, multi-layered, genotype-phenotype approach to diagnose FCD The TF proposes a multi-layered integration of histopathology with the level of genetic and neuroimaging findings to obtain a comprehensive and reliable genotypephenotype diagnosis ( Table 2). The various layers of the classification cover current knowledge and, at the same time, enable the seamless inclusion of future knowledge. This explicit, multi-layered integration enhances clarity and can facilitate broader international communication and collaboration in this field. Layer 1: The histopathological assessment The neuropathological workup of cortical tissue obtained from epilepsy surgery remains the gold standard in diagnosing any focal epileptic disorder. 80 It is recommended to apply the updated ILAE classification scheme presented in Table 1. The neuroanatomical punctum maximum of the lesion can be added to the report if the neurosurgeon provided anatomic labels or tissue landmarks can be microscopically identified. 80 The benefit of immunohistochemistry in supporting hematoxylin and eosin (H&E) staining for a reliable diagnostic workup has been confirmed in many histopathology agreement trials and the recent iterative ILAE TF study. 13,102,103 Therefore, the TF recommends the use of a standardized panel of IHC markers to confirm abnormal histopathology patterns that should, in turn, be specified in the report (see supplemental case series). Finally, the written histopathology report should be concise to allow unequivocal integration with all other layers of the FCD classification scheme ( Table 2). Layer 2: Integration of molecular-genetic results The second layer integrates genetic findings as an objective measure for the diagnosis of FCD, thereby specifying the patient's FCD diagnosis. Although genetic testing of somatic and germline mutations for FCD is not yet available in most epilepsy centers, it is a piece of important information for the genetic consultation whether FCD patients carry pathogenic somatic (not inherited, not transmissible) or germline (possibly inherited and transmissible) variants. Although genetic testing from surgical human brain tissue can be performed either be a neuropathologist experienced in molecular pathology and/or a geneticist, the TF recommends the following laboratory protocols for a reliable detection of low-level brain mosaicism in FCD: (1) extract DNA from lesional brain tissue microscopically confirmed by an experienced neuropathologist to enhance the diagnostic yield, that is, from fresh frozen or FFPE tissues; (2) use hybridization capture and high-depth next generation sequencing of >1000x reading depth of FCD relevant genes 60 ; (3) use somatic mutation callers, for example, MuTect2, Replow, Strelka2 13 ; and (4) validate candidate variants using orthogonal technology, for example, droplet digital polymerase chain reaction or target-site specific amplicon sequencing (for more information see supplemental material). Nine genes have been reported to cause canonical FCDII: AKT3, DEPDC5, MTOR, NPRL2, NPRL3, PIK3CA, RHEB, TSC1, and TSC2. SLC35A2 should be included in the panel in order to differentiate MOGHE 13,17,61 from its most common differential diagnosis: FCDIa. 41 The diagnostic yield using such gene panel sequencing from routine FFPE or frozen tissue ranges from 32% when assessing various epilepsy-related lesions 13,60 to 45% in patients selected for MOGHE, 19 and 63% in patients with hemimegalencephaly or FCDII. 
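Building on the laboratory recommendations above (gene-panel hybridization capture, >1000x reading depth, somatic callers such as MuTect2 or Strelka2, orthogonal validation), the sketch below shows one way a candidate call list might be narrowed to the FCD-relevant genes named in the text before validation. The variant allele fraction cut-off and the example variant records are illustrative assumptions, not TF recommendations; only the gene panel and the >1000x depth figure come from the text.

```python
# Minimal sketch: pre-filter somatic variant calls against the FCD-relevant gene
# panel named in the text, prior to orthogonal validation (e.g., droplet digital PCR).
# The VAF cut-off below is an illustrative assumption only.

FCD_PANEL = {
    "AKT3", "DEPDC5", "MTOR", "NPRL2", "NPRL3",
    "PIK3CA", "RHEB", "TSC1", "TSC2",   # mTOR-pathway genes (FCDII/HME)
    "SLC35A2",                          # MOGHE
}

MIN_DEPTH = 1000   # reading depth recommended in the text
MIN_VAF = 0.01     # assumed lower bound for low-level brain mosaicism

def candidates(calls):
    """Keep calls in panel genes with sufficient depth and allele fraction."""
    for c in calls:
        if c["gene"] in FCD_PANEL and c["depth"] >= MIN_DEPTH and c["vaf"] >= MIN_VAF:
            yield c

# Invented example calls (as might be emitted by a somatic caller).
calls = [
    {"gene": "MTOR",    "variant": "p.Ser2215Phe", "depth": 1820, "vaf": 0.046},
    {"gene": "SLC35A2", "variant": "p.Gln69*",     "depth": 1540, "vaf": 0.110},
    {"gene": "BRAF",    "variant": "p.Val600Glu",  "depth": 1210, "vaf": 0.230},  # not in panel
    {"gene": "DEPDC5",  "variant": "c.1264C>T",    "depth": 640,  "vaf": 0.480},  # below depth cut-off
]
for c in candidates(calls):
    print(c["gene"], c["variant"], f"VAF={c['vaf']:.3f}")
```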
60 The second diagnostic layer of genetic analysis should conclude with a statement about: (1) the type of findings, for example, gain or loss of function mutation of a particular gene; (2) the location of the mutation; (3) the sample used, that is, blood, tissue FFPE vs fresh frozen; and (4) the methodology used. In addition, DNA methylation array analyses from routine FFPE tissue should be added if done as it may support the diagnostic yield. 14,41,72,74,75 If genetic testing is not available, the recommendation of the TF is to indicate it as "not available (NA)" in the final report. Layer 3: Integration of neuroimaging findings MRI is an essential cornerstone in the workup of patients with focal epilepsy. 6,57,104 The recommendations for the use of structural MRI and the need for optimized data acquisition and quantitative analysis protocols early in the treatment of epilepsy were recently highlighted by the ILAE Neuroimaging Task Force 6,104 and reporting should be performed by a neuroradiologist and/or a neurologist/ epileptologist experienced in the presurgical evaluation. The information obtained from visual analysis of signal characteristics in any suspicious lesion, with or without postprocessing, its location, and its extent are fundamental to the surgical approach in these patients. In addition, certain MRI findings could be predictive of the FCD type and sub-type, for example, the presence of a "transmantle sign" in FCDIIb. 43 Bottom of sulcus (BOS) FCD is often recognized through high-resolution imaging but not necessarily by the examination of histopathology samples (e.g., when anatomic landmarks are not available). BOS is an imaging entity, such as "transmantle FCD," that has the crucial value of anticipating (1) a histopathological subtype (FCDII, usually FCDIIb), (2) the possibility of a low-cost, straightforward noninvasive presurgical evaluation, and (3) a surgical strategy (gyral resection extending to the BOS under intraoperative electrocorticographic recordings with depth electrodes). Its inclusion in the multilayered classification scheme is rather an example of the utility of this system as a predictor of the histological type. We have exemplified the BOS case further in the manuscript to appropriately address the referee's concern. MRI could be negative in some histopathologically confirmed FCDIIa or in cases with FCDI, mMCD, or MOGHE. However, it is important to note that a good proportion of negative MRI is due to substandard acquisitions coupled with interpretation of images without considering all available seizure semiology and EEG data. 6,34,44 In addition, ultra-high-field MRI could further advance the diagnostic yield in FCDI and FCDII and should be used in "MRI-negative" cases whenever possible. 34,105 For these reasons, the TF recommends the inclusion of the following MRI details as the third layer in the revised FCD classification scheme: (1) a description of the MRI abnormality (signal and morphological details, if applicable), its anatomic location, that is, side, lobe, gyrus, and topographical location, for example, the crown of a gyrus vs bottom of the sulcus; (2) the field strength of the magnet and the imaging protocol used 6 ; and (3) the analysis method, for example, visual, postprocessing, or supported by machine learning. This information is typically provided by a neuroimaging specialist and discussed by the epilepsy team during a presurgical patient management conference. For more information see supplemental material. 
Layer 4: Integrated diagnosis As stated in Table 2 and illustrated in the Appendix S1, this layer is the summary of all the available pertinent features described in the first three layers of the proposed FCD classification. The TF recommends that the integrated diagnosis should state the following: (1) Whether the MRI is positive or negative, (2) the histopathological type/subtype of the lesion and its anatomic location, and (3) the genetic finding (negative or positive, and type of mutation). It is the hope of the TF that the integrated diagnosis will be used as a tool for clinical management and outcome prediction. The compilation of the various layers of information for the proposed classification scheme is the job of the treating physician (e.g., neurologist, epileptologist, neurosurgeon). This may be the product of another postoperative multi-disciplinary team conference, much like the preoperative assessment of patients with FCD. The treating physician is the final arbiter in summarizing the results of the surgical evaluation, the multidisciplinary patient management conference (PMC), and its recommendations. Although a postsurgical PMC is desirable for the purpose of applying the multi-layered classification, the TF recognizes that this may not be practical in many clinical settings. Therefore, a key aspect in applying the multi-layered classification is the systematic accrual and documentation of the necessary data pertaining to each of the four layers in each patient. The treating clinician will then be able to assemble the elements into an Integrated Diagnosis. An evaluation of the significance of each layer in the context of the integrated system should move the field closer to the practice of precision medicine in the management of patients with epilepsy and FCD, and which will be further studied by an ILAE task force during the term 2021-2025. | DISCUSSION The TF concludes its work on updating the international consensus ILAE classification scheme of FCD with the proposal of an integrated, multi-layered, genotypephenotype approach to diagnose FCD. FCD diagnosis should be concise and integrate the most relevant findings obtained from the neuropathological tissue workup, histopathology assessment (Level 1), genetic analysis of resected tissue (Level 2), and the presurgical MRI findings (Level 3). The TF acknowledges that not every center will have access to advanced neuropathological, neuroimaging, or genetic analyses techniques. However, information on each of the three layers should be incorporated as it is available in different settings. This recommendation constitutes a target goal to achieve adequate proficiency in epileptology. It is hoped that it will also support the allocation of sufficient resources to diagnose and appropriately manage patients with difficult-to-diagnose and difficultto-treat focal epilepsy. The proposed update to the histopathological classification considers the new knowledge, for example, MOGHE, SLC35A2 altered, and recognizes the category of "no definite FCD on histopathology." This diagnosis should be used only when there was a clinical suspicion during the presurgical evaluation of the patient, and the microscopic tissue assessment cannot conclusively confirm the diagnosis of any FCD subtype as defined in the current classification scheme. 
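To make the reporting elements of the four layers described above concrete, the sketch below models one possible record holding the histopathology, genetic, and imaging layers, together with a simple assembly of the integrated summary. The field names and the example patient are hypothetical; Table 2 of this report remains the authoritative template.

```python
# Minimal sketch of a record for the multi-layered FCD diagnosis described in
# the text (Table 2). Field names and example content are hypothetical.
from dataclasses import dataclass

@dataclass
class FcdDiagnosis:
    histopathology: str       # Layer 1A/1B: findings and ILAE subtype
    genetics: str             # Layer 2: findings, method, tissue source, or "not available (NA)"
    imaging: str              # Layer 3: MRI description, field strength, analysis method
    anatomic_location: str

    def integrated(self) -> str:
        """Layer 4: assemble the integrated diagnosis from the other layers."""
        mri_status = "MRI-negative" if "negative" in self.imaging.lower() else "MRI-positive"
        return (f"{mri_status} {self.histopathology}, {self.anatomic_location}; "
                f"genetics: {self.genetics}")

# Hypothetical example patient.
case = FcdDiagnosis(
    histopathology="FCDIIb (dysmorphic neurons and balloon cells)",
    genetics="somatic MTOR variant, VAF 4% (fresh-frozen tissue, panel sequencing)",
    imaging="3 T MRI positive: transmantle sign at the bottom of a frontal sulcus",
    anatomic_location="right superior frontal sulcus",
)
print(case.integrated())
```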
It is the hope of the TF that the inclusion of this category will help to decrease the number of samples that may be inappropriately classified as FCDI, and eventually help to better characterize the clinical, imaging, and electroencephalographic features as well as the postsurgical outcome of FCD, and which remained a major challenge since the Palmini classification of 2004. 8 Neurosurgical sampling errors should also be taken into consideration, for example, incomplete surgical resection, laser ablation, thermocoagulation, and cavitron ultrasonic surgical aspirator tissue homogenization, when the histopathology report cannot confirm a clinically suspected lesion. The latter may result from the detection of neuroimaging abnormalities interpreted clinically as FCD, for example, hyperintense signaling in FLAIR sequences. A typical example is that of temporopolar atrophy with signal hyperintensity and gray-white matter blurring in a patient with hippocampal sclerosis. These cases have been systematically studied by high-power MRI and electron microscopy and demonstrated white matter lesions secondary to reduction in axonal density. 78,106 Indeed, 67.7% of surgical specimens with no histopathologically detectable lesion were obtained from the temporal lobe. 4 Despite the lack of any histopathological findings, 51.6% of patients remain free from disabling seizures 5 years after surgery. 5 This unprecedented percentage of seizure-free patients with no FCD warrants further research to identify possible new disease entities, for example, MOGHE, 20 or seizure-susceptible brain somatic mosaicism amenable to surgical treatment. Adding the level of genetic information to the diagnosis will have substantial impact on standardizing the diagnosis of FCD subtypes. It directly addresses the underlying pathomechanism and opens new avenues for personalized medicine. Further research and clinical trials are mandatory to achieve this goal, which has been often compromised by insufficiently characterized or classified patient and tissue cohorts. Genetic testing should increasingly become a standard element in all scientific publications addressing this matter. However, if genetic testing was not performed at the final step of integrating the FCD diagnosis, it should be noted that it was not available (NA). The importance of neuroimaging in the clinical workup, surgery planning, and clinical management of patients with focal epilepsies due to FCD has been clearly recognized in this report. It is the strong recommendation of the TF to integrate this layer of information into the integrated, multi-layered, genotype-phenotype diagnosis. Imaging (MRI) is the first noninvasive window to the identification of focal FCD lesions and, in some instances, point to their neuropathology (e.g., FCDIIb or MOGHE), inform surgical planning/type of intervention (e.g., extraoperative invasive EEG in FCDI vs intraoperative mapping in bottom of sulcus dysplasia and some FCDII), and outcome (e.g., excellent outcomes in bottom of sulcus dysplasia). | CONCLUSION This multi-layered approach resembled the currently proposed World Health Organization (WHO) classification scheme of tumors of the nervous system, which also integrates the histopathology diagnosis with genetic and/or DNA methylation markers to achieve a reliable, clinically relevant, and therapeutically targetable tissue diagnosis. Of note, the layer of MRI diagnosis as part of the multilayered approach for tumor classification was not recognized by the WHO expert panel. 
The compilation of the various layers of diagnostic findings into a multi-layered, genotype-phenotype classification scheme of FCD should be addressed, however, by the treating physician (e.g., neurologist, epileptologist, neurosurgeon) and preferably with an interdisciplinary effort at a postsurgical patient management conference. The ILAE Task Force expects that the currently proposed integration will foster interdisciplinary cooperation among the many professional disciplines engaged in the clinical and therapeutic management of patients with FCD. ACKNOWLEDGMENTS DL was supported by 1R01NS117544-01 from the National Institute of Neurological Disorders and Stroke (NINDS). AUTHORS' CONTRIBUTIONS All authors participated in the discussions and unanimously agreed with the recommendations of the International League Against Epilepsy (ILAE) Task Force on FCD. The report was written by experts selected by the ILAE and was approved for publication by the ILAE. The opinions expressed by the authors, however, do not necessarily represent the policy or position of the ILAE. The special report was written by Imad Najm and Ingmar Blumcke. Fernando Cendes reviewed the first draft. The other co-authors contributed to the edits of various versions of the manuscript. CONFLICT OF INTERESTS Author JHL is a cofounder and chief technology officer (CTO) of SoVarGen, Inc., which seeks to develop new diagnostics and therapeutics for brain disorders. Author IN serves on an Advisory Board and Speakers Bureau of Eisai, Inc. The remaining authors have no conflicts of interest. We confirm that we have read the Journal's position on issues involved in ethical publication and affirm that this report is consistent with those guidelines.
2022-06-17T06:17:02.071Z
2022-06-15T00:00:00.000
{ "year": 2022, "sha1": "b0b82c2515195a2796c6fe5c1d574ad75a3a2560", "oa_license": "CCBYNC", "oa_url": "https://discovery.ucl.ac.uk/id/eprint/10150581/1/Epilepsia%20-%202022%20-%20Najm%20-%20The%20ILAE%20consensus%20classification%20of%20focal%20cortical%20dysplasia%20%20An%20update%20proposed%20by%20an%20ad%20hoc.pdf", "oa_status": "GREEN", "pdf_src": "Wiley", "pdf_hash": "61b189ea02589babeb504ea34cdabf0cbb3b7336", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
90523121
pes2o/s2orc
v3-fos-license
The Effect of Bioaugmentation with Archaea on the Oxygen Uptake Rate in a Sequencing Batch Reactor The aim of this study was to evaluate the effect of bioaugmentation with Archaea domain organisms on the activated sludge (AS), expressed by the oxygen uptake rate (OUR), in a laboratory sequencing batch reactor (SBR). The influence of depletion of the external substrate in bioaugmented (SBR-A) and non-bioaugmented (SBR-B) activated sludge during aerobic stabilization was investigated. The experiment was divided into two steps. First, the OUR was measured in the standard conditions of biological treatment. Second, AS was only aerated in the absence of the substrate. It was observed that bioaugmentation with Archaea had an increasing effect on the endogenous and exogenous OUR of the sludge in both phases. In the first phase, the average endogenous OUR was 28.70 ± 2.75 and 21.63 ± 0.9 mgO2·dm−3·h−1 in SBR-A and SBR-B, respectively. Regarding the exogenous OUR, the average values were 95.55 ± 11.33 and 57.15 ± 24.56 mgO2·dm−3·h−1 for SBR-A and SBR-B, respectively. Archaea, by enhancing the biological activity of AS expressed as the OUR, exert a stabilizing effect on this parameter and ensure lower sensitivity to changes in the process conditions, disruption of substrate supply, and prolonged aeration.

Introduction
Currently, the increasing demand for control and process optimization in wastewater treatment plants (WWTPs) requires an advanced approach to the improvement of the system, e.g., by bioaugmentation. This method is defined as the introduction of a specific strain or a consortium of organisms to enhance the biological activity of a process factor [1,2]. It has effectively been used in several environmental applications such as bioremediation of contaminated soils and groundwater treatment [3][4][5][6]. In WWTPs, this method has been applied in both aerobic and anaerobic systems [7][8][9]. Bioaugmentation has been used to increase the population of nitrifiers and to increase the tolerance of microorganisms against various negative factors such as pH fluctuations, toxic agents, temperature changes, and shock loading [10][11][12][13][14][15]. Moreover, Van Limbergen et al. [16] indicated that bioaugmentation could improve the degradation of refractory compounds as well as flocculation, which in turn affects the parameters of activated sludge (AS) flocs and their composition [17,18]. It was also found that this method could support the start-up of new reactors [19,20]. In anaerobic systems, such a technique is involved in improvement of the process stability and biogas yields [21] as well as odor reduction [22,23].
Various organisms can be used in the bioaugmentation process, e.g., autotrophs, heterotrophs, facultative anaerobes and aerobes [2]. Archaea can also be applied for this purpose [24][25][26][27][28]. These microorganisms are frequent constituents of AS [29][30][31]. However, their contribution to the total biomass is usually inconsiderable and most frequently does not exceed 8% of the total number of bacterial cells [32,33]. In classical bioreactors for removal of C, N, and P with AS, Archaea occur mainly due to the supply of supernatants formed during the sludge treatment process in fermentation chambers. Previous studies [25][26][27][34][35][36] have shown that Archaea are involved in many biochemical processes and, therefore, they can be used for removal of nutrients from various types of industrial and municipal wastewater. An important fact in this context seems to be that Archaea domain microorganisms have been reported to play an important role in ammonia removal from wastewater [29,33]. They are useful in the removal of nitrogen compounds from wastewater both in the intermittent aeration system [24] and in the system with alternating anaerobic, anoxic, and aerobic conditions [25][26][27]. They also exert a beneficial effect on the stability of the process, ensuring its lower sensitivity to shock pollutant loading, and help reduce the organic carbon demand during the biological processes of removal of nitrogen compounds [28].

One of the parameters that can be used for characterization of AS is the oxygen uptake rate (OUR). It describes the respiration rate, i.e., the amount of oxygen per unit volume utilized per time unit by the available microorganisms [37]. OUR can be applied to control and optimize process conditions as well as to identify potential instabilities of AS systems [38][39][40][41]. Furthermore, OUR has also been used to determine microbial activity and viability [42][43][44]. In AS stabilization, this measurement indicates the degree of sludge stability. This parameter is related to the main biochemical processes of biomass growth/decay and substrate removal [45]. The exogenous oxygen uptake rate characterizes the activity of heterotrophs and allows assessment of the easily biodegradable substrate in wastewater [46]. In turn, the endogenous OUR measurement (absence of substrate in wastewater) reflects the oxygen consumption of the bacterial growth-decay cycle, maintenance energy production and protozoa respiration [47].

The aim of this study was to determine the effect of bioaugmentation with Archaea on AS, expressed by the OUR parameter, during the wastewater treatment process in a sequencing batch reactor (SBR). Moreover, the influence of external substrate depletion and aerobic stabilization in bioaugmented and non-bioaugmented AS systems was investigated.

Materials and Methods
The study was divided into two steps. In the first, labeled "feed on" (Step I), the oxygen uptake rate (OUR) (both endogenous and exogenous) was measured under standard, stable operational conditions of biological treatment for 30 days in SBRs bioaugmented and not bioaugmented with Archaea. In the second step, called "feed off" (Step II), AS was only aerated in the absence of the substrate (under conditions characterizing aerobic stabilization of AS). At this point, the supply of Archaea liquor into the SBR was stopped. The second step also lasted 30 days, counting from the end of the first, feed-on step. Both the feed-on and feed-off steps were conducted at a temperature of 20 ± 0.1 °C.
Experiment in the Sequencing Batch Reactor
The experiment was carried out using two identical sequencing batch reactors (A, bioaugmented; and B, non-bioaugmented AS), each with an active volume of 8 dm3 (Figure 1). At the beginning, the SBRs were inoculated with AS from a municipal wastewater treatment plant (WWTP) in Lublin (southeastern Poland), a mechanical-biological plant, which employs a modified Bardenpho method, with a daily wastewater volume amounting to approximately 65,000 m3. The temperature in the SBR was maintained at 20 ± 0.1 °C by using a water bath with controlled and regulated temperature, because during collection of AS for laboratory SBR inoculation, the temperature within the WWTP bioreactor was ca. 20 °C. During the experiment, pH (in AS) was also monitored; the mean value of this parameter was 8.1 ± 0.1. The seeded SBR-A and SBR-B were characterized by mixed liquor suspended solids (MLSS) of 3.19 g·dm−3 and mixed liquor volatile suspended solids

Figure 1. Scheme of laboratory sequencing batch reactors: 1, electric motor driving the mixing system; 2, distribution pipes for pressured air; 3, SBR-type bioreactor; 4, water bath with stabilized temperature; 5, low-speed blade stirrer; 6, membrane diffuser; 7, membrane supercharger supplying the aeration system with pressured air.

In the feed-on step, the SBRs were operated using a 12-h cycle. Each cycle included the following distinct phases: filling (30 min), reacting (stirring 120 min and aeration 420 min), sedimentation (90 min) and decantation (30 min), as well as an idle phase for removal of excess AS and sampling of probes for analysis (30 min). During the aeration step, the oxygen concentration was sustained at a level of approximately 2 mgO2·dm−3.

The two SBR-type bioreactors used in the experiment were operated under the same conditions; however, SBR-A was bioaugmented. It was fed with 2.5 dm3 of pre-settled wastewater and 0.25 dm3 of an Archaea-containing suspension used for bioaugmentation. The second reactor (SBR-B) was fed with the same wastewater volume, but the bioaugmentation liquor was replaced with an equal amount of distilled water.
The wastewater feed in the filling step was obtained at the WWTP in Lublin. The pre-settled wastewater was taken (twice a week) from a primary sedimentation tank, then portioned into containers used for one feeding (2.5 dm3 for each bioreactor), and kept in a refrigerator at 4 °C. About 1 h before the filling procedure, raw wastewater was transferred from the refrigerator to a temperature of 20 °C to avoid temperature fluctuation inside the SBRs. The main characteristics of the pre-settled wastewater are presented in Table 1.

The Archaea microorganisms used for bioaugmentation were prepared as a liquor in a continuous mode throughout the experiment. The liquor preparation device operated according to the principle specified below. A nylon pouch filled with a solid (powdery) substrate inducing incubation of Archaea, provided by ArchaeaSolutions Inc (Evansville, IN, USA), was mounted inside the generator. The substrate was packed in a vinyl alcohol coating, which dissolved upon contact with dechlorinated tap water flowing through the generator. The release of an appropriate Archaea microbial load required a continuous flow of water through the generator at a flow rate established at a level of 0.5 dm3·min−1. After 30 days, the Archaea-containing pouch in the generator was replaced by a new one. The generator was linked to two serially-connected storage tanks. At the highest point of the second tank, an emergency spillway was mounted. The total volume of the storage tanks was 320 dm3. The suspension used during the experiment was sampled every 12 h, immediately before supplying it into the bioreactor during the filling phase. Analysis of a substrate identical to that used in this study and obtained in a similar way [25][26][27], carried out with the PCR technique using a GeneMatrix Soil DNA Purification Kit (EURx, Gdańsk, Poland), showed that the prepared liquor contained Archaea microorganisms having the 16S rRNA gene and archaeal ammonia monooxygenase subunit A genes. The physical and chemical characteristics of the Archaea liquor used for bioaugmentation are presented in Table 2.

Most experimental analyses were performed with a Hach Lange UV-VIS DR 5000 spectrophotometer (Hach, Loveland, CO, USA) using Hach analytical methods. The pH values were monitored by a multimeter HQ 40D Hach-Lange (Hach, Loveland, CO, USA). Total solids (TS), volatile solids (VS) and total suspended solids (TSS) were determined according to Polish standard methods.

Measurement of the Oxygen Uptake Rate (OUR)
To measure OUR, an AS volume of 0.9 dm3 was sampled from both SBR-A and SBR-B and then added to a respirometer with a stirring mechanism and a dissolved oxygen probe (HQ 40D by Hach-Lange). The respirometer was placed in a thermostatic bath (20 ± 0.1 °C). Before the measurements, the AS was continuously aerated to obtain an initial dissolved oxygen concentration (DO1) of 7-8 mgO2·dm−3, and then the aeration was stopped.

The dissolved oxygen concentrations (DO) were measured at 30-s intervals until they reached a value close to full depletion. The respiration rate was calculated from the slope, according to the following equation:

OUR = (DO1 − DO2)/(t2 − t1)

where DO1 and DO2 are the initial and final dissolved oxygen concentrations, respectively, and t2 − t1 is the time interval between the first and last DO measurement.
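To make the calculation concrete, the sketch below estimates OUR from a series of dissolved-oxygen readings taken at 30-s intervals, either from the two endpoint concentrations as in the equation above or from a least-squares slope fitted over the whole decline. The function names and the synthetic readings are illustrative assumptions, not the authors' code.

```python
import numpy as np

def our_endpoints(do_first, do_last, t_first, t_last):
    """OUR from the two endpoint DO readings: (DO1 - DO2) / (t2 - t1)."""
    return (do_first - do_last) / (t_last - t_first)

def our_slope(times_h, do_mg_l):
    """OUR as the negative least-squares slope of DO vs. time (mgO2*dm-3*h-1)."""
    slope, _intercept = np.polyfit(times_h, do_mg_l, 1)
    return -slope

# Illustrative readings every 30 s during the non-aerated phase (10 readings, 4.5 min).
times_h = np.arange(0, 10 * 30, 30) / 3600.0
do_mg_l = 7.5 - 25.0 * times_h  # synthetic, roughly linear DO decline

print(our_endpoints(do_mg_l[0], do_mg_l[-1], times_h[0], times_h[-1]))  # ~25 mgO2*dm-3*h-1
print(our_slope(times_h, do_mg_l))                                      # ~25 mgO2*dm-3*h-1
```

In practice the slope-based estimate is less sensitive to noise in any single reading than the two-endpoint formula, which is why a regression over the whole decline is often preferred.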
The OUR measurement was determined in two replications for two variants, exogenous (OURexo) and endogenous (OURendo). Wastewater from the primary sedimentation tank effluent was used as an external substrate during the OURexo measurements. Doses of 0.1 dm3 were added at each measurement; the dose value was determined experimentally, as a volume percentage of added wastewater. No supplementation of the substrate was applied in the OURendo investigations. However, to ensure the same volume of suspension in the OURendo measurements, 0.1 dm3 of dechlorinated tap water was added at each measurement.

The statistical analyses were conducted by means of the R programming environment (v. 3.4.3). Each comparative analysis of means was preceded by a test of the significance of variance differences, which was performed using the F-test. In the case of equal variances, Student's t-test for two independent samples was employed, whereas Welch's test was applied when the variances differed [48].

Results and Discussion
The results of the study are shown in Figures 2 and 3. The values of endogenous OUR (Figure 2) indicate the presence of a five-day-long adaptation period (Stage I) for AS transferred from a full-scale bioreactor to the laboratory-scale SBRs, in both Bioreactor A and B. This relatively short duration was related to the fact that only the scale and type of the bioreactor were changed, as the AS was transferred from a large-scale flow system to the laboratory-scale batch one. All process parameters as well as the wastewater subjected to the treatment remained unchanged. The next easily distinguishable stage of the feed-on step involved stable operation of the SBR bioreactors in standard conditions for 25 days (Stage II in Figure 2).

Inconsiderable changes in the average OURendo values were found during Step I (the feed-on step) in Bioreactor B. Regarding Bioreactor A, noticeable and statistically significant OURendo increases were observed in Stages I and II. This is attributed to Archaea bioaugmentation, as the bioreactors differed only in this aspect. It is also visible that the standard deviation of the OUR measurements was lower for Bioreactor A than for B at both stages.
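The stage-by-stage significance statements reported in this section follow the procedure described in the statistical analysis paragraph above: an F-test on the variances decides whether Student's t-test or Welch's test is used to compare mean OUR values. A minimal sketch of that decision rule is given below, using Python/SciPy in place of the authors' R environment; the significance level of 0.05 and the example OUR values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def compare_means(x, y, alpha=0.05):
    """F-test on variances, then Student's t (equal variances) or Welch's test on means."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    f = np.var(x, ddof=1) / np.var(y, ddof=1)
    dfx, dfy = len(x) - 1, len(y) - 1
    # two-sided p-value for the variance-ratio (F) test
    p_var = 2 * min(stats.f.cdf(f, dfx, dfy), stats.f.sf(f, dfx, dfy))
    equal_var = p_var >= alpha
    res = stats.ttest_ind(x, y, equal_var=equal_var)
    return {"p_variances": p_var,
            "test": "Student" if equal_var else "Welch",
            "p_means": res.pvalue}

# Illustrative OURendo values (mgO2*dm-3*h-1) for two stages of one bioreactor.
stage_1 = [27.9, 29.3, 28.1, 30.2, 28.6]
stage_2 = [21.4, 22.0, 20.8, 21.9, 22.3]
print(compare_means(stage_1, stage_2))
```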
In Step II (the feed-off step), three stages can be distinguished: Stage I (lasting 8 days), where the OURendo decline in SBR-A was substantially smaller than in SBR-B in comparison with the feed-on step; Stage II (lasting 16 days), where the OURendo decreased in SBR-A (the F-test showed that the variances at Stages I and II differed, p = 0.029, and Welch's test applied for comparison of the average values showed that they were statistically different, p = 2 × 10−8) and increased slightly in SBR-B relative to the previous stage (variances different, p = 0.002; differences in average values statistically insignificant, p = 0.17); and Stage III (lasting six days), where the OURendo dropped significantly in both bioreactors in comparison to the previous stage (for SBR-A, the F-test showed no differences in variances, p = 0.126, and Student's t-test showed differences in average values, p = 3.306; for SBR-B, the F-test showed differences in variances, p = 0.030, and Welch's test showed differences in average values, p = 2.911 × 10−5). In the feed-off step, the standard deviation of the OURendo measurements was lower for SBR-A only in Stage I and comparable to the level achieved for SBR-B in the other two stages.

Averaging the results of the feed-on and feed-off steps, it was found that, in Step I (i.e., in the presence of the substrate), the average endogenous oxygen uptake rate was 28.70 ± 2.75 mgO2·dm−3·h−1 in the bioaugmented SBR-A and 21.63 ± 0.9 mgO2·dm−3·h−1 in the non-bioaugmented SBR-B (Figure 2). In turn, in Step II (absence of substrate and continuous aeration), the average endogenous oxygen uptake rate was 12.73 ± 3.93 and 10.56 ± 4.23 mgO2·dm−3·h−1 in the previously bioaugmented and non-bioaugmented AS, respectively (Figure 2).
Given the analysis of the OURendo, the values in SBR-A and SBR-B were not stable during the specified stages, which is reflected in the standard deviations of the results. The increase of the OURendo value in SBR-A after the adaptation period in the bioaugmented system is considerable (for SBR-A, the F-test showed differences in variances, p = 0.030, and Welch's test showed differences in average results, p = 8.809 × 10−13; for SBR-B, the F-test showed differences in variances, p = 0.012, and Welch's test showed differences in average results, p = 0.003), which generally yields a higher standard deviation of the results calculated for the total feed-on step (28.70 ± 2.75 mgO2·dm−3·h−1) and may indicate lesser stability of the system. However, the upward trend in the changes and the analysis of the individual stages allow concluding that bioaugmentation exerts a positive effect on the respiratory activity of AS.

The results of the OURexo measurements (Figure 3) indicate that, just as for OURendo, there is a visible AS adaptation period in both laboratory-scale SBRs, referred to as Stage I (five days), and a subsequent stable operation Stage II (25 days). The OURexo values achieved in SBR-B changed substantially in the feed-on step and were considerably lower in Stage II (differences in variances, p = 0.015; differences in average values, p = 5.206 × 10−10). In SBR-A, they were also different in Stage I (differences in variances, p = 0.054; differences in average values, p = 2.737 × 10−4), but the change was not as drastic as in SBR-B. The standard deviations of the measurement results for SBR-A and SBR-B in both these stages were not different (for Stage I, p = 0.22; for Stage II, p = 0.16).

In the feed-off step, three distinct stages can be distinguished, similar to the OURendo results. These involve Stage I (lasting eight days), where the OURexo in SBR-A declines to a level similar to that achieved for SBR-B; Stage II (lasting 16 days), where OURexo drops in SBR-A (variances not different, p = 0.46; averages different, p = 0.0002) and in SBR-B in comparison to Stage I (variances not different, p = 0.29; averages different, p = 1.754 × 10−11); and Stage III (lasting six days), where OURexo in both bioreactors falls distinctly relative to Stage II, in SBR-A (variances not different, p = 0.24; averages different, p = 7.842 × 10−14) and in SBR-B (variances not different, p = 0.47; averages different, p = 0.001).

Averaging the results of the feed-on and feed-off steps, it was found that, in the first step (in the presence of the substrate), the average exogenous oxygen uptake rate was 95.55 ± 11.33 and 57.15 ± 24.56 mgO2·dm−3·h−1 in the bioaugmented and non-bioaugmented AS, respectively (Figure 3).

For both OURendo and OURexo, two basic stages can be distinguished in the feed-on step and three stages in the feed-off step. At the beginning of the feed-on step, there is a ca. 5-day-long adaptation period for AS transferred from a large-scale bioreactor to a laboratory-scale SBR, and the rest of the time (25 days) is a stable operation period. For both OURendo and OURexo, this parameter in the feed-on step is higher for the bioaugmented bioreactor. However, when the adaptation stage is finished, the OURexo clearly declines in both SBR-A and SBR-B, which is particularly evident for the non-bioaugmented bioreactor, where it falls by one-third. In the feed-off step, the OURendo and OURexo are always higher for the bioaugmented Bioreactor A. Interestingly, in Stage II of the feed-off step in Bioreactor B, the OURendo increases in comparison to Stage I, which is not the case in Bioreactor A.
The analysis of the OURexo values (Figure 3) in both steps of the experiment allows a conclusion that, similar to the OURendo results (Figure 2), SBR-A was characterized by a greater stability of the individual stages of both steps, which is reflected in the lower standard deviations. Similarly, the standard error of the measurements is usually lower for the bioreactor with the bioaugmented AS. Regarding OURendo in the feed-on step, it is 0.5 for the bioaugmented SBR-A and 0.16 for the non-bioaugmented SBR-B, while in the feed-off step these values are higher: 0.72 and 0.77, respectively. When OURexo is considered, the standard error in the feed-on step reaches 2.07 and 4.48 for SBR-A and SBR-B, respectively. In the feed-off step, lower values are found: 1.95 for SBR-A and 2.27 for SBR-B.

Generalization and averaging of the results for the feed-on and feed-off steps allows concluding that the maximum respiration rate was observed in the feed-on step of the experiment, and a decrease in both OURexo and OURendo occurred during the feed-off step. In this case, the average endogenous oxygen uptake rate was 12.73 ± 3.93 mgO2·dm−3·h−1 in the bioaugmented AS and 10.56 ± 4.23 mgO2·dm−3·h−1 in the non-bioaugmented one. In the case of the exogenous oxygen uptake rate, the average values were 29.74 ± 10.67 and 19.82 ± 12.42 mgO2·dm−3·h−1 in the bioaugmented and non-bioaugmented AS, respectively.

As background for the OUR measurements and as a supplementary check on the stability of the processes in the bioreactors during the feed-on step, the effectiveness of wastewater treatment in the bioaugmented and non-bioaugmented SBRs was observed. Higher variability of effectiveness was noticed in the non-bioaugmented reactor; however, no significant differences were observed between the levels of treatment effectiveness in the two bioreactors. In addition, higher variability in the effectiveness of treatment was observed at the beginning of the experiment in both bioreactors, which reflects the adaptation phase of the AS to laboratory conditions.

According to Henze et al. [49], the low values of the respiratory rate might be caused by oxygen stabilization of the sludge. The typical OURexo values for AS range from 30.0 to 100 mgO2·dm−3·h−1 [50]. In the study by Puig et al. [51], similar data were obtained for an SBR, i.e., the exogenous respiration rate ranged from 35 to 110 mgO2·dm−3·h−1. The results presented in this paper are mostly consistent with those reported by others. However, the endogenous oxygen uptake rate exceeds the OURendo values given by Avcioglu et al. [52], which varied from 2.0 to 8.0 mgO2·dm−3·h−1. This suggests that the AS used for the experiment had a good quality due to the presence of many microorganisms in the floc assemblages as well as a sufficient substrate [49].

The OURexo results in the aerobically stabilized sludge (i.e., in the feed-off step) were comparable to those presented by Cokgor et al. [53]. In their study, domestic sludge was aerobically digested at a room temperature of 20 °C for 35 days. A maximum OUR value of ca. 40.0 mgO2·dm−3·h−1 was observed at the beginning of aerobic digestion. Then, it decreased to 18.0 and 21.0 mgO2·dm−3·h−1 after 17 and 30 days, respectively. Bernard and Gray [54] investigated aerobically digested domestic sludge at a temperature of 16.5-20 °C; they also observed a significant reduction of the specific oxygen uptake rate, ranging from 65.8% to 93.1% (after 35 days) (SOUR was expressed as milligrams of oxygen consumed per gram of volatile suspended solids (VSS) per hour and determined using the equation SOUR = OUR/VSS). Both the endogenous and exogenous oxygen uptake rates were higher in the case of the bioaugmented AS, which corresponds to the research carried out by Jun et al. [24] (where SOUR was investigated) and was also mentioned by Fredriksson et al. [33].
The authors suggested a symbiotic relationship between bacteria and Archaea. However, the difference between the OUR values observed in the present study for the non-bioaugmented and bioaugmented AS was larger in the case of active AS (feed-on step) than in the stabilized sludge (feed-off step). Based on these investigations, it can be supposed that the bioaugmentation-assisted process is considerably more stable, as evidenced by the lower standard deviation value in each particular stage of the feed-on and feed-off steps. Therefore, it can be assumed that Archaea have a stabilizing effect on AS and decrease its sensitivity to changes in the quality of the supplied wastewater and to disruption of the substrate supply. This supports the advisability of bioaugmentation of AS and confirms the findings concerning enhancement of bioreactor stability presented in the Introduction of this paper. On the other hand, the research indicates that Archaea-bioaugmented AS is characterized by higher activity (expressed by a higher OUR) at prolonged aeration and exhibits increased resistance to oxygen stabilization, which makes this type of stabilization less effective and therefore less cost-efficient.

Summarizing, it should be stressed that the novelty of this work lies in the description of the influence of bioaugmentation with Archaea on the oxygen uptake rate of an AS system over a wide range of process stages (adaptation phase, stable operation and aerobic stabilization). Moreover, the study confirmed the increasing as well as stabilizing influence of Archaea addition on the respiratory activity of AS described by the oxygen uptake rate.

Conclusions
In this study, the influence of bioaugmentation with Archaea on the OUR of AS in a laboratory-scale SBR was investigated. Furthermore, the effect of the absence of an external substrate in bioaugmented and non-bioaugmented AS during aerobic stabilization was evaluated. The conclusions are as follows:
(1) It was observed that bioaugmentation with Archaea had a positive effect on both the endogenous and exogenous oxygen uptake rate of AS. The values of OURendo and OURexo in the bioaugmented SBR were higher than in the non-bioaugmented SBR during the standard performance of the SBR bioreactor operating under sufficient substrate availability. The feeding inhibition of AS together with continuous aeration resulted in gradual stabilization and aerobic digestion of the bioaugmented and non-bioaugmented AS; however, in the presence of Archaea this process is slower.
(2) The results indicate an increase in the OUR value of the bioaugmented AS in comparison with the non-bioaugmented one under exactly the same process conditions, and a greater invariability of the OUR level in the individual stages of the experiment. Therefore, it can be stated that Archaea exert a stabilizing effect on the OUR of AS (increase the system's resistance to external factors) and decrease its sensitivity to changes in the quality of the supplied wastewater, to disruption of the substrate supply, as well as to prolonged aeration.
(3) Because OUR is only one of the possible parameters describing AS, future work should be conducted, for instance, on the influence of Archaea bioaugmentation on the performance of bioreactors overloaded with biogenic compounds and bioreactors working over a wide range of temperatures, but also to describe the reactions of eukaryotic organisms present in AS to supplementation with Archaea.
Figure 2. Endogenous oxygen uptake rate values during the experiment. SBR-A contained bioaugmented and SBR-B contained non-bioaugmented AS. Standard deviations are also given.
Figure 3. Exogenous oxygen uptake rate values during the experiment. SBR-A contained bioaugmented and SBR-B contained non-bioaugmented AS.
Table 1. Pre-settled wastewater composition (the mean value and standard deviation are given).
Table 2. Characteristics of Archaea liquor used for bioaugmentation (the mean value and standard deviation are given).
2019-04-02T13:14:12.959Z
2018-04-28T00:00:00.000
{ "year": 2018, "sha1": "9d72630c05e741ba618cdc82e0d62754d2d3e123", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4441/10/5/575/pdf?version=1525350221", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "694f1b7e66f8f32bd2ded0e7b088abfc580d5058", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
260441860
pes2o/s2orc
v3-fos-license
Legalisation of euthanasia and assisted suicide: advanced cancer patient opinions – cross-sectional multicentre study Objectives The French government passed a new law in February 2016, called the Claeys-Leonetti Law, which established the right to deep and continuous sedation, confirmed the ban on euthanasia and ruled out physician-assisted suicide. The aim of this work was to gather the opinion of patients on continuous sedation and the legalisation of medical assistance in dying and to explore determinants associated with favourable and unfavourable opinions. Methods This was a French national prospective multicentre study between 2016 and 2020. Results 331 patients with incurable cancer suffering from locally advanced or metastatic cancer in 14 palliative care units were interviewed. 48.6% of participants expressed a favourable opinion about physician-assisted suicide and 27.2% an unfavourable opinion about its legalisation. Regarding euthanasia, 52% of patients were in favour of its legalisation. In univariate analysis, the only factor determining opinion was belief in God. Conclusions While most healthy French people are in favour of legalising euthanasia, only half of palliative care patients expressed this opinion. Medical palliative care specialists were largely opposed to euthanasia. The only determining factor identified was a cultural factor that was independent of the other studied variables. This common factor was found in other studies conducted on cohorts from other countries. This study contributes to the knowledge and thinking about the impact of patients' personal beliefs and values regarding their opinions about euthanasia and assisted suicide. Trial registration number NCT03664856.

WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ This is the first national prospective multicentre study to involve a large number of patients who are directly confronted with end-of-life decisions and affected by terminal conditions, while former studies included healthy participants or carers who were not personally or immediately concerned by the issue.
WHAT THIS STUDY ADDS
⇒ This study contributes to the knowledge and thinking about the impact of patients' personal beliefs and values regarding their opinions about euthanasia and the legalisation of assisted suicide.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ These findings could serve as a basis for lawmakers in France and elsewhere to satisfy the wishes of patients as much as possible and also help caregivers to meet the requirements of patients and follow ethical guidelines within the scope of the law.

INTRODUCTION
In Europe, legislation on euthanasia differs from one country to another, and euthanasia is legalised in only four countries.4 Currently, physician-assisted suicide is legal in five US states and Switzerland.4 French law considers euthanasia to be 'the act of a third party who deliberately ends a person's life with
the intention of putting an end to a situation deemed unbearable', while physician-assisted suicide is considered to be suicide by a patient facilitated by means or by information provided by a physician aware of the patient's intent. In France, calls to legalise physician-assisted suicide and euthanasia have increased, and public interest in the subject has grown in recent years despite its prohibition. The first law to be promulgated in France regulated patients' rights and end-of-life care and is called the Leonetti Law. It explicitly allows physicians to provide far-reaching symptom control, even at the risk of shortening life, in order to relieve the person's suffering at an advanced stage of a serious and incurable disease, while prohibiting physician-assisted suicide and euthanasia. A new law in February 2016, called the Claeys-Leonetti Law, established the right to deep and continuous sedation at the patient's request, consisting of sedative and analgesic treatment leading to a profound and continuous change of vigilance until death if the patient is likely to suffer pain, associated with the cessation of all life-sustaining treatments including artificial nutrition and hydration. In a patient who cannot express his/her wishes, the physician discontinues a life-sustaining treatment to avoid unreasonable obstinacy. In this case, the physician implements continuous deep sedation until death to be sure that the patient will not suffer.5 The law also confirmed the ban on euthanasia per se. However, while 96% of French people have been found to be in favour of euthanasia, fewer than 50% of physicians are.6 We conducted a first feasibility study that explored opinions about euthanasia among patients receiving palliative care.7 It showed that patients with an incurable disease such as cancer in an end-of-life setting are probably more reticent to legalise euthanasia than the healthy general population. A second single-centre study by our team reported determinants of opinions about euthanasia in palliative care patients with cancer.8 This study concerns a national multicentre prospective survey on patients' opinions about continuous sedation and legalising medical assistance in dying. Determinants associated with favourable and unfavourable opinions were explored.

Design and setting
We performed a French national multicentre prospective study among patients in 14 palliative care units between 2016 and 2020. Patients were selected without a prior interview with psychologists. The study questionnaire is provided as online supplemental material 1. Potential participants were identified by the palliative care physician. Before starting the questionnaire, investigators (physicians) presented the purpose of the study and the nature of the questions to the patient.

Population
Main selection criteria were as follows: suffering from locally advanced or metastatic cancer and requiring palliative care according to the definition set out by the WHO; hospitalised in a palliative care unit or followed by a palliative care team in hospital or at home; consenting to participate in the study and freely providing written informed consent. The only exclusion criterion was being unable to understand the purpose and conditions of the study.
Data collection
Sociodemographics were collected. Clinical data were collected using medical records and were confirmed during completion of the questionnaire. These included: type of cancer, number of metastatic sites, performance status (WHO), history of cancer treatments (chemotherapy, radiation, surgery) and strong opiate use. In addition, the questionnaire recorded pain level,9 health-related quality of life (EORTC QLQ-C15-PAL),10 family support, belief in God and practice of religion. The last question referred to the participant's opinion about legalising euthanasia, physician-assisted suicide and deep and continuous sedation.

Statistical analysis
Categorical variables were described using counts and frequencies and quantitative variables were described using medians and ranges. Using two specific items, the sample was split into two subgroups: (1) favourable or unfavourable opinion about legalising euthanasia and (2) favourable or unfavourable opinion about legalising assisted suicide. Patients' characteristics were compared with the χ2 test or Fisher's exact test for discrete variables and the Wilcoxon rank test for continuous variables according to group. Results were expressed as ORs and their 95% CIs. The level of statistical significance was set at 1−α = 0.95. Statistical analyses were carried out with the SPSS software V.20.

Patients
Of the 410 patients, a total of 331 patients were interviewed, with a refusal rate of 25%. Median age was 66 years (range 21-94), and 51.4% were women (n=170). The main cancer sites were

Opinion about legalising euthanasia
Regarding euthanasia, 52% (n=172) of patients were in favour of its legalisation and 22% (n=73) did not express an opinion (neither a favourable nor an unfavourable opinion about legalising euthanasia). Moreover, if such a law were voted, 42% of patients declared that they might envisage it for themselves (online supplemental material 1, question: page 17).

Determinants of favourable or unfavourable opinion about legalising physician-assisted suicide
Univariate analysis showed differences between patients with a favourable opinion about legalising physician-assisted suicide and those without. Patients with an unfavourable opinion of legalisation significantly more often believed in God and practised a religion. No other factors were determinants. All the details are provided in table 1.

Determinants of favourable or unfavourable opinion about legalising euthanasia
Univariate analysis also showed that patients with an unfavourable opinion about legalising euthanasia significantly more often believed in God and practised a religion. No other factors were determinants of an opinion about legalising euthanasia. All the details are provided in table 1.
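The determinant analyses above report odds ratios with 95% CIs, as stated in the statistical analysis section. A minimal sketch of that calculation from a 2×2 table is shown below (Woolf/logit method); the counts and variable labels are illustrative assumptions and not the study's data, which were analysed with SPSS.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and approximate 95% CI for a 2x2 table:
       a = exposed & unfavourable, b = exposed & favourable,
       c = unexposed & unfavourable, d = unexposed & favourable."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Illustrative counts only: belief in God vs. opinion on legalising euthanasia.
print(odds_ratio_ci(a=60, b=80, c=20, d=90))
```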
DISCUSSION
This is the first national prospective multicentre study to include a large number of patients who are directly confronted with end-of-life decisions and affected by advanced conditions, while former studies included healthy participants or carers who were not personally or immediately concerned. Our team previously published the first single-centre study on the feasibility of discussing euthanasia and deep sedation with end-of-life patients.7 A second single-centre study sought to identify potential determinant factors associated with a favourable or unfavourable opinion about euthanasia in a French population of 78 patients with cancer receiving palliative care. Young patients who do not believe in God and have a history of chemotherapy treatment were more likely to request the discontinuation or restriction of their treatment.8

In this study, the majority of patients (89.7%) were in favour of legalising proportionate sedation, that is, of a duration and depth appropriate for a so-called refractory symptom, and also in favour (82.2%) of deep and continuous sedation at the patient's request, when he or she has decided to discontinue life-sustaining treatment in accordance with the current law. Therefore, the Claeys-Leonetti law is probably well adapted to the situations encountered by these patients and probably reflects patients' wishes. 73.7% of the patients also agreed that, in the event that a patient is unable to express their wishes and the doctor withdraws life-sustaining treatment, the latter shall implement deep and continuous sedation. Furthermore, 65% were also in favour of deep and continuous sedation in the event of psychological suffering without physical symptoms. It is therefore important that palliative care professionals should have the necessary skills to identify and manage psychological distress, and that mental health professionals should analyse and assess the reasons for psychological and/or existential distress and fully investigate whether a patient's request truly reflects their wishes.11

While 96% of healthy French people are in favour of legalising euthanasia, only half of palliative care patients are (IFOP, Le regard des Français sur la fin de vie, 2014). Medical palliative care specialists are largely opposed to euthanasia, while patients are more divided on the issue.6 Patients must, therefore, be listened to attentively.

In multivariate analysis on a cohort of US patients, Suarez-Almazor et al showed that the only characteristics that remained statistically associated with support for euthanasia were religious beliefs and the perception that patients with cancer are a heavy burden on their families.12 A Canadian team also showed that the desire for hastened death was associated with lower religiosity, reduced functional status, a diagnosis of major depression and greater distress from individual symptoms and concerns.13 In another work, Italian patients who were in favour of euthanasia had a higher Karnofsky score. No other variables taken into consideration showed any relationship, but only 25 patients (40%) were in favour of euthanasia in that study.14 A systematic review of older adults' requests for or attitudes towards euthanasia or assisted suicide showed that younger age, lower religiosity, higher education and higher socioeconomic status were the most consistent predictors of endorsement of euthanasia and assisted suicide.15
Finally, a New Zealand study analysing factors associated with terminally ill people who wanted to die showed that the factors with the largest ORs were awareness of terminal prognosis, a high level of depression, not finding meaning in day-to-day life, and pain.16

As mentioned above, we previously identified the potential determining factors associated with a favourable or unfavourable opinion about euthanasia as age, belief in God and a medical history of chemotherapy.8 In the present study, only belief in God was a determining factor of holding an opinion against legalising euthanasia and assisted suicide. Therefore, the only determining factor identified in our study is a cultural factor that is independent of the other studied variables such as pain, anxiety, site of disease, treatments received, general health status, level of education and duration of disease. This common factor was found in other studies conducted on cohorts from different countries.12 13 15 Additionally, the independent effect of religiosity on the opinion about hastened death has been extensively discussed in the general population.17 18 Interestingly, a Danish study examined whether the religious and spiritual characteristics of Danish physicians were associated with their attitudes toward end-of-life decisions, including euthanasia. It was shown that being more religious was associated with being more likely to oppose euthanasia.19 Religions may regard understanding death and dying as vital to finding meaning in human life.20 Unsurprisingly, all faiths hold strong views on euthanasia and believers are more reluctant to endorse it. Nevertheless, 'believing in God' is a rather simplistic way of exploring religiosity and it does not fully define what being religious entails. In France, a survey in 2021 showed that 49% of French people declared believing in God, as opposed to 66% in 1947 (IFOP, Le rapport des Français à la religion, August 2021).

The study had some limitations. First, the representativeness of the sample is questionable, since the opinion of end-of-life patients may vary according to the pathology they are suffering from and as their health status worsens. For example, we can hypothesise that the condition of incurable cancer is experienced differently from suffering due to amyotrophic lateral sclerosis, because of the disease course and the profile of the patients. Patients managed in non-palliative care units may also hold opinions different from those of patients managed in palliative settings. Future studies should focus on patients with other end-of-life conditions. Second, one-third of the eligible patients refused to participate, possibly because they were more cognitively or physically impaired. If so, this factor would affect the participation rate. Third, for more objectivity, we chose to provide an introductory section including the sentences as they were written in the official French decree. The questions therefore differed from those used in the opinion polls, which may bias the comparison of results.
CONCLUSION
This study contributes to the knowledge and thinking about the impact of patients' belief in God on their opinions about euthanasia and assisted suicide. Future qualitative research could continue to explore the relationship between an individual's belief in God and their views on euthanasia. The context of the illness and the patient's social situation probably do not influence their opinions on these issues. A better understanding of patients' beliefs is essential for providing precise information and/or interventions tailored to the palliative context. This poses questions about our current ability to care for and accompany patients through this most difficult of life stages. Medical advances, which have transformed diseases that once led to a rapidly fatal outcome, coupled with increased life expectancy and other social phenomena linked to human development, make it likely that these situations will become more common.21 The present findings could be taken into account by deciders and lawmakers in France and elsewhere to satisfy the wishes of patients and to provide guidance for caregivers.

KEY STATEMENTS
⇒ Continuous sedation and medical assistance in dying: to gather the opinions of patients with advanced cancer; to establish determinants of opinions; opinion about legalising deep and continuous sedation.
⇒ Regarding physician-assisted suicide and euthanasia, incurable patients were: more favourable to it than palliative care specialists; less favourable to it than healthy people.
⇒ The societal debate on end of life must consider the opinions of people at the end of life.

Table 1. Individuals with or without a favourable opinion about legalising physician-assisted suicide and euthanasia.
2023-08-04T13:04:26.822Z
2023-08-03T00:00:00.000
{ "year": 2023, "sha1": "d716e584032b0b7fbd1ce6ec5f8f2c3c8aed875f", "oa_license": "CCBYNC", "oa_url": "https://spcare.bmj.com/content/bmjspcare/early/2023/08/02/spcare-2022-004134.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "30056f8b1151b20d9eb87aab59854ac3ab933dde", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
247768929
pes2o/s2orc
v3-fos-license
Computed Tomography Radiomics-Based Prediction of Microvascular Invasion in Hepatocellular Carcinoma Background: Due to the high recurrence rate in hepatocellular carcinoma (HCC) after resection, preoperative prognostic prediction of HCC is important for appropriate patient management. Exploring and developing preoperative diagnostic methods has great clinical value in treating patients with HCC. This study sought to develop and evaluate a novel combined clinical predictive model based on standard triphasic computed tomography (CT) to discriminate microvascular invasion (MVI) in hepatocellular carcinoma (HCC). Methods: The preoperative findings of 82 patients with HCC, including conventional clinical factors, CT imaging findings, and CT texture analysis (TA), were analyzed retrospectively. All included cases were divided into MVI-negative (n = 33; no MVI) and MVI-positive (n = 49; low or high risk of MVI) groups. TA parameters were extracted from non-enhanced, arterial, portal venous, and equilibrium phase images and subsequently calculated using the Artificial Intelligence Kit. After statistical analyses, a clinical model comprising conventional clinical and CT image risk factors, radiomics signature models, and a novel combined model (fused radiomic signature) were constructed. The area under the curve (AUC) of the receiver operating characteristics (ROC) curve was used to assess the performance of the various models in discriminating MVI. Results: We found that tumor diameter and pathological grade were effective clinical predictors in the clinical model and that 12 radiomics features were effective for MVI prediction in each CT phase. The AUCs of the clinical, plain, artery, venous, and delay models were 0.77 (95% CI: 0.67–0.88), 0.75 (95% CI: 0.64–0.87), 0.79 (95% CI: 0.69–0.89), 0.73 (95% CI: 0.61–0.85), and 0.80 (95% CI: 0.70–0.91), respectively. The novel combined model exhibited the best performance, with an AUC of 0.83 (95% CI: 0.74–0.93). Conclusions: Models derived from triphasic CT can preoperatively predict MVI in patients with HCC. Of the models tested here, the novel combined model was most predictive and could become a useful tool to guide subsequent personalized treatment of HCC.
INTRODUCTION
Hepatocellular carcinoma (HCC), one of the most common malignant tumors, currently ranks as the second most common cause of cancer-related deaths worldwide, with more than half of all HCC cases diagnosed in China (1). HCC's high potential for vascular invasion, metastasis, and recurrence after resection explains its poor prognosis. Previous studies have indicated that microvascular invasion (MVI) is one of the most important factors influencing HCC recurrence and the prognosis of patients with this tumor (2,3). Several therapies such as liver resection (LR), liver transplantation, and transcatheter arterial chemoembolisation (TACE) are conventionally used to treat HCC. LR is the most common therapeutic option, while TACE is recommended for patients without vascular invasion or extrahepatic metastasis (4). In most cases, aspiration biopsy is the first step in determining whether the tumor is benign or malignant after clinical and imaging evaluation. Clinicians then assess clinical symptoms, specific serological indicators, and pathological results to determine treatment decisions. To date, MVI has often been overlooked in these initial steps, as an aspiration biopsy does not produce sufficient tissue samples to evaluate MVI; instead, aspiration biopsy is only capable of differentiating benign from malignant tumors and grading tumors according to the Edmondson-Steiner (E-S) scale. Thus, reliable preoperative prognostic markers for patient stratification are needed.

Interest in presurgical imaging for MVI assessment in patients with HCC has grown significantly in recent years. Conventional imaging-based assessments, including tumor density, shape, and enhancement, have not been capable of predicting MVI. However, previous studies have indicated that imaging features (5,6) such as tumor margin and size, radiological capsule, locally convex nodules, multinodular fusion, intratumoural artery and low-density signs (also called two-trait predictor of venous invasion, or TTPVI), as well as portal vein tumor thrombus (PVTT), can be used to predict MVI. However, these parameters can be easily influenced by interobserver variability and are not considered quantitative features. Radiomics is an emerging field that permits the extraction of high-dimensional information from medical images and is considered to reflect tissue heterogeneity (7)(8)(9)(10)(11). Thus, the radiomics signature, which comprises multiple texture features, has become a powerful prognostic biomarker. This signature augments available clinical data and has been a significant predictor of clinically relevant factors. According to the literature, by constructing appropriate models with radiomics features, researchers have achieved successful assessment and prediction abilities in various challenging clinical tasks (12). Studies have shown that computed tomography (CT) texture analysis can be used for MVI prediction, and that predictions derived from CT-based nomograms are more accurate than those of other predictive models (13,14). Due to HCC's high recurrence rate after resection, preoperative prognostic prediction of HCC is important for appropriate patient management. Therefore, exploring and developing preoperative diagnostic methods has the potential for great clinical value in treating patients with HCC.
With this study, we thus aimed to construct MVI-specific diagnostic models by analyzing clinical and imaging features, as well as texture analysis (TA)-derived image parameters based on standard triphasic CT. We also sought to evaluate these models' diagnostic performance to explore whether models based on standard triphasic CT could preoperatively discriminate MVI in HCC. Study Design and Patient Population This was a single-center retrospective study approved by the Institutional Review Board of the Second Affiliated Hospital of Anhui Medical University [No. YX2020-101 (F1)], which waived the need for individual informed consent; this study complies with the Declaration of Helsinki (as revised in 2013). Participants with HCC confirmed by histopathology who underwent CT examination before surgery between December 2018 and September 2021 were included in our study. Relevant demographic and clinical patient information was retrieved from the electronic medical record. The inclusion criteria were as follows: (1) complete medical information; (2) CT examination using the same equipment and scanning parameters; (3) CT examination comprising non-enhanced, arterial, portal venous, and equilibrium phase scans; and (4) confirmation of HCC by pathology with MVI evaluation. The exclusion criteria were as follows: (1) metastatic or recurrent tumor; (2) liver transplantation; (3) poor CT image quality interfering with observation, such as breathing or motion artifacts; (4) incomplete CT images; and (5) pathological images not meeting the criteria and/or MVI not evaluated. A total of 82 participants over 18 years old (65 men and 17 women) were included in our study. Demographic data, serum alpha-fetoprotein (AFP) levels, and history of hepatitis B infection were collected and recorded. Histological Analysis Histopathologic features, including pathology results, E-S grade, MVI status, and liver cirrhosis around the tumor, were evaluated by two pathologists with 5 years and 8 years of professional experience, respectively. If a difference of opinion arose, the two pathologists reached a final conclusion through discussion. MVI was defined as the presence of tumor cell nests, observed under microscopy, within vascular spaces lined by endothelium. Three subgrades were assigned based on the number of MVI foci and their distance from the tumor: M0, no MVI; M1 (low-risk group), ≤5 MVI foci, all located in liver tissue ≤1 cm from the tumor; and M2 (high-risk group), >5 MVI foci or MVI in liver tissue >1 cm from the tumor. Finally, we divided patients with M0 into the MVI-negative group and patients with M1/M2 into the MVI-positive group. CT Scan and Image Analysis CT examinations were performed using a Philips iCT 256-slice Brilliance CT scanner (Philips Healthcare, Cleveland OH, USA). Patients were required to fast, with no intake of solid foods or liquids for >4 h prior to scanning; they were allowed to drink 800 mL of water immediately before the examination. All participants underwent non-enhanced and standard triphasic scans. A non-ionic contrast agent at a dose of 1.5 ml/kg and a flow rate of 3.0 ml/s was injected into the right cubital vein with a high-pressure syringe after completion of the non-enhanced scan. Standard triphasic images were acquired 30, 60, and 120 s after contrast material injection, respectively. Scanning parameters were as follows: tube voltage, 120 kV; tube current, 230 mAs; slice thickness, 5 mm; slice interval, 0 mm; and pitch ratio, 0.6.
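To make the grouping rule explicit, the short Python sketch below encodes the M0/M1/M2 subgrading and its collapse into the MVI-negative/MVI-positive groups described above. The function names, signatures, and example values are illustrative assumptions for this sketch, not part of the study's actual analysis code; only the thresholds follow the text.

# Illustrative helper for the MVI grading scheme described above.
def mvi_grade(n_foci: int, max_distance_cm: float) -> str:
    """Assign an MVI subgrade from the number of MVI foci and the largest
    distance (in cm) between a focus and the tumor edge."""
    if n_foci == 0:
        return "M0"                      # no MVI
    if n_foci <= 5 and max_distance_cm <= 1.0:
        return "M1"                      # low-risk group
    return "M2"                          # >5 foci or any focus >1 cm from the tumor


def mvi_group(grade: str) -> str:
    """Collapse the subgrades into the binary groups used for modeling."""
    return "MVI-negative" if grade == "M0" else "MVI-positive"


if __name__ == "__main__":
    print(mvi_grade(0, 0.0), mvi_group(mvi_grade(0, 0.0)))    # M0, MVI-negative
    print(mvi_grade(3, 0.5), mvi_group(mvi_grade(3, 0.5)))    # M1, MVI-positive
    print(mvi_grade(7, 1.4), mvi_group(mvi_grade(7, 1.4)))    # M2, MVI-positive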
Images collected from the participants were retrieved from the hospital's Picture Archiving and Communication System (PACS). Two radiologists with 14 and 13 years of experience in abdominal imaging reviewed these images separately and were blinded to the pathological results (Figures 1A-D, 2A-D). The more senior of the two radiologists made the final decision if a difference of opinion arose. The evaluated image features included: (1) tumor margin: using a narrow window setting, the margin was considered smooth if more than 90% of the tumor's edge appeared "pencil-thin" sharp on arterial, portal venous, and equilibrium phase images (0, smooth; 1, non-smooth); (2) diameter: the maximal diameter of the largest cross section of the tumor; (3) capsule: a hypodense halo encircling the tumor (0, absent; 1, incomplete; 2, complete); (4) locally convex nodules (0, the nodule is separated from the liver's surface capsule by normal parenchyma; 1, the nodule is in direct contact with the surface capsule of the liver); (5) multinodular fusion (0, absent; 1, present); (6) TTPVI: two-trait predictor of venous invasion, comprising intratumoural artery and low-density signs (0, absent; 1, present); (7) PVTT: filling defect with enhancement in the portal or hepatic vein observed in the portal venous phase (0, absent; 1, present). For multifocal HCC, the largest focus was selected for analysis. Texture Features For all study participants, texture features were extracted from each of the four CT imaging phases noted above. Preoperative CT images of all participants were exported in Digital Imaging and Communications in Medicine (DICOM) format. Images showing lesions were then used for manual region-of-interest (ROI) delineation (Figure 3). Two radiologists with 14 and 13 years of experience in CT imaging used an open-source software program (ITK-SNAP, V3.3) (15) to delineate the largest cross-sectional HCC area on the images of the four phases. For multifocal HCC, the largest tumor was selected for analysis. To assess intra-observer precision, the data were collected twice by the same radiologist with a 2-week interval; furthermore, interobserver precision was assessed by measurements independently performed by two radiologists. The intraclass correlation coefficient (ICC) was applied to analyze the inter-observer and intra-observer agreement of the feature extraction. The Artificial Intelligence Kit (V3.3.0, GE Healthcare, Shanghai, China), which complies with the Image Biomarker Standardization Initiative (IBSI), was used to extract texture features. Clinical Model Clinical and imaging characteristics were analyzed in the following steps: (1) the chi-square test or Fisher's exact test was used for nominal variables, while continuous variables that did not follow a normal distribution were analyzed with the Mann-Whitney U-test, retaining characteristics with p < 0.05; (2) the significant features were analyzed by univariate logistic regression, and features with p < 0.05 were entered into a multivariate logistic regression to construct the clinical model; any feature with a variance inflation factor (VIF) > 10 was eliminated to avoid multicollinearity.
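The following is a minimal sketch of the clinical-feature screening pipeline just described (chi-square/Fisher and Mann-Whitney U screening, then univariate logistic regression, a VIF check, and a multivariate logistic model). It assumes a pandas DataFrame with a binary "MVI" column and placeholder lists of nominal and continuous variable names; none of these names or thresholds beyond p < 0.05 and VIF > 10 come from the study's own code.

# Sketch of the clinical model construction, under the assumptions stated above.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor


def screen_features(df, nominal, continuous, outcome="MVI", alpha=0.05):
    """Step 1: chi-square/Fisher for nominal variables, Mann-Whitney U for
    non-normally distributed continuous variables; keep features with p < alpha."""
    kept = []
    for col in nominal:
        table = pd.crosstab(df[col], df[outcome])
        if table.shape == (2, 2) and (table.values < 5).any():
            _, p = stats.fisher_exact(table.values)
        else:
            _, p, _, _ = stats.chi2_contingency(table.values)
        if p < alpha:
            kept.append(col)
    for col in continuous:
        g0 = df.loc[df[outcome] == 0, col]
        g1 = df.loc[df[outcome] == 1, col]
        _, p = stats.mannwhitneyu(g0, g1, alternative="two-sided")
        if p < alpha:
            kept.append(col)
    return kept


def clinical_model(df, features, outcome="MVI", alpha=0.05, vif_cut=10.0):
    """Step 2: univariate logistic screening, a VIF filter, then a
    multivariate logistic regression on the surviving features."""
    uni = []
    for col in features:
        X = sm.add_constant(df[[col]].astype(float))
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        if fit.pvalues[col] < alpha:
            uni.append(col)
    keep = uni
    if len(uni) > 1:                              # VIF needs >1 regressor
        X = df[uni].astype(float).values
        keep = [c for i, c in enumerate(uni)
                if variance_inflation_factor(X, i) <= vif_cut]
    final = sm.Logit(df[outcome],
                     sm.add_constant(df[keep].astype(float))).fit(disp=0)
    return keep, final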
Radiomics Signature Model Features from each of the non-enhanced, arterial, portal venous, and equilibrium phases were analyzed independently in the following steps: (1) features with both inter- and intra-observer ICCs > 0.75 were retained; (2) the Mann-Whitney U-test was used to identify features that differed significantly between the two groups (p < 0.05), and these significantly different features were then analyzed by univariate logistic regression to test whether they discriminated between the two groups (p < 0.05); (3) the minimum redundancy maximum relevance (mRMR) method was used to eliminate redundancy while retaining the features most relevant to MVI; (4) finally, backward stepwise multivariable logistic regression was used to select the final features and construct the regression model; features with p < 0.05 were considered independent risk predictors. Considering the small number of available cases, the data were not split into separate training and validation sets in this study. However, 10-fold cross-validation repeated 10 times was applied to verify that each model could distinguish one group from the other and that the results were not due to overfitting. Lastly, the DeLong test was used to compare model performance (p < 0.05). Combined Model The scores obtained from the clinical model and from the four radiomics models were incorporated into a backward stepwise multivariate logistic regression analysis. Scores from the arterial and delay models were ultimately retained, and the combined model was constructed and displayed as a nomogram. A flowchart of our study is shown in Figure 4. All statistical analyses were performed using R (version 3.5.1) and Python (version 3.5.6). Statistical significance was defined as a two-tailed p < 0.05. Receiver operating characteristic (ROC) curves and the areas under them (AUCs) were used to evaluate the diagnostic value of the final models. RESULTS Overall, 82 participants (65 men and 17 women) were included in our retrospective study. Of these participants, 33 were classified as MVI-negative and 49 as MVI-positive. Analysis of Clinical and Imaging Characteristics The baseline clinical characteristics and CT imaging features of the MVI-negative and MVI-positive groups are summarized in Supplementary Table 1. E-S grade, serum AFP levels, tumor diameter, and multinodular fusion were significantly different between groups (p < 0.05). Texture Analysis For each CT phase, the numbers of features with inter- and intra-observer ICCs greater than 0.75, and with p < 0.05 in the Mann-Whitney U-test and univariate logistic regression, are shown in Supplementary Table 2. mRMR was then applied to eliminate feature redundancy while retaining the most predictive features. For each phase, 12 features were retained. Supplementary Table 3 shows the predictive performance of each feature, and the performance of the optimal textural feature in each phase is shown in Supplementary Table 4. Radiomics Signature Models Using Single CT Phases A model for each CT phase was constructed by multivariate logistic regression. The four radiomics signature models were named the plain, artery, venous, and delay models, based on the non-enhanced, arterial, portal venous, and equilibrium phases, respectively. The delay model exhibited better predictive performance than did the other radiomics signature models.
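The repeated cross-validation step described above can be sketched as follows in Python with scikit-learn. The array names and the synthetic data standing in for one CT phase are placeholders; the study's actual feature matrices and selection results are not reproduced here.

# Sketch of 10 x 10-fold stratified cross-validation of a single-phase
# radiomics logistic model, under the assumptions stated above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def repeated_cv_auc(X, y, n_splits=10, n_repeats=10, seed=0):
    """Return the mean and standard deviation of the ROC AUC over
    repeated stratified k-fold cross-validation."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=seed)
    scores = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)
    return scores.mean(), scores.std()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(82, 12))          # 82 patients, 12 retained features (synthetic)
    y = np.array([0] * 33 + [1] * 49)      # 33 MVI-negative, 49 MVI-positive
    print(repeated_cv_auc(X, y))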
Combined Model The scores obtained from the clinical model and the four radiomics models were incorporated into a backward stepwise multivariate logistic regression to construct a combined model. After the backward stepwise screening, the radiomics scores obtained from the artery and delay models were retained. The nomogram combining the significant independent predictors of MVI is shown in Figure 6. The combined model was able to predict MVI more accurately. The ROC curve for our nomogram is shown in Figure 5. The nomogram exhibited the best performance of all models, with an AUC of 0.83 (95% CI: 0.74-0.93). The DeLong test showed a significant difference between the combined and venous models (p < 0.05). The results of the 10-times-repeated 10-fold cross-validation on the validation folds are shown in Supplementary Table 5. DISCUSSION Radiomics is used to extract hundreds of quantitative feature parameters through computer algorithms to improve the predictive value of medical images (16). In this retrospective study, we extracted 12 radiomics features associated with MVI of HCC from each of the non-enhanced and standard triphasic phase images, and then used these features for radiomics score construction. In total, 48 radiomics features were the optimal textural parameters that differed significantly between the MVI-negative and MVI-positive groups. Wavelet-transformed features were the most frequent in the non-enhanced, portal venous, and equilibrium phases, whereas LoG (Laplacian of Gaussian)-filtered features were the most frequent in the arterial phase. The radiomics features showing the best discriminative performance in predicting MVI in each phase were as follows: the AUC of glszm.ZonePercentage, which measures the coarseness of the texture, was 0.74 in the non-enhanced phase; the AUC of glrlm.LongRunHighGrayLevelEmphasis, which measures the joint distribution of long run lengths with higher gray-level values, was 0.75 in the arterial phase; the AUC of ngtdm.Strength, a measure of the primitives in an image, was 0.70 in the portal venous phase; and the AUC of shape.Maximum2DDiameterColumn, defined as the largest pairwise Euclidean distance between tumor surface mesh vertices in the row-slice plane, was 0.70 in the equilibrium phase.

FIGURE 6 | Nomogram combining the radiomics signature from the arterial and delay phases for predicting microvascular invasion. The total score is calculated, and the risk of MVI increases along the x-axis.

Among these 48 texture parameters, those generated from the GLSZM and GLRLM most frequently showed significant differences between the two groups (17)(18)(19)(20)(21)(22). The GLSZM and GLRLM both reflect image heterogeneity, which can be attributed to the greater heterogeneity of the MVI-positive group (including more abnormally hyperplastic blood vessels, more necrosis due to fast tumor growth, and a more uneven internal tumor structure). We developed radiomics signature models based on the non-enhanced, arterial, portal venous, and equilibrium phases capable of predicting MVI, with AUCs of 0.75, 0.79, 0.73, and 0.80, respectively. Our results indicate that the radiomics signatures of standard triphasic CT achieved good performance at all CT phases. These findings are similar to those of Zheng et al. (23), who suggested that a radiomics signature derived from CT images could predict MVI, with an AUC of 0.80 in tumors < 5 cm. In addition, the artery and delay models performed better than the plain and venous models.
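The fusion step can be sketched as follows. The code combines per-model scores (clinical plus the four single-phase radiomics scores) in a backward-elimination logistic regression and compares AUCs; a paired bootstrap is used here as a simple stand-in for the DeLong test, which is not available in scikit-learn. Column names, the significance threshold, and the bootstrap settings are assumptions of the sketch.

# Sketch of the combined-model construction and an AUC comparison,
# under the assumptions stated above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score


def fit_combined(scores: pd.DataFrame, y, alpha=0.05):
    """Backward elimination over model scores (e.g. 'clinical', 'plain',
    'artery', 'venous', 'delay'), keeping predictors with p < alpha."""
    cols = list(scores.columns)
    while True:
        fit = sm.Logit(y, sm.add_constant(scores[cols])).fit(disp=0)
        p = fit.pvalues.drop("const")
        worst = p.idxmax()
        if p[worst] < alpha or len(cols) == 1:   # all significant, or nothing left to drop
            return cols, fit
        cols.remove(worst)


def bootstrap_auc_diff(y, pred_a, pred_b, n_boot=2000, seed=0):
    """Paired bootstrap of AUC(pred_a) - AUC(pred_b); a stand-in for DeLong."""
    rng = np.random.default_rng(seed)
    y, pred_a, pred_b = map(np.asarray, (y, pred_a, pred_b))
    diffs, n = [], len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:           # an AUC needs both classes
            continue
        diffs.append(roc_auc_score(y[idx], pred_a[idx]) -
                     roc_auc_score(y[idx], pred_b[idx]))
    diffs = np.array(diffs)
    return diffs.mean(), np.percentile(diffs, [2.5, 97.5])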
This result differed from previous studies (24,25), which suggested that radiomics signatures based on the portal venous phase perform better than those based on the arterial or equilibrium phase. Such inconsistencies might be associated with selection bias and with the inclusion criteria used in our radiomics analysis. In addition to this radiomics analysis, we evaluated clinical and CT imaging factors. We found that tumor diameter and pathological grade were independent variables associated with MVI. Previous studies (17,26) also found that tumor size was an independent factor for predicting survival of patients with HCC; this was consistent with our results. We observed a similar trend between E-S grade and MVI likelihood. Therefore, we constructed a clinical model including tumor diameter and pathological grade that could discriminate MVI-positive HCC with an AUC of 0.77. Some studies have indicated that elevated serum AFP levels and non-smooth tumor margins are more frequent in MVI-positive HCC cases (5,6,27). Similarly, other studies showed that radiomics signatures incorporating tumor boundary differences and a hypointense halo could also predict MVI (5,28). Interestingly, in our study, multinodular fusion and serum AFP level were also significant in univariate analysis, but each lost its significant association with MVI in multivariate analysis. Furthermore, we developed and validated a nomogram, combining the above models, as a surrogate biomarker of MVI in HCC. A nomogram is an effective and accurate way to visualize a regression equation because it establishes scoring standards based on the regression coefficients of all independent variables (13). It transforms the complex regression equation into a simple, visual graph and has been proposed as a new standard method. Recently, it has been widely used as a prognostic tool for liver fibrosis (14) and for many tumor types, such as hepatocellular carcinoma (27), lung cancer (29), and thyroid cancer (30). Our combined model exhibited the best predictive performance, with an AUC of 0.83, and it discriminated MVI better than the clinical and radiomics signature models that used a single CT phase; this indicates that our nomogram can provide additional prognostic and biological information, consistent with previous studies (24,27,31). Recent studies have also revealed that when clinical, laboratory, semantic, and radiomics signatures are combined, the resulting model has better predictive power than any single one (17,32). One of these studies (17) showed improved model performance after combining clinical risk factors. Likewise, the other study (32) showed that a clinical-radiomics model achieved an AUC of 0.835 for predicting MVI, compared with AUCs of 0.734 and 0.783 for the single models. In addition, Xu et al. (31) constructed a model combining radiomics signatures with clinical parameters, including AFP, achieving an accuracy of 82.8% (AUC = 0.889). Furthermore, Zhang et al. (25) demonstrated that a multidisciplinary team-like radiomics fusion model can predict MVI status in HCC. In addition, some studies have used magnetic resonance (MR)-based radiomics to construct a nomogram for predicting MVI (27,(33)(34)(35). Our findings suggest that a combined model performs better than any single model; we therefore recommend that combined models be used in texture analysis.
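The nomogram scoring idea mentioned above (turning regression coefficients into a point scale) can be illustrated with the following minimal sketch. The coefficient values and predictor names are made up purely for illustration and are not the study's fitted values; the scaling rule (largest possible contribution mapped to 100 points) is one common convention rather than the only one.

# Sketch of deriving nomogram-style points from logistic coefficients,
# under the assumptions stated above.
import numpy as np


def nomogram_points(coefs, value_ranges):
    """coefs: {name: beta}; value_ranges: {name: (min, max)} observed ranges.
    Returns points awarded per unit increase of each predictor, scaled so the
    largest possible single-predictor contribution equals 100 points."""
    spans = {k: abs(coefs[k]) * (value_ranges[k][1] - value_ranges[k][0])
             for k in coefs}
    widest = max(spans.values())
    return {k: 100.0 * abs(coefs[k]) / widest for k in coefs}


def predicted_probability(intercept, coefs, values):
    """Map a patient's predictor values to the model's MVI probability."""
    lin = intercept + sum(coefs[k] * values[k] for k in coefs)
    return 1.0 / (1.0 + np.exp(-lin))


if __name__ == "__main__":
    coefs = {"artery_score": 2.1, "delay_score": 1.8}            # illustrative only
    ranges = {"artery_score": (-2.0, 2.0), "delay_score": (-2.0, 2.0)}
    print(nomogram_points(coefs, ranges))
    print(predicted_probability(-0.3, coefs,
                                {"artery_score": 0.8, "delay_score": 0.5}))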
In this regard, the nomogram that we developed could provide a straightforward, convenient, and robust method for personalized prediction of MVI after aspiration biopsy and could be a useful tool for guiding subsequent personalized treatment such as surgery or TACE. Previous studies have indicated that differences in image acquisition parameters such as scanning equipment, image resolution, signal-to-noise ratio, reconstruction algorithms, and contrast agent injection rates could influence CT texture quantification (21,36). We carefully considered the potential impact of the above factors on repeatability, and all participants included in this study were thus scanned using the same equipment, scanning parameters, reconstruction algorithms, and contrast injection rate. However, this study has some limitations. First, this was a single-center retrospective study with a small number of participants (n = 82). Second, MVI was only dichotomized as negative or positive; the M1 and M2 subgroups were not analyzed separately. Further studies with larger samples are required to establish a prediction model for MVI grade and to improve the model's overall performance. In conclusion, the results of the present study suggest that standard triphasic CT scans combined with a radiomics signature can be used for preoperative prediction of MVI in HCC; all of the tested models had good diagnostic performance, with the established nomogram performing best among all six of the models described here. A nomogram such as the one we developed for MVI prediction could prove to be a highly useful tool for guiding subsequent personalized treatment. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Institutional Review Board of the Second Affiliated Hospital of Anhui Medical University, which waived the need for individual informed consent. AUTHOR CONTRIBUTIONS WY, SY, RZ, and JB: conceptualization, writing-review and editing. WY, SY, YG, WF, and YW: methodology and formal analysis. LX, KG, YW, and YZ: resources. WY and SY: writing and original draft preparation. WY, SY, and JB: revising. All authors have read and agreed to submit the manuscript.
2022-03-29T13:39:30.387Z
2022-03-24T00:00:00.000
{ "year": 2022, "sha1": "c53bd730a095253aadfc4d0fb0849e622e7eb36a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "c53bd730a095253aadfc4d0fb0849e622e7eb36a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7684776
pes2o/s2orc
v3-fos-license
Natural G-Constellation Families Let G be a finite subgroup of GL_n(C). G-constellations are a scheme-theoretic generalization of orbits of G in C^n. We study flat families of G-constellations parametrised by an arbitrary resolution of the quotient space C^n/G. We develop a geometrical naturality criterion for such families, and show that, for an abelian G, the number of the equivalence classes of these natural families is finite. The main intended application is the derived McKay correspondence. Introduction Let G ⊆ GL n (C) be a finite subgroup. In this paper, we classify flat families of G-constellations parametrised by a given resolution Y of the singular quotient space X = C n /G. A G-constellation is a scheme-theoretical generalization of a set-theoretical orbit of G in C n . They first arose in the context of moduli space constructions of crepant resolutions of X. Interpreting G-constellations in terms of representations of the McKay quiver of G, it is possible to use the methods of [Kin94] to consruct via GIT fine moduli spaces of stable G-constellations. The main irreducible component of such a moduli space turns out to be a projective crepant resolution of X. By varying the stability parameter θ it is possible to obtain different resolutions M θ . In case of n = 3 and G abelian, it is possible to obtain all projective crepant resolutions in this way [CI04]. For further details see [Cra01], [CI04], [CMT07a], [CMT07b]. The formal definition of a G-constellation is: D G (C n ) between the bounded derived categories of coherent sheaves on Y and of G-equivariant coherent sheaves on C n , respectively. This equivalence is known as the derived McKay correspondence (cf. [Rei97], [BKR01], [Kaw05], [Kal08]). It is the derived category interpretation of the classical McKay correspondence between the representation theory of G and the geometry of crepant resolutions of C n /G. It was conjectured by Reid in [Rei97] to hold for any finite subgroup G of SL n (C) and any crepant resolution Y of C n /G. In this paper we take an arbitrary resolution Y → C n /G and prove that it can support only a finite number (up to a twist by a line bundle) of flat families of G-constellations. We give a complete classification of these families which allows one to explicitly compute them. For the precise statement of the classification see the end of this introduction. A motivation for this study is the fact that if a flat family of G-constellations on a crepant resolution Y of C n /G is sufficiently orthogonal, then it defines an equivalence D(Y ) → D G (C n ) ( [Log08], Theorem 1.1), i.e. the derived McKay correspondence conjecture holds for Y . For an example of a specific application of this see [Log08], §4, where the first known example of a derived McKay correspondence for a non-projective crepant resolution is explicitly constructed. This paper is laid out as follows. At the outset we allow Y to be any normal scheme birational to the quotient space X and first of all we move from the category Coh G (C n ) to the equivalent category Mod fg -R G of the finitely-generated modules for the cross product algebra R G, where R denotes the coordinate ring C[x 1 , . . . , x n ] of C n . This makes a family of G-constellations into a vector bundle on Y . 
In Section 1 we develop a geometrical naturality criterion for such families: mimicking the moduli spaces M θ of θ-stable G-constellations and their tautological families, we demand for a G-constellation parametrised in a family F by a point p ∈ Y to be supported precisely on the G-orbit corresponding to the point π(p) in the quotient space X. In other words, the support of the corresponding sheaf on Y × C n must lie within the fibre product Y × X C n . We call the families which satisfy this condition gnat-families (short for a geometrically natural) and demonstrate (Proposition 1.5) that they enjoy a number of other natural properties, including being equivalent (locally isomorphic) to the natural family π * q * O C n on the open set of Y which lies over the free orbits in X. In this natural family a G-constellation which lies over a free orbit is the unique G-constellation supported on that orbit -its reduced subscheme structure. Thus, in a sense, gnat-families can be viewed as flat deformations of free orbits of G. Another property which characterises gnat-families is that it is possible to embed them into K(C n ), considered as a constant sheaf on Y . This leads us to study G-equivariant locally free sub-O Y -modules of K(C n ). In Section 2, we study the rank one case. A G-invariant invertible sub-O Y -module of K(C n ) is just a Cartier divisor, and we define G-Car(Y ), a group of G-Cartier divisors on Y , as a natural extension of the group of Cartier divisors which fits into a short exact sequence where G ∨ is the group of 1-dimensional irreducible representations of G. We then define Q-valued valuations of these G-Cartier divisors at prime Weil divisors of Y and define G-Div Y , the group of G-Weil divisors of Y , as a torsion-free subgroup of Q-Weil divisors which fits into a following exact sequence: We then show that the three vertical maps in this diagram, val K , the ordinary Z-valued valuation of Cartier divisors, val K G , the Q-valued valuation of G-Cartier divisors, and their quotient val G ∨ , a Q/Z-valued valuation of G ∨ , are all isomorphisms when Y is smooth and proper over X. Then, in Section 3, we observe that when our group G is abelian all its irreducible representations are of rank 1, so any gnat-family splits into invertible G-eigensheaves. Thus G-Weil divisors are all that we need to classify it after an embedding into K(C n ). We further show that, since any gnat-family F embedded into K(C n ) must be closed under the natural action of R on the latter, all the G-eigensheaves into which F decomposes must be, in a certain sense, close to each other inside K(C n ). Up to a twist by a line bundle, this leaves only a finite number of possibilities for the corresponding G-Weil divisors. Thus, surprisingly, the number of equivalence classes of gnat-families on any Y is finite. Our main result (Theorem 4.1) is: Theorem (Classification of gnat-families). Let G be a finite abelian subgroup of GL n (C), X the quotient of C n by the action of G and Y a resolution of X. Then isomorphism classes of gnat-families on Y are in 1-to-1 correspondence with linear equivalence classes of G-divisor sets {D χ } χ∈G ∨ , each D χ a χ-Weil divisor, which satisfy the inequalities Here ρ(f ) ∈ G ∨ is the homogeneous weight of f . Such a divisor set {D χ } corresponds then to a gnat-family L(−D χ ). This correspondence descends to a 1-to-1 correspondence between equivalence classes of gnat-families and sets {D χ } as above and with D χ 0 = 0, where χ 0 is the trivial character. 
Furthermore, each divisor D χ in such a set satisfies As a consequence, the number of equivalence classes of gnat-families on Y is finite. Acknowledgements: The author would like to express his gratitude to Alastair Craw, Akira Ishii and Dmitry Kaledin for useful discussions on the subject and to Alastair King for the motivation, the discussions and the support. This paper was completed during the author's stay at RIMS, Kyoto, and one would like to thank everyone at the institute for their hospitality. Families of G-Constellations Let G be a finite abelian group and let V giv be an n-dimensional faithful representation of G. We identify the symmetric algebra S(V giv ∨ ) with the coordinate ring R of C n via a choice of such an isomorphism that the induced action of G on C n is diagonal. The (left) action of G on V giv induces a (left) action of G on R, where we adopt the convention that When we consider the induced scheme morphisms g : C n → C n and the induced sheaf morphisms g : O C n → g −1 * O Cn , the convention above ensures that for any point x ∈ C n and any function f in the stalk O C n ,x at x, the function g.f is, naturally, an element of the stalk O C n ,g.x at g.x Corresponding to the inclusion R G ⊂ R of the subring of G-invariant functions we have the quotient map q : C n → X, where X = Spec R G is the quotient space. This space is generally singular. We first wish to establish a notion of a family of G-Constellations parametrised by an arbitrary scheme. We would like for a family of G-constellations to be a locally free sheaf on Y , whose restriction to any point of Y would give us the respective G-constellation. We'd like this restriction to be a finite-dimensional vector-space, and for this purpose, it would be better to consider, instead of the whole G-constellation F, just its space of global sections Γ(F). It is a vector space with G and R actions, satisfying On the other hand, for any vector space V with G and R actions satisfying (1.2), we can define maps g :Ṽ → g −1 * Ṽ to give the sheafṼ = V ⊗ R O C n a G-equivariant structure. It is convenient to view such vector spaces as modules for the following non-commutative algebra: Definition 1.2. A cross-product algebra R G is an algebra, which has the vector space structure of R ⊗ C C[G] and the product defined by setting, for all g 1 , g 2 ∈ G and f 1 , f 2 ∈ R, This is not a pure formalism -R G is one of the non-commutative crepant resolutions of C n /G, a certain class of non-commutative algebras introduced by Michel van den Bergh in [dB02] as an analogue of a commutative crepant resolution for an arbitrary non-quotient Gorenstein singularity. For three-dimensional terminal singularities, van den Bergh shows ([dB02], Theorem 6.3.1) that if a non-commutative crepant resolution Q exists, then it is possible to construct commutative crepant resolutions as moduli spaces of certain stable Qmodules. Functors Γ(•) and• = (•) ⊗ R O C n give an equivalence (compare to [Har77], p. 113, Corollary 5.5) between the categories of quasi-coherent G-equivariant sheaves on C n and of R G-modules. G-constellations then correspond to R G-modules, whose underlying Grepresentation is V reg . As an abuse of notation, we shall use the term 'G-constellation' to refer to both the equivariant sheaf and the corresponding R G-module. Definition 1.3. 
A family of G-constellations parametrised by a scheme S is a sheaf F of (R G) ⊗ C O S -modules, locally free as an O S -module, and such that for any point ι p : Spec C → S, its fiber F |p = ι * p F is a G-constellation. We shall say that two families F and F are equivalent if they are locally isomorphic as gnat-Families Let Y be a normal scheme and π : Y → X be a birational map. We wish to refine the definition (1.3) above and develop a notion of a geometrically natural family of G-constellations parametrised by Y . Any free G-orbit supports a unique G-cluster Z ⊂ C n : the reduced induced closed subscheme structure. Let U be an open subset of Y such that π(U ) consists of free orbits of G and consider the sheaf π * q * O C n restricted to U . It has a natural (R G)-module structure induced from O C n . It is locally free as an O U module, since the quotient map q is flat wherever G acts freely. Its fiber at a point p ∈ Y is Γ(O Z ), where Z is the G-cluster corresponding to the free orbit q −1 π(p). Thus π * q * O C n is a natural family of G-constellations, indeed of G-clusters, on U ⊂ Y . Its fiber at the generic point of Y is K(C n ). The Normal Basis Theorem from Galois theory ( [Gar86], Theorem 19.6) gives an isomorphism from K(C n ) to the generic fiber of any G-constellation family on Y , which we can write as K(Y ) ⊗ C V reg , but this isomorphism is only K(Y ) and G, but not necessarily R, equivariant. On the other hand, for any G-constellation in a sense of G-equivariant sheaf, we can consider its support in C n . For instance, in the natural family π * q * O C n discussed above the support of the G-constellation parametrised by a point p ∈ U is precisely the G-orbit q −1 π(p). This turns out to be the criterion we seek and we shall show that any family satisfying it is generically equivalent to the natural one. Definition 1.4. A gnat-family F (short for geometrically natural family) is a family of G-constellations parametrised by Y such that for any p ∈ Y q Supp C n F |p = π(p) (1.4) Proposition 1.5. Let Y be a normal scheme and π : Y → X be a birational map. Let F be a family of G-constellations on Y . Then the following are equivalent: 1. On any U ⊂ Y , such that πU consists of free orbits, F is equivalent to π * q * O C n . There exists an where K(C n ) is viewed as a constant sheaf on Y and given a O Y -module structure via the birational map π : Y → X. 4. F is a gnat-family. The action of Proof. 1 ⇒ 2 is restricting any of the local isomorphisms to the stalk at the generic point p Y of Y . 2 ⇒ 3: the embedding is given by the natural map F → F ⊗ K(Y ). As Y is irreducible and F is locally free, F ⊗ K(Y ) is isomorphic to F p Y , and hence to K(C n ). 3 ⇒ 5 is immediate by inspecting the natural R G ⊗ C O Y -module structure on K(C n ). 5 ⇒ 4 is also immediate, as the descent of the action of Therefore m π(p) = (Ann R F |p ) G , which is equivalent to (1.4). 4 ⇒ 5: Consider the following composition of algebra morphisms: where α is the action map of R G ⊗ C O Y on F and β p is restriction to the fiber at a point . Thus we have: This map (1.6) is an O U -algebra homomorphism of (split) Azumaya algebras over U of the same rank. By a general result on Azumaya algebras any such is an isomorphism (see [ACvdE05], Theorem 5.3, for full generality, but the original result in [AG60], Corollary 3.4 will also suffice here). Now Skolem-Noether theorem for Azumaya algebras ( [Mil80], IV, §2, Proposition 2.3) implies that locally α must be induced by isomorphisms π * q * O C n ∼ − → F. 
G-Cartier and G-Weil divisors If F is a gnat-family, by Proposition 1.5 we can embed it into K(C n ). We need, therefore, to study G-subsheaves of K(C n ) which are locally free on Y . In this section we treat the rank 1 case, i.e. the invertible sheaves. Now, on an arbitrary scheme S, an invertible sheaf together with its embedding into K(S) defines a unique Cartier divisor on S. But here we embed not into K(Y ) but into its Galois extension We therefore seek to extend the familiar construction of Cartier divisors to accommodate for this fact. G-Cartier divisors We write G ∨ for Hom(G, C * ), the group of irreducible representations of G of rank 1. Definition 2.1. We shall say that a rational function We shall denote by K χ (C n ) the subset of K(C n ) of homogeneous elements of a specific weight χ and by K G (C n ) the subset of K(C n ) of all the G-homogeneous elements. We shall use R χ and R G to mean R ∩ K χ (C n ) and R ∩ K G (C n ) respectively. NB: The choice of a sign is dictated by wanting f ∈ R to be homogeneous of weight The invertible elements of K G (C n ) form a multiplicative group which we shall denote by K * G (C n ). We have a short exact sequence: The following replicates, almost word-for-word, the definition of a Cartier divisor in [Har77], pp. 140-141. Observe that (2.2) gives a well-defined short exact sequence: Given a G-Cartier divisor, we call its image χ ∈ G ∨ under ρ the weight of the divisor and say, further, that the divisor is χ-Cartier. A G-Cartier divisor can be specified by a choice of an open cover {U i } of Y and functions . In such case, the weight of the divisor is the weight of any one of f i . As with ordinary Cartier divisors, we say that a G-Cartier divisor is principal if it lies in the image of the natural map K * Proof. A standard argument in [Har77], Proposition 6.13, shows everything claimed, apart from the fact we can embed any invertible G-sheaf L, with G acting by some χ ∈ G ∨ , as a sub-O Y -module into K(C n ). Given such L, we consider the sheaf L ⊗ O Y K(Y ). On every open set U i where L is trivial, it is G-equivariantly isomorphic to the constant sheaf K χ (C n ). On an irreducible scheme a sheaf constant on an open cover is constant itself, so as Y is irreducible we have K χ (C n ) and a particular choice of this isomorphism gives the necessary embedding as Homogeneous valuations We now aim to develop a matching notion of G-Weil divisors. Recall that the homomorphism from ordinary Cartier to ordinary Weil divisors is defined in terms of valuations of rational functions at prime Weil divisors of Y . Valuations at prime divisors of Y define a unique group homomorphism val K from K * (Y ) to Div Y , the group of Weil divisors. Looking at the short exact sequence (2.2), we see that val K must extend uniquely to a homomorphism val K G from K * G (C n ) to Q-Div Y , as G ∨ is finite and Q is injective. We further obtain a quotient homomorphism val G ∨ from G ∨ to Q/Z-Div Y . Explicitly, we set: Definition 2.4. Let P be a prime Weil divisor on Y . For any f ∈ K * G (C n ), observe that f |G| is necessarily of trivial weight and hence lies in K(Y ). We define valuation of f at P to be where v P (f |G| ) is the ordinary valuation in the local ring of P . For any χ ∈ G ∨ , observe that for any f, f homogeneous of weight χ their ratio f /f is of trivial character and therefore has integer valuation. We define valuation of χ at P to be where f is any homogeneous function of weight χ and frac(-) denotes the fractional part. 
It can be readily verified that val K G = v P (-)P and val G ∨ = v P (-)P . Furthermore, the short exact sequence (2.3) becomes a commutative diagram: G-Weil divisors Aiming to have a short exact sequence similar to (2.3), we now define the group G-Div Y of G-Weil divisors to be the subgroup of Q-Div Y , which consists of the pre-images of val G ∨ (G ∨ ) ⊂ Q/Z-Div Y . Definition 2.5. We say that a Q-Weil divisor q P P on Y is a G-Weil divisor if there exists χ ∈ G ∨ such that frac(q P ) = v P (χ) for all prime Weil P (2.7) We call a G-Weil divisor principal if it is an image of a single function f ∈ K * G (C n ) under val K G , call two G-Weil divisors linearly equivalent if their difference is principal and call a divisor q i D i effective if all q i ≥ 0. We now have a following commutative diagram: A warning: for general Y , even a smooth one, G-Cartier and G-Weil divisors may not be very well behaved. For an example let Y be the smooth locus of X. It can be shown, that while val K is an isomorphism, val K G is not even injective as G-Car Y has torsion. And val G ∨ is the zero map, thus G-Div Y is just Div Y . Proposition 2.6. If Y is smooth and proper over X, then val K , val K G and val G ∨ in (2.8) are isomorphisms. Proof. If Y is smooth, or at least locally factorial, val K is well-known to be an isomorphism ( [Har77], Proposition 6.11). It therefore suffices to show that val G ∨ is injective and hence an isomorphism. As diagram (2.8) commutes, val K G will then also have to be an isomorphism. Fix χ ∈ G ∨ . Let Y χ denote the normalisation of Y × X (C n / ker χ). It is a Galois covering of Y whose Galois group is χ(G). By Zariski-Nagata's purity of the branch locus theorem ([Zar58], Proposition 2), the ramification locus of Y χ → Y is either empty or of pure codimension one. As Y is smooth, Y χ → Y being finite and unramified would make it anétale cover. Which is impossible, since a resolution of a quotient singularity is well known to be simply-connected (see, for instance, [Ver00], Theorem 4.1). Thus, we can assume there exists a ramification divisor P ⊂ Y χ . Let Q be its image in Y . Let Ram(P ) be the subgroup of G which fixes P pointwise. Then n ram = |Ram(P )/ ker χ| is the ramification index of P . We can take ordinary integer valuations of K * χ (C n ) on prime divisors of Y χ as K * χ (C n ) ⊂ K(C n ) ker χ . It is easy to see that for any f ∈ K * where LHS is a rational valuation in sense of Definition 2.4. In particular, there would exist f χ ∈ K * χ (C n ), such that v Q (f χ ) = 0, i.e. f χ is a unit in O Yχ,P . Which is impossible: any g ∈ Ram(P ) fixes P pointwise, in particular f − g.f ∈ m Y,P for any f ∈ O Y,P . As Ram(P )/ ker χ is non-trivial we can choose g such that χ(g) = 1 and then f χ = 1 1−χ(g) (f χ − g.f χ ) must lie in m Y,P . This finishes the proof. For abelian G, this all can be seen very explicitly by exploiting the toric structure of the singularity: even though we do not assume the resolution Y to be toric, it has been proven by Bouvier ([Bou98], Theorem 1.1) and by Ishii and Kollár ([KI03], Corollary 3.17, in a more general context of Nash problem) that every essential divisor over X (i.e. a divisor which must appear on every resolution) is toric. The set of essential toric divisors is well understood -it can be identified with the Hilbert basis of the positive octant of the toric lattice of weights, and then with a subset of Ext 1 (G ∨ , Z) = Hom(G ∨ , Q/Z). 
This correspondence sends each divisor precisely to the valuation of G ∨ at it, see [Log04], Section 4.3 for more detail. We also show that, away from a finite number of prime divisors on Y , all G-Weil divisors are ordinary Weil. Proposition 2.7. Unless a prime divisor P ⊂ Y is exceptional or its image in X is a branch divisor of C n → X, the valuation v P : G ∨ → Q/Z is the zero-map. Proof. If P is not exceptional, let Q be its image in X. The valuations at P and Q are the same, so it suffices to prove the statement about v Q . Let P be any divisor in C n which lies above Q. As in Proposition 2.6, for any f ∈ K * G (C n ) we have v Q (f ) = 1 n ram v P (f ) where n ram is the ramification index of P . Unless Q is a branch divisor, n ram = 1 and v Q = v P . Which makes v Q integer-valued on K * G (C n ) and makes the quotient homomorphism G ∨ → Q/Z the zero map. Reductor Sets From now on, in addition to assuming that G is a finite group acting faithfully on V giv , we also assume that G is abelian. We further assume that Y is smooth and π : Y → X is proper. Let F be a gnat-family on Y . Write the decomposition of F into G-eigensheaves as χ∈G ∨ F χ . By Proposition 1.5 we can embed F into K(C n ) and, as was demonstrated in Proposition 2.3, the image of each F χ defines a χ-Cartier divisor. Hence Definition 3.1. Let {D χ } χ∈G ∨ be a set of G-Weil divisors on Y . We call it a reductor set if each D χ is a χ-Weil divisor and ⊕L(−D χ ) is a gnat-family on Y . We call a reductor set normalised if D χ 0 = 0. We say that two reductor sets Conversely, if the families are equivalent, then by applying Lemma 3.2 to each local isomorphism, we obtain the data {U i , f i }, where U i are an open cover of Y and on each U i multiplication by f i is an isomorphism One can readily check that such {U i , f i } must define a Cartier divisor and that the corresponding Weil divisor is the requisite divisor N . Corollary 3.5. In each equivalence classes of gnat-families there is precisely one family whose reductor set is normalised. Reductor Condition We now investigate when is a set {D χ } of G-divisors a reductor set. This issue is the issue of L(−D χ ) actually being (R G) ⊗ O Y -module. By definition it is a sub-O Y -module of K(C n ), but there is no a priori reason for it to also be closed under the natural R G-action on K(C n ). If it is closed, it can be checked that it trivially satisfies all the other requirements in Proposition 1.5, item 3, which makes it a gnat-family. We further observe that L(−D χ ) is always closed under the action of G, so it all boils down to the closure under the action of R. Recall, that we write R G for R ∩ K * G (C n ), the G-homogeneous regular polynomials, and R χ for R ∩ K * χ (C n ), the G-homogeneous regular polynomials of weight χ ∈ G ∨ . Then it is a reductor set if and only if, for any f ∈ R G , the divisor i.e. it is effective. Remarks: 1. If we choose a G-eigenbasis of V giv , then its dual basis, a set of basic monomials x 1 , . . . , x n , generates R G as a semi-group. As condition (3.2) is multiplicative on f , it is sufficient to check it only for f being one of x i . This leaves us with a finite number of inequalities to check. 2. Numerically, if we write each D χ as q χ,P P , inequalities (3.2) subdivide into independent sets of inequalities a set for each prime divisor P on Y . 
This shows that a gnat-family can be specified independently at each prime divisor of Y : we can construct reductor sets {D χ } by independently choosing for each prime divisor P any of the sets of numbers {q χ,P } χ∈G ∨ which satisfy (3.3). 3. There is an interesting link here with the work of Craw, Maclagan and Thomas in [CMT07a] which bears further investigation. In a toric context, they have rediscovered these inequalities as dual, in a certain sense, to the defining equations of the coherent component Y θ of the moduli space M θ of θ-semistable G-constellations. They then use them to compute the distinguished θ-semistable G-constellations parametrised by torus orbits of Y θ . In particular, their Theorem 7.2 allows them to explicitly write down the tautological gnat-family on Y θ and suggests that, up to a reflection, it is the gnat family which minimizes θ.{D χ }. We shall see an example of that for the case of Y θ = Hilb G in our Proposition 3.17. Proof. Take an open cover U i on which all L(−D χ ) are trivialised and write g χ,i for the generator of L(−D χ ) on U i . As R is a direct sum of its G-homogeneous parts, it is sufficient to check the closure under the action of just the homogeneous functions. Thus it suffices to establish that for each f ∈ R G , each U i and each On the other hand, with the notation above, G- and it being effective is equivalent to for all U i 's. The result follows. Canonical family We have not yet given any evidence of any gnat-families actually existing on an arbitrary resolution Y of X. Proposition 3.7 (Canonical family). Let Y be a resolution of X = C n /G. Define the set where P runs over all prime Weil divisors on Y and v(P, χ) are the numbers introduced in Definition 2.4 (lifted to [0, 1) ⊂ Q). Then {C χ } χ∈G ∨ is a reductor set. We call the corresponding family the canonical gnat-family on Y . Proof. We must show that {C χ } satisfies the inequalities (3.2). Choose any χ ∈ G ∨ , any f ∈ R G and any prime divisor P on Y . Observe that 0 ≤ v P (χ), v P (χρ(f )) < 1 by definition, As the above expression must be integer-valued, we further have This family has a following geometrical description: Proposition 3.8. On any resolution Y , the canonical family is isomorphic to the pushdown to Y of the structure sheaf N of the normalisation of the reduced fiber product Y × X C n . then N can be identified with the integral closure of the image of α in K(C n ). Due to G-equivariance α decomposes as as a product of |G| homogeneous functions is invariant. Hence (ker α χ ) |G| ⊂ ker α χ 0 = 0 as required. Write χ∈G ∨ N χ for the decomposition of N into G-eigensheaves. Fix a point p ∈ Y and observe that f ∈ K χ (C n ) is integral over the local ring In particular, the generator c χ of C χ at p lies in (N χ ) p . Observe further that for any f ∈ (N χ ) p the Weil divisor Div(f )−C χ is effective at p as the coefficients of C χ are just the fractional parts of those of Div(f ) and the latter is effective. Therefore c χ generates (N χ ) p as O Y,p -module, giving N χ = L(−C χ ) as required. Symmetries Having demonstrated that the set of equivalence classes of gnat-families is always non-empty, we now establish two types of symmetries which this set possesses. 
It is worth noting that from the description of the symmetries of the chambers in the parameter space of the stability conditions for G-constellations described in [CI04], Section 2.5, it follows that all the symmetries described below take the subset of gnat-families on Y consisting of universal families of stable G-constellations into itself. Proposition 3.9 (Character Shift). Let {D χ } be a normalised reductor set. Then for any χ in G ∨ is also a normalised reductor set. We call it the χ-shift of {D χ }. Proof. Writing out the reductor condition (3.2) for the new divisor set {D χ } we get: Cancelling out D −1 λ , we obtain precisely the reductor condition for the original set {D χ }. And since we see that the new reductor set is normalised. NB: Observe, that for a reductor set {D χ } and for any χ-Weil divisor N , the set {D χ +N } is linearly equivalent to the χ-shift of {D χ }. Proposition 3.10 (Reflection). Let {D χ } be a normalised reductor set. Then the set {−D χ −1 } is also a normalised reductor set, which we call the reflection of {D χ }. Proof. We need to show that which is one of the reductor equations the original set {D χ } must satisfy. As D χ 0 = −D χ 0 = 0, the new set is normalised. Maximal shift family and finiteness We now examine the individual line bundles L(−D χ ) in a gnat-family and show that the reductor condition imposes a restriction on how far apart from each other they can be. Lemma 3.11. Let {D χ } be a reductor set. Write each D χ as q χ,P P , where P ranges over all the prime Weil divisors on Y . For any χ 1 , χ 2 ∈ G ∨ and for any prime Weil divisor P , we necessarily have Proof. Both inequalities follow directly from the reductor condition (3.2): the right inequality by setting χ = χ 1 ∈ G ∨ , ρ(f ) = χ 2 χ 1 and letting f vary within R ρ(f ) ; the left inequality by setting χ = χ 2 and ρ(f ) = χ 1 χ 2 . This suggests the following definition: Definition 3.12. For each character χ ∈ G ∨ , we define the maximal shift χ-divisor M χ to be where P ranges over all prime Weil divisors on Y . Lemma 3.13. The G-Weil divisor set {M χ } is a normalised reductor set. We call the corresponding family the maximal shift gnat-family on Y . Proof. We need to show that for any f ∈ R G and any prime divisor where m χ and m χρ(f ) are chosen to achieve the minimality in (3.6). Observe that m χ f is also a G-homogeneous element of R, therefore by the minimality as required. To establish that M χ 0 = 0 we observe that for any G-homogeneous f ∈ R we have v P (f ) ≥ 0 on any prime Weil divisor P as f | G| is globally regular. Moreover for f in R χ 0 = R G this lower bound is achieved by f = 1. Observe that with Lemma 3.13 we have established another gnat-family which always exists on any resolution Y . While sometimes it coincides with the canonical family, generally the two are distinct. Proposition 3.14 (Maximal Shifts). Let {D χ } be a normalised reductor set. Then for any Moreover both the bounds are achieved. Proposition 3.15. If the coefficient of a maximal shift divisor M χ at a prime divisor P ⊂ Y is non-zero, then either P is an exceptional divisor or the image of P in X is a branch divisor of C n q − → X. Proof. Let P be a prime divisor on X which is not a branch divisor of q. Let χ ∈ G ∨ . By the defining formula (3.6) it suffices to find f ∈ R χ such that v P (f ) = 0. As R is a PID, there exist t 1 , . . . , t k ∈ R such that (t 1 ), . . . , (t k ) are all the distinct prime divisors lying over P in C n . Observe that the product t 1 . . . 
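To make the bounds and the finiteness statement concrete, here is a small worked example that is not taken from the paper: the cyclic group of order two acting on the plane. The computation of the exceptional valuation uses the standard toric description of the minimal resolution of the A_1 singularity, which is an assumption of this sketch rather than something carried out in the text.

% Worked example (not from the paper): G = Z/2 acting on C^2 by -1.
Let $G = \{\pm 1\} \subset GL_2(\mathbb{C})$, so that $X = \mathbb{C}^2/G$ is the
$A_1$ singularity and $Y \to X$ its minimal resolution, with a single exceptional
curve $E$. Write $G^\vee = \{\chi_0, \chi_1\}$, with $x, y$ homogeneous of weight
$\chi_1$. The invariants $x^2, xy, y^2$ each vanish to order $1$ along $E$, so by
Definition 2.4
\[
  v_E(x) = v_E(y) = \tfrac{1}{2}, \qquad
  v_E(\chi_1) = \tfrac{1}{2}, \qquad v_E(\chi_0) = 0 .
\]
Since $G$ acts freely away from the origin there are no branch divisors, so by
Proposition 3.15 the maximal shift divisors are supported on $E$:
$M_{\chi_0} = 0$ and $M_{\chi_1} = \tfrac{1}{2}E$. A normalised reductor set is
$D_{\chi_0} = 0$ and $D_{\chi_1} = qE$ with $\operatorname{frac}(q) = \tfrac{1}{2}$
and, by Proposition 3.14 (note $\chi_1 = \chi_1^{-1}$),
$-\tfrac{1}{2} \le q \le \tfrac{1}{2}$. Hence $q = \pm\tfrac{1}{2}$, and there are
exactly two equivalence classes of gnat-families on $Y$:
$\mathcal{O}_Y \oplus \mathcal{L}(-\tfrac{1}{2}E)$, which here is both the
canonical family and the maximal shift family, and its reflection
$\mathcal{O}_Y \oplus \mathcal{L}(\tfrac{1}{2}E)$.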
t k must be G-homogeneous. Since P is not a branch divisor, there exists u ∈ R such that t 1 . . . t k u is invariant and u / ∈ (t i ) for all i. Then u ′ = u |G|−1 is a G-homogeneous function of the same weight as t 1 . . . t k and v P (u ′ ) = 0. Now take any f ∈ R χ and consider its factorization into irreducibles. G-homogeneity of f implies that all t i occur with the same power, say m. Now replacing (t 1 . . . t k ) m in the factorization by (u ′ ) m we obtain an element of R χ whose valuation at P is zero. Corollary 3.16. The number of equivalence classes of gnat-families on Y is finite. Proof. Let {D χ } be a normalised reductor set. Coefficients of D χ at prime divisors P of Y have fixed fractional parts (Definition 2.5), are bounded above and below (Proposition 3.14) and are zero at all but a finite number of P (Proposition 3.15). This leaves only a finite number of possibilities. For one particular resolution Y the family provided by the maximal shift divisors has a nice geometrical description. Proposition 3.17. Let Y = Hilb G C n , the coherent component of the moduli space of G-clusters in C n . If Y is smooth, then L(−M χ ) is the universal family F of G-clusters parametrised by Y , up to the usual equivalence of families. Proof. Firstly F is a gnat-family, as over any set U ⊂ X such that G acts freely on q −1 (U ) we have F| U ≅ π * q * O C n | U . Write F as ⊕L(−D χ ) for some reductor set {D χ }. Take an open cover {U i } of Y and consider the generators {f χ,i } of D χ on each U i . Working up to equivalence, we can consider {D χ } to be normalised and so f χ 0 ,i = 1 for all U i . Now any G-cluster Z is given by some invariant ideal I ⊂ R and so the corresponding G-constellation H 0 (O Z ) is given by R/I. In particular note that R/I is generated by the R-action on the generator of the χ 0 -eigenspace. Therefore any f χ,i is generated from f χ 0 ,i = 1 by the R-action, which means that all f χ,i lie in R. But this means that for any prime Weil divisor P on Y we have v P (f χ,i ) ≥ min f ∈Rχ v P (f ) and therefore D χ ≥ M χ . Now Proposition 3.14 forces the equality. Conclusion We summarise the results achieved in the following theorem: Theorem 4.1 (Classification of gnat-families). Let G be a finite abelian subgroup of GL n (C), X the quotient of C n by the action of G, Y nonsingular and π : Y → X a proper birational map. Then isomorphism classes of gnat-families on Y are in 1-to-1 correspondence with linear equivalence classes of G-divisor sets {D χ } χ∈G ∨ , each D χ a χ-Weil divisor, which satisfy the inequalities D χ + Div(f ) − D χρ(f ) ≥ 0 for all χ ∈ G ∨ and all G-homogeneous f ∈ R, where ρ(f ) ∈ G ∨ is the homogeneous weight of f . Such a divisor set {D χ } corresponds then to a gnat-family L(−D χ ). This correspondence descends to a 1-to-1 correspondence between equivalence classes of gnat-families and sets {D χ } as above and with D χ 0 = 0. Furthermore, each divisor D χ in such a set satisfies the inequality −Σ P (min f ∈R χ −1 v P (f ))P ≤ D χ ≤ Σ P (min f ∈Rχ v P (f ))P, where the sums run over the prime Weil divisors P of Y . As a consequence, the number of equivalence classes of gnat-families is finite. Proof. Corollary 3.3 establishes the correspondence between isomorphism classes of gnat-families and linear equivalence classes of reductor sets. Proposition 3.6 gives a description of reductor sets as the divisor sets satisfying the reductor condition inequalities. Corollary 3.5 gives the correspondence on the level of equivalence classes of gnat-families and normalised reductor sets. Proposition 3.14 establishes the bounds on the set of all normalised reductor sets and Corollary 3.16 uses it to show that the set of all normalised reductor sets is finite.
2014-10-01T00:00:00.000Z
2005-12-31T00:00:00.000
{ "year": 2005, "sha1": "8447afd1f47dbdcb6fe96855b82c9a4575f22d9f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8447afd1f47dbdcb6fe96855b82c9a4575f22d9f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
256884106
pes2o/s2orc
v3-fos-license
Potential Avenues for Exosomal Isolation and Detection Methods to Enhance Small-Cell Lung Cancer Analysis Around the world, lung cancer has long been the main cause of cancer-related deaths, with small-cell lung cancer (SCLC) being the deadliest form of lung cancer. Cancer cell-derived exosomes and exosomal miRNAs are considered promising biomarkers for the diagnosis and prognosis of various diseases, including SCLC. Due to the rapidity of SCLC metastasis, early detection can offer a better diagnosis and prognosis and therefore increase the patient's chances of survival. Over the past several years, many methodologies have been developed for analyzing non-SCLC-derived exosomes. However, minimal advances have been made in SCLC-derived exosome analysis methodologies. This Review discusses the epidemiology and prominent biomarkers of SCLC, followed by a discussion of effective strategies for isolating and detecting SCLC-derived exosomes and exosomal miRNA, highlighting the critical challenges and limitations of current methodologies. Finally, an overview is provided detailing future perspectives for exosome-based SCLC research. INTRODUCTION Lung cancer is the leading cause of cancer mortality globally, with approximately 2.1 million new cases and 1.8 million deaths in 2018. 1 The two primary malignant tumors of the lungs are non-small-cell lung cancer (NSCLC) and small-cell lung cancer (SCLC). NSCLC is the more common form, accounting for 85% of all lung cancer cases, while SCLC accounts for 15% of lung cancer cases. 2 SCLC is a high-grade, aggressive neuroendocrine carcinoma with few effective treatment choices. SCLC's early metastasis results in a poor prognosis and, as a result, alarmingly low overall survival rates. Thus, identifying distinctive markers to improve the early detection of SCLC could redefine this disease's diagnostic and prognostic landscape. 3 Circulating tumor cells (CTCs), circulating tumor-specific nucleic acids (ctDNA, ctRNA, miRNAs, lncRNAs), extracellular vesicles (EVs) (e.g., exosomes and apoptotic bodies), and autoantibodies are some of the most prevalent circulating biomarkers (CBs) with strong diagnostic, prognostic, and therapeutic potential. 4 The study of CBs in SCLC may open new avenues for monitoring the molecular phenotype of a patient's tumor during the disease and identifying biomarkers for tumor progression and ministration. 5 Among these CBs, tumor-derived exosomes (TDEs) have become potential biomarkers for the early diagnosis of various cancers, including SCLC. They have been used as biomarkers for examining cancer heterogeneity, tracking cancer patients after treatment, monitoring the development of resistance to therapy, and contributing to the precise and personalized treatment of SCLC patients. 6 A critical advantage of TDE-based studies is that, since SCLC treatment does not usually include surgery, tissue samples are challenging to obtain. TDEs can, however, be easily isolated from body fluids and could serve as promising "liquid biopsy" biomarkers of SCLC. Additionally, these TDEs contain proteins, lipids, and nucleic acids that resemble the molecular profiles of the originating parental cancer cells, thus making these TDEs a promising tool for investigating SCLC in a clinical setting. 7 Moreover, TDE-derived exosomal miRNAs appear to be a gene signature that could reveal information about the pathobiology and prognosis of the disease.
With exosomal miRNA transfer between cancer and stromal cells being linked to the development and spread of cancer in the tumor microenvironment, including lung cancer, 9 it is increasingly apparent that using miRNAs as SCLC biomarkers for early detection and diagnosis has the latent potential to improve the course of treatment, which is critical in a clinical setting. Exosomal microRNAs are associated with numerous pathophysiological processes in SCLC, including epithelial-mesenchymal transition, growth, proliferation, migration, invasion, and angiogenesis, all of which ultimately result in tumor progression and metastasis. 10 Furthermore, the absence of biomarkers for treatment selection and for monitoring patients with SCLC and their therapeutic options leads to poor outcomes, making new prognostic biomarkers necessary to enhance their management. Given the invasive nature of diagnosing SCLC, developing alternative approaches, such as detecting molecular markers like exosomal miRNA that are linked with this disease, may enhance diagnostic and prognostic efficacy. 11 Additionally, non- or minimally invasive diagnostic tools could be rolled out for at-risk individuals to diagnose patients at earlier stages of the disease, allowing for an improved prognosis. Over the past decade, several exosome separation methods have been developed and have exhibited promising results, with a dominant focus on the exosome's biochemical and physiochemical properties. The notable techniques include ultracentrifugation- (differential and density gradient), particle size- (i.e., ultrafiltration and size exclusion chromatography), immunoaffinity-, polymer-, and microfluidics-based platforms. These techniques isolate all the sample's exosomes, including exosomes from different cell types, such as carcinoma cells. After isolating and purifying all the exosomes, it is essential to accurately characterize and quantify the TDEs, primarily for diagnostic purposes. Some of the most popular methods for the characterization and quantification of exosomes include dynamic light scattering (DLS), nanoparticle tracking analysis (NTA), atomic force microscopy (AFM) imaging, and electron microscopy (EM) analysis. 12 These methods have a "fundamental limitation"; they cannot distinguish (i) exosomes from other nanoparticles of similar size and (ii) TDEs from total exosome populations. To quantify all the exosomes present in the samples, universal exosomal membrane markers, such as CD9, CD63, and CD81, have been widely used. These markers are widely expressed in exosomes released by almost all cell types. However, to quantify TDEs within the total population of exosomes, cancer-specific markers in combination with negative control protein biomarkers have extensively been utilized for better characterization. Western blot and ELISA commonly detect these protein biomarkers. Fortunately, thanks to the quick development of microfluidics detection techniques, high-throughput analysis may be carried out with outstanding precision and little reagent consumption. Examples of microfluidics-based exosomal protein detection techniques include fluorescence correlation microscopy (FCM), colorimetric detection, surface plasmon resonance (SPR), nuclear magnetic resonance (NMR), and electromagnetic detection. 13 For the detection of exosomal miRNAs, the exosomes must be purified.
Therefore, ultracentrifugation, 14 commercially available isolation kits such as the Total Exosome Isolation Kit, and the polymer-based ExoQuick reagent 15 are the traditional isolation techniques. Once isolated, Northern blot, 16 quantitative reverse transcription polymerase chain reaction (qRT-PCR), and next-generation sequencing (NGS) are the most commonly used methods for detecting exosomal miRNAs, including those expressed in SCLC. Several efficient and sensitive methods for exosomal miRNA detection have also been developed based on aptasensor, enzymatic, and nonenzymatic isothermal amplification methods, including cyclic enzymatic amplification, hybridization chain reaction, catalytic hairpin assembly-triggered DNA walker, and rolling circle amplification (RCA)-assisted CRISPR/Cas9 cleavage (RACE). 17,18 Exosome and exosomal-miRNA analysis methods have many challenges, with the enrichment of exosomal subpopulations being technically complicated. As antibodies that work effectively with tissues or pure proteins may not work well with exosomes due to the orientation and/or folding of the surface protein in the membrane or the availability of the epitope on the exosomal surface, well-established methods are required. Exosomes' surface features allow them to adhere to various surfaces, including other exosomes. Nonspecific binding to the extraction tube or bead surfaces during purification processes might result in biological material loss and decreased specificity. 19 Furthermore, isolation efficiency depends on the quality of the sample, resulting in variable isolation yields. 6 Existing separation and isolation techniques yield insufficient quantities of exosomes and are expensive for large-scale production. 13 Additionally, there are still obstacles in quantifying and detecting exosomes. Although numerous methods have been developed for phenotyping and quantifying exosomes, the lack of consensus regarding the detection of these vesicles has resulted in substantial controversy and contrasting findings. Due to the heterogeneity of exosomes, their low refractive index, and ineffective methods for determining the particles' size range, the assessment of these vesicles has been called into question. 6 Some cargo compositions, like miRNAs, appear to be affected by isolation approaches. Rekker et al. 20 concluded that isolation procedures could influence the exosomal miRNA profile, leading to contamination; the isolated exosome fractions are frequently "contaminated" by coisolated plasma proteins. Exosomes and exosomal miRNA can only be validated as diagnostic, prognostic, or therapeutic biomarkers once a well-recognized method is introduced for their characterization. While cutting-edge technologies are being introduced and evaluated to address this, the outcomes of this research are still vulnerable to criticism. 21 This Review discusses the current advances in the isolation and detection of lung cancer- and SCLC-derived exosomes and exosomal miRNA, highlighting significant biological and technical challenges associated with these methodologies. Future perspectives for enhancing exosome- and exosomal miRNA-based SCLC diagnostics research are also provided. BIOGENESIS OF EXOSOMES The endosomal sorting complex required for transport (ESCRT) machinery is the best-described mechanism for the biogenesis of exosomes (Figure 1).
Briefly, this system involves the formation of intracellular endosomes through inward budding, the generation of multivesicular bodies (MVBs), followed by the fusion of these bodies with the plasma membrane, and finally, the release of exosomes into the extracellular space. The four protein complexes and accessory proteins that accompany the ESCRT mechanism both facilitate the storage of cargo at the endosomal membrane and cause the budding and scission of the endosome membrane containing those cargos. 22 The process of exosome formation involves many vital proteins such as Ras-related protein GTPase Rab (Rab-GTPase), the tumor susceptibility gene 101 (TSG101), apoptosis-linked gene 2-interacting protein X (ALIX), syndecan-1, and syntenin-1. To convey the information to the target cells, the neighboring cells pick up these exosomes through direct fusion, endocytosis, or specific receptor binding. 23 Composition of Exosomes Exosomes contain numerous substances, such as specific lipids, proteins, DNA, mRNA, and noncoding RNAs, which can act as autocrine and paracrine factors. 24 Exosomes' complexity is exemplified by the transference of their contents into the cytoplasm when they move from the parent to recipient cells. 22 The complex exosomal contents are a critical determining factor of intercellular communication. They aid in transferring characteristics from the parent to recipient cells, causing exosomes to contribute to tumor formation. Therefore, a grading system for cancer progression can be evaluated using the exosome contents. 24 2.1.1. Exosomal Proteins. There are several groups of proteins carried by exosomes: (i) membrane transport and fusion-related proteins such as annexin, Rab-GTPase, and the heat shock proteins (HSPs) Hsp60, Hsp70, and Hsp90; (ii) tetraspanins, also known as the four-transmembrane cross-linked proteins, such as CD9, CD63, CD81, CD82, CD106, ICAM (intercellular adhesion molecule)-1, and Tspan8; (iii) MVB-related proteins, for instance, ALIX and TSG101, which act as stereotypical biomarkers for the characterization of exosomes; (iv) other proteins involved in cell adhesion and cytoskeletal construction, such as integrins, actin, and myosin. All these proteins are essential components of the exosomes. 24 Similarly, TDEs secreted by lung cancer cells have several proteins involved in tumor development, such as CD91, LRG1, Galectin-9, EGFR, and Wnt5b. 25 2.1.2. Exosomal Nucleic Acids. Exosomes also contain nucleic acids such as mRNAs and noncoding RNAs like miRNAs, lncRNAs, circRNAs, ribosomal RNAs (rRNAs), transfer RNAs (tRNAs), small nucleolar RNAs (snoRNAs), small nuclear RNAs (snRNAs), and piwi-interacting RNAs (piRNAs). These RNAs can play specific functional roles and are transported by exosomes from parent cells to recipient cells. 26 In addition to RNA, studies that relate to DNA are emerging. Balaj et al. 27 and Kahlert et al. 28 found tiny fragments of single-stranded DNA and large fragments of double-stranded genomic DNA in exosomes, respectively, 22 indicating that genomic DNA mutations can be determined from exosomes. Cazzoli et al. 29 investigated the expression of plasma exosomal miRNAs in lung adenocarcinoma, pulmonary granuloma, and healthy smokers. They determined that exosomal cargo is also a significant source of micro-RNAs that could be used to differentiate lung cancer patients from healthy people. 2.1.3. Common Exosome Surface Proteins and Exosomal miRNA Biomarkers for SCLC.
Surface proteins can aid in differentiating TDEs from host exosomes. However, many surface markers are shared by exosomes from different cancer cell lines and between tumor and nontumor tissues. Exosome membrane proteins play a pivotal role in exosome capture. This offers the opportunity to improve the specificity of exosome diagnosis. 30 During the biogenesis of exosomes, the collection of membrane proteins exposed on the surface of exosomes formed from cancer cell endoplasmic reticulum membranes correlates well with cancer cell membrane proteins. This finding suggests that exosomal proteins can act as tumor markers. In addition to exosomal proteins, exosomal miRNAs can also be used as tumor biomarkers. 31 Mao et al. 32 showed the critical role of exosomal miR-375-3p in modulating vascular endothelial barrier integrity and SCLC metastasis. miR-375-3p has potential for monitoring metastasis and directing clinical treatment for SCLC patients. SCLC-derived exosomes enriched in miR-375-3p could disrupt blood barriers by targeting the vascular TJ protein claudin-1, making SCLC metastasis easier. The study by Poroyko et al. 8 demonstrated that one miRNA (i.e., hsa-miR-1180) distinguished SCLC from controls, while the other three miRNAs investigated distinguished SCLC samples before and after therapy. In contrast, no miRNAs for NSCLC were found to differentiate between case, control, and treated patients. Moreover, a comparison of SCLC and NSCLC samples determined that 13 miRNAs could reliably distinguish SCLC and NSCLC patients. The three miRNAs (i.e., hsa-miR-331-5p, hsa-miR-451a, and hsa-miR-363-3p) were able to discriminate between SCLC and NSCLC cases with 100% sensitivity and specificity, thus demonstrating miRNAs' potential role as promising candidates for differentiating NSCLC and SCLC. 8 A concise summary of similar miRNAs can be found in Table 1, along with other significant exosomal proteins and micro-RNAs. [Table 1 (excerpt): miR-665, one of the most elevated exosomal miRNAs in both NSCLC and SCLC (anti-TSG101, along with other antibodies, was used to extract the exosomes); S100A16 (specimens: in vitro cell coculture system; application: therapeutics), which actively contributes to SCLC cell survival by controlling mitochondrial activity, making it a key potential target in SCLC brain metastasis; 35 and TGF-β and IL-10 (specimens: cell lines; method: ELISA; application: therapeutics), both of which regulate the cellular migration of tumor cells. More detailed characterization of exosome contents could therefore enhance therapeutic approaches based on these biomarkers.] EXOSOME ISOLATION TECHNIQUES The many characteristics of exosomes, including density, shape, size, and the associated surface proteins, are exploited by the techniques used to isolate exosomes in sufficient quantity and purity. These techniques include ultracentrifugation, chromatography, ultrafiltration, polymer-based precipitation, and affinity capture on antibody-coupled magnetic beads. 39 These were compiled by Li et al. 40 in a convenient table for exosome isolation techniques. This section is a continuation of their work and will discuss various isolation strategies. Ultracentrifugation Ultracentrifugation is one of the most widely used and published techniques and is considered the gold standard for exosome isolation 41 (Figure 2A and B). It accounts for 56% of all exosome isolation procedures used by exosome and extracellular vesicle researchers. 42
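As general background to why sedimentation can separate exosomes from larger vesicles and debris, the sedimentation velocity of a small spherical particle in a centrifugal field is given by Stokes' law (a standard fluid-mechanics relation provided here for orientation; it is not reproduced from the review itself):

$$ v = \frac{d^{2}\,(\rho_{p} - \rho_{f})}{18\,\eta}\,\omega^{2} r $$

where $d$ is the particle diameter, $\rho_{p}$ and $\rho_{f}$ are the particle and fluid densities, $\eta$ is the fluid viscosity, $\omega$ the angular velocity, and $r$ the radial distance from the rotor axis. Because $v$ scales with $d^{2}$, larger vesicles, cells, and aggregates pellet at far lower relative centrifugal forces than 30-150 nm exosomes, which is why the differential-centrifugation protocols described below escalate from roughly 300g to 100 000g.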
Ultracentrifugation can be categorized into differential centrifugation and density gradient ultracentrifugation. Differential centrifugation separates the vesicles and other subcellular particles based on their sedimentation rate. Typically composed of multiple centrifugation steps, the technique initially subjects a lysate to low-speed centrifugation (300g for 10−15 min) to remove cells and apoptotic debris. Sequentially higher-speed centrifugations (20 000g for 30 min) of the supernatant remove larger vesicles. Typically, only three centrifugations are required to precipitate the exosomes, with a final high-speed centrifugation at 100 000g for 2 h. However, the final high-speed spin also pellets bigger particles, which may include protein aggregates, apoptotic bodies, and other EV forms, so these cannot be fully separated from the exosomes; this reduces sample purity and contaminates the exosome preparation. A potential solution would be to resuspend and recentrifuge each pellet in a buffer solution (e.g., phosphate-buffered saline (PBS)), allowing for the removal of impurities. However, this will not allow for absolute separation. A sucrose density gradient combined with a centrifugation step is one of the better alternatives. This method separates the vesicles based on their flotation densities, allowing them to float upward into an overlaid sucrose gradient. The proteins or impurities are pelleted at the bottom of the tube and easily removed, allowing for aggregate-free exosome separation. 41 Pedersen et al. 7 utilized this technique to isolate exosomes from the plasma of SCLC patients. The sample was centrifuged at 20 000g for 30 min at 4°C and at 100 000g for 1 h at 4°C to obtain a pellet of microvesicles and exosomes. With subsequent washing and chemical treatment, the separation process aided in identifying 17 unique proteins for exosomes. Similarly, Mao et al. 32 employed ultracentrifugation to further investigate the role of exosomal miRNAs in SCLC as regulators in metastatic processes. The isolated SCLC-derived exosomes were then observed through fluorescence microscopy. While utilizing an ultracentrifuge is desirable, it necessitates the purchase of expensive machinery and long periods of centrifugation, limiting its applications. Additionally, the high spinning speed can damage the exosomes, resulting in loss of their structure and integrity. The overlapping size distribution of platelets and various membrane vesicle populations also poses a significant challenge for exosome preparation through differential ultracentrifugation. 43 Ultracentrifugation alone cannot distinguish between exosome subpopulations and other microparticles of similar density and size, such as protein aggregates, nucleic acid complexes, and lipids. 44 To overcome these limitations, ultracentrifugation is often used in combination with sucrose gradient ultracentrifugation or immuno-isolation. 43 Size-Based Techniques Ultrafiltration is a size-based method for isolating exosomes (Figure 2C). This method uses membrane filters with specific molecular weight or size exclusion limits. Lobb et al., 45 in their comparative study on different exosome isolation methods for lung cancer diagnosis, demonstrated that ultrafiltration isolated the greatest number of particles (<100 nm) compared to ultracentrifugation. Even though ultrafiltration can provide pure vesicles, the difficulty of removing contaminating proteins remains a major disadvantage.
This method is faster than ultracentrifugation and does not rely on expensive instrumentation. 46 Heinemann and Vykoukal 47 developed a gentle and scalable isolation method to combat ultracentrifugation's heavy-handedness. Sequential filtration was designed as a high-throughput method ideally suited for various large-volume biofluids, such as urine, lavage fluid, and cell-conditioned media. 48 This simple method is broken up into three steps: dead-end (normal), tangential flow, and track-etched membrane filtration. Dead-end filtration removes cells, cell debris, and large EVs. Subsequently, tangential flow filtration is conducted to remove nonexosome-associated proteins, biomolecules, and small nonexosome particles while concentrating the exosomes. The final track-etched membrane filtration is then used to further fractionate a size-defined population of exosomes from nonexosomal particles. Although yet to be widely accepted, this gentle approach for isolating and concentrating EVs is beneficial for its scalability and production of highly purified EVs. Size exclusion chromatography (SEC) is another size-based separation technique used in exosome isolation and has a simple working principle (Figure 2D). It separates solutes of various molecular weights as they move through an aqueous medium using a column of starch and water. Biomolecules smaller than the column's pores enter the porous stationary phase and therefore move more slowly, while larger biomolecules cannot enter the pores and are washed through earlier. 49 Jeong et al. 50 isolated lung cancer-specific exosomes using SEC. The size of the isolated exosomes was verified by NTA and was found to be 30−150 nm. The isolation process aided in determining an approximately 5.8-fold increase in exosome concentration in the patients compared to the healthy controls. The most enticing characteristic of SEC in terms of exosome-based biomedical research is its ability to preserve the biological activity of the separated exosomes. 51 This is largely due to the gentle separation process of employing passive gravity flow. Therefore, it does not affect the structure and integrity of the vesicles. 52 The approach's gentle nature can be improved further by employing elution buffers with physiological osmolarity and viscosity. 53 Additionally, due to the size of commercially available SEC columns, minimal sample volume, as little as 15 μL, is required. 54 Moreover, the technique is more time-efficient and less labor-intensive. The process can be accomplished in as little as 15 min using selective porous materials and buffer systems. Adaptability is another advantage of SEC. Adjusting the pore size of the applied materials allows for a defined subpopulation of vesicles. 55 Finally, compared to ultrafiltration-based separation, the contact-free method of SEC provides little or no sample loss and high yield. 51 Given all these advantages, it is no surprise that SEC-based exosome isolation has become prevalent for exosome-based scientific and clinical research. 53 SEC still ranks highly as an exosome isolation technique despite being notoriously difficult to scale up and demanding for high-throughput exosome isolation applications. 49 Field flow fractionation is another technique for separating EVs with minimal interaction.
This technique relies on particle separation in a channel when subjected to crossflow from an external gradient or "field." This crossflow may be produced by applying different energies or forces, such as thermal, electrical, or centrifugal. However, a tangential flow induced through a semipermeable membrane can also be applied asymmetrically. 56 Once the crucial experimental parameters, including crossflow velocity, membrane cutoff, and channel thickness, are adjusted, this asymmetric type of field flow fractionation provides high-resolution EV subpopulation separation with 10 nm precision (Figure 2E). 57 Zhang et al. 58 used the asymmetric flow field-flow fractionation (AF4) technique to identify exosomes, various nanoparticles, and EV subsets in lung tissue samples. Their study showed that the AF4 technique could be utilized as an efficient analytical tool for isolating EVs and tackling the complexities of subpopulations of heterogeneous nanoparticles. Size-based isolation techniques for exosomes, while useful, require greater sensitivity and specificity to effectively separate exosomes from other similarly sized microparticles found in body fluids. Additionally, these techniques have a limited sample capacity and may damage or contaminate exosomes with other cellular components. To improve separation yields and purity, several attempts have been made to combine size-based techniques with microfluidic devices. 59 For example, Liu et al. 60 used a microfluidic platform based on viscoelastic separation to efficiently isolate exosomes from extremely small volumes of cell culture media samples. This method produced good recovery rates (>80%) and achieved higher exosome separation purities (>90%) compared to other EVs. 60 Precipitation Methods Exosomes have also been isolated using precipitation techniques (Figure 2F), which rely mainly on polymers to precipitate exosomes. The most commonly employed polymer, polyethylene glycol (PEG), effectively improves the enrichment and yield of exosomes. 61 This approach has been reported to separate numerous biomolecules and viruses from body fluids. 62 In this technique, the samples are coincubated with a PEG solution overnight at 4°C. Following this incubation, the exosome-containing residue can be further processed using separation procedures such as filtration and centrifugation. 45 Cazzoli et al. 29 utilized the precipitation method to isolate exosomes from lung cancer samples. They used the ExoQuick exosome precipitation solution to isolate lung cancer exosomes efficiently. The microRNAs were then extracted from these exosomes and analyzed for their potential role as biomarkers for lung cancer. 29 ExoQuick, Total Exosome Isolation Reagent (Invitrogen, United States), ExoPrep (HansaBioMed, Estonia), Exosome Purification Kit (Norgen Biotek, Canada), and miRCURY Exosome Isolation Kit (Exiqon, Denmark) are just a few examples of commercial exosome isolation products commonly used for increasing the efficacy and efficiency of exosome isolation processes. These kits commonly rely on multistep filtration and centrifugation procedures and differ based on their efficiency and exosome quality. Lobb et al. 45 compared various isolation techniques using tunable resistive pulse sensing and protein analysis to assess efficiency and preparation purity. They concluded that the Exo-spin kit yields higher levels of exosome markers than the ExoQuick kit.
However, both Exo-spin and ExoQuick have more nonexosomal protein contamination, as shown by the ratio of exosome concentration to protein, with increased exosome marker expression seen instead in qEV columns combined with OptiPrep density gradient isolation. Due to their simplicity, speed, lack of exosome damage, and equipment-free nature, precipitation-based isolation methods are particularly attractive within clinical research. However, in a comparative study of two precipitation-based methods and one column-based approach for exosome isolation from various biofluids, these methods were found to have a significant disadvantage due to the presence of various contaminants from the sample resulting from coisolation. This had downstream effects in sample analysis via mass spectrometry, proteomic analysis, and RNA tests. Adding a prefiltration step or a postprecipitation purification step with subsequent centrifugation, filtration, or gel filtering makes it possible to reduce contamination with nonexosomal contaminants. 45 In doing this, complexity is added while mitigating precipitation methods' highly sought-after equipment-free appeal. Furthermore, EVs isolated by precipitation methods may be coprecipitated with lipoprotein components in the samples, as these lipoproteins tend to mimic the characteristics of the extracellular vesicles and are thus copurified with them. For example, Ludwig et al. 64 showed that small extracellular vesicles (sEVs) isolated by PEG-precipitation methods can be contaminated with bovine serum albumin (BSA) from cell culture media, which could then, however, be efficiently removed by follow-up ultracentrifugation. Regardless, precipitation methods are still more appealing than other methods in clinical applications when working with biofluids due to their low sample volume requirements and compatibility with high-throughput workflows, in contrast to the gold-standard method, ultracentrifugation. 63 Immuno-affinity-Based Approaches Large amounts of proteins are known to be present in exosome membranes. Immuno-affinity purification approaches have been employed to selectively capture certain exosomes from mixed populations of biological components such as cell cultures, tissues, and bodily fluids. This technique uses the immune interaction between exosomal surface proteins (antigens) and their specific antibodies, or between receptors and their corresponding ligands 40 (Figure 2G). This approach is convenient when the proteins expressed on the exosome surface lack soluble counterparts, and it is relatively quick, simple, and compatible with standard laboratory equipment. Immunoaffinity methods commonly use magnetic beads covalently coated with streptavidin, which can be linked to any biotinylated capture antibody (e.g., anti-CD63, anti-CD9, and anti-CD81 antibodies) with high affinity. The specificity and yield of the exosomes isolated by immunoaffinity-based techniques are comparable to those of ultracentrifugation. Submicron-sized magnetic particles further improve capture efficiency, achieving 10 to 15 times higher exosome capture yields than the ultracentrifugation method. 42 This is attributed to the larger surface area, the near-homogeneous capturing process, and the higher capture efficiency and sensitivity of magnetic bead-based immunoaffinity methods compared with other microplate-based systems. 46 Furthermore, there are no volume limitations with these methods. 40
Immunoaffinity isolation has a significant advantage in that it can sort distinct exosome subpopulations based on their surface protein expression, as it relies on selective antigen−antibody binding. 65 For instance, exosomes can be captured directly on the surface of a microfluidic device that has been functionalized with specific beads or capture agents coated with exosome-specific antibodies. The next step is to preincubate these antibodies with the exosome-containing serum and process the samples using a microfluidic device to determine the level of expression of particular cancer-related markers. 59 An affinity-based isolation approach for EVs was developed by Nakai et al., 66 which uses the T-cell immunoglobulin and mucin domain-containing protein 4 (Tim4) for capturing EVs. The extracellular domain of murine Tim4 was fused to the Fc fragment of human IgG. The Tim4-Fc protein was immobilized on magnetic beads and used to capture EVs in the presence of Ca2+. The captured EVs were then steadily released from the beads by adding a buffer that contains a Ca2+ chelating agent (e.g., EDTA). The yield and purity of EVs isolated using the Tim4 affinity-based approach were found to be superior to ultracentrifugation or TEI-based precipitation. 66 However, due to the presence of phosphatidylserine in many EV subpopulations, the EVs isolated by the Tim4-based approach contain both exosomes and apoptotic bodies. 67 Zhang et al. 68 developed a novel Tim4@ILI-01 immunoaffinity flake material to efficiently enrich exosomes from serum samples of lung adenocarcinoma patients. This immunoaffinity material showed an exosome capture efficiency of 85.2%, which was 5.2 times greater than the ultracentrifugation method. Similarly, Shih et al. 69 developed a magnetic bead-based method for collecting circulating extracellular vesicles for studying human lung carcinoma. They used the phosphatidylserine-binding protein annexin A5 to generate a magnetic bead-based procedure for capturing EVs from fluidic samples. Their research showed these beads, called ANX-beads, could capture up to 60% of the induced apoptotic bodies. 69 Microfluidics-Based Isolation Techniques The main advantage of microfluidic techniques is their ability to isolate exosomes based on their biochemical and physical properties. Microfluidic-based isolation methods are rapid and efficient and require small input samples. They can also be innovatively combined with other separation mechanisms, such as acoustic, electrophoretic, and electromagnetic forces, to exploit additional properties of exosomal vesicles. 65 A typical microfluidic working platform is shown in Figure 3. 3.5.1. Acoustic Nanofilter. An acoustic nanofilter employs ultrasound standing waves to exert differential acoustic forces to continuously separate exosomes and other EVs based on their size and density. The particles respond to the acoustic force exerted on them according to their size and density: larger particles experience a stronger acoustic radiation force and therefore migrate faster toward the pressure node. The ultrasound transducers and underlying electronics can be tuned to separate particles above and below a specific desired size. Lee et al. 70 illustrated this purification procedure by extracting nanoscale (200 nm) vesicles from cell culture media and EVs in preserved red blood cell products.
Additionally, they could electronically control the filtering size in real time during the experiment thanks to the underlying electronics and ultrasound transducers. While this methodology is still in its early development stages, its simplicity, speed, tunability, and low sample volume allow it to be employed in the clinical setting. 65 3.5.2. Magnetic Nanowires. Elongated magnetic nanowires (MNWs) doped with many magnetic nanoparticles (MNPs) and biotin moieties can be used to conjugate a large number of streptavidin-modified anti-CD9, anti-CD63, and anti-CD81 antibodies. Lim et al. 71 used the MNWs to rapidly isolate homogeneous exosomes with high purity. These antibody cocktail-conjugated magnetic nanowires allowed for more efficient isolation and quantification of the targeted exosomes without laborious and time-consuming steps. This allowed for an approximately 3-fold greater yield than conventional exosome isolation methods. MNWs can be used in clinical applications where a highly purified population of exosomes is required to analyze embedded protein, lipid, mRNA, and miRNA. Compared to conventional methods such as ExoQuick and Invitrogen exosome isolation kits, the antibody-conjugated MNWs resulted in a nearly 3-fold increase in yield. Recently, this approach has been employed in characterizing lung-cancer-derived circulating exosomes in patient samples. 71 Because of their small lateral size, elongated structure, high surface-to-volume ratio, and strong magnetism, MNW-based methods can be used to isolate exosomes with high capture efficiency and purity, with potential applications in the clinical setting. Exosome Total Isolation Chip (ExoTIC). ExoTIC is a filtration-based EV isolation tool developed by Liu et al. 72 This tool is a user-friendly and modular chip that facilitates high-yield and high-purity EV isolation from biofluids. This method passes patient samples such as plasma, urine, and lavage through a nanoporous membrane to isolate and enrich EVs. Free nucleic acids, proteins, lipids, and other small fragments are washed out, while the 30−200 nm intact EVs are collected on the membrane. Subsequent characterization using NTA and transmission electron microscopy (TEM) demonstrated that ExoTIC achieved 4−1000 times higher yield than ultracentrifugation. Moreover, the exosomes isolated via the ExoTIC method show an increased expression of some miRNAs in NSCLC cell lines (HCC827 and H1650) compared to ultracentrifugation. 73 Enzyme-Linked Immunosorbent Assay (ELISA) ELISA is one of the most widely used techniques for detecting EVs. 74 This technique uses the sandwiching of an antibody, surface, and antigen of interest to immobilize the target. One of the significant limitations of ELISA-based exosome detection techniques is the nonspecific adsorption of biomolecules when identifying exosomes from complex body fluids. 41 Ueda et al. 75 identified CD91 as a lung adenocarcinoma-specific antigen on exosomes. They developed a sandwich ELISA with anti-CD9 coupled with highly porous monolithic silica microtips for a large-scale replication study to further validate the screening reliability of the identified exosome surface antigen CD91 (Figure 4A). The authors' simple device has the potential for biomarker discovery and a wide range of omics studies of exosomes. In another study, Yamashita et al.
76 utilized ELISA with a capture antibody (anti-CD81) to assess the epidermal growth factor receptor on the exosome membrane as a potential biomarker for lung cancer diagnosis. Their work demonstrated significantly higher exosomal epidermal growth factor receptor expression levels in 5/9 cancer cases compared to the standard control. Although ELISA-based platforms could be helpful in cancer diagnostics, many components of the platforms still need to be appropriately optimized, such as exploiting radioisotopes or fluorescence and affinity maturation of antibodies. Western Blotting Western blotting, also known as immunoblotting, is a technique in which proteins are first separated by gel electrophoresis and then visualized using specific antibodies, depending on the nature of the sample or the gel. This technique is frequently employed in EV studies to confirm the presence of purified exosomes through their specific surface proteins. A mixture of proteins is sorted based on molecular weight and type through gel electrophoresis. These products are then transferred to a membrane, where each protein forms a band. 49 Cao et al. 77 used this method to study the potential role of the Profilin 2 protein in promoting growth, metastasis, and angiogenesis of SCLC and confirmed the presence of exosomes. The Western blot analysis demonstrated the expression of the exosome markers ALIX and TSG101. Similarly, Jin and Yu 78 employed this method to detect exosomal markers in their study on hypoxic lung cancer cell-derived exosomal miR-21. Western blot displayed the upregulation of p-PI3K and p-AKT expression within the exosomes from hypoxic lung cancer cells compared with exosomes from normoxic lung cancer cells. Due to its ease of use, broad accessibility, and ability to detect exosomal surface and internal proteins, Western blotting is a widely used exosome analysis technique. Additionally, it aids in differentiating the molecular weight of target exosome proteins in various subpopulations. 42 As a limitation, Western blot requires a more comprehensive workflow, technical handling, and expertise, and is less adaptable to high throughput than ELISA. 49 It is also unsuitable for multiplexing, and its specificity and reproducibility are limited by the antibodies' quality. 65 Tunable Resistive Pulse Sensing (TRPS) TRPS measures the particle size, concentration, and zeta potential of particles as they move through a nanopore. "Tunable" indicates that the nanopore may be adjusted in size in order to filter and detect specific particles. The "resistive pulse sensing" principle monitors the current flow through a hole, and a change in current can be read when a particle passes through the aperture (Figure 4B). The pore membrane's flexibility allows for real-time pore size optimization. 41 TRPS has been utilized to measure a wide range of nanoparticle suspensions, including magnetic beads, DNA/protein particle hybrids, and biological particles such as cyanobacteria and viruses. Several investigations have also measured exosome particle size and concentration distributions using TRPS. 79 qNano employs this technique with a polyurethane membrane to detect the movement of particles. The pore size provides flexibility to analyze particles of a broad size range (40 nm to 10 μm). Moreover, qNano relies on a limited number of consumables and does not require carrier gases, fluidics, or optics. 80 For example, Feng et al.
18 used qNano to measure the sizes of lung adenocarcinoma-derived exosomes. The exosome pellets were mixed with 500 μL of sterile phosphate-buffered saline, filtered through a 0.22-μm syringe filter, and finally analyzed by NanoSight. The qNano and NanoSight analyses were compared with TEM and Western blot identification of CD63 and CD9, and the vesicles were all found to be within the range of 40−105 nm. Niu et al. 81 recently utilized this method to measure the size and distribution of exosomes. The qNano analysis revealed that the vesicles were approximately 80 nm in diameter. One significant limitation of the TRPS method is that it does not provide any information about the origin of the exosomes. 41 Nanoparticle Tracking Analysis (NTA) Nanoparticle tracking analysis has become one of the most popular methods for characterizing the size and concentration of exosomes. This method relates the particle size to the rate of Brownian motion to determine the size distribution profile of nano- and microparticles suspended in liquid solution (Figure 4C). A laser beam interacts with the exosome nanovesicles in NTA, while a charge-coupled device camera captures the scattered light. A comparison of NTA with flow cytometry using human placental exosomes shows that NTA can measure biological nanovesicles as small as ≈50 nm with excellent sensitivity. 82 When compared to electron microscopy and atomic force microscopy, NTA has the potential to characterize nanovesicles on a much larger scale. 49 Fan et al. 83 used NTA to study exosomal markers for the specific early diagnosis of lung cancer. The average size of the exosomes was determined as 120 ± 80 nm, within expectations, and the concentration was 1.5 × 10^7 particles/mL. The findings revealed that various types of exosomal markers were expressed at different levels. Pointing toward a potential substitute for measuring exosomal markers in certain diseases using clinical bioassays, Zhao et al. 84 used NTA as a comparison against a CRISPR-based method to determine the concentrations and sizes of exosomes extracted from lung cancer cell lines. The concentrations of exosomes were 6 × 10^7, 6 × 10^6, 6 × 10^5, 6 × 10^4, 3 × 10^3, and zero particles per microliter. This demonstrates the utility of the NTA method as a benchmark against which other methods can be compared. The sample preparation for NTA analysis is straightforward and rapid. Additionally, the samples can be recovered in their native state after measurement. Despite its reliability in fundamental research, NTA has significant limitations in characterizing exosomes in clinical samples. 85 These come from the time-consuming procedures involved in data acquisition. Unlike flow cytometry, which can analyze 1000 particles in less than a second, NTA takes approximately 10 min. Also, long analysis times cause bleaching of the fluorescent dye (i.e., exosomes are stained with common fluorescent dyes, such as green fluorescent protein or antibodies conjugated with fluorescein isothiocyanate). Furthermore, this tool cannot analyze the biochemical composition of distinct subpopulations of exosomes. 41 Dynamic Light Scattering (DLS) Like NTA, dynamic light scattering (DLS) exploits the characteristics of Brownian motion for particle sizing. This technique determines the size distribution profile of particles up to several micrometers in diameter (Figure 4D). DLS is frequently used to validate exosome subpopulation sampling by measuring EV size distributions. 49
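Both NTA and DLS ultimately infer size from diffusion: the measured diffusion coefficient is converted to a hydrodynamic diameter through the Stokes-Einstein relation (a standard relation given here for orientation; it is not reproduced from the cited studies):

$$ D = \frac{k_{B}\,T}{3 \pi \eta\, d_{h}} $$

where $D$ is the translational diffusion coefficient (estimated from single-particle tracks in NTA or from the intensity autocorrelation function in DLS), $k_{B}$ is Boltzmann's constant, $T$ the absolute temperature, $\eta$ the solvent viscosity, and $d_{h}$ the hydrodynamic diameter. Smaller vesicles diffuse faster, which is why both techniques are most reliable when temperature and viscosity are tightly controlled.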
By monitoring the variations in scattered light intensity and then using a mathematical model based on Brownian motion and light scattering theory, it is possible to calculate the size distribution of these EVs. The mean signal amplitude of extracellular vesicles depends on their concentration, diameter, and refractive index. 46 By using DLS, we can expect accurate size distributions for monodisperse samples (samples containing one specific size of extracellular vesicles). However, the size distribution of polydisperse samples such as human plasma is not precise due to the broad range of particles in the sample, together with inaccurate and outdated weighting algorithms used for the analysis. Generally, DLS requires prerequisite knowledge of the shape of the size distribution in order to acquire accurate results. 49 In their study of human lung epithelial adenocarcinoma cells (A549), Gurunthan et al. 86 utilized DLS to measure the size of exosomes and platinum nanoparticles (PtNPs). This, along with other characterization techniques such as NTA and TEM, showed that PtNPs can potentially increase exosome production in A549 cells. Flow Cytometry Flow cytometry, a physical form of analysis, is used to observe EVs visually. However, for the EVs to be analyzed, flow cytometry requires prior knowledge of the EVs' protein composition. Furthermore, it requires a single-particle suspension, which can be challenging to achieve when the EV concentrations are high or if they aggregate during the isolation procedure. 12 Multiple particles are observed simultaneously when EVs aggregate, resulting in incorrect data. For flow cytometry analysis, EVs must be immobilized on the surface of beads (either by immunocapture or covalent attachment). The EVs are then subjected to a fluorescently conjugated antibody against an antigen known or anticipated to be expressed on the surface of the EVs after immobilization. An epifluorescent microscope (EPI) can be used to visualize the EV, bead, and fluorescent antibody coupling prior to flow cytometry analysis. The sample then generates a fluorescent signal when excited by the laser of the flow cytometer. 87 This method allows for high-throughput EV analysis and EV quantification and classification based on antigen expression. Rim and Kim 88 used this technique to perform a quantitative analysis of exosomes derived from murine lung cancer cells and classified the cancer-specific proteins and miRNA as diagnostic markers. Gao et al. 89 employed the rolling circle amplification (RCA)-assisted flow cytometry approach to examine protein patterns in exosomes from various lung cancer cell lines. The combination of amplification and flow cytometry allowed an extremely low detection limit of 1.3 × 10^5 exosomes/mL. They also reported an enhanced expression of MUC1 and PD-L1 exosomal surface markers in lung cancer patient samples compared to healthy individuals. Fluorescence-activated cell sorting (FACS) is another type of flow cytometry that allows for sorting exosome nanovesicles using fluorescent labeling. 90 Exosomes can be captured and sorted based on targeted surface protein expression using specific antibodies labeled with fluorescent dyes. FACS provides information about each cell by detecting the fluorescence emitted from the flowing samples. As cells move across the detection region, the instrument can instantly assess their size, function, and intracellular composition.
Additionally, FACS can only separate a specific type of cell. 88 FACS has been used to characterize exosome subpopulations in recent years. 49 Rim and Kim 88 developed a FACS-based technique for analyzing murine lung cancer cell exosomes. The exosomes were initially isolated using CD9- or CD63-specific antibodies. FACS was used to analyze the exosomes after staining the sample with an exosome-specific fluorescein isothiocyanate (FITC) staining solution. According to their study, LA-4 lung cancer cells had an upregulated amount of CD63-specific exosomes. Microfluidic-Based Detection Techniques The following detection techniques are coupled with microfluidic-based sample processing steps. Optical Detection. 4.7.1.1. Fluorescence. Because of its high sensitivity and good accuracy, fluorescence imaging has been widely used in microfluidics-based exosome analysis devices. 92 Kanwar et al. 93 developed an integrated microfluidic platform (Figure 5A), ExoChip, to simultaneously capture and quantify exosomes directly from blood serum. The chip uses anti-CD63 antibodies for capture and a fluorescent carbocyanine dye (DiO) for quantification. The ExoChip follows a three-step procedure: (i) A serum sample from a healthy or diseased individual is inserted into the inlet, which is precoated with anti-CD63. CD63's abundance on exosomes provides selectivity to the isolation procedure. (ii) The captured exosomes are then stained with the fluorescent carbocyanine dye (DiO), which specifically stains the membrane vesicles immobilized in the chip. This allows the visualization of microscopically invisible small vesicles (30−300 nm) for imaging purposes. (iii) The fluorescently stained vesicles are quantified on-chip using a plate reader. In another study, Kang et al. 94 utilized the microfluidic device Exo-Chip integrated with a phosphatidylserine (PS)-specific protein to isolate exosomes from the lung cancer cell line A549. Their results demonstrated that the device isolated 35% more A549-derived exosomes than an anti-CD63-conjugated device and achieved 90% capture efficiency for cancer cell exosomes compared to 38% for healthy exosomes. 4.7.1.2. Colorimetric Detection. The integration of colorimetric technologies into microfluidic platforms for detecting exosomes is ideal for point-of-care testing due to their easy operation and simple signal readout. 95 Xu et al. 96 developed a colorimetric ExoAptaSensor (Figure 5B) for detecting cancer-derived exosomes, including lung cancer-derived exosomes from the A549 cell line. The sensor employed aldehyde latex microbeads to anchor exosomes through aldimine condensation. Using streptavidin−biotin affinity interactions, a CD63-specific biotinylated aptamer and streptavidin-conjugated HRP were added, followed by the rapid addition of freshly prepared dopamine (DA) solution. HRP accelerated the colorimetric reaction, forming polydopamine (PDA), a colored product. PDA's excellent reactivity toward the amine, sulfhydryl, and phenol groups of exosomal surface proteins causes increased binding of the PDA product to the target exosomes. 97−99 The generated color in the substrate solution can be quantified using absorbance measurement (quantitatively) or visualization with the naked eye (semiquantitatively or qualitatively). These absorbance results directly correlate to the expression level of the CD63 marker on the exosomes.
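For absorbance-based readouts such as this, the link between the measured signal and the amount of colored product is the standard Beer-Lambert law (general spectrophotometry background, not a formula taken from the cited work):

$$ A = \varepsilon\, l\, c $$

where $A$ is the measured absorbance, $\varepsilon$ the molar absorptivity of the chromophore (here, the PDA product), $l$ the optical path length, and $c$ the chromophore concentration. Within the linear range, absorbance therefore scales with the amount of marker-bound product and hence with marker expression on the captured exosomes.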
Moreover, Xu et al. 96 demonstrated the detection of exosomal biomarkers such as HER2 expression for the diagnosis of the breast cancer cell line HCC1954. They obtained accurate colorimetric quantification of HER2 from derived cell cultures, with a detection limit as low as 7.77 × 10^3 particles/mL, three to five orders of magnitude better than conventional dot-blot methods. Surface Plasmon Resonance (SPR) Detection. A label-free optical detection method called surface plasmon resonance (SPR) imaging monitors and analyzes biomolecular interactions in real time. SPR is a promising method for characterizing exosome subpopulations with high detection specificity and sensitivity. 49,96 In surface plasmon resonance, when polarized light impacts the interface of two materials having different refractive indices at a critical angle, it can excite resonance of the free electrons in the metal layer, thereby substantially reducing the reflected light at that specific angle. This reflected light can entirely vanish at a certain angle, known as the SPR angle. Dynamic changes in the SPR angle can be observed as a result of binding interactions between biological molecules. 44 Integrated with microfluidics, SPR becomes more cost-effective and offers rapid detection. Therefore, microfluidic-based SPR approaches are gradually taking the lead in exosome detection. 92 Liu et al. 100 used exosomal epidermal growth factor receptor (EGFR) and programmed death-ligand 1 (PD-L1) as biomarkers to demonstrate the feasibility of a compact SPR chip for lung cancer diagnosis (Figure 5C). The chip's design consisted of a 2 nm titanium film deposited on a glass slide, followed by a 49 nm gold film, forming the SPR chip. The gold film was used as the refractive layer, while the titanium film improved the gold film adhesion and therefore increased the biochip's reliability. The sample wells (diameter of 6 mm) were then created by attaching a PDMS layer with a hole in the middle to the glass slide. Users could use a pipet to add/remove samples into/from the sample wells with this design. As such, by avoiding additional training or equipment, the design is compatible with standard clinical sample-handling processes. 100 A549 lung cancer cells were identified to have a higher level of exosomal EGFR than BEAS-2B normal cells. The compact SPR chip outperformed ELISA in detection sensitivity and had a similar sensing accuracy. In another study, Thakur et al. 101 demonstrated a localized surface plasmon resonance (LSPR) biosensor for the detection of microvesicles (MVs) in serum and urine samples from a lung cancer mouse model. LSPR is like SPR but is typically employed for nanoscale sensing applications, as it is less sensitive to interference. The results provided significant insight into the membrane properties of tumor-derived exosomes and MVs. In this regard, LSPR biosensors can potentially be used for the direct detection and isolation of heterogeneous EVs. There are some drawbacks of SPR-based exosome biosensors. Most SPR biosensors are only suitable for quantifying exosomes that have already been isolated via conventional methodologies such as ultracentrifugation. Another significant limitation of SPR biosensors is their inability to support multiplex analysis. 96 Since SPR is a mass-sensitive technique, high molecular weight targets usually result in good detection sensitivity, whereas low molecular weight compounds (i.e., smaller nanovesicles) are more challenging to detect. In addition, the SPR technique is well-known for generating false positive responses due to mass increases via nonspecific adsorption of unwanted biomolecules in the samples.
Finally, in most cases, the effective working area of the SPR chip is relatively small, which limits its capacity for large-scale target binding and characterization of exosomes. 49 4.7.1.4. sEV Subpopulation Characterization Platform (ESCP). Wang et al. 102 developed a sensitive, high-throughput platform known as the small extracellular vesicle (sEV) subpopulation characterization platform (ESCP) for sEV subpopulation characterization in various types of cancer, including lung cancer. The ESCP device integrates circulating nanoscopic flow with surface-enhanced Raman spectroscopy or scattering (SERS) barcoding on a single microfluidic device to allow ultrasensitive and multiplexed phenotyping of sEV subpopulations, which is necessary because each sEV carries only a small quantity of biological cargo. ESCP allows the capture of target sEVs via the on-device immunoaffinity principle. The captured sEVs are then labeled in situ with SERS barcodes functionalized with antibodies (i.e., gold nanoparticles conjugated with Raman reporters). Utilizing the circulating nanoscopic flow within the ESCP microarrays further enhanced the assay sensitivity, allowing detection of as few as 10^3 sEVs/mL. However, this number may vary depending on the antibody affinity and the expression of biomarkers on sEVs. This highly efficient ESCP platform can be utilized to identify sEV subpopulations and could play a vital role in diagnosing cancer and monitoring its treatment. 102 4.7.1.5. Electrochemical Detection. Electrochemical detection exploits the electrical current generated from redox reactions in the test compound. Typically for exosomes, an antibody will bind to a selectively recognized antigen on the surface. An electroactive signal transducer generates a measurable electrochemical signal to quantify the exosome amount. 103,104 This makes electrochemical detection well suited for biomolecular analysis because of its inherent advantages, such as high sensitivity and specificity, compatibility with miniaturization, simplicity, and a relatively low detection cost. The signal is then read out using various voltammetric, amperometric, and impedimetric techniques. 41 Mahmudunnabi et al. 107 developed an array sensor to detect exosomal miRNAs using a conductive polymer covalently bound to the sensor probe materials for lung cancer diagnosis (Figure 5D). The sensor array magnetically isolated the specific miRNA from lysed exosomes, with subsequent chimeric miRNA-DNA formation using T4 DNA polymerase. The sensor's dual specificity enabled an attomolar-level detection limit with an excellent dynamic range. Additionally, this sensor showed practical applicability for detecting four different exo-miRNAs from cell culture media and serum samples from lung cancer patients down to the femtomolar level. The developed method is reliable, requires less fabrication time, and has the potential to be utilized in clinical settings. In another study, Zhang et al. 105 developed electrochemical microaptasensors for successful exosome detection in serum samples of lung cancer patients, showing that their method has great potential for early cancer diagnosis. Using micropatterned electrodes and a hybridization chain reaction (HCR) dual-amplification strategy, the microaptasensors achieved a linear detection response for a broad range of exosome concentrations with a low detection limit of 5 × 10^2 exosomes/mL. In their study on lung cancer cells, Ahmed et al.
106 utilized inexpensive and single-use gold (Au) screen-printed electrodes (SPEs). They successfully detected the aberrantly phosphorylated EGFR and ERK protein isoforms derived from lung cancer cell exosomes. The sensitivity was down to just 15 ng/L in samples with up to 90% excess of their nonphosphorylated (wild-type) forms. They further demonstrated the application of this platform for tracking the effects of tyrosine kinase inhibitors over time. This noninvasive method has the potential to provide new opportunities for the diagnosis of cancer and time-point monitoring of therapeutic responses. Electrochemical biosensor development often involves complicated fabrication steps, so the bioconjugation process must be carefully controlled to ensure assay reproducibility. Signal amplification tags and biomarkers must be carefully considered to avoid nonspecific adsorption issues in electrochemical immunoassays. Nevertheless, the combination of electrochemical approaches and microfluidic platforms can result in an efficient clinical diagnostic tool, particularly in point-of-care devices for many disease detection applications utilizing exosomal biomarkers. 41 CURRENT CHALLENGES IN EXOSOME ANALYSIS AND POSSIBLE SOLUTIONS A significant obstacle to the therapeutic use of exosomes is the need for more reliable and accurate methods to recognize and detect an enriched population of exosomes amid the other nonspecific exosomes and EVs in circulation. Given the growing interest in exosome research, there is an urgent need for an effective and reliable tool for isolating specific exosomes. However, technological limitations related to the currently accessible isolation and detection methods make exact exosome separation problematic. Furthermore, various biological obstacles must be considered to create a valid method for exosome analysis. 41 Technical Challenges The International Society for Extracellular Vesicles, an international scientific organization that studies extracellular vesicles, has started recommending a standardized and evidence-based approach to analyzing extracellular vesicles. 108 This recommendation is motivated by the variability in several preanalytical processes involved in exosome isolation and characterization, which has affected the outcomes of exosome analysis. One critical challenge in sample collection is the presence of contaminants from activated platelet-derived vesicles caused by the physical forces associated with the blood draw. Therefore, to prevent shear stress, uniformity of the sampling sites, right-sized needles, and proper blood drawing techniques are recommended. 109 In addition, exosome abundance often varies due to the availability and types of biofluids. 110 Despite substantial advances in the separation and purification of exosomes, developing larger-scale batch exosome production remains a significant problem. This limits the scope of exosome-based biological studies and treatments. As a result, a simple, reproducible, repeatable, and good manufacturing practice (GMP)-compliant production platform is required. In addition, for large-scale production of modified exosomes, additional development of GMP protocols, more automated and digital production methods, and strict quality control systems are required. 111 Another challenge in exosome analysis is the discrepancy in results caused by improper storage conditions (such as freezing).
Samples for large-sample analyses are frequently collected from remote sites and stored frozen before being analyzed. Because freezing can affect quantification, it is always recommended to use freshly collected samples when conducting exosome analyses. 41 Isolating and detecting an enriched subpopulation of nanosized exosomes (such as tumor-derived exosomes) against the background of normal exosomes remains quite challenging because consistent and specific methods are lacking. Therefore, a comprehensive strategy for precisely separating different exosome subpopulations based on their biophysical and biochemical characteristics is urgently required. 49 Over the past few years, exosomes have been separated using traditional isolation procedures based on their biophysical or biochemical characteristics. For example, differential ultracentrifugation, one of the most popular techniques for size-based exosome isolation, ignores the immune profiles of different exosome subpopulations. In addition, exosomes are frequently lost during ultracentrifugation, and copelleted impurities arise during the isolation process. On the other hand, despite their higher immunoselectivity, immunoaffinity-based isolation techniques are limited by lower yields of isolated exosomes and costly antibodies. It has therefore been proposed to combine size-based and immunoaffinity-based methods into an integrated approach that exploits the advantages of each to accurately separate distinct exosome subpopulations. 41 It has been demonstrated that clinical-grade exosomes can be produced by combining ultrafiltration and ultracentrifugation methods. Similarly, ultracentrifugation can first be used to concentrate large sample volumes and thoroughly process the bulk exosomes, which are then incubated with antibody- or aptamer-coated superparamagnetic nanoparticles and further separated by immunoaffinity. 112 Microfluidics can provide a miniaturized platform for integrating ultrafiltration and magnetic isolation techniques, and multiple exosome surface proteins might be targeted simultaneously to separate distinct subpopulations effectively. However, most microfluidic devices rely on a single exosome-sorting mechanism, resulting in limited yield or specificity. Moreover, their low processing capacity may hinder downstream analysis because of insufficient nucleic acids and proteins in the isolated exosomes. 112 Biological Challenges Because exosomes still have unknown characteristics, several genetic, physiological, and environmental factors linked to sample heterogeneity can influence exosome isolation and analysis. For example, disease-specific exosomes vary between individuals depending on factors such as sex, age, body mass index (BMI), and immune status, and exosomes are also found in healthy people. 113 Therefore, choosing the best-matched control for a sizable cohort of heterogeneous samples is complicated. Hence, more systematic research is required to understand the effects of sample heterogeneity on exosome biogenesis, functionality, and quantity. It is imperative to establish a predesigned sample control bank that contains controls from all potential variants of the target population, such as different ages, sexes, races, and physiological states.
41 There are only a few documented methods for effectively identifying disease-specific exosomes against the backdrop of normal exosomes, despite recent developments that have improved the efficiency of separating exosomes from other extracellular vesicles. 114,115 Exosomal cargo is protected from harsh conditions within the exosome's encapsulated protective environment; for example, exosomal miRNA is shielded from ribonuclease (RNase)-mediated RNA degradation. However, this important advantage of exosomal miRNA can also create a major challenge for miRNA analysis, because the miRNA must be released from the isolated exosomes, which adds numerous complex steps to the analysis. 116 Several fundamental questions about exosome functionality and content remain unanswered. 117 For instance, it is still unclear whether the transport of exosomes and their uptake by distant recipient cells occur through phagocytosis or through specific receptors on those cells. 118 A further challenge for developing exosome biomarkers is the need for large-scale studies demonstrating that an exosome liquid biopsy could be a suitable alternative to tumor tissue biopsy. Even though exosome-mediated therapies, diagnostics, and prognostics appear promising, additional research is needed before exosomes are used in clinical applications. 110 CURRENT BARRIERS AND CHALLENGES IN TRANSLATIONAL SCLC RESEARCH The development of effective therapy for SCLC faces several challenges. Obtaining sufficient tumor tissue for molecular diagnostic studies is difficult because few patients undergo surgery, and most diagnoses are based on small biopsies or cytological samples. Furthermore, owing to the aggressiveness of the disease and the comorbid conditions linked with smoking, individuals with SCLC are frequently debilitated at diagnosis and recurrence. Efforts to enhance the clinical outcome of SCLC patients will have to overcome several obstacles. One recent study, for example, suggests that molecular identification of circulating tumor cells could eliminate the need for more invasive biopsies. 119 Other obstacles in SCLC research include (1) the molecular complexity of SCLC, (2) a lack of understanding of the causes of chemotherapy resistance in recurrent disease (including distinct molecular alterations acquired after initial treatment), and (3) the disease's rapid progression and high metastatic potential; in addition, SCLC is immunologically cold, which limits its response to immune checkpoint inhibitors. Furthermore, SCLC research has received little funding in recent years, possibly because it lacks some of the research tools available for NSCLC, such as an abundance of tissue and model systems. For example, in 2012, the National Cancer Institute (NCI) research portfolio contained 745 lung cancer research projects, with only 17 (about 2%) focused on SCLC. 120 However, the United States Congress recently listed SCLC in the Recalcitrant Cancer Research Act (2013), opening up funding sources outside the National Institutes of Health (NIH). Additionally, a resurgence in basic and clinical SCLC research has occurred, initiated by the genomic sequencing efforts published by George et al. 121 There have been recent breakthroughs and ongoing research efforts to understand disease subgroups, new treatment options, new preclinical models, tumor heterogeneity, and the cell of origin. However, early detection and effective treatment remain urgent unmet clinical needs.
The current experimental setup for SCLC research is based on the association of in vitro phenotypes with metastasis, in vivo metastasis originating from xenograft transplants, and the metastasis formed in genetically modified mouse models. Although these models provide information, each of them has limitations. For instance, the cell line-based metastatic SCLC models can depict the phenotypes acquired during the propagation in the culture. 122 CONCLUSIONS AND FUTURE PROSPECTS This Review has summarized the epidemiology of lung and small-cell lung cancer and discussed the proposed biomarkers and exosomal biomarkers in SCLC research. Additionally, we have thoroughly discussed the recent advances in exosome isolation and detection techniques. Finally, we have also addressed these techniques' significant technical and biological challenges and the major challenges of diagnosis and disease monitoring faced by SCLC research. Despite progress in understanding SCLC, many gaps have become the subject of recent research. These include identifying lung cancer risk, high metastatic behavior in early stages and prognosis reports. 123 Liquid biopsy has just become a reality in lung cancer research and clinical practice. In addition, various ongoing research efforts have focused on exosomal cargo and its functions in the genesis and progression of lung cancer as well as their application as diagnostic, prognostic, and predictive biomarkers. 124 Proteins on the cell surface and within exosomes could be exploited as lung cancer biomarkers. 30 Several protein detection or exosome capture approaches potentially allow for cancer-type differentiation using various biofluids such as CSF, plasma, serum, saliva, urine, and ascites. 36 Exosomal miRNAs have the potential in SCLC research, and basic studies have demonstrated breakthroughs in the involvement of exosomal miRNAs and lncRNAs in SCLC. 125 However, the utilization of liquid biopsy-based methods, such as circulating tumor DNA and exosomal biomarkers for SCLC diagnosis, prognosis, and treatment selection, is still in the early stages. Recent advancements in exosome and exosomal biomarker studies suggest that exosomes have unique features that make them an ideal tool for liquid biopsy in cancer research. A perfect cancer biomarker has the potential to demonstrate the presence of a tumor mass and its molecular characteristics in the very early stages. Since exosomes are detectable in almost every biofluid, researchers can choose a specific biofluid to identify patient exosomes based on the type of cancer. Exosomes contain DNA, RNAs, and proteins that could provide real-time information about the biological and clinical characteristics of the tumor mass. 126 Another advantage is that exosomes are highly stable due to their lipid bilayer encapsulation, which is critical because the genetic contents of the exosomal cargo reflect the parent tumor cell. 127 Additionally, exosome identification is straightforward 128 and exosomal surface proteins can play a critical role as diagnostic biomarkers in identifying various types of cancer. 88 Thus, exosomes are expected to be crucial biological components in liquid biopsy for both prognostic and diagnostic purposes. The fundamental issue with SCLC is that there needs to be an efficient system for the early diagnosis and detection of SCLC. Since it is very metastatic, when patients realize the symptoms of SCLC, the cancer is already in stage 3 or 4. 
Therefore, there is a need for noninvasive or minimally invasive diagnostics and strategies for the earlier detection of SCLC. It is now well known that liquid biopsy-based tools offer a comprehensive approach to the early detection of many diseases. We therefore anticipate that such studies will be conducted in the future, with the hope that the development of the aforementioned tools and the discovery of novel exosomes and exosomal biomarkers will improve the survival of SCLC patients.
Catheter-Directed Thrombolysis Versus Standard Anticoagulation for Acute Lower Extremity Deep Vein Thrombosis: A Meta-Analysis of Clinical Trials Standard anticoagulant treatment alone for acute lower extremity deep vein thrombosis (DVT) is ineffective in eliminating thrombus from the deep venous system, with many patients developing postthrombotic syndrome (PTS). Because catheter-directed thrombolysis (CDT) can dissolve the clot, reducing the development of PTS in iliofemoral or femoropopliteal DVT. This meta-analysis compares CDT plus anticoagulation versus standard anticoagulation for acute iliofemoral or femoropopliteal DVT. Ten trials were included in the meta-analysis. Compared with anticoagulant alone, CDT was shown to significantly increase the percentage patency of the iliofemoral vein (P < .00001; I 2 = 44%) and reduce the risk of PTS (P = .0002; I 2 = 79%). In subgroup analysis of randomized controlled trials, CDT was not shown to prevent PTS (P = .2; I 2 = 59%). A reduced PTS risk was shown, however, in nonrandomized trials (P < .00001; I 2 = 47%). Meta-analysis showed that CDT can reduce severe PTS risk (P = .002; I 2 = 0%). However, CDT was not indicated to prevent mild PTS (P = .91; I 2 = 79%). A significant increase in bleeding events (P < .00001; I 2 = 33%) and pulmonary embolism (PE) (P < .00001; I 2 = 14%) were also demonstrated. However, for the CDT group, the duration of stay in the hospital was significantly prolonged compared to the anticoagulant group (P < .00001; I 2 = 0%). There was no significant difference in death (P = .09; I 2 = 0%) or recurrent venous thromboembolism events (P = .52; I 2 = 58%). This meta-analysis showed that CDT may improve patency of the iliofemoral vein or severe PTS compared with anticoagulation therapy alone, but measuring PTS risk remains controversial. However, CDT could increase the risk of bleeding events, PE events, and duration of hospital stay. Introduction Acute deep vein thrombosis (DVT) of the lower extremities occurs in about 1 or 2 cases/1000 persons in the general population. 1 Deep vein thrombosis is a potentially progressive disease with complex clinical sequelae, such as pulmonary embolism (PE) and postthrombotic syndrome (PTS). Although anticoagulation aids in the prevention of thrombus extension, PE, and thrombus recurrence, many patients develop venous dysfunction leading to PTS. 2 Postthrombotic syndrome occurs in 25% to 46% of patients with DVT 3 and is characterized by a multitude of symptoms such as leg swelling, heaviness, pain, skin hyperpigmentation, venous varicosities, and venous ulcers. 4 Postthrombotic syndrome is associated with reduced individual, health-related quality of life and a substantially increased economic burden. 5 Because anticoagulation does not directly promote thrombus dissolution to reduce the thrombus burden, chemical, surgical, and mechanical strategies have been developed for removing thrombus, rather than leaving it in situ. 6 Catheterdirected thrombolysis has been recommended as an effective therapy for DVT because it can reduce the thrombus load rapidly, relieve DVT symptoms promptly, maintain venous valve function, and reduce recurrence of DVT. [7][8][9][10] Catheter-directed thrombolysis therapy was achieved by placing a catheter in the contralateral femoral vein, the right internal jugular vein, or the ipsilateral popliteal vein for direct intraluminal thrombus infusion. An attempt was used to cross the thrombosed vein with a 0.035-inch guidewire. 
Once the guidewire was across the thrombus, multiple side-hole catheters were advanced into the thrombus to assure maximum delivery of thrombolytics. However, bleeding events are higher with CDT than with anticoagulation, impacting the safety of CDT therapy. 7,8 We performed an updated meta-analysis of 4 randomized controlled trials (RCTs) and 6 comparative studies contrasting CDT plus anticoagulation with anticoagulation alone for the treatment of lower extremity DVT, in an attempt to resolve this discrepancy and provide evidence to physicians. Literature Search Using PubMed, Embase, Web of Science, and the Cochrane Library, we searched literature published between January 1, 1980, and April 1, 2017. The search terms included the following: CDT or standard anticoagulation or iliofemoral/lower extremity or DVT; and/or comparative studies or RCTs or cohort studies or retrospective or prospective studies. Inclusion criteria were (1) studies comparing CDT plus anticoagulation (experimental group) with anticoagulation (control group) and (2) availability of intact clinical data. There were no language restrictions. Ten studies were located (Figure 1). Two investigators (Tang and Lu) independently extracted data utilizing a data abstraction tool: number of patients in the experimental (CDT plus anticoagulation) and control (anticoagulation) groups, study quality, time of follow-up, and primary and secondary outcomes. The primary outcome was the percentage patency of the iliofemoral vein, and the secondary outcomes included the risk of PTS, bleeding events, PE events, death, duration of hospital stay, and hospital charges. Data Extraction and Quality Assessment Details of the publication, inclusion and exclusion criteria, demographics of the study participants, interventions, and outcomes (primary and secondary outcomes) were gathered and reviewed. Risk of bias in the studies (eg, masking of participants, intention-to-treat analysis, incomplete or unclear data, and time to follow-up) was also assessed. Study quality was assessed by the modified Jadad scale and the Newcastle-Ottawa scale (NOS). 11 Disagreements between reviewers were resolved by consensus. Statistical Analysis Statistical analysis was performed using Review Manager (version 5.1; Cochrane Collaboration software). We used fixed-effects models for primary outcomes and some secondary outcomes (adverse events) and random-effects models for outcomes related to duration of hospital stay and hospital charges. Statistical heterogeneity was assessed by I 2 . The level of heterogeneity was demarcated as low (I 2 = 25%-49%), moderate (I 2 = 50%-74%), and high (I 2 ≥ 75%). Primary and secondary outcomes were analyzed using odds ratios, with a 2-sided statistical significance level of 5%. Study Characteristics and Quality The initial search strategy identified 45 full-text articles, and 35 citations were initially screened. Ten trials met the criteria for inclusion in the review (Figure 1). Four RCTs 7,10,12,15 (the ATTRACT study was presented by Suresh Vedantham, MD, at the 2017 Society of Interventional Radiology [SIR] Annual Scientific Meeting) and 6 comparative studies [12][13][14][15][16][17] included experimental groups that received CDT therapy for acute lower extremity DVT and control groups that received standard anticoagulation therapy. The quality of the RCTs was evaluated by the modified Jadad score, and the nonrandomized trials were assessed by the NOS score. 11
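For readers who wish to reproduce the pooling described above outside Review Manager, the sketch below shows how an inverse-variance fixed-effect pooled odds ratio and the I 2 heterogeneity statistic can be computed from 2 × 2 event tables in Python. The event counts are hypothetical placeholders, not data from the included trials, and the code is only an illustrative sketch of the standard formulas, not the Review Manager implementation.

```python
import math

# Hypothetical per-study counts: (events in CDT arm, CDT n, events in control arm, control n)
studies = [
    (12, 90, 25, 95),
    (30, 101, 40, 108),
    (8, 45, 14, 44),
]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                      # non-events in each arm
    log_or = math.log((a * d) / (b * c))       # study log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d        # Woolf variance of the log OR
    log_ors.append(log_or)
    weights.append(1 / var)                    # inverse-variance weight

pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)

# Cochran's Q and the I2 heterogeneity statistic
Q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, log_ors))
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

se = math.sqrt(1 / sum(weights))
ci_low, ci_high = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"Pooled OR = {math.exp(pooled):.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}), I2 = {I2:.0f}%")
```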
Table 1 shows the baseline characteristics of each study. Discussion Anticoagulation alone does not dissolve venous thrombus, leading to chronic venous dysfunction in patients with DVT. 18 Systemic thrombolytic therapy was also abandoned because of a high risk of bleeding events and inefficiency in removing thrombus. 19 Therefore, CDT has been developed for dissolving thrombus in patients with acute lower extremity DVT. Compared with systemic thrombolysis or anticoagulation alone, CDT plus anticoagulation therapy has been shown to be more effective for dissolving venous thrombus. 20 However, in the recent guidelines for acute proximal DVT of the leg, anticoagulant treatment alone is still recommended over CDT, and the evidence grade is not high (2C). 21 Thus, CDT therapy for acute lower extremity DVT remains controversial. In the past few years, a number of clinical studies have evaluated CDT for acute lower extremity DVT and assessed its treatment effects. 22 The percentage patency of the iliofemoral vein was reported in 6 studies, and in 1 study (the CAVENT trial) 7 follow-up was 24 months, revealing that rapidly eliminating iliofemoral vein thrombus could improve iliofemoral vein flow. CDT therapy was used in the acute phase, and the meta-analysis revealed that CDT can increase the percentage patency of the iliofemoral vein. Moreover, this finding should be weighed against prior research showing that 20% of patients with lower limb DVT involving the iliofemoral segment may recanalize spontaneously, without intravenous thrombolysis or CDT, after 5 years of follow-up. 26 Furthermore, 5 studies (1 RCT and 4 non-RCTs) 7,8,9,13,16 revealed that CDT was effective in reducing the morbidity rate of PTS. However, the 2-year results from the ATTRACT study demonstrated that CDT could not prevent PTS. Although there were few RCTs in this meta-analysis, the heterogeneity for percentage patency of the iliofemoral vein was not high (I 2 = 44%), and the effect on the incidence of PTS was statistically significant (P = .00002, I 2 = 79%). Because the heterogeneity of PTS events was high (I 2 > 75%), this result was not convincing, prompting the need to explore the heterogeneity. In subgroup analysis, we found that CDT did not prevent PTS in the RCT group (the CAVENT and ATTRACT trials; P = .2; I 2 = 59%), in which the sample size (>100 patients) and follow-up time (24 months) were greater than in the nonrandomized group. In sum, study quality, sample size, and follow-up time may have affected the meta-analysis results for PTS events. Two studies 8,13 and the ATTRACT study stratified outcomes according to the prevention of severe versus mild PTS. However, these 3 studies did not use a uniform standard definition for grading PTS. The Lee et al 8 study did not report the method used to classify PTS, the ATTRACT study used the Villalta score, and the AbuRahma et al 13 study used clinical, etiologic, anatomic, and pathophysiologic measures to stratify PTS. In the future, more RCTs should assess the severity of PTS using the Villalta scale and record the Villalta score in the study. Meta-analysis indicated that CDT could reduce the severity of PTS (P = .002; I 2 = 0%). Patients' quality of life and early DVT symptoms may be improved by CDT therapy. However, CDT was not shown to prevent mild PTS (P = .91; I 2 = 79%). Thus, the effect of CDT therapy on the occurrence of PTS remains debatable.
CDT therapy for patients with acute iliofemoral DVT, applied in the acute stage, can significantly improve the patency rate of the deep veins and prevent venous reflux early on. However, over long-term follow-up, the ability of CDT therapy to prevent PTS remains controversial, and more high-quality, multicenter, large-sample RCTs are needed. Adverse events included bleeding, PE, recurrent VTE events, and death. In our meta-analysis, we showed that bleeding and PE events were significantly higher in the CDT group (P < .00001, I 2 = 33%; P < .00001, I 2 = 14%), and there were no significant differences in death or recurrent VTE between the CDT and anticoagulation groups (P = .09, I 2 = 0%; P = .52, I 2 = 58%). Data analysis also demonstrated that the heterogeneity of these results was low. Bleeding events can be major or minor. With CDT therapy, most bleeding complications occurred at the puncture site, and severe bleeding events (eg, intracranial hemorrhage) were a small minority. 27 Additionally, many factors contributed to bleeding, such as older patient age, the dosage of thrombolytics or anticoagulants, a history of bleeding, and the duration of thrombolysis or anticoagulation therapy. For patients with a relatively high risk of bleeding, CDT therapy should be implemented only after careful consideration of a comprehensive benefit-to-risk assessment. 28 Moreover, adept operative technique during CDT endovascular therapy may decrease the incidence of puncture-related bleeding events. Four studies 10,13,14,16 demonstrated that PE was significantly increased in the CDT group. However, among these 4 studies, the sample size of the Bashir et al study 14 was significantly larger than that of the others; sample size may be an important factor influencing the statistical results for PE events. In addition, the other 3 studies counted symptomatic PE events, while the Bashir et al 14 study did not indicate whether the PE events were clinically symptomatic. In a sensitivity analysis excluding the Bashir et al study, symptomatic PE events showed no significant difference between the CDT and anticoagulation groups (P = .71; I 2 = 0%). In both the CDT and anticoagulation groups, total mortality was low, and death was primarily from PE and intracranial hemorrhage. Therefore, preventive measures against PE and intracranial hemorrhage (such as implantation of a vena cava filter) should perhaps receive more attention during CDT or anticoagulation therapy. We found that the duration of hospital stay was significantly longer (P < .00001; I 2 = 0%) and hospital charges were higher in the CDT group (P < .00001; I 2 = 43%). The charges for endovascular therapy and the longer hospital stay may increase the economic burden on patients without health insurance. However, the expense may be worthwhile if CDT therapy can improve the patency of the iliofemoral vein, reduce the incidence of PTS, and produce no severe bleeding or PE events. Catheter-directed thrombolysis plus anticoagulation therapy could provide a safe and effective method for removing venous thrombus in patients with acute iliofemoral DVT. After the acute phase, the duration of anticoagulation therapy should comply with the latest American College of Chest Physicians (ACCP) guideline on antithrombotic therapy for VTE disease. 21 The benefits of CDT were assessed in 3 major trials (the CAVENT trial, the ATTRACT trial, and the Catheter Versus Anticoagulation [CAVA] trial).
The CAVENT trial was a multicenter, RCT that included 209 patients with acute DVT in the iliac, common femoral, and/or upper femoral vein. For this trial, data suggested that CDT improved the clinically relevant long-term outcome in iliofemoral DVT by reducing PTS compared with the conventional therapy with anticoagulation alone. The ATTRACT study was a multicenter, randomized, assessor-blinded clinical trial in the United States. Whereas in the CAVENT study, conventional perfusion catheters were used; in the ATTRACT study, pharmacomechanical catheter-directed thrombolysis (PCDT) was used. The much-anticipated 2-year results from the ATTRACT study were presented by Suresh Vedantham, MD, FSIR, on behalf of the trial's investigators, at the 2017 SIR Annual Scientific Meeting. In the ATTRACT study, PCDT does not prevent the occurrence of PTS, and there was a slight increase in bleeding with the procedure. However, PCDT did reduce early DVT symptoms and the severity of PTS. The ATTRACT study stratified randomization based on whether the common femoral iliac was involved, thus the subgroup analyses enable insight into the differences between the risk-benefit ratio of lysing iliofemoral DVT versus femoral-only DVT. Although the trial did not show statistically significant differences between the subgroups, the patients who may be most likely to benefit are those with iliofemoral DVT; however, it was difficult to justify treating those with isolated femoropopliteal DVT. The inclusion of patients with only a femoropopliteal DVT who still have good outflow through the common femoral vein could influence the outcome negatively, as conservative treatment in these patients is not expected to perform poorly. Furthermore, the Dutch CAVA trial is an ongoing RCT for CDT therapy compared with anticoagulation therapy alone. The data from CAVA trial may give more clinical evidence for CDT versus anticoagulation therapy. Nevertheless, the risk-benefit ratio for patients with DVT must be considered before any therapeutic protocol is clinically implemented. Our meta-analysis had limitations. Six non-RCTs were included in this meta-analysis, and the quality of literature was not high. Thus, the data from the non-RCTs may influence the statistical results of the meta-analysis. Although the heterogeneity of most primary and secondary outcomes was not high, we did not carefully explore the sources of heterogeneity. Also, the literature quality, sample size of the studies, and follow-up time may be important factors affecting the results of the metaanalysis, and we did not rule out the other sources of heterogeneity like age, body surface area, race, usage of iliac stenting and different drugs for thrombolysis, or anticoagulation. Similarly designed trials are required to reduce heterogeneity and offer more convincing statistical data. Conclusion This meta-analysis demonstrated that CDT improved the patency of the iliofemoral vein or the severity of PTS compared with anticoagulation therapy alone, while demonstrating that PTS incidence remains debatable. However, substantially more bleeding and PE events occurred in the CDT group. The average duration of hospital stay was also higher in the CDT group compared with the anticoagulation therapy group. Authors' Note This article does not include studies involving human participants or animals performed by any of the authors. 
Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Green Veterinary Pharmacology for Honey Bee Welfare and Health: Origanum heracleoticum L. (Lamiaceae) Essential Oil for the Control of the Apis mellifera Varroatosis Varroatosis, caused by the Varroa destructor mite, is currently the most dangerous parasitic disease threatening the survival of honey bees worldwide. Its adverse effect on the welfare and health of honey bees requires the regular use of specific acaricides. This condition has led to a growing development of resistance phenomena towards the most frequently used drugs. In addition, another important aspect that should not be understated, is the toxicity and persistence of chemicals in the environment. Therefore, the identification of viable and environmentally friendly alternatives is urgently needed. In this scenario, essential oils are promising candidates. The aim of this study was to assess the contact toxicity, the fumigation efficacy and the repellent effect of Origanum heracleoticum L. essential oil (EO) against V. destructor mite. In the contact tests, each experimental replicate consisted of 15 viable adult female mites divided as follows: 5 treated with EO diluted in HPLC grade acetone, 5 treated with acetone alone (as negative control) and 5 treated with Amitraz diluted in acetone (as positive control). The EO was tested at concentrations of 0.125, 0.25, 0.5, 1 and 2 mg/mL. For each experimental replicate, mortality was manually assessed after one hour. The efficacy of EO fumigation was evaluated through prolonged exposure at different time intervals. After each exposure, the 5 mites constituting an experimental replicate were transferred to a Petri dish containing a honey bee larva and mortality was assessed after 48 h. The repellent action was investigated by implementing a directional choice test in a mandatory route. During the repellency tests the behavior of the mite (90 min after its introduction in the mandatory route) was not influenced by the EO. In contact tests, EO showed the best efficacy at 2 and 1 mg/mL concentrations, neutralizing (dead + inactivated) 90.9% and 80% of the mites, respectively. In fumigation tests, the mean mortality rate of V. destructor at maximum exposure time (90 min) was 60% and 84% at 24 and 48 h, respectively. Overall, these results demonstrate a significant efficacy of O. heracleoticum EO against V. destructor, suggesting a possible alternative use in the control of varroatosis in honey bee farms in order to improve Apis mellifera welfare and health and, consequently, the hive productions. Introduction To date, varroatosis is the main parasitic disease of honey bees, with a great impact on the welfare and health of bees and consequently, on their productivity and performance. This disease is caused by the Varroa destructor mite, which was assumed to be Varroa jacobsoni before the last century [1,2]. Since its first infestation against Apis mellifera, the mite has spread rapidly all over the world, profoundly changing the approach to beekeeping [2]. This ectoparasite, in addition to exerting an activity of depletion of essential nutrients for the proper physiological maintenance of the honey bee, is vector of many viruses during its meal [1]. The risk associated to a reduced survival of honey bee colonies is mainly related to the viral vector action of the parasite. This peculiarity means that V. destructor is often indicated as the main cause of the collapse of the hives [3][4][5]. 
Following a high-grade infestation, the transmission of viruses, such as acute paralysis virus (ABPV) and deformed wing virus (DWV), increases [6][7][8]. To prevent virus outbreaks, control strategies against mite populations are essential for beekeepers [6]. Pharmaceutical preparations currently on the market are formulated using synthetic acaricides, organic acids and essential oils (i.e., ApiLifeVar ® , Chemicals Laif, SpA, Vigonza, PD, Italy), prepared with thymol, eucalyptus and menthol essential oils). Oxalic acid and Amitraz are the most widely used drugs among organic acids and synthetic acaricides, respectively [9]. The growing trend in the use of synthetic acaricides has been dictated by their ease of use and a formulation that allows them to cover two brood cycles. However, medicinal products containing such active ingredients are not free from side effects. Very low doses/concentrations have been demonstrated to have a sublethal effect on the physiology, neurology, metabolism and/or behavior of honey bees [10]. Furthermore, chemicals can remain in beehive products and this accumulation can lead to chronic exposure of both adult honey bees and their immature forms to sublethal doses of acaricides [11,12]. This can be detrimental to the fate of the colony because the sub-lethal effects can result in progressive depopulation of the hive [13,14]. Pettis et al. (2013) showed that consumption of pollen contaminated with fungicides (chlorothalonil or pyraclostrobin) and acaricides (2,4 dimethylphenyl formamide, a metabolite of Amitraz, bifenthrin or fluvalinate) can double the relative risk of Nosema infection [15]. This observation is supported by the evidence gathered by Cutler (2013) [16], who found that worker honey bees exposed to Amitraz show a higher defecation rate than untreated honey bees. Therefore, there is a strong correlation between N-2,4-dimethylphenyl-Nmethylformamidine residues (intermediate metabolites of Amitraz) and Nosema levels in honey bee samples [16]. Furthermore, low doses of Amitraz double the heart rate of honey bees and impair their reactivity to the noxae pathogenic virus [17]. Drones exposed to cumaphos or fluvalinate have also been shown to have reduced sperm counts and viability and suffer from reduced body weight [18]. On the other hand, organic acids do not remain in the hive products but are toxic to adult honey bees and larvae. Following contact with oxalic acid, honey bees show a shorter life span and less active behavior within the colony [19,20]. The identification of alternative methods for the control of V. destructor becomes urgent, also in the light of the increasingly and widespread phenomena of drug resistance reported [21][22][23][24][25]. Control methods based on the use of essential oils offer considerable promise due to their low toxicity to humans and the environment [26]. Essential oils are low molecular weight volatile substances produced by the secondary metabolism of plants. They are generally composed of a complex mixture of mono-and sesquiterpene hydrocarbons, oxygenated materials, phenyl proponoids and other compounds [27]. Thanks to their particular and diversified chemical composition, the parasitic populations treated with essential oils are often unable to develop drug resistance phenomena [26,28,29]. Many essential oils have produced excellent results in controlling the growth of many species of insects and parasites that are harmful to crops, food and animals [30,31]. 
In this perspective, the ethnobotanical knowledge are extremely useful in choosing the most promising plant extracts [32]. Thyme essential oil has provided the most promising results against V. destructor mite and is the most represented active ingredient in products on the market [33]. Other essential oils have also been found to be effective in both in vitro and in vivo tests [34] such as that derived from the perennial herb plant of oregano. It looks similar to a small shrub of variable height depending on the species. Oregano (Origanum spp.) grows naturally in sunny arid regions but is also grown as an aromatic plant and for its therapeutic properties. It is an aromatic plant native to western and southwestern Eurasia and the Mediterranean subregion [35]. Known worldwide as a fresh or dried spice in the culinary field, it is also used as an antioxidant additive or preservative in many foods [36]. There are several species belonging to the genus Origanum and, among these, the best known are O. vulgare and O. majorana [37,38]. The oregano plant species that grow in Italy are: O. vulgare, O. heracleoticum, O. majorana and O. onites [39]. The common oregano species O. vulgare can be found in central and northern Italy, while it is lacking in southern Italy. On the contrary, the species O. heracleoticum grows spontaneously in southern Italy [40,41]. The essential oils extracted from the plant show antimicrobial and antifungal activity against human, plant and food-borne pathogens [42,43]. It has been tested for the control of different species of beetles, diptera and lepidoptera with promising results [44]. In general, the essential oils isolated from plants of the genera Oreganum have proved to be excellent alternative tools for controlling the V. destructor mite. In one of the most recent studies conducted by Sabahi et al. (2017), a particular method of administering essential oils to honey bee colonies was evaluated. Electric vaporizers containing oregano essential oil (Sigma ® , Missasauga, ON, Canada), were installed above the brood chamber and, running continuously, resulted in a 97-98% reduction in the mite population of the hive [45]. Based on these results, the aim of the present study was to investigate the in vitro activity of the essential oil of oregano, referring to the less studied species Origanum heracleoticum., against the ectoparasitic mite V. destructor. The data obtained could give further boost to the commercialization and use of a pharmacological product based on essential oil of oregano against V. destructor infestations. Materials and Methods The experiments were carried out in the parasitology laboratory of the Interdepartmental Center Veterinary Service for Human and Animal Health (CISVetSUA), University "Magna Graecia" of Catanzaro, in the month of July 2021. Two apiaries in the province of Catanzaro (Calabria Region in southern Italy), heavily infested naturally with V. destructor, were used as a source of mites. Colonies enrolled in the study were not treated with acaricides in the previous six months. In brief, several frames in which the drone brood had been reared, were transported from the apiary to the mite harvesting laboratory. The collection of the mite was performed in the laboratory as described below. Each brood cell in the frame was deprived of the wax operculum that ensured its closure and was inspected. 
If mites were present, they were picked up with a fine paintbrush and moved into Petri dishes with live honey bee larvae and pupae to prevent starvation during harvesting operations. This method is time-consuming but allows the collection of less traumatized mites than would be obtained with other methods, such as the sugar shake method [46]. The tests were performed immediately after the collection of the mites. Prior to each test, mites that appeared to be newly molted, weak or abnormal were excluded because they may have responded differently in the bioassays. Plant Material and Extraction Technique The aerial parts of O. heracleoticum were harvested in June directly in the field in a natural growth area on the Ionian side of the Pollino massif (southern Italy), at an average altitude of 450 m above sea level. This autochthonous Origanum species typically grows on the eastern spurs of the Pollino National Park, on the Calabrian side. The taxonomic identification was confirmed by Dr. V. Musolino and Dr. C. Lupia, Department of Health Sciences, University "Magna Graecia" of Catanzaro. A specimen voucher is deposited at the Ethnobotanical Conservatory of Castelluccio Superiore, Potenza, Italy, under position number 48 of the Labiatae family. To obtain the essential oil, the fresh plant material was washed and extracted by steam distillation for 2 h, using a Clevenger-type apparatus (Albrigi Luigi, Verona, Italy). The essential oil obtained was dried over anhydrous sodium sulfate and stored at +4 °C until needed. Gas Chromatography-Mass Spectrometry (GC-MS) Analyses The chemical composition of the essential oil was assessed by gas chromatography-mass spectrometry (GC-MS). Analyses were performed using a Hewlett-Packard 6890 gas chromatograph (Agilent, Milan, Italy) equipped with an SE-30 capillary column (100% dimethylpolysiloxane; 30 m length; 0.25 mm internal diameter; 0.25 µm film thickness) coupled to a Hewlett-Packard 5973 mass spectrometer (Agilent, Milan, Italy). Analyses were performed with helium as the carrier gas (linear velocity, 0.00167 cm/s) using a programmed temperature (from 60 to 280 °C, at a rate of 16 °C/min). The injector and detector were set to 250 °C and 280 °C, respectively [47]. The mass spectra of the detected molecules were compared with the Wiley 138 mass spectral library to identify the constituents of the essential oil. Acute Toxicity towards the Mites The acute toxicity of the essential oil for mites was assessed using the methods of Gashout and Guzmán-Novoa (2009) [48], as adapted by Bava et al. (2021) [26]. For each daily test, at least 100 mites were collected to establish the experimental replicates. The frames were removed from the original colonies and transported to the laboratory, where each cell of the frames was inspected for mites. The mites found were collected with a fine paintbrush and transferred into a Petri dish with live honey bee larvae. The tested essential oil and the active ingredient Amitraz (Merck, 45323) were diluted in HPLC-grade acetone to concentrations of 2 mg/mL, 1 mg/mL, 0.5 mg/mL, 0.25 mg/mL and 0.125 mg/mL. Amitraz and acetone alone were used as positive and negative controls, respectively. Eppendorf tubes (2 mL) were filled with 50 µL of essential oil diluted in acetone and placed open in the oven to evaporate the diluent. The tubes were rotated frequently so that the solution ran along the walls, allowing the acetone to evaporate and coating the tube walls with the essential oil. This process was repeated for 15-20 min.
As verified by Gashout and Guzmán-Novoa (2009) [48], it is unlikely that a significant amount of the tested product evaporated within this time period, given its high boiling point, which exceeds 200 °C [49]. Subsequently, for each technical replicate and for the positive and negative controls, five live adult female mites were gently transferred into the previously prepared tube using a fine paintbrush. Once the mites were inside, the tubes were sealed and placed in a dark room at 34 °C and 65% relative humidity. The temperature and humidity conditions set for incubation are the natural ones present at brood level; in previous studies, these conditions were the most conducive to the development and reproduction of the Varroa mite [26,50,51]. Ten technical replicates were set up for each concentration of the tested solution, each with Amitraz as the positive control and acetone as the negative control. Acute toxicity was determined by recording mite mortality after 1 h of exposure. At the end of the hour, the parasites contained within each tube were transferred to a Petri dish and examined under a stereoscopic microscope. Mites were considered dead if they did not move when pushed, while mites that moved only one or more legs were classified as inactive [26]. Inactivity was considered equivalent to death, and dead and inactive mites together were classified as neutralized. Fumigant Toxicity towards the Mites To assess the fumigant toxicity of the essential oil, a cotton swab was inserted into the recess on the inner surface of the cap that allows the hermetic closure of the Eppendorf tube. For each experimental replicate, 5 adult female mites were transferred to the bottom of a 2 mL Eppendorf tube using a fine paintbrush. A piece of tulle was immediately inserted into the tube and interposed between the mites and the cap, so as to avoid contact between the mites and the cotton in the cap. The cotton was then soaked with 40 µL of essential oil diluted in distilled water to a concentration of 1 mg/mL, and the tube cap was hermetically sealed. The Eppendorf tubes thus prepared were placed in an incubator at 34 °C and 65% relative humidity. The mites were exposed to the vapors for different durations: 15, 30, 45 and 90 min [46]. In particular, the fumigation toxicity tests were conducted by exposing 5 mites to the vapors produced by 40 µL of 1 mg/mL essential oil (water dilution) in a final volume of 2 mL (final concentration, 20 mg per liter of air). After exposure, the mites, like those of the negative controls, were transferred to small Petri dishes (diameter = 6 cm), with one honey bee larva for every five mites, and placed back in the incubator under the same temperature and humidity conditions (34 °C and 65% relative humidity). Mite mortality was determined 24 and 48 h after the start of incubation with a honey bee larva (as nourishment) [46]. Five experimental replicates were established for each exposure time; in total, 140 mites were tested. Repellent Effect of the EO towards the Mites An amount of 0.3 g of essential oil was mixed with 30 g of liquid wax (concentration, 1%). The solidified wax was placed at one end of a hollow tube, and pure wax without essential oil was placed at the other end. A mite was then introduced into the tube through a hole at its center [52]. The location of the mite was observed and recorded at 10-min intervals for a total of 90 min.
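The fumigant dose and the wax concentration quoted above follow from simple dilution arithmetic; the short snippet below merely restates those calculations (the variable names are ours, and the numerical values are taken from the text).

```python
# Fumigation: 40 uL of a 1 mg/mL dilution evaporated into a 2 mL tube headspace
oil_mass_mg = 0.040 * 1.0          # 0.040 mL at 1 mg/mL -> 0.04 mg of essential oil
air_volume_L = 2 / 1000            # 2 mL tube headspace = 0.002 L of air
fumigant_conc = oil_mass_mg / air_volume_L
print(f"Fumigant concentration: {fumigant_conc:.0f} mg per liter of air")  # 20 mg/L

# Repellency: 0.3 g of essential oil mixed into 30 g of wax
wax_conc_percent = 0.3 / 30 * 100
print(f"Essential oil in wax: {wax_conc_percent:.0f}%")  # 1%
```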
According to Kraus et al. (1994), the most pronounced orientation effects can be observed during this period [53]. To evaluate the repellency of the tested essential oil, 20 experimental replicates were analyzed. For each assay, one V. destructor mite was placed in the arena described above, and its behavior was observed for 90 min following its introduction. Honey Bee Workers: Toxicity Evaluation To determine the toxicity of the EO for adult honey bees, a pool of random individuals was collected. To obtain a random sample of honey bees of different ages, the subjects making up the pool came from different hive frames. In particular, after being stirred in a container, the bees were sprayed with water to prevent flight and mixed [54]. The randomly collected bees were then processed for the toxicity tests. Five technical replicates were analyzed. For each trial, as suggested by Bava et al. (2021) [26], two 50 mL Falcon tubes were filled with 1.6 mL of acetone and of the essential oil solution, respectively. The amount of liquid was determined in relation to the volume used in the toxicity tests for the mites. As in the viability tests for V. destructor, the tubes were rotated several times to coat the walls with the liquid and to evaporate the acetone contained in the solution. Once dried, five honey bees were transferred into the tubes that had previously contained the essential oil solution [26]. The Falcon tubes with the honey bees inside were then transferred to an incubator (34 °C and 65% relative humidity). One hour after exposure, the honey bees were placed in cages (cylindrical plastic boxes, Ø = 90 mm, height = 100 mm), according to Williams et al. (2015) [54]. These cages were equipped with feeders (50% sucrose solution and water), and the honey bees were observed for the next 48 h. In addition, as in the fumigation tests for the mites, twenty adult honey bees were exposed to essential oil vapors. A two-story cage (Ø = 90 mm, height = 100 mm) equipped with feeders (50% sucrose solution and water) was assembled; a gauze soaked with essential oil was inserted in the lower part, while the honey bees were housed in the upper part [55]. The cages were placed in an incubator (34 °C and 65% relative humidity) and the subjects were observed for 48 h. In both cases, abnormal behavior and/or mortality was recorded. Statistical Analysis Graphical representations of the datasets were created with jmpSAS software (JMP ® Pro, version 14, Cary, NC, USA) using the graph builder module. Kruskal-Wallis tests were performed using the statistical module of jmpSAS to assess the effect of the treatments, and Dunn's Multiple Comparison with Bonferroni correction was used to evaluate the significance of the differences between the experimental groups. Phytochemical Profile The essential oil from the fresh aerial parts of O. heracleoticum was extracted by steam distillation for 2 h with a Clevenger-type apparatus, obtaining an extraction yield of 0.8% w/w. The essential oil was analyzed for its chemical composition, providing the results summarized in Table 1. Overall, 37 compounds were identified. Fifteen components belong to the group of monoterpene hydrocarbons, of which α-ocimene (5.7%) and sabinene (3.8%) are the most abundant; in addition, β-ocimene and myrcene were detected at levels greater than 2%, and 4-carene and thujene at levels greater than 1%.
Nine oxygenated monoterpenes were also identified, including linalyl acetate (17.6%) and linalool (10.0%), which represent the most abundant essential oil components, followed by linalool propionate (6.6%) and eucalyptol (3.7%). Other oxygenated monoterpenes were identified at percentages ranging from 0.2% to 0.7%. Moreover, 13 sesquiterpenes were also detected, of which bicyclosesquiphellandrene and β-bisabolene are the most abundant compounds (3.6% and 1.8%, respectively). Acute Toxicity The average neutralization percentages achieved after exposure to the different concentrations of essential oil, and the relative standard deviations, are reported in Table 2. In addition, the graphical representation of the results (Figure 1) shows how many mites, out of the 5 that made up each replicate (n = 10), were neutralized on average at each concentration of EO compared with the negative and positive controls. As reported in Table 2, the percentages of inhibition ranged from 54% to 91% depending on the concentration of EO used. Comparing the effect of each concentration with the negative control (acetone), the calculated p-values were always below 0.01 (Dunn's Multiple Comparison with Bonferroni correction). The percentages of inhibition were very similar to the inhibition measured with the same concentrations of Amitraz. Fumigant Toxicity The toxicity of the EO increased drastically in relation to the time of exposure (Figure 2). Figure 3 and Table 3 report the percentage of mortality reached at 24 and 48 h for the different exposure times. After 15 min of exposure to the essential oil vapors, an average mortality rate of 8% at 24 h was registered, which increased to 12% at 48 h. A 30-min exposure resulted in a mortality rate of 28% at 24 h, which reached 32% at 48 h. A 45-min exposure resulted in a mortality rate of 56% of the subjects tested after one day, which rose to 72% after two days. Finally, the mortality rate at the maximum exposure time (90 min) was 60% and 84% at 24 and 48 h, respectively. The comparison of mortality percentages after 48 h between the 15-min exposure and the 45- and 90-min exposures was statistically significant (p = 0.0116 and p = 0.0015, respectively, calculated with Dunn's Multiple Comparison with Bonferroni correction). The fumigation tests support the findings of Sabahi et al. (2017) [45]: a longer exposure of the mites to the essential oil vapors allows better control of the infestation, resulting in mortality related to subacute toxicity phenomena.
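As a minimal sketch of how pairwise comparisons of this kind (Kruskal-Wallis followed by Dunn's test with Bonferroni correction) could be reproduced outside JMP, the example below uses Python with SciPy and the third-party scikit-posthocs package on made-up replicate counts; it illustrates the workflow only and does not use the study's raw data.

```python
from scipy import stats
import scikit_posthocs as sp
import pandas as pd

# Hypothetical neutralized-mite counts (out of 5) per replicate for three exposure times
data = pd.DataFrame({
    "group": ["15min"] * 5 + ["45min"] * 5 + ["90min"] * 5,
    "dead":  [0, 1, 0, 1, 1,   3, 2, 4, 3, 3,   4, 5, 4, 3, 5],
})

# Global Kruskal-Wallis test across the exposure groups
groups = [g["dead"].values for _, g in data.groupby("group")]
H, p = stats.kruskal(*groups)
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

# Pairwise Dunn's test with Bonferroni-adjusted p-values
dunn = sp.posthoc_dunn(data, val_col="dead", group_col="group", p_adjust="bonferroni")
print(dunn)
```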
Repellent Activity In general, the behavior of the mite was not influenced by the essential oil: the mites moved indifferently toward the object treated with the oil and in the opposite direction. The results are summarized in Table 4. No statistically significant differences were observed between the direction choices. Discussion The widespread phenomena of drug resistance and the strict legislation on chemical residues in food of animal origin require a rethinking of pharmacological treatments on farms. In this regard, treatments based on natural substances can represent a valid, safe and effective alternative therapeutic approach. Essential oils isolated from many plants contain several bioactive compounds with different properties that can influence the behavior and vitality of insects [56,57]. These compounds have been shown to exert regulatory or inhibitory effects on the growth, development, reproduction and orientation of insects. These actions are often associated with other desirable properties, which can extend the spectrum of action of essential oils to other pathologies in addition to the parasite control investigated in this study [40]. Some essential oils have been shown to be effective against American Foulbrood and chalkbrood disease [29,58]. Therefore, the use of essential oils could be beneficial in several ways, and these natural compounds can be proposed for the development of safe, effective and fully biodegradable insecticides. In this study, the beneficial properties of the essential oil of Origanum heracleoticum were investigated. The present research is not a mere scientific exercise but is highly relevant in the light of the knowledge acquired over the years. Particularly relevant is the acaricidal efficacy of the essential oil of oregano which, at all the concentrations tested, was greater than that of the positive control. In the contact tests, the toxic compounds must penetrate the cuticular barrier to act, whereas in the case of fumigation they must be inhaled by the arthropod. In our experiments, the essential oil of O. heracleoticum was effective both by contact and by fumigation. The results of our study are comparable to those of Hybl et al. (2021) [59], who verified the efficacy of the essential oil of Origanum vulgare against adult female Varroa mites under laboratory conditions. Indeed, in their in vitro study, Hybl et al. (2021) used the same test employed in our study (glass-vial residual bioassay) to evaluate the acaricidal activity. Importantly, Hybl et al. (2021) [59] estimated the mortality of the Varroa mite after 2 and 4 h of essential oil exposure, with a 100% mortality rate. In the present study, following the indications of Bava et al.
In the present study, following the indications of Bava et al. (2021) [26], the mortality rate was evaluated one hour after EO exposure, reaching a mortality percentage of 90.9% (2 mg/mL concentration). The choice of time point is particularly important, as mites are sensitive to artificial environments; in fact, mites suffer more from hunger and water loss if kept away from their natural habitat for a few hours [60]. Previous studies on other Origanum species have shown that the efficacy of a pharmacological treatment is influenced by the method of administration and the duration of exposure to the treatment. The choice of the most effective method of administration cannot disregard knowledge of the modes of action involved. In this regard, important scientific evidence on exposure times is reported in the studies conducted by Sabahi et al. (2017) [45]. The authors applied oregano oil (Sigma®, Mississauga, ON, Canada) to hives using electric vaporizers (in vivo study), obtaining a reduction in the degree of infestation close to 97.4% [19]. These results are markedly better than those obtained by Romo-Chacón et al. (2016), who reported an average mortality between 57% and 74% by soaking a cotton towel with a solution containing oregano (Lippia berlandieri) essential oil [61]. A high mortality rate was also obtained in tests performed by Gal et al. (1992) [62], which was nevertheless lower than that obtained by Sabahi's group [35]. Therefore, the constant release of essential oil through fumigation for two weeks results in a more efficient treatment. In our study, the fumigation tests gave similar results, confirming that longer exposure times result in higher mortality rates 48 h after application. However, in field conditions, various factors can influence the overall effectiveness of the treatment, including the presence or absence of brood and the external environmental temperature [26]. Furthermore, the active-ingredient composition of aromatic plants varies slightly depending on the time of harvest, the conditions of cultivation, and how the plant is collected and stored [63]. The extraction method can also affect the final chemical profile of the essential oil extracted from a specific botanical species. We believe that the characterization of the phytochemical profile of the tested substances is a fundamental element that must always accompany toxicity studies. Comparative studies of the phytochemical profiles and toxic activity of an essential oil derived from a particular botanical species, grown under different conditions, could help to better understand the effectiveness of the essential oil. For this reason, the chemical characterization of the tested essential oil was performed in our study. The study of the phytochemical profile is particularly important as the species tested in our study (O. heracleoticum) is different from those tested in previous in vitro and in vivo studies; in our opinion, much of the scientific value of this work lies in this element. As can be seen in the characterization table, the most represented monoterpene molecules in the essential oil tested are linalyl acetate, linalool propionate and eucalyptol. Monoterpenoids are volatile compounds to which the fumigant activity can certainly be attributed. The fumigant action of essential oils has been investigated in several studies aimed at controlling parasites of stored products, lice, ticks and mites such as Psoroptes and Sarcoptes scabiei [64][65][66].
These studies have led to different products based on these monoterpenoids that are currently on the market. Our results demonstrate that oregano oil can achieve a significant acaricidal effect against V. destructor in vitro. The assessment of acute toxicity by direct contact of the mite with a treated surface demonstrated good efficacy; as expected, the higher concentrations were more toxic to the mites and resulted in greater mortality. To our knowledge, the system used in the present study to perform the in vitro fumigation tests has never been used before. We found that, keeping the concentration of the oil constant (1 mg/mL), the mortality of the mites progressively increased as the time of exposure to the fumigant vapors increased. The repellency tests, by contrast, yielded results that are more difficult to interpret. Overall, there was no repellent effect of the EO in wax: V. destructor moved indifferently towards the material treated with the essential oil and towards the untreated one. However, this lack of repellency may not represent an adverse property of oregano oil. Indeed, low repellency can favor contact between the parasite and a possible pharmacological formulation that exploits the contact toxicity of the active ingredients as well as their toxicity upon evaporation. Conclusions. In recent years, the phenomenon of drug resistance has become more and more widespread. Essential oils can be a valid alternative to synthetic acaricides for the control of varroatosis in beekeeping [21,22,24,67,68]. Essential oils are environmentally friendly and, as their mechanisms of action involve different molecular pathways, it is rare for treated parasite populations to develop resistance mechanisms. In particular, the essential oil tested provided data in line with those previously published for other species of oregano [45,61,62,69]. The use of Origanum heracleoticum essential oil for varroatosis control could represent a valid alternative to the use of synthetic drugs. Therefore, it is important to investigate in field tests the most effective methods of administration, which can ensure a lasting persistence of the active ingredient with a gradual release over time. In conclusion, this study is part of green veterinary pharmacology [70], a branch of veterinary pharmacology that, nowadays, must be implemented to reduce the phenomena of drug resistance and the persistence of residues in the environment. The advantages for animal welfare and health are indisputable, and are also linked to the consequent reduction in the transmission of bacteria and viruses, such as deformed wing virus (DWV) and slow bee paralysis virus (SBPV), by the V. destructor mite to honey bees [71][72][73][74][75].
Effect of small-scale heterogeneity on biopolymer performance in carbonates Polymer flooding is a well-established chemical method for enhancing oil recovery in sandstones; however, it has a limited application in carbonates. This is due to the harsh reservoir conditions in carbonates including high temperature, high salinity, and high heterogeneity with low permeability. This paper numerically investigates the effect of Schizophyllan biopolymer on oil recovery from carbonates. The effect of biopolymer on oil recovery was predicted by running several 1D simulations. Biopolymer flow behavior was modeled based on experimental data. The results showed that the effect of the investigated biopolymer on oil recovery was not much pronounced compared to conventional waterflooding. This is due to small-scale heterogeneity, which increased effective shear rate and hence, decreased in-situ polymer viscosity. Formation permeability, polymer viscosity, and oil saturation maps were consistent in justifying this observation. The findings of this study were supported by fractional flow and mobility ratio analyses. This work highlights the importance of small-scale heterogeneity of the core in modeling polymer flooding, particularly the shear effect on polymer viscosity. Introduction and background A large fraction of original oil in place (OOIP) remains trapped in the reservoir after both primary and secondary recoveries. The latter necessitates the need for enhanced oil recovery (EOR) techniques to improve the oil recovery in an economic way under certain market and technology conditions (Lake 1989). Different EOR techniques are being used including solvents, chemical flooding, thermal methods, and others (Al-Shalabi et al. 2015;Al-Shalabi andSepehrnoori 2016, 2017). Polymer flooding is a widely used chemical EOR technique that has been studied and practiced for many years (Chang 1978). In this technique, polymers are added to the water in order to increase its viscosity and hence, improve water sweep efficiency through decreasing the mobility ratio between the displacing fluid (water) and the displaced fluid (oil). Consequently, this viscous water dampens viscous fingering effects and results in better volumetric sweep efficiency (Sorbie 1991). This paper discusses in particular, the application of "biopolymer" flooding as an EOR technique on oil recovery from "carbonate" reservoirs under "harsh" conditions. Carbonates under harsh conditions More than half of the world's hydrocarbon proven reserves are present in carbonate reservoirs. The majority of these reservoirs have a complex nature that leads to low oil recovery (about 30% in average) including both primary and secondary recovery stages. The complexity of these reservoirs includes high temperature (above 80 °C), high formation water salinity (TDS above 60,000 ppm), presence of divalent cations (hardness), low permeability (less than 100 mD), and the wetting nature (about 90% are mixed-to-oil wet) (Austad et al. 2008;Chandrasekhar and Mohanty 2013;Adegbite et al. 2018;Diab and Al-Shalabi 2019). The low recovery in carbonate reservoirs assures higher potential for enhanced oil recovery (EOR). Biopolymer types Two main types of polymers are generally used for EOR purposes including synthetically produced partially hydrolyzed polyacrylamides (HPAM) and biologically produced polysaccharides such as Xanthan biopolymers. 
Polyacrylamides have long molecules with small effective diameters, which results in a high sensitivity to mechanical degradation and consequent viscosity reduction. In addition, polyacrylamides are sensitive to saline solutions where in high salinity water, polymer molecules tend to curl up and lose their viscosity-building capability. On the other hand, Xanthan polymers are more tolerant to mechanical shear and saline conditions, but they are very sensitive to biological degradation. Xanthan polymers are not retained on rock surfaces whereas polyacrylamides are subjected to retention, which causes reduction in rock permeability. Both polymers are unstable in high temperature conditions and oxidation by dissolved oxygen in the injected water (Green and Willhite 1998;Delamaide 2018). Promising biopolymers include Scleroglucan and Schizophyllan biopolymers, which have many similarities to Xanthan. Pu et al. (2017) reported different eco-friendly and tough biopolymers as opposed to synthetic polymers including Scleroglucan, HEC, CMC, Welan gum, Guar Gum, Schizophyllan, Mushroom polysaccharides Cellulose, and Lignin. The main challenge with biopolymers is the poor filterability and plugging during flow in the porous media. This plugging problem is due to the presence of organic material or microgels (Carter et al. 1980). Kohler and Chauveteau (1981) stated that a potential polymer should be qualified through corefloods with low polymer retention (< 84 μg/g of rock) and less pore volumes to achieve a steady state pressure drop (2-3 PVs). Biopolymer applications Several applications have been reported for biopolymers by which they improve volumetric sweep efficiency beyond conventional waterflooding. These applications include improving oil fractional flow, reduction mobility ratio, diverting water to unswept regions of the reservoir, and improving vertical sweep efficiency through conformance control (Needham and Doe 1987;Huh and Pope 2008;Qi et al. 2017;Erincik et al. 2018;Azad and Trivedi 2019). Biopolymer screening studies Several factors affect the performance of polymer flooding including characteristic of polymer solution itself, technical, economical, and reservoir conditions. Characteristics of polymer solution include but not limited to polymer molecular weight, polymer concentration, degree of hydrolysis, viscoelastic properties of the polymer solution, salinity, and pH of make-up water solution. Better sweep efficiency is usually achievable with a combination of low salinity, high molecular weight, and high concentration of polymers. Other technical, economical, and reservoir factors include injectivity problems, shear dependence behavior of polymers, polymer retention/adsorption, connate water salinity, pH, rock wettability, reservoir temperature, oil viscosity, and amount of mobile oil saturation left after conventional waterflooding (Sorbie 1991;Green and Willhite 1998). Polymer retention affects the economics as well as the performance of polymer. Initial rock wettability state was found to affect polymer adsorption. Usually, in intermediate-wet reservoirs, significant portions of rock surface are occupied by crude oil components, which reduce the adsorption sites available for polymers. Hence, in water-wet reservoirs, polymer adsorption is more pronounced which results in a poor sweep efficiency (Chiappa et al. 1999). As most of carbonates are mixed-to-oil wet, then polymer adsorption is expected to be lower than that of sandstones. 
Biopolymer corefloods Polymer screening studies are followed by corefloods to qualify a polymer for EOR. The Polymer corefloods have been mainly conducted on sandstones with a limited application on carbonates where the promising results motivated researchers to perform further studies (Manrique et al. 2006). The corefloods conducted on carbonates are few with temperature up to 109 °C and salinity of 343,000 ppm using mainly synthetic polymers as opposed to biopolymers (Al-Hashim et al. 1996;Bennetzen et al. 2014;Han et al. 2014;and Zhu et al. 2015). Biopolymer numerical modeling After conducting both polymer screening studies and corefloods, the properties of these qualified polymers are needed as an input in a numerical simulator for history matching as well as upscaling of these corefloods to field-scale studies. The simulations are later used to optimize polymer flooding through running sensitivity studies on polymer concentration, injection rates, and other factors that affect the economics of this process (Lee 2015). Bondor et al. (1972) introduced rheological behavior modeling of polymers with an emphasis on polymer injectivity and near wellbore effect. The authors included the nonideal mixing of polymer and water, polymer adsorption, and permeability reduction due to polymer adsorption in their simulator. Since that time, several numerical studies have been performed to highlight the effect of polymer flooding on enhancing oil recovery as well as optimizing this process. A summary of the basic functions in a polymer module was provided by Goudarzi et al. (2013), which include models for polymer viscosity as a function of both concentration and shear rate, polymer adsorption, permeability reduction, inaccessible pore volume, salinity, and hardness. One should note that very few numerical studies investigated the effect on polymer flooding in carbonates under harsh conditions. Therefore, this work focuses on the numerical investigation of biopolymer injection in carbonate reservoirs under harsh conditions. The focus of this work is mainly on the core-scale simulations, which is considered the first step towards field-scale studies. Experimental data This work includes predictions of Schizophyllan biopolymer effect on oil recovery from carbonate cores. The study starts with history matching a waterflooding process on a Middle Eastern carbonate cores. The latter is followed by modeling the Schizophyllan biopolymer properties as reported from the screening studies by Quadri (2015). Afterwards, predictions of oil recovery by this biopolymer are performed in both secondary and tertiary modes of injection. The simulator used in this work is UTCHEM (Technical Documentation 2000), which is a 3D multiphase-flow, transport, and chemical-flooding simulator developed at the University of Texas at Austin. Quadri (2015) presented a screening study for Schizophyllan biopolymer to be used in Middle Eastern carbonate reservoirs with high temperature and high salinity conditions. The latter polymer showed shear thinning behavior with excellent thermal stability (at 120 °C) and salt tolerance (up to 200,000 ppm). In addition, Schizophyllan showed good injectivity on cores with permeability higher than 30 mD. Dynamic adsorption was also discussed on cores of different permeabilities (3-163 mD) and was found to be low within 7-48 μg/g of rock. Later, Li (2015) conducted several corefloods to highlight the injectivity of this Schizophyllan biopolymer. 
Moreover, a polymer concentration of 200 ppm was used in these experiments. The current study utilizes polymer solution properties through the screening studies conducted by Quadri (2015) as well as rock and fluid properties through corefloods conducted by Li (2015). The latter data are used to numerically investigate the effect of this biopolymer with the proposed concentration (200 ppm) on enhancing oil recovery for the target reservoirs in the Middle East. One of the corefloods conducted by Li (2015) was utilized to get rock and fluid data as well as relative permeability and capillary pressure curves for formation waterflooding through history matching using UTCHEM. In this coreflood, the core was flooded with formation water at reservoir conditions (248 °F and 3000 psig). The core plug used has an average porosity of 13.12% and an average liquid permeability of 30.5 mD. More details about this coreflood including rock and fluid properties are listed in Tables 1 and 2. More information about the screening work of the Schizophyllan biopolymer used can be found elsewhere (Quadri 2015). Simulation model A 2D Cartesian grid was used with 10 × 1 × 10 gridblocks to simulate the heterogeneous carbonate core plug for the coreflood. The simulation model was created horizontally to match the coreflood that was run in the lab. The heterogeneity was considered by generating permeability distribution with an arithmetic mean of 30.5 mD and Dykstra Parson's coefficient (V DP ) of 0.85. The arithmetic mean value matches the core's average permeability. Also, the latter V DP was selected based on the experiences in dealing with carbonate cores from the same formation (Al-Shalabi 2014a, 2014bAl-Shalabi et al. 2014a, 2014b. A spherical variogram and a lognormal permeability distribution were used. The spherical variogram was utilized which represents a medium level of heterogeneity between Gaussian and Exponential variograms. Equal correlation lengths in x-and y-directions were used; however, a lower value was assigned in the z-direction. This resulted in generating horizontal layers of different permeabilities as shown in Fig. 1. The latter horizontal layers aided in capturing the breakthrough of water observed during the experimental coreflood. It is worth mentioning that the number of gridblocks and the heterogeneity distribution used were chosen in a way to capture the physics and mimic the heterogeneity of the actual core where several grid sensitivities were performed. The simulation model has two vertical wells 1 3 including an injector and a producer. More details about the simulation model are listed in Table 3. The sections below discuss formation water cycle history matching, biopolymer properties modeling, analytical determination of optimum biopolymer concentration, both fractional flow and mobility ratio analyses, and numerical core-scale simulations. Waterflooding history matching (secondary injection) The experimental data of the formation water coreflood, including pressure drop and oil recovery, were analyzed to find the relative permeability curves. Darcy's law and the stabilized pressure drop were used to determine the water endpoint relative permeability (k rw * ) and the oil endpoint relative permeability (k ro * ) was provided in the experimental work. The latter two parameters were used in Corey's correlation along with Corey's oil and water exponents (n o and n w ) to estimate the relative permeability curves for the formation water injection cycle. 
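For readers who want to reproduce the relative permeability treatment, the sketch below evaluates Corey-type curves of the form used in the history matching. The residual saturations, endpoints and exponents are illustrative placeholders, not the matched values reported in Tables 4 and 5.

```python
# Minimal sketch of Corey-type relative permeability curves of the form used
# in the history matching. Endpoints, residual saturations, and exponents
# below are illustrative placeholders, not the matched values from the study.
import numpy as np

def corey_relperms(sw, swr=0.2, sor=0.3, krw_end=0.3, kro_end=0.8, nw=2.5, no=2.0):
    """Water/oil relative permeabilities from Corey's correlation."""
    s = np.clip((sw - swr) / (1.0 - swr - sor), 0.0, 1.0)  # normalized water saturation
    krw = krw_end * s**nw
    kro = kro_end * (1.0 - s)**no
    return krw, kro

sw = np.linspace(0.2, 0.7, 11)
krw, kro = corey_relperms(sw)
for s, w, o in zip(sw, krw, kro):
    print(f"Sw={s:.2f}  krw={w:.3f}  kro={o:.3f}")
```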
Moreover, Brooks-Corey correlation for imbibition capillary pressure of mixed-wet rocks was used to estimate the capillary pressure curve for this injection cycle (Brooks and Corey 1966). One should note that the final selection of Corey's exponents and capillary pressure parameters was based on the best history matching of both oil recovery and pressure drop experimental data for the formation water cycle. Endpoint relative permeability data analysis as well as a summary of relative permeability and capillary pressure parameters for the formation water injection cycle are listed in Tables 4 and 5, respectively. Relative permeability and capillary pressure curves used in history matching are depicted in Figs. 2 and 3, respectively. Both relative permeability and capillary pressure curves are consistent as they show a mixed-to-weakly water wet carbonate rock. The latter curves resulted in a good history match for both oil recovery and pressure drop data as shown in Figs. 4 and 5, respectively. As seen from these figures, both heterogeneity and capillary pressure effects were considered in the history matching process. It is worth to highlight that a reasonable history match was only obtained Biopolymer properties modeling In this section, modeling of the different properties of the Schizophyllan biopolymer is discussed. It is worth mentioning that the equations used below are usually applicable for both synthetic-and bio-polymers; however, the input parameters for these equations are different and usually they are obtained from the screening studies in the lab. In this work, the screening tests by Quadri (2015) were utilized. Polymer viscosity, salinity, and concentration Polymer viscosity is important in mobility control of the injected polymer solution. Polymer viscosity increases with increasing polymer concentration whereas it decreases with increasing the solution salinity. The dependence of polymer solution viscosity at zero shear rate ( 0 p ) on both polymer concentration and salinity are modeled using the Flory-Huggins equation (Flory 1953): where w is the water viscosity in cP, C p is the polymer concentration in water, A p1 , A p2 , A p3 , and S p are fitting constants, and C sep is the effective polymer salinity. It should be noted that the units for the parameters inside the parentheses must be dimensionless so that the unit for 0 p be the same as w . C sep captures the dependency of polymer viscosity on both salinity and hardness, and is defined as: where C 51 and C 61 are the anion and the divalent concentrations in the aqueous solution in meq/mL, respectively. C 11 is the water concentrations in the aqueous phase and it is expressed as water volume fraction in the aqueous phase. β p is measured in the laboratory, with typical value of about 10. In C sep calculation, usually the total amount of chloride ion is considered because NaCl is the most common salt in the water used and the current technology cannot describe the effect of every single ion on chemical EOR. Also, β p was assumed to be equal to 1, which means the hardness effect on polymer solution was ignored. Moreover, C 11 was assumed to be 1. It is worth mentioning that S p is the slope of 0 p − w w versus C sep on a log-log plot (Fig. 6). Figure 7 shows the viscosity experimental data as well as data matching using Eq. Shear effect Biopolymer solutions are pseudoplastic or shear thinning fluids, which means that their viscosity decreases with increasing the shear rate. 
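A minimal sketch of the zero-shear viscosity model just described is given below. It assumes the Flory-Huggins-type form used in UTCHEM, μ_p^0 = μ_w [1 + (A_p1 C_p + A_p2 C_p^2 + A_p3 C_p^3) C_sep^S_p] with C_sep = (C_51 + (β_p − 1) C_61)/C_11; the parameter values are illustrative placeholders rather than the constants fitted to the Schizophyllan screening data, and the shear correction (Meter's equation) is applied in the next step.

```python
# Sketch of the zero-shear polymer viscosity model (Flory-Huggins-type form
# used in UTCHEM). Parameter values are illustrative placeholders, not the
# fitted constants from the Schizophyllan screening data.
def effective_salinity(c_anion, c_divalent, c_water=1.0, beta_p=1.0):
    """C_sep: anion plus weighted divalent concentration, per unit water volume fraction."""
    return (c_anion + (beta_p - 1.0) * c_divalent) / c_water

def zero_shear_viscosity(c_p, c_sep, mu_w=0.5, a_p1=15.0, a_p2=0.0, a_p3=0.0, s_p=-0.3):
    """mu_p0 = mu_w * (1 + (A_p1*C_p + A_p2*C_p^2 + A_p3*C_p^3) * C_sep^S_p)."""
    return mu_w * (1.0 + (a_p1 * c_p + a_p2 * c_p**2 + a_p3 * c_p**3) * c_sep**s_p)

c_sep = effective_salinity(c_anion=3.4, c_divalent=0.0)     # hypothetical brine
for c_p in (0.02, 0.08):                                    # hypothetical polymer concentrations
    print(c_p, zero_shear_viscosity(c_p, c_sep))
```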
The shear effect on polymer viscosity is modeled using Meter's equation (Meter and Bird 1964): (1) where P α is an empirical parameter that is obtained by matching laboratory-measured viscosity data, 0 p is the limiting viscosity at low shear limit (approaching zero), w is the water viscosity which is the limiting viscosity at high shear limit (approaching infinity), and ̇1 ∕2 is the shear rate at which polymer viscosity is the average of the 0 p and w . The equivalent shear rate ( ̇e q ) is defined using Cannella equation (Cannella et al. 1988): where u is Darcy's velocity in ft/day, k is the formation average permeability in Darcy, and C is the shear rate coefficient that depends on permeability and porosity. In this work, C was assumed as 2.55 as the reported C value for Scleroglucan is around 2 to 3.1 (Kulawardana et al. 2012). Figure 8 depicts the shear rate effect on polymer viscosity where the results show that P α is 1.7, ̇c is 10.12, and ̇1 ∕2 is 0.173. One should note that polymer concentration and shear rate effects on Schizophyllan biopolymer viscosity are presented at 25 °C in Figs. 7 and 8, respectively. This is because Quadri (2015) observed a negligible effect of temperature on this biopolymer through his temperature sweep tests. Polymer adsorption Polymer retention could be either dynamic (mechanical trapping and hydrodynamic trapping) or static (adsorption). Mechanical trapping occurs due to the use of polymers with sizes greater than the pores of the porous medium and it happens during polymer flow. The latter could be controlled by using polymers in high permeability medium or pre-shearing of polymer solution. Hydrodynamic trapping occurs also during polymer flow in the medium where the polymer retention depends on the flow rate. Usually, this effect is negligible especially at field-applications. Adsorption is the most important mechanism, which occurs due to the interaction between polymer molecules and the solid rock surface. Adsorption depends on the surface area exposed to the polymer solution. Researchers usually use the term polymer retention to describe polymer loss or they simply use the term adsorption (Sheng 2011). The Langmuir isotherm was used to describe adsorption (Lakatos et al. 1979): where C p is the injected polymer concentration, ⌢ C p is the adsorbed polymer concentration, C p −Ĉ p is the equilibrium concentration in the rock-polymer solution system, a p and b p are empirical constants. It should be noted that both C p and ⌢ C p have the same units, and b p has the reciprocal unit of C p. Also, a p is dimensionless and defined as: where a p1 and a p2 are fitting parameters, C sep is the effective salinity, k is the formation permeability, and k ref is the reference permeability of the rock used in the laboratory measurement for adsorption. It must be noted that Langmuir model assumes equilibrium conditions, instantaneous polymer adsorption as well as reversible adsorption in terms of polymer concentration. Polymer adsorption depends on polymer type, salinity, and rock surface. Figure 9 shows polymer adsorption modeling using the simulator where a p1 , a p2 , and b p obtained through data matching are 1.857, 0, and 100, respectively. The experimentally reported adsorption is 6.9 μg/g of rock using a 200 ppm polymer concentration and a 30.5 mD reference core permeability. Permeability reduction There is a noticeable formation permeability change during polymer flooding compared to waterflooding. 
This permeability reduction is due to polymer adsorption. The permeability reduction factor (F kr ) is defined as: where k eff,w is the effective permeability when rock is flooded by water and k eff,p is the effective permeability when the rock is flooded with polymer solution. This factor is modeled using the following equations: where b kr and c kr are input parameters derived from data matching, A p1 is the constant in Eq. (1), C sep is calculated using Eq. (2), and S p is from Fig. 6. It should be noted that the term b kr C p must be dimensionless, similarly is the case for F kr,max , which has an assumed empirical value of 10. Permeability reduction was modeled as seen in Fig. 10 through data matching using b kr , and c kr values of 1000 and 0.00883, respectively. As the polymer adsorption process can be considered sometimes irreversible due to the prolonged pore volumes of water injection to restore the initial permeability, the residual resistance factor (F rr ) was introduced. The latter parameter is defined as the ratio of water mobility before polymer flow to water mobility after polymer flow. However, F rr does not take into account the increase in viscosity caused by polymer flooding. Hence, the resistance factor term (F r ) was introduced, which is defined as the ratio of water mobility during water flow to polymer mobility during polymer flow. It should be noted that viscosity increase and permeability reduction due to polymer flooding is only applicable to the water phase by modifying the polymer viscosity by F kr . Inaccessible pore volume (IPV) It refers to the fraction of pore volume where the radii of the pores are smaller than the size of polymer particles, especially when polymers with high molecular weight are used. These pores are usually filled with irreducible water. IPV has a positive effect on sweep efficiency of polymer solutions and hence, better oil recovery due to boosting the advancement of the polymer solution front. Moreover, the IPV is useful from an economical point of view where it results in less contact between rock surface and polymer solution and hence, less polymer adsorption/retention. The only disadvantage of IPV is when these inaccessible pores have movable oil droplets. In this situation, polymer solutions will not be able to contact these oil droplets and that oil remains as residual oil saturation (Dawson and Lantz 1971). IPV is modeled by multiplying the porosity in the conservation equation for polymer by an input parameter (EPHI4) defined as the effective porosity. EPHI4 is 1 -IPV and assumed to be 1.0 in this study. A summary of the parameters used in modeling the properties of the Schizophyllan biopolymer using the UTCHEM simulator is listed in Table 6. More details about modeling of polymer properties can be found elsewhere (Sheng 2011). Optimum biopolymer concentration analysis The optimum concentration of the Schizophyllan biopolymer was determined using an analytical method, which is based on achieving the minimum total relative mobility of Effective porosity (EPHI4) 1.0 oil and water phases during the formation water injection cycle. The total relative mobility ( rT ) is defined as: where rw and ro are the relative mobility of water and oil phases, respectively. Figure 11 shows the calculated total relative mobility for water and oil phases where the minimum mobility was determined. 
Afterwards, the designed polymer viscosity is the inverse of this minimum total relative mobility as follows: The obtained desired polymer viscosity of 10 cP was used in Fig. 12 to determine the required optimum polymer concentration. The analysis shows that the optimum polymer concentration is 200 ppm, which results in a polymer viscosity of 10 cP. The latter is consistent with the polymer concentration used in Li (2015) studies. Fractional flow and mobility ratio analyses In this subsection, both fractional flow and mobility ratio analyses for biopolymer flooding are discussed. Fractional flow analysis Several assumption were considered in the fractional flow analysis including no dispersion, uniform adsorption of polymer on rock, one-dimensional linear flow, continuous injection of polymer, no chemical reactions, polymer only in aqueous phase (does not partition to oil), polymer viscosity depends on polymer concentration, isothermal reservoir, and local equilibrium adsorption of polymer on rock. Two fractional flow curves were considered. First, the water-oil fractional flow equation for horizontal flow and neglecting capillary pressure effect: Second, the polymer-oil fractional flow equation for horizontal flow and neglecting capillary pressure effect: It is worth mentioning that the fractional flow analysis was conducted for the secondary mode of injection. The frontal advance loss (D p ) in cc polymer/cc pore volume was calculated using the following equation: where w ps is the mass concentration of adsorbed polymer and s is the rock density (2.71 g/cc). Afterwards, the intercept of the water saturation axis ( e − D p ) was found and a tangent was drawn to polymer fractional flow curve resulting in (S w * , f p * ) values and the intersection of this tangent with the water-oil fractional flow curve, results in the water shock (oil bank) saturation and fractional flow values (S wB , f wB ). The parameter e is defined as follows: It should be noted that e is zero in this study as IPV is zero. Water saturation at breakthrough and the corresponding water fractional flow value were considered in the analysis (S wbt , f wbt ). The fractional flow analysis is shown in Fig. 13 (13) (14) where a more favorable fractional flow curve was obtained using polymer flooding as opposed to conventional formation water flooding. This finding is consistent with Fig. 14a-d showing water cut, oil cut, dimensionless cumulative oil recovery, and saturation profile at 0.2 PV using fractional flow calculations. The positive response of polymer flooding is clearly seen in the latter figures where a piston like displacement front was obtained using polymer flooding to the extreme of having one shock instead of the conventional two shocks solution. It should be noted that this solution was obtained under the assumptions made in this study. Mobility ratio analysis This analysis is essential in designing a polymer flooding process for a certain field. One should make sure that the mobility ratio is less than 1 for achieving a good mobility control by polymer and avoiding water fingering through the oil as well as early water breakthrough. In this work, the endpoint mobility ratio was calculated before and after polymer flooding using Eqs. (17) and (18), respectively. The results show that the endpoint mobility ratio during formation waterflooding ( M * w/o ) is 1.29 whereas the endpoint mobility rate during polymer flooding ( M * p/o ) is 0.013. 
The latter results indicate a slightly favorable endpoint mobility ratio during the formation waterflooding; however, this mobility ratio was further improved using the Schizophyllan biopolymer. This finding further indicates the capability of this biopolymer on improving volumetric sweep efficiency in carbonate reservoirs with harsh conditions of high temperature, high salinity, and low permeability. Fractional flow and mobility ratio analyses give an indication about the performance of the polymer flooding technique; however, capillary pressure and heterogeneity effects are not captured. Therefore, the next subsections discuss the predictions of polymer flooding in secondary and tertiary modes of injection using the UTCHEM simulator with comparison between these two injection modes. Biopolymer flooding prediction (secondary injection) The prediction of biopolymer injection in the secondary mode of injection was investigated. Figure 15 shows cumulative oil recovery prediction, and Fig. 16 depicts total pressure drop prediction. The figures show two polymer concentrations include 200 and 800 ppm. The reason behind using an 800 ppm polymer concentration is the small-scale heterogeneity of the core model, which resulted in higher effective shearing effects and hence, lowers polymer viscosity than the desired Figure 17 shows the polymer viscosity corresponding to polymer concentration of 200 ppm whereas Fig. 18 shows that the use of 800 ppm polymer concentration achieves the desired polymer viscosity of 10 cP. In these two figures, there are zones with high viscosity, which is related to their corresponding high permeability. This can be crosschecked with the permeability model, which was previously shown in Fig. 1. Zones with high permeability experience low effective shear rate, and consequently, high polymer viscosity, which is consistent with Eqs. (3) and (4). The higher polymer viscosity results in better sweep efficiency as shown in the 3D saturation maps in Figs. 19 and 20. Figure 19 shows the water saturation in the core model as a result of formation water flooding at 3 pore volumes of water injection where it is clear that zones with high permeability have better sweep compared to others. On the other hand, Fig. 20 shows the improvement in sweep efficiency as a result of polymer injection in the secondary mode at 3 pore volumes of injection. The improvement is related to the decrease in water mobility by decreasing effective water permeability and increasing water viscosity by using the Schizophyllan biopolymer at concentration of 800 ppm. The results of secondary biopolymer injection are also listed in Table 7. Both Fig. 15 and Table 7 show that polymer flooding with 800 ppm concentration results in better oil recovery compared to the 200 ppm polymer concentration. Moreover, biopolymer flooding is more favorable as opposed to conventional formation waterflooding resulting in incremental oil recovery of about 1.36 and 2.26% OOIP for 200 and 800 ppm polymer concentrations, respectively. In addition, it is worth mentioning that the injectivity of polymer is lower than that of waterflood due to the increase of pressure drop, which is about 2-5 times higher than that of the waterflood when using 200 and 800 ppm polymer concentrations, respectively (Fig. 16). It is worth mentioning that permeability, viscosity, and saturation maps are consistent in justifying the incremental oil recovery achieved by polymer flooding. 
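The mechanism just described, higher effective shear in lower-permeability zones leading to lower in-situ polymer viscosity, can be illustrated with a short script. The sketch below combines a simplified Cannella-type equivalent shear rate (proportional to C·u/sqrt(k·φ); the full correlation also involves the water relative permeability and saturation) with Meter's equation, using the shear parameters reported earlier (Pα = 1.7, γ̇1/2 = 0.173) and otherwise hypothetical values.

```python
# Sketch of why small-scale heterogeneity lowers in-situ polymer viscosity:
# low-permeability layers see higher equivalent shear rates (Cannella-type
# scaling) and therefore a lower Meter viscosity. The shear-rate expression is
# simplified (the full Cannella correlation also includes k_rw and S_w), and
# the velocity, porosity, and water viscosity are illustrative placeholders.
import numpy as np

def equivalent_shear(u, k, phi, C=2.55):
    """Simplified Cannella-type equivalent shear rate ~ C*u/sqrt(k*phi)."""
    return C * u / np.sqrt(k * phi)

def meter_viscosity(gamma_eq, mu_p0=10.0, mu_w=0.5, gamma_half=0.173, p_alpha=1.7):
    """Meter's equation: shear thinning between mu_p0 (low shear) and mu_w (high shear)."""
    return mu_w + (mu_p0 - mu_w) / (1.0 + (gamma_eq / gamma_half) ** (p_alpha - 1.0))

u, phi = 1.0, 0.13                      # Darcy velocity and porosity (hypothetical)
for k in (0.003, 0.03, 0.3):            # layer permeabilities, Darcy
    g = equivalent_shear(u, k, phi)
    print(f"k={k:.3f} D  gamma_eq={g:7.1f}  mu_p={meter_viscosity(g):5.2f} cP")
```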
Biopolymer flooding prediction (tertiary injection) The injection of the Schizophyllan biopolymer was also investigated in the tertiary mode of injection using UTCHEM. This mode starts with injecting formation water for about 3 pore volumes followed by polymer injection for another 3 pore volumes, which makes the total injection of 6 pore volumes. Figures 21 and 22 depict the cumulative oil recovery and the total pressure drop predictions as a result of tertiary polymer flooding. The stabilized pressure drop during the polymer flooding indicates that there is no injectivity or plugging problems. However, it is also evident from Fig. 22 that the injectivity of polymer is lower than that of waterflood due to the increase of pressure drop, which is about 2 to 5 times higher than that of the waterflood when using 200 ppm and 800 ppm polymer concentrations, respectively. The findings are consistent with the secondary mode of injection where the 800 ppm polymer resulted in better oil recovery than the 200 ppm polymer and that the tertiary polymer injection is more favorable compared to conventional formation water injection for about 6 PVs. The incremental oil recovery using tertiary polymer flooding is about 1.38 and 2.29% OOIP for 200 and 800 ppm polymer concentration, respectively (Table 8). Biopolymer flooding comparison (secondary vs. tertiary injections) The results of Schizophyllan biopolymer injection in both secondary and tertiary modes of injection are shown in Table 9 as well as Fig. 23. The comparison was considered for polymer concentration of 800 ppm, which achieves the desired polymer viscosity of 10 cP needed for reduction of water mobility and improvement of oil sweep efficiency. The results show that in terms of cumulative oil recovery, both secondary and tertiary modes of polymer flooding are comparable as they result in about 2% incremental OOIP compared to conventional formation water injection. The latter makes sense as the biopolymer used does not have a viscoelastic effect and hence, it is not expected to decrease residual oil saturation. Nevertheless, secondary polymer injection is more favorable compared to tertiary polymer injection in terms of economics through early boosting oil production rate and achieving higher money time value. One should note that more experimental datasets are needed to further validate and generalize the results obtained in this study. Summary and conclusions The performance of Schizophyllan biopolymer in carbonates with high temperature, high salinity, and low permeability conditions was successfully evaluated at core-scale. The following findings can be made based on the particular experimental dataset used in this work: • Small-scale heterogeneity of the core is important in history matching formation water corefloods as well as in modeling polymer properties, particularly the shear effect on polymer viscosity. • Formation permeability, polymer viscosity, and oil saturation maps should be consistent in justifying incremental oil recovery achieved by polymer flooding. • The Schizophyllan biopolymer improves both displacement and volumetric sweep efficiencies through achieving favorable water fractional flow curve as well as endpoint mobility ratio compared to conventional waterflooding. • Schizophyllan biopolymer improves oil recovery compared to conventional waterflooding through decreasing effective water permeability as well as increasing water viscosity, and hence reducing the water phase mobility. 
• Secondary injection of Schizophyllan biopolymer is more favorable as opposed to tertiary injection due to boosting oil production rate at earlier time and hence, higher money time value. • An optimum concentration of 800 ppm is recommended for maintaining the desired polymer viscosity as well as achieving a minimum total relative mobility of oil and water phases. In the future work, validation of polymer corefloods through considering different polymer datasets as well as field-scale studies will be considered to highlight the advantages of using Schizophyllan biopolymer on oil recovery from carbonate reservoirs. Moreover, sensitivity analysis as well as optimization studies will be performed.
Knowledge, Attitude and Practice of Personal Safety Measures Adopted by Medical Practitioners During the COVID 19 Pandemic. Hemapriya L. Kukreja (drpriya_911@hotmail.com), JSS Medical College & Hospital, JSSAHER, Mysore; Maureen Prativa Tigga, JSS Medical College & Hospital, JSSAHER, Mysore; Neha Wali, JSS Medical College & Hospital, JSSAHER, Mysore; Prathap T., JSS Medical College & Hospital, JSSAHER, Mysore; Anil Kumar M R, JSS Medical College & Hospital, JSSAHER, Mysore; Shreya Chandran, JSS Medical College & Hospital, JSSAHER, Mysore. Introduction. In December 2019, a cluster of patients in the city of Wuhan, Hubei Province, China presented with severe pneumonia of unknown origin. Epidemiologically, these cases were linked to a seafood market in the city. On January 7, 2020 the causative organism was identified to be a novel coronavirus, now termed SARS-CoV-2 [1]. In March 2020 the World Health Organization (WHO) declared it a global pandemic [2]. The coronavirus (COVID-19) outbreak has fundamentally changed the world and is also changing the reality of healthcare workers. This pandemic is creating profound changes in the global economy and healthcare systems [3]. The COVID-19 virus is transmitted between people through close contact and droplets [4]. The people most at risk of infection are those who are in close contact with or who care for COVID-19 patients. Healthcare workers are at significant risk of acquiring the infection; they are required to protect themselves and prevent transmission in the healthcare setting. Various measures should be inculcated in the day-to-day life of health care professionals to protect themselves, including social distancing, hand hygiene, N95 masks, goggles, gloves, gowns, face shields, coveralls, precautions for aerosol generating procedures and frequent sanitization. Health care professionals should be educated on when to use which personal protective equipment (PPE), how to put it on and take it off, how to change it without contaminating themselves, and how to properly disinfect and discard this equipment. Health care institutions should have procedures and policies that describe the correct order of donning and doffing PPE in a safe manner [5]. This study evaluates the knowledge, attitude and practice of the various personal safety measures used by medical practitioners to protect themselves from exposure during this pandemic. Materials And Methods. Our study is an online survey using a preformatted questionnaire. Institutional ethical committee approval was obtained. We collected data using a questionnaire sent through e-mail or Google Forms and recorded all responses. The survey consisted of multiple choice questions through which we assessed the knowledge, attitude and practices adopted by medical practitioners for their personal safety during the COVID 19 pandemic. All medical practitioners who agreed to participate in the study from the first to the thirtieth of June 2020 were enrolled. We obtained 576 responses. The questionnaire was piloted on 10 subjects to make sure that it was easy to understand and not time consuming; based on the feedback obtained, it required no changes. The average time to complete the survey during the pilot was five minutes. The piloted subjects and the subjects who did not complete the questionnaire were excluded, and the final number included for analysis after exclusion was 536. Statistical Analysis: The data were compiled and analyzed using MS Excel and SPSS software version 25 at a 5% level of significance.
Statistical tools such as descriptive statistics, the chi-square test, and other parametric/non-parametric tests were used for data analysis. Results. The demographic characteristics of our respondents are tabulated below. [Please see the supplementary files section to view Table 1.] Fifty-two percent of our respondents reported having encountered suspected COVID 19 patients. However, only 12.9% were quarantined following such exposure. Five percent of the practitioners were forced to move out of their homes following suspected exposure, in order to minimize the risk to their family members. The personal safety measures taken by the various practitioners are tabulated below. Out of the 536 subjects, 86.9% were using sanitization, masks and gloves, while only 12.3% were using full PPE as a precautionary measure during their working hours. Almost half the subjects (50.4%) reported that PPE sometimes interfered with the quality of their work, and twenty-seven percent of the subjects felt that PPE interfered with their work frequently. There was no significant difference between male and female practitioners regarding the various personal safety measures. Similarly, on comparing the various age groups, designations of practitioners, places of work and areas of work, we found no significant difference in the use of personal safety measures. However, on comparing the different specialties, we found that dentists were significantly more inclined to use full PPE in their practice as compared to others (P value < 0.0001). Female practitioners were more likely to place sanitizers and masks at the entrance of their consulting rooms as a measure for their own safety, although the difference was not statistically significant. Regarding the use of hydroxychloroquine prophylaxis, 58% of females had used it as compared to 41% of males, which is statistically significant (P = 0.005). Practitioners in the age groups of 23-30 and 30-40 years were significantly more likely to have taken hydroxychloroquine than older practitioners. However, those in the older age groups had administered prophylaxis to their family members more often than younger doctors. Similarly, physicians and general surgeons reported having taken hydroxychloroquine along with their families significantly more often than other specialists. Those in the younger age group, between 23 and 40 years, were more likely to maintain distance from their family members, especially physicians and pediatricians, and government doctors were significantly more likely to do so (p < 0.001) as compared to private practitioners. Discussion. The coronavirus disease 2019 (COVID-19) has become an international health crisis, and the global health care system was ill equipped to handle a crisis of such magnitude. The safety of healthcare workers has become the top priority, in order to prevent the collapse of healthcare systems and also to prevent transmission of infection from health workers to the community. Medical practitioners are at the highest risk of infection because of frequent close contact with patients who are known or suspected to be infected. A similar situation was seen during the previous SARS-CoV-1 epidemic, where health care workers comprised 20% of the cases [6,7]. Worldwide, over one million people were confirmed to be infected with SARS-CoV-2 by April 2020. Assuming that healthcare workers were infected at the same rate as in the SARS-CoV-1 epidemic, this would foretell the collapse of health care systems, especially in developing countries.
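The kind of chi-square comparison reported in the Results above (for example, hydroxychloroquine use by sex) can be illustrated with a short script; the cell counts below are hypothetical placeholders chosen only to approximate the quoted percentages, since the raw contingency tables are not reproduced here.

```python
# Sketch of a chi-square comparison of hydroxychloroquine use by sex.
# The counts are hypothetical placeholders matching the quoted percentages
# approximately; they are not the study's raw data.
from scipy.stats import chi2_contingency

#            used HCQ   did not use
table = [[174, 126],   # female respondents (~58% users, hypothetical n=300)
         [ 97, 139]]   # male respondents   (~41% users, hypothetical n=236)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```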
There has always been an acceptance that working in a healthcare setting carries a level of personal risk; however, it would seem unreasonable for a healthcare worker to carry out a healthcare activity if there was a high risk of death [8]. Hence, the personal protection of frontline workers is of utmost importance to provide unconditional healthcare services. Earlier in the pandemic, infection of healthcare workers was as high as 29%, and this decreased dramatically thereafter due to PPE measures put in place to appropriately protect healthcare workers [9]. Current personal protective equipment (PPE) and infection control guidelines from the World Health Organization (WHO) are based on the assumption that the primary mechanism of transmission is direct and indirect droplet spread [10]. In the current crisis, health-care workers not only have to work harder and longer hours, they often do so in a context where the knowledge and understanding of the novel pathogen is still suboptimal. More than 50% of the subjects in the current study reported interference of PPE with the quality of their work. The use of PPE also interferes with vision and causes difficulty in operating or carrying out procedures. It not only hampers movement and interferes with skills; the regular donning and doffing of full PPE also add to physical fatigue and psychological stress [11]. Aerosol generating procedures may lead to an increase in transmission rates among practitioners; however, the evidence is limited. Infections of health workers following the performance of aerosol generating procedures have been reported, but the exact timing and cause of transmission are unknown [12]. The risk is observed not only during the procedure, but during all periods of contact with the infected patient. Therefore, precautions and proper PPE usage should be followed not only during procedures but extended to all times of risk [13]. While awaiting a vaccine, hygiene measures, social distancing and personal protective equipment are the only primary prophylaxis measures against SARS-CoV-2, but they have not been sufficient to protect our healthcare professionals. Some evidence of the in vitro efficacy of hydroxychloroquine against this virus is known, along with some clinical data that would support the study of this drug for the chemoprophylaxis of infection. However, there are still no data from controlled clinical trials in this regard. In the aftermath of the current pandemic, the exact mode of transmission may still remain controversial, as was the case with SARS-CoV-1 and influenza. Urgent further research is required to investigate SARS-CoV-2 transmission, risk factors and strategies to assure the safety of healthcare workers. In the interim, healthcare workers may choose to take a precautionary approach until robust evidence is available. Conclusion. The medical workforce is at high risk of exposure as well as increased viral load, and although there is a need to balance limited supplies with staff and patient safety, this should not leave healthcare professionals treating patients with inadequate PPE. Along with extrinsic organizational, infrastructural and procedural conditions, the intrinsic state and wellbeing of the health-care worker must also be addressed in order for him or her not to be the weakest link. Personal protection of frontline workers to provide unconditional healthcare services is of utmost importance in the current era, as the loss of even one doctor equals the loss of services for almost 1000 patients.
Towards minimax policies for online linear optimization with bandit feedback We address the online linear optimization problem with bandit feedback. Our contribution is twofold. First, we provide an algorithm (based on exponential weights) with a regret of order $\sqrt{d n \log N}$ for any finite action set with $N$ actions, under the assumption that the instantaneous loss is bounded by 1. This shaves off an extraneous $\sqrt{d}$ factor compared to previous works, and gives a regret bound of order $d \sqrt{n \log n}$ for any compact set of actions. Without further assumptions on the action set, this last bound is minimax optimal up to a logarithmic factor. Interestingly, our result also shows that the minimax regret for bandit linear optimization with expert advice in $d$ dimension is the same as for the basic $d$-armed bandit with expert advice. Our second contribution is to show how to use the Mirror Descent algorithm to obtain computationally efficient strategies with minimax optimal regret bounds in specific examples. More precisely we study two canonical action sets: the hypercube and the Euclidean ball. In the former case, we obtain the first computationally efficient algorithm with a $d \sqrt{n}$ regret, thus improving by a factor $\sqrt{d \log n}$ over the best known result for a computationally efficient algorithm. In the latter case, our approach gives the first algorithm with a $\sqrt{d n \log n}$ regret, again shaving off an extraneous $\sqrt{d}$ compared to previous works. Introduction In this paper we consider the framework of online linear optimization: at each time instance $t = 1, \ldots, n$, the player chooses, possibly in a randomized way, an action from a given compact action set $A \subset \mathbb{R}^d$. The action chosen by the player at time $t$ is denoted by $a_t \in A$. Simultaneously to the player, the adversary chooses a loss vector $z_t \in Z \subset \mathbb{R}^d$ and the loss incurred by the forecaster is $a_t^\top z_t$. The goal of the player is to minimize the expected cumulative loss $\mathbb{E} \sum_{t=1}^n a_t^\top z_t$, where the expectation is taken with respect to the player's internal randomization (and possibly the adversary's randomization). In the basic version of this problem, the player observes the adversary's move $z_t$ at the end of round $t$. We consider here the bandit version, where the player only observes the incurred loss $a_t^\top z_t$. As a measure of performance we define the regret of the player as $R_n = \mathbb{E} \sum_{t=1}^n a_t^\top z_t - \min_{a \in A} \mathbb{E} \sum_{t=1}^n a^\top z_t$. In this paper we are interested in the dual setting, where the adversary plays on a dual action set, i.e., $A$ and $Z$ are such that $|a^\top z| \leq 1$, $\forall (a, z) \in A \times Z$. Contributions and relation to previous works In the full information case, the online optimization setting (for convex losses) was introduced by Zinkevich [2003]. The specific online linear optimization problem with bandit feedback was first studied by McMahan and Blum [2004] and Awerbuch and Kleinberg [2004]. Our first contribution to this problem is to complete the research program started by Dani et al. [2008] and Cesa-Bianchi and Lugosi [2011]. In these papers the authors studied the EXP2 (Expanded Exp) algorithm, also called Geometric Hedge, Expanded Hedge, or ComBand. This strategy applies to a finite set of actions; it assigns an exponential weight to each action, and then draws an action at random from the corresponding probability distribution. Using a basic estimation procedure (first used by Auer et al. [2002] for the basic multi-armed bandit problem), one can estimate the loss vector $z_t$.
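For concreteness, the estimation step referred to here is typically a least-squares estimator built from the played action and the observed loss. A standard construction in this line of work (stated here in our notation rather than as a quotation of the algorithm's exact definition) is to draw $a_t$ from a distribution $p_t$ over the finite action set, form $P_t = \mathbb{E}_{a \sim p_t}[a a^\top]$, and set $\tilde{z}_t = P_t^{+} a_t (a_t^\top z_t)$, which satisfies $\mathbb{E}_{a_t \sim p_t}[\tilde{z}_t] = P_t^{+} P_t z_t$, i.e., it is an unbiased estimate of $z_t$ on the span of the actions.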
However, to control the range of the estimates, one has to mix the probability given by EXP2 with an "exploration distribution". Dani et al. [2008] chose this distribution to be uniform over a barycentric spanner for the action set, while in [Cesa-Bianchi and Lugosi, 2011] the distribution was uniform over all actions. Using ideas from convex geometry, we propose a new distribution that allows us to derive a minimax optimal regret bound. More precisely, we show that for any finite action set, EXP2 with the exploration distribution given by John's Theorem (see Theorem 3) attains a regret of order √ dn log N for any set of N actions. This improves by a factor √ d over previous works. Moreover this rate is optimal: there exists action sets (such as the hypercube) where the minimax rate is of order d √ n -see [Dani et al., 2008]. Surprisingly, this result also shows that EXP2 with John's exploration can be used for linear bandits with N experts to obtain a regret of order √ dn log N , which is no worse than the minimax regret for the basic d-armed bandit with N experts problem. While these results show that, without further assumption on the set of action, the regret of EXP2 is optimal, they do not say anything about optimality for a specific set of actions. In fact, it was proven by Audibert et al. [2011] that for some pair (A, Z) the exponential weights is a provably suboptimal strategy (with a gap of order √ d). To address this issue, another class of algorithms has been studied for online optimization: the Mirror Descent style algorithms of Nemirovski and Yudin [1983] -this class of algorithms was rediscovered in the learning community, see for example Kivinen and Warmuth [2001]. In recent years the number of papers using Mirror Descent to solve problems in online optimization has been growing very rapidly. In the full information setting (when one observes z t ), we have a very good understanding of how to use Mirror Descent to obtain optimal regret bounds that adapt to the geometry of the problem -see [Rakhlin, 2009, Hazan, 2011, Bubeck, 2011. In particular, a recent paper suggests that in this basic setting Mirror Descent is "universal", see [Srebro et al., 2011]. On the other hand, in the limited feedback scenario the picture is much more scattered. In the particular cases of semi-bandit feedback -see [Audibert et al., 2011]-and two-points bandit feedback -see [Agarwal et al., 2010], we know how to use Mirror Descent to obtain optimal regret bounds. However, in both scenarios the feedback is much stronger than in the more fundamental bandit problem. In this latter case, there is only one paper that successfully applies Mirror Descent, namely the seminal work of Abernethy et al. [2008] -see also the follow-up paper Abernethy and Rakhlin [2009]. Unfortunately, for a convex and compact set A, this approach (which combines Mirror Descent with a self-concordant barrier for the action set) leads to a regret bound of order d √ θn log n for any θ > 0 such that A admits a θ-self concordant barrier. For example, in the case of the hypercube the best we know is θ = O(d), which results in the suboptimal d 3/2 √ n log n regret (compared to d √ n for EXP2 with John's ellipsoid). However, note that in this particular case it is not known if EXP2 can be implemented efficiently, while Mirror Descent is polynomial time. Our second main contribution is to propose an efficient algorithm based on Mirror Descent, with an optimal regret bound for two canonical pairs (A, Z). 
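As a concrete reference point before both canonical pairs are spelled out, here is a minimal sketch (Python/NumPy, hypothetical helper names) of the EXP2 strategy described above: exponential weights over a finite action set, mixed with an exploration distribution mu, together with the usual least-squares loss estimator; the specific exploration distribution (e.g., John's exploration) and parameter choices are discussed later.

```python
import numpy as np

def exp2(actions, mu, eta, gamma, n_rounds, loss_oracle, seed=None):
    """Sketch of EXP2 with exploration distribution mu over a finite action set.

    actions: (N, d) array of actions; mu: (N,) exploration distribution;
    loss_oracle(a): returns the scalar bandit feedback a^T z_t for the chosen action.
    Assumes the actions span R^d so that P_t below is invertible.
    """
    rng = np.random.default_rng(seed)
    N, d = actions.shape
    cum_est = np.zeros(d)                       # running sum of loss estimates
    for t in range(n_rounds):
        # Exponential weights on the estimated cumulative losses.
        scores = -eta * actions @ cum_est
        q = np.exp(scores - scores.max())
        q /= q.sum()
        p = (1.0 - gamma) * q + gamma * mu      # mix with the exploration distribution
        i = rng.choice(N, p=p)
        a = actions[i]
        loss = loss_oracle(a)                   # observed bandit feedback a^T z_t
        # Least-squares loss estimator: z_hat = P_t^{-1} a * (a^T z_t),
        # with P_t = sum_a p_t(a) a a^T.
        P = (actions * p[:, None]).T @ actions
        z_hat = np.linalg.solve(P, a) * loss
        cum_est += z_hat
    return cum_est
```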
Namely, the (hypercube, crosspolytope) pair, which corresponds to an L ∞ /L 1 type of constraints, and the (Euclidean ball, Euclidean ball) pair, which corresponds to an L 2 /L 2 constraint. In the former case this results in the first computationally efficient algorithm with a regret of order d √ n, while in the latter case it is the first efficient algorithm with a regret of order √ dn log n. Indeed, the approach of Abernethy et al. [2008] only gives d √ n log n for the pair (Euclidean ball, Euclidean ball) since there exists a O(1)self concordant barrier for the Euclidean ball. Note also that this specific example was studied in Abernethy and Rakhlin [2009], we discuss their result in Section 5. Outline of the paper The paper is organized as follows. In Section 2 we introduce the two algorithms discussed in the paper: Expanded Exp (EXP2) and Online Stochastic Mirror Descent (OSMD). In both cases we state a general regret bound. In Section 3 we detail our exploration strategy for EXP2, and show the corresponding regret bound. We also discuss briefly the extension to linear bandits with expert advice. Then in Section 4 (respectively Section 5) we show how to use OSMD to obtain a computationally efficient strategy with optimal regret for the hypercube (respectively for the Euclidean ball, up to a logarithmic factor). Algorithms We briefly describe here the two algorithmic templates that we shall use in this paper. First, EXP2 is described in Figure 1. The general regret bound for this algorithm is the following. The proof of Algorithm: EXP2 with exploration µ. Parameters: learning rate η; mixing coefficient γ; distribution µ over the action set A. (c) Update the exponential weights, for all a ∈ A, . this result follows a standard argument, see for example [Chapter 7, Bubeck [2011]]. Theorem 1 Let Figure 2 describes OSMD in the bandit setting. Note that step (c) can be written in several equivalent ways, such as a Follow The Regularized Leader equation, or a mirror gradient descent step if F is a Legendre function. When written as a gradient descent step, one usually has to project back on A (using the Bregman divergence associated to F ). Here the projection is implicit in the evaluation of ∇F * . The following theorem states a general regret bound for OSMD. Recall that the Bregman divergence with respect to F is defined as In the following, we write x t 1 to denote x 1 + · · · + x t . Theorem 2 Let A be a compact set of actions, and F a function with effective domain Algorithm: OSMD. Parameters: learning rate η > 0; regularization function F : R d → R ∪ {+∞} with effective domain A, and such that the Legendre-Fenchel dual F * is differentiable on R d ; perturbation scheme for step (a) below. Let a 1 ∈ argmin a∈A F (a). For each round t = 1, 2, . . . , n; (a) Play a t at random from some probability distribution p t over A ( a t is a randomly perturbated version of a t , see Section 4 and Section 5 for examples). Proof The proof is adapted from Kakade et al. [2010]. Using Young's inequality, one obtains Taking into account the randomness induced by a t and z t is then an easy exercise, see for example [Bubeck, 2011, Chapter 7]. This theorem proves to be particularly useful when applied with a Legendre function F -see [Cesa-Bianchi and Lugosi, 2006, Chapter 11] for the definition of a Legendre function. 
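The OSMD template of Figure 2 can be condensed into a few lines as well. The sketch below (hypothetical function names) uses the dual-averaging form of step (c), a_{t+1} = ∇F*(−η (ẑ_1 + ... + ẑ_t)), which is one of the equivalent ways of writing the update mentioned above, and keeps the perturbation and estimation schemes abstract since they are instantiated differently for the hypercube and the Euclidean ball.

```python
import numpy as np

def osmd(grad_F_star, perturb, estimate_loss, loss_oracle, eta, n_rounds, d, seed=None):
    """Sketch of Online Stochastic Mirror Descent in dual-averaging form.

    grad_F_star(u): gradient of the Legendre-Fenchel dual F*, maps R^d into A;
    perturb(a, rng): returns a playable, (nearly) unbiased random perturbation of a;
    estimate_loss(a_tilde, feedback): unbiased estimate of z_t from the bandit feedback;
    loss_oracle(a_tilde): returns the observed loss a_tilde^T z_t.
    """
    rng = np.random.default_rng(seed)
    cum_est = np.zeros(d)
    a = grad_F_star(np.zeros(d))            # a_1 = argmin_{x in A} F(x), since grad F*(0) = argmin F
    for t in range(n_rounds):
        a_tilde = perturb(a, rng)           # step (a): play a perturbed version of a_t
        feedback = loss_oracle(a_tilde)     # bandit feedback
        cum_est += estimate_loss(a_tilde, feedback)   # step (b): unbiased estimate of z_t
        a = grad_F_star(-eta * cum_est)     # step (c): mirror / FTRL update
    return a
```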
Indeed, in that case F * is differentiable if F is differentiable, and moreover the corresponding gradient mappings are inverse of each other, which gives a simple way to do computations with the Bregman divergence D F * . EXP2 with John's exploration We propose here a new exploration distribution µ for the EXP2 strategy, that allows us to derive the first √ dn log N regret bound for online linear optimization with bandit feedback. We use the following result from convex geometry, see [Ball, 1997] for a proof. Theorem 3 Let K ⊂ R d be a convex set. If the ellipsoid E of minimal volume enclosing K is the unit ball in some norm derived from a scalar product ·, · , then there exists M ≤ d(d + 1)/2 + 1 contact points u 1 , . . . , u M between E and K, and µ ∈ ∆ M (the simplex of dimension M − 1), such that To use this theorem, we need to perform a preprocessing of the action set as follows: 1. First, we assume that A is of full rank (that is such that linear combinations of A span R d ). If it is not the case, then one can rewrite the elements of A in some lower dimensional vector space and work there. 2. Find John's ellipsoid for Conv(A) -i.e., the ellipsoid of minimal volume enclosing Conv(A): The first preprocessing step is to translate everything by x 0 . In other words, we assume now that A is such that x 0 = 0. Furthermore, we define the inner product x, y = x ⊤ Hy. 3. We can now assume that we are playing on A ′ = H −1 A, and the loss of playing a ′ ∈ A ′ when the adversary plays z is a ′ , z . Indeed: H −1 a, z = a ⊤ z. Moreover, note that John's ellipsoid for Conv(A ′ ) is the unit ball for the inner product ·, · because H −1 x, 4. Find the contact points u 1 , . . . , u M and µ ∈ ∆ M that satisfy Theorem 3 for Conv(A ′ ). Note that the contact points are in A ′ , thus they are valid points to play. We say that µ is John's exploration distribution. In the following we drop the prime on A ′ . More precisely. we play on a set A such that John's ellipsoid for Conv(A) is the unit ball for some inner product ·, · , and the loss is given by a, z . Thus, we also need to slightly change the algorithm to account for the fact that the loss is now an arbitrary scalar product. Step (c) in Figure 1 is modified as: . We also modify the loss estimate given by step (b) as follows. Recall that the outer product u ⊗ u is defined as the linear mapping from R d to R d such that u ⊗ u(x) = u, x u. Note that one can also view u ⊗ u as a d × d matrix, so that the evaluation of u ⊗ u is equivalent to a multiplication by the corresponding matrix. Now let: Note that this matrix is invertible, since A is of full rank and p t (a) > 0, ∀a ∈ A. The estimate for z t is given by: Note that this is a valid estimate since (a t ⊗ a t ) z t = a t , z t a t and P −1 t are observed quantities. Moreover, it is also clearly an unbiased estimate. We can now prove the following result. Proof With the chosen scalar product, it is easy to see that the condition η|a ⊤ z t | ≤ 1 in Theorem 1 rewrites as η| a, z t | ≤ 1, while the third term in the regret bound rewrites as E a∈A p t (a) a, z t 2 . Thus it remains to control those two quantities. Let us start with the latter: Now we use a spectral decomposition of P t in an orthonormal basis for ·, · and write P t = This concludes the bound for E a∈A p t (a) a, z t 2 . We turn now to a, z t : a, z t = a t , z t a, P −1 t a t ≤ a, P −1 t a t ≤ 1 min 1≤i≤d λ i where the last inequality follows from the fact that a, a ≤ 1 for any a ∈ A, since A is included in the unit ball. 
Now to conclude the proof we need to lower bound the smallest eigenvalue of P t . Using Theorem 3, one can see that P t γ d I d , and thus λ i ≥ γ d concluding the proof. Using the discretization argument of Dani et al. [2008], EXP2 with John's exploration can be used to obtain a regret of order √ dn log n for any compact set of action A. Computational issues If A is given by a finite set of points, then Grötschel et al. [1993] give a polynomial time algorithm for computing a constant factor approximation to the John's ellipsoid (and this approximate basis will provide the same order of regret). However, if A is specified by the intersection of half spaces, then Nemirovski [2007] shows that obtaining such a constant factor approximation to this ellipsoid is NP-hard in general. Here, it is possible to efficiently compute an ellipsoid where the factor of d in Theorem 3 is replaced by d 3/2 -see [Grötschel et al., 1993], which leads to a slightly worse dependence on d in the regret bound. In special cases, we conjecture that the John's ellipsoid may be computed efficiently, as for certain problems, there are efficient implementations of GeometricHedge that lead to optimal rates (such as shortest path problems and other settings where dynamic programming solutions exists). Application to bandits with experts Consider the following model of linear bandits with N experts. At each time step t = 1, 2, . . . , n, each expert k = 1, . . . , N suggests an action a t (k) ∈ R d . The goal here is to compete with the best expert, that is at each time step the strategy chooses an expert k t ∈ {1, . . . , N} and the regret is given by: One can use EXP2 with John's exploration to obtain a regret of order √ dn log N for this problem. Indeed, it suffices at every turn to do the preprocessing step on A t = {a t (1), . . . , a t (N)} and to build the corresponding John's exploration µ t , the straightforward details are omitted. For example, at each time t each expert i = 1, . . . , N is associated with a hidden loss estimate z t (i) ∈ Z and an arbitrary "context set" A t ⊆ A is observed. Each expert i then suggests the best action according to the current loss estimate, a t (i) = argmin a∈At z t (i) ⊤ a . This can be viewed as a natural nonstochastic variant of the contextual linear bandit model of Chu et al. [2011]. Another notable special case is the d-armed bandit problem with expert advice, where we can view the suggested actions as the corners of the d-dimensional simplex. Here, the EXP4 algorithm of Auer et al. [2002] achieves a regret of order √ dn ln N . Interestingly, the regret achievable in the more general d-dimensional linear optimization setting is no worse than in the seemingly simpler d-armed bandit with expert advice setting. Computationally efficient strategy for the hypercube In this section we restrict our attention to the action set A = {x ∈ R d : x ∞ ≤ 1}. Using EXP2 with John's exploration on {−1, 1} d one obtains a regret bound of order d √ n for this problem, and as it was shown by Dani et al. [2008] this regret is minimax optimal. However, it is not known if it is possible to sample from the exponential weights distribution in polynomial time for this particular set of actions. In this section we propose to turn to OSMD, and we show that with the appropriate regularizer F and random perturbation a t (see step (a) in Figure 2), one can obtain a minimax optimal algorithm with computational complexity linear in d. 
More precisely we use an entropic regularizer together with the following perturbation of a point a t in the interior of A: With probability γ, play a t uniformly at random from the canonical basis (with random sign). With probability 1 − γ, play a t = ξ t where ξ t (i) is drawn from a Rademacher with parameter 1+at(i) 2 . It is easy to check that this perturbation is almost unbiased, indeed one has: and thus: We can now prove the following result. (2) satisfies, for any η and γ ∈ (0, 1) such that ηd γ ≤ 1 2 , Theorem 5 Consider the online linear optimization problem with bandit feedback on In particular, with γ = 2d log 2 3n and η = log 2 3n , R n ≤ 2d 3n log 2. Remark that the regularizer (2) used here is in the class of Legendre functions with exchangeable Hessian. More precisely, following Audibert et al. [2011], (2) can be written (up to a numerical constant) as This type of regularizer was first studied (implicitely) by Audibert and Bubeck [2009] and Audibert and Bubeck [2010]. Proof Since F is Legendre on A, F * is differentiable on R d and the gradient mapping of F * is the inverse of the gradient mapping of F . Therefore, (∇F * ) i = tanh because (∇F * ) i = tanh −1 . Then, thanks to (3) and Theorem 2, the regret can be bounded as: For the first term it is easy to see that F (a) − F (a 1 ) ≤ d log 2. For the term involving the Bregman divergence, using elementary computations one obtains To prove (4) we need to show that In fact, we prove that this inequality is true as soon as u − v ∞ ≤ 1 2 . The fact that the property is satisfied for the pair (u, v) = −η z t 1 , −η z t−1 1 under consideration is established at the very end of the proof. Using a basic hyperbolic identity, and the elementary inequalities exp(x) ≤ 1 + x + x 2 , ∀x : |x| ≤ 1 and log(1 + x) ≤ x, one obtains which concludes the proof of (4). Now for the proof of (5) we first compute the matrix P t : To obtain (5) first note that ( we use a spectral decomposition of P t in an orthonormal basis and write: To conclude the proof it remains now to show that η|| z t || ∞ ≤ 1 2 . First note that the smallest eigenvalue of P t is larger than γ/d, and thus: where the penultimate inequality follows from |e ⊤ i a t | ≤ 1 and the last inequality follows from the assumption on η and γ. Improved regret for the Euclidean ball In this section we restrict our attention to the action set A = {x ∈ R d : x ≤ 1}, where · denotes the Euclidean norm. Using EXP2 with John's exploration on a discretization of the Euclidean ball one obtains a regret bound of order d √ n log n for this problem. A similar regret bound can be obtained with a computationally efficient algorithm, using the technique developed by Abernethy et al. [2008]. Here we show that in fact one can attain efficiently a regret of order √ dn log n using OSMD with the approriate regularizer F and random perturbation a t . More precisely here we use F (x) = − log(1 − x ) − x (the motivation for this particular regularizer comes from the proof, see below). Moreover we perform the following perturbation of a point a t in the interior of A: Let ξ t be a Bernoulli of parameter a t , let I t be drawn uniformly at random in {1, . . . , d}, and let ε t be Rademacher with parameter 1 2 . If ξ t = 1, then play a t = a t / a t , else play a t = ε t e It . It is easy to check that this perturbation is unbiased, in the sense that E a t | a t = a t . 
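The two perturbation schemes just described, for the hypercube and for the Euclidean ball, can be transcribed directly; the sketch below uses hypothetical function names, and the comments record the (near-)unbiasedness properties stated in the text.

```python
import numpy as np

def perturb_hypercube(a, gamma, rng):
    """Perturbation for A = [-1, 1]^d: with probability gamma play a signed canonical basis
    vector chosen uniformly at random; otherwise draw coordinate i as +/-1 with
    P(+1) = (1 + a_i) / 2. Then E[a_tilde | a] = (1 - gamma) * a, i.e. 'almost unbiased'."""
    d = a.shape[0]
    if rng.random() < gamma:
        e = np.zeros(d)
        e[rng.integers(d)] = rng.choice([-1.0, 1.0])
        return e
    return np.where(rng.random(d) < (1.0 + a) / 2.0, 1.0, -1.0)

def perturb_ball(a, rng):
    """Perturbation for the Euclidean unit ball: with probability ||a|| play a / ||a||,
    otherwise play a random signed basis vector. Here E[a_tilde | a] = a (unbiased)."""
    d, r = a.shape[0], np.linalg.norm(a)
    if rng.random() < r:
        return a / r
    e = np.zeros(d)
    e[rng.integers(d)] = rng.choice([-1.0, 1.0])
    return e
```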
Here we modify the estimate of step (b) in Figure 2, and instead we use: It is easy to check that this estimator satisfies the same key unbiasedness property than the one in step (b) in Figure 2, that is E z t | a t = z t . Note that the problem studied in this section was also specifically considered in Abernethy and Rakhlin [2009], with an emphasis on high probability bounds. In this paper the authors used the selfconcordant barrier F (x) = − log(1− x 2 ) with a similar perturbation scheme to the one proposed above. They obtain suboptimal rates, but a more careful analysis (precisely slightly modifying Section V.B., step (E)) can actually yield the same rate than the one we obtain. The strength of our approach is that it is in a sense more elementary (e.g., we do not require any results from the Interior Point Methods literature), but on the other hand the result of Abernethy and Rakhlin [2009] holds with high probability (though it is not clear if it possible to get the rate √ dn log n with high probability). Proof First, it is clear that by playing on A ′ instead of A, one incurs an extra γn regret. Second, note that F is stricly convex (it is the composition of a convex and nondecreasing function with the euclidean norm), differentiable, and In particular F is Legendre on A = {x ∈ R d : x ≤ 1}, and thus F * is differentiable on R d . Now the regret with respect to A ′ can be bounded as follows, thanks to Theorem 2, The first term is clearly bounded by 1 η log 1 γ (we use the fact that a 1 = 0). For the second term we need to do a few computations (the first one follows from (9) and the fact that F is Legendre): ∇F * (u) = u 1 + u , F * (u) = − log(1 + u ) + u , u, v). First note that Thus, in order to prove (7) it remains to show that Θ(u, v) ≤ u−v 2 , for (u, v) = −η z t 1 , −η z t−1 1 . In fact we shall prove that this inequality holds true as soon as u − v 1+ v ≥ − 1 2 . This is the case for the pair (u, v) under consideration, since by the triangle inequality, equations (6) and (10), and the assumption on η: Now using that log(1 + x) ≥ x − x 2 , ∀x ≥ − 1 2 , we obtain that for u, v such that u − v which concludes the proof of (7). Now for the proof of (8) it suffices to note that: along with straightforward computations.
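Putting the Euclidean-ball instantiation together: the dual gradient map follows from the expression for ∇F* given above, while the exact loss estimator used in the paper is not reproduced here, so the one below should be read as an assumption, namely the standard importance-weighted estimator that is consistent with the unbiasedness property required by step (b).

```python
import numpy as np

def grad_F_star(u):
    """Dual gradient map of F(x) = -log(1 - ||x||) - ||x||; maps any u in R^d into the unit ball."""
    return u / (1.0 + np.linalg.norm(u))

def estimate_z_ball(a, a_tilde, played_basis_vector, feedback, d):
    """Assumed estimator (not copied from the paper): when the perturbation played a signed
    basis vector, use z_hat = d / (1 - ||a||) * (a_tilde^T z_t) * a_tilde, and zero otherwise.
    One can check that E[z_hat | a] = z_t, since the basis-vector branch has probability
    1 - ||a|| and E[e_I e_I^T] = I / d."""
    if not played_basis_vector:
        return np.zeros(d)
    return (d / (1.0 - np.linalg.norm(a))) * feedback * a_tilde
```

Combined with the OSMD template sketched earlier and a slightly shrunk action set A' (as in the proof, so that 1 − ||a_t|| stays bounded away from zero), this is the shape of the strategy analyzed in this section.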
Uncertainty and Bias in Global to Regional Scale Assessments of Current and Future Coastal Flood Risk Abstract This study provides a literature‐based comparative assessment of uncertainties and biases in global to world‐regional scale assessments of current and future coastal flood risks, considering mean and extreme sea‐level hazards, the propagation of these into the floodplain, people and coastal assets exposed, and their vulnerability. Globally, by far the largest bias is introduced by not considering human adaptation, which can lead to an overestimation of coastal flood risk in 2100 by up to factor 1300. But even when considering adaptation, uncertainties in how coastal societies will adapt to sea‐level rise dominate with a factor of up to 27 all other uncertainties. Other large uncertainties that have been quantified globally are associated with socio‐economic development (factors 2.3–5.8), digital elevation data (factors 1.2–3.8), ice sheet models (factor 1.6–3.8) and greenhouse gas emissions (factors 1.6–2.1). Local uncertainties that stand out but have not been quantified globally, relate to depth‐damage functions, defense failure mechanisms, surge and wave heights in areas affected by tropical cyclones (in particular for large return periods), as well as nearshore interactions between mean sea‐levels, storm surges, tides and waves. Advancing the state‐of‐the‐art requires analyzing and reporting more comprehensively on underlying uncertainties, including those in data, methods and adaptation scenarios. Epistemic uncertainties in digital elevation, coastal protection levels and depth‐damage functions would be best reduced through open community‐based efforts, in which many scholars work together in collecting and validating these data. Introduction The increase of damages due to flooding caused by coastal extreme sea-level events, resulting from the interplay of tides, mean sea level rise, storm surges, and waves, may be one of the costliest aspects of climate change. Global to world-regional scale (called broad scale, hereafter) assessments of current and future coastal flood risks (CFR) are thus needed to inform a range of policy decisions including: (i) setting global mitigation targets in the context of the United Nations Framework Convention on Climate Change (UN-FCCC) to avoid "dangerous interference with the climate system" (UNFCCC, 1992); (ii) informing Global Assessment Reports on Disaster Risk Reduction by the United Nations Office for Disaster Risk Reduction (UNDRR, 2019); (iii) designing global financial mechanisms for adaptation (UNEP, 2016), disaster relief and loss & damage (Jongman et al., 2014); and (iv) strategic long-term development and adaptation planning. A key requirement for informing these policy decisions, as well as for informing decisions in general, is that underlying assessments need to consider all major uncertainties in models, methods and data applied, because the failure to do so may mislead policy decisions leading to poor policy outcomes (Jones et al., 2014;Kunreuther et al., 2013;Morgan et al., 1990;Simpson et al., 2016). The state-of-the-art of climate impact assessments generally, and broad-scale CFR assessments specifically, does not yet meet this requirement. Most efforts, notably those under the Climate Model Intercomparison Project (Eyring et al., 2016) and the Intergovernmental Panel on Climate Change (IPCC, 2014a), have focused on the exploration of uncertainty in climate models under different emission scenarios. 
More recently, a range of impact model intercomparison projects united under the umbrella of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP; https://www.isimip.org/) have started to explore uncertainties in impact models across different sectors such as water (Zaherpour et al., 2018) or forests (Petter et al., 2020). However, there are also many uncertainties beyond climate and impact models, which have been hardly explored, even though many of these uncertainties are known to be substantial. For broad-scale CFR assessments, these include uncertainties in subnational population data (Merkens et al, 2016), flood depth damage functions (Huizinga et al., 2017), extreme value analysis (EVA) methods (Mentaschi et al., 2016;Wahl et al., 2017), defense failure mechanisms (Allsop et al., 2007), and adaptation scenarios. To the best of our knowledge, no study has attempted to assemble and compare all major dimensions of uncertainty underlying current and future CFR. This study contributes to filling these gaps by providing a literature-based comparative assessment of the major sources of uncertainty and biases relevant in broad-scale CFR assessments. We focus on broad-scale assessments, because the methods applied differ from methods applied in local-scale assessments (de Moel et al., 2015), mainly due to the limited availability of global data and computational resources. Assessing and comparing uncertainties in these methods is particularly timely, because they have developed substantially in recent years (Abadie et al., 2016;Diaz, 2016;Hallegatte et al., 2013;Hinkel et al., 2014;Tiggeloven et al., 2020;Voudoukas, Mentaschi, et al., 2020;Vousdoukas et al., 2018) with broad-scale extreme sea-level models and datasets becoming increasingly available (Calafat & Marcos, 2020;Mentaschi et al., 2017;Morim et al., 2019;Muis et al., 2020;Tadesse et al., 2020;Vitousek et al., 2017;Vousdoukas et al., 2017;Woodworth et al., 2016). In our uncertainty assessment we consider uncertainty in drivers and future projections of the four components of CFR, following the risk definition of the Intergovernmental Panel on Climate Change (IPCC) Wong et al., 2014): 1. Mean and extreme sea-level hazards, including sea-level rise, tides, surges, waves, river run-off and their interactions; 2. Hazard propagation onto the shore and the floodplain, including interaction with natural (e.g., dunes) and artificial (e.g., dikes) defences; 3. Exposure in terms of area, people and coastal assets potentially threatened by these hazards; and 4. Vulnerability, which refers to the propensity of the exposure to be adversely affected by the flood hazard (IPCC, 2014b). For each component and driver we extract, to the extent available in the literature, quantitative estimates of how sensitive components and resulting flood risk are to variations in the drivers. Finally, we compare results across components and drivers, and provide directions for future research toward the goal of attaining broad-scale CFR estimates that consider all major dimensions of uncertainty and thus adequately inform relevant policy processes. This paper is a product of the Coastal Impact Model Intercomparison Project (COASTMIP; www.coastmip.org), a community-driven effort bringing together coastal system and impact modellers from around the world to better understand and project the long-term impacts of climate change on coastal systems. 
Materials and Methods
Broad-scale assessment of CFR involves the application of many datasets and chains of numerical and statistical models, including climate models, land-ice models, tide, surge and wave models, defense failure models, inundation models, and damage models. Figure 1 provides an overview of how these data and models are generally combined in CFR assessments. Each step involved is discussed in more detail in the subsections that follow. A major methodological difference among broad-scale CFR assessments is that they have either been conducted for collections of major European or global coastal cities (Abadie, 2018; Abadie et al., 2016; Hallegatte et al., 2013; Hunter et al., 2017; Prahl et al., 2018) or for entire coasts at continental or global scales (Brown et al., 2016, 2019; Diaz, 2016; Hinkel et al., 2014; Nicholls et al., 2018; Tiggeloven et al., 2020; Vousdoukas et al., 2020). While the former studies generally take information on extreme sea-levels directly from observations (e.g., tide gauges located near the cities), the latter require the application of tide, surge and wave models to also provide information for ungauged locations. In an ideal situation, uncertainty assessment could proceed as a global sensitivity analysis using an integrated modeling system that covers all of the steps of Figure 1, allowing all uncertain variables to vary simultaneously (Saltelli et al., 2008). Given the number of datasets and models (including their alternative formulations and parameterizations), the large number of uncertain variables involved, and the high computational costs required for running these models, this is far from being possible today. As a consequence, the available literature on uncertainty in CFR specifically, and on climate impacts generally, has focused on exploring a few selected dimensions of uncertainty, mostly by varying one or a few uncertain variables at a time (Frieler et al., 2017). Furthermore, the literature is compartmentalized into sets of studies addressing uncertainty in individual components of CFR: part of the literature focuses on mean sea-level rise uncertainty, part on extreme sea-level uncertainty, part on wave uncertainty, and part on uncertainty in flood exposure. As our paper assesses uncertainty based on the published literature, we structure the presentation of results according to these components. For each of the four components of CFR we consider one or several target variables, which are the outcome variables that broad-scale studies generally report upon. These are mean sea-level rise (Table 1), extreme sea-levels (Table 2), wave heights (Table 3), flood damages (Tables 4 and 6), and area, people and asset exposure (Table 5). For each of the target variables, we consider one or several sources of uncertainty pertaining to the following three dimensions:
1. Scenario uncertainties, which are due to unpredictable human choice and include here socio-economic development scenarios, greenhouse gas emission/concentration scenarios and adaptation scenarios.
2. Epistemic uncertainties, which are due to imperfect knowledge and hence can be reduced in principle.
This includes data uncertainty (e.g., digital elevation data, population data), climate model uncertainty (including downscaling methods), impact model uncertainty (e.g., the hydrodynamic models used to simulate tides, waves and surges and their interactions; the defense failure and inundation models applied) and methodological uncertainty (e.g., uncertainty in methods for extreme value analysis).
3. Aleatory uncertainty, which is internal to the system studied and cannot be reduced (e.g., natural climate variability).
[Table 1. Uncertainty in global mean sea-level rise. A->B denotes a variation in a variable from value A to value B; the variation factor is B/A. NR = not reported; AIS = Antarctic Ice Sheet.]
We assess by how much the target variables vary within the uncertainty range of each individual source variable. The results are shown in Tables 1-6. If possible, we report on uncertainty ranges of target variables in absolute and relative terms. We thereby denote the absolute variation in a variable from value A to value B as "A->B". If a study does not report on A and B explicitly, we report on the absolute difference between A and B (i.e., B-A). The relative size of the variation in the target variable is given as the variation factor B/A. This factor is then used to compare uncertainties across target and source variables. To the extent allowed by the published studies, we try to use consistent uncertainty ranges for the source variables. For greenhouse gas emission/concentration uncertainty, we use the range from the representative concentration pathways (RCP) 2.6 to 8.5, as this is the range most commonly reported upon in the literature. For socioeconomic uncertainty we use the range over the Shared Socioeconomic Pathways (SSP), which is the standard set of socioeconomic scenarios used in climate change-related research and consists of five alternative futures describing different challenges to adaptation and mitigation (Kriegler et al., 2012; O'Neill et al., 2017). We acknowledge that these two ranges might not necessarily span the full range of uncertainties. For example, alternative socio-economic scenarios have come up with both higher and lower population numbers in 2100 than the SSPs (Vollset et al., 2020). Similarly, some authors argue that there is a 35% chance of exceeding RCP8.5 (Christensen et al., 2018), while others argue that RCP8.5 is an extreme and very unlikely case (Hausfather & Peters, 2020).
[Table 2. Uncertainty and bias in current and future extreme sea-levels. A->B denotes a variation in a variable from value A to value B; the variation factor is B/A.]
Generally, we aim to report on both uncertainty and bias. Bias is assessed either as the difference between observations and model or method results. When observations are not available, as is the case for many
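As a minimal illustration of this comparison metric (not code from the paper), the helper below computes the absolute variation B-A and the variation factor B/A for a pair of reported values; the example numbers are arbitrary.

```python
def variation(a, b):
    """Return the absolute variation (B - A) and the variation factor B/A
    used to compare uncertainties across target and source variables."""
    if a == 0:
        raise ValueError("variation factor B/A is undefined for A == 0")
    return b - a, b / a

# Example with arbitrary units: a target variable of 1.2 under one scenario
# and 2.5 under another gives an absolute variation of 1.3 and a factor of about 2.1.
delta, factor = variation(1.2, 2.5)
```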
This includes "Mean", "Median", "Max" (i.e., maximum) and in the case results are reported for a single location, "None". b centered-root-mean-square-difference. c significant wave height. d maximum wave height. Table 3 Uncertainty and Bias in Current and Future Wave Height. A->B Denotes a Variation in a Variable From Value A to Value B and the Variation Factor is B/A components of CFR, we report on uncertainty ranges for target variables obtained through the use of alternative datasets, models or methods, as generally done in climate and impact model intercomparison projects. Uncertainties in future components of flood risk are reported for 2100, because this is the time horizon most frequently used in assessments. Mean Sea-Levels Methods applied. The mean sea-level metric relevant for CFR assessments is local relative sea-level change, which is generally obtained by combining information on the following components: (i) sterodynamic SLR obtained from multi-model ensembles of Atmosphere-Ocean General Circulation Models (AOGCM), (ii) contribution of land-ice models (Antarctica, Greenland) and glaciers forced by AOGCM output in terms of mainly temperature and precipitation, (iii) contribution of land water storage changes. From these components local relative sea-level rise is obtained with a sea-level equation model that calculates the gravitational and rotational patterns in sea-level rise (Slangen et al., 2014); (iv) contribution of glacial-isostatic adjustment; and v) contribution of uplift and subsidence processes, especially in geologically recent sedimentary deposits such as deltas and alluvial plains . Results are typically expressed by their mean values and a selected percentiles (e.g., 17%-83%, Oppenheimer et al., 2019), but full probability density distributions have also been produced (Kopp et al., 2014). Major uncertainties. According to the process-model based assessment of the IPCC's Special Report on the Oceans and the Cryosphere in a Changing Climate (SROCC; Oppenheimer et al., 2019), uncertainties in 21st century global mean SLR due to uncertainty in emission scenarios and uncertainty in models (climate and land-ice) are roughly at equal footing ( Table 4 Uncertainty and Bias in Flood Propagation. A->B Denotes a Variation in a Variable From Value A to Value B and the Variation Factor is B/A mean SLR ( van de Wal et al., 2019). Uncertainties in the ice-sheet contributions are high because some of the processes that may lead to large contributions by the end of the century are only captured in a highly parameterized way in state-of-the-art ice-sheet models. This includes hydrofracturing of ice shelves, leading to enhanced mass loss from the ice sheet by marine ice sheet or marine ice cliff instability (DeConto & Pollard, 2016;Pattyn et al., 2018), or the development of dark ice surfaces due to inorganic matter or ice algae accumulating due to warmer and wetter conditions on the ice sheet of Greenland, which in turn accelerates surface melting (Tedstone et al., 2017). For this, and other reasons, expert elicitation studies (Bamber et al., 2019) on the contributions of the ice-sheets to SLR consistently produce higher SLR estimates than process-model based studies, particularly for higher RCPs and higher percentiles. In these studies, 21st century global mean SLR is about twice as sensitive to model uncertainties (including expert judgment for ice-sheets) as compared to emission uncertainties (Table 1) for the 95% percentile. 
Related to this, epistemic uncertainties exist on the covariance between the contributions of different components (Lambert et al., 2021). Most studies use Monte-Carlo Analysis assuming complete independence, which certainly is not justified for some components as they are driven by the same climate forcing. Including this could significantly increase or decrease the uncertainty in the local relative SLR and thereby affect the higher and lower percentiles of the probability distribution function. HINKEL ET AL. Note. NR stands for values not reported. a low elevation coastal zone. Table 5 Uncertainty in Current and Future Exposure. A->B Denotes a Variation in a Variable From Value A to Value B and the Variation Factor is B/A Another major dimension of local mean sea-level uncertainty is human-induced subsidence, which is mainly the enhancement of sediment compaction through the actions of humans, especially through groundwater withdrawal (Shirzaei et al., 2020;Syvitski et al., 2009). The effect of this is small in terms of global mean sea-levels, but it is large in terms of flood risk, because coastal populations are preferentially located in subsiding locations such as the large river deltas and their associated cities in Asia. While the global average rate of mean sea-level rise is 3.3 mm/yr , the average rate experienced in subsiding regions is currently up to three times faster at 8-10 mm/yr (Nicholls et al., 2021). Maximum current subsidence rates in some of the worst affected delta cities are as high as 120 mm/yr in Bangkok and 180 mm/yr in Jakarta (Erkens et al., 2015;Herrera-García et al., 2021). These rates are, however, difficult to extrapolate into the future, because even such high rates of subsidence can be reduced or stopped through appropriate water management measures, as has happened, for example, in Tokyo. The global effects of this uncertainty have therefore not been explored yet. Extreme Sea-Levels Methods applied. Global data on ESL hazard is generated based on high frequency tide gauge records where available, and on numerical tidal, storm surge, wave and river models for ungauged locations and future conditions. Numerical storm surge simulations generated with process-based models are now becoming available at broad scales Vousdoukas et al., 2017). Computationally more efficient numerical models based on statistical relationships between surges and atmospheric pressure fields have demonstrated similar performance as process-based models, at least in some regions (Cid et al., 2018;Rashid & Wahl, 2020;Tadesse et al., 2020), but they require observations (or output from process-based models) for training and have not yet been applied for broad-scale CFR assessments. A new observation-based probabilistic assessment of ESL has recently been applied along the European coasts (Calafat & Marcos, 2020), providing improved accuracy at both gauged and ungauged sites. Despite its promising performance, these types of models are still at their initial stages and further developments are needed for CFR. Tide-surge models generally underestimate observed ESL by a few decimeters for the 100 years events on average (Table 2), but the underestimation of the surge models of the strongest events is much higher than the global average value. 
Differences between modeled and observed ESL are significantly larger in areas hit by tropical cyclones, specifically for large return periods, because the temporal and spatial resolution of climate reanalysis/simulations are insufficient to fully include the strong winds of tropical cyclones and do not contain a sufficient number of tropical cyclones to obtain reliable statistics of extreme values (Hodges et al., 2017;Muis et al., 2020;Woodruff et al., 2013). For example, maximum observed ESL in New Orleans during hurricane Katrina have been found to be 2.7 to 4 times higher than simulated 1000-yr ESL (Muis et al., 2016). Ongoing work is addressing these limitations. For example, using high-resolution climate data (Bloemendaal et al., 2018) or parametric wind models combined with best track data (Lin & Emanuel, 2016) Note. NR stands for values not reported. Table 6 Uncertainty in Current Local Vulnerability and Flood Damage. A->B Denotes a Variation in a Variable From Value A to Value B and the Variation Factor is B/A Furthermore, synthetic resampling techniques can be applied to extend the observed records to thousands of years (Bloemendaal et al., 2020;Emanuel et al., 2006). Wave set-up and run-up. Wind-waves contribute to ESL via three processes: infragravity waves, wave setup and wave runup (Dodet et al., 2019) with uncertainties associated with each, introduced via uncertainties in: (i) offshore wave characteristics, that is, how well observed or simulated they are, which will be described in the next subsection; and (ii) how waves propagate and interact with the nearshore morphology and coastal profile. Considering wave contributions to ESL via process-based modeling is challenging at broad scales, due to the computational cost of large-scale numerical models with the necessary high spatio-temporal resolution, and a lack of observational records as coastal tide-gauges are generally preferentially positioned in locations sheltered from any contribution from wind-waves. As a result, only a few broad-scale CFR assessments have considered the contribution of wave-set up to coastal flooding, and those that have, used simple parameterisations dependent on offshore wave information and coastal morphology Vousdoukas et al., 2018). Alternative parameterizations for assessing wave set up for sandy coasts are available, for example, setup = 0.2*Hs by Holthuijsen (2010), or that of Stockdon et al. (2006), which have been applied globally for beach shorelines (Melet et al., 2020;Rueda et al., 2017;Vitousek et al., 2017) or modified for other environments such as coral reefs (Beck et al., 2018). The simple parameterization of Holthuijsen (2010) can significantly overestimate wave-setup locally. For example, using a local coupled surge-wave model, Amores et al. (2020) find a wave set-up of 40 cm caused by waves with a significant height of about 800 cm during the storm Gloria in the north-western Mediterranean, as compared to 160 cm that would be obtained by applying the Holthuijsen (2010). One major uncertainty in applying other parametrizations that rely on input parameters such as beach slope is the lack of broad-scale data on morphology across the shoreface (nearshore, foreshore and backshore), due to the lack of an observing system capable of measuring this at an affordable cost and appropriate spatio-temporal scales. To circumvent the problem, global studies have assumed a constant beach slope, for example, 0.1, by Melet et al. 
(2018), which can locally over-or under-estimate wave set-up given that observed beach slope ranges between 0.001 and 0.6 (10th to 90th percentiles) globally (Athanasiou et al., 2019). On a global average, the contribution of wave-set up to ESL is relatively small , but wave-set up can locally reach 40-50 cm under strong storm conditions (Amores et al., 2020;Bertin et al., 2015). Wave-run up contributions were considered in broad scale studies by Melet et al. (2018Melet et al. ( , 2020. While these contributions are short time scale (on order of wind-wave frequencies) and unlikely to lead to sustained flooding, they can play an important role in initializing failure of coastal defenses such as dikes or dunes. Runup estimates are very sensitive to beach slope assumptions (Stockdon et al., 2006). Statistical dependencies between ESL components. Further bias in broad-scale assessments of ESL is introduced through the non-consideration of the statistical dependence between surge, tide, river discharge and wave contributions to ESL. While tides are the major contributor to ESL globally (Merrifield et al., 2013), nonlinear tide-surge interactions are generally not considered in broad-scale assessments of ESL, which can overestimate ESL by up to 70 cm at some locations . Similarly, broad-scale ESL assessments generally do not include the influence of river discharge on ESL in river deltas and estuaries. This effect has not been quantified for river months at global scale, but it has been found that including the coastal ESL into river flood models increases extreme water levels on average by about 10 cm for many global deltas and estuaries Ikeuchi et al., 2017). Finally, the occurrence of high storm surges and wind waves is correlated at about 55% of the global coastline, and neglecting this effect can underestimate the contribution of wave setup to ESL significantly (Marcos et al., 2019). This also holds true for many ESL records from tide gauge locations, as gauges are usually located in wave-sheltered harbors and hence underestimate the contribution of waves to ESL (Lambert et al., 2020). Extreme value analysis (EVA) methods. Uncertainties in ESL also arise from the different extreme value analysis (EVA) methods regarding the selected frequency analysis approach, statistical model applied and the return period curve fitting (Buchanan et al., 2017;Hamdi et al., 2014;Wahl et al., 2017). The Gumbel distribution, for example, which has been extensively used in broad-scale ESL studies (Hunter, 2012;Hunter et al., 2013;Muis et al., 2016), tends to overestimate global return levels by 22 cm on average as compared to the Generalized Pareto Distribution . In many studies, stationary EVA models are applied to quasi-stationary slices of data, typically with a length of 30 years (Vousdoukas et al., 2016). Nonstationary EVA methods enable studying time-varying return levels and thus increase the sample, generally leading to a decrease of the statistical uncertainty (Menendez et al., 2009;Mentaschi et al., 2016). Furthermore, limitations in the observational data set, including short length of the time series, lack of representativeness and, associated to this, a lack of observed strong events, limits the accuracy of ESL estimates, in particular for long period return levels (Table 2). Globally, Wahl et al. 
(2017), for example, quantified that the 100-year ESL increases about 15 cm on global average, and up to more than half a meter at certain locations, when 70 years instead of 20 years of observations are used. This effect is specifically pronounced if exceptionally large extreme events have not been included in the extreme value analyses. Including such events, either by updating return levels after large events or by extending tide gauge records with information on ESL found in historical documents or through modeling efforts (see section on Tide-surge models) can reduce these uncertainties. For example, integrating historical records into the tide gauge record of Venice increases the 50-year ESL by factor 1.3 (Marcos et al., 2009). Decadal variability of 50-year ESL has been found to lie at around 10 cm (Marcos et al., 2015;Menendez & Woodworth, 2010;Rashid et al., 2019). Future ESL. At broad-scales estimates of future ESL have been generally produced by considering the following two climate change effects separately: (i) effects of mean SLR on ESL (mean SLR forcing, hereafter) are captured by displacing ESL distributions upwards (or downwards) with changes in mean sea-levels, which means that the uncertainties involved are those related to mean sea-levels presented above; and ii) effects of changing atmospheric conditions on ESL (atmospheric forcing hereafter) are assessed by forcing surge models with wind and pressure data from climate models. The latter effect has been studied less at broad scales, but generally this effect is much smaller as compared to the former, estimated to influence ESL by less than 10% on global average under RCP4.5 and RCP8.5 (Vousdoukas et al., 2018). These estimates are median values based on multi-model ensembles and locally changes in storminess can have a larger effect on ESL, either positive or negative. Further uncertainties in estimating future ESL that have hardly been quantified at broad scales include ocean model errors related to the reduced spatial resolution of both the meteorological forcing and model grid (Calafat et al., 2014;Conte & Lionello, 2013). Uncertainties related to future tides, which change due to a number of processes including mean sea-level change have generally not been considered in broad scale CFR analysis. SLR alters tides by reducing bottom friction, changing resonance properties and increasing reflection at the coast (Idier et al., 2017). The effect of SLR on tides has been estimated to be smaller than ±16 cm change in MHW under 1 m SLR at the 136 largest coastal cities, assuming a fixed coastline (Pickering et al., 2017). A further source of uncertainty in future ESL relates to local nearshore effects of rising mean-sea levels on waves, tides and surges Arns et al., 2015;Du et al., 2018;;Roland et al., 2012;Schmitt et al., 2018;Zijl et al., 2013). For example, sea-level rise increases shallow water depth, which can increase tidal ranges and surges by reducing damping friction. Furthermore, deeper nearshore waters reduce wave set up but increase wave amplitudes and hence wave runup in the case of rigid/fixed coastlines (Cheon & Suh, 2016). In the case of sandy beaches that are able to retreat landwards wave setup will remain constant, since the beach profile will not change. These effects are specifically pronounced in shallow continental shelf areas such as the German Bight, where it has been found that these effects have the same order of magnitude as the direct increase of ESL through mean sea-levels . 
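Returning to the extreme value analysis choices discussed above, the following sketch (synthetic data, using scipy) contrasts a Gumbel fit to annual maxima with a Generalized Pareto fit to peaks over a threshold, and evaluates the resulting 100-year return levels; declustering of exceedances and non-stationarity are ignored for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = 70
hourly_levels = rng.gumbel(loc=1.0, scale=0.25, size=years * 365 * 24)  # synthetic water levels (m)

# (a) Gumbel fit to annual maxima.
annual_max = hourly_levels.reshape(years, -1).max(axis=1)
loc_g, scale_g = stats.gumbel_r.fit(annual_max)
rl100_gumbel = stats.gumbel_r.ppf(1.0 - 1.0 / 100.0, loc_g, scale_g)

# (b) Generalized Pareto fit to peaks over a high threshold (peaks-over-threshold approach).
u = np.quantile(hourly_levels, 0.999)
exceedances = hourly_levels[hourly_levels > u] - u
shape, _, scale_p = stats.genpareto.fit(exceedances, floc=0.0)
rate = len(exceedances) / years                  # mean number of exceedances per year
rl100_gpd = u + stats.genpareto.ppf(1.0 - 1.0 / (100.0 * rate), shape, 0.0, scale_p)
```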
While some of these effects are beginning to be captured globally, for example, the effects of SLR on tides and tide-surge dynamics , these effects have so far not been considered in broad-scale CFR analysis. Combining MSL and ESL. Finally, for the interpretation of projections of future ESL, it is important to note that different approaches have been applied for combining MSL and ESL. One approach adds ESL distribution to deterministic scenarios of MSL (Hallegatte et al., 2013;Hinkel et al., 2014) and the other approach convolutes ESL distributions with probabilistic sea-level scenarios (Buchanan et al., 2017;Vousdoukas et al., 2018). Care needs to be taken, because the resulting future ESL distributions have different interpretations. In the former case, the ESL distribution attained represents a possible ESL that could occur in the future under the assumption that the chosen deterministic SLR scenario materializes. In the latter case, the future distribution of ESL attained will never materialize (i.e., probabilistic scenarios do not materialize by definition). The obtained distribution rather represents the likelihood of occurrence of a specific ESL at a given moment in the future. Wind Waves Methods applied. In-situ wave observations (wave buoys) provide measurements of the full wave spectrum in deep waters, thus resolving wave period and length. Wave buoys are, however, sparsely distributed globally, have limited record lengths and, in many instances, there are homogeneity issues due to changing buoy measuring platforms (Gemmrich et al., 2011). Globally wave observations have been available for the last 40 years from satellite altimetry, but this provides only information on wave heights at low temporal resolution (∼10 days) and not on wave periods or directions. Hence, understanding global-scale wave characteristics relies heavily on numerical models, typically third-generation spectral models such as Wavewatch III (Tolman, 2009), WAM (Komen et al., 1996) or SWAN (Booij et al., 1997) or statistical models that capture the relationships between wave heights and atmospheric fields (Camus et al., 2011;Wang et al., 2014). Current wind waves. Averaged over broad-scales, models simulate wave heights remarkably well, but variations in calibration data (observed waves) and forcing products (surface winds of varying spatial or temporal resolution) can lead to significant uncertainties (Table 3), typically greatest for the extreme waves, which are also those most relevant for flood risk. Low space-time resolution increases the uncertainty related to numerics in dynamical wave models and model calibrations can be resolution dependent. Furthermore, subscale processes such as unresolved islands can have significant consequences on the model skill if not properly parameterized (Mentaschi et al., 2020;Tolman, 2003). Locally, the uncertainty in extreme waves is thereby stronger for events dominated by mesoscale dynamics (Mentaschi et al., 2015), notably for tropical cyclones, for which increasing model resolutions can increase maximum wave heights significantly (Table 3). Similar to the surge component of sea-level hazard discussed above, model calibration and validation during tropical-cyclones is hampered by the scarcity of observations. Uncertainties in wave period and direction, which are equally important for the wave-related coastal flooding, are greater compared to wave heights. 
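As a small numerical companion to the wave set-up parameterizations mentioned in the previous subsection, the sketch below implements the rule-of-thumb set-up of Holthuijsen (2010) and, as the Stockdon et al. (2006) relations are commonly written, a set-up and 2% runup estimate for sandy beaches; the example values are hypothetical and chosen only to illustrate the strong sensitivity to the assumed beach slope.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def setup_holthuijsen(hs):
    """Rule-of-thumb wave set-up: eta ~= 0.2 * Hs (Holthuijsen, 2010)."""
    return 0.2 * hs

def setup_runup_stockdon(h0, t0, beta):
    """Wave set-up and 2% runup for sandy beaches, as the Stockdon et al. (2006)
    relations are commonly written. h0: deep-water significant wave height (m);
    t0: peak period (s); beta: foreshore beach slope (-)."""
    l0 = G * t0**2 / (2.0 * np.pi)                    # deep-water wavelength (m)
    setup = 0.35 * beta * np.sqrt(h0 * l0)
    swash = np.sqrt(h0 * l0 * (0.563 * beta**2 + 0.004))
    r2 = 1.1 * (setup + swash / 2.0)
    return setup, r2

# Hypothetical storm: Hs = 8 m, Tp = 12 s. The rule of thumb gives ~1.6 m of set-up,
# while the slope-dependent estimate gives ~0.3 m for beta = 0.02 and ~1.5 m for beta = 0.1.
```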
Low spatial resolution of wave models is also a particular limitation for the coastal and nearshore zone, as the nearshore wave dynamics, such as wave setup, are poorly resolved (Saulter et al., 2017). Model resolution (spatial and spectral) also limits representation of important processes in determining wave driven coastal sea-level, for example, infragravity waves. Generally, the comparison between climate change wave studies is hampered by the inconsistency of wave variables reported (Morim et al., 2018). Future wind waves. Broad scale ESL studies and CFR assessments commonly assume wave climate stationarity Melet et al., 2018;Vitousek et al., 2017). Of studies that consider climate driven changes in wave characteristics, uncertainties in projections of future offshore wave conditions are dominated by uncertainties in future forcing from climate models, followed by wave model uncertainty and RCP uncertainty has the smallest contribution. Climate model uncertainty is globally significant and an order of magnitude larger than climate scenario uncertainty . Using an ensemble of 148 wave climate projections using different global climate and wave models shows robust changes in at least one parameter of wave climate (significant wave height, wave period and wave direction) for about 50% of the ocean (Morim et al., 2019). Under RCP4.5, however, all robust changes in wave climate fall within the present-day natural variability, but under RCP8.5 changes exceed this over ∼50% of the world's ice-free ocean area. Other sources of uncertainty include unresolved forcing characteristics, for example, tropical cyclones (Appendini et al., 2017;Timmermans et al., 2017) and those associated with model resolution and uncertainties surrounding EVA methods. Nearshore wave climate is generally more sensitive to climate change than offshore wave climate due to effects of SLR on coastal morphology (e.g., deeper waters) (Wandres et al., 2017). Uncertainties surrounding future coastal morphology including, for example, issues around sediment availability and shoreface slope changes (Cowell et al., 1995;Goodwin et al., 2006) and reef stability (Hongo et al., 2018) have, however, not been explored at broad-scale. Hazard Propagation Processes and methods applied. The propagation of mean and extreme sea-levels into the hinterland causing coastal flooding is shaped by how sea levels interact with the coastal profile including the natural (e.g., dunes) and artificial (e.g., dikes, seawalls) flood barriers in place. The presence or absence of coastal protection and its design standard have large effects on flood extent and depth. If no barriers exist, ESL propagate inland where they exceed land elevation. Where barriers are present, flooding is caused by the following three failure mechanisms: (i) defense overtopping by waves, if wave runup exceeds the height of the defenses; (ii) defense overflowing by ESL, if ESL exceed the height of the defenses; and (iii) defense breaching, where part of the defense is removed by ESL and waves, or geotechnical failure ( Figure 2). Furthermore, different types of inundation models are applied to assess flood propagation, ranging from static, bathtub approaches to hydrodynamic models. Defense failure mechanisms. The uncertainties of many of the above processes have only been quantified at local scales (Le Cozannet et al., 2015). 
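The three defense failure mechanisms introduced above can be contrasted with a toy check of which modelling assumption triggers flooding for a given event; this is a schematic illustration only (hypothetical function name), not a representation of any of the cited models, and it cannot distinguish the flood volumes that separate overflow-only from breach assumptions.

```python
def flood_triggered(esl, runup, crest, mode):
    """Schematic flood trigger under different defense-failure assumptions.

    esl: extreme still-water level (m); runup: wave runup on the defense (m);
    crest: defense crest height (m); mode: modelling assumption.
    Note: "overflow_only" and "breach_on_exceedance" trigger at the same level but
    differ in the volume entering the floodplain, which this toy check ignores.
    """
    if mode == "no_defenses":            # exposure-only studies
        return True
    if mode == "overflow_only":          # flooding only if still-water level exceeds the crest
        return esl > crest
    if mode == "overtopping":            # waves running up over the crest also cause flooding
        return esl + runup > crest
    if mode == "breach_on_exceedance":   # defense assumed to fail completely once exceeded
        return esl > crest
    raise ValueError(mode)

# E.g. esl = 2.8 m, runup = 1.0 m, crest = 3.0 m floods under "overtopping"
# but not under "overflow_only", illustrating the wave-dominated cases noted above.
```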
Defense failure mechanisms. The uncertainties of many of the above processes have only been quantified at local scales (Le Cozannet et al., 2015). Modeling defense failure mechanisms, in particular wave overtopping and breaching, requires high-resolution hydro-morphodynamic modeling and data on coastal profiles and defense designs, which are not available at broad scales. Hence, broad-scale CFR assessments have either: (i) focused on flood exposure and ignored coastal protection (Hanson et al., 2011); (ii) only considered overflow but not breaching (Vousdoukas, Bianchi, et al., 2018); or (iii) only considered breaching, assuming that once an ESL exceeds defense heights, defenses breach and fail completely (Diaz, 2016; Hinkel et al., 2014; Tamura et al., 2019), or a combination of the latter two (Hallegatte et al., 2013). Ignoring coastal protection is misleading because extensive defense systems exist in developed and well-populated coastal locations around the world, with notable concentrations in East Asia, North-West Europe and many populated deltas. Concentrating on defense overflow acknowledges that flood defenses may still provide some protection even if ESL exceeds their height, as defenses still reduce the amount of water flowing into the floodplain. However, it has also been found that coastal defenses often breach once overflow occurs (Hall et al., 2003). These different approaches have not been compared, and the uncertainties have not been assessed at broad scales, but they have been shown to be large at local scales. For the Solent in the UK, for example, it has been shown that switching from simulating overflow to simulating overtopping and breaching increases the number of properties inundated by a factor of 3.7 (Wadey et al., 2012). Not considering wave overtopping in broad-scale analysis leads to an underestimation of CFR in areas in which flooding is wave-dominated, as found in mid to low latitudes such as western Australia, eastern Madagascar, the Maldives and small islands in the Pacific (Beetham & Kench, 2018; Rueda et al., 2017; Wadey et al., 2017). Current protection levels. Independent of how defense failure mechanisms are modeled, there is a large uncertainty about current protection levels, because data on the presence of coastal defences, their nominal protection standard, probability of failure, and maintenance level are not systematically available at broad scales. While efforts are underway to collect some of these data (e.g., the FLOPROS database by Scussolini et al., 2016), expert judgment and modeling are presently required to fill the large gaps. For example, Yohe and Tol (2002) and Hinkel et al. (2014) model protection standards as a function of societal wealth and land use/population density. For the 136 largest coastal cities in the world, it was estimated, using data supplemented by expert judgment, that 60% of city defences are below a 1-in-100-year standard, 30% are at 1-in-100 years, and 10% are above 1-in-100 years, rising to 1-in-10,000 years for Amsterdam and Rotterdam in the Netherlands, where the highest defense standards in the world are presently found (Hallegatte et al., 2013). Using the FLOPROS modeling approach, Tiggeloven et al. (2020) have also estimated coastal flood protection for all regions of the world. Uncertainty in the resulting protection levels differs substantially between regions, but this has hardly been explored. Inundation modeling. Local-scale process-based inundation models are computationally too expensive for broad-scale analysis and require high-resolution topographic data that are not readily available. Therefore, broad-scale studies have applied either computationally more efficient reduced-complexity models like LISFLOOD-FP (Bates et al., 2010), for example, at European scales (Vousdoukas et al., 2016), or a static inundation model (i.e., a bathtub approach) in which the coastal water levels are projected inland across the floodplain where defences are overtopped (Diaz, 2016; Hallegatte et al., 2013; Hinkel et al., 2014; Tamura et al., 2019). Locally, the static approach has been found to overestimate flood extents in flatter terrains, as compared to dynamic approaches, by a factor of 0.5-2 when the main flooding process involved is overflow (Breilh et al., 2013; Gallien, 2016; Ramirez et al., 2016; Seenath et al., 2016). At broad scales, this has hardly been assessed. Only one study has compared the bathtub approach with LISFLOOD at the European scale, finding that flood extents using the former are about 1.6 times larger than using the latter (Vousdoukas et al., 2016). In this context it should be noted that hydrodynamic models do not necessarily provide better results either, but need to be calibrated to regional circumstances, which is difficult at broad scales. In terms of flood damages, both approaches have been found to produce similar results for Europe (Vousdoukas et al., 2020), which could be explained by an overestimation of the protective effect of defenses being overflown in the dynamic approach, as discussed above. Irrespective of the type of inundation model applied, other key uncertainties relate to the accuracy of the digital elevation model (discussed in the next Subsection), its resolution and, in the case of hydrodynamic approaches, data on surface roughness. As a pixel represents the average elevation height, lower resolutions lead to simplifications and smoothing of the terrain, with implications for the modeling of flood propagation. For example, increasing the resolution of Lidar data from 100 to 10 m and using a hydrodynamic inundation model has been found to double the 100-year coastal floodplain in Faro, Portugal (Vousdoukas et al., 2018). In contrast, in North Carolina it was found that increasing the DEM resolution from 15 to 6 m using a bathtub 8-side connectivity model reduces the area below 1.1 m by a factor of 0.8 (Poulter & Halpin, 2008).
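As an illustration of the static approach, the following minimal sketch (in Python/NumPy; the elevation grid, water level and sea mask are invented for illustration) implements a bathtub inundation with 8-side hydrological connectivity, that is, a cell is flagged as flooded only if it lies below the water level and is connected to the sea through other flooded cells.

```python
import numpy as np
from collections import deque

def bathtub_flood(dem, water_level, sea_mask):
    """Return a boolean mask of flooded cells.

    dem         : 2D array of ground elevations (m)
    water_level : scalar extreme sea level (m, same datum as dem)
    sea_mask    : 2D boolean array marking permanent water (the flood source)
    """
    below = dem < water_level
    flooded = np.zeros(dem.shape, dtype=bool)
    queue = deque(zip(*np.where(sea_mask)))
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]  # 8-side connectivity
    while queue:
        i, j = queue.popleft()
        for di, dj in offsets:
            ni, nj = i + di, j + dj
            if 0 <= ni < dem.shape[0] and 0 <= nj < dem.shape[1]:
                if below[ni, nj] and not flooded[ni, nj]:
                    flooded[ni, nj] = True
                    queue.append((ni, nj))
    return flooded

# Tiny illustrative example: a 4 x 5 coastal strip with the sea in the first column
# and a 2.4-2.7 m ridge shielding low-lying land behind it.
dem = np.array([[0.0, 0.5, 2.5, 0.8, 0.9],
                [0.0, 0.7, 2.6, 0.7, 0.6],
                [0.0, 0.6, 2.4, 0.9, 0.8],
                [0.0, 0.8, 2.7, 1.0, 0.7]])
print(bathtub_flood(dem, water_level=1.2, sea_mask=dem == 0.0))
```

Without the connectivity check (i.e., a pure elevation threshold), the low-lying cells behind the ridge in this example would also be counted as flooded, which is one reason why simple bathtub estimates can overestimate flood extents.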
Future adaptation. Many CFR assessments do not consider human adaptation when assessing future flood risk, which leads to an overestimation (called the no-adaptation bias hereafter) of CFR by 2-3 orders of magnitude under SLR in 2100 (Hinkel et al., 2014, Table 4). There is wide consensus, both in the flood risk literature generally (Di Baldassarre et al., 2015; Haer et al., 2019) and in the coastal flood risk literature specifically (Oppenheimer et al., 2019; Wong et al., 2014), that assuming no adaptation is not a plausible scenario, for several reasons. Coastal adaptation, specifically in the form of building and enhancing coastal defenses, is widespread today, is very effective in reducing CFR, and societies have a long history of reducing CFR through adaptation (Charlier et al., 2005). This specifically includes those places where potentially the highest coastal flood damages could occur, such as urban areas in river deltas, where local sea levels have risen by up to several meters due to human-induced subsidence during the last 100 years (see Section 3.1.1).
Furthermore, coastal urban areas are also those places where protection is economically very favorable, with the benefits of protection being much higher than its costs during the 21st century even under high SLR scenarios, which suggests that this approach will be widespread in the future (Hallegatte et al., 2013; Hinkel et al., 2018; Oppenheimer et al., 2019; Scussolini et al., 2017). As a result, the bias introduced by not considering adaptation in assessing CFR is very large. But even when considering adaptation, uncertainties in future CFR are large, because a wide range of alternative coastal adaptation scenarios are plausible, for several reasons. First, current adaptation practice is diverse, ranging from high flood hazard standards, through cost-benefit analysis, to large adaptation deficits (Bisaro et al., 2020; McEvoy et al., 2021; Nicholls et al., 2019). Second, adaptation has mostly been reactive, depending on the experience of an extreme sea-level event (Rasmussen et al., 2021). Third, social conflicts often impede adaptation efforts and it is impossible to predict how this will evolve in the future. Despite adaptation scenario uncertainty being large, it has hardly been explored at broad scales. Adaptation modeling has almost exclusively focused on coastal protection, and most studies have thereby focused on normative adaptation models such as maintaining protection levels (i.e., the annual probability of being flooded) constant (Hallegatte et al., 2013; Hoozemans et al., 1993; Nicholls et al., 2019; Tiggeloven et al., 2020), static cost-benefit analysis (Diaz, 2016; Nicholls et al., 2019; Tiggeloven et al., 2020; Vousdoukas et al., 2020), and robust adaptation using the criterion of benefit-cost ratios. Based on these kinds of models, it was found that alternative adaptation scenarios influence CFR in 2100 by factors of 20-27 (Nicholls et al., 2019, Table 4). A stylized example of such a static cost-benefit calculation is sketched below. Future work is needed to also explore other adaptation options such as accommodation, retreat and advance, and it is expected that this will increase the adaptation scenario uncertainty range. Furthermore, descriptive models, that is, models aiming at mimicking actual human adaptation behavior, have not been much developed, with the exception of Hinkel et al. (2014), who applied an econometric model that explains observed protection levels through socio-economic indicators. Future shoreline change. Another set of uncertainties is related to future shoreline change and the feedback that coastal adaptation has on this in terms of preventing shoreline change. Generally, broad-scale assessments of CFR assume that the shorelines (and beach morphology) will not change, which is obviously not generally the case. Currently, about 24% of the global sandy beaches are eroding at rates exceeding 0.5 m/yr and 28% are accreting (Luijendijk et al., 2018). SLR could lead to a complete loss of 50% of the world's sandy beaches (Vousdoukas et al., 2020), and local studies show that these effects alter flood extent (Passeri et al., 2015). Furthermore, it has been shown that allowing the shoreline to retreat with SLR (rather than fixing the coastline through protection) influences tidal characteristics and extreme water levels and hence flooding (Idier et al., 2017).
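Returning to the adaptation models mentioned above, the following minimal sketch illustrates the logic of a static cost-benefit choice of protection height. All numbers (costs, damages, the Gumbel ESL parameters) are invented for illustration and do not correspond to any published model.

```python
import numpy as np

# Hypothetical Gumbel ESL distribution and damage model for one coastal segment.
mu, beta = 2.0, 0.25            # Gumbel location/scale (m)
damage_if_flooded = 500e6       # damage when defenses are exceeded (USD, illustrative)
cost_per_m = 20e6               # annualized cost per meter of dike height (USD/m/yr)

def annual_exceedance(h):
    """Probability per year that the ESL exceeds a dike of crest height h."""
    return 1.0 - np.exp(-np.exp(-(h - mu) / beta))

def expected_annual_cost(h):
    """Expected annual damage plus annualized protection cost."""
    return annual_exceedance(h) * damage_if_flooded + cost_per_m * h

heights = np.arange(0.0, 6.01, 0.1)
best = heights[np.argmin([expected_annual_cost(h) for h in heights])]
print(f"Cost-minimizing dike height: {best:.1f} m "
      f"(protection level ~1 in {1 / annual_exceedance(best):.0f} years)")
```

With these invented numbers the optimum sits near a 1-in-100-year standard; in actual broad-scale studies, costs, damages and the ESL distribution vary by orders of magnitude along the coast, which is one reason why adaptation assumptions matter so much for CFR.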
Exposure
Methods. The broad-scale assessment of exposure of land, population and assets is based on combining elevation data with spatially explicit datasets of population and land use, and, in the case of exposure to ESL, applying inundation models for assessing the exposure to a flood with a given return period (e.g., the 100-year flood). Hence, the main uncertainties in assessing current exposure are associated with the accuracy of the underlying elevation, population and asset datasets. If exposure relative to a given ESL is assessed, then uncertainties in hazard and hazard propagation as discussed in Sections 3.1 and 3.2 also play a key role. Generally, there is little information and little systematic exploration of the error introduced in exposure estimates through data (in-)accuracy. However, a range of studies has explored the uncertainty in exposure by applying alternative datasets. Elevation data. Large uncertainties in exposure are associated with the accuracy of near-global digital elevation models (DEM). Most of these are digital surface models (DSM) and not digital terrain models (DTM), which means they represent the elevation value of the first reflectance surface and not necessarily the terrain itself (McClean et al., 2020). The DEM widely used in earlier broad-scale coastal flood exposure and risk studies include GTOPO30, GLOBE and SRTM30, which all have a spatial resolution of 30 arc-seconds (∼1 km at the equator) (Lichter et al., 2011; McGranahan et al., 2007). More recent studies have employed the newer versions of SRTM90, which have a spatial resolution of 3 arc-seconds (approximately 90 m at the equator), or derivatives that improve on SRTM, such as MERIT and CoastalDEM (Kulp & Strauss, 2018). According to the product specification, the root mean square error (RMSE) of SRTM30 is 9.7 m (Rodriguez et al., 2005), but this has considerable spatial variation and numerous studies have reported considerably better accuracies, with errors of 7, 5, 2 m or even 0.5 m, particularly in coastal areas (Gorokhovich & Voustianiouk, 2006; Kellndorfer et al., 2004; Luana et al., 2015). For the Low Elevation Coastal Zone (LECZ), which is the area below 10 m that is hydrologically connected to the ocean, for example, the RMSE of SRTM90 has been estimated at 5.6 m, and those of its derivatives MERIT and CoastalDEM at 3.1 m (Gesch, 2018). As there is a lack of global high-accuracy data against which to validate global DEM, claims about the performance of global DEM in assessing flood exposure need to be taken with caution. Generally, one would expect DSM such as SRTM to underestimate flood extent. For example, the CoastalDEM correction of SRTM increases the global population in the LECZ by a factor of 1.3 and the population in the 1-year floodplain by a factor of 3.7, but the neural network applied for this correction has only been trained in the US and Australia, where local LIDAR (Light Detection and Ranging) data was available (Kulp & Strauss, 2018), and its performance in other regions of the world is largely unknown. Conversely, local-scale analyses have found SRTM and other global DEM to overestimate both coastal (Wolff et al., 2016) and river flood extents (McClean et al., 2020). The recently released open Global Lowland DTM (GLL_DTM_v1), based on satellite LIDAR, has an RMSE of 0.5 m in the LECZ (Vernimmen et al., 2020), but only a horizontal resolution of 5.6 km, which makes it currently unsuitable for CFR assessment.
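To illustrate how vertical bias and error of the magnitudes quoted above for SRTM-type surface models can propagate into exposure estimates, the following minimal sketch perturbs a hypothetical terrain grid and recomputes the population below a given water level. The grid, population numbers, bias and noise values are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 200 x 200 coastal grid: true terrain elevation (m) and population per
# cell, with people concentrated in the low-lying part of the grid.
terrain = rng.uniform(0.0, 10.0, size=(200, 200))
pop = np.where(terrain < 5.0,
               rng.integers(50, 500, size=terrain.shape),
               rng.integers(0, 50, size=terrain.shape))

def exposed_population(elevation, water_level=2.0):
    """Population in cells below the water level (simple threshold, no connectivity)."""
    return int(pop[elevation < water_level].sum())

true_exposure = exposed_population(terrain)

# A DSM 'sees' canopy and roofs: emulate this with a positive bias in half the cells,
# plus random vertical noise with an SRTM-like RMSE (values are illustrative).
built_up = rng.random(terrain.shape) < 0.5
dsm = terrain + built_up * 2.0 + rng.normal(0.0, 1.5, size=terrain.shape)

print(f"True exposed population:     {true_exposure}")
print(f"Exposure estimated from DSM: {exposed_population(dsm)}")
```

In this invented setting the DSM-based estimate misses a substantial share of the truly exposed population, which is the qualitative behavior expected of SRTM-type surface models; conversely, negatively biased or very noisy terrain data can overestimate exposure, as found in the local studies cited above.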
Higher resolution versions are announced to be produced, which could then significantly improve broad-scale CFR assessments. Further uncertainties that have not been explored at broad scales relate to the horizontal resolution of DEM (see subsection on inundation modeling above) and overlaying elevation data with the coastline and population data as these datasets don't match (Lichter et al., 2011;McGranahan et al., 2007;Neumann et al., 2015). Current population exposure. The main uncertainties in population exposure relate to: (i) the temporal and spatial quality/accuracy of the census input population data; (ii) the implications of the methodological population redistribution approach applied; and (iii) the quality and spatial/temporal accuracies of the ancillary/covariate data used and offsets between DEMs and population data, for instance, due to different coastlines. A number of studies have explored uncertainties in global population exposure by using different global population and elevation datasets Kulp & Strauss, 2019;Lichter et al., 2011;Mondal & Tatem, 2012;Neumann et al., 2015). Switching between the two most widely used spatial global population datasets, Global Rural-Urban Mapping Project (GRUMP) (CIESIN et al., 2011) and LandScan (Dobson et al., 2003) increases current global exposure estimates by a factor of up to 1.7, depending on the elevation data used (Table 5). This difference is due to different approaches applied and in particular the extent to which population data distribution is modeled. GRUMP is a lightly modeled population data set that assumes that people live where they are registered and that within administrative units the population is allocated to urban or rural areas, which are determined based on settlement points and night-time lights (Balk et al., 2006). Conversely LandScan is a highly modeled population data set that represents the ambient (average over 24 h) population by using roads, land cover and slope to redistribute population within administrative areas (Dobson et al., 2003). There are two more recent higher resolution global population datasets, but these have not been compared to GRUMP and LandScan. The Gridded Population of the World (GPWv4) (Center For International Earth Science Information Network-CIESIN-Columbia University, 2016) gives the non-modeled population per administrative unit and has a spatial resolution of 30 arc seconds. Worldpop (Tatem, 2017) provides a modeled population at the spatially finest resolution of 100 m. Worldpop, GPW and GRUMP have in common that they are adjusted to UN-estimates, whereas LandScan gives the ambient 24 h population and is therefore not adjusted to UN-estimates. Current assets exposure. For broad-scale assessments, information about individual assets is usually not available and hence this is derived based on other datasets. One approach (Hallegatte et al., 2013;Hinkel et al., 2014) uses data on population exposure and population-to-assets multipliers that are empirically derived from global economic data such as the Penn World Table (Feenstra et al., 2015). Other approaches (Huizinga et al., 2017) use land-use maps and national information on building construction or replacement cost density. Various datasets exist at relatively high resolution, such as the 30 m GLC30 data set (Chen et al., 2015), and products denoting the percentage of urban surface are also becoming increasingly available, such as the Global Human Settlement Layer (Pesaresi et al., 2013). 
These global maps represent generic urban areas not subdivided into different types of uses (e.g., commercial, industrial, etc.), which represents an important uncertainty in broad-scale assessments compared to local-scale assessments, for which generally a more detailed categorization of uses is available (de Moel et al., 2015). To the best of our knowledge, no study has so far explored the uncertainty in the data and methods applied for generating asset exposure at broad scales. Future exposure. For the assessment of future exposure, sea-level rise, socio-economic and adaptation scenario uncertainties become relevant. Most broad-scale CFR assessments have focused on future population exposure, applying national population growth to current exposure under different SSPs (Merkens et al., 2016, 2018; Neumann et al., 2015), which ignores the spatial dynamics of exposure due to processes such as urbanization, land use change and coastward migration. To address this, Jones and O'Neill (2016) and Merkens et al. (2016) created spatially explicit global population grids for each SSP based on the population projections of KC and Lutz (2017). The main difference between the approaches is that Jones and O'Neill (2016) used a gravity-based downscaling model to account for changes in urban extents, whereas Merkens et al. (2016) explicitly accounted for differences in coastal to inland population changes. These subnational projections increase exposure by up to a factor of 1.4 as compared to applying national average population growth. The effect of regionalized population projections on global population exposure has the same magnitude as the effect of switching between SSP scenarios, but a larger magnitude than switching between SLR scenarios.
Vulnerability
Methods. Broad-scale CFR assessments have focused on asset vulnerability, generally represented as depth-damage functions (DDF) that map the water depth to a relative or absolute damage. There is a wide variety of DDF, generally developed for a specific country, such as HAZUS for the USA (Scawthorn et al., 2006) or the Multi-Colored Manual for the United Kingdom (Penning-Rowsell et al., 2014), though a global database has also been developed (Huizinga et al., 2017). Local studies apply DDF to different building types, but as such detailed data are not available at broad scales, broad-scale studies apply DDF either to land-use types (Vousdoukas et al., 2018) or to assets (Hallegatte et al., 2013; Hinkel et al., 2014). In contrast to global river flood assessments, human vulnerability, for example, in terms of flood mortality rates (Dottori et al., 2018; Jongman et al., 2015), has not been studied much in broad-scale CFR assessments. Depth-damage functions. While uncertainties in DDF at broad scales have hardly been studied, a wide range of local studies has investigated the uncertainty in flood damage modeling, often focusing on river flooding, but sometimes also addressing coastal flooding. Such studies have shown that uncertainties in the damage estimation are usually of the same order of magnitude as, or larger than, uncertainties in the flood hazard assessment (de Moel & Aerts, 2011; de Moel et al., 2014; Jongman et al., 2012; Winter et al., 2018). For example, in a sensitivity analysis, de Moel et al. (2012) find that when varying flood damage model parameters, flood damages vary by up to a factor of 8 in breach locations on the coast of the Netherlands, with the DDF being the biggest contributor to total damage uncertainty (about 30%-45%). A minimal example of how a DDF enters the damage calculation is sketched below.
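The sketch below (in Python; the depth-damage curve and asset values are invented for illustration and are not taken from HAZUS, the Multi-Colored Manual or the JRC database) shows how a relative DDF is typically applied to flooded land-use cells.

```python
import numpy as np

# Hypothetical relative depth-damage function: damage fraction vs. water depth (m).
ddf_depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])
ddf_fractions = np.array([0.0, 0.15, 0.30, 0.55, 0.75, 1.00])

def damage(depth_m, asset_value):
    """Absolute damage for a given inundation depth, by linear interpolation of the DDF."""
    fraction = np.interp(depth_m, ddf_depths, ddf_fractions)
    return fraction * asset_value

# Flooded land-use cells: (inundation depth in m, exposed asset value in USD).
cells = [(0.3, 2.0e6), (1.2, 5.0e6), (2.4, 1.5e6)]
total = sum(damage(d, v) for d, v in cells)
print(f"Total direct damage: {total / 1e6:.1f} million USD")
```

Shifting the damage fractions within plausible bounds, or raising all assets half a meter above ground level (as in the flood-proofing example discussed below), changes the total substantially, which is why the choice of DDF tends to dominate damage uncertainty.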
Similar ranges have been found for coastal flooding on Small Islands (Parodi et al., 2020) and river floods in case studies in the UK and Germany (Jongman et al., 2012). On top of this, the elevation of exposed assets with respect to the ground surface can influence damage estimates considerably (Koivumäki et al., 2010). For example, flood risk reduces by 50% when assuming that the ground floor of all buildings is 50 cm above the ground surface, or 61% when assuming that all buildings would be dry flood proofed (de Moel et al., 2013). When drawing conclusions for broad scale CFR analysis from these local findings, assumptions on heterogeneity are critically important, because when modeling a large number of individual objects, uncertainties cancel out if they are considered completely independent in terms of vulnerability and asset value (Saint-Geours et al., 2015;Wagenaar et al., 2016). Given the nature of building types in coastal cities, complete independence per object does not seem reasonable, though some heterogeneity is obviously present. Any bias in the functional form of the DDF chosen, however, will not be canceled out. Overall, studies show that between sets of damage functions there are large differences, but uncertainty estimates within a set of functions, generally associated with the value at risk, is substantially lower, as is illustrated by the different estimates of the Multi-Colored Manual (Penning-Rowsell, 2013) and the global database developed by the JRC (Huizinga et al., 2017). Further uncertainties, which have however not been explored at broad scales, relate to water depth being the only variable taken into account in the calculation of damage with DDF, which means that other factors also mediating the damage caused, such as hydraulic pressure, wave forces and salinity, are not considered. Future vulnerability. Projecting future CFR requires understanding the vulnerability of future societies. While globally, both human and economic vulnerability has decreased significantly in recent decades (Bouwer & Jonkman, 2018;Formetta & Feyen, 2019;Kreibich et al., 2017), the future dynamics in vulnerability (e.g., due to improved early warning and emergency response, flood proofing of infrastructures, better health care, more resistant building practices etc.), have not been addressed in broad-scale CFR assessments studies. What are the Major Uncertainties and Biases and How Could They be Reduced? When comparing uncertainties and biases across dimensions of broad-scale CFR assessments, by far the largest ones are associated with future human adaptation behavior, which potentially has a multiple order of magnitude effect on future flood risk ( Table 7). The largest part of this effect is due to the no-adaptation bias, resulting from ignoring coastal adaptation, which is still a default assumption in many broad-scale CFR assessments (and which dominates associated media coverage). This assumption is, however, misleading and should not be included in the sets of adaptation scenarios that inform policy. In today's world we see extensive adaptation (mainly protection) in coastal areas, high economic benefits thereof and widespread discussion and plans for further adaptation. Hence, it can be asserted that no adaptation scenarios describe a future that will never be seen. 
The other part of this effect is adaptation scenario uncertainty (factors 20-27), which cannot, by definition, be reduced in principle, because it is not possible to predict how societies will adapt in the future. A diversity of adaptation behaviors is plausible, ranging from hard-engineered to nature-based solutions and coastal retreat, which need to be explored in broad-scale CFR assessments. This can be supported by developing sets of plausible adaptation scenarios that can be used consistently across modeling efforts, similarly to the SSPs. Next down the ladder of highest global uncertainties are those associated with socio-economic development, ice sheet models and digital elevation data, influencing CFR by factors of 2.3-5.8, 1.6-3.8 and 1.2-3.8, respectively (Table 7). Regarding the digital elevation data, it has been argued that a collaborative and open effort toward a freely available high-accuracy DEM is needed to address this limitation (Gesch, 2018; McClean et al., 2020; Schumann & Bates, 2018). A higher resolution version of the newly released and freely available satellite LIDAR-based Global Lowland digital terrain model by Vernimmen et al. (2020) may constitute a big step in this direction. This is followed by GHG emission uncertainty (mean SLR forcing), which contributes factors of 1.6 to 2.1 to global mean sea-level uncertainty. Uncertainties in population data, wave model calibration datasets and inundation models have not been studied much at broad scales, but in the few available broad-scale studies these are globally relevant, with factors of around 1.5. Emission uncertainty due to atmospheric forcing driving changes in surges and waves is comparably small. The same holds true for the bias introduced by not considering the wave setup contribution globally. While being small in terms of global mean SLR uncertainty, the contribution of human-induced subsidence to local relative sea-level rise in river deltas, especially the rapid subsidence observed in many urban areas in such settings, is globally substantial at least until 2050. Given that this is influenced directly by human agency, however, future human-induced subsidence is highly uncertain. There are also uncertainties that have been observed to be significant locally, but that have not been quantified at broad scales. Concerning the sea-level hazard, by far the largest uncertainty is related to maximum surge and wave heights under tropical cyclones (Tables 2 and 3). For example, locally, surge models have underestimated ESL of large return periods during tropical cyclones by up to a factor of 4 (Muis et al., 2016). What this means for flood risk, which integrates over all return periods, is yet unknown, but as a large fraction of the global coastal zone is exposed to tropical cyclones, these uncertainties are expected to also be significant at the global scale. Other uncertainties in sea-level hazard are locally significant for some regions, but probably not so relevant globally. This includes uncertainties due to local shallow-water interactions between SLR, tides, waves and surges. In locations with shallow water slopes or extensive tidal flats, such as the German Bight, these processes can double ESL in 2100. Ongoing efforts to implement numerical models fed with all forcings are expected to reduce these uncertainties.
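As a minimal numerical illustration of the point above that flood risk integrates damages over all return periods, the sketch below computes expected annual damage (EAD) from a set of return periods and associated damages (all values are invented), and shows how a hypothetical factor-of-4 underestimation of the rare, cyclone-driven events propagates into the EAD.

```python
import numpy as np

def expected_annual_damage(return_periods, damages):
    """EAD as the integral of damage over annual exceedance probability (trapezoidal rule)."""
    probs = 1.0 / np.asarray(return_periods, dtype=float)
    dmg = np.asarray(damages, dtype=float)
    order = np.argsort(probs)                 # integrate from low to high probability
    return float(np.trapz(dmg[order], probs[order]))

return_periods = [2, 10, 50, 100, 500, 1000]   # years
damages = [0, 20, 80, 150, 400, 600]           # million USD, illustrative

ead = expected_annual_damage(return_periods, damages)

# Hypothetical case in which the rare (>= 100-year) events were underestimated by a factor of 4.
damages_corrected = [d * 4 if rp >= 100 else d for rp, d in zip(return_periods, damages)]
ead_corrected = expected_annual_damage(return_periods, damages_corrected)

print(f"EAD with original damages:   {ead:.1f} million USD/yr")
print(f"EAD with corrected extremes: {ead_corrected:.1f} million USD/yr")
```

In this invented example, the factor-of-4 error in the rare events roughly doubles the EAD rather than quadrupling it, because more frequent, smaller events also contribute to the integral; whether the effect is larger or smaller in reality depends on how damages are distributed across return periods.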
Concerning the other components of CFR, uncertainties that have been observed to be significant locally, but that have not been quantified at broad scales, include, foremost, uncertainties in depth-damage functions, for which local studies have shown an effect on flood damages by up to factor 16 (Table 6). Related to this, uncertainties in asset exposure are also expected to be significant globally, given the prevalent large uncertainties in estimates of local GDP and fixed capital stock (Huizinga et al., 2017). With regards to hazard propagation, large uncertainties lie in levels, quality and associated failure modes of coastal defenses. The latter, for example, has been shown to affect flood damages by a factor of up to 4 (Wadey et al., 2012), which is much larger than the local biases reported in association to the bathtub approach (factors 0.5 to 2). Addressing these uncertainties requires the collection of large amounts of local data on observed depth-damage relationships and defense failures, and this would, similarly to what was described above for digital elevation data, also benefit from open community efforts. What do These Uncertainties Mean for the Interpretation of CFR Assessment? When interpreting our results, it is important to recognize that the information on global uncertainties available in the literature is limited, and as a result, we may have overestimated or underestimated some uncertainties. For some dimensions of uncertainties such as defense levels, defense failure and depth damage models, for example, only local estimates were available. Local estimates are generally higher than broadscale estimates, because in aggregation some of the local uncertainties are canceled out. Furthermore, local studies are generally conducted where uncertainties are expected to be particularly large, which can distort the broad-scale picture. But uncertainties may also be overestimated in broad-scale studies themselves, because many studies have only explored uncertainty ranges, ignoring that the likelihood of the "true" value is generally not uniformly distributed across these ranges. Another limitation inherent in our literature-based approach to CFR uncertainty as compared to a full global sensitivity analysis (see Section 2) is that we were not able to vary all uncertain variables simultaneously and hence may not have captured all interactions between uncertain variables. It turns out, however, that the most important interactions, specifically between different components of CFR, have been covered by the available studies. Hence, it is highly unlikely that conducting a global sensitivity analysis would change the results of this paper significantly. Arguably the major source of nonlinearity in CFR assessments comes from the non-linear distribution of exposure across elevation levels. This could lead to the situation that under low SLR (and hence lower ESL) variations in exposure (or vulnerability, adaptation, etc.) may only have a small effect on CFR, while under high SLR the effect would be large. Hence, it is important to assess interactions specifically between SLR and other uncertain variables, which is generally done by CFR studies because SLR is the main motivation of these studies in the first place. For example, the high sensitivity of damages to adaptation is found for the full range of plausible 21st century sea-level rise scenarios (Table 4). 
Nevertheless, uncertainties in broad-scale CFR assessments remain substantial and their results must be understood as indicative of the broad magnitude of flood risks. For the purpose of addressing some of the broad-scale economic questions listed in the Introduction (e.g., global cost of adaptation, risk transfer mechanisms, disaster relief funds, etc.), this is generally not problematic because outputs of CFR, such as expected annual flood damages, must be seen in relation to other economic variables such GDP, population or asset density, which all differ by several orders of magnitude between regions and countries. Furthermore, annual GDP growth rates and discount rates, which typically lie in the order of several percent, inflate or deflate economic variables faster on decadal time scales than sea-levels rise. Both of these points are, for example, illustrated in broad scale cost-benefit analysis of coastal protection measures (Diaz, 2016;Hinkel et al., 2018;Tiggeloven et al., 2020;Vousdoukas et al., 2020), which show order of magnitude differences between benefit-cost ratios around the world's coasts. The same holds true for expected annual coastal flood damages relative to national GDP (Hinkel et al., 2012;Tol, 2007). Against this background, a number of simplifications made in broad-scale CFR assessments can be justified or are even necessary in order to enable the large number of simulations necessary for some economic analysis. One example is the use of the bathtub model in economic optimization approaches, because the large number of simulations required for this can generally not be conducted with hydrodynamic flood models Tiggeloven et al., 2020). At the same time, there is a need to better explore the biases in both bathtub and reduced-complexity hydrodynamic models in conjunction with defense failure modes, which has hardly been investigated at broader scales. Another example of a feasible simplification is to ignore changes in ESL distribution due to changing wind and pressure fields under different emission scenarios (atmospheric forcing) and climate models (also called changes in storminess). Given that the contribution of these processes to changes in future ESL is about one order of magnitude smaller than the contribution of changes in mean sea-levels to future ESL (Table 7), substantial computation time can be saved by displacing extreme water levels upwards with mean sea-levels only, instead of also running tidesurge models forced with wind and pressure fields from climate models. For other purposes such as flood hazard mapping, or when zooming into a particular location, epistemic uncertainties in mean and extreme sea-level hazards, though smaller than socio-economic ones, become very relevant. Determining how safe coastal populations are in a given place requires a much higher accuracy in the water levels and elevation than the broad scale, more economically oriented studies. Finally, we note that we only considered CFR in terms of direct damages, ignoring the propagation of these through the economic and financial systems to cause indirect economic damages, which extend beyond the flooded area. For example, floods interrupt the production and flow of goods, decrease labor productivity of those affected, and land permanently submerged leads to a loss of land input into agricultural production. 
This, together with rising adaptation costs, increases the demand for construction services and capital, and hence public debt (Parrado et al., 2020), which in turn propagates through global economic (Bosello et al., 2012;Parrado et al., 2020;Schinko et al., 2020) and financial (Mandel et al., 2020) networks. Locally, and in the direct aftermath of disasters, indirect effects increase overall damages. For example, this has been estimated to account for 40% of the total damages in the case of Hurricane Katrina (Hallegatte, 2008). Longer term macroeconomic effects of disasters can be both positive and negative (Lazzaroni & van Bergeijk, 2014), and one major uncertainty thereby is the economic dynamics of recovery after the event (Koks et al., 2019). We do not consider these effects here, because their assessment relies on wider assumptions on the whole economy, which cannot be covered in this study. Conclusions This study provided a literature-based comparative assessment of the major sources of uncertainty and bias in broad-scale CFR assessment, considering the four components of: (i) mean and extreme sea-level hazards (including sea-level rise, tides, surges, waves, river run-off and their interactions); (ii) propagation of these hazards into the floodplain, including their interaction with natural and artificial flood barriers; (iii) exposure in terms of area, people and coastal assets threatened by these hazards; and (iv) vulnerability, mainly in terms of depth-damage functions of coastal assets. We find that there are substantial uncertainties in all dimensions, which highlights the interdisciplinary nature of CFR assessments and the need for dedicated disciplinary efforts to improve the assessment of each dimension. At the same time, there is the need to collectively work across disciplines and toward the needs of the global policy community, as the importance of some issues/uncertainties only become apparent when looking beyond disciplinary bounds. For example, while from a disciplinary perspective there are many uncertainties worth exploring in sea level, from a transdisciplinary perspective of supporting global policy, uncertainties in the order of 10 or 20 cm higher or lower sea-levels may be secondary to many of the other uncertainties raised in this paper. Globally, by far the largest bias in assessing future CFR is introduced by not considering human adaptation, which can lead to an overestimation of CFR in 2100 by up to factor 1300. But even when considering adaptation, uncertainties in how coastal societies will adapt to sea-level rise dominate, with a factor of up to 27, all other uncertainties. Other large uncertainties that have been quantified globally are associated with socio-economic development (factors 2.3-5.8), digital elevation data (factors 1.2-3.8), ice sheet models (factor 1.6-3.8) and greenhouse gas emissions (factors 1.6-2.1). Local uncertainties that stand out but have not been quantified globally, relate to depth-damage functions, defense failure mechanisms, surge and wave heights in areas affected by tropical cyclones (in particular for large return periods), as well as nearshore interactions between mean sea-levels, storm surges, tides and waves. We conclude that the advancement of broad-scale CFR assessment requires more comprehensive analysis of uncertainties, including considering uncertainties in methods and data, which have received little attention so far. 
In particular, adaptation should be considered more explicitly and community efforts to develop consistent adaptation scenarios to be tested in CFR models would be beneficial. The reduction of epistemic uncertainties in hazards requires continued incorporation of new developments in numerical and statistical modeling, specifically taking into account the non-linear interactions between mean sea-level rise, surges, tides and waves. Finally, reducing epistemic uncertainties in digital elevation, coastal protection levels and depth-damage functions requires open community-based efforts, in which many scholars work together in quantifying and validating data from multiple sources.
Mathematical Metaphysics: Modelling Determinism and Free-Will Along the Lines of Theological Compatibilism In this paper, an attempt is made to lay a systematic framework that helps answer a deeply perplexing philosophical question: “Can blind obedience to a set of immutable laws of nature pose a sufficient explanation for all phenomena in the world?” From the perspective of the human person, this question can be re-phrased as follows: “Do the events in a person’s life happen because they are pre-determined to do so, or is there some role for free-will to operate?” More succinctly stated, “Is the principle of determinism or the faculty for free-will responsible for the occurrence of an event?” An acceptable answer to these difficult questions must first require a better understanding of what precisely the terms determinism and free-will mean. In religion and mythology, the doctrine of determinism is embodied in an equivalent notion called destiny, which may be defined as a pre-ordained, inescapable, inevitable event. An accident, on the other hand, is a purely random and unpredictable event with neither intent nor design backing its occurrence. Religion holds that there are no such things as accidents and that every event is infused with divine purpose. Paradoxically, religion (Christianity, in particular) holds dear man’s capacity for free-will, which is in direct contradiction to the idea of destiny. How can free-will be truly free, if everything is already determined? Science too, is in a similar muddle on the problem of free-will, because it is still unsure whether the universe, the human mind included, runs on a deterministic or an indeterministic basis. After exploring the opinions gathered from diverse fragments of human knowledge (Philosophy, Physics, Neuroscience, Literature, Religion), two novel frameworks that are grounded in mathematical rigor are forwarded which fits both determinism and free-will into a single, indivisible philosophical paradigm. The Stance of Philosophy There are four major philosophical positions in the determinism-free-will debate. Two of them, namely Hard Determinism and Metaphysical Libertarianism, regard determinism and free-will to be mutually incompatible notions [1,2]. What this means is that, if determinism is true, then free-will is not possible and if free-will is possible, then determinism is not true. Philosophers who subscribe to either of these positions are called Soft Incompatibilists. The position of Hard Incompatibilism holds that free-will is irreconcilable with both determinism and indeterminism [3]. For this reason, philosophers belonging to this camp, are referred to as Pessimistic Incompatibilists. The position of from the knowledge of initial conditions. For instance, if the velocity and position of a moving particle is known at a particular instant, then using the laws of motion, its velocity and position can be computed for any other instant. According to this paradigm, there is no place for such a thing as free-will or deliberate self-made choices, but only an inevitable destiny or pre-determined outcome for all things. But by the end of the third decade of the 20 th century, Quantum Mechanics with its central tenet -the Uncertainty Principle demonstrated that there exists a fundamental limit to what can be known about the physical world. The principle states that it is impossible to possess an exact knowledge of a particle's position and velocity simultaneously. 
In fact, the more accurately you know one, the less accurately you know the other. Thus, a particle cannot be said to be here or there and moving with this or that velocity without invoking some degree of uncertainty in both those parameters as well. We can only speak in terms of the probability of finding a particle in this position and moving with that velocity, at a particular instant of time. In a sense, the particle can be said to exist everywhere at once, (omnipresence?), with a spatial distribution of probabilities that extends to infinity in all directions. However, the likelihood of finding it at one particular place may be greater than in all other places. The introduction of randomness into Physics (or rather, the departure of Physics from determinism) at such a fundamental level, has created a sort of portal for free-will to operate. Perhaps, the earliest source that can be found in the literature on free-will that can be said to subscribe to this view-point is the book Miracles: A Preliminary Study by CS Lewis [5]. Determinism in Classical Physics There are different versions of the doctrine of determinism, depending on the context in which it is used. The two versions, pertinent to this essay are Causal (or Physical) Determinism and Theological Determinism. Causal Determinism, holds that every event in the physical universe has a cause, which precedes it. The picture of a series of upstanding dominoes placed closely beside each other is often invoked to illustrate this principle. When the first domino is nudged forwards with the finger, the one infront of it gets knocked over, and that one goes and knocks over the next domino infront of it, and so on. Each domino falls, because the one preceding it, caused it to fall. This is true for all the dominos in the series except for the very first domino, which had fallen because some agency gave it a first push. If each falling domino is considered an event, then it is clear that there is an ordering in time for all the events, i.e., each event is preceded in time by another event. Each preceding event is called a cause and each proceeding event is called an effect. Thus, it can be said that each effect serves as a cause for the subsequent effect. The entire series of dominos represents a causal chain of events, and it is this chain that forms the underlying basis of determinism. A key feature of the causal chain is that no effect is without its cause, except for the very first cause, which must necessarily be causeless if it is to initiate the chain. But can there exist any such thing as an uncaused cause? For an event to have no cause would mean that it has no beginning in time either, because a beginning would necessitate another cause to bring it into existence. The only conjectured entity that fits this dual description of having no beginning in time and no cause to bring it into existence, is God. Hence, God is the first cause or the uncaused cause or the causeless cause. The ancient and medieval philosophers, like Aristotle and St. Aquinas amongst many others, subscribe to this viewpoint [6]. We can conclude that for the doctrine of determinism to be true in a universe of a finite age, the existence of God must necessarily also be true. Indeterminism in Quantum Physics If determinism is false, then indeterminism is true, i.e., a cause need not precede an effect. 
But this would necessitate the existence of uncaused causes, which are events that can spring into or out of existence in the absence of any prior warranting conditions. If the creation of the universe is one such uncaused cause, then the existence of a creator God is unnecessary. One well known example of a physical event without a preceding cause (i.e. an uncaused cause) is the phenomenon of quantum tunneling, where an electron can effortlessly burrow through an energy barrier that Classical Physics prohibits. Quantum Mechanics furnishes a beautiful mathematical explanation for this and other similar bizarre subatomic-level phenomena that is grounded in Probability and Statistics. However, a precise causal explanation or a material mechanism eludes the theory. The physicist David Bohm gives an elaborate account on this subject in his book Causality and Chance in Modern Physics [7]. The dominos metaphor described above may be adapted in the context of indeterminism as follows: an upright domino can spontaneously fall to the ground in the absence of any physical initiating event that precedes it, like the impact from another falling domino or air currents or radiation pressure etc. Bridging the Determinism-Indeterminism Divide If, however, it is insisted that a cause be attributed to the physically uncaused fall of the domino, then it must be necessary to invoke a non-physical, transcendent agent permeating all matter and space that willed the domino's fall. By postulating the existence of such an agent, the doctrine of determinism can be preserved, since every event will then have a preceding cause, including those events that are not caused by a physical agency. Moreover, the doctrine of indeterminism is also preserved, since it is impossible to possess foreknowledge of a transcendent agent's will. And in the absence of complete knowledge, it is necessary to resort to the mathematics of randomness (Probability Theory) in order to make a reasonably reliable prediction of the occurrence of events. So, if such a transcendent agent possessing free-will exists, then both determinism and indeterminism are true. The Stance of Neuroscience Mainstream neuroscientists assert that it is a stream of electrophysiological and biochemical events in the brain that finally culminates in an individual making a conscious decision. Experiments have shown some of these events to be identifiable in advance of the subject becoming aware of his own choice. In the case of the famous EEG experiments of Benjamin Libet, the event referred to, is an electrical signal called the Readiness Potential [8]. These experiments seem to conclude that the brain decides before the subject consciously decides, which would mean there can be no such thing as free-will. However, it should be noted that the methodologies adopted in these no free will studies are still highly controversial. It's also uncertain whether their interpretation can extend to more general contexts outside the controlled setting of a laboratory and into daily life. The plausibility argument for free-will has suffered much disregard in neuroscience, owing to its materialist foundation. According to the mainstream, the mind should be viewed as nothing more than a natural phenomenon, an outcome of a complex cascade of neural activity in the brain, with molecules and membranes playing the role of prime actors. 
The neuroscientist, guided by this rule of thumb, sees no reason to invoke any agency outside the contents of the cranium to explain the working of the mind. What is referred to as the ghost in the machine hypothesis (a. k. a. Cartesian Dualism) is simply rejected as unnecessary. However, a turn in the tide of opinion has begun in recent years that questions the sufficiency of neural activity per se for human cognition. Quantum physics with its central doctrine of indeterminism may have something substantial to say about brain processes and is likely to trigger a revolution in our understanding of the mind, in the near future. The works of Roger Penrose and Stuart Hameroff offer a potential first step forward in this direction [9,10]. One predictable consequence of these efforts will be the elevation of free-will from its current status as a mere illusion to an established fact. Determinism in Neuroscience As stated above, neuroscience is founded on the tenet of material monism, and neuroscientists adopt a strictly deterministic approach to their discipline. They promulgate that it is the biophysical-biochemical processes, and the wiring and firing of neurons in the brain, that is responsible for the decision-making process. Hence, conscious decision making is said to always lag behind subconscious information processing in the brain. The causal chain for the decision-making process can be sequenced as follows: Information → Brain → Conscious mind makes a choice → Brain → Body enacts decision. The conscious mind (the place or thing in which one would expect the seat of free-will to reside) makes a choice only after some amount of prior information processing has occurred in the brain. Indeterminism in Neuroscience There is however, atleast one scenario wherein the above causal chain may be missing an important element. It is a common human experience, that when the same environmental stimulus is presented to the sensory faculties on two different occasions, the same brain may end up making different choices. But if the brain operates on a deterministic model, it is expected to yield the same outcome every time the same stimulus is presented. This has to mean that the multitude of internal processes going on in the brain, compel it to operate to some degree in an indeterministic fashion. It could even be that the brain which is necessary for cognition to occur, may not be wholly sufficient for decision making. Also, there are two assumptions the deterministic paradigm makes, neither of which need be true. The first is that, the conscious mind can assert no influence of its own in the decision-making process. It acts strictly as just one of the relay stations during information processing in the brain. The second is that, the only portal of entry for information from the outer world into the conscious mind is the brain. Both these assumptions can be done away with by postulating a faculty that operates independently of the brain and which is itself not subject to the causal argument. That is, it can cause events to occur in the brain, but is not itself caused by events occurring in the brain. A similar idea has been previously asserted by the neurophysiologist Sir John Eccles and the philosopher of science Karl Popper in their Interaction Dualism theory [11]. It is precisely this faculty that can be referred to as free-will. However, the seat of origination of free-will, if it exists, is still a mystery. 
Neither the scientist nor the philosopher dares to claim knowledge of where it stems from. To the theologian, however, free-will is an inherent trait of God. By virtue of the scripturally inspired tenet that man is made in the image or likeness of God, free-will is one of man's imbibed traits. Free-will, when viewed in this light, shares the same status as that of any other given in physics (e.g. charge, mass, spin etc.), whose source of origin is unknown and is simply taken for granted. The philosopher David Chalmers holds a similar view on consciousness [12]. Metaphysical Libertarianism The above non-causal argument made for free-will favors the philosophical position known as Metaphysical Libertarianism. According to this doctrine, free-will can over-ride physical causality. Plausible mechanisms have been proposed to describe how this may happen, in the form of Two-Stage Models [13]. A Three-Stage Model of free-will is proposed here, that bears a close semblance to, but is subtly different from the Two-Stage Models. In the first stage, the alternative possibilities for action are generated in the conscious mind, indeterministically. In the second stage, the agent's will makes an evaluation of the best single action from amongst those possibilities, again indeterministically. Finally, in the third stage, the choice made by the agent's will is conveyed to the brain by influencing various internal processes, (which operate deterministically), leading to the performance of action. To visualize the Three-Stage Model with the help of an analogy, think of the conscious mind as a small green circle in the plane of this paper (see Figure 1). Then represent the alternative possibilities for action by equally spaced multiple black arrows emerging from the green circle and pointing in all possible directions. Free-will can be represented by a single red arrow that visits each potential action in the same manner as the second hand of a clock visits each minute on its face. Once an agent's free-will has made a specific choice of action, the red arrow comes to rest on top of the corresponding black arrow, consequent to which the green circle gets spatially displaced in that direction. The displacement of the green circle to its new position in the plane of the paper, represents the internal processes of the brain operating deterministically to execute the specific choice of action. The Stance of Literature The great determinism-free-will debate can trace its origins, to the mythological literature of the ancient Greeks. Sophocles' play "Oedipus: King of Thebes" best exemplifies how both these opposed perspectives can operate in collusion [14]. The story goes as follows: Oedipus was at birth prophesized, to take the throne after killing his father (the King of Thebes) and marrying his mother (the Queen of Thebes). This oracle was directly conveyed to the King, who then made arrangements for baby Oedipus to be killed. But despite his extreme efforts to avert potential disaster, all that was foretold eventually comes to pass. At the onset of the drama, it appears as if free-will is fully operational, with independent choices being consistently made by different players. But very gradually it becomes clear that fate is subtly at work from start to finish and not free-will. In fact, free-will seems to be reduced to a time constrained mirage, masking the truth about a much deeper layer of reality. Every decision made throughout the play only draws the predicted disaster closer, not further away. 
The story cleverly demonstrates how even the choices we think we make freely and independently without coercion or counsel, are predetermined to direct us towards some inevitable destiny or fate that is assigned to us beforehand. The Stance of Religion The theology underlying the Judeo-Christian Faith is chosen as a reference for what religion has to say on the subject. No doubt, other religious traditions may well have their own takes, but they shall not be explored here. Three types of biblical accounts are analyzed in turn, very briefly. The first concerning the lives of particular individuals, the second regarding the exodus of the Hebrew people from the land of Egypt where they were held captive for 430 years, and the third, about the end-times prophecies. Finally, a theological proposal for the origin of free-will is made. The Lives of Particular Individuals i. The Lives of Adam and Eve The first and second chapters of the biblical book of Genesis describes the Fall of Man. Adam and Eve -the first man and woman -were commissioned by God to look after the newly created Garden of Eden. They were told that they could eat the fruit from any tree in the Garden except from the tree of the knowledge of good and evil. God further warned them that the day they eat from it, they would surely die. The story symbolizes the Creator's allowance for free-will to operate and His non-interference with the decision-making process. They were granted the freedom to choose for themselves, to either eat or not eat of the forbidden fruit. They could either attune their free-will in perfect alignment with that of God's will or choose to do otherwise. Besides an allegorical reference to free-will, the Garden of Eden story also serves to symbolize two peculiarities of the human condition. The first is man's innate desire to be independent of God and the second is to achieve a God like stature by becoming like Him. Independence from God would mean that man no longer has to worry about whether his actions please or displease God. The only person he need please is himself. Man, in that sense would become a source of happiness unto himself. The inclusion of God into the picture would only detract from this. To be like God would mean to know all, be all and do all without the fear of reprobation. These qualities are exclusive to God's nature and no thought can be more enticing for man than the possibility of outshining his own Creator. Adam and Eve decided to cave in to these innate desires and exercise their free-will in direct disobedience to God's warning. The consequence was just as promised, with death entering the world. History is one long account of how mankind has ever since been fending forces, both natural and otherwise, that threaten to take life. ii. The Life of Abraham The Story of Abraham, in the Bible succinctly shows how certain pre-assigned events that are destined to happen in the future, will come to pass no matter how unlikely they may seem to be in the present. When God first called upon Abraham while he was living in Ur, Mesopotamia, he was 75 years old. The Lord instructed him to leave his father's home and go to the land that would be shown to him. God then blessed him and promised to make a great nation out of his offspring. He later promised to make him the father of not one, but a multitude of nations and that their numbers would be as numerous as the stars in the sky and the sands in the seashore. This promise was made when Abraham and his wife Sarah were well past child bearing age. 
And by all human standards it would seem a ridiculous thing to hope for. But Abraham believed and had faith in what God could do. Bible Scholars place the approximate year on the world timeline when the couple lived, to be around 2000 BC. More than 4000 years later, Abraham is today celebrated as the Grand Patriarch of three of the great religions of the world, namely Judaism, Christianity and Islam. Their adherents, are spread out over the six inhabited continents and collectively constitute more than half of the world's population. This would number to approximately three and a half billion peoples by current estimates. Despite the myriad differences in the faith and practices of the three religions and the frequent clashes that have kept cropping up between each sect through history, they all nonetheless zealously revere and refer to him in their respective traditions as Father Abraham. iii. The Life of Joseph Jacob, the grandson of Abraham, had twelve sons. Among them was Joseph, the eleventh child, who was Jacob's personal favorite. Whenever the older brothers were up to some mischief, Joseph would promptly report it to his father, which earned their collective disdain. Jacob made no effort to conceal his favoritism towards Joseph and on one occasion, he even got his whole family together to witness him gifting Joseph with a special robe. This event aroused much jealousy and hatred amongst the brothers an added to all of this, whenever Joseph had a peculiar dream at night, he would promptly describe it in vivid detail to his brothers. In one such dream, he saw himself with all his brothers tying bundles of grain, and all of a sudden, his bundle stands up, while all the other bundles gather around and bow down low before his. In another dream, he saw the sun, moon and eleven stars bowing low before him. The meaning of these dreams was plain and clear to his brothers. It absolutely infuriated them that the second youngest in the family should have the audacity to see himself seated in a position of authority above them all. What began as jealousy gave way to raw malice and they plotted to have Joseph killed. They would have succeeded in their plans had it not been for the intervention of Judah (one of the eleven), who suggested to sell him for a price, as a slave instead. His life was thus spared, at the cost of a demotion in station, from the comfort of living by his father's side to the mercy of a slave trader's whip. Though marred by a long string of ups and downs including a false conviction of rape and serving time in a dungeon, Joseph trusted in God through all his tribulations, public humiliation and personal shame. His trusting was not in vain. When least expected, he gets miraculously appointed as Egypt's Prime Minister, second only to Pharaoh in authority. Joseph was only 17 years old when sold into slavery and 30 by the time he was appointed to high office. By his 38th year, all the events predicted in his boyhood dreams came to pass, including the reunion with his family after a separation of nearly 20 years. He was no longer the former young boy who could be easily bullied and pushed around by his older brothers, but a man of immense power and influence, to be feared and respected by all. iv. The Life of Gideon Gideon was a very ordinary man belonging to the tribe of Manasseh. He possessed none of the qualities that are often seen in men of great leadership, like bravery or determination. 
Yet it is interesting to note that God chose him to rescue Israel from their enemies -the Midianites. These warring peoples would frequently plunder and pillage the Israelites, leaving them to starve or run to the hills for refuge. It was during these hard times that an angel of the Lord appeared before Gideon and addressed him 'O, Mighty man of valor'. Ironically, it was while he was busy hiding away some leftover grain in his father's winepress, so that it wouldn't be looted. The angel then commissions him to lead Israel to victory against the Midianites. Feeling understandably a little disoriented by both the encounter and the commission, he takes a quick stock of his immediate situation and asks the angel how the weakest member of the weakest family in the tribe of Manasseh was going to carry out such an impossible task. The angel reassures him, that God would be by his side all the way. As events unfold, the timid and fearful Gideon transforms into a mighty military general who finally defeats Midian completely. There was peace in the land for the rest of Gideon's 40-year reign as Judge over Israel. v. The Life of Jesus Jesus is perhaps the most enigmatic figure in all of history. No man can be said to have had a greater impact on the world than this Jewish son of a carpenter, from Nazareth. Brilliant teacher, passionate preacher, astute philosopher, pragmatic spiritualist, sublime moralist -these are just a few of the many ways to understand him, on the surface at least. While much about his life is shrouded in mystery, like the unaccounted 18-year interval between the ages 12 to 30 years old, a lot is known about the circumstances surrounding his birth and also the 3-year interval spanning the ages 30 to 33 years old. It appears that everything about the man was predestined. He had a mark on his head from the day that he was born to fulfill some great purpose. The greatest purpose, as was foretold by the Prophets of the Old Testament, was to suffer and die as an atonement for the sins of mankind, followed by resurrection three days later. But to get to that point, he had to pass through several situations wherein free-will had an undeniable role to play. A few of these instances include: a. The devil tempting him three times in the wilderness and he does not yield; b. When a crowd of followers forcefully try to make him King, he just slips by; c. In the garden of Gethsemane, he prays that 'the cup be passed', but immediately thereafter confesses that it is the Father's will and not his own will that was important; d. When the soldiers come to arrest him at midnight, he does not offer the slightest resistance; e. When the court officials question and jeer at him, he remains silent; f. While yet on the cross in agonizing pain, he chooses not to beckon for his Father's angels to rescue him. g. In the light of these instances, it can be said that even Jesus had free-will, just like Adam and Eve did back in the Garden of Eden. But unlike them, he was sure to align his will at all points during his brief life, along that of his Father's will. The Exodus of the Children of Israel The period following the settlement of Jacob's family in Egypt and Joseph's death, was marked by a phase of rapid population growth of their community. From an initial strength of seventy, they grew to millions and a new Pharaoh who did not know Joseph or what he had done for Egypt, became fearful of a possible hostile takeover. He began a reign of oppression and terror, turning the Hebrew people into slaves. 
God had foretold this event to Abraham in a dream, that his descendants would suffer much at the hands of a foreign nation and that after 400 years of slavery they would return to Canaan. Moses was the man to set them free from Pharaoh's bondage and later Joshua led them into the Promised Land. The interesting thing to note about the Exodus was its duration, which spanned about 40 years. The journey by foot from Egypt to Canaan should have taken, by most estimates, much less than a month to cover. Bible scholars agree that the reason for its protraction was their own obstinacy and tendency to use their free-will to grumble and complain against God and actively pursue the things He specifically told them not to. End-Times Prophecies Concerning the World The book of Genesis (the first book of the Bible), describes the grim fall of man and his permanent banishment from a paradise like state of existence in the garden of Eden. The book of Revelation (the last book of the Bible), speaks of a time of bliss that is yet to come, wherein the lion shall lie down beside the lamb and there shall be no more darkness or sickness or pain or tears or death. In other words, a Paradise once lost, restored again. Looking at the current state of the world and all of the past states it has gone through, one can't help but wonder how such an age can ever come to be. However, the Probabilistic Calculus that is developed in §3 and §4 of this paper, shows that the only ingredient needed is sufficient time and the world will make the transition. A Theological Proposal for the Origin of Free-Will In the Biblical Story of Creation, God says "Let us make man in our Image" (Genesis 1:26). Another translation puts it this way, "Let us make man in our likeness". This is a mysterious verse. How can man be anything like God? What qualities do God and man share in common? God is omnipotent, omnipresent and omniscient. Human beings clearly have none of these qualities. So, what precisely did God mean by creating man in His likeness? It is proposed here, that there are two special gifts granted exclusively to man by his Creator, which make him just like Him. First, is the capacity to create things and second, is the capacity for free-will. The focus of this essay shall be on the latter. Theological Determinism This doctrine holds that every event in the world is preordained or predestined to happen by virtue of a transcendent agent's will or omniscience. This transcendent agent is God. Bridging Theological Determinism and Metaphysical Libertarianism If theological determinism is true, how then can free-will be possible? The explanation given below reconciles theological determinism (God's Omniscience) with an agent's free will and forms the basis of a novel proposal for Theological Compatibilism. God is indeed omniscient in all matters, particularly in regards to the gamut of all possibilities that an agent's free-will can choose from. However, He does not interfere with the choices made by the agent. That is, He does not prevent or coerce an agent to adopt any particular course of action. Rather, the agent is free to choose any course of action he pleases. Having said this, it should also be understood that there are certain events in an agent's existence that are predetermined to happen and cannot be evaded. These are called Determined Events. Every event that is not a Determined Event, is a Random Event. 
In the space of all possible events, a given course of actions leads an agent from one Determined Event to the next Determined Event, via a series of causally linked Random Events. The course of actions forms the trajectory of the Agent's existence through the space of all possible events. The multitude of different trajectories joining any two successive Determined Events in that space, can be thought of as an index for the Will's freedom. For the purpose of analogy, consider the game of tic tac toe or chess involving two players. God being omniscient, possesses the full knowledge of every possible game that can be played between them, which is a finite number for both games (although an extremely large number in the case of chess). The final outcome for any game will be one player wins, or both players draw. If each possible game, defined as a sequence of chosen moves, is considered a trajectory through the space of all possible sequence of moves, then there will be three bundles of trajectories that converge on three possible Determined Events: (i) player-1 wins, (ii) player-2 wins, (iii) draw. Though God can see the end from the beginning, He does not influence any player to choose a particular game plan to follow, but instead allows their free-will to operate and thus lead to a destined outcome. The above description somewhat resembles the Principle of Quantum Superposition, where different potentialities can co-exist in superposition, until the collapse of the wavefunction occurs by an act of observation. God sees the entire universe in a state of one big superposition of all possibilities. He also has full knowledge of every possible final outcome for the collapse of the wavefunction. However, it is not His act of observation that triggers the wavefunction to collapse, but our use of free-will. Summary of Thesis Now that the different views of various disciplines have been explored and the principal (Compatibilist) thesis of this essay presented, the underlying motivation may be stated: "It is logically erroneous or at least unnecessary to settle for the stance that the twin notions of determinism and free-will are incompatible." Using the Mathematical Theory of Probability, it is rigorously demonstrated how determinism and free-will can be meshed together to form inseparable parts of a whole, analogous to the two sides of a coin that make up the coin. In other words, it aims for and successfully accomplishes the task of performing a synthesis of the two opposed perspectives into a single, indivisible philosophical paradigm. Operational Definitions 1. Agent: One that is endowed with the power of free-will. Field: An agent's existence with all its potentialities is represented by a 2D spatial plane. 3. Event: A geometric point in the field. 4. Determined Event: An event that is pre-assigned and inescapable. 5. Random Event: An event that depends on free-will and is evitable. 6. Free-will: The total number of possibilities that emerge from an event. 7. Action: A freely made decision that leads an agent to the next event. 1. Step: Each action taken is represented by a short line of a fixed length with an arrow head marking the direction of progress through the field. 2. Station: An event in the field to which each step leads an agent. There are three types of stations in the field, namely a Start station, an End station and an Intermediary station. The names of these stations signify their relative positions in the field. Postulates 1. 
For every event in the field, the available potential actions from which to choose carry equal probability a priori. 2. Each step taken in the field is independent of the preceding step.

Theory

Let A be a start station and B be an end station in the field F (see Figure 2), which represent two determined events. At each intermediary station in the field (1, 2, 3, …, N-1), starting from station A, a step is drawn when an action is taken. These stations represent random events. Figure 2. An example of a random walk journey from start station A to end station B in the field F.

The equal-probability-a-priori postulate implies that a simple reciprocal of free-will at each event yields the probability of choosing any particular action at that point in the field:

p_A = 1/f_A, p_1 = 1/f_1, p_2 = 1/f_2, …, p_(N-1) = 1/f_(N-1)

where f_A, f_1, f_2, …, f_(N-1) denote the free-wills at stations A, 1, 2, 3, …, N-1 in the field F, respectively. In the Theory of Probability, the Law of Multiplication holds that the probability of the joint occurrence of independent events is equal to the product of the individual probabilities of those events. Therefore, if we apply this law to our current context after invoking the second postulate (each step taken in the field is independent of the preceding step), it logically follows that the probability of a particular trajectory in the field is given by:

P(A → B) = p_A · p_1 · p_2 · … · p_(N-1) = 1 / (f_A · f_1 · f_2 · … · f_(N-1))

This is the general equation defining the probability of a given path through the field F. For the purpose of illustration, say that the free-will at each point in the field is fixed and equal to 4, which can be pictorially represented as the directional options up, down, right and left (see Figure 3). Then, since f_A = f_1 = … = f_(N-1) = 4, the above equation for N steps collapses down to:

P(A → B) = (1/4)^N

This means that the probability of a path diminishes as the inverse power of the number of steps necessary to move from A to B. A corollary is that the probability of a given path tends to zero (i.e. an impossible path) as the number of steps needed to move from A to B in the field increases indefinitely. Formally stated,

lim_(N→∞) (1/4)^N = 0

This is, however, a limiting case and can be ignored, since it would require an indefinite amount of time to cover an indefinite number of steps. For any two arbitrary points A and B within the field and a prescribed step size, only a finite number of steps would be necessary to make the journey from A to B within a finite amount of time. It can be readily shown that this probabilistic framework can accommodate both determinism and free-will in a coherent philosophical system, wherein both contraries operate in collusion.

Concretizing the Random Walk Model

Let us say that the agent with free-will is a man. Then his life is represented by the field F, and every choice he makes is represented by a step that moves him from one station to the next. If events A and B are the start and end stations of his life (not necessarily representing his birth and death, but rather any two conspicuous events in between), then determinism mandates that his trajectory must pass through B within his lifespan. And free-will mandates that there are an infinite number of possible step-wise routes that can be chosen by the agent to move from A to B. In other words, he is free to choose, from amongst the infinity of possibilities, a particular path. It should be noted that this same luxury of choice does not extend to the end points A and B, which are pre-determined, pre-assigned, unchangeable, inevitable, inescapable events.
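To see how the Random Walk Model behaves numerically, the short sketch below simulates a single trajectory on a square grid with a fixed free-will of 4 (up, down, right, left) and evaluates the probability (1/4)^N of the path actually taken. The grid representation, the function names, and the step cap are illustrative assumptions rather than part of the original model.

```python
import random

def simulate_trajectory(start, end, max_steps=10_000, seed=None):
    """Simulate one random-walk trajectory from start station A to end station B
    on a 2D integer grid. At every station the agent chooses uniformly among
    four equally likely actions (up, down, right, left). The walk stops when it
    reaches `end` or when `max_steps` is exceeded (the path may not reach B)."""
    rng = random.Random(seed)
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left
    position, path = start, [start]
    while position != end and len(path) - 1 < max_steps:
        dx, dy = rng.choice(moves)
        position = (position[0] + dx, position[1] + dy)
        path.append(position)
    return path

def path_probability(n_steps, free_will=4):
    """P(A -> B) = (1/free_will)**n_steps when free-will is fixed at every station."""
    return (1 / free_will) ** n_steps

path = simulate_trajectory(start=(0, 0), end=(3, 2), seed=42)
print(f"steps taken: {len(path) - 1}")
print(f"probability of this exact path: {path_probability(len(path) - 1):.3e}")
```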
In the language of non-linear dynamical systems, the points A and B can be said to be points of unstable equilibrium (repulsion) and stable equilibrium (attraction), respectively. In Table 1, the events A and B for each of the examples listed in §1.7 are identified.

Augmenting the Random Walk Model

The Random Walk Model can be augmented in two aspects. The first concerns the temporal aspect of an agent's progression through the Field, which is dealt with only implicitly in the original model. In order to make the time factor explicit, the 2D spatial Field F must be extended to a 3D space-time Field by introducing a perpendicular time component into the decision-making process (see Figure 4). The start station A and end station B would then be defined by three coordinates each, two of space and one of time. Consequently, the trajectory of an agent would resemble an ascending, staircase-like trace. The projection of this path onto the XY plane is equivalent to the previous 2D treatment depicted in Figures 2 & 3, provided that the Actions are taken in equal time intervals. Since time always progresses in the forward direction (pointing upwards in the diagram), the Steps at each point in the 3D Field F are bounded to the circular base of a right cone with semi-vertical angle θ < 90° and long axis parallel to the Time axis (see Figures 5 & 6). Note that the projection of point B onto the XY-plane is fixed. However, the position of point B in the XYT-space is variable, depending on how soon the Agent makes the transit from A to B.

The second aspect concerns the rules for mapping an Agent's Action to the precise direction and size of a Step, which is again not made explicitly clear in the original model. The rule for direction can be established if, at any given point in the Field, there is a set of Actions directed towards Event B (call them destinophilic actions) and a set of Actions directed away from Event B (call them destinophobic actions). From a religious perspective, the former actions are those in alignment with God's will and the latter actions are those in opposition to God's will. By aligning free-will in perfect accord with God's will, the shortest trajectory towards B can be traversed (see Figures 7 & 8).

Finally, the size of a Step can be determined if we assume the speed of transit to be the same along any chosen trajectory. Let Δl units of distance be the step size, taken in Δt units of time, and say that it takes τ units of time to traverse the entire length of the chosen trajectory A → B, with a total of n steps. If v is the uniform speed of transit, then we can say:

v = Δl/Δt, and since n·Δt = τ, it follows that Δl = v·τ/n

From this, we see that the step size depends on: i. the total number (n) of Actions taken to make the journey from A to B; ii. the rapidity (v) of transit from A to B; iii. the time of transit (τ) from A to B. While v is arbitrary, n and τ can be known only post hoc, once the journey A → B has been completed. Also note, from Figure 6, that the semi-vertical angle of the Decision Cone is equal to the inverse tangent of the speed of transit:

θ = tan⁻¹(v)  (2)

Remarks

Take v = 1; then θ = 45° and Δl = Δt = τ/n. The computational parameters n and τ can then be used to tailor the Random Walk Model to suit the trajectory of any particular Agent.

Operational Definitions

1. Agent: One that is endowed with the power of free-will. 2. Field: An agent's existence with all its potentialities is represented by a 2D spatial plane. 3. Event: A geometric point in the field.
4. Determined Event: An event through which the trajectory of an agent's existence must pass. 5. Random Event: An event through which the trajectory of an agent's existence may pass. 6. Free-will: The freedom to choose a particular trajectory.

Propositions

1. Every random event in the field F is associated with a finite probability of finding the agent there, which can be computed using a special Probability Function Formula. 2. The two determined events A and B are associated with a minimum probability (zero) and a maximum probability (unit), respectively. 3. There exists a gradient of the probability function for all random events in the field F. The gradient of the probability function is zero at both the determined events A and B. 4. The gradient of the Probability Function is called the Probability Vector Function. The divergence of this Vector Function is positive at the Determined Event A and negative at the Determined Event B. That is, the former acts as a Source and the latter acts as a Sink in the Probability Vector Field.

Postulate

The Agent courses a trajectory from A to B through the Field F along the direction of the gradient of the probability function at each point in F. The magnitude of the probability function increases from zero at A to unit at B.

Theory

The Probability Function P(X, Y) that satisfies all the above definitions, propositions and the postulate is shown in Figure 9. Applying the gradient operator to P(X, Y), the gradient ∇P(X, Y) vanishes at both points (1/√2, 0) and (−1/√2, 0); that is, ∇P(−1/√2, 0) = 0 = ∇P(1/√2, 0). Thus, Proposition-3 is satisfied. By the first part of Proposition-4, the Probability Vector Function is defined as the gradient of the Probability (Scalar) Function P(X, Y); that is, V(X, Y) = ∇P(X, Y). The quiver plot of the Probability Vector Function is shown in Figure 10, and the expression for its divergence follows by applying the divergence operator to V(X, Y).

Incorporating Time into the Probabilistic Vector Field Model

The Probabilistic Vector Field Model developed so far does not make explicit the Agent's time of transit through the Field F. In order to make this inclusion, it is first necessary to state a theorem from Vector Calculus: "A vector field is said to be conservative if there exists a scalar field such that the vector field can be expressed as the gradient of that scalar field." The principal property of such a conservative vector field is that its line integral between two extreme points is the same regardless of the chosen path of integration, i.e. its line integral is Path Independent and is equal to the difference in the values of the scalar field at the two extreme points. By Propositions 3 & 4, the vector field V and the scalar field P(X, Y) satisfy this theorem. Hence, the line integral of V between the points A and B can be written as follows:

∫(A→B) V · dl = P(B) − P(A) = 1 − 0 = 1

Proposition-5: The Time of Transit of the Agent between any two points C and D in the Field F is directly proportional to the line integral of V between those points.

Proposition-6: The Time of Transit of the Agent between any two points C and D in the Field F is directly proportional to the difference in the Probabilities of finding the Agent at those points.

Proposition-7: The instantaneous time rate of change of Probability is a constant for every possible trajectory between A and B, as the Agent traverses the Field F.
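The link between the conservative-field theorem and Propositions 5–7 can be written compactly as below. The proportionality constant is chosen here, as an assumption, so that the complete transit from A to B takes exactly the appointed time τ.

```latex
% Gradient theorem for the conservative field V = grad P, and the transit-time
% relation implied by Propositions 5-7 (constant chosen so that A -> B takes tau).
\[
  \int_{C}^{D} \vec{V}\cdot d\vec{l} \;=\; P(D) - P(C)
  \qquad \text{(path independent)}
\]
\[
  t_{C\to D} \;=\; \tau\,\bigl[P(D) - P(C)\bigr],
  \qquad
  \frac{dP}{dt} \;=\; \frac{1}{\tau} \;=\; \text{const.},
  \qquad
  t_{A\to B} \;=\; \tau\,[\,1 - 0\,] \;=\; \tau .
\]
```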
Remarks

Clearly, Propositions 5, 6 & 7 are equivalent statements that validate each other, since the same result can be drawn from each of them independently:

t = τ · P(X, Y)

The above formula defines the instant of time t at which the Agent can be found at a particular point (X, Y) along the chosen trajectory A → B in the 2D Field F. Or, alternatively, it defines the space-time point (X, Y, t) of the Agent in the 3D field F. If B represents a determined event that is to occur at a particular time τ, then regardless of which pathway the Agent chooses by virtue of free-will, he will always arrive at B at the appointed time τ. Thus, in the Probabilistic Vector Field Model, both the position of the determined event B in the field F and the time of the Agent's arrival there, τ, are fixed. But in the Random Walk Model, only the position of B in the field F is fixed, while the time of arrival τ is variable.

Final Conclusion

In quintessence, a merger between determinism and free-will is mathematically plausible, upon accepting the proposition that "it's the paths that one can choose, the end points are already chosen". That is, though a person can make free and independent choices in life, the final consequence of the series of choices made is pre-determined. The question which naturally follows from this is: "Does God make use of a similar 'Probabilistic Calculus for the World' by designating some events as determined and others random?" Listed below are a few verses from the New International Version of the Bible that answer this question in the affirmative and provide the philosophical impetus for the two mathematical models forwarded herein, namely the Random Walk Model and the Probabilistic Vector Field Model.

There is a time for everything, and a season for every activity under the heavens (Ecclesiastes 3:1). He (God) has made everything beautiful in its time. He has also set eternity in the human heart; yet no one can fathom what God has done from beginning to end (Ecclesiastes 3:11). A person's days are determined. You (God) have decreed the number of his months and have set limits he cannot exceed (Job 14:5). My times are in your hands (Psalm 31:15). I (God) choose the appointed time (Psalm 75:2). For the revelation awaits an appointed time; it speaks of the end and will not prove false. Though it linger, wait for it; it will certainly come and will not delay (Habakkuk 2:3). From one man, He (God) made all the nations, that they should inhabit the whole earth; and He marked out their appointed times in history and the boundaries of their lands (Acts 17:26). It is not for you to know the times or dates the Father has set by His own authority (Acts 1:7).

6. Mathematical Appendix

Derivation of the Expression for the Probability Function P(X, Y)

Consider the two-variable function Z = K(X, Y), whose plot is shown in Figure 12. If a hypothetical ball were allowed to roll down from the top of the mountain (labelled A), it would end up at the bottom of the valley (labelled B) after coursing a certain trajectory (drawn as a series of pink arrows). No matter how the ball is released from A, it will always end up at B. The function K(X, Y) would therefore form the ideal candidate to model both Determinism and Free-Will. The fixed points A and B can represent Determined Events and the variable trajectories from A to B can represent Free-Will.
Also, the Theory of Probability can be introduced by designating zero probability (minimum) to point A and unit probability (maximum) to point B for finding the Agent at these points. Consequently, every other point will be associated with an intermediate probability lying between zero and unit. In order to derive P(X, Y) (the probability of finding the Agent at a point (X, Y)) from K(X, Y), first the minimum and maximum values of the latter function must be calculated. Then the Normalization Formula is used to constrain K(X, Y) within the range 0 to 1. The Discriminant Function is evaluated for the three different critical points and is tabulated in Table 2. Now, using the Normalization Formula to constrain K(X, Y) within the range 0 to 1,
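The normalization step referred to here is presumably a min–max rescaling of K(X, Y) by its extreme values over the field. Since the explicit formula is not given above, the following is only an assumed generic form, oriented so that P = 0 at the mountain top A and P = 1 at the valley bottom B.

```latex
% Assumed min-max normalization; K_min and K_max are the extreme values of K
% over the field, with K(A) = K_max (mountain top) and K(B) = K_min (valley).
\[
  P(X, Y) \;=\; \frac{K_{\max} - K(X, Y)}{K_{\max} - K_{\min}},
  \qquad 0 \le P(X, Y) \le 1,\quad P(A) = 0,\ P(B) = 1 .
\]
```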
VARIATIONS OF RED COLOR COVERAGE OF CULTURED NEON TETRA (Paracheirodon innesi) FOR BREEDING IMPROVEMENT STRATEGIES Red color coverage (RCC) is a commercial trait developed and refined to improve the appearance of many ornamental fish commodities. In neon tetra, the status of variation of RCC is not yet investigated or reported. This study aimed to analyze the RCC variation of cultured neon tetra as a basis for breeding strategies. A total of 900 neon tetras (standard length, SL, of 2.29 ± 0.16 cm) were collected from Bojongsari, Curug, and Pondok Petir fish farms located in Depok Districts, West Java. All fish were relocated and reared in a fish farm specialized in culturing neon tetra for two weeks using nine aquariums with photoperiod set up of 12 hours bright and 12 hours dark. The RCC traits were determined according to the percentages of RCC length (%LRCC), RCC width (%WRCC), and RCC area (%ARCC) and quantified using the digital image method. The result showed that the RCC varied by sex, size, and original location (P<0.05) in a low coefficient of variation (1.89%-11.41%). The RCC values in the male group were higher than that of females based on %LRCC and %ARCC parameters (P<0.05). Males had the highest %LRCC at size LXL, which was correlated with SL (r 0.25, P<0.1), of females at M size. The %LRCC values of the neon tetra population from the Bojongsari farm were higher than those from the other locations. Based on these findings, breeding strategies of the RCC traits should consider sex, size, and population (farm location) variations. Specifically for neon tetra, this strategy should be based on selecting the SL or %LRCC parameter of M for females and LXL for males. INTRODUCTION Neon tetra (Paracheirodon innesi) is an endemic fish from South America (Weitzman & Fink, 1983) and has been the main export commodity of ornamental fish from Indonesia (BRBIH, 2011;Putra, 2014). This characid member is cultured in West Java with its production center located at Bojongsari Subdistrict, Depok City. The total production of neon tetra from the production center was reported to reach 25.3 million fish per year (DKP3, 2018). The wild type of neon tetra is characterized by (a) a dark brown-black strip on the dorsal area; (b) a greenblue strip on the lateral side; and (c) a red pigment from the caudal fin to the middle of the body (Weitzman & Fink, 1983). Other varieties, including albino, xanthic, golden strip, and blue diamond spots on the head, were resulted from aquaculture fish breeding through random mating (Balon, 2004;Live Aquaria, 2019;Seriously Fish, 2019). These varieties contributed to increasing variety and prices (Rodi Fish Farm, 2019). In the ornamental fish business, the development or refining of a fish strain is essential to improve the strain variants' characteristics, price, and quality (Shaddock, 2012;AquariumGlaser, 2020). Red color coverage (RCC) is a commercial trait that is interesting to consumers and has been developed in many ornamental fish commodities (Colihueque & Araneda, 2014;Li et al., 2015;De Kock & Gomelsky, 2016). The RCC trait breeding strategies that have been developed include extending, narrowing, or eliminating the body's red color areas Variations of red color coverage of cultured neon tetra ..... (Ruby Vidia Kusumah) (Shaddock, 2012;Pedersen, 2013). 
Some studies of RCC in fish have been carried out, including quantification techniques, variation analysis, crossbreeding between fish with different percentages of RCC, and analysis of RCC controlling gene (David et al., 2004;Novelo & Gomelsky, 2009;Du et al., 2018). In neon tetra, scientific information on the variation of RCC is not yet investigated or reported. Color variations, including RCC, can be influenced by genetic, sex, size, physiological responses, and environmental conditions (David et al., 2004;Parichy et al., 2009;Kelley et al., 2016;Meilisza et al., 2017;Linhares et al., 2018). The RCC variations are essential to reveal the status of color performance of cultured neon tetra based on sex, size, and farm location to improve breeding techniques of the fish species. Based on the farm location, neon tetra populations could have different genetic variations (see Sahoo et al., 2018) influenced by environmental factors, feed management, and the farming system itself. Bojongsari, Curug, and Pondok Petir are villages in the Bojongsari sub-district (Figure 1), where most fish farmers cultivate neon tetra. Preliminary surveys and interviews revealed the farms in these locations show similarities in rearing conditions (Table 1), feed management and farming systems. In these locations, neon tetra is cultivated in one-meter aquariums with a stagnant water system and daily siphoning. Fish were fed twice a day with Artemia, Moina sp., Tubifex (adlibitum), or commercial pellets (at satiation). However, based on trade, farms chosen for this study in each location are not connected (fragmented). Therefore, the population with no interaction with other populations can exhibit different genetic variations/ structures (Beer et al., 2019). This study aimed to analyze the RCC variation of cultured neon tetra as a basis for breeding strategies. The research was expected to improve the breeding techniques and overall quality of farmed neon tetra, specifically its red color coverage variations. MATERIALS AND METHODS The research was performed between March and May 2019 in a fish farm at the Neon Tetra Culture Center, Curug, Bojongsari Subdistrict, Depok City, West Java. In Indonesia, this location is the center of production of neon tetra and supported by multiple culture facilities. Fish Collection and Maintenance A total of 900 neon tetras measuring the M-XL sizes of red border length (RBL) ranging in 1.9-2.7 cm were collected from three fish farms located in Bojongsari, Curug, and Pondok Petir villages of Bojongsari Subdistricts (Figure 1). The rearing parameters in each fish farm, including temperature, pH, TDS, DO, and light intensity (Table 1), were measured in the morning, noon, and afternoon for three days. The measurements were carried out in the sampled aquariums used for the broodstock, larvae, and juvenile. The fish were then separated into nine groups using the combination of fish location and size (M, ML, L-XL). The fish groups were kept in nine 100 cm x 50 cm x 30 cm aquariums of 25 cm water height with 100 fish/aquarium density. Each aquarium has a photoperiod set up of 12 h light : 12 h dark and was continuously aerated to maintain dissolved oxygen (DO) level 3 mg L -1 (SNI 8111: 2015(BSN, 2015). The fish rearing media was previously added with approximately 2 g L -1 of salt and 12 pieces of dried "ketapang" leaves (Terminalia catappa) folded and bound with rubber bands. 
Ketapang leaves were used to create culture media resembling the neon tetra's black-water habitat, to lower the pH, and to provide humic substances (humic acid, fulvic acid, tannin, flavonoid) beneficial to fish health. Fish were fed twice daily, with Moina sp. (ad libitum) in the morning and Tubifex (ad libitum) or commercial pellets (at satiation) in the afternoon. Every two days, the aquariums were siphoned, and their water quality and ambient parameters were measured, including temperature, pH, DO, total dissolved solid (TDS), and light intensity. Documentation and Characterization of RCC After maintenance of 14 days, fish samples were photographed one by one to document their RCC on a 10 cm-diameter petri dish under a light intensity of 310.9-726 lux. Each fish was photographed five times on each body side and then returned to the aquarium. The camera used in the RCC documentation was a Canon EOS 450D, positioned 30 cm from the sample in a perpendicular (90°) direction. The camera settings were: focal length 55 mm, no flash, F-number F/5.6, exposure time 1/60 second. The digital photos (images) produced were saved in JPEG format with a resolution of 4272 x 2848 pixels (12.2 Megapixels). The total number of fish analyzed from each location was 36, considering the fish sex composition (1:1) and size (108 fish in total). Males, females, and all sizes of fish were randomly collected from the nine maintenance aquariums. Sex was determined by morphological dimorphism: the male has a slender and longer body, while the female is rounder and broader on the abdomen. From the dorsal viewpoint, the female stomach swells while the male's is flat. The neon tetra breeder guided sex determination to ensure accuracy. The size criteria were determined based on the red border length, measured from the mouth to the red border on the caudal fin. The measurements of red border length (RBL), standard length (SL), body area (BA), and the RCC traits, including LRCC (RCC length), WRCC (RCC width), and ARCC (RCC area) (Figure 2), were performed using Adobe Photoshop CS5 Extended software version 12.0 x64. The LRCC and WRCC parameters were divided by SL (%LRCC and %WRCC), while ARCC was expressed as a percentage of BA (%ARCC) (Kusumah et al., 2016). Data Analysis The data obtained were classified by sex, size, and location. The size of fish was categorized based on the standard neon tetra market sizes: M (1.9-2.1 cm RBL), ML (2.2-2.3 cm RBL), L (2.4-2.5 cm RBL), LXL (2.5-2.6 cm RBL), and XL (>2.7 cm RBL), converted to standard length (SL). The RBL and SL correlation is depicted by a linear regression model in the total samples (A: pooling, 108 fish). SL comparison tests based on sex (B: 54 samples for each sex), size (C: 4-43 samples for each size), and location (D: 36 samples for each location) were analyzed using ANOVA followed by the LSD Fisher test. Population estimation of the SL parameter was calculated using a formula in which n denotes the number of samples. Based on the RCC parameters (%LRCC, %WRCC, %ARCC), the sex criterion was first used to analyze RCC variations, with 54 samples for each sex (B) to compare RCC variations between male and female fish. If the RCC variation by sex revealed a significant difference, the comparison test was repeated with size and location criteria within each sex. Total samples of M : ML : L : LXL : XL sizes respectively were 3:13:21:14:3 fish for males and 3:23:22:5:1 fish for females (E).
Based on location, total samples from the Bojongsari : Curug : Pondok Petir farms were 18:18:18 fish for each sex (F). Normal distribution of the RCC variations was analyzed using the Kolmogorov-Smirnov test (KS). The homogeneity test used the Levene test (Lv). The coefficient of variation (CV) was analyzed descriptively. The RCC comparisons based on sex (B), size (E), and location (F) were analyzed using ANOVA followed by the LSD Fisher's test, a correlation test, and regression of the RCC parameters with SL (B) and among the RCC parameters (%LRCC, %WRCC, %ARCC) (B). The Kruskal-Wallis test was used to analyse RCC parameters that were not normally distributed and/or not homogeneous. The similarity of Total RCC (%LRCC + %WRCC + %ARCC) between locations (D) and between SL, BA, LRCC, WRCC, and ARCC values (A) was analyzed using cluster analysis. Data are presented in tables and graphs. All data analyses were carried out using Minitab 16.2.4.0 and Microsoft Excel software. Standard Length Variations of Neon Tetra The standard length (SL) of neon tetra varied, with a coefficient of variation (CV) range of 1.71%-7.32% (Table 2). The SLs of the male fish groups were higher than those of the females (P<0.05). Meanwhile, the fish from the Bojongsari farm had longer body lengths than those from the other locations (P<0.05). Each size group showed significant differences (P<0.05), with the lowest CV for size M and the highest CV for size XL. The estimated range of variation in SL at the population level was higher and tighter than the sample data range. In breeding programs, a low range and variation of the population are positively correlated with selection intensity and selection response. The standard length of the samples in this study ranged between 1.93-2.76 cm (2.01-2.84 cm RBL), i.e., M-XL size (Table 2), which showed that the RCC characters had already developed (Supplement 1). According to Chapman et al. (1998), color appears in neon tetra larvae, resembling that of adult fish, at 28-32 days after hatching, categorized as S size (± 1 cm RBL). The S-sized fish needs about 45 more days to reach M size (1.9-2.1 cm RBL). Red border length (RBL) is the standard parameter for determining the neon tetra's market size used in the Bojongsari Sub-district. In this study, RBL was significantly correlated with standard length (SL) (r = 0.99, P<0.001), with the regression model SL = -0.0120 + 0.972 RBL (R² = 0.97, P<0.001). Red Color Coverage Variation of Neon Tetra The red color coverage (RCC) character values, including %LRCC, %WRCC, and %ARCC, indicated normal and homogeneous distribution patterns (P>0.05), except the %LRCC in the male fish population from the Bojongsari farm and the %WRCC from the Curug farm (P<0.01). Several studies have reported that color characters can be categorized as qualitative traits influenced by single or multiple genes (David et al., 2004; Gorshkov, 2014) or as quantitative traits that can be measured, are normally distributed, and are controlled by many genes influenced by environmental conditions (Gomelsky, 2011). The results of this research confirmed that the RCC character in neon tetra is categorized as a quantitative trait (Kusumah et al., 2016; Rankin et al., 2016; Meilisza et al., 2017) and is valid for parametric analysis. The RCC performances of the neon tetra samples were significantly different (P<0.05) based on sex, size, and farm location, with coefficients of variation (CV) ranging from 1.89% to 11.41% (Table 3).
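The measurement definitions and the reported RBL-SL regression can be combined into a small helper, sketched below. The regression coefficients and the percentage definitions follow the text; the size-class limits follow the supplement (M: 1.9-2.1, ML: 2.2-2.3, L: 2.4-2.5, LXL: 2.6-2.7, XL: >2.7 cm RBL). The data structure, function names and example values are hypothetical, and the handling of values falling between adjacent class limits is a simplifying assumption.

```python
from dataclasses import dataclass

@dataclass
class FishMeasurement:
    rbl: float   # red border length (cm), mouth to red border on the caudal fin
    sl: float    # standard length (cm)
    ba: float    # body area (cm^2)
    lrcc: float  # length of red color coverage (cm)
    wrcc: float  # width of red color coverage (cm)
    arcc: float  # area of red color coverage (cm^2)

def rcc_percentages(m: FishMeasurement) -> dict:
    """%LRCC and %WRCC are expressed relative to SL, %ARCC relative to BA."""
    return {
        "%LRCC": 100 * m.lrcc / m.sl,
        "%WRCC": 100 * m.wrcc / m.sl,
        "%ARCC": 100 * m.arcc / m.ba,
    }

def rbl_to_sl(rbl_cm: float) -> float:
    """Regression reported in the paper: SL = -0.0120 + 0.972 * RBL (R^2 = 0.97)."""
    return -0.0120 + 0.972 * rbl_cm

def market_size_class(rbl_cm: float) -> str:
    """Market-size class from red border length, using the supplement's limits."""
    if rbl_cm < 1.9:
        return "S"    # below the smallest market size used in the study
    if rbl_cm <= 2.1:
        return "M"
    if rbl_cm <= 2.3:
        return "ML"
    if rbl_cm <= 2.5:
        return "L"
    if rbl_cm <= 2.7:
        return "LXL"
    return "XL"

# Hypothetical example fish
fish = FishMeasurement(rbl=2.35, sl=2.27, ba=1.8, lrcc=1.4, wrcc=0.25, arcc=0.5)
print(rcc_percentages(fish))
print(round(rbl_to_sl(fish.rbl), 2), market_size_class(fish.rbl))
```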
This RCC variation (CV of 1.89%-11.41%) is lower than the variation in brown coverage in Cynoglossus semilaevis of 63.78% (Liu et al., 2016), and than that of other colors such as black (23%-75%), blue (16.7%-75%), iridescent (25%-53.8%), and orange (25%-66.7%) in Poecilia reticulata (Martínez et al., 2016). The variation value of a character is the basis for developing strains through breeding (Shaddock, 2012; Kottler et al., 2013; Ponzoni et al., 2013; Moses et al., 2020), including red color coverage. Genetically, the red color is controlled by multiple genes, the same genes controlling the synthesis of pteridine and melanin, the metabolism of tyrosine, and genes related to cell responses to stress (Li et al., 2015; Zhang et al., 2017). In neon tetra, the performance of red color is influenced by light conditions, which can increase or fade the fish's color intensity (Hayashi et al., 1999; Linhares et al., 2018). Based on the sex criterion, the RCC variation in male fish is generally lower than in females. According to fish size, the LXL group has a higher variation than the other sizes. Based on farm location, the fish population from Bojongsari was more varied than the others (Table 3). In certain fish species, sex is a morphological criterion to distinguish between males and females (sexual dimorphism) in aquaculture broodstock selection. In general, the character performance of males is more attractive than that of females (Camelier et al., 2018; Kottler & Schartl, 2018). This research showed that the %LRCC and %ARCC values in males were higher than those of females (P<0.05) (Table 3). However, farmers usually differentiate the sex of neon tetra based on the shape, size, and color of the abdomen. For example, the abdomen is yellowish for females or blackish for males, or the body shape is slender and elongated for males but rounded and widened for females. Based on the size and location criteria, the sexual characteristics of male RCC were more varied and significant (P<0.05) but tended to be stable in females (P>0.05) (Table 3). This information can be used as a basis for consideration when selecting male neon tetra with specific RCC characters. The male fish sized LXL had the highest %LRCC and %ARCC (P<0.05) compared to the other sizes, with differences of 1.5%-4.6% SL and 0.08%-3.38% BA, respectively (Table 3). The %LRCC values in the male fish group showed an increasing trend with increasing size from M to LXL, whereas female fish indicated a declining trend. These results confirmed that the RCC trait has optimal limits, which differ by sex. The optimal limit of RCC in male fish is at the LXL size; in contrast, the female limit is at the M size. Size is an indicator of the morphological (ontogenetic) development of fish species (Parichy et al., 2009). In immature fish, morphological characters, including color traits, have not yet fully developed (Parichy et al., 2009; Baras et al., 2012; Marinho, 2017; Salis et al., 2018). The male fish from the Bojongsari farm showed the highest %LRCC variation, of between 0.7%-1.6% SL, compared to the fish populations from the other locations (Table 3). Each farming location may have specific characteristics of environmental factors, including water quality, food availability, management, systems, aquaculture technology, and the possibility of gene flow between fish farm populations. In some aquaculture species, several parameters affect morphological variations, including body-color characters.
This research confirms that the fish population from the Bojongsari farm has the highest RCC, and it can be considered the base population for improving the strain and breeding techniques of neon tetra. Correlation and Regression Analysis Between RCC Parameters in Neon Tetra The Pearson correlation analysis and simple linear regression between RCC parameters (Table 4) showed that %WRCC in female fish was negatively correlated with standard length (P<0.05), and %ARCC was correlated with %LRCC (P<0.001). These results indicate that the %LRCC parameter could be used as a selection indicator in male fish to obtain the expected %ARCC, whereas in female fish the %WRCC is used. Cluster Analysis of RCC Variations in Neon Tetra Cluster analysis based on a combination of the three RCC parameters (%LRCC, %WRCC, %ARCC) showed that neon tetra from different locations had a high similarity (>99.5%) (Figure 3). The RCC variations of the fish populations from Curug and Pondok Petir showed a higher resemblance to each other than to Bojongsari. The low variation of RCC between locations (Table 3) explains the high similarity of the RCC character in neon tetra between locations (Figure 3). This similarity also indicates that gene flow in the three populations still occurs due to the high trade intensity and broodstock demand between farmers of these areas. Since 2014, the neon tetra trade from Pondok Petir has been disconnected from the Curug farm. Meanwhile, since 1994, the Bojongsari farm has not supplied fish to the Curug farm. However, not all farms in the Bojongsari Sub-district have disconnected their trade like the farms chosen for this study; many farms are still connected due to their proximity (around 2-3 km) and the exchange of neon tetra supplies. Bojongsari, Curug, and Pondok Petir are three villages that border each other. Based on the survey, most of the water quality and ambient parameters at each location were not significantly different (P>0.05), except DO (Table 1). The fish samples were adapted for two weeks at the same place in the Curug Fish Farm. The measured ambient and water quality parameters of the rearing media, namely temperature, pH, DO, TDS, and light intensity, ranged between 27°C-29°C, 3.5-5.7, 4.2-5.1 mg L-1, 327-979 mg L-1, and 70-245.6 lux, respectively. These measurements were conducted before the RCC variation documentation (Table 3) to ensure that the source of variation was not environmental but genetic. The low variation of RCC is also possibly affected by the occurrence of inbreeding in neon tetra, so the RCC characteristic tends to be stable from the past until the present. Further, it is necessary to compare the RCC variation of neon tetra from wild and cultured populations. The relationship analysis between the parameters affecting RCC variations (%LRCC, %WRCC, %ARCC) in neon tetra showed that standard length (SL) is closely related to body area, with 97% similarity. In comparison, the RCC length (LRCC) is related to RCC area (ARCC) with 94% similarity, and RCC width (WRCC) shows a close relationship with the other parameters at the 81% similarity level (Figure 4). CONCLUSIONS Red color coverage (RCC) variations of neon tetra are influenced by sex, size, and population (farm location). The values of %LRCC and %ARCC in males are higher than in females. The variation of female %WRCC is negatively correlated with standard length.
The optimal RCC values in males and females are at LXL and M sizes, respectively. The standard lengths and RCC variations of neon tetra populations from Pondok Petir and Curug have a higher similarity to each other than to the fish population from Bojongsari. Breeding strategies of the RCC trait should consider the level variations of sex, size, and population (farm locations). In neon tetra, breeding strategies of the RCC trait should be based on the selection of the standard length or %LRCC parameter of M size for females and LXL size for males. ACKNOWLEDGMENTS The authors thank The Research Institute for Ornamental Fish Culture, Depok, for partially funded the research. The National Outstanding Farmers and Fishermen Association (KTNA) chapter Bojongsari Subdistrict, Community Empowerment Organization (LPM) chapter Curug Village, Rodi Fish Farm, Ape Fish Farm, and Joy Fish Farm are thanked for providing access to farm facilities and fish samples. The authors gratefully acknowledge the following individuals who provided assistance during the research period: Riyon Iki, Robi Darwis, Safendi, Irwan Setiawan, and all of the genetic group members. The authors thank Abdan Julian Kusumah, Isa Budi, and Tarsiwan for their help in data analysis. Sincere thanks to Mrs. Indriyani Kusumah, who significantly contributed to improving the readability of this paper. RCC Variation in Sex and Sizes and RCC Position in The Body of Neon Tetra Red color coverage (RCC) of cultured neon tetra is formed by red pigment cells (erythrophore) associated with other color cells. RCC is distributed from caudal fin to adipose fin in dorsal, go down to stomach area, anal fin in the abdomen area, and then back to caudal fin (Supplement 1b). Erythrophore overlapping melanophore on the back of the lateral area Supplement 1. RCC variation in sex and sizes (a) and RCC position in the body of neon tetra (b-h), (b) RCC position in the whole body, (c-e) erythrophore and mel anophore overlapping, (f) erythro phore and leucophore/iridophore over lapping, (g) erythrophore distribution on the edge of dorsal, (h) erythro phore distribution on dorsal fin rays, af: adipose fin, s: stomach, l/i: leucophore/iridophore, df: dorsal fin, M: 1.9-2.1 cm RBL, ML: 2.2-2.3 cm RBL, L: 2.4-2.5 cm RBL, LXL: 2.6-2.7 cm RBL, XL: >2.7 cm RBL, RBL: red border length. (Supplement 1c), in the top, middle, and bottom of caudal peduncle area (Supplement 1d), in the dorsal and the end of the neon strip (Supplement 1e), and then enter the back of the stomach area and partially covered silver-gold of the structural color cell (leucophore/iridophore) (Supplement 1f). Red pigment area is also found in the front edge of an adipose fin in the dorsal region (Supplement 1g) and dorsal fin rays (Supplement 1h). In our study, the RCC character was only the red pigment on the body, not including on the fins (Supplement 1a).
“If worse comes to worst, my neighbors come first”: social identity as a collective resilience factor in areas threatened by sea floods Research on collective resilience processes still lacks a detailed understanding of psychological mechanisms at work when groups cope with adverse conditions, i.e., long-term processes, and how such mechanisms affect physical and mental well-being. As collective resilience will play a crucial part in facing looming climate change-related events such as floods, it is important to investigate these processes further. To this end, this study takes a novel holistic approach by combining resilience research, social psychology, and an archeological perspective to investigate the role of social identity as a collective resilience factor in the past and present. We hypothesize that social identification buffers against the negative effects of environmental threats in participants, which increases somatic symptoms related to stress, in a North Sea region historically prone to floods. A cross-sectional study (N = 182) was conducted to analyze the moderating effects of social identification on the relations between perceived threat of North Sea floods and both well-being and life satisfaction. The results support our hypothesis that social identification attenuates the relationship between threat perception and well-being, such that the relation is weaker for more strongly identified individuals. Contrary to our expectations, we did not find this buffering effect to be present for life satisfaction. Future resilience studies should further explore social identity as a resilience factor and how it operates in reducing environmental stress put on individuals and groups. Further, to help communities living in flood-prone areas better cope with future environmental stress, we recommend implementing interventions strengthening their social identities and hence collective resilience. Introduction Floods are stressful events for any affected population and threaten the very fabric of their shared existence. They cause distress on the individual and collective levels and therefore harm people's and group's physical and mental well-being. Considering the intensification of future devastating climate change-related flood events (CRED and UNDRR 2020), there is some urgency in gaining comprehensive knowledge of collective resilience factors facilitating a population's physical and mental well-being, and in turn, the willingness to stay in flood-prone areas. For example, Ntontis et al. (2019) and Drury (2018) investigated social identification as a factor helping the group to counter negative effects of stress by focusing on the psychological and behavioral processes of individuals in the aftermath of a flood. However, relying mainly on individual accounts of group performance after such emergency events took place, provides an unbalanced assessment of social identity as a collective resilience factor. Therefore, exploring collective resilience from different temporal perspectives provides a more nuanced picture of how the process is shaped by operating resilience factors. Subsequently, we contribute to the growing collective resilience literature by focusing on the group level via a social identity approach. To this end, we aim to investigate social identification as a collective resilience factor in a flood-prone coastal area in Germany (East Frisia), during a time of no acute threat of flooding and combine the findings with archeological evidence from the same area. 
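The moderation analysis summarized in the abstract (social identification buffering the relation between threat perception and well-being) can be sketched as an ordinary least squares model with an interaction term. The column names, the mean-centering step, the file name and the use of statsmodels are assumptions for illustration; they are not taken from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per participant with the survey scales:
#   threat          - perceived threat of North Sea floods
#   identification  - strength of social identification with the community
#   somatic         - somatic symptoms (negative well-being indicator)
#   life_sat        - life satisfaction
df = pd.read_csv("krummhoern_survey.csv")  # hypothetical file name

for x in ("threat", "identification"):
    df[x] = df[x] - df[x].mean()  # center predictors before forming the interaction

# Moderation model: does identification buffer the threat -> symptoms link?
model = smf.ols("somatic ~ threat * identification", data=df).fit()
print(model.summary())

# A buffering (social cure) pattern would appear as a negative threat:identification
# coefficient: the threat-symptom relation weakens as identification increases.
```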
The knowledge we gain from such a combined "past and present perspective" can strengthen future responses to climate change-related events, because not all populations have the means (e.g., economically) or willingness (e.g., culturally) to relocate. For example, in medieval times, floods would unquestionably have had detrimental psychological outcomes on an already deprived population (Brown 2021). However, past populations endured in East Frisia for centuries and demonstrated their ability to cope, to adapt and to implement technical solutions to secure their survival and well-being. Hence, by examining well-being and life satisfaction in relation to the social identity theory as resilience outcomes in the past and present, we contribute to psychological research directed at possible flood-related mental health issues such as anxiety, post-traumatic stress disorder (PTSD), depression, and most relevant to our study, a "decreased sense of self and identity from loss of place and grief reactions" (Palinkas and Wong 2020, p. 12). Theoretical and conceptual framework As shown in Fig. 1, we connect the concept of collective resilience to well-being and life satisfaction via the social identity theory (Jetten et al. 2012;Tajfel and Turner 1979;Turner et al. 1987). Here, the social identity theory acts as a merger in the resilience process by shaping collective cognition and action (Turner 2010). Social identification has previously been found to increase the well-being of the group and its members (Jetten et al. 2012) and is linked to mechanisms relating to group maintenance (Molenaar et al. 2021). However, it has yet to be corroborated more thoroughly as a collective resilience factor. Archeology, on the other hand, adds a bird's eye perspective on long-term social developments. There is archeological evidence for strategies of collectively navigating the stressor of floods by adapting settlement structures to the given environmental situation and the needs of the group. For example Swierczynski et al. (2013) report on changes in Neolithic settlements, located at the Mondsee in the Alps. There, inhabitants collectively changed their settlement pattern from "dwellings on the wetlands […] to constructing lake dwellings on piles upon the water" in order to cope with increased flood risk from the lake around 5400-4700 cal. yr. BP (Swierczynski et al. 2013(Swierczynski et al. , p. 1610. Hence, archeology fuses well with the social identity theory and collective resilience processes which are assessed using each discipline's methods of analysis. Concerning psychology, the present research incorporates empirically collected survey data from a population potentially affected by floods. Regarding archeology, material findings from excavations are contextually interpreted based on theoretical concepts, i.e., social identity and collective resilience. We accomplish a present no-threat and past perspective on collective flood resilience by focusing the psychological data and archeological evidence on well-being and life satisfaction as the outcome of a collective resilience process. Collective resilience Resilience at an individual or collective level is a wide-ranging topic and its definitions heavily depend on the context and stressors studied (Southwick et al. 2014). Broadly speaking, resilience at the individual level can be defined as a "capacity and dynamic process of adaptively overcoming stress and adversity while maintaining normal psychological and physical functioning" (Wu et al. 2013, p. 1). 
Moreover, research has looked at resilience factors, which help the individual counter harmful effects of stress and thus increase well-being (Diehl et al. 2012). However, in the last decade, researchers have realized that focusing on the individual level alone misses the processes and factors that help groups demonstrate resilience after being exposed to stressors, e.g., natural disasters. Therefore, recent resilience literature has prioritized the collective levels (see Drury et al. 2009;Lyons et al. 2016;Norris et al. 2008). Similar to individual resilience, collective resilience definitions center on how communities deal with adverse conditions by overcoming them with a positive outcome of either returning to status ante or achieving an even better condition than prior to the negative event (Lyons et al. 2016;Southwick et al. 2014). Resilience and well-being are two inseparable concepts (Harms et al. 2018). A recent study on twins and siblings argues for a strong correlation between wellbeing and resilience, with both having "some causal effects on each other" (de Vries et al. 2021, p. 11). Moreover, de Vries et al. (2021) found that this relationship is determined equally by genetic and environmental factors. Hu et al. (2015) highlight the necessity to measure the positive as well as negative indicators of mental health since both have a significant impact on the resilience process. Therefore, it is critical to measure negative aspects of well-being (physical complaints) and positive aspects of well-being (life satisfaction) as resilience outcomes. Furthermore, several collective resilience factors relevant to overcoming natural disasters and well-being include social support (Kaniasty 2012;Kaniasty and Norris 2008;Masson et al. 2019;Norris et al. 2008), collective efficacy (Benight 2004;Norris et al. 2008), and community cohesion (Hetherington et al. 2018;Kaniasty 2012). Appraisal-coping processes also influence how individuals and groups deal with stressors and the extent to which this negatively affects their well-being (Jetten et al. 2012;Lazarus and Folkman 1984). Having experienced a flood will result in a sense-making process by which the individuals and groups appraise "causes, danger and future threat" (Biggs et al. 2017;Rochford and Blocker 1991, p. 176). Especially the "primary appraisal determines the meaning and significance of a transaction to wellbeing" (Biggs et al. 2017, p. 353). In this sense-making process, individuals and groups can appraise a flood event as a stressor, and if those affected think of it as a threatening or harmful event then negative emotions associated with threat and harm can have negative effects on well-being (Biggs et al. 2017;Rochford and Blocker 1991;Tapsell and Tunstall 2008). Therefore, we hypothesized that threat perception in relation to floods is a key stressor for people living in flood-prone areas and this is likely to have a negative effect on well-being and life satisfaction, hence we predict: Hypothesis 1 The perception of threat (by floods) in present-day inhabitants of Krummhörn is related to their levels of (a) well-being and (b) life satisfaction. The social identity approach The social identity approach encapsulates all the positive and negative effects group memberships can have on the individuals' sense of self that arises from their membership within these groups (Jetten et al. 2012). This approach is based on the social identity theory by Tajfel and Turner (1979) and self-categorization theory by Turner et al. (1987). 
While Tajfel and Turner (1979) investigated intergroup relations and how belonging to a group shapes the sense of self and perception of other groups, there has recently been a turn toward the positive effects group memberships have on the well-being of the individual. The physical and mental health benefits of social identification, also known as the social cure effect, are now a main focus of social identity research (Jetten et al. 2012). For social identification to have a positive effect on physical and mental well-being, groups must be valuable to the individual, and a "sense of sharing" that social identity (Häusser et al. 2012, p. 973) needs to be salient (DeMarco and Newheiser 2019). Moreover, research shows that social identification relates to better physical and mental health via the resilience factors of social support and collective efficacy (Avanzi et al. 2015, 2018a; Junker et al. 2019). The underlying mechanisms of the social cure effect are as follows: with higher social identification (a) group members get more social support (Frisch et al. 2014) and (b) develop a stronger "sense of collective self-efficacy" (Van Dick et al. 2017, p. 2), which is associated with high group esteem (DeMarco and Newheiser 2019). In addition, social identification is then able to satisfy psychological needs (Greenaway et al. 2016) such as the need for certainty or belonging (Van Dick et al. 2017) and successfully reduces negative effects of stress in response to threat by decreasing cortisol levels (Häusser et al. 2012), cardiovascular responses to stress (Gallagher et al. 2014), depression (Cruwys et al. 2014), and burnout symptoms (Steffens et al. 2014).

Recent work on mass emergencies and community resilience linked to natural disasters highlights the importance of a shared social identity in explaining the rapid growth of cohesion and solidarity in emergency situations such as floods. Emanating from "shared self-categorization" (Williams and Drury 2009, p. 295), shared social identities are created by individuals recognizing the common fate of all group members involved, i.e., neighbors or the whole village (Williams and Drury 2009). Hence, on the group level, shared social identity fosters "cognitive, relational and emotional transformations which allow people to co-act effectively and thereby constitute a source of social power" (Reicher et al. 2018, p. 29) and motivates collective action (De Weerd and Klandermans 1999). Williams and Drury (2009) established the term collective psychosocial resilience to describe the resilience process of people acting as a group in emergency situations. It is assumed that "collective reactions to emergencies and disasters are more typically resilient" (Williams and Drury 2009, p. 294), as instead of panic and chaos exhibited only by very few individuals, it is cohesion, solidarity, and social support that make up the group's response to adversity. The mechanism that helps groups to exhibit collective resilience can also be traced back to group appraisal-coping processes and the extent to which a stressor negatively affects collective well-being (see Häusser et al. 2020). However, as noted before, social identification has been found to buffer against negative effects of stress by positively influencing appraisal-coping mechanisms on the group level (Jetten et al.
2012), and thus we predict: Hypothesis 2 Social identification moderates the relations between threat perception and (a) well-being and (b) life satisfaction, such that highly identified residents will be less negatively affected by perceptions of threat. Background of the study area Our specific research area Krummhörn is located in the south-eastern region of the North Sea coast in Germany, also known as East Frisia. Krummhörn is located in the Wadden Sea landscape comprising marshlands, geest, and peat bogs which are shaped by the tidal nature of the North Sea. Moreover, its location at a mean sea level of 0 means the area is exposed to North Sea storm surges as no natural elevation exists. Consequently, when storm surges approach Krummhörn's coastline they flood the surrounding landscape with ease if no protection such as dykes exists. The National Oceanic and Atmospheric Administration defines a storm surge as "the abnormal rise in seawater level during a storm, measured as the height of the water above the normal predicted astronomical tide" (NOAA 2020) and depending on "the orientation of the coastline with the storm track; the intensity, size, and speed of the storm", they can be a very destructive and deadly natural event (NOAA 2020). Such a special and dynamic landscape also shapes the local ways of life eternally, with Brinckerhoff Jackson (1984, p. 8) noting: Landscape is not a natural feature of the environment but a synthetic space, a man-made system of spaces superimposed on the face of the land, functioning and evolving not according to the natural laws but to serve a community -for the collective character of the landscape is one thing that all generations and all points of view have agreed upon. […] A landscape is thus a space deliberately created to speed up or slow down the process of nature. His thoughts convey similar ideas to those expressed by social ecologists, who propose that "our thinking, feeling and behaviors are influenced by our ecologies, and that our ecologies are shaped in part by our thinking, feeling and behaviors" (Uskul and Oishi 2020, p. 181). Thus, Krummhörn is a place made up of a harsh physical environment, shaping peoples' coping behaviors over centuries and ensuring the continued existence of the population. Archeology of the study area: theoretical integration and analysis In addition to our empirical study of present Krummhörn inhabitants, we are especially interested in how past populations coped with storm surges and floods, the tidal nature of the North Sea, and the wet, cold, and stormy weather characteristic of this coast. One feature of the region standing out is the specific type of village settlement pattern known as Terp. This type of settlement also exists along the coastline of North Frisia, Groningen, and other regions in the Netherlands bordering the North Sea. From archeological excavations of Krummhörn Terps, we know that the construction of these settlements in East Frisia started around 100 BC and ended with the beginning of dyke construction in 1000 AD (Erickson 2018). Terps are artificially erected dwelling mounds of an oval to round or long shape (Erickson 2018). The different generations living on these Terps accumulated waste materials such as household rubbish, manure, and soil over hundreds of years to increase the elevation of their settlement in order to protect inhabitants from floods. 
The Terps in Krummhörn have a height between one and six meters above mean sea level (Nicolay 2014) and are located between one and five kilometers away from the dyke. However, considering the potential intensity of some storm surges and resulting floods, the elevation of the Krummhörn Terps is not very high. Moreover, as populations grew over time, these settlements became very densely populated areas as more dwellings were built on top of the Terps. This, in turn, might have increased the vulnerability of the population. Therefore, the environmental and social situation at the time of Terp building as well as the circumstances during dyke building required resilient collective action from the residents.

Furthermore, the archeological record holds information about the health conditions of individuals living on Terps. The cold and wet living conditions in East Frisia were generally tough in the past and the population suffered from heavy physical stress and chronic protein and vitamin deficiency, accompanied by an accumulation of life-threatening cases of mastoiditis and otitis, which are complicated ear infections and inflammations (Bärenfänger 2006; Burkhardt 1998, 2009). Human skeletal remains from the 12th to the sixteenth century AD from Krummhörn and its neighboring regions show traces of different phases of acute malignant otitis externa, an ear infection affecting the "skull base" and "external ear" in such a way that the bones become necrotic (Carfrae and Kesser 2008). For example, during excavations in the church of the Terp village Rysum, also one of our surveyed villages, archeologists discovered a sixteenth century burial of an eight-year-old girl in the church choir. She was buried without grave goods and her skeletal remains revealed that she was not able to survive the acute ear infection, which had already spread to her brain and damaged her skull bones. The slow process of such an infection, which is accompanied by severe pain, can put additional mental health stress not only on the affected individual but also on the whole group, which has to witness and endure the suffering of others (Burkhardt 1998, 2009). Hence, Terp building might also be an expression of the groups' willingness to stay in such a harsh environment by moving closer together. This gain in closeness among group members is likely to have increased the populations' collective resilience, because it may have helped create a shared social identity (group level) leading to collective action (Terp building) which, in turn, promoted further physical and mental well-being on the individual and group level. Therefore, we believe that Terps are a local feature of the Krummhörn landscape which demonstrate how a shared social identity fostered collective action-taking as well as collective resilience among the residents in recurring emergency situations and beyond. It helped the affected population to resist the threats from floods and endure poor living conditions while simultaneously enabling them to continue living in this region. Thus, shared social identity and its associated cognitive processes (see Williams and Drury 2009, p. 295 on "shared self-categorization") provide a basis for conceptualizing social identity as a potential collective resilience factor.

Procedure

Recruiting participants from very rural areas is a challenging undertaking as most communities live a segregated way of life.
In addition, as there is no good broadband connection in the Krummhörn area, we decided to use a paper and pencil survey method. The study was announced in the local newspaper and the Mayor of the Krummhörn region informed the heads of the villages on our behalf. These heads supported the first author in distributing questionnaires in each of the 19 villages. The questionnaires could be answered in private, as each questionnaire was in a sealed envelope, including a paid return envelope for an easy return to Frankfurt Goethe University. To get closer to random sampling, the first author herself went to Krummhörn for a weekend and randomly rang doorbells in each village in order to get in touch with the people, promote the study and distribute 27 more questionnaires. Due to health issues, questionnaires could not be distributed in one village. Importantly, 17 of the 18 participating villages were Terps. In the questionnaire, participants were first informed about the study and then presented with one of the three experimental threat perception conditions. The remaining measures were answered by all participants after the threat perception manipulation. At the end of the questionnaire, participants were informed about the experiment.

Participants

Out of 723 questionnaires sent, 202 residents of Krummhörn participated in this study and filled out an anonymous paper and pencil survey. Our inclusion criteria for the study meant we had to exclude twenty participants as they gave incomplete answers on the measures of the independent and dependent variables, including the variables age and gender. Accordingly, the final sample for statistical analysis includes 182 residents (33.5% women; M age = 57.03, SD = 16.72). Due to additional missing values in the demographic section, the following results are based on a sample including 176 residents (33.5% women; M age = 57.08, SD = 17.00). The majority of participants (62%, n = 176) can trace their ancestry in Krummhörn back three or more generations and 99% have German citizenship. There was only one person with an immigration background in our sample. Fifteen percent of participants (n = 176) reported having received degrees at institutions of higher education, 38% completed vocational training, 22% completed Higher Secondary Education (6 years of secondary school), followed by 16% completing Lower Secondary Education (4 years of secondary school). The remaining 9% of participants either completed Gymnasium (8 years of secondary school) or did not complete school. We also asked whether participants would have the financial means to leave Krummhörn and live somewhere else. Out of those who responded (n = 176), the majority (72%) answered positively, which suggests that financial issues are not the main reason to remain in Krummhörn. When asked if they had ever considered moving away, the majority (80%, n = 176) answered no. Moreover, the majority (95%) of participants do not have a second place of residence outside of the Krummhörn region; however, 38% of our participants have jobs in neighboring regions or commute further away. On average, participants (n = 176) live 6.02 km (SD = 3.82) away from the dyke.

Design

Participants were randomly assigned to one of three experimental conditions of a between-subjects design.
The random assignment was assured by stacking the paper and pencil surveys in such a way that they followed the order of condition one (high threat), condition two (medium threat), condition three (control group), condition one (high threat), condition two (medium threat), and so on. When handing out the questionnaires, each participant received the copy (in a sealed envelope) that was on top of the stack. All experimental conditions manipulated threat perception (independent variable) in order to explore its effect on well-being (dependent variable) and life satisfaction (dependent variable). Three short texts of about 200 words each were presented at the beginning of the questionnaire and included a high threat manipulation, a medium threat manipulation, or a control condition. In the high threat condition (n = 66), participants were informed about climate change and the future rise in North Sea levels, resulting in more destructive North Sea floods. The medium threat condition (n = 61) was a summary about the soil quality of the Krummhörn region. The control condition (n = 55) informed participants about the thirteenth century church architecture of Krummhörn. The reading task was followed by the manipulation check in which three questions assessed threat perception (see Measures). The moderating variable was social identification, and gender (male and female) was controlled for as a possible extraneous variable that might have an influence on the independent variables.

Measures

An overview of all measures is shown in Table 1, which also briefly explains the connection between stress, well-being and resilience.

Threat perception

We developed three items to assess flood risk perception based on the survey of De Dominicis et al. (2015). Specifically, using a 6-point Likert scale (1 = very low risk to 6 = very high risk), participants were asked to rate how high they perceived the risk that: "Your residence will be affected by a flood", "Your residence will be affected by a flood within the next 12 months", and "Your house will be affected by a flood within the next 12 months". Cronbach's α was 0.70.

Well-being

Well-being was measured with the Gießener Beschwerdebogen (Brähler and Scheer 1983; Van Dick et al. 1999), a 7-item health complaints scale developed in Germany to measure somatic symptoms related to stress. Participants were asked about the frequency of health complaints on a 6-point Likert scale (1 = rarely to 6 = very often). The health complaints covered "stomach pains", "feeling of weakness", "headaches", "palpitation of the heart", "dizziness", "feeling of faintness" and "heartburn". The items for ill-health were inverted in order to account for well-being. Altogether, Cronbach's α was 0.73 for this study.

Life satisfaction

Life satisfaction was measured with the 5-item Satisfaction with Life Scale (SWLS) by Diener et al. (1985), translated by Janke and Glöckner-Rist (2012). It asks participants to assess their current life situation on a 6-point Likert scale (1 = disagree strongly to 6 = agree strongly). Sample items are "In most ways my life is close to my ideal." and "I am satisfied with my life".
The scale yielded a Cronbach's α of 0.91. Social identification Social Identification was measured with 3 items based on the 4 item scale of Doosje et al. (1995), translated by Van Dick (2004). On a 6-point Likert scale (1 = disagree strongly to 6 = agree strongly), participants assessed how much they agree with the following statements: "I identify with my fellow villagers", "I am glad to live in my village" and "I feel strong ties with my fellow villagers". Cronbach's α was 0.87. Preparation All statistical analyses and graphs were done using R 4.0.3 (R Core Team 2020), including the packages psych (Revelle 2019), hash (Brown 2019), car (Fox and Weisberg 2019), dplyr (Wickham et al. 2020), rmisc (Hope 2013) and interactions (Long 2019). Using the package boot.pval (Thulin 2021) residuals were bootstrapped with a 95% confidence interval and 5000 iterations. The data was screened for the requirements of the regression analysis and oneway ANOVA analysis, yielding acceptable results. No outliers were detected and participants with missing values were removed from the analyses. For accuracy of the computation, all predictor and criterion variables were centered to counter interpretation and multicollinearity issues (Aiken and West 1991;Dardas and Ahmad 2015;Hayes 2017). Multicollinearity between the criterion variables was not a problem. The variance inflation factor (VIF) for the criterion variables in the final regression models for both predictor variables were all below 1.13, with a VIF of 1 indicating no multicollinearity present. Moreover, for all variables we averaged the Likert scale values to proceed with the regression analysis. The data file and questionnaires used to analyze the current study are available from the OSF repository (https:// osf. io/ vu47y/). Manipulation check Contrary to initial expectations, the ANOVA revealed no significant differences in threat perception (F(2, 725) = 0.20, p = 0.82) between the high threat condition (n = 66, M = 4.35, SD = 0.53), the medium threat condition (n = 61, M = 4.32, SD = 0.59) and the control condition (n = 55, M = 4.29, SD = 0.52) (also see Table 2). As a result, we combined the three conditions into one single variable with which we then calculate threat perception in our regression models. Therefore, our hierarchical regression analyses only used the averaged Likert scales measuring threat perception, social identification, life satisfaction, and well-being to examine our hypotheses. Table 3 shows the intercorrelations, means, standard deviations, and reliabilities for all variables. The correlations were computed using the Spearman correlation method to meet the assumptions for correlation testing. As expected, results show a negative correlation between our independent and dependent variables, which is supported by threat perception being negatively associated with well-being (r s = − 0.08, p = 0.27) and life satisfaction (r s = − 0.09, p = 0.24). However, the relationship between threat perception and well-being as well as life satisfaction is not very strong which supports our assumption of contingency factors influencing the relations. Correlations Social identification is positively and significantly related to life satisfaction (r s = 0.47, p < 0.01), indicating a strong relationship which shows that individuals with high levels of life satisfaction also highly identify with their group. 
It also supports the robust finding that groups are good for individuals and that life satisfaction is related to our social lifestyle (Jetten et al. 2012). However, contrary to our expectations, social identification is negatively (r s = − 0.07, p = 0.38) but not significantly related to well-being (physical health). Hence, this unexpected negative association should be interpreted cautiously, as groups are generally assumed to improve our overall well-being, which in turn might also relate to better physical condition (Jetten et al. 2012). The relationship between social identification and threat perception is negative (r s = − 0.08, p = 0.26), but not significant. Finally, the dependent variables well-being and life satisfaction have a positive and significant association (r s = 0.21, p < 0.01), which seems reasonable as being more satisfied with one's life is likely to also be related to one's physical and mental well-being.

Moderation analysis

Hierarchical regression analyses were performed to predict the well-being and life satisfaction of Krummhörn inhabitants from threat perception and social identification, and to show how exactly our model changes with respect to the coefficients. In addition, we controlled for age and gender by holding them constant in the hierarchical regression analyses. Controlling for age and gender removes their potential effect on the relationships between threat perception and well-being as well as life satisfaction, and on the moderating effect of social identification on that relationship. For example, age could influence the relationship between threat perception and well-being as well as life satisfaction, because older individuals are generally more satisfied with their lives and seem to have higher levels of well-being (Baird et al. 2010). Also, older participants in our study are more likely to have experienced floods and therefore might feel less threatened. Further, gender differences could also be expected as numerous studies report gendered differences in risk perception related to floods (Lechowska 2018). Moreover, we tested if the effect of threat perception on well-being and life satisfaction was moderated by the inhabitants' social identification. In order to test the robustness of our regression models, we used the bootstrapping method; the scaled and centered residuals were bootstrapped with a 95% confidence interval and 5000 iterations to estimate the β coefficients.

Discussion

This study aimed at furthering the understanding of collective resilience processes related to populations living in flood-prone areas such as Krummhörn. Of specific interest was to explore the role of social identification in collective resilience processes and how it contributes to the physical and mental well-being in areas threatened by sea floods. With respect to our first hypothesis, the results of our empirical study show that threat perception is negatively but not significantly correlated with well-being and life satisfaction. However, the moderation analysis based on our second hypothesis demonstrates that the aforementioned relationship is contingent on the level of participants' social identification. High levels of social identification strengthen populations in flood-prone areas by reducing threat perception and hence increase well-being. Moreover, highly identified participants seem less negatively affected by the continuous threat of floods than those who are weakly identified.
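To make the reported pipeline easier to follow, the moderation analysis and bootstrap procedure described above can be summarised in a minimal R sketch. All column names (threat, identity, wellbeing, age, gender) and the file name are hypothetical, and case resampling is shown here for brevity; the original analysis bootstrapped residuals via the boot.pval package and additionally used the interactions package for plotting.

```r
# Minimal sketch of the hierarchical moderation model; variable names are illustrative only.
library(boot)

d <- read.csv("krummhoern_survey.csv")  # hypothetical file of averaged scale scores

# Centre predictor and moderator, as described in the Preparation section
d$threat_c   <- as.numeric(scale(d$threat,   scale = FALSE))
d$identity_c <- as.numeric(scale(d$identity, scale = FALSE))

# Hierarchical steps: controls, main effects, then the interaction (moderation) term
m1 <- lm(wellbeing ~ age + gender, data = d)
m2 <- lm(wellbeing ~ age + gender + threat_c + identity_c, data = d)
m3 <- lm(wellbeing ~ age + gender + threat_c * identity_c, data = d)
anova(m1, m2, m3)  # change in model fit across steps

# Bootstrap the coefficients of the full model (5,000 resamples)
boot_fit <- function(data, idx) {
  coef(lm(wellbeing ~ age + gender + threat_c * identity_c, data = data[idx, ]))
}
set.seed(1)
b <- boot(d, boot_fit, R = 5000)
boot.ci(b, type = "perc", index = 6)  # 95% CI for the threat x identification interaction
```

The same model with life satisfaction as the outcome corresponds to the second set of analyses reported for Hypothesis 2.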
The findings of our study are in accordance with previous research on the social cure effect, verifying that social identification improves physical and mental wellbeing. However, contrary to our predictions, we did not find that social identification moderated the relationship of threat perception and life satisfaction, although social identification and life satisfaction are significantly correlated. This result, on the other hand, is not surprising as social identification fosters physical and mental well-being, which in turn, should then have positive effects on life satisfaction (see Jetten et al. 2012). Taken together, our results demonstrate that social identification is critical to collective resilience as it is the very fabric shaping every group's shared existence. Interestingly, we observe that people in Krummhörn are well aware of the risk of being flooded, which might be related to the fact that participants live approximately 6 km away from the main dyke. This finding is evidenced by the results of our manipulation check, assessing the effectiveness of our experimental design. As can be seen in Table 2, the mean score of each experimental condition is relatively high with regard to the Likert scale value range. Participants have answered the manipulation check questions with an approximate average of 4, which translates to perceiving a possible flood in the present and the near future as a "relatively high" risk of happening. Hence, given that inhabitants of this area are conscious of the risk, the worst-case scenario for such populations would then be growing inner conflict, deepening their mental and physical health problems, which in turn would have compounding detrimental effects on their well-being as individuals and as a group. Consequently, such a scenario would also impact levels of collective resilience and make it much harder to collectively bounce back quickly after a flood event. Haslam and Reicher (2006) found that with reduced social identification, the group becomes disorganized, and experiences lower levels of social support, communication, and trust. Collective efficacy also suffers greatly under such circumstances and the group risks falling apart, evidenced by the observation of higher levels of withdrawal in group members (Haslam and Reicher 2006). Thus, the survival and functioning-in our case the continuation of coastal populations-depends on their levels of social identification and shows that the environmental impact (de Vries et al. 2021) on the relationship between well-being and resilience can also be attributed to the groups or communities of which we are members. The present-day benefit of social identification in groups collectively navigating floods is further supported by the archeological context of Krummhörn. The harsh environment and living conditions fostered collective action, which in turn is based on immense agreement by group members and tremendous group effort, which resulted in the construction of multiple settlements built on an artificially elevated pieces of land. Moreover, due to its form, size, and dense living conditions, Terps may have encouraged and simultaneously strengthened a shared social identity and, in turn, well-being across many generations. Hence, combating the threat of floods on a long-term basis required the two-way interaction of ecologies shaping behavior and behavior shaping ecologies (Uskul and Oishi 2020). 
Therefore, the outcomes of living on Terps for the East Frisian population were twofold: living on Terps secured the survival of the group in a harsh environment prone to floods and ensured the present and future functioning of the population living in Krummhörn by keeping up their well-being. Terp building and living were not the only activities fostering and maintaining high levels of a shared social identity in Krummhörn. The highly ambitious dyke building projects in this area also helped to bring inhabitants closer together to collectively defy the life-threatening conditions created by floods. Together with the evidence from human remains of the East Frisian population, we have further support for a social cure effect, as a short glimpse into Krummhörn living conditions and their impact on physical and mental health in the sixteenth century shows that the groups persisted in spite of high mortality rates. Combining these archeological data with recent psychological results, we can assume that only those populations with high levels of social identification, which increases levels of social support and collective efficacy (Avanzi et al. 2015; Frisch et al. 2014; Junker et al. 2019; Van Dick et al. 2017) and decreases cortisol levels (Häusser et al. 2012), were resilient and continued to function throughout these relentless times.

On the basis of our results, we believe that the strong connection between present-day inhabitants of Krummhörn, whose ancestry can be traced back three or more generations, reflects the collective resilience and identification with their social and environmental surroundings in the past. In other words, despite the threat from floods, people still reside in Krummhörn with relatively high levels of satisfaction and well-being. Importantly, the long-term viewpoint contributed by our archeological perspective allows for conjectures about collective resilience processes over time. Accordingly, we suggest that if social identity is nourished and strengthened at its core, it can be sustained for generations, maybe even over centuries. Therefore, combined with our empirical findings, we would like to take the opportunity here to propose social identification as a collective resilience factor. Hence, we think the social identity approach has huge potential for supporting communities and, subsequently, the individuals within, especially in regions where people do not want to leave their homes due to financial reasons or because of the cherished shared social support available to them (Mallick and Mallick 2021). Thus, communities at risk of floods can be supported by group interventions, such as the Groups 4 Health program, with the potential benefit of enhancing collective resilience, and thus cohesion as well as physical and mental health. The Groups 4 Health program is an intervention developed by Catherine Haslam and colleagues, aimed specifically at improving the mental health and well-being of groups. Those who participated in their five-module program increased their well-being, mental health and social connectedness, and developed and strengthened their social identification within their chosen groups. Moreover, Groups 4 Health is a flexible program that can be used for multiple purposes, e.g., water resource management (Ananga et al. 2021) or different age groups (Islam 2020), as it is adaptable to unique environmental circumstances that require unique ways of strengthening cohesion and collective resilience.
Limitations Although both the empirical study and archeological perspective of this paper are in support of the social cure effect and promoting social identity as a collective resilience factor, our study is not without limitations. Highly identified residents of Krummhörn were more likely to respond, as questionnaires were distributed by the respective head of each village. We acknowledge that they were mainly able to hand them out to those who generally participated in village activities. However, to counter possible biases in our data related to our sampling method we used the bootstrapping method to corroborate our findings. Further, the causal relation of our assumptions is theory-driven and should be further corroborated by using experimental or longitudinal design. In addition, our research design did not include a control group and hence, we were not able to compare the Krummhörn results with other regional municipalities along the North Sea coast that exhibit the same settlement pattern or populations at risk of floods in other areas of the world. We have also not looked at how populations without such settlement structures cope with living on the North Sea coast. Therefore, although we can account for high external validity, our presented results are, to some extent, one-sided. Future research exploring social identification as a collective resilience factor in representative and diverse populations facing other environmental stressors is needed. Moreover, within the last twenty years, no life-threatening flood or storm event took place in Krummhörn. Consequently, many of our participants could not relate to any recent event. Although the threat conditions were developed with the unique Krummhörn context in mind, this study is limited by relying on self-reported perceptions of climate events as opposed to behaviors following lived experiences. Self-report data also have the potential to introduce issues with common method variance. However, given that the hypothesized models included interaction effects and were tested with regression analyses (including bootstrapping), common method variance is likely less problematic (Evans 1985;McClelland and Judd 1993). Additionally, the theoretical translation of current psychological findings such as the social cure effect to behaviors of past populations is not without challenges. However, we believe it to be a strength to strive for an interdisciplinary approach when developing stimulating ideas and solutions to ameliorate risks to physical and mental well-being in populations facing climate change-related environmental events. Finally, as the literature on social identity as a collective resilience factor develops, future research should also test programs that promote social identity, such as Groups 4 Health , in populations facing flooding and other environmental and climate-related stressors. Conclusion High levels of social identification enable the survival and functioning of the group by strengthening their physical and mental well-being. We found that threat perception (of floods) is negatively related to well-being, but that this relationship is buffered by high levels of social identification. However, other mechanisms need to be explored to investigate the relationship between threat perception and life satisfaction as social identification does not moderate this assumed association. 
Future studies on collective resilience should address social identification as a major collective resilience factor as it enables collective action, cohesion, collective self-efficacy, and shared social support, even in emergency situations. In particular, with regard to today's looming climate change-related events such as sea level rise and its consequences, future research should concentrate on communities with indications of low social identification and no financial means to leave, as they would suffer the most from such devastating prospects. Moreover, we show that fostering and strengthening social identification needs time, and that intervention studies require longitudinal designs to achieve improved physical and mental well-being in participants. Finally, our archeological perspective should encourage future studies to investigate and give due attention to past strategies of collective coping, as they might shape collective resilience over centuries and are surely culturally very diverse.
SOCS2 is a potential prognostic marker that suppresses the viability of hepatocellular carcinoma cells Hepatocellular carcinoma (HCC) is the fourth leading cause of cancer-associated mortality worldwide. Thus, there is an urgent requirement to identify novel diagnostic and prognostic biomarkers for this disease. The present study aimed to identify the hub genes associated with the progression and prognosis of patients with HCC. A total of three expression profiles of HCC tissues were extracted from the Gene Expression Omnibus (GEO) database, followed by the identification of differentially expressed genes (DEGs) using the GEO2R method. The identified DEGs were assessed for survival significance using Kaplan-Meier analysis. Among the 15 identified DEGs in HCC tissues [cytochrome P450 family 39 subfamily A member 1, cysteine rich angiogenic inducer 61, Fos proto-oncogene, forkhead transcription factor 1 (FOXO1), growth arrest and DNA damage inducible β, Inhibitor of DNA binding 1, interleukin-1 receptor accessory protein, metallothionein-1M, pleckstrin homology-like domain family A member 1, Rho family GTPase 3, serine dehydratase, suppressor of cytokine signaling 2 (SOCS2), tyrosine aminotransferase (TAT), S100 calcium-binding protein P and serine protease inhibitor Kazal-type 1 (SPINK1)]. Low expression levels of FOXO1, SOCS2 and TAT and high SPINK1 expression indicated poor survival outcomes for patients with HCC. In addition, SOCS2 was associated with distinct stages of HCC progression in patients and presented optimal diagnostic value. In vitro functional experiments indicated that overexpression of SOCS2 inhibited HCC cell proliferation and migration. Taken together, the results of the present study suggest that SOCS2 may act as a valuable prognostic marker that is closely associated with HCC progression. Introduction Hepatocellular carcinoma (HCC) is one of the most highly malignant and fatal cancers worldwide (1). It is the fourth leading cause of cancer-associated mortality worldwide, with ~841,000 new cases and 782,000 mortalities per year (1). The risk factors of HCC include hepatitis B virus or hepatitis C virus infection, consumption of aflatoxin contaminated food, alcohol abuse, obesity and smoking (2). Several strategies have been used for HCC treatment, such as surgical resection, chemotherapy, radiotherapy and targeted therapy (2). However, for patients with end-stage HCC, the 5-year survival rate remains <10% (3). HCC is a neoplastic disease with complex molecular mechanisms, which are affected by genetic or epigenetic mutations, genomic instability and environmental factors (3,4). Chronic inflammation, dysregulation of angiogenesis, changes in cellular metabolism and abnormal endocrine hormones may also be involved in the tumorigenesis of HCC (5). Recently, several studies have revealed key signaling pathways and genes that play critical roles in HCC (6,7). However, the underlying molecular mechanisms of HCC onset and progression remain unclear. Thus, it is important to investigate the molecular mechanisms of HCC pathogenesis to identify key molecular targets for the early diagnosis and treatment of patients with HCC. Recently, the development of high-throughput sequencing and microarray technologies have provided a novel platform for studies of gene expression profiles and identification of key factors associated with tumor development. 
Microarray technique is a method used to analyze general genetic alterations, which has been extensively applied in the investigations of tumorigenicity to identify promising biomarkers for cancer diagnosis, treatment and prognosis (6,8). For example, analysis of gene expression profiles of 64 primary prostate tumors and 24 metastatic samples revealed that patients with metastasis had 415 upregulated and 364 downregulated genes, indicating high heterogeneity of the metastatic samples (9). Systematic analysis of publicly available sequencing data using integrated bioinformatics methods may be an efficient way to overcome limitations, such as the use of different sequencing platforms or small sample sizes, and can provide further insight for identifying novel diagnostic markers and therapeutic targets in different types of tumor tissues, such as endometrial cancer (10), osteosarcoma (11), non-small cell lung cancer (12) and gastric cancer (13). The present study analyzed three independent sequencing datasets of HCC tissues and identified differentially expressed genes (DEGs) using a series of bioinformatics analysis methods. A protein-protein interaction (PPI) network was constructed, and function enrichment and survival analyses were performed to thoroughly investigate the molecular features of the DEGs. In addition, the key targets affecting the tumorigenesis of HCC cells were identified via biological function studies. Materials and methods Data source acquisition. The gene microarray expression datasets, GSE22058 (14,15), GSE57957 (16) and GSE14323 (17), including 186 HCC tissue samples and 150 adjacent tumor tissue samples, were downloaded from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo). The microarray datasets included in the present study satisfied the following selection criteria: i) They included human HCC tissues and adjacent tumor tissues; ii) the number of cases in the HCC and adjacent tumor groups was at least 10 and iii) they had intact RNA expression profiles for further analysis. The data acquisition and application methods in the present study complied with the guidelines and policies of the GEO database (18). Identification of DEGs. The GSE22058, GSE57957 and GSE14323 expression profiles were normalized and analyzed using the GEO2R tool (https://www.ncbi.nlm.nih. gov/geo/geo2r). The criteria of P<0.05 and |logFC|>1 was applied to screen for the DEGs. The volcano plot of each dataset was constructed using the 'volcano R' package (version 3.2.0; R Foundation). The overlap of DEGs between the GSE22058, GSE57957 and GSE14323 datasets were categorized as common DEGs, which were retained for further studies. PPI network and module analysis. The PPI network was constructed using Cytoscape software (version 3.4.0; National Resource for Network Biology). Associations between the DEG-encoded proteins were analyzed using the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database (https://string-db.org/cgi/input.pl). PPIs with a confidence score ≥0.4 were reserved. Functional enrichment analysis of the DEGs. Gene ontology (GO) enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses (19) were performed using the Database for Annotation, Visualization and Integrated Discovery (DAVID) (https://david.ncifcrf.gov) to identify the biological processes, molecular functions and cellular components, and signaling pathways associated with the DEGs. P<0.05 was considered to indicate statistical significance. 
(23) and The Cancer Genome Atlas (TCGA) (24), which included 1,000 patients with clinical information, were selected. OS analysis was performed using the Gene Expression Profiling Interactive Analysis (GEPIA) database (http://gepia.cancer-pku.cn). A total of 80 patients with HCC from the 920th Hospital were classified into the high expression group (n=40) or the low expression group (n=40), based on the median expression value (22.178) to determine the prognosis of suppressor of cytokine signaling 2 (SOCS2) using the 'survival R' package (version 3.2.0; R Foundation). P<0.05 was considered to indicate prognostic significance. Detailed information of the 80 patients is listed in Table SV. Cell lines and clinical tissues. The HCC cell line, Huh7, was purchased from the Chinese Academy of Sciences Cell Bank and maintained in DMEM medium supplemented with 10% fetal bovine serum (Gibco; Thermo Fisher Scientific, Inc.), at 37˚C in 5% CO 2 . A total of 12 pairs of HCC tissues and adjacent normal tissues (5 cm away from HCC tissues) were collected from patients with HCC who received surgical resection at the 920th Hospital between June 2018 and October 2019. The patients included 9 men and 3 women (age range, 53-65 years; mean age, 60.17 years). The histopathologic features of tumor tissues and adjacent normal tissues were confirmed by H&E staining. Fresh clinical samples were stored at -80˚C until subsequent experimentation. The present study was approved by the Ethics Committee of the 920th Hospital (Kunming, China; approval no. 2018-020-01) and written informed consent was provided by all patients prior to the study start. Reverse transcription-quantitative (RT-q)PCR. Total RNA was extracted from Huh7 cells using TRIzol ® reagent (Invitrogen; Thermo Fisher Scientific, Inc.), according to the manufacturer's protocol. A total of 1 µg RNA was reverse transcribed into cDNA using the Hifair ® II 1st Strand cDNA Synthesis SuperMix kit (Yeasen Biotech Co.), under the following conditions: 25˚C for 5 min, 55˚C for 15 min and 85˚C for 5 min. qPCR was subsequently performed using the Hieff ® qPCR SYBR Green Master Mix kit (Yeasen Biotech Co.) in QuantStudio™ 5 System (Thermo Fisher Scientific, Inc.). The following thermocycling conditions were used for qPCR: 95˚C for 5 min, followed by 40 cycles at 95˚C for 10 sec, 60˚C for 30 sec and elongation at 72˚C for 2 min. The following primer sequences were used for qPCR: SOCS2 forward, 5'-GAG CCG GAG AGT CTG GTT TC-3' and reverse, 5'-ATC CTG GAG GAC GGA TGA CA-3'; and GAPDH forward, 5'-GGT CTC CTC TGA CTT CAA CA-3' and reverse, 5'-GTG AGG GTC TCT CTC TTC CT-3'. Relative mRNA levels were calculated using the 2 -ΔΔCq method (25) and normalized to the internal reference gene GAPDH. Western blotting. Total protein was extracted from Huh7 cells using RIPA lysis buffer (Beyotime Institute of Biotechnology), containing x100 protease inhibitor cocktail (Bio-Rad Laboratories, Inc.). Protein concentrations of lysates were detected using the bicinchoninic acid (BCA) assay (Beyotime Institute of Biotechnology). Equal amounts of protein lysates (20 µg/well) were separated by 10% SDS-PAGE, transferred onto polyvinylidene difluoride membranes (Bio-Rad Laboratories, Inc.) and blocked with 5% skim milk solution for 1 h at room temperature. The membranes were incubated with primary antibodies against SOCS2 (cat. no. A5703) and GAPDH (cat. no. AC001) (both 1:1,000 and purchased from ABclonal Biotech Co., Ltd.) overnight at 4˚C. 
Following the primary incubation, membranes were incubated with horseradish peroxidase-conjugated Goat Anti-Rabbit IgG secondary antibody (1:2,000; cat. no. AS014; ABclonal Biotech Co., Ltd.) for 1 h at room temperature. Protein bands were visualized using BeyoECL Plus reagent (Beyotime Institute of Biotechnology) in ImageQuant LAS4000 (GE Healthcare). GAPDH was used as the loading control.

Cell proliferation assay. HCC cells were transfected with the indicated siRNAs or plasmids for 24 h. Subsequently, cells were seeded into 96-well culture plates at a density of 4,000 cells/well. Cell proliferation was assessed via the Cell Counting Kit-8 (CCK-8) assay (Dojindo Molecular Technologies, Inc.) at 0, 12, 24, 36 and 48 h following cell culture. The cell proliferation curve at each time point was plotted using the values of relative absorbance. EdU immunofluorescence staining was performed using the EdU kit (Thermo Fisher Scientific, Inc.), according to the manufacturer's protocol. The results were quantified using ImageJ software (version 1.8.0; National Institutes of Health).

Wound healing assay. HCC cells were seeded into 6-well plates at a density of 4x10^5 cells/well and cultured until they reached 90% confluency. The cell monolayers were subsequently scratched using 200 µl pipette tips to create a gap. Cells were washed with phosphate buffered saline and cultured in fresh DMEM serum-free medium (Gibco; Thermo Fisher Scientific, Inc.). Images were acquired at 0 and 24 h using the Olympus IX73 light microscope (Olympus Corporation) to assess cell migration. Wounded areas between the cells were analyzed using ImageJ software (version 1.8.0; National Institutes of Health).

Statistical analysis. Statistical analysis was performed using GraphPad Prism 6 software (GraphPad Software, Inc.). All in vitro experiments were performed in triplicate and data are presented as the mean ± standard deviation. Two-tailed unpaired or paired Student's t-tests were used to compare differences between two groups. One-way ANOVA followed by Dunnett's test was used to compare differences between multiple groups. Pearson's correlation analysis was performed to determine the correlation between SOCS2 and FOXO1 expression. Survival curves were obtained via Kaplan-Meier analysis and the log-rank test between patients in the high and low expression groups, and landmark analysis was performed when the survival curves crossed. Age, gender, stage and SOCS2 expression level of the 80 patients with HCC were included in univariate and multivariate Cox analyses using SPSS software (version 21; IBM Corp.). P<0.05 was considered to indicate a statistically significant difference.

Identification of DEGs. To identify the key DEGs in a large cohort of HCC samples, three sequencing datasets from the GEO database were selected, including HCC samples and adjacent normal samples. Detailed information of the three datasets is presented in Table I. According to the screening criteria, a total of 2,657 DEGs were identified between the HCC tissues and adjacent normal tissues in the GSE22058 dataset, which included 981 upregulated genes and 1,694 downregulated genes. A total of 584 DEGs were identified in the GSE57957 dataset, which consisted of 256 upregulated genes and 328 downregulated genes in HCC tissues compared with adjacent normal tissues. The DEGs obtained from the GSE14323 dataset included five upregulated genes and 146 downregulated genes in HCC tissues.
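The screening and intersection step described in the Methods (P<0.05 and |logFC|>1 per dataset, followed by overlap of the three datasets) can be sketched in a few lines of R. The column and file names below assume GEO2R-style result tables and are illustrative only; they are not taken from the study itself.

```r
# Hedged sketch of DEG screening and intersection; assumes GEO2R-style tables with
# columns 'Gene.symbol', 'logFC' and 'P.Value'. File names are hypothetical.
screen_degs <- function(tbl, p_cut = 0.05, lfc_cut = 1) {
  hits <- subset(tbl, P.Value < p_cut & abs(logFC) > lfc_cut)
  list(up   = unique(hits$Gene.symbol[hits$logFC > 0]),
       down = unique(hits$Gene.symbol[hits$logFC < 0]))
}

gse22058 <- screen_degs(read.delim("GSE22058_geo2r.txt"))
gse57957 <- screen_degs(read.delim("GSE57957_geo2r.txt"))
gse14323 <- screen_degs(read.delim("GSE14323_geo2r.txt"))

# Overlapping DEGs must change in the same direction in all three datasets
common_up   <- Reduce(intersect, list(gse22058$up,   gse57957$up,   gse14323$up))
common_down <- Reduce(intersect, list(gse22058$down, gse57957$down, gse14323$down))
length(common_up); length(common_down)  # the study reports 2 upregulated and 13 downregulated genes
```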
The volcano plots of the three datasets are presented in Fig. 1A. A total of 15 overlapping DEGs (13 downregulated genes and two upregulated genes) were identified by intersecting the three datasets (Fig. 1B and Table II). The heat map displays the detailed expression data of these 15 DEGs in each tissue sample of the three sequencing datasets (Fig. 1C).

PPI network and functional enrichment analysis of DEGs. To determine the associations between DEG-encoded proteins, a PPI network was constructed using the STRING database. In total, 34 proteins, 14 DEGs and 20 neighbor genes were obtained, and 251 edges were included in the PPI network (Fig. S1A). In the network, a large protein node indicated a strong ability to interact with other proteins. The top five highest value nodes were serine/threonine-protein kinase 1, V-Jun avian sarcoma virus 17 oncogene homolog, mitogen-activated protein kinase 8, mammalian target of rapamycin and early growth response 1, all of which have been reported to be involved in the development of multiple tumors (26)(27)(28)(29). The top five highest value DEGs were Fos proto-oncogene (FOS), SOCS2, forkhead transcription factor 1 (FOXO1), growth arrest and DNA damage inducible β (GADD45B) and cysteine rich angiogenic inducer 61 (CYR61), which are also considered to play critical roles in tumor progression (30)(31)(32)(33)(34). GO and KEGG enrichment analyses of the DEGs were performed using DAVID. The results demonstrated that the DEGs were predominantly enriched in 'nucleoplasm' and 'transcription factor AP-1 complex'. Furthermore, they were significantly enriched in multiple biological processes and molecular functions associated with response to 'abiotic stimuli', 'regulation of cell death', 'transcription factor binding' and 'activity of DNA-binding transcription activator' (Fig. S1B-D and Tables SI-III). According to KEGG pathway enrichment analysis, the DEGs were significantly involved in 'colorectal cancer', 'FOXO signaling pathway', 'osteoclast differentiation', 'insulin signaling pathway' and 'prolactin signaling pathway' (Fig. S1E and Table SIV). Notably, low expression levels of FOXO1, SOCS2 and TAT, and high SPINK1 expression were associated with poor prognosis of patients with HCC.

SOCS2 is associated with HCC stages and demonstrates good diagnostic ability for HCC. The expression levels of the four prognosis-related DEGs in TCGA dataset were assessed. The results demonstrated that the expression levels of FOXO1, SOCS2 and TAT were significantly downregulated in HCC tissues, whereas SPINK1 expression was significantly upregulated in HCC tissues compared with adjacent normal tissues. In addition, FOXO1 and SOCS2 expression were associated with HCC progression, whereby lower expression levels were observed in stage 4 patients with HCC (Fig. 3A). Furthermore, FOXO1 expression was positively correlated with SOCS2 in the GSE22058 dataset (Fig. 3B). ROC curve analysis demonstrated that SOCS2 presented a good diagnostic ability for HCC (Fig. 3C; Table SV). SOCS2 expression was further validated in 12 pairs of clinical HCC tissues and adjacent normal tissues. The results demonstrated that SOCS2 mRNA and protein expression levels were downregulated in HCC tissues compared with adjacent normal tissues (Fig. 3D).
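The ROC analysis mentioned above (Fig. 3C) can be reproduced with standard tools. The sketch below uses the pROC package as one possible choice; the study does not state which software was used for this step, and the data frame and column names are hypothetical.

```r
# Hedged sketch: evaluating the diagnostic ability of SOCS2 expression by ROC analysis.
library(pROC)

expr <- read.csv("socs2_expression.csv")  # hypothetical table with columns 'group' and 'socs2'

# 'normal' samples are the controls; SOCS2 is lower in tumours, hence direction = ">"
roc_socs2 <- roc(response = expr$group, predictor = expr$socs2,
                 levels = c("normal", "tumor"), direction = ">")
auc(roc_socs2)                      # area under the curve
ci.auc(roc_socs2)                   # 95% confidence interval for the AUC
plot(roc_socs2, print.auc = TRUE)   # Fig. 3C-style curve
```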
Based on the median SOCS2 expression values, 80 patients with HCC from the 920th Hospital were classified into the high expression group (n=40) or the low expression group (n=40) (Table SVI). Long-term follow-up (0.7-5.0 years) showed that HCC cases in the low expression group were associated with a poor prognosis compared with the high expression group (P=0.0012), while there was no significant difference between the two groups in the short-term follow-up (0.0-0.7 years) (Fig. 3E). Furthermore, univariate and multivariate Cox analyses demonstrated that HCC clinical stage 1 and SOCS2 levels were independent prognostic factors for patients with HCC (Table III).

SOCS2 is a tumor suppressor for HCC progression. To determine the biological function of SOCS2 in HCC cells, SOCS2 expression in Huh7 cells was exogenously changed using recombinant expression plasmids and siRNAs. Both the overexpression and knockdown of SOCS2 were confirmed via RT-qPCR and western blot analyses (Fig. 4A). Ectopic overexpression of SOCS2 inhibited HCC cell proliferation, as measured by the CCK-8 assay (Fig. 4B) and EdU staining (Fig. 4C). The wound healing assay demonstrated that overexpression of SOCS2 inhibited HCC cell migration compared with cells in the control group (Fig. 4D). Notably, SOCS2 knockdown promoted HCC cell proliferation and migration. Taken together, these results suggest that SOCS2 inhibits tumorigenesis in HCC cells.

Discussion

HCC carcinogenesis is a sophisticated and complex pathological process associated with specific tumor genes, multiple signaling cascades and epigenetic modifications (35,36). Recently, bioinformatics analyses have been extensively performed to identify novel diagnostic markers and therapeutic targets for different types of cancer (37,38), thus providing useful tools in tumor research. The present study performed bioinformatics analyses to investigate the DEGs between HCC tissues and adjacent normal tissues, based on three independent GEO expression datasets. A total of 15 overlapping DEGs were identified, including cytochrome P450 family 39 subfamily A member 1, CYR61, FOS, FOXO1, GADD45B, inhibitor of DNA binding 1, interleukin-1 receptor accessory protein, metallothionein-1M, pleckstrin homology-like domain family A member 1, Rho family GTPase 3, serine dehydratase, SOCS2, TAT, S100 calcium-binding protein P and SPINK1, all of which exhibited consistent expression patterns in the three sequencing datasets. A total of four potentially prognostic DEGs (FOXO1, SPINK1, SOCS2 and TAT) were further identified via Kaplan-Meier analysis. FOXO1 is one of the forkhead family transcription factors, which participates in several processes of tumor development (30,39). It is well-known that FOXO1 expression is downregulated in the early stages of human pancreatic ductal adenocarcinoma, and may function as a valuable diagnostic marker (39). Sequencing analyses have demonstrated that FOXO1 and paired box 3 (PAX3) expression are upregulated in alveolar rhabdomyosarcoma; double knockdown of PAX3 and FOXO1 significantly inhibits tumor cell proliferation, survival and migration by targeting interleukin-24 (30).
SOCS2, suppressor of cytokine signaling 2; OE, SOCS2 overexpression; NC, negative control; si, small interfering. migration by targeting interleukin-24 (30). SOCS2 is involved in the inhibition of signal transduction (40). It has been reported that SOCS2 regulates the immune response during acute liver injury caused by acetaminophen (41). SOCS2 also plays a critical role in the development of prostate cancer (42). TAT is a tyrosine transaminase that is predominantly expressed in liver tissues, and its deficiency can lead to tyrosinemia (43). Low TAT expression has been observed in HCC and is associated with tumor progression (44). Previous studies have reported that SPINK1 is associated with the development of different types of cancer (8,45,46). Microarray analysis has demonstrated that SPINK1 expression is upregulated in HCC, which promotes the proliferation, migration and invasion of HCC cells (45). SOCS2 was selected as a hub gene in the present study due to its pathological stage association and good diagnostic ability in patients with HCC. Functional studies demonstrated that overexpression of SOCS2 inhibited HCC cell proliferation and migration, whereas SOCS2 knockdown promoted HCC tumorigenesis, suggesting that SOCS2 may function as a tumor suppressor in HCC development. The present study was not without limitations. First, mechanistic experiments for the antitumor role of SOCS2 are required to determine the molecular mechanism of SOCS2 in HCC progression. Secondly, the present study only investigated the function of SOCS2 in HCC cells. Further biological experiments are required to validate the roles of other DEGs in the diagnosis and treatment of HCC. Thirdly, the quality and heterogeneity of the public data that were uploaded by other researchers and used in the present study cannot be accurately determined. In conclusion, the present study identified four key DEGs from the HCC gene expression profile datasets using integrated bioinformatics analyses. The PPI network, and functional enrichment and prognostic analyses suggest that these DEGs may be involved in the pathogenesis and prognosis of HCC. In addition, the antitumor role of SOCS2 was investigated, and the results demonstrated that SOCS2 may serve as a potential diagnostic marker in patients with HCC. Collectively, these results provide further insight on the prognostic prediction and molecular targeting therapy for patients with HCC.
Qualitative assessment of the national initiative to implement antimicrobial stewardship centres in French administrative regions Background In May 2020, the French Ministry of Health funded the creation of regional antimicrobial stewardship (AMS) coordination centres (CRAtb) in preparation for the new national framework for the prevention of antimicrobial resistance. This study aimed to assess through qualitative methods the implementation process, the activities carried out, and the interactions with other regional stakeholders of the newly created CRAtb. Methods We conducted a mixed-method study based on a cross-sectional survey and semi-structured interviews by French regions among implemented CRAtb. Of the eight eligible French regions with an existing CRAtb, seven participated to the online survey. Regional partners involved in AMS from the eight regions were interviewed between September 2021 and April 2022. The survey questionnaire addressed, through closed questions, the organization of the CRAtb, articulation with other regional actors involved in AMS and infection prevention and control (IPC), and AMS activities. The semi-structured interviews approached the implementation and the role of CRAtb, and the collaboration of other AMS and IPC stakeholders. Interview transcripts were analysed using thematic content analysis methodology. Results AMS activities carried out by CRAtb were mainly focusing on hospitals (n = 3), primary care (n = 2) and nursing homes (n = 1). Education mostly relied on training days and AMS help lines, communication on websites and newsletters. CRAtb members reported still being more engaged in providing advice to professionals for individual antibiotic treatments rather than collective-level AMS activities. Interactions were frequent between CRAtb, IPC regional centres and health authorities, but rarely involved other stakeholders. Interviews were performed with 28 professionals involved in AMS from eight regions. Pre-existing networks and working relationships in AMS and more broadly facilitated the implementation of CRAtb. Streamlining and decompartmentalizing IPC and AMS regional activities were considered a way to optimise the prevention of antimicrobial resistance across sectors. The engagement with liberal health professionals was identified as a significant obstacle for CRAtb. Conclusions Two years after the launch of a new national framework, the implementation of CRAtb appeared complex in most regions. An integrative model joining IPC and AMS efforts, relying on existing networks, with engagement from liberal health profession organisations may be the next pivotal step. Supplementary Information The online version contains supplementary material available at 10.1186/s13756-023-01245-9. Background Antimicrobial resistance (AMR) is identified by the World Health Organization (WHO) as one of the biggest threats to global public health [1]. The relationship between antibiotic consumption and the rise of AMR has been clearly documented for some time, [2] paving the way for antimicrobial stewardship (AMS) initiatives worldwide. Despite three successive national antibiotic action plans (2001, 2007, and 2011), recent data from the European Centre for Disease Prevention and Control (ECDC) placed France as the fourth largest consumer of antibiotics in primary care in Europe, with 18.7 DDD per 1000 inhabitants in 2020 [3] and 80% of all antibiotic prescriptions originating from this setting [4]. 
Responding to the critical need to re-examine its national AMS policies, a new French AMR prevention strategy was introduced in 2022 which incorporated, on the one hand, a renewed focus on the primary care sector, and on the other hand, a closer integration of IPC and AMS activities [5]. In France, the public administrative body in charge of regional implementation of the health policies (the Regional Health Agency), is responsible for ensuring proper coordination between the various regional stakeholders [6]. In 2017, regional Infection Prevention and Control (IPC) coordination centres were created to provide expertise and to coordinate regional strategies in healthcare-associated infection prevention [7]. In May 2020, the French Ministry of Health funded, in building its new national framework, the creation of regional centres for the coordination of AMS (CRAtb) with a focus on hospitals, nursing homes, and primary care [8]. Composed at least of an Infectious Disease Specialist and a general practitioner (GP), CRAtb have been tasked with: (i) a mission of expertise and support to health professionals in the field of AMS, and (ii) a mission of coordination of networks of health professionals in charge of AMS programs [9]. The mandate provided a set of guidelines for CRAtb including specific tasks to achieve their missions (e.g. identify regional issues around antibiotic misuse, support the implementation of training policy (initial and ongoing) for health professionals, setup of a teleexpertise service providing advice on antibiotic therapy, among others). In certain regions, CRAtb were intended to replace and build upon pre-existing regional centers for advice on antibiotic therapy. The Ministerial directions outlined a regional strategy position with a collective focus for CRAtb, supported by close collaborations with corresponding regional IPC teams. The national framework further included the creation of locally operating AMS and IPC consultant teams, aiming to provide on call or onsite advice to local stakeholders. In this study, we conducted a qualitative evaluation to better understand how CRAtb were implemented in French regions following the national initiative, the interventions carried out, partnerships and collaborative efforts. Study design and setting We conducted a qualitative case study analysis to assess the implementation of the CRAtb in French regions. A cross-sectional survey using an online questionnaire, and semi-structured interviews were conducted to answer three research questions: (1) What resources and activities CRAtb engaged two years following the launch of the national initiative? (2) How did regional IPC, AMS and health authority's partners describe the barriers and facilitators of CRAtb implementation? (3) How did the different partners describe the alignment and coordination of the AMS efforts in their region? The study included CRAtb officially created at the time of the study corresponding to eight of the 17 eligible French regions. Data collection and analysis Between September 2021 and April 2022, active members from the eight existing CRAtb were invited to answer an internet-based survey addressing (i) human resources, (ii) implemented activities (education and training, communication, monitoring and feedback, evaluation and audit), and (iii) interactions (frequency of meetings) with other regional actors involved in the prevention of AMR. 
(Supplemental Document S1) In addition to the online questionnaire, CRAtb and their corresponding regional IPC coordinating centres and Regional Health Agency from the eight regions were contacted by e-mail and invited to participate in semi-structured interviews. All the eight regions contacted agreed to participate and completed a one-time videoconference interview between February 2022 and April 2022. A thematic interview guide (Table 1) was developed by the research team and pilot tested to ensure clarity and comprehensiveness. The guide focused on three areas: implementation of CRAtb, the role of CRAtb in AMR prevention, and interactions between AMR prevention stakeholders. The full interview guide was used with CRAtb coordinators, however only the third section on focusing on interactions between stakeholders was used during interviews with regional IPC coordinating centre and Regional Health Agency representatives. Interviews were carried out by three researchers (AA, AGL, MC). All of the interviews were done through videoconferencing, recorded in their entirety, transcribed verbatim, and anonymized prior to analysis. Interviews were carried out in French. Interview transcripts were analysed using thematic content analysis methodology [10]. Both deductive and inductive coding was used, with an a priori codebook developed from the interview guide (Supplemental Table S2). Analysis of transcripts was conducted by two researchers (AGL, MC) in parallel after having jointly analysed two transcripts for coding adequacy and consistency. Once parallel coding was complete, the researchers reviewed the results to discuss any coding inconsistencies and to identify major, minor, and cross-cutting themes. QDA Miner Lite software (Version 2.0.9; Provalis Research, Montreal, Canada) was used to facilitate data analysis. Supporting quotes were translated into English by MC (native English/French speaker) and reviewed for accuracy by AGL and GB (native French/ fluent English speakers). Results Of the eight CRAtb contacted, seven responded to the online survey. The official creation dates of the participating CRAtb ranged from 2020 to 2022, with structures therefore at different stages of implementation. (Table 2) In accordance with the national framework, Infectious Disease Specialists and GPs were systematically engaged in all CRAtb, mostly part time. Five CRAtb (over 7) involved other professional categories such as clinical microbiologists (n = 3, 0,2 to 0,7 FTE), pharmacists (n = 3, 0,2 to 0,7 FTE), infection prevention and control specialist (n = 1, 0,7 FTE), public health specialist (n = 1, 0,4 FTE), and engineer (n = 1, 0,2 FTE). Among the six CRAtb that answered to the question pertaining to which care sector was currently the focus of most of their actions, three declared focusing on AMS in hospitals, two on primary care, and one on nursing homes. Training days on AMS were organised by all CRAtb and AMS help lines were open in six. Among communication tools, six CRAtb owned a website and four shared newsletters. The frequencies of interactions between the main regional partners involved in AMS are presented in Fig. 1. CRAtb mainly organised frequent meetings with health authorities (monthly in one region and quarterly of more for others) IPC regional centres (monthly in two regions and quarterly of more for others). Interactions with other stakeholders (e.g. local AMS actors, health insurance, users) were infrequent to inexistent. 
A total of 21 interviews were held with 28 participants from CRAtb (n = 10 participants), IPC coordinating centres (n = 9), and Regional Health Agencies (n = 9). Details of participants are available in Supplemental Table S3. Fifteen interviews were one-on-one, though at the request of participants, six interviews comprised two or three participants with shared roles in the same structure. Interviews lasted between 10 and 50 min. Stakeholders involved in CRAtb implementation were mostly wrestling with the challenges of adapting Table 1 Semi-structured interview guide Position and role in the CRAtb What position do you hold in your CRAtb? How did you get to the position you hold today? Implementation of the CRAtb • Who were the actors involved in the prevention of antibiotic resistance before the establishment of the CRAtb in your region? • When the CRAtb was established, how did you work with other regional actors involved in the prevention of antibiotic resistance? Please give examples. • When the CRAtb was established, how did you work with other regional actors involved in the prevention of antibiotic resistance? Please give examples. • What were the biggest obstacles to the establishment of the CRAtb? • What positive changes have occurred with the establishment of the CRAtb in your region? Role of the CRAtb in the regional coordination of antibiotic resistance prevention • What are the activities of your CRAtb for the regional coordination of the prevention of antibiotic resistance? • In your opinion, which of your activities are most critical to the prevention of antibiotic resistance? • What are the greatest strengths of your CRAtb in preventing antibiotic resistance in your region? • What are your CRAtb's weaknesses in preventing antibiotic resistance in your region? Please give examples. How could these weaknesses be improved? • What are the main actions to come in the next few months? Please specify. Interactions with other regional antibiotic resistance prevention stakeholders • In your region, how does the CRAtb work with the various actors in the prevention of antibiotic resistance? Please give examples. • How does the CRAtb communicate with other actors involved in antibiotic resistance prevention? How is data shared? • How would you describe the relationship between your CRAtb and other stakeholders involved in antibiotic resistance prevention? How could they be improved? Are there future collaborations to come? • If there are no interactions with some of the actors in the prevention of antibiotic resistance, can you explain why? Can you comment on possible future collaborations? How could this be approached? Final question Is there anything we haven't talked about that you think is important to address? Abbreviations: CRAtb, regional antimicrobial stewardship coordination centres to a new operational organization and applying the national framework within regional and local realities. Three cross-cutting themes were identified in the semistructured interviews which related to a spectrum of implementation aspects: (i) streamlining and decompartmentalizing IPC and AMS activities, (ii) engaging with liberal health professionals (i.e., those working outside of public healthcare establishments, namely GPs, primary care nurses, and community pharmacists and dentists), and (iii) the role of pre-existing networks and working relationships. 
(Table 3, Supplemental Table S4) Streamlining and decompartmentalizing IPC and AMS activities As outlined in the national framework, CRAtb are expected to collaborate closely with their IPC counterparts. To facilitate this, many teams decided to opt for physical proximity, choosing to share offices with IPC teams. Moreover, CRAtb and IPC coordinating centre teams saw the benefit of pooling their resources through sharing of support staff (secretaries and data managers, for example), as well as sharing of consultants (Infectious Disease or Infection Control Specialists). With IPC having a well-established history in the French healthcare system, CRAtb were able to further benefit from gaining access to IPC centres' regional directories and health data. Beyond the practicalities of shared human and material resources, participants from all three organizations (Regional Health Agencies, IPC coordinating centres, CRAtb) elicited the advantages of working in close partnership with IPC teams, for example when working with care facilities in which both preventative and curative Building on the complementarity between IPC and AMS in an integrative and sustained way was widely acknowledged as a key factor in a more effective approach to combating AMR. .. There is still a history, a recognition at the regional level, collaborations with many infectious disease services and structures in the region. " CRATB coordinator (Region 1) Subtheme: Continuity of relationships and actions Q4: "Because now, we're thinking that the CRATB will promote a lot of the actions that we were carrying out before with [the previous AMS structure] in order to get our foot in the door. " CRATB coordinator (Region 8) Q5: "We have to build our regional organization in relation to the previous one. We're not going to say, let's just sweep up everything and start over and... that's pointless. " Regional Health Agency officer (Region 4) Q6: "The CRATB is being established. And we change nothing. In fact, we are continuing the actions that are already in place. We will, we will, we won't change anything, we'll continue these actions and we'll especially develop new ones because we're better structured. " CRATB coordinator 2 (Region 3) CRATB: Regional antimicrobial stewardship coordination centers; IPC: Infection Prevention and Control; AMS: Antimicrobial Stewardship Participants recognized that the boundaries of each specialty overlapped whilst remaining distinct yet complementary in other areas. Engaging with liberal health professionals CRAtb are required to have a GP as a core team member and when possible, as a member of the locally operating AMS consultant teams. However, recruitment of GPs was a difficult process for the majority of CRAtb, and some participants expressed concern about remaining too hospital-centred should they not succeed in obtaining substantial GP participation. Factors impacting the recruitment process were thought to be financial (modest salary), lack of time, and finding adequate channels for advertising the position. Having a GP on these teams was nonetheless perceived as a pivotal asset to enhance understanding of prescribing practices, gain access to GP networks, and increase the impact of AMS activities in primary and community care. Overall, not having an "entry point" (notably for communications) to primary care prescribers and other liberal health practitioners was frequently identified as a significant obstacle across regions. 
Role of pre-existing networks and working relationships In all eight participating regions, a pre-existing AMS network was identified (namely with Regional Health Agencies, regional branches of the National Health Insurance, the Observatories for drugs, medical devices and therapeutic innovations, hospital Infectious Disease teams and Regional Unions of Health Professionals ̶ with variations between regions). These networks and associated working relationships were perceived as defining factors in the implementation of CRAtb. Participants acknowledged the facilitating aspect of having known regional stakeholders who were accustomed to working together. Furthermore, some of the networks were long established, contributing to regional recognition and legitimacy of these actors in engaging in AMS strategies and actions. Many regions were also committed to the continuity of already initiated actions, and some participants perceived the establishment of CRAtb as a continuation of what was already being done but within a new and improved framework. "The CRAtb is being established. And we change nothing. In fact, we are continuing the actions that are already in place. We will, we will, we won't change anything, we'll continue these actions and we'll especially develop new ones because we're better structured. " CRAtb coordinator 2 (Region 3). Other themes were identified relating to the facilitators and barriers of implementation and partner interactions, as well to perceived strengths and weaknesses of the CRAtb (Fig. 2, Supplemental Table S4). CRAtb were perceived as a chance to strengthen AMS strategies and activities at the regional level, by the dedicated time allocated to develop relevant actions in an extended network of professionals. However, the geographical region to cover and the disparities appeared as a weakness for the organisation and implementation of consistent AMS actions. The administrative constraints linked to budget use and recruitments was a strong barrier for the implementation of CRAtb, generating delays in recruitments. Interactions with partners involved in AMS were relying on individual motivations and willingness to collaborate. The COVID-19 pandemic provided opportunities for collaborations across stakeholders and specialities, Discussion This study sought to assess the implementation of newly created regional AMS coordination centres in France. The new governance for IPC and AMS is multi-level, relying on the ministry of health providing national priorities and plans, regional centres coordinating the national strategy and actions at the regional level, and local teams providing proximal counselling to healthcare professionals in hospitals, nursing homes, and primary care. This framework aims to harmonize practices at all levels to enhance AMR prevention. At the European level, other countries have implemented national strategies with similar approaches. The Swedish strategic program against AMR (known as Strama), [11] the Scottish management of AMR action plan, [12] and the PIRASOA program (Institutional Programme for the Prevention and Control of Healthcare Associated Infections and Associated Use of Antimicrobial) in Spain's autonomous community of Andalusia [13] have relied on regional and local groups to adapt national frameworks to local conditions. 
Key aspects to the success of these European models are also integrated into the French national framework, namely the multidisciplinary nature of regional or local teams, the implementation at multiple levels of care (primary care, nursing homes, hospital), the close collaboration with prescribers, and the linking of IPC with AMS actions [11,14,15]. Our findings highlighted the varying level of implementation of CRAtb across regions, with the transition towards the new organizational roles as outlined by the Ministerial directions (with core responsibilities around Fig. 2 Main themes identified in the semi-structured interviews, relating to the facilitators and barriers to implementation and to partnerships and collaborations, as well to perceived strengths and weaknesses within the CRAtb or its programming (illustrative quotes are found in Supplemental) Abbreviations: CRAtb, regional antimicrobial stewardship coordination centres ; AMS, Antimicrobial Stewardship; IPC, Infection Prevention and Control; GP, General Practitioner regional strategies and network coordination) a work in progress. Semi-structured interviews allowed for a more complete picture of the barriers and facilitators to CRAtb implementation, as well as perceived strengths and weaknesses of the model. Our findings suggest that the shift from clinical responsibilities to more collective-focused ones is likely challenging for stakeholders from clinical backgrounds, as evidenced by the continuation of AMS help line activities. The experience of IPC coordinating centres in roles of regional facilitation, coordination, and surveillance may serve as a model for CRAtb coordinators, with the value of collaborating closely and mutualizing resources between IPC and AMS teams clearly acknowledged throughout the qualitative interview data. With regard to other identified themes in the semistructured interviews, these are in line with factors involved in effective collaborations in One Health approaches (namely AMR prevention) which have been explored previously [16]. These can serve to inform future implementation efforts in the field and include individual factors (e.g., prior experience and existing relationships), organizational factors (e.g., information sharing, intentional engagement, clearly defined roles), and network factors (e.g., established partnerships, institutionalization of effective collaborative structures). Evaluation in term of implementation and overall impact of these structures were not discussed by participants. Cost-effectiveness analysis of these organisations will be required to better understand the impact of their actions on the antimicrobial use and the antimicrobial resistance. Building on existing assets is an obvious, valued, and effective approach in developing regional and local AMS strategies (pre-existing organization, stakeholder investment, complementary professional competencies). Being able to engage with liberal health professions working in primary care (such as GPs) is crucial for the prevention of AMR. As mentioned previously, with 80% of antibiotic prescriptions in France stemming from primary care, clear and effective communication channels need to be established with this sector as a priority. 
This study revealed that for now, having an impact on the primary care sector remained challenging for CRAtb, since half of the structures interviewed reported mostly focusing on AMS in hospitals, and since the difficulties encountered in trying to engage with liberal health professionals were identified as a recurring theme in the semi-structured interviews. IPC and AMS teams have everything to gain by collaborating closely. The ECDC's proposal for EU guidelines for AMS incorporates this last principle in its recommendations [17]. However, our study demonstrated that French IPC and AMS teams are moving beyond more collaboration and trending towards mergers of operational resources (administrative support, databases and data managers, communication channels). The national strategy's success may also lie in the facilitation of this approach toward an integrated IPC and AMS model. This study has a number of limitations. Of note, the research team only interviewed participants from eight out of the 18 French regions. This was due to the current number of formally implemented CRAtb and absence of identified stakeholders in other regions. However, this implies that key issues in the implementation process may have been missed, as those centres which are currently established may have faced less barriers. Further, social desirability bias may be present in the results. Participants may have reported their experiences and perceptions more positively knowing that, though anonymized, their responses would be available to coordinating agencies at the regional and national levels. Conclusion Two years after the publication by the French ministry of health of their legal framework, organisation, and financing, the implementation of CRAtb appeared difficult with the creation of these structures in less than half of French regions. Increased engagement from national and regional liberal health profession organisations is urgently required if advancements in AMS in primary care are to be made. An integrative model of IPC and AMS for the prevention of AMR may be the next pivotal step in addressing this pressing public health threat.
Polarized parton distributions from NLO QCD analysis of world DIS and SIDIS data The combined analysis of polarized DIS and SIDIS data is performed in NLO QCD. The new parametrization on polarized PDFs is constructed. The uncertainties on PDFs and their first moments are estimated applying the modified Hessian method. The especial attention is paid to the impact of novel SIDIS data on the polarized distributions of light sea and strange quarks. In particular, the important question of polarized sea symmetry is studied in comparison with the latest results on this subject. PACS: 13.65.Ni, 13.60.Hb, 13.88.+e Since the observation of the famous spin crisis in 1987 [1] one of the most intriguing and still unsolved problems of the modern high energy physics is the nucleon spin puzzle. The key component of this problem, which attracted the great both theoretical and experimental efforts during many years is the finding of the polarized parton distributions functions (PDFs) in nucleon. The analysis of data on inclusive polarized DIS enables us to extract such important quantities as the singlet ∆Σ(x, Q 2 ) and nonsinglet ∆q 3 (x, Q 2 ), ∆q 8 (x, Q 2 ) combinations of the polarized PDFs, and, thereby, the sums of valence and sea PDFs ∆q + ∆q ≡ ∆q V + 2∆q. Besides, dealing with DIS data, the gluon helicity distribution ∆G(x, Q 2 ) is determined due to the evolution in singlet sector and weak dependence on ∆G of the polarized structure function g 1 in NLO QCD. However, even for singlet combinations ∆u + ∆ū, ∆d + ∆d, ∆s + ∆s, considered as well-determined within DIS, we still meet the problem: there are only two equations corresponding to inclusive asymmetries A 1 measured on proton and deuteron targets from which we should determine three unknown combinations ∆q 3 (x), ∆q 8 (x) and ∆Σ(x) (or, alternatively, ∆u + ∆ū, ∆d + ∆d, ∆s + ∆s). So, it is unavoidable to involve some additional assumptions performing the fitting procedure for the purely inclusive DIS data. Moreover, DIS data can not help us to solve the important problem of valence and sea PDFs separation. The basic 4 process which enables us to solve these problems is the process of semiinclusive DIS (SIDIS). However, until recently the quality of the polarized SIDIS data was rather poor, so that its inclusion in the analysis did not helped us [2] to solve the main task of SIDIS measurements: to extract the polarized sea and valence PDFs of all active flavors. Only in 2004 the first polarized SIDIS data with the identification of produced hadrons (pions and kaons) were published [3]. These data were included in the global QCD analysis in Ref. [4]. Further, COMPASS [5] presented the polarized SIDIS data (without identification of hadrons in the final state), in particular, in the low x region unaccessible for HERMES. This data were included in the latest parametrization of Ref. [6]. Recently, the new data on the SIDIS asymmetries A π ± d , A K ± d were published [7] by the COMPASS collaboration. It is of importance that this data cover the most important and badly investigated low x region. In this paper we include this data in the new global QCD analysis of all existing polarized DIS and SIDIS data. The elaborated parametrization on the polarized PDFs in some essential points differs from the parametrization of Ref. [6] (see below). Nevertheless, the results on PDFs obtained with both parametrizations are compatible within the errors. 
The gluon PDF is parametrized as It is easy to see that in our parametrization the coefficients η are just the first moments of the respective local quantities. In particular, the advantage of ∆q 3 and ∆q 8 parametrization in the form (2), (3) is that with the such choice it is very convenient to apply and control the SU f (2) and SU f (3) sum rules Besides, the such inheritance with Ref. [8] allow us to clearly see the impact of SIDIS data on the results of pure inclusive DIS data analysis. Further, to properly describe the SIDIS data we, besides ∆Σ, ∆q 3 and ∆q 8 , parametrize the sea PDFs of u and d flavors: Then, ∆u and ∆d are determined from Eqs. (4) and (9), while the valence PDFs are determined by ∆u V = ∆u − ∆ū and ∆d V = ∆d − ∆d. Thus, all polarized PDFs are completely determined within the parametrization. Comparing DIS sector, Eqs. (1)-(3), (6) with the respective parametrizations from Ref. [8], one can see some distinctions. These are additional factors γ ∆q 3 x, γ ∆q 8 x in Eqs. (2), (3) which are introduced to provide the better flexibility of the parametrizations on the respective quantities, required by the inclusion of SIDIS data. Besides, we introduce the additional factors δ ∆q 8 √ x in Eq. (3) and γ ∆G x in Eq. (6) to provide the possibility of sign-changing scenarios for ∆s and ∆G, respectively (see below). We consider two options for handling of SU f (2) and SU f (3) sum rules. In the first case we apply the constraints (7) and (8) putting η ∆q 3 and η ∆q 8 equal to the central values of the respective constants. In the second case we allow η ∆q 3 and η ∆q 8 to vary within the uncertainties for F + D and 3F − D. It is of importance that, as we will see below, both options produce almost the same results. We analyze the inclusive A 1 and semi-inclusive A h 1 asymmetries. The inclusive asymmetry reads where the polarized structure function g 1p in NLO QCD looks as Throughout this paper we use the MS factorization scheme. The respective coefficient functions ∆C q,g can be found in Ref. [9]. The semi-inclusive asymmetries, besides x and Q 2 , depend also on hadronic variable z. As usual, we apply the semi-inclusive asymmetries integrated over the cut z > 0.2, which corresponds to the current fragmentation region. The semi-inclusive structure functions g h 1 in NLO QCD are given by The respective Wilson coefficients ∆C qq,qg,gq can be found in [10]. Of great importance for the SIDIS data analysis is the choice of parametrization on the fragmentation functions. We use here the latest NLO parametrization from Ref. [11]. Calculating F 2 in Eq. (10) and F h 2 in Eq. (12) we use parametrization for R from [12] and the recent NLO parametrization on unpolarized PDFs from Ref. [13]. For the α s (Q 2 ) calculation we apply the same procedure as in Ref. [13] (i.e. α s (Q 2 ) in MS scheme is calculated just as in [13] with α s (M 2 Z ) = 0.1145). The deuteron structure functions g 1d and g h 1d are calculated applying g In our analysis the positivity constraint |∆q| < q and |∆G| < G holds with the precision 0.001. Of importance is the correct application of the factor 1 + γ 2 = 1 + 4M 2 x 2 /Q 2 in the analysis -see the discussion on this question in Ref. [14]. When one rewrites the asymmetry A 1 in terms of F 2 instead of F 1 then this factor precisely cancel out in the ratio of g 1 (1 + γ 2 ) and F 1 = F 2 (1 + γ 2 )/2x(1 + R), so that we just arrive at right-hand side of Eq. (10). The same cancellation holds in SIDIS case, so that one arrives at Eq. (12). 
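Two sets of relations invoked above can be written out explicitly. The first are the SU_f(2) and SU_f(3) first-moment sum rules of Eqs. (7)-(8) in their standard form (the numerical values quoted are the commonly used ones and are not necessarily those adopted in the fit); the second is the kinematic relation behind the cancellation of the 1+γ² factor discussed at the end of the preceding paragraph.

```latex
% Standard first-moment (SU_f(2), SU_f(3)) sum rules, cf. Eqs. (7)-(8):
\int_0^1 dx\,\Delta q_3(x,Q^2) = \Bigl|\frac{g_A}{g_V}\Bigr| = F+D \simeq 1.269,
\qquad
\int_0^1 dx\,\Delta q_8(x,Q^2) = 3F-D \simeq 0.586 .

% Kinematic relation used to cancel the 1+\gamma^2 factor in A_1:
A_1 = (1+\gamma^2)\,\frac{g_1}{F_1}, \qquad
F_1 = \frac{F_2\,(1+\gamma^2)}{2x\,(1+R)}, \qquad
\gamma^2 = \frac{4M^2x^2}{Q^2}
\quad\Longrightarrow\quad
A_1 = \frac{2x\,(1+R)}{F_2}\,g_1 .
```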
At the same time, sometimes [15] the inclusive data is tabulated in the form g 1 /F 1 (not in the form A 1 = (1 + γ 2 )g 1 /F 1 ). In this case we fit the experimental values One of the main conditions of the successful global QCD analysis is the robust program for the DGLAP solution. The respective program should be fast, well tested and should provide a good precision of DGLAP solution. The elaborated in Ref. [8] program for the polarized DIS data analysis, based on inverse Mellin transformation method, satisfies to all these requirements. So, we again use here this program package, properly modifying it in accordance with the peculiarities of SIDIS data (calculation of SIDIS structure functions and asymmetries in the space of Mellin moments are included). All procedures of the global QCD analysis are based on the construction of the effective χ 2 function that describes the quality of the fit to data for a given set of varying parameters {a i }. In the case of polarized DIS and SIDIS analysis one usually uses the χ 2 function in the form (see, for instance, [6] for detail) where A exp is the measured value of the asymmetry, δ(A exp ) is its uncertainty 5 , A theor is its theoretical estimation. For the minimization of χ 2 function we use the MINUIT package [16]. For our analysis we collected all accessible in literature polarized DIS and SIDIS data. We include the inclusive proton data from Refs [17,18,19,20,15], inclusive deuteron data from Refs [21,17,18,22,20,15] and inclusive neutron data from Refs [23,24,25,26]. The semi-inclusive data (asymmetries are taken from Refs [3,27,5] and, besides, we include the latest COMPASS data from Ref [7]. In total we have 232 points for the inclusive polarized DIS and 202 points for semi-inclusive polarized DIS. If we fix η ∆q 3 and η ∆q 8 by the center values of F + D and 3F − D in sum rules (7), (8), then for 16 fit parameters χ 2 0 | inclusive = 188.4 and χ 2 0 | semi−inclusive = 194.8 for DIS and SIDIS data, while χ 2 0 | total = 383.9 for the full set of data (434 points). On the other hand, if η ∆ q3 and η ∆q 8 are varied within the errors on F + D and 3F − D, then for 18 fit parameters the resulting χ 2 values are: χ 2 0 | inclusive = 188.2, χ 2 0 | semi i nclusive = 194.8 with χ 2 0 | total = 383.7. Thus, one can conclude that the fit quality is quite good: The optimal values of our fit parameters are presented in Table 1 for both options, with fixed and varied η ∆q 3 and η ∆q 8 . As it is seen from Table 1, the results are almost the same in both cases: the differences in parameters are less than 1%. Our calculations show that the fit quality does not decrease if we cancel the extra parameters setting β ∆q 8 equal to β ∆q 3 , since their values occur very close to each other when we try to find the best fit. Besides, we use the equality β ∆ū = β ∆d = β ∆G (just as in Ref. [6]) since the polarized data at x > 0.6 ÷ 0.7 is just absent, while in the region 0.4 ÷ 0.7 the statistical errors are too large to feel the difference in values of these parameters. Certainly, the construction of the best fit should be accompanied by the reliable method of uncertainties estimation. We choose the modified Hessian method [28], [29] which well works (as well as the Lagrange multipliers method -see [6] and references therein) even in the case of deviation of χ 2 profile from the quadratic parabola, and was successfully applied in a lot of physical tasks. 
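To make the fitting step concrete, the sketch below assembles the χ² of Eq. (14) for a toy set of asymmetries and minimises it numerically. The theory prediction is a deliberately simple placeholder: in the actual analysis A_theor is obtained from the NLO structure functions evolved with DGLAP, and the minimisation is performed with the MINUIT package rather than with scipy.

```python
# Minimal sketch of the chi-square construction of Eq. (14) and its
# numerical minimisation. The "theory" below is a placeholder function,
# not the NLO QCD prediction used in the analysis.
import numpy as np
from scipy.optimize import minimize

# toy data: x-bins, measured asymmetries and their total uncertainties
x = np.array([0.01, 0.03, 0.1, 0.3])
a_exp = np.array([0.02, 0.05, 0.15, 0.40])
da_exp = np.array([0.01, 0.01, 0.02, 0.05])

def a_theor(x, params):
    """Placeholder for the NLO prediction of the asymmetry."""
    eta, alpha = params
    return eta * x**alpha

def chi2(params):
    return np.sum((a_exp - a_theor(x, params)) ** 2 / da_exp ** 2)

fit = minimize(chi2, x0=[1.0, 0.5], method="Nelder-Mead")
print("best-fit parameters:", fit.x, " chi2_min:", fit.fun)
```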
Let us recall that the standard Hessian method is based on the assumption that the global χ 2 is quadratic near the minimum χ 2 0 : Here a 0 i are the fit parameters values at the global minimum and H ij are the elements of the Hessian matrix -the matrix of second derivatives of χ 2 in the minimum. Then, the uncertainty on any physical quantity F which depends on M fit parameters {a i }, can be estimated applying The Hessian method in the simple form (16) is implemented in the MINUIT program [16] ("Hesse procedure") and it perfectly works if χ 2 has the parabolic profile near the minimum. However, in practice we often meet the problems which decrease the reliability of this simple version of Hessian method, and main of them is the deviation of χ 2 from parabolic form -see [28] for detail. To deal with asymmetric χ 2 profiles we apply the modification [28], [29] of the standard Hessian method, which is proved to be well working even in the strongly asymmetric cases. Shortly, the essence of procedure is following. First, one starts with still symmetric errors ±δF on the physical quantity F , which one finds via Eq. (16) rewritten [28] in terms of eigenvectors and eigenvalues of the diagonalized Hessian matrix 6 . Simultaneously, one calculates the values of fit parameters {a i } corresponding to F +δ(F ) and F −δ(F ) and, thereby, the respective χ 2 values. To obtain asymmetric errors [29] corresponding to the real χ 2 profile one varies ∆χ 2 in Eq. (16) (rewritten in terms of eigenvectors and eigenvalues) and calculates the respective values of the parameters, finding the intersections of χ 2 (F ) curve with the straight line χ 2 = χ 2 0 +∆χ 2 , where ∆χ 2 is already fixed quantity (∆χ 2 = 1 or ∆χ 2 = 18.065 here -see below). Then the differences of F values in these intersections with F = F 0 in the global minimum just determine F + δ (+) (F ) and F − δ (−) (F ), where δ (±) (F ) are in general asymmetric uncertainties on the physical quantity F -see Fig. 1. Besides, very important question arises about choice of ∆χ 2 determining the uncertainty size. The standard choice is ∆χ 2 = 1, just as we did before in Ref. [8] (see also [6] and references therein). However, the such choice of ∆χ 2 can lead to underestimation of uncertainties, as it was argued in Ref. [31]. The alternative choice of ∆χ 2 (see, for example, Refs. [31], [32] and references therein) is based on the equation where P=0.68 (1σ deviation) is the probability to find the values of all M fitting parameters inside the hypervolume determined by the condition χ 2 ≤ χ 2 0 + ∆χ 2 . In our case (17 parameters) the ∆χ 2 value calculated from Eq. (17) is equal to 18.065. We calculate the uncertainties for both ∆χ 2 = 1 and ∆χ 2 = 18.065 options. The local distributions together with their uncertainties are presented in Fig. 2. The first moments of PDFs together with their uncertainties are presented in Table 2. As it was mentioned above, the results for both scenarios with fixed and varied η ∆q 3 and η ∆q 8 are almost the same (the differences in fitting parameters are less than 1%). That is why we present all figures and Table 2 only for the option with the varied η ∆q 3 and η ∆q 8 . Let us now discuss the obtained parametrization. First, one can see that the results on the first moments ∆ 1 Σ ≡ η ∆Σ and ∆ 1 G ≡ η ∆G are very close to the respective results 6 For the respective calculations we apply the program ITERATE by J. Pumplin [30]. (scenario with ∆G < 0) obtained in Ref. [8] in the case of pure inclusive DIS. 
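The displayed equations referred to as Eqs. (15)-(17) do not survive legibly in the passage above; in the standard convention of the Hessian method (the paper's normalisation of H may differ by a constant factor) they read:

```latex
% Quadratic expansion of chi^2 around the minimum (cf. Eq. (15)):
\chi^2(\{a_i\}) \simeq \chi^2_0
  + \sum_{i,j=1}^{M} H_{ij}\,(a_i-a_i^0)(a_j-a_j^0),
\qquad
H_{ij} = \frac{1}{2}\,
  \frac{\partial^2 \chi^2}{\partial a_i\,\partial a_j}\Big|_{\min} .

% Hessian error propagation for an observable F (cf. Eq. (16)):
(\delta F)^2 = \Delta\chi^2
  \sum_{i,j=1}^{M} \frac{\partial F}{\partial a_i}\,
  (H^{-1})_{ij}\,\frac{\partial F}{\partial a_j} .

% Choice of the tolerance (cf. Eq. (17)):
\int_0^{\Delta\chi^2} P_M(t)\,dt = P = 0.68 ,
```

where P_M denotes the χ² probability density for M fitted parameters, so that ∆χ² is the 68% quantile of that distribution.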
Indeed, even with the more smaller errors, corresponding to the option ∆χ 2 = 1, at Q 2 = 3 GeV 2 we have ∆ 1 Σ = 0.372 +0.008 −0.014 and ∆ 1 G = −0.161 +0.096 −0.146 in this paper and ∆ 1 Σ = 0.329 +0.004 −0.012 and ∆ 1 G = −0.181 +0.042 −0.031 in Ref. [8], respectively. Notice that the obtained value ∆ 1 Σ = 0.372 +0.008 −0.014 is even more close to the respective NLO value a 0 (Q 2 = 3 GeV 2 ) = 0.35 ± 0.03 obtained by COMPASS [8] directly (without fitting procedure), from the first moment of the measured structure function g 1d . From Table 2 one can see also that in the more pessimistic case ∆χ 2 = 18.065, ∆ 1 G value is just zero within the errors -we will return to this question below. The impact of SIDIS data on the ∆G(x) shape will be also discussed some later. What concerns nonsinglet combinations ∆q 3 (x) and ∆q 8 (x), we see (compare Table 1 with Table 3 from Ref. [8]) that, even despite the small difference in initial scales (Q 2 0 = 1 GeV 2 here and Q 2 0 = 3 GeV 2 in Ref. [8]), the values of ∆q 3 (x) are very [31]. The such comparison seems to be reasonable because among the number of all parametrizations applying the DIS data, the SIDIS data is included only in the sequel of papers Ref. [2], [4], [6]. At the same time, though SIDIS data is not included in the analysis of Ref. [31], the RHIC π 0 production data is added 7 there (just as in Ref. [6]), which should provide significant impact on ∆G in the RHIC x region [0.05, 0.2]. The results of comparison are presented in Fig. 3. From this figure one can see that the best fit results on ∆G presented in this paper and in Ref. [6] are very close to each other while they are significantly differ (in both ∆G < 0 and ∆G > 0 regions) from the respective result of Ref. [31], even for the same sign-changing scenario for ∆G. On the other hand, comparing the results on the first moment ∆ 1 G ≡ η G , one can see that the central values of this quantity are almost the same for all three parametrizations: these are −0.183, −0.118 and −0.120 in this paper, Refs. [6] and [31], respectively. Moreover, including the uncertainties in comparison (see Table 2 here, Table III in Ref. [6] and Table IV Ref. [31]), we see that the first moment ∆ 1 G is just zero within the errors for all three parametrizations. Thus, we can conclude that with the present quality of the data it is hardly possible to realize is the first moment ∆ 1 G zero or not. To answer this question and also to distinguish between the different shapes of ∆G(x), still more 8 precise data is necessary. Let us now discuss the impact of SIDIS data on the polarized strangeness in nucleon. It is of importance because, after the appearance of the first results on ∆s extraction from SIDIS data performed by HERMES [3], we met the puzzle with the positive ∆s in the middle x HERMES region 0.023 < x < 0.6, while the total moment ∆ 1 s definitely should be negative [33] in accordance with the sum rule (8). Thus, to satisfy this requirement, ∆s(x) should possess the compensating negative behavior in the unaccessible for HERMES low x region 0 < x < 0.023, i.e. the sign-changing scenario for ∆s should be realized. Looking at Fig. 2 we see that this is indeed the case and we produce the best fit namely for the sign-changing ∆s scenario, as well as in Ref. [6]. However, we see also the distinction in the ∆s shape, in comparison with Ref. [6] -see Fig. 4. Namely, while within parametrization [6] ∆s changes the sign one time, within our parametrization ∆s changes the sign twice. 
It seems that this distinction occurs due to the inclusion of the latest COMPASS semi-inclusive data [7], which allow to better fix 9 ∆s shape. This is illustrative to compare the NLO results on ∆s obtained here with the respective results of direct ∆s extraction in LO by COMPASS -see Fig. 5. We see very similar ∆s behavior in both cases. Certainly, one should be careful comparing LO and NLO results. However, the LO and NLO results do not differ too drastically, so that the such comparison is very useful and allow us to make at least qualitative conclusion about the PDFs behavior (shape of distributions). The new COMPASS data on the kaon asymmetries in the wide Bjorken x region [0.003, 0.7] will be available in the nearest future (the paper in 8 Notice that within this paper we apply the sign-changing scenario for ∆G(x) and do not consider scenario ∆G(x) > 0 which was considered in Refs. [8] and [31]. For this scenario the first moment of ∆G is of opposite sign and larger in absolute value, about +0.3 ÷ +0.4 instead of −0.1 ÷ −0.2. Nevertheless, the such scenario also produce quite acceptable χ 2 0 value. The such arbitrariness even in the sign of ∆ 1 G once again tell us that we still need more data to properly fix ∆G and that even inclusion of π 0 production data (Refs. [6], [31]) still poorly enables us to solve the problem. 9 Notice that after inclusion of the latest COMPASS data to our fit χ 2 /D.O.F. value becomes small only if we allow ∆s to change the sign twice (due to the additional parameter δ ∆q8 √ x ). preparation). Thus, one can hope to eventually fix the strangeness in nucleon. As it was discussed above, the exclusive task for SIDIS is the finding of sea ∆q (and, consequently, valence ∆q V = ∆q − ∆q) PDFs in nucleon. We again compare our results on ∆ū and ∆d with the respective results from Ref. [6]. Comparing the central values (best fit values) of ∆ū and ∆d one can see that they are quite similar -see Fig. 6. At the same time, there are also some distinctions, and main of them are connected with the sum ∆ū(x) + ∆d(x) and its first moment ∆ 1ū + ∆ 1d . The point is that recently, analyzing the SIDIS data on h ± production COMPASS obtained rather surprising result [5] that the sum [∆ 1ū + ∆ 1d ](Q 2 = 10 GeV 2 ) is just zero within the errors (see Table 2 in Ref. [5]) This result was confirmed in the subsequent COMPASS paper [7], where sum ∆ū(x, Q 2 = 3 GeV 2 ) + ∆d(x, Q 2 = 3 GeV 2 ) of the local PDFs was extracted from the measured asymmetries A 1d , A π ± 1d , A K ± 1d in the region 0.004 < x < 0.3 (see Fig. 4 in Ref. [7]) and occurs to be about zero in the whole this region (the central values occur in both positive and negative vicinities of zero). Thus, at least in the leading order (COMPASS analysis) the sum ∆ū(x) + ∆d(x) is about zero in the region 3 GeV 2 < Q 2 < 10 GeV 2 , which sheds new light on our understanding of polarized light quark sea. Namely, the sea is extremely asymmetric (∆ū ≃ −∆d), on the contrary to the assumption of symmetric sea scenario ∆ū(x, Q 2 0 ) = ∆d(x, Q 2 0 ), applied in the practically all 10 existing parametrizations based on the pure inclusive DIS data analysis. Our analysis shows that the sum ∆ū + ∆d is very small quantity in NLO QCD too. It is very illustrative for qualitative comparison to put our best NLO QCD fit for ∆ū(x) + ∆d(x) (evolved to the COMPASS Q 2 = 3 GeV 2 ) on the figure (Fig. 4 in Ref. [7]) showing the COMPASS LO results on this quantitysee Fig. 7. From Fig. 
7 it is clearly seen that ∆ū(x) + ∆d(x) is very small quantity in both LO and NLO QCD orders. In turn, the first moment ∆ 1ū + ∆ 1d for the proposed parametrization is also just zero within the errors even in the case ∆χ 2 = 1. Notice that QCD evolution weakly influences this result. Even at extremely large Q 2 value 100 GeV 2 (for instance, for COMPASS upper bound on accessible Q 2 is about 60 GeV 2 ) the quantity ∆ 1ū + ∆ 1d is still very close to zero within the errors, even in the case ∆χ 2 = 1: Notice that looking on parametrization of Ref. [6] (see Fig. 3 there) we see that the distributions ∆ū(x) and ∆d(x) are also of opposite sign. However, they are significantly differ in their absolute values (just as in the DIS parametrization GRSV2000 [34] -see footnote 10). As a result, the central value of [∆ 1ū + ∆ 1d ](Q 2 0 = 1 GeV 2 ) in Ref. [6] (see Table IV there) is −0.08 (i.e. almost the same as ∆ 1d central value −0.11) instead of −0.01 in Eq. (19). The important remark should be made here. At present, the SIDIS data is of such quality that all above conclusions about sea PDFs should be considered as preliminary. Indeed, looking at Fig. 2 here and Fig. 3 in Ref. [6] we see that the uncertainties on ∆ū and ∆d are rather large even for the option ∆χ 2 = 1, while in the case of ∆χ 2 determined by Eq. (14) (∆χ 2 = 18.065 here) ∆ū and ∆d are still the zeros within the errors, and even just to see them within this option for ∆χ 2 we need more SIDIS data (first of all the expected COMPASS data). In conclusion, the new combined analysis of polarized DIS and SIDIS data in NLO QCD is presented 11 . The impact of modern SIDIS data on polarized PDFs is studied, which is of especial importance for the light sea quark PDFs and strangeness in nucleon. The obtained results are in agreement with the latest direct leading order COMPASS analysis of SIDIS asymmetries [7] as well as with the recent global fit analysis in NLO QCD of Ref. [6], where the SIDIS data were also applied. Nevertheless, there also some distinctions concerning, first of all, the polarized quark sea peculiarities. At the same time, the quality of SIDIS data is still not sufficient to make the eventual conclusions about the quantities influenced mainly by SIDIS. In this situation of especial importance becomes the direct ∆q extraction in NLO QCD, where (just as in LO QCD) the central values of asymmetries and their uncertainties directly propagate to the extracted ∆q values and their errors. The such NLO QCD method, free of any fitting procedure with a lot of parameters, was elaborated in the sequel of papers [35] (see review [36] for details). At present the respective analysis of the whole existing polarized DIS and SIDIS data is in preparation. In any case, the new COMPASS semi-inclusive data should be available in the nearest future. In particular, we expect π ± , K ± data on the proton target in the wide x region (today the only available such data is the HERMES data in the narrow x region and only for π ± production), which should essentially increase the precision of ∆ū, ∆d and ∆s extraction.
Seasonal Wind Characteristics and Prospects of Wind Energy Conversion Systems for Water Production in the Far North Region of Cameroon This study aimed at investigating the characteristics of the wind power resource in the Far North Region of Cameroon (FNR), based on modelling of daily long-term satellite-derived data (2005-2020) and in-situ wind measurements data (1987-2020). Five different reliable statistical indicators assessed the accuracy level for the goodness-of-fit tests of satellite-derived data. The two-parameter Weibull distribution function using the energy factor method described the statistical distribution of wind speed and investigated the characteristics of the wind power resource. Six 10-kW pitch-controlled wind turbines (WT) evaluated the power output, energy and water produced. A 50 m pumping head was considered to estimate seasonal variations of volumetric flow rates and costs of water produced. The results revealed that the wind resource in FNR is suitable only for wind pumping applications. Based on the hydraulic requirements for wind pumps, mechanical wind pumping system can be the most cost-effective option of wind pumping technologies in FNR. However, based on the estimated capacity factors of selected WT, wind electric pumping system can be acceptable for only four out of twenty-one sites in FNR. Introduction Wind has nowadays become a stable form of power supply and is considered as one of the most cost-effective means for delivering low-carbon energy services, particularly to the most vulnerable segments of the population in numerous developing nations. It's anticipated that by 2050, wind power could contribute to more than 25% of the total emissions reductions needed (approximately 6.3 gigatons of carbon dioxide annually), under the energy goals set out in the United Nations 2030 Agenda and the Paris Agreement. Wind energy (WE) would then generate more than 35% of total electricity needs, becoming the prominent generation source by 2050 [1]. Over the last two decades, the yearly growth rate of global WE has been as high as 38.56% (2001), as low as 9.61% (2018) and on average 22%. At the end of 2019, global WE generation capacity amounted to 622.7 gigawatts (GW), which represented 25% of renewable generation capacity by energy source. Hydropower, the largest share of the global total, accounted for 47% (1190 GW), while the share of solar reached 23% (586 GW) in 2019 [2]. Globally, WE performed particularly well in 2019, expanding by 58.9 GW (10.44%). Asia accounted for 49.47% of new capacity in 2019, increasing its WE generation capacity by 29.13 GW to reach 258.32 GW (41.48% of the global total). WE capacity in Europe and North America expanded by 14.02 GW (+31.46%) and 11.48 GW (+19.85%), respectively [3]. Oceania and the Middle East were the fastest growing regions (+22.18% and +17.75%, respectively), with 2.47% and 0.19%, representing their share of global WE capacity, respectively. Africa accounted for 0.51%, the lowest of new capacity in 2019, increasing its wind energy capacity by only 0.3 GW to reach 5.7 GW (0.93% of the global total). Compared to 2018, capacity growth in Africa and Middle East was somewhat lower than in 2019, but higher in Asia, Europe and North America [3]. Despite being the least growing region in terms of WE generation capacity, Africa has WE resources and potential that can meet its current needs, if properly tapped. 
Several studies have shown that the wind resource in Africa is greatest around the coasts and in the eastern highlands [4] [5]. However, WE development in the African continent remains very slow as a result of limited support at the level of the continent, since the vast majority of WE projects necessitate financial support from organizations based outside the continent [6]. By the end of 2019, North Africa and the Republic of South Africa continued to dominate, with 49.44% (2.85 GW) and 36.32% (2.09 GW), respectively, of WE capacity in the African continent. Sub-Saharan Africa accounted for 14.24%, the lowest share of WE capacity. At roughly 0.82 GW, the entire WE generating capacity of the 47 countries of sub-Saharan Africa (excluding the Republic of South Africa) is less than that of Morocco. As a result, sub-Saharan Africa has the world's lowest WE generation capacity, despite a wind potential that is essentially untapped. Furthermore, transition-related clean energy investment in sub-Saharan Africa remains limited [7]. Moreover, sub-Saharan Africa displays the lowest electricity access rate, at only 45%, far lower than the world average of 89%. Furthermore, the vast majority of people (over 99%) deprived of electricity are in developing nations, and four-fifths of them live in rural South Asia and sub-Saharan Africa [8]. Similarly, Cameroon does not have any installed WE capacity, despite the existing potential. Neighboring countries with comparable wind potential have taken steps in exploring wind power. By the end of 2019, WE generation capacity in Chad and Nigeria amounted to approximately 1 and 3 megawatts (MW), respectively [3]. Most of the analyses performed to assess the potential of wind power have shown that the whole country lies in a low wind resource regime, with very limited high-wind sites. The vast majority of sites fall under a poor to marginal wind regime. However, detailed information on the potential wind resource, which is of paramount importance when forecasting wind power for optimal site selection, has yet to be precisely established. Locally measured wind data are generally available at meteorological stations located at the main airports, while there are no ground station measurements for the vast majority of locations which are far (at least 50 km) from the main airports. When meteorological measured wind data from masts are missing, wind resource estimation using daily long-term satellite-derived data is considered [9] [10] [11]. Furthermore, for comparison analysis, both meteorological observations and satellite-derived data are used to estimate the local accuracy [12] [13] [14]. All things considered, the proposed work aims at investigating the characteristics of the wind power resource at twenty-one locations in FNR, using daily long-term satellite-derived data for the period 2005-2020 and 3-hourly time step observed wind speed data from two weather recording locations (Kousseri and Maroua) for the period 1987-2020. The main objective of this study is to provide a reasonable wind power resource assessment in the early phase of wind farm projects using the satellite-based wind resource, before higher-accuracy in-situ measurements become available. Wind Data Description and Source For this study, in situ measurements (3-hourly time step observed wind speed data) from the two weather recording locations (Kousseri and Maroua) and daily long-term satellite-derived data were used [18]. Table 1 provides geographical coordinates of the twenty-one sites considered, as well as satellite and in situ measurement periods.
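As a rough illustration of how the statistics used in the following analysis (mean wind speed and its standard deviation, per month, per season and per year) can be obtained from such a daily satellite-derived series, a minimal Python sketch is given below. The file name, column names and the month split between dry and rainy seasons are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (illustrative, not from the paper): seasonal wind-speed
# statistics from a daily satellite-derived series for one site.
import pandas as pd

df = pd.read_csv("site_wind_10m_daily.csv", parse_dates=["date"])  # hypothetical file/columns
df["month"] = df["date"].dt.month

# Assumed season split for the Far North Region: rainy season roughly
# May-September, dry season the remaining months.
df["season"] = df["month"].apply(lambda m: "rainy" if 5 <= m <= 9 else "dry")

annual_mean = df["wind_speed"].mean()        # Equation (1): arithmetic mean of v_i
annual_std = df["wind_speed"].std(ddof=1)    # Equation (2): sample standard deviation
seasonal = df.groupby("season")["wind_speed"].agg(["mean", "std"])
monthly = df.groupby("month")["wind_speed"].agg(["mean", "std"])

print(f"annual: {annual_mean:.2f} +/- {annual_std:.2f} m/s")
print(seasonal.round(2))
```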
Wind Speed and Standard Deviation In this research, the first step in the assessment of seasonal wind characteristics in FNR is to analyze in situ measurements (3-hourly time step observed wind speed data) from the two weather recording locations at Kousseri and Maroua and daily long-term satellite-derived data, recorded at a height of 10 m agl, using mean wind speeds and standard deviations. Figure 2 recapitulates monthly, annual and seasonal mean wind speeds and standard deviations using in situ and satellite measurements at Kousseri and Maroua. It is seen in Figure 2 that the highest in situ wind speeds occur in the dry season. The mean wind speed and its standard deviation are computed using Equations (1) and (2): \bar{v} = \frac{1}{N}\sum_{i=1}^{N} v_i (1) and \sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(v_i - \bar{v}\right)^{2}} (2), where: σ = standard deviation of the mean wind speed [m/s]; v_i = wind speed [m/s]; N = number of wind speed data. Table 2 provides, at the twenty-one selected sites, annual, dry and rainy season mean wind speeds, standard deviations and ambient temperatures using daily long-term satellite-derived data for the period 2005-2020, recorded at a height of 10 m agl. Mean wind speeds in FNR vary in the ranges of 2.99 - 4.32 m/s, 2.12 - 3.23 m/s and 3.43 - 4.87 m/s for yearly averages, rainy and dry seasons, respectively. It is observed that the variance of the wind speed in the rainy season is smaller than that of the yearly average and dry season, which may suggest a more accurate prediction. On the other hand, the higher SD in the dry season indicates wind speed values that are more widespread and may be predicted less accurately. Mean ambient temperature values are between 25.74˚C and 29.67˚C. Lower temperatures are seen in the rainy season, while higher values occur in the dry season. Weibull Probability Density Function The Weibull probability density function (PDF) is used to describe the statistical distribution of wind speed. The Weibull PDF is a useful tool to characterize the wind speed and power in a given location, as well as to evaluate mean monthly, yearly and seasonal net energy production and the performance of wind energy systems [19] [20]. The Weibull distribution can be described by its PDF f(v) and cumulative distribution function (CDF) F(v) [21] using Equations (3) and (4): f(v) = \frac{k}{C}\left(\frac{v}{C}\right)^{k-1}\exp\left[-\left(\frac{v}{C}\right)^{k}\right] (3) and F(v) = 1 - \exp\left[-\left(\frac{v}{C}\right)^{k}\right] (4), where: f(v) = probability of observing wind speed v; v = wind speed [m/s]; C = Weibull scale parameter [m/s]; k = Weibull shape parameter. The determination of the two-parameter Weibull PDF requires the knowledge of the shape (k, dimensionless) and scale (C in m/s) parameters. Various well-established estimation methods are used for the purpose of computing Weibull parameters at a given location [22]. In this work, Weibull shape and scale parameters are computed using the energy pattern factor method (EPF). First, the energy pattern factor (E_pf) [23] [24] [25] is given by Equation (5): E_{pf} = \frac{\overline{v^{3}}}{\left(\bar{v}\right)^{3}} (5). Then, the shape and scale parameters are computed using Equations (6) and (7): k = 1 + \frac{3.69}{E_{pf}^{2}} (6) and C = \frac{\bar{v}}{\Gamma\left(1 + 1/k\right)} (7). Statistical Indicators for Accuracy Evaluation To assess the accuracy level for the goodness-of-fit tests of satellite-derived data, five reliable statistical indicators have been used to compare measured 3-hourly time step observed wind speed data and daily long-term satellite-derived data. These statistical indicators, which include the root mean square error (RMSE) [28] [29] and the relative root mean square error (RRMSE) [19] [30], are presented using Equations (8) to (12). Extrapolation of Wind Speed The wind speed data were collected at a height of 10 m agl. The Weibull scale and shape parameters at a hub height z are extrapolated from the 10 m values using Equations (13) and (14): C(z) = C_{10}\left(\frac{z}{z_{10}}\right)^{n} (13) and k(z) = k_{10}\,\frac{1 - 0.088\ln\left(z_{10}/10\right)}{1 - 0.088\ln\left(z/10\right)} (14). The power law exponent n is given by Equation (15): n = \frac{0.37 - 0.088\ln\left(C_{10}\right)}{1 - 0.088\ln\left(z/10\right)} (15).
where z and z_10 (= 10 m) are in meters, and the Weibull C_10 and k_10 parameters are determined at 10 m height agl. Mean Wind Power Density and Energy Density Expressed in watts per square meter (W/m^2), the wind power density considers the wind speed frequency distribution of a given location and the power of the wind, which is proportional to the air density and the cube of the wind speed. The power of the wind P(v) can be estimated using Equation (16): P(v) = \frac{1}{2}\rho A v^{3} (16). The mean wind power density (P_D) based on the Weibull probability density function can be calculated using Equation (17): P_D = \frac{1}{2}\rho C^{3}\Gamma\left(1 + \frac{3}{k}\right) (17). The mean energy density (E_D) over a period of time T is expressed as Equation (18): E_D = P_D \cdot T (18), where: ρ = air density at the site; A = swept area of the rotor blades [m^2]. The air density (in kilograms per cubic meter) at a given site is computed as the mass of a quantity of air (in kg) divided by its volume (in cubic meters). It depends on elevation above sea level and temperature, and can be computed [35] using Equation (19): \rho = \frac{353.049}{T}\exp\left(-0.034\,\frac{Z}{T}\right) (19), where: Z = elevation (m); T = temperature at the considered site (K). Power Curve Model and Capacity Factor The typical power curve of a 10-kW pitch-controlled WT is shown in Figure 3(a), while the power curves of the six selected pitch-controlled WT of 10 kW rated capacity are plotted in Figure 3(b). As a result of the pitch-regulated systems, the voltage of the electricity at which pitch-controlled WT generate power at WS above their rated levels does not decrease [36]. Four different zones are observed in this curve (Figure 3(a)), delimited by the cut-in wind speed (v_I), rated wind speed (v_R), cut-off wind speed (v_F) and rated electrical power (P_eR). The power output over these zones is computed using the parabolic law [37], as expressed by Equation (20). The average power output (P_e,ave) of the WT, based on the Weibull PDF, can be computed using Equation (21). The ratio of the average power output (P_e,ave) to the rated electrical power (P_eR) of the WT is known as the capacity factor CF. CF can thus be expressed [38] as Equation (22): CF = \frac{P_{e,ave}}{P_{eR}} (22). Water Pumping Capacity The water pumping capacity rate (F_w) is related to the net hydraulic power output (P_out) and the efficiency of the pump. To determine a volume of water, the net hydraulic power output (P_out) and the volumetric flow rate of water (Q_w) are computed [39] using Equations (23) and (24). Costs Analysis The costs are estimated using the present value of costs (PVC) method, with the following assumptions: • I is the investment cost, which includes the WT price in addition to 20% for civil works and other connections; • the average specific WT cost per kW is USD 2600, for WT rated power less than 20 kW [41]; • n is the useful lifetime of the WT in years (20 years); • i_0 is the nominal interest rate (16%); • S is the scrap value (10% of the WT price); • i is the inflation rate (3.6%); • C_om is the operation and maintenance costs (7.5% of the investment cost). The total energy output (E_WT) over the WT lifetime (in kilowatt-hours) is computed using the CDF of the wind speeds at which the WT produces energy (A), the rated power of the WT, the capacity factor CF and the WT lifetime working hours. E_WT is computed as Equation (28). The costs of energy (COE) per unit kWh and costs of water (COW) per unit m^3 are estimated using Equations (29) and (30), with COE = PVC / E_WT (29). The annual volume of water V_w (m^3/year) produced is determined using Equation (31).
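To make the Weibull workflow above concrete, the following Python sketch shows the energy pattern factor fit, the Weibull-based mean power density, and the capacity factor of a pitch-controlled turbine. The expressions follow the standard textbook forms summarized above; the synthetic wind-speed sample and the cut-in, rated and cut-off speeds are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch of the energy pattern factor (EPF) workflow described above.
import numpy as np
from math import gamma, exp

def weibull_epf(v):
    """Estimate Weibull shape k (dimensionless) and scale C (m/s) from a wind-speed sample."""
    v = np.asarray(v, dtype=float)
    v_mean = v.mean()
    epf = (v**3).mean() / v_mean**3          # energy pattern factor, Eq. (5)
    k = 1.0 + 3.69 / epf**2                  # shape parameter, Eq. (6)
    c = v_mean / gamma(1.0 + 1.0 / k)        # scale parameter, Eq. (7)
    return k, c

def mean_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) from the Weibull parameters, Eq. (17)."""
    return 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)

def capacity_factor(k, c, v_in, v_rated, v_off):
    """Capacity factor of a pitch-controlled WT with a parabolic power-curve model."""
    a = exp(-(v_in / c)**k) - exp(-(v_rated / c)**k)
    b = (v_rated / c)**k - (v_in / c)**k
    return a / b - exp(-(v_off / c)**k)

# Illustrative synthetic sample only (not the paper's measured or satellite data).
sample = np.random.weibull(2.0, 5000) * 4.0
k, c = weibull_epf(sample)
print(round(mean_power_density(k, c), 1), "W/m^2")
print(round(capacity_factor(k, c, v_in=2.5, v_rated=10.0, v_off=20.0), 3))
```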
Figure 4 and Figure 5 show the monthly average PDF at 10 m height agl, respectively at Kousseri and Maroua, using both measured and satellite-derived data. At the site of Maroua, it is observed under the same conditions that the probability is lower (around 0.15) using measured data and higher (around 0.31) using satellite-derived data. Statistical indicators for the accuracy of satellite-derived WS at Kousseri and Maroua are displayed in Table 4. Weibull CDF values provided data for the statistical analysis and comparison between measured and satellite-derived data. Table 4 shows the different values obtained using the five statistical indicators for the accuracy of satellite-derived wind speed at Kousseri and Maroua. Statistical Indicators for the Accuracy of Satellite-Derived WS at Kousseri and Maroua The indicators show an accuracy level in the range of excellent to good for the goodness-of-fit tests of satellite-derived data. Therefore, satellite WS are found to be a good fit, with high correlation, at both locations. Figure 8 presents the seasonal average PDF at 10 m height agl for the twenty-one selected locations. The dry season average PDF (Figure 8(a)) displays a lower percentage probability, with a larger range of speeds, while the rainy season average PDF (Figure 8(b)) shows a higher percentage probability, with a narrower range of WS. Figure 9 illustrates the seasonal average PDF at 30 m height agl for the twenty-one selected locations. As previously described, the dry season average PDF (Figure 9(a)) displays a lower percentage probability, with a larger range of speeds, whereas the rainy season average PDF (Figure 9(b)) shows a higher percentage probability, with a narrower range of WS [43]. Wind electric pumping systems can be implemented at Hilé-Alifa, Blangoua, Kousseri and Goulfey, using WT characteristics similar to WT_1. Based on the hydraulic requirements for wind pumps, the use of a mechanical wind pumping system is highly suggested as the most cost-effective option among wind pumping technologies in FNR. Figure 11 shows mean monthly flow rate capacity (m^4/h) histograms using WT_1, at (a) Blangoua, (b) Goulfey, (c) Hilé-Alifa and (d) Kousseri. Higher flow rate capacities are observed in the dry season, whereas lower values are seen in the rainy season. The lowest flow rate capacities are observed in September followed by August, whereas the highest values are shown in March followed by February. Table 11 illustrates the mean seasonal volumetric flow rate of water (m^3/day) at a 50 m dynamic head using the six selected WT at the twenty-one selected locations. Volumetric flow rate (Q_w) and flow rate capacity (F_w) are linearly related to each other, hence they follow the same trend when ranking WT performance. WT_1 achieves the highest volumetric flow rate, whereas WT_3, WT_2 and WT_4 rank 2nd, 3rd and 4th, respectively. WT_4 reveals the same performance as WT_5. Table 12 illustrates the mean seasonal costs of water (XAF/m^3) at a 50 m dynamic head using the six selected WT at the twenty-one selected locations. COW and flow rate capacity are linearly related to each other, hence they follow the same tendency when ranking WT performance. WT_1 achieves the highest volumetric flow rate, whereas WT_3, WT_2 and WT_4 rank 2nd, 3rd and 4th, respectively. WT_4 reveals the same performance as WT_5. The least efficient is WT_6.
With consideration to WT_1, the best performing of the considered WT, the dry season COW values are on the order of 9.06 XAF/m^3. Figure 12 displays the mean monthly COW and volumetric flow rate using WT_1, at (a) Blangoua, (b) Goulfey, (c) Hilé-Alifa and (d) Kousseri. With respect to the PVC method, COW is inversely proportional to the volumetric flow rate: the higher the volumetric flow rate, the lower the COW. Lower COW values are observed in the dry season, whereas higher COW values are experienced in the rainy season. COW is highest in September and August, while March and February display the lowest COW. Conclusion In this work, the seasonal wind characteristics, net energy production and performance of selected 10-kW pitch-controlled WT at twenty-one selected locations in FNR have been evaluated using measured wind data and satellite-derived wind data at 10 m height agl. Five reliable statistical indicators have been employed to assess the accuracy level of the satellite-derived data. The two-parameter Weibull PDF using the energy pattern factor method provided the required tool to investigate the seasonal wind characteristics, net energy production and performance of the selected WT. The outcomes of this study show that mean wind speeds at 10 m height agl in FNR vary in the ranges of 2.99 - 4.32 m/s, 2.12 - 3.23 m/s and 3.43 - 4.87 m/s, respectively for the yearly average, rainy and dry seasons. Satellite-based wind resource data can be appropriate to assess the potential of wind energy in the early phase of wind farm projects, before higher-accuracy in-situ measurements are available. The wind resource in FNR is deemed suitable for wind pumping applications. Based on the hydraulic requirements for wind pumps, a mechanical wind pumping system can be the most cost-effective option among wind pumping technologies in FNR. Wind electric pumping systems using WT with a cut-in WS of less than 2 m/s and a rated WS of less than 10 m/s can be a cost-effective option for water pumping at four locations only, namely Blangoua, Goulfey, Hilé-Alifa and Kousseri.
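The cost figures discussed above rest on the present value of costs (PVC) approach outlined in the methods. The sketch below illustrates one commonly used form of that calculation in Python, using the paper's stated economic assumptions (lifetime, interest and inflation rates, O&M and scrap fractions) but purely illustrative values for the capacity factor and pumped water volume; the exact PVC expression used in the paper may differ in detail.

```python
# Illustrative PVC-based cost of energy (COE) and cost of water (COW) sketch.
def present_value_of_costs(investment, c_om, scrap, n_years, i_nom, i_inf):
    """Commonly used PVC form: investment + discounted O&M stream - discounted scrap value."""
    r = (1.0 + i_inf) / (1.0 + i_nom)
    return (investment
            + c_om * ((1.0 + i_inf) / (i_nom - i_inf)) * (1.0 - r**n_years)
            - scrap * r**n_years)

rated_kw, cap_factor, n = 10.0, 0.18, 20       # capacity factor is an assumed example value
wt_price = rated_kw * 2600                     # USD 2600 per kW for WT below 20 kW
investment = wt_price * 1.20                   # plus 20% for civil works and connections
pvc = present_value_of_costs(investment,
                             c_om=0.075 * investment,   # annual O&M, 7.5% of investment
                             scrap=0.10 * wt_price,     # scrap value, 10% of WT price
                             n_years=n, i_nom=0.16, i_inf=0.036)

energy_kwh = rated_kw * cap_factor * 8760 * n  # simplified lifetime energy output
coe = pvc / energy_kwh                         # cost of energy, USD/kWh
water_m3_per_year = 15.0 * 365                 # assumed mean daily pumped volume of 15 m^3
cow = pvc / (water_m3_per_year * n)            # cost of water, USD/m^3
print(round(coe, 3), "USD/kWh |", round(cow, 3), "USD/m^3")
```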
2020-10-28T18:49:26.538Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "2075098cdf148a1c3ed1da46d31db1c75556581d", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=103292", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1d8a636d1105889ebeaa2279acf3c6116cad93f5", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
213664385
pes2o/s2orc
v3-fos-license
TEACHING-LEARNING MODEL FOR THE SCIENCE OF ELECTRONICS We present a method for the teaching of Electronics, defined as the scientific discipline that studies the movement and behavior of electrons in semiconductor materials and in vacuum. Electronics can be considered as a science with a solid physical foundation. Within the field of Electronics there are different disciplines, some of them can be considered as pure science, while some others are more oriented to applications. Our methodology has been applied to the wide range of courses that develop the different approaches to Electronics, from the physics of semiconductors or the physics of microelectronic devices, generally taught at physics faculties, to microelectronic fabrication technology or microelectronic design, subjects that typically have a more application-oriented character. To ensure an effective learning of these subjects, a teaching-learning model has been established. This model involves the criteria for developing the programs and defining objectives according to the curricular competences, as well as the development of a series of activities in which the methods, techniques, forms of presentation and didactic resources most useful to achieve the proposed objectives will be used. An evaluation system that assesses the effectiveness of the educational process and detects its anomalies is also included. The impact of this method on the effectiveness of the teaching-learning process was evaluated by a comparative analysis of the results of the surveys distributed by the university to the students for the assessment of quality, together with surveys to the lecturers of the subjects of Electronics. The purpose of this article is to present the methodology used to teach different subjects of the field of Electronics, where Electronics is considered in its broadest sense as a science with physical principles and engineering applications (Felder & Silverman, 1988). It is based on the experiences realized in the degrees in Telecommunications Systems and Telematic Engineering at the Higher Technical School of Telecommunications (ETSIT) of the Polytechnic University of Cartagena (UPCT) to teach four subjects of Electronics: Electronic Components and Devices, Electronic Circuits and Functions, Electronics for Telecommunications, and Design and Manufacture of Electronic Circuits. All these subjects have six ECTS (European credit transfer and accumulation system) credits. The first one is a fundamental or basic subject, the second and third ones are compulsory subjects, and the last one is an optional subject. The first two subjects are taught during the second course, while the third and fourth are imparted, respectively, in the third and fourth year. The realization of a teaching-learning model that appropriately guarantees the teaching in Electronics should consider the following points: methodology and definition of objectives according to curricular competences, development of a series of activities in which these methods will be used, techniques, forms of presentation of the subject and more useful didactic resources to achieve the proposed objectives, and a system of evaluation of the effectiveness of the educational process allowing also to detect its anomalies. The impact of the proposed methodology on the effectiveness of the teaching-learning process will be analyzed through a comparative study of the results of the surveys carried out on students by the university's quality service, as well as through teacher surveys. 
Methodology and Objectives The elaboration of a didactic for a subject requires addressing methodological aspects of a scientific and pedagogical nature (González & Triviño, 2018). The organization of each lecture goes through a set of considerations that we will address in this section, such as the methodology and objectives. The Scientific Approach Since Electronics has a scientific and technical nature, we believe it is appropriate to consider the didactic method (necessary to carry out an effective work of knowledge transmission) in the context of the scientific method (Lodico, Spaulding & Voegtle, 2006). It is not possible to give a complete training to the student of a scientific-technological degree without transmitting a scientific methodology that will enable them to fully use their knowledge in the development of their professional activity, since research and development are essential parts of their professional field. In this sense, the teacher must transmit to the student the contents of the program and at the same time stimulate their critical and creative capacity. The scientific method is a process in which experiments are used to answer questions. The application of the scientific method can be divided into three specific cyclic stages as illustrated in Figure 1. Objectives To define the objectives that any subject of Electronics should contemplate, we have adapted the Klopfer's taxonomy (Klopfer, 1976). The most important objectives that we intend to achieve according to the curricular competences of Electronics subjects are the following: • Acquire a basic set of knowledge in Electronics (Analog and Digital). • Identify and apply the knowledge acquired to habitual and new situations, in order to recognize the problems and solve them with flexibility. • Identify and properly manipulate instruments, components and laboratory techniques. • Describe rigorously and with the appropriate language designs and experimental observations. Create an ability to write technical reports. • Identify, access and use the bibliography search information tools. • Develop favorable attitudes towards Science in a broad sense (Haladyna & Shaughnessy, 1982), and Electronics in particular, and assimilate the scientific method as a way of thinking. • Acquire critical thinking and group work habits. Teaching Activities Any proposal for the program of a subject should consider carrying out a series of activities in which the most useful methods, techniques, forms of presentation of the subject and teaching resources will be used to achieve the previously proposed objectives. The activities through which teaching has been traditionally developed are of several types: theoretical lectures, practices (problems and laboratory), complementary activities, and evaluation. In addition, there are a few hours of tutoring in which the teacher must be available to students, as well as the necessary activities for the development of the end-of-career project. The introduction of methodologies adapted to the European Higher Education Area (EHEA) does not imply the elimination or replacement of these traditional activities, since in most cases there must be theoretical lectures, practices, exams and other complementary activities. What has taken place is an update of the methodologies to improve the effectiveness in achieving the objectives that are intended to be reached with these activities. 
For this purpose, we propose below some innovative methodologies that we follow in the development of the different didactic activities mentioned above, as they are carried out in the Electronic subjects adapted to the EHEA (Vidal-Prado, 2012). Theoretical Lecture The objective of the theoretical lectures is the rigorous and orderly presentation of the theoretical bases necessary for the development of a discipline to a certain group of students. The most used form of presentation of the subject, throughout the course, is usually the lecture, and the method of exposure to explain the content is the use of the blackboard and audiovisual resources (Gotsick & Gotsick, 1996;Apperson, Laws & Scepansky, 2008, Rodríguez-García, Hinojosa-Lucena & Ágreda-Montoro, 2017. Keep in mind that not all students learn in the same way and that not all subjects can be developed in the same way, so it will have to adapt to each particular situation. For this purpose, we present in this article the application of some teaching innovations, some already known and practiced since the implementation of the European Higher Education Area, such as the "flipped classroom", and some other proposed by us and which we have called "dialogued lecture" and "Historicist method". The flipped classroom aims to break the monotony of the theoretical lectures. It can be realized with some subjects that best suit to this methodology (Walvoord & Anderson, 2010;Bergman & Sams, 2012). In this case, the presentation of the subject taught by the teacher is reversed. Students study the contents of the topic at home, for which it is important that students have an available support material, and then we work in classroom to reinforce the knowledge acquired and meet the needs of each student through exercises, problems and projects. Not all competences can be acquired through the methodology of the flipped classroom, so it is necessary to look for other methodological innovations for the theoretical lectures. We propose the concepts of "dialogued lecture" and "historicist method". The dialogued lecture is a way of teaching the theoretical lecture in which the acquisition of competences is carried out through an exposition of the contents in a permanent dialogue with the students. It is not simply that the students ask any doubt that may have been raised, nor is it that the teacher asks to the class from time to time if they have understood what he has explained. It is about the students themselves developing the subject topics guided by the teacher. Obviously, this way of giving the lecture is slower than the traditional theoretical lecture, so it will be necessary to select very well the concepts that will be developed in the classroom, limiting itself to the key and essential points of the subject topics, and leaving everything accessory or complementary to the laboratory practices and non-face-to-face work of the student. On the other hand, the "historicist method" is a way of imparting the lecture that starts from the fundamental concept of the philosophical theory of "historicism", according to which everything human only acquires its true meaning when it is considered as part of a continuous historical process. This concept, together with the intrinsic appeal that history has, allows us to design a fun and entertaining lesson approach for students, in which the development of the lecture revolves around anecdotes about how those concepts arose and the life or biography of the scientists or engineers who made them possible. 
Practices The main objective of the practices is to help the student to fix and assimilate the subjects exposed during the theoretical lectures. Within the practical classes, we distinguish two types: problem classes and laboratory practices. Both lend themselves very well to a methodological innovation known as project-based learning (Dutson, Todd, Magleby, & Sorensen, 1997;Nunes de Oliveira, 2011). This method is based on different stages in which the student is the main actor in their learning process, as described in Figure 2. During the entire development of the project, the teacher supports the students and acts as a guide. Nowadays, this method is difficult to carry out in the basic and compulsory subjects of the ETSIT of the UPCT, due to the large size of the student groups and the large number of subjects taught in each course. However, its application is possible in the second semester of the last course, dedicated to optional subjects, internships in companies and Erasmus stays. Therefore, this type of methodology has been applied to the optional subject: Design and Manufacture of Electronic Circuits. Schedule for Queries or Tutoring In addition to the previous activities, the teacher has the obligation to establish a schedule for queries or tutoring (Álvarez-Pérez, 2013;Gezuraga-Amundarain & Malik-Liévano, 2015). The teacher should encourage students to use these hours, since, from a didactic point of view, they allow direct contact with him through individualized attention. In this way, any doubts can be resolved for those students who have difficulties in the contents of the programs, as well as in the development of the different activities. The use of these tutoring, at times not strictly coinciding with the dates before the exams, can be a good indication of the degree of motivation of the students for the subject. In the new framework of the EHEA, the main innovation in this subject has been the introduction of group tutoring. Our experience in organizing these types of tutoring shows that they can be an effective tool for exam preparation. In order to distinguish themselves from a traditional problem-solving class, both the teacher and the students must properly prepare this group tutoring. The best thing for the teacher is to prepare a list of doubts or typical difficulties that his experience with the subject allows him to know that the students will find, and that he may not have had time to deal with the breadth or tranquility necessary within the theoretical lectures. The presentation and resolution of these issues of specific difficulty will raise their own doubts in the students, in addition to those already brought prepared in advance, so that the question time of the group tutoring will be intense and enriching for everybody. End-of-career Works The end-of-career project is a key phase in the preparation of university graduates, both in science and engineering, who must be able to use and apply the knowledge and skills acquired throughout the academic courses. Unlike the rest of the teaching activities, the EHEA has not introduced significant innovations in the end-of-career work, which had already been carried out in practically all the degrees, both humanities and social sciences, natural sciences or engineering since the reforms of the curricula occurred during the 1990s with the application of the LRU (Ley de Reforma Universitaria). 
From the point of view of the students, practically the only difference they have noticed is that those who wish to obtain the master's degree have to do two final projects, one at the end of the degree and another at the end of the master. The teacher must take into account in his offer of proposals of end-of-career project the different nature of these two works, which must synthesize the skills acquired in their respective degrees. Other Complementary Activities Within the complementary activities, those that do not have a predetermined academic arrangement and that, most of the time, have an optional character, are contemplated. Some of them had been realized since before the implementation of the EHEA, but this has brought some novelties. Among them it is worth highlighting two: internationalization and bilingualism. Internationalization is affecting the entire life of the Spanish university, which increasingly seeks to attract students from emerging countries, as well as research talent trained in prestigiously foreign centers. We mention here this reality of internationalization, because one of its dimensions is the possibility of inviting foreign professors to give talks, seminars or workshops to our students, so that a traditional complementary activity thus acquires a new added value. Similarly, Spanish/English bilingualism has been incorporated into university teaching in a generalized way. We mention it specifically in this section because it is in the complementary activities such as lectures or seminars given by invited foreign professors where the second language (English in most cases) begins to appear more naturally. Evaluation The evaluation of the students should serve to verify the degree of learning achieved by the students and, in the same way, measure the effectiveness of the teaching activity of the team of teachers (Bloom, Hastings & Madaus, 1975;Villardón-Gallego, 2006). Thus, the exams offer a very effective form of student-teacher feedback (García-Sanpedro, 2012). Within the framework of the EHEA there have been some innovations in the assessment systems, and teachers/researchers continue to deepen them at this time. An attempt has been made to introduce a continuous assessment system based on deliverables, at least as a significant percentage of the overall assessment, and the tendency is that this percentage will increase until, in some cases, the final exam is eliminated. The main difficulty teachers have encountered in implementing this continuous assessment system has been the absence of a teaching support staff similar to the figure of the "teaching assistant" or "TA" of the Anglo-Saxon system. Without this support, in very large groups it is difficult to think about the possibility of correcting some deliverables with sufficient periodicity to allow a continuous assessment. These "TA" are usually students of the last courses or recent graduates who have demonstrated a good acquisition of skills in the subject they are going to support. For them it is like a first part-time job while they finish their studies or probe the labor market, with a small salary that helps them support themselves during this period. In return, the help they can provide is invaluable to make possible not only a true continuous assessment, but also other teaching innovations. 
Relationship between Teaching Activities and Methodological Innovations As a summary of this section we present an illustrative figure (Figure 3) that summarizes the relationship between traditional teaching activities and methodological innovations. Depending on the characteristics of each subject, it may be more convenient to apply one or the other. The important thing is that these new methodologies help to achieve the proposed aims, which are none other than the acquisition of the competences established for each subject. It is not a matter of changing just for the sake of change without knowing the direction, but that innovation must have a clear meaning and purpose, and above all it must demonstrate its effectiveness. Figure 3 relates each methodological innovation to teaching activities in which it is normally more effective, but is not intended to be exhaustive or exclusive. Other possibilities exist and each teacher must evaluate them for their particular case. Practical Application of the Proposed Methodology In this section we describe the practical application of the proposed methodology to specific cases of Electronic subjects adapted to the European Higher Education Area at the Polytechnic University of Cartagena. We analyze a case of flipped classroom, another of project-based learning, and finally some new methodologies for the theoretical lecture that we have called the dialogued lecture and historicist method. Finally, teaching innovations applied to complementary activities based on bilingualism and internationalization are also described. Flipped Classroom In the subject of third course "Electronics for Telecommunications" of the degree in Telecommunications Systems an activity is carried out that has elements of both flipped classroom and seminar. It is a workshop prepared by the students themselves and entitled "Workshop on Emerging Technologies for Telecommunications". In this activity, students work in groups of 4 or 5 people to prepare a topic related to the contents of the subject, which they should then expose to the class and the teacher (flipped classroom). It is usually a topic of current technological interest, but it must have a good theoretical foundation in the subject. The topics are proposed and assigned by the teacher according to these criteria. For example, during the 2018/2019 academic year the following topics were selected: microsatellites, power amplifiers for radio frequency communications, vacuum tubes in power electronics for radio frequency, radio frequency identification (RFID) technology, and memristors. These topics have not been chosen based on the research lines of the department, so the activity does not have a research orientation. Two of the topics deal about power electronics for telecommunications (amplifiers on the one hand and vacuum tubes on the other hand), so they are directly linked to the subject topics. Two other topics, microsatellites and memristors, are more general, but students are expected to give an approach according to the subject (that is, electronics for microsatellite telecommunication systems, or application of the memristors in specific circuits for telecommunications). It is therefore not a question of generalist expositions, but with a well-defined character. Finally, the RFID topic contains elements of both electromagnetism and electronics, so it can be suitable for various subjects. 
Therefore, we can see that depending on the topic chosen, the activity may have a flipped classroom character or rather a seminar. In general, we can consider it a mixed activity, of great value for the students for what it means of active learning and development of transversal competences such as teamwork or oral communication in public. The accumulated experience with this activity has been very positive. So much so that it has been carried out continuously since the beginning of this subject in the 2012/2013 academic year with the implementation of the degrees. Among the innovations that have been introduced throughout these 7 editions, two of them are worth highlighting: on the one hand, the adaptation to bilingual teaching in Spanish and English, and on the other hand the opening of the workshop sessions to the entire School staff (ETSIT), both students and teachers. For this purpose, this activity is publicized in the corresponding dissemination lists of the School and is carried out in an environment suitable for public assistance such as the conference room. As an example of the benefits obtained with this opening, it is worth mentioning that during one of the sessions a fruitful discussion was established between the student speakers and a professor who attended the session, which subsequently led to the completion of a final degree project, a Master thesis, and three publications in indexed journals. The teaching load assigned to this activity is 1.5 hours of face-to-face activity plus 9 hours of nonface-to-face work for the preparation of the expositions, out of a total of 6 ECTS (or 180 hours) of overall subject load. The teaching guide contains very detailed information on competencies and learning outcomes, content, aims, teaching and evaluation methodology, bibliography and digital media. Project-Based Learning The second of the specific applications that we are going to present is carried out in the optional subject of the fourth course "Design and Manufacture of Electronic Circuits". This subject is a clear example of project-based learning. As it is an optional subject, the number of students enrolled favors the application of this methodology. The fact that the theoretical and practical teaching hours of this subject are concentrated in a single continuous session of 4 hours per week also contributes to this. This allows working in the laboratory during those 4 hours, interspersing the explanations of the teacher on the blackboard available in the laboratory with the work of the students in their practice places. To this end, a laboratory was specially designed for this purpose with U-shaped benches arranged around a blackboard, in such a way as to enable the vision of it and the teacher from any work place. Thus the teacher can easily address all students and can supervise their work without having to move continuously from one end of the laboratory to the other. From the first to the last day of class, students work in the laboratory following the flow of tasks described in Figure 2 for project-based work. The aim is the development of an electronic prototype going through all its phases, from its conception and determination of its performance, to the design, manufacture, and finally verification and testing. In this way many skills are worked, which condense to a large extent all the knowledge of Electronics acquired in the previous subjects of the degree. 
As an example, in the 2018/2019 academic year the proposed project consists of a transmission system with FM modulation. The system consists of several modules, each of which is assigned to a group of students, which is responsible for its design fulfilling the specifications established for the system to work properly when the various stages are connected. This way of working is very instructive for the students, since it constitutes a training in the work methods that they will find in the exercise of their profession and a preparation for the assumption of responsibilities, while providing great satisfaction, since at finalizing the project they have a complete vision of the development of an electronic product and a feeling of understanding the usefulness of everything they have previously studied in the Electronic subjects of the degree. This example of project-based learning is really innovative for several reasons. First, if we compare it with traditional practical teaching in electronics, we see that this has been done so far on test circuit assembly plates called "protoboards" or circuit assembly plates by insertion. In this type of plates the electronic components (resistors, capacitors, transistors, integrated circuits, etc. ...) are inserted into the holes of the assembly plates and connected by flexible external cables, so that no welding is necessary and all material is recovered after practice to be reused. However, this is not the way to work in industry applications, where circuits are made on printed circuit boards with photolithography techniques to define interconnection tracks and using various types of welding techniques to connect and fix components. It is difficult to introduce this type of techniques in university practical teaching, due to the expense of fungible material and the convenience of having a support staff with the necessary training. For this reason, our experience is a pioneer in this field and can serve as an example for other subjects. Another novelty of our approach is the collaborative work that involves the fact that each group of students must develop a part of the global project, so that by bringing the different parts together the system will work. It is a novel approach compared to the traditional practice in which each group worked in a self-sufficient way performing and testing their assembly. Our new way of working brings numerous advantages, including the development of competences for teamwork and for the development of a sense of responsibility at work, allowing students to realize that the success of a company can depend on the correct performance of our part of the work, no matter how big or small our contribution may seem. The complete subject of "Design and Manufacture of Electronic Circuits" is based on the methodology of project-based learning, so we can say that all its teaching load (6 ECTS) is impregnated with this methodology. At the end of the course a presentation of the global project is made, in which each group presents its part. This face-to-face activity is 1.2 hours, plus 23.6 hours of non-face-to-face work deemed necessary for the preparation of the presentation. The evaluation is carried out based on this presentation and the work done during the subject (presentation of reports and laboratory work), so the final exam has disappeared from this subject. 
Innovations in the Theoretical Lectures: Dialogued Class and Historicist Method In the second course subjects "Electronic Components and Devices" and "Electronic Circuits and Functions", it is more complicated to introduce the previous methodologies due to the high number of students enrolled. Therefore, it is necessary to look for other active learning methods that recognize and take into account the fact that not all students learn in the same way and that the attention and understanding of students during the traditional theoretical lecture cannot be taken for granted. To this end we have developed our own methodology, which we have called historicist in reference to the philosophical current born in the nineteenth century and which argues that the nature of human works is only understandable if they are considered as an integral part of a continuous historical process. Based on this concept, we build the lecture from historical subsections in which we comment on the history of microelectronics, such as anecdotes that surrounded the invention of the transistor or brief biographical reviews of its protagonists. We also comment on aspects of economic or social nature such as the repercussions of the development of the microelectronic industry. These subsections contribute to stimulate the curiosity of the students and help a lot to maintain their attention. Another tool that we use for this purpose is what we call "dialogued class", and which consists in developing the theoretical lecture in a constant dialogue of questions and answers with the students, encouraging them to constantly think about the next step that we are going to take in the theoretical exposition. It can therefore be affirmed that it is the students themselves who are building the theoretical development of the lecture, guided by the teacher. On the other hand, it should be noted that both subjects have a great practical load, specifically the same number of practical credits as theoretical (3 ECTS of each type). Practical activities include both laboratory work and problem classrooms. Laboratory practices have been designed to some extent with elements inspired by project-based learning, since the scripts that students must follow are not a mere recipe for instructions that can be followed mechanically, but often pose a challenge and invite to the student to freely look for his solution. On the other hand, problem classrooms sometimes incorporate the methodology of the flipped classroom, in which students are invited to bring a problem prepared in advance and solve it in front of their classmates and the teacher. It is worth mentioning that the practical lectures, both laboratory and problem solving, take place in groups much smaller in size than the theoretical lectures, which facilitates the application of these active learning methodologies. Complementary Activities Occasionally we have also used complementary activities to reinforce the subjects of Electronics. These include a trip to the National Microelectronics Center of Barcelona, where the largest clean micro/nano-manufacturing room in Spain is located. During this visit, students were able to see the equipment commonly used in microelectronic manufacturing, such as photolithography systems, ion implantation, thin film growth, plasma etching, and a long list of complementary techniques and supporting infrastructures. 
This visit provides students with a vision of the complexity of the manufacturing processes of the microelectronic industry, which have made it possible to achieve absolutely amazing levels of integration in the microchips manufactured today by the large companies in the sector. It also allows us to get an idea of the equipment cost, which has reached such exorbitant levels that only a few companies worldwide can face. To get an idea, the investment in R&D of the top 10 companies in the sector reached an impressive amount of 36,000 million dollars in 2017, which represents a percentage of investment in research with respect to income of these companies much higher than any other industry (for example, 21% in the case of Intel). Courses have also been organized with foreign visiting professors of international relevance in topics ranging from photonic crystals to microelectronic manufacturing technology. These courses allow students to make contact with foreign leaders in research fields related to electronics, or with industry experts who can give an updated and first-hand view of what is being forged in the microelectronics industry production centers. Since these production centers are located abroad (especially around technological poles such as Silicon Valley, Israel, or Taiwan, to name a few), it is an activity that is part of the internationalization of the university. It also contributes to teaching innovation from the point of view of bilingual teaching, since these sessions usually take place in English. These courses have been recognized as credits of free configuration, and in some cases also as a specific training course, with a typical duration between 20 and 30 teaching hours (3 ECTS). Evidences and Impact Evaluations of the Proposed Methodology The process of implementing the degrees adapted to the framework of the European Higher Education Area (EHEA) began in the 2010/2011 academic year and culminated in the 2014/2015 academic year. Since that time, the proposed method in the subjects of Electronics has been applied to students of a degree in Telecommunications Systems Engineering and Telematics Engineering of the Polytechnic University of Cartagena. The impact of the proposed methodology has been analyzed through a comparative study of the results of the surveys carried out to the students by the quality service of the university. Figure 4 graphically shows the improvements achieved through the application of this methodology in the teaching of the Electronics area. The answers of the students in the satisfaction surveys with the teaching activity of the teaching staff have been represented in a graph. For this, the last of the questions asked in these surveys has been chosen, which refers to the degree of general satisfaction. The literal text of the question is the following: "In general terms I am satisfied with the teaching activity developed by the teacher". It is therefore a Likert element (Norman, 2010), in which the student evaluates their degree of agreement on a 5-point scale from 1 to 5. In the graph of Figure 4 we have represented the average of the answers to this question for all the subjects of the Electronics area that have implemented the methodology developed in this article, as well as the average of all the subjects of the Polytechnic University of Cartagena. 
The results are offered as a function of time, for academic courses from 2014/2015 to the last course for which data are available at the time of preparing this study, that is, the 2017/2018 course. The reason for choosing this time interval is that the 2014/2015 course was the first in which the degrees adapted to the European Higher Education Area were fully implemented, and therefore the results are not affected by the coexistence of subjects in the old and new frameworks. We can see that the results for the university average are approximately constant, with small fluctuations. On the contrary, the results for the average of the subjects of the Electronics area initially start from a disadvantageous position compared to the average of the university (position that we attribute to the intrinsic difficulty of the Electronic subjects), but they experience a remarkable improvement during the period of implementation of this methodology and finally reach, and even exceed in one case, to the mark of the university at the end of the period. Surveys were also passed to full-time teaching staff who teach the subjects of Electronics in the degrees of the Higher Technical School of Telecommunications Engineering (ETSIT) of the Polytechnic University of Cartagena (UPCT). The following two considerations were used to assess the impact of this method: • Interest of teachers in applying this teaching-learning method in Electronic subjects. • Capacities and knowledge achieved by students by means of this teaching-learning method. In the case of the first consideration ( Figure 5), the surveys revealed that: • 100% of teachers believe that the method is useful, since its scientific approach is well suited to the teaching-learning of Electronic subjects. • 80% consider that they have improved their work by applying this method, due to the homogenization of the teaching methodology in the Electronic subjects. • 90% indicate that it is difficult to perform any complementary activity. According to the comments of the professors, and although these activities are optional, there are too many subjects during the semester that make it impossible to find free time slots to schedule any of the complementary activities. In addition, they receive complaints from students (undergraduate) about the time and work burden that this entails. For the second consideration ( Figure 5), the surveys showed that: • 80% of teachers estimate that students adapt adequately to the method when applying the different activities. The most difficult activity of teachers is to maintain active attitudes of students during the theoretical lectures. • 100% consider that the level of responsibility and motivation of the students in the different activities of an Electronic subject increases throughout the course, due to a greater participation of the students in these. • 100% think that the system used for the evaluation of students is adequate, since it has an objective characteristic. • 100% indicate that students achieve adequate skills and knowledge by means of this teaching-learning method. Since the 2014/2015 academic year in which the proposed method has been applied, and as a result, the methodology for teaching the subjects of Electronics and the coordination between professors of the department has been improved and homogenized. 
This improvement is reflected in Figure 4, where it can be seen that the satisfaction of the students with the Electronic subjects has been growing since the beginning (2014/2015 academic year) to stabilize today. The teaching of Electronics at the university level usually presents a special difficulty for students, mainly in the first subjects found in this matter. Unlike other subjects of fundamental type such as physics or mathematics (which are gradually introduced in high school), the common situation in relation to electronics is that students have not seen anything related to it or its basic concepts before, and meet it for the first time in university. This has as a consequence an initial situation of perplexity that can result in misunderstanding or even rejection of the students towards the Electronic subjects, as was the case at the beginning of the implementation of the proposed method (Figure 4). To reverse this situation, the teachers were involved in developing the content of the subjects in accordance with the curricula of the degrees in Telecommunications Systems Engineering and Telematics Engineering of the Polytechnic University of Cartagena, since it is important to place the Electronics subjects in relation to the profession (Telecommunications Engineer). The teachers noticed positive changes in student behavior during the classes, in terms of achieving greater participation and motivation of the students in the different activities. Given the scientific-technical nature of Electronics, they also highlighted great progress by students in the development of related experimental skills such as: handling electronic instruments and components; experimentally checking theoretical explanations; analyzing data and drawing conclusions; and learning to combine design with analysis and write technical reports. To improve the teaching methodology of Electronics, studies prior to ours have been carried out, both national (Herrero, Pardo, Fernando & González, 2011) and international (Patil & Prasad, 2016). In comparison with these studies, which refer to the particular context of industrial engineering (Herrero et al., 2011) or of a specific country (Patil & Prasad, 2016), our work focuses on Electronics understood as a fundamental subject for Information Technology and Communications. It also has the advantage that it has been applied once the process of implementation of the European Higher Education Area (EHEA) has been completed, so that its effectiveness can be evaluated (Figure 4) without being affected by coexistence with study frameworks prior to the EHEA. Therefore, the cause of the positive evolution of the results reflected in Figure 4 is solely attributable to the application of this methodology. Conclusion This article has described the teaching-learning model used in four Electronics subjects, taught in Telecommunications engineering studies at the Polytechnic University of Cartagena (UPCT). One of the four subjects is of a basic or fundamental type, two are mandatory, and one is optional. In this model we can highlight the emphasis on the part of the teachers to achieve the didactic aims of the different subjects of Electronics through active learning activities, avoiding passive attitudes of the students, promoting responsibility and ethics, boosting learning and promoting competence development. The motivation and learning of the students in the Electronic subjects are reinforced through the practical activities.
The impact of the methodology has been assessed through the results of the student satisfaction surveys on the teachers' teaching activity, which show a clear positive trend for the subjects of the Electronics area during the period in which this methodology has been implemented. Teacher surveys have shown an almost unanimous predisposition to apply this method, since it is well suited to Electronics subjects, facilitates the teachers' work, and enables students to achieve adequate skills and knowledge. Teachers also viewed positively the students' adaptation to the method, reflected in their increased participation in the different activities of the Electronics subjects during the course.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Combination Therapy Using Low-Concentration Oxacillin with Palmitic Acid and Span85 to Control Clinical Methicillin-Resistant Staphylococcus aureus The overuse of antibiotics has led to the emergence of multidrug-resistant bacteria, such as methicillin-resistant Staphylococcus aureus (MRSA). MRSA is difficult to kill with a single antibiotic because it has evolved to be resistant to various antibiotics by increasing the PBP2a (mecA) expression level, building up biofilm, introducing SCCmec for multidrug resistance, and changing its membrane properties. Therefore, to overcome antibiotic resistance and decrease possible genetic mutations that can lead to the acquisition of higher antibiotic resistance, drug combination therapy was applied based on previous results indicating that MRSA shows increased susceptibility to free fatty acids and surfactants. The optimal ratio of the three components and the synergistic effects of possible combinations were investigated. The combinations were directly applied to clinically isolated strains, and the combination containing 15 μg/mL of oxacillin was able to control SCCmec type III and IV isolates having an oxacillin minimum inhibitory concentration (MIC) up to 1024 μg/mL; moreover, the combination with a slightly increased oxacillin concentration was able to kill SCCmec type II. Phospholipid analysis revealed that clinical strains with higher resistance contained a high portion of 12-methyltetradecanoic acid (anteiso-C15:0) and 14-methylhexadecanoic acid (anteiso-C17:0), although individual strains showed different patterns. In summary, we showed that combinatorial therapy with a low concentration of oxacillin controlled different laboratory and highly diversified clinical MRSA strains.
Introduction
Over decades, the overuse of antibiotics has brought about multidrug-resistant bacteria [1]. Among these, methicillin-resistant Staphylococcus aureus (MRSA) is difficult to treat in communities and healthcare facilities owing to its quick spread and multidrug resistance [2]. In previous work, palmitic acid was reported to inhibit the biofilm formation of community-associated MRSA (CA-MRSA) LAC and its ∆agr mutant, which has a higher oxacillin minimum inhibitory concentration (MIC) (>200 µg/mL) [9]. Concomitant treatment with palmitic acid and oxacillin led to a dramatic increase in the efficacy of oxacillin. Similarly, span85, which is mainly used in medicines, cosmetics, textiles, paints, and petroleum as an emulsifier, thickener, anti-rust agent, and biodegradable surfactant based on a natural fatty acid (oleic acid) and sugar alcohol (sorbitol), eliminated the biofilm of the MRSA strains and decreased the MIC of oxacillin on MRSA. However, the effects of palmitic acid and span85 were investigated only at fixed concentrations, and the combinatorial effect was not investigated for possible applications. Considering that soaps contain more than 10% palmitic acid and that span85 is used at a concentration of 0.5-5% in drugs and cosmetics, these compounds could be used to control resistant bacteria causing several skin issues. In an initial analysis, the effects of different concentrations of palmitic acid and span85 were tested for dose-dependency; furthermore, the ∆agr mutant, which is more resistant than the LAC MRSA strain, was used to evaluate the antibacterial activity of palmitic acid and span85. The results showed that palmitic acid at concentrations lower than 1 mg/mL exerted no significant effect on the growth of the ∆agr mutant cells (Figure 1A).
However, it started to inhibit cell growth when the concentration was over 1 mg/mL. Span85 also inhibited the growth of the ∆agr mutant cells at concentrations over 0.1% (v/v) (Figure 1B). Collectively, the MICs for the individual antibacterial compounds were calculated as oxacillin = 256 µg/mL, palmitic acid = 2 mg/mL, and span85 = 2% (v/v) ≈ 19.14 mg/mL.
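As a unit check on that last figure (the density value is an assumption on our part, not stated in the paper): span85 (sorbitan trioleate) has a density of roughly 0.96 g/mL, so a 2% (v/v) solution corresponds to about 0.02 × 960 mg/mL ≈ 19 mg/mL, consistent with the ≈19.14 mg/mL quoted above.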
Response Surface Methodology Analysis to Study the Effect of the Interaction of Different Antibacterial Agents
The advantage of combination therapies is the reduction in the antibiotic concentration used, as multiple activities can better attenuate or evade the antibiotic-resistance mechanisms of pathogenic bacteria. Response surface methodology analysis using the Box-Behnken design was introduced to set up the optimal concentrations of the three antibacterial agents to effectively eliminate MRSA [21][22][23]. Using concentrations higher than the MIC of each compound is meaningless; thus, the MIC50 of each agent was selected for the Box-Behnken design, using Minitab 18 software to analyze the interactions and examine the desired response. The three significant variables, namely oxacillin, palmitic acid, and span85, were investigated at the levels shown in Table 1, based on the diagonal sampling method [24,25]. To monitor the effect of the oxacillin concentration, we selected three different concentrations of oxacillin, spanning the range from 0 to 100 µg/mL. The experimental design and results are shown in Table 2. The regression equation obtained after analysis of variance gave the response (optical density at 595 nm) as a function of the three significant variables. To obtain a polynomial equation, a quadratic model was fitted to the data by least squares, and all terms, regardless of their significance, were included in the following equation (1):
Y = aX1^2 + bX2^2 + cX3^2 + dX1X2 + eX2X3 + fX1X3 + gX1 + hX2 + iX3 + j, (1)
where Y is the OD595 response, X1: oxacillin, X2: palmitic acid, and X3: span85.
Table 1. Levels of the three variables used in the Box-Behnken design: oxacillin (X1, µg/mL): 0, 50, 100; palmitic acid (X2, mg/mL): 0, 0.75, 1.5; span85 (X3, %): 0, 0.25, 0.5.
Table 2. Box-Behnken experimental design matrix with experimental values of the cell growth of the ∆agr strain (centre point: 50 µg/mL oxacillin, 0.75 mg/mL palmitic acid, 0.25% span85; OD595 = 0.07 ± 0.00).
A surface plot for oxacillin, palmitic acid, and span85 is shown in Figure 2. Analysis of variance of the selected response showed a p value of <0.05, which indicated that the designed model was appropriate (Table 3). The surface plots showed that the concentrations of oxacillin and palmitic acid were important for the bactericidal effect (Figure 2).
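For readers who want to see what this fitting step amounts to, the sketch below reproduces it in Python with NumPy rather than Minitab. It is purely illustrative: the design points follow the three-factor Box-Behnken layout at the Table 1 levels, but the OD595 values are invented (they are not the study's data), and the candidate combination evaluated at the end is an arbitrary example.

```python
import numpy as np

# Factor levels as in Table 1: oxacillin (ug/mL), palmitic acid (mg/mL), span85 (% v/v).
# Box-Behnken layout: edge midpoints plus centre-point replicates.
# The OD595 responses are hypothetical placeholders, NOT the paper's measurements.
design = np.array([
    [0,   0.00, 0.25], [100, 0.00, 0.25], [0,   1.50, 0.25], [100, 1.50, 0.25],
    [0,   0.75, 0.00], [100, 0.75, 0.00], [0,   0.75, 0.50], [100, 0.75, 0.50],
    [50,  0.00, 0.00], [50,  1.50, 0.00], [50,  0.00, 0.50], [50,  1.50, 0.50],
    [50,  0.75, 0.25], [50,  0.75, 0.25], [50,  0.75, 0.25],
])
od595 = np.array([0.95, 0.40, 0.55, 0.10, 0.90, 0.35, 0.60, 0.12,
                  0.70, 0.25, 0.45, 0.09, 0.07, 0.08, 0.07])

def quadratic_terms(x):
    """Expand (x1, x2, x3) into the 10 terms of the full quadratic model."""
    x1, x2, x3 = x
    return [x1*x1, x2*x2, x3*x3, x1*x2, x2*x3, x1*x3, x1, x2, x3, 1.0]

# Least-squares fit of the ten coefficients a..j of equation (1).
X = np.array([quadratic_terms(row) for row in design])
coef, *_ = np.linalg.lstsq(X, od595, rcond=None)

labels = ["x1^2", "x2^2", "x3^2", "x1*x2", "x2*x3", "x1*x3", "x1", "x2", "x3", "const"]
for name, value in zip(labels, coef):
    print(f"{name:>6s}: {value: .5f}")

# Predicted OD595 (growth) for one candidate combination, e.g. 15 ug/mL oxacillin,
# 0.75 mg/mL palmitic acid, 0.25% span85; lower predicted growth = better kill.
candidate = [15, 0.75, 0.25]
print("predicted OD595:", float(np.array(quadratic_terms(candidate)) @ coef))
```

In the study, Minitab's response optimizer plays the role of the final step: it searches the fitted surface for the factor combination that minimizes the predicted growth, while the ANOVA summarized in Table 3 tests whether the fitted terms explain the response better than noise.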
Effect of Combined Therapy on the ∆agr Strain and Clinically Isolated Strains
Once the concentrations of the three significant variables (oxacillin, palmitic acid, and span85) were set with the ∆agr mutant by response surface methodology analysis and the response optimizer, we treated clinically isolated strains with the different combination therapies (Table 4). Low to high concentrations of oxacillin were considered, with the focus on the low concentrations, as antibiotic overuse has led to the emergence of multidrug resistance in MRSA. Thus, we examined whether the selected conditions were effective even for clinical isolates. Except for MRSA6230 and MRSA14459, the other MRSA strains were HA-MRSA, having SCCmec type II and type III, but all clinical strains were found to have an oxacillin MIC of over 128 µg/mL (Figure 3A). The MIC for each strain is listed in Table 5. Multilocus sequence typing and spa typing, which were determined in a previous study, are also included in the table [26,27]. In addition, biofilm formation was compared between the different oxacillin concentrations (Figure 3B). Depending on the degree of their antibiotic resistance, the clinical isolates were classified into high-resistance (MRSA8471 and MRSA9291), intermediate-resistance (MRSA2065, MRSA6288, MRSA7557, MRSA12779, MRSA14278, and MRSA14459), and sensitive (MRSA6230 and MRSA7875) groups.
Table 5. SCCmec type, oxacillin MIC, spa type, and sequence type (ST) of the strains used [26,27]:
(strain name missing): SCCmec IV, MIC 20 µg/mL, spa t008, ST8
MRSA2065: SCCmec III, MIC 1024 µg/mL, spa t037, ST239
MRSA6230: SCCmec IV, MIC 128 µg/mL, spa t324, ST72
MRSA6288: SCCmec III, MIC 1024 µg/mL, spa t037, ST239
MRSA7557: SCCmec II, MIC 1024 µg/mL, spa t9353, ST5
MRSA7875: SCCmec IV, MIC 128 µg/mL, spa t664, ST72
MRSA8471: SCCmec II, MIC 1024 µg/mL, spa t9353, ST5
MRSA9291: SCCmec II, MIC 1024 µg/mL, spa t601, ST5
MRSA12779: SCCmec II, MIC 1024 µg/mL, spa t2460, ST5
MRSA14278: SCCmec II, MIC 1024 µg/mL, spa t9353, ST5
MRSA14459: SCCmec IV, MIC 1024 µg/mL, spa t324, ST72
As a control experiment, we found that all three conditions were sufficient to kill the ∆agr mutant, which mimics the characteristics of HA-MRSA with attenuated virulence. Additionally, the growth of MRSA2065, MRSA6230, MRSA6288, MRSA7875, and MRSA14459 was totally inhibited by all three combinations (Figure 4A). All clinical SCCmec type III and IV strains have a much higher oxacillin MIC of more than 128 µg/mL, which makes it difficult to kill MRSA with a low concentration of oxacillin. However, our combination therapy sets were able to kill five different strains even with 15 µg/mL of oxacillin. Although the combination therapy was set up with the ∆agr mutant strain, the results showed that it was still sufficient to kill the clinical SCCmec type III and IV strains. However, the oxacillin concentration used was not enough to eliminate MRSA7557, MRSA8471, MRSA9291, MRSA12779, and MRSA14278, which were found to be SCCmec type II strains exhibiting multidrug resistance with an oxacillin MIC level of about 1024 µg/mL. Killing assays were carried out once again after confirmation of the oxacillin MIC; the results confirmed that even the SCCmec type II clinical isolates could be killed with 256 µg/mL of oxacillin (Figure 4B). The clinical isolates contained a high ratio of odd anteiso-fatty acids in the membrane (data not shown), and the biofilm was thicker in the clinical isolates [28]. The individual treatment of the clinical strains with palmitic acid and span85 showed a similar pattern to the ∆agr mutant (Figure 1), but with different values. However, the combination of the three components decreased the amount of each component required.
Characterization of Clinically Isolated Strains with Phospholipid Fatty Acid (PLFA) Analysis
MRSA strains with higher antibiotic resistance tend to have extraordinary features, such as compositional changes in membrane lipid, biofilm formation, persister cell formation, stable membrane integrity for membrane microdomain assembly for optimal oligomerization of PBP2a, high mecA expression, and increased cell surface hydrophobicity [29,30]. To elucidate the reason for the different effects of our combinations, we performed phospholipid fatty acid (PLFA) analysis, because the fatty acid composition of the cytoplasmic membrane can affect the antibiotic resistance of pathogenic bacteria [9,31].
PLFA analysis showed that a major portion of the phospholipids in the clinical MRSA strains contained abundant 12-methyltetradecanoic acid (anteiso-C15:0) and 14-methylhexadecanoic acid (anteiso-C17:0) instead of hexadecanoic acid (C16:0) (Figure 5). It is known that methyl branching modifies the thermotropic behavior and enhances the fluidity of lipid bilayers; it reduces lipid condensation, decreases the bilayer thickness, and lowers chain ordering through the formation of kinks at the branching point [32]. Highly resistant SCCmec type II strains appeared to show different PLFA compositions, except for MRSA7557, with a lower amount of 12-methyltetradecanoic acid (anteiso-C15:0) than type III and IV strains and a relatively higher amount of 13-methyltetradecanoic acid (iso-C15:0) and hexadecanoic acid (C16:0), although it was difficult to link this to the increased resistance of the SCCmec type II strains. Although the SCCmec type II strains showed different results from those of type III and IV, these results in clinical strains demonstrate the potential of combination therapy to decrease the oxacillin concentration while retaining the same antibiotic activity.
Bacterial Strains, Media, and Culture Conditions
For cell preparation, the ∆agr mutant [33] was cultured on tryptic soy broth (TSB) agar and/or in liquid broth. For pre-culture, a single colony of the strain from a TSB agar plate was used to inoculate 5 mL of TSB medium. Next, 1% (v/v) of the cell culture suspension was inoculated into a 96-well plate for the antibiotic resistance test, and the cells were cultivated overnight in an incubator at 37 °C without shaking unless stated otherwise.
Antibacterial Agents
Oxacillin, palmitic acid, and span85 were purchased from Sigma-Aldrich (St. Louis, MO, USA). Stock solutions of these agents were prepared at various concentrations in sterile dimethyl sulfoxide.
Analysis of Cell Growth and Biofilm Formation
Cell growth was measured at 595 nm using a 96-well microplate reader (TECAN, Männedorf, Switzerland). Biofilm formation was analyzed by crystal violet staining according to a previously described protocol [9]. Briefly, the supernatant was aspirated.
The biofilm was then fixed with methanol, air-dried, and stained with 200 µL of 0.2% crystal violet solution for 5 min. Next, the crystal violet solution was removed, and the biofilm was washed with distilled water and air-dried. Finally, the optical density of the biofilm was measured at 595 nm using a 96-well microplate reader.
Response Surface Methodology Analysis
After selecting the optimal concentrations for oxacillin, palmitic acid, and span85, combination therapies were optimized using Minitab software 18 (Minitab, State College, PA, USA; SPSS, IBM Corp. 2011, Version 18, Armonk, NY, USA) through the Box-Behnken design and response surface methodology analysis. Experiments were conducted in triplicate, and the cell growth of MRSA was determined. Coefficients were determined from the experimental values using the full quadratic model f(x, y, z) = ax^2 + by^2 + cz^2 + dxy + eyz + fxz + gx + hy + iz + j (a, b, c ≠ 0). Using surface plots, the relationships between the variables were investigated and validated.
PLFA Analysis
Briefly, 10 mL of liquid culture was cultivated in TSB with a 1% inoculum in an incubator at 37 °C with shaking at 200 rpm. Samples were collected at 8 and 16 h. Next, the samples were centrifuged at 3500 rpm for 20 min, and total fatty acids were extracted with 0.15 M citric acid buffer/chloroform/methanol (7:7.5:5, v/v/v) and incubated at 37 °C with shaking at 200 rpm for 2 h. The chloroform phase was collected, and the chloroform was slowly evaporated under compressed N2 to avoid oxidation. The sample was loaded onto a silicic acid column and then serially eluted with 5 mL each of chloroform, acetone, and methanol. The methanol phase was collected for PLFA analysis. Next, 1 mL of toluene was added to the sample, which was subjected to mild alkaline trans-methylation with 1 mL of KOH/MeOH at 37 °C for 15 min, followed by cooling to room temperature. A 2 mL aliquot of 4:1 n-hexane/chloroform was added, and the sample was then neutralized with 1 mL of 1 M acetic acid. Subsequently, 2 mL of Milli-Q water was added, and the phases were separated by centrifugation.
The upper hexane layer was removed, and this step was repeated with fresh 2 mL aliquots of 4:1 n-hexane/chloroform. The combined hexane fractions were concentrated under compressed N2, and the fatty acids were re-solubilized in chloroform and analyzed.
Conclusions
In this study, we examined the effect of oxacillin combined with the fatty acid palmitic acid and the surfactant span85, both chosen for their antibacterial characteristics, on clinical strains. To find the optimal conditions, we used the Box-Behnken design and response surface methodology analysis. We then proposed several conditions that were optimal for killing highly resistant clinical strains with very low concentrations of oxacillin. In addition, we showed that it is possible to kill more resistant strains, such as SCCmec type II strains, by increasing the oxacillin concentration. To elucidate the reasons for the high resistance of the clinical strains, PLFA analysis was conducted, and the results revealed different patterns of membrane fatty acid composition: more resistant strains contained a higher ratio of odd-chain fatty acids, such as 12-methyltetradecanoic acid (anteiso-C15:0) and 14-methylhexadecanoic acid (anteiso-C17:0). Although these differences could not be linked directly to the higher resistance of the clinical isolates, or to the effectiveness of a single combination in killing all the strains, the distinct PLFA patterns appeared to contribute to the higher resistance through the interaction between fatty acids and surfactants, which affects the membrane. Our results showed that by combining oxacillin with palmitic acid and span85, the same level of antibacterial effect could be achieved with a lower concentration of oxacillin, thereby reducing the possibility of the strain acquiring drug resistance. In conclusion, our data suggest a possible strategy for recycling currently safe antibiotics, in which their efficacy against resistant bacteria is increased through combined use with effective molecules.
Factors that Influence Food Security in Nicaragua and the Role of Home Gardening in Reducing Food Insecurity and Improving Income Food insecurity and malnutrition are widely recognized as global issues that require immediate attention using multifaceted approaches. In 2015, the Food and Agriculture Organization reported that in the last quarter of the century, undernourishment in the developing world was reduced by more than half, to an average of 12%. Despite several interventions, the United Nations Children's Fund reports that some countries still have high rates of chronic malnutrition. Nicaragua, for example, had a chronic malnutrition rate of 22% in 2013. Home gardening is a system of crop and animal husbandry on small plots of land within the vicinity of the home, practiced mainly to improve household food security and income. The purpose of this paper is to assess the major factors affecting food insecurity in developing countries, with specific emphasis on Nicaragua. The role of home gardening in improving the food security of developing countries is approached through studies from India, Bangladesh, Nicaragua, Senegal, Mexico and South Africa. Overall, this literature finds positive impacts of home gardening on reducing food insecurity while providing opportunities for improvement of income and quality of life. In conclusion, community gardening requires limited inputs and can be a useful tool in reducing food insecurity if barriers are addressed.
Introduction
Globally, there was a general declining trend in food insecurity, from 18.6% (1011 million people) in 1990 to 10.9% (795 million) in 2014 [1]. Food security entails aspects of economic access to and safety of food [2]. Chronic malnutrition, defined as long-term nutrient deprivation, is a major indicator of food insecurity [3]. The global decline in chronic food insecurity (prevalence of undernourishment) is largely due to a decline in global poverty [4]. However, even with declines in poverty coupled with declines in chronic malnutrition, Sub-Saharan Africa, South Asia and Latin America still showed higher levels of undernourishment in 2015 [1]. Moreover, projections by the Food and Agriculture Organization of the United Nations show that 637 million people will be undernourished in 2030, hence falling short of achieving zero hunger by 2030 [5]. The increase in chronic food insecurity translates into chronic malnutrition and is due to increased conflicts, climate-related shocks and economic slowdowns that have made it difficult to implement strategies to protect vulnerable populations [1,6]. Home gardening is seen as a necessary tool to avert food insecurity, given that it increases the availability of food and improves diet diversity. Additionally, home gardening improves income, enhances rural employment by encouraging off-season production and decreases agricultural production risks through diversification [7]. Nicaragua is a country in Central America that has recorded a reduction in poverty levels over the years but has persistently high levels of undernourishment [8,9]. The objectives of this paper are therefore to define the major factors that influence food security in Nicaragua and to examine the role of home gardening in improving food security, as well as the constraints to successful adoption of home gardening.
Causes of Food Insecurity in Nicaragua
There are several factors that influence food insecurity in Nicaragua, including poverty, education, employment opportunities, social capital, policies and climate change.
Poverty is defined in absolute, relative and subjective social terms [10]. The concept of absolute poverty is more applicable to developing countries and relates to having an income level that falls below some minimum (the poverty line) necessary to meet basic needs [10,11]. Relative poverty is defined based on the overall distribution of income in a country, set as a share of the country's mean income, whereas subjective poverty is based on what people perceive as the minimum income that a person or household needs in a specific society not to be considered poor [10,11]. Researchers have concluded that poverty in rural households is a key contributor to household food insecurity and thus to malnutrition [12][13][14]. Nicaragua is a developing country with reported poverty, food insecurity and malnutrition issues. Even though it is the second poorest country in the western hemisphere [15,16], Nicaragua has seen poverty levels drop from 42.5% to 24.9% between 2009 and 2016, and extreme poverty has dropped to below 15% [16,17]. As a result, there has been a decline in food insecurity, leading to a decrease in undernourishment from 55.1% to 20.1% between 1990 and 2010 [18,19] and to 16.6% in 2016 [20]. Among other reasons, the decline in undernourishment is due to the expansion of several government assistance programs, especially in rural areas of the Caribbean Coast [21,22]. However, around 2.4 million Nicaraguans still live below the poverty line, with some 83,000 living in extreme poverty [22]. Even within one country, extreme poverty, defined as living on less than $1.90 per person per day, differs between urban and rural areas [23]. Close to half of Nicaragua's population lives in rural areas [24,25]. According to Harvey, rural poverty rates are three times higher than the 14.8% in urban areas; 70% of the poor live in rural areas [26]. In rural areas, one in six households is extremely poor, compared with one in twenty in urban areas. Poverty is more severe in central Nicaragua and on the Caribbean coast, despite their high potential for agriculture and forest activities [27,22]. To sum up, in 2016 Nicaragua had a Global Food Security Index score of 50 out of 100 and was ranked number 72 out of 113 countries on food security. Moreover, in 2016 Nicaragua had an absolute poverty level of 24.9% and an undernourishment rate of 16.6% [9,16,17,19,28]. These statistics reflect high levels of poverty, food insecurity and malnutrition when compared with other countries.
Education
Formal education is one of the main determinants of an individual's income and a key factor for achieving economic and social opportunities [29][30][31]. Adult-specific informal education services such as agricultural extension can increase food security through the transfer of skills and behaviors [32]. According to 2014 statistics from the Education and Policy Development Center, 37% of 15-24 year olds in Nicaragua did not complete primary education. The same statistics showed that approximately 21% of boys and 15% of girls of primary school age did not attend school. Nearly 39% of female youth and 47% of male youth of secondary school age are out of school in Nicaragua [33]. Mothers in Nicaragua with a secondary education had 47% lower odds of moderate/severe household food insecurity compared with households with lower maternal education [34]. Higher maternal education was associated with lower food insecurity in Honduras [35], Bangladesh [36], and Mozambique [37].
Access to education and having education beyond elementary school were reported as key determinants of food security [37][38][39][40][41]. The connection between higher education status and improved food security may be because educated individuals often possess more assets and have access to better infrastructure, thus providing opportunities for non-agricultural employment and reducing dependence on agricultural sources of income [42]. Access to education was positively correlated with having fewer children: women with higher education levels had an average of two children, compared with six children among rural uneducated women.
Employment
Unemployment is positively associated with food insecurity [42][43][44]; it leads to a decline in the living wage and hence increases the risk of food insecurity [45]. The unemployment rate in Nicaragua fell from 8.2% in 2009 to 5.3% in 2013 but rose again to 5.9% in 2016, slightly above the world average of 5.7% [46]. Agriculture has been the main source of job creation, helping to stabilize Nicaragua's employment rate. Rural households earn 60% of their income from agriculture, 27% from nonfarm activities and 13% from transfers. However, agricultural jobs are mainly informal, low skilled and low income [22]. Despite improvement in primary education completion rates, lack of labor skills remains the major reason for unemployment [47]. Nicaragua has the lowest minimum wage in Central America, with all sectors having an hourly minimum wage below U.S. $2 per hour. This is one of the reasons that 29.6% of the population lives in poverty and 8.3% in extreme poverty [48]. Men and women are recruited for low-wage jobs in local agro-industries, road and house construction sites, and agricultural farms, and hence have low monthly incomes [49]. Some small-scale farmers have resorted to migrating to neighboring countries like Costa Rica and El Salvador during harvesting seasons in order to obtain money for sowing.
Social capital
Hanifan defined social capital as good will, fellowship, mutual sympathy, and social interaction among individuals in a social unit [53]. Stronger social networks and higher levels of social capital are consistently associated with better health and community well-being [54,55]. Nicaragua was found to have low social capital in terms of net percent trust and the community participation scale compared with other Latin American countries [56]. Social networks and social capital can provide the food insecure with private transfers in times of need that may decrease the severity of food insecurity episodes [57]. Social capital was found to be helpful in promoting sack gardening in the Kibera slum of Kenya, especially in obtaining seeds, shared space for placing the sacks, and topsoil, and in sharing of the produce [58]. Social capital improves food security by enhancing the unity of group members, access to information from external institutions and observance of group norms. For social capital to be effective in improving food security, it should be accompanied by human capital enhancement [59]. The low social capital of Nicaragua compared with other countries in Latin America may indicate less social interaction, a quality that could be a good coping strategy in food insecurity situations [56].
Policy
The Nicaraguan law on food and nutritional sovereignty and security was adopted in 2009 [60] to replace the unmet initiatives for a food security policy made in the late 1990s and early 2000s [61]. Past policies favored privatization of natural resources and deregulation of markets in favor of large agribusiness companies, hence dismantling programs that benefited small-scale farmers [62]. Under the food sovereignty law of 2009, local governments, civil society, and farmer and peasant organizations liaise to promote the development and adoption of food security policies with emphasis on food sovereignty [60]. The law is in agreement with international laws on the human right to food [63][64][65][66]. The agro-export industry and trade agreements are discussed here as the major policies that affect food security in Nicaragua. Agro-export industry: In 2017, Nicaragua's imports were higher than its exports, creating a negative trade balance [67]. Due to support from the government and from policymakers [68], growth in agricultural exports is significantly higher than growth in food production [69]. Unlike agro-exports, which are produced by large-scale farmers, smallholder farmers, who are the poorest in the region, handle more than 75% of domestic food production [70]. Domestic food production is therefore unable to meet household food demands, shifting consumption patterns towards imported foods [71]. It is therefore not surprising that Nicaragua imports a third of its grains for domestic consumption [72]. Trade agreements: All United States consumer and industrial exports enter Nicaragua duty free, with tariffs on U.S. agricultural products expected to be phased out by 2024 [73]. Nicaragua signed free trade agreements with Mexico in 1998, Chile in 2011, Panama in 2009, the Dominican Republic in 2002, Taiwan in 1967 and the European Union in 2010 [74]. Nicaragua has the second lowest GDP per capita in the western hemisphere after Haiti, which translates into having one of the lowest levels of exports [15]. Nicaragua is among the countries in Latin America with the fewest exports to the European Union [75], the United States, Asia and China [76]. Between 1992 and 2017, Nicaragua had a negative trade balance, indicating that Nicaragua is not maximizing the benefits of the free trade agreements [77]. This is heightened by the fact that after 1990, the price controls set by the Sandinista government of Nicaragua as a measure to reduce macroeconomic imbalances were eliminated [78]. This left Nicaragua with no direct public intervention for controlling prices. Due to globalization and trade liberalization, nontraditional crops like flowers and soya beans have become profitable, but peasant farmers lack the capital, technical skills, and access to infrastructure to compete in the export market. Therefore, peasant farmers cannot compete with cheap imports driven by free trade agreements [49]. Under the Food for Progress initiative, U.S. agricultural commodities are donated to Nicaragua and sold on the local market [79]. This may indirectly compete with local Nicaraguan produce for market share. This policy, along with support to the agro-export industry, may be responsible for the severe decline in prices for traditional locally produced staple foods. The decline in market prices may lead to lower income and higher input costs, hence forcing peasant farmers to abandon food production.
Climate change
The global climate risk index shows that Honduras, Myanmar and Nicaragua experienced the greatest effects of climate change from 1992 through 2011 [80].
Climate change has an impact on agriculture [81][82][83][84]. Due to its geographical location in the inter-tropical convergence zone, one sixth of Nicaragua's surface is in zones with high or very high sensitivity to climate events [85][86]. The Northern Caribbean coast is the highest-risk area for climate events, with a gradual decrease in risk towards the south [86]. Impacts of climate change are of utmost importance to Nicaragua because its economy largely depends on agriculture, cattle raising and fishing, all of which are highly sensitive to climatic conditions. Nicaragua has taken shocks from major climatic events including Hurricane Mitch in 1998, the 1972 earthquake in the capital Managua, landslides, and volcanic eruptions [87,88]. Hurricane Mitch of October 1998 created significant flooding and mudslides that were responsible for the loss of 30% of the coffee crop in Nicaragua [89]. Projections show that between 2020 and 2050 Nicaragua will have an average temperature increase of between 1 °C and 2 °C, and of between 3 °C and 4 °C by the end of this century. This will be accompanied by a reduction in precipitation at the national level and a slight increase in the Pacific region [85,90]. The dry corridor of Central America, of which 20% belongs to Nicaragua, is predicted to experience severe drought conditions [86]. Climatic events were responsible for annual economic losses of 1.89% of GDP between 1990 and 2012 [86]. Predicted climate events will affect food security, jobs, the economy, social structure and overall development [90].
Backyard Gardening as a Strategy to Reduce Food Insecurity in Developing Countries
There was a decline in global food insecurity from 2005 to 2015 [1], attributed to numerous multidisciplinary strategies aimed at reducing hunger [91][92][93][94][95]. However, the increase in the world's population and the driving forces that accompany it pose a threat to the success of current food security strategies, necessitating new or improved strategies to combat hunger [96,97]. Even with the decline in food insecurity between 2005 and 2015, one in seven people was still food insecure [1]. With declining arable land [98] and a predicted decline in precipitation [99][100][101], the currently employed food security strategies should be rethought. Strategies aimed at improving food security may be applied to both developing and developed societies depending on the existing social, political and economic resources available to design and implement the interventions [102]. Home gardening, also referred to as backyard gardening, is a food security strategy that has been promoted for decades in urban, rural, developed and developing communities [102][103]. Usually home gardening projects start with a demonstration community garden followed by skill transfer to backyard/home gardens. Home gardens are usually small portions of cultivated land within walking distance of homes, planted with mixed crops and some livestock with the aim of providing supplemental food and income [104]. Ninez describes home gardens as small-scale production systems that are located near dwellings and have the primary purpose of supplying both plant and animal items that would not otherwise be obtainable, affordable or readily available from local markets, field cultivation, hunting, gathering or fishing [105]. Home gardens employ family labor, low capital investment and simple technologies [105].
The following section reviews published literature on the role of home gardening projects in reducing food insecurity and increasing household income. In this section, nutrition and income benefits as well as constraints to successful implementation of home gardens are discussed. A Google Scholar search was conducted with the search terms "economic and nutritional importance of home gardening, community gardening, backyard gardening in developing countries." This search yielded over 10,000 results, which were filtered to include only studies involving more than one garden and that included evaluation aspects. Studies were limited to just one per country based upon sample size, leading to a result of six different studies in six different countries: India, Senegal, Bangladesh, Mexico, Nicaragua and South Africa.
Home gardening in India
A cross-sectional study in the Andaman and Nicobar islands of India was conducted to determine species diversity and productivity using a sample of 430 home gardens [106]. The Andaman and Nicobar islands have limited opportunities for employment due to the lack of industries and factories. Plant diversity: Home gardening was characterized by subsistence as well as commercial farming. The planted species in Andaman and Nicobar Islands' home gardens included vegetables, fruit trees, palms, spice trees and agro-forestry trees. This diversity of crops grown in home gardens has been reported by others [107][108][109][110] and may be tied to the economic status of garden owners [111]. The variety of plants in home gardens is advantageous in maintaining plant genetic resources [112][113][114]. In most households, the variety of species in home gardens is determined by the household's capacity to obtain social capital and planting material [102]. Labor and input supply: In terms of labor, home gardens in Andaman and Nicobar frequently employed family labor with designated gender roles. Use of family labor is indicative of limited capital investment and the small size of home gardens. Although generally limited, mechanization and hired labor were employed, especially during tilling, and in some cases draft animal power was used. Maroyi reported the frequent use of family labor and limited use of machinery and hired labor in his study on the characteristics of home gardens in Zimbabwe [109]. In the home gardens of Andaman and Nicobar, farmers did not apply pesticides to control diseases but used biological control measures, with pheromone traps applied around the garden. Rice was cultivated without weeding or use of insecticides, but vegetable gardens required both weeding and insecticide application. The Andaman and Nicobar islands are endowed with plenty of rainfall distributed over nine months, but home gardeners lacked the knowledge to utilize rainwater effectively. Gender roles in home gardening: Even though women play a vital role in providing food for most households, their involvement in home gardening is determined by sociocultural norms [104]. In some cultures of rural Russia, Senegal and Latin America, women were reported as sole caretakers of home gardens [115][116][117]. On the other hand, women were reported to play only supportive roles at certain stages of plant production in home gardens [118]. Food and income benefits: More than 70% of the vegetable, rice and fruit yields from home gardens in the Nicobar and Andaman islands were sold in the market, leaving the remainder as food for household consumption.
Some produce, especially coconut kernels, was fed to pigs, which in turn were not sold but distributed to neighbors. Researchers have reported several uses of home gardening, including as a source of fresh food, a reduction in the food budget, a hobby and relief of emotional stress [119]. Reyes-Garcia et al. [120] reported that home gardening is practiced for reasons other than economic benefits [120]. The average monthly commercial yield from all home gardening components in this Indian study was estimated to be over US $2,000. Other economic benefits from home gardening reported by other researchers include the promotion of entrepreneurship and rural development [112,121], sale of produce, development of small cottage industries [104], the purchase of additional food items, and savings for education and other services [122]. Major constraints to gardening in the Andaman and Nicobar islands were ineffective use of rainwater and the absence of mechanization, which were thought to limit production. These challenges are different from those reported in other studies, which included limited extension services, limited finance and credit facilities, lack of adequate water, cultural barriers and inadequate labor [104,118,123].
Home gardening in Senegal
In Senegal, researchers determined the impact of community gardening on health and food security by comparing longitudinal data between a baseline survey done in 1970 and four post-gardening surveys in the 1980s [124]. Cross-sectional data were compared among the four surveys done in the 1980s. The families surveyed in the 1980s were different from those surveyed in the 1970s, hence there were no paired t-tests for the results. Food and income benefits: Food frequency data showed that nutrient intakes of iron, retinol activity equivalents, calcium and ascorbic acid did not differ between the two time periods. Income from vegetable sales during the dry season supplemented income obtained from the sale of main crops in the normal wet farming season. Income from vegetable sales was more helpful in the case of bad harvests of main crops due to inadequate rainfall. Among the Navajo, residents reported positive economic savings from home gardens by not spending money to purchase vegetables in the markets [125]. In other studies, researchers reported an absence of evidence for home gardens significantly improving nutrient intake [126,127]. Galhena et al. [102] and Lombard [128] acknowledged that selling excess produce at farmers' markets could supplement income for progressive individuals. Unlike income from main crops, which is budgeted by men, women kept the profits from vegetable sales [102,128]. This raised the purchasing power of the women, enabling them to better cater for basic needs in the family. Marek et al. [129] explained that family income might indirectly affect health and nutritional status through improving educational attainment, raising cognitive performance and improving environmental sanitation and personal hygiene [129]. Participants from home gardening households consumed more vegetables than those from non-home-gardening households, indicating a possibility of better health [130][131]. Other researchers reported increased consumption of vegetables and general diet diversity due to home gardening [130][131][132].
Bangladesh
In response to high rates of night blindness in Bangladesh, Helen Keller International (HKI) initiated a pilot gardening project in conjunction with nutrition education [133][134].
The aim of the HKI project was to increase the number of households that sustainably produced dark green leafy vegetables and fruits throughout the year and to increase the variety and consumption of vitamin A-rich foods. Collaborating gardening projects with existing local programs: During program implementation, HKI constantly reviewed ongoing gardening programs in Bangladesh to learn from others' experiences. The HKI project was implemented in partnership with local NGOs as a means of linking the gardening project to ongoing developmental activities. Creating partnerships with local NGOs is necessary because the various causes of malnutrition are linked and local NGOs are already involved in the community and may already have a good understanding of the target community. This partnership may also be necessary to avoid replication of studies. In most cases, local NGOs have established infrastructure within the community, a factor necessary for scale-up. Linkage with already established NGOs also ensures that the target community has access to multiple developmental services and that the program can be sustained after HKI terminates its services. FAO states that institutionalization of gardening projects is key to sustainability. Sustainability implies independence from long-term external inputs and participation of all stakeholders [135]. Benefits of home gardening in Bangladesh: After a year of implementing home gardens in Bangladesh, HKI reported an increase in the percentage of households practicing year-round gardening of multiple vegetable varieties on fixed plots from 3% to 33%. HKI reported a decrease in households without a garden from 25% to 2%. Production of a variety of vegetables and consumption of beta-carotene-rich foods increased significantly in households that practiced improved home gardening. An increase in vegetable consumption among children and adults in a community-based participatory gardening study was also reported [131]. Households that participated in improved gardening saw a rise in income from the sale of garden produce, which was used primarily for the purchase of food, with the remaining profits being used for developmental activities [131]. Constraints encountered in the HKI project: Even though the HKI project realized several benefits, households needed a regular supply of quality seeds and inputs to sustain a change in gardening practices. The HKI project reported that poor soil fertility, inadequate fencing, and poor irrigation infrastructure were additional constraints to sustained home gardening. Even though the community-based participatory research by Carney et al. [131] reported improved food security, health and economic benefits, it had challenges comparable to those reported in this HKI project and by other researchers [104,118,123]. The nutritional benefits of home gardening in the HKI study were limited by cultural beliefs about child feeding and the intake of certain foods during pregnancy. This is different from a study done in Nepal, where Jones et al. [136] reported a cultural practice in which women showed an increase in consumption of special foods during pregnancy [136].
Home gardening in Nicaragua
In Nicaragua, a research study was done to determine the extent to which home gardens could effectively lead to food sovereignty and why farmers resist changing their food consumption strategies to embrace biodiverse home gardens [51].
This study was done through in-depth interviews across four cooperative societies, including sixteen men and eight women in the Estelí and Somoto municipalities of Northern Nicaragua. In addition, researchers interviewed the project management team members as key informants to verify the responses from participants. Benefits of and constraints to home gardening in Nicaragua: Results showed that 90% of farmers perceived home gardens as contributing to diversified and healthy diets while offering an opportunity to save money by not purchasing food from local supermarkets [51]. Likewise, Arimond et al. [137] reported increasing food availability and access through production for household consumption as one of the major pathways by which agricultural interventions influence nutrition [137]. Farmers in this Nicaraguan study reported that the cost of raw materials and the amount of labor discouraged them from engaging in home gardening. Other farmers in the same study reported that it was cheaper to plant more coffee, sell it to the international market, and in turn purchase goods from the local market [51]. The same situation was reported in another review [137]. The high cost of inputs, the need to construct fences and unreliable rainfall, especially in the dry season, were perceived as the major constraints to home gardening in this Nicaraguan study. Lack of sufficient or appropriate land near the home was also perceived to hinder gardening. The authors reported that farmers perceived that the cost of materials outweighed the benefits from home gardens. In this coffee farming community, 95% of respondents who engaged in subsistence growing of crops such as corn and beans depended on the international sale of coffee to buy seed and other materials. However, respondents explained that the sale price for vegetables in the markets was low yet transport costs to the markets were high, making the sale of vegetables unprofitable. Other farmers explained that they lacked experience growing vegetables and that it would require them to re-prioritize their labor and economic investments to accommodate home gardens. The costs of seed saving and storage techniques as well as food preservation technologies were a hindrance to successful home gardening.
Home gardening in a Mayan community of Mexico
Using semi-structured interviews and participant observations, a study in Mexico aimed to identify the principal factors that defined the type, number and production performance of home gardens in a Mayan community in the state of Yucatán [138]. In this Mayan study, 31 home gardens were assessed through cash flows and production cost ratios. Ninety percent of households reported keeping a home garden primarily for economic reasons and because they enjoyed exchanging products with other members of the village. Residents established home gardens for sharing produce with neighbors, relatives and community members [139]. In descending order, respondents in this Mayan Mexican community study identified having fruit trees, fowl, pigs, vegetables, ornamentals and cattle. Some home gardens had pigsties, irrigation systems, and chicken pens. Livestock, fowl and pigs had negative cash flows due to the high cost of fodder and high initial capital. Households reported that the major requirements for improvement were water for plants and animals, animal enclosures, motivation, time, and technical assistance.
South Africa
In Eatonside, South Africa, researchers assessed the impact of home gardens on nutrient intake, access to food and dietary diversity in pre-school children [140]. Food consumption and dietary diversity were based on a 24-hour recall of the children's consumption as reported by their caregivers. Analysis of nutrient intakes before the project started showed that average nutrient intakes were below the recommendations for optimal nutrition, except for protein, which was double the recommendation, and vitamin A, which was above the RDA. Changes in food frequency: Using 24-hour recalls conducted at the start and end of the project, Selepe & Hendriks reported an increased frequency of consumption of fresh fruits and vegetables. The consumption of nuts and legumes doubled by the end of the project. The number of children consuming dark green vegetables and other vegetables increased by 25%. The number of children consuming fish and eggs increased by almost a quarter. Paired t-tests showed statistically significant changes in the consumption of vitamin A-rich vegetables, other vegetables (seeds, nuts and legumes), cereals, meat, organ meats and milk. In this South African study, improved dietary diversity, representing a direct positive impact of home gardens, was reported. In other studies, researchers reported increased consumption of vegetables due to the promotion of gardening [141][142][143]. Increased diet diversity as a result of promoting home gardening was also reported by researchers in Bangladesh [130,144]. Changes in nutrient intake: According to Selepe & Hendriks, nutrient intake from consumed foods was established using a computer package, Dietary Manager. Using the before and after 24-hour recalls, the only significant changes in nutrient intake were for vitamin A and iron, but vitamin A intake was already well above the recommended daily allowance at the beginning of the project. No significant changes in fiber or macronutrient intakes were observed in paired t-tests. Intakes of energy, fat, fiber and calcium remained inadequate by the end of the study. A lack of significant changes in nutrient intakes has also been reported in other studies [126][127]. The success of agriculture and nutrition interventions should take into account women's economic status, education level, access to and use of health and sanitation services [137], and whether nutrition education services were provided [145][146][147][148].
Conclusion
This literature review presents challenges to the achievement of food security and discusses how researchers in different countries have employed home gardens as a tool to improve food security and economic stability in households. Using Nicaragua as a case study, the review pulls together researchers' views on how high levels of poverty, unemployment, unfavorable trade policies, low social capital and climate change are a hindrance to the attainment of food security. These challenges indicate that a multidisciplinary approach is required, one that includes experts in climate studies, government bodies, local non-government organizations, community leaders and community members. It can also be concluded that interventions aimed at improving food security should be tailored to the needs of the target community, given the differences in policies, climatic effects and demographic factors. In the literature reviewed, households engage in home gardening for economic reasons, social reasons or to obtain food for household members. The importance of home gardens varies not only among communities but also within households in the same community.
This may indicate that researchers hoping to start community gardening should design home gardening interventions to cater for various household expectations. Home gardening projects should also allow for the promotion of different crop and animal varieties to meet the varying staple food needs of the community. Barriers to home gardening include labor, limited inputs, poor soil fertility, inadequate fencing, cultural beliefs and poor irrigation infrastructure. These barriers should be addressed as much as possible to attain maximum benefits. In conclusion, given the economic and food production benefits discussed in the review, home gardening can be used to improve household food security in developing countries. Developing countries like Nicaragua have limited budgets to invest in large-scale agriculture, climate change mitigation, nutrition, health care and education. Home gardening, which requires only family labor, small pieces of land and a small initial investment, can be employed by governments, researchers, community organizations and non-government organizations to improve food security.
PtdIns(3,4,5)P3-Dependent and -Independent Roles for PTEN in the Control of Cell Migration

Summary

Background: Phosphatase and tensin homolog (PTEN) mediates many of its effects on proliferation, growth, survival, and migration through its PtdIns(3,4,5)P3 lipid phosphatase activity, suppressing phosphoinositide 3-kinase (PI3K)-dependent signaling pathways. PTEN also possesses a protein phosphatase activity, the role of which is less well characterized.

Results: We have investigated the role of PTEN in the control of cell migration of mesoderm cells ingressing through the primitive streak in the chick embryo. Overexpression of PTEN strongly inhibits the epithelial-to-mesenchymal transition (EMT) of mesoderm cells ingressing through the anterior and middle primitive streak, but it does not affect EMT of cells located in the posterior streak. The inhibitory activity on EMT is completely dependent on targeting PTEN through its C-terminal PDZ binding site, but can be achieved by a PTEN mutant (PTEN G129E) with only protein phosphatase activity. Expression either of PTEN lacking the PDZ binding site or of the PTEN C2 domain, or inhibition of PI3K through specific inhibitors, does not inhibit EMT, but results in a loss of both cell polarity and directional migration of mesoderm cells. The PTEN-related protein TPTE, which normally lacks any detectable lipid and protein phosphatase activity, can be reactivated through mutation, and only this reactivated mutant leads to nondirectional migration of these cells in vivo.

Conclusions: PTEN modulates cell migration of mesoderm cells in the chick embryo through at least two distinct mechanisms: controlling EMT, which involves its protein phosphatase activity; and controlling the directional motility of mesoderm cells, through its lipid phosphatase activity.

similarly generated by PCR with the sense primer PTEN351tail Eco-S. A vector expressing the C2 domain of TPTE was produced similarly by using the TPTE C2 clone S and TPTE C2 clone AS. Construction of an expression vector for GFP-TPTE based upon EGFP-C1 has been previously described [S3]. All experiments here use expression of TPTEg from the nomenclature of Tapparel et al. [S4]. TPTE-R was produced by using the primer TPTE reactivate S and its reverse complement. Mutagenesis of the potential TPTE active-site cysteine residue was performed, but caused the protein to express poorly in E. coli, in the chick embryo, and in cultured HEK293 cells, presumably through effects on protein stability.

Oligonucleotides: In each case, the indicated oligonucleotide sequence and its reverse complement were used for PCR mutagenesis.

RNA-Interference Experiments: An siRNA "smartpool" targeting the chicken PTEN gene was purchased from Dharmacon. siRNA was diluted to a concentration of 1 mM in 20 mM KCl, 6 mM Hepes (pH 7.5), and 200 mM MgCl2, and electroporated according to the protocol and conditions used for the plasmid constructs. Standard whole-mount in situ hybridization used an antisense probe against cPTEN according to the method of Wilkinson and Nieto [S6].

Substrate Preparation and Phosphatase Assays: Poly-Glu:Tyr (4:1) (Sigma) was phosphorylated by using 1 mg of polymer in 0.2 ml of a kinase buffer (50 mM Hepes [pH 7.4], 12 mM MgCl2, 1 mM EGTA, 1 mM 2-mercaptoethanol, and 18.5 MBq
γ-[33P] ATP), with incubation with 1 mg insulin receptor kinase domain (Upstate) for 1 hr at 32°C before addition of 0.5 mg further kinase and incubation for another hour. The reaction was stopped by addition of 0.5 ml 20% TCA at 4°C and centrifuged at 14,000 × g for 10 min, and the pellet washed once in fresh 10% TCA. The pellet was then resuspended in 500 ml 1 M Tris (pH 7.4) and dialysed twice at 4°C against 10 mM Tris (pH 7.4) by using a 3,500 MW cut-off dialysis cassette (Pierce). 3-[33P] PtdIns(3,4,5)P3 was prepared as previously described [S7]. PtdIns(3,4,5)P3 assays were conducted with substrate vesicles prepared by sonication of 100 mM phosphatidylcholine, 10 mM unlabeled PtdIns(3,4,5)P3, and 100,000 dpm 3-[33P] PtdIns(3,4,5)P3. These were incubated in 50 mM Tris [pH 7.4], 150 mM NaCl, 1 mM EGTA, and 10 mM DTT with 1 mg enzyme for 30 min at 32°C. PolyGlu-Tyr(P) phosphatase assays were conducted in 50 mM Tris (pH 7.4), 1 mM EGTA, and 10 mM DTT with 1 mg of enzyme and 100,000 dpm (approximately 1 mg) of phosphorylated substrate per assay, also at 32°C for 30 min. Reactions were terminated directly by the addition of 500 ml of ice-cold 1 M perchloric acid and 100 mg/ml BSA, left on ice for 30 min, and spun at 15,000 × g at 4°C for 10 min. The supernatant was removed, and ammonium molybdate was added to a final concentration of 10 mg/ml. After extraction with 2 vol of toluene/isobutanol (1:1 vol/vol), the upper phase was removed and radioactivity was determined by scintillation counting. pNPP assays were performed with 2 mg of enzyme, 20 mM pNPP (Sigma), 50 mM Tris (pH 7.4), 1 mM EGTA, and 10 mM DTT, and activity was measured by absorbance at 405 nm.

Figure S2. PTEN Expression in Chick Embryos at Developmental Stages HH1-HH8. Expression of PTEN mRNA was addressed by in situ hybridization in the developing chick embryo. Sections taken as indicated by the dashed line are shown in (C′) and (F′).

Embryos were transfected with expression vectors encoding (A) GFP TPTE C2 or (B) GFP DdPTEN C2 and observed during development as described in the text. Merged bright-field and fluorescence images for the start of the experiment (t = 0) and the end (t = 20 hr) of the experiment are shown. Expression of each C2 domain interferes with the normal directional patterns of migration through the embryo; see, for example, Figure 1 and Figure S1.

Figure S8. Overexpression of the PTEN C2 Domain Activates Akt/PKB. PTEN null 1321N1 astrocytoma cells were infected with baculoviruses encoding GFP or the indicated GFP-PTEN fusion proteins. Cells were incubated for 24 hr and lysed before the endogenous Akt/PKB was immunoprecipitated and assayed in vitro against the peptide substrate Crosstide. Activity data are presented as the mean ± standard deviation. Expression of GFP and PTEN in these lysates was assessed by western blotting with antibodies against GFP and PTEN. The A2B1 monoclonal antibody used to detect PTEN recognizes an epitope that is in the C-terminal tail and is absent in GFP-PTEN C2 only.

Figure S9. The Localization of the Indicated GFP-PTEN Fusion Proteins Is Shown. Embryos were transfected and development was allowed to proceed for 15 hr before cells were photographed. Only the GFP-PTEN C2 + tail construct seems strongly membrane localized, with the C2-only and C2 ΔPDZ constructs displaying some weak membrane enrichment.

Figure S10. TPTE Lacks Detectable Phosphatase Activity. (A) An alignment of the active-site P loop of PTEN, TPIP, and TPTE is shown, with the PTEN catalytic-cysteine residue boxed in red.
The black boxing shows the threonine and aspartic-acid residues at which TPTE differs from the active phosphatases PTEN and TPIP in this region and the mutation of these residues performed in reactivated TPTE (called TPTE-R). (B) When expressed as a GFP-fusion protein in HEK293 cells, TPTE is located largely on the plasma membrane. (C) The protein preparations used in the presented phosphatase assays (here and in Figure 6) are shown with Coomassie total-protein stain and anti-GST western blotting (WB). The GST-TPTE fusion protein contains all of the main intracellular regions of the protein, lacking the N-terminal transmembrane domains. (D) The activity of the indicated PTEN and TPTE proteins was assayed with either 33P-radiolabeled PtdIns(3,4,5)P3 or phosphorylated polyGluTyr as substrate. Data are shown as mean activity and standard deviation from triplicate assays. (E) Activity is shown against the artificial substrate para-nitrophenol phosphate (pNPP) as the mean measured optical density (OD) from duplicate samples. (F) TPTE lacks activity against other phosphoinositide substrates. Two micrograms of recombinant GST-PTEN or GST-TPTE were incubated with each phosphoinositide lipid for 1 hr at 30°C before released inorganic phosphate was measured with malachite green reagent. Lipids were present as synthetic diC8-soluble compounds at 100 mM. An image of the experiment is shown. No significant increase in measured OD at 620 nm was detected in TPTE samples relative to control (data not shown).

                           WT
EMT block                  8/9    0/9    9/10   1/6    7/9
Cell migration: directed   1/9    8/9    1/10   0/6    2/9
Cell migration: random     0/9    1/9    0/10   5/6    0/9

PTEN mutant and double-mutant proteins are identified from the combinations of the top row and the left column. Each experimental set shows the three possible experimental outcomes: an EMT block with cells failing to escape the primitive-streak graft; random cell migration, with cells escaping the graft, but migrating randomly through the embryo; or directed cell migration, with cells escaping from the graft and following their normal migratory tracks through the embryo. Numbers represent the frequency of each outcome and the number of experiments performed.
Effective Thermal Physics in Holography: A Brief Review It is well-known that a Rindler observer measures a non-trivial energy flux, resulting in a thermal description in an otherwise Minkowski vacuum. For systems consisting of large number of degrees of freedom, it is natural to isolate a small subset of them, and engineer a steady state configuration in which these degrees of freedom act as Rindler observers. In Holography, this idea has been explored in various contexts, specifically in exploring the strongly coupled dynamics of a fundamental matter sector, in the background of adjoint matters. In this article, we briefly review some features of this physics, ranging from the basic description of such configurations in terms of strings and branes, to observable effects of this effective thermal description. Introduction Thermodynamics is ubiquitous. Typically, for a collection of large number of degrees of freedom, be it strongly interacting or weakly interacting, a thermodynamic description generally holds across various energy scales and irrespective of whether it is classical or quantum mechanical. The underlying assumption here is a notion of at least a local thermal equilibrium, for which such a formulation is possible. Intuitively, this is simple to define: a thermal equilibrium occurs when there is no net flow of energy. Typically this can be characterized by an intensive variable, temperature, with zero or very small time variation. The smallness needs to be established in terms of the smallest time-scale that is present in the corresponding system. While thermodynamics has a remarkable reach of validity, equilibrium is still an approximate description of Nature, at best. Most natural events are dynamical in character. Of these, a particular class of phenomena can be easily factored out, that of systems at steady state. While steady state systems are not strictly in thermodynamic equilibrium, they can be described in terms of stationary macroscopic variables. For such systems, there is a non-vanishing expectation value of a flow, such as an energy flow or a current flow, which does not evolve with time. Typically, such states can be reached asymptotically starting from a generic initial state, or they appear as transient states before time evolution begins. In this article, we will consider a similar situation. The prototype will consists of a bath degrees of freedom, which is assumed to be infinitely large and will serve the purpose of a reservoir. In this bath background, we will consider the dynamics of a probe sector. In this sector, a stationary configuration can be easily constructed, by dumping all the excess energy into the reservoir. For example, consider a non-vanishing current flow in the probe sector. There will be work done to maintain the constant current flow, therefore it is expected that the actual description is dynamical. However, if we engineer the bath sector as a source of providing this energy, or a sink in which this energy is deposited, the resulting configuration remains stationary. In the framework of quantum field theory (QFT), a similar construction was considered by Feynman-Vernon in [1], based on the Schwinger-Keldysh formalism of [2,3]. For some review on this formalism, see e.g. [4,5,6]. In recent times, much work has gone into the reformulation of the Schwinger-Keldysh formalism, see e.g. [7,8,9]. 
While much of our subsequent discussion potentially has a leg deep inside this formalism, we will not make explicit use of it, to keep a terse discourse. The basic idea is based on the so-called thermofield double construct, which appeared long ago in [10]. Subsequently, in [11], a connection of classical black holes with the thermofield double construct was also established. For us, these two ideas are sufficient.

Consider the basic idea behind the thermofield double. Consider a quantum mechanical system, with a Hamiltonian H and a complete set of eigenstates |n⟩, such that

H |n⟩ = E_n |n⟩ ,   (1.1)

where E_n is the energy of the corresponding eigenstate. Evidently, {|n⟩} constitute a basis of the Hilbert space. Let us now double the total degrees of freedom, by considering two copies of the same system. At the level of the dynamics, these two copies of degrees of freedom are non-interacting. Therefore, the Hilbert space of the doubled quantum system is spanned by {|m⟩_1 ⊗ |n⟩_2}. Given this, the thermofield double state is defined as:

|TFD⟩ = (1/√Z) Σ_n e^{-β E_n / 2} |n⟩_1 ⊗ |n⟩_2 ,   Z = Σ_n e^{-β E_n} .   (1.2)

This is certainly a special state in the doubled quantum system. We can assign a density matrix corresponding to this state: ρ_TFD = |TFD⟩⟨TFD|. This is a pure density matrix, as can be explicitly checked by establishing ρ_TFD² = ρ_TFD. Given such a pure density matrix, let us compute the reduced density matrix while integrating out one copy of the system. Thus we obtain:

ρ_th = Tr_2 ρ_TFD = (1/Z) Σ_n e^{-β E_n} |n⟩_1 ⟨n|_1 .   (1.3)

The reduced density matrix, denoted above by ρ_th, appears thermal in nature, with a temperature β^{-1}. Thus, given the thermofield double state, one can construct an equivalent thermal description. The process of integrating out one copy of the system may be conducted in various ways: this can be thought of as integrating out a subsystem to compute entanglement between the two. In the context of a black hole, similar to [11], or in the presence of a causal horizon, one can construct a Kruskal extension of the geometry. This maximal extension of the geometry can be thought of as the thermofield double and, by integrating out one side, an effective thermal density matrix is obtained. It is also clear from the above discussion that, given any gauge invariant observable or a collection of such operators acting on the untraced system, denoted by O, the expectation value is simply given by

⟨TFD| O |TFD⟩ = Tr ( ρ_th O ) ,

which is the thermal expectation value.

A similar picture holds true in the Holographic framework, which will be the primary premise in our subsequent discussions. In [12], eternal black holes in an asymptotically AdS geometry were proposed to be dual to two copies of the conformal field theory (CFT). Each copy corresponds to the dual CFT that is defined on the conformal boundary of AdS. The basic picture is represented in figure 1.

Figure 1: The Penrose diagram corresponding to an eternal black hole in AdS. The left and the right boundaries are where the dual CFT is defined, which are schematically denoted by φ^0_L and φ^0_R, respectively.

In the Euclidean signature, the corresponding thermofield double state is created by the Euclidean path integral over an interval of β/2. The thermofield double state, defined in (1.2), is maximally entangled from the point of view of the doubled degrees of freedom. Tracing over one copy produces a thermal effective description and this seems to lie at the core of the construction. Motivated by this, one can surmise that a qualitative emergent description of a thermofield double state ensures an effective thermodynamics.
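To make the reduction in (1.3) concrete, here is a minimal numerical sketch (not part of the original article) that builds the thermofield double state for a toy two-level Hamiltonian and checks that tracing out the second copy reproduces the thermal density matrix; the energies and the value of β below are arbitrary illustrative choices.

```python
import numpy as np

# Toy spectrum: a two-level system with energies E_0 = 0 and E_1 = 1 (arbitrary units).
energies = np.array([0.0, 1.0])
beta = 2.0                                      # inverse temperature, arbitrary choice
Z = np.sum(np.exp(-beta * energies))            # partition function of a single copy

# Thermofield double state |TFD> = Z^{-1/2} sum_n exp(-beta E_n / 2) |n>_1 |n>_2
coeff = np.diag(np.exp(-beta * energies / 2.0)) / np.sqrt(Z)   # coefficients c_{n1 n2}
tfd = coeff.reshape(-1)                         # state vector in the doubled Hilbert space

# Pure density matrix of the doubled system, then partial trace over copy 2
rho_tfd = np.outer(tfd, tfd)
rho_1 = rho_tfd.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Thermal density matrix of a single copy at temperature 1/beta
rho_th = np.diag(np.exp(-beta * energies)) / Z

print(np.allclose(rho_tfd @ rho_tfd, rho_tfd))  # True: rho_TFD is pure
print(np.allclose(rho_1, rho_th))               # True: the reduced state is exactly thermal
```

The same check works for any spectrum one substitutes into `energies`; only the interpretation of β^{-1} as the effective temperature of the reduced description is being illustrated here.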
For this to happen, the essential ingredients is a black hole like causal structure. An intriguing idea relating quantum entanglement and the existence of an Einstein-Rosen bridge has recently been proposed in [13]. Let us now crystalize our discussion towards our specific goal. All of the above discussions are assumed to go through the framework of QFT. In general, of course, for weakly coupled QFT systems, explicit perturbative calculations are sometimes feasible, although those are certainly not simple for processes involving real-time dynamics. Furthermore, the existence of such a small coupling is far from guaranteed in Nature, e.g. the quark-gluon plasma (QGP) at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) at TeV-scale (see e.g. [14]), or the cold atoms at unitarity at eV-scale (see e.g. [15]). In general, at strong coupling, powerful symmetry constraints in terms of Ward-Takahashi identities or non-perturbative Schwinger-Dyson equations can sometimes help. However, for many explicit cases, these approaches have limitations. One can certainly try to formulate such complicated issues on a computer, using lattice-methods and its modern generalizations, at the cost of analytical control over the physics. An interesting avenue is to explore the Gauge-Gravity duality, or the Holographic principle, or the AdS/CFT correspondence [16,17,18,19]. While all these words, in a precise sense, carry different meaning, we will not distinguish between them. The basic statement here is: A large class of strongly coupled QFTs, such as the N = 4 super Yang-Mills (SYM) theory with an SU(N c ) gauge group in the limit N c → ∞, is dual to classical (super) gravity in an AdS 5 × S 5 geometry. By this duality, one translates questions of strongly coupled large N c gauge theories into questions of classical gravity. The latter is a more familiar and tractable framework for explicit calculations. Although this class of QFTs are not what one would like to understand for experimental processes in the RHIC or the LHC, they certainly can serve the purpose of instructive toy examples. Mathematically, the duality statement schematically takes the form: where the left hand side is an expression in the CFT and the right hand side is defined as the gravity partition function in AdS, subject to a specific boundary condition. Correspondingly, correlations in the CFT can be calculated by taking functional derivatives on the LHS, with respect to φ 0 . The duality relates this correlation function to a similar calculation in the bulk gravity scenario on the RHS. For a detailed discussion on correlation functions in this framework, see [20] for Minkowski-space correlators, [21] for a Schwinger-Keldysh framework, [22] for a realtime analysis, [23,24] for a detailed account of the real-time correlation functions. During the past couple of decades, a wide range of research has been carried out in this framework, in the context of quantum chromodynamics (QCD) as well as several condensed matter-type systems. Popularly, these efforts are sometimes dubbed as AdS/QCD or AdS/CMT literature. By no stretch of imagination, we will attempt to be extensive: for some recent reviews, see e.g. [25,26,27] for the former, and [28,29] for the latter. The Holographic framework, which can also be viewed as the most rigorous definition of a theory of quantum gravity, is a promising avenue to explore complicated gauge theory dynamics, qualitatively. 
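The schematic relation described above, an expression in the CFT equated with the gravity partition function subject to a boundary condition for the bulk field φ, is the standard GKP-Witten dictionary. A minimal statement of it, in commonly used conventions rather than any normalization specific to this review, reads:

```latex
\Big\langle \exp\!\Big( \int_{\partial\mathrm{AdS}} d^4x\; \phi_0(x)\,\mathcal{O}(x) \Big) \Big\rangle_{\rm CFT}
\;=\; Z_{\rm grav}\big[\,\phi\big|_{\partial\mathrm{AdS}} = \phi_0\,\big],
\qquad
\langle \mathcal{O}(x_1)\cdots\mathcal{O}(x_n)\rangle
\;=\; \frac{\delta^n \ln Z_{\rm grav}}{\delta\phi_0(x_1)\cdots\delta\phi_0(x_n)}\bigg|_{\phi_0=0}.
```

Here φ_0 is the boundary value of the bulk field and O is the dual gauge-invariant operator it sources; the functional derivatives on the right realize the statement in the text that CFT correlators follow from the bulk partition function.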
There already exists a large literature analyzing various dynamical features of thermalization, quench dynamics for strongly coupled systems. We will not attempt to enlist the references here, however, present one example of new results, such as scaling laws in quantum quench processes [30]. For a more general discourse on non-equilibrium aspects in Holography, see e.g. [31]. Even though remarkable progress has been made in understanding dynamical issues, they remain rather involved and, in general, far more complex than equilibrium physics. Thermal equilibrium is particularly simple since it can me macroscopically described in terms of a small number of intensive and extensive variables. Intriguingly, for steady state configurations, which is neither in precise thermal equilibrium nor fullly dynamical, an effective thermal description may hold. See for example, [32,33,34], in systems at quantum criticality [35] or aging glass systems [36]. In this review, we will briefly summarize a similar construct for a wide class of strongly coupled gauge theories, within the Holographic framework. As mentioned earlier, we have one bath sector and one probe sector. These are made of the adjoint matter of an SU(N c ) gauge theory and a fundamental matter sector, respectively. A canonical example of this is to consider an N = 2 hypermultiplet matter as a probe introduced in the N = 4 SYM system. We will work in the limit N c N f , where N f is the number of the fundamental matter. This limit ensures the suppression of backreaction by the matter sector and we can safely treat them as probes. In a more familiar language, this limit is similar to the quenched limit in the lattice literature, wherein one ignores loop effects of quark degrees of freedom (which are the fundamental sector here), but includes loop effects in the gluonic matter (which is the adjoint matter). The adjoint matter sector in (p + 1)-dimension comes from the low energy limit of a stack of Dp-branes, from the open string description of the brane. Equivalently, the closed string low energy description of the same stack of branes is given by a classical supergravity background in ten-dimensions. Gauge-Gravity duality is an equivalence between the two descriptions. Now, in this picture, one can introduce an additional set of N f Dq branes, in the limit N f N c , which introduces an additional set of open string degrees of freedom. In the probe limit, the gravitational backreaction of the Dq-branes are ignored and therefore these only have an open string sector description. This open string sector, geometrically, can be studied by analyzing the embedding problem of Dq branes in the given supergravity background. Within this framework, many interesting physics have been uncovered within the probe fundamental sector, specially the thermal physics, see e.g. [37,38,39,40]. There is a vast literature on this, and we will not attempt to provide a substantial reference here. For us, the steady state configuration will be engineered in this probe sector, by exciting a U(1)-flux on the probe D-brane worldvolume. This steady state will be maintained by working in the N f N c limit. Pictorially, this is demonstrated in figure 2. The non-linear dynamics of the brane, along with the U(1)-flux will induce an event horizon to which only the brane degrees of freedom are coupled, with an open string analogue of equivalence principle. 
Qualitatively, therefore, one inherits a black hole like causal structure and a corresponding thermofield double description. As we have discussed above, we are thus led to a thermal density matrix starting from a maximally entangled state. This construction lies at the core of our subsequent discussions. Interestingly, this description can be explicitly realized both on a string worldsheet, in which one studies a probe long open string in a supergravity background, by studying the dynamics of Nambu-Goto (NG) action; as well as on a probe D-brane by studying the Dirac-Born-Infeld (DBI) action. In the latter, although open strings are present, they may not appear explicit. Based on these ideas, a lot of interesting physics has been explored over the years. Here we will merely present a few broad categories and some representative references, which we will not explicitly discuss in this review. In [41,42], the drag force on an external quark passing through the N = 4 SYM was calculated, meson dissociation by acceleration was explored in [43], the causal structure on the string worldsheet was discussed in details in [44]. Stochasticity and Brownian motion associated with the worldsheet event horizon was explored in [45,46,47,48]. For a review on the hard probe dynamics, see e.g. [26,27], for a quark dynamics review, see e.g. [49], and for a general review, see e.g. [25]. The idea of ER=EPR has been explored on the string worldsheet in e.g. [50,51,52,53]. On the probe brane, a non-equilibrium description of phase transition and effective temperature is explored in e.g. [54,55,56,57,58]. Relatedly, thermalization on this probe sector is discussed in e.g. [59,60,61]. This article is divided into the following parts: In the next section we begin with a brief description of how non-linearity can result in a black hole like causal structure, and how this is currently being investigated to understand features of QCD. In section 3, we introduce the Holographic framework. In the next three sections, we discuss some explicit and instructive features of the effective temperature on a string worldsheet as well as a D-brane worldvolume, based on analytically controllable examples. Section 7 is devoted to a generic discussion of the physics, without reference to any explicit example. Finally, we conclude in section 8. Temperature, Outside the Folklore The standard folklore concept of temperature is certainly in describing equilibrium thermal systems, at least in a local sense. In physics, there are various ways to define the temperature of a system: In thermodynamics, temperature is defined as an intensive variable that encodes the change of entropy with respect to the internal energy of the system. In kinetic theory, the definition of temperature can be given in terms of the equipartition theorem for every microscopic degree of freedom. In linear response theory, temperature can also be defined in terms of the fluctuation-dissipation relation. Outside the realm of equilibrium thermal description, the notion of a temperature can sometimes be generalized. This is a vast and evolving topic in itself and we will refer the interested to reader to e.g. [62]. One particular method, which is also relevant for our discussions, is to use the fluctuation-dissipation relation to define temperature in a non-equilibrium process with a slow dynamics, see e.g. [63]. In particular, these apply reasonably well within classical and quantum mechanical systems at steady state. 
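For orientation, the textbook definitions alluded to above can be written compactly. These are standard statements, quoted here with explicit factors of k_B, ħ and c rather than in the natural units used elsewhere in this review; the last relation, the Unruh temperature of a uniformly accelerated observer, is the one at work when a Rindler observer assigns a thermal description to the Minkowski vacuum.

```latex
\frac{1}{T} = \frac{\partial S}{\partial E}\bigg|_{V,N}
\quad\text{(thermodynamics)},
\qquad
\Big\langle \tfrac{1}{2}\, m\, v_i^2 \Big\rangle = \tfrac{1}{2}\, k_B T
\quad\text{(equipartition, per quadratic mode)},
```
```latex
\langle \xi(t)\,\xi(t') \rangle = 2\,\gamma\, k_B T\, \delta(t-t')
\quad\text{(fluctuation-dissipation for a Langevin bath)},
\qquad
T_{\rm Unruh} = \frac{\hbar\, a}{2\pi\, c\, k_B}
\quad\text{(uniform acceleration } a\text{)}.
```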
However, it is unclear, even within such systems, whether these ideas hold at strong coupling. Intriguingly though, strong coupling seems to suggest a much simpler scenario. Most of our review will concern with the standard strongly coupled systems (and toy models) within the framework of gauge-gravity duality, where a fluctuation-dissipation based temperature is already explored in [64]. However, in this section, let us briefly review some recent interests and activities in strongly coupled quantum chromodynamics (QCD) itself. The Schwarzschild radius of a typical hadron of mass 1 GeV turns out to be ∼ 10 −39 fm. This estimate assumes gravitational interaction in determining the Schwarzschild. Instead, one replaces the gravitational interaction by the strong interaction, which amounts to multiplying the above estimate by a factor of α s /G N , where α s ∼ O(1) is the strong coupling constant and G N is the Newton's constant. This yields a Schwarzschild radius ∼ 1 fm [65]. Let us discuss further motivation, following [65]. As described in [66], the non-linear effects of a medium for simple electrodynamics can lead to the trapping of photons. This is described in terms of an effective Lagrangian, denoted by L(F ), and an emergent metric of the form: The photon trapping surface is simply obtained by solving the equation g 00 = 0. Thus, even with a simple U(1) theory, non-linear effects lead to an black hole like configuration. Now, QCD or any such non-Abelian gauge theory, is inherently non-linear and produces a non-trivial medium for itself. In this case, the effective action can be fixed to be (see the discussion in [65]): where (g QCD F ) represents a dielectric variable of the medium and g QCD is the bare QCD coupling. Perturbatively, this function can be evaluated and at one-loop we get: where Λ is the cut-off scale. Setting = 0 yields an algebraic solution for g QCD F , which is given by This already indicates a possible event horizon structure, since this zero is the locus where the effective kinetic term in (2.2) changes sign. Note that, this event horizon structure is already visible at the perturbative level with a non-linearly interacting theory. However, this is not strictly rigorous, since the perturbative result for assumes a small g QCD . Thus near the = 0 locus, perturbative calculations are not valid. Based on the hints above, the basic conjecture was made in [65]: due to confinement, the physical vacuum is equivalent to an event horizon for quarks and gluons. This event horizon can be penetrated only through quantum processes such as tunnelling. This constitutes a QCD analogue of the Hawking radiation, which is hence thermal. While the basic idea is essentially based on the thermofield double type construction, the above also makes certain predictions in terms of an universality in thermal hadron production in high energy collisions, in terms of an effective temperature that is defined in terms of the QCD confinement-scale. This is based primarily on two inputs: (i) A thermal description of high energy collision processes with an effective temperature determined in terms of the QCD string tension, or the gluon saturation momentum [67,68]; (ii) The experimental data of hadron production across a large energy scale from GeV to TeV exhibits a universal thermal behaviour, with an effective universal temperature T eff ∼ 150 − 200 MeV [69,70,71]. The resulting essential consequences are as follows: Color confinement and vacuum pair production leads to the event horizon. 
The only information that can escape this horizon is a color-neutral thermal description with an effective temperature. The resulting hadronization is essentially a result of such successive tunnelling processes. Also, there is no clear notion of "thermalization" through standard kinetic theoretic collision processes. A thermal description emerges as a consequences of the strongly coupled dynamics and the pair production. We will end our brief review here, since this is still an active field of research and is an evolving story. The Holographic Framework In this article, we will primarily review the progress made and important results obtained in the framework of holography, the AdS/CFT correspondence [16] to be precise. AdS/CFT correspondence can be viewed as a successful marriage between 't Hooft's early ideas on large N gauge theories as string theories [72] and, later, the idea of holography pioneered by 't Hooft and Susskind in [73,74]. This correspondence arises from a more fundamental duality between open and closed strings, in string theory, and has become the cornerstone of quantum gravitational physics. So much so that the modern understanding of the correspondence identifies it with the most rigorous definition of a quantum theory of gravity, in an asymptotically AdS-spacetime. For our purpose, we will only require the technical aspects of this correspondence. To do so, we briefly review the standard understanding of how this correspondence (or, duality) emerges. Closed string theories describe a consistent theory, and in the low energy limit, this consistent description can be truncated to various supergravity theories, in general. On the other hand, open string theories naturally arise as boundary conditions which contain the information of the string end point. The canonical construction begins with a stack of N Dp branes and analyzing the corresponding low energy description of the system. Let us recap the best understood example of this, i.e. when p = 3. For a stack of N D3-branes, the massless spectrum arising from open strings can be readily obtained. Assuming that the open strings are oriented, the degrees of freedom at the open string end points can be shown to transform under the adjoint representation of U(N ) group. Furthermore, global symmetry of this stack of D3-branes uniquely determine the interacting description in the massless sector of the open string spectrum: the N = 4 super Yang-Mills (SYM) in (3 + 1)-dimension. Also, the U(N ) gauge group splits naturally into a global U(1) and an SU(N ). For the gauge theoretic description of the system, we can safely ignore the overall U(1) mode, since this corresponds to an overall excitation, see e.g. [19] for more details. Sasaki-Einstein Manifold (SUSY) Einstein Manifold (no SUSY) Figure 3: A schematic presentation of a generic D-brane construction, A certain N number of D3-branes are placed on the tip of a cone, whose base could be a Sasaki-Einstein manifold (susy preserving), or simply an EInstein manifold (susy breaking). This picture assumes the cone is a six-dimensional manifold. This condition is not required and can be relaxed easily for other dimensional cases, at least, in principle. The red curves are strings beginning and ending on the stack of the D-branes, whose low energy limit can be captured by a standard gauge theory, similar to the N = 4 SYM. Alternatively, a stack of D3-branes will source gravity, which is captured by the closed string excitation of this system. 
In this case, such a stack of N D3-branes can be obtained explicitly by solving the low energy description of the corresponding closed strings, which is given by type IIB supergravity in ten dimensions. This solution is given by where µ, ν = 0, . . . , 3; i, j = 4, . . . , 9; α yields the inverse string tension, g s is the string coupling. The dilaton field, denoted by φ, sets the string coupling constant and the D3-brane sources the so-called Ramond-Ramond 4-form, denoted by C (4) . Defining a new radial coordinate, u = r/α and taking the α → 0 limit then decouples the near horizon physics of the geometry in (3.1) from the asymptotic flat infinity. The geometry becomes an AdS 5 × S 5 and this corresponds to the low energy description of the D3 branes' physics. Thus, the correspondence reads: type IIB superstring theory in AdS 5 × S 5 background is equivalent to N = 4 SU(N ) SYM theory in (3 + 1)-dimension. This (3 + 1)-dimensional geometry is simply the conformal boundary of the AdS 5 -background. A similar construction can be obtained for a generic D3-brane, placed on the tip of a cone, as shown in figure 3 and correspondingly obtain a similar duality. In fact, this can be further generalized for a stack of Dp-branes [75]. The construction above can now be generalized by introducing additional degrees of freedom. In the dual gauge theory, this corresponds to introducing additional matter sector transforming in different representations of the SU(N ) gauge group; in the gravitational description this corresponds to the inclusion of new gravitational degrees of freedom. A specially interesting case is to consider a matter sector that transforms in the fundamental representation of the gauge group. This can be done by adding a N = 2 hypermultiplet to the N = 4 SYM. Gravitationally, this can be realized by introducing a stack of N f D7 branes in the near-horizon (i.e. decoupled) AdS 5 × S 5 geometry of (3.1). A particularly instructive limit to explore is to take N f N such that gravitational backreaction of the additional D7 branes can be safely ignored. Pictorially, this is represented in figure 4. Evidently, these constructions can be generalized for a wide class of examples, following the approach in [76]. We will elaborate more on these constructions in later sections. The prototype model in our subsequent discussion is based on the dynamics of this additional set of branes, which capture the dynamics of open long strings. There is a "bath" with a large number of degrees of freedom which is provided by the stack of N D3-branes, in which N f D7branes are embedded, in the limit N f N . Generically, we intend to study the embeddings of a probe Dq-brane in the background of a large number of Dp-branes. In this probe sector, a steadystate can be engineered by simply pumping energy into the system, which can be dissipated into the bath without causing any energy change of the bath. This is facilitated by the N f limit, and away from this limit the above approximation breaks down. We will concentrate only on this probe regime. Before moving further, let us note the following: The fundamental matter sector, essentially, is described by long, open strings. It is not unreasonable to assume that there is a sensible limit in which the probe D-brane becomes irrelevant and the essential physics can be captured by the dynamics of explicit strings. 
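For reference, the supergravity background quoted earlier in this passage as (3.1) is the standard extremal D3-brane solution; a sketch in common conventions (the article's own normalizations may differ slightly) is:

```latex
ds^2 = H(r)^{-1/2}\,\eta_{\mu\nu}\,dx^{\mu}dx^{\nu}
      + H(r)^{1/2}\,\big(dr^2 + r^2\, d\Omega_5^2\big),
\qquad
H(r) = 1 + \frac{R^4}{r^4},
\qquad
R^4 = 4\pi\, g_s\, N\, \alpha'^2,
```
```latex
e^{\phi} = g_s,
\qquad
C_{(4)} \propto H(r)^{-1}\, dx^0\wedge dx^1\wedge dx^2\wedge dx^3
\;\;\big(\text{plus terms ensuring self-duality of } F_{(5)} = dC_{(4)}\big),
```
```latex
r \ll R:\qquad
ds^2 \;\simeq\; \frac{r^2}{R^2}\,\eta_{\mu\nu}dx^{\mu}dx^{\nu} + \frac{R^2}{r^2}\,dr^2 + R^2\, d\Omega_5^2
\;=\; \mathrm{AdS}_5 \times S^5 .
```

The last line is the near-horizon (decoupling) limit referred to in the text, in which the probe D7-branes are subsequently embedded.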
Indeed, for specific models, this limit can be made precise: For examples, if we introduce a mass of the N = 2 hypermultiplet sector, in the large mass limit, the probe D7 brane is pushed far away at the UV, leaving a family of strings that connect between the UV D7-brane and the IR D3-brane. In this limit, one can simply consider the open string sector only. Motivated by this, we will review the physics of open strings in an AdS-background, in the next section. The String Worldsheet Description We begin with reviewing a general description of a string worldsheet which is embedded in an AdS-background, following closely the treatment of [77,78]. The simplest case is to take an AdS 5 × S 5 geometry, in which the AdS/CFT dictionary is very well-known. The discussion below, however, is far more generic and applies to a more general supergravity background which can be bisected by an AdS 2 . Let us begin with the Poincaré patch: where R is the curvature scale of the AdS 5 , {t, x} represent the R 1,3 in which the dual N = 4 SYM theory is defined, and z is the AdS-radial coordinate. The conformal boundary is located at z → 0 limit and the infra-red (IR) is located at z → ∞. In the presence of a bulk event horizon, the IR is cut off at some finite location, which we denote by z h . Note that, it is possible to select general radial foliations of the bulk AdS geometry, such that the dual CFT (in this case, the N = 4 SYM) is defined on the corresponding Lorentzian manifold M 1,3 , which is realized as the conformal boundary of the bulk spacetime. This is best expressed in the so-called Fefferman-Graham patch [79], which is suitable to describe any asymptotically AdS geometry: where g AB is a function of the radial co-ordinate z, as well as the x A co-ordinates. The corresponding CFT is defined on the background whose metric is given by ds 2 CFT = g AB dx A dx B z→0 , which defines the line-element on M 1,3 . The background in (4.2) is uniquely determined in terms of the boundary data g AB | z→0 and a sub-leading mode of the function g AB (z), in a z-expansion around the conformal boundary. This sub-leading mode, essentially, contains the data of the CFT stress-tensor. The detailed procedure of extracting the resulting CFT stress-tensor, which is based on holographic renormalization, is discussed in e.g. [80,81,82]. The corresponding 't Hooft coupling is given by λ ≡ g 2 YM N c = R 4 /α 2 , where g YM is the gauge theory coupling and N c determines the rank of the gauge group, while α is the inverse tension of the string. The dynamics of the string is governed by the Nambu-Goto action: where {τ, σ} represent the worldsheet coordinates, α = l 2 s sets the string tension (while l s is the string length), and S boundary is a generic boundary term. The background manifold is described by the metric G µν (X), while X represents the coordinate patch chosen to describe the manifold. The various indices are chosen to represent the following: a, b, . . . denote the worldsheet coordinates, A, B, . . . denote the coordinates on the manifold where the CFT is defined (in general, denoted by M 1,3 ) and µ, ν, . . . denote the full ten-dimensional supergravity background. We will consider cases in which the string is extended along the radial direction, and therefore the boundary term can capture the coupling of the end-point with an applied external field: Before proceeding further, let us offer some comments regarding the boundary term. 
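For concreteness, the Poincaré-patch metric, its Fefferman-Graham form, and the Nambu-Goto action being described here take the standard forms below. The equation labels follow the cross-references in the text; the precise normalizations are assumptions on our part, as is the explicit form of the endpoint coupling to an external gauge field A_µ.

```latex
ds^2 = \frac{R^2}{z^2}\Big(-dt^2 + d\vec{x}^{\,2} + dz^2\Big),
\hspace{2.2cm} (4.1)
```
```latex
ds^2 = \frac{R^2}{z^2}\Big(dz^2 + g_{AB}(z,x)\,dx^{A}dx^{B}\Big),
\hspace{1.7cm} (4.2)
```
```latex
S_{\rm NG} = -\frac{1}{2\pi\alpha'}\int d\tau\, d\sigma\,
\sqrt{-\det\big(\partial_a X^{\mu}\,\partial_b X^{\nu}\, G_{\mu\nu}(X)\big)}
\;+\; S_{\rm boundary},
\hspace{0.6cm} (4.3)
```
```latex
S_{\rm boundary} = \int d\tau\; A_{\mu}(X)\,\partial_{\tau} X^{\mu}\,\Big|_{\text{string endpoint}} .
```

The boundary term is the minimal coupling of the string endpoint to an applied external field, which is what drives the steady-state configurations discussed later in the review.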
The physical picture is as follows: Given the ten-dimensional geometry, one considers such long strings which extend from the IR of the geometry all the way to the UV. Note that, these open strings must have an end point on a D-brane and therefore it makes sense to think of the end point as a point on a D-brane at the UV. There are various ways to think about such configurations, for our purpose we can introduce a UV cut-off z UV where the string ends. Such a string certainly carries an energy, as a function of {z IR , z UV }, where z IR is the IR end of the geometry. Irrespective of the details, one can naturally assign a physical mass-scale, M , associated to the heavy string with a static and constant profile: The √ λ behaviour simply comes from the string tension, which is proportional to α −1 . Thus, we simply introduce a fundamental degree of freedom (i.e. like a quark) with a non-vanishing mass. In the context of N = 4 SYM, this fundamental matter can sit inside an N = 2 hypermultiplet, for example. Given the above action in (4.3), one can solve the classical equations of motion. Instead of choosing a gauge and solving for the equations of motion, let us review the general solution of [83] in the embedding space formalism. We begin with a description of the AdS 5 geometry as an hyperbolic submanifold in R 2,4 , described by To describe a two-dimensional worldsheet, let us define a light-like vector M (τ ) which obeys: Using the relations above, it is easy to show that the general solution of (4.6) is given by As demonstrated in [83], (4.8) is an extremal surface inside AdS 5 . The overall sign in front of the first term in (4.8) corresponds to a purely ingoing and outgoing non-linear wave solutions, with respect to the location of the source. The induced worldsheet metric is subsequently given by Following the discussion in [78], we introduce the following: where, A 6 represents the proper acceleration, defined on the embedding space R 2,4 . The resulting induced metric is given by . (4.11) In general, A 6 is a τ -dependent function. However, for cases when A 6 is a constants, the worldsheet describes an AdS 2 -black hole. The corresponding causal structure is best described in terms of the Penrose diagram, which is shown in figure 5. The relevant change of coordinates are given in [78], which we also review below: for σ > A 6 . For σ < −A 6 , the corresponding patches are obtained by adding a π-shift to (4.12) and (4.13). For the region Abs(σ) < A 6 , the relevant coordinate changes are given by Note that, the worldsheet metric, in the {V, U } coordinate takes the form where the proportionality factor is a function of the U and V coordinates. For the case when A 2 6 < 0, the following coordinate change [78]: 17) Figure 5: The left diagram corresponds to A 2 6 > 0, with a subsequent causal structure which is divided in two regions by the σ = A 6 curve. The end points of the worldsheet, denoted by σ = ±∞ corresponds to two fundamental degrees of freedom with opposite charges. The right Penrose diagram corresponds to A 2 6 < 0, and the resulting causal structure does not possess any event horizon. Dashed lines on both diagrams correspond to curves with constant σ. This diagram is taken from [78], with the Authors' consent. again brings the worldsheet metric back to the desired form. A detailed analysis of various trajectories are discussed in [78], which results in the Penrose diagram shown in figure 5. 
The crucial point to note is the presence of a causal horizon in the case A 2 6 > 0 (the left Penrose diagram in figure 5), which is absent for A 2 6 < 0 (the right Penrose diagram in figure 5). We will now discuss a few simple and explicit examples. The string with a single end point at the UV, which is what is described above, can be written in a particularly recognizable form [53]: where the background is written in the standard Poincaré patch of (4.1). The positive sign corresponds to a retarded solution, in which energy flux propagates from the end point of the string to the Poincaré horizon and the negative sign corresponds to an advanced solution with a reverse energy flow. The time argument t r denotes retarded or advanced time, correspondingly. The above solution corresponds to an infinitely massive fundamental matter, for a non-vanishing but finite mass, the corresponding solutions are discussed in [84,85,86]. Asymptotically Uniform Acceleration We can describe the string profile in terms of the embedding space, however, in this case it is straightforward to adopt a Poincaré patch description of the profile. This can simply be done by using e.g. Poincaré slicing of the embedding space and subsequently choosing an appropriate vector M . Evidently, various slicing of the embedding space yields various AdS-metrics, and correspondingly the manifold on which the CFT is defined. For an explicit exposition of such slices, see e.g. [87]. Here we will discuss the particular case, in which an infinitely massive quark (i.e. the string end point) undergoes an uniform acceleration. In the standard Poincaré-patch, this worldsheet embedding is given by [88] x (t, z) 2 = a −2 + t 2 − z 2 . (4.20) Clearly, we have chosen a static gauge σ = z, τ = t to describe the above solution of the Nambu-Goto action in (4.3). The embedding function x(t, z) is simply one of the Minkowskidirection coordinates of the CFT. There is a clear sign choice involved in the function x(t, z), this corresponds to the overall charge of the string end point. For simplicity, we can easily think of them as quark and anti-quark degrees of freedom. The causal structure induced from the embedding in (4.20) is similar to a black hole in AdSbackground, in which the event horizon is located at z h = a −1 . Moreover, the embedding in (4.20) can be constructed by patching two sections of (4.18) and (4.19). Physically, because of the uniform acceleration, a given retarded string profile terminates at the worldsheet event horizon: z h = a −1 , where the local speed of propagation exceeds the speed of light. Therefore the rest of the profile needs to be completed accordingly. This can be done by smoothly patching the retarded solution with an advanced solution. This class of embeddings, along with various generalizations, were discussed in details in [53,50]. The vertical axis is the z-axis and the horizontal axis is the x-axis. The left diagram corresponds to t < 0 and the semi-circular profiles shrink as time increases towards positive values. On the right, we have t > 0 and the semi-circular profiles grow as time increases. In both, the advanced solution is denoted by the blue, thin curve and the retarded solution is represented by the red, thick curve. The green horizontal dashed line corresponds to the worldsheet event horizon, located at z h = a −1 . We have taken these diagrams from [53], with the Authors' consent. Patching a retarded solution and an advanced solution has been discussed extensively in [53]. 
The advanced and the retarded configurations cover non-symmetric regions of the configuration space. This construction is perhaps best represented in terms of the figures in [53], which we include in figure 6, with the Authors' consent. Uniform Circular Trajectory A simple but instructive example is to consider the string end point moving in a circle of constant radius. Evidently, the end point does undergo an acceleration. Let us review this based on [78]. Let us begin with the Poincaré slicing of the embedding in (4.6), given by The induced AdS is given by the Poincaré disc and the dual field theory is defined on an R 1,3 . To describe a string embedding, further, we need to specify the vector M , which is given by where the x 2 is constructed by taking an inner product with a Minkowski metric η µν of the vector x µ ∈ R 1,3 . To satisfy (4.7), we further impose: (∂ τ x µ ) (∂ τ x µ ) = 1. The parameter τ is physically identified with the proper time associated to the string end point. Fixing σ −1 = z, the embedding in (4.8) is given by Let us now describe the particular case, first discussed in [90]. In this case, the string end point is moving with a constant angular velocity ω, in a circle of radius r 0 . The solution can be described by a collection of the functions x µ (τ ). The non-trivial components are given by while (4.7) yields: Thus, the full solution can be represented by The induced geometry on the worldsheet inherits an event horizon at z h = 1/(ω 2 γv). Propagating modes across this radial scale become causally disconnected and therefore yield an effective temperature: On these simple examples, the general picture is already emerging and clear. We will not further explicitly discuss more possibilities, specially the ones that appear in the global patch of AdS. However, let us briefly summarize the physics. In the global AdS patch, the boundary theory is kept at a finite volume and likewise develops a mass gap. Thus, a rotating string will not develop an event horizon on the world sheet for any value of the frequency, unlike the Poincaré-slice result. In this case, for sufficiently large angular frequency ωL typical > 1, where L typical is the typical length-scale associated with the finite volume, the worldsheet develops an event horizon and measures an effective thermal description on the worldsheet. For ωL typical < 1, there is no event horizon and thus no effective thermal description holds. As pointed out in [78], this is similar to the behaviour of an Unruh-DeWitt detector [89], undergoing a circular motion in a compact space. The corresponding Hawking radiation of the string worldsheet was analyzed in terms of a synchrotron radiation in the dual gauge theory in [90]. There certainly are numerous interesting such string configurations which demonstrate very interesting physics, driven by the worldsheet event horizon. We will not elaborate more on examples, instead we will briefly discuss some recent advances in understanding the same physics from a slightly different perspective, in the next section. Chaos: A Recent Development A thermal effective description ensures that the string end point will undergo a stochastic motion, because of the thermal fluctuations. This results in a Brownian motion for the string end point. In [45], this effect was explored in details by considering fluctuations of the string around a classical saddle. 
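The Brownian motion mentioned above is often summarized, in the long-time and low-frequency limit, by a Langevin equation whose noise satisfies a fluctuation-dissipation relation at the effective worldsheet temperature. The schematic form below is a minimal sketch in the classical limit, not the full generalized Langevin treatment of [45,46]; m, γ and T_eff stand for the endpoint mass, the drag coefficient and the effective temperature.

```latex
m\,\ddot{x}(t) = -\gamma\,\dot{x}(t) + \xi(t),
\qquad
\langle \xi(t)\rangle = 0,
\qquad
\langle \xi(t)\,\xi(t')\rangle = 2\,\gamma\, T_{\rm eff}\,\delta(t-t'),
```
```latex
\langle x^2(t)\rangle \;\xrightarrow{\;t\to\infty\;}\; 2D\,t,
\qquad
D = \frac{T_{\rm eff}}{\gamma},
```

so the late-time motion of the endpoint is diffusive, with a diffusion constant fixed by the Einstein relation.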
The fluctuation degrees of freedom, effectively, propagate in a curved geometry with an event horizon and therefore exhibits random fluctuations due to the Hawking-Unruh radiation. In [46] a Schwinger-Keldysh description of the stochastic was discussed, which is based on a Kruskal-type extension of the worldsheet geometry. Such Brownian motions are expected to be dissipated in the medium, since the system is thermalized. Given a thermal system, how fast a small perturbation relaxes to the thermal value sets the thermalization time scale for such small excitations. This is closely related to the scrambling time for the system, which determines the speed at which a quantum system spreads a localized information. In [91], it was conjectured that black holes are the fastest scramblers in Nature, for which the scrambling time scales as the logarithm of the number of degrees of freedom. In the context of Holography, therefore, a strongly coupled system in thermal equilibrium will also satisfy this bound. The notion of ergodicity and thermalization are intimately related. In a semi-classical description, the physics of thermalization is further related to a particular notion of growth of n-point correlation functions, where n ≥ 4. Also, the correlator must contain operators which are not time-ordered. Conventionally, these are known as the out-of-time order correlator, or OTOC in brief. The basic idea here is rather simple, which we quickly review below. Given a classical system, a phase space description is provided in terms of canonical coordinates and momenta variables: q(t) and p(t), respectively. The notion of classical chaos is defined as a response of a classical trajectory at late times, in terms of the variation of it's initial value. Typically, a chaotic system is characterized by exponentially diverging trajectories at sufficiently late times: where the real number λ L is known as the Lyapunov exponent. The left most expression is the standard classical Poisson bracket. Within a semi-classical framework, the diagnostic of chaos, along with the notion of Lyapunov exponents, can be easily generalized for a quantum system, by replacing the Poisson bracket with a commutator, and finally, computing the square of this commutator to make sure that no accidental phase cancellation takes place. 2 Thus, a chaos Here, qualitatively, f (t) = 1 − C(t), while both f (t) and C(t) individually carries the information of an exponential decay (for f (t)) or growth (for C(t)) of the corresponding correlator. The exponential behaviour occurs in a regime between a O(1) time-scale, such as the dissipation time t d , and a parametrically large time-scale, such as the scrambling time t sc . This hierarchy of t d and t sc is ensured in the → 0 limit. diagnostic can be defined in terms of the following correlator: where W (t) and V (0) are generic Hermitian operators. The function, C(t) possess both timeordered and out-of-time ordered correlator; but the exponential growth is visible in the OTOC sector only. In terms of this diagnostic function C(t), the scrambling time is also defined as the time scale when C(t) ∼ O(1). The basic behaviour of the diagnostic function is pictorially demonstrated in figure 7. The basic idea here is simple: Given a thermal state, one computes an OTOC based on a Schwinger-Keldysh construction. Now, in the limit t t d , where t d is the dissipation time-scale, one observes a growth in the correlator. 
In case this is an exponential growth, it is simple to extract the corresponding Lyapunov exponent. This particular notion of the growth of an operator is intrinsic to a semi-classical description. It is not simple to calculate higher point OTOCs, in general and there are currently a handful of tractable examples exist. Nevertheless, in [93], a bound for the Lyapunov exponent was derived λ L ≤ 2πT , for a system with temperature T , and in natural units. It was further conjectured that the maximal chaos limit is saturated for holographic theories. For a black hole in AdS, this saturation is simply guaranteed by the near horizon dynamics, where a local Rindler description holds. This intuition is based on recasting the four point OTOC in terms of a two-two high energy scattering amplitude in the thermofield double picture, see e.g. [94,95,96]. Given the black hole like causal structure on the string worldsheet, it is therefore expected that such a saturation will hold on the worldsheet horizon as well. Indeed, it was explicitly shown in [97,98] that the maximal saturation occurs on the string worldsheet, with the effective temperature. Moreover, a corresponding soft sector effective action, which is a Schwarzian derivative action, was explicitly obtained in [99,100], and its coupling with various heavy modes were explicitly determined. The soft sector action can be simply obtained by embedding the string worldsheet in an AdS 3 -background and using the Brown-Henneaux large diffeomorphisms, projected on the worldsheet. This yields: where α is the inverse string tension, IR is a physical scale, typically given by the event horizon on the induced worldsheet, and ϕ(x) is the dynamical degree of freedom. This effective description has a natural interpretation as an Euclidean theory, however, because of the high derivative coupling, the Lorentzian description has pathologies. The soft sector physics, described by (4.32), is ultimately responsible for the maximal chaos on the worldsheet. Qualitatively, this is straightforward to understand. In the two-two elastic scattering process, the four point vertex of any semi-classical fluctuation of the string worldsheet can be resolved in terms of two interaction vertices of the fluctuation with the soft sector mode, and a propagator of the soft sector itself. This is, pictorially demonstrated in figure 8. This soft sector dynamics on the string worldsheet is in close resemblance with the soft sector dynamics of AdS 2 in the Jackiw-Teitelboim gravity [101]. Finally, by setting C(t) ∼ O(1), we can determine the scrambling time. Note, further, that a direct four point OTOC correlator yields a scrambling time t sc ∼ β log √ λ, where λ is the corresponding 't Hooft coupling [98]. The same result is obtained by analyzing the soft sector physics, and its' coupling with heavier modes in the string fluctuations [99,100]. It can now be anticipated that on a D-brane horizon, one would observe a similar physics, which is demonstrated explicitly in [99,100], in which the corresponding scrambling time depends both on the rank of the gauge group, N c , and the 't Hooft coupling. We will end this brief section here, leaving untouched a remarkable amount of recent research in related topics. Chaotic properties of strongly coupled CFTs, with or without a holographic dual is a highly evolving field of research. 
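Collecting the chaos diagnostics referred to in this passage in one place, in their standard forms; the coefficients and normalizations specific to the worldsheet analyses of [97-100] are not reproduced here, and the Schwarzian derivative is quoted only as a definition.

```latex
\big\{q(t),\, p(0)\big\}_{\rm P.B.} = \frac{\partial q(t)}{\partial q(0)} \sim e^{\lambda_L t}
\quad\text{(classical)},
\qquad
C(t) = -\,\big\langle\, [\,W(t),\, V(0)\,]^2 \,\big\rangle_{\beta}
\quad\text{(semi-classical diagnostic)},
```
```latex
\lambda_L \;\le\; 2\pi T
\quad\text{(MSS bound, natural units)},
\qquad
\mathrm{Sch}\big(f(t),t\big) = \frac{f'''(t)}{f'(t)} - \frac{3}{2}\left(\frac{f''(t)}{f'(t)}\right)^{2}.
```

Setting C(t) ~ O(1) in the exponentially growing regime reproduces the scrambling time t_sc ~ β log √λ quoted above.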
Moreover, the notion of chaos in quantum mechanical systems is a rich field in itself and only some explicit calculations using a particular semi-classical prescription have been made. Thus, we will leave a detailed discussion for future, and for now, shift our attention to the effective thermal description of a D-brane. The Brane Worldvolume Description: A Model Let us consider a stack of N c D3-branes sitting at the tip of a cone, with a Sasaki-Einstein 5-manifold base, henceforth denoted by SE 5 . When the SE 5 ≡ S 5 , the 10-dimensional geometry is given by AdS 5 -Schwarzschild×S 5 which is dual to the N = 4 super Yang-Mills (SYM) theory with an SU (N c ) gauge group. The corresponding gravity data are given by 3 where {t, x} are the field theory space-time directions, r ∈ [b, ∞) is the AdS-radial direction, R is the curvature of AdS and dΩ 2 5 is the metric on a unit 5-sphere. These branes source a self-dual F (5) . Here denotes the Hodge dual operator. The unit sphere metric can be written as The corresponding black hole temperature is given by The AdS curvature sets the 't Hooft coupling for the dual field theory via R 4 = α 2 g 2 YM N c , where α is the string tension and g YM is the gauge theory coupling. The matter content of N = 4 SYM is: the gauge field A µ , four adjoint fermions λ and three complex scalars Φ a (a = 1, 2, 3). This theory has an SU (4) ∼ SO(6) R-symmetry, which corresponds to the rotational symmetry of the S 5 in the dual gravitational description. To introduce fundamental matter, one introduces open string degrees of freedom which is equivalent to introducing additional probe branes, along the lines of [76]. As mentioned earlier, of particular interest is to add N f probe D7-branes, in the limit N f N c to to suppress backreaction. These probe branes are extended along {t, x} and wraps the S 3 ⊂ S 5 . The codimension 2 brane is described by {θ(r), φ(r)}. Isometry along the φ-direction implies φ(r) = 0, without any loss of generality. Thus, the embedding is described by a single function θ(r). In terms of the dual gauge theory side 4 , we introduce an N = 2 hypermultiplets in the background of N = 4 SYM. The hypermultiplets consists of two Weyl fermions, denoted by ψ andψ and two complex scalars, denoted by q andq. Here, {ψ, q} transforms under the fundamental representation of SU(N c ) and {ψ,q} transforms under the anti-fundamental. The dual operators corresponding to the D7-brane profile functions θ and φ are given by where Φ 1 is a complex scalar field in the N = 4 supermultiplet and m q is the mass of the fundamental quark. The Lagrangian of the worldvolume theory can be written in the N = 1 language, as follows: Here W α denotes the vector multiplet, Φ I , with I = 1, 2, 3 denotes the chiral superfields. Both of these are obtained from the N = 4 vector multiplet. On the other hand, Q r ,Q r , where r = 1, . . . , N f denotes the N = 2 matter sector. Further details are summarized in table 1, where the SO(4) ≡ SU (2) Φ × SU (2) R symmetry corresponds to the S 3 isometry, wrapped by the D7-brane. The transverse SO(2) symmetry can be explicitly broken by giving a mass to the hypers: m q = 0, which is proportional to the separation of the D3-branes and the D7-branes. 
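For orientation, the AdS5-Schwarzschild×S5 background described above can be written, in one common convention (a reconstruction, with b denoting the horizon radius), as

ds^2 \;=\; \frac{r^2}{R^2}\left( - f(r)\, dt^2 + d\vec{x}^2 \right) \;+\; \frac{R^2}{r^2 f(r)}\, dr^2 \;+\; R^2\, d\Omega_5^2 , \qquad f(r) \;=\; 1 - \frac{b^4}{r^4} ,

for which the corresponding Hawking temperature is T = b / (\pi R^2) in these conventions.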
The effective action for the probe D7-brane is given by the Dirac-Born-Infeld (DBI) La-grangian with an Wess-Zumino term 5 Here P [G ab + B ab ] denotes the pull-back of the NS-NS sector fields: G denotes the closed-string background metric and B denotes the NS-NS field; f ab is the worldvolume U (1) gauge field on the probe. The four-form potentials C (4) andC (4) yield the self-dual five-form. Collectively, {ξ} denotes D7-brane worldvolume coordinates, T D7 = µ 7 /g s denotes the D7-brane tension. Here g s is the string coupling constant. 6 We intend to design a system in which energy is constantly pumped into the probe sector. This can be achieved by exciting the gauge field 7 A = a 1 (r)dx 1 = (−Et + a(r)) dx 1 , This ansatz (5.12), which is consistent with the equations of motion, manifestly breaks the O(3) →O (2) in the x-plane. The constant electric field E is applied along the x 1 -direction, to which only the probe sector couples, in the probe limit. The function a(r) is governed by the equation of motion obtained from (5.11), using the ansatz above. The resulting effective action for the probe D7-brane is a functional of two functions θ(r) and a(r). The physical meaning of these functions can be understood from the corresponding asymptotic behaviour: In the dual gauge theory, the corresponding source and vevs are given by (in units b = 1) Here J 1 is the expectation value of the current, m q is the mass of the hypermultiplet and ψ ψ . In the limit of vanishing electric field, thermal physics drives a first order phase transition in this system, see e.g. [38,39], as (m q /T ) is tuned. This transition separates two phases: one 5 Here we are using the Lorentzian signature. 6 Note that, even though we take N f = 1, we work with an Abelian version of the effective action. This is clearly a truncation of the full non-Abelian action to an Abelian sector. We will assume such a truncation holds true. 7 We absorb the factor of (2πα ) in the field strength. containing bound mesonic degrees of freedom and a plasma phase, consisting of the N = 4 adjoint and N = 2 hypermultiplet degrees of freedom. A similar physics exists in the global patch of AdS as well [40]. In the presence of E but not background event horizon, i.e. b = 0, the boundary current in (5.15) by demanding a reality condition of the on-shell effective action, much like [41]. This exercise yields [103]: The gauge theory current expectation value vanishes when E = 0, which is expected; it also vanishes when θ(r * ) = 0. The latter corresponds to a shrinking of the S 3 (which is wrapped by the D7-brane) before the brane can reach the radius r * . For all our purposes, this radius r * acts as an event horizon on the D-brane worldvolume, thereby defining a causal structure similar to a black hole in AdS. We will make this connection more precise later. Geometrically, therefore, there is a close parallel to the purely thermal physics. When there is a black hole present in the background, the probe brane can either fall inside the black hole, or stay above it. This depends on the asymptotic separation between the D3 branes and the D7-branes. This is how a first order phase transition separates the two phases, as the parameter (m q /T ) is tuned. This is pictorially demonstrated in figure 9. Now, when the background black Figure 9: In the vanishing electric field limit, the D-brane (denoted by the curves above) can stay above the black hole (right-most), fall into the black hole (left-most). 
These two phases are separated by a critical embedding, corresponding to the middle one. hole is absent, but a non-vanishing electric field is excited on the worldvolume, one can identify two inequivalent class of probe brane profiles. These profiles are distinguished in terms of their boundary condition at the IR, namely: These boundary conditions are obtained either by regularity of the embedding function, or directly from the equations of motion. Pictorially, this is demonstrated in figure 10. This transition Figure 10: With a non-vanishing electric field, the new radial location r * emerges, which is shown as the dashed circular curve above. In this case, also, the embedding functions can be divided into two categories, separated by a critical one. The critical curve is subtle to define, since the conical singularity in the critical curve of figure 9 can occur either at the location of the background event horizon, or at r * . Here, we will choose the latter, which will be further justified later. is discussed in details in [105,106], and figures 9 and 10 are taken from [105]. Having discussed the classical physics, we now turn to analyze the physics of fluctuations of the probe brane sector. Fluctuations: Bosonic Sector The D7-brane fluctuations correspond to the meson operators in the dual N = 2 field theory. For example, the scalar meson operators are given by 8 To obtain the fluctuation action, we can expand (5.11) around the classical profile discussed above: where all the fluctuations δθ, δφ and δf ab depend on all the worldvolume coordinates of the D7brane. In general, the fluctuations can be coupled and therefore quite complicated to analyze. Substantial simplification occurs at θ 0 (r) = 0, which corresponds to the massless case. Also, we can consistently truncate the fluctuations oscillating only along the Minkowski-directions. With these, the effective scalar and vector fluctuation actions are given by Here the emergent metric S is given by where S x 2 x 3 is identical to the metric components in that plane, S S 3 is given by metric components along the S 3 . The components of S in the {t, x 1 ≡ x, u}-plane are given by This metric S is known as the open string metric [108], which we will elaborate on subsequently. Evidently, there is no reason that the background geometry, denoted by G, and the emergent metric S has the same causal structure. We will discuss some generic features of this emergent metric, however, for now, let us calculate the fermionic part of the action. Fluctuations: Fermionic Sector The fermionic fluctuations of the D7-brane correspond to the supersymmetric partners of the mesonic operators, discussed in the previous section. These operators are of two types 9 F α ∼qX ψ † α +ψ α X q , (5.28) 29) with conformal dimensions ∆ = 5 2 + and ∆ = 9 2 + . The quadratic order fluctuation action is somewhat involved in explicit form, and therefore we will simply write down the final massaged from, obtained in [110], which is given by where M is a mass matrix which we do not specify here. The corresponding equation of motion: It is clear that the fluctuation equation for the fermionic sector also couples to the effective metric S, which couples to the scalar and the vector fluctuations in the bosonic sector. These equations can be subsequently solved for the spectrum. It can already be said that the spectrum will have a quasinormal mode spectrum, analogous to a black hole geometry, when S contains an event horizon. 
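For completeness, the open string metric referred to above can be written compactly as (our paraphrase of the standard definition [108], with factors of 2πα′ absorbed into the worldvolume field strength)

S_{ab} \;=\; \gamma_{ab} \;-\; F_{ac}\, \gamma^{cd} F_{db} , \qquad \gamma_{ab} \;=\; P[G]_{ab} ,

i.e. the combination built from the induced metric γ and the gauge field strength F on the probe; its inverse S^{ab} arises as the symmetric part of (γ + F)^{-1}. It is this combination, rather than the induced metric itself, that governs the causal structure seen by the fluctuations.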
It turns out that, with a non-vanishing electric field on the classical probe profile, the metric S indeed inherits an event horizon at r * , which governs an effective thermal spectrum. Open String Metric: Some Features where we have redefined the time coordinate The full metric, hence, is given by For brevity, we do not explicitly write down the expressions for γ uu , γ xx . The background in (5.35) is asymptotically AdS, and in the IR, it has en event horizon. This is similar to AdS-BH geometry. However, (5.35) has a u-dependent curvature scalar and a singularity at u = 0. This singularity is present event in the u H = 0 limit, and is solely supported by the worldvolume gauge field E. With θ 0 = 0 and u H = 0, (5.35) takes a simpler form: Here we have suppressed the S 3 -directions. Given this geometry, we can easily check some global features, such as the status of energy conditions. For this, let us choose the null vector: n µ = (n τ , 1, 0, 0, 0) with n 2 τ = (−g xx /g τ τ ). It can be checked that E µν n µ n ν < 0, where E µν is the Einstein tensor corresponding to the metric (5.37). On physical grounds, this is not unexpected and in keeping with the idea that such an emergent geometry can not be obtained from any low energy closed string sector. Thus, this is an intrinsic open string description. relation between τ and t are needed. An effective temperature can be defined from the metric in (5.37): by Euclidean continuation of τ → iτ E , periodically compactifying the τ E direction and demanding Euclidean regularity yields: It is clear from the above formulae, that T eff > T . While the closed string low energy sector measures a temperature T , the corresponding open string sector measures T eff . The system is, thus, inherently non-equilibrium. However, in the N f N c limit, any heat exchange is suppressed and an equilibrium-like description holds. Note that (5.38) and (5.39) corresponds to the massless case: θ 0 = 0. When m q = 0, the general formula is obtained in e.g. [109]. Let us go back to (5.37). The similarity of the causal structure with a black hole can be established by going through a set of coordinate transformations that finally describes (5.37) is a Kruskal-type patch. Focussing on the near-horizon region, this can be achieved by: The metric in (5.37), in the {τ, u}-plane can be written as: For our purposes, ds 2 ⊥ will play no role. Correspondingly Kruskal-Szekres time and radial coordinates can be defined as: Thus, a Penrose diagram can be drawn which, schematically takes the form in figure 11. Finally, consider the τ = const hypersurfaces R L P F Figure 11: A qualitative picture of the Kruskal extension of the open string metric. We have drawn the Penrose diagram in a "square" form, which is not strictly correct. We will elaborate more on the Penrose diagram in later sections. This diagram is taken from [110]. in the Kruskal patch. In the following coordinate system: the τ = const hypersurfaces are simply R β ⊗ R x ⊗ R 2 . Nevertheless, for a given α, there are two roots of β, which are related by the symmetry: β → M 2 /4β. The corresponding two regions, parametrized by the two roots of β, are connected by a constant-size Einstein-Rosen bridge. All these features are very similar to a standard black hole geometry. D-Brane Description: A Different Model It is possible to further construct explicit examples of such probe brane configurations, in a background geometry. 
Since such constructions can be varied in richness, we will not attempt to provide a classification, rather we will present another explicit and instructive example. In this case, we are motivated by the analytical control the model provides us with. We begin with the standard AdS 5 × M 5 , where M 5 is some Einstein manifold (Sasaki-Einstein, if we want to preserve, at least, N = 1 supersymmetry). Let us choose M 5 ≡ S 5 . The probe degrees of freedom are now introduced by adding an N f D5-branes, as shown in table 2. In table 2, Brane 0 1 2 3 4 5 6 7 8 9 (N c ) D3 ----X X X X X X (N f ) D5 ---X ---X X X Table 2: Fundamental sector introduced in the background of N c D3-branes. the notation − stands for a direction along the worldvolume of the brane and X represents the directions transverse to it. The directions {0, 1, 2, 3} represents the Minkowski directions. The rest of the directions are an R 6 transverse to the stack of D3-branes. The D3-branes are extended along the Minkowski directions, and let us split R 6 ≡ R 3 ×R 3 . Now, let us use {ρ, Ω 2 } and { ,Ω 2 } to represent the two R 3 's. Thus, {4, 5, 6} corresponds to {ρ, Ω 2 } and {7, 8, 9} represents { ,Ω 2 }. This configuration, which preserves eight real supercharges, was studied initially in [111,112], from the perspective of analyzing a defect CFT system. The degrees of freedom are: an adjoint vector multiplet and a hypermultiplet coming from the D3-branes in (3 + 1)-dimensions. The probe D5 sector yields a (2 + 1)-dimensional hypermultiplet in the fundamental representation. The geometry is given by The D5-brane embedding function can be parametrized by a single function: ψ(u). We assume that P [Ω 2 ] = 0, whereΩ 2 denotes the metric components which yields the line element dΩ 2 2 . As in the previous case, there are distinct families of embedding which corresponds to different physics in the probe sector. Also, the analysis becomes simple when the fundamental sector is massless. This corresponds to setting ψ = 0. The induced worldvolume metric is simply an To study a non-trivial dynamics of the D5-brane, we can excite a similar U(1)-flux on the worldvolume. The corresponding fluctuation analysis, by virtue of the open string equivalence principle 10 , exhibits a similar coupling with the open string metric. The metric is given by Again, we do not explicitly write down the expressions of γ xx and γ uu . On the embedding ψ(u) = 0, the open string metric in (6.4) yields AdS 4 × S 2 when both u H = 0 and E = 0, or asymptotically. The IR-behaviour is different from an AdS 4 -BH, since the functional form of f (u) is different here. Also, demanding positivity of γ xx readily yields the expectation value of the current in the dual gauge theory. 11 The corresponding effective temperature is now given by The near-horizon geometry has a similar structure as a black hole and a corresponding Penrose diagram can be drawn. Further note that, due to a substantial analytical control of this system, in [113], current-current correlation function, in the probe sector, was explicitly calculated. This correlation function turns out to possess a thermal behaviour with a temperature T eff . In particular, this effective temperature appears in the fluctuation-dissipation relation. For other (scalar or fermionic) sectors, even though analytical calculations may not be feasible, because of the open string equivalence principle, a similar behaviour is expected. 
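Both effective temperatures above arise as ordinary surface-gravity temperatures of the corresponding open string metrics. The sketch below shows the generic computation for a metric of the near-horizon form ds² = −A(u) dτ² + du²/C(u) + ..., with A and C vanishing linearly at u = u_*. The sanity check uses the planar AdS5-Schwarzschild factors, not the specific γ_uu, γ_xx of the D3-D7 or D3-D5 systems (which are not reproduced here), so it only illustrates the procedure.

import numpy as np

def surface_gravity_temperature(A, C, u_star, eps=1.0e-6):
    # T = sqrt(A'(u_*) C'(u_*)) / (4 pi) for ds^2 = -A(u) dtau^2 + du^2 / C(u) + ...
    Ap = (A(u_star + eps) - A(u_star - eps)) / (2.0 * eps)
    Cp = (C(u_star + eps) - C(u_star - eps)) / (2.0 * eps)
    return np.sqrt(Ap * Cp) / (4.0 * np.pi)

# Sanity check: planar AdS5-Schwarzschild with horizon at r = b gives T = b/(pi R^2).
R, b = 1.0, 1.0
A = lambda r: (r**2 / R**2) * (1.0 - (b / r)**4)
C = lambda r: (r**2 / R**2) * (1.0 - (b / r)**4)
print(surface_gravity_temperature(A, C, b))   # ~0.3183 = 1/pi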
Further note that, this thermal imprint continues to hold beyond standard 2-point functions. In [99,100], a four-point out-of-time order (OTOC) correlator is explicitly calculated in the same vector sector discussed in [113]. This OTOC exhibits an exponential growth with real-time and a Lyapunov exponent which satisfies the maximal chaos bound: λ L = 2πT eff . This bound is shown to hold for a generic system, and is expected to saturate for systems with large number of degrees of freedom with a gravity dual. From a gravitational perspective, this saturation is universally described by the near-horizon dynamics of a black hole. In our case, a similar statement holds true for a geometry with a causal structure which is similar to a black hole. Continuing on this theme, we can explicitly construct other examples where the adjoint matter sector is not described by a CFT, unlike the N = 4 SYM. Such gauge theories can be constructed by considering the near-horizon dynamics of the Dp-branes with p = 3. We will, however, not explicitly discuss such examples, but point the reader to e.g. Sakai-Sugimoto model considered in [114,115]. Branes in Spacetime: A General Description In this section we will discuss the essential physics, without referring to any explicit example. The basic idea is as follows: Given a low energy closed string data, in string frame, {G, B, φ}, where G is the metric, B is the NS-NS 2-form and φ is the dilaton, along with the RR sector fluxes, the geometry is essentially described by a solution of type II supergravity. In this tendimensional background, we imagine describing a defect D-brane, as an embedded hypersurface. This hypersurface spontaneously breaks translational symmetry of the background and the corresponding low energy sector is described by the so-called DBI action, along with Wess-Zumino terms. The latter is topological in nature, and will not affect directly our discourse and therefore let us ignore such terms. The background RR-fluxes couple only to this topological sector and therefore we can ignore them for our discussions. The essential dynamics is given by the following action, in the notation of [116]: where T Dq encodes the tension of the brane, F is the U(1)-flux on the worldvolume of the brane. The action also contains fermionic degrees of freedom, denoted by ψ, which can be completely fixed by demanding supersymmetry of the bosonic DBI action. Here, γ denotes a gamma-matrix and ∇ is an appropriately constructed covariant derivative. For our purpose, this schematic description suffices. Around a classical saddle of (7.1), there are three kinds of fluctuations modes: (i) scalar, (ii) vector and (iii) spinor. Schematically, these fluctuations can be written as: where the classical saddle is described by the data {X µ (0) , F ab }. Here µ, ν run over the entire spacetime directions, a, b run over the worldvolume directions and i, j run over the directions transverse to the D-brane. In the scalar sector we consider only the transverse fluctuations since longitudinal fluctuations can be gauged away by using worldvolume diffeomorphism. The Volkov-Akulov type fermionic terms is defined with gamma matrices that the following algebra: where Finally, the quadratic fluctuation Lagrangians are obtained as: The indies above are raised and lowered with the metric G, as usual. Therefore, corresponding to two different metric G and S, we have two inequivalent causal structures. 
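To make the second causal structure concrete, the snippet below locates the worldvolume horizon numerically in the simplest setting: a constant electric field E on a probe in the planar AdS5-Schwarzschild background used earlier, where the horizon of S sits at the radius r_* satisfying −G_tt G_xx = E² (with 2πα′ absorbed into E, as above). The background and the numerical values are illustrative assumptions; in this particular case the closed-form answer r_*⁴ = b⁴ + E² R⁴ serves as a check.

import numpy as np
from scipy.optimize import brentq

R, b, E = 1.0, 1.0, 2.0          # AdS radius, background horizon, worldvolume electric field

def horizon_condition(r):
    f = 1.0 - (b / r)**4
    # -G_tt * G_xx - E^2, with G_tt = -(r^2/R^2) f(r) and G_xx = r^2/R^2
    return (r**4 / R**4) * f - E**2

r_star = brentq(horizon_condition, b, 10.0 * b)
print(r_star, (b**4 + E**2 * R**4)**0.25)     # both ~1.495; r_* lies outside the background horizon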
Note that, such a possibility has already been discussed in the literature in [117,118], irrespective of holography. We will now discuss some specific aspects of this open string metric, in more details. Event Horizons: Some Comments In [116], a general formula for T eff was obtained. This is the effective temperature that the probe sector measures. In general, one always observes T eff > T , however there are known exceptions, such as [55]. Let us now consider a general case, exciting other fluxes on the worldvolume. Depending on the flux, this can correspond to turning on a chemical potential in the dual gauge theory (in the probe sector), and/or introducing a constant magnetic field. 12 In general, we can arrange the electric and the magnetic field to be (i) parallel, or (ii) perpendicular. Assume that we have an R 1,3 -submanifold as the conformal boundary, where the dual gauge theory is defined. Consider, now, the case when (i) E B. The corresponding ansatz is: where a x (u) contains the information about the gauge theory current and a t (u) contains the information about the chemical potential. The open string metric can be obtained as and The location of the open string metric event-horizon is given by with the following definitions: From the boundary gauge theory perspective, the fundamental sector current is proportional to j and the charge density is, up to an overall constant, given byd. As before, the expectation value of the current can be determined by imposing regularity on the OSM metric. Note that, a corresponding membrane-paradigm description, for computing transport properties of the dual gauge theory, was developed and explored in [109]. Along similar lines, one can explore the case when (ii) E ⊥ B. In this case, an additional Hall current flows, which was analyzed in e.g. [124]. Ergoplane: A Special Feature The open string metric can also contain an "ergoplane" and therefore similar related black hole physics. This is explicitly demonstrated with the D3-D7 system in [109]. To see this, we excite a chemical potential. By setting B = 0 in (7.14), one obtains: The event horizon is located at (G tt G xx + E 2 ) = 0, and the ergoplane is located at: This directly yields a root u erg = u * . As before, j 2 is determined from a simple algebraic equation: Let us take a simple example, in which the background if given by an AdS d+2 , with a vanishing dilaton. One then obtains: The event horizon and the ergoplane merge in the limit (d 2 /E d ) → 0 and/or (1/d) → 0. 13 Exactly Solvable Toy Models: More Examples In addition to the explicit D-brane constructions that have been discussed before, here we briefly mention some examples in which explicit analytical calculations can be performed, to arrive at the same conclusion. These are the so-called Lifshitz background of the following form: where x is a 2-dimensional vector and v is the radial coordinate (v → 0 corresponds to boundary and v → ∞ corresponds to the deep-IR). As before, we can consider a similar DBI-dynamics in the (7.26), with a worldvolume flux: a x = −Et+h(v). One readily obtains an equation of motion for the function h(v), which can be solved in terms of a first integral of motion. As before, this first integral of motion can be subsequently determined in terms of e = (2πα ) E. 13 The same statement holds for an AdS d+2 -BH background geometry as well. Around this classical saddle, we consider fluctuations which yields: (7.27) where δf denotes gauge field fluctuations. 
This action results in the following equation of motion In the {t, v}-plane the corresponding OSM is given by With the following ansatz: some explicit solutions, with purely ingoing boundary condition, can be obtained as follows: Here c x is a constant. Using these explicit solutions, we calculate e.g. two point correlator, similar to [113]. As a result, we get a fluctuation-dissipation relation, with an effective temperature: T eff = z+1 π e z/(z+1) . While this is not surprising, it adds more explicit evidence to the theme. The Probe Limit: Validity Our entire discussion is based on the probe limit. Since this is a crucial ingredient in our construction, let us briefly review what this entails. At the level of the equations of motion, the Einstein tensor of a given solution must be parametrically large compared to the stress-tensor of the probe sector. Consider the decoupling limit of N c coincident Dp-branes. These, in the string frame, are given by [75] Here dΩ 8−p denotes the line element of an (8 − p)-sphere and ω 8−p denotes the corresponding volume form. For these geometries, Einstein tensor behaves as: Here α, β run over the gauge theory directions. Consider N f -number of probe D(p + 4)-branes. These span {t, x p }-directions and wraps three-cycle X 3 ⊂ S 8−p . As before, we excite a U(1)-flux: A x 1 = −Et + a 1 (u). The probe energy-momentum tensor is given by and A comparison between (7.38) and (7.39) establishes validity of the probe limit, except the case with p = 4. Similarly, comparing (7.38) with (7.41), we conclude that the IR will be heavily modified. In fact, this modification can only be controlled by placing an event horizon in the bulk. Further note that, T tt ∼ (E · J) has an Ohmic dissipation nature, in terms of the dual gauge theory. Similar conclusions were reached in e.g. [125,126]. Effective Thermodynamics: A Summary The basic statement of gauge-gravity duality is an equivalence between the bulk gravitational path integral and the dual field theory path integral: Z bulk = Z QFT , Z bulk = e iS bulk , S bulk = S grav + S matter , subsequently Z QFT = e iS QFT , S QFT = S gauge + S matter . (7.42) In writing the above, we have explicitly assumed that the dual field theory is a gauge theory, with some matter content. When the matter sector consists of probe degrees of freedom, the corresponding path integral factorizes into two parts: Z bulk = Z sugra Z sugra . In the N c → ∞ limit, a semi-classical description is viable by considering small perturbations around a classical saddle: sugra [δφ,δG]+... , (7.43) where S (2) sugra is the quadratic fluctuation term. Similarly, the probe sector also has a semi-classical description. Thus, at the semi-classical level, the entire path integral factorizes into a classical piece and a quadratic fluctuation piece. Schematically, these take the following form [127]: back−reac +... , (7.44) where g s is the string coupling and S back−reac denotes the backreaction of the probe sector on the classical saddle. Naively, by tuning N f N c , we can safely ignore the backreaction. Note, however, that this conclusion is subtle to make, as we have already demonstrated in the previous section. The quadratic fluctuation part can be schematically represented as: where ϕ grav and ϕ brane represent gravity and brane fluctuation modes, respectively. 
From this, we can already compare the relative scaling of two-point functions in the gravity and in the brane sectors, which are given by ϕ grav ϕ grav ∼ 1/N 2 c and ϕ brane ϕ brane ∼ 1/(N c N f ). Let us focus on the D-brane sector. In this sector, let us rewrite the generic form of the scalar and vector fluctuations: Here, κ denotes an overall constant and . . . represent various interaction terms. The fields ϕ i , F represent the scalar and vector fluctuation modes, respectively. In (7.46)-(7.47), S is the open string metric, which we have defined before. The kinetic term in (7.46) and (7.47) can be written in a more canonical form: −detSS ab (∂ a ϕ) (∂ b ϕ) , and −detSS abS cd F ac F bd , The conformal factor Ω needs to be determined for each case, separately. Before discussing the Euclidean path integral (i.e. the partition function), let us briefly comment on the stress-tensor of the dual field theory. The corresponding data can be represented primarily in terms of the open string metric data, in the following form [128]: where γ is a constant and φ is the dilaton field and µ.ν are the directions along the dual field theory. In [128], several examples were discussed with explicit form of the energy-momentum tensor in the dual field theory. In the Euclidean patch, the path integral yields a thermodynamic description in terms of a partition function. It is now expected that the probe sector thermodynamics will be simply determined in terms of the effective temperature T eff . However, the entropy, which can be obtained from the partition function itself, is not given in terms of the area of the OSM event horizon. It was also argued in [129], that, in the probe limit, only free energy can be reliably calculated by computing on-shell action in the probe sector. For thermal entropy and such, the contribution coming from backreaction of the defect degrees of freedom mixes with the probe sector contribution. check the exact statement here. For the case in consideration, a proposed thermodynamic free energy was discussed in [130], and subsequently generalized in [116]. The Helmholtz free energy is given by Here u * is the OSM event-horizon, S DBI is the Euclidean DBI action. This is the only extensive quantity that we can define. The free energy, in the special case of AdS 3 , contains a universal term of (T 2 eff − T 2 ), where T eff and T are the probe and the background sector temperatures. Such a term, intuitively, captures the heat exchange across the two systems in a two dimesional CFT. Furthermore, the presence of the OSM event horizon, and the open string data: specify this where G denotes the worldvolume metric, B denotes the anti-symmetric two-form (with factors of α absorbed). Here G s is the open string coupling [108]. There is a natural geometric area which defines an entropic quantity, given by However, the physical meaning of this is unclear and it is certainly not the thermal entropy. An intriguing possibility was suggested in [51,50,52], in which one identifies this entropy with the entanglement entropy of the pair produced in the presence of a strong electric field. Before leaving this section, let us note that, a slightly different non-equilibrium thermodynamics has been proposed and explored in [54,56], in which the prescription is provided in the Lorentzian section of the geometry, unlike in the Euclidean section which we have discussed here. Some Causal Features A detailed analysis of the causal structure of such OSM geometry was discussed in [127]. 
We will briefly review them here. As simple examples, let us discuss cases when the background is AdS 3 and AdS 4 . We imagine a space-filling D-brane (and thus a corresponding DBI-action), with the U(1) flux turned on. The explicit form of the metric are given in [127], and we will only discuss the qualitative physics here. One usually begins with a Poincaré patch description, and by going to a Kruskal extension, eventually ends up with a Penrose diagram. The resulting Penrose diagram for a purely AdS 3 and a purely AdS 4 background is shown in figure 12. The Penrose diagrams are qualitatively similar to what one obtains by solving Einstein gravity with an AdS asymptotics. Note, however, that in asymptotically AdS 3 , the standard BTZ-type black holes do not have any singularity. For the OSM embedded in AdS 3 , this is not the case and a curvature singularity exists. This singularity is visible by computing the Ricci-scalar itself. Let us discuss some aspects of "energy conditions" for the OSM geometry. A simple way to define such conditions is to demand that the OSM geometry can be obtained by solving a Einstein equations, with a suitable matter sector and translating the energy conditions on the corresponding matter sector. Thus, imagine: G µν + Λg µν = T µν , (7.54) where Λ = −d(d − 1)/2 is the cosmological constant in asymptotically AdS d+1 -background, G µν is the Einstein tensor of the OSM and T µν is the putative matter field. Defined this way, an AdS 3 -OSM yields where t µ is an arbitrary timelike vector; and hence Weak Energy Condition (WEC) is satisfied. Similarly, Strong Energy Condition (SEC) is also satisfied. The Null Energy Condition (NEC), on the other hand, yields: T µν n µ n ν < 0, for arbitrary null vectors n µ , and is violated. For the AdS 4 -osm, with a similar choice for the timelike and the null vector, WEC yileds: T µν t µ t ν < 0 and is thus violated. On the other hand, the NEC evaluates to T µν n µ n ν = 0. In view of (7.48), one may explore similar questions for the conformal metricS. For the conformal OSM, in asymptotically AdS 3 , one obtains: T µν n µ n ν > 0 =⇒ NEC satisfied , (7.57) For asymptotically AdS 4 , the conformal factor is identity. We can also check that a conformal AdS 5 -OSM violates WEC, while both NEC and SEC are satisfied. In brief, the the conformal OSM always violates the WEC. Therefore there cannot be any area-increasing theorem for such event horizons. A Dynamical Example In a special example, discussed in [34], a dynamical OSM geometry can also be constructed. Let us begin with the following background: where we have used Eddington-Finkelstein ingoing coordinate dt = dV + h(u)du , with h(u) = − 1 u 2 f (u) . (7.59) The metric takes the usual black hole metric form in the {t, u}-patch. The corresponding DBI action takes the following schematic form: where the U(1)-flux on the worldvolume is given by f = f xu (u)dx ∧ du + f xV (u)dx ∧ dV A simple solution of the resulting equations of motion can be obtained for AdS 4 [34], characterized by one undetermined function: f xV = E(V ). The corresponding OSM now takes the form: which has an AdS-Vaidya form, with a dynamically evolving apparent and event horizon. Choosing an appropriate function E(V ), an event horizon formation on the brane can be easily described. In the dual gauge theory, one subsequently obtains a time-dependent current j(t) ∼ E(t). Conclusions In this review, we have discussed various examples in which the non-linear dynamics can give rise to an effective causal structure. 
When this causal structure is similar to that of a black hole, many similarities to classical as well as quantum properties of black holes become manifest. This bears substantial resemblance to other non-linear systems that describe gravity-like phenomena, known as Analogue Gravity, see e.g. [131] for a detailed review on this. For the cases considered here, this causal structure emerges from an open string equivalence principle and this is what forms the analogue to gravity. However, our attempts have been to briefly review the essential ideas, some explicit and simple examples on which these ideas are rather manifest. We have certainly not made any attempt to review the vast literature on the physics of probes traveling through a thermal medium in a strongly coupled gauge theory. The basic ingredients used to study this are open strings in a supergravity background. In the probe limit, the corresponding string profile can develop a worldsheet event horizon, which plays a crucial role in determining energy loss, drag force, stochastic Brownian motion of the probe. For a more extensive review on these, we will refer the reader to [31]. Beyond the probe limit, the open string backreaction estimates e.g. also the radiation produced by the quark-like degree of freedom. See e.g. [132,133] for some of the early studies including the string backreaction. In a similar time-dependent context, far less explicit examples are known with backreaction from D-branes, since, technically, this a much harder problem to tackle. See e.g. [134] for a perturbative analysis of the backreaction. Staying in the probe limit, while there are similarities with a black hole, there are unsettled issues as well. One of the main issues is about the physical meaning of the area of the event horizon in all of the cases we have discussed above. For a black hole, this corresponds to the thermal entropy, moreover, one can prove area increasing theorems (with certain energy conditions for the matter field) in coherence with the second law of thermodynamics. This, however, is in stark contrast with what we have reviewed here. First, we can identify the Euclidean path integral as the corresponding thermal free energy and, therefore, derive the thermal entropy from this free energy. It is rather easy to see that this does not equal the area of the event horizon on the string worldsheet, or of the open string metric geometry. Secondly, as we have explicitly discussed energy conditions for such cases, there is no area increase theorem for such horizons. One intriguing proposal of [51] is to view this area as a measure of entanglement entropy, since, the open string event horizon is created due to a Schwinger pair creation on the D-brane and is therefore intimately related to entanglement. This is explicitly demonstrated by establishing that the Lorentzian section of the string worldsheet in [88] is the analytic continuation of worldsheet instantons that describe Schwinger pair production of fundamental degrees of freedom. A similar statement on D-branes is desirable, however, to the best of our knowledge no such explicit connection has been made yet. Complexity is a well-defined notion in quantum systems, which measures the minimum number of unitary operations required to reach a certain state starting from a base-state. 
Based on the eternal black holes in AdS, it has recently been suggested in [135] that computational complexity in the dual CFT is measured, geometrically, by the Einstein-Rosen bridge in the eternal black hole spacetime. Since, at present, it remains unclear how to precisely define complexity in a QFT framework, we will not delve deeper into this issue. We will, however, simply note that, from a geometric point of view, the string worldsheet or the D-brane open string metric that we have reviewed here naturally allows us to define such a quantity. Indeed, in [98], it has already been explicitly discussed. It would be very interesting to explore this issue further, since the two-dimensional worldsheet allows for excellent analytical control over such questions. Finally, we end by briefly commenting on one of the most interesting questions in black hole physics: the information paradox. Since the causal structure in the probe sector is closely similar to the causal structure of a black hole, in the eternal patch (i.e. the thermofield double), one can address the nature of this paradox, following the proposal in [12]. In the standard thermofield double scenario, the mismatch between the gravity correlator and the unitary CFT correlator occurs at an order e^{-c_1 S}, where S is the entropy of the system and c_1 is an irrelevant constant. For a purely gravitational system, S ∼ 1/G_N, and therefore the mismatch in correlators occurs at an order that is non-perturbative in G_N. This regime requires a highly interacting string theory, since G_N ∼ g_s². On a string worldsheet, a naive analysis following [12] would imply that the mismatch between the geometric correlator and the CFT correlator occurs at order e^{-c_2 S}, where c_2 is, as before, an unimportant constant. In this case, S ∼ 1/ℓ_s², where ℓ_s is the string length, and it does not depend on the string coupling. The Nambu-Goto theory receives no corrections in ℓ_s, and therefore it may be possible to make quantitative progress in estimating such non-perturbative effects. Finally, we end this review with a mention of the Lieb-Robinson bound, which provides an emergent upper bound on how fast information can propagate in a generic non-relativistic quantum mechanical system with local interactions. While this limiting velocity is system specific, it rather intriguingly suggests a relativity-like structure for quantum non-relativistic systems, at least in the kinematic sense. It will be interesting to understand the physics of this better and perhaps explore a possible connection to what we have discussed in this review.
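For concreteness, the Lieb-Robinson bound mentioned in the closing paragraph can be stated schematically as follows; this is a standard textbook form (not taken from the review itself), and the constant c, the correlation length ξ and the Lieb-Robinson velocity v_LR are all system dependent:

\left\| \left[ A_X(t), B_Y(0) \right] \right\| \;\leq\; c\, \|A_X\|\, \|B_Y\|\, e^{-\left( d(X,Y) - v_{LR} |t| \right)/\xi} ,

where A_X and B_Y are operators supported on spatially separated regions X and Y, and d(X,Y) is the distance between them. The exponential suppression outside the effective "light cone" d(X,Y) = v_LR |t| is what gives non-relativistic lattice systems the relativity-like kinematic structure alluded to above.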
2018-12-22T04:23:06.000Z
2018-12-22T00:00:00.000
{ "year": 2018, "sha1": "ebcf3730251fd2a265c301276860983cbc4adf47", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ebcf3730251fd2a265c301276860983cbc4adf47", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225294803
pes2o/s2orc
v3-fos-license
Sleeping on a tightrope: White-breasted Cormorants Phalacrocorax lucidus and African Darters Anhinga rufa roosting on transmission lines Though recent research has explored the negative impact of human infrastructure on large waterbirds, few studies have examined behavioural byproducts such as roosting or nesting on transmission wires. Here, we document our observation of a joint roost of White-breasted Cormorant Phalacrocorax lucidus and African Darter Anhinga rufa on transmission lines in the Western Cape, South Africa. We highlight current gaps in understanding communal roosting, joint roosts between species, and roosting on infrastructure, and provide recommendations for future directions of study.   White-breasted Cormorant Phalacrocorax lucidus and African Darter Anhinga rufa are large waterbird species, both widespread across sub-Saharan Africa and considered Least Concern by the IUCN (IUCN 2020). White-breasted Cormorant is common in salt and freshwater habitats ranging from coastal to wetland and riverine (Chittenden et al. 2016). African Darter is locally common in freshwater bodies and wetlands, preferring open, slow-moving water with vegetated or rocky banks (Chittenden et al. 2016). Both species are nomadic in response to changing water quality and levels and are somewhat gregarious at breeding and roosting sites, often aggregating in mixed flocks with other large piscivorous birds (Chittenden et al. 2016;Sinclair et al. 2011). Inland, White-breasted Cormorant and African Darter often form mixed breeding colonies, together with Reed Cormorant Microcarbo africanus, African Spoonbill Platalea alba, and herons (Tarboton 1977;Chittenden et al. 2016). Roosting of White-breasted Cormorant and African Darter in natural structures such as trees, bushes or large reedbeds has been previously reported as common (Chittenden et al. 2016). Volume 1 of the Handbook of the Birds of the World lists four cormorant species (Great Cormorant Phalacrocorax carbo, Brandt's Cormorant P. penicillatus, Double-crested Cormorant P. auritus and Neotropic Cormorant P. brasilianus) as "capable of perching on cables" (del Hoyo et al. 1992, p. 330), however, this statement is made without references. From the context, it is also not clear whether roosting (as opposed to simply perching) on cables was reported. There are few mentions of cormorants roosting on electricity transmission infrastructure (Bartholomew 1943, Brown & Lawson 1989. Perhaps the bestdocumented incidence stems from the city of Orleans, Massachusetts, in the United States, where guano from a large roost of Double-crested Cormorant on power lines resulted in a court case and the subsequent removal of the infrastructure (Genter 2019). Still, there appear to be no formal and easily accessible accounts of any species of cormorant or darter roosting on transmission lines. This note reports an observation of a joint roost of White-breasted Cormorants and African Darters on electricity transmission lines in the Western Cape, South Africa. Methods Over the period 26 to 29 June 2020, we conducted a road trip dedicated to data collection for the Virtual Museum (vmus.adu.org.za), and especially for BirdPix (the section of the Virtual Museum dedicated to the curation of bird distribution data), in the northern arm of the Western Cape. We stopped regularly to photograph birds. On both 26 and 29 June we travelled along minor roads to the west of the N7, the national road from Cape Town to Namibia. 
We carried out fieldwork between the ocean and the N7, in the area between Citrusdal in the south and Klawer in the north (Fig. 1). Results On both 26 and 29 June we passed a small farm dam, overgrown with Phragmites reeds, alongside the road numbered R365 and close to the Bergvallei River, near Paleisheuwel. The road served rural communities in a broad valley, and had little traffic (Fig. 1). An 11kV electricity rural transmission line (see figure 2.14 in Scott 1992) ran along the east side of the road and over the dam. On 26 June, both White-breasted Cormorants and African Darters were observed on this dam, as well as on numerous other small dams in the valley. No more than two birds were present at each locality. On 29 June 2020, at 17h45 (dusk), we encountered a mixed flock of these two species gathering to roost on the 11kV transmission line. The roost was directly above the dam (32.512°S, 18.711°E; Fig. 1). Surrounding habitat consisted primarily of farmland, with fields used for crops (mainly potatoes) and grazing, and included several farm dams and scrubby natural vegetation (Fig. 1). In total, 34 White-breasted Cormorants and 30 African Darters were seen roosting together on transmission lines (Fig. 2). Numbers reflect our best estimates from photographs; actual totals may have differed slightly because birds continued to fly in and out as we left. The two records are curated in the BirdPix section of the Virtual Museum: White-breasted Cormorant and African Darter. Choice of roost site The valley itself contained large Eucalyptus trees, some within 1 km of the observed roosting site (Fig. 1). Inland cormorants and darters roost mainly on trees (Doug Harebottle pers. comm) and both Whitebreasted Cormorant and African Darter have also been documented breeding in Eucalyptus (Smith 1974). There is no obvious explanation as to why the birds chose to roost on transmission wires instead of on the available trees. Given the small numbers of birds at each of the farm dams in the valley, a roost of this size must host White-breasted Cormorants and African Darters from a reasonably large surrounding area, with birds dispersing several kilometres during the day to forage. In light of this, the choice of transmission lines remains puzzling. Speculative suggestions include that the birds preferentially roost above water, or, perhaps more intriguingly, transmission lines may be perceived as providing increased protection from predators. Waterbirds utilizing electricity transmission line infrastructure Few studies have described waterbirds utilizing transmission lines, especially for roosting. Bartholomew (1943) described large groups of cormorants roosting on a power line near San Francisco in the USA over 75 years ago; the author then hypothesized that cormorants use power lines when a) no suitable natural roosts are present, and b) power lines are relatively free of human disturbance. Since this publication, however, accounts of waterbirds roosting on transmission lines are anecdotal; the majority of research in this field focuses on negative impacts of infrastructure (i.e. power line collision, electrocution, etc.) rather than adaptive behaviour (Bernardino et al. 2018). An exception is a study by Kucherenko et al. (2014), who documented 86 species using overhead transmission lines in the Crimean Peninsula, 17 of which used the infrastructure for nesting. Among the 86 species were seven waterbirds, including the Great Cormorant, which was once observed perching on a transmission line. 
Communal roosting Our search of the literature also identified a gap in knowledge related to communal roosting. Reasons for this behaviour are poorly understood; three main hypotheses include improved thermoregulation, decreased risk of predation, and increased foraging efficiency (Beauchamp 1999). Siegfried et al. (1975) found that although Cape Cormorants Phalacrocorax capensis roost and forage together, other southern African cormorant species tend to forage solitarily. This potentially rules out foraging efficiency as an explanation for communal roosting in White-breasted Cormorant. Aside from Beauchamp (1999), we found no studies discussing the significance of joint roosting behaviours between species of large waterbirds. Though joint roosts are mentioned in several publications, they are seldom the subject of research for any bird species. The few formalised accounts include a summary of species composition in communal roosts of Indian birds (Gadgil & Ali 1975), African Sacred Ibis Threskiornis aethiopicus and Marabou Stork Leptoptilos crumeniferus roosting together in Kenya (Evans et al. 1982), and mixed-species roosts of parrots/parakeets in Costa Rican dry forest (Chapman et al. 1989). This scarcity of information highlights a need for further research into the ecological processes underlying the patterns visible in roosting behaviour, as well as a stronger overall understanding of cormorant and darter behavioural ecology. Conclusions Based on our observations and a brief survey of the literature, we recommend further studies investigating 1) novel or adaptive interactions between large waterbirds and human infrastructure, i.e. roosting or nesting on transmission lines; 2) the origin of communal roosting in cormorant and darter species; and 3) potential mechanisms driving joint roosting behaviour in both species. Clearer insights in these three areas will deepen our understanding of waterbird behaviour within novel ecosystems, ultimately informing conservation action for these species.
2020-08-27T09:08:39.724Z
2020-08-21T00:00:00.000
{ "year": 2020, "sha1": "2c31cb05f4ad121b06646271f8f825774e28f115", "oa_license": "CCBY", "oa_url": "https://journals.uct.ac.za/index.php/BO/article/download/993/695", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cdcc995ebd8241f717fbcf27342cb10af611eedd", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
73659744
pes2o/s2orc
v3-fos-license
Low-loss design for the high-intensity accumulator ring of the Spallation Neutron Source This paper summarizes the low-loss design for the Spallation Neutron Source accumulator ring ["Spallation Neutron Source Design Manual" (unpublished)]. A hybrid lattice consisting of FODO arcs and doublet straights provides optimum matching and flexibility for injection and collimation. For this lattice, optimization focuses on six design goals: a space-charge tune shift low enough (below 0.15) to avoid strong resonances, adequate transverse and momentum acceptance for efficient beam collimation, injection optimized for the desired target beam shape and minimal halo development, compensation of magnet field errors, control of impedance and instability, and prevention against accidental system malfunction. With an expected collimation efficiency of more than 90%, the uncontrolled fractional beam loss is expected to be at the 10^-4 level. I. INTRODUCTION In recent years, high-intensity ion beams have been proposed for a wide variety of applications. These include spallation neutron sources, neutrino factories, transmutation of nuclear waste, heavy ion fusion, and muon collider drivers [1]. Beam power in these machines, usually 1 MW or more, is 1 order of magnitude above that of existing accelerator facilities. In the design of such next-generation facilities, the primary concern is that radioactivation caused by excessive uncontrolled beam loss can limit a machine's availability and maintainability. Based on operational experience at the LAMPF linac [2] at Los Alamos National Laboratory and the Alternating Gradient Synchrotron (AGS) and Booster [3] at Brookhaven National Laboratory, hands-on maintenance [4] demands an average uncontrolled beam loss not exceeding a couple of watts of beam power per tunnel meter. At megawatt power levels, this corresponds to a fractional beam loss of 10^-6 per meter. Equivalently, for a storage ring several hundred meters in circumference, the tolerable fractional beam loss is about 10^-4. Existing proton synchrotrons and accumulator rings have beam losses as high as several tens of percent, mostly at injection, capture, and initial ramping. The smallest beam loss is about 3 × 10^-3, achieved at the Proton Storage Ring (PSR) at the Los Alamos National Laboratory. This loss value, however, is still more than 1 order of magnitude higher than desired for next-generation machines.
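The loss budget above follows from simple arithmetic, sketched below for transparency; the input values are the ones quoted in the text, and the result is only an order-of-magnitude statement.

beam_power_W   = 2.0e6    # 2 MW average beam power
loss_limit_W_m = 2.0      # roughly "a couple of watts" per tunnel meter for hands-on maintenance
circumference  = 220.0    # accumulator ring circumference in meters

frac_loss_per_m = loss_limit_W_m / beam_power_W
print(frac_loss_per_m)                    # 1e-06 per meter
print(frac_loss_per_m * circumference)    # ~2e-04, i.e. a fractional loss at the 10^-4 level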
The Spallation Neutron Source (SNS) is based on an accelerator producing an average proton beam power of 2 MW at a repetition rate of 60 Hz [5,6]. Table I compares the main parameters of the SNS and some other existing and proposed neutron facilities. Table II lists the main parameters of the SNS ring. During 1999, the first year of construction, a study was performed comparing a full-energy linac with accumulator ring to a rapid cycling synchrotron (RCS). As opposed to an RCS, an accumulator ring simplifies the capture process and avoids ramping complications. The study concluded that, especially due to the stringent beam-loss limit of a 2 MW source, the required RCS design is technically demanding and consequently less cost effective [7]. The SNS accelerator complex now comprises a source and front end, a 1 GeV full-energy linac, an accumulator ring, and its transport lines. With a circumference of 220 m, the accumulator ring compresses the proton beam into 0.6 μs pulses of 2 × 10^14 particles, and delivers them at a rate of 60 Hz to a liquid mercury target for spallation neutron production. This paper summarizes the low-loss design optimization for the SNS accumulator ring [8,9]. Many of the design concepts and conclusions can be applied to future high-intensity facilities. Section II presents the FODO-doublet hybrid lattice. Considerations of physical and momentum acceptance are given in Sec. III. In Sec. IV, we compare injection painting scenarios and discuss injection halo control. Section V discusses the extraction layout. In Sec. VI, we address loss mechanisms, halo development, and space-charge issues. Beam collimation, beam-gap cleaning, and collimator design are discussed in Sec. VII. Radio frequency system design is presented in Sec. VIII. Magnet field error analysis and chromatic and resonance corrections are discussed in Secs. IX and X. Impedance and instability issues are discussed in Sec. XI. The diagnostics system is discussed in Sec. XII. The functions of the transport lines to the ring are presented in Sec. XIII. A summary is given in Sec. XIV. A. Layout and functions Lattices used in typical high-intensity proton accelerators have either a FODO structure (AGS Booster [10], IPNS upgrade [11], Japan Joint Project (JJP) ring [12], previous SNS ring [5], etc.) or a doublet/triplet structure (ISIS [13], ESS [14], etc.). A FODO lattice structure has the advantage of relatively low quadrupole gradients, relatively smooth lattice function variations, and easily implemented chromatic and resonance corrections. However, the uninterrupted drift space is often short and not desirable for injection and collimation arrangements. Moreover, possible lattice mismatch caused by unequal FODO cell lengths can reduce machine acceptance. On the other hand, a doublet/triplet lattice structure has the advantage of long uninterrupted drift spaces for injection and collimation optimization. The newly optimized SNS ring lattice has a hybrid structure with FODO bending arcs and doublet straight sections [6]. The lattice combines the FODO structure's simplicity and ease of correction with the doublet structure's flexibility. As shown in Fig. 1, the accumulator ring has a fourfold symmetry comprising four FODO arcs and four dispersion-free straights. The four straight sections house the injection, collimation, rf, and extraction systems, respectively. Each straight section consists of one 9 m and two 5.5 m long dispersion-free drifts.
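As a rough cross-check of the space-charge tune-shift goal quoted in the abstract, the sketch below evaluates one common form of the Laslett incoherent tune-shift formula for a round, uniform beam at the accumulated intensity. The emittance and bunching-factor values are illustrative assumptions (the design values are those in Table VII of the paper), and conventions differ between references, so the result should be read only as an order-of-magnitude estimate.

import numpy as np

r_p   = 1.535e-18        # classical proton radius [m]
N     = 2.0e14           # accumulated protons per pulse
E_kin = 1.0e9            # kinetic energy [eV]
m_p   = 938.272e6        # proton rest energy [eV]

gamma = 1.0 + E_kin / m_p
beta  = np.sqrt(1.0 - 1.0 / gamma**2)

eps = 160e-6             # full unnormalized transverse emittance [m rad] (assumed)
B_f = 0.4                # longitudinal bunching factor, mean/peak line density (assumed)

# Laslett direct space-charge tune shift for a round, uniform (KV-like) beam
dQ = N * r_p / (2.0 * np.pi * eps * beta**2 * gamma**3 * B_f)
print(round(gamma, 3), round(beta, 3), round(dQ, 3))   # ~2.066, ~0.875, dQ of order 0.1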
Figure 2 shows the layout and content of one of the four superperiods. Each arc consists of four 8 m long FODO cells. Five of the arc quadrupoles, at sites of large dispersion, are sandwiched by a chromatic sextupole and an orbit correction dipole. The other quadrupoles are sandwiched by corrector packages containing both linear elements for orbit correction and decoupling and nonlinear elements for amplitude detuning and resonance correction.

Figure 3 shows the lattice functions in one lattice superperiod, consisting of a FODO arc and a doublet straight. The arcs and straight sections are optically matched to ensure maximum betatron acceptance. A horizontal betatron phase advance of 2π rad across each arc makes each arc an achromat. The dispersion vanishes in the straight sections. Each dipole is centered between two quadrupoles so as to maximize the vertical acceptance of the dipoles.

C. Working points

The horizontal and vertical tunes can both be adjusted by more than one unit without significant optical mismatch. The vertical tune is adjusted using the two families of arc quadrupoles, and the horizontal tune is adjusted using the three families of straight-section quadrupoles.

Working points in tune space are chosen mainly to avoid the major low-order structure resonances. Working points with tunes split by more than a half-integer avoid possible strong coupling caused by space-charge forces and systematic magnet errors, thus preserving the painted beam distribution for the target. Table III compares the SNS ring working points in (Q_x, Q_y) tune space. A more detailed working-point comparison based on resonance analysis is currently under study.

D. Alternative lattices

Several alternative lattices have been studied and compared with the nominal lattice. One alternative reverses the polarity of the quadrupoles. In that case, the maximum dispersion is increased by about 10%. Another alternative [11] uses missing-dipole dispersion suppressors, instead of the achromats, to reduce dispersion in the straights. In this case, dispersion matching can reduce the maximum dispersion of the ring by about 40%, and the horizontal phase advance in the arc is more flexible. However, the addition of eight FODO half-cells with missing dipoles increases the ring circumference by about 15%.

III. PHYSICAL AND MOMENTUM ACCEPTANCE

An adequate acceptance-to-emittance ratio is essential to allow for collimating the beam and minimizing uncontrolled beam loss. Collimation studies indicate that efficient cleaning of a beam extending out to 240π mm mrad (as in correlated painting, where we paint to a square beam with emittances of 120π mm mrad in both the x and y directions) requires a ring acceptance of 480π mm mrad. This acceptance will allow particles scattered by the collimation system to return to the collimators without being lost elsewhere in the ring. When anticorrelated painting is used (where we paint to a circular beam), the beam extends to just 160π mm mrad, and the acceptance-to-emittance ratio becomes larger (now 3 instead of 2), making collimation more efficient.

We therefore require a minimum ring aperture of 480π mm mrad everywhere around the ring. Each element is designed to provide the necessary aperture, taking into account the optic functions. Along the arcs, where the dispersion is nonzero, the acceptance depends on momentum deviation, and there we require the same acceptance of 480π mm mrad for particles with momentum deviation Δp/p = ±1%. This corresponds to the rf bucket acceptance. Figure 4 illustrates schematically the transverse dimensions of the beam, collimators, and beam pipe around the ring.
Finally, to allow for longitudinal collimation with the beam-in-gap kicker (see Sec. VII), we require the longitudinal acceptance of the arc to be Δp/p = ±2% for a beam extending out to 160π mm mrad. Then off-momentum particles can reach the gap between bunches without hitting the vacuum chamber at high-dispersion areas. This longitudinal acceptance will allow one to make chromaticity measurements and upgrade the rf to cure instabilities.

IV. INJECTION

The H- beam is transferred from the linac, stripped by the foil, and then injected into the ring.

A. Injection layout

As shown in Fig. 5, a 9 m drift between quadrupole doublets houses the fixed chicane that assures adequate clearance for injection. To prevent stripping of H0 in the n = 4 and lower excited states, the injection stripping foil is located at the downstream end of the injection dipole, and the field of the subsequent dipole magnet is 2.4 kG. The fringe field of the injection dipole is shaped so that stripped electrons spiral down to where they can be easily collected.

The two 5.5 m drifts accommodate the symmetrically placed horizontal and vertical dynamic kickers used for injection painting. The β-function perturbation caused by the injection chicane and the orbit bumps is about 2%. The maximum residual dispersion is about 0.2 m. The amplitude-dependent tune shift produced by the chicane (0.004) is small compared with that produced by space-charge forces (0.15; see Table VII).

The fixed chicane does not overlap ring lattice magnets. When the lattice is tuned, the strengths of the dynamic kickers will be adjusted so that the fixed chicane remains constant. The injection system is thus decoupled from the lattice tuning.

B. Painting scheme comparison

Injection painting creates the transverse density and beam profile desired at the mercury target. In addition, the painting is designed to achieve an average of less than eight foil hits per particle during the full 1225-turn accumulation.

Various painting schemes have been explored: correlated, anticorrelated [15], and transverse coupled painting (see Fig. 6). Particle-in-cell simulations, using the codes SIMPSONS and ORBIT, were performed with beams having unnormalized rms emittances of about 0.25π mm mrad, but different distributions. However, because of the very small injected phase-space area, the difference in injected distributions has little effect on the final beam distribution. Ideally, anticorrelated painting using opposite horizontal and vertical orbit bumps produces a distribution with an elliptical transverse profile and a uniform density distribution. Such a distribution can also be realized by painting in one direction and steering in the other. However, in the presence of space charge this scheme produces an excessive beam halo during the early stage of painting, when the beam is narrow in one direction (see Fig. 7).
Further analysis indicates that this beam halo can be reduced by a new anticorrelated painting scheme: alternating anticorrelated orbit bumps in both the horizontal and vertical directions.

Correlated painting using parallel horizontal and vertical orbit bumps produces a rectangular transverse profile. Figure 8 shows the beam distribution in the case of the half-integer split-tune working point (6.30, 5.80). This scheme has the advantage that the beam halo is constantly painted over by freshly injected beam. The main concern is whether the rectangular beam profile can be preserved in the presence of coupling produced by space charge and magnet errors. Figure 9 compares the beam tail development between cases with and without full-integer tune split. Obviously, splitting the transverse tunes greatly reduces the space-charge induced beam tail and helps preserve the rectangular beam shape. Also, as shown in Fig. 10 (without space charge), splitting of the tunes can reduce the impact of a systematic skew quadrupole component induced, for example, by quadrupole rolls.

V. EXTRACTION

The beam accumulated in the SNS ring forms a single 590 ns long bunch with a gap of 250 ns. The extraction system consists of 14 fast kickers and a single Lambertson septum (see Fig. 11). Extraction is a two-step process: kick the beam vertically with the fast kickers into the Lambertson-type septum magnet, then use the septum magnet to deflect the beam horizontally.

In most machines, the extraction region has a high radioactivation level resulting from accidental beam loss caused by kicker malfunction. The SNS ring extraction system is designed to accept the fully painted beam without loss even when one of the 14 kickers fails. The acceptance of the extraction channel is 400π mm mrad, as compared to the ring acceptance of 480π mm mrad. To achieve a fractional beam loss below 10^-6 at the extraction channel under normal operating conditions, the beam is collimated for an extra 20 turns after accumulation.

A. Beam tail and halo development

Space-charge forces and magnet field errors can drive particles into resonance, resulting in emittance increase and particle loss. Figure 12 presents SIMPSONS [16] simulation results showing the beam tail developed in the presence of space charge and magnet errors when operating at the same-tune working point (5.82, 5.80). At this working point, we are very close to the space-charge induced coupling resonance 2Q_x - 2Q_y = 0. Although difference resonances are not usually considered dangerous to the operation of accelerators, this observed coupling resonance leads to significant beam tail growth, which is undesirable for the low-loss design. The present lattice design allows one to choose different working points in a tune range of more than one unit. Clearly, the beam tail generated by the coupling resonance can be significantly reduced when a split-tune working point is chosen (recall Fig. 9). These results have been independently confirmed using the ORBIT code [17] and the UAL package [18].
B. Parametric halo in a ring

A 1:2 parametric resonance is believed to be the principal mechanism behind space-charge induced halos in linacs (see the extensive literature listed in [19]). This resonance between the motion of individual ions and collective beam oscillations is governed primarily by the rms beam mismatch. The dynamics in a ring is quite different. In the low space-charge regime typical of high-intensity rings, with tune depressions of a few percent, the transverse particle motion near the core of the beam is regular. The rate at which particles are trapped by the 1:2 resonance becomes very small. In addition, in proposed high-intensity rings, because of the multiturn injection painting, the final intensity is reached only at the end of the injection process, and almost immediate extraction leaves little time for a parametric halo to develop. Furthermore, because of the multiturn injection painting, the mismatched coherent modes of the beam may be damped by phase mixing. Hence, for halo development in the SNS ring, mechanisms besides the parametric resonance may be more important [20].

C. Effective space-charge tune shift

Space charge is a fundamental limitation in high-intensity circular accelerators. Rings are usually designed to avoid strong resonances using a formula for the incoherent space-charge tune shift which is based on the assumption of a constant beam size. Taking into account the oscillation of the beam envelope [21,22], the resonance condition is actually [23] m(Q_0 - C_m ΔQ_sc) = n, where Q_0 is the base tune, ΔQ_sc is the incoherent space-charge tune shift, n is the excited harmonic, m is the resonance order, and the deviation of the coefficient C_m from 1 represents the contribution of envelope oscillations. This contribution is most significant for low-order resonances [24,25]. For a round beam near a half-integer resonance (m = 2), when the horizontal and vertical tunes are close to each other, C_2 = 1/2 for the symmetric mode and C_2 = 3/4 for the antisymmetric mode. In the case of split tunes, C_2 = 5/8. For the SNS ring at the split-tune working point with ΔQ_sc ≈ 0.15, the effective tune shift is ΔQ_eff = 5 ΔQ_sc / 8 ≈ 0.09. The actual space-charge limit is therefore less restrictive.

VII. BEAM COLLIMATION

The incoming linac halo is cleaned by the linac-to-ring transport line collimation system, which consists of adjustable stripping foils for H- particles followed by shielded collection elements. The linac's energy tail is cleaned by a momentum collimation system located in the maximum dispersion region of the transport line's achromat bend. In the ring, an entire straight section is dedicated to a two-stage collimation system. A beam-in-gap kicker is used to clean the residual beam in the gap between subsequent beam pulses.

A. Transverse collimation

The machine is designed to localize beam losses to shielded locations using beam collimators [26,27]. Ring collimation employs a two-stage system located in a shielded nondispersive straight section. The adjustable primary scraper, shielded by a collimation block, is used to intercept the incoming halo and can also be used for beam diagnostics. Two secondary collimators, located at an acceptance of about 300π mm mrad, are designed to catch particles scattered by the scraper. Their phase advances relative to the primary collimator are determined from efficiency optimization and lattice constraints.
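For orientation, a standard two-stage collimation estimate (a generic textbook relation, not the specific SNS optimization, which is the efficiency study referred to above) places a secondary collimator at a betatron phase advance

\[
\mu_{\rm opt} \approx \arccos\!\left(\frac{n_1}{n_2}\right),
\qquad
\frac{n_1}{n_2} = \sqrt{\frac{A_1}{A_2}} \approx \sqrt{\frac{240\pi}{300\pi}} \approx 0.89
\;\Rightarrow\;
\mu_{\rm opt} \approx 27^{\circ},
\]

with a second secondary collimator near 180 degrees minus this value. Here n_1 and n_2 are the primary and secondary apertures in units of betatron amplitude, and A_1 and A_2 are the corresponding acceptances; the 240π and 300π mm mrad values are taken from the text purely for illustration, since the primary scraper is adjustable.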
Figure 13 compares the collimation inefficiency of the previous all-FODO lattice with that of the hybrid lattice. With the long drift space provided by the hybrid lattice, the collimators can now be arranged at locations of optimum betatron phase to enhance the efficiency. We expect the collimation efficiency to be around 90%. The secondary collimation units, comprising layers of steel, borated water, and the like, are designed to contain secondary charged and uncharged particles [28], as shown in Fig. 14. Because of stringent engineering requirements for stress, heat tolerance (about 1 kW of power), and shielding, these units are not adjustable. Based on the results of a recent experiment at the BNL Tandem [29], a serrated, coated surface will be used to minimize secondary electron emission.

B. Momentum and beam-in-gap cleaning

Various mechanisms, including chopper inefficiency and foil ionization, can produce a residual beam between microbunches, resulting in uncontrolled loss at extraction. A gap-cleaning kicker is designed to resonantly excite coherent betatron oscillations, driving the gap beam into the primary collimator, where the beam loss is measured with a gated fast loss monitor [30]. The hardware is similar to that of the RHIC damper/tune monitor system, which uses commercially available MOSFET banks to supply 5 kV, 120 A, 10 ns rise and fall time pulses to a transmission line kicker. The burst mode frequency, about 1 MHz, will permit turn-by-turn kicking. With a kicker length of 5 m, the gap can be cleaned in about 20 turns.

VIII. RADIO-FREQUENCY SYSTEMS

The main purpose of the rf system is to maintain a 250 ns gap for the rise of the extraction kicker. It will also (i) control the peak beam current to prevent space-charge stop-band related losses and (ii) maintain a large momentum spread to prevent coherent instabilities. This momentum spread will also Landau damp coherent quadrupole oscillations, which can drive halo formation. Compared with a single-harmonic rf, a dual rf system has significant advantages. A barrier bucket rf system may be even better, but issues such as beam loading still need to be resolved.

The SNS ring will have a dual harmonic system with peak rf amplitudes of 40 kV for harmonic h = 1 and 20 kV for h = 2. Canonically, the voltages are phased so that the small-amplitude synchrotron frequency vanishes. The design of the rf system and power amplifier is driven by beam loading requirements. The power amplifier is designed to fully compensate the beam current while providing the quadrature component to drive the gap voltage. As the beam is accumulated in the ring, a feed-forward system will adjust the input into the low-level rf drive.
Shown in Fig. 15 are the results of a simulation using a linac bunch length of 546 ns, with the first harmonic voltage ramped from 30 to 40 kV over the first 500 turns. The amplitude of the second harmonic was half the amplitude of the first for the entire simulation, which included beam loading and longitudinal space charge. The rf was corrected using both feed-forward and low-level loops. We note that the full momentum spread (Δp/p) of the incoming linac beam, after the energy-spreading cavity, is ±0.26%. Also, at the end of injection the beam bunching factor is approximately 0.46.

A. Expected fringe field errors and compensation

The bore of the SNS ring magnets is necessarily large to provide the required acceptance. The quadrupole aspect ratio, inner diameter over magnetic length, is about 0.5. With such a high aspect ratio, contributions from the magnet ends are significant. Table IV indicates that, in the absence of pole-tip shimming, the error of the first allowed multipole (dodecapole, b_5) in the quadrupole magnet is exceedingly large. Multipole contributions from the ends of the unshimmed dipole include a similarly large sextupole component.

With detailed pole-tip compensation, the integral field error can be greatly reduced. The effect of the magnet fringe fields, which is often negligible in high-energy, low-aspect-ratio machines (e.g., RHIC, LHC), is important for a large-acceptance, high-aspect-ratio ring such as the SNS. Indeed, the relative impact of the longitudinal fringe field on a particle's transverse momentum is proportional to the ratio of the transverse emittance to the effective magnetic length [31]. For the lattice quadrupoles, the octupolelike transverse kick induced by this edge effect [32] has been evaluated, and its dynamic effect is quite large (see the next subsection). A flexible correction scheme with three families of octupole correctors [33] has been proposed in order to compensate the fringe field and other residual octupolelike effects.

B. Expected tune spreads

Table VII lists the tune spreads produced by various effects (space charge, natural chromaticity, magnet imperfections, and the like) as indicative of their relative impact on the beam. Because of the broadening of the beam momentum before injection, the tune spread produced by the natural chromaticity is significant. Section X discusses chromatic compensation using multifamily sextupoles.

The tune spread produced by the kinematic nonlinearity, obtained by using either UAL/TEAPOT [34] or MARYLIE [35] and by analytical evaluation [36] based on perturbation theory, originates from the large γ functions in the straight sections of the SNS ring. It is proportional to the 4th power of the particle transverse momenta (i.e., an octupolelike effect) and becomes noticeable in the SNS, which has a large-emittance beam.
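The origin of this kinematic term can be made explicit with the usual expansion of the drift Hamiltonian (a standard result quoted here only to show the octupolelike character; transverse momenta are normalized to the reference momentum):

\[
H_{\rm drift} = -\sqrt{(1+\delta)^2 - p_x^2 - p_y^2}
\approx -(1+\delta) + \frac{p_x^2 + p_y^2}{2(1+\delta)} + \frac{(p_x^2 + p_y^2)^2}{8(1+\delta)^3} + \cdots,
\]

where the quartic term in the transverse momenta produces an amplitude-dependent tune shift analogous to that of an octupole; since the mean-square particle angles scale with the Twiss γ functions, the effect is largest in the long straight sections.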
Tune spreads produced by magnet fringe fields, as mentioned in the previous subsection, have been obtained with mapping, tracking, and analytical evaluation. For the mapping approach we used MARYLIE to construct a symplectic Lie map up to third order using either a hard-edge fringe field model (taking the limit of the fringe length to 0) or a 3D magnetic field calculation using OPERA 3D. Comparison of the two shows that the hard-edge model provides an excellent approximation of the effect. For the tracking approach we first extracted a 2D multipole expansion from the 3D field calculation. Then UAL/TEAPOT was used to perform turn-by-turn tracking to evaluate the detuning. Comparison of the tracking approach with either mapping or analytical evaluation shows that using a 2D multipole expansion, extracted from the 3D field calculation, grossly underestimates the importance of quadrupole fringe fields. On the other hand, the tune spread produced by the dipole fringe field is negligible.

C. Dynamic aperture

As an example of a dynamic aperture study, Fig. 16 shows the impact of magnet field errors and the improvement from field compensation and orbit correction. The six-dimensional, element-by-element computer tracking is performed with the UAL [18] over the entire accumulation period. Initially, particles are launched at three momenta (Δp/p = 0, ±0.7%), in five transverse directions, with increasing betatron amplitude. The average dynamic aperture and the statistical errors are obtained from the results of ten random seeds. The green curve shows the combined effect resulting from the integrated 2D fringe field component of the uncompensated magnet pole tips (given in Table IV) and misalignments (0.5 mm, 1 mrad). The violet curve indicates the sole effect of the 2D integrated fringe field for uncompensated magnet pole tips. Finally, the blue curve represents the case of expected errors at the 10^-4 level (see Table V) with corrected misalignments (0.1 mm, 0.2 mrad).

X. CHROMATIC AND RESONANCE CORRECTION

Four families of chromatic sextupoles are needed to adjust the chromaticity to the desired values across the beam momentum spread of ±0.7%. Figure 17 shows that, with a two-family scheme, the optical distortion in the β function is as large as 30% for off-momentum orbits. With a four-family scheme, the off-momentum optics is greatly improved (less than 2% distortion).
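For reference, the sextupole contribution to the chromaticity can be written in the standard thin-lens form (a generic textbook expression with the usual sign conventions, not the specific SNS optimization):

\[
\Delta\xi_x \simeq +\frac{1}{4\pi}\oint \beta_x(s)\,K_2(s)\,D_x(s)\,ds,
\qquad
\Delta\xi_y \simeq -\frac{1}{4\pi}\oint \beta_y(s)\,K_2(s)\,D_x(s)\,ds,
\]

so sextupoles (strength K_2) are most effective where the product of the β function and the dispersion is large, which is why they sandwich the arc quadrupoles at high-dispersion sites; splitting them into four independently powered families provides the extra degrees of freedom needed to control the off-momentum β-beating shown in Fig. 17.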
To compensate for field imperfections and magnet alignment errors, the ring is designed with a set of correction elements consisting of horizontal and vertical dipoles, skew quadrupoles, normal and skew sextupoles, and octupoles. Trim windings on the lattice quadrupoles allow for any necessary quadrupole corrections. The correction dipoles are mounted close to each quadrupole. Additional windings on the correction dipole cores (similar to those employed in the AGS Booster) provide four skew quadrupole and two skew sextupole correctors per lattice superperiod. The addition of two families of individual short fast trim quadrupoles is currently under consideration to allow for a small tune variation (of the order of ±0.1) from injection to extraction. Two sextupole and octupole correctors are mounted near the horizontal and vertical beta maxima in each superperiod. This placement of correctors ensures that appropriate harmonics can be produced for the correction of resonances and closed-orbit distortion. An additional family of octupole correctors can be placed between the last arc quadrupole and the doublet in order to cancel the octupolelike tune shift caused mainly by the quadrupole fringe fields [33].

XI. IMPEDANCE AND INSTABILITIES

Because the SNS has an accumulation period of just 1 ms, the instability of greatest concern is the so-called PSR instability [37], suspected to be caused by the electron cloud. Based on experience at the PSR, we have adopted several measures for the SNS ring to avoid the PSR instability: a larger beam momentum spread to damp instabilities; a cleaner beam gap, achieved using both a higher rf voltage and a gap-cleaning kicker; reduction of beam scraping and secondary emission using multistage collimation; collection of electrons produced at the injection foil; and, finally, coating of the vacuum chamber's inner surface with TiN to reduce the secondary electron emission yield.

Operational experience at the AGS and its Booster has shown that the conventional formulation used for the resistive wall instability overestimates the growth rate, presumably because various Landau damping mechanisms are neglected. That formulation predicts a modest 1 ms growth rate for the SNS at the end of stacking. Hence, the choice of a stainless steel chamber is adequate. Experience at the AGS, Booster, and ISIS has also shown that, for the longitudinal microwave instability, the conventional Keil-Schnell criterion is overly conservative below transition. With the total 40 kV rf voltage, the beam momentum spread in the ring is about ±0.7%, which is more than adequate to avoid this instability. Transverse mode coupling is not likely to occur for the SNS ring, since the very long bunch cannot couple with the broadband impedance resonating at 1 to 2 GHz. Also, since the entire accumulation cycle is only one synchrotron period, a conventional head-tail-type instability is not a problem. Finally, transverse microwave instabilities are not predicted for the SNS ring. These types of instabilities have not been seen in existing low-energy proton synchrotrons, possibly because of damping effects caused by space charge.
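For context, one common form of the Keil-Schnell threshold referred to earlier in this section is (quoted as a generic coasting-beam criterion; the SNS-specific evaluation is in the references):

\[
\left|\frac{Z_\parallel}{n}\right| \lesssim F\,\frac{|\eta|\,\beta^{2} E}{e\,I_{\rm peak}}\left(\frac{\Delta p}{p}\right)^{2},
\]

where η is the slip factor, E the total energy, I_peak the peak current, and F a form factor of order unity. The momentum spread enters quadratically, which is why maintaining the full ±0.7% spread with the rf system provides such an effective margin against this instability.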
To minimize resonance effects and impedance complications, vacuum chamber steps are tapered and bellows and ports are shielded. The low-frequency impedance is found to be dominated by the 14 window-frame extraction kicker magnets. Efforts are made to optimize the terminations to reduce their impedance. Similar kickers used in the AGS injection and the Booster extraction and dump lines have not been associated with any beam instabilities.

XII. DIAGNOSTICS AND INSTRUMENTATION

The ring accumulation is a dynamic process during which the beam intensity increases by more than 3 orders of magnitude and the transverse beam radius increases by more than a factor of 12. The ring diagnostics instrumentation is designed with a wide range of sensitivity and turn-by-turn capability to monitor beam intensity (beam current monitors), position (beam-position monitors), transverse and longitudinal profiles (ionization profile monitor, wall current monitor), and beam loss (loss monitors). Dual-plane beam-position monitors are installed in the critical areas, near the ring straight-section doublets and the middle of the arcs, for orbit monitoring and local decoupling. Any residual beam between subsequent microbunches can be detected by a beam-in-gap monitor and removed by a beam-in-gap kicker. Electron detectors are planned to monitor the electron cloud in the vacuum chamber. The controls system is designed to immediately shut off subsequent beam pulses when a critical device failure is detected.

A. High energy transport line

A 140 m long high energy beam transport (HEBT) line connects the linac to the accumulator ring and keeps the entire SNS facility within the desired footprint. Far more than a simple transfer line, the HEBT line plays an essential role in reducing beam loss at ring injection. It optically matches the linac to the ring. Its diagnostic systems characterize the beam and detect accidental ion source and linac malfunction to protect the ring. The collimation system cleans both transverse and momentum halos. The energy corrector, operating at the linac frequency, partially compensates energy jitter in the incoming linac beam. The energy spreader, operating in a phase-modulated mode at the linac frequency, provides the energy spread required for beam stability. Adjustments can be made for ground settlement and magnet misalignments. Beam-position monitors and steering magnets are designed to detect and correct closed-orbit deviations.

The HEBT line has a transverse acceptance of 30π mm mrad to accommodate an incoming beam with 0.25π mm mrad rms unnormalized emittance, and it has a momentum acceptance of ±1% to accommodate an incoming rms momentum spread of 0.033%. After the energy-spreading cavity, a total momentum spread of about ±0.26% is achieved with negligible beam tail, as clearly demonstrated in Fig. 18.
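As a quick consistency check on these HEBT numbers (simple arithmetic on the figures quoted above, not an additional design requirement):

\[
\frac{A_{\rm HEBT}}{\varepsilon_{\rm rms}} = \frac{30\pi\ \mathrm{mm\,mrad}}{0.25\pi\ \mathrm{mm\,mrad}} = 120,
\qquad
\frac{1\%}{0.033\%} \approx 30,
\]

i.e., the line accepts betatron amplitudes out to roughly the square root of 120, or about 11 rms beam sizes, and momentum deviations out to about 30 rms momentum widths, leaving generous room for the transverse and momentum halo that the HEBT collimation system is meant to intercept.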
B. Ring to target transport line

The 150 m long ring to target beam transport (RTBT) line delivers the beam to the mercury target with the required specifications. Its acceptance (transverse 480π mm mrad and momentum ±1%) accommodates one extraction kicker failure with no beam loss. The diagnostic system detects fault conditions and, together with the collimation system, protects components. Adjustments can be made for ground settlement and component misalignments, and steering magnets and beam-position monitors are designed to correct closed-orbit deviations. The betatron phase advance between the kickers and the target will be adjusted to an integer multiple of 2π so that a kicker error does not lead to a displacement of the beam at the target.

XIV. SUMMARY

Reliability and maintainability are of primary importance to the SNS facility. Hands-on maintenance for the accumulator ring demands an average radioactivation at or below 1 to 2 mSv/h at 30 cm from the machine device [4]. The corresponding uncontrolled fractional beam loss is about 10^-4 for a 1 GeV beam in the SNS accumulator ring.

To achieve this goal, the SNS ring design addresses the six issues that lead to heavy beam loss: (1) The beam is painted to a quasiuniform distribution to keep the space-charge tune shift below 0.15. (2) A significant transverse acceptance-to-emittance ratio allows the beam tail and beam halo to be cleaned by the collimation system before hitting the rest of the ring, and a stationary rf bucket confines the beam to within 70% of its momentum acceptance (Δp/p = ±1%), while the machine's vacuum chamber provides a full momentum aperture of Δp/p = ±2%. (3) The layout and magnetic field at injection are designed to prevent premature H- and H0 stripping and excessive foil hitting. (4) A moderate main magnet field avoids saturation effects, and shimmed pole-tip ends in both dipole and quadrupole magnets help compensate fringe field effects. (5) To avoid all instabilities, the vacuum chamber is coated, chamber steps are tapered, bellows and ports are shielded, and the beam momentum is broadened [38-40]. (6) The HEBT is designed against accidental ion source and linac malfunction, and the ring and RTBT are designed against accidental extraction malfunction. Significant beam loss and radioactivation are expected only at the shielded collimation and injection regions.

FIG. 1. (Color) Schematic layout of the SNS accumulator ring. The four straight sections are designed for beam injection, collimation, extraction, and rf systems, respectively.
FIG. 3. (Color) Lattice functions of one lattice superperiod consisting of a FODO arc and a doublet straight. The horizontal phase advance across the arc section is 2π rad. The dispersion in the straight section is zero.
FIG. 5. (Color) Schematic layout of the injection straight section. The red elements are the fixed injection chicane, the blue elements are ring lattice quadrupoles, and the yellow and green elements are the vertical and horizontal dynamic kickers, respectively.
FIG. 7. (Color) Vertical emittance growth in anticorrelated painting (tunes: 6.3, 5.8). For the data shown in blue, space charge was neglected; for the data shown in red, the space-charge force for a 2 MW beam was included. Space charge produces a significant beam tail.
FIG. 9. (Color) Comparison of beam tail growth between same-tune and split-tune working points (correlated painting). A horizontal-vertical tune split significantly suppresses beam-tail growth caused by space-charge coupling.
FIG. 10. (Color) Growth in 99.9% emittance (without space charge) as a function of systematic quadrupole roll for unsplit-tune, half-integer split-tune, and integer split-tune working points. Only one quadrupole per lattice superperiod is rolled. A horizontal-vertical tune split significantly suppresses emittance growth caused by quadrupole misalignment. The actual expected quadrupole roll in the SNS is in the range from 0.2 to 1 mrad, randomly distributed.
FIG. 11. (Color) Ring extraction layout and orbit. The beam is kicked vertically by 14 kicker modules and extracted horizontally by a Lambertson septum magnet.
FIG. 12. (Color) Beam tail driven by space charge and magnet errors. The development of the beam tail is noticeably enhanced by the combination of these two driving sources. The same-tune working point (5.82, 5.80) is chosen to illustrate the effect.
FIG. 13. (Color) Comparison of collimation inefficiency between the previous all-FODO lattice (upper curve) and the present hybrid lattice (lower curve). The inefficiency is defined as the number of halo particles escaping the collimation system after one turn above a given amplitude.
FIG. 14. (Color) Schematic of the SNS ring collimator showing layers of material for radioactivation containment.
FIG. 15. (Color) Longitudinal phase space at the end of 2 MW beam accumulation. The blue curve outlines the rf bucket. The vertical lines delineate the edges of a 250 ns gap. The effects of space charge and cavity beam loading are included.
FIG. 16. (Color) Dynamic aperture obtained from 6D UAL computer tracking. Each data point gives the mean and standard deviation obtained using ten random seeds.
FIG. 17. (Color) Off-momentum lattice function perturbation when using a two-family chromaticity sextupole correction scheme. This perturbation reduces the available aperture for the off-momentum beam.
FIG. 18. (Color) Comparison of energy distributions at the injection foil obtained with the energy spreader and with a conventional debuncher. The energy spreader significantly suppresses the beam tail. Note that the distribution for the debuncher corresponds to each linac bunch, while the distribution for the energy spreader is an average over all linac bunches within the pulse.
TABLE I. Main parameters of some existing and proposed accelerator-based neutron facilities.
TABLE II. Major machine parameters for the original hybrid lattice Spallation Neutron Source ring.
TABLE III. Comparison of SNS ring working points in tune space.
TABLE IV. Integrated quadrupole end field from one magnet end before pole-tip end shimming, extracted from a 3D TOSCA calculation (normalized to 10^-4 of the main field at the reference radius R_ref). For regular ring quadrupoles, R_ref = 10 cm; for large ring quadrupoles, R_ref = 12 cm (approximately 92% of the quadrupole iron pole-tip radius).
TABLE V. Expected magnetic errors of the ring quadrupoles, based on measurement. The multipole strengths are normalized to 10^-4 of the main field at the reference radius R_ref.
TABLE VII. Tune spread produced by various mechanisms on a 2 MW beam with transverse emittance of 480π mm mrad and momentum spread of ±1%.
2018-12-23T17:40:43.954Z
2000-08-31T00:00:00.000
{ "year": 2000, "sha1": "62e7550f0f2fb62697c1b8e5a108a5c2a9320e41", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevSTAB.3.080101", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "62e7550f0f2fb62697c1b8e5a108a5c2a9320e41", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259321528
pes2o/s2orc
v3-fos-license
Achieving Digital Health Equity by Personalizing the Patient Experience

Background: The COVID-19 pandemic saw a significant increase in the use of virtual care, supporting its utility and its benefits. It also revealed that unfortunately there are limitations and gaps we still need to address, including inequitable access to digitally enabled health care tools. Methods: On November 8, 2022, Mass General Brigham held the Third Annual Virtual Care Symposium: Demystifying Clinical Appropriateness in Virtual Care and What's Ahead for Pay Parity. One panel addressed digital health equity, and key points are summarized here. Results: Four experts discussed the key domains of digital equity and inclusion in the session titled "Achieving Digital Health Equity: Is it a One-Size-Fits-All Approach or Personalized Patient Experience?" These included lessons from strategies and tactics being used by hospitals and health systems to address digital equity issues, and opportunities to achieve digital health equity for specific populations (e.g., Medicaid). Conclusions: Understanding the drivers of digital health disparities can help organizations and health care systems develop and test strategies to reduce them and improve access to quality health care through digitally enabled technologies and delivery channels.

Introduction

The COVID-19 pandemic saw a significant increase around the world in the use of virtual care, supporting its utility and its benefits. 1,2 It revealed a lot about what is feasible and where virtual care could be of help. It also revealed that although virtual care is appropriate for many clinical specialties and patients, unfortunately there are limitations and gaps we still need to address. 3,4 Some of these gaps are unique to certain countries (e.g., licensure and other legal requirements, reimbursement), but some of them are more widespread. One of the biggest gaps that became quite evident was inequitable access to digitally enabled tools and virtual care opportunities. 3,4

There are many ways to address these gaps, as no single solution will be relevant to every patient and health care institution; thus, the intent of this article is to provide some key examples, data, and lessons learned as a starting point for promoting change. First, we need to understand the key domains of digital health inequity. Second, we can learn important lessons from the strategies and tactics being used by hospitals and health systems to address digital health inequities. Finally, we can examine the gaps and opportunities to achieve digital health equity for specific populations such as those eligible for Medicaid or government assistance programs. Although the strategies and solutions discussed here are based primarily on experiences in the United States, they should be readily generalizable to other populations and circumstances around the world.

Key Domains of Digital Health Equity and Inclusion

This section will cover three core themes: how to define digital health equity, why we are talking about it now, and how to go about achieving it. Digital health equity can be thought of as everyone having a fair and just opportunity to engage with and benefit from digitally enabled health tools. This is borrowed from a medical-level definition of health equity, 5 which holds that everyone should have a fair and just opportunity to benefit from medical care.
In our discussion, we are referring to patient-facing digitally enabled health tools such as mobile health apps, patient portals, remote monitoring, telehealth, and texting solutions.

So why are we talking about this now? As noted, there was a seismic increase in virtual care use over the past 2 to 3 years. In addition, during vaccine deployment, health systems frequently relied on online-based scheduling tools, which brought to the forefront that certain parts of the population did not have access to the internet. Unfortunately, these were often the populations that were disproportionately affected by COVID. Further, something less discussed is the enactment of the 21st Century Cures Act, 6 which put into place a law that empowered patients to have easier access to their health care data. Of course, that requires going online. If you think about the HITECH Act 7 and the development of patient portals, for example, the early literature highlights substantial disparities in portal use and access, with low rates among marginalized populations. 8,9 So, it is not surprising that these disparities became starkly visible during the pandemic, as demonstrated by the significant disparities in the use of video visits by marginalized populations.

When thinking about digital health equity, there are five key domains: technology access and infrastructure, digital and health literacy (which incorporates English proficiency), implementation, policy, and standard of care. Technology access is the classic digital divide: those who have access to the internet and those who do not. However, subsequent work has highlighted that these digital inequities are multifactorial. For example, broadband infrastructure (who has wired internet lines or a Wi-Fi signal available in their home) and broadband affordability (whether you are able to afford it and the necessary devices) are additional factors driving digital inequities. These access gaps represent an opportunity for health care systems to serve as a touch point, screening for these needs and then referring patients to the appropriate resources. Structural barriers, like digital redlining, are increasingly being recognized as critical to digital equity. Digital redlining, or digital discrimination, refers to a practice by internet service providers (ISPs) in which they deploy internet services in ways that disadvantage certain groups. For example, in some areas, ISPs deploy internet access at varied speeds and quality, but charge customers the same price. Customers living in underserved areas often pay the same amount for slower speeds.

Digital literacy, or the use of digital tools, presents key gaps and opportunities to achieve digital equity. Digital literacy is the "ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills." 10 The design and development of the platform can affect a patient's ability to use digital tools. For example, apps that are not designed for patients from underserved populations can limit their utility. Sarkar et al. found that patients from safety net clinics had difficulty in completing data entry and data retrieval tasks on chronic disease mobile apps. 11 Even when the technology is designed optimally, many patients may still need some extra support and help to get connected and feel comfortable.
One strategy is the increasing use of digital health navigators, new members of the health care team who help people on-board and use these technologies. Prior work has piloted digital health navigators in primary care clinics. In the pilot, the navigator reached out to about 400 patients who were not active on the patient portal and supported patients in enrolling in the portal. The team was able to contact most of them and enrolled about a third. Of those enrolled, about 80% logged into the portal again, so this one-on-one support really made a difference in connecting patients with this new tool and making it something to use over the long term. Finally, it is critical to consider that digital literacy is not unique to health care. Collaboration with experts from adult education and youth education can impact a patient's life beyond health care, including in civic engagement, access to social benefit programs, public assistance, and workforce development.

From an implementation standpoint there are four things to highlight. First, we need to involve patients and families in the development and implementation of digitally enabled tools for equity, so they can help guide the digital experience. Patient preference can shape how digitally enabled tools are implemented in health care, and we must build systems that provide both digital and non-digital options. Second, the equitable implementation of digitally enabled tools requires monitoring use across patient demographics. Then, organizations can create venues (e.g., newsletters, blogs, white papers) to share the data with those in leadership to highlight gaps and strategize on how to address them. Third, workflows can facilitate or hinder the success of digitally enabled tools, especially among underserved populations. For example, workflows that include interpreters as part of a virtual care visit can ensure the use of digitally enabled tools among patients with limited English proficiency. Fourth, patient privacy, especially when sharing data remotely, must be maintained and patients educated on what security measures are in place, especially among communities that already have mistrust of the health care system. 12

In terms of policy, the Infrastructure Investment and Jobs Act that was passed in late 2021 includes a section on digital equity. It provides policy direction and funding to address gaps in technology access and literacy. It makes digital equity a public focus, which has implications for the health care sector. 13 For example, the Affordable Connectivity Program provides an internet and device subsidy for eligible patients who have these digital needs. Further, the law includes funding for digital literacy programs. This policy can serve as the foundation for health care organizations to partner with community organizations and develop community-wide digital equity programs. Being able to refer patients to these programs is one big opportunity for health care organizations.

Finally, we need to view digitally enabled tools as the standard of care available to all patients. Data suggest that only certain patients are offered these tools. In one study, Black and Hispanic patients were offered patient portals at significantly lower rates than White individuals. 14 Clinicians and providers can play a significant role in promoting digital equity in delivering care. Conversations need to occur to make sure we do not act on implicit bias about who is going to use technology and who is not.
The goal is digital health equity, but only to the extent that it gets us to the ultimate goal of total health equity.

Strategies and Tactics Being Used by Hospitals and Health Systems to Address Digital Equity Issues

Digital health solutions have become essential tools for health care providers as they deliver care. Therefore, making sure patients can access and understand mobile health apps, patient portals, and virtual care must be addressed at both the system and patient levels. At a system level, health care providers and technology companies can commit to designing and implementing strategies in a way that all patients can access and understand. In addition, they can commit to including digital health equity as part of the conversation, not only in the design phase but also in the implementation and evaluation phases. For example, CommonSpirit (Chicago, IL) has embedded health equity and digital health equity into their strategy. They include questions related to technology access and digital health literacy in the process designed to evaluate digital solutions. They then go beyond this to evaluate the use of technologies.

Another system-level approach is to create a process that engages diverse patient voices. That can include partnering with community-based organizations that know the needs of patients and communities. It can also include creating avenues for under-represented patients, communities, and providers to give feedback on whether digital solutions are accessible and understandable by real patients and communities. For example, hospitals could engage moms with low health literacy to understand their communication preferences and tailor services to meet their needs and build their trust during this important time of their lives.

Another system-level solution is tracking patient engagement with digital tools to understand the patients and populations that are using these tools. CommonSpirit has implemented Docent Health, a program to provide culturally competent navigators to pregnant moms. These navigators are embedded in the communities that they are serving. CommonSpirit analyzes the data around the use of this program, stratifying it by race, ethnicity, and social vulnerability to determine which communities are engaging with Docent Health. This analysis allowed CommonSpirit not only to know who is and is not using the solution, but also to learn that for some Spanish-speaking populations in California, providing a Spanish-speaking navigator was not enough. Those patients needed a navigator who spoke the right dialect of Spanish for their specific community.

The last system-level strategy is to advocate for policies that allow patients to access digital health solutions. For example, the American Hospital Association 15 is currently advocating for investments in infrastructure, including broadband access, as well as increased federal funding, coverage, and reimbursement for the expanded use of virtual care and other technologies.

Patient-driven strategies are also important drivers for digital health equity. While there are federal and state laws that must be considered, one way to improve digital health access for individuals is to provide patients and physicians with technology (including hardware and software). 16 For example, the University of Mississippi Medical Center launched a pilot network program in diabetes virtual care.
It provided patients in the program with tablet computers at no cost, and they were then able to take and report their own vital signs daily, which led to improved patient outcomes and increased medication management, and made patients more willing to participate in virtual care visits. 17

Health care providers can also proactively work with technology companies to select solutions that minimize barriers to access. When Boston Medical Center was looking for a way to provide remote patient monitoring to address postpartum hypertension, it considered a variety of different solutions. The team at Boston Medical Center recognized that most of their patients had access to smartphones, but there was a divide in patients' ability to connect. Some patients did not have consistent access to Wi-Fi or did not have a data plan that could be used to support a video framework. Ultimately, the team selected Rimidi (Atlanta, GA), a solution that uses the local cellular network, making it accessible to anyone with a smartphone.

Health care providers and technology companies also may need to acknowledge that video-based digital solutions are not always the best path forward. One popular alternative is using asynchronous digital technologies such as SMS text messaging to provide information and an access point for patients to connect with health care providers. Research indicates that 90% of text messages are read within 90 sec of when they are received. 18 As a result, texting can be a powerful way to reach patients with the right message at the right time. It can also be done in different languages, is relatively cost-effective, and can triage patient needs, provide timely information, and improve communication and engagement with patients.

Another strategy is to develop digital solutions that are linguistically and culturally sensitive and inclusive. Providence (Renton, WA) uses Wildflower (San Francisco, CA), which provides information and resources to patients from pregnancy through delivery. The individuals who designed this solution realized quickly that they needed more than mere translation from English to Spanish; they needed to achieve localization to adequately meet the needs of their Spanish-speaking population. At its core, translation transforms text, whereas localization (i.e., the process of adapting and transforming content so it resonates in another country or locale) transforms the entire product or content from one language to another (e.g., using metric vs imperial measures, the proper currency, and appropriate use of emojis). They needed to evaluate the functionality of the app and ask the question, how would a Spanish-speaking individual approach this app? As a result, they created a solution that considered individuals' cultures, as well as how, when, and where they would be using the app.

These solutions can also be tailored to meet the needs of those with lower digital literacy. For example, by using more videos, images, emojis, and symbols, providers can ensure that patients, regardless of their literacy level, are able to access and understand the information. In addition, those with low digital literacy may be afraid to use technology or hesitant to trust their providers. To address these concerns, health care providers can offer training to support their patients throughout the process. Similar to the digital health navigator referenced earlier, Ochsner Health (New Orleans, LA) launched the "O Bar" concept many years ago.
The O Bar, located within Ochsner's Center for Primary Care and Wellness, carries physician-recommended digital products and is staffed by a full-time technology specialist who can help patients choose the right tool and who helps set up, guide, and support these individuals as they use those tools. Nemours Children's Hospital (Wilmington, DE) redeployed staff as digital health navigators to help patients and their families complete digital forms, troubleshoot connectivity issues, and better engage in virtual care visits.

The last strategy is that organizations can develop workflows that better allow clinical teams to engage patients in these tools. Froedtert (Milwaukee, WI) and the Medical College of Wisconsin (Milwaukee, WI) implemented Babyscripts (Washington, DC), a platform designed to connect expectant mothers with doctors and resources to improve perinatal outcomes. Expectant mothers are invited to join Babyscripts after it is determined that they are pregnant. However, early in the implementation, Froedtert realized that certain communities and populations were not engaging with the app. They created a process where their team proactively reaches out to individuals who do not accept the initial invitation and encourages them to join and use the Babyscripts tool, and this has had a positive impact on enrollment.

Digital Equity and Inclusion in Medicaid Beneficiaries

Medicaid provides health benefits and other services for low-income individuals. There are about 76 million Americans on Medicaid, and dual-eligible patients are those older adults on Medicare who are also low income, accounting for over 12 million Americans. The Government Accountability Office recently published an analysis of five state experiences and how they were using virtual care to provide access to care for Medicaid populations. 19 They examined virtual care utilization among Medicaid beneficiaries pre-pandemic (before March 2020) and then what happened through February 2021, where there was an incredible 15-fold increase across the 5 states. They chose states representing a broad swath of the population in terms of percent in rural areas, access to broadband, and variations among demographics such as age, income, and education. Right before the pandemic, about 11% of Medicaid beneficiaries (e.g., in Arizona) were accessing virtual care for one or more of their health care services. In the year the pandemic started (March 2020 to February 2021), that shot up to 43.8%. Thinking about this increase in virtual care use with regard to a very specific, very vulnerable population, and how they were able to access virtual care, changes the picture.

Medicaid supports those who qualify through means testing, so their income thresholds are below a certain amount. But there are also subgroups who have disabilities as well as being low income, and those who are older, and they have specific health care needs, including behavioral health for many with disabilities. Given the need for behavioral health services combined with limited staffing, the majority of states now provide Medicaid coverage for audio-only behavioral health services. 20 For older adults, long-term care is a big issue, and Medicaid is the primary source of coverage for long-term services and supports (LTSS) in America. Thus, when we think about digital health tools and access to health care via virtual care, we need to consider how they would apply in these settings.
Currently, every state has a waiver for home- and community-based services where, in the case of older Medicaid beneficiaries and those with disabilities and chronic illnesses, they have the option to go into a nursing home to receive care or they can live independently and receive LTSS in their home to assist them with their daily needs. The problem is that the demand to age in place far outweighs the supply. Across the United States, Medicaid-eligible older adults are waiting an average of 3 years, with over 820,000 Americans waiting to receive care in the home instead of going to an institutional setting. 21 This not only affects Medicaid-eligible older adults but also has a real impact on our whole society in terms of how we are caring for an older population. Digital health tools and the ability to access care via virtual care in the home support individuals aging in place, which is a goal that many Americans share.

This is happening incrementally. During the pandemic, Appendix K was added to home- and community-based waivers. (As described on Medicaid.gov, Appendix K is part of the Emergency Preparedness and Response for Home and Community Based Services (HCBS) 1915(c) Waivers: a standalone appendix that may be utilized by states during emergency situations to request amendment to approved 1915(c) waivers, and it includes actions that states can take under the existing Section 1915(c) home and community-based waiver authority to respond to an emergency.) Many states took advantage of it to digitize home- and community-based services, to allow for more digitally enabled health tools in the home, and to allow virtual care visits as opposed to in-person care. 22 That could be a way going forward, as states are increasingly considering options available in Medicaid waivers to provide a greater number of seniors with LTSS in the home and off the waiting lists.

If you have very limited means, it makes a big difference in terms of how health care can be reimbursed. Payment parity means the same amount of reimbursement for in-person care as for virtual care, whereas coverage parity means that the service is simply covered. If it is not covered at all, how would an individual with very limited means receive the care they need and would prefer? States were doing different things before the pandemic started and then changed during the public health emergency to allow access to virtual care for Medicaid beneficiaries from their homes. Notably, behavioral health has greatly benefited from virtual care options; its use was high during the pandemic and has remained high. Other specialty services experienced a huge jump, particularly dental, occupational health, and long-term supports and services. Many states followed Medicare's approach to expanding coverage for care delivered via virtual care during the pandemic, and modified regulations for their Medicaid populations in terms of widening the numbers of providers and types of services available to them. Data on virtual care utilization among Medicaid beneficiaries were further broken out by whether services were accessed via video/audio or audio-only. Some data broke use down by race and ethnicity, income, and education levels. 23 For the most part, White individuals were much more likely to access virtual care via video/audio, whereas those of a Latino background were much more likely to access virtual care via audio-only.
Thus, if we limit access to care to video/audio only, it will impact certain population groups within this already very vulnerable Medicaid population. There are other ways to enhance access to care outside of state legislation and governor executive action, and that is what individual Medicaid departments are doing in certain states. California's Department of Health Care Services, which operates the state Medicaid program, released a list of virtual care modifications that includes payment parity for audio-only care, an example of how it is trying to ensure care for its vulnerable population group; and Ohio's Department of Medicaid permanently expanded coverage for audio-only and authorized a broad range of providers and services eligible to deliver care virtually to their Medicaid-eligible population. Other states craft their legislation around virtual care definitions. Arizona has a detailed approach to how virtual care is defined in AZ HB 2454, with a very specific audio-only carve-out in which audio-only care is considered appropriate only in situations where technology or other infrastructure limits apply. This audio-only carve-out has real implications for access to care in Arizona. The state legislature created an advisory council that initially recommended reducing the number of codes eligible for audio-only by nearly two-thirds but has since amended its recommendation to align audio-only coverage with Medicare. 24 In contrast, Colorado's definition in CO HB 1190 is quite broad: any type of care delivered at a distance, without an audio-only carve-out. This broad definition allows for emerging technologies to be used. Thus, if we require video/audio services or in-person visits, this will impact access to care, especially for those in vulnerable patient-population groups who disproportionately access care via audio-only. Interestingly, the Federal Trade Commission says it is the standard of care that matters, not how it is delivered. A final example is what has been happening across the country when it comes to rural populations trying to access palliative care. Patients must make really tough decisions between being able to remain in their home and have their pain managed, or going to more urban or tertiary centers to receive palliative care. This has been shown to have a significant impact on patients' ability to access palliative care in their home, and simply allowing patients to have that decision is a huge equity issue. 25,26 A large randomized trial of telepalliative care is underway and will provide significant insight into the role of digitally-enabled care delivery for this population (NCT01038271). Summary What can the individual practitioner do to utilize virtual care through the lens of health disparities and health equity? Simply talking to the patient is a great starting point: what is the pain point the patient is feeling, is there a technology that can address it, and if not, are there other options? Sometimes we try to think of technology very broadly, but for the individual patient we need to tie it to a framework with health equity in mind, moving toward structural and social determinants of health. The health care community needs to identify through these conversations what the challenges are and how to bridge them. We need to talk with patients about what they want. Some dislike using technology and want in-person visits all the time, as they enjoy the communication and conversation.
Some, on the other hand, are very tech savvy and want everything done through an app or portal. We need to ask these questions and not make assumptions about what someone would prefer just because they are old or young, male or female, etc. However, once the questions are asked, we need to acknowledge and address the fact that many providers do not have the resources needed to support the preference that an individual may have. We need to better embed resources in the system. At the first point of engagement, we can ask "How would you prefer to have your visit conducted?" and then let people choose among text, email, audio-only, or video regardless of their payor (Medicare, Medicaid, private, out of pocket), and treat patients equally across the board. We also must educate patients as to what these terms (e.g., telemedicine, eHealth, virtual health) mean. There are many stories about how patients picked the virtual option and then, at the scheduled time, did not realize what they had picked and that it meant needing a computer and internet access to connect. Even though people can go through a checklist and pick options, it is not just a matter of whether they have access, but whether they understand what these options are. Without these basic one-on-one conversations about what options are available, what they entail and whether resources exist, digital health equity will not be attained. Author Disclosure Statement No competing financial interests exist. Funding Information No funding was received for this article.
2023-07-05T05:07:12.799Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "a0f9cce5fd0cf8ebaec21e6343c8ab7400cc37ac", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1089/tmr.2023.0018", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a0f9cce5fd0cf8ebaec21e6343c8ab7400cc37ac", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
23314175
pes2o/s2orc
v3-fos-license
Deterministic Single-Phonon Source Triggered by a Single Photon We propose a scheme that enables the deterministic generation of single phonons at GHz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on-chip in an opto-mechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new opto-mechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nano-fabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus. Engineering of periodic nanostructures has proven to be an immensely powerful tool to shape the properties of a material. Most notable are photonic crystals, which are created by the periodic modulation of the refractive index. Their photonic bandgaps and defect modes have been widely studied [1] and have found a number of applications in nano-photonics. For example, in solid-state quantum optics, they are commonly used to control light-matter interaction [2]. Similarly, the periodic modulation of mechanical properties leads to the formation of phononic crystals [3]. Chip-scale devices that allow engineering of both the photonic and the phononic density of states simultaneously have recently been proposed [4][5][6][7]. This has led to a range of theoretical [8][9][10] and experimental [11][12][13] breakthroughs in opto-mechanics. In parallel there has been significant progress on coupling single solid-state emitters, in the form of nitrogen-vacancy centers or self-assembled quantum dots (QDs), to mechanical resonators either via magnetic gradient coupling [14,15] or strain coupling [16][17][18][19][20][21][22]. Additionally, the potential of using phononic crystals to control single-phonon mediated processes in solid-state systems has recently been alluded to [23,24]. In optics, the reliable generation and detection of single-photon and entangled-photon states has important applications within quantum information science. Similar goals are being pursued in opto-mechanics, where the generation [9,13,25] and detection [12] of non-classical phononic states are opening new avenues of research. These schemes are typically probabilistic and rely on the direct radiation-pressure interaction between co-localized optical and mechanical modes, where the coupling rate between the modes is proportional to the square root of the intra-cavity photon number [26]. However, parasitic absorption is a problem for large intra-cavity photon numbers when operating at millikelvin temperatures [13,27], which becomes necessary when performing experiments on single phonons with frequencies in the few-GHz regime.
In this paper we propose an alternative approach based on a hybrid opto-mechanical (OM) crystal in which we engineer the coupling of a three-level emitter to both the photonic and the phononic reservoirs. We demonstrate how the internal spin state of the emitter can be used to mediate strong photon-phonon interaction for an emitter embedded in the OM-crystal. In contrast to the standard approach in opto-mechanics [9,12,25,26], our proposal does not rely on radiation-pressure coupling, which is strong for large intra-cavity photon numbers. Instead, the deterministic single-photon-single-phonon cascade triggered by a single narrow-bandwidth photon is operated at an average intra-cavity photon number significantly below one [28], thus offering a route to circumvent the problem of parasitic heating. We propose a readily implementable device that can be used with QD emitters. Our protocol is based on the two optical transitions of a lambda-system, given by a singly-charged QD (trion) in an in-plane magnetic field (Voigt configuration), c.f. Fig. 1(a). In this configuration there are two allowed linearly-polarized optical transitions that decay at the same rate [29], while the two optical ground states are coupled by a spin-flip rate. Experiments have shown that it is possible to operate in a regime (depending on the Zeeman splitting, sample temperature, and the cotunnelling rates) where the spin-flip process is dominated by single-acoustic-phonon mediated transitions [30,31], which is in good agreement with theoretical predictions [32][33][34]. We note that the coherence properties of the emitted phonons are inherited from the coherence of the spin. In many QD experiments the spin coherence is not limited by phonon-mediated relaxation processes [35], leading to emission of incoherent phonons. However, progress has recently been made towards increasing the spin-coherence times [36] and the ultimately limiting process remains to be determined, especially when operating at mK temperatures. The emitter is embedded in an OM-circuit shown schematically in Fig. 1(b). A photonic (phononic) waveguide couples the single photons (phonons) into and out of the OM-circuit. The photonic and phononic waveguides each couple to their respective mode of the OM-cavity at the rates denoted κe,o and κe,m. All relevant rates are indicated in Fig. 1(b), and controlling their relative magnitudes offers a wide range of design possibilities. For the remainder of this paper we will focus on just one of these different realizations, which is well suited for the deterministic generation of single phonons. We consider the case where both cavity modes are in the over-coupled regime, meaning that the resonator loss is dominated by the coupling to the waveguide mode, κe,o (κe,m) ≫ κi,o (κi,m) [9]. Thus the intrinsic cavity loss rates, κi,o and κi,m, can be neglected in the following. The emitter-resonator coupling is in the bad-cavity but large-cooperativity regime, i.e., κe,o ≫ g13, g23 and κe,m ≫ g12, but 4(g13² + g23²)/(κe,o γ3) ≫ 1 and 2g12²/(κe,m γ2) ≫ 1 [37]. In the bad-cavity regime the influence of the cavity on an emitter close to resonance can be captured by a cavity-enhanced effective decay rate of the excited state into the cavity mode: Γ12 = 2g12²/κe,m, Γ13 = 2g13²/κe,o, and Γ23 = 2g23²/κe,o [37]. Thus the schematic in Fig. 1(b) simplifies to the effective circuit shown in Fig. 1(c) for the implementation we are considering here.
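The bad-cavity expressions above lend themselves to a quick numerical sanity check. The sketch below is not part of the original manuscript: the formulas are exactly the ones quoted in the text, but every parameter value is a hypothetical placeholder chosen only to illustrate the over-coupled, bad-cavity, large-cooperativity regime.

```python
# Minimal sketch (assumed parameter values): evaluates the cavity-enhanced decay
# rates Gamma_ij = 2 g_ij^2 / kappa quoted above and checks the stated regime.

def effective_rates(g13, g23, g12, kappa_o, kappa_m, gamma3, gamma2):
    """Return cavity-enhanced decay rates and cooperativities (all in the same angular-frequency units)."""
    Gamma13 = 2 * g13 ** 2 / kappa_o   # |3> -> |1> decay into the optical cavity mode
    Gamma23 = 2 * g23 ** 2 / kappa_o   # |3> -> |2> decay into the optical cavity mode
    Gamma12 = 2 * g12 ** 2 / kappa_m   # |2> -> |1> decay into the mechanical cavity mode
    C_opt = 4 * (g13 ** 2 + g23 ** 2) / (kappa_o * gamma3)  # optical cooperativity
    C_mech = 2 * g12 ** 2 / (kappa_m * gamma2)              # mechanical cooperativity
    return Gamma13, Gamma23, Gamma12, C_opt, C_mech

# Illustrative, made-up rates (arbitrary common units):
g13 = g23 = 1.0                 # emitter-cavity couplings of the optical transitions
g12 = 0.05                      # emitter-cavity coupling of the phonon-assisted transition
kappa_o, kappa_m = 20.0, 1.0    # waveguide-loaded cavity linewidths
gamma3, gamma2 = 0.01, 0.001    # residual decay into unguided loss modes

G13, G23, G12, C_o, C_m = effective_rates(g13, g23, g12, kappa_o, kappa_m, gamma3, gamma2)
print(f"Gamma13 = {G13:.3f}, Gamma23 = {G23:.3f}, Gamma12 = {G12:.4f}")
print(f"bad cavity: kappa_o >> g13,g23 -> {kappa_o >= 10 * max(g13, g23)}, kappa_m >> g12 -> {kappa_m >= 10 * g12}")
print(f"large cooperativity: C_opt = {C_o:.0f}, C_mech = {C_m:.1f}")
```

With these placeholder numbers the two optical transitions acquire equal cavity-enhanced rates; the interference argument in the following paragraph additionally requires Γ13 = Γ23 + γ3, which is approached here because γ3 is small compared to the enhanced rates.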
The effective circuit of Fig. 1(c) is best described as an emitter coupled to unidirectional photonic and phononic reservoirs [38,39]. The single-frequency scattering coefficient for the elastic scattering of a trigger photon, ωtr, incident on the |3⟩ → |1⟩ transition is given by Eq. (1). For the case of Γ13 = Γ23 + γ3 and ∆ = 0, i.e., when the two optical transitions are enhanced by the same amount and the incident photon is on resonance and is narrowband compared to the QD transition, we see complete destructive interference of the scattered light at the resonance frequency ω13. Thus, the photon has to be scattered along one of the remaining available decay channels [40,41]. Consequently, the scattering coefficient for a trigger photon incident on the |3⟩ → |1⟩ transition to Raman-scatter along the |3⟩ → |2⟩ transition is given by Eq. (2). From Eq. (2) we calculate the success probability of initializing the single-photon-single-phonon cascade process, c.f. Fig. 1(d), which approaches unity for large coupling efficiencies of the optical transitions. It is promising to note that recent experiments in photonic-crystal waveguides have shown that even for moderate enhancements of the waveguide mode, spontaneous-emission coupling efficiencies βwg > 98% can be readily achieved [42]. This is largely the result of the strongly suppressed coupling to optical loss modes for QDs embedded in photonic-crystal membranes [43]. The incident and outgoing wavepackets for the successful generation of a photon-phonon cascade from a single incident trigger photon are illustrated in Fig. 1(c). FIG. 1. Operational principle of the deterministic single-photon-single-phonon cascade. a) Level structure of a singly charged exciton in a magnetic field in Voigt configuration. The optical transitions between states |1⟩ and |3⟩ and states |2⟩ and |3⟩ form a lambda system, indicated by the solid arrows. Furthermore, the two ground states, |1⟩ and |2⟩, are coupled to each other via a single-phonon-mediated transition, indicated by the dashed arrow. b) Schematic of the desired OM-circuit functioning as a heralded single-phonon source, which is deterministically triggered by a single photon. The photonic (phononic) waveguide, shown in purple on the bottom (green at the top), couples the single photons (phonons) into and out of the OM-cavity modes (middle) at the rate κe,o (κe,m). The QD couples to the photonic and the co-localized phononic mode of the OM-cavity with the rates g13, g23, and g12 and to the loss modes in the photonic and phononic environments with rates γ3 and γ2. c) The schematic of the effective OM-circuit for the parameter regime discussed in the text, with the relevant rates indicated. The incident photon (blue wavepackets) triggers the outgoing photon-phonon cascade (red wavepacket and green wavepacket). d) The success probability of initializing the photon-phonon pair by a single photon incident on the trigger transition as a function of detuning of the trigger photon, ∆ = ωtr − ω13, for several values of the cavity β-factor, βcav = (Γ13 + Γ23)/(Γ13 + Γ23 + γ3).
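The displayed equations referred to above as Eqs. (1) and (2) are not reproduced in this excerpt. Purely as an illustration, the sketch below uses the textbook single-photon scattering result for an emitter coupled to a unidirectional channel; it reproduces the interference condition stated in the text (complete extinction of the elastic channel at ∆ = 0 when Γ13 = Γ23 + γ3) and the qualitative trend of Fig. 1(d), but it should not be read as the manuscript's exact expressions.

```python
# Hedged sketch: standard unidirectional-waveguide scattering for a narrowband
# trigger photon, used here only to illustrate the behaviour described in the text.

def raman_success(delta, Gamma13, Gamma23, gamma3):
    """Probability that the trigger photon is Raman-scattered along |3> -> |2>,
    launching the photon-phonon cascade. delta is the detuning from omega_13."""
    Gamma_tot = Gamma13 + Gamma23 + gamma3
    denom = (Gamma_tot / 2.0) ** 2 + delta ** 2
    t_elastic = 1.0 - Gamma13 / (Gamma_tot / 2.0 - 1j * delta)  # elastic transmission amplitude
    p_raman = Gamma13 * Gamma23 / denom                         # Raman (cascade-triggering) probability
    p_loss = Gamma13 * gamma3 / denom                           # scattering into unguided loss modes
    assert abs(abs(t_elastic) ** 2 + p_raman + p_loss - 1.0) < 1e-12  # probability conservation
    return p_raman

# Impose the interference condition Gamma13 = Gamma23 + gamma3 and scan the
# cavity beta-factor beta_cav = (Gamma13 + Gamma23) / (Gamma13 + Gamma23 + gamma3).
for beta_cav in (0.90, 0.98, 0.999):
    Gamma13 = 1.0
    gamma3 = 2 * Gamma13 * (1.0 - beta_cav)  # follows from the two conditions above
    Gamma23 = Gamma13 - gamma3
    p0 = raman_success(0.0, Gamma13, Gamma23, gamma3)
    p1 = raman_success(1.0, Gamma13, Gamma23, gamma3)  # detuned by Gamma13
    print(f"beta_cav = {beta_cav}: P(delta=0) = {p0:.3f}, P(delta=Gamma13) = {p1:.3f}")
```

As expected under these assumptions, the on-resonance success probability tends to one as βcav approaches unity, mirroring the statement above that near-unity coupling efficiencies make the cascade essentially deterministic.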
Implementing the circuit in Fig. 1(b) requires an OM-crystal that supports photonic and phononic band gaps at frequencies relevant to the optical transitions [2] and to the Zeeman splitting [35] obtainable for the considered emitter, i.e., InGaAs QDs. The photonic resonance frequency was chosen to fall within the range of optical transitions of standard InGaAs QDs and the phononic resonance frequency corresponds to a Zeeman energy splitting achievable in standard cryomagnet systems. The OM-crystal we propose (shown in Fig. 2(a)) consists of a hexagonal array of holes, separated by the lattice constant a = 300 nm, etched into a GaAs membrane of thickness d = 0.65a. Each hole is made up of three overlapping ellipses rotated by 2π/3 with respect to one another. The minor and major axes of the ellipse are given by A = 0.45a and B = 0.6a, respectively. The center of each ellipse is shifted outwards along its major axis by L = 0.17a, resulting in a final shape that is reminiscent of a shamrock, c.f. Fig. 2(a). This leads to a reduction of the crystal symmetry compared to conventional photonic [2] and previous OM-crystals [6] (c.f. the Supplementary Material (SM) for more information [44]). A similar structure has been investigated for its photonic properties [45]. The photonic band diagram for modes with TE-like symmetry is shown in Fig. 2(b). The lightly shaded region in the center indicates the in-plane band gap and the dashed line marks the position of the optical transitions within the band gap. The dark shaded region at the sides and the top of Fig. 2(b) indicates the continuum of leaky radiation modes, i.e., modes that are not confined to the membrane. For simulations of the phononic bands (see SM for details) we consider the modes of all symmetries, c.f. Fig. 2(c), taking into account the anisotropy of the elastic constants of gallium arsenide. In the phononic band structure there are no leaky modes, as all modes are confined to the membrane. Thus, contrary to the photonic case, the phononic band diagrams exhibit a complete band gap. This has promising implications for the control of single-phonon-mediated transitions, as they can be completely suppressed within the band gap, where the phononic density of states drops to zero. For our proposal this means that the coupling efficiency of the |2⟩ → |1⟩ transition to the phononic cavity mode is only limited by the probability to decay through a single-phonon process, as opposed to other spin-relaxation processes such as co-tunneling [31] and multi-phonon effects [32,33,46]. This differs from the coupling efficiencies in photonic systems, where in addition to the probability of the emitter decaying through a single-photon process, the coupling efficiency is limited by the coupling to non-guided radiation modes [42]. Nonetheless, completely analogous to optical emitters in photonic crystals, the coupling rate to the target mode can be enhanced by increasing the local density of mechanical states through the use of defect modes.
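To make the hole geometry concrete, here is a small sketch that builds the outline of one shamrock-shaped hole from the parameters quoted above (a, d, A, B, L). It is illustrative only: whether A and B denote full or semi-axis lengths is an assumption made here, and an actual design would be defined in a dedicated layout/FEM tool rather than as point lists.

```python
import math

# Assumed-interpretation sketch of the shamrock hole: three identical ellipses
# rotated by 2*pi/3 with respect to one another, each shifted outward along its
# own major axis, using the parameters quoted in the text.

a = 300e-9        # lattice constant [m]
d = 0.65 * a      # membrane thickness [m]
A = 0.45 * a      # minor axis of each ellipse (treated as a semi-axis here; assumption)
B = 0.60 * a      # major axis of each ellipse (treated as a semi-axis here; assumption)
L = 0.17 * a      # outward shift of each ellipse centre along its major axis

def shamrock_outline(n_points=200):
    """Return three lists of (x, y) points, one per ellipse, centred on the hole."""
    ellipses = []
    for k in range(3):
        theta = 2.0 * math.pi * k / 3.0                      # orientation of this ellipse
        cx, cy = L * math.cos(theta), L * math.sin(theta)    # shifted centre
        pts = []
        for i in range(n_points):
            t = 2.0 * math.pi * i / n_points
            x0, y0 = B * math.cos(t), A * math.sin(t)        # axis-aligned ellipse point
            # rotate by theta and translate to the shifted centre
            x = cx + x0 * math.cos(theta) - y0 * math.sin(theta)
            y = cy + x0 * math.sin(theta) + y0 * math.cos(theta)
            pts.append((x, y))
        ellipses.append(pts)
    return ellipses

outline = shamrock_outline()
print(f"membrane thickness d = {d * 1e9:.0f} nm; "
      f"each lobe tip lies {(L + B) * 1e9:.0f} nm from the hole centre")
```

The union of the three ellipse outlines gives the C3v-symmetric hole whose reduced symmetry (p3m1 rather than p6mm planar space group) is what the supplementary figure (FIG. 4) describes.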
We now demonstrate some of the different types of defect modes employed for the realization of the proposal. In Fig. 3(a) we show a phononic waveguide formed by replacing a row of shamrock holes by circular holes. This waveguide exhibits a gap at the relevant photonic frequencies, and corresponds to the waveguide at the top of Fig. 1(b). It allows introducing a large κe,m, by bringing it close to the cavity, while leaving the optical cavity modes largely unaffected. A different waveguide design is obtained by removing a row of shamrock holes and scaling the spacing between the remaining adjacent rows of holes to √3aW. The modes for a waveguide width of W = 0.58 are shown in Fig. 3(b). This waveguide supports both a photonic and a phononic mode with band edges close to the respective transition frequencies of the emitter. The band edges of both modes shift up in frequency when reducing the width of the waveguide, eventually leading to a dual band gap at the frequencies relevant for the emitter for W = 0.52. This makes the waveguide in Fig. 3(b) well suited for the design of hetero-structure cavities that simultaneously support modes at the desired photonic and phononic frequencies [6,8], which corresponds to the OM-cavity drawn in Fig. 1(b). One realization of such a cavity and the mode profiles of the optical and the mechanical modes are shown in Fig. 3(c) and (d). FIG. 3 (caption fragment). e) The three opto-mechanical circuit elements needed to implement the schematic in Fig. 1(b). The QD is positioned in the OM-cavity, coupling to the two modes shown in c) and d). In Fig. 3(e) the circuit elements needed to assemble the circuit sketched in Fig. 1(b) are collected. The two types of waveguides discussed suffice to realize the OM-circuit sketched in Fig. 1(b) and (c) by ensuring that the cavity's dominant mechanical loss rate, κe,m, is the coupling to the phononic waveguide in Fig. 3(a). However, similar design ideas have been used to realize photonic waveguide defects in the same OM-crystal (not shown). Thus, this new type of OM-crystal is a versatile platform for on-chip opto-mechanics. In this work we have demonstrated a scheme for single-phonon generation in an opto-mechanical crystal. The scheme is based on a QD embedded in a GaAs membrane nanostructure whose periodic properties lead to simultaneous photonic and phononic band gaps, allowing control of optical and acoustical interaction processes in the emitter. We have designed waveguides and cavities for both photons and phonons by introducing different types of crystal defects. Throughout the work the focus has been on designing structures that can be experimentally realized within the current scope of gallium arsenide nano-fabrication. Finally we note that other solid-state systems, such as silicon-vacancy centers in diamond, appear to have spin-coherence times limited by single-phonon-mediated relaxation processes [47,48]. In addition to enhancing the spin coherence of such a system, it also becomes possible for the phonons to act as a coherent on-chip quantum bus, which can couple several emitters [17] or act as a transducer from the optical to the microwave regime [49,50]. When exploiting the type of three-level system discussed here, the resulting photon-phonon cascade can be used to implement the DLCZ protocol (Duan, Lukin, Cirac, and Zoller) [51]. Other possible applications include the creation of vibration amplification by stimulated emission of radiation [52]. We would like to thank Alisa Javadi, Anders Sørensen, Richard Warburton, Petru Tighineanu, Sahand Mahmoodian, and Tommaso Pregnolato for helpful discussions. We gratefully acknowledge financial support from the Danish Council for Independent Research (Natural Sciences and Technology and Production Sciences) and the European Research Council (ERC Consolidator Grant - ALLQUANTUM). FIG. 4. The symmetry of the shamrock crystal. a) Three solid blue lines indicate the mirror planes of the C3v symmetry of the shamrock hole. The red rhombus indicates the primitive cell of the crystal. The particular orientation between the mirror planes and the vectors forming the primitive basis leads to a crystal structure that belongs to the p3m1 planar space group. This is distinctly different from the p6mm planar space group, which can be obtained for holes that have at least as high a degree of symmetry as the hexagonal lattice.
b) In general both the photonic and the phononic eigenmodes inherit the symmetries of the crystal structure, and hence the irreducible Brillouin zone can be found from the planar space group of the structure. Here the irreducible Brillouin zone for p3m1 is shown [4]. However, this planar space group does not include the inversion operator, while the optical and the mechanical modes must both be eigenstates of the inversion operator due to time-reversal symmetry (ω(k) = ω(−k)) [5,6], leading to a smaller effective irreducible Brillouin zone. c) The effective irreducible Brillouin zone for the optical and mechanical modes in a crystal belonging to the p3m1 planar space group.
2016-02-02T16:39:36.000Z
2016-02-02T00:00:00.000
{ "year": 2016, "sha1": "eeb81ec24997ad60df8ba45a4bed39bc755249aa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1602.01005", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "20ddd0ac80816133464ca2ce737dda1c78a1ceeb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
6095742
pes2o/s2orc
v3-fos-license
Arthroscopic fixation with a minimally invasive axillary approach for latissimus dorsi transfer using an endobutton in massive and irreparable postero‐superior cuff tears Arthroscopically assisted latissimus dorsi transfer is a viable option for treatment of patients in their 50s to 70s, without arthritis of the glenohumeral joint, who suffer from massive rotator cuff tears that are not amenable to primary repair due to fatty changes in the muscle tissue, or that have failed previous repair attempts. This procedure offers immediate and dramatic pain relief and is not as technically demanding as one might think. Understanding and respecting the principles of tendon transfer is a key to the success of this procedure. INTRODUCTION This article describes the evolution of our previously reported technique for the arthroscopically assisted latissimus dorsi (LD) transfer and fixation with an interference screw. [1] Over the years, transfer of the LD has been used as a salvage procedure for massive irreparable rotator cuff tears. The technique was first described by Christian Gerber in his 1988 article. [2] Since then, many alterations to the original technique have been used, mainly altering the anchorage technique with the use of suture anchors, bone tunnels, interference screws or a combination of techniques. The LD was regarded as a large vascularized tendon that can be used to close a massive cuff defect and that exerts an external rotation and head-depressing moment. [2,[3][4][5][6] The transfer of the LD tendon, to our knowledge, was never regarded as a tendon transfer per se and, therefore, less attention was given to the principles of tendon transfer. This could account for inconsistent outcomes. Five important principles of tendon transfer are:
1. The point of fixation - precise location of the point of fixation of the transferred tendon
2. The muscle tension - the tension acquired at the end of the operation should resemble the original tension within the transferred muscle
3. Tendon-to-bone contact - fixation to the bone at the site of insertion should provide as much contact as possible between the transferred tendon and the surrounding bone
4. Minimal exposure and tissue damage
5. The use of a muscle that acts as an agonist to the one it is replacing.
In our previous publication, we described an arthroscopically assisted LD transfer. [1] For fixation, we used either titanium or absorbable interference screws (IFS) (Stryker Interference Screw, www.stryker.com). Using the IFS technique has two shortcomings. First, we had to drill the tunnel for the tendon in a cranial-to-caudal direction. This puts the acromion in the way, especially when the patient has a large lateral acromion extension, as described by Nyffeler et al. [7] In these cases, even the use of the Neviaser portal does not offer a clear path to the humeral head at the intended insertion point. Second, as the IFS has a diameter of 7-8 mm, mal-positioning of the drill hole laterally resulted in a stress riser that led to the fracture of the lateral cortex and failure of fixation in two cases. These patients required revision surgery in which the tendon was re-attached using suture anchors. Drilling the tunnel in a caudal-to-cranial direction addressed the first issue, as the acromion was now out of the way. We still had to deal with the diameter of the tunnel required for our fixation device.
In our exploration for a safer way to anchor the transferred tendon to the humerus, we thought of using an endobutton against the anterolateral cortex of the humerus, on the lateral aspect of the bicipital groove. This technique has three obvious advantages over former techniques:
1. The tunnel drilled through the humerus should match the tubularized tendon diameter. This ensures that the transferred tendon is in full contact with the surrounding bone tunnel.
2. Using an endobutton allows us to precisely adjust the tension of the tendon prior to fixation.
3. A metallic button laid against the anterolateral cortex of the humerus is a strong fixation technique relying on cortical bone, which is inherently stronger than cancellous bone.
SURGICAL TECHNIQUE The patient is placed in the standard lateral decubitus position with 3 kg of traction applied to the arm using weights and a pulley system. A 5 cm incision (4-7 cm depending on patient anatomy) is made at the anterior (axillary) border of the scapula, 5 cm cranial to the scapular apex [Figure 1]. The incision can be easily extended caudally or cranially as needed. Once the incision is made, the first visible muscle is the LD. The redundancy of skin around the axilla is used to extend the dissection subcutaneously. The next step is the identification and mobilization of the neurovascular pedicle of the LD, which enters the muscle mid-belly from its medial surface [Figure 2]. Thorough mobilization is required to prevent overstretching this structure while the transfer is performed and tension is placed over the muscle-tendon flap. A healed but clinically and electrically non-functional transfer, as described by Codsi et al., [8] can be a result of insufficient release around the pedicle of the LD. Once the pedicle is mobile, the LD muscle belly is separated from the muscle belly of the Teres major (TM). Beck and Hoffer performed a cadaveric study and described the tendons of the LD and the TM as a conjoined tendon. [9] Goldberg found fascial connections between the muscle bellies of the two muscles in all the specimens of his cadaveric study. [10] In our experience, a fatty interface is present between the two muscles in about 50% of cases and can be used for separating the fascial connections. Care should be taken to thoroughly release these fascial connections between the LD muscle belly and that of the TM. Once the pedicle has been released and the muscle freed from its surrounding structures, the tendon should be carefully followed distally to its humeral insertion. Effort should be made to achieve a graft that is as long as possible. The release of the tendon should be done at the tendon-bone transition zone. This allows for up to 2 cm of additional tendon tissue. Prior to sectioning of the tendon, we place two marker sutures on the muscle belly at a distance of 3 cm from one another while the arm is in an abducted and externally rotated position. This position puts the LD muscle at maximum tension. These marker sutures are used later on to restore tension to the muscle belly when the tendon is being fixed. Restoring the correct tension of the muscle belly is important for muscle activity, but we believe that restoring the correct tension on the muscle-tendon unit and on the neurovascular pedicle is crucial in maintaining a viable functional flap. The tenotomy is performed in a cranial-to-caudal manner. Due to the close proximity of the deep axillary vessels medially, this should be done with extreme care.
The tendon of the LD is a flat, thin structure. When used to close a massive cuff defect this is advantageous, but our technique requires a structure that is tubular. We therefore tubularize the tendon using two sutures (Arthrex Fiberloop No. 2, Arthrex). We normally achieve a tubular tendon measuring about 7 mm in diameter and 7 cm in length [Figure 3]. This is consistent with the publication by Goldberg et al. [10] At this stage, we further release the muscle belly bluntly, all the way to the apex of the scapula, while maintaining traction on the now tubularized tendon. The goal of the release is reached when one's finger can easily engulf the muscle belly circumferentially. The next step comprises the preparation of the route for the transferred tendon. Creating a tunnel that allows for the shortest route for the transfer is preferable because it preserves the muscle-tendon length. This step begins with an arthroscopic preparation of the greater tuberosity via the standard portals (posterior, anterolateral and lateral portals for visualization and instrumentation). Next, the scope is moved posteriorly over the Teres minor in a caudal direction. The plane between the Teres minor, when intact, and the deltoid is developed using electrocoagulation and an alternating shaver blade. Once possible, the suture manipulator is passed through this newly created space and retrieved through the axillary incision. The four suture limbs of the tubularized tendon are retrieved and passed through the tunnel, and the tendon is carefully pulled while the index finger helps in creating the passage. The diameter of the tubular tendon is measured using a standard metallic diameter gauge. The next step is the drilling of the bone tunnel. We introduce a specially designed two-part guide, where the proximal part is placed over the humeral head and guided to the desired position of insertion under vision using the scope. The distal part is then placed against the cortex of the humerus at the desired exit point of the tunnel. The two parts of the guide are then attached and secured. This device allows us to precisely choose our entry and exit points and to hold the position while drilling and passing the shuttle guide [Figure 4]. A shuttle guide is then placed in a caudal-to-cranial direction. A drill with the same diameter as the tendon graft is introduced over the shuttle guide and drilling is performed in the same direction (caudal to cranial). The sutures are then passed with the aid of the shuttle guide from cranial to caudal and retrieved outside the skin, where the button is attached [Figure 5]. While maintaining the precise tension of the muscle-tendon unit using the marker sutures placed before sectioning of the tendon, the button is lowered gradually until it lies flush against the bone cortex. The sutures are then tied with the aid of a knot pusher. Dynamic function of the graft is assessed with great care. DISCUSSION With the advancement in treatment options for repairable rotator cuff (RC) tears and for RC arthropathy, we still face a problem when we look for treatment for an ever-growing group of patients in their 50s to 70s, without arthritis of the glenohumeral joint, who suffer from massive rotator cuff tears that are not amenable to primary repair due to fatty changes in the muscle tissue, or that have failed previous repair attempts. These people pursue an active lifestyle and, many times, have good passive (and sometimes active) range of motion, but are in constant pain.
We believe that a transfer of the LD has much to offer this specific group of patients. We perform this surgery with good results on patients who have not yet been operated on, as well as on patients who have failed previous attempts at rotator cuff repair. We manage to respect four out of the five principles of tendon transfer. Our arthroscopic technique is much less invasive than the classic LD transfer [2] or the single-incision technique. [4] It is relatively easy to perform and, in our experience, offers immediate and dramatic pain relief. The use of an arthroscope and a specially designed guide allows us to accurately choose our insertion point into the humeral head, with minimal exposure and damage to surrounding tissue [Figure 4]. The tunnel is drilled at the same diameter as the tubularized tendon, which allows for a large surface area of contact between the bone and the tendon. This increases the chance for good healing of the tendon into the insertion point. The use of an endobutton as a fixation device allows us to have more precise control over the tension of the transferred muscle-tendon unit. Once the desired tension is recreated and maintained, the button is locked against the lateral cortex of the bicipital groove, a strong cortical structure [Figure 5]. Being an antagonist to external rotation, the LD is not an agonist muscle and the 5th principle is therefore not respected. Our experience is showing promise after 1 year, but longer follow-up is needed as we document improvement in results as late as 3 years after surgery.
2018-04-03T03:30:48.198Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "7710f26ab12d4d8bad11d39c5adce2f1bf0b46e8", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc3743035", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "fb1eaa684b4593a58521f417d127f23f5a3b155c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
262076224
pes2o/s2orc
v3-fos-license
Amandemen Journal of Learning, Teaching and Educational Studies. The Influence Of Science Dichotomy On Islamic Religious Education Curriculum (Wahyu Aji, Ziyah, Mahwiyah). Islamic religious education is defined as a conscious effort to prepare students to believe, understand, live, and practice the Islamic religion through activities, guidance, teaching, and training, taking into account the demands to respect other religions and relations between religious communities in society in order to realize national unity. Integrated Islamic Education is an alternative for overcoming the educational dichotomy. However, this Integrated Islamic education can be carried out only on the condition that the two existing education systems in Muslim countries can be merged into one system, as long as the philosophical basis remains Islamic. The education system that is still ambivalent reflects a dichotomous view that separates the religious sciences from the general sciences. This view is clearly contrary to the teachings of Islam itself. Islam has an integralistic teaching which holds that the affairs of the world are inseparable from the affairs of the hereafter; rather, they are one unit. Therefore, the general sciences must be understood as an integral part of the religious sciences. [9] Islam does not forbid us from studying the general sciences. For the needs of our life in this world, we also have to learn and know them, and then apply them in everyday life, with the aim of helping us live in this world in a way that will lead to life in the hereafter. With the existence of an educational dichotomy, there is an impact of disintegration of the education system, namely incoherence and uncertainty in the relationship between general education and religious education. The two sciences are not judged as equals; it is more likely that one of them will become the main goal of an educational institution in carrying out the learning process, so that the two sciences cannot go hand in hand and become a unified whole. The views and attitudes of scholars at the time of the Prophet Muhammad SAW, which positioned the sciences in a parallel manner, led to the exploration of knowledge other than religious knowledge beginning to be carried out, although at a very modest level. Even the Prophet Muhammad never taught his faithful and pious followers to stay away from the world, which is a medium for achieving the perfection of life. These values were evident when Islam was born in the first half of the 7th century AD: the Arabs were surrounded by nations that had high and magnificent cultures, such as the Persians, Romans, Greeks and Indians, which had an influence on the development of Islamic religious knowledge. RESULTS AND DISCUSSION Islamic Religious Education Religious education, linguistically, is tarbiyah Islamiyah. In terminology there are several definitions of Islamic education, including: Islamic Religious Education is a conscious and planned effort to prepare students to know, understand, and live up to faith and piety, and to have noble character in practicing the teachings of Islam from its main sources, the holy book Al Quran and the Hadith, through the activities of guidance, teaching, training, and the use of experience.
Islamic education is an effort directed toward the formation of a child's personality in accordance with Islamic teachings, or an effort, guided by Islamic teachings, to think, formulate and act based on Islamic values, and to be responsible in accordance with Islamic values. [5] From this view, it can be said that Islamic education is not just a transfer of knowledge but rather a system that is laid out on a foundation of faith and piety, namely a system that is directly related to God. The goals of Islamic education must be in sync with the goals of Islam, namely trying to educate individual believers to be submissive and pious and to worship Allah well, so as to obtain happiness in this world and the hereafter. The purpose of Islamic education is the desired change that is sought in the educational process or in educational efforts to convey it, both in individual behavior, in personal or community life, and in the environment in which the individual lives, as well as in the educational and teaching process itself as a basic activity and as a profession among the basic professions in society. [6] In the future, the purpose of education (school institutions) must be addressed, so that people will no longer think that the goal of education is merely finding a job after graduation. Islamic Education Curriculum The curriculum is a plan that is structured to expedite the teaching and learning process under the guidance and responsibility of the school, or a set of lesson limits used by educational institutions to achieve certain goals at the end of each lesson, or lesson limits given to students at a specified mark or level. Education, as a venue for transferring, preserving and developing culture, has five fundamental factors, namely educators, students, methods, curriculum and evaluation. These five factors constitute a system that is interrelated with one another. Even so, there is one factor that is the most dominant of the five, namely the curriculum, because the curriculum determines the direction of the goals of education itself. The Islamic Education Curriculum consists of Islamic education materials in the form of activities, knowledge and experiences that are deliberately and systematically given to students in order to achieve the goals of Islamic education. The curriculum is also an activity that includes various detailed student activity plans in the form of educational materials, suggestions for teaching and learning strategies, program arrangements so that they can be implemented, and matters that include activities aimed at achieving the desired goals. Through the basic concepts of the curriculum, "curriculum theory" can be developed. General Education Concept SK Mendiknas No. 008-E/U/1975 states that public education is education that is general in nature, which must be followed by all students and includes a Pancasila moral education program that functions for the development of good citizens. Public education has several objectives:
a. Familiarize students to think objectively, critically and openly
b. Provide views on various types of life values, such as truth, beauty and goodness
c. Become a human who is aware of himself, as a creature, as a human being, as a man or a woman, and as a citizen
d. Able to face their duties, not only because they master their profession, but because they are able to provide guidance and good social relations in their environment
General education is primary and secondary education which prioritizes the expansion of knowledge needed by students to continue their education at a higher level. Its forms are: Elementary School (SD), Junior High School (SMP), and High School. The National Education System Law No. 20 of 2003, Chapter II Article 3, says that national education functions to develop capabilities and form dignified national character and civilization in the framework of educating the nation's life. Judging from the function of general education, humans have innate potentials, and it is through education that a person's potential can be explored. One's ability will not be seen without education. The phrase 'forming character' above means that humans are created in a state of nature; therefore, education is the formation of character and of individual character attitudes. 'Educating the nation's life' here means that the government is trying to overcome the large number of illiterate people, so that when all people receive education the life of the nation will run well. Implication of Islamic and General Education In general, analyzing and evaluating the logical implications of something for something else means looking at the circumstances before and after it happened. Religious education through madrasas, religious institutes and Islamic boarding schools is managed by the Ministry of Religion, while general education through elementary, secondary and vocational schools as well as public tertiary institutions is managed by the Ministry of National Education. Islamic education does not merely teach Islamic knowledge theoretically, which would only produce Islamologists; rather, Islamic education also emphasizes the formation of Islamic attitudes and behavior, in other words, forming Islamic human beings. The following are the implications of the educational dichotomy: 1. The emergence of an ambivalent orientation of Islamic education; 2. The gap between the Islamic education system and Islamic teachings. Solutions in Handling the Education Dichotomy in Indonesia Integrated Islamic Education is an alternative for overcoming the educational dichotomy. However, this Integrated Islamic education can be carried out only on the condition that the two existing education systems in Muslim countries can be merged into one system, as long as the philosophical basis remains Islamic. The style of integrated Islamic education is the integration or combination of the various existing education systems, without the dichotomy of religious knowledge and general science, so that it can give birth to an education system that is inspired by Islam. Islam has never recognized a dichotomy of science and religion. Science and religion are an integral totality that cannot be separated from one another. Indeed, it is Allah who created reason for humans to study and analyze what is in nature as a lesson and guidance for humans in carrying out their lives in this world. The description above illustrates that religious knowledge and general science are an integral whole that cannot be separated from one another in carrying out the activities of daily life.
Both of these sciences must be possessed integrally, so that the human functions as abid and caliph can be carried out optimally. To create an integrated education system that is able to accommodate all the potential of students as a whole, so as to produce perfect human beings, it is necessary to have harmonious integration in all components of education. Curriculum Integration The curriculum is the essence of education; it includes the formulation of objectives and of the contents of learning activities, as well as the preparation of students, covering the skills, knowledge, attitudes and various values needed to carry out work assignments in the future. The curriculum is the basis for developing professional abilities and personality, which determines the quality of human resources and society in a country. The Islamic education curriculum in Indonesia should provide a comprehensive, critical space for teaching religious values. In the history of the growth of Islamic thought, at least three main classifications can be identified, namely fuqaha-style Islam, mutakallimun-style Islam, and mutasawwifin-style Islam. These three classifications continue to grow over time, giving birth to various schools of thought. Teaching Islamic thought in a way that is neither critical nor comprehensive may give rise to a partial understanding of Islam. Partial knowledge encourages the younger generation, or Muslims in general, to become trapped in the 'horse's perspective'; in other words, Islam is understood superficially, and this emotionally encourages followers to act subjectively. The religious sciences and the general sciences can be integrated into the contents of the curriculum material. The integration of the religious sciences and general science in an integrated curriculum can be done quantitatively and qualitatively. Quantitatively, it means that the portions of general education and religious education are given in a balanced manner. Qualitatively, it means that general education is enriched with religious values and religious education is enriched with content from general education.
2023-09-21T15:01:57.024Z
2023-04-19T00:00:00.000
{ "year": 2023, "sha1": "54955b837b0fb7602679a71fa29c4ddaef8d0825", "oa_license": "CCBY", "oa_url": "https://amandemen.my.id/index.php/i/article/download/2/2", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "59482c9884149195b6b5b83dca0f19d2f48111b8", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
208300629
pes2o/s2orc
v3-fos-license
Overcoming Ibrutinib Resistance in Chronic Lymphocytic Leukemia Ibrutinib is the first Bruton’s tyrosine kinase (BTK) inhibitor, which showed significant clinical activity in chronic lymphocytic leukemia (CLL) and small lymphocytic lymphoma (SLL) patients regardless of cytogenetic risk factors. Recent results of phase III clinical trials in treatment-naïve CLL patients shift the importance of the agent to frontline therapy. Nevertheless, beside its clinical efficacy, ibrutinib possesses some off-target activity resulting in ibrutinib-characteristic adverse events including bleeding diathesis and arrhythmias. Furthermore, acquired and primary resistance to the drug have been described. As the use of ibrutinib in clinical practice increases, the problem of resistance is becoming apparent, and new methods of overcoming this clinical problem arise. In this review, we summarize the mechanisms of BTK inhibitors’ resistance and discuss the post-ibrutinib treatment options. Introduction Chronic lymphocytic leukemia (CLL) is an incurable clonal proliferation of small CD5/CD19-positive lymphocytes accumulating in blood, bone marrow and lymphoid tissues accounting for approximately 25% of all leukemias in Europe and North America [1,2]. The median age of diagnosis is 70 years, and the disease is incurable in majority of cases [3,4]. The disease has a heterogenous clinical course, and numerous studies aimed at identifying novel prognostic and predictive factors are being intensively performed. Although immunochemotherapy tailored to patients' fitness has so far remained the backbone of frontline CLL treatment in the majority of patients, the development of novel selective compounds targeting the B cell receptor (BCR), along with increasing knowledge of predictive factors, have improved patients' prognosis [3,4]. Next generation sequencing (NGS) identified a number of mutations in genes related to regulation of key cellular processes, e.g., response to DNA damage and cell cycle control (ATM, TP53, RB1, BIRC3), RNA processing (SF3B1), Notch signaling (NOTCH1, NOTCH2, FBXW7) and cytokine signaling (NRAS, KRAS, BRAF, MYD88, DDX3X, MAPK1) that modify CLL's clinical course [5][6][7][8][9]. Although some of the above genetic changes were identified to yield prognostic impact, currently only the negative prognostic status of the p53 pathway aberrations is reflected in clinical guidelines [3,4]. Patients characterized by p53 protein defects (short arm of chromosome 17 deletion [del(17p)] or TP53 gene mutation) are refractory or achieve only transient responses to anti-CD20 antibody based immunochemotherapy [10,11]. Furthermore, transformation to poor prognostically aggressive diffuse large B-cell lymphoma (DLBCL) also occurs in up to 10% of cases [12]. The outcome of CLL patients who are refractory or relapsed to immunochemotherapy changed with the development of novel agents inhibiting BCR signaling, e.g., Bruton's tyrosine kinase (BTK) inhibitor ibrutinib and the phosphoinositide 3-kinase (PI3K) delta inhibitor idelalisib [4,13,14]. Both compounds presented remarkably high activity in CLL, including patients with p53 dysfunction [13][14][15]. Significant clinical efficacy of ibrutinib along with good tolerability, also in comorbid patients, were reported for both relapsed/refractory (RR-CLL) and treatment-naïve CLL (TN-CLL) [16,17]. 
Considering the widespread use of ibrutinib and other BTK inhibitors (BTKi) in current clinical practice, in this work we discuss the mechanism of action of BCR and ibrutinib in normal and pathological cells, and the adverse event profile of the drug. Furthermore, we present the most important findings regarding the resistance mechanisms to ibrutinib, reasons of therapy discontinuation, and put special emphasis on potential strategies and alternative compounds with the potential to overcome these clinical issues. B Cell Receptor Signaling in Normal and Pathological Cells The cellular origin of B-cell lymphomas has been extensively studied over past 15 years. Early studies using gene expression profiling showed that malignant B cells originate from normal B-cells at a different stage of maturation [18][19][20][21]. Every normal B cell, and consequently every lymphoma cell, has a unique BCR consisting of pairs of immunoglobulin heavy (IgH) and light (IgL) chains. Each IgH and IgL has a unique variable (V) region that allows the BCR to bind to diverse antigens. The antibody portion of BCR is coupled on cell membranes with CD79A and CD79B subunits which mediate signal transductions [21]. In normal and lymphoma B cells, there are two modes of signaling involving the BCR: the antigen-independent "tonic" signaling and antigen-dependent "active" BCR signaling. Tonic BCR signaling was defined by the observation that the conditional ablation of surface BCR expression in mouse B-cells results in the eventual loss of all naive mature B-cells [22,23]. Tonic BCR signaling requires the immunoreceptor tyrosine-based activation motif (ITAM) portion of CD79A, but may not require the extracellular portions of IgM, suggesting that this mode of BCR signaling is antigen-independent [23,24]. A constitutively active form of the PI3K was able to rescue the survival of mouse B-cells in which the BCR was genetically ablated, suggesting a key role for PI3K in delivering survival signaling during tonic BCR signaling [25]. In contrast, active BCR signaling occurs subsequent to BCR aggregation, allowing SRC family kinases to phosphorylate CD79A, CD79B and spleen tyrosine kinase (SYK), which, in turn, activates BTK, PI3Kδ and the phospholipase C gamma 2 (PLCγ2). Unlike tonic BCR signaling, active BCR signaling engages many pathways and transcriptional networks that include the PI3K, mitogen-activated protein kinase (MAPK), nuclear factor of activated T cells (NFAT), RAS pathways and CARD11-mediated activation of NF-κB. Increased activity of NF-κB is characteristic of this mode of BCR signaling, which promotes proliferation and survival of normal and malignant B-cells [21,26]. Microscopic examination of the BCR on the surface of activated B cell type diffuse large B-cell lymphoma (ABC-DLBCL) cell lines and primary tumor cells revealed a consistent pattern of BCR clustering reminiscent of BCR clusters observed in antigen-stimulated normal B cells [26,27]. Moreover, it was shown that in approximately 30% of patients with CLL, BCRs have specific, almost identical structures that maybe classified into distinct subsets (defined as 'stereotyped BCRs') on the basis of shared sequence motifs within the IGHV genes (that is, IGHV-IGHD-IGHJ gene rearrangement sequences) [28]. The reactivity of BCRs to autoantigens exposed on apoptotic cells has been reported for CLL and ABC-DLBCL [29,30]. 
By expressing CLL-derived or lymphoma-derived BCRs in cell lines, investigators demonstrated that malignant BCRs bound self-antigens, which included structural elements within a subdivision of the immunoglobulin heavy chain V region known as the framework region (FR), triggering proliferation and survival signals in a cell-autonomous fashion [31,32]. Aside from autoantigens, it was shown that BCRs on CLL cells also respond to foreign antigens of bacteria and fungi which, as shown in mouse models, may stimulate CLL pathogenesis due to induced cross-reactivity with autoantigens [33][34][35]. Detailed understandings of the fundamental pathogenetic role of BCR signaling in B-cell lymphomas have led to the development of clinical modulators of this pathway, e.g., BTK, PI3K and SYK inhibitors (Figure 1). Bruton's tyrosine kinase is a Tec protein tyrosine kinase (TEC)-family nonreceptor tyrosine kinase that signals downstream of numerous cellular receptors, including the BCR, toll-like receptors (TLR) and Fc receptors [36]. BTK transduces signaling downstream of the BCR and activates PLCγ2, which catalyzes the cleavage of membrane phosphatidyl-inositol 4,5 bisphosphate (PIP 2 ) into inositol triphosphate (IP 3 ) and diacylglycerol (DAG). This mobilizes calcium and activates protein kinase C beta (PKCβ) and downstream proteins. In addition, the BCR co-receptor transmembrane protein CD19 is phosphorylated by LYN during BCR signaling. This recruits PIK3 to BCR with subsequent phosphorylation of PIP 2 to generate phosphatidylinositol-3,4,5-trisphosphate (PIP 3 ). Collectively, these signaling pathways induce activation of the NF-κB, AKT, RAS, mitogen-activated protein kinase and nuclear factor of activated T cell pathways, resulting in B-cell cellular changes including cell survival, proliferation, adhesion, migration and homing [37][38][39][40]. Recently, Phelan et al. discovered a new mode of oncogenic BCR signaling in ibrutinib-responsive cell lines and DLBCL biopsies, coordinated by a multiprotein super complex formed by MYD88, TLR9 and BCR (My-T-BCR super complex). The My-T-BCR super complex co-localizes with the CBM complex, phosphorylated IκBα and mTOR on endolysosomes, where it drives pro-survival NF-κB and mTOR signaling. The My-T-BCR super complex was reduced by ibrutinib, but was further attenuated by the addition of AZD2014, an mTOR inhibitor. This dual mTOR and BTK inhibition cooperatively decreased MYD88 protein abundance and mTOR activity, providing mechanistic insights into synergism between BTK and mTOR or PI3K inhibitors. Additionally, My-T-BCR super complexes characterized ibrutinib-responsive malignancies and distinguished ibrutinib responders from non-responders [41]. Although My-T-BCR super complexes were not evident in CLL [41], these data shed more light on the inhibiting properties and resistance mechanisms of ibrutinib that should be investigated in near future. Since BTK transduces constitutive signaling downstream of the BCR in many B cell malignancies, the protein has long been considered an attractive therapy target. Ibrutinib, a covalent BTKi has been approved for use in patients with CLL, Waldenström macroglobulinemia (MW), mantle cell lymphoma (MCL) and marginal zone lymphoma (MZL) [42]. Ibrutinib Mechanism of Action in CLL In CLL cells, BTK is constitutively active and expressed at higher levels as compared to normal B cells warranting their proliferation and survival [43,44]. 
Ibrutinib (PCI-32765) is an orally bioavailable, irreversible BTK inhibitor that binds covalently to the sulfhydryl group of the C481 residue of the active form of BTK, thereby blocking further signal transduction and leading to loss of proliferation and activation and to the induction of apoptosis [43,44]. Furthermore, ibrutinib diminishes the interactions of CLL cells with surrounding microenvironmental cells, even in the presence of tumor-promoting cytokines such as B cell activating factor (BAFF), tumor necrosis factor alpha (TNFα) or CD40 ligand [43,44]. BTK also acts downstream of chemokine receptors (CXCR4, CXCR5), and the administration of ibrutinib therefore leads to diminished cell adhesion, promoting the migration of CLL cells into the peripheral blood with a concomitant reduction of lymphatic organ infiltration [45,46]. CLL cells deprived of the protective microenvironmental signals of lymphatic nurse-like cells undergo apoptosis; however, the rate is modest, and persistent lymphocytosis may be observed in CLL patients under ibrutinib therapy [47]. Ibrutinib is relatively selective towards BTK; however, the compound has some off-target activity, such as the inhibition of several other tyrosine kinases and their pathways, e.g., epidermal growth factor receptor (EGFR), interleukin-2-inducible T cell kinase (ITK), T cell X chromosome kinase (TXK) and PI3K, contributing to its efficacy on the one hand, but potentially also to a specific toxicity profile, as discussed below, on the other [48]. Clinical Activity of Ibrutinib In the phase I study (#NCT00849654), ibrutinib monotherapy showed a 69% overall response rate (ORR), with 16% of patients achieving a complete response (CR) [49]. In the multicenter phase Ib-II trial (#NCT01105247), 85 RR-CLL patients received ibrutinib monotherapy at either 420 mg daily (n = 51) or 840 mg daily (n = 34), leading to a 71% ORR, with only two patients showing CR in the 420 mg cohort [50]. With no significant differences in treatment efficacy and tolerability, the 420 mg daily dose was selected for further clinical trials. The RESONATE trial (#NCT01578707) was the first to demonstrate the superiority of ibrutinib monotherapy over monotherapy with ofatumumab (an anti-CD20 antibody) with respect to ORR (63% vs. 4.1%), progression-free survival (PFS) and overall survival (OS) [13]. Of note, ibrutinib was capable of diminishing the negative prognostic impact of 17p deletion. The recently published update of the study showed that ibrutinib may be safely administered for longer periods of time [51]. Importantly, the ORR to ibrutinib increased over time, with 91% of patients obtaining a response at a median duration of ibrutinib treatment of 41 months. At data cut-off, 46% of patients continued treatment at a median follow-up of 44 months. Analysis of prognostic factors showed that ibrutinib activity was not influenced by baseline risk factors; however, patients with more than two prior therapies or with TP53 and SF3B1 mutations had a trend towards shorter PFS [51]. Ibrutinib monotherapy showed even better results in TN-CLL patients. The randomized phase III trial RESONATE II (#NCT01722487) revealed its superiority over chlorambucil monotherapy in terms of ORR, PFS and OS [16]. Although both the RESONATE and RESONATE II trials confirmed the remarkable clinical efficacy of ibrutinib monotherapy, these trials were criticized for the agents used in the comparator arms, as these were not up to date with the immunotherapy regimens recommended by clinical guidelines [17,[52][53][54]. 
The results of recently published phase III clinical trials showed that ibrutinib-based regimens produced a greater clinical benefit than the currently recommended first-line immunochemotherapy regimens [17,53,54]. In the E1912 trial (#NCT02048813), the ibrutinib-rituximab regimen was characterized by longer PFS and OS than those observed with the FCR regimen in TN-CLL patients 70 years of age or younger [53]. Ibrutinib alone or combined with rituximab was also superior in terms of ORR and PFS to the BR regimen in TN-CLL patients aged 65 years or more (#NCT01886872) [17]. Furthermore, in their study, Woyach et al. found that the addition of rituximab to ibrutinib did not influence patient outcome [17]. Ibrutinib combined with obinutuzumab was also superior in terms of ORR and PFS compared to an obinutuzumab-chlorambucil regimen in TN-CLL patients either aged 65 years or older, or younger than 65 years with coexisting medical conditions (#NCT02264574) [54]. Of note, the latter two trials of Woyach et al. and Moreno et al., due to a possible crossover to ibrutinib-based regimens, were not capable of demonstrating differences in OS between cohorts [17,54]. Although ibrutinib treatment was well tolerated, some unexpected adverse events, probably due to off-target activity, have also been reported and included mainly atrial fibrillation and hemorrhagic complications [4,13,[15][16][17]51,[53][54][55][56][57][58][59][60][61][62]. Atrial fibrillation occurs in approximately 6-16% of patients, predominantly in the first 6 months of treatment, and is mostly grade 1-2 [63,64]. Discontinuation of ibrutinib therapy does not appear to affect the resolution of the event; however, if worsening to grade 3 or higher occurs, ibrutinib should be temporarily withheld until the adverse rhythm is resolved [65]. It should be noted that potentially lethal ventricular arrhythmias have also been noted during ibrutinib treatment and could contribute to sudden unexpected deaths [66]. The exact mechanism of atrial fibrillation under ibrutinib therapy remains unknown; however, off-target inhibition of the cardiac PI3K isoform and alteration of late cardiac sodium currents have been proposed [67,68]. The increased risk of bleeding is linked with acquired platelet dysfunction and is manifested particularly as skin-mucosal hemorrhagic diathesis (predominantly petechiae and ecchymoses), affecting approximately 20-40% of patients [16,51,56]. The exact mechanism of this particular toxicity remains unclear; however, BTK has been shown to be involved in signaling downstream of the platelet glycoproteins GPIb (via von Willebrand factor) and GPVI (via collagen) [69,70]. Major bleeding events were estimated to occur in 1-4% of patients; however, a recently performed analysis of 1768 patients treated with ibrutinib revealed a similar risk of major bleeding for non-ibrutinib-treated patients when the data were adjusted for exposure time to the drug (3.2 vs. 3.1 events per 1000 person-months) [71]. Based on clinical reports, diarrhea is the most frequently noted mild adverse event during ibrutinib treatment. Its pathogenesis may be multifactorial, but given that diarrhea is also frequently observed in patients treated with EGFR inhibitors, off-target inhibition of this kinase by ibrutinib may be a contributing factor in the etiology of this adverse event [72]. 
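The bleeding-risk comparison quoted above relies on exposure-adjusted incidence rates rather than raw percentages, so that cohorts with different follow-up times can be compared on an equal footing. A minimal illustrative sketch of that calculation is given below; the event counts and exposure times are made-up placeholders and are not the data behind the figures reported in [71].

```python
# Exposure-adjusted incidence rate: events per 1000 person-months.
# Illustrative numbers only -- NOT the actual counts behind the 3.2 vs. 3.1 figures.

def rate_per_1000_person_months(n_events: int, person_months: float) -> float:
    """Return the number of events per 1000 person-months of exposure."""
    return 1000.0 * n_events / person_months

# Hypothetical cohorts with identical total exposure but slightly different event counts:
ibrutinib_rate = rate_per_1000_person_months(n_events=32, person_months=10_000)   # 3.2
comparator_rate = rate_per_1000_person_months(n_events=31, person_months=10_000)  # 3.1

print(f"ibrutinib: {ibrutinib_rate:.1f} vs. comparator: {comparator_rate:.1f} per 1000 person-months")
```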
Despite the well-characterized toxicity profile of the drug, analyses of clinical trials and real-world data have revealed that, in particular patient cohorts, up to approximately 30% of patients discontinue ibrutinib therapy due to toxicity [57,61,62,[73][74][75][76]. Resistance Mechanisms and Clinical Implications Despite the clinical efficacy of ibrutinib, both primary and acquired resistance to the agent has been described in clinical trials and real-world patient settings [13,16,57,58,74,[77][78][79][80]. Primary resistance to ibrutinib, accounting for approximately 13-30% of cases, refers to patients with CLL who show no initial response and is mostly observed in RR-CLL cases with a possible underlying Richter transformation. These observations are in line with results showing that patients with germinal center B cell (GCB)-DLBCL almost uniformly lack responsiveness to ibrutinib [77,81,82]. In the case of DLBCL, overexpression of CD79B was also reported to be responsible for primary ibrutinib resistance; however, such a mechanism has not been reported in CLL [83]. Secondary resistance to BTK inhibitors is far better characterized in CLL, where it can be manifested as a Richter transformation during the first year of therapy or as progressive CLL [62,78,[84][85][86]. It has been shown by whole-exome sequencing studies that CLL cells under ibrutinib pressure are prone to clonal shifts [87,88]. So far, complex karyotype, 17p deletion and BCL6 abnormalities have been shown to be risk factors for acquired secondary resistance [84,89]. Whole-exome sequencing analysis of patients with late relapses showed acquired mutations in BTK at the ibrutinib binding site (C481), with a cysteine-to-serine substitution, and in PLCγ2, the kinase immediately downstream of BTK, in which multiple activating mutations were identified [87,88,90]. The functional characterization of these mutations demonstrated that the BTK C481S mutation reduces the binding affinity of ibrutinib for BTK, leading to reversible inhibition. Because of the relatively short half-life of ibrutinib, it has been confirmed that patients who relapse with the C481S mutation show expression of phosphorylated BTK that is not inhibited by the administration of ibrutinib [91]. The serine residue at position 481 prevents ibrutinib from binding covalently, making the bond reversible and reducing the ability of the drug to inhibit the mutant form of BTK, thereby reducing its clinical activity [91]. Importantly, these mutations may be found in CLL cells up to one year before clinical relapse occurs, potentially allowing for preemptive therapy modification [88,92,93]. Additional mutations at the T474 locus (T474I and T474S), leading to diminished ibrutinib selectivity and affinity, have been described [94]. Recently, a T316A substitution in the non-kinase SH2 domain was identified; it is predicted that this results in diminished BTK interactions with BLNK and other proteins, driving ibrutinib resistance [95]. Nevertheless, mutations in BTK may occur simultaneously with mutations in PLCG2, which may pose problems when selecting the proper next line of treatment [88,93,96]. The mutations identified in PLCG2 have all been demonstrated to be potential gain-of-function mutations, allowing BCR signaling activation in the presence of inactive BTK [88,93,97]. In PLCG2, the R665W missense mutation leads to BTK-independent activation after BCR engagement, allowing BCR signaling to bypass BTK. 
Furthermore, the BCR-proximal kinases SYK and LYN are critical for the activation of mutant PLCγ2, indicating that SYK and LYN blockade may have clinical relevance in overcoming acquired ibrutinib resistance mediated by mutated PLCγ2 [97]. In addition to the point mutations in the above-mentioned genes, deletion of the short arm of chromosome 8 has also been connected with secondary resistance to ibrutinib [87]. In a series of five patients, Burger et al. reported expansion of clones harboring del(8p) with additional driver mutations (EP300, MLL2 and EIF2A); interestingly, in one patient this clonal shift resulted in trans-differentiation into a CD19-negative histiocytic sarcoma [87]. Del(8p) resulted in haploinsufficiency of the TNF-related apoptosis-inducing ligand receptor (TRAIL-R), leading to TRAIL insensitivity. Combined administration of TRAIL and ibrutinib diminished the viability of CLL cells in non-del(8p) samples; however, this was not observed in the del(8p) samples [87]. The association of del(8p) with point mutations in RPS15 and SF3B1 could potentially be linked with an additional clonal advantage of these cells upon ibrutinib treatment [87]. Recently, an enrichment in SF3B1, MGA, BIRC3, NFKBIE, CARD11 and XPO1 point mutations was noted in CLL samples with acquired ibrutinib resistance [88,90]. Cosson et al. described a gain of the short arm of chromosome 2 (2p) leading to overexpression of the exportin-1 gene (XPO1) [98]. XPO1 regulates the transport of several cell cycle regulatory proteins, e.g., p53, FOXO and retinoblastoma protein (pRb), from the nucleus to the cytoplasm [99,100]. Overexpression of XPO1 leads to an efflux of the above-mentioned proteins from the cell nucleus, preventing them from exerting their regulatory functions, and has been linked with resistance to fludarabine-cyclophosphamide-rituximab (FCR) immunochemotherapy and to ibrutinib [88,98]. Recently, Spina et al., based on observations of 31 high-risk CLL patients treated with ibrutinib within the IOSI-EMA-001 study (#NCT0287617), proposed that clonal shifts promote cells with constitutive activation of the AKT, ERK and non-canonical NF-κB pathways, preventing their death [101]. ERK activation has already been shown to mediate ibrutinib resistance in MW [102]. Alternative Irreversible BTK Inhibitors Ibrutinib has dramatically changed the outcomes of CLL patients. Nevertheless, acquired ibrutinib resistance and disease progression remain a real challenge in CLL treatment. Moreover, off-target kinase activity may contribute to adverse effects, which are the most common reason for discontinuation in clinical practice. Therefore, more specific BTKi could possibly be better tolerated while maintaining high clinical efficacy. The most important compounds with potential activity as post-ibrutinib therapy are listed in Table 1. Acalabrutinib Acalabrutinib (ACP-196) is an oral, highly selective, irreversible and covalent BTK inhibitor which has been shown to inhibit off-target kinases, such as EGFR and ITK, to a lesser extent than ibrutinib [103,104]. In the phase I/II ACE-CL-001 trial (#NCT02029443), acalabrutinib was administered to 134 RR-CLL patients after at least one prior treatment regimen [105]. The ORR reached 85%, and inclusion of partial responses with lymphocytosis (PR-L) increased it to 93%, while the median PFS was not reached. 
The toxicity profile was better than that noted for ibrutinib, with a lower frequency of typical ibrutinib adverse events (AEs): hypertension of any grade occurred in 11% of patients (grade ≥3 in 2%), and atrial fibrillation in 3% (grade ≥3 in 2%). No grade ≥3 bleeding episodes occurred. Given the lower off-target toxicity of acalabrutinib in comparison to ibrutinib, acalabrutinib has been studied in ibrutinib-intolerant patients. In a subanalysis of the ACE-CL-001 trial, 33 patients were included in the ibrutinib-intolerant cohort [107]. Acalabrutinib was discontinued in two patients (6%) due to AEs (one unrelated grade 5 fungal infection and one unrelated grade 3 metastatic endometrial cancer). Several studies evaluating acalabrutinib in monotherapy or in combination with other agents as an alternative for ibrutinib-intolerant patients are ongoing, including trials which directly compare these two agents (phase III trial #NCT02477696). In summary, results to date demonstrate that acalabrutinib is highly active both as salvage therapy for CLL and in patients with ibrutinib intolerance. Acalabrutinib seems to display better tolerability than ibrutinib; however, the results of a head-to-head comparison of both agents are eagerly awaited. Zanubrutinib Zanubrutinib (BGB-3111) is another irreversible second-generation BTKi which is also more selective for BTK than ibrutinib [108]. A phase I study by Tam et al. established a dose of 320 mg daily (either QD or 160 mg BID) as inhibiting 99.5% of nodal BTK [109]. After a median follow-up of 13.7 months, 89 (94.7%) CLL/SLL patients had remained on treatment. Grade 3 neutropenia occurred in two patients, whereas one patient suffered from subcutaneous hemorrhage. Seventy-eight patients were evaluable for the efficacy analysis. The ORR was 96.2% and the estimated 12-month PFS rate was 100%. A head-to-head comparison between zanubrutinib and ibrutinib is being conducted in an ongoing trial for patients with relapsed and refractory CLL or SLL (#NCT03734016). Tirabrutinib Tirabrutinib (ONO/GS-4059) is a second-generation, irreversible BTKi. Similarly to acalabrutinib, the drug displays a higher degree of selectivity for BTK than for other TEC kinases [110,111]. In a multicenter phase I dose-escalation study (#NCT01659255), tirabrutinib was administered to 90 patients with relapsed and refractory B cell malignancies, including 28 patients with CLL [111]. Eleven patients (39%) were refractory to chemotherapy, nine (36%) had del(17p) and 13 (52%) had a TP53 mutation. There were nine dose-escalation cohorts ranging from 20 mg to 600 mg once daily. The median follow-up for CLL was 560 days. Twenty-four of 25 evaluable CLL patients (96%) responded to tirabrutinib regardless of the tested dose level, and 24 (96%) patients had an objective lymph node response. Similarly, all of the patients with del(17p) or a TP53 mutation responded to the treatment. At the data cut-off, 75% continued with tirabrutinib. Seven patients discontinued treatment: five due to AEs and two due to progressive disease. The majority of grade 3/4 AEs were hematologic (anemia in 11% of patients and thrombocytopenia in 7%) and resolved spontaneously during therapy. One patient experienced a grade 3 bleeding event (spontaneous muscle hematoma). No arrhythmias were observed [111]. The efficacy and toxicity profile from this early-phase trial seems more favorable than that of ibrutinib. 
Reversible BTK Inhibitors Acalabrutinib, zanubrutinib and tirabrutinib display higher BTK selectivity; however, they all bind covalently via the C481 residue. Reversible BTK inhibition is a promising strategy to combat progressive CLL, and multikinase inhibition demonstrates superior efficacy to targeted ibrutinib therapy in the setting of Richter transformations. In summary, given their activity regardless of the C481S mutation, this class of agents may offer an alternative in the setting of ibrutinib resistance. GDC-0853 GDC-0853 is an oral, highly selective and reversible BTK inhibitor which binds independently of C481 [112,113]. GDC-0853 suppresses downstream BCR signaling, resulting in downregulation of the NF-κB pathway and inhibition of cell proliferation [112,113]. In a phase I study, GDC-0853 was administered to 24 patients with relapsed or refractory non-Hodgkin lymphoma (10 patients) or CLL (14 patients) [114]. Six patients were positive for the C481S mutation. There were three dose cohorts receiving 100, 200 or 400 mg once daily. One third of patients (eight out of 24) responded: one had a CR and seven had a PR or PR-L. Common AEs included fatigue (37%), nausea (33%), diarrhea (29%), thrombocytopenia (25%), headache (20%), and abdominal pain, cough and dizziness (16% each). Nine serious AEs were reported in five patients, of whom two had fatal outcomes (confirmed H1N1 influenza and influenza pneumonia). One patient with a C481S mutation achieved a PR and another two patients had a decrease in the size of target tumors (−23% and −44%). These data demonstrate that GDC-0853 was generally well tolerated and has the potential to overcome C481S-mediated resistance. Vecabrutinib (SNS-062) Vecabrutinib is a novel reversible BTK and ITK inhibitor which has the ability to inhibit BTK even in the presence of the C481S mutation. Moreover, it seems to inhibit EGFR to a lesser extent than ibrutinib [115,116]. Recently, the preliminary results of the ongoing phase Ib/II study of vecabrutinib were published (#NCT030376450) [117]. Nine patients (6 CLL, 2 MCL, 1 WM) have been treated in the phase Ib portion of the study (25 mg BID, n = 3; 50 mg BID, n = 6). The cohort was heavily pretreated, with a median of five prior regimens. All patients had a history of BTKi treatment (eight with ibrutinib, one with acalabrutinib), four patients had received venetoclax and two had received chimeric antigen receptor T cells (CAR-T). At baseline, 67% (six out of nine) of patients had del(17p) or TP53 mutations. The most common grade ≥3 AEs were anemia, neutropenia and increased alanine transaminase (ALT) activity, each occurring in one patient. The results indicate that vecabrutinib is rather well tolerated; however, final results with a response assessment are needed to verify its efficacy. LOXO-305 LOXO-305 is a recently described, non-covalent, reversible and selective BTKi [118]. In preclinical models, LOXO-305 has displayed potent BTK inhibition regardless of the presence of the C481S mutation. Moreover, it has minimal off-target kinase and non-kinase activity. Clinical trials are at a very early stage; a phase I/II study of oral LOXO-305 in patients with RR-CLL/SLL and non-Hodgkin lymphoma (NHL) (#NCT03740529) has recently started. ARQ-531 ARQ-531 binds BTK reversibly and non-covalently in the ATP binding region, bypassing C481 [119]. Additionally, ARQ-531 inhibits other kinases in the BCR signaling pathway, e.g., the SRC-family kinase LYN and MEK1, which, in turn, blocks downstream ERK signaling. 
Because of its multiple target sites, ARQ-531 was able to inhibit BCR signaling in animal models and patient samples in cells harboring either C481S or PLCG2 mutations. A phase I clinical trial is ongoing to assess safety and toxicity and to establish dosing (#NCT03162536). Preliminary results from 16 patients with relapsed and refractory B cell malignancies and a history of BTKi treatment, including 12 patients with CLL, were published recently by Woyach et al. [120]. Patients were treated with oral doses of up to 30 mg. The most commonly reported AEs were diarrhea, nausea, vomiting, fatigue, neutropenia and thrombocytopenia, hypernatremia, facial paralysis and headache, each reported in one patient (6.3%). Grade 3 AEs occurred in two patients (one lipase level elevation and one thrombocytopenia). Twelve patients received at least one dose of the study drug and were evaluable for response assessment: five achieved stable disease (SD) and seven had progressive disease (PD). The results seem to show a manageable toxicity profile; however, longer follow-up for objective responses is awaited. Venetoclax Venetoclax (ABT-199) is a BCL2 antagonist capable of inducing rapid apoptosis of CLL cells independently of microenvironmental signals and signaling pathways [4,121,122]. In the pivotal trial (#NCT01328626), monotherapy with the agent showed remarkable activity in heavily pretreated RR-CLL, achieving a 77% ORR, including a 23% CR rate [122]. The addition of rituximab further potentiated its clinical efficacy and, in a cohort of 49 RR-CLL patients, led to an 88% ORR with a 31% CR rate (#NCT01682616) [123]. Most importantly, venetoclax is also useful in the clinical setting of ibrutinib resistance, based on data derived from clinical trials and real-world experience [60,[124][125][126][127]. In one study, 127 RR-CLL patients, of whom 91 had been treated with ibrutinib as the last BCR inhibitor, were treated with venetoclax monotherapy. In the analyzed group, 59/91 (64.8%) patients achieved a response to venetoclax and, within a median follow-up of 14 months, 17 (19%) patients died; however, only seven deaths were attributable to disease progression (#NCT02141282) [126]. These results are supported by data obtained in US, Italian and UK real-world cohorts [60,124,127]. In the US cohort of 683 patients initially treated with ibrutinib or idelalisib, either chemoimmunotherapy, an alternate kinase inhibitor or venetoclax was administered upon kinase inhibitor failure. Based on PFS data, a switch to an alternate kinase inhibitor or to venetoclax was most beneficial when compared to immunochemotherapy. Furthermore, patients who discontinued ibrutinib due to progression or toxicity had marginally improved outcomes if they received venetoclax (79% ORR) compared to idelalisib (46% ORR) [60]. The efficacy of venetoclax was also confirmed in 105 RR-CLL patients in the UK cohort. In this group, 60% of patients had received a BTK inhibitor only and 10% had been treated with both a BTK and a PI3K inhibitor. Treatment was stopped due to disease progression in 54% and 44% of patients, respectively. In the BTK inhibitor cohort, an 85% ORR (23% CR) was achieved. Moreover, venetoclax was also active in patients exposed to both BTK and PI3K inhibitors, in whom an 80% ORR (23% CR) was reached [127]. In the Italian cohort, of the 76 evaluable patients, 52 had received venetoclax after one BCR inhibitor (ibrutinib, n = 37; idelalisib, n = 15) and 24 had received two BCR inhibitors. 
Although the ORR following treatment with a single BCR inhibitor was reasonably high in the analyzed cohort, reaching 74%, the results differed strongly once the reason for BCR inhibitor discontinuation was taken into account. The highest response to venetoclax (91% ORR) was noted in patients who had discontinued the BCR inhibitor due to adverse events; however, in cases where the therapy had been stopped due to disease progression, only a 49% ORR was observed [124]. Venetoclax activity diminished, however, if administered following two or more BCR inhibitors. In a cohort of 28 such patients, the ORR reached only 43%; however, the difference between cases that had ceased prior therapy due to adverse events and those that had ceased due to disease progression was much smaller than in the Italian cohort (50% vs. 38%) [128]. Considering the above-mentioned data, venetoclax is so far the best treatment modality for CLL cases following failure of BCR inhibitor therapy, and should be administered as the first treatment choice, given its diminished activity with increasing lines of selective agents. Whether the preemptive addition of venetoclax to ibrutinib therapy in the case of expanding ibrutinib-refractory clones (e.g., those harboring BTK mutations) could prevent clinical disease progression is still an open question. Nevertheless, the combination of ibrutinib and venetoclax was shown to be well tolerated and effective in cohorts of TN-CLL and RR-CLL cases [129,130]. In the CLARITY study, a response was noted in 47 out of 53 RR-CLL patients treated with such a combination (89%), with 51% achieving CR and 36% achieving bone marrow minimal residual disease (MRD) negativity [130]. Another phase 2 trial (#NCT01799889) of the SYK inhibitor entospletinib was designed for patients with RR-CLL, including those with Richter transformation (RT), who had previously been treated with a BTKi [133]. Forty-nine patients were enrolled, including eight with RT. Ibrutinib was the most frequently used BTKi in their disease course (37 patients; 75.5%). Entospletinib was administered at a dose of 400 mg BID. Sixteen patients (32.7%) achieved PR and 21 (42.9%) had SD; however, no complete responses were noted. The median PFS was 5.6 months. AEs occurred in all patients and were similar to those in the previous study [133]. Entospletinib in monotherapy demonstrates clinical activity in patients with RR-CLL, including those who have relapsed after BTKi therapy. In addition, Cheng et al. showed that C481S mutation-mediated resistance may be abolished with other SYK inhibitors [91]. In an in vitro model, CLL cells were treated with two SYK inhibitors, cerdulatinib (a dual SYK/JAK inhibitor, PRT062070) and the highly specific SYK inhibitor PRT060318, as well as with dasatinib (a LYN and BTK inhibitor). Both of the inhibitors with activity against SYK (PRT062070 and PRT060318), as well as dasatinib, led to a complete block of CLL cell proliferation [91]. Cerdulatinib is a dual SYK/JAK kinase inhibitor which, in preclinical studies, was shown to have greater antitumor activity than inhibition of either target alone [91,134]. The phase I dose-escalation study of cerdulatinib in 43 patients with relapsed and refractory B-cell malignancies detailed its pharmacokinetics, safety and initial efficacy [135]. Of the dosed patients, eight had CLL/SLL, 13 had follicular lymphoma and 22 had aggressive B cell NHL. The most common grade 3 or 4 AEs were anemia (16%), fatigue (14%) and diarrhea (9%). Cerdulatinib was temporarily interrupted due to AEs in 13 patients. The most common of these were fatigue (n = 5) and gastrointestinal events (n = 3). 
Five patients achieved a partial response (CLL, n = 3; FL, n = 1; transformed FL grade 3B, n = 1). All patients with CLL who achieved a response were on therapy for more than 200 days. Five patients did not respond to the treatment: two due to Pneumocystis pneumonia, one due to grade 5 pneumonia and another two due to rapid disease progression [135]. A phase II study assessing the efficacy of cerdulatinib is currently ongoing (#NCT01994382). Compound 1 A novel class of BTK-targeting agents which inhibit both BTK and PI3Kδ has recently been designed [139]. It has been hypothesized that simultaneous inhibition of these pathways could result in deeper responses or overcome resistance in comparison with the use of a single agent. Compound 1 is the first drug in this class to undergo biological testing [139]. Mammalian Target of Rapamycin (mTOR) Pathway Inhibition mTOR is a downstream kinase in the PI3K-AKT pathway which promotes CLL cell proliferation. CC-115, a novel inhibitor of both the mammalian target of rapamycin kinase (TORK) and DNA-dependent protein kinase (DNA-PK) belonging to a novel class of drugs, was evaluated in vitro and led to caspase-dependent cell killing irrespective of p53, ATM, NOTCH1 or SF3B1 status [144]. This was confirmed in CLL samples obtained from patients with acquired resistance to ibrutinib treatment [144]. A phase Ia/Ib trial (#NCT01353625) is ongoing, and its preliminary results have revealed the efficacy of CC-115 in eight patients with RR-CLL harboring an ATM mutation. Seven patients achieved a decrease in lymphadenopathy, one achieved a PR and three achieved PR-L [144]. Hsp90 Inhibition Hsp90 is a molecular chaperone that stabilizes client proteins and prevents their proteasomal degradation [145]. SNX-2112 and its prodrug SNX-5422 are novel, highly selective Hsp90 inhibitors that have been shown to be active in vitro in multiple myeloma and other hematological malignancy cell lines [146]. This has resulted in the translation of SNX-5422 into a phase I clinical trial (#NCT02973399) assessing whether the combination of SNX-5422 with ibrutinib provides a better clinical response than ibrutinib therapy alone in subjects who have residual disease but not PD. The results of the trial have not been published yet. Another Hsp90 inhibitor, AUY922, was found to initiate the degradation of BTK and IκB kinases in ibrutinib-resistant MCL cell lines [147]. Other Potential Molecules for BTKi-Resistant Patients The bromodomain and extraterminal (BET) family of proteins has been targeted by many antagonists in various types of tumors [148]. The BET family regulates gene transcription [149]. JQ1 is an inhibitor of BRD4, a member of the BET family. In preclinical studies, it inhibited the transcription of BCL2 and c-MYC and thereby mediated the apoptosis of acute myelogenous leukemia cells [150]. In MCL cell lines, it was shown to decrease BTK, PLCγ2 and AKT levels, and to induce apoptosis of the cells [151]. Moreover, the combination of JQ1 and ibrutinib worked synergistically in inducing apoptosis [151]. Pimasertib (AS703026/MSC1936369B) is a selective MEK1/2 inhibitor which has been extensively tested in clinical trials in solid tumors. Recently, in vitro research revealed its synergism in combination with idelalisib or ibrutinib in DLBCL and MCL cell lines [152]. Afuresertib (GSK2110183) is an oral inhibitor targeting AKT which was shown to inhibit cell proliferation in several hematologic malignancy cell lines [153]. 
In a phase I-II study, afuresertib demonstrated activity in heavily pretreated multiple myeloma and NHL patients [153]. The activity of nutlin-3, an MDM2 inhibitor, in combination with ibrutinib was evaluated in vitro in a panel of B cell leukemia cell lines and in CLL patient samples [154]. The study revealed enhanced apoptosis, even in samples carrying del(17p) and/or TP53 mutations [154]. Chimeric Antigen Receptor T Cells T cells modified with a chimeric antigen receptor (CAR) have been shown to overcome immunological tolerance and mediate tumor rejection in patients with acute lymphoblastic leukemia (ALL) and DLBCL [155]. Although the use of CAR T cells in CLL is limited owing to the widespread availability of new effective treatment modalities, the safety and efficacy data suggest that they may overcome BCR inhibitor resistance [156][157][158]. Initial studies by Porter et al. on heavily pretreated RR-CLL patients have generated encouraging preliminary data. The ORR was eight of 14 (57%), with four CRs and four PRs. The in vivo expansion of the CAR T cells correlated with clinical responses, and the CAR T cells persisted and remained functional beyond 4 years in the first two patients achieving CR. At the time of publication, no patient in CR had relapsed [159]. In a group of 24 RR-CLL patients, anti-CD19 CAR-T cells showed promising clinical activity in the setting of ibrutinib resistance (#NCT01865617) [157]. In the analyzed group, 19 patients had progressed under ibrutinib, three had ibrutinib intolerance and two did not experience progression while receiving ibrutinib. Moreover, six patients were venetoclax refractory and 23 had a complex karyotype and/or 17p deletion. In this cohort, four weeks after CAR-T cell infusion, a 71% ORR was achieved. Twenty of 24 patients received cyclophosphamide and fludarabine lymphodepletion before CAR-T cell infusion. Among this group, restaging was performed in 19 patients (including 16 ibrutinib-refractory patients) and showed a 74% ORR (21% CR and 53% PR). Minimal residual disease (MRD) negativity based on IGH sequencing was shown in seven out of 12 (58%) assessed patients. In these patients, 100% PFS and OS rates were noted at 6.6 months of follow-up. In the whole cohort, the median PFS was 8.6 months, whereas the median OS was not reached [157]. In the TRANSCEND CLL 004 phase I/II trial, anti-CD19 CAR-T cells were administered to 10 patients with RR-CLL, 70% of whom were classified as having high-risk disease. The median number of prior therapies in this cohort was four (range 3-8); 90% of patients had received ibrutinib and 60% had been treated with both venetoclax and ibrutinib [158]. At 30 days post-dose, a response was achieved in six of eight (75%) patients, including four (50%) CRs. Furthermore, six of seven (85.7%) patients were characterized by MRD negativity. Of the five patients eligible for response assessment after 3 months, four had a sustained response with MRD negativity; however, one Richter transformation was noted [158]. Considering the above-mentioned results, despite the short observation time and the low number of treated patients, the use of CAR-T cells following ibrutinib failure represents an interesting treatment modality. Conclusions and Future Directions The development of ibrutinib and other selective BTK inhibitors has expanded the therapeutic armamentarium for CLL. However, despite the high clinical activity of ibrutinib, some patients discontinue the treatment due to loss of the initial response or intolerance. 
Considering the increasingly widespread use of ibrutinib in RR-CLL and its potential shift to the frontline setting, the number of patients with BTK inhibitor resistance or failure will likely increase. Fortunately, several potential strategies for overcoming this clinical problem exist. First, the use of venetoclax, alone or in drug combinations, is so far the most promising option following ibrutinib therapy failure. Second, a combination of ibrutinib with chemoimmunotherapy for a restricted period of time could potentially limit clonal disease evolution, minimizing the development of ibrutinib-resistant clones. This approach, however, is still under evaluation, and further prolonged observation is necessary. Third, the use of CAR-T cells presents a viable therapy for disease progression upon resistance to ibrutinib, regardless of the presence of a Richter transformation; however, the data are scarce and not as promising as in DLBCL and ALL. An increasing amount of translational data may help to guide post-ibrutinib therapies depending on the disease's clinical presentation and molecular characteristics.
2019-11-27T14:04:40.020Z
2019-11-21T00:00:00.000
{ "year": 2019, "sha1": "edf0e0d9bf10b6df6169e35d25db8a856d4ac259", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/11/12/1834/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bf4503a0a912687c40104c5b7212085b58ae5017", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119133641
pes2o/s2orc
v3-fos-license
Asymptotes in SU(2) Recoupling Theory: Wigner Matrices, $3j$ Symbols, and Character Localization In this paper we employ a novel technique combining the Euler-Maclaurin formula with the saddle point approximation method to obtain the asymptotic behavior (in the limit of large representation index $J$) of generic Wigner matrix elements $D^{J}_{MM'}(g)$. We use this result to derive asymptotic formulae for the character $\chi^J(g)$ of an SU(2) group element and for Wigner's $3j$ symbol. Surprisingly, given that we perform five successive layers of approximations, the asymptotic formula we obtain for $\chi^J(g)$ is in fact exact. This result provides a non-trivial example of a Duistermaat-Heckman like localization property for discrete sums. Introduction The saddle point approximation (SPA) is a classical algorithm to determine the asymptotic behavior of a large class of integrals in some large parameter limit [1]. One uses it when exact calculations are either too complex or not very relevant. Recently SPA has been used in conjunction with the Euler-Maclaurin (EM) formula to derive the asymptotic behavior of discrete sums [2,3]. In the combined EM SPA scheme, corrections to the leading behavior come from two sources: the derivative terms in the EM formula and subleading terms in the SPA estimate. In this paper we use the EM SPA method to derive the asymptotic behavior of Wigner rotation matrix elements. We subsequently use this asymptotic formula to derive the asymptotic behavior of the character of an SU(2) group element. Although our estimate is obtained after using the EM SPA approximation twice and the Stirling approximation for Euler's Gamma functions once, it turns out to be the exact result. We then proceed to obtain the asymptotic expression for Wigner's 3j symbol, recovering with this method the results of [4]. In the recoupling theory of SU(2), the EM SPA method has already been used to obtain in a particularly simple way the Ponzano-Regge asymptotics of the 6j symbol [3], [19]. The main strength of this approach is the following. Most relevant quantities in the recoupling theory of SU(2) are expressed in Fourier space by discrete sums. In particular, the Wigner matrix elements admit a single sum representation [20]. However, generically, the sums are alternating, hence difficult to handle. Our EM SPA method deals very efficiently with alternating signs: generically such signs lead to complex saddle points situated outside the initial summation interval. After exchanging the original sums (via the EM formula) for integrals, one only deforms the integration contour in the complex plane to pass through the saddle points in a completely standard manner. This feature is the crucial strength of our method, and allows rapid access to explicit results. The EM SPA method should allow one to prove, for instance, the asymptotic behavior [21] of the 9j symbol. The proofs of our three main results (Theorems 1, 2 and 3) are straightforward, but the sheer amount of computations performed renders this a somewhat technical paper. In Section 2 we give a quick review of iterated saddle point approximations. In Section 3 we establish Theorem 1 and use it in Section 4 to derive the character formula (Theorem 2). Section 5 proves the asymptotic formula for the 3j symbol (Theorem 3). Section 6 draws the conclusions of our work and discusses the relation between our result for the character and the Duistermaat-Heckman theorem. 
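For orientation, the one-dimensional saddle point estimate that underlies the EM SPA scheme can be recorded here in its standard textbook form; the notation (prefactor K, exponent f, critical points u_*) is chosen to match the symbols used in the proofs below, and the precise error terms are those discussed in [1]:

$$\int du\, K(u)\, e^{J f(u)} \;\approx\; \sum_{u_*\,:\;\partial_u f(u_*) = 0} K(u_*)\,\sqrt{\frac{2\pi}{J\,\big(-\partial_u^2 f\big)(u_*)}}\;e^{J f(u_*)}, \qquad J \to \infty,$$

with the contour deformed through the (possibly complex) saddle points when the integrand is oscillatory.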
The (very detailed) Appendices present explicit computations and detail the EM derivative terms. Successive saddle point approximations We briefly review the iterated SPA approximations. The result of this section justifies the use of our asymptote of the Wigner matrices to derive the asymptotic behavior of SU(2) characters and Wigner 3j symbols. Consider a function f of two real variables. We are interested in evaluating the asymptotic behavior of the integral for large J. One can choose either to evaluate I via an SPA in both variables at the same time or via two successive SPAs, one for each variable. The question is whether the two estimates coincide. This problem is addressed in full detail in [1] and the answer to the above question is yes (for sufficiently smooth functions), with known estimates. Let us give a quick flavor of the origin of this result. Proof: The simultaneous SPA in u and x yields the estimate and, as [∂_u f](h(x), x) = 0, the first term above vanishes. The critical point x_c is therefore and noting that the estimate obtained by two successive SPAs is with ξ = cos(β/2), η = sin(β/2). The sum is taken over all t such that all factorials have positive argument (hence it has 1 + min{J + M, J − M, J + M', J − M'} terms). We call a Wigner matrix generic if its second Euler angle β ∉ Zπ (that is 0 < ξ^2 < 1). We define the reduced variables x = M/J and y = M'/J. A priori the asymptotic behavior we derive below holds in a certain region of the parameters x, y and ξ detailed in Appendices E and C. Theorem 1. A generic Wigner matrix element in the spin J representation of an SU(2) group element has in the large J limit the asymptotic behavior with φ, ψ and ω the three angles Proof: The proof of Theorem 1 is divided into two steps: first the approximation of eq. (8) by an integral via the EM formula, and second the evaluation of the latter by an SPA. Step 1: In the large J limit the leading behavior of the Wigner matrix element eq. (8) is where and To prove this we rewrite eq. (8) in terms of Gamma functions and use the Euler-Maclaurin formula where B_1, B_{2k} are the Bernoulli numbers. To derive our asymptote we only take into account the integral approximation of eq. (15) (the boundary terms are discussed in Appendix E), hence We define u = t/J, hence du = dt/J, and using the Stirling formula for the Gamma functions (see Appendix A) we get eq. (12). Step 2: We now proceed to evaluate the integral (12) by an SPA. Some of the computations relevant for this proof are included in Appendix B. Denoting the set of saddle points by C, the leading asymptotic behavior of a generic Wigner matrix element writes Our task is to identify C and compute K|_{x,y,u_*}, (−∂_u^2 f)|_{x,y,u_*} and f(x, y, u_*). The set C. The derivative of f with respect to u is A straightforward computation shows that the saddle points are the solutions of The region of parameters x, y, ξ for which the discriminant of eq. (21) is positive gives exponentially suppressed matrix elements, while the region for which it is zero gives an Airy function estimate. Both cases are detailed in Appendix C. In the rest of this proof we treat the region in which the discriminant of eq. (21) is negative. We denote by ∆ minus the reduced discriminant, that is and the two saddle points, solutions of eq. (21), write thus the set of saddle points is C = {u_+, u_-}. Evaluation of f(x, y, u_±). We rearrange the terms in eq. (13) to write Note that by the saddle point equations the last line in eq. (24) is zero for u_±. 
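For reference, the two standard analytic ingredients invoked in Step 1 — the Euler-Maclaurin formula and Stirling's approximation for the Gamma function — take the following textbook forms (stated here with a generic remainder; the paper's eq. (15) and Appendix A may differ only in how the remainder terms are organized):

$$\sum_{t=a}^{b} F(t) \;=\; \int_a^b F(t)\,dt \;+\; \frac{F(a)+F(b)}{2} \;+\; \sum_{k=1}^{m} \frac{B_{2k}}{(2k)!}\Big[F^{(2k-1)}(b)-F^{(2k-1)}(a)\Big] \;+\; R_m,$$

$$\ln\Gamma(z) \;=\; z\ln z \;-\; z \;-\; \tfrac{1}{2}\ln z \;+\; \tfrac{1}{2}\ln(2\pi) \;+\; O(1/z).$$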
The rest of eq. (24) computes to (see Appendix B.1 for details) with ıφ = ln Second derivative. The derivative of eq. (19) is At the saddle points a straightforward computation shows that the second derivative is (see The prefactor K. The prefactor K(x, y, u) is which computes at the saddle points to (see Appendix B.3) Final evaluation. Before collecting all our previous results we first evaluate, using eq. (28) and (30) Comparing eq. (31) with eq. (11) we conclude that Substituting eq. (32) and (25) into eq. (18) we obtain and a straightforward computation proves Theorem 1. Characters In this section we use Theorem 1 to derive an asymptotic formula for the character of a SU (2) group element. Theorem 2. The leading asymptotic behavior of the character of a SU (2) group element (with Euler angles (α, β, γ)) in the J representation, χ J (α, β, γ) is with θ defined by Let us emphasize that up to this point we already performed three different approximations: first the EM approximation, second the Stirling approximation and third the SPA approximation. To prove Theorem 2 we will use a second EM approximation and a second SPA approximation. However, formula (35) is exactly the classical relation between the Euler angle parameterization and the θ, n parameterization of an SU (2) group element, thus the leading behavior we find (after five levels of approximation) is in fact the exact formula of the character! We will discuss this rather surprising result in Section 6. with x = M J the rescaled variable. Note that the step in the second sum is dx = 1 J . The leading EM approximation (see end of Appendix E) for the character is therefore the continuous integral (dropping henceforth the argument (α, β, γ)) We now use Theorem 1 (more precisely eq. (33)) and write a diagonal Wigner matrix element as Note that for diagonal matrix elements the exponents simplify to while the discriminant ∆ and angles φ, ψ and ω from eq. (11) become We follow the same steps as in the proof of Theorem 1. Critical set C χ . The derivatives of the exponents for each of the two terms in eq. (38) are The derivative of φ computes to The difference ψ − ω computes to and its derivative Combining eq. (43) and (45) we have and the saddle point equations (42) simplify to Dividing by 2 and exponentiating we get . Asymptotes of 3j symbols In this section we employ the asymptotic formula for the Wigner matrices to obtain an asymptotic formula for Wigner's 3j symbol. Note that one can use directly the EM SPA method to derive this asymptotic starting from the single sum representation of the 3j symbol [20]. We take here the alternative route of using the results of Theorem 1 and the representation of 3j symbols in terms of Wigner matrices where the integral is taken over SU (2) with the normalized Haar measure Substituting the asymptote (9) for each matrix element We expand (61), perform the integration over α and γ and change variables from β to ξ to rewrite it as where the index i runs from 1 to 3, δ i J i x i ,0 is a Kronecker symbols and f i is We will derive the asymptotic behavior of eq. (63) via an SPA with respect to ξ 2 . Note that eq. (59) involves two distinct 3j symbols. If one attempts to first set M i = M i , and obtain a representation of the square of a single 3j symbol, one encounters a very serious technical problem. We will see in the sequel that there are two saddle points ξ 2 ± contributing to the asymptotic behavior of eq. (63). 
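For completeness, before the saddle point analysis of eq. (63) is carried out, it is worth recording the exact expressions that Theorem 2 reproduces: in standard conventions, the SU(2) character in the spin-J representation is the classical Weyl character, and the relation between the Euler angles (α, β, γ) and the class angle θ (which is the classical relation the text identifies with eq. (35)) reads

$$\chi^J(\theta) \;=\; \frac{\sin\!\big((2J+1)\,\theta/2\big)}{\sin(\theta/2)}, \qquad \cos\frac{\theta}{2} \;=\; \cos\frac{\beta}{2}\,\cos\frac{\alpha+\gamma}{2}.$$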
If one starts by setting M i = M i , one of the two saddle points ξ 2 + = 1, and the second derivative in ξ 2 + diverges. The contribution of this saddle point does not evaluate by a simple Gaussian integration. The SPA evaluation of the general case, eq. (63), is a very lengthy computation. We will perform it using the classical angular momentum vectors. For large representation index J i , there exists a classical angular momentum vector J i in R 3 of length | J i | = J i and projection on the Oz axis (of unit vector n) n · J i = M i . A 3j symbol is then associated to three vectors, J 1 , J 2 , J 3 with | J i | = J i and n · J i = M i = x i J i . By the selection rules the quantum numbers J i respect the triangle inequalities, and M 1 + M 2 + M 3 = 0. This translate into the condition that the vectors J i form a triangle J 1 + J 2 + J 3 = 0 (and n · [ J 1 + J 2 + J 3 ] = 0). The asymptotic behavior of the 3j symbol writes in terms of the angular momentum vectors as Theorem 3. For large representation indices J i the 3j symbol has the asymptotic behavior , twice the area of the triangle { J i } and Φ i n and Ψ 13 n and Ψ 23 n five angles defined as Before proceeding with the proof of Theorem 3 note that our starting equation (59) involves two distinct 3j symbols. They are each associated to a triple of vectors, Consequently there exists a rotation which overlaps them. Under this rotation the normal vector n turns into the unit vector k. All the geometrical information can therefore be encoded into an unique triple of vectors, henceforth denoted J i , and the two unit vectors n and k such that figure 1). Proof of Theorem 3: The proof follows the by now familiar routine of an SPA. We perform this evaluation at fixed angular momenta, that is at fixed set of vectors J i , n, k. The dominant saddle points: The saddle points governing the asymptotic behavior of eq. (63) are solutions of the equation A straightforward computation (see Appendix D.1) yields hence the saddle point equation writes Introducing the angular momentum vectors the saddle point equation becomes after a short computation (see Appendix D.2) for all choices of signs s 1 , s 2 and s 3 . Dividing by 4S 2 , eq. (70) factors as with roots again independent of the signs s 1 , s 2 and s 3 . To identify the terms contributing to the asymptotic of eq. (63) for fixed J i , n and k one needs to evaluate J i √ ∆ i for each of the two roots ξ 2 + and ξ 2 − . Using Appendix D.3, we have To any semiclassical state J i , n, k we associate six signs, + i and − i defined by Substituting J i ∆ ± i into the saddle point eq. (69) the latter becomes with A + = ( n ∧ k) and A − = ( S∧ n)( k· S)+( S∧ k)( n· S) As, on the other hand, i J i = 0, we conclude that at fixed semiclassical state we have two saddle points ξ 2 + and two saddle points ξ 2 − contributing • The ξ 2 + saddle point in the term s i = + i and in the term The SPA evaluation of eq. (63) is the sum of this four contributions. The second derivative: The derivative of eq. (68) with respect to ξ 2 yields and the term in the first line cancels (due to the saddle point equation) when evaluating the second derivative at the critical points. After Gaussian integration of the dominant saddle point contributions, the prefactor in the SPA approximation of eq. (63) writes The reminder of this paragraphs is devoted to the evaluation of the K for the two roots ξ 2 + and ξ 2 − . 
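Since the prefactors evaluated below are expressed entirely through the triangle geometry, it may help to note how that geometry is fixed by the quantum numbers. A concrete (and standard, though not necessarily the paper's own) parameterization of classical vectors with |J_i| = J_i and n · J_i = M_i is

$$\vec J_i \;=\; \Big(\sqrt{J_i^2 - M_i^2}\,\cos\lambda_i,\;\; \sqrt{J_i^2 - M_i^2}\,\sin\lambda_i,\;\; M_i\Big),$$

where the azimuthal angles λ_i (introduced here only for illustration) are fixed, up to a rotation about n and a reflection, by the closure condition Σ_i \vec J_i = 0 whenever such a real configuration exists; the magnitude of S = \vec J_1 ∧ \vec J_2, twice the area of the triangle, then follows from Heron's formula,

$$|\vec S| \;=\; \tfrac{1}{2}\sqrt{(J_1+J_2+J_3)(-J_1+J_2+J_3)(J_1-J_2+J_3)(J_1+J_2-J_3)}.$$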
Substituting the second derivative yields Taking into account s 2 1 s 2 s 3 = ± 2 ± 3 , K ± writes where 123 denotes circular permutations on the indices 1, 2 and 3. Using eq. (72), the denominator evaluates, for the ξ 2 + root, while the numerator computes to (see Appendix D.4 for detailed computations and notations) hence Evaluating the denominator in eq. (79) for ξ 2 − yields while a lengthy computation (see Appendix D.4) shows that the numerator is proving that Contribution of each saddle: To evaluate the contribution of each saddle point to the asymptote of eq. (63) we first evaluate Recall that for a fixed semiclassical state only the terms with s i equal to where φ ± i , ψ ± i and ω ± i are the angles φ i , ψ i and ω i evaluated at ξ 2 + and ξ 2 − . For each choice + or − in the accolades, one must count both choices of the overall sign. The angles φ ± i , ± 1 ψ ± 1 − ± 3 ψ ± 3 , etc. are evaluated by a rather involved computation in Appendix D.5. The end results are synthesized below Substituting eq. (88) into eq. (87) yields where Ω n denotes Final evaluation: We put together eq. (82), (85) and (89) and, noting that the two contributions form the saddle ξ 2 − are complex conjugate to one another we obtain Taking into account Theorem 3 follows. Conclusion Using the EM SPA method we have determined the asymptotic behaviors at large spin J of Wigner matrix elements, Wigner 3j symbols and the character χ J (g) of an SU (2) group element g. By far the most surprising fact about this computation is that our formula for the character χ J (g) is exact. SPA reproducing the exact result for integrals are usually the consequence of a Duistermaat Heckman [22,23]localization property (one of the most famous example of this being the Harish Chandra Itzykson Zuber integral [24]). Recall that the Duistermaat-Heckman theorem states that a phase space integral where Ω is the Liouville form, equals its leading order SPA estimation if the flow of the Hamiltonian vector field X (i X Ω = dH) is U (1). To our knowledge all integrals exhibiting a localization property (i.e. equaling their leading order SPA approximation) fall in (some generalization of) this case. Note that the character of an SU (2) group element can be expressed directly as a double integral by where E.M. denotes corrections coming from the Euler Maclaurin approximation, and S the corrections coming from sub leading terms in the Stirling approximation. The double integral in equation (94) is of the correct form, with symplectic form Ω = K(x, x, u)dx ∧ du and Hamiltonian f (x, x, u) generating the Hamiltonian flow Our result can be explained if first, the above flow is U (1) (thus the SPA of the double integral is exact) and second the EM and Stirling correction terms cancel, E.M.+S. = 0. The alternative, namely that the flow is not U (1) would require an even more subtle cancellation of the sub leading correction terms. Either way, the exact result for the character we derive in this paper deserves further investigation. In these appendices we detail various technical points and computations. A The Stirling approximation We detail here the passage from eq. (17) to eq. (12). Our starting point is K(x, y, u) , (A. 4) and K(x, y, u) takes the form in eq. (14). The "-n" terms in the Stirling approximation add to which also implies that the coefficient of ln J in the exponent cancels. The contribution of the Γ functions eq. (A.2) is therefore B Evaluations on the critical set. 
In this appendix we present the various evaluations relevant for the proof of Theorem 1. We start by some preliminary computations. Recall that As a preliminary we compute the absolute values of the four complex numbers which are Furthermore, a direct inspection shows that the coefficients of all ln(1−x), ln(1+x), ln(1−y) and ln(1 + y) cancel. Hence Therefore f (x, y, u ± ) is a purely imaginary number which assumes the form where the three angles φ, ψ and ω read As the two roots u + and u − are complex conjugate, one can absorb the various signs in eq. (B.18) and write One by one φ, ψ and ω compute by substituting eq. (B.10) and eq. (B.11) to ıφ = ln and ıψ = ln and finally ıω = ln B.2 Evaluation of the second derivative. From eq. (27) we have Each term evaluates at the critical points as . The real part of (B.23) is therefore and computes further to The imaginary part of eq. (B.23) is which finally computes to . (B.28) B.3 Evaluation of K The prefactor K| x,y,u ± is which is, using eq. (B.24), and a straightforward computation proves eq. (30). C Real saddle points In this section we present the SPA evaluation of a matrix element with For convenience we denote ∆ = −∆ > 0. In this range of parameters the two saddle points are real. For simplicity suppose that 0 < x ≤ y < 1. A straightforward computation shows that 0 < u − < u + < 1 − y, hence both roots are in the integration interval. Using the results of Appendix B.1, the function evaluates at the two saddle points as which shows in particular that the maximum of f is u − (as −∂ 2 u f | u − < 0), and the SPA is dominated by the latter. In Figure (2) below we represent the function Ξ = Φ + xΨ − yΩ as a function of x and y, The prefactor writes, using Appendix B.3, hence we get the asymptotic estimate which indeed is suppressed for large J. The case ∆ = 0 is special. A straightforward calculation shows that under this circumstances Φ = Ψ = Ω = 0 . (C.40) Also, eq. (C.37) implies ∂ 2 u f | u ± = 0. One needs to push the Taylor development around the root to third order where a(x, y) is some non vanishing smooth real function (determined by K and f evaluated at u 0 , see [1]), Ai is the Airy function of the first kind and Ai its derivative. At large argument the Airy functions behave like The term Ai is therefore subleading and we have du K(u, x, y)e Jf ≈ e ıJ αx+γy− 2 3 (a(x,y)) D Computations for the 3j symbol In this appendix we detail at length the various computations required for the proof of Theorem 3. D.1 The first derivative To compute the derivative ∂ ξ 2 i s i (ıf i ), note that ∂ (ξ 2 ) ∆ i = −(2ξ 2 − 1 − x i y i ). The partial derivative of ıφ i is then while the derivative of ıψ i writes hence eq. (D.47) writes The derivative of i s i (ıf i ) is then (D.51) D.2 The saddle point equation We will use in the sequel the short hand notation A B := A · B for all vectors A and B. Squaring twice the saddle point eq. (69) we obtain, for all signs s i , We first translate eq. (D.52) in terms of angular momentum vectors Collecting all terms on the LHS we get Eq. (D.56) rewrites Using S = J 1 ∧ J 2 , twice the oriented area of the triangle { J i }, the saddle point equation becomes and dividing by (1 − ξ 2 )ξ 2 we obtain The last line in eq. (D.60) computes and eq. (70) follows. D.3 Evaluation of J , eq. (D.62) becomes which further simplifies to and combining the last two terms this is hence for ξ 2 + we get Combining all the terms common to the RHS in eq. (D.63) and eq. 
(D.67), we get But note that S · J i = 0, hence the first term on the RHS above can be written as a double vector product, that is D.4 Second derivative Using J i ∆ + i from eq. (74) and ξ 2 + , we have canceling the appropriate cross terms, the remaining expression factors as developing the double vector products and taking into account that n·( n∧ k) = k·( n∧ k) = 0, we conclude We substitute again in the equation above J 3 = − J 1 − J 2 . The coefficient of 1 4S 2 computes, canceling the appropriate cross terms, The RHS of eq. (D.75) becomes which rewrites, combining the appropriate terms into double vector products as and recalling that J 1 ∧ J 2 = S the above equation becomes We develop the double vector products in the first line and take into account n · ( S ∧ n) = k · ( S∧ k) = 0. For the second line we use ( S∧ A) 2 = S 2 A 2 −( S· A) 2 and S·( S∧ n) = S·( S∧ k) = 0 to rewrite the equation as and noting that the cross term in the first scalar product cancel (again as S · ( S ∧ n) = S · ( S ∧ k) = 0), and combining the remaining three terms we get Factoring S in the second term and using A 2 B 2 = ( A · B) 2 + ( A ∧ B) 2 this rewrites as (D.84) D.5 Function at the saddle points We evaluate the relevant angles at the points ξ 2 ± by substituting eq. (72) and eq. (74) into eq. (26). D.5.1 The angles φ ± i For the angles φ ± i the direct substitution yields Consider first the denominator of ıφ − i multiplied by S 2 , namely and developing the double vector products, taking into account S · J i = 0, we conclude We will denote in this section A ∧ B = A ∧ B Direct substitution of ξ 2 + and ξ 2 − yields To evaluate ı + 1 ψ + 1 − ı + 3 ψ + 3 we take apart the numerator J n∧( k∧ n) 1 Taking into account n · ( k ∧ n) = 0 this writes −( k ∧ n) 2 J n 1 J n 3 + J 1 · J 3 ( k ∧ n) 2 + ı J 1 · J 3 ∧ ( n ∧ k)∧ n ∧ ( k ∧ n) = ( k ∧ n) 2 ( n ∧ J 1 ) · ( n ∧ J 3 ) − ı J 1 · J 3 ∧ n( n ∧ k) 2 , (D.92) hence ı + 1 ψ + 1 − ı + 3 ψ + 3 = ln Developing the products in the first line we get −J k∧ n E Boundary terms in the Euler Maclaurin formula Using the short hand notation F (t) for F (J, M, M , t), the remainder terms in the EM formula write In this section we deal with generic Wigner matrices, that is we consistently assume that 0 < ξ 2 < 1. Note that (E.114) The behavior of the higher derivative terms in the EM formula is governed by F (k) (t min ) and F (k) (t max ). To see this, collect all factors depending on t in F (t) and write hence |F (t)| < C ln J |F (t)| for some constant C. Higher order derivatives of eq. (E.116) write in terms of higher order polygamma functions ψ (n) = d n ψ (0) /dt n . For all k, ψ (2k) (X) ≤ ψ (0) (X) at large X, therefore the k'th derivative is dominated by Then |F (k) | < C(ln J) k |F (t)| for some constant C. From eq. (E.112) we conclude that both |F (t min )| and |F (t max )|, as well as all their derivatives are a priori exponentially suppressed in the region where f (x, y, u min ) < 0 and f (x, y, u max ) < 0. As hence there exists some interval in which, term by term, the derivative terms are bounded from below by an exponential blow up. In this region our EM SPA approximation should a priori fail (see also figure 3). A second set of EM derivative terms come when passing from eq. (36) to eq. (37), involving derivatives ∂ n (∂x) n D J xJ,xJ | x=±1 . Using Appendix C, eq. (C.39) we note that all these derivatives yield some function times D J xJ,xJ . As
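For readers who want a quick numerical cross-check of the claim, highlighted in the conclusion, that the stationary-phase result for the SU(2) character is exact, the closed form χ_j(θ) = sin((2j+1)θ/2)/sin(θ/2) (the standard Weyl character formula for a rotation by angle θ) can be compared against the defining trace Σ_{m=-j}^{j} e^{imθ}. The short Python sketch below is illustrative only and is not part of the paper's derivation; the choice of θ and of the spins j is arbitrary.

```python
import numpy as np

def character_direct(j, theta):
    """chi_j(theta) as the trace of the spin-j rotation: sum over m of exp(i m theta)."""
    ms = np.arange(-j, j + 1)                  # m = -j, -j+1, ..., +j
    return np.sum(np.exp(1j * ms * theta)).real

def character_closed(j, theta):
    """Weyl character formula: sin((2j+1) theta / 2) / sin(theta / 2)."""
    return np.sin((2 * j + 1) * theta / 2) / np.sin(theta / 2)

theta = 0.7                                    # rotation angle of the group element
for j in (0.5, 1, 5, 20, 100):                 # large j mimics the large-spin regime studied here
    print(j, character_direct(j, theta), character_closed(j, theta))
```

The two evaluations agree to machine precision for integer and half-integer spins alike, which is the exact result that the saddle point computation is said to reproduce.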
2010-09-28T16:57:59.000Z
2010-09-28T00:00:00.000
{ "year": 2010, "sha1": "abb16bab219bdc8e5a56635fbacffe7cda12730e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1009.5632", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "abb16bab219bdc8e5a56635fbacffe7cda12730e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
8654434
pes2o/s2orc
v3-fos-license
Imaging outcome measures for progressive multiple sclerosis trials Imaging markers that are reliable, reproducible and sensitive to neurodegenerative changes in progressive multiple sclerosis (MS) can enhance the development of new medications with a neuroprotective mode-of-action. Accordingly, in recent years, a considerable number of imaging biomarkers have been included in phase 2 and 3 clinical trials in primary and secondary progressive MS. Brain lesion count and volume are markers of inflammation and demyelination and are important outcomes even in progressive MS trials. Brain and, more recently, spinal cord atrophy are gaining relevance, considering their strong association with disability accrual; ongoing improvements in analysis methods will enhance their applicability in clinical trials, especially for cord atrophy. Advanced magnetic resonance imaging (MRI) techniques (e.g. magnetization transfer ratio (MTR), diffusion tensor imaging (DTI), spectroscopy) have been included in few trials so far and hold promise for the future, as they can reflect specific pathological changes targeted by neuroprotective treatments. Position emission tomography (PET) and optical coherence tomography have yet to be included. Applications, limitations and future perspectives of these techniques in clinical trials in progressive MS are discussed, with emphasis on measurement sensitivity, reliability and sample size calculation. Introduction During the last 20 years, over a dozen of disease-modifying treatments (DMTs) received the approval for the treatment of relapsing-remitting multiple sclerosis (RRMS), being facilitated by screening the antiinflammatory activity of putative treatments using active magnetic resonance imaging (MRI) lesions as outcomes in phase 2 trials. On the contrary, the paucity of active medications for both primary progressive multiple sclerosis (PPMS) and secondary progressive multiple sclerosis (SPMS) is striking. In view of this, the Progressive MS Alliance recently suggested to develop and to validate biomarkers of progression that could make clinical trials for progressive multiple sclerosis (MS) less time-and resource-consuming, when compared with conventional clinical measures. 1 This could be achieved with the identification of reliable, reproducible and sensitive-to-change imaging outcomes. Several MRI measures reflect the neurodegenerative pathology of progressive MS and hold promise for clinical trial applications in this population. Along with the use of conventional MRI metrics (e.g. brain volume, lesion count and volume), advanced MRI techniques and optical coherence tomography (OCT) are also emerging as candidate imaging outcomes of MS progression. Indeed, the number of imaging outcomes included in clinical trials for progressive MS has almost doubled from 2.3 ± 1.5, in the decade 1996-2006, to 4.1 ± 2.6 in most recent years (2007 to current) ( Figure 1). In this review, we will discuss imaging biomarkers, which have been included in phase 2 and 3 clinical trials for progressive MS and those emerging for the future. Methodological and statistical drawbacks will be also discussed. screen for early disease activity in phase 2 clinical trials in RRMS. 2 On the contrary lesion-derived measures play a secondary -but not negligible -role in the study of progressive MS. In PPMS, the burden of T2-visible lesion load and of gadolinium-enhancing lesions is low, despite clinical severity 3 and seems to have only a minimal impact on the disability accrual over time. 
4 MRI measures of focal brain lesions are the most common imaging metric in phase 2 and 3 clinical trials in progressive MS. Future clinical trials on progressive MS might include these outcomes if the presence of inflammation is expected and targeted. Indeed, trials might pick populations with relatively high inflammatory activity, depending on inclusion/exclusion criteria (e.g. 24.7%-27.5% of PPMS patients presented with Gd-enhancing lesions at baseline visit of the ORATORIO trial). 7 However, clinical outcomes might be difficult to predict based on results on lesion measures. Indeed, the use of DMTs specifically designed for RRMS in clinical trials in progressive MS can result in a positive effect on lesion count and volume measures but not on neurodegenerative clinical (e.g. disability progression) and imaging outcomes (e.g. brain and spinal cord (SC) atrophy), as occurred in the INFORMS trial. 5,31 Similarly, use of interferon-beta in SPMS was associated with fewer active lesions, but no effect was established on clinical disability. 32 Global and regional brain atrophy Brain atrophy is detectable on MRI scans from the earliest clinical stages of MS and is a biomarker of irreversible neurodegenerative processes. 33 Global brain atrophy has been associated with the degree of disability in large cohorts of both RR and progressive MS. 34,35 Besides, improvements in MRI post-processing have allowed to segment white matter (WM) and grey matter (GM) (both cortical and deep) separately, allowing refinement of association with clinical features. 36,37 Regional volumes might show a greater change over time, 12 resulting in higher sensitivity and smaller sample size when compared with global measures. 38 Intriguingly, brain atrophy has not been associated with relapse risk in RRMS, suggesting that atrophy is probably driven more by (possibly independent) neurodegenerative changes than inflammatory lesions, which further support the use of this measure in progressive MS. 33 There are several methods to quantify whole-brain atrophy. In general, brain tissue volume needs to be normalized for head size, and longitudinal changes can be detected using registration-and segmentationbased techniques. Registration-based methods compare longitudinally acquired images and measure changes in brain surface; structural image evaluation using normalization of atrophy (SIENA) is the most popular example. Segmentation-based techniques measure brain volume on a single scan and then determine change over time indirectly and include brain parenchymal fraction (BPF) (which is the ratio of brain parenchymal volume to the total volume within the brain surface contour). 33,39 In comparative analyses, brain atrophy measured with registration-based techniques showed better reproducibility 40 and higher power to detect treatment effect, when compared with segmentation-based. 41,42 Whole-brain atrophy has been included in several phase 2 and 3 clinical trials in progressive MS as primary 8,9,12,15,[43][44][45][46] or secondary outcome (Table 1). [5][6][7]11,13,14,[16][17][18][21][22][23][24]26,30,[50][51][52] The first trial demonstrating a beneficial effect on global brain atrophy (using simvastatin) was a phase 2 trial study in SPMS. 8 Positive results have been reported also in the phase 3 ORATORIO study in PPMS. 7 A number of ongoing trials are measuring global brain atrophy, and their results should become available over the next several years (Table 2). 
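To make the segmentation-based quantities described above concrete, a minimal sketch is given below: BPF as the ratio of parenchymal volume to the total volume within the brain surface contour, and an annualised percentage brain volume change obtained indirectly from two time points. The function names and the numerical volumes are illustrative assumptions, not values or code from any of the cited trials or software packages.

```python
def brain_parenchymal_fraction(parenchyma_ml, intracranial_ml):
    """BPF: brain parenchymal volume divided by the total volume
    within the brain surface contour (parenchyma plus CSF)."""
    return parenchyma_ml / intracranial_ml

def annualised_pbvc(volume_baseline_ml, volume_followup_ml, years):
    """Percentage brain volume change per year, derived indirectly
    from two segmentation-based volume measurements."""
    pbvc = 100.0 * (volume_followup_ml - volume_baseline_ml) / volume_baseline_ml
    return pbvc / years

# Illustrative numbers only
bpf = brain_parenchymal_fraction(1150.0, 1500.0)
rate = annualised_pbvc(1150.0, 1143.0, 1.0)   # about -0.6 %/year, a plausible order of magnitude in MS
print(f"BPF = {bpf:.3f}, atrophy rate = {rate:.2f} %/year")
```

Registration-based methods such as SIENA instead estimate the percentage change directly from the co-registered image pair, which is one reason for their better reproducibility.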
Regional brain atrophy has been used as secondary outcome in a few trials, where measures were obtained from GM, WM, 12 putamen, thalamus and optic nerve. 14 Considering that MS does not affect the brain uniformly, the detection of regional pathology might be predictive of more specific clinical features, when compared with whole-brain measures. 53 However, standardization of software for analysis is needed to make widespread application in clinical trials possible. 54 Overall, measures of global and regional brain atrophy are gaining relevance in clinical trials on progressive MS, reflective of improvements in measurement techniques allowing good reproducibility and sensitivity to change. Nevertheless, there are several possible limitations, including changes in magnet, gradients, coils, distortion corrections and image-contrast changes. Patients treated with anti-inflammatory treatments have a slight decrease in the brain volume in the first 6-12 months (pseudoatrophy), compared with placebo, due to the resolution of inflammation and oedema. 55,56 A possible solution is to re-baseline subjects after 6 months, 57,58 although longer periods may be required for more toxic types of treatment (e.g. chemotherapy during bone marrow transplantation). 59 However, rebaselining carries the risk of losing power because of reduced time of observation on treatment. In the OPERA II trial (one of the two phase 3 trials for ocrelizumab in RRMS), statistical significance in brain volume change was lost when analysing data from week 24 to 96, instead of baseline to week 96. 60 A reversible fluctuation of brain volumes can also occur because of variations in dehydration status. 61 Advanced MRI techniques Conventional neuroimaging techniques lack specificity with regard to different pathophysiological substrates of MS and are not able to explain the heterogeneous and long-term clinical evolution of the disease. 58,[62][63][64] Advanced MRI techniques, such as magnetization transfer ratio (MTR), diffusion tensor imaging (DTI) and magnetic resonance spectroscopy (MRS), may provide higher pathological specificity for the more destructive aspects of the disease (i.e. demyelination and neuroaxonal loss) and be more closely associated with clinical correlates. 55,65 Moreover, functional magnetic resonance imaging (fMRI) is contributing to the definition of the role of cortical reorganization after MS tissue damage. 37 MTR values reflect the efficiency of the magnetization exchange between mobile protons in tissue water and those bound to the macromolecules, such as myelin. MTR has been associated with disease progression in PPMS. 35,66 In view of this, MTR has been included in several clinical trials in progressive MS and has been measured in GM, WM, T2 lesions, putamen, thalamus and optic nerve. [13][14][15]18 DTI measures brain tissue microstructure by the exploitation of the properties of water diffusion. From the tensor, it is possible to calculate the magnitude of diffusion, reflected by mean diffusivity (MD), and diffusion anisotropy, which is a measure of tissue organization, generally expressed as fractional anisotropy (FA). In line with this, MD is increased and FA is decreased in T2 lesions, WM and GM from MS patients. 67,68 DTI has been assessed across multiple scanners/platforms and is suitable for multi-centre studies. 
69,70 DTI is the most frequently used advanced MRI metric in phase 2 and 3 clinical trials in progressive MS, and is able to detect significant variations in brain microstructure during typical trial duration. 18 MD and FA have been measured in pyramidal tracts, WM, GM and lesions in different trials in progressive MS. 13,15,16,18,24 More specific measures such as axial and radial diffusivity can be calculated as measures of the mobility of water along and perpendicular to axons (reflecting axonal density and demyelination, respectively); 65 however, they have not been included in clinical trials in progressive MS so far. fMRI provides signal related to brain activation based on blood oxygen consumption and blood flow in the brain and has only been included in a single clinical trial on progressive MS. 24 MRS can measure brain levels of several metabolites. 71,72 The most commonly measured is total N-acetyl-asparate (NAA), a marker of axonal loss and metabolic dysfunction. 73 NAA has been included in a few clinical trials in RRMS 74 and one in PPMS. 75 Spinal cord atrophy SC atrophy is a common and clinically relevant aspect of progressive MS. A reduction in the cross-sectional area (CSA) of the SC over time is thought to reflect the development of atrophy (i.e. demyelination and neuronal/axonal loss). 76,77 In clinically definite MS, the rate of cord atrophy has been reported to vary between 1% and 5% per year. [77][78][79][80][81] Higher rates were found in progressive patients. 77,82 Development of cord atrophy is considered to be one of the main substrates of disability accumulation. It can account for 77% of disability progression after 5 years. [83][84][85] A few clinical trials in progressive MS have included SC atrophy as outcome measure (Table 1). 12,13,24,31,[47][48][49] Its more widespread use has been hampered by challenges to obtain high reproducibility and responsiveness to changes when measuring such a small structure. Small absolute changes in SC area are difficult to detect in a multi-centre setting, where there may be a great variability of imaging protocols and scanners. 40 The acquisition of high-quality SC MRI can be affected by artefacts (e.g. breathing, pulsation of blood and cerebrospinal fluid (CSF)), and this may limit the precision of SC atrophy measurements. As a consequence, sample size estimates obtained for current measurement techniques are fairly large and generally prohibitive, when compared with brain atrophy. Development of registration-based techniques to measure SC atrophy may address this concern as will be discussed below. 86 Position emission tomography Position emission tomography (PET) is a quantitative imaging technique, which investigates cellular and molecular processes in vivo using positron-emitting molecules, ideally binding a selective target. 72,87,88 As MS is a complex and multifactorial disease, various radioligands have been tested. Amyloid tracers, measuring myelin loss and repair, and 11 C-flumazenil, reflecting neuronal integrity, might be of interest for clinical trials on neuroprotective compounds. 63,[88][89][90][91] To date, no large MS clinical trials have included PET, reflecting its invasive nature and high costs. In the future, the development of standardized and lessexpensive procedures might represent a trigger for the application of this technique in small phase 1 and 2a clinical trials. 
6 OCT OCT is a non-invasive method to obtain high spatial resolution images of the retina, measuring retinal nerve fibre layer (RNFL) thickness and macular volume. RNFL is thinner in patients with MS than in healthy controls, even in patients with MS who have not experienced episodes of optic neuritis. 92 Therefore, OCT measures a more diffuse pathological process which better corresponds to overall central nervous system damage. 93,94 RNFL and macular volume have been included in a few clinical trials on progressive MS (Table 1), [14][15][16]24 so far without demonstrable neuroprotective effects. OCT is a fast, non-invasive, easy-to-use imaging method producing quantitative measures reliably, with great potential in MS for testing neuroprotective strategies over a short time frame. 33 Like brain volume, RNFL is sensitive to biological variations. However, there is the need for high-quality acquisitions and appropriate image processing, performed by trained examiners following specific consensus criteria. Measurement sensitivity Quantitative MRI measures are strongly dependent not only on acquisition parameters but also on processing methods, presenting with different sensitivity to change, reproducibility and measurement error. Clinical trials results can be affected by the analysis. 41 For instance, in a clinical trial of teriflunomide in RRMS, changes were measured by BPF, a segmentation-based technique, and no significant effect was found initially. 95 However, in a post hoc analysis, the use of a registration-based automated technique (SIENA) revealed that teriflunomide was associated with significant reductions in brain volume loss. 96 Similarly, the conventional way of estimating SC atrophy is using segmentation-based methods, such as the cervical cord CSA 97,98 and the upper cervical cord area (UCCA), 99 that are measured at each time point with subsequent calculation of percentage change between time points. More recently, GBSI (generalized boundary shift integral) has been suggested as a novel registration-based method to estimate cervical SC atrophy directly between scans, possibly reducing sample sizes. 86 The number of observations over time can increase sensitivity to change. However, at least for brain atrophy, the effect of increasing the number of observations is modest, when compared with the effect of increasing the duration of follow-up. 100 Sample size Sample size calculation is a pivotal aspect of planning clinical trials and is based on the primary outcome of the study, generally being imaging for phase 2 and clinical for phase 3 trials. 101,102 Imaging outcome measures are often included as secondary or exploratory variables in all patients in phase 3 clinical trials, even though they might require a smaller sample size to detect significant difference. A caveat though is that the size of the treatment effect may differ between clinical and imaging outcomes; that is, 30% reduction in rate of brain volume may not equate in 30% reduction in disability progression. In order to further explore this issue, we estimated the treatment effect on brain atrophy which could have been detected in populations recruited in phase 3 clinical trials in progressive MS (Table 2), based on the actual sample size and the measured rates of brain atrophy in the placebo arm (we accepted a power of 80% and the α error was set at 0.05). 
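As a rough illustration of the power calculation just described (80% power, two-sided α of 0.05), the sketch below applies the standard normal-approximation formula for a two-arm comparison of mean annualised atrophy rates. The placebo-arm rate and standard deviation are placeholder values, not figures taken from the trials in Table 2.

```python
from scipy.stats import norm

def n_per_arm(effect_fraction, placebo_rate, sd, alpha=0.05, power=0.80):
    """Patients per arm needed to detect a relative treatment effect on the mean
    annualised atrophy rate (two-sample normal approximation, equal allocation)."""
    delta = abs(effect_fraction * placebo_rate)          # absolute between-arm difference
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

# Placeholder inputs: 0.6 %/year brain volume loss in the placebo arm, SD 0.6 %/year
for effect in (0.15, 0.30):
    print(f"{effect:.0%} treatment effect -> ~{n_per_arm(effect, -0.6, 0.6):.0f} patients per arm")
```

Halving the detectable effect quadruples the required sample, which is why the measurement precision entering the standard deviation has such a direct impact on trial feasibility.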
Most recent studies would have been able to detect 15%-30% treatment effects on brain atrophy, 5,7,26 in line with actual detected statistical effect (17.5% relative difference in the ORATORIO trial). Inclusion criteria can also impact sample size. For instance, in RRMS populations, the rate of inflammatory activity is high, and measures of inflammatory activity (new or enlarging T2 lesions, new T1 lesions and Gd-enhancing lesions) can lead towards relatively lower sample sizes, compared with progressive MS. 103,104 For instance, when considering the number of enhancing lesions, the detection of 50% treatment effect for interferon-beta treatment requires about 120 patients per arm in RRMS trials and a threefold number in SPMS. 105 By contrast, use of imaging markers more specific for progressive features (e.g. brain atrophy) will reduce the sample size needed in clinical trials in PPMS and SPMS. Advanced MRI techniques, such as MTR, might also require smaller sample sizes, 106,107 in particular when trials with neuroprotective agents are conducted in selected populations. Sample size can be affected also by variability of imaging outcomes. For instance, BPF measurement can have up to 0.00283% variance due to patient repositioning, physiological variations and inflammatory lesion occurrence. 108 Measurement precision can affect the standard deviation of the measure which is a major determinant of the sample size. 42 Increasing the number of scans performed in clinical trials and improving imaging analysis techniques (e.g. registration vs segmentation) can reduce these sources of variability and, accordingly, sample size. Overall, DMTs can have a specific effect on each MRI outcome and thus the sample size should be estimated depending on the expected efficacy profile in the selected population. As such, MRI may be particularly useful in early-phase clinical studies on novel therapeutic agents, where drugs can be easily screened before they are taken forward to larger scale studies, 109 as is common practice for anti-inflammatory drugs in phase 2 RRMS studies. Conclusion and future perspectives Progressive MS represents a unique opportunity for studying imaging markers of neurodegeneration, with equal bearing on relapsing forms of the disease. Several imaging candidates hold promise for filling the unmet need of biomarkers in progressive MS, by capturing the effect on neurodegeneration, although inflammatory markers remain important in this stage of the disease. Brain volume loss is the best examined and most robust outcome with attainable sample sizes and first positive results, though treatment effects tend to be more modest than those seen for inflammatory MRI markers. Brain volume is already being applied as primary outcome measure in phase 2 trials and as secondary exploratory measure in phase 3 trials in progressive MS. Results from these trials will help establish the importance of brain atrophy in tracking MS progression. 46 SC MRI holds great promise for future trials due to higher rates of atrophy and better sensitivity to change compared with brain volume changes. However, robust application in clinical trials requires implementation of techniques with lower measurement noise, such as registration-based methods; in part, these can be validated using historical data sets. 
Advanced MRI measures (such as MTR, DTI and fMRI), due to their greater specificity, might shed light on mechanisms of action of new medications and should be included when clinical trials aim at exploring drug potentials for neuroprotection and tissue repair. In clinical trial design, the inclusion/exclusion of patients with specific MRI characteristics might help in identifying groups who are more likely to respond to a given medication and, so, in further reducing the sample size needed. Progressive Multiple Sclerosis', for the preparation of the present manuscript. This review is part of a special issue derived from the 5th Focused ECTRIMS Workshop, 'Advancing Trial Design in Progressive Multiple Sclerosis', held in Rome, Italy, on 9-10 March 2017. The authors acknowledge the contributions of workshop attendees. F.B. is supported by the NIHR Biomedical Research Centre at UCLH. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article.
2018-04-03T00:00:36.746Z
2017-10-01T00:00:00.000
{ "year": 2017, "sha1": "55af227cc1991fdb342c62efa553c944ec6447c6", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1352458517729456", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "55af227cc1991fdb342c62efa553c944ec6447c6", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226345791
pes2o/s2orc
v3-fos-license
E ff ects of pH and Fineness of Phosphogypsum on Mechanical Performance of Cement–Phosphogypsum-Stabilized Soil and Classification for Road-Used Phosphogypsum : This article investigates the e ff ects of phosphogypsum (PG) pH and particle fineness on the mechanical properties of cement–PG-stabilized soil. Using solutions of calcium hydroxide (Ca(OH) 2 ) and sulfate (H 2 SO 4 ) to adjust pH value of PG from 2 to 8. The key pore size used to characterize PG fineness was determined to be 200 µ m based on the Grey relational analysis (GRA), and the fineness of PG was controlled from 12.31% to 56.32% by grinding di ff erent time. Cement–PG cementitious materials (CPCM) and cement–PG-stabilized soil with di ff erent mixture ratios were formed at an optimum moisture content; following this, the unconfined compressive strength and California bearing ratio values of the samples were tested. Results show that the increased pH or the decreased fineness leads to continuous increases in the unconfined compressive strength of CPCM and cement-PG stabilized soil as well as the CBR value of cement–PG-stabilized soil. However, once PG pH value exceeded 5 or fineness was less than 20%, the mechanical properties of cement–PG-stabilized soil remained stable. A classification standard for road usage PG was established based on the analyses regarding cement-PG stabilized soil’s mechanical properties, which has great significance of selecting or disposing road-used PG. Introduction To ensure sufficient subgrade strength and bearing capacity of pavement, original subgrade soils with poor performance are sometimes subjected to certain technical treatments during road construction. Chemical stabilization is a very common disposal measure of technical treatments, it involves the modification of soil properties to improve engineering performance. Ghazi M found that cement can improve soil properties of Pb-contaminated soil and using JET device can consume testing time and conserving energy. Researchers also proposed formulas to mathematically predict the influence of different stabilizer on the mechanistic erodibility parameters [1,2]. The two most commonly-used chemical stabilization methods are lime stabilization and cement stabilization. However, the process of production of lime has great impact on environment, the emitted particles and gas produced during the process will seriously pollute the atmosphere. Additionally, cement production, which leads to various forms of pollution, has increased rapidly over the past decade, and deposition practices can seriously affect ambient aerosol levels [3,4]. Therefore, considering environmental benefits, it is imperative to find alternatives for traditional inorganic The cement type used was 42.5-grade ordinary Portland cement, which aligns with the guidelines in Test Methods of Cement and Concrete for Highway Engineering [27]. The test results of the cement's technical properties are presented in Table 2. Loess is yellow powdery soil with columnar formed in dry climate, it was obtained from a local field in Xi'an, and its basic physical and chemical properties, based on the requirements of JTG E40-2007 [28], are shown in Tables 3 and 4. 
W L is liquid limit that refers to the limit moisture content between the plastic state and the fluid state of the cohesive soil, i.e., the upper limit moisture content of the cohesive soil in the plastic state; the moisture content of the soil is too high and the soil can even flow like a liquid when the moisture content of the soil is larger than the liquid limit. W P is plastic limit that refers to the limit moisture content between the plastic state and the semi-solid state of the cohesive soil, i.e., the lower limit moisture content of the cohesive soil in the plastic state; the moisture content of the soil is greatly low, and the soil changes from the plastic state to the semi-solid state and loses its plasticity when the moisture content of the soil is less than the plastic limit. The meaning of I P is as follows: The plasticity index is an important index characterizing the mechanical and deformation properties of the soil; the value of plasticity index is equal to the difference between liquid limit and plastic limit, i.e., I P = W L − W P . PG that derived from different sources has significant differences in its pH value. To study the effect of pH on cement-PG-stabilized materials, it is vital to define pH value of PG. This is accomplished by mixing PG and water in a 1:10 mass ratio, stirring uniformly, and then letting the mixture stand for 10 min before testing. PG pH refers to the pH value of the supernatant liquid of solution. To explore the pH value's effect on the strength of cement-PG-stabilized materials, Ca(OH) 2 and H 2 SO 4 were used to adjust the PG solution's basicity and acidity. After adjusting the pH values, different pH-value PG solutions were dried in an oven at 40 • C until the moisture evaporated, then PG with different pH value are prepared and available for follow-up test. Fineness refers to the size of particle diameter. There are two common methods to characterize the material's fineness: The specific surface area method and the use of residue of certain size of a mesh. Because of its complicated testing process, the specific surface area method is limited in practical experiments. The second method used relatively more widely because it is easier to conduct and effectively reflects the surface area of material particles. Thus, this paper used this method to determine the fineness of PG. Note that when using the method, it is significant to confirm the key sieve pore size, which can be determined from the results of Grey relational analysis (GRA). Grey Relational Analysis The basic principle of GRA is to consider a microscopic or macroscopic approach between different factors to analyze or determine the degree of influence between each influence factor. The calculation progress of GRA is as follows [29]: (1) Set reference sequence {Y 0 (k)} and comparison sequences {Y i (k)}. (3) Sequences formed by absolute differences between {X 0 (k)} and {X i (k)} are signed as {∆ i (k)}, ∆ i (k) = |X 0 (k) − X i (k)|. The correlation coefficient sequence {ξ i (k)} can be calculated using Equation (2), where ∆min indicates the minimum element of sequences {∆ i (k)} and ∆ max indicates the maximum element of sequences {∆ i (k)}. The distinguishing coefficient ρ in the formula is generally set as 0.5, and thus, ρ was assigned a value of 0.5 in this paper. 
(4) The Grey correlation degree γ i is expressed as: (5) Because the data were subjected to absolute value processing when calculating the sequences ∆ i (k), relational polarity of the Grey correlation degree is difficult to confirm. Thus, relational polarity is determined using Equations (4) and (5). When sgn(Q i /Q k ) equals sgn(Q 0 /Q k ), the relational polarity is positive. For the opposite, relational polarity is negative. Unconfined Compressive Strength Cementitious material and stabilized soil were used to test the unconfined compressive strength (UCS) based on JTG E51-2009 [30]. Specimens were formed in cylindrical shape with dimensions of 50 mm diameter × 50 mm height. The loading rate was maintained at 1 mm/min, and the values of failure load (P) and sectional area (A) were documented to calculate the UCS. California Bearing Ratio The California bearing ratio (CBR) values of cement-stabilized soil were obtained by determining JTG E40-2007 [28].The specimen was soaked in water for four days and four nights, after allowing the specimen to stand and drain for 15 min, it was put in the pavement material strength tester (Beijing Haiwei Traffic Instrument Co. LTD, Beijing, China) and loaded at a rate of 1-1.25 mm/min, and the pressure (p, kPa) and penetration (l, mm) values were documented. The CBR value was thus calculated based on the p-l curve and Equation (6). P is the pressure value corresponding to the l 0 (l 0 = 2.5 mm). Mix Proportions To study the effects of pH and fineness on cement-PG soil stabilization, the pH value was set to 2-8 using the above methods, and the grinding time of the PG was set at 0, 5, 10, 15, 20, and 25 min to gain different fineness. Three mix proportions of cementitious material and stabilized soil were designed to be tested by considering the precision of the results. It is favorable to understand the basic mechanical properties of PG with different pH value or fineness. The mix compositions of cementitious material and stabilized soil with mix numbers are given in Table 5. CPCM refers to cement-PG cementitious material, and it is formed by PG, cement and water. The results of unconfined compressive strength test or CBR test of CPCM can directly reflect the mechanical properties of cement and PG. The comparison of CPCM and cement-PG-stabilized soil indicates that soil is added into the cement-PG-stabilized soil. Cement-PG-stabilized soil means that cement, PG, water and soil are mixed in a certain proportion. Grey Relational Analysis Taking a cement: PG = 1:2 group CPCM as an example to describe the GRA calculation process, PG was ground for k min (k = 1, 2, 3, 4, and 5, corresponding to grinding times of 5, 10, 15, 20, and 25 min), then cured specimens of cementitious material in a humidity chamber at 20 ± 2 • C and 95% humidity until tested. The elements of the reference sequence Tables 6 and 7, respectively, and Table 8 shows correlation degree between PG particle size and UCS. To ensure precision of GRA results, two proportion types and different curing days of CPCM groups were tested. The correlation degree between the particle size and UCS of the CPCM mixed with a different ratio was as follows, as shown in Table 9, based on the same method that mentioned previously. Table 9. Correlation degree between PG grades and unconfined compressive strength of cement-PG cementitious materials. Proportion Curing The correlation degree represents the relation between the comparison sequences and the reference sequence. 
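For concreteness, the Grey relational computation outlined in steps (1) to (5) can be sketched as follows. The normalisation of the raw sequences in step (2) is not fully reproduced in the text, so normalisation by the sequence mean is assumed here, and the numerical sequences are illustrative rather than the measured UCS or sieve-residue data.

```python
import numpy as np

def grey_relational_degrees(reference, comparisons, rho=0.5):
    """Deng-style Grey correlation degree of each comparison sequence with respect
    to the reference, following the correlation-coefficient and correlation-degree
    definitions above. Raw sequences are normalised by their own mean (an assumption)."""
    x0 = np.asarray(reference, dtype=float)
    x0 = x0 / x0.mean()
    xi = [np.asarray(c, dtype=float) / np.mean(c) for c in comparisons]
    deltas = [np.abs(x0 - x) for x in xi]            # Delta_i(k) = |X0(k) - Xi(k)|
    dmin = min(d.min() for d in deltas)              # global minimum over all i, k
    dmax = max(d.max() for d in deltas)              # global maximum over all i, k
    return [float(np.mean((dmin + rho * dmax) / (d + rho * dmax))) for d in deltas]

# Illustrative only: UCS at five grinding times vs. residues on two sieve sizes
ucs       = [4.1, 4.8, 5.6, 6.3, 6.5]          # MPa
residue_a = [30.0, 26.0, 22.0, 18.0, 16.0]     # % retained on a finer sieve
residue_b = [12.0, 14.0, 17.0, 21.0, 23.0]     # % retained on a coarser sieve
print(grey_relational_degrees(ucs, [residue_a, residue_b]))
```

Relational polarity, that is whether a given sieve fraction promotes or inhibits strength formation, is then attached separately by comparing the signs of the trends, as in the polarity rule described above.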
The larger the absolute correlation degree value, the more significant is the relation between the comparison sequences and the reference sequence. Additionally, a positive relational polarity means that the relation is facilitation, whereas a negative polarity indicates an inhibiting effect. The key sieve pore size can thus be confirmed based on the GRA results, and then PG particle fineness can be obtained based on the key sieve pore, which is favorable for exploring the effect of PG particle fineness on cement-PG-stabilized material performance. The results of correlation degree between PG particle size and UCS of cement-PG cementitious materials show that different PG particle diameters seriously affect the UCS of cementitious materials. It can be seen that the relational polarities between particles smaller than 200 µm and UCS were positive in each group, indicating that PG particles smaller than 200 µm can promote strength formation of cementitious materials. Additionally, the increase in the absolute value of the correlation degree illustrates that this promotion of strength was enhanced with increasing particle size. However, relational polarities between particles larger than 200 µm and UCS were negative in each group, indicating that the formation of UCS was inhibited by particles larger than 200 µm and this inhibition effect was enhanced with decreasing particle diameter. This can be contributed to the change of specific surface area and impurities of PG caused by the changing fineness. The details can be found in Sections 3.2 and 3.3 In conclusion, 200-µm PG particles have the most significant influence on the UCS of cementitious materials. Therefore, the key sieve pore size was determined as 200 µm based on GRA, and the fineness of PG refers to the summation of all sieve residues having screen aperture less than 200 µm. Table 10 shows the fineness of PG at different grinding times based on the GRA results. It can be seen that there are large fineness differences under different grinding times, and that the fineness of PG fell sharply with an increase in the grinding time. Figure 1 shows the 7-days unconfined compressive strength of cement-PG cementitious materials and cement-PG stabilized soil at different pH values, respectively. The group mixed with un-treatment PG was regard as control group, the pH value of it is about 3. All the six group specimens showed similar trends: When pH was less than 3, the UCS of the materials increased slowly; when the pH was increased from 3 to 5, the UCS in each group increased significantly with pH; and when the pH value was greater than 5, the increase rate of UCS decreased rapidly with increased pH and the UCS finally reached stabilization. Taking the M1 group cementitious material as an example, the specimen's UCS increased from 4.38 to 4.56 MPa as the PG pH value increased from 2 to 3, with an increment of 0.18 MPa. When pH was increased from 3 to 5, the UCS increased from 4.56 to 6.68 MPa, and the increment increased sharply from 0.18 to 2.12 MPa. However, when the pH continued to increase to 8, the UCS slowly increased from 6.68 to 7.32 MPa, an increase of only 0.64 MPa, and ultimately reached a constant level. The average UCS of cement PG stabilized soil at pH value of 5 is as 1.3 times of control group. This indicates that pH value of PG can seriously affect the strength property of cement PG stabilized materials, when the pH value of PG reached 5, the best strength property of cement stabilized materials can be attained. 
UCS The above results are possibly attributed to the impurities in PG as different types and contents of impurities in PG can seriously affect cement-PG stabilized materials' property. Acid soluble phosphorus, eutectic phosphorus, organic matter, and fluoride are the main impurities affecting PG properties [31,32]. Among these impurities, P 2 O 5 has the most serious impact, the presence of P 2 O 5 can significantly decreases PG's activity. It is a soluble phosphorus impurity adsorbed on the surface of PG calcium sulfate dihydrate crystals, which coarsens the calcium sulfate dihydrate crystals and turns their original needle shape to rod-like or tabular shape. The P 2 O 5 content in PG reduced with pH increased. When the pH value was less than 3, P 2 O 5 content remained basically constant; thus, the UCS value increased slowly. When the PG pH value increased from 3 to 5, the P 2 O 5 content fell sharply, leading to a great increase in the UCS. As the pH continued to increase, the content of P 2 O 5 was neglectable and no longer reduce. Thus, it was difficult to impact PG properties significantly, which led to a constant UCS level of PG cement stabilized materials. The UCS values with different PG particle fineness are graphically presented in Figure 2. The results show a continuous increase in the UCS with a reduction in PG fineness of each specimen when fineness decreased from 56.23% to 19.79%. However, with a further decrease in fineness, the rate of increasing UCS decreased sharply, and the UCS essentially reached its peak and kept stability as PG fineness decreased to 19.79%. Thus, UCS of cement stabilized soil can be considered as stable when fineness reached around 20%, with an average value of 1.5 times of the control group (mixed with ungrained PG). The possible reasons for this can be contributed to that the smaller fineness of PG particles led to a larger specific surface area for PG particles. The reaction between calcium sulfate dihydrate crystals in PG and soil particles mainly occurs on the surface; therefore, the speed and extent of the reaction between calcium sulfate dihydrate crystals and soil particles and cement increased with decreased fineness, which caused an increase in the unconfined compressive strength of cement-PG-stabilized soil. But when the particle size of PG is too small, the reaction between water and cement are kinetically hindered, which go against to the formation of UCS, so the UCS kept stability when fineness decreased to 20%. CBR CBR value is a significant parameter to evaluate the strength ability of various road subgrade materials. Figure 3 illustrates the CBR value of cement-PG-stabilized soil as a function of the PG pH value. The CBR value of each specimen at different pH showed similar changing trend as UCS. Taking the M5 group as an example, as the pH changed from 2 to 3, the CBR value increased slowly with 4.1% from 280.2% to 284.3%. When the pH kept increasing from 3 to 5, the CBR value soared from 284.3% to 366.4% with an increase of 82.1%. However, when the pH value kept increasing and reached 8, the CBR value basically reached maximum values at 380%, showing increase of 13.1%. This could be related to the same explanation that was highlighted for unconfined compressive strength (Section 3.2), that the impurity content changed with pH value and the discrepancy of impurity content can differ greatly in mechanism property of stabilized soil. The CBR value of cement-PG-stabilized soil with different PG fineness are demonstrated in Figure 4. 
The CBR value of cement-PG stabilized soil increased with a reduction of PG fineness. The rate of increase of CBR value decreased sharply as PG fineness reached approximately 20%. Taking the M5 group as an example, when the PG fineness decreased from 56.23% to 19.79%, the CBR value increased 52.6% from 295.9% to 348.5%, with an average increase rate of 1.4. However, with a continuous decrease in fineness from 19.79% to 12.31%, the increase rate of CBR value is negligible with a figure of 0.17. It can be seen that the CBR of stabilized soil reached its peak at the fineness about 20%, averagely increased 1.4 folds compared to the control group. This could be the same reason mentioned in Section 3.2 for UCS. Additionally, Peng [32,33] found that there are differences in the content of impurities in different particle size ranges. Purities like soluble phosphorus, organic matter, and F − mainly exist in the surface of PG particles, so the content of purities reduced with decreased PG fineness, which is beneficial to the strength-forming process of cement-PG-stabilized soil. When PG fineness continued to decrease below 20%, the content of impurities basically remained constant, causing lack of variance in CBR value. Considering construction cost, to gain a better CBR value of cement PG stabilized soil, the fineness of PG should be around 20%. Establishment of Road Usage PG's Classification Standard Purities can seriously affect PG properties, but the content of it is difficultly detected in practical engineering, since it incurs high examination costs and complicated process; however, it can be seen that the mechanical properties have close relation of PG's pH and fineness from Sections 3.1-3.3. The CBR and UCS of the cement-PG-stabilized materials increased with a reduction in PG fineness or growth of PG pH, and the pH value and fineness of PG is tested far more simply and conveniently in comparison to the purity test. Thus, PG pH and fineness can be therefore used to assess the properties of PG used in cement-PG-stabilized materials as indicators. Adherent moisture content of an -grade phosphogypsum as building materials should not exceed 15% and its calcium sulphate dehydrate (CaSO 4 ·2H 2 O) content should exceed 90% in accordance with GB/T 23456-2018 [33]. Meanwhile, the mechanical properties of the cement-PG-stabilized materials basically reached stability when the PG fineness decreased to 20% or the pH increased to 5. It is means that PG met its demarcation point of performance since PG's pH reached to 5 or fineness decreased to 20%. Thus, classification standard of road usage PG was established based on the combination between mechanical analyses and above existing specification, as presented in Table 11. This proposition of classification standard of road-using PG has great significance on disposing and selecting PG used to stabilize subgrade soil. According with the standards, PG can be defined as the I grade PG when the following four conditions are satisfied simultaneously: pH value of PG is larger than 5, fineness of PG is less than 20%, the CaSO 4 ·2H 2 O content of PG is larger than 90% and adherent moisture content of PG is larger than 15%. The mechanical properties of cement-stabilized soil can be improved significantly when the I grade PG is used in the stabilization of subgrade soil. Conclusions An experimental study was carried out to study the effect of PG's pH and fineness on performances of PG-cement-stabilized soil. 
Additionally, a classification criterion for road-used PG was established based on the unconfined compressive strength and California bearing ratio tests of PG-cement-stabilized materials. The study resulted in the following conclusions; the key information from the application conclusions is also summarized in Table A1.
• A particle size of 200 µm can be used to determine PG fineness, based on the key sieve pore size obtained from GRA. PG particles smaller than 200 µm promote the development of the unconfined compressive strength of cement-PG cementitious materials, while those larger than 200 µm inhibit it. Additionally, the correlation is stronger the closer the particle size is to 200 µm.
• Both the CBR value and the unconfined compressive strength of CPCM and cement-PG-stabilized soil increased with PG pH value, and the rate of increase tended to stabilize once the pH value reached 5.
• With a decrease in PG fineness, both the CBR value and the UCS of CPCM and cement-PG-stabilized soil increased. However, once PG fineness was less than 20%, the mechanical properties of the cement-PG-stabilized material reached their peaks.
• A classification standard for road-used PG was established based on the indicators of CaSO4·2H2O content, pH, fineness, and adherent moisture content. The classification standard includes two grades (i.e., I and II), as shown in Table 11. Cement-stabilized soil mixed with grade I PG showed a 1.3-1.5-fold increase in mechanical properties compared with soil mixed with untreated PG.
Note that, to simplify the experimental process given the complexity of the experiments, this study used the same key sieve pore size for cement-PG-stabilized soil as for the cementitious materials. Additionally, eight particle-size sequences were considered in the GRA; the division of sequences could be made more detailed, with more particle-size classes, to obtain a better analysis of the key sieve pore size. More detailed studies on the pH value and fineness of PG can be carried out in the future, which may enable them to be taken as grading indices of PG properties for the utilization of PG mixed with other materials in road engineering.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A
All the key information from the application conclusions of the research can be seen in the summary Table A1.
Table A1. Key information from the application conclusions.
1. PG can be used in subgrade, together with cement, to stabilize soil.
2. pH and fineness of PG have significant effects on the mechanical properties of cement-PG-stabilized materials.
3. A particle size of 200 µm can be used to determine PG fineness.
4. The properties of subgrade soil can be improved significantly when grade I PG is used in the stabilization of subgrade soil.
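As a compact way of expressing the grade I screening criteria summarized above, one might write a small helper along the following lines. The thresholds are taken from the classification discussion (pH above 5, fineness below 20%, CaSO4·2H2O content above 90%); the adherent-moisture condition is read here as not exceeding 15%, in line with the cited GB/T 23456-2018 requirement, and the whole function is an illustration rather than an implementation of the standard.

```python
def is_grade_one_pg(ph, fineness_pct, caso4_2h2o_pct, adherent_moisture_pct):
    """Illustrative check of the grade I road-use PG criteria discussed in the text.

    ph                    -- pH of the PG/water (1:10) supernatant
    fineness_pct          -- summed sieve residue below the 200 um key pore size, in %
    caso4_2h2o_pct        -- calcium sulphate dihydrate content, in %
    adherent_moisture_pct -- adherent moisture content, in % (<= 15 assumed from
                             GB/T 23456-2018, as cited in the text)
    """
    return (ph > 5.0
            and fineness_pct < 20.0
            and caso4_2h2o_pct > 90.0
            and adherent_moisture_pct <= 15.0)

# Example: a PG sample with pH 5.6, 18% fineness, 92% dihydrate, 10% moisture
print(is_grade_one_pg(5.6, 18.0, 92.0, 10.0))   # True -> grade I; otherwise grade II
```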
2020-10-29T09:04:54.533Z
2020-10-23T00:00:00.000
{ "year": 2020, "sha1": "80d1b9d648b3caa5f0703f56ad96ecfa58f27eda", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-6412/10/11/1021/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "30aa5bbcde4b696d3d4b08eb991b6a9d491e823a", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
119203370
pes2o/s2orc
v3-fos-license
A combined model for pseudorapidity distributions in Cu-Cu collisions at BNL-RHIC energies The charged particles produced in nucleus-nucleus collisions come from leading particles and those frozen out from the hot and dense matter created in collisions. The leading particles are conventionally supposed having Gaussian rapidity distributions normalized to the number of participants. The hot and dense matter is assumed to expand according to the unified hydrodynamics, a hydro model which unifies the features of Landau and Hwa-Bjorken model, and freeze out into charged particles from a space-like hypersurface with a proper time of Tau_FO . The rapidity distribution of this part of charged particles can be derived out analytically. The combined contribution from both leading particles and unified hydrodynamics is then compared against the experimental data performed by BNL-RHIC-PHOBOS Collaboration in different centrality Cu-Cu collisions at sqrt(s_NN)=200 and 62.4 GeV, respectively. The model predictions are in well consistent with experimental measurements. Introduction The application of relativistic hydrodynamics to high energy physics may be traced back to the pioneering work of Landau in 1953 [1]. In recent years, the most important achievement on this topic is the discovery that the spatiotemporal evolution of matter created in high energy heavy ion or particle collisions possesses the features of collective flow with strong interaction and behaves nearly like an ideal fluid with a very little viscosity . Owing to the high degree of nonlinearity and interconnection of hydro equations, it has been being a formidable task to solve them analytically. This is the reason why from the times of Landau till now this problem is only limited to 1+1 expansion for ideal fluid with simple equation of state. The expansions for higher dimensions or situations including viscosity have little analytical discussion. The general exact solution for cases such as these is far from being obtained so far. The treatment of these problems usually resorts to Monte Carlo simulations. In Monte Carlo simulations, beside a powerful calculation system, there also needs a sophisticated skill for avoiding instabilities in solving partial differential hydro equations. Furthermore, since the results come from a man-made non-transparent software package, the correlations between them and physical law are not direct and clear. On the contrary, the analytical methods, concerning the most essential and important elements affecting the physical phenomena via ideal assumptions, provide us the most basic law underlying. In addition, the concise and explicit form of exact solution is unmatchable by Monte Carlo simulations. Hence, despite facing tremendous difficulties, the finding of analytical solution of relativistic hydro equations is always our pursuit of the goal. It is an important field in high energy physics. The first exact solution of 1+1 hydrodynamics was given by I. M. Khalatnikov about 61 years ago [26], which is for an accelerated system being assumed as a massless ideal fluid and initially at rest. The solution was presented in a complicated integral form and later was used by L. D. Landau in his hydro model study and obtained the rapidity distributions of charged particles [27], which are in generally consistent with the observations made at BNL-RHIC [28][29][30]. This is the first time for the understanding of the nearly ideal nature of fluid produced in collisions. 
The second exact solution of 1+1 dimensional hydrodynamics is given by R. C. Hwa about 41 years ago [31]. This solution is for an accelerationless system with Lorentz invariant initial condition. The result got in this way is simple and explicit. From this solution, J. D. Bjorken was able to get a simple estimate for the initial energy density achieved in collisions from the final observables [32]. This makes the energy density be measurable in experiment. It is the first and by now the only formula being widely recognized as one for estimating the energy density of matter created in collisions. Hence, it receives much attention. This is the reason why Hwa's theory is usually named as Hwa-Bjorken hydro model. However, since the free parameter in the formula has not been well fixed, how to determine the mentioned energy density is still an open problem. Moreover, the invariant rapidity distributions obtained from this model are at variance with experimental observations. Theoretically, such distributions are only the limiting cases of NN s   . In recent years, along with the operations of BNL-RHIC and later of CERN-LHC, the investigations of relativistic hydrodynamics have entered into a very active period, becoming one of the most popular subjects. It was during this period that the second and higher order harmonic flows and the ridge structures of the matter created in collisions were observed in experiments [2][3][4][5][6]. This shows us the features of collective flow of produced matter with strong interaction and nearly ideal natures. At the same time, the analytical investigations of hydrodynamics have got into a golden stage of rapid developments and achieved a number of good results [7][8][9][10][11][12][13][14][15]. For example, by generalizing the relation between ordinary rapidity and space-time one, Ref. [7] integrates Landau and Hwa-Bjorken two famous hydro models into one, becoming a unified hydro model and presenting a set of exact solutions. By taking advantage of the traditional scheme of Khalatnikov potential, Ref. [8] solved analytically the hydro equations and gave a pack of simple exact solutions for an ideal fluid with linear equation of state. By taking into account the work done by the fluid elements on each other, Refs. [9,10] generalized the Hwa-Bjorken model for an accelerationless system to the model for an accelerated one, and obtained a class of exact analytical solutions of relativistic hydrodynamics. One of most important applications of 1+1 dimensional hydrodynamics is the analysis of the pseudorapidity distributions of the charged particles produced in nucleus or particle collisions. In our previous work [16], by taking into account the effect of leading particles, we have successfully discussed such distributions for Pb-Pb and Au-Au collisions at respectively CERN-LHC and BNL-RHIC energies in the context of unified hydrodynamics. In this paper, this combined model will be used to analyze the pseudorapidity distributions in the smaller system of Cu-Cu collisions at RHIC energies. A brief description of combined model For the purpose of completeness and application later, we here list the key ingredients of combined model. (1) The matter created in high energy heavy ion collisions is taken as an ideal fluid fulfilling the equation of state where  , s 1 g c  and p are respectively the energy density, the speed of sound and the pressure of fluid. Investigations have shown that g changes very slowly with energies and centrality cuts [14,[33][34][35]. 
For a given incident energy, it can be well taken as a constant. Eq. (1) allows the expansion of fluid along the colliding axis of two nuclei, that is the longitudinal axis of z to have the form as where y is the ordinary rapidity, +  and   are the compact notation of partial derivatives with respect to is the proper time, and is the space-time rapidity of fluid. (2) Eq. (2) is the complicated differential equations with high nonlinearity and coupling between variable p and y. In order to solve it, the relation between ordinary rapidity y and space-time S  is generalized to the form [7]     , 4 4 , and H is an arbitrary constant. After the above treatments, Eq. (4) becomes solvable. Its solution is [7]     where s is the entropy density of fluid, and h H A  , and A is an arbitrary constant. where   NN , C b s , independent of rapidity y, is an overall normalization constant. b is the impact parameter, and NN s is the center-of-mass energy per pair of nucleons.  stands for an arbitrary time-like hypersurface. (4) The right-hand side of Eq. (7) is evaluated on the time-like hypersurface with the proper time equaling FO  . Such hypersurface can be therefore taken as where C is an arbitrary constant. This equation gives Thus, Eq. (7) (5) In nucleus-nucleus collisions, apart from the freeze-out of fluid, leading particles also have certain contribution to the measured charged particles. Leading particles are believed to be formed outside the nucleus, that is, outside the colliding region [36,37]. The movement and generation of leading particles are therefore free from hydro descriptions. As we have argued before that the rapidity distribution of leading particles takes the Gaussian where   Part NN , N b s is the total number of participants, which can be determined in theory by Glauber model [43][44][45]. Comparison with experimental measurements Having at hand the rapidity distributions of Eqs. (10) and (11), the pseudorapidity distributions can be written as [46] is the transverse mass, and T p is the transverse momentum. The first factor on the right-hand side of above equation is actually the Jacobian determinant. This transformation is closed by another Taking into account the contributions from both the freeze-out of fluid and leading particles, the rapidity distributions in Eq. 13 are Inserting above equation or the sum of Eqs. (10) and (11) the same above stated energies. It can be seen from this table that the variations of C against energies, system sizes and centrality cuts are in just the same way as those of Lead N . That is, for a given centrality cut, C increases with energies and system sizes, While, for a given energy and system, C decreases with centrality cuts. The width parameter  in Eq. (11) takes the value of 0.85 at different incident energies and centrality cuts. As the analyses given above,  is independent of incident energies and centrality cuts. The center parameter 0 y in Eq. (11) takes the values of 2.60-2.92 and 2.40-2.49 for centrality cuts from small to large in collisions at NN s =200 and 62.4 GeV, respectively. As pointed out early, 0 y increases with energies and centrality cuts. Conclusions The matter created in heavy ion collisions is assumed evolving according to the framework of unified hydrodynamics, and then freeze out into charged particles from a pace-like hypersurface with a fixed proper time of FO  . 
The typical features of unified hydrodynamics are that: (1) by generalizing the relation between the ordinary rapidity y and the space-time rapidity, the two famous hydro models of Hwa-Bjorken and Landau are integrated into one; (2) for a linear equation of state, this hydro model can be solved analytically. In addition to the freeze-out of the fluid, leading particles also contribute to the measured charged particles. As before, the leading particles are supposed to have Gaussian rapidity distributions normalized to the number of participants, which can be calculated in theory. It is interesting to notice that the investigations of the present paper once again show that, for a given colliding system, the central position y_0 of the Gaussian rapidity distribution increases with incident energy and centrality cut, while the width parameter is insensitive to them, keeping a constant value of 0.85. This is consistent with the results arrived at in Ref. [16]. Here, it is worth mentioning that the investigations of Refs. [22,23] have also shown that Landau hydrodynamics alone is not enough to explain the experimental observations in high energy physics. Only after the effects of the recombination of constituent quarks in participants are taken into account can the experimental measurements in both p-p and nucleus-nucleus collisions be described properly. This is in accordance with our analysis: in order to describe the experimental data well, leading particles are essential in addition to unified hydrodynamics. Only when the combined effects of both are included simultaneously can the theoretical predictions match the experimental measurements. At the end of this paper, we would like to point out that, in recent years, a hydro model known as event-by-event hydrodynamics has been widely used in high energy physics [18][19][20][21]. This kind of hydrodynamics differs from the one employed in the present paper in two main aspects. (1) The former relies on Monte Carlo simulations; the results come from a software package, and their correctness depends on the validity of the input parameters. The latter, however, provides an analytical solution, which extracts the most essential features of the problem concerned. (2) The former deals with collisions on a microscopic, single-event level; hence, fluctuating initial conditions are important in explaining experimental observations such as higher flow harmonics and ridge structures. The latter, on the contrary, treats collisions on a macroscopic "single-shot" level with an averaged initial condition; hence, there is no need to consider fluctuations in the initial conditions. This is sufficient for describing global variables such as the pseudorapidity and transverse momentum distributions.
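As an illustration of how the two ingredients of the combined model enter a pseudorapidity fit, the sketch below builds a toy dN/dη: a Gaussian leading-particle term normalized to an assumed number of participants plus a placeholder hydrodynamic term, converted from rapidity to pseudorapidity with the standard Jacobian. The hydro term is deliberately replaced by a simple symmetric function, since the paper's analytic freeze-out solution (Eqs. 7-10) is not recoverable from the extracted text; all numbers (m, p_T, N_part, y_0, widths, normalization) are illustrative assumptions, not the fitted values of the paper.

```python
import numpy as np

# illustrative parameters -- NOT the fitted values of the paper
m, pT   = 0.14, 0.45        # GeV: assume pions with a typical transverse momentum
N_part  = 100               # assumed number of participants (Glauber model in the paper)
y0, sig = 2.5, 0.85         # centre and width of the Gaussian leading-particle term
C_hydro, w_hydro = 300.0, 2.3   # placeholder normalization/width of the hydro term

def dNdy(y):
    """Toy rapidity density: placeholder hydro term + leading particles (Eq. 11 style)."""
    hydro   = C_hydro * np.exp(-y**2 / (2 * w_hydro**2))       # stand-in for the freeze-out result
    leading = N_part / (2 * np.sqrt(2 * np.pi) * sig) * (      # two Gaussians at +-y0, total area N_part
        np.exp(-(y - y0)**2 / (2 * sig**2)) + np.exp(-(y + y0)**2 / (2 * sig**2)))
    return hydro + leading

def dNdeta(eta):
    """Convert dN/dy to dN/deta with the standard Jacobian at fixed m and pT."""
    mT = np.sqrt(m**2 + pT**2)
    y = 0.5 * np.log((np.sqrt(pT**2 * np.cosh(eta)**2 + m**2) + pT * np.sinh(eta)) /
                     (np.sqrt(pT**2 * np.cosh(eta)**2 + m**2) - pT * np.sinh(eta)))
    jacobian = np.sqrt(1.0 - m**2 / (mT**2 * np.cosh(y)**2))
    return jacobian * dNdy(y)

eta = np.linspace(-5, 5, 201)
print(dNdeta(eta).max())   # peak of the toy pseudorapidity distribution
```

In the paper itself the placeholder hydro term is replaced by the analytic freeze-out solution of unified hydrodynamics, and the overall constant, y_0, and the centrality-dependent N_part are fixed against the PHOBOS Cu-Cu data.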
2019-04-13T15:02:23.819Z
2016-04-27T00:00:00.000
{ "year": 2016, "sha1": "6d243b000153b10bcea8467b198487eb32f1fb24", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1605.08902", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6d243b000153b10bcea8467b198487eb32f1fb24", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
78091563
pes2o/s2orc
v3-fos-license
BRCA2 and Other DDR Genes in Prostate Cancer Germline and somatic aberrations in DNA damage repair (DDR) genes are more prevalent in prostate cancer than previously recognized, with BRCA2 as the most commonly altered gene. Germline mutations in BRCA2 have been linked to poor prognosis when patients are managed under the protocols currently approved for prostate cancer. The impact of germline mutations in other DDR genes beyond BRCA2 remain unclear. Importantly, a quarter of prostate cancer patients identified as germline mutation carriers lack a family history of cancer. The clinical implications of somatic DDR defects are yet to be elucidated. Poly ADP-ribose polymerase (PARP) inhibitors and platinum-based chemotherapy have proven to be effective in the treatment of other tumor types linked to BRCA1 and BRCA2 alterations and several trials are currently evaluating their efficacy in prostate cancer. Here, we summarize the available evidence regarding the prevalence of somatic and germline DDR defects in prostate cancer; their association with clinical outcomes; the trials assessing the efficacy of new therapies that exploit DDR defects in prostate cancer and briefly discuss some uncertainties about the most appropriate management for these patients. Introduction Alterations in the DNA damage repair (DDR) pathways have only recently been recognized as a major hallmark of prostate cancer. Next-generation sequencing studies have revealed that about 10% of primary tumors and 25% of metastases from prostate cancer harbor DDR defects [1,2] with BRCA2 aberrations consistently described as the most common event. Germline deleterious mutations in DDR genes are present in 8-16% of metastatic prostate cancer patients [1,3,4], a prevalence significantly higher than previously recognized. Inherited BRCA2 mutations that impair the gene function have been described in 3-5% of patients with advanced prostate cancer [3,4]. These BRCA2 mutations have been associated with more aggressive disease and poor clinical outcomes [5][6][7][8][9], but the prognostic implications of other DDR genes are less well established. On the other hand, there is strong emerging evidence that some germline and somatic DDR defects may predict the response to poly-ADP ribose polymerase (PARP) inhibitors and platinum-based chemotherapy in prostate cancer [10][11][12][13]. These findings make genetic testing attractive not only for risk stratification, Table 1. DDR genes screened for germline mutations in mCRPC studies mentioned in this review. Management of Localized Disease A range of management options is available for men with localized prostate cancer. The criteria to consider active surveillance, radiotherapy, or radical prostatectomy for the general population are usually based on several factors including PSA levels at diagnosis, Gleason score, tumor stage, performance status, life expectancy, and patient preference [23,24]. BRCA1 and BRCA2 mutation carriers with localized disease are usually managed under the same general protocols as no definitive evidence has demonstrated that they should be treated otherwise. Active surveillance can be an adequate management strategy for men with low-risk localized prostate cancer unlikely to affect the patient's life expectancy in the absence of treatment [24]. It consists of regular PSA monitoring and MRI, allowing curative intervention for those patients that experience disease progression. 
The poor outcomes associated with BRCA2-related prostate tumors have discouraged physicians to recommend active surveillance to eligible carriers although evidence to support this decision has only been provided recently. A report analyzing the outcomes of 1211 men undergoing active surveillance including 11 BRCA1, 11 BRCA2, and 5 ATM germline carriers showed that tumors in BRCA2 carriers were more likely to present a tumor grade reclassification requiring radical treatment when compared to non-carriers. The tumor staging upgrade incidence at 2-, 5-and 10-years was 27%, 50%, and 78% in BRCA2 carriers compared to 10%, 22% and 40% in noncarriers (p = 0.001) [25]. No conclusive data are available regarding whether alterations in BRCA2 or other DDR genes are relevant in selecting between curative treatment options (radical prostatectomy or radiotherapy). The only reported data comes from a retrospective study that analyzed the outcomes of 1200 patients with localized disease including 18 BRCA1 and 49 BRCA2 carriers, and reported that 89% and 67% of BRCA surgically treated were free from metastasis at 5 and 10 years, respectively, when compared to 97% and 91% of non-carriers (p = 0.024). For those treated with radiotherapy, the difference was even greater as only 57% and 39% of BRCA1/2 carriers were free from metastasis at 5 and 10 years, respectively, compared to 91% and 80% of non-carriers (p < 0.001) [9]. One could be tempted to assume that surgery is a better approach for mutation carriers, however, this conclusion cannot be based on this study. The two treatment cohorts were not balanced due to the fact that patients treated with radiotherapy (both carriers and non-carriers) presented with more advanced disease than those men surgically treated, and a direct comparison of the two groups was not performed. Nonetheless, these data raise the question of whether adjuvant treatments may be beneficial for BRCA1/2 carriers. Androgen deprivation therapy (ADT) is routinely added to radiotherapy for localized disease. No evidence has been produced to support a different ADT scheme in carriers, but since androgen deprivation seems to downregulate both homologous recombination (HR) [26] and non-homologous End Joining (NHEJ) [27,28], prolonged ADT or the addition of new androgen receptor signaling inhibitors (ARSI) such as abiraterone, enzalutamide, or apalutamide, might be of benefit for those carriers undergoing radiotherapy. Adjuvant chemotherapy with docetaxel following radiotherapy in unselected patients with high-risk disease improves relapse-free survival, but the benefit in metastasis free survival and overall survival is less clear [29]. Several trials are currently ongoing that would provide definite data on the impact of adjuvant chemotherapy or ARSI on metastasis free survival and overall survival. To our knowledge, outcome analyses in these studies have not been planned to be stratified by BRCA and/or DDR status. However, if conducted, such analyses would provide an insight into the benefit of different adjuvant schemes and guide future clinical trials in this population. Management of Metastatic Prostate Cancer After failure of the primary treatment, the disease is usually managed with long-term androgen deprivation therapy (ADT). Disease progression on continuous ADT is termed castration resistant prostate cancer (CRPC). The PROREPAIR-B study has shown that germline BRCA2 mutation carriers become resistant to ADT faster than non-carriers. 
In the study, the median time from continuous ADT initiation to CRPC was 28 months for non-carriers when compared to 13.2 months in BRCA2 carriers (p = 0.05) [4]. PROREPAIR-B has been the first prospective study to address the impact of germline mutations in BRCA1, BRCA2, and other DDR genes in patients with metastatic CRPC (mCRPC) [4]. Although the 10-month difference in cause specific survival (CSS) between ATM, BRCA1, BRCA2, PALB2 mutation carriers, and non-carriers (33.2 vs. 23) was not statistically significant (p = 0.264), the study showed that CSS from mCRPC was halved in BRCA2 carriers when compared to non-carriers (17 vs. 33, p = 0.027). The difference remained significant when the BRCA2 carriers were compared to other germline DDR carriers (median 33.8 months, p = 0.048). Multivariate analyses identified BRCA2 as an independent prognostic factor for CSS in mCRPC (HR 2.11, 95% CI 1.06-4.18). Importantly, none of the carriers in this series had received a Poly ADP-ribose polymerase (PARP) inhibitor and/or platinum-based chemotherapy, which may have had a confounding effect on survival. Analysis of the response to taxanes and ARSIs in the PROREPAIR-B study showed no difference in the response rates based on carrier status [4]. However, the duration of the responses tended to be shorter in carriers, particularly in those harboring BRCA2 mutations. Importantly, the outcomes of BRCA2 carriers who received abiraterone or enzalutamide as first-line treatment did not differ from that of non-carriers. Conversely, BRCA2 carriers treated with the taxane-ASI sequence had significantly worse CSS (median 28.4 vs. 10.7 months, p = 0.0003; HR:4.16, 95% CI 1.80-9.62) and the progression-free survival from the first systemic therapy until progression on the second systemic therapy (PFS2) was also shorter (median 17.1 vs. 8.6 months, p < 0.0001; HR:8.16 95% CI 3.60-18.49) than in non-carriers who received the same treatment [30]. No biomarker has been identified to date for selecting one therapy over another in the setting of advanced prostate cancer. If these preliminary results are confirmed, the determination of germline BRCA2 status would be of assistance for the selection of the first line of treatment in mCRPC. The observations described above may contribute to explain the contradictory results previously reported by three retrospective series that have evaluated the response of germline DDR carriers to abiraterone and enzalutamide. Annala et al. [22] analyzed the outcomes of 176 metastatic CRPC patients including 22 DDR carriers (16 BRCA2) (Table 1), and found that the progression free survival (PFS) of DDR carriers on first-line ASI was significantly shorter than that of non-carriers (3.3 vs. 6.2 months, p = 0.01). The poor PFS could be related to the high tumor burden in the patients included as reflected by the high levels of circulating tumor DNA (ctDNA) reported (>30%). Authors also remarked on the great heterogeneity observed in PFS, with some DDR carriers benefiting from ARSIs for >2 years. This was the main observation of the second study by Antonarakis et al. [21], who observed a trend toward a more prolonged PFS in mutation carriers when compared to non-carriers (15 vs. 10.8 months, p = 0.090) treated with ARSi. Interestingly, they also identified previous chemotherapy as a factor associated with worse PFS and CSS, but did not analyze whether this affects the same to carriers and non-carriers. 
The third study is the retrospective analysis of patients from the landmark publication by Pritchard et al. in 2016 [20]. The authors reported the outcomes of 330 non-carriers and 60 DDR carriers (including 37 BRCA2) and found no association between mutation status and the response or duration of treatment to the first ARSI or taxane administered. The clinical implications of somatic mutations in BRCA2 and other DDR genes have not been well characterized yet and there is no evidence of whether these patients may respond differently to the treatment options currently available. Mutations in BRCA and Other DNA Repair Genes as a Potential Target for Platinum-Based Chemotherapy and PARP Inhibitors in Prostate Cancer Specific treatment strategies for patients with somatic and/or germline mutations in DNA repair genes could be obtained from research in other tumor types frequently associated with these events such as breast and ovarian cancers. Platinum-based chemotherapy has been proven to be an effective treatment for BRCA1 and BRCA2 mutated breast [31,32] and ovarian cancers [33] as these compounds generate DNA cross-links that cannot be easily resolved with an impaired homologous recombination (HR). In standard protocols for prostate cancer, platinum-based chemotherapy is only used when neuroendocrine differentiation has occurred as phase III trials in mCRPC failed to show any benefit in unselected population [34]. However, several retrospective reports have suggested that BRCA2 mutated prostate cancer may be highly sensitive to this therapy [10][11][12]. In a retrospective analysis of 141 men with mCRPC treated at the Dana Farber Cancer Institute between 2001-2015 with at least two doses of carboplatin and docetaxel, the treatment demonstrated benefits for patients with germline BRCA2 mutations [12]. Six out of the eight BRCA2 carriers identified (75%) presented a >50% PSA decline within 12 weeks of initiating this regimen when compared to 23 of 133 of non-carriers (17%) (p = 0.001). A >50% PSA decline was associated with a more prolonged survival (18.9 in BRCA2 carriers vs. 9.5 months in non-carriers). Several studies are ongoing to evaluate the role of platinum-based chemotherapy for patients with DNA repair defects. Inhibition of the Poly ADP-ribose polymerase (PARP) is another strategy to treat DNA repair deficient tumors as these drugs exploit the dependency of HR-deficient tumors on alternative DDR pathways [35] and several PARP inhibitors are at different stages of clinical development (Table 3). They differ in their potency and specificity to bind PARP and their ability to trap PARP-DNA complexes [36]. Olaparib (AstraZeneca) was the first drug in class to be approved in 2014 for the treatment of ovarian cancer associated with BRCA defects. The first-in-man clinical trial of olaparib in a population of patients with advanced solid tumors, enriched for germline BRCA1 and BRCA2 mutations included three mCRPC patients; one of them benefited from the drug for over two years [37]. Small numbers of mCRPC patients with germline BRCA mutations were also enrolled in phase I trials with other PARP inhibitors such as talazoparib [38] or niraparib as single agents [39]. An open-label single-arm basket phase II study of olaparib for germline BRCA mutation carriers included eight mCRPC patients. Of these, one of four and three of four with or without previous exposure to platinum, respectively, achieved a response to therapy [40]. 
The phase II trial TOPARP-A [13] enrolled 50 men with heavily pre-treated mCRPC. Fourteen out of the 16 patients who were found to harbor a DDR defect (somatic or germline) (88%) achieved clinical benefit (including radiological responses, PSA drops, and/or CTC count decreases) and durable responses to the PARP inhibitor olaparib including all seven patients with BRCA2 defects, but also some with BRCA1, ATM, PALB2, and FANCA defects, among others. The preliminary results of the phase II trial TRITON2 evaluating the efficacy of rucaparib in a preselected population with DDR defects in tumor or ctDNA showed PSA and radiographic responses in 48% and 45% of patients harboring BRCA2 defects. Confirmed PSA and radiographic responses were also observed in one patient with BRIP1 and one with FANCA mutations [41]. No confirmed responses were observed for ATM. This raises the question of which DDR genes other than BRCA1 and BRCA2 may be considered predictive of response to PARP inhibitors. However, this point may not be clarified until phase III trials are completed. A cross-talk between the androgen receptor (AR) and DNA repair has been extensively described [26,27,42]. First, PARP is involved in androgen dependent transcription and PARP inhibition impairs this process [43]. Second, the androgen receptor regulates the transcription of DNA repair genes and therefore androgen depletion impairs HR [26], so the tumor may become susceptible to PARP inhibition regardless of HR mutation status. Supported by this preclinical data, several trials are addressing the potential synergisms between PARP inhibitors and ARSIs, irrespective of DDR status. The first study reported was NCI9012, a phase II trial that compared the efficacy of veliparib plus abiraterone with abiraterone in monotherapy. No differences in PSA response rate (63.9% vs. 72.4%, p = 0.27), radiographic response rate (45% vs. 52.2%, p = 0.51), or median PFS (10.1 vs. 11 months, p = 0.99) were observed between the two groups, with all patients considered. However, 20 of the 148 (25%) patients included in the study had biallelic DDR defects and presented better PSA (90% vs. 56%, p = 0.007) and radiographic (87.5 vs. 38.6, p = 0.001) response rates than the DDR-wild type, in both treatment arms. The small number of DDR-defective patients in each treatment arm did not allow further comparisons. The combination was well tolerated. Grade ≥3 side effects were observed in 20% and 24% of the single and combination arms, respectively [44]. A second phase II randomized trial assessed the efficacy and tolerability of olaparib in combination with abiraterone when compared with abiraterone in mCRPC patients, irrespective of their DDR status. Eleven of 71 (15%) men in the olaparib arm and 10 out of 71 (14%) patients in the control arm had mutations in the HR genes. However, 61% of patients only had partially characterized HR status as the results of germline and plasma testing could not be confirmed by tumor analysis. Unlike the previous study, time to radiographic progression was significantly prolonged in the olaparib plus abiraterone group when compared to the abiraterone alone group (13.8 vs. 8.2, p = 0.034), regardless of HR status. The study was not powered for subgroup analysis, but the exploratory analysis showed a benefit on time to radiographic progression with the combination in patients with and without HR defects. No differences in radiological response rates or in PSA responses were observed between the two arms. 
Importantly, 54% of patients in the combination arm presented severe adverse events when compared to 28% in the abiraterone group, including seven (10%) patients with a serious cardiovascular event [45]. The contradictory results of these two studies regarding the benefit of adding a PARP inhibitor to an ARSI in DDR-proficient patients to prolong the time to progression may be related to the different pharmacological activity of veliparib and olaparib, but also to the different classification of patients by DDR-status. Currently ongoing trials are likely to clarify this in the near future as well as whether the benefit may outweigh the potential toxicity of the combinations (Table 3). Unfortunately, resistance to PARP inhibitors eventually arises, even in patients with BRCA2 biallelic loss who usually present strong initial responses. Early reports suggest that subclonal aberrations reverting germline and somatic DDR mutations back in frame may be a key mechanism of resistance [46,47]. Likewise, polyclonal BRCA2 reversion mutations have also been identified at the time of disease progression in a patient treated with platinum-based chemotherapy [48]. In all cases, these subclonal events likely driving resistance to PARP inhibitors and platinum, were identified in circulating tumor DNA (ctDNA), reinforcing the clinical utility of ctDNA as a multipurpose biomarker for treatment with PARP inhibition in mCRPC [47]. Implications for Hereditary Cancer and Germline Testing Prostate cancer is one of the most heritable human cancers as 57% of the interindividual variation in risk has been attributed to genetic factors [49]. Men harboring an inherited BRCA2 mutation have a 30% lifetime-risk of developing prostate cancer, although it may vary from 19% to 61% depending on the presence/absence of genetic variants acting as risk-modifiers [50]. The impact of germline BRCA1 mutations is more modest as the life-time risk of prostate cancer associated with these mutations has been estimated in 13% [50], similar to that of the general population. The prostate cancer risk for other DDR genes beyond BRCA 1/2 remains unclear. A Gleason score ≥8, and nodal and distant metastasis at the time of diagnosis are more common in BRCA carriers who develop prostate cancer than in non-carriers [8]. Despite the compelling evidence indicating that BRCA2 mutations predispose carriers to an aggressive prostate cancer phenotype, the most appropriate screening strategy has yet to be elucidated. International guidelines recommend annual PSA-based prostate cancer screening from the ages between 40-45 [51]. The efficacy of this approach is being analyzed in the IMPACT study. IMPACT is an international multicenter study that has enrolled over 2000 men including 791 BRCA1 and 732 BRCA2 carriers aged 40-69 years. Participants have annual PSA tests and the threshold to indicate a biopsy is a PSA >3 ng/mL. Data from the first round of annual PSA screening have estimated the positive predictive value of PSA >3 ng/dL in 48% for BRCA2 carriers, double than the 24% estimated for the general population [52]. Importantly, most of the tumors in BRCA2 carriers identified through PSA in this preliminary report are of intermediate or high risk unlike those detected through PSA screening in the general population, who are often low-risk tumors [52]. 
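For readers less familiar with screening statistics, the positive predictive value quoted for the IMPACT study is simply the fraction of biopsy-triggering PSA results that turn out to be cancer. A minimal worked illustration follows; the counts are hypothetical and only chosen to reproduce the quoted percentages, they are not the study's actual numbers.

```latex
\mathrm{PPV} \;=\; \frac{\text{true positives}}{\text{true positives} + \text{false positives}},
\qquad \text{e.g.}\quad \frac{48}{48+52} = 0.48\;(48\%)
\quad \text{vs.}\quad \frac{24}{24+76} = 0.24\;(24\%).
```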
However, considering the limitations of PSA-only screening in populations at increased risk for prostate cancer, assessing the role of additional imaging and urine or blood biomarkers in BRCA carriers would be important. The PRECISION study has demonstrated the value of multiparametric magnetic resonance imaging (mpMRI) as a screening tool for prostate cancer [53] and further studies should now assess if this more precise screening tool is of particular benefit for men at an increased risk of aggressive prostate cancer due to inherited BRCA mutations. Beyond prostate cancer, germline BRCA1 and BRCA2 mutations are known to increase the risk of other tumor types including breast and ovarian cancers. More than 25 genes linked to DNA repair have been associated with familial breast and/or ovarian cancers, most of them involved in HR and Fanconi Anemia pathways [54]. Cancer patients who carry a germline mutation in one of these genes often have other relatives also diagnosed with cancer, triggering genetic screening. However, studies conducted in prostate cancer patients [3,4] have revealed that 30% of patients harboring a germline mutation in a DDR gene and 15% of those who carry a BRCA2 mutation do not have a relative affected by cancer. Although some reports suggest that intraductal histology may be common in patients with germline BRCA2 mutations [55,56], no tumor features have been strongly associated with the presence of BRCA mutations in prostate cancer beyond a Gleason score >8 and a higher prevalence of node and distant metastasis at diagnosis [8]. Accordingly, updated National Comprehensive Cancer Network clinical practice guidelines [24] now recommend clinicians consider germline screening for mutations in BRCA2 and other HR genes in all patients with high risk localized prostate cancer and metastatic disease. Identification of a germline mutation in a prostate cancer patient would not only have implications for the patient, but should also be followed by genetic testing of all related family members, providing the opportunity for early cancer-specific screening and risk reduction strategies in those found to be carriers [57]. Considering the high prevalence of prostate cancer in developed countries [58], the increasing need for genetic testing, and the shortage of genetic counsellors, it is evident that the traditional approach consisting of pre-test counselling is no longer feasible and new strategies to enable more widespread access to genetic testing in a timely manner are needed. Some institutions are implementing prostate cancer-focused genetic clinics alongside their pre-existing prostate cancer clinics. Under this approach, patients eligible for testing may undergo counselling by a trained urologist or oncologist managing the patient's prostate cancer [59]. The ENGAGE study recently reported the results of the oncologist-led BRCA testing in women with ovarian cancer and demonstrated efficient turnaround times along with high levels of patient and physician satisfaction [60]. This could also be an adequate and efficient strategy to counsel prostate cancer patients before undertaking a genetic test. Conclusions and Future Directions DDR defects are present in prostate cancer at a higher prevalence than previously recognized. BRCA2 is the DDR gene most commonly mutated in advanced prostate cancer with up to 5.3% of these patients carrying a germline mutation. 
Often, carriers lack a personal or family history of cancer to suspect a heritable mutation, therefore germline screening should be considered in all patients with advanced prostate cancer, at least until we are able to discern in which patients the likelihood of a germline mutation is low enough to spare screening. Understanding the real prevalence of germline DDR mutations in prostate cancer across populations would require further studies into screening groups with different genetic backgrounds. Treating physicians are becoming aware of the clinical implications of identifying these alterations and thanks to the more widespread access to genetic testing, we are likely to obtain an accurate estimation of this prevalence in the near feature. Major efforts are needed to guarantee the carriers' access to a genetic counsellor in a timely manner as well as to establish management protocols that improve these patient outcomes. The approval of the PARP inhibitors to treat mCRPC patients with DDR defects is likely to occur in the foreseeable future. Pending questions to be answered by currently ongoing trials are the benefit of using PARP inhibitors at earlier disease stages either in monotherapy or in combination. However, stratification of patients for PARP inhibition therapy by DDR defects is still suboptimal and represents the major hurdle for the development of PARP inhibitors in this population. Each of the ongoing phase II/III clinical trials testing the antitumor activity of PARP inhibitors and each laboratory performing genetic diagnostic tests uses a different panel. Even more importantly, the analyses pipelines and the criteria to classify genetic variants may differ significantly [61]. In addition, loss of function may occur without changes in a gene sequence, but may be driven by other mechanisms including epigenetic and transcriptomic changes. Assessing all the heterogeneous and complex mechanisms that could lead to deficient DDR using the methodologies currently available is not cost-effective and we may be under-identifying patients likely to benefit from PARP inhibitors. Further efforts are needed to ascertain DDR deficiency in a comprehensive and efficient approach. Conflicts of Interest: The authors declare no conflict of interest.
2019-03-16T13:02:33.654Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "2febe5a2896de7a0caef563d80cfd5cf52559e4a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/11/3/352/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2febe5a2896de7a0caef563d80cfd5cf52559e4a", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250681269
pes2o/s2orc
v3-fos-license
An evaluation of the influence of a magnetic field on a human subject with the use of bio-impedance The influence of a magnetic field on a living human organism was monitored using a bio-impedance evaluation of vasodilatation effects. A quantitative evaluation of the influence of a magnetic field on a human being was implemented by means of a quantitative evaluation of changes in the bio-impedance of the tissue. The pulse of the magnetic field was controlled by a pseudo-random impulse signal using a power switch that controlled the current of the applicator coil. The peak magnetic field flux density was approximately 60 mT. The bio-impedance was measured by a four-electrode method by means of a radiofrequency narrow band vector bioimpedance meter. Experiments were performed on the magnetic exposure of the forearm of an exposed human subject. During exposure to a magnetic field, the bio-impedance change signal level increases above the normal level, and reaches the maximum level after about 10 minutes. The maximum value is approximately 50 % higher than the normal level. Influence of a magnetic field on living tissue Magnetic fields have been used in medicine for therapeutic purposes since ancient times. It is well known that a magnetic field leads to changes in inorganic and organic matter, and in some of its physical and chemical characteristics. These effects can also be observed on biological systems. The main effect of a pulse magnetic field is thus anticipated to be in the analgesic and vasodilatation area. Our work provides a practical methodology for direct quantitative observation of the influence of a magnetic field on living tissue by means of bio-impedance evaluation of vasodilatation effects. Bio-impedance methods are based on the fact that the electric impedance of the tissue varies according to the amount of blood contained in a segment at a given instant. The values of the impedance magnitude changes (dZ) are proportional to the amount of blood in the tissue and its flow during the heart cycle. This enables us to identify changes in tissue perfusion due to external influences in the course of regular measurements. Description of the experimental arrangement The experiments consisted of measurements of the magnetic exposure of the forearm of a healthy human subject. The monitored segment was exposed to the effects of a magnetic field. Changes in the effective value (RMS) of the dZ signal were chosen as a measure of the magnetic field effects. The experiments were repeated several times in the same configuration. In order to eliminate the placebo effect, the subject was not informed whether a magnetic field was acting or not. Magnetic field generator A magnetic field was generated by a magneto-therapeutic instrument with a pulse random signal (RG), or by a small magneto-therapeutic apparatus (MA) [1]. To observe the bio-impedance we used a radiofrequency (RF) narrow-band vector bio-impedance meter, as described in [2]. The RG uses a generator of a Galois code to generate a digital random sequence of 16 bits. The frequency spectrum of the generated signal has a nearly constant level in the band from f/(2 N -1) to f/2, where N is the length of shift register (here N=16) and f is the clock frequency (here 100 Hz). The applicator is a couple of Helmholtz coils, with an inner diameter of 30 cm, distance 20 cm. It was possible to achieve a peak magnetic flux density value B max = 60mT between the coils with the use of a pulse random excitation current. 
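The pseudo-random excitation described above can be reproduced in software with a Galois-type linear feedback shift register. The sketch below is a minimal 16-bit Galois LFSR; the tap mask 0xB400 is a standard maximal-length choice (period 2^16 − 1) and is an assumption, since the paper does not state which generating polynomial the RG instrument uses.

```python
def galois_lfsr16(seed=0xACE1, taps=0xB400):
    """Minimal 16-bit Galois LFSR; yields one pseudo-random bit per clock tick."""
    state = seed & 0xFFFF
    while True:
        out = state & 1          # bit shifted out (drives the coil power switch on/off)
        state >>= 1
        if out:
            state ^= taps        # feedback taps: x^16 + x^14 + x^13 + x^11 + 1
        yield out

gen = galois_lfsr16()
bits = [next(gen) for _ in range(16)]   # first 16 bits of the sequence
```

Clocked at f = 100 Hz as in the paper, such a register repeats only after 2^16 − 1 steps, giving the quoted near-flat excitation spectrum between f/(2^16 − 1) ≈ 1.5 mHz and f/2 = 50 Hz.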
The MA generates a pulse magnetic field with a frequency of 25 Hz, 12,5 Hz, 6,25 Hz, 3,125 Hz and with a course corresponding to approximately half a sine wave of the electric network. The applicator consisted of a cylindrical multi-layer coil with outer diameter 52 mm and length 37 mm, with an open magnetic circuit made of ferromagnetic sheets formed by the core and the ferromagnetic casing. B max in the surroundings of the applicator reached a level of 40 mT at a distance of 10 mm and a level of 20 mT at a distance of 20 mm from the frontal surface of the apparatus. The bio-impedance meter The bio-impedance meter was realized as a narrowband vector impedance meter (Z-meter) with fourelectrode sensing. A narrow transmission bandwidth and coherent signal detection enabled higher sensitivity and noise immunity from the external disturbance in the surroundings of the pulse magnetic field generator. A block diagram of the arrangement is illustrated in figure 1. The generator of the test signal, which is realized as the current source, supplies the two exciting outside electrodes of the four-electrode system with 1 mA RF current with frequency 75 kHz. Figure 1 Block diagram of the bio-impedance meter The evaluated voltage difference, which is proportional to the observed impedance, is measured by an internal couple of electrodes. The measuring amplifier is realized as a narrowband amplifier with highquality selective resonant circuits and with high linearity. The amplified signal is rectified by two synchronous detectors, controlled by the signals from a 75 kHz generator. Between the controlling signals of the two detectors there is a phase shift of π/2, so that at the phase shift compensation in the measuring chain the output voltages of the detectors are proportional to the real and imaginary part of the measured impedance, which enables us to evaluate its vector. Signal processing The output signal of the synchronous detectors is first processed by amplifiers with an amplitude characteristic representative band pass (BP) with cut-off frequencies of approximately 0,5 and 20 Hz. After A/D conversion, digital dZ signal processing follows. The amplitude is determined for the vector signal of the bio-impedance, and this signal is processed in real time by the adaptive digital filter, using a signal processor and a computer. The amplitude characteristic of the filter corresponds approximately with the signal spectra of the bio-impedance, and practically enables maximal improvement of the signal-to-noise (S/N) ratio for the processed signal [3]. A typical amplitude characteristic of the filter is displayed in figure 2. The characteristic of the filter is approximately periodic, which corresponds to a comb filter with basic frequency f, and the envelope curve of the process corresponds to a low-pass filter with a multiple higher cut-off frequency up to about 7 to 10f (HF). The filter is designed as a filter with a finite impulse response. Its pulse response is created by convolution of the pulse response of the comb filter and a low-pass HF filter. The filter is continuously adapted by setting the periodicity frequency f of the characteristics to a value which corresponds to the signal frequency rate. The tuning of the filter proceeds with a time delay adjustment of the shift registers in the filter. The signal frequency rate filter is based on analyses of the autocorrelation function of the signal. 
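A rough software analogue of the adaptive filter described above can be built by convolving a comb impulse response (teeth spaced one heart period apart) with a low-pass prototype, with the period re-estimated from the autocorrelation of the dZ signal. This is only a sketch under assumed parameters (number of teeth, cut-off near ten harmonics, sampling rate); the instrument's actual filter lengths and tuning logic are not given in the text.

```python
import numpy as np
from scipy.signal import firwin

def comb_lowpass_fir(f_rate, fs, n_teeth=8, n_harm=10, lp_taps=101):
    """FIR whose impulse response is the convolution of a comb filter
    (teeth one signal period apart) and a low-pass prototype (~n_harm * f_rate)."""
    period = max(1, int(round(fs / f_rate)))   # samples per heart cycle
    comb = np.zeros(n_teeth * period + 1)
    comb[::period] = 1.0 / n_teeth             # equally weighted teeth
    lowpass = firwin(lp_taps, n_harm * f_rate, fs=fs)
    return np.convolve(comb, lowpass)

def rate_from_autocorrelation(dz, fs, fmin=0.7, fmax=3.0):
    """Estimate the signal (heart) rate in Hz from the first autocorrelation peak of dZ."""
    dz = np.asarray(dz, float) - np.mean(dz)
    acf = np.correlate(dz, dz, mode="full")[len(dz) - 1:]
    lags = np.arange(len(acf)) / fs
    window = (lags > 1.0 / fmax) & (lags < 1.0 / fmin)
    return 1.0 / lags[window][np.argmax(acf[window])]

# usage sketch: retune the comb to the measured rate, then filter the recording
# fs = 200.0; h = comb_lowpass_fir(rate_from_autocorrelation(dz, fs), fs)
# dz_filtered = np.convolve(dz, h, mode="same")
```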
The typical time response of the bio-impedance signal, dZ, is shown in figure 3 before and after filtration. Experiment ordering The schematic ordering of the tests is illustrated in figure 4. The electrodes for bio-impedance scanning were placed on the inside of the forearm. The inner voltage electrodes were 10 cm apart, and the outer current electrodes were placed 5 cm from the voltage electrodes. Disposable electrocardiograph wet-gel electrodes were used. The area between the voltage electrodes was exposed to the magnetic field inside the Helmholtz coils, or the MA was used. The actual layout with the Helmholtz coil applicator is displayed in figure 5. The relative changes in volume during the heart cycle are presented in figure 9, both for the case when the random magnetic field with B_max = 60 mT is applied (four time behaviours, above) and for the case when no magnetic field is applied (first time behaviour, below). Conclusions The signal level increases above the normal level when exposure begins (in the first 5 minutes of the experiment). The maximum is reached after about 10 minutes of exposure, and exceeds the normal level by approximately 60 % (see the curve in the 15th minute).
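Because the effect of the field is quantified through the effective (RMS) value of the dZ signal, the trend reported in the conclusions can be reproduced from a recorded waveform with a simple windowed RMS, for example computed minute by minute. The window length and sampling rate below are assumptions for illustration only.

```python
import numpy as np

def windowed_rms(dz, fs, window_s=60.0):
    """RMS of the dZ signal in consecutive non-overlapping windows (default: one minute)."""
    n = int(window_s * fs)                       # samples per window
    n_win = len(dz) // n
    x = np.asarray(dz[:n_win * n], float).reshape(n_win, n)
    return np.sqrt(np.mean(x**2, axis=1))

# relative change with respect to the pre-exposure (normal) level:
# rel = windowed_rms(dz, fs) / windowed_rms(dz, fs)[0]
```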
2022-06-28T02:51:08.478Z
2010-01-01T00:00:00.000
{ "year": 2010, "sha1": "ca9b714101a25d0c224436b86638970d819884a0", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/200/12/122007", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ca9b714101a25d0c224436b86638970d819884a0", "s2fieldsofstudy": [ "Engineering", "Medicine", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
15631443
pes2o/s2orc
v3-fos-license
HSPC117 Is Regulated by Epigenetic Modification and Is Involved in the Migration of JEG-3 Cells The human hematopoietic stem/progenitor cell 117 (HSPC117) protein is an essential component of protein complexes and has been identified to be involved in many important functions. However, how this gene expression is regulated and whether the HSPC117 gene affects cell migration is still unknown. The aim of this study was to identify whether HSPC117 mRNA expression is regulated by epigenetic modification and whether HSPC117 expression level affects the expression of matrix metalloproteinase 2 (MMP 2), matrix metalloproteinase 14 (MMP 14), and tissue inhibitor of metalloproteinases 2 (TIMP 2), and further affects human placenta choriocarcinoma cell (JEG-3) migration speed. In our epigenetic modification experiment, JEG-3 cells were cultured in medium with the DNA methyltransferase inhibitor 5-aza-2'-deoxycytidine (5-aza-dC), the histone deacetylase (HDAC) inhibitor trichostatin A (TSA), or both inhibitors. Then, the HSPC117 mRNA and protein expressions were assessed using real-time quantitative PCR (qPCR) and Western blot assay. The results showed that, compared to the control, HSPC117 mRNA expression was increased by TSA or 5-aza-dC. The highest HSPC117 expression level was found after treatment with both 5-aza-dC and TSA. Further, in order to investigate the effect of HSPC117 on MMP 2, MMP 14, and TIMP 2 mRNA expressions, pEGFP-C1-HSPC117 plasmids were transfected into JEG-3 cells to improve the expression of HSPC117 in the JEG-3 cells. Then, the mRNA expression levels of MMP 2, MMP 14, TIMP 2, and the speed of cell migration were assessed using the scratch wound assay. The results showed that over-expression of HSPC117 mRNA reduced MMP 2 and MMP 14 mRNA expression, while TIMP 2 mRNA expression was up-regulated. The scratch wound assay showed that the migration speed of JEG-3 cells was slower than the non-transfected group and the C1-transfected group. All of these results indicate that HSPC117 mRNA expression is regulated by epigenetic modification; over-expression of HSPC117 decreases MMP 2 and MMP 14 transcription, reduces cell migration speed, and increases TIMP 2 transcription. Introduction The human hematopoietic stem/progenitor cell 117 (HSPC117) protein, also known as C22orf28 (with an analogous protein in Bacteria and Archaea, called RtcB), is a member of the UPF0027 family [1,2]. The HSPC117/RtcB protein has been determined to be an essential subunit of a tRNA ligase complex that is involved in tRNA splicing and other RNA repair reactions [3]. It has been demonstrated that this protein family has high sequence similarity with proteins in Eucarya, Bacteria, and Archaea [4]; for example, murine focal adhesion associated protein (FAAP), a homologous protein of HSPC117, is 99% identical to human HSPC117 [5]. Moreover, as an essential component of protein complexes, HSPC117 is also present in the TNF-α mRNA 3' AU-rich element binding complexes and osmotic response element binding protein KIAA0827 [6]. Previous studies have shown that FAAP interferes with the activation of mitogen activated protein kinase (MAPK) by inducing levels of extracellular signaling related kinase (ERK) dephosphorylation and/or reducing phosphorylation in mice [5]. Further, HSPC117 acts as an activator of Serum Response Factor (SRF), which is a transcription factor with important roles in the regulation of the actin cytoskeleton [7]. 
Recent studies showed that HSPC117 was important in embryo and placenta development [8]. Although HSPC117 has been identified to be involved in many important functions, research into this gene is still limited, and how this gene expression is regulated is still unknown. Recent studies showed that HSPC117 was expressed in mouse pre-and post-implantation embryos. When HSPC117 RNAi knock-down embryos were transferred into pseudopregnant females, a large number of embryo deaths were observed after nine days of pregnancy [8]. This study showed that mouse in vivo produced (IVP) and somatic cell nuclear transfer (SCNT) blastocyst HSPC117 expression was significantly different. Further, placental abnormalities were found in HSPC117 RNAi and low expression embryos. Other studies showed that HSPC117 protein participates in the spreading initiation center (SIC) during the early stages of cell spreading [7] and in cell adhesion. It is commonly stated that the efficiency of successful development of SCNT embryos is less than that of IVP embryos because of incomplete or error-prone epigenetic reprogramming [9]. Further, cell migration is involved in embryonic development and placental formation [10]. Thus, we speculate that HSPC117 might be regulated by one or more epigenetic patterns and involved in cells migration. Histone deacetylation and DNA methylation are important forms of epigenetic modification [11]. 5-aza-2'-deoxycytidine (5-aza-dC) can inhibit the activity of DNA methyl-transferase (DNMT), and trichostatin A (TSA) can inhibit non-competitively the activity of histone deacetylase (HDAC) [12]. In this study, we committed to characterize the regulation pattern of the HSPC117 gene and analyze how TSA and 5-aza-dC influence the expression of the HSPC117 gene. We have known that adherent cell movement is thought to be a result of a multi-factorial process, such as cell interactions with the extra-cellular matrix (ECM) and with adjacent cells [13]. The foundation of cell migration is the recognition and interaction between cells and specific extra-cellular matrix (ECM) components. Matrix metalloproteinase (MMPs) degrade ECM proteins, and create space for cell motility. Tissue inhibitor of metalloproteinases (TIMPs) effectively down-regulate the effect of MMPs. Both MMPs and TIMPs are involved in spatial and temporal ECM remodeling. HSPC117 is thought to regulate cell motility because it is specifically located in the early SIC at the time that cell adhesion occurs [14], and some experiments have proven that it affects cell adhesion through regulating vinculin-paxillin association [7]. However, there is a lack of data regarding HSPC117 alteration of the balance between MMPs and their inhibitors during the cell migration. The scope of this paper was to characterize the regulation pattern of HSPC117 and analyze the effect of TSA and 5-aza-dC on its expression level. Additionally, the expressions of MMP 2, MMP 14, and TIMP 2, and cell migration speed, when HSPC117 was over-expressed, were observed to examine the association between HSPC117 expression level and epigenetic modification, and cells migration. Effect of Histone Deacetylation and Methylation on HSPC117 Expression To determine whether HSPC117 mRNA and protein expressions are associated with epigenetic modification, we analyzed the relationship between TSA/5-aza-dC and HSPC117 expression levels in human placental choriocarcinoma cell line (JEG-3) cells using qPCR and Western blot assays. 
As shown in Figure 1A, with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as the reference gene, the HSPC117 mRNA expression level in each treatment group was higher than that of the control group. The expression level was about 4.2 times higher than that of the control group (p < 0.01) when cells were treated with a combination of both inhibitors (T + aza); this combination treatment group had the highest HSPC117 transcriptional level. When cells were treated with 5-aza-dC or TSA separately, the HSPC117 mRNA levels were about 4.0 times (p < 0.01) and 2.6 times (p < 0.05) higher than that of the control group, respectively. Similar results were obtained when β-actin was used as the reference gene. When cells were treated with the combination of both inhibitors (T + aza), or with 5-aza-dC or TSA separately, HSPC117 mRNA levels were about 3.5, 2.7, and 1.9 times higher than that of the control group (Figure 1B). Further, the results for HSPC117 protein levels were similar (Figure 1C,D). The results suggest that HSPC117 mRNA and protein expression could be epigenetically regulated by TSA, 5-aza-dC, or both. To determine whether HSPC117 mRNA expression affects the extra-cellular matrix (ECM), we detected MMP 2, MMP 14, and TIMP 2 mRNA expressions in the JEG-3 cell lines with over-expressed HSPC117 and in C1-transfected cells in vitro. Basal expressions of MMP 2, MMP 14, and TIMP 2 were also detected in the non-transfected JEG-3 cell lines, which were used as normal controls. As shown in Figure 2A (GAPDH as reference gene), qPCR analysis showed that MMP 2 and MMP 14 mRNA expression was slightly higher, but that TIMP 2 mRNA expression levels were slightly lower (p > 0.05), in the C1-transfected JEG-3 cell lines compared to the control group; however, the C1 plasmid had no significant effect on the expression of MMP 2, MMP 14, and TIMP 2 mRNA. Expressions of MMP 2 and MMP 14 mRNA were significantly down-regulated in the JEG-3 cell lines with over-expressed HSPC117, being only 0.46-fold (p < 0.05) and 0.31-fold (p < 0.05) of those in the normal JEG-3 cell lines, respectively. However, TIMP 2 mRNA was up-regulated, being 3.3-fold (p < 0.01) higher than that of the non-transfected group. Similar results were obtained when β-actin was used as the reference gene. As shown in Figure 2B, TIMP 2 mRNA was about 2.8-fold higher than in the control group, while MMP 2 and MMP 14 mRNA levels were about 0.42- and 0.39-fold of the control levels. These results showed that MMP 2, MMP 14, and TIMP 2 mRNA expressions were significantly affected by the mRNA expression level of HSPC117. It is important to note that over-expression of the HSPC117 gene changed the proportions of MMP 2, MMP 14, and TIMP 2, implying, as a clear consequence, a decrease in the amount of the TIMP 2/MMP 14/pro-MMP 2 complex. Effect of HSPC117 on JEG-3 Cell Migration and Wound Closure in Vitro A significant attenuation of cell migration was observed in HSPC117 over-expressed JEG-3 cells. As seen in Figures 3 and 4, after cells were cultured for 24 h, the mean cell migration distances of the C1/HSPC117, C1, and control cells were 9.46 ± 1.12, 12.89 ± 2.99, and 14.21 ± 0.87 μm, respectively; thus, the migration distance of the HSPC117 over-expressed JEG-3 cells was about 66.57% of that of the control group (p < 0.05) and 69.82% of that of the C1 group (p < 0.05). Similarly, from 24 to 48 h, the migration distance of the HSPC117 over-expressed JEG-3 cells was about 80.44% of that of the control group (p < 0.05) and 83.15% of that of the C1 group (p < 0.05).
From 48 to 72 h, HSPC117 over-expressed JEG-3 migration distance was about 74.59% of that of the control group (p < 0.05) and 83.15% of that of the C1 group (p < 0.05). This may suggest that the migration speed of the control group cells was the fastest, that the C1 cells were slower than the control group cells (p > 0.05), and that the C1/HSPC117 cells were slower than both the control group cells and C1 cells (p < 0.05). Thus, over-expression of HSPC117 may cause a decrease in the migration speed of JEG-3 cells. Relationship of HSPC117 with Epigenetic Inheritance HSPC117 is an RNA ligase that catalyzes the GTP-dependent ligation of RNA with 5'-OH and either 2',3'-cyclic phosphate or 3'-phosphate ends [15,16]. Both in vitro and in living cells, HSPC117 depletion mediated by RNA interference inhibited maturation of intron-containing pre-tRNA [17,18]. The sequence of the HSPC117 protein family is highly conserved in all domains of life; this suggests that it has RNA ligase roles in various organisms. When Ygberg et al. mutationally inactivated cold-shock-associated exoribonuclease polynucleotide phosphorylase (PNPase) in S. enterica, one outcome was an increase in RtcB gene expression [19]. RtcB depletion by RNAi was correlated with Parkinson symptoms and neurodegeneration in the Caenorhabditis elegans Parkinson disease model. RtcB overproduction resulted in neuroprotection [20]. When mouse HSPC117 RNAi knock-down embryos were transferred into pseudopregnant females, a large number of embryo deaths were observed after nine days of pregnancy [8]. Therefore, we speculate that HSPC117 might be influenced by epigenetic modification. However, there are very few reports about its expression regulation by epigenetic modification. To determine if regulation of HSPC117 mRNA expression was affected by epigenetic modification in vitro, JEG-3 cell lines were treated with inhibitors of methylation alone, histone deacetylation alone, or in combination, to examine how those inhibitors influence HSPC117 expression. We found it intriguing that the mRNA and protein expressions of HSPC117 were induced by TSA and 5-aza-dC. The highest expression level was observed when both inhibitors were added to the culture medium; this suggests that HSPC117 mRNA and protein expressions may be synergistically up-regulated by TSA and 5-aza-dC. Although both inhibitors may regulate HSPC117 expression, there are still some limitations in our study. Mainly, we did not perform 5-aza-dC and TSA-response elements analysis in the HSPC117 promoter regulatory region to determine the possible genetic regulation mode of HSPC117. RtcB is the bacterial and archaeal homolog of HSPC117 and RtcA is coregulated by sigma54 in an operon. The region upstream of the transcription start site of the rtcA/rtcB mRNA contains -12 TTGCA and -24 TGGCA elements, respectively, which is the characteristic of sigma 54-dependent promoters [2]. Further, other research groups have shown that the human HSPC117 gene is located on chromosomes 22, and that human Chrs 22 is a CpG-rich island. The CpG islands are important genomic landmarks. They are concentrated in highly acetylated gene rich regions. Many gene expressions are regulated by CpG island epigenetic status. The HSPC117 promoter also has a CpG island involving the first exon [21]. We suspect that HSPC117 expression level was up-regulated when cells were treated with 5-aza-dC because the 5-aza-dC decreased the methylation level of CpG islands in the promoter region. 
Previous studies in leukemia cells have shown several gene expressions can be induced by 5-aza-dC, although their promoters are not directly affected by methylation [22]; this suggests that 5-aza-dC might exert its influence on regulating gene expression through a methylation independent or non-dependent manner. When JEG-3 cells were treated with the HDAC inhibitor, TSA, the expression level of HSPC117 was slightly increased, while the combination of 5-aza-dC and TSA resulted in a significantly higher expression of HSPC117 than 5-aza-dC or TSA alone. No report about TSA response element was found in exons of HSPC117 to its promoter; this indicates that an indirect mechanism might be responsible for TSA induction. TSA and 5-aza-dC have been widely used for studying epigenetic modification of many genes because of their specific inhibition of histone deacetylation and methylation, respectively [23,24]. By remodeling chromatin via directly converting methylated DNA to unmethylated DNA or unacetylated histones to the acetylated state, TSA or 5-aza-dC usually cause global changes in gene expression, allowing easy access of the transcription machinery to gene promoters [25]. This is why many transcriptional activities of non-histone transcription factors could be affected by the HDAC inhibitor [26]. The Relationship among HSPC117, MMPs, TIMPs, and Cell Migration There are some lines of evidence that HSPC117 is involved in cell adhesion. However, there is a lack of data regarding the effect of HSPC117 expression on cell migration. To our knowledge, MMPs and TIMPs participate in the regulation of cellular migration processes by interacting with components of ECM [27]. MMPs degrade components of ECM, while they can be specifically inhibited by TIMPs; thus, MMPs and TIMPs play an important role in remodeling the basement membrane (BM) and ECM [28]. In this process, gelatinases (MMP 2) play an important role, and they have the unique ability to degrade BM. TIMP 2 is a known inhibitor of MMP 2. MMP 14 is known as a membrane-type MMP, which is specifically involved in the processes of cell migration and invasion in a paracrine manner to affect close surroundings [29]. In order to verify that HSPC117 contributes to cell migration through regulation of MMPs and TIMPs, we studied MMP 2, MMP 14, and TIMP 2 mRNA expressions in the JEG-3 cells with over-expressed HSPC117. In our study, over-expression of HSPC117 significantly decreased the expression of MMP 2 and MMP 14 mRNA, and expression of TIMP 2 mRNA apparently increased. We have known that MMP 2 activation at the cell surface requires the participation of MMP 14 and TIMP 2. Through the formation of the trimolecular noncovalent complex of TIMP 2/MMP 14/por-MMP 2 (MMP 2 inactive zymogen pro form), MMP 2 is activated and released to the extracellular space, degrades extracellular matrix gelatin/collagen IV and stimulates cell migration. The concentrations of all three molecules are important in this process [30,31]. MMP 14 can activate MMP 2 in a specific manner, and it is believed that an optimal ratio of MMP 14 to TIMP 2 to activate MMP 2 is in the range of 3:1 to 3:2 [32,33]. It is important to note that over-expression of the HSPC117 gene changed the proportion of MMP 2, MMP 14, and TIMP 2 in our reverse transcription real-time quantitative PCR (RT-qPCR) study. MMP 2 and MMP 14 mRNA declined, which meant a decrease in the amount of the TIMP 2/MMP 14/por-MMP 2 compound, and, as a result, inhibition of MMP 2 activity. 
Contrarily, TIMP 2 mRNA significantly increased, and an excess of TIMP 2 may have inhibited the activity of MMP 2. TIMP 2 is not only the inhibitor of MMP 2 but also an inhibitor of MMP 9; both MMP 2 and MMP 9 are enzymes that degrade gelatin. Our results indicate that the ability to degrade gelatin in JEG-3 with over-expressed HSPC117 declines. We conducted a scratch wound assay to investigate whether over-expression of HSPC117 resulted in decreased migration speed through the surface covered with gelatin. The result showed that the migration speed of JEG-3 cells with over-expressed HSPC117 declined significantly compared to controls, and this is in concordance with our qPCR results. This study lends further support to the supposition that HSPC117 inhibits migration of cells on the surface of the extracellular matrix by modulation of balance between MMP 2, MMP 14, and TIMP 2. Gene Cloning Human cervical carcinoma cells were used for the cloning. Total RNA was extracted from human cervical carcinoma cells using a Trizol reagent (Invitrogen, Grand Island, NY, USA) according to the manufacturer's instructions. cDNA samples were amplified by PCR using specific sense and antisense primers. According to the Human HSPC117 cDNA sequences (GenBank: NM_014306.4), two pairs of primers F and R were designed. F: 5'-AAGCTTATGAGTCGCAGCTATAATGATGAG-3', R: 5'-GGATCCCTATCCTTTGATCACAGCAATTGGTC-3'. The PCR products were subjected to electrophoresis on a 1% agarose gel, and the expected size of the amplified PCR product was 1518 bp. Construction and Transfection of EGFP-HSPC117 Expression Plasmid We used F and R primers to amplify the full coding sequence of the human HSPC117 gene. HindIII and BamHI restriction sites were incorporated at the 5' ends of the forward and reverse primers. After T-A cloning, the PCR fragment was double-digested using HindIII and BamHI. The product was then inserted into the linear pEGFP-C1 vector, which was digested using the same enzymes, to generate the pEGFP-C1-HSPC117 plasmid, named C1/HSPC117, and the control plasmid was named C1. Subsequently, pEGFP-C1-HSPC117 was sequenced to verify the sequence of the inserted fragment. Prior to the day of transfection, JEG-3 cells were plated into 6-well plates. When the cells had reached approximately 80% confluence, transient transfections were performed using 10 μL Lipofectamine 2000 reagent (Invitrogen, Carlsbad, CA, USA) with 7 µg of C1 or C1/HSPC117 plasmid DNA. The transfection medium was replaced with normal growth medium after 6 h. HSPC117 mRNA expression in the cells was assessed by RT-PCR. All constructs were confirmed by sequencing. RNA Extraction and Quality Control Total RNA isolated and obtained from each cell line was extracted with TRIzol reagent (Invitrogen, Grand Island, NY, USA), following the instructions of the manufacturer. The concentration and purity of the total RNA were spectrometrically assessed with spectrophotometer Evolution™ 201 (Thermo Scientific Evolution 201, Chicago, IL, USA). The absorbance was measured at 260 and 280 nm. All the RNAs used in this study have the absorbance ratio of A260 nm/A280 nm was between 1.8-2.0, which indicates that the RNA is pure. The concentration of RNA was calculated as follows: RNA concentration (µg/mL) = (OD 260) × (dilution factor) × (40 µg RNA/mL)/(1 OD 260 unit). In addition, the integrity of the total RNA was assessed by visualization of the 28S/18S band pattern. 
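A small sketch of the spectrophotometric checks just described (concentration from A260 using the stated formula, and the A260/A280 purity window of 1.8-2.0); the readings below are invented example values.

```python
# Illustrative sketch of the RNA quality checks described above; the
# spectrophotometer readings are invented, only the formulas follow the text.
def rna_concentration_ug_per_ml(a260: float, dilution_factor: float) -> float:
    """RNA (ug/mL) = A260 x dilution factor x 40 ug RNA/mL per A260 unit."""
    return a260 * dilution_factor * 40.0

def is_pure(a260: float, a280: float) -> bool:
    """Accept samples whose A260/A280 ratio falls in the 1.8-2.0 window."""
    return 1.8 <= (a260 / a280) <= 2.0

a260, a280, dilution = 0.62, 0.33, 20      # example readings and dilution
print(f"concentration: {rna_concentration_ug_per_ml(a260, dilution):.0f} ug/mL")
print(f"A260/A280 = {a260 / a280:.2f}, pure: {is_pure(a260, a280)}")
```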
At least 200 ng of total RNA was loaded onto a 1% denaturing agarose gel stained with ethidium bromide (EtBr) and visualized using a GelDoc2000 (Bio-Rad, Hercules, CA, USA). When sharp, clear 28S and 18S rRNA bands were observed and the 28S/18S band ratio was approximately 2, the total RNA was considered intact. Total RNA with confirmed integrity was then used for subsequent experiments. Reverse Transcription Real-Time Quantitative PCR (RT-qPCR) One microgram of total RNA per sample was used for reverse transcription. First-strand cDNA was synthesized using the M-MLV First Strand Kit (Invitrogen, Grand Island, NY, USA) according to the manufacturer's protocol, with oligo-dT as the primer. Following reverse transcription, qPCR amplification was carried out on a Stratagene (Palo Alto, CA, USA) Mx3000P qPCR system using the specific primers listed in Table 1. To determine the stability of the reference genes under our experimental conditions, GAPDH and β-actin mRNA levels in JEG-3 cells were measured in control cells and in cells treated with TSA or 5-aza-dC or over-expressing HSPC117. The results showed that GAPDH and β-actin mRNA were not regulated by our experimental conditions, indicating that GAPDH and β-actin are suitable reference genes for qPCR in this setting. The relative mRNA expression levels of HSPC117, MMP 2, MMP 14, and TIMP 2 were normalized to GAPDH and β-actin. The relative qPCR amplification efficiencies, determined by standard curve analysis, were 104% (GAPDH), 99% (β-actin), 97% (HSPC117), 103% (MMP 2), 99% (MMP 14), and 98% (TIMP 2); an amplification efficiency of approximately 100% was therefore assumed, and relative expression levels were calculated with the formula 2^−ΔΔCt, where ΔΔCt = (Ct,Target − Ct,GAPDH/β-actin)treated − (Ct,Target − Ct,GAPDH/β-actin)control. Western Blot Analysis JEG-3 cells were washed in phosphate-buffered saline (PBS) and incubated for 30 min on ice in lysis buffer, and the lysates were centrifuged at 16,000× g at 4 °C for 10 min. The supernatants were used to measure the protein content by the bicinchoninic acid (BCA) method, and the pellets were heated at 99 °C for 10 min in sodium dodecyl sulfate (SDS) loading buffer. Thirty micrograms of cell lysate protein was separated by 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then electroblotted onto polyvinylidene difluoride (PVDF) membranes. The membranes were blocked with 5% nonfat dry milk in Tris-buffered saline-Tween (TBST) for 1 h and then incubated with primary antibody (1:500 dilution for HSPC117 and 1:2000 for GAPDH/β-actin) overnight at 4 °C. After being washed in TBST, membranes were incubated with peroxidase-conjugated goat anti-rabbit secondary antibody (1:3000 dilution) for 1 h at room temperature. Blots were detected using a chemiluminescence detection and analysis system. Scratch Wound Assay To assay cell migration, JEG-3 cells were transfected with C1/HSPC117 plasmids, C1 plasmids, or nothing (control), and seeded in 24-well plates coated with diluted gelatin (Sigma, Santa Clara, CA, USA). Once the cells reached confluence, the monolayer was wounded by dragging a plastic tip across it. Cells were incubated in DMEM medium with 1% serum, and phase contrast images were taken at 0, 24, 48, and 72 h of incubation.
Five fields were randomly selected and the distances migrated by the cells were measured under a light microscope (Carl Zeiss, Oberkochen, Germany). Experiments were repeated at least three times. Statistical Analysis All experiments were performed at least three times, with reproducible results. Statistical analysis was performed using SPSS 15.0 software (SPSS, Chicago, IL, USA). Statistical comparisons were made with one-way ANOVA, and Tukey's multiple comparison test was used as the post hoc test when differences were significant. Differences were considered statistically significant when p < 0.05. Conclusions Our results indicate that HSPC117 mRNA and protein expression are regulated by epigenetic modification; inhibitors of DNA methylation (5-aza-dC) and histone deacetylation (TSA) induce high expression of HSPC117 mRNA and protein. Our further results indicate that over-expression of the HSPC117 gene reduces MMP 2 and MMP 14 gene expression while up-regulating TIMP 2 gene expression, and that, as a result, JEG-3 cells migrate more slowly than control cells.
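As a worked illustration of the scratch-assay comparison and the one-way ANOVA with Tukey's post hoc test described in the Statistical Analysis above, here is a hedged sketch; the migration distances are invented and do not reproduce the study's measurements.

```python
# Hedged sketch (not the authors' script): one-way ANOVA followed by
# Tukey's post hoc test on scratch-wound migration distances across the
# three transfection groups. Distances (in um) are invented.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

distances = {
    "control":     [310, 298, 325, 305, 318],
    "C1":          [295, 288, 301, 292, 299],
    "C1/HSPC117":  [228, 241, 219, 236, 230],
}

# One-way ANOVA across the three groups
f_stat, p_value = f_oneway(*distances.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's multiple-comparison test, run only if the ANOVA is significant
if p_value < 0.05:
    values = np.concatenate(list(distances.values()))
    groups = np.repeat(list(distances.keys()), [len(v) for v in distances.values()])
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```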
Evaluation of non-targeting, C- or N-pH (low) insertion peptide modified superparamagnetic iron oxide nanoclusters for selective MRI of liver tumors and their potential toxicity in cirrhosis Superparamagnetic iron oxide nanoclusters (SPIONs) modified with pH (low) insertion peptide (pHLIP) could be advantageous for magnetic resonance imaging (MRI) diagnosis of liver tumors at the early stage due to their unique responsiveness to the tumor acidic microenvironment when tumor markers are unknown. However, many critical aspects including the effectiveness of selective MRI in liver tumors, types of delivery and the potential safety profile in cirrhosis need to be fully evaluated. In this study, we report the evaluation of non-targeting, C- or N-pHLIP modified SPIONs as the contrast agent for selective MRI of liver tumors and their potential toxicity profile in cirrhosis. It was found that N-pHLIP modified SPIONs did not result in the loss of liver tumor in the T2-weight MRI but provided additional dynamic details of tumor structures that would enhance the diagnosis of liver tumors at a small size below 8 mm. In addition, an enhanced safety profile was found for N-pHLIP modified SPIONs with almost fully recoverable impact in cirrhosis. In contrast, the poly-d-lysine assembled SPIONs and C-terminus linked pHLIP SPIONs had non-tumor specific MRI contrast enhancement and potential safety risks in cirrhosis due to the iron overload post injection. All these results implied the promising potential of N-terminus linked pHLIP SPIONs as an MRI contrast agent for the diagnosis of liver tumors. Introduction Liver cancer has been one of the major cancer burdens with increasing mortalities over recent years from 695 000 in 2008 to 781 631 in 2013 according to reports by the global cancer observatory. 1 On the basis of the Barcelona clinic liver cancer staging system, the early diagnosis and treatment of liver cancer at stage 0 (single tumor < 2 cm) could effectively increase the 5 year survival rate up to 70%. 2,3 Magnetic resonance imaging (MRI) has been an effective non-invasive diagnostic method for liver tumors, yet the challenge remains in the early liver tumor diagnosis due to limits in distinguishing them from liver lesions such as cirrhosis. 3,4 Superparamagnetic iron oxide particles (SPIOs) have been used as an MRI contrast enhancement agent to improve the diagnostic accuracy of liver tumors in cirrhosis. [5][6][7][8] In addition, SPIO has also been used to assess stages of cirrhosis and non-alcoholic steatohepatitis as well as the effectiveness of treatment of non-alcoholic fatty liver disease. [9][10][11][12][13] Furthermore, SPIO has been widely used in diagnosis and therapy of other cancers such as prostate cancer diagnosis and drug delivery. [14][15][16][17][18][19] On the other hand, the potential cytotoxicity of SPIO has recently been considered as a major safety concern, 19-21 especially in chronic liver diseases due to SPIO induced iron overload resulting in increased risks of steatohepatitis and cirrhosis progression and liver lesions in obesity. [22][23][24][25][26] One way to enhance the safety prole of SPIO was to utilize the target-specic delivery strategy such as cRGD ligand or pH (low) insertion peptide (pHLIP). [27][28][29][30] pHLIP has been successfully utilized to target tumor-specic acidic microenvironment due to its unique conformational change under acidic conditions to a-helical membrane insertion mode. 
[31][32][33] It was specially advantageous for early tumor diagnosis because the molecular markers and subtypes of liver tumors in most cases are unknown at early stage. Furthermore, pHLIP is capable to achieve dual modes of cargo delivery to tumor cells through peptide modication at either N-or C-terminus. 33, 34 We recently reported that pHLIP linked SPIO nanoclusters (SPION) could be an effective contrast agent in the MRI of subcutaneous tumor models. 30 However, many critical aspects remain unclear in the liver tumor diagnosis. For example, a critical concern is whether the accumulation of targeting SPION in liver tumor would result in the full disappearance of the tumor in T2-weighted MRI due to the concurrent uptake of SPION by the normal liver Kupffer cells. Another one is what signicant advantages pHLIP modi-ed SPIONs could have over non-targeting SPIO for the MRI of the orthotopic liver tumor as well as the difference between the C-and N-terminus pHLIP delivery of SPION. Further question is how the targeting SPIONs would have any improvement in the safety prole in cirrhosis due to the iron overload. In this study, we report the evaluation of the effectiveness of non-targeting, Cor N-pHLIP modied SPION for selective MRI of liver tumor and their potential toxicity prole in cirrhosis as the potential MRI contrast agent. Material and methods All chemicals were obtained from Sigma-Aldrich (USA) and Sinopharm Chemical Reagent Co., Ltd. (China) unless specied otherwise. The hydroxyethyl starch coated SPIO was synthesized as reported, 35 and the detailed steps were described in the ESI. † Murine liver cancer cell line H22 were obtained from Shanghai Institute of Life Science Cell Culture Center (China) and maintained via passages intraperitoneally in BALB/c mice. Synthesis of pHLIP modied poly-D-lysine polymers C-and N-terminal inserted cysteine pHLIP peptides were commercially synthesized by GenScript (NJ, USA) with certied analysis (protein sequences shown in Fig. 1a). Poly-D-lysine (PDL, 30-70 kDa) in phosphate buffered saline (PBS, 1 mg mL À1 , 100 mL) was rst mixed with maleimide-(PEG) 24 -succinimidyl ester (Thermo Scientic Pierce, USA) in DMSO (10 mg mL À1 , 20 mL) at room temperature. Aer 1 h, the PEG-maleimide modied PDL was puried via centrifugation with a Zeba spin desalting column (7 kDa cutoff, Thermo Fisher Scientic, USA). The resulting ltrate was then mixed with pHLIP peptide solutions (2 mg mL À1 , 100 mL in PBS). The desired pHLIP modied PDLs were obtained through centrifugation with the Zeba spin desalting column. The UV absorbance spectra of the resulting modied polymers were obtained with a T9 UV-Vis spectrometer (Persee Analytics Inc., China) in PBS. The uorescence emission spectra were recorded with an F-4500 uorescence spectrometer (Hitachi, Japan) using the excitation wavelength at 280 nm. The circular dichroism (CD) spectra were obtained with a J-810 CD spectrometer (JASCO Inc., USA) at 25 C in PBS at pH 4.0 and 8.0 containing 1.0 mg mL À1 lipids including 1,2-distearoyl-sn-glycero-3-phosphocholine, 1,2-dioleoyl-sn-glycero-3phosphocholine and cholesterol at a weight ratio of 2 : 1 : 1. Assembly of SPIO nanoclusters Three types of SPIO nanoclusters were prepared with PDL, Cand N-terminus linked pHLIP PDL polymers with the hydrodynamic diameter size around 70 nm as PDL-SPION, C-pHLIP-SPION and N-pHLIP-SPION, respectively. 
Typically, the polymer solution in PBS (50 mg mL À1 for PDL, 150 mg mL À1 for C-pHLIP-PDL or 100 mg mL À1 for N-PDL-pHLIP) was slowly added to an SPIO solution (10 mg mL À1 in PBS at 8.0) at 1 : 1 volume ratio under sonication and then equilibrated at room temperature for 20 h to afford the desired nanoclusters. The hydrodynamic size and zeta-potential of the nanoclusters were determined with a Nano-ZS90 particle analyzer (Malvern, United Kingdom) in PBS. The transmission electron microscopic images were obtained with a Hitachi HT-7700 transmission electron microscope (Tokyo, Japan) using uranyl acetate staining. The stability of SPION was assessed in PBS containing 10% FBS over 24 h at 37 C with the Nano-ZS90 particle analyzer. The relaxivity (R 2 ) of assembled nanoclusters was obtained as the slope of a series of SPION concentrations over the 1/T2 that was determined with a clinical Magnetom Trio Tim MRI spectrometer (3.0 Tesla, Siemens Prisma, Germany) as reported. 35 Orthotopic liver tumor and cirrhosis mouse models All animal procedures were performed in accordance with the Guidelines for Care and Use of Laboratory Animals of the People's Republic of China and approved by the Animal Ethics Committee of Huazhong University of Science and Technology. SFP male BALB/c mice (6-7 weeks of age) were obtained from Beijing HFK Bioscience Co. Ltd., China. For the orthotopic liver tumor model, H22 subcutaneous tumors were rst grown at the right ank of BALB/c mice until the tumor size reached 14  14 mm approximately. The tumor tissue was then harvested, and small pieces (1  1 mm) were inserted in the liver of naive mice through surgical procedure under isourane/O 2 anesthesia. Typically, orthotopic liver tumors generally formed in mice over 10-20 days at a successful rate of 30% to the size of 6  6 mm approximately by MRI analysis. For the cirrhotic liver model, diethyl 1,4-dihydro-2,4,6-trimethyl-3,5-pyridinedicarboxylate (Acros Organics, USA) suspended in corn oil (10 mg mL À1 , pharmaceutical grade, Aladdin, China) was fed into mice via gavage at a dose of 0.25 mg g À1 body weight twice a week for four and half weeks. The formation of cirrhosis was conrmed by MRI analysis in week six as signicantly enlarged liver at a successful rate of 100% and directly used for the following studies. In vivo MRI assessment of liver tumors and cirrhosis with SPIONs The T2-weighted MRI of the mice were performed with a clinical Magnetom Trio Tim MRI spectrometer (3.0 Tesla, Siemens Prisma, Germany) using optimized sequences of 3000 ms repetition time, 80 ms echo time, 1.0 mm slice thickness, 1.0 mm slice space thickness and 30  96 mm eld of view. Prior to the SPION injection, MRI was rst performed on all mice as the baseline references. SPION solutions including PDL-SPION, C-pHLIP-SPION and N-pHLIP-SPION (100 mL each) were then intravenously injected at 5 mg Fe per kg body weight (3 mice per group). MRI was performed at 0, 2, 4 and 24 h post iv injection. The resulting MR images were processed with provided Syngo Fastview soware (Siemens, Germany). Toxicity proles of SPION in cirrhosis The toxicity proles of SPION in cirrhosis were assessed through the serum biochemistry tests on day 1, 2, 5, 10 and 15 post the single injection of SPION solutions at 5 mg Fe per kg body weight (5 mice per group). 
Typically, blood was collected on the specified day, and the obtained sera were assessed with serum biochemistry test kits, including levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (AKP), cholesterol (CHOL), high-density lipoprotein cholesterol (HDL) and low-density lipoprotein cholesterol (LDL) (Nanjing Jiancheng Bioengineering Institute, China). As a comparison, the serum biochemistry levels in cirrhosis mice without SPION injection or with injection of non-assembled SPIO were obtained at the same end point (day 15) as references. In addition, the body weights of mice in all groups were monitored over 15 days post SPION injection. The liver and spleen organ weights were recorded on day 15 post SPION injection. The iron level in the liver tissue was determined as previously reported. 35 All the data obtained were analyzed with the GraphPad Prism program using one-way ANOVA and Tukey's multiple comparison test. Stable SPIO nanoclusters produced through assembly with modified PDL polymers Three different types of assembled SPION were assessed in this study, i.e., non-targeting PDL assembled SPION and the C- or N-terminus linked pHLIP PDL assembled SPION. The acid-responsive pHLIP modified PDL polymers were first synthesized via the inserted cysteine residue at the C- or N-terminus of the peptide (sequences shown in Fig. 1a). The conjugation was achieved through amide formation of the PDL lysine residues with an activated succinimide ester of a PEG crosslinker, followed by thiol addition of the pHLIP cysteine onto the maleimide (Fig. 1a) as reported. 30 The PEG linkage was incorporated to maintain the water solubility of the modified polymers given the high hydrophobicity of pHLIP peptides. The synthesized C-terminus linked pHLIP PDL (C-pHLIP-PDL) and N-terminus linked pHLIP PDL (N-pHLIP-PDL) polymers had UV and fluorescence properties characteristic of the pHLIP peptide (Fig. 1b-c). 30 The amount of pHLIP attachment was estimated to be approximately 5% of the PDL lysine residues based on the fluorescence intensity of the modified polymers. 30 Moreover, both C- and N-terminus linked pHLIP PDL polymers exhibited conformational changes with more α-helix structure at pH 4.0 than at pH 8.0 (increased negative mdeg around 220 nm in Fig. 1d). 30 All these results indicated the successful conjugation of PDL polymers with the pHLIP peptide linked at either the C- or N-terminus. The assembly of SPIO nanoclusters was then accomplished with PDL and the synthesized pHLIP PDL polymers on the hydroxyethyl starch coated SPIO. The final nanocluster diameter was optimized to approximately 70 nm, which was found to be the most effective size for contrast enhancement in the MRI of subcutaneous tumor models. 30 The zeta average diameters of the assembled PDL-SPION, C-pHLIP-SPION and N-pHLIP-SPION were 65.9, 64.8 and 68.1 nm, respectively (Fig. 2a-c). The zeta potentials of all three nanoclusters were similar, in the range of −1.69 to −1.32 mV in PBS. Moreover, TEM analysis confirmed the assembly of SPIO into spherical nanoclusters by the PDL polymers with consistent diameter size (Fig. 2d-f), which were significantly different from the amorphous distribution of non-assembled hydroxyethyl starch coated SPIO (Fig. S1†).
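The relaxivity values reported next were obtained, as described in the methods above, from the slope of 1/T2 against Fe concentration. A minimal sketch of such a fit is shown below; the concentration and T2 values are illustrative and are not the study's measurements.

```python
# Illustrative sketch: the transverse relaxivity r2 is the slope of 1/T2
# against Fe concentration, recovered here by a simple least-squares fit.
import numpy as np

fe_mM = np.array([0.025, 0.05, 0.1, 0.2, 0.4])        # Fe concentration (mM)
t2_ms = np.array([520.0, 270.0, 138.0, 70.0, 35.5])   # measured T2 (ms), invented

inv_t2_per_s = 1.0 / (t2_ms / 1000.0)                 # 1/T2 in s^-1
slope, intercept = np.polyfit(fe_mM, inv_t2_per_s, 1) # linear fit

print(f"r2 relaxivity ~ {slope:.1f} mM^-1 s^-1 (intercept {intercept:.2f} s^-1)")
```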
The MRI contrast enhancement capability of assembled SPION were determined as the relaxivity (R 2 ) of 71.8, 72.3 and 70.7 mM À1 s À1 for PDL-SPION, C-pHLIP-SPION and N-pHLIP-SPION, respectively, which were higher than that of non-assembled SPIO (55.4 mM À1 s À1 ). Furthermore, the stability of these three assembled SPION was conrmed with no signicant change of the zeta average diameter size over 24 h at 37 C in PBS solution (pH 7.4) containing 10% serum (Fig. S2 †). Signicant MRI enhancement of orthotopic liver tumor with assembled SPION The enhancement of MRI contrast by assembled SPION in orthotopic liver tumor was investigated with the tumor of 5-8 mm size approximately. The T2-weighted MR images were obtained using a clinical 3.0 Tesla MRI instrument with settings optimized for mice study. The resulting representative MR images were shown in Fig. 3 with a scale bar of 5 mm. The slight blurry in MR images was due to the muscle relaxation during the long acquisition time and limits of the instrument. Nevertheless, signicant differences were found in MRI post the injection of assembled SPION. Firstly, post the injection of PDL-SPION or C-pHLIP-SPION, the MRI signals of the normal liver tissue decreased signicantly due to the uptake of SPION by the normal liver Kupffer cells. 35 On the other hand, the liver tumors remained unchanged post the SPION injection due to the dysfunctional macrophages in tumors. In addition, no signicant changes of the tumor images were observed over a period of 24 h post injection (representative images shown in Fig. 3a and b), which suggested there was no enhanced permeation and retention (EPR) effects by these two types of SPION in the tumor. In the case of N-pHLIP-SPION, decrease of MRI signal in the mouse liver were consistently found post injection. More importantly, time-dependent changes inside the tumor in MRI were observed over a period of 24 h post injection (Fig. 3c). In mouse number 1, certain interior structure of tumor was revealed at 2 h post injection as compared to that prior injection, which then disappeared slowly at 4 h and even more at 24 h. The tumor in mouse number 2 was found to have pronounced changes in the tumor perimeter in MRI at 2 and 4 h, which then become blurry at 24 h post injection. Similar observation of changes in the tumor core of mouse number 3 was also found at less discernible level. Of note, these contrast changes were observed at 2-3 mm resolution based on the scale bar in the MR images (Fig. 3c). Because the mice in the MRI were taken out of the instrument at each time point, further quantitative analysis was not possible. However, the contrast differences shown in the MR images (Fig. 3) were sufficiently enough as a qualitative analysis. Thus, all these results implied that there was EPR effect in the tumor by the N-pHLIP-SPION in a time-dependent manner eliciting details of the tumor post injection (Fig. 3c). Therefore, selective targeting of liver tumor by SPION would not result in the loss of the tumor signals in the T2-weight MRI but provide additional dynamic details of tumor structures that would enhance the diagnosis of liver tumor at a small size below 8 mm size. The selective liver tumor imaging in Fig. 3 might be attributed to the presence of N-terminus linked pHLIP peptide. The pH-induced membrane insertion mechanism by pHLIP is that the C-terminus of pHLIP was inserted across the cell membrane and located in the cytosol under acidic condition while the Nterminus remained at the outside of cells. 
31-33 Therefore, the C-terminus linked SPION would require an endocytosis process to pull SPION into cytosol whereas the N-terminus linked SPION would possibly reside outside tumor cells. Although gold nanoparticles had been shown to be effectively delivered into tumor cells via the C-terminus linked pHLIP, 34 no signicant MRI contrast enhancement of tumors was observed with C-pHLIP-SPION in our study, possibly due to the relatively large size of SPION and thus high membrane-cross energy required. In addition, the enhancement of tumor structure by N-pHLIP-SPION was possibly due to the signicantly overexpressed lactate dehydrogenase A in the tumor acidic cores. 36 Besides the liver tumor, the MRI contrast enhancement in cirrhosis was also investigated with the assembled SPION. The purpose of the study was to evaluate whether the contrast enhancement in MRI by SPION was able to differentiate the brosis lesion from the tumor under the same condition. Ideally, an orthotopic liver tumor model with additional cirrhosis would validate the advantage of assembled SPION in the MRI diagnosis. Unfortunately, the mortality rate was so high that few mice survived for the investigation with MRI. It was found that all the cirrhosis mice exhibited ununiformed ber-like pattern all across the liver prior SPIO injection similarly over 24 h. Interestingly, the MRI signal of cirrhosis liver was markedly reduced post the injection of PDL-SPION, C-pHLIP-SPION or N-pHLIP-SPION (representative MRI at 4 h shown in Fig. 4). Only a small fraction of ber-like pattern was barely observed post the SPION injection. Moreover, the contrast change in MRI clearly conrmed the enlarged liver organ in the cirrhosis with the maximum width  height about 20  10 mm in Fig. 4 as compared with that of the liver in tumor model at 15  7 mm size in Fig. 3. The enhancement by SPIONs in cirrhosis had been attributed to that cirrhosis only resulted in partial dysfunctional Kupffer cells that were still capable to take up SPION. 9-13 Improved safety prole of N-pHLIP-SPION in cirrhosis model To ensure the potential safety of N-pHLIP-SPION as an MRI contrast agent for liver tumors, the toxicity prole of the assembled SPION was determined in cirrhosis model at the same injection dose. This was because although the poly-Llysine assembled SPION at 5 mg Fe per kg injection dose had no toxicity impact in healthy mice, signicant risk potential was discovered in cirrhosis model with disrupted liver function, lipid metabolism and iron level due to the iron overload post SPION injection, 25 which indicated the safety prole in cirrhosis was a more better safety study model. 26 Thus, the serum liver function markers and lipid cholesterol levels including alkaline phosphatase (AKP), aspartate aminotransferase (AST), alanine aminotransferase (ALT), total cholesterol, high-density lipid cholesterol (HDL) and low-density lipid cholesterol (LDL) were monitored in cirrhosis mice over 15 days post the intravenous injection of assembled SPION. Analysis of liver functional serum markers is much more accurate and indicative than the tissue pathological analysis, especial when the potential toxicity in cirrhosis is at the early stage. It was found that all three liver function markers increased at 24 h post the injection of the assembled SPION at 5 mg Fe per kg dose, possibly as a result of septic shock in response to the iv injection of SPION. 25 The levels of these serum markers then decreased over 15 days with the ALT as the most decreased marker (Fig. 
5a-c). While the serum cholesterol levels uctuated a little bit over time aer the initial increase post the injection (Fig. 5d-f). Among all six serum markers tested, only the total cholesterol and LDL levels had no difference among PDL-SPION, C-pHLIP-SPION and N-pHLIP-SPION groups whereas the other four markers showed certain differences. All these results implied that impacts post the injection of assembled SPION were partial recoverable at various degree over 15 days in the cirrhosis model. To fully assess whether the injection of assembled SPION had any potential toxicity impact on cirrhosis, we included two more control groups on day 15 as a comparison, namely, the cirrhosis group on day 15 with no SPIO injection as a negative control and the cirrhosis group on day 15 post injection of non-assembled SPIO as the positive control. It was found that cirrhosis without SPIO injection had an AKP level at 90 U L À1 on day 15 whereas the injection of non-assembled SPIO resulted in an increased level at 106 U L À1 even aer 15 days (Fig. 6a). Similarly, the PDL-SPION injection induced a higher level at 111 U L À1 aer 15 days whereas that of C-pHLIP-SPION was at 99 U L À1 . Only the N-pHLIP-SPION group was the closest to no SPIO injection group at 94 U L À1 . Similar trends were consistently observed in the AST and ALT levels, i.e., those of N-pHLIP-SPION were almost the same to those of no SPIO injection versus those of PDL-SPION and C-pHLIP-SPION similar to those of non-assembled SPIO injection (Fig. 6b and c). Unfortunately, due to the high variation of individual mouse, there was no statistically signicant difference found. On the other hand, there was difference in the HDL level among the assembled SPION groups versus those nonassembled SPIO and no SPIO injection groups (Fig. 6d). Further comparison of the mouse body weight post injection indicated the PDL-SPION group had a signicant one gram less body weight than other groups (Fig. 6e). It was found that the PDL-SPION group did not gain any body weight over 15 days post injection while other groups had 1-2 g weight increase because the mice in all groups had similar body weight on day 0 prior injection. On the other hand, the liver organ weights were all similar at approximated 1.8 g (Fig. 6f), characteristic of enlarged liver in cirrhosis as compare with the normally 1 g weight of the healthy mouse liver. 25 In contrast, the iron level in the liver tissue was found to be different among these groups. The injection of non-assembled SPIO injection resulted in 10% iron level loss in liver tissue whereas the PDL-SPION injection led to a 10% increase of iron level in the liver tissue aer 15 days (Fig. 6g). Both of the pHLIP modied PDL-SPION had similar iron levels as that of no SPION injection group. Furthermore, decreased organ weight of the spleen was observed in the PDL-SPION and noneassembled SPIO group aer 15 days in contrast to that of N-pHLIP-SPION group that was closest to that of no SPIO injection group (Fig. 6h). Combined together, all these results indicated that N-pHLIP-SPION had an almost fully recoverable impact in the cirrhosis with similar liver function markers, iron level in liver tissue and spleen weight (except HDL levels) to that of no SPIO injection aer 15 days post injection. On the other hand, non-assembled SPIO and PDL-SPION had high potential risk at the 5 mg Fe per kg injection dose with increased AKP, decreased AST and ALT, altered iron level in liver tissue and decreased spleen organ weight. 
Despite the relatively high dose, the enhanced safety profile of N-pHLIP-SPION in cirrhosis demonstrated its feasibility for clinical applications. Our results implied that the enhanced safety profile was attributable to the modified PDL polymer assembly, because both non-assembled SPIO and PDL-SPION disrupted safety markers, indicating a high potential risk. On the other hand, non-assembled SPIO, polylysine assembled SPION and pHLIP modified PDL assembled SPION were all found to have low cytotoxicity in liver cells. 30,35 Thus, it was the disrupted iron homeostasis caused by SPION-induced iron overload in cirrhosis that was the safety concern. 25,26 This was further supported by the reduced spleen weight observed after 15 days with SPIO and PDL-SPION injections. It has been shown that the macrophages in the liver and spleen are mainly responsible for the uptake of SPIO and that it takes over 30 days in the spleen to slowly convert SPIO into iron in ferritin. 37,38 In addition, our results showed that the C-terminus linked pHLIP SPION also had less toxicity than non-modified PDL-SPION. Thus, it is possible that pHLIP linked SPION are metabolized more quickly in the liver and spleen owing to their pH-responsive nature, which is currently under investigation. Conclusion In this study we showed that the N-terminus linked pHLIP SPION form stable nanoclusters that serve as a selective MRI contrast enhancement agent for liver tumors with a significantly enhanced safety profile. N-pHLIP-SPION was capable not only of "brightening up" the tumor through reduction of the MRI signal of the normal liver, but also of providing additional details of the tumor in a time-dependent manner post injection. In addition, the assembled SPION could be used for MRI assessment of the cirrhotic liver owing to the uptake of SPION by partially dysfunctional Kupffer cells. Moreover, the injection of N-pHLIP-SPION had an almost fully recoverable impact in cirrhosis, with a safety profile similar to that of the cirrhosis group with no SPIO injection. In contrast, the PDL-assembled SPION and C-pHLIP-SPION showed non-tumor-specific MRI contrast enhancement and potential safety risks in cirrhosis due to iron overload by SPION. Therefore, targeting SPIONs to liver tumors would not result in the loss of the tumor signal in T2-weighted MRI but would provide additional dynamic details of tumor structures that would enhance the diagnosis of liver tumors at small sizes below 8 mm. All these results implied the promising potential of N-pHLIP-SPION as a selective MRI contrast agent for liver tumor diagnosis.
Social Network Characteristics Associated with Weight Loss among Black and Hispanic Adults with Overweight and Obesity Objective To examine social network member characteristics associated with weight loss. Methods Cross-sectional examination of egocentric network data from 245 Black and Hispanic adults with BMI ≥ 25 kg/m2 enrolled in a small change weight loss study. The relationship between weight loss at 12 months and characteristics of helpful and harmful network members (relationship, contact frequency, living proximity and body size) were examined. Results There were 2,571 network members identified. Mean weight loss was -4.8 (±11.3) lbs. among participants with network help and no harm with eating goals vs. +3.4 (±7.8) lbs. among participants with network harm alone. In a multivariable regression model, greater weight loss was associated with help from a child with eating goals (p=.0002) and coworker help with physical activity (p=.01). Weight gain was associated with having network members with obesity living in the home (p=.048) and increased network size (p=.002). Conclusions There was greater weight loss among participants with support from children and coworkers. Weight gain was associated with harmful network behaviors and having network members with obesity in the home. Incorporating child and co-worker support, and evaluating network harm and the body size of network members should be considered in future weight loss interventions. participant contracted to follow their goals at least 6 days per week. Participants were then randomized to receive either a positive affect/self-affirmation intervention in addition to the small change eating and physical activity intervention, or the small change intervention alone. Participants who received the positive affect/self-affirmation intervention were asked to reflect on positive moments in their lives, and recall proud moments when faced with obstacles completing their goals. 14 -15 Participants were enrolled from August 2012-September 2013 and followed for one year by trained community health workers at routine intervals (weekly for months 1-3; biweekly months 4-9; once monthly for months [10][11][12]. Close-out interviews were conducted at month 12. Measures Socio-demographic and clinical data were collected at enrollment. Participant height and weight were measured at enrollment and study completion. At study completion, participants completed an assessment of their social network by listing network members using the Convoy Model of Social Relations, a set of three overlapping concentric circles. 16 Network members were defined as "people who are important in your life right now." In the inner circle participants listed members "to whom you are so close it is hard to imagine life without"; in the middle circle "people to whom you may not feel quite that close to but who are still very important to you"; and the outer circle "people whom you haven't already mentioned but who are close enough and important enough in your life that they should be placed in your personal network." There was no limit on the number of members listed. Participants were asked to provide the following information on each member: age, relationship, frequency of contact (once a day, several times a week, once a week, a few times a month, or once a month), living proximity (lives in the home, within walking distance, within same borough, within the 5 boroughs, outside of New York City), and perceived body size. 
Body size was measured using the Stunkard figure rating scale. 17 , 18 Participants were asked the following question regarding each network member, "Did they help you with your eating goals in the SCALE study?" Answer options were: helped, made it more difficult (referred to as 'harmed') or neither. The same question was asked regarding physical activity. Using these data the following covariates were created: network help only, harm only, help and harm, and neither help nor harm. The same subgroups were created for physical activity. Covariates were created to describe the relationship of helpful and harmful members (i.e. at least 1 child helpful with eating goals), and the number of network members with obesity living in the home. Finally, participants were asked, "What did the people we just talked about say that helped or made it more difficult for you to follow your eating goals?" The same question was asked regarding physical activity. Statistical Analysis Descriptive statistics were calculated using means and proportions. One way analysis of variance (ANOVA) tests were used to compare mean values for weight loss stratified by network help and/or harm with eating and physical activity goals. Student's t-tests were used to compare mean weight loss between participants with vs. without network members helpful with eating goals (or physical activity goals) stratified by member relationship. A multivariable regression model was used to examine the relationship between weight loss (dependent variable) and network help and/or harm with goals. In the model, the following covariates were examined: harm only with eating goals, help and harm with eating goals, neither help nor harm with eating goals (compared to referent group = help only with eating goals). Network help and/or harm with physical activity was examined in the same model using similar subgroups (referent group = help only with physical activity goals). The model was adjusted for participant age, race/ethnicity, gender, education, study site, randomization group and network size. A second multivariable regression model was used to examine the relationship between weight loss and the following covariates: at least one child helpful with eating goals, at least one co-worker helpful with physical activity, and at least one network member with obesity living in the home. The model was controlled for participant age, race/ethnicity, gender, education, study site, randomization group and network size. In exploratory analyses the variable "at least one inner circle network member helpful with eating goals" was included in the fully adjusted model. Interactions by race/ethnicity and gender in the relationship between weight loss and child help with eating goals were examined in a fully adjusted model. In exploratory analyses, an average score was calculated for the number of network members helpful or harmful with eating goals (+1 for each helpful member, -1 for each harmful member and 0 for members neither helpful nor harmful). This score was examined in a linear regression model with weight loss as the outcome variable. Qualitative comments were reviewed by two independent evaluators who formulated a consensus on the major themes. Results A total of 405 participants were randomized in the SCALE trial; 247 completed the study and network data were collected on 245. Table 1 shows the baseline characteristics of participants who provided network data. 
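As an illustration of how the network covariates described in the Measures and Statistical Analysis sections above could be derived from member-level egocentric data, here is a hedged sketch; the column names and example rows are hypothetical and do not come from the study's dataset.

```python
# Hedged sketch (hypothetical data): building participant-level network
# covariates from a table with one row per (participant, network member).
import pandas as pd

members = pd.DataFrame({
    "participant_id": [1, 1, 1, 2, 2],
    "relationship":   ["child", "friend", "coworker", "child", "sibling"],
    "eating_effect":  ["helped", "harmed", "neither", "neither", "harmed"],
    "pa_effect":      ["neither", "neither", "helped", "helped", "neither"],
    "lives_in_home":  [True, False, False, True, False],
    "bmi_category":   ["normal", "obese", "overweight", "normal", "obese"],
})

def summarize(group: pd.DataFrame) -> pd.Series:
    helped = (group["eating_effect"] == "helped").any()
    harmed = (group["eating_effect"] == "harmed").any()
    return pd.Series({
        "network_size": len(group),
        "eat_help_only": helped and not harmed,
        "eat_harm_only": harmed and not helped,
        "eat_help_and_harm": helped and harmed,
        "eat_neither": not helped and not harmed,
        "child_helps_eating": ((group["relationship"] == "child")
                               & (group["eating_effect"] == "helped")).any(),
        "coworker_helps_pa": ((group["relationship"] == "coworker")
                              & (group["pa_effect"] == "helped")).any(),
        "obese_member_in_home": ((group["bmi_category"] == "obese")
                                 & group["lives_in_home"]).any(),
    })

covariates = members.groupby("participant_id").apply(summarize)
print(covariates)
```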
Participants were primarily women (89%), mean age 50.5 years, 51% Black, 49% Hispanic, 76% completed high school, 76% insured, and 49% employed. English was the native language for 95% of Black and 12% of Hispanic participants. The mean age of study non-completers was younger than completers (44.5 vs. 50.5 years, P<.0001), and non-completers had a borderline larger BMI at enrollment (34.7 kg/m 2 vs. 33.5 kg/m 2 , P=.05). There was no difference in mean MOS social support scores between non-completers vs. completers (P=.2), nor in the percent of children living in the home (P=.3). Table 2 shows the characteristics of the network members (n=2,571, mean network size n=10). The highest percentages were friends (35%), children (17%), and siblings (15%). Thirty-one percent of members were identified as overweight (25 ≤ BMI < 30 kg/m 2 ), and 10% obese (BMI ≥ 30 kg/m 2 ). The inner circle was 29% children, 17% friends, 17% siblings, 12% other family, 8% parents and 7% partners (defined as husband, wife, boyfriend, girlfriend, significant other). Of the 106 partners identified, 92% were in the inner circle. The middle circle was 53% friends, 18% siblings, 16% other family, and 2% children. The outer circle was 71% friends, 10% other family, 10% coworkers and 3% sibling. Table 3 shows mean weight loss stratified by network help and/or harm with eating or physical activity goals. For eating goals, mean weight loss was -4.8 lbs. for participants with network help and no harm; -5.4 lbs. with both help and harm; +3.4 lbs. with harm alone; and -0.4 lbs. with no help or harm (ANOVA, P =.006). There was no difference in mean weight loss when network help and/or harm with physical activity was examined ( Table 3 Network member relationship In bivariate analyses, participants with at least one child helpful with eating goals had greater weight loss compared to participants without a helpful child (P =.003; Table 4). Weight loss was greater among participants with at least one child helpful with physical activity goals (P =.02; Table 4). When examined by child age, there was greater weight loss among participants with vs. without an adult child (≥ 18 years) helpful with eating goals [-6.1 (±10.0) vs. -2.8 (±10.8) lbs., P =.03], but not physical activity [-5.9 (± 9.1) vs. -3.1 (± 11.1) lbs., P =.07]. There was no difference in weight loss between participants with vs. There was a borderline difference in weight loss between participants with a co-worker helpful with physical activity goals vs. no coworker help (P =.05). Circle position, Frequency of contact, Living Proximity, Body size Participants with at least one child in the inner circle helpful with eating and physical activity goals had greater weight loss compared to those without child help in the inner circle [-6.8 (±11.6) vs. -2.6 (±10.4) lbs., P =.007)]. The majority of children (95%) were identified as inner circle members. There was no difference in mean weight loss between participants with vs. without helpful inner circle members of other relationships. Participants in contact with a child helpful with eating goals ≤ once a week (but at least once a month) had borderline greater weight loss compared to those with more frequent contact [-13.9 (±10.7), n=8 vs. -5.6 (±11.2) lbs., n=93, P =.05]. There was no difference in mean weight loss when frequency of contact with helpful vs. non-helpful network members was examined by other relationships. 
There was no difference in mean weight loss between participants with a child helpful with eating goals living in the home vs. not [-6.0 (±12) lbs., n=55 vs. -6.7 (±8.5) lbs., n=23, (P =. 80)]. Participants with at least one network member with obesity lost less weight compared to those without [-2.4 (±10.8) lbs., n=138 vs. -5.6 (±10.3) lbs., n=107, P =.02]. Table 5 shows two multivariable linear regression models examining the relationship between participant weight loss and network characteristics. Model 1 shows the relationship between weight loss and network help and/or harm with goals. Participants with network member harm to eating goals and no help gained weight compared to a referent group of participants with help alone (P =.02). Participants with both help and harm to eating goals had no difference in weight loss compared to those with help alone (P =.29). There was an association between weight gain and increased network size (P=.001). Multivariable regression models The second model ( Table 5; Model 2) examines the relationship between weight loss and child or coworker help with goals. Participants with at least one child helpful with eating goals had greater weight loss compared to those without child help (P =.0002). Participants with at least one coworker helpful with physical activity goals had greater weight loss compared to those without coworker help (P =.01). Weight gain was associated with having at least one network member with obesity living in the home (P =.0048) and increasing network size (P =.002). Child help with eating goals remained significant (P =.01) when the variable "at least one child helpful with physical activity goals" (P =.55) was entered into the fully adjusted model. When the variable "at least one network member with obesity" was examined in Model 2 there was a significant association with weight gain [estimate 3.57 (95% CI 1.04, 6.10), P=.006]. When the variable "at least one inner circle member helpful with eating goals" was included in Model 2 (p= 0.3), child help with eating goals remained significant (p=.0037). There was no interaction by participant race/ethnicity (P =.72) or gender (P =.14) in the relationship between weight loss and child help with eating goals in the fully adjusted model. In exploratory analyses, we found no significant relationship between weight loss and the score given for average number of network members helpful and harmful with eating goals (p=0.4), suggesting no linear relationship between the number of helpful and harmful members and weight loss. In additional exploratory analyses, network size was larger among participants with network harm to eating goals compared to participants with no harm [11.6 (±5.7), n=73 vs. 9.9 (±5.6), n=172, P=.047)]. In an unadjusted regression model, the number of network members with obesity increased (independent variable) as network size increased [estimate = 0.75 (S.E. 0.27), P =.005]. Table 6 shows participant comments regarding ways in which network members helped or harmed goals. Participants reported that helpful network behaviors included encouraging statements and joining in behavior change. Harmful behaviors included negative and discouraging comments, and network members engaging in unhealthy eating behaviors. 
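A hedged sketch of a regression in the spirit of Model 2 above follows; the variable names are hypothetical and the rows are simulated, so the output does not reproduce the study's estimates.

```python
# Hedged sketch (simulated data): weight change regressed on child help with
# eating goals, coworker help with physical activity, an obese network member
# living in the home, and network size, with a few basic covariates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "weight_change_lbs":    [-6.0, -2.5, 3.1, -8.2, 0.4, -4.7, 2.2, -5.5, -1.0, 4.3, -7.1, 0.8],
    "child_helps_eating":   [1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0],
    "coworker_helps_pa":    [1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0],
    "obese_member_in_home": [0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
    "network_size":         [8, 12, 15, 7, 11, 9, 14, 10, 13, 16, 6, 12],
    "age":                  [52, 47, 55, 49, 60, 45, 51, 58, 43, 62, 50, 39],
    "female":               [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1],
})

model = smf.ols(
    "weight_change_lbs ~ child_helps_eating + coworker_helps_pa"
    " + obese_member_in_home + network_size + age + female",
    data=df,
).fit()
print(model.summary())
```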
Discussion There are three primary findings in this study: (1) participants who reported network behaviors that harmed eating goals and no network help gained weight compared to those with help alone; (2) participants with child help with eating goals, or coworker help with physical activity, had greater weight loss compared to those without these helpful members; and (3) weight gain was associated with having at least one network member with obesity living in the home and larger network size. To our knowledge, this is the first published study to use egocentric network data to examine network member characteristics associated with weight loss among Black and Hispanic adults with obesity. Previous studies have examined the relationship between social support and weight loss, however, results have been inconsistent and assessment of social undermining limited. 2 -5 , 19 -22 Recently, Wang et al. found that family undermining of healthy eating was associated with weight gain. 7 Our results show weight gain among participants with network harm and no help with eating goals. There was no relationship between the number of helpful and harmful network members and amount of weight loss. Interestingly, there was no difference in weight loss between participants who experienced both network help and harm with goals, compared to those with help alone. This suggests that help from network members may counteract harmful network behaviors. Further investigation is needed to assess whether network support in weight loss can be used to counteract negative influences. We found that participants with a child helpful with goals had greater weight loss compared to those without child support. Published studies of parent-child involvement in the weight loss process have primarily focused on addressing childhood obesity. 23 Limited study data have addressed the role of child support in parental weight loss. In a study of parent support for child weight loss, Epstein et al. found that parents lost more weight when they were actively engaged in their child's weight loss efforts. 24 More recently, reciprocal parent-child encouragement was found to increase co-participation in physical activity among Mexican-American families. 25 Our data indicate that child support may be an important strategy in parental weight loss. We also found the benefit of child support to be independent of the living proximity of the child. In addition, our data suggest that help from coworkers with physical activity goals is associated with greater weight loss. This finding is consistent with published data showing higher physical activity scores among adults with work-site social support. 21 We did not find increased weight loss among participants with help from friends. This did not change when the importance of the friendship, frequency of contact, or body size of friends were examined. One potential explanation is that help from friends is not sufficient to counteract other network influences. Published data suggest that support from friends is associated with weight loss only if the friends also lose weight. 4 , 5 The dose and specific type of help may also play a role and were not measured. Participants who reported network members with obesity had less weight loss in our study. We cannot conclude, however, that the body size of network members influenced weight loss patterns. 
While longitudinal data analyzed from the Framingham Study suggest a role for network body size in index individual weight gain 26 , limitations of this analysis have been raised. 27 Future longitudinal examination of the relationship between network body size and weight loss are needed. The average number of close network members identified in our study is similar to previously published literature. 28 We found less weight loss as network size increased. In exploratory analyses, we found a larger network size was associated with having members harmful to goals and members with obesity. Previous investigators have postulated that an increased number of family ties may reflect increased burden of responsibilities or obligations, resulting in higher stress and less time to engage in healthier lifestyles. 20 Our qualitative data offer insight into ways in which network members can help or harm weight loss behavior change. Similar themes of network help and sabotage have been reported. 6 Positive reinforcement, co-participation in exercise, and avoiding criticism have been identified a helpful behaviors among African American women engaged in weight loss efforts. 29 Our study has limitations. We collected egocentric network data and therefore do not have data from the perspective of network members. Data from network members may have provided further insight into behaviors perceived as harmful. We did not collect network data from participants who did not complete the study. In addition, we cannot comment on how network composition may have changed during the course of the study. We also lack data on the dose of the network help provided. It is possible that participants with and without child support differ on characteristics other than those included in the multivariable model such as self-efficacy. Finally, the body size of network members was based on participant perception measured using a figure rating scale. Conclusion Network help with weight loss goals, specifically child and co-worker help, were associated with increased weight loss. Harmful network behaviors in the absence of network help, and the presence of network members with obesity living in the home were associated with weight gain. These findings suggest that future weight loss behavior change interventions would benefit from engaging the support of children and co-workers, as well as, assessing harmful network behaviors and network member body size. Weight loss studies are needed that measure network characteristics longitudinally to better understand how networks may influence weight loss patterns and how network composition may change over time. Table 6 Qualitative themes regarding network help and harm with eating and physical activity goals Network help: Comments Encouragement "My sister gave me positive feedback regarding drinking water " Join in behavior change "My wife would try to help by steaming food instead of frying and adding more vegetables" "As a family we are eating on 10 inch plates." "My son would say let's walk instead of taking the train or bus." "The girls (daughters and niece) don't drink soda like before. We stopped bringing soda into the house." Network harm: Negative/discouraging comments "My husband and sister criticized me. If they saw me serve less food or eat vegetables they said I wanted to be sexy. They made fun of me because I was eating healthier than before. They said I just wanted to be skinny." 
"She told me every day that I was fat" "My husband complained about the changes and efforts I was making to lose weight. He didn't want me to look better because he is very jealous." "My mother said, 'I don't see the difference you didn't lose weight'." Engaging in unhealthy behaviors "They ignored my request not to fry everything!" "My co-workers would get food and eat it in front of me such as potato chips and cookies" Encourage non-adherence to eating goals "People use to tell me that there is no point to be on a diet, that we get fat anyway. And that I'm not taking anything with me whenever I die." "They would say not to lose too much weight." "My husband feels like I need to be doing the same thing he is doing. He always asks me why I'm not baking any more cakes and pies for the house."
Immobilization of the Aspartate Ammonia-Lyase from Pseudomonas fluorescens R124 on Magnetic Nanoparticles: Characterization and Kinetics

Abstract Aspartate ammonia-lyases (AALs) catalyze the non-oxidative elimination of ammonia from L-aspartate to give fumarate and ammonia. In this work the AAL-coding gene from Pseudomonas fluorescens R124 was identified, isolated, cloned into the pET-15b expression vector, and expressed in E. coli. The purified enzyme (PfAAL) showed optimal activity at pH 8.8, Michaelis-Menten kinetics in the ammonia elimination from L-aspartate, and no strong dependence on divalent metal ions for its activity. The purified PfAAL was covalently immobilized on epoxy-functionalized magnetic nanoparticles (MNPs), and the effective kinetics of the immobilized PfAAL-MNP were compared to those of the native enzyme in solution. Glycerol addition significantly enhanced the storability of PfAAL-MNP. An inhibitory effect of increasing viscosity (modulated by addition of glycerol or glucose) on the enzymatic activity was observed for both the native and the immobilized form of PfAAL, as previously described for other free enzymes. The storage stability and recyclability of PfAAL-MNP are promising for further biocatalytic applications.

The coding nucleotide sequence for the investigated aspartate ammonia-lyase was found with the Tblastn tool on the NCBI Whole genome shotgun sequence contigs (WGS) database, using the protein sequence of AspA from Pseudomonas aeruginosa PAO1 (UniProt ID: Q9HTD7) as query within the WGS of Pseudomonas fluorescens R124 (taxid: 742713). The resulting hits had sequence similarities between 37% and 97%. The sequence for AAL in the Pseudomonas fluorescens R124 WGS showed 89% identity to the query sequence (Figure S1). The primers (Integrated DNA Technologies BVBA; Leuven, Belgium) for cloning the AAL of P. fluorescens R124 were designed to contain NdeI and BamHI restriction cleavage sites. Figure S2a shows the translated protein sequence of the identified AAL in the WGS of P. fluorescens R124; Figure S2b depicts the protein sequence of PfAAL expressed from the plasmid construct after the cloning procedure.

Figure S4. Kinetic curves of the reaction from L-aspartic acid with PfAAL at different glycerol concentrations

Thermal behavior of the native PfAAL
The thermal behavior of PfAAL activity was investigated with L-aspartic acid (20 mM) in a reaction mixture (1 mL) containing Tris buffer (50 mM, pH 8.8) and PfAAL (3 µg mL⁻¹) in a temperature range between 30 and 60 °C (30, 35, 40, 45, 50, 55, and 60 °C). The enzyme stock solution and the reaction mixture were preincubated separately for 5 min at the desired temperature.

Protein leaching from the PfAAL-MNP biocatalyst during reaction
The quantitative protein determination was performed with the Bradford method using ready-to-use Coomassie (Bradford) reagent (Thermo Fisher Scientific; Waltham, USA). The sample and the Bradford reagent were used in a 1:1 volume ratio. After 10 min of incubation, the detection was performed at 595 nm with a Multiskan™ FC Microplate Photometer (Thermo Fisher Scientific; Waltham, USA). Calibration was made from the purified PfAAL enzyme stock solution at the following enzyme concentrations: 1, 5, 10, 15, 20, 25, and 30 μg mL⁻¹.

Figure S10. Calibration for protein determination (PfAAL) with the Bradford method

Samples were taken from the immobilization solution of PfAAL, from the supernatant after the enzyme immobilization, and after 3 h of incubation in Tris buffer (1 mL, 50 mM, pH 8.8).
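The Bradford calibration and the protein balance around the immobilization step lend themselves to a short numerical sketch. The Python snippet below is only an illustration of how such data could be processed, assuming a linear calibration over the listed standards; the absorbance readings, the sample values, and the helper name to_concentration are hypothetical and not taken from the study.

```python
import numpy as np

# Bradford calibration standards described above (purified PfAAL, µg/mL)
standards_ug_ml = np.array([1, 5, 10, 15, 20, 25, 30], dtype=float)

# Hypothetical A595 readings for the standards (placeholders, not measured values)
a595_standards = np.array([0.05, 0.21, 0.41, 0.60, 0.82, 1.01, 1.22])

# Linear calibration: A595 = slope * concentration + intercept
slope, intercept = np.polyfit(standards_ug_ml, a595_standards, 1)

def to_concentration(a595):
    """Convert an A595 reading to protein concentration (µg/mL) via the calibration line."""
    return (a595 - intercept) / slope

# Hypothetical readings for the immobilization solution, the post-immobilization
# supernatant, and the wash after 3 h of incubation (placeholders)
c_initial = to_concentration(0.95)      # µg/mL offered to the MNPs
c_supernatant = to_concentration(0.18)  # µg/mL left unbound
c_leached = to_concentration(0.02)      # µg/mL released during incubation

immobilization_yield = (c_initial - c_supernatant) / c_initial * 100
print(f"Immobilization yield: {immobilization_yield:.1f}%  (leached: {c_leached:.2f} µg/mL)")
```

A straight-line fit is a reasonable choice over this narrow concentration range; at higher protein concentrations the Bradford response is known to deviate from linearity, in which case a nonlinear calibration would be more appropriate.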
Samples were taken in triplicate and analyzed.

Treatment of the PfAAL-MNP biocatalysts for end-capping of the unreacted epoxides
The immobilized PfAAL-MNP preparation (5 mg) was shaken in 5 mM ethanolamine or 5 mM glycine in 1 mL Tris buffer (50 mM, pH 8.8) for 2 h at 600 rpm. After the treatment, the PfAAL-MNP biocatalysts were washed with Tris buffer (1 mL, 50 mM, pH 8.8). The activity of the biocatalysts was determined in a reaction with 20 mM L-aspartic acid in 50 mM Tris buffer, pH 8.8.

Figure S11. End-capping treatment of the PfAAL-MNP biocatalyst with ethanolamine or glycine

6 Characterization of the PfAAL-MNP biocatalyst

Thermal behavior of PfAAL-MNP
The PfAAL-MNP biocatalysts (5 mg) were tested in reactions containing 20 mM L-aspartic acid (50 mM Tris buffer, pH 8.8) at various reaction temperatures (30, 35, 40, 45, 50, 55, and 60 °C). Before the activity test, the PfAAL-MNP-containing buffer (0.9 mL) and L-aspartic acid (200 mM in 50 mM Tris buffer, pH 8.8) were preincubated at the desired temperature for 5 min, and the reaction was started by addition of the L-aspartic acid solution. The reactions were performed at 600 rpm on an orbital shaker at the desired temperature. Samples (10 µL) were taken at 2.5-, 5-, and 7.5-min reaction times (during sampling, MNPs in the reaction mixture were sedimented rapidly with a neodymium magnet). The reaction rate was calculated as the average of the rates between 2.5-5 min and 5-7.5 min.

Figure S12. Thermal behavior of the PfAAL-MNP biocatalyst (relative activity, %, as a function of temperature, °C)

Stability of the PfAAL-MNP biocatalyst at various pH
The pH optimum of PfAAL-MNP (5 mg; 6 µg mg⁻¹ PfAAL on MNP) in the ammonia elimination reaction was investigated with 20 mM L-aspartic acid in a reaction mixture (1 mL) containing buffer (50 mM; sodium phosphate for pH 6-7, Tris for pH 7-9, and sodium carbonate for pH 9-10) at 30 °C and at 600 rpm in a thermostated orbital shaker. Samples (10 µL) were taken at 2.5-, 5-, and 7.5-min reaction times (during sampling, MNPs in the reaction mixture were sedimented rapidly with a neodymium magnet). The reaction rate was calculated as the average of the rates between 2.5-5 min and 5-7.5 min.

Figure S17. Relative activity of the PfAAL-MNP biocatalyst at various pH

Effect of divalent cations on PfAAL-MNPs
The effect of divalent cations on the activity of PfAAL-MNP (5 mg; 6 µg mg⁻¹ PfAAL on MNP) was investigated in the ammonia elimination starting from 20 mM L-aspartic acid in a reaction mixture (1 mL) supplemented with a divalent metal ion chloride (100 µM; Co2+, Mg2+, Cu2+, Ni2+, or Ca2+) at 30 °C and at 600 rpm in a thermostated orbital shaker. Samples (10 µL) were taken at 2.5-, 5-, and 7.5-min reaction times (during sampling, MNPs in the reaction mixture were sedimented rapidly with a neodymium magnet). The reaction rate was calculated as the average of the rates between 2.5-5 min and 5-7.5 min.
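The rate calculation repeated in the assays above (the mean of the interval rates over the 2.5-5 min and 5-7.5 min windows, then normalization to the highest rate in a series) can be expressed in a few lines of code. This is a minimal sketch, assuming product concentrations have already been quantified at each sampling time; the example values are placeholders, not measured data.

```python
import numpy as np

# Sampling times (min) used in the activity assays described above
t = np.array([2.5, 5.0, 7.5])

def reaction_rate(product_mM):
    """Average rate from product concentrations at 2.5, 5, and 7.5 min,
    computed as the mean of the 2.5-5 min and 5-7.5 min interval rates (mM/min)."""
    c = np.asarray(product_mM, dtype=float)
    r1 = (c[1] - c[0]) / (t[1] - t[0])
    r2 = (c[2] - c[1]) / (t[2] - t[1])
    return (r1 + r2) / 2.0

# Hypothetical fumarate concentrations (mM) at each temperature (placeholders only)
measurements = {
    30: [0.8, 1.5, 2.2],
    40: [1.2, 2.3, 3.3],
    50: [1.5, 2.9, 4.2],
    60: [0.9, 1.6, 2.2],
}

rates = {T: reaction_rate(c) for T, c in measurements.items()}
v_max = max(rates.values())
for T, v in rates.items():
    print(f"{T} °C: rate = {v:.2f} mM/min, relative activity = {100 * v / v_max:.0f}%")
```

The same helper applies unchanged to the pH and divalent-cation series; only the dictionary keys and concentration values differ.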
TRIM8: a double-edged sword in glioblastoma with the power to heal or hurt
Glioblastoma multiforme (GBM) is an aggressive primary brain tumor and one of the most lethal central nervous system tumors in adults. Despite significant breakthroughs in standard treatment, only about 5% of patients survive 5 years or longer. Therefore, much effort has been put into identifying new glioma-associated genes. Tripartite motif-containing (TRIM) family proteins are essential regulators of carcinogenesis. TRIM8, a member of the TRIM superfamily, is abnormally expressed in high-grade gliomas and is associated with poor clinical prognosis in patients with glioma. Recent research has shown that TRIM8 is a molecule of duality (MoD) that can function as both an oncogene and a tumor suppressor gene, making it a "double-edged sword" in glioblastoma development. This characteristic is due to its role in selectively regulating three major cellular signaling pathways: the TP53/p53-mediated tumor suppression pathway, NFKB/NF-κB, and the JAK-STAT pathway essential for stem cell property support in glioma stem cells. In this review, TRIM8 is analyzed in detail in the context of GBM and its involvement in essential signaling and stem cell-related pathways. We also discuss the basic biological activities of TRIM8 in macroautophagy/autophagy, regulation of bipolar spindle formation and chromosomal stability, and regulation of chemoresistance, and as a trigger of inflammation.
Graphical Abstract
a "double-edged sword" [9]. In addition to the RBCC domains, TRIM8 protein contains a nuclear localization signal (NLS) that is required for nuclear localization. The TRIM8 coiled-coil domain enables the formation of nuclear bodies (NBs), which are important interchromatin structures, implying that TRIM8 regulates the function of important cellular proteins via protein-protein interactions [10]. The TRIM8 gene is located on chromosome 10q24.32, an area known to have extensive deletion or loss of heterozygosity in 88% of GBMs. However, this deletion does not result in a reduction in TRIM8 protein, leading to the alternative name GERP (glioblastoma-expressed RING finger protein) for the gene product [11]. Few studies have been performed to thoroughly understand the activities and underlying mechanisms of TRIM8 in GBM [12]. TRIM8 can respond to various stimuli, including genotoxic stress and viral or bacterial attack. Moreover, TRIM8 is critical in many biological processes, including cell survival, innate immune response, carcinogenesis, autophagy, apoptosis, differentiation, and inflammation. TRIM8 has either a tumor suppressive or an oncogenic function regulating the proliferation of GBM cells. The importance of TRIM8 in modulating the p53 tumor suppressor pathway indicates that it plays a tumor-suppressive role in GBM. In contrast, a number of oncogenic mechanisms have been proposed for TRIM8, as it is involved in the positive regulation of NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells) and JAK-STAT signaling pathways, promoting tumor development and progression [13]. Brat et al. demonstrated that upregulation of TRIM8 in adult tissues and a variety of tumors, particularly GBM, correlates with higher-grade cancer, massive tumor size, and increased stem cell formation and self-renewal ability of cancer stem cells (CSCs) [14]. Regarding TRIM8 tumor suppressor activity, Micale et al.
showed that restoring TRIM8 expression in patient glioma cell lines inhibits tumor development and significantly reduces clonogenic potential [15]. In the present review, we sought to elucidate both the tumor suppressive and oncogenic functions of TRIM8 and the underlying molecular networks in GBM. Our findings contribute to a better understanding of TRIM8 and provide clues for developing a new approach to the treatment of GBM.
TRIM8 acts as a novel marker for malignant glioma stem cells
GBM is the most common and lethal type of primary brain tumor, even after treatment with standard therapies. Over the past decade, glioblastoma stem cells (GSCs) have been extracted from GBM and characterized. These cells likely play a critical role in tumorigenesis and therapy resistance due to their unique properties, such as self-renewal and pluripotency, suggesting that GSCs are a new effective target for treatment [16]. Therefore, searching for regulators that effectively enhance the stem-like property of GSCs may provide clues for innovative treatments. Zhang et al. reported that the expression of TRIM8 is consistently correlated with stem cell markers or other transcription factors such as PROM1/CD133, NES (nestin), SOX2, and MYC/c-MYC, and partially correlated with OLIG2 and NANOG, and therefore could promote stem cell properties in GBM [14]. They observed that overexpression of TRIM8 results in increased expression of these stem cell markers and transcription factors involved in the expression of two distinct groups of genes: those engaged in tumor dedifferentiation status and stemness acquisition [9,14]. They also found significant MKI67/Ki-67 protein expression in GSCs overexpressing TRIM8 [14]. MKI67 is a protein commonly used as a cell proliferation marker, and its increased expression in human cancer tissues is closely associated with worse histological grade [17]. They concluded that overexpression of TRIM8 not only correlates with the expression of stem cell markers and transcription factors in GSCs but also increases stem cell activity. Knockdown of TRIM8 inhibits self-renewal of GSCs, and the expression of stem cell markers and transcription factors such as NES, SOX2, and, to a lesser extent, PROM1/CD133 and MYC is significantly impaired. The expression of MKI67 is also reduced, suggesting lower cell proliferation after TRIM8 downregulation [14]. Further investigation of the mechanisms behind the effect of TRIM8 on stem cell maintenance and the ability of GSCs to self-renew revealed that TRIM8 acts mainly by stimulating the JAK-STAT signaling pathway [14,18]. Protein inhibitor of activated STAT (PIAS) plays a crucial role in regulating the balance and steady state of signal transducer and activator of transcription (STAT) by decreasing the activity and translocation of this protein [19]. PIAS3 specifically interacts with phosphorylated STAT3 via the latter's DNA-binding domain, thereby inhibiting its physical binding to target genes [20,21]. Activated STATs are critical regulators of GSCs and are involved in various physiological processes, including immortalization and inhibition of differentiation. TRIM8 interacts with PIAS3 and inhibits its activity, either by degrading PIAS3 through the ubiquitin-proteasome machinery or by significantly reducing its nuclear translocation, resulting in enhanced STAT3-mediated support of stem cell properties in GSCs.
STAT3 is an essential regulator of normal stem cells and cancer stem cells; it mainly transmits signals from cytokine-stimulated receptors in the plasma membrane through interactions with importins into the nucleus, where it regulates gene expression directly or indirectly via other transcription factors involved in maintaining an undifferentiated phenotype in stem cells and cancer stem cells [22]. STAT3 exerts its effect on GSCs by binding to and inducing the expression of promoters of genes encoding transcription factors essential for maintaining self-renewal or pluripotency, such as SOX2, POU2F1/OCT1, NES, PROM1/CD133, and MYC [23,24]. Studies have shown that the knockdown of TRIM8 inhibits stem cell formation and self-renewal capacity of GBM and leads to glial differentiation. Moreover, STAT3 promotes the expression of TRIM8, resulting in a positive continuous feedback cycle between TRIM8 and STAT3 [18]. The discovery of the positive TRIM8-STAT3 feedback cycle in GSCs sheds new light on the possibility of disrupting the positive feedback loop by targeting either TRIM8 or STAT3 and opens new opportunities for developing treatments that affect pluripotency in GSCs and other malignancies with TRIM8 overexpression (Fig. 1) [25]. Further research is needed to understand the pathways that lead to increased TRIM8 transcription in response to STAT3 activation. The TRIM8 promoter contains two potentially conserved STAT3 binding sites and several MYC and POU2F1/OCT1 transcription factor binding sites [14]. Therefore, STAT3 could either directly or indirectly activate TRIM8 transcription via MYC and POU2F1/OCT1 [26,27]. In addition to positively regulating the JAK-STAT signaling pathway in GSCs, TRIM8 also positively regulates the NF-κB signaling pathway. The NF-κB pathway activates the expression of GSC-associated genes such as CD44/HCAM, SOX2, and NANOG. TRIM8, through its role as a crucial activator of NFKB, enhances signaling pathways initiated by proinflammatory cytokines such as TNF/TNFα (tumor necrosis factor) and IL1B/IL-1β (interleukin 1 beta) [28,29]. In particular, TNF-induced NFKB activation is a critical regulator of cell survival and apoptosis, which has implications for various physiological and pathological conditions, including cancer [28]. Li et al. demonstrated that TRIM8 mediates K63-linked polyubiquitination of MAP3K7/TAK1 (mitogen-activated protein kinase kinase kinase 7) at the K158 residue, which is associated with MAP3K7/TAK1 activation in TGFB/TGFβ signaling [30]. Subsequently, activated MAP3K7/TAK1 leads to phosphorylation and degradation of NFKBIA/IκBα, an essential NFKB inhibitor protein [9,30]. Once NFKBIA is degraded, the NFKB transcription factor translocates to the nucleus. It promotes the expression of key stem cell transcription factors that ultimately mediate fundamental elements of GSC biology, including self-renewal, proliferation, and metastasis, either alone or in collaboration with other signaling pathways. Tomar et al. identified another possible mechanism by which TRIM8 triggers NF-κB activation. PIAS3 inhibits NFKB-dependent transactivation by binding to RELA/p65 and affecting the transcriptional activity of RELA/p65 in the nucleus. The SUMOylation of endogenous RELA by PIAS3 mediates negative regulation of the NF-κB transcription factor [28]. TRIM8 interacts with PIAS3 and mediates its transport from the nucleus to the cytoplasm and its degradation [31]. Nucleocytoplasmic translocation of TRIM8 is required for positive control of NFKB activation [28].
Tomar et al. observed that TRIM8-induced NFKB regulation and its nuclear localization affect the migratory and clonogenic abilities of HEK293 cells [28]. This finding of TRIM8-mediated enhancement of cell motility and clonogenic capacity needs further investigation, as it may provide important information about the role of TRIM8 and establish links between inflammatory responses and cancer [18,32,33]. The pathways that stimulate TRIM8 or induce its enhanced expression in GSCs are unclear. Toniato et al. showed that TRIM8 interacts with SOCS1 (suppressor of cytokine signaling 1) both in vitro and in vivo. This association requires the SH2 domain and the SOCS box of SOCS1 [34]. This interaction decreases the stability and abundance of the SOCS1 protein, resulting in decreased suppression of IFN-induced JAK-STAT activation. As described previously, specific cytokines, such as IFNG/IFN-γ and IL1, can increase TRIM8 mRNA expression via a positive continuous feedback cycle between STAT3 and TRIM8 [35,36]. The mechanism by which SOCS1 decreases JAK-STAT signaling is partially known. The SOCS protein family plays a critical negative regulatory role in cytokine-mediated JAK kinase signaling [37]. SOCS proteins can interfere with cytokine signaling through two distinct pathways. They serve as ubiquitin ligases for ubiquitination-dependent regulation of signaling components or directly inhibit JAK tyrosine kinase receptors via their kinase inhibitory regions (KIRs) [37,38]. Interleukin 6 (IL6) is another potent trigger of TRIM8 in GSCs. Abnormal IL6 production and signaling significantly increase STAT3 activity in GSCs, and recent research shows that this is closely linked to their ability to self-renew via binding to the IL6R/IL-6Rα receptor. Thus, IL6 increases TRIM8 expression in GSCs via STAT3 activation [39]. Interestingly, increased IL6 levels lead to a dose-dependent increase in TRIM8 protein expression. Other cognate receptors and associated signaling cascades that promote STAT3 activation in GSCs, such as EGFR, NOTCH, and KDR/VEGFR, may also enhance TRIM8 in GSCs through positive interactions between TRIM8 and STAT3 [40,41].
TRIM8, a double-edged sword in glioblastoma
In contrast to previous data demonstrating the oncogenic role of TRIM8 in GSCs, studies now also indicate the tumor-suppressive function of TRIM8 in GBM. Micale et al. showed that TRIM8 expression is significantly decreased in patients with glioma at high risk of mortality and higher risk of disease progression. The expression of TRIM8 in grade IV gliomas is considerably lower than in grade III gliomas, indicating a negative correlation with higher-grade GBM [15,42]. The authors observed that overexpression of TRIM8 decreases cell proliferation by approximately 25% and results in a significant reduction in clonogenic potential as an indirect indicator of tumorigenic potential, suggesting that TRIM8 has proliferation-inhibitory properties in patients with glioma [43]. TRIM8 is a possible target of MIR17, which regulates TRIM8 expression at the transcriptional and post-transcriptional levels. Suppression of MIR17 significantly decreases cell viability and enhances apoptotic activity in glioma cell lines, and overexpression of this miRNA has been associated with accelerated tumor growth and poor overall survival in gliomas [44]. Thus, these findings suggest a feedback loop between MIR17 and TRIM8 [15,43]. Okumura et al.
showed that TRIM8 binds to HSP90 in embryonic stem cells and specifically inhibits transcription of NANOG, a master regulator of pluripotency, by preventing excessive signal transduction via STAT3 [45]. HSP90, a molecular chaperone, is one of the endogenous binding partners of TRIM8 and facilitates the translocation of activated STAT3 into the nucleus. In stem cells, TRIM8 suppresses the translocation of the HSP90-STAT3 complex into the nucleus, modulating Nanog transcription but not that of other transcription factors such as POU5F1/OCT3/OCT4 and SOX2 via STAT3 [46]. Suppression of TRIM8 increases transcription of Nanog in mouse embryonic stem (ES) cells, suggesting that TRIM8 plays an essential role in controlling STAT3-mediated signaling in ES cells. In contrast, the expression of TRIM8 results in the spontaneous differentiation of stem cells [45,46]. Therefore, TRIM8 is thought to be a dual positive and negative regulator of stem cell properties, and its expression must be tightly regulated at an appropriate level to maintain stem cell pluripotency. It is still unclear how TRIM8 can maintain GSC self-renewal capacity and alter its mechanism to reduce glioma cell proliferation and clonogenic potential. Recently, TRIM8 was found to be involved in several cellular signaling pathways critical in cancer suppression. The tumor suppressor and transcription factor TP53/p53 is one of the most critical factors in controlling cell proliferation and is deregulated in nearly 84% of patients with GBM. The tumor suppressor TP53 modulates the expression of genes involved in cell cycle arrest, DNA damage response, and programmed cell death (apoptosis) [47]. Under various stress conditions, such as UV radiation or genotoxic stress, TP53 directly targets the TRIM8 gene and induces its expression. In a positive feedback loop, TRIM8 interacts with TP53 and impairs its interaction with MDM2, a negative regulator of TP53, thereby increasing TP53 stability [48,49]. TRIM8-stabilized TP53 mediates G1 cell cycle arrest through increased expression of CDKN1A/p21 and GADD45 [48]. At the same time, TRIM8 induces polyubiquitination and degradation of MDM2, which further promotes TP53-dependent cell growth arrest [49]. This suggests that TRIM8 not only plays a role in enhancing the efficacy of chemotherapeutic agents by reactivating the TP53 pathway but may also be an alternative pathway to increase TP53 activity in malignant cancers; thus, an increase in TRIM8 expression could be used as an enhancer of chemotherapy efficacy in a TP53 wild-type background [50]. Other studies by Mastropasqua et al. have expanded the understanding of the molecular mechanisms underlying the downregulation of TRIM8 in oncogenesis and chemoresistance [51]. The authors found that TRIM8 is negatively associated with MIR17-5p and MIR106B-5p, both of which are overexpressed in many different chemo-/radioresistant cancers, resulting in a lack of TP53 protein activation by disrupting the positive feedback loop between TRIM8 and TP53 [51]. The oncoprotein MYCN/N-MYC, typically overexpressed in GBM, stimulates MIR17-5p and MIR106B-5p transcription, highlighting its role as an oncogene. Along these lines, activation of TRIM8 in TRIM8-deficient cells improves the efficiency of chemotherapy in resistant cancer cell lines. This occurs not only by reactivating the tumor suppressor function of TP53 but also by enhancing the transcription of MIR34A, which suppresses the activity of MYCN [52,53]. As a result, these miRNAs no longer silence TRIM8.
However, in other cases, simultaneous activation of TRIM8 and TP53 may lead to adverse effects, such as in response to hypoxic stress caused by ischemia after stroke or myocardial infarction [54]. Recent findings in clear cell renal cell carcinoma (ccRCC) have shown that a higher percentage of wild-type TP53 is present in the most aggressive drug-resistant cell lines, highlighting the significant association between TRIM8 deficiency, TP53 inactivation, and chemoresistance. Restoration of TRIM8 expression in ccRCC cell lines decreases cell growth rate in a TP53-dependent manner. Interestingly, restoration of TRIM8 expression makes the cells more susceptible to therapy with axitinib and sorafenib, two specific drugs now used to treat a variety of malignancies, including ccRCC [48,50]. However, another study found that suppression of TRIM8 in Ewing sarcoma cells increases DNA damage and makes the cells susceptible to DNA damage response inhibitors such as olaparib. More in-depth research on TRIM8-mediated regulation of TP53 activity or its anti-cancer capacity will significantly enhance our understanding of the complex framework based on TP53 dynamics and provide better insight into the ability of TRIM8 to restore the native conformation of TP53 mutants and reactivate its tumor-suppressor function. In cancer cells, chromosomal abnormalities often lead to increased transcription factor (TF) activity and form a class of driving oncoproteins that are difficult to target effectively. Recent research has shown that TRIM8 plays a vital role in the degradation of certain oncoproteins [55,56]. Stegmaier et al. found that TRIM8 degrades the EWS/FLI oncoprotein, a driving fusion TF in Ewing sarcoma, and is associated with improved overall survival. Ewing sarcoma is defined by a genome translocation combining the EWSR1 transactivation domain with the FLI1 DNA-binding domain. EWS/FLI is a TF that recruits chromatin remodeling complexes such as the BAF complex to gain access to packed chromatin [57]. The results of Stegmaier et al. have shown that EWS/FLI can be indirectly targeted by TRIM8, opening a new therapeutic window for treating Ewing sarcoma by targeting TRIM8 [55]. TRIM8 is involved in a number of critical cellular processes, including carcinogenesis, autophagy, innate immunity, apoptosis, differentiation, and inflammatory responses, and is closely related to DNA repair, metastasis, tumor suppressive regulation, and carcinogenic regulation. In the following section, we review some essential cellular processes in which TRIM8 has tumor suppressive or oncogenic functions. These include autophagy, regulation of bipolar spindle formation and chromosomal stability, regulation of chemoresistance, and induction of inflammation.
Autophagy
Autophagy supports cellular fitness by directing poorly functioning proteins, damaged DNA, aggregates, and damaged organelles to lysosomes for degradation, and it is critical for providing energy and macromolecular precursors for cancer cell progression [58-61]. TRIM8 is particularly important in cancer, where autophagy can both promote and inhibit tumor growth. TRIM8 is emerging as a critical regulator of cell survival under various genotoxic stress conditions by promoting autophagy flux and regulating lysosomal biogenesis [9,62].
This function can improve cancer cell survival under genotoxic stress conditions by allowing cells to repair DNA damage through autophagy, reducing cytotoxicity, and protecting cells from the cell death response after DNA damage so that cell repair and proliferation can continue [13]. Autophagy triggered by genotoxic stress plays an essential role in cell survival and death. Roya et al. have shown that TRIM8 is stable and exhibits a high turnover under genotoxic stress conditions [62]. TRIMs form homo- and hetero-oligomers with other TRIMs and become either indirectly ubiquitinated via their binding partner or directly ubiquitinated due to their innate ubiquitin ligase activity, modulating their turnover under various pathophysiological conditions [62,63]. The ubiquitin ligase activity of TRIM8, exerted through its RING domain as a post-translational modification, is required for the positive regulation of autophagy pathways. For instance, polyubiquitin chains linked via the Lys63 residue of ubiquitin are involved in the signaling cascades associated with autophagy and recruit several ubiquitin-binding proteins such as IKBKG/NEMO, the regulatory subunit of the IκB kinase (IKK) complex. These ubiquitin-binding proteins are necessary for TNF- and IL1B-mediated NFKB activation [64,65]. TRIM8 indirectly modulates transcription of autophagy-regulating genes via activation of NFKB. NFKB transcription factors have been shown to be crucial triggers of autophagy and can trigger this process by inducing the expression of genes or proteins involved in the machinery that generates phagophores, such as BECN1, ATG5, and MAP1LC3/LC3 [66,67]. Interestingly, TRIM8 may indirectly regulate the level of SQSTM1/p62 (sequestosome 1), a pleiotropic protein that functions as a selective autophagy receptor and promotes mitophagy, thereby promoting tumorigenesis [68,69]. These results highlight the intricate interplay between TRIM8-mediated regulation of SQSTM1/p62 and its potential functions beyond autophagy and cancer. Etoposide, a genotoxic agent, causes apoptosis through the involvement of the effector caspase 3 (CASP3) [70]. Studies have shown that autophagy regulated by TRIM8 can induce the degradation of activated CASP3, one of the principal cysteine proteases of the apoptotic cascade, to prevent cell death caused by genotoxic stress [62]. In addition, TRIM8 regulates cell death and autophagy by stabilizing XIAP/IAP3 (X-linked inhibitor of apoptosis) [71]. XIAP can interrupt both the "extrinsic" and "intrinsic" death pathways by directly inhibiting the proteolytic activity of CASP9 and the effectors CASP3 and CASP7 via its BIR domains. XIAP forms a multiprotein complex with CASP3 during genotoxic stress and inhibits its cleavage and activation [71,72]. XIAP activates NFKB-dependent transcription via its NH2-terminal baculovirus inhibitor of apoptosis protein repeat (BIR) domain by activating SMAD signaling. SMAD signaling activates the expression of autophagy-related genes and the MAPK/JNK pathway. Conversely, the MAPK/JNK pathway can switch to the NFKB signaling pathway (Fig. 2) [73,74]. Thus, TRIM8 prevents cell death upon genotoxic stress and radiotherapy by these novel mechanisms, suggesting that the high oncogenic potential of TRIM8 may support cancer cell viability [62].
Suppression of autophagy as a biological protective process against environmental and cellular stress has been investigated as a cancer therapy target, as it may predispose cancer cells to various treatments, such as exposure to DNA-damaging agents and radiation. The difficulty of acting directly on the components of autophagy, which play essential roles in normal cell physiology and participate in other cellular processes, could be reduced if there were a way to target cancer-promoting autophagy while allowing other types of autophagy to function efficiently. In principle, this can be achieved by targeting proteins with autophagy-specific activities rather than the core components of the machinery or lysosomal function to prevent complete autophagy. One proposed approach is to use the TRIM proteins due to their role in autophagy regulation, demonstrating the great potential of modulating TRIMs in oncogenesis or cancer progression.
TRIM8 controls bipolar spindle formation and chromosomal stability
The formation of a bipolar spindle, which divides and separates the duplicated chromosomes during cell division, is one of the most critical processes in cell division. The controlled alignment of microtubules (MTs) and the combined forces exerted by highly conserved motor proteins, including kinesins and dyneins, are also required to properly separate the duplicated chromosomes [75]. When duplicated chromosomes are not properly segregated due to defective formation of the bipolar spindle, chromosome mis-segregation and aneuploidy occur. Cancer cells typically exhibit a higher rate of chromosome mis-segregation and an aneuploid karyotype [75,76]. E3 ubiquitin ligases are post-translational modifiers that facilitate the binding of ubiquitin to target proteins involved in the control of mitotic spindle machinery, including the TRIM protein family, whose dysregulation has been linked to a number of human diseases, including cancer [77]. A variety of TRIM proteins are important in mitotic and cell cycle transitions. In particular, interest in TRIM8 has increased dramatically in recent years. Studies have shown that TRIM8 is critical for the maintenance of genome integrity during cell division and the formation of the mitotic spindle machinery during mitosis [78]. TRIM8-silenced cells are responsible for a significant portion of the aneuploidy phenotype due to delayed progression through the G2/M phase of the cell cycle associated with centrosome and mitotic spindle abnormalities [78]. TRIM8 regulates cell cycle progression and mitosis by affecting cell cycle checkpoints and critical mitotic regulators and indirectly interacts with various motor microtubule-associated proteins (MAPs) such as kinesins [79]. Venuto et al. demonstrated that TRIM8 is found at the mitotic spindle during all phases of the cell cycle and interacts with KIFC1/HSET, KIF11/Kinesin-5/Eg5, KIF20B, and KIF2C, which are involved in the development of a bipolar spindle during mitosis. This suggests that TRIM8 plays an essential role in determining cell polarity from the onset of centrosome duplication at the G1/S transition to the end of mitosis, when a cell divides into two identical daughter cells, a fundamental process in eukaryotic life mediated by microtubules and members of the kinesin family [79].
This physical contact between TRIM8 and KIF11 or KIFC1 is critical for proper microtubule assembly, ensuring the active structural configuration of these proteins and their mutual alignment along the mitotic spindle [80,81]. KIFC1, an important member of the kinesin-14 family in neurons with a specific minus-end-directed motor, has also been associated with endocytic vesicle motility and cleavage [82], oocyte maturation [83], and long-distance transport of naked double-stranded DNA [84]. KIFC1 and KIF11 work together to promote microtubule aster formation, centrosome segregation, and proper spindle organization [85]. Impaired expression of KIF11 or KIFC1 is responsible for the abnormal spindle phenotype [86]. In particular, inhibition of KIF11 function by immunodepletion or knockdown of KIF11 mRNA by small interfering RNA leads to cell cycle arrest in mitosis with a monopolar spindle phenotype [86]. Considering the biological functions of KIF11 and KIFC1, TRIM8, with its E3 ubiquitin ligase activity, is likely involved in mitosis via ubiquitination of KIF11 and KIFC1 proteins in GSCs. KIF11 and KIFC1 are increased in GBM and are associated with the increased proliferation, self-renewal, and invasive behavior that are hallmarks of this brain tumor [80,87]. KIF11 is increased in glioblastomas and is inversely related to overall survival. This protein promotes stem cell formation in glioma cells and increases cell proliferation and chemoresistance in malignant brain tumors [88,89]. The Cancer Genome Atlas/TCGA data revealed that KIF11 is highly expressed in grade IV tissues compared with lower-grade and normal tissues. It is suggested that, in GBM, the E3 ubiquitin ligase function of TRIM8 toward KIF11 and KIFC1 is disrupted, resulting in increased expression of the latter proteins [90]. This finding implies that other TRIM proteins may have similar functions in transporting motor proteins or be transported as cargo within the cell. Studies have shown that TRIM8 plays an important role in the progression of centrosome duplication. TRIM8 localizes to centrosomes and colocalizes with PLK1 (polo-like kinase 1), a human protein kinase with high sequence similarity to Polo in Drosophila and Cdc5 in budding yeast, and interacts directly with the centrosomal protein CEP170 [90]. PLK1, a key regulator in mitotic cell division, is involved in a number of critical processes, including mitotic entry, kinetochore-microtubule binding, and spindle formation [91]. This interaction primarily inhibits TRIM8 activity and delays mitotic progression, making cells more likely to arrest or be delayed in the G2/M phase. Accumulation of cells arrested in the G2/M phase of the cell cycle leads to either initiation of the apoptotic pathway and activation of DNA damage responses or persistence of aneuploidy. The mechanisms of apoptosis mediated by TRIM8-silenced cells require further investigation [92]. CEP170 localizes to both mother centrosomes during interphase and spindle microtubules during mitosis and plays a role in microtubule assembly and cell morphology determination [93]. Studies have shown that TRIM8 is also required for reliable chromosome segregation in mitosis. The process of chromosome segregation during mitosis is highly complex, and defects in this pathway can lead to mis-segregation and/or a non-integral set of 46 chromosomes.
Knockdown of TRIM8 increases the rate of chromosomal instability and delays centrosome segregation, leading to an increase in aneuploid cells and micronucleus formation, demonstrating the essential role of TRIM8 in maintaining chromosomal integrity during mitosis [90]. Overall, deficiency of the TRIM8 E3 ligase in glioma cells may promote carcinogenesis by promoting chromosome segregation defects during mitosis, leading to structural aberrations and non-euploid chromosome numbers, implying that TRIM8 may have a tumor suppressor function during mitosis [94]. Chromosomal instability is a characteristic of human malignancies linked to poor prognosis, immune evasion, therapeutic resistance, and metastasis [95].
TRIM8 as a target in chemoresistance
Drug resistance of tumors is a significant obstacle to cancer therapy [96,97]. Understanding the signaling pathways is critical for determining the enzymes involved in chemoresistance in order to target them in combination therapies and make cells susceptible to standard chemotherapeutic agents. The ubiquitin-proteasome system has been recognized as a key player in a variety of physiological processes, including cell proliferation, autophagy, apoptosis, and DNA repair, all of which have been linked to carcinogenesis, cancer development, and drug resistance [98,99]. Therefore, the use of proteasome inhibitors that alter the proteasome-mediated degradation pathway represents a new and promising method for treating human tumors with fewer side effects [99]. In particular, E3 ligases have attracted increasing attention in cancer and resistance research [100]. E3 ligase inhibitors are thought to specifically sensitize tumor cells to chemotherapeutic agents and radiotherapy by stabilizing or promoting the degradation of a subset of tumor suppressors or oncoproteins without affecting the activity of other proteins necessary for normal cell function [99,100]. The most important type of E3 ligase is the really interesting new gene (RING) finger family, distinguished by its conserved RING domain. Other growing types of E3 ligases include the homologous to E6AP carboxyl terminus (HECT) type, the U-box type, and the RING-IBR-RING (RBR) type, which are critical in drug resistance in several malignancies, including GBM. The TRIM protein family is a large subgroup of RING-type E3 ligases [101]. TRIM proteins act as both cancer driver and tumor suppressor proteins in regulating cell proliferation, depending on tumor type and deregulation processes. Many TRIM proteins are elevated in GBM (e.g., TRIMs 11, 14, 22, 25, 28, 32, 44, 59, and 65) [12, 102-108]. Abnormal overexpression of these TRIMs has been associated with poor prognosis and poor overall survival. In contrast, TRIMs 13, 16, 21, and 62 are potential tumor suppressors in a variety of malignancies, including GBM [11,109]. TRIM8 is considered both a cancer driver and a tumor suppressor in controlling cell proliferation. The ability of TRIM8 to modulate the stability and activity of TP53, and thereby its tumor-suppressive function, is one of the reasons why it exerts a tumor suppressor function [48]. In addition, TRIM8 stimulates the degradation of MDM2, a primary cellular TP53 inhibitor, and directs the TP53 response toward growth arrest rather than apoptosis [48]. In general, patients with cancer with higher chemotherapy resistance have more mutations in the TP53 gene or inactivation of its signaling pathway due to alterations in its regulators [110].
This is especially true for malignancies such as GBM, where the TP53 pathway is deregulated in 84% of patients, implying that reactivation of the TP53 pathway may be one of the most promising therapeutic approaches [111]. TRIM8 expression has been shown to correlate with increased TP53 activation and MDM2 instability in glioma tissues and cell lines, enhancing the effects of chemotherapeutic agents such as cisplatin and nutlin-3. In contrast, the silencing of TRIM8 correlates with the inactivation of TP53 and resistance to these chemotherapeutic agents. This suggests that TRIM8 levels play an essential role in TP53-mediated cellular responses to chemotherapeutic agents [15,51]. Another example of TRIM8 activity in chemosensitivity is in anaplastic thyroid carcinoma (ATC), where downregulation of TRIM8 essentially correlates with overexpression of MIR182 in human ATC tissues. Qin et al. found that MIR182 promoted tumor development by suppressing TRIM8 expression and contributed to the chemoresistance of human ATC to standard chemotherapeutic agents such as cisplatin. On the basis of these results, it was concluded that MIR182-TRIM8 could be a therapeutic target for the treatment of chemoresistant human papillary thyroid carcinoma [112]. Tullo et al. found that TRIM8 is a target of MIR17-5p and MIR106B-5p, both of which are overexpressed in chemo-/radioresistant cancers such as ccRCC and GBM. MYCN promotes carcinogenesis by activating MIR17-5p and MIR106B-5p, and this oncogene is inhibited by MIR34A, whose expression is induced by TP53. Of note, silencing of MIR17-5p and MIR106B-5p enhances TRIM8 expression, leading to the restoration of the tumor suppressor activity of TP53 in a TRIM8-dependent manner and thereby restoring the sensitivity of cells to clinically used chemotherapeutic agents such as sorafenib and axitinib, which are used as second-line treatments for advanced renal cell carcinoma [51,113,114]. Chemosensitization by TRIM8 was also observed in SW620 and SW480 cells. SW620 and SW480 cells are two different colon cancer cell lines with different levels of TP53 protein [115,116]. SW620 cells have wild-type TP53 protein, whereas SW480 cells lack TP53 protein. SW620 cells with higher TRIM8 expression were more susceptible to the chemotherapeutic agent 5-fluorouracil, whereas silencing TRIM8 increased SW620 cell survival. However, this was not the case for SW480 cells. Therefore, TRIM8 was shown to increase the susceptibility of CRC cells to the above chemotherapeutic agent in a TP53-dependent manner [115]. It is probably important to evaluate the tumor suppressive and oncogenic activity of TRIM8 in relation to the molecular status of p53 because, if p53 is mutated, TRIM8 expression is partially oncogenic [48]. The effects of TRIM8 on the stability and activity of the oncogenic form of the TP63 transcription factor, ∆Np63, which shares structural similarities with the tumor suppressor TP53, reveal another vital role for TRIM8 in sensitizing cells to chemotherapeutic agents [117]. Numerous studies have shown that TP63 plays an important role in cancer development, resistance to chemotherapy, metastasis, and survival of cancer cells. TP63 is overexpressed in various types of malignant tumors, suggesting that it confers a selective growth advantage to cancer cells. Overexpression of TP63 in oral squamous cell carcinoma (OSCC) has recently been shown to be a potential marker of radioresistance and a predictor of poor prognosis [117-119]. Tullo et al.
found that TRIM8 enhances the tumor suppressor activity of TP53 and decreases the expression of the TP63 protein in a manner that is dependent on both the ubiquitin-proteasome system and caspase 1 (CASP1) [118]. In addition, studies have shown that TP63 decreases TRIM8 gene expression by inhibiting the TP53-directed transcriptional program of TRIM8, indicating the presence of a negative feedback loop. These results suggest that increasing TRIM8 activity could provide therapeutic benefits and improve the treatment of chemoresistant malignancies, particularly GBM [117,120].
TRIM8 as an inflammation inducer
The relationship between inflammation and cancer has attracted much attention in recent decades. The importance of inflammation in gliomas is less evident than in other cancers, especially at the onset. The transcription factor NFKB induces the expression of genes involved in many aspects of the innate and adaptive immune system and is one of the most important molecules in triggering chronic inflammation as a hallmark and cause of cancer [121,122]. Constitutive NFKB activation is a common phenomenon in GBM, as in many other malignancies. Inflammation has been reported to promote mesenchymal differentiation, maintenance of cancer stem-like cells, and radiation resistance [123], and it also plays a key role in several other active carcinogenic processes in GBM [124]. Mutations or overexpression of NFKB signaling components such as TNF receptor-associated factor 2 (TRAF2) and TNFRSF1A associated via death domain (TRADD) are rare in tumors, suggesting that abnormal activation of NFKB signaling in GBM may be due to pathway dysregulation or oncogenes [125]. TRIM proteins are involved in the development of various malignancies by affecting a number of biological processes, including modulation of NFKB transcriptional activity. TRIM40, which inhibits NFKB activity via neddylation of IKBKG, a critical regulator of NFKB activation, is downregulated in the gastrointestinal tract during carcinogenesis [126]. In recent years, there has been a surge of interest in TRIM8 as an activator of NFKB signaling. At least two subcellular sites (the cytoplasm and nucleus) have been identified where the ubiquitin ligase activity of TRIM8 is required to activate the NFKB pathway. Nucleocytoplasmic transport of TRIM8 is necessary for positive control of NFKB activation [127]. TRIM8 reduces the nuclear localization of endogenous PIAS3, and its RING domain is required for this function. This translocation from the nucleus to the cytoplasm impairs the negative regulation of NFKB at the RELA/p65 subunit through the activity of PIAS3, and enhances NFKB transcription factor dimerization and activation of NFKB-responsive genes [28,128]. TRIM8 also functions as a positive regulator of cytokine-induced NFKB activation in the cytoplasm. Wang et al. reported that, after activation of surface receptors such as the TNF receptor or the interleukin 1 receptor (IL1R), TRIM8 mediates K63-linked polyubiquitination of MAP3K7/TAK1 at Lys158, but not K48-linked polyubiquitination [29]. Activated MAP3K7/TAK1 is required for the IKK complex-induced NFKB pathway activation. The IKK complex phosphorylates the NFKBIA/IκBα protein, leading to its ubiquitination and degradation by the 26S proteasome and translocation of NFKB to the nucleus, allowing the activation of NFKB-responsive genes [29,129].
The long noncoding RNA GNAS-AS1/Nespas inhibits TRIM8-induced Lys63-linked polyubiquitination of MAP3K7/TAK1, suppressing inflammatory cytokine production and NFKB signaling activation [130]. Deregulated NFKB activation is a common phenomenon in glioblastoma; its activity is a significant driver of the malignant phenotype, ranging from tumor growth and invasion to the maintenance of cancer stem-like cells, suppression of programmed cell death, and resistance to radiotherapy [131]. A well-known function of NFKB is the regulation of inflammatory responses by controlling the expression of proinflammatory genes and activities in innate and adaptive immune cells. Not surprisingly, NFKB expression is a marker of inflammation and has attracted considerable attention in the field of inflammation-related cancers [132,133]. Because NFKB has been identified as a driver of several features of gliomagenesis and treatment tolerance, the NFKB signaling network is now an attractive therapeutic target. Thanks to recent advances in drug discovery, a variety of drugs targeting NFKB are now available, and several of them have shown promise in preclinical studies, either alone or in combination with temozolomide, a first-line chemotherapeutic agent in GBM [134,135]. Research has shown that inhibition of NFKB in combination with temozolomide can synergistically enhance glioma cell suppression. However, further research is needed to elucidate the activity of TRIM8 and establish links between inflammation and carcinogenesis. Conclusion GBM is the most common type of brain tumor in adults worldwide. Among the newly identified glioma-associated genes, interest in TRIM8 has increased dramatically in recent years. TRIM8 is an E3 ubiquitin ligase involved in many biological processes such as autophagy, apoptosis, and differentiation, all of which are required to maintain cellular homeostasis and thus regulate most signal transduction pathways. Our study suggests that TRIM8 plays a role in GBM carcinogenesis by positively regulating key cellular signaling pathways such as NFKB and JAK-STAT, which effectively enhance the stem-like property of GSCs and potentially provide clues for innovative treatments. TRIM8 also exerts its anticancer effect by potentiating tumor suppressor TP53 through interaction with MDM2, an important inhibitor of TP53, and, conversely, by suppressing the activity of the oncogenic protein ΔNp63. This suggests that TRIM8 confers a selective growth disadvantage to cancer cells, and enhancing TRIM8 activity could provide therapeutic benefits and improve the treatment of chemoresistant tumors. In this study, we summarized the dual role of TRIM8 in cancer as an oncogene or tumor suppressor gene in regulating autophagy, controlling bipolar spindle formation and chromosomal stability, regulating chemoresistance, and triggering inflammation. We believe that it is critical to understand how TRIM8-associated axes can be further modulated for the development of cancer therapeutics, as this could provide new insights into
Crystal Structure of Human REV7 in Complex with a Human REV3 Fragment and Structural Implication of the Interaction between DNA Polymerase ζ and REV1* DNA polymerase ζ (Polζ) is an error-prone DNA polymerase involved in translesion DNA synthesis. Polζ consists of two subunits: the catalytic REV3, which belongs to the B family of DNA polymerases, and the noncatalytic REV7. REV7 also interacts with REV1 polymerase, which is an error-prone Y family DNA polymerase and is also involved in translesion DNA synthesis. Cells deficient in one of the three REV proteins and those deficient in all three proteins show similar phenotypes, indicating the functional collaboration of the three REV proteins. REV7 interacts with both REV3 and REV1 polymerases, but the structure of REV7 or REV3, as well as the structural and functional basis of the REV1-REV7 and REV3-REV7 interactions, remains unknown. Here we show the first crystal structure of human REV7 in complex with a fragment of human REV3 polymerase (residues 1847–1898) and reveal the mechanism underlying the REV7-REV3 interaction. The structure indicates that the interaction between REV7 and REV3 creates a structural interface for REV1 binding. Furthermore, we show that the REV7-mediated interactions are responsible for DNA damage tolerance. Our results highlight the function of REV7 as an adapter protein to recruit Polζ to a lesion site. REV7 is alternatively called MAD2B or MAD2L2 and is also involved in various cellular functions such as signal transduction and cell cycle regulation. Our results will provide a general structural basis for understanding the REV7 interaction. Large numbers of DNA lesions occur daily in every cell, and the majority of the DNA lesions stall replicative DNA polymerases. This results in the arrest of DNA replication, which causes lethal effects including genome instability and cell death. Translesion DNA synthesis (TLS) releases this replication blockage by replacing the stalled replicative polymerase with a DNA polymerase specialized for TLS (TLS polymerase). It is generally considered that TLS includes two steps performed by at least two types of TLS polymerases, namely inserter and extender polymerases (reviewed in Refs. 1 and 2). In the first step, the stalled replicative polymerase is switched to an inserter polymerase such as Polη, Polι, Polκ, or REV1, which are classified as Y family DNA polymerases (3) and have different lesion specificity (reviewed in Refs. 4-8), and an inserter polymerase incorporates nucleotides opposite the DNA lesion instead of the stalled replicative polymerase. In the second step, the inserter polymerase is switched to the extender polymerase, DNA polymerase ζ (Polζ), and then Polζ extends a few additional nucleotides before a replicative polymerase restarts DNA replication. Polζ consists of the catalytic REV3 and the noncatalytic REV7 subunits. REV3 is classified as a B family DNA polymerase on the basis of the primary sequence. The catalytic activity of yeast REV3 is stimulated by yeast REV7 (9). Biochemical analysis has been done only for yeast REV3 but not mammalian REV3, because the molecular mass of human REV3 is larger (~350 kDa) than that of yeast REV3 (~150 kDa). Disruption of the mouse REV3 gene causes embryonic lethality accompanied by massive apoptosis (10-12), suggesting that the function of mammalian REV3 is essential for embryogenesis. Although REV7 is a smaller protein with a molecular mass of 24-28 kDa compared with REV3, the function of REV7 is less understood.
REV7 is a member of the HORMA (Hop1, Rev7, and Mad2) family of proteins (13). REV7, which is alternatively called MAD2B or MAD2L2, appears to be involved in multiple cellular functions including not only TLS but also cell cycle regulation (14), bacterial infection (15), and signal transduction (16,17). In this study, we investigated the function of REV7 in TLS through structural analysis. Previous studies have reported that human REV7 interacts with the central region (residues 1847-1892) of human REV3 by yeast two-hybrid and in vitro interaction assays (18). Interestingly, human REV7 also interacts with the C-terminal region (residues 1130-1251) of human REV1 polymerase as shown by yeast two-hybrid, in vitro interaction, and co-immunoprecipitation assays (19-21). Furthermore, human REV7 and human REV1 were co-expressed in Escherichia coli, and the REV7-REV1 complex was purified, whereas REV7 does not affect the polymerase activity of REV1 (22). The three yeast rev mutants and the triple mutant show very similar sensitivity to various genotoxic treatments (23-25). Furthermore, chicken DT40 cells deficient in one of the three REV proteins and those deficient in all three proteins show hypersensitivity to various genotoxic treatments including cisplatin (cis-diaminedichloroplatinum (II)), indicating the functional collaboration of the three REV proteins (26). However, these previous analyses failed to determine the mechanism underlying the protein-protein interactions at the atomic level, because they were performed without three-dimensional structural data. In addition, it remains unclear whether mammalian REV1, REV3, and REV7 can form the Polζ-REV1 ternary complex. It has been considered that switching of DNA polymerases occurs at least twice in TLS: the switching from a stalled replicative polymerase to an inserter polymerase and from an inserter polymerase to the extender polymerase. Recently, the structural implications of the first polymerase switching have been reported (27). However, the mechanism underlying the recruitment of the extender polymerase to the lesion site and the second polymerase switching, as well as the physical and functional interactions of REV1, REV3, and REV7, remains unclear. Here we report the first crystal structure of human REV7 in complex with a fragment of human REV3 (residues 1847-1898). The structure reveals the mechanism underlying Polζ formation and shows that the REV7-REV3 interaction unexpectedly provides a structural interface for REV1 binding. Furthermore, we show that these REV7-mediated interactions with REV1 and REV3 are responsible for DNA damage tolerance. Lastly, we propose a model of the structural interplay of REV1, REV3, and REV7 in TLS. Our results will provide a general structural basis for understanding the REV7 interaction in various cellular functions.
EXPERIMENTAL PROCEDURES
Crystallographic Analysis of Human REV7 in Complex with REV3 Fragment—In the present crystallographic study, REV7 with an R124A mutation, REV7(R124A), was used instead of wild-type REV7, REV7(WT) (28). It has been shown that a human REV3 fragment (residues 1847-1892) interacts with human REV7 (18). Thus, based on the result of secondary structure prediction, we constructed the REV3 fragment carrying residues 1847-1898, REV3(1847-1898), for this crystallographic study (28). The REV7(WT)-REV3(1847-1898) complex was polydisperse and did not crystallize, whereas the REV7(R124A)-REV3(1847-1898) complex was monodisperse (28).
In addition, REV7(R124A) efficiently binds REV3(1847-1898). Preparation and crystallization of the human REV7(R124A)-REV3(1847-1898) complex have been described before (28). In brief, recombinant human REV7(R124A) with an N-terminal hexameric His tag, in complex with human REV3(1847-1898), was expressed in E. coli BL21(DE3) harboring the REV7(R124A)-REV3(1847-1898) co-expression vector. The protein was purified by nickel-Sepharose resin (GE Healthcare), HiTrap Q HP (GE Healthcare), and HiLoad Superdex 200 (GE Healthcare). Monoclinic and tetragonal crystals of the REV7(R124A)-REV3(1847-1898) complex were obtained under different conditions. Heavy atom derivatives of the monoclinic crystals were prepared by the soaking method using a solution of 10 mM ethylmercurithiosalicylate, 100 mM Tris-HCl, pH 7.5, 800 mM sodium formate, and 25% (w/v) polyethylene glycol 2000 monomethyl ether for 20 h. X-ray diffraction data for native crystals were collected by using a Quantum 315 CCD detector (Area Detector Systems Corp.) on Beamline BL-5A at the Photon Factory. X-ray diffraction data for derivative crystals were collected by using an FR-D in-house x-ray generator with an R-AXIS IV++ imaging plate detector (Rigaku). All of the diffraction data were processed with the program HKL2000 (29). The structure of the REV7(R124A)-REV3(1847-1898) complex was solved by the single isomorphous replacement method using the programs SOLVE and RESOLVE (30,31). Model building was performed with the programs O (32) and COOT (33). Structure refinement was performed at 1.9 Å resolution with the programs CNS (34) and REFMAC (35). The P2₁ crystal contains two REV7(R124A)-REV3(1847-1898) complexes in the asymmetric unit. The structure in the P4₁2₁2 crystal was solved at 2.6 Å resolution by the molecular replacement method with the program MOLREP (36), using one of the P2₁ structures. The structure was refined with a procedure similar to that of the monoclinic case. The data collection and refinement statistics are given in Table 1. The coordinates and structure factors have been deposited in the Protein Data Bank Japan. Rapid Survival Assays Using Chicken DT40 Cells-For retrovirus infection, a pMSCV-IRES-GFP recombinant plasmid was constructed by ligating the 5.2-kb BamHI-NotI fragment from pMSCVhyg (Clontech) with the 1.2-kb BamHI-NotI fragment from pIRES2-EGFP (Clontech). cDNA of chicken REV7 (GdREV7) was inserted between the BglII and EcoRI sites of pMSCV-IRES-GFP. Virus was prepared by using 293T cells and 1 µl of GeneJuice (Novagen), 1 µg of pMSCV-GdREV7-IRES-GFP, and 1 µg of pCL-Ampho. After the 293T cells were cultured with the above reagents and plasmids at 37°C for 2 days, the cells were centrifuged, and the supernatant was stored at −80°C. Retrovirus infection was done by centrifugation (3000 rpm, 30 min, 32°C) of the DT40 REV7−/− cells (26) with the retroviral solution. A day after infection, expression of GFP was confirmed by flow cytometry. The efficiency of infection was more than 50%, as assayed by GFP expression. The cells were subcloned into 96-well plates, and clones displaying high levels of GFP were identified by a fluorescence-activated cell sorter. To test for differential sensitivity to cisplatin, we performed rapid survival assays using chicken DT40 cells (T. Kogame and S. Takeda, unpublished data). The cells (1 × 10⁴) were exposed to various concentrations of cisplatin and incubated at 39.5°C for 48 h. We analyzed each cell type with at least three clones, and at least three independent experiments were carried out to obtain individual data.
The cell number was counted by the CellTiter-Glo luminescent cell viability assay (Promega) according to the manufacturer's instructions. We calculated the extent of cytotoxicity. Human REV7 shares 22% amino acid identity with human Mad2, another member of the HORMA family (13). Mad2 functions in the spindle assembly checkpoint by binding directly to Mad1 (40-42). Mad2 undergoes a striking conformational change from the open (O-Mad2) to the closed (C-Mad2) form, in which the C-terminal region known as the "seatbelt" following β6 moves toward the edge of β5 to wrap around the ligand, and β1 is relocated. Concomitantly, β7 and β8 are rearranged to form β8′ and β8″ (43,44) (Fig. 1A). Structural alignment between REV7 bound to the REV3 fragment and Mad2 bound to Mad1 shows a root mean square deviation value of 2.0 Å for 183 superimposable Cα atoms (Fig. 1D), indicating that the overall structure of REV7 in the REV7(R124A)-REV3(1847-1898) complex may be very similar to the structure of the closed Mad2 form. Conceivably, a large conformational change may occur in the seatbelt (residues 153-211) of REV7 upon interaction with the REV3 fragment. Structural Details of the Interaction between REV7 and REV3 in Polζ Formation-Our novel structural analysis of the human REV7(R124A)-REV3(1847-1898) complex allowed for the pre- [...] (44). However, a similar interaction observed in the Mad2-Mad1 complex is not observed in the REV7(R124A)-REV3(1847-1898) complex, implying that the mechanism underlying the REV7-REV3 interaction may be distinct from that of the Mad2-Mad1 interaction. To elucidate the crucial residues responsible for the REV7-REV3 interaction, we performed in vitro binding assays using alanine mutants of REV7. Of these mutants, the Y63A or W171A mutation significantly reduced affinity for REV3(1847-1898), indicating that Tyr-63 and Trp-171 of REV7 are crucial for the physical interaction with REV3(1847-1898) (Fig. 2B, lanes 5 and 8). In contrast, Lys-159, Leu-161, Tyr-64, and Trp-167 of Mad2 were crucial for the physical interaction with Mad1 in the Mad2-Mad1 interaction (44). Thus, the mechanism underlying the REV7-REV3 interaction is distinctly different from that underlying the Mad2-Mad1 interaction. In addition, the α-helix of REV3(1847-1898) might be required for interaction with REV7, because a REV3 fragment (residues 1847-1886) lacking the α-helix did not form a stable complex with REV7 (data not shown). This suggests that van der Waals interactions made by the α-helix of REV3(1847-1898) also contribute to the formation of the REV7-REV3 complex, Polζ. Furthermore, to investigate the REV7-REV3 interaction in vivo, we carried out co-immunoprecipitation assays using HEK293 cells. Consistent with the in vitro results, REV7(Y63A/W171A) showed no binding affinity for REV3(1776-2044), although REV7(Y63A) and REV7(W171A) retained affinity (Fig. 2C, lanes 3-5). It is also noteworthy that the expression level of FLAG-REV3(1776-2044) in lane 5 of Fig. 2C is considerably lower compared with the other lanes, even though the same amount of plasmid DNA was used for each transfection, suggesting that REV7-unbound REV3(1776-2044) may be unstable in vivo. Interestingly, REV7(R124A) had markedly higher affinity for REV3(1776-2044) than REV7(WT) (Fig. 2C, lane 2). This observation suggests that the R124A substitution stabilized the closed conformation of REV7.
In fact, the analogous substitution in Mad2, Mad2(R133A), pushes the conformation toward the closed form and enables structure determination of the ligand-free C-Mad2 (45), although why this substitution stabilizes the closed form is less well understood. The CD spectrum of the REV7(R124A)-REV3(1847-1898) complex is similar to that of the REV7(WT)-REV3(1847-1898) complex (28), whereas the CD spectrum of REV7(R124A) is distinct from that of REV7(WT) (supplemental Fig. S2). This observation indicates that REV7 presumably also undergoes a significant structural change, in which the seatbelt region is expected to move upon ligand binding, as observed in Mad2. To identify the amino acid residues of REV7 responsible for the interaction with REV1, we performed comprehensive alanine substitutions of the solvent-exposed residues of REV7 (Fig. 3A). Mad2 has two independent binding sites for different proteins; one is the seatbelt region, where Mad1 interacts with Mad2, and the other is the αC helix, which is the binding site of O-Mad2 for checkpoint activation (47) or of p31comet for checkpoint inhibition (48). However, mutations in the αC helix of REV7 did not affect binding to REV1(1130-1251) (data not shown). Our results show that the L186A or Q200A mutation significantly depletes REV1 binding, whereas these mutations have no effect on REV3 binding (Fig. 3B, lanes 3 and 4). The fact that Leu-186 and Gln-200 are exposed to solvent (Fig. 3C) indicates that these residues are directly involved in REV1 binding. Leu-186 and Gln-200 are present in β8′ and β8″, respectively (Fig. 3C). Therefore, the REV1-binding site is unprecedented and represents a novel interface for protein-protein interactions of HORMA family proteins. Consistent with this finding, Leu-186 and Gln-200 are not conserved in Mad2 (Fig. 1A), whereas they are conserved in yeast REV7. Leu-186 and Gln-200 of REV7 are positioned close to each other and are thus likely to provide a platform for REV1 binding on the anti-parallel β-sheet composed of β8′ and β8″. This observation implies that formation of the C-terminal β-sheet is significant for REV1 binding. To verify this idea, we performed further surveys by alanine substitution of residues that are relatively buried and seemingly involved in stabilizing the structure of the β-sheet. Consequently, we found that the Y202A mutation impaired the REV1 interaction (Fig. 3B, lane 5). Tyr-202 is located in β8″ and directly interacts with both Leu-186 and Gln-200 (Fig. 3C). These results indicate that Tyr-202 stabilizes the REV1-binding platform. To examine whether Leu-186 of REV7 is important for the interaction with REV1 in vivo, we performed binding assays using human cells and showed that the L186A mutation greatly reduced the interaction with REV1(826-1251), although it did not reduce the interaction with REV3(1776-2044) (Fig. 2C, lane 6, and Fig. 3D, lane 4). On the other hand, the W171A mutation unexpectedly decreased the interaction with not only REV3(1776-2044) but also REV1(826-1251) (Fig. 3D, lane 3). Most interestingly, the R124A mutation, which stabilized the closed conformation of REV7, increased REV1 binding; furthermore, the double mutation R124A/W171A in REV7 brought REV1 binding back to the level of REV7(WT) (Fig. 3D, lanes 2, 3, and 5). Therefore, we conclude that REV1 interacts with the closed form of REV7 and that the REV7-REV3 interaction precedes the REV7-REV1 interaction during formation of the Polζ-REV1 complex.
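As an aside on reproducibility, the Cα-based comparison used above (an RMSD of 2.0 Å over 183 superimposable Cα atoms between REV7 and closed Mad2) can be approximated with standard tooling. The sketch below is a minimal, hypothetical example using Biopython rather than the authors' own procedure: the coordinate file names, chain identifiers and residue range are placeholders, and the shared residue numbering stands in for a proper structure-based alignment.

# Minimal sketch: least-squares superposition of Calpha atoms with Biopython.
# File names, chain IDs and the residue mapping are illustrative placeholders.
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
rev7 = parser.get_structure("rev7", "rev7_rev3_complex.pdb")   # hypothetical file name
mad2 = parser.get_structure("mad2", "mad2_mad1_complex.pdb")   # hypothetical file name

def ca_atoms(structure, chain_id, residue_numbers):
    """Collect Calpha atoms for the given residue numbers of one chain."""
    wanted = set(residue_numbers)
    chain = structure[0][chain_id]
    return [res["CA"] for res in chain if res.id[1] in wanted and "CA" in res]

# A real comparison would take equivalent residues from a structure-based alignment;
# the common numbering below is used only for illustration.
common = range(10, 193)
fixed = ca_atoms(mad2, "A", common)
moving = ca_atoms(rev7, "A", common)
n = min(len(fixed), len(moving))

sup = Superimposer()
sup.set_atoms(fixed[:n], moving[:n])   # least-squares fit of 'moving' onto 'fixed'
print("RMSD over", n, "Calpha atoms:", round(sup.rms, 2), "Angstrom")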
Functional Role of REV7-mediated Interactions in DNA Damage Tolerance-To examine the contribution of REV7-mediated interactions to DNA damage tolerance, we performed rapid survival assays using chicken DT40 cells. Chicken REV7 shares a high degree of amino acid identity with human REV7 (96%). To functionally analyze the REV7-mediated interactions, we expressed chicken REV7(WT) and REV7(Y63A/W171A) in DT40 REV7−/− cells and measured the sensitivity to cisplatin of the resulting reconstituted clones (Fig. 4). We chose cisplatin because REV1−/−, REV3−/−, or REV7−/− DT40 cells show strong sensitivity to cisplatin (26,49). The reconstitution of REV7−/− cells with the REV7 transgene completely normalized cellular sensitivity to cisplatin (white square). In contrast, expression of REV7(Y63A/W171A), which lacks both the REV3 and REV1 interactions, had no impact on the cellular sensitivity of REV7−/− cells to cisplatin (black triangle). Our results clearly indicate that REV7-mediated interactions are essential for resistance to the DNA damage caused by cisplatin, implying that formation of the Polζ-REV1 complex is responsible for resistance and that REV7 functions as an adapter protein between the REV3 and REV1 polymerases. Recruitment of Polζ and Polymerase Switching-Taking our results together, we can propose a model of the interactions involving REV1, REV3, and REV7 (Fig. 5A). In the absence of REV3, REV7 can adopt an open form. In the presence of REV3, REV7 undergoes structural rearrangement of the seatbelt upon REV3 binding, resulting in the formation of Polζ, where Tyr-63 and Trp-171 of REV7 are crucial for the interaction (Fig. 2). The conformational change in the seatbelt of REV7 provides an interface for the REV1 interaction (Fig. 3C) and therefore enables formation of the Polζ-REV1 complex. REV1, which is the inserter polymerase for an abasic lesion (50,51), is also thought to function as a scaffold protein for polymerase switching at a lesion site, because the C-terminal region (residues 1130-1251) of REV1 interacts with the other inserter polymerases, namely Polη, Polι, and Polκ (21). In addition, it has been shown that mouse Polκ and Rev7 compete directly for binding to Rev1 (20). Based on our results, we can also propose a model of the recruitment of Polζ and the second polymerase switching, from an inserter polymerase to the extender polymerase (Fig. 5B). An inserter polymerase suitable for a given DNA lesion could perform bypass synthesis after the first polymerase switching from a replicative polymerase to an inserter polymerase, through interactions with ubiquitinated PCNA and/or the C-terminal region of REV1. Then Polζ (the REV7-REV3 complex) could be recruited to the lesion site by the interaction between REV7 and the C-terminal region of REV1, where the C-terminal β-sheet of REV7 is crucial. The REV7-REV1 interaction would release the inserter polymerase from the C-terminal region of REV1, and Polζ could subsequently perform extension from the nucleotide inserted by the inserter polymerase (Fig. 5B). Depending on the DNA lesion, Polζ is considered to perform both nucleotide insertion and extension. In this case, Polζ is recruited to the lesion site through the REV7-REV1 interaction in a similar way. In contrast to the interaction of human REV7 and REV1, it has been reported that yeast Rev7 interacts with various regions of yeast Rev1 (the N-terminal BRCT domain; the polymerase-associated domain, which is alternatively called the little finger domain; and the C-terminal domain) (52-54).
Furthermore, yeast Polη, an inserter polymerase, interacts with the polymerase-associated domain of yeast Rev1 (55). Therefore, the Rev1 interactions in yeast are complicated, and the interactions might diverge between yeast and human. In this work, we have performed structural and functional analyses of REV7-mediated interactions, clarified the structural basis of the REV7-REV3 interaction, and obtained structural implications for the Polζ-REV1 interaction. We propose that REV7 functions as the adapter protein between the REV3 and REV1 polymerases, thereby mediating the second polymerase switching. Human REV7 interacts with various proteins, including the Shigella effector IpaB in bacterial infection and ELK1 and TCF4 in signal transduction (15)(16)(17). Those proteins have sequences similar to the REV7-binding region of human REV3, indicating that the mechanisms underlying the interactions of those proteins with REV7 are similar to that of the REV7-REV3 interaction described here. Thus, our findings will provide a general structural basis for understanding the interactions mediated by REV7 in various cellular functions.
2018-04-03T02:28:06.761Z
2010-02-17T00:00:00.000
{ "year": 2010, "sha1": "a56b7229b264990c37b15adf39f6ec9be369a93f", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/285/16/12299.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "070f4c4c00fc6cf9eaadd0416f0ffd7939688e96", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
235653460
pes2o/s2orc
v3-fos-license
A Metabolic Choreography of Maize Plants Treated with a Humic Substance-Based Biostimulant under Normal and Starved Conditions Humic substance (HS)-based biostimulants show potentials as sustainable strategies for improved crop development and stress resilience. However, cellular and molecular mechanisms governing the agronomically observed effects of HS on plants remain enigmatic. Here, we report a global metabolic reprogramming of maize leaves induced by a humic biostimulant under normal and nutrient starvation conditions. This reconfiguration of the maize metabolism spanned chemical constellations, as revealed by molecular networking approaches. Plant growth and development under normal conditions were characterized by key differential metabolic changes such as increased levels of amino acids, oxylipins and the tricarboxylic acid (TCA) intermediate, isocitric acid. Furthermore, under starvation, the humic biostimulant significantly impacted pathways that are involved in stress-alleviating mechanisms such as redox homeostasis, strengthening of the plant cell wall, osmoregulation, energy production and membrane remodelling. Thus, this study reveals that the humic biostimulant induces a remodelling of inter-compartmental metabolic networks in maize, subsequently readjusting the plant physiology towards growth promotion and stress alleviation. Such insights contribute to ongoing efforts in elucidating modes of action of biostimulants, generating fundamental scientific knowledge that is necessary for development of the biostimulant industry, for sustainable food security. Introduction Currently, agriculture is facing a massive increase in demand due to twin pressures of an increasing population and environmental deterioration [1][2][3]. Hence, accurate and predictive metabolic models are imperative for designing a roadmap for the next generation of crops with high productivity and resilience to climate change, and devising agricultural strategies for sustainable crop production. Interrogating plant responses to environmental conditions, through the lenses of omics sciences, is disruptively enabling the decoding of the language of cells at molecular level. This advances the understanding of regulatory network rules and mechanistic events in the cellular and chemical space of the plant under consideration, which, in turn, provides greater impetus for the translation of fundamental knowledge to actionable programs in the field [4,5]. Thus, reported herein is an investigation of biostimulant-induced reconfigurations of maize metabolism towards growth enhancement and stress alleviation. The incorporation of biostimulant strategies and programs in the agriculture industry holds promise to sustainably improve crop productivity. Currently, biostimulants, subdivided into microbial and non-microbial categories, are described as formulations that improve plant health and productivity as a resultant action induced by the novel, or emergent properties of the complex mixture, and not only from the presence of a plant growth regulator [6][7][8]. The biostimulant market is constantly on the economical rise due to the need to use formulations that promote sustainable soil health, and those that lead to crop improvement with respect to climate resilience and nutrition traits [3]. Emerging studies have demonstrated the effects of biostimulants on plant physiology and agronomic traits. 
For instance, the application of a biostimulant on tomato plants showed improved growth and fruit nutritional quality, as well as enhanced antioxidant machineries (e.g., elevation of ascorbic acid) under heat stress [9]. Another study by Paul et al. [10] investigated the action of protein hydrolysate-based biostimulants on tomato plants under drought, reporting biostimulant-induced changes in metabolic profiles and phenotypic traits of tomato plants. These changes included alterations in phytohormones and lipids, increases in biomass, stronger stomatal conductance, and enhanced antioxidant defence systems [10]. Despite ongoing efforts made in studying and understanding the effects of biostimulants on plants, the underlying biostimulant-induced changes (at molecular and cellular levels) for plant growth promotion and stress resilience remain an active research field. This knowledge gap hampers the novel formulation of biostimulants and the implementation of these products into agronomic practices [3]. Hence, in this work, metabolomics was applied to generate fundamental insights regarding the effects of humic substance (HS)based biostimulants on maize metabolism under normal and nutrient-starved conditions. Metabolomics, a multidisciplinary omics science, provides a readout of the metabolome, which carries imprints of environmental and genetic factors. As such, one of the best descriptions of metabolism is the metabolic fluxes it generates, representing the integrated output of the molecular machinery and biochemical characteristics of a biological system [11,12]. Arguably, metabolomics is probably the most challenging and demanding of the omics sciences, due to the inherent complexity of the metabolome. However, metabolomics-generated insights are increasingly rendered possible as the field positions itself in the current innovative developments in analytical technologies, computational tools and integration of orthogonal biological approaches [12]. Thus, metabolomics offers unique opportunities in elucidating modes of action of biostimulants, at cellular and molecular levels, necessary insights for the biostimulant industry, and subsequently for a sustainable and improved cropping system. Results and Discussion As mentioned in the Introduction section, this study aimed at elucidating metabolic alterations that explain the effects of a non-microbial biostimulant, a humic substance (HS)based formulation, on maize plants under normal and nutrient starvation, in greenhouse conditions. Experimental details are provided in Section 3. For semantic simplicity, the expressions humic substances, HS, humic biostimulant, HS-based biostimulant, and biostimulant will be used interchangeably to refer to the biostimulant formulation used in this study (a humic substance-based formulation, Section 3). Briefly, the study was designed to comprise four (4) different groups, namely, control 1 (starved and with no HS), control 2 (non-starved and no HS application), and HS-treated under starved and non-starved conditions ( Table 1, Section 3). Prior to metabolomic analyses, the morphophysiological assessments were performed to evaluate the effects of the HS-based biostimulant on maize plants under normal and starved conditions. The HS-treated plants showed increased canopy cover, plant height, above ground dry biomass, improved nutrient uptake and nutrient leaf content under both normal and nutrient starved conditions ( Figure S1). 
These morphophysiological traits observed in HS-treated plants can be associated with improved plant health, growth and nutrient stress alleviation. For metabolomics analyses, metabolites were extracted from leaves and analysed on liquid chromatography-mass spectrometry (LC-MS) analytical systems, with both untargeted and targeted approaches. Different methodologies and workflows were applied to mine and interpret the generated metabolomics data: these included molecular networking approaches, chemometrics methods, and metabolic pathway and network analyses (Section 3). The spectra data (from untargeted analyses) were mined using computational tools such as feature-based molecular networking (FBMN) and MolNetEnhancer housed within the global natural product social (GNPS) molecular networking environment (Section 3). Using various algorithms, molecular networking provides a visual overview of all the ions of molecules that are detected and fragmented during an MS/MS experiment and the chemical relationship between them. This exploration of the collected 'fragmentome' enables the visualization of chemical similarity between annotated known metabolites and unknown molecules, thus expanding the coverage of the metabolome under consideration [13,14]. Moreover, in contrast to the conventional (also referred to as 'classical') molecular networking tool which relies solely on MS 2 information for molecular network generation, FBMN improves upon this by also incorporating MS 1 information such as retention time, ion mobility, and natural isotopic pattern. As a result, FBMN allows for spectral annotation, distinguishes isomers, as well as incorporates relative quantification information [15,16]. This method also offers the advantage of giving a more precise estimation of the relative ion intensity by making use of the LC-MS abundance of the features (i.e., peak area/height), as opposed to classical MN which makes use of the sum/total precursor count or spectral count [15]. The MolNetEnhancer workflow, on the other hand, improves the chemical insight obtained from a dataset by combining outputs from multiple independent computational tools such as molecular networking, MS2LDA (MS2 latent Dirichlet allocation), as well as the Network Annotation Propagation (NAP) in silico annotation tools and thus allowing for enhanced metabolomics data annotation [15]. In this study, the metabolome covered included unknown classes which had no matches and known or putatively annotated classes, namely, glycerolipids, hydroxycinnamic acid (HCA) compounds, cinnamic acids and derivatives, carboxylic acids and derivatives, fatty acyls and diazines ( Figure 1A), and, as detailed in the methodology (experimental, Section 3), more confirmatory scrutiny was performed to validate the metabolite annotations. Thus, FBMN and MolNetEnhancer both aided in the putative annotation of some of the metabolites in the extracted maize leaves metabolome. Each node represents a single chemical entity, e.g., caffeoylquinic acid (m/z 353.0885, Figure 1B) and DGMG 18:3 (gingerglycolipid A-m/z 721.3687, Figure 1C), which can be connected to other structurally similar chemical entities (nodes) by edges in a cluster, molecular family. The putatively annotated metabolites/nodes can then, in turn, be used for the identification of other nodes in the same molecular family by means of the extrapolation of loss or gain of certain chemical groups [17]. 
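To make the idea of propagating annotations by mass differences concrete, the toy sketch below (not a GNPS tool, and using only a hand-picked set of common mass differences) shows how a neighbouring node in a molecular family might be rationalized from an annotated node such as caffeoylquinic acid; the neighbouring m/z value is invented purely for illustration.

# Toy illustration of annotation propagation within a molecular family:
# a neighbour's precursor mass is explained by the gain or loss of a common group.
KNOWN_DELTAS = {              # monoisotopic mass differences (Da); a small selection
    162.0528: "hexose (C6H10O5)",
    146.0579: "deoxyhexose (C6H10O4)",
    14.0157: "methylene (CH2)",
    2.0157: "2H (change in saturation)",
}

def explain_neighbour(annotated_mz, neighbour_mz, tol=0.01):
    """Suggest the chemical group gained or lost between two connected nodes."""
    delta = neighbour_mz - annotated_mz
    for ref, label in KNOWN_DELTAS.items():
        if abs(abs(delta) - ref) <= tol:
            direction = "gain of" if delta > 0 else "loss of"
            return "%s %s (%+.4f Da)" % (direction, label, delta)
    return "unexplained mass shift (%+.4f Da)" % delta

# e.g. the caffeoylquinic acid node (m/z 353.0885) next to a hypothetical node at m/z 515.1413
print(explain_neighbour(353.0885, 515.1413))   # -> gain of hexose (C6H10O5) (+162.0528 Da)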
Furthermore, the molecular networking computation also provided a quantitative description of the measured metabolome, pointing to the differential distribution of ions belonging to different classes, as reflected on the pie charts in the clusters of HCA compounds and glycerolipds, showing the effect of HS on the maize plants under normal (well-fed) and stress (starved) conditions ( Figure 1B,C). This is further discussed in the subsequent sections. The extracted and annotated maize metabolome comprised different classes of metabolites, as infographically shown in Figure 1D, suggesting that the metabolic changes (in maize plants) induced by treatments span a wide spectrum of both primary and secondary metabolic phenomenology. HS-Biostimulant Alters Maize Primary and Secondary Metabolism towards Growth Promotion The application of HS-based biostimulant on maize plants (under normal conditions) induced coordinated changes in the maize chemical space (Figure 1), significantly impacting pathways for primary and secondary metabolism. Some of these metabolic pathways include alpha-linolenic acid metabolism, amino acid-related pathways (such as tryptophan metabolism, glycine, serine and threonine metabolism and cysteine and methionine metabolism), and secondary metabolism pathways such as phenylpropanoid pathway and flavonoid metabolism ( Figure 2A; Table S1). Maize plants treated with the humic biostimulant showed increased levels of oxylipins such as oxo-phytodienoic acid (OPDA), hydroperoxy-octadecatrienoic acid (HpOTrE) 1, hydroperoxy-octadecatrienoic acid (HpOTrE) 2 and oxo-(pentenyl)cyclopentaneoctanoic acid (OPC), components of alphalinolenic acid metabolism ( Figure 2B). Although the mechanistic roles of these individual oxylipins are still poorly understood, some of the general functions of oxylipins in plants include modifications of chloroplast function, plant senescence, stomatal conductance, and antifungal and antibacterial activities [18]. Furthermore, the oxylipin pathway leads to the generation of the phytohormone, jasmonic acid. Moreover, other signalling metabolites such as indole acetic acid (IAA) and salicylic acid (SA) were found increased in maize plants treated with HS compared to the control ( Figure 2C). These phytohormones are regulatorily involved in various biochemical and physiological processes in plants, such as seed germination, seedling growth, stomatal aperture, respiration, and in interactions with the environment [19,20]. Thus, the measured changes in lipids and hormonal (signalling) networks in maize plants ( Figure 2B,C) suggest that the HS biostimulants remodel maize metabolism towards growth promotion via the activation and enhancement of physiological events for improved plant development and the potentiation of defences [21][22][23]. Furthermore, other metabolic remodelling induced by the HS treatment on maize plants under normal conditions included a general increase in the levels of amino acids ( Figure 2E,F). Amino acids play indispensable roles in metabolic pathways governing the plant growth and development processes. In this study, HS application increased the content of alanine (Ala) and aspartic acid (Asp), amino acids which are involved in the carbon assimilation/fixation pathway ( Figure 2F), one of the essential processes in growth promotion. Plants do not only harvest atmospheric carbon dioxide for the production of photosynthates, they also utilize the internal carbon pool [24,25]. 
Thus, it can be postulated that increases in Ala and Asp levels contribute to an increased pool of the internal carbon, which could be used in photosynthetic reactions, thus supporting growth promotion. Asp also plays an important role in maintaining plant growth by serving as a substrate/precursor for the biosynthesis of four essential amino acids, namely, Thr, Lys, Ile and Met via the Asp-family pathway ( Figure 2E) [26]. The increased levels of Thr and Met could be the result of the upregulation of the Asp-family pathway in HS-treated plants ( Figure 2E). Correspondingly, the study of Vaccaro et al. [27] showed a significantly higher accumulation of Thr in the leaves of seedlings grown with HS in comparison to those observed in control plants. Met is also involved in a wide range of functions in plant growth and development; for example, it provides a required supply of sulphur and nitrogen to plants [28]. Thus, in this study, it can be postulated that the HS-induced increased level of Met was also translated into the measured increase in sulphur and nitrogen contents ( Figure S1A), a growth promotion mechanism. Moreover, Met is also known to maintain the structure of proteins required for cell differentiation and division [28]. Table S2. Other changes in amino acid levels included an increase in Ser levels in HS-treated maize plants, under normal conditions ( Figure 2E). Ser is synthesized through three routes: (i) the glycolate pathway (photorespiration); (ii) glycerate pathway (cytosolic glycolysis); and (iii) phosphorylated pathway (Calvin cycle) ( Figure 2D). Thus, our results suggest that the application of HS may have impacted these pathways, leading to the accumulation of Ser ( Figure 2E). Apart from its proteinogenic roles, Ser takes part in the biosynthesis of several biomolecules required for cell proliferation, including amino acids, nitrogenous bases, phospholipids, and sphingolipids. Furthermore, it also plays an indispensable role in signalling mechanisms, as one of the three amino acids that are phosphorylated by kinases [29]. Ser is also involved in another significantly impacted pathway: Gly, Ser and Thr metabolism (Figure 2A; Table S1), which plays an important role in plant photorespiration [26]. The accumulation of amino acids in HS-treated plants ( Figures 2E,F and S2A) also suggests an increased pool of substrates for protein synthesis, which is positively associated with increased plant biomass [30]. Agreeably, these metabolic measurements were translated into the maize phenotype, because HS-treated plants showed higher plant biomass, an HS-enhanced growth and development ( Figure S1C). The application of the HS-based biostimulant on maize plants under normal conditions also impacted the secondary metabolism, as revealed by molecular networking approaches ( Figure 1) and metabolic pathway analysis (Figure 2A,G and Table S1). In this study, under normal conditions, most flavonoids such as quercetin, luteolin neohesperidoside, kaempferol and isorhamnetin rutinoside levels were decreased in plants treated with HS compared to non-treated plants ( Figure 2G). Moreover, the application of HS showed a differential response of HCA compounds, namely, chlorogenic acids and cinnamoyl hydroxycitric acid esters ( Figure 2G). Primary and secondary metabolisms are involved in the use of the available photosynthetic assimilates, leading to trade-offs of the carbon allocation. 
In nutrient-rich environments, large amounts of carbohydrates are allocated to primary metabolism (protein synthesis), while secondary metabolism (phenolics production) is limited [31,32]. The latter could be a possible reason for the observed reduction in flavonoid contents in HS-treated plants compared to control plants, under normal conditions. Furthermore, the decreased levels of some HCA compounds (3-and 5-caffeoylquinic acid, caffeoyl hydroxycitric acids and caffeoylglutarate; Figure 2G) may suggest that the phenylpropanoid pathway was not favoured in HS-treated plants under normal physiological conditions, regardless of the accumulation of the precursors of this pathway, of Phe and Tyr in HS-treated plants vs. control ( Figure S2A). This result further supports the above-mentioned hypothesis that the carbon from these amino acids is mainly directed towards the primary metabolism, thereby prioritizing plant growth. With the phenylpropanoid pathway not being stimulated, this may have affected the downstream pathways such as flavonoid metabolism; thus, a general decrease in flavonoids levels in HS-treated plants ( Figure 2G). However, some phenolic compounds such as tricin diglucuronide, 3-feruloylquinic acid, coumaroylquinic acid and coumaroyl hydroxycitric acid were increased under HS treatment ( Figure 2G). This points to a dynamic and complex network of phenolic compounds, reconfigured by biostimulant treatment for the enhancement of growth and development of maize plants, under normal conditions [33]. These HS biostimulant-induced metabolic alterations (accumulation of lipids, hormones and amino acids and differentially changed phenolic compounds) under normal conditions ( Figure 2) were synchronously translated into agronomic traits: the maize plants treated with the humic substances showed increased canopy cover, plant height, plant diameter, above ground dry biomass and chlorophyll content (Figure S1C), and enhanced plant growth mediated by HS-based biostimulant application. HS-Biostimulant Alleviates Nutrient Starvation in Maize Plants: Underlying Metabolic Reprogramming The HS-biostimulant-induced global metabolic reprogramming under nutrient starvation spanned a wide range of metabolic classes such as flavonoids, HCA compounds, lipids, amino acids and hormones ( Figure 1). Chemometrically, in principal component analysis (PCA) scores ( Figure S2B), the nutrient-starved group which was treated with HS (S + HS) clustered closely to the non-starved group, suggesting similar metabolic profiles in the two groups. Correspondingly, relative quantification analysis also revealed that the amino acid and phenolic compound (HCA derivatives and flavonoids) profiles of the starved plants treated with HS (S + HS) are similar to the profiles of the non-starved plants ( Figure 3A). Amino acids were significantly reduced due to nutrient starvation in non-HS-treated plants ( Figure 3A), suggesting an increased degradation of amino acids as an alternative mechanism to compensate for limited nitrogen (N) and/or carbon (C) supply. Enhanced amino acid degradation is usually observed in plants suffering from C deficiency [34]. However, the application of HS to starved plants showed an increase in these amino acids compared to non-treated starved plants ( Figure 3A). This could mean that HS either directly supplies the plants with C and N or it triggers other mechanisms which efficiently provide the plant with sufficient C and N. 
Several studies have shown that the application of HS enhances the acquisition and mobilization of nutrients such as N (amongst others). N is known as the most essential nutrient in plants, because its metabolism is the basis of biological molecules such as amino acids, proteins, nucleotides and enzyme synthesis [35][36][37]. The increase in amino acids observed in starved HS-treated plants compared to the untreated starved plants ( Figure 3A) can thus be correlated with the increased absorption of N ( Figure S1). Table S3. With regard to the HCA compounds and flavonoids, metabolites which were increased in non-treated starved plants (e.g., rutin, kaempferol rhamnosyl hexoside, trans-3-caffeoylquinic acids, 3-feruloylquinic acid 1, caffeoylhydroxycitric acid, etc.) were decreased in HS-treated starved plants ( Figure 3A). In contrast, phenolic compounds that were decreased in non-treated starved plants (e.g., kaempferol rutinoside, isorhamnetin rutinoside, cis-3-caffeoylquinic acid, etc.) were increased in HS-treated starved plants ( Figure 3A). The application of HS also showed a differential response of oxylipins such as OPDA, HpOTrE 1, HpOTrE 2 and OPC ( Figure 3B). Oxylipins have been shown to be involved in stress signal transduction, the regulation of stress-related gene expression, and interaction with hormonal signalling pathways [38]. The growth and stress hormones, IAA and ABA (abscisic acid), respectively, were decreased by the application of HS under nutrient starvation ( Figure 3C), suggesting homeostasis (towards normal condition). Generally, under abiotic stress conditions, plants biosynthesize higher levels of ABA, which induce stomatal closure and inhibit the growth and development of plants [39,40]. The level of IAA was increased under nutrient starvation in non-treated maize plants ( Figure 3C), which correlated with previous studies [41]. Overall, these metabolic alterations suggest that the application of HS under nutrient starvation induces metabolic readjustments to alleviate the negative effect of starvation in plants. It can then be postulated that HS-based biostimulant treatment led to a rewiring of the maize metabolism for the efficient acquisition and use of resources under limited supplies of nutrients. This HS-induced metabolic remodelling towards stress alleviation correlates to the observed in-plant nutrient profiles; the uptake of macronutrients such as K, N, Ca, Mg, P and S and micronutrients such as Na, Fe, Zn, Mn, B and Cu was higher in starved plants that were treated with HS biostimulant compared to non-treated plants ( Figure S1A). Moreover, the nutrient leaf analysis showed that the leaves of HS-treated starved plants contained higher levels of nutrients compared to non-treated starved plants ( Figure S1B). Furthermore, these metabolic changes (and nutrient profiles) were translated into phenotypically observable agronomic traits such as improved plant height, above ground dry biomass, and canopy cover ( Figure S1C). To distinctively map and globally visualize the metabolomic data, a metabolic network analysis was performed using MetaMapp. This web-based tool is able to map all detected metabolites into network graphs using the KEGG reactant pair (krp) database and Tanimoto chemical similarity between PubChem substructure fingerprints, thus generating an overview of the metabolic regulation under specified conditions [42]. 
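As a rough sketch of how such chemical-similarity edges can be computed (this is not the MetaMapp implementation: MetaMapp uses PubChem substructure fingerprints together with KEGG reactant pairs, whereas RDKit Morgan fingerprints and illustrative SMILES are used below as stand-ins), metabolites can be connected whenever the Tanimoto similarity of their fingerprints exceeds a chosen cut-off:

# Sketch of MetaMapp-style chemical-similarity edges; the fingerprints and SMILES are
# stand-ins, and the 0.7 cut-off mirrors the threshold reported in this study.
import networkx as nx
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

metabolites = {                                # illustrative SMILES
    "alanine": "C[C@@H](N)C(=O)O",
    "serine": "OC[C@@H](N)C(=O)O",
    "aspartate": "OC(=O)C[C@@H](N)C(=O)O",
    "quercetin": "O=C1C(O)=C(c2ccc(O)c(O)c2)Oc2cc(O)cc(O)c21",
}
fps = {name: AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=1024)
       for name, smi in metabolites.items()}

G = nx.Graph()
G.add_nodes_from(metabolites)
names = list(metabolites)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        sim = DataStructs.TanimotoSimilarity(fps[names[i]], fps[names[j]])
        print(names[i], "-", names[j], "Tanimoto =", round(sim, 2))
        if sim >= 0.7:                         # similarity cut-off
            G.add_edge(names[i], names[j], tanimoto=round(sim, 2))
print("edges:", list(G.edges(data=True)))
# Note: the fingerprint choice strongly affects which pairs pass the cut-off, which is
# one reason MetaMapp complements similarity edges with reactant-pair (krp) edges.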
The chemical similarity feature was implemented on the premise that biochemistry can be described as the inter-conversion of chemically similar entities. This information can thus assist in the prediction of enzymatic transformation networks between the biochemical domains [43]. As infographically depicted in the metabolic networks (Figure 4), there are three main metabolic clusters, namely, phenolics (indicated by the circles), lipids (arrows), and amino acids (squares), which are mainly interconnected based on their chemical similarity (grey edges). Hormones (diamonds) such as indoles (e.g., IAA) and ABA are structurally interlinked with amino acids and lipids, respectively. Thus, the computed correlation network comprised structural similarity complemented by krp interactions, to avoid the misclustering of some obviously biologically related compounds and to reveal the biochemical reaction networks [44,45]. The krp interactions (highlighted in green) are shown between amino acids (Ala-Ser, Ala-Asp, Ala-Cys, Ala-Val, Ala-Phe, Phe-Tyr, Ser-Trp, Cys-Ser) and between the oxylipins OPC and OPDA (Figure 4). The biochemical reaction network amongst the amino acids highlights Ala as a metabolite hub of the network, with many krp edges connecting to the Ala node (Figure 4). This points to the tight regulation of amino acid metabolism and may warrant a closer look into the potential roles of Ala as a regulator. Ala metabolism has been shown to be tightly linked to carbon and nitrogen metabolism, the TCA cycle and sugar metabolism [46]. Furthermore, MetaMapp analysis utilizes statistical information such as p-values and fold-changes [43]. Thus, the generated metabolic networks revealed significantly altered metabolites in HS-treated (Figure 4B) and non-treated (Figure 4A) plants under nutrient starvation (illustrated by node attributes such as size and colour). Ala was decreased in non-treated plants in response to nutrient starvation, and the other amino acids which are connected to Ala were also decreased (Figure 4A). However, in HS-treated starved plants, Ala was increased while its interconnections were either increased or unchanged (Figure 4B). Moreover, a study by Ishihara et al. [30] showed that the enrichment in free Ala was the best choice for correcting the enrichment in alanine residues in protein and determining an accurate rate of protein synthesis in plants. This further supports the functional role of Ala as a potent regulator of amino acid metabolism. Furthermore, the application of these metabolic network maps allowed for the detection of metabolites which were significantly altered by HS application under starvation. For instance, observing the phenolics cluster, compounds such as kaempferol rutinoside, rutin, luteolin rutinoside, and caffeoylglutarate were significantly changed compared to other compounds which showed no significant changes (Figure 4B; see Table S3 for p-values). This suggests that one of the mechanisms employed by HS in stress alleviation involves the upregulation/downregulation of specific phenolic compounds: a complex and dynamic network of phenolic compounds, as also reflected in Figure 2G. The computed metabolic network (Figure 4) points to possible regulatory events underlying the HS-induced metabolic reconfiguration in maize plants towards growth enhancement and the alleviation of nutrient starvation.
Thus, a mechanistic model emerging from the present study provides key fundamental insights describing a hypothetical (metabolic) framework underlying the effects of HS-based biostimulants on maize plants, under normal and nutrient-starved conditions (Figure 5). Metabolic reconfigurations related to the HS biostimulant-induced growth promotion involve differential alterations in the levels of amino acids, phenolics and lipids, which are translated into physiological events such as (i) membrane remodelling, (ii) improved chlorophyll content and photosynthesis rates, (iii) improved N and C assimilation, (iv) elongation of roots and shoots, and (v) increased nutrient uptake and assimilation (Figure 5). The main HS-mediated mechanisms involved in nutrient stress alleviation elucidated in this study include (i) metabolic/cellular homeostasis, (ii) low-cost machineries in response to starvation, and (iii) increased nutrient uptake and assimilation. Materials and Methods The maize (Zea mays) plants, PAN 3Q-240, were cultivated in 10 L pots filled with 17 kg of sandy soil (pH 4.6, organic carbon of 0.22% m/m, bulk density of 1495 kg·m⁻³ and organic matter of 0.38% m/m) in a greenhouse on a rotating table, at the Omnia facilities in Sasolburg, Free State, South Africa. The study was experimentally designed to comprise different treatments or groups (Table 1), i.e., plants with no HS treatment and starved (Control 1), plants with no HS application and no starvation (Control 2), and two HS-treated groups (with and without starvation). Each pot was considered a biological replicate and contained five plants at harvesting time. Five biological replicates (i.e., five pots) per treatment (group) were harvested. Immediately after emergence, well-fed plants were given NUTRIGRO™ and NUTRIPLEX™ at 1 g/L of water of each product (100%), and nutrient-starved plants were given 0.4 g/L of water of each product (40%). At the 4-leaf stage, 20 L/ha of the humic substance-based biostimulant formulation (Omnia Group Ltd., Bryanston, South Africa) was applied to the treatment groups (Table 1). The detailed descriptions and preparation of this HS-based formulation are not disclosed, because these biostimulant products are Omnia trade-marked and still undergoing commercialization processes. Harvesting of the plant materials was performed 3 days after the application of humic substances. The leaves were harvested and immediately shock-frozen in liquid nitrogen to quench all metabolic reactions [47,48]. The frozen plant leaf tissues were stored at −20 °C, pending metabolite extractions. Metabolite Extraction For metabolite extraction, the harvested leaf samples were crushed (to a fine powder) using liquid nitrogen in a mortar. Two grams (2 g) of the crushed leaves were weighed, dissolved in 20 mL (1:10 m/v) of 80% analytical grade cold methanol, and then subjected to homogenisation using an Ultra-Turrax probe homogenizer at 100% intensity for 2 min. Following homogenisation, the mixtures were sonicated for 30 s at 55% power using a probe sonicator (Bandelin Sonopuls, Berlin, Germany), and the crude extracts were centrifuged at 5100 rpm for 20 min. The supernatants were evaporated under vacuum to approximately 1 mL using a Büchi Rotavapor R-200 (Heidolph Laborota, Schwabach, Germany) at 55 °C, transferred to 2 mL Eppendorf microcentrifuge tubes, and then dried to completion using a speed vacuum concentrator (Eppendorf, Merck, Johannesburg, South Africa) set at 45 °C.
The dried residues were then resuspended in 500 µL of LC-MS-grade methanol:Milli-Q water (1:1, v/v) and filtered into HPLC vials (Shimadzu, South Africa). Quality controls (QCs), consisting of pooled equivalent volumes from the control and treatment groups, were prepared. The filtered samples were stored at 4 °C until analysis. Data Acquisition Using Liquid Chromatography-Mass Spectrometry Systems An ultra-high-performance liquid chromatography (UHPLC) system coupled to a high-definition quadrupole time-of-flight MS instrument (Waters Corporation, Manchester, UK) was used to analyse the aqueous-methanol extracts for the non-targeted approach. The samples were chromatographically separated prior to MS analysis on the UHPLC system fitted with an Acquity HSS T3 C18 column (Waters, Milford, USA, 1.7 µm, 150 mm × 2.1 mm) at a flow rate of 0.4 mL/min. A sample volume of 2 µL was injected, and the column was housed in a column oven thermostated at 60 °C. The binary solvent system comprised solvents A (0.1% aqueous formic acid in Milli-Q water) and B (0.1% formic acid in acetonitrile). The initial conditions (98% solvent A and 2% solvent B) were maintained for 1 min. The conditions were then gradually changed to 30% solvent A and 70% solvent B at 14 min, followed by a change at 15 min to 5% solvent A and 95% solvent B, which was maintained for 2 min and then changed back to the initial conditions at 18 min. The analytical column was allowed to re-equilibrate for 2 min before the next injection. The total chromatographic run time was 20 min. The chromatographic effluent was further analysed as follows: a SYNAPT G1 high-definition mass spectrometer, equipped with an electrospray ionization (ESI) source, was used for untargeted analysis. The MS detector was set to acquire centroid data in both positive and negative ionisation modes. The MS conditions used were as follows: source temperature of 120 °C, desolvation temperature of 450 °C, capillary voltage of 2.5 kV, sampling and extraction cones at 30 V and 4 V, respectively, cone gas flow of 50 L/h, desolvation gas flow of 550 L/h, and a mass scan range of 50-1200 Da with a scan time of 0.1 s and an inter-scan delay of 0.02 s. Analysis of each sample was performed in triplicate. Online mass correction was conducted using a lock spray source with leucine enkephalin (50 pg/mL), [M + H]+ = 556.2766 and [M − H]− = 554.2615, to ensure high mass accuracy (1-3 mDa) of analytes. For downstream structural elucidation, the MS analyses were set to yield both unfragmented and fragmented (collision-induced dissociation, MSE) data, where the fragmentation patterns were obtained by alternating the collision energy from 10 to 50 eV. For targeted analysis, a triple quadrupole mass spectrometry platform, LCMS-8050 (Shimadzu, Kyoto, Japan), equipped with an ESI source and ultra-fast liquid chromatography (UFLC) as the front end, was utilized. A multiple reaction monitoring (MRM) method was used for absolute quantification of the targeted metabolites (amino acids and hormones) (Table S4); descriptions of the LC and MS parameters are detailed in Nephali et al. [49]. Data Mining: Data Processing and Multivariate Data Exploration The UHPLC-qTOF-MS raw data were processed using the MarkerLynx application of the MassLynx XS™ software (Waters, Manchester, UK).
This application makes use of the patented ApexTrack algorithm [50] to perform accurate peak detection and alignment, and it results in a data matrix of retention time (Rt)-m/z variable pairs, with the m/z peak intensity for each sample. The following parameters were used for data processing: a retention time (Rt) range of 1-17 min, a 100-1100 Da mass range, an intensity threshold of 50, a mass tolerance of 0.05 Da, and an Rt window of 0.2 min for both polarities. Normalization was then performed using the total ion intensities of each defined peak; prior to calculating intensities, the software performs patented modified Savitzky-Golay smoothing and integration. Only data matrices with noise levels below 50% (MarkerLynx metrics) were used for downstream data analysis strategies. The data matrices generated from MassLynx were exported into the SIMCA-15.0 software (Umetrics Corporation, Umeå, Sweden) for statistical modelling. The computed chemometrics models included principal component analysis (PCA). The latter is an unsupervised method that aims at data dimensionality reduction and generates a model that reveals clusters, trends, and similarities between treatment groups [12]. Supervised orthogonal partial least squares-discriminant analysis (OPLS-DA) models were also computed for (binary) sample classification and for generating the descriptive statistics. MetaboAnalyst (version 5.0) was used for further statistical analyses where necessary. Before building the chemometrics models (e.g., PCA or (O)PLS-DA), data pre-treatment (e.g., Pareto scaling) was applied to normalize the variances and correct heteroscedasticity [51,52]. A nonlinear iterative partial least squares algorithm (built into the SIMCA software) was used to handle missing values, with a correction factor of 3.0 and a default threshold of 50%. A sevenfold cross-validation (CV) method was applied as a tuning procedure in generating the models, and only the components positively contributing to the prediction ability of the model (R1 significant components) were considered. Furthermore, different metrics and tests were used for model validation, which included an evaluation of the explained and predicted variation (cumulative R² and Q²), analysis of variance testing of cross-validated predictive residuals (CV-ANOVA, p-value < 0.05 as a cut-off), receiver operating characteristic (ROC) curves, response permutation tests (with n = 100) and predictive testing. Thus, to ensure reliable results, only thoroughly validated and (preferably) parsimonious models were considered in this study. Quantitative analysis (i.e., generation of comparative bar graphs, heatmaps and pie charts) was performed using average integrated peak areas and concentrations for untargeted and targeted metabolites, respectively. Molecular Networking All the raw vendor-format (i.e., Waters) MS/MS data were first converted to the 'analysis base file' (ABF) format using the Reifycs Abf Converter software (https://www.reifycs.com/AbfConverter/, accessed on 21 April 2021) and then uploaded into the Mass Spectrometry-Data Independent AnaLysis (MS-DIAL) software. The MS-DIAL data-processing program makes use of a deconvolution algorithm to perform mass spectral deconvolution of data-independent acquisition (DIA) data, thus making it applicable for the extensive untargeted metabolomics analysis of both DIA and data-dependent acquisition (DDA) centroid datasets [53].
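Returning briefly to the chemometrics step described above, the data pre-treatment and unsupervised modelling can be summarized in a short sketch. This is an assumption-laden stand-in (a random toy intensity matrix and scikit-learn's PCA) rather than the SIMCA workflow actually used in the study:

# Sketch of Pareto scaling followed by PCA on a samples-by-features intensity matrix.
# The toy matrix and scikit-learn stand in for the SIMCA modelling described above.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(20, 500))   # toy LC-MS feature matrix

def pareto_scale(matrix):
    """Mean-centre each feature and divide by the square root of its standard deviation."""
    centred = matrix - matrix.mean(axis=0)
    return centred / np.sqrt(matrix.std(axis=0, ddof=1))

Xs = pareto_scale(X)
pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)                               # coordinates for a scores plot
print("explained variance per component:", pca.explained_variance_ratio_)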
The data were processed using the following parameters: a mass accuracy (MS1 and MS2 tolerance) of 0.05 Da, a minimum peak height of 50 (amplitude) and a mass slice width of 0.1 Da for peak detection; a 0.5 sigma window value and a 0 MS/MS abundance cut-off for data deconvolution; and a retention time tolerance of 0.05 min under the alignment parameter settings, with one of the QC samples used as a reference file for alignment. Following data processing with MS-DIAL, the resultant GNPS export files, i.e., GnpsMgf and GnpsTable (feature quantification table), were then uploaded into the GNPS environment (https://gnps.ucsd.edu/, accessed on 28 April 2021) using the WinSCP server for molecular networking. A feature-based molecular network (FBMN) was generated for both the negative and positive mode data by uploading the respective feature quantification table, MGF file and a metadata file describing the properties of the sample file (i.e., treatment, days, plant condition, HS concentration and stress level). The MS/MS (fragmentation) spectra were clustered using the MS-Cluster algorithm, with a precursor ion mass tolerance of 0.05 Da and a fragment ion mass tolerance of 0.05 Da, to create the consensus spectra. A network was generated in which the lines/edges connecting the nodes were filtered to have a cosine score above 0.7 and a minimum of 4 corresponding fragment ions. This approach builds on the assumption that molecules which are structurally related give rise to similar fragmentation patterns when subjected to MS2 fragmentation, for example, collision-induced dissociation (CID), thus allowing molecular networks to be created [14,54]. The MN spectra were then searched against the spectral libraries housed in GNPS, where the same parameters (i.e., cosine score > 0.7 and a minimum of 4 matched fragments) were used for metabolite annotation. The resultant molecular network data were first enhanced with MolNetEnhancer, to improve the chemical structural annotations, before being visualized using the Cytoscape network visualization software (version 3.8.2), where the nodes and edges were labelled and coloured. For the FBMN networks, the nodes were labelled with the precursor mass (m/z) and coloured by means of pie charts based on the differential changes in the metabolite levels under the different treatment conditions. The MolNetEnhancer networks, on the other hand, were coloured based on the chemical classes, such that nodes present in the same class had the same colour, while grey nodes represented the non-matched metabolites. The fragmentation spectra of all the putatively annotated metabolites matched to the GNPS spectral libraries were manually validated using the metabolite annotation workflow described below. Metabolite Annotation and Biological Interpretation Metabolite features were annotated based on the following criteria: (i) the molecular formula (MF) from full-scan accurate mass data, filtered through heuristic rules such as mass differences, nitrogen rules, restrictions on element numbers, isotopic fit and rings-and-double-bonds equivalents; (ii) the calculated, filtered elemental composition predictions were searched against bioinformatics tools or databases such as PlantCyc (https://www. ...); (iii) the MS1 and MSE spectra of the metabolites; and (iv) putative annotations of metabolites were also compared to the available literature, considering their respective chromatographic elution profiles on a reverse-phase column.
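The edge-filtering step described in this section (connecting two features only when their fragmentation spectra have a cosine score above 0.7 with at least 4 matched fragment ions) can be illustrated with a deliberately simplified sketch. GNPS uses a more elaborate modified-cosine score computed on consensus spectra; the toy spectra below are invented for illustration only.

# Simplified illustration of molecular-network edge filtering: connect features whose
# aligned fragment spectra have cosine > 0.7 and at least 4 matched peaks.
# (GNPS uses a modified cosine; the spectra here are toy examples.)
import numpy as np
import networkx as nx

def cosine_and_matches(spec_a, spec_b, tol=0.05):
    """spec_*: list of (m/z, intensity). Greedy one-to-one peak matching within tol Da."""
    used_b, pairs = set(), []
    for mz_a, int_a in spec_a:
        for j, (mz_b, int_b) in enumerate(spec_b):
            if j not in used_b and abs(mz_a - mz_b) <= tol:
                used_b.add(j)
                pairs.append((int_a, int_b))
                break
    if not pairs:
        return 0.0, 0
    a = np.sqrt([p[0] for p in pairs])
    b = np.sqrt([p[1] for p in pairs])
    norm_a = np.sqrt(sum(i for _, i in spec_a))            # norm of sqrt-intensity vector
    norm_b = np.sqrt(sum(i for _, i in spec_b))
    return float(np.dot(a, b) / (norm_a * norm_b)), len(pairs)

spectra = {                                                # toy MS/MS spectra
    "feat_1": [(85.03, 120), (127.04, 300), (145.05, 220), (163.04, 800), (353.09, 150)],
    "feat_2": [(85.03, 100), (127.04, 250), (145.05, 200), (163.04, 700), (367.10, 90)],
    "feat_3": [(121.05, 400), (249.11, 60)],
}
G = nx.Graph()
G.add_nodes_from(spectra)
names = list(spectra)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        cos, n_matched = cosine_and_matches(spectra[names[i]], spectra[names[j]])
        if cos > 0.7 and n_matched >= 4:                   # thresholds used in the study
            G.add_edge(names[i], names[j], cosine=round(cos, 2))
print(list(G.edges(data=True)))                            # feat_1 and feat_2 are connected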
In the current study, metabolites were putatively annotated to level 2 of the Metabolomics Standards Initiative (MSI) [55]. All annotated and targeted metabolites (Tables S2 and S4) were used for metabolic pathway and network analyses. Metabolic pathway analysis was performed with the Metabolomics Pathway Analysis (MetPA) component of the MetaboAnalyst bioinformatics tool suite (version 5.0). This enabled the identification of the affected metabolic pathways, analysis thereof, and visualization. MetPA uses high-quality KEGG metabolic pathways as the backend knowledge base. In addition to the existing literature, the use of these bioinformatics tools (for pathway analysis) provided a framework to partially map the molecular landscape of the metabolism under study, enabling the biological interpretability of observed changes in a metabolome view [44]. To globally visualize the metabolite changes, a correlation network was computed using MetaMapp ( http://metamapp.fiehnlab.ucdavis.edu/, accessed on 30 April 2021). MetaMapp-encoded chemical structures of all the identified metabolites were retrieved from the PubChem and KEGG databases, and the p-values and fold changes were obtained from OPLS-DA-derived descriptive statistics (Table S3). A Tanimoto score threshold of 0.7 was used to define the similarity cut-off among metabolites. The generated networks were visualized using Cytoscape v3.8.1 [56]. Conclusions Understanding the modes of action involved in biostimulant-mediated growth promotion and stress resilience is one of the critical steps necessary for the full implementation and integration of biostimulants into agricultural practices. Thus, this present study intended to decode a metabolic choreography that defines the effects of an HS-based biostimulant on maize plants, under normal and starved conditions, in a greenhouse setting. Although further investigation may be needed to build on our findings, the model derived from this metabolomics study suggests that the HS-biostimulant induced a metabolic reprogramming in maize plants towards growth promotion and the alleviation of starvation stress. Molecular networking approaches aided in characterizing the HS-altered chemical space. In more detail, a wide and coordinated range of metabolic processes was involved in the response of maize plants to HS treatments. Impacted metabolic pathways included amino acid metabolism, phenylalanine metabolism, and alpha-linolenic acid metabolism, among others, involving a spectrum of metabolite classes such as amino acids, phytohormones, lipids, HCA compounds and flavonoids which are involved in growth promotion and nutrient stress alleviation. Furthermore, metabolic network analysis revealed some qualitative characteristics of HS effects on maize metabolism under nutrient starvation: a complex structural interconnectivity between altered metabolites involved in stress alleviation and metabolite hubs depicting possible biochemical regulatory mechanisms, which can be investigated further. These HS-induced multilayered metabolic reconfigurations in maize plants could generally be linked to morphophysiological data such as chlorophyll content, nutrient assimilation, and changes in biomass. The knowledge generated from this work provides a morphophysiological and metabolomic gateway to the mechanisms underlying the effects of HS-biostimulant on plants. 
Such insights lay a foundation for advancement of the biostimulant industry and incorporation of these formulations in agronomic practices, for sustainable food security.
2021-06-28T05:09:34.687Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "2164720773a63cbb3322ba91f05eca1fb714e2da", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1989/11/6/403/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2164720773a63cbb3322ba91f05eca1fb714e2da", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
30147402
pes2o/s2orc
v3-fos-license
Machine translation between uncommon language pairs via a third common language: the case of patents This paper proposes to familiarize the MT users with two major areas of development: (1) To improve translation quality between uncommon language pairs, the use of a third language as the pivot. Various techniques have been shown to be promising when parallel corpora for the uncommon language pairs are not readily available. They require the use of two other language pairs involving a common third language pairing with each member of the initial target pair. (2) The surging demands in the field of patent translation and for efforts to bootstrap machine translation in uncommon language pairs (e.g., Japanese and Chinese) via more common language pairs (e.g., Chinese-English and English-Japanese), and the application of the pivot approach to expedite processing. Introduction Recent success in the application of machine translation (MT) in many multilingual contexts, such as textual translation and cross-lingual information retrieval, has been dependent on the building up of critical bilingual resources such as Translation Memory (TM) to develop and continuously fine tune the translation or search engines. The basis for TM is parallel sentence or sentence segment pairs drawn from relevant bilingual texts by sophisticated filtering processes. Thus there is the need to draw from quality texts the maximum amount of relevant parallel linguistic structures, which is of critical importance to the cultivation of high quality TM's. The amount of useful bilingual corpora varies considerably between language pairs, and consequently the amount of useful TM's varies considerably among language pairs with English as a frequent common member in the paired TM's. For example, many parallel corpora with English as one language have been built, such as the French-English Canadian Hansards (Gale and Church, 1991), the Japanese-English parallel patent corpus (Utiyama and Isahara, 2007), the Chinese-English parallel patent corpus (Lu et al., 2010), and the Arabic-English and English-Chinese parallel corpora used in the NIST Open MT Evaluation 1 . However, few parallel corpora exist for language pairs among other languages (e.g. French-Chinese, German-Chinese, Japanese-Arabic or Chinese-Japanese). This is especially so for some domain-specific areas, such as patents, whose use of language embraces both legal and legalistic as well as technical considerations, thus placing limits on the useful application of current MT techniques to meet the needs in the commercial and other sectors. This paper introduces two major areas of development to the MT users. First, we introduce various approaches with the use of a third common language as the pivot to improve translation quality between uncommon language pairs. When parallel corpora for the uncommon language pairs are not readily available, various techniques have been proposed and shown to be promising. Second, we discuss the rapidly increasing demands in the field of patent translation and various efforts to bootstrap patent machine translation in uncommon language pairs (e.g., Japanese and Chinese) via more common language pairs (e.g., Chinese-English and English-Japanese), and the application of the pivot approach to expedite processing. Pivoting Approaches for Machine Translation There are three major approaches introduced below. Suppose the three languages involved are source, pivot and target. 
The first is based on phrase table translation (Cohn and Lapata, 2007; Wu and Wang, 2007). This approach usually first trains two translation models, source-pivot and pivot-target; it then induces a new source-target phrase table by using the translation probabilities and lexical weights in the source-pivot and pivot-target translation models. For example, to translate between Chinese and Japanese, we can first train Chinese-English and English-Japanese translation models based on available bilingual corpora; the two models are then combined at the phrase level to provide a new Chinese-Japanese phrase translation table with induced translation probabilities for each new entry. The second approach is the sentence translation strategy (Utiyama and Isahara, 2007; Khalilov et al., 2008). The first step of this approach is the same as in the first approach: to train source-pivot and pivot-target translation models. The second step, however, is quite different: the source sentence is first translated into a pivot sentence using the source-pivot translation model, and the pivot sentence is then translated into the target sentence using the pivot-target translation model. The third approach uses existing models to build a synthetic source-target corpus, from which a source-target model can be trained (Bertoldi et al., 2008). For example, we can first build a pivot-target translation model based on the pivot-target parallel corpus. Based on the pivot-target translation model, the pivot sentences in the original source-pivot bilingual corpus can be translated into the target language. We can then obtain a source-target corpus by translating the pivot sentences in the source-pivot corpus into the target language with the pivot-target translation model, and/or obtain a target-source corpus by translating the pivot sentences in the pivot-target corpus into the source language with the source-pivot translation model. Patents in the Multilingual World Patents are important indicators of innovation, and patenting is increasingly becoming an international activity as the economy becomes more globalized. Table 1 shows the number of PCT applications for the most used languages of filing and publication. Korean only became one of the official publication languages for the PCT system in 2009, and thus the number of PCT patents with Korean as the language of publication is small. 5 For the national phase of the PCT System, the statistics are based on data supplied to WIPO by national and regional patent Offices, received at WIPO often 6 months or more after the end of the year concerned, meaning that the numbers are not fully up-to-date. Based on Table 2, we can see the rough sizes of the bilingual, trilingual, and even quadrilingual corpora that can be built from these PCT patents involving different languages. Patent Parallel Corpora A couple of parallel corpora have been introduced in the patent domain. For example, Utiyama and Isahara (2007b) mined about 2 million parallel sentences by using the description sections of Japanese-English comparable patents. The corpus was used as the training data for the Japanese-English patent machine translation task at the NTCIR PatentMT evaluations (Fujii et al., 2008; Goto et al., 2010). For the parallel patent corpora involving the Chinese language, we have constructed a large-scale Chinese-English patent parallel corpus containing about 14 million good-quality parallel sentences mined from a large number of comparable patents (Lu et al., 2010).
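The phrase-table triangulation idea behind the first approach can be sketched as follows: for each source phrase, its translation probability into a target phrase is obtained by marginalizing over shared pivot phrases, p(t|s) ≈ Σ_p p(t|p)·p(p|s). The tiny phrase tables below are invented for illustration only; a real system would also combine lexical weights and both translation directions.

```python
from collections import defaultdict

# Hypothetical source->pivot and pivot->target phrase tables:
# phrase -> {translation: probability}
src_to_piv = {"专利 申请": {"patent application": 0.8, "patent filing": 0.2}}
piv_to_tgt = {
    "patent application": {"特許 出願": 0.7, "特許 申請": 0.3},
    "patent filing": {"特許 出願": 0.5, "特許 提出": 0.5},
}

def triangulate(src_to_piv, piv_to_tgt):
    """Induce a source->target phrase table via a shared pivot language."""
    induced = defaultdict(lambda: defaultdict(float))
    for s, pivots in src_to_piv.items():
        for p, prob_ps in pivots.items():
            for t, prob_tp in piv_to_tgt.get(p, {}).items():
                # p(t|s) += p(t|p) * p(p|s), summing over pivot phrases
                induced[s][t] += prob_tp * prob_ps
    return {s: dict(t) for s, t in induced.items()}

print(triangulate(src_to_piv, piv_to_tgt))
# ≈ {'专利 申请': {'特許 出願': 0.66, '特許 申請': 0.24, '特許 提出': 0.10}}
# (up to floating-point rounding)
```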
The human evaluation of sampled sentence pairs shows that the mined pairs are of high quality, with only 1%-5% wrong pairs. We have chosen one million sentence pairs for the Patent MT evaluation at NTCIR-9 (Goto et al., 2010). Patent Machine Translation Although most MT systems in the patent domain began with rule-based MT (RBMT) techniques, statistical machine translation (SMT) techniques are increasingly adopted, and tremendous strides in SMT have been made in recent decades. However, SMT requires parallel corpora as critical resources, and the unavailability and limited size of parallel patent corpora, especially for uncommon language pairs, are still major limitations for SMT systems to achieve higher performance in the patent domain. The authors are working with NICT and NII in Japan to co-organize the NTCIR-9 patent translation evaluation, for which more than 30 participants signed up from all over the world, and 130 runs from 21 teams were finally submitted. Some preliminary observations on the relative success of different approaches based on this large-scale evaluation are as follows (see also Goto et al. (2011)): • On the Chinese-to-English patent translation task, the state-of-the-art SMT systems show much better human evaluation scores (adequacy) than two commercial RBMT systems and the Google online translation system, which do not have access to the training data provided by the task organizers. • On the Japanese-to-English patent translation task, the commercial RBMT systems still show higher adequacy than the state-of-the-art SMT systems. • On the English-to-Japanese translation task, some SMT systems achieve equal or better human evaluation scores (adequacy) than the top-level commercial RBMT systems. No SMT system did this at NTCIR-7, and this is thought to be the first time that this was achieved. Building Parallel Patent Corpora using English as the Pivot We have cultivated a trilingual parallel corpus by means of bilingual parallel corpora with English as the pivot. With the 14 million Chinese-English bilingual sentences introduced in Section 3.2 and the 4.2 million Japanese-English bilingual sentences we have in our center, a trilingual sentence-aligned patent corpus has been cultivated. Specifically, we align Chinese-English and English-Japanese sentence pairs by using the English sentences as the pivot, and finally obtain Chinese-English-Japanese sentence triplets. The selectivity in the whole process of this resource building is shown in Table 3. The pivoting approach has given us 2.1 million trilingual sentences; the distribution of these trilingual sentences is shown in the accompanying table. From the above experiment, we may conclude that the cultivation of large-scale parallel corpora from multilingual patents via a pivot language would alleviate the parallel data acquisition bottleneck in multilingual information processing involving a wide variety of languages, such as English, Chinese, Japanese, Korean, German, etc. Conclusion Given the 1.8 million PCT patent applications and their corresponding national ones, there is considerable potential to construct large-scale high-quality parallel corpora for a wide variety of languages, and to open new opportunities for MT practitioners and researchers in the patent domain. Moreover, with these large-scale patent parallel corpora, MT quality can be enhanced for uncommon language pairs (e.g., between East Asian languages and European languages other than English) by using a common language (e.g., English) as the pivot.
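As a rough illustration of the pivot-based alignment used to cultivate the trilingual corpus above (joining Chinese-English and English-Japanese sentence pairs on their shared English side to obtain Chinese-English-Japanese triplets), the sketch below performs the join on a normalized English key. The normalization and the toy pairs are illustrative assumptions, not the exact matching criteria used for the corpus.

```python
from collections import defaultdict

def build_triplets(zh_en_pairs, en_ja_pairs):
    """Join (zh, en) and (en, ja) sentence pairs on a normalized English pivot."""
    def norm(s):
        # Assumed normalization: lowercase and collapse whitespace.
        return " ".join(s.lower().split())

    ja_by_en = defaultdict(list)
    for en, ja in en_ja_pairs:
        ja_by_en[norm(en)].append(ja)

    triplets = []
    for zh, en in zh_en_pairs:
        for ja in ja_by_en.get(norm(en), []):
            triplets.append((zh, en, ja))
    return triplets

# Toy bilingual pairs sharing one English pivot sentence.
zh_en = [("本发明涉及一种半导体装置。", "The present invention relates to a semiconductor device.")]
en_ja = [("The present invention relates to a semiconductor device.", "本発明は半導体装置に関する。")]

print(build_triplets(zh_en, en_ja))
```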
Tsou has cultivated the largest synchronous corpus of Chinese, LIVAC (www.livac.org), on the basis of the analysis of more than 400 million characters of Chinese newspaper texts from different Chinese communities over the last 16 years. His research interests have focused on quantitative and qualitative studies of language to facilitate the curation and cultivation of large resources, including search engines and algorithms that enable meaningful mediation between parallel bilingual linguistic corpora.
2017-08-03T01:06:35.528Z
2011-11-01T00:00:00.000
{ "year": 2011, "sha1": "f773bbcaa10905cd0b2187fc0149a2feb2bb2be4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "f773bbcaa10905cd0b2187fc0149a2feb2bb2be4", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
252382074
pes2o/s2orc
v3-fos-license
Real-world use patterns of angiotensin receptor-neprilysin inhibitor (sacubitril/valsartan) among patients with heart failure within a large integrated health system BACKGROUND: Sacubitril/valsartan is a first-in-class angiotensin receptor-neprilysin inhibitor (ARNI) that is now preferred in guidelines over angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs) for patients with heart failure with reduced ejection fraction (HFrEF). However, it has not been broadly adopted in clinical practice. OBJECTIVE: To characterize ARNI use within a large diverse real-world population and assess for any racial disparities. METHODS: We conducted a cross-sectional study within Kaiser Permanente Southern California. Adult patients with HFrEF who received ARNIs, ACEIs, or ARBs between January 1, 2014, and November 30, 2020, were identified. The prevalence of ARNI use among the cohort and patient characteristics by ARNIs vs ACEIs/ARBs use were described. Multivariable regression was performed to estimate odds ratios and 95% CIs of receiving ARNI by race and ethnicity. RESULTS: Among 12,250 patients with HFrEF receiving ACEIs, ARBs, or ARNIs, 556 (4.54%) patients received ARNIs. ARNI use among this cohort increased from 0.02% in 2015 to 7.48% in 2020. Patients receiving ARNIs were younger (aged 62 vs 69 years) and had a lower median ejection fraction (27% vs 32%) compared with patients receiving ACEIs/ARBs. They also had higher use of mineralocorticoid antagonists (24.1% vs 19.8%) and automatic implantable cardioverterdefibrillators (17.4% vs 13.3%). There were no significant differences in rate of ARNI use by race and ethnicity. CONCLUSIONS: Within a large diverse integrated health system in Southern California, the rate of ARNI use has risen over time. Patients given ARNIs were younger with fewer comorbidities, while having worse ejection fraction. Racial minorities were no less likely to receive ARNIs compared with White patients. Plain language summary Sacubitril/valsartan is one of the newest guideline-recommended medications for patients with heart failure as it has shown reduction in hospitalizations and death. However, it has not been broadly adopted in clinical practice. Within an integrated health system in Southern California, we observed a rise in angiotensin receptor-neprilysin inhibitor (ARNI) use year-over-year to 7.48% by 2020. Patients given ARNI were younger with fewer comorbidities but had worse heart function. Racial minorities were equally likely to receive the medication compared with White patients. Implications for managed care pharmacy Wide-scale adoption of ARNI in practice has been gradual. Better understanding of the characteristics of the patients prescribed ARNI and potential barriers to its use may help further establish its place within goal-directed medical therapy for patients with heart failure. Heart failure (HF) affects 6.2 million people in the United States, and its incidence is rising yearly. 1 It was reported on 379,800 or 13.4% of all death certificates in 2018 and had been estimated to cost the United States $30.7 billion in 2012. [1][2][3] Over the years, the discoveries of a short list of drugs have led to the development of precise algorithmic guidelines regarding guideline-directed medical therapy. 
4 Sacubitril/valsartan is a first-in-class angiotensin receptor-neprilysin inhibitor (ARNI) that inhibits the renin-angiotensin-aldosterone system and simultaneously inhibits the degradation of atrial and brain natriuretic peptide, thereby exerting multifaceted neurohormonal effects via vasodilation and natriuresis to afford additional cardiovascular benefits. 5 The benefits of ARNIs in patients with HF were first established in the Prospective Comparison of ARNIs with ACEIs to Determine Impact on Global Mortality and Morbidity in Heart Failure (PARADIGM-HF) trial in 2014. ARNI use in HF was approved by the US Food and Drug Administration (FDA) in 2015. 5 Since then, ARNIs have been recommended by the American College of Cardiology, American Heart Association, and Heart Failure Society of America, as well as the European Society of Cardiology guidelines, not only as an alternative to but also as preferred over angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs) for patients with heart failure with reduced ejection fraction (HFrEF). 4,6 Observations of ARNI use in the real-world setting have supported the cardiovascular benefits seen in clinical trials. [7][8][9][10][11][12] In 2020, the FDA granted an expanded indication for the use of ARNIs in some patients with HF with an ejection fraction (EF) greater than 40%, making it the first drug to be approved for this indication. 13 However, despite recommendations of ARNIs in HF guidelines and growing indications, wide-scale adoption of the drug into clinical practice appears to be low. 14,15 Potential factors preventing broad use of the medication include the cost of the drug, the development of hypotension as a side effect, and the presence of only 1 large-scale randomized trial (PARADIGM-HF) to support its superiority over ACEIs or ARBs. 10,12,16,17 Notably, the PARADIGM-HF trial included 8,442 patients and showed a 20% reduction in cardiovascular mortality and HF hospitalization after being stopped early because of overwhelming benefit. 5 Knowledge of the actual penetrance of ARNIs in the real world is limited. 15 A better understanding of current ARNI use may help elucidate potential barriers to its use. In this study, we sought to characterize ARNI use within a real-world setting and assess for any racial disparities among patients receiving ARNIs. STUDY DESIGN AND SETTING We conducted a cross-sectional study within Kaiser Permanente Southern California (KPSC). KPSC is an integrated health system that provides care for more than 4.7 million members at 15 hospitals and more than 200 medical offices spanning 10 counties in Southern California. The KPSC membership population is sex- and gender-balanced and is also ethnically, racially, and socioeconomically diverse and representative of the Southern California population. 18 KPSC is a prepaid integrated health care plan, and members have equal access to health care services, benefits, clinic visits, and medications. This study was approved by the KPSC institutional review board and exempted from informed consent. STUDY POPULATION Adult patients (aged ≥ 18 years) with KPSC membership and a diagnosis of HF between January 1, 2014, and November 30, 2020, were first identified. A diagnosis of HF was defined by at least 2 different encounters with an HF International Classification of Diseases (ICD) code (Supplementary Table 1, available in online article).
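As an illustration of the case-identification rule just described (at least 2 encounters carrying a qualifying HF ICD code, with the first qualifying encounter serving as the index date), a minimal pandas sketch is shown below. The column names, code list and data are hypothetical stand-ins; the actual extraction was performed against the KPSC electronic health record using the codes in Supplementary Table 1.

```python
import pandas as pd

# Hypothetical encounter-level table: one row per diagnosis code per encounter.
encounters = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 3],
    "encounter_id": [101, 102, 201, 301, 301, 302],
    "icd": ["I50.22", "I50.9", "I50.9", "I10", "I50.22", "I50.43"],
    "encounter_date": pd.to_datetime(
        ["2015-02-01", "2016-07-19", "2018-03-05",
         "2014-01-10", "2014-01-10", "2019-11-30"]),
})

# Hypothetical qualifying HF code list (stand-in for Supplementary Table 1).
hf_codes = {"I50.22", "I50.43", "I50.9"}

hf_rows = encounters[encounters["icd"].isin(hf_codes)]

# Count distinct encounters with a qualifying code per patient;
# patients with >= 2 such encounters meet the HF definition.
enc_counts = hf_rows.groupby("patient_id")["encounter_id"].nunique()
hf_patients = enc_counts[enc_counts >= 2].index

# Index date: date of the first qualifying encounter per HF patient.
index_dates = (hf_rows[hf_rows["patient_id"].isin(hf_patients)]
               .groupby("patient_id")["encounter_date"].min())
print(index_dates)
```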
The date of the first qualifying encounter was used as the index date to extract clinical information including laboratory, echocardiogram, and vital signs. We excluded patients who had a history of heart transplant or left ventricular assist device, identified by ICD codes, to avoid inclusion of patients with significantly different underlying characteristics. We further excluded patients who did not have pharmacy data or did not receive an ACEI, ARB, or ARNI. We restricted our cohort to only those receiving one of the above medications, as we felt patients already receiving ACEIs or ARBs were the primary patients considered ARNI-eligible during our study period (Supplementary Figure 1). ACEI, ARB, or ARNI use was defined by a filled prescription and electronically abstracted by the corresponding generic product identifier code (Supplementary Table 1). Finally, we excluded patients who had an EF greater than 40%. EF was defined by the most recent EF prior to the index date. DATA ABSTRACTION AND VARIABLES OF INTEREST KPSC has a comprehensive, integrated electronic health record and claims data system that allows for complete data capture both within and outside the system. All information on demographics, comorbidities, laboratory measurements, medications, device therapy, and HF hospitalization was extracted from the electronic health record and collected as part of routine clinical encounters. Baseline demographics included age, sex, race and ethnicity, and body mass index. Race and ethnicity were based on self-report. Laboratory data included the estimated glomerular filtration rate calculated using the 2009 Chronic Kidney Disease Epidemiology Collaboration equation. 19 Medical comorbidities included hypertension, diabetes, and atrial fibrillation, among others. STATISTICAL ANALYSIS Prevalence of ARNI use was determined by the number of patients receiving ARNIs within the cohort across the entire observation window and within each year during the same period. Patients who received both an ARNI and an ACEI (or ARB) within the study period (and each year) were counted as ARNI use. Characteristics of patients were presented descriptively by the use of ARNIs vs ACEIs/ARBs. Continuous variables were presented as median (interquartile range) and compared using the Kruskal-Wallis test. Categorical variables were presented as frequencies and percentages and compared using the chi-square test. To evaluate for racial and ethnic differences in receiving ARNIs, we estimated the odds ratio and 95% CI for receiving ARNIs for racial minorities by comparing each race and ethnicity with non-Hispanic White patients using multivariable logistic regression adjusting for other demographic variables. A 2-sided P value less than 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc.).
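The racial-disparity analysis described above (odds of receiving an ARNI for each race and ethnicity group relative to non-Hispanic White patients, estimated from a multivariable logistic regression) can be sketched roughly as follows. The variable names and data frame are hypothetical, and the published analysis was run in SAS 9.4 rather than Python.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic dataset: one row per patient.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "arni": rng.binomial(1, 0.05, n),     # received ARNI (1) vs ACEI/ARB only (0)
    "age": rng.normal(68, 11, n),
    "female": rng.binomial(1, 0.4, n),
    "race": rng.choice(["White", "Black", "Hispanic", "Asian"], size=n),
})

# Logistic regression of ARNI receipt on race/ethnicity, adjusted for
# other demographic variables; 'White' is the reference category.
model = smf.logit(
    "arni ~ C(race, Treatment(reference='White')) + age + female", data=df
).fit(disp=0)

# Odds ratios and 95% CIs for each group versus non-Hispanic White patients.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))
```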
Results Between 2014 and 2020, a total of 12,250 patients with HFrEF receiving ACEIs, ARBs, or ARNIs were identified (Figure 1). Discussion In a real-world clinical setting, 4.54% of patients with HFrEF receiving ACEIs, ARBs, or ARNIs between 2014 and 2020 received an ARNI. A rise in ARNI use was observed during the observation period. Patients receiving ARNIs had lower EF values and higher use of AICDs. Furthermore, patients receiving ARNIs tended to be younger, were more likely male, and had fewer comorbidities, findings that are consistent with previous observations from smaller populations in earlier real-world studies. [7][8][9][10][11][12]14 Racial minorities were also no less likely to be prescribed an ARNI compared with White patients, which may reflect the equal access to care within our integrated health system. Nationally, the estimated proportion of patients with HFrEF prescribed ARNIs was 3.6% (1.5%-6.8%) for patients with Medicare plans and 13.7% (4.9%-31.8%) for patients with commercial plans in a cross-sectional study between August 2018 and July 2019. 14,15,20,21 Lower rates seen in the Medicare population may be because this group is generally older with more comorbidities. We observed that ARNI-prescribed patients were a younger population. This may be attributed to better tolerance of the drug in younger patients. We observed the largest percentage increase in 2017, when ARNIs were officially adopted into HF guidelines, and suspect ARNI use will continue to rise, especially given the stronger recommendation in the recent 2021 guidelines, in which ARNIs are recommended not only as an alternative but also as a preferred agent to replace ACEIs/ARBs in existing users. 4,22 Potential reasons for the slow adoption of ARNIs include the higher cost of ARNIs relative to ACEIs or ARBs and the subsequent higher copays for patients, the fact that only 1 large randomized clinical trial demonstrated its superiority, and potential adverse drug-related outcomes including hypotension. 10,12,16,17,23 Some prescribers may feel that the findings of PARADIGM-HF are not as generalizable to the population they treat. Notably, our study population was generally older, included more males, and also included more Black patients compared with PARADIGM-HF trial participants. Symptomatic hypotension was seen both in the original clinical trial and in subsequent real-world studies and was the main side effect that led to discontinuation of the drug or inability to titrate to the maximum dose. 5,[7][8][9][10][11][12] Despite these current doubts and challenges, the benefits of ARNIs can be advantageous for patients with HF and for health systems managing these patients. Furthermore, ARNIs now have an expanded FDA indication for use in some patients with HF with an EF greater than 40%, making it the first drug to be approved for this indication and applicable to all patients with HF. LIMITATIONS There are potential limitations to our study, which may confound the interpretation of our findings. Our study was cross-sectional in nature, which limited the interpretation of the temporal relationships between the variables studied and ARNI vs ACEI/ARB use. We also relied on coding to identify HF as well as comorbidities, which can result in misclassification. Our EF value was defined by the most recent EF prior to the index date, and EF values may have changed or improved over time. In addition, we did not have specific information on economic burdens or other socioeconomic factors for our patient population. These factors, along with varying insurance plans that could result in higher copays, may have been a barrier to use for some patients. Finally, our findings from an integrated health system may not be generalizable to other care settings and patient populations, given possibly greater variability in access and slower adoption of newer medications in nonintegrated health systems. Conclusions Among a diverse cohort of patients with HFrEF, ARNI use was low but comparable to existing reports and has risen through the years. Patients receiving ARNIs were younger with fewer comorbidities, while having lower EF.
Patients from racial minority groups were no less likely to receive ARNI compared with White patients.
2022-09-21T06:16:49.570Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "1b34c90f9dbaa128602f6e16128bde669ca8a40f", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "6f0c41604ac19a3606ef8cf69e2e47acb0b3822c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3830569
pes2o/s2orc
v3-fos-license
Phenotypic and genetic analysis of cognitive performance in Major Depressive Disorder in the Generation Scotland: Scottish Family Health Study Lower performances in cognitive ability in individuals with Major Depressive Disorder (MDD) have been observed on multiple occasions. Understanding cognitive performance in MDD could provide a wider insight in the aetiology of MDD as a whole. Using a large, well characterised cohort (N = 7012), we tested for: differences in cognitive performance by MDD status and a gene (single SNP or polygenic score) by MDD interaction effect on cognitive performance. Linear regression was used to assess the association between cognitive performance and MDD status in a case-control, single-episode–recurrent MDD and control-recurrent MDD study design. Test scores on verbal declarative memory, executive functioning, vocabulary, and processing speed were examined. Cognitive performance measures showing a significant difference between groups were subsequently analysed for genetic associations. Those with recurrent MDD have lower processing speed versus controls and single-episode MDD (β = −2.44, p = 3.6 × 10−04; β = -2.86, p = 1.8 × 10−03, respectively). There were significantly higher vocabulary scores in MDD cases versus controls (β = 0.79, p = 2.0 × 10−06), and for recurrent MDD versus controls (β = 0.95, p = 5.8 × 10−05). Observed differences could not be linked to significant single-locus associations. Polygenic scores created from a processing speed meta-analysis GWAS explained 1% of variation in processing speed performance in the single-episode versus recurrent MDD study (p = 1.7 × 10−03) and 0.5% of variation in the control versus recurrent MDD study (p = 1.6 × 10−10). Individuals with recurrent MDD showed lower processing speed and executive function while showing higher vocabulary performance. Within MDD, persons with recurrent episodes show lower processing speed and executive function scores relative to individuals experiencing a single episode. Introduction Major Depressive Disorder (MDD) is common mental disorder affecting at least 1 in 10 in the United Kingdom 1 and is a leading cause of disability worldwide. Showing a SNP-based heritability of~30% 2,3 and a twin-based estimate of~40% 4 , MDD has a substantial genetic component. It has been shown that individuals suffering from MDD show lower performance in cognitive domains such as executive function (EF), memory, language and attention [5][6][7] . The identification and quantification of lower cognitive performance in MDD could lead to a better understanding of the underlying aetiology of depression, to improve treatment of patients, or as an endophenotype for subsequent studies investigating the genetic architecture of MDD. These targeted approaches could possibly lay the groundwork to improve the mental health of MDD patients and therefore lower the burden MDD has on society. Despite the high prevalence of MDD, cognitive lower scores in MDD have not been as widely studied as in other psychiatric disorders such as bipolar disorder 8 and schizophrenia 8,9 . Snyder et al. 5 performed an extensive and the largest-to-date meta-analysis of cognitive performance in MDD, focussing mainly on tasks that measure executive function (EF) with the exception of two non-EF tests measuring vocabulary (language) and digit symbol substitution (processing speed, but is also considered by some to be a component of EF). 
They observed that MDD patients showed a lower performance in phonemic verbal fluency and digit symbol measures. That is, MDD patients produced significantly fewer words than healthy control individuals and recoded significantly fewer symbols to digits in digit symbol measures. Vocabulary performance was observed to be lower in MDD patients; however, the effect was not significant. Logical memory (LM) immediate and delayed (both measuring verbal declarative memory) have been less well studied compared to other cognitive measures in depression. Lim et al. 6 conducted the largest meta-analysis study of LM to date (N logical memory immediate = 291; N logical memory delayed = 348). They observed that MDD patients performed significantly less well than controls on both LM immediate and delayed. This result has been previously reported by smaller studies not included in the Lim et al. study 10,11 , with one exception 12 . Significant lower performances were also observed in the attention domain 6 , via the digit span test and continuous performance test where MDD patients performed slower compared to controls. The final domain examined, visuospatial processing (immediate and delayed visual memory), showed no differences between MDD patients and controls 6 . As the genomic underpinnings of MDD are poorly understood 13 , we examined genomic associations with cognitive differences as observed in our study as an endophenotype strategy. Using the extensively phenotyped Generation Scotland Cohort Study, we sought to: (a) investigate whether cognitive ability in MDD patients differs from controls without MDD or reported mental illness, (b) assess whether cognitive performance differs between single-episode MDD versus recurrent MDD, (c) investigate cognitive performance between controls and recurrent MDD and (d) to reduce multiple testing we performed genome-wide single locus, genome-wide single-locus interaction, polygenic and polygenic interaction analyses only on cognitive performance tests showing a significant difference within study designs. This study represents the largest single cohort study investigating the association of cognitive performance in depression using a formal clinical diagnosis of MDD and incorporating genomic association analyses. The largest single cohort study investigating cognitive performance in depression is the UK Biobank cohort study 7 however that study relies on self-reported MDD status and does not examine genetic factors. Cohort data and phenotyping Generation Scotland: the Scottish Family Health Study (GS:SFHS) is a family-based cohort study sampled from the general population in Scotland (www. generationscotland.org) 14,15 . The study design has been widely documented 14,15 . In short, between 2006 and 2011 over 24,000 subjects were recruited into the study. The initial sample of study subjects (N = 7953) were registered with general medical practitioners, between 35 and 65 years, and from five regions of Scotland. These initial study subjects were asked to bring a relative within the age range 18-99 to the baseline data collection. Participants were asked to fill in health, lifestyle and family history questionnaires and answer a 30 min interview which included questions about possible mental ill health. If participants answered positively on either of the 2 mental health screening questions ("Have you ever seen anybody for emotional or psychiatric problems?" 
and "Was there ever a time when you, or someone else, thought you should see someone because of the way you were feeling or acting?") (N = 4539), they were asked to take part in a Structured Clinical Interview for DSM-IV (SCID) 16 , focussing on mood disorders. Individuals answering "no" to both questions were assigned to the control group. Individuals who completed the SCID but did not meet the criteria for MDD or bipolar disorder were subsequently assigned to the control group 17 (N = 1727). Finally, individuals who were invited for the SCID interview but refused to take part (N = 507) were not assigned to either case or control group 3 . Four cognitive domains were measured in Generation Scotland: processing speed (Wechsler Digit Symbol Substitution Test; recoding symbols to digits 18 -DST), verbal declarative memory (Wechsler Logical Memory Test; sum of immediate and delayed recall of an oral story 19 -LM1 and LM2), executive functioning (the phonemic verbal fluency test; using the letters C, F, and L, each for one minute 20 -VFT), language (the Mill Hill Vocabulary Scale, Junior and Senior Synonyms combined-finding a synonym of a given word 21 -MHVS) and the difference between logical memory immediate and delayed (LM1-LM2). The correlation between scores on tests of these different cognitive domains are reported in Supplementary Tables S1-S4. In addition to age and sex, we selected lifestyle factors (self-reported smoking and alcohol intake), socioeconomic status (the Scottish Index of Multiple Deprivation 22 ), medication usage (anti-depressants and mood stabilisers) and 15 genetic principal components to control for population stratification. These variables have been previously used as covariates in Cullen et al. 2015 7 to investigate cognitive differences in depression using the UK Biobank cohort. Genetic data DNA of 20,128 GS:SFHS participants was analysed by means of high density genome-wide bead array genotyping (Illumina OmniExpress 700K SNP GWAS and 250K exome chip). We selected a set of unrelated individuals for use in our analyses, to remove the influence of shared environments. We removed single nucleotide polymorphisms (SNPs) and individuals with a missingness of >1% and removed rare SNPs with a minor allele frequency <0.01 leaving 557,292 SNPs for analysis. We used Genome-wide Complex Trait Analysis 23 to extract a list of genetically unrelated individuals from a predefined list of participants with a known MDD SCID diagnosis or controls. Seven thousand, one hundred and seventy-two unrelated individuals (relatedness<0.025, corresponding to second degree cousins) were selected, of which 1042 individuals (14.5%) were diagnosed with either single or recurrent depression. One hundred and five individuals were removed due to the lack of self-reported medical background information. Another 25 individuals with self-reported Alzheimer's and/or Parkinson's disease were removed leading to a total of 7012 individuals, of which 1021 individuals (14.5%) were diagnosed with a form of depression. Statistical analysis Phenotypic differences We used phi coefficients and Spearman correlation coefficients to determine the level of correlation between the pool of potential covariates and MDD case-control, single-recurrent or control-recurrent status. As a continuous variable, age was assessed using the Spearman correlation coefficient. 
As all other variables were binary, their correlations were assessed using the phi coefficient, with associated p-values from either a χ2 or Fisher's exact test. The Fisher's exact test was used when observed cell counts in the 2 × 2 contingency table were <5. No potential covariate was strongly correlated with MDD case-control (Supplementary Table S5), single-recurrent (Supplementary Table S6) or control-recurrent (Supplementary Table S7) status aside from age, sex and medication usage in the case-control study and solely medication usage in both the single-recurrent and control-recurrent MDD studies, as expected. To keep in line with Cullen et al. (2015), all covariates (sex, age, alcohol consumption, smoking tobacco, medication usage, socioeconomic status and 15 principal components) were included in the full model. Multiple regression analysis was performed for each cognitive test and the diagnosis label before and after controlling for covariates. We used the following models: a baseline model (1), in which each cognitive test score was regressed on diagnostic status alone (cognitive score = β0 + β1·diagnosis + ε), and a full model (2), in which each cognitive test score was regressed on diagnostic status plus the covariates listed above (cognitive score = β0 + β1·diagnosis + covariates + ε). We observed that medication usage contained many missing values (52%), with only a small percentage of all participants answering positively (5.1%). Therefore, we performed model 2 and all subsequent analyses twice: (1) including medication usage (M2A) and (2) excluding medication usage (M2B) as a covariate. A Bonferroni significance level of p < 8.3 × 10−03 (p = 0.05/6 cognitive tests) was used. All models were run using the R Statistical Computing Environment 24 v 3.1.0. Single-Locus analysis We performed a Genome-Wide Association Study (GWAS) for the cognitive performance variables that showed a significant difference in the phenotypic analyses. We further tested whether each SNP's association with cognitive performance depended on MDD status via a Genome-Wide by Environment Interaction Study (GWEIS). The GWAS analyses can be seen as a baseline model and GWEIS as a measure of non-additive effects for SNP and depression case status. The standard Bonferroni significance level of p < 5 × 10−08 is conservative, as many SNPs are in linkage disequilibrium and the statistical tests are therefore not independent. We therefore applied a less conservative significance level of p < 1.52 × 10−07 derived from the Genetic type 1 Error Calculator (GEC) 25. All models were run using PLINK v1.90b1g. Polygenic analysis Polygenic Risk Scores (PGRS) were calculated for five p-value threshold ranges (0-0.01, 0-0.05, 0-0.1, 0-0.5, 0-1) using summary output from the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) meta-analysis GWAS of DST and similar tests, which controlled for sex, age, assessment centre, education and community 26. Generation Scotland is a part of the CHARGE consortium but was not included in this specific meta-GWAS study. The CHARGE consortium performed a sample-size-weighted meta-analysis because of differences in test methodology and measurement units. The z-statistic was weighted by the effective sample size (sample size × (observed dosage variance/expected dosage variance)) for each SNP. We pruned the Generation Scotland dataset for linkage disequilibrium (window size = 50 kb, step size = 5 kb and r2 threshold = 0.25) and converted the CHARGE z-statistics to standardised beta coefficients using the z-score and standard error provided by CHARGE.
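The score construction just described (converting meta-analysis z-statistics to per-SNP effect sizes and summing allele dosages weighted by those effects for SNPs passing a given p-value threshold) can be sketched as below. The arrays are toy data, and the conversion shown (beta = z × SE) follows the description in the text rather than the exact CHARGE/Generation Scotland pipeline.

```python
import numpy as np

# Toy GWAS summary statistics for 5 LD-pruned SNPs.
z = np.array([4.1, -2.5, 0.8, 3.2, -1.1])       # meta-analysis z-statistics
se = np.array([0.02, 0.03, 0.025, 0.02, 0.04])  # standard errors
pvals = np.array([4e-5, 1.2e-2, 0.42, 1.4e-3, 0.27])

# Convert z-statistics to standardised beta coefficients (beta = z * SE).
beta = z * se

# Toy genotype dosages (0-2 copies of the effect allele) for 3 individuals.
dosages = np.array([
    [0, 1, 2, 1, 0],
    [2, 0, 1, 2, 1],
    [1, 1, 0, 0, 2],
], dtype=float)

# Polygenic score at a given p-value threshold: weighted sum of dosages
# over the SNPs whose GWAS p-value falls below that threshold.
def pgrs(dosages, beta, pvals, threshold):
    keep = pvals <= threshold
    return dosages[:, keep] @ beta[keep]

for thr in (0.01, 0.05, 0.1, 0.5, 1.0):
    print(f"threshold {thr}: scores = {np.round(pgrs(dosages, beta, pvals, thr), 3)}")
```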
We fitted linear regression models between DST and the polygenic risk scores, as well as models including a polygenic risk score-by-MDD status interaction, in the control-recurrent MDD and single-episode-recurrent MDD study designs. Consistent with the previous analyses, we restricted our polygenic score analysis to the groups in which we had observed significant differences. We controlled for all covariates and the number of valid genotypes in a model that did not include medication usage. Figure 1 shows a graphical representation of the analyses performed. Descriptive statistics We observed significant differences in the distributions of sex, age, alcohol consumption, smoking tobacco, medication usage and socioeconomic status across MDD status, with a higher frequency of females (69-72%), tobacco smokers (23-26.8%) and medication users in the MDD case group (Table 1). Within MDD cases, alcohol drinkers represented a significantly lower frequency in the recurrent MDD group (83.5%) than in the single-episode MDD group (88.8%), but medication usage was more frequent in the recurrent group. On average, controls were slightly but significantly older than cases with MDD and lived in less deprived areas. Cognitive association by depression status We performed three linear regression analyses for each cognitive test (the dependent variable). The predictor variable was MDD diagnosis, classified as either control-MDD, single-episode-recurrent MDD or control-recurrent MDD. No other covariates were considered in these baseline models (Table 2). No significant association was observed between MDD and cognitive test scores, except for digit symbol substitution in the single-episode-recurrent comparison (β = −3.41, p = 5.8 × 10−04), with the recurrent MDD group recoding fewer symbols to digits compared with the single-episode MDD group. We then performed linear regression on the full model, including all covariates that were used in Cullen et al. 7, which includes medication usage (Supplementary Table S8). We observed a significant difference after correcting for multiple testing in the MHVS in both the control-MDD and control-recurrent MDD study designs. Individuals with depression had higher scores on the MHVS, identifying on average 0.66 more synonyms relative to controls (β = 0.66, p = 2.96 × 10−03). Between controls and individuals with recurrent MDD, participants with recurrent depression scored even higher, with 1.07 more synonyms identified (p = 6.0 × 10−04). When leaving out medication usage (Table 3), we observed the same significantly higher MHVS performance of the MDD and recurrent MDD groups in the control-MDD (β = 0.79, p = 2.02 × 10−06) and control-recurrent MDD (β = 0.95, p = 5.8 × 10−05) study designs. Individuals with recurrent MDD recoded significantly fewer symbols to digits compared with their counterparts in both the single-episode-recurrent and control-recurrent MDD study designs. Polygenic score analysis In the single-episode-recurrent study design, the DST PGRS was significantly associated with DST performance at all but two p-value thresholds (Bonferroni p = 0.01; 0.05/5 PGRS ranges), indicating that the DST polygenic risk score explained a significant amount of variation in performance among MDD cases (most significant polygenic score: R2 = 1%, p-value threshold = 0.1, p = 1.66 × 10−03) (Table 4). We observed a significant statistical association for each PGRS range in the control-recurrent MDD study design, with the PGRS explaining between 0.13% and 0.5% of the variation (Table 4).
However, the effect of the DST polygenic score did not differ between single-episode and recurrent cases, nor between controls and recurrent MDD cases. We did not observe a PGRS-by-MDD group interaction on DST performance (Supplementary Table S9). GWAS (Supplementary Figure S1a-b) and GWEIS (Supplementary Figure S2a-b) analyses were performed on MHVS in the control-MDD and control-recurrent MDD study designs, excluding medication usage. No SNP was observed below the GEC significance threshold in the MHVS analyses (GEC p = 1.52 × 10−07). The same analysis was performed for DST in the single-episode-recurrent and control-recurrent MDD study designs without controlling for medication usage (Supplementary Figures S3a-b and S4a-b). We did not observe a significant association between genomic variation and DST. Both the strongest (non-significant) GWAS hit and the strongest GWEIS hit were associated with digit symbol performance and were observed in the single-episode-recurrent MDD study design. SNP rs10829637 (p = 3.3 × 10−07), located on chromosome 10 in LOC107984280, was the most significant GWAS hit, and rs911684 (p = 6.7 × 10−07), located on chromosome 14 in LOC100506999, was the most significant GWEIS hit. Other GWAS and GWEIS results can be found in Supplementary Figures S5a-b and S6a-b. Discussion This study of cognitive performance in MDD is the largest single cohort study with a formal clinical diagnosis of MDD and incorporating genomic association analyses. The only larger single cohort study is UK Biobank, which does not contain a formal clinical diagnosis of MDD and did not investigate genetic associations. Moreover, the cognitive battery used in Generation Scotland is standardised and validated on large representative samples using pre-existing evidence, while the cognitive battery used in UK Biobank was bespoke and designed for UK Biobank itself. We observed significantly higher MHVS scores in MDD cases versus controls, and in recurrent depression versus controls, with and without controlling for medication usage, with cases performing higher than controls in both comparisons. The same directionality of effect was observed in UK Biobank by Cullen et al. 7; they also observed a significantly higher score in vocabulary performance in MDD case groups compared with controls. We also observed significantly lower DST performance between recurrent and single-episode MDD cases, and between recurrent MDD and controls; in this case, however, the 'cases' performed less well in both study designs. We also observed a significant amount of variation explained in DST performance using the CHARGE consortium DST polygenic risk score; however, this result was observed across cases and controls and did not differ by case status, indicating that the DST polygenic risk score may not be a useful endophenotype for depression. Our results are consistent with the largest meta-analysis of case-control differences in digit symbol coding performance, which found that individuals with depression performed significantly lower than controls 4. One previous study not included in the recent meta-analysis 4, examining differences in digit symbol coding performance between individuals with depression (current (N = 37) or previous (N = 81)) and controls (N = 50), found no significant difference between the three groups, but the sample size was modest 27. We also report no significant differences in phonemic verbal fluency between cases and controls or between single-episode and recurrent MDD cases. Snyder et al. 4 observed significantly lower performance in phonemic verbal fluency between cases and controls, whereas we observed no significant difference.
One possible reason is through the inclusion of people in the control group that have symptoms of depression but do not meet the criteria of being diagnosed with MDD, in other words, misclassification of controls, which may have biased our estimates toward the null. Misclassification of controls as MDD participants might be possible due to the screening questions: "Have you ever seen anybody for emotional or psychiatric problems?" and "Was there ever a time when you, or someone else, thought you should see someone because of the way you were feeling or acting?". However, this is unlikely due to the subsequent SCID interview given by a trained clinical nurse. Given that this interview was given to all MDD cases in GS:SFHS, misclassification would be less likely between single-episode MDD versus recurrent MDD. Second, publication bias could have influenced results from meta-analyses. Our sample size, although the second largest to investigate MDD and cognition to date, may be underpowered to detect small differences in cognitive performance. Although we removed individuals with Alzheimer and Parkinson's disease and controlled for smoking and alcohol intake, we did not control for other disorders that may affect cognition. Many previous studies focused on clinical populations, whereas our study is population based; clinical populations may have more severe forms of depression. The use of simpler models in meta-analyses, which do not control for covariates, may obscure signals. Finally, observed cognitive performance in MDD in the literature are mainly observed in large meta-analyses which increases the study heterogeneity, while our results are derived from a much more homogeneous single cohort study. However, both Snyder et al. 4 and Lim et al. 5 observed significant heterogeneity and subsequently applied random-effects meta-analytic models that do not assume homogeneity of effect between studies. We also were not able to assess all cognitive domains which could show signs of cognitive impairments in MDD, such as visuospatial processing and attention 6 . Finally, we were unable to control for the effects of antidepressant use on cognitive performance in the full sample, which may lead to poorer performance in cases 9 . Cognitive differences between single-episode and recurrent MDD have been not as well studied as differences between MDD cases and controls 30,31 . Talarowska et al. 30 compared the cognitive performance of 210 patients with MDD (single-episode N = 60, recurrent N = 150) and observed that the cognitive domains of executive functioning, memory and processing speed showed significant lower performance in recurrent MDD in relation to single-episode MDD. The largest study to date to assess cognitive differences between singleepisode and recurrent depression has been the UK Biobank study 7 . Cullen et al. (2015) observed higher performance in single-episode MDD vs controls (numeric and prospective memory), however moderate and severe MDD groups performed lower (e.g. reaction time and numeric memory) compared to both the single MDD and control group. Cullen et al. 2015 observed the same counter-intuitive higher performance in vocabulary for MDD cases compared to controls and provided several possible explanations for this. 
It may include differential selection (depressed individuals are more likely to participate than controls), differential recall (cognitive test is associated with greater recall), higher health literacy (individuals with a higher intelligence are quicker to spot possible health issues and therefore quicker to see a GP) or residual confounding. 7 As vocabulary is a crystallised intelligence measure where the tests demand recall ability, and as we observed the same higher performance in a second large population-based cohort, we hypothesise that differential recall and higher health literacy are the most plausible explanations. That we did not observe a significant genome-wide hit for MDD was unsurprising as it is a clinically heterogeneous disorder with multiple SNPs of small effect, which would be difficult to observe without very large sample sizes. We controlled for LD structure in GWAS/ GWEIS by applying a less conservative GEC significance threshold which takes into account LD between SNPs. We compared p-values of SNPs associated with depression in a large cohort study 32 with our results from the GWEIS studies (Supplementary Table S10). Four SNPs overlapped with those available in Generation Scotland and for 18 SNPs we used 52 proxy SNPs (r 2 > 0.8). We observed a consistent positive association with p-value <0.05 for the GWEIS of MHVS (both case-control and control-recurrent) and for the GWEIS of DST in controlrecurrent analyses for SNP rs4143229 which is intronic and located in ENOX1. A recent GWAS of antidepressant treatment response at 12 weeks to selective serotonin reuptake inhibitors (SSRIs) showed suggestive association with another intronic SNP in ENOX1, rs17538444 33 . Using Quanto 34 for gene-by-environment power calculations, setting α = 0.05, two-sided, and using a MAF of 0.5 (as our top SNP had a MAF of 0.48), and observed MDD proportion and distribution of DSST, we concluded that a sample size of 2885 individuals was required to detect an interaction effect at 80% power. Although a significant amount of variation in DST was explained by the CHARGE consortium DST polygenic score, it was not specific to MDD cases and the effect did not vary by MDD case status. Polygenic scores often explain only a small amount of variation in endophenotypes. In this study, we looked for main and polygenic effects; it might be possible that more variation can be explained by incorporating possible genetic interactions between loci and/or the environment or interactions of two or more loci. The main strength of this study is that is has assessed the association between MDD and cognitive ability in a large homogeneous population sample, using standardised tests and outcome measures across all participants. This represents a significant advantage over previous studies that used either meta-analytic (combination of effects across studies) or mega-analytic (combining individual-level data across studies) methods to improve statistical power. The division of the dataset in three study designs based on MDD diagnosis allowed us to assess cognitive performance based on MDD severity. Limitations of this study are the sample size (N = 7012) which results in a low powered interaction analysis, underreporting of antidepressant and mood stabiliser medication usage (<40%) and finally certain cognitive domains are not measured in the Generation Scotland cognitive battery, i.e., visuospatial perception. 
In conclusion, we have shown that cognitive performance in some domains differs significantly not only between controls and MDD groups but also within MDD groups. This difference could not be linked to single-locus associations, but a small proportion of the variation could be explained by means of a polygenic approach.
Chemical Switching of the Magnetic Coupling in a MnPc Dimer by Means of Chemisorption and Axial Ligands: We present an ab initio density functional theory study of the magnetic properties of manganese phthalocyanine dimers, where we focus on the magnetic coupling between the Mn centers and on how it is affected by external factors like chemisorption or atomic axial ligands. We have studied several different configurations for the gas phase dimers, which resulted in ferromagnetic couplings of different magnitudes. For the bare dimer we find a significant ferromagnetic coupling between the Mn centers, which decreases by about 20% when a H atom is adsorbed on one of the Mn atoms and is reduced to about 7% when a Cl atom is adsorbed. The magnetic coupling is almost fully quenched when the dimer, bare or with the H ligand, is deposited on the ferromagnetic substrate Co(001). Our calculations indicate that the coupling between the two Mn atoms principally occurs via a superexchange interaction along two possible paths within a Mn−N−Mn−N four-atom loop. When these electrons get involved in chemical bonding outside the dimer itself, an appreciable alteration of the overlap between Mn and N molecular orbitals along the loop occurs, and consequently, the magnetic interaction between the Mn centers varies. We show that this is reflected by the electronic structure of the dimer in various configurations and is also visible in the structure of the atomic loop. The chemical tuning of the magnetic coupling is highly relevant for the design of nanodevices like molecular spin valves, where the molecules need to be anchored to a support.
■ INTRODUCTION A key objective of molecular electronics and spintronics (spin-based electronics) is the downscaling of the electronic components, which would bring remarkable benefits like an increase in magnetic storage density and a reduction of power consumption. In this context, the possibility of using organic molecular materials of low production cost has become highly appealing. The molecules of the phthalocyanine family (Pc) have been studied quite extensively in the emerging field of organic spintronics. 1−4 Among the milestones in this field are the possibility of spin-polarized injection and transport of electrons through organic semiconductors, shown by Dediu and colleagues, 5 and the giant magnetoresistance observed for a single H 2 Pc 1 and CoPc. 6 In addition, an organic spin valve device based on immobilized layers of CuPc (spacer layer) and aligned MnPc (or NiPc) as spin-injection and spin-detection layers was fabricated and studied experimentally by Banerjee and co-workers. 7 Recently, supermolecular nano- and low-dimensional structures like molecular chains and dimers have also been addressed, with an emphasis on their structures 8 and magnetic properties. 9,10 These systems are promising novel building blocks for molecular-size devices as well as model systems to understand the magnetic interactions and spin transfer mechanisms between the molecules or between the molecules and substrates. An important issue in this field is to find feasible mechanisms to manipulate the molecular spin and magnetic moments in a reproducible fashion. In the present work, we target dimers of the MnPc molecule, which is among the most studied phthalocyanines in molecular spintronics, due to the known magnetic interactions in MnPc-based bulk materials.
11,12 Interestingly, the deposition of MnPc molecules on the ferromagnetic Co(001) surface has shown to generate a so-called spinterface, 13,14 i.e., an active heterogeneous interface for spin transport. 3 Beyond that, MnPc and FePc coadsorbed in the form of a checkerboard table on Au(111) were reported to be antiferromagnetically coupled through the Ruderman−Kittel−Kasuya−Yosida (RKKY) exchange interaction via the substrate electronic states. 15 MnPc, shown in Figure 1A, consists of a single Mn center surrounded by an organic ring, representing a sort of archetype of single molecule magnet. The molecule has a spin 3/2 and a D 2h symmetry. The magnetic properties of specific phases of MnPc films were described already some decades ago. Both ferromagnetic and antiferromagnetic couplings were reported for respectively the β and α phases in bulk MnPc films, 11,12 showing that the magnetic interaction between the molecules is directly linked to their reciprocal arrangement. 12 Because of this, it is essential to unveil how the interplay between the electronic and geometric structure shapes the magnetic properties to engineer the future molecule-scale devices. We have first studied the magnetic properties of dimers of MnPc in the gas phase, considering several possible molecular configurations by means of ab initio density functional theory (DFT) including dispersion interactions. We have then focused on the dimer with the configuration of lowest energy and studied its adsorption on a Co(001) substrate, both bare and with atomic axial ligands such as H and Cl. It has been shown in previous joint theory/experiment studies how these theoretical methods well reproduce the experimental findings for the adsorbate MnPc/Co(100) system in terms of adsorption distances and magnetic properties. 3,16 Previous theoretical studies have reported dimers of phthalocyanines in geometrical configurations that are derived from the various 3D bulk stackings, 8 where the magnetic interactions are directly associated with those in the thick films. We have here analyzed the connection between the magnetic coupling and the inter-and intramolecular chemical bondings that occur in the dimer. The Mn atoms at the center of the molecules are coupled via superexchange interaction, which unrolls via a N atom directly bonded to one of the Mn atoms and situated in the face of the Mn of the opposing molecule, via a mechanism analogous to the one described for MnPc films and bulk materials. 11,12 This type of superexchange path was also experimentally investigated by Chen et al. 17 by means of spinflip electron tunneling spectroscopy between CoPc molecules adsorbed on a Pb(111) surface, where the CoPc molecules assume a reciprocal configuration analogous to the one of the dimers in our study. We observe that when the Mn atoms take part in intermolecular chemical bonds, the magnetic coupling is markedly reduced. This happens in all cases when the dimer is chemisorbed on a Co surface via one of the MnPc rings or when a H or a Cl atom is axially adsorbed on the Mn center on one side of the dimer. In these cases, the chemical bonding scheme of the Mn atoms is changed, as can be seen in the modifications of the inter-and intramolecular distances between the Mn and N atoms involved. ■ THEORETICAL METHODS The present study is based on ab initio DFT calculations using the VASP 18 code with the Perdew, Burke, and Ernzerhof (PBE) exchange correlation functional 19,20 and the projector augmented wave method. 
21 The plane wave cutoff was 400 eV, and only the Γ-point k-mesh was used. An effective Hubbard term U eff of 4 eV was chosen to describe the Mn 3d orbitals, following Dudarev's method. 22 The value for the Hubbard term has been chosen according to the findings in our previous study. 23 This value has been proven to properly reproduce experimental data, for example, photoemission spectra. 23 The long-range dispersion forces were described with Grimme's second method (D2). 24 For comparison, some tests were performed with the (D3) 25 method showing no differences in the relevant properties. To describe the Co(001) substrate, we have used a three-layer fcc Co film as in our previous works with FePc on Co(001). 16
■ RESULTS AND DISCUSSION We have first analyzed different possible geometries for the MnPc dimer based on possible arrangements of multilayers of metal Pc molecules. We will briefly discuss these structures, which are shown in Figure S1 of the Supporting Information. Starting from the α and β polymorphs, we have built the corresponding models: α× with a tilt angle of about 63°, α+ with a tilt angle of 59° (Figure 1B), and the β type with a tilt angle of 43°, where the tilt angle is the angle between the line joining the two Mn atoms and a line orthogonal to the plane of the molecules. In the α+ and β configurations, the two molecules are shifted with respect to each other along an axis joining the Mn center to an isoindole N atom (N iso ), while in the α polymorph, the reciprocal shift occurs along a line joining the Mn center to an aza bridge N atom (N aza ) (see Figure 1A). The α+ configuration was previously proposed in a theoretical study of MnPc/F 16 CoPc dimers, 8 and the magnetic properties of MnPc and CuPc in this configuration were studied by Wu et al. 9 Furthermore, this configuration was experimentally observed in a recent STM investigation of the growth of FePc multilayer films on Au(111), as the reciprocal arrangement of the molecules between the first and second FePc layers. 27 In addition to these three models, we have studied a dimer with two MnPc lying on top of each other and rotated by 45°, a structure that was for example observed as the reciprocal geometry of two CoPc in the first two layers of films of CoPc grown on a Pb(111) substrate. 17 After a geometry relaxation of all the structures, we find that α+ is the one with lowest energy, as can be seen from Table 1 with the total energies, and it is the structure the rest of this study focuses on. To get an estimation of the magnetic coupling between the MnPc molecules in the dimers, we computed the energy difference between the ferromagnetic (FM) and the antiferromagnetic (AFM) arrangements, ΔE FA , for all the structures given in Table 1. After geometry relaxation we obtain in all cases ferromagnetic coupling between the Mn centers (corresponding to a positive value of ΔE FA ), but with different magnitudes in the various configurations. The top 45° dimer has the strongest coupling but has a higher total energy. In the dimers, both the α+ and the α× actually have a ferromagnetic coupling according to our calculations, unlike what is reported for the α polymorphs. 11,12 ΔE FA is higher in the α+ and in the β dimers than in the α×. We attribute this to the different mechanisms that govern the magnetic coupling between the two Mn atoms, which depend on their relative positions.
In fact, different tilt angles in the dimers, resulting from different shifts between the planes of the two molecules, are generally expected to affect the superexchange mechanism, since the relative orientation of the molecules influences the possible overlap between different molecular orbitals. In the α+ dimer, each Mn atom is located precisely on top of a N iso of the other MnPc. From the side view of Figure 1B, interestingly, it is noticeable that the MnPcs in the dimer are indeed influenced by the presence of the opposite phthalocyanine, since they are not as flat as the single molecule in gas phase ( Figure 1A). The two central Mn atoms are 3.20 Å apart and slightly protruding from the plane of the molecule toward each other. For MnPc in gas phase, the computed bonding distances between the Mn and the N iso atoms amount to 1.97 and 1.96 Å in the two orthogonal directions because of the D 2h symmetry of the molecule induced by Jahn−Teller distortions. 23,28 When the dimer is formed, the D 2h symmetry is lifted. The distance between the Mn and the N iso that is facing the Mn of the other molecule is now 1.98 Å, slightly elongated with respect to the gas phase bond lengths, while the bonding distance with the three remaining N iso is 1.96 Å. In the Pc films, the magnetic coupling between the metal atoms is generally explained by an intermolecular superexchange interaction that connects two successive Mn centers along the chains through the N atoms of the closest molecules. The interaction depends on the overlap of the 3d orbitals of Mn with the π orbitals centered on the N atoms in the organic shell. 12 As in the α and β phases, the relevant orbitals extending between the two molecules are the out-of-plane Mn 3d z 2 and the 3d π (i.e., the 3d xz and 3d yz ) and the N iso 2p z . One should point out that these electrons actually participate in molecular orbitals that involve also other atoms of the MnPc (C, N, and Mn). Furthermore, more than one orbital combination may contribute to the superexchange interaction, generating a more complex process than the one usually described. In a simplified scheme, the ferromagnetic superexchange coupling in the β phase is assumed to occur mainly via the overlapping of the Mn 3d z 2 with the N (π), while the antiferromagnetic coupling in the α phase via the overlapping of the Mn d π with the N (π) orbitals. In the ferromagnetic α+ configuration, the atoms involved in the superexchange path form a loop that develops along the Mn centers and two of the N iso atoms located on the opposite MnPc, while in the β configuration, shown in Figure S1, the superexchange path develops instead along the Mn atoms and the N aza atoms. These four atoms in the α+ configuration form a rectangular loop that is highlighted in Figure 2 for three of the different structures based on the α+ dimer considered in this study and that will be discussed in the next paragraphs, namely (MnPc) 2 in gas phase ( Figure 2A) and adsorbed on the Co substrate ( Figure 2B), and (MnPc) 2 with a Cl axial ligand ( Figure 2C). The two Mn atoms in the loop are indicated as Mn1 and Mn2 and the N iso in front as N1 and N2, where the indices 1 and 2 identify the two molecules, 1 being the upper MnPc, on which the axial ligands are adsorbed, and 2 being the index of the molecule directly adsorbed on Co(001). 
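The dependence of the coupling on the orbital overlap along the Mn−N−Mn−N loop can be illustrated with a deliberately crude two-site estimate. The sketch below is not the DFT treatment used here (which, in fact, yields ferromagnetic coupling, whereas the kinetic-exchange limit is antiferromagnetic); it only uses the standard scaling |J| ~ 4t²/U to show how quadratically sensitive the coupling is to the effective Mn−N hopping t. The hopping t0 is a hypothetical placeholder, and U is set equal to the U eff = 4 eV used for the Mn 3d shell purely for illustration.

# Toy two-site kinetic-exchange scaling |J| ~ 4 t^2 / U. Not the paper's
# calculation (and the sign convention differs: this limit is antiferromagnetic,
# while the dimers studied here couple ferromagnetically); it only illustrates
# how a modest loss of Mn-N overlap along the superexchange loop suppresses
# the magnitude of the coupling. T0_EV and U_EV are placeholder values.
U_EV = 4.0                 # on-site repulsion, set to the U_eff used for Mn 3d
T0_EV = 0.5                # hypothetical effective hopping in the gas-phase loop

def j_magnitude_meV(t_ev: float) -> float:
    return 4.0 * t_ev ** 2 / U_EV * 1000.0

for scale in (1.00, 0.90, 0.70, 0.50):   # shrinking overlap, e.g. stretched Mn-N bonds
    print(f"t = {scale:.2f} t0  ->  |J| ~ {j_magnitude_meV(scale * T0_EV):6.1f} meV")
# A ~30% reduction of the effective hopping already cuts |J| roughly in half,
# qualitatively consistent with the quenching seen when the Mn 3d electrons
# are drawn into bonds with the substrate or with an axial ligand.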
In the bare dimer in gas phase, the superexchange loop has a regular rectangular shape with intramolecular distances between the Mn atoms and the N iso of 1.98 Å and intermolecular distances between the Mn atoms and the N iso of 2.74 Å. In this configuration, we obtain the highest ferromagnetic coupling, with a ΔE FA value of 257 meV. The atomic distances in the four-atom loop for all the configurations studied, with the respective ΔE FA , are reported in Table 2. The next step was to simulate the deposition of the α+ dimer on a Co(001) surface. In our previous works regarding the adsorption of FePc on Co(001), we found a ground state corresponding to the molecule adsorbed on a top site, that is, with the Fe atom on top of a Co atom and with the Mn−N iso main axes of the molecule oriented along the [100] crystallographic direction of the surface. 16,26 The adsorption of a single MnPc on the same surface gives analogous outcomes for structures and adsorption sites as for the FePc, as reported in Table S1. Following these results, we have placed the whole (MnPc) 2 dimer on the same top Co site, and the relaxed structure is displayed in Figure 1C. The chemisorption casts the dimer into a heterogeneous environment with one phthalocyanine (MnPc2) in contact with the metal and the other one (MnPc1) facing vacuum. Hence, the structural symmetry between the two molecules is lifted, with an impact on the overall geometry and properties of the dimer. A side-view inspection of the adsorbed dimer (Figure 1C) shows that MnPc2 is subject to noticeable distortions of the planar symmetry, while MnPc1 maintains a flatter geometry, akin to the gas phase molecule. The distance between the Mn atoms (Mn2 in the dimer) and the Co atom beneath is 2.43 Å for the single molecule and 2.36 Å for the adsorbed dimer, while the distance between the two Mn atoms for the adsorbed dimer is 3.32 Å, larger by 0.12 Å than in the gas phase. These distances are similar to those obtained by means of an X-ray standing wave experiment of MnPc adsorbed on Cu(001), which gave an adsorption distance of 2.240 ± 0.045 Å for the Mn to the top layer of the surface. 29 When the MnPc dimer is adsorbed on Co(001), we still obtain ferromagnetic coupling between the two molecules and between the dimer and the substrate (see Table 2). However, we find that the magnetic coupling between the two MnPc's is dramatically decreased, with ΔE FA reduced to 15 meV. In practice, we find that the deposition of the dimer on the Co substrate severely weakens the magnetic coupling between the molecules. This result may imply that in a magnetic nanodevice based on a chain or on a multilayer of MnPc the first layer would not be magnetically coupled to the remaining layers. However, the magnetic moments on the Mn centers are only very little affected by the adsorption, while for similar compounds, like for CoPc on Pb(111), 30 the first molecular layer acts as a spin insulating buffer. The quenching of the coupling is reflected by the distortion of the superexchange path presented in Figure 2. In fact, the chemical bonding formed between the Mn1 center and the Co atom beneath alters the bond lengths between the N and Mn atoms reported in Table 2. In the slightly corrugated geometry of the chemisorbed MnPc2 (Figure 1), the N2 atom is lifted toward Mn1, resulting in a shorter Mn1−N2 distance (2.46 Å) and, on the other side, inducing a longer Mn2−N1 distance (2.93 Å).
This can be seen from the slightly irregular loop in Figure 2B. As is also the case for the MnPc dimer with H and Cl ligands, the increase of the intermolecular Mn−N bond length between the two molecules affects the overlap of the orbitals along the loop and is correlated to the quenching of the coupling. Experimental and theoretical studies of the axial adsorption of small molecules on several 3d metal Pc's have demonstrated how the ligands can influence the magnetic moment and spin state of the metal macrocycles. 2,31,32 Here we focus on the effect of atomic H and Cl axial ligands and on how the coupling is additionally affected by the deposition on Co(001). When a H atom is adsorbed on one side of (MnPc) 2 , the overall geometric structure of the dimer is only marginally changed with respect to the bare dimer ( Figure S2A). After geometry relaxation, the distance between Mn1 and Mn2 is 3.26 Å, which is 0.06 Å larger than in the bare dimer, and the Mn−N iso distances are between 1.95 and 1.98 Å: in MnPc1, bonded to H, the Mn−N iso distances are generally shorter by just about 0.01 Å. The two MnPc have ferromagnetic coupling with a ΔE FA of 200 meV, which is 20% lower than in the bare dimer. In comparison to the gas phase dimer, the sides of the superexchange loop are only weakly distorted, with the Mn2− N1 distance stretched from 2.74 to 2.83 Å and the Mn1−N2 distance shortened from 2.74 to 2.70 Å (see Table 2). The deposition of the dimer on the Co(001) surface on the top site ( Figure S3A) leads instead to considerable changes. We obtain a quenching of the magnetic coupling with a ΔE FA of −6 meV, where the minus sign indicates an antiferromagnetic character. In this case, we find a larger distance between the Mn centers of 3.41 Å. The Mn1−N2 bond length is 2.90 Å and the N1− Mn1 bond length is 2.03 Å, thus stretching two sides of the superexchange loop with respect to the bare dimer in the gas phase. When a Cl ligand is adsorbed on MnPc1 ( Figure S2B), Mn1 is pulled toward the Cl atom, which lies 2.33 Å above it. The Mn1−Mn2 distance is increased in the presence of Cl to 3.62 Å in the gas phase and to 3.71 Å when adsorbed on the Co substrate ( Figure S3B); in addition, all the bond lengths between the atoms in the superexchange loop are increased as reported in Table 2. The calculations show that in this case the Cl ligand brings about a drastic reduction of the coupling: the calculated ΔE FA is 18 meV in the gas phase and 15 meV when the dimer is adsorbed on Co (see Table 2). The coupling is ferromagnetic in both cases. The distance between the Mn1−Mn2 atoms reported in Table 2 is not a major factor in determining the strength of the magnetic interaction: for example, it only varies by 0.06 Å between (MnPc) 2 and H-(MnPc) 2 , which have ΔE FA differing by 20% from each other, and again it varies by 0.06 Å between H-(MnPc) 2 and (MnPc) 2 adsorbed on Co, where the coupling is reduced to 15 meV. Rather, the coupling depends mostly on a more complex interplay between the orbital overlaps along the superexchange path. Summarizing the results of Table 2 for all the systems studied, we can point out how the magnetic coupling is considerable for only two structures, namely, for the bare dimer and for the dimer with the H ligand in gas phase. In these cases, the bond lengths within the superexchange loop are very similar, with intramolecular Mn−N bond lengths between 1.97 and 1.98 Å and intermolecular Mn−N bond lengths between 2.70 and 2.83 Å. 
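For readers who prefer an exchange constant to a raw energy difference, the ΔE FA values quoted above can be mapped onto a Heisenberg J for two S = 3/2 centers. This mapping is not made in the text; it assumes the simplest broken-symmetry (Ising-like) convention ΔE FA = 2JS², and other conventions change J by factors of order one.

# Rough conversion of the quoted dE_FA = E(AFM) - E(FM) values into a Heisenberg
# exchange constant J for two S = 3/2 Mn centers, H = -J S1.S2, assuming the
# simple broken-symmetry relation dE_FA = 2 J S^2. The mapping (not used in the
# paper) is only meant to put the numbers on a familiar scale.
S = 1.5
dE_FA_meV = {
    "(MnPc)2, gas phase":       257,
    "H-(MnPc)2, gas phase":     200,
    "Cl-(MnPc)2, gas phase":     18,
    "(MnPc)2 on Co(001)":        15,
    "Cl-(MnPc)2 on Co(001)":     15,
    "H-(MnPc)2 on Co(001)":      -6,   # negative sign: antiferromagnetic character
}
for system, dE in dE_FA_meV.items():
    J = dE / (2.0 * S ** 2)
    print(f"{system:24s} dE_FA = {dE:4d} meV   J ~ {J:6.1f} meV")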
When the bare (MnPc) 2 is adsorbed on Co(001), ΔE FA goes down to 15 meV: here the superexchange loop is distorted on one side, with the bond length Mn1−N2 increased to 2.94 Å, which is 0.2 Å more than in gas phase. When H-(MnPc) 2 is adsorbed on Co(001), the loop is stretched on two sides and the coupling is essentially quenched, as described above. Table 3 shows the computed magnetic moments on the Mn, C, and N atoms and on the ligands for the monomer and for all the examined configurations of the α+ dimers. In (MnPc) 2 in the gas phase, in each molecule, the major part of the magnetic moment resides on the Mn atom (3.6 μ B ), while about 12% of this moment with opposite sign (−0.46 μ B ) is distributed among the N atoms and the C atoms bonded to them in the organic ring, as shown by the computed isosurface magnetization density for (MnPc) 2 in Figure 3A. This distribution of the magnetic moments is very similar to that of the single molecule in gas phase, with 3.5 μ B on Mn and −0.44 μ B on the N and C atoms in the organic ring. When MnPc is adsorbed on Co(001), both as a single molecule and as a dimer, we obtain a slightly smaller moment on the Mn atom directly in contact with the surface, varying between 3.41 μ B for MnPc and 3.46 μ B for (MnPc) 2 , as can be seen in Table 3. At the same time, we see an increase in the moments of the C atoms in the benzene rings that bond to the Co surface by about 0.1 μ B (see Figure 3B). On the other hand, the total moment of the N atoms decreases by about 1.1 μ B , since here the moment in the ring is concentrated in the N iso with negligible contributions from the N aza . These results are similar to the variation of magnetic moments for MnPc adsorbed on Cu(001) computed with the same method by Alouani and co-workers, 29 where between gas phase and adsorption on the surface, a variation in the Mn moment from 3.46 to 3.34 μ B was found, with a change of the total C moment from −0.15 to −0.27 μ B and a very small change of the total N moment from −0.15 to −0.14 μ B . The small differences observed in the present work could be attributed not only to the different surfaces but also to the different adsorption sites considered in the two works, since this could affect the bonding and therefore the charge transfer of the N and C ligands to the substrate and finally their magnetic moments. The axial ligands mainly modify the distribution of the moments in the central part of MnPc1, where they are adsorbed. In H-(MnPc) 2 , the moment on Mn1 is 2.95 μ B with −0.50 μ B on the ring (about 17% of the moment on Mn1 but with opposite sign). The H ligand brings about a decrease of the moment on Mn1 accompanied by an increase of the moment on the C atoms. The Cl ligand, on the contrary, induces an increase of the magnetic moment on Mn1 to 3.74 μ B . Now a moment of −0.17 μ B (5% of the moment on Mn1) is located on the ring and mostly concentrated on N iso , as can be seen from Figure 3C. When H-(MnPc) 2 is adsorbed, the moment on Mn1 further decreases to −2.80 μ B , and it goes to 0.44 μ B on the ring. In Cl-(MnPc) 2 the moment stays at 3.74 μ B on Mn1, and it is lowered to −0.11 μ B in the ring. The antiferromagnetic coupling observed in all these cases between the central metal and the organic ring was also observed for several metal Pc's deposited on surfaces, 29,33 while for some Pc-based 3D materials, it was found to be ferromagnetic, for example, in FePc(CN) 2 ·2CHCl 3 crystals. 34 In addition, we have considered the magnetic interaction of the dimer with the surface, which takes place through MnPc2.
We have obtained a ferromagnetic coupling of the Mn2 atom with the Co atoms as well as an antiferromagnetic coupling of the C atoms in MnPc2 to the surface. The same magnetic configuration was obtained for the single molecule and for the dimers with axial ligands. The bonding of the molecule to the surface occurs through the direct interaction of especially the Mn and the C atoms in the benzene rings with the underlying Co atoms, and this is coupled to variations in magnetic moment on both the MnPc and the Co(001) surface. This results in a sort of footprint left by the molecule on the underlying surface, as can be seen in Figure 4, where the magnetic moments of the top Co atoms on the surface are shown. A profile highlighting the position of the adsorbed molecule is also sketched. The magnetic moments of the Co atoms vary between about 1.6 and 1.9 μ B , and the strongest variations are given by those Co atoms lying below the C atoms in the benzene rings, suggesting the decisive role played by these atoms in the bonding of MnPc to the surface. We had previously found the same type of patterning for the adsorption of FePc on the same substrate. 16 We have finally examined the interplay between the chemical and geometrical configurations and the electronic structure of the MnPc molecules in the dimer, and how this is related to the magnetic coupling. The partial density of states (pDOS) of the single molecule and of the MnPc dimer in gas phase and on Co(001) are presented in Figure 5A−C. In Figure 5A, the pDOS of the single molecule is illustrated, with the Mn 3d orbitals, 3d z 2 , 3d π , 3d xy , and 3d x 2 −y 2 , and the N iso 2p z . For the single MnPc in gas phase we obtain a 4 E g ground state where the 3d z 2 and the 3d xy are half occupied, and the 3d π contains three electrons (see Figure 5A), in agreement with previous DFT studies 23,35 and experimental results. 36,37 The MnPc's maintain the 4 E g electronic configuration in the α+ dimer and even when they are adsorbed on Co(001) and with the axial ligands (Figure 5B,C). Because the dimer is symmetric for the two molecules, only the pDOS for one of the two is shown in Figure 5B. First of all, one can notice how the Mn 3d z 2 orbital in the dimer is split into several components of lower intensity (black lines in Figure 5). This is evident when comparing the two high-intensity 3d z 2 peaks of the single MnPc at −4.3 and 1.5 eV in Figure 5A with the several low-intensity 3d z 2 peaks in the regions between −5.4 and −3.8 eV and between 0.8 and 2.6 eV in the gas phase dimer in Figure 5B, and is a result of the interaction between the two molecules. Also, the 3d π states (gray curves) undergo some minor changes in energy position and especially in intensity, although they do not further split into several smaller components, giving a hint that the 3d π states are less involved in the hybridization between the molecules. In all the diagrams, the partial DOS of the N iso 2p z is also plotted, for the specific N iso which is located opposite to a Mn atom of the other molecule; this pDOS is very weak compared to that of the Mn 3d electrons, and its intensity is therefore multiplied by a factor of 4. In (MnPc) 2 the N iso 2p z pDOS overlaps the Mn 3d orbitals in several energy intervals, for example between −5 and −4 eV and at about 0.5 and 2 eV in Figure 5B, suggesting hybridization with the Mn 3d electrons.
When the MnPc dimer is adsorbed on Co(100), the lower molecule (MnPc2) is strongly bonded to the substrate. The Mn center on a top site bonds to the underlying Co atom through the out-of-plane 3d z 2 and 3d π electrons. In this configuration, the MnPc is ferromagnetically coupled to the Co surface, in analogy to FePc on Co(100), as we had reported in our previous studies 16,26 and as was also found experimentally, for example, for the adsorbed systems FePc, CoPc, NiPc, and CuPc on Ag(100), FePc, CoPc, and CuPc on Co(001), 38 and MnPc on Co(001). 3 The pDOS curves of MnPc2 in Figure 5C show that both the Mn 3d z 2 and 3d π electrons are strongly affected by the adsorption, with the formation of several peaks with reduced intensity. The involvement of these electrons in the bonding to the surface corresponds to a certain weakening of the bonding between MnPc1 and MnPc2, which can also be deduced from the flatter geometrical structure of MnPc1, and is also responsible for the drastic decrease of the magnetic coupling between the molecules, with a ΔE FA of 15 meV. Figure 6A shows the pDOS of (MnPc) 2 with adsorbed H. Although the ferromagnetic coupling is only reduced by 20% from the bare dimer, some differences in the DOS of MnPc1 are evident. For example, in the Mn1 3d z 2 pDOS we observe a different splitting of the peaks and a new 3d z 2 feature at 0.57 eV. However, the pDOS of the MnPc2 and of the N iso 2p z are mostly analogous to the ones of the bare dimer. Figure 6B reports the pDOS of the dimer with the Cl ligand. In this case, a significant weakening of the magnetic coupling between the Mn centers occurs, as when the dimer is adsorbed on Co(001). The Cl is adsorbed on MnPc1, forming chemical bonds between the Mn 3d z 2 and 3d π electrons and the Cl 2p (see Figure 6B between −3 and −1 eV). These new bonds significantly weaken the interaction between the two MnPc's, which are now at 3.71 Å from each other. This effect is also visible in the pDOS of MnPc2 of Figure 5C, which resembles very closely the one of the gas phase MnPc, with distinct peaks for the Mn 3d z 2 and 3d π electrons. On the other side, the pDOS of MnPc1 is highly affected by the bonding to Cl. For all these gas phase and adsorbed systems, the pDOSs provide further evidence of the role of the Mn 3d z 2 and 3d π in creating the superexchange ferromagnetic interaction between the two Mn moments, since in all cases when these orbitals are involved in other chemical bondings, the superexchange path is disturbed and the coupling gets affected.
■ CONCLUSIONS To summarize, the analysis of the interplay between the reciprocal arrangement of the molecules and the magnetic interaction in the α+ dimer confirms that the magnetic coupling between the two MnPc takes place principally via a superexchange interaction, exploiting one of the two paths in a loop entangling the Mn centers and two isoindole N atoms. The path is conceptually analogous to the one pointed out for the α and β bulk phases, although in the dimers only ferromagnetic coupling is obtained. In the dimer, it is possible to generate distortions in the atomic loop, for example, by chemisorption on a substrate or by adsorption of axial ligands. We have highlighted how the coupling between the molecules can be tuned by ligand adsorption, and how it can be influenced by the adsorption of the dimer on the FM Co(001).
In addition, MnPc couples ferromagnetically to the underlying surface via the Mn atom, while the C and N atoms couple antiferromagnetically to it.
On time-inhomogeneous controlled diffusion processes in domains Time-inhomogeneous controlled diffusion processes in both cylindrical and noncylindrical domains are considered. Bellman's principle and its applications to proving the continuity of value functions are investigated. The first part of this article is devoted to quite an old subject in the theory of controlled diffusion processes, namely deriving Bellman's principle (also called the principle of optimality) for processes controlled up to the first exit time from bounded domains. This principle plays a major role in many aspects of the theory of controlled diffusion processes. The necessity of proving it and deriving from it some continuity properties of value functions came to light while investigating the rate of convergence of finite-difference approximations for Bellman's equations. In this connection, we point out that our main results are Theorems 2.10, 2.13 and 2.17 in the second part of the paper. Theorem 2.17 is one of the main ingredients in [3], where we proved a sharp result that the rate of convergence of finite-difference approximations for Bellman's equations in bounded domains is not less than h 1/2 , with h being the mesh size. Theorem 2.17 is similar to Theorem 2.1 of [8] and is nontrivial even if we consider a single diffusion process without any control. In that case, it yields the rate of convergence h 1/2 without much work (see, e.g., Corollary 1.10 of [8]). Our main results depend heavily on the validity of Bellman's principle. Bellman's principle has been derived in different settings in many papers and books. We refer the reader to [1,2,4,5,6,10] and the references therein. Probably the article closest to the subject of the present one is [6], where Bellman's principle is derived under very general conditions allowing unbounded domains and the coefficients of the controlled processes, but only for the problem of optimal stopping of controlled processes. Later on, these results were used to obtain sharp results concerning when the value functions for time-homogeneous processes satisfy the corresponding Bellman's equations. Another result, which is also very close to the results presented in the first part of this paper, is Theorem 2.1 in Chapter V of [4]. However, there are gaps in the original proof of the theorem (the corrected version is to appear in the forthcoming second edition of [4]) and the conditions under which it is stated are somewhat different from those we require for some applications we wish to consider. It is worth noting that in [4], the controlled process is considered up to the first exit time from the closure of a domain, so that if we have two domains with the same closure, the corresponding value functions will coincide. In contrast, we consider exit times from a domain as is usually done in the theory of Markov processes. One of major technical differences between these two settings is that our exit times are lower semicontinuous and the exit times from [4] are upper semicontinuous. The approach in [4] originated from [11], where the reader can also find many useful results concerning the continuity of value functions. In Section 1 we prove Bellman's principle in a setting more general than that of Theorem 2.1 in Chapter V of [4] (see Remark 1.14). Several examples show that under the assumptions in Section 1, value functions can be discontinuous even inside the domains. 
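For orientation, the controlled process and the value function introduced in Section 1 below have, in the standard notation of [5], roughly the following form; the display is a sketch written under that assumption, and the paper's own numbered equations may differ in minor conventions (for instance in how the discounting term is written).

% Sketch of the basic objects of Section 1 in the standard form of [5];
% this is an assumed reconstruction, not a quotation of the paper's equations.
\begin{align*}
x^{\alpha,t,x}_{s} &= x + \int_{0}^{s} \sigma^{\alpha_{r}}\bigl(t+r,\,x^{\alpha,t,x}_{r}\bigr)\,dw_{r}
                      + \int_{0}^{s} b^{\alpha_{r}}\bigl(t+r,\,x^{\alpha,t,x}_{r}\bigr)\,dr, \\
\varphi^{\alpha,t,x}_{s} &= \int_{0}^{s} c^{\alpha_{r}}\bigl(t+r,\,x^{\alpha,t,x}_{r}\bigr)\,dr, \\
v^{\alpha}(t,x) &= E^{\alpha}_{t,x}\Bigl[\int_{0}^{\tau} f^{\alpha_{s}}(t+s,x_{s})\,e^{-\varphi_{s}}\,ds
                      + g(t+\tau,x_{\tau})\,e^{-\varphi_{\tau}}\Bigr],
\qquad v = \sup_{\alpha} v^{\alpha},
\end{align*}
with $\tau = \tau^{\alpha,t,x}$ the first exit time of $(t+s,\,x^{\alpha,t,x}_{s})$ from $Q$.
Bellman's principle then asserts, for stopping times $\gamma^{\alpha} \le \tau^{\alpha,t,x}$, that
\[
v(t,x) = \sup_{\alpha} E^{\alpha}_{t,x}\Bigl[\int_{0}^{\gamma} f^{\alpha_{s}}(t+s,x_{s})\,e^{-\varphi_{s}}\,ds
        + v(t+\gamma,\,x_{\gamma})\,e^{-\varphi_{\gamma}}\Bigr].
\]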
With additional assumptions, in Section 2, we prove the Lipschitz continuity of value functions in space variables and Hölder-1/2 continuity in the time variable, which is one of the main motivations of this paper. In Remark 2.14, we also present our understanding as to how the statement of Theorem 2.1 in Chapter V of [4] regarding the continuity of value functions can be corrected. Finally, we derive in Corollaries 1.3 and 2.12 an inequality which we use to prove Theorem 2.17. As we have mentioned above, the last theorem plays a major role in investigating the rate of convergence of numerical approximations for Bellman's equations in domains. 1. Bellman's principle. Let A be a separable metric space and let A(n) be fixed subsets of A, n = 1, 2, . . . , such that A = n A(n), A(n) ⊂ A(n + 1). Let (Ω, F, P ) be a complete probability space and {F t ; t ≥ 0} an increasing filtration of σ-algebras F t ⊂ F which are complete with respect to F , P . Let (w t , F t ; t ≥ 0) be a d 1 -dimensional Wiener process on (Ω, F, P ). Suppose that the following have been defined for α ∈ A and (t, x) and real numbers c α (t, x), f α (t, x) and g(t, x). We assume that for every n ≥ 1, on A(n) × R d+1 , the functions σ, b, c and f are Borel, bounded, continuous in (α, x) and continuous in x uniformly with respect to α for each t ∈ R. Moreover, for every n ≥ 1, on A(n) × R d+1 , let σ and b satisfy a Lipschitz condition in x with constant not depending on (α, t) and let g be lower semicontinuous and bounded in R d+1 . By A(n), we denote the set of all functions α r (ω) on Ω × [0, ∞) which are F r -adapted and measurable in (ω, r) with values in A(n). Let A = n A(n) and let M be the set of all bounded stopping times (relative to {F r }). For α ∈ A and (t, x) ∈ R d+1 , we consider the Itô equation The solution of this equation is known to exist and to be unique. We denote this solution by x α,t,x s , following the abbreviated notation adopted in [5]. For any s ≥ 0, we set Let Q be a bounded domain in R d+1 = R × R d and let τ = τ α,t,x be the first exit time of (t + s, x α,t,x s ) from Q: Observe that since Q is bounded, τ α,t,x is a bounded stopping time. Define the parabolic boundary ∂ ′ Q of Q as the set of all points (t, x) on ∂Q for each of which there exists a curve (s, y s ), t − ε ≤ s ≤ t, such that ε > 0, (t, y t ) = (t, x), y s is a continuous function and (s, y s ) ∈ Q, t − ε ≤ s < t. Obviously, if (t, x) ∈ Q, then at s = τ α,t,x , the point (t + s, x α,t,x where we use common abbreviated notation, according to which we put the indices α, t, x beside the expectation sign instead of explicitly exhibiting them inside the expectation sign for every object that can carry all or part of them. For instance, . It is worth noting that τ α,t,x = 0 if (t, x) / ∈ Q. Therefore, v = g in Q c . The above assumptions and notation will apply throughout the paper. Additional assumptions will be introduced for each particular result. Observe that since Q is bounded, v ≥ −N , where N is a constant, and the case v ≡ ∞ in Q is not excluded. The following version of Bellman's principle is the first main result of this section. Theorem 1.1. Assume that g ≡ 0 and that there is an α ∈ A such that f α ≥ 0 on Q. Then: (i) the function v is Borel measurable, nonnegative and, moreover, it is lower continuous in Q, that is, whenever (t, x) ∈Q and for any α ∈ A, we are given a stopping time γ α ≤ τ α,t,x [in (1.2) the superscript α of γ α is dropped in accordance with the above stipulation]. Proof. 
For any It is known from [6] (see Theorems 1.1, 2.4 and Lemma 2.2 therein) that under the conditions of the theorem, w ≥ 0, the function w is Borel, it is also lower continuous in Q, we have onQ and the process Since the rightmost term in (1.5) equals v(t, x) by definition, all the inequalities in (1.5) are equalities. To complete the proof of (1.2), it only remains to observe that its right-hand side equals sup α E α t,x ρ γ . Remark 1.2. Since v = 0 on ∂Q (even in Q c ) and v ≥ 0 in Q, the lower continuity of v holds onQ, provided that ∂Q has no isolated points, because v is continuous along ∂Q, being identically zero there. 5 Set As a corollary of Theorem 7.4 of [7] (or of the corresponding results in [5]) and Theorem 1.1, we have the following result. We remind the reader that Lipschitz continuous functions have bounded first-order generalized derivatives. is a locally integrable function on Q ′ . Then in Q ′ in the sense of generalized functions, that is, for any nonnegative To obtain a generalization of Theorem 1.1 for g ≡ 0, we need one more assumption on Q. Assumption 1.5. There exists a function ψ ∈ C(Q) such that the first derivatives of ψ with respect to (t, x) and the second derivatives with respect to x are continuous onQ, ψ vanishes on the parabolic boundary ∂ ′ Q of Q and for some α ∈ A, Theorem 1.6. Suppose that Assumption 1.5 is satisfied. Then assertions (i) and (ii) of Theorem 1.1 hold true. In particular, if v is bounded in an open set Q ′ ⊂ Q and for any α ∈ A, the generalized function (1.6) is a locally integrable function on Q ′ , then (1.7) holds in Q ′ in the sense of generalized functions, as in Corollary 1.3. Proof. To exhibit the dependence of v and v α on g, write v = v[g] and v α [g]. Since g is bounded and lower semicontinuous, there exists a sequence of smooth functions g n ↑ g. Also, notice that by the monotone convergence theorem, (1.10) This shows how to obtain (1.2) for g from the same assertion for g n , so that in the proof of (1.2), we may assume that g is a smooth function. Furthermore, since τ = τ α,t,x = 0 if (t, x) ∈ ∂Q, we may assume that (t, x) ∈ Q. Next, let N be a positive real number to be chosen later. Owing to Itô's formula and the facts that (t + τ, Denote byf α any continuous continuation of outsideQ. This is possible because by assumption, the derivatives of g and ψ involved above are continuous inQ. Then v α − g + N ψ is simply v α constructed from f α +f α and g = 0. If (1.2) holds with v − g + N ψ, f αs +f αs in place of v, f αs , respectively, then by using Itô's formula again, we obtain (1.2) in its original form. This enables us to assume that g = 0. Due to (1.9), we can choose N sufficiently large so that 2) follows immediately from Theorem 1. 1. This theorem also shows that v is lower continuous in Q, at least for smooth g. Then (1.10) implies that v is lower semicontinuous in Q in the general case. The fact that it is lower continuous follows from (1.2), as in the proof of Theorem 2.4 of [6]. The theorem is thus proved. Next, we prove a similar result for processes in cylindrical domains. Let D be a bounded domain in R d and T ∈ (0, ∞) a fixed number. Usually, one is interested in processes (t + s, x α,t,x s ) not until τ α,t,x , but rather These two exit times coincide if we take (0, T ) × D as Q and t > 0. However, if t = 0, then the former exit time is zero, since the starting point is already outside (0, T ) × D. 
Psychologically, the value t = 0 looks important and, therefore, in order to allow the process (t + s, x α,t,x s ) to start at points (0, x) and yet have nontrivial objects to deal with, we set We impose an assumption slightly different from Assumption 1.5: Assumption 1.7. There exists a function ψ ∈ C(Q) such that the first derivatives of ψ with respect to (t, x) and the second derivatives with respect to x are continuous onQ, ψ > 0 in Q and ψ vanishes on (−1, T ) × ∂D. Condition (1.9) is also satisfied for an α ∈ A. Observe that in Assumption 1.7, we do not require ψ to vanish on the whole parabolic boundary of Q. The reason is that if the derivatives of ψ are continuous at points on {T } × ∂D and ψ = 0 on {T } × D and (0, T ) × ∂D, then the left-hand side of (1.9) is zero on {T } × D and so this inequality cannot be satisfied. then one can modify ψ in such a way that (1.9) holds in Q for the modification. To see this, it suffices to observe that It is also worth noting that Assumption 1.7 does not imply that ∂D is smooth. For instance, if D ⊂ R 2 = {(x, y) : x, y ∈ R} near the origin is described by y > 2|x| and L α ≡ ∆, then near the origin, one can take ψ = y 2 − 4x 2 . Theorem 1.9. Suppose that Assumption 1.7 is satisfied. Then assertions (i) and (ii) of Theorem 1.1 hold true. In particular, if v is bounded in an open set Q ′ ⊂ Q and for any α ∈ A, the generalized function (1.6) is a locally integrable function on Q ′ , then (1.7) holds in Q ′ in the sense of generalized functions, as in Corollary 1.3. Proof. As in the proof of Theorem 1.6, we reduce the general situation to the case where g is smooth and then, using Itô's formula, to the case g = 0. Let η ∈ C ∞ (R) be a function satisfying For any ε > 0 and α ∈ A, set and onQ, define v α ε and v ε with f α ε in place of f α . From the definition of v, it is easy to see that v ε → v uniformly onQ as ε ↓ 0. Therefore, it suffices to prove the theorem for functions f satisfying the additional assumption for some ε > 0 and any α ∈ A. Our goal is to apply Theorem 1.1. Denotē Therefore, we can choose N sufficiently large such that in Q, By using Theorem 1.1 and the argument in the proof of Theorem 1.6, we complete the proof of the present theorem. Remark 1.10. It is known (see, e.g., [5]) that the optimal stopping problem for controlled diffusion processes reduces to a problem without stopping, but with the data c α , f α becoming unbounded in the variable α. Then the above results become applicable to the optimal stopping problem for controlled diffusion processes. This shows the usefulness of allowing our data to be unbounded in α. In the following example we present a situation in which b α and f α are unbounded: Example 1.11. Consider the so-called singular stochastic control problem. Let x α t be a process in R d defined by where w t is a d-dimensional Wiener process and α = α t is a d-dimensional control process such that for any t ≥ 0, α t is F t -measurable. Moreover, we will allow any such continuous process for which that is, we allow processes of locally bounded total variation. Fix a smooth bounded domain D ⊂ R d , a lower semicontinuous bounded function g = g(t, x) on R × R d and a bounded continuous function f on R d . Assume that for t ≤ T , we need to investigate where τ is the minimum of the first exit time of x t from D and T − t. 
The fact that we restrict ourselves to continuous α t allows us to use smooth approximations of α and to do this in such a way that the exit points for the original process and its approximations are close. Obviously, one can approximate process (1.13) by processes of the form where β ∈ A = n A(n) and A(n) is defined as the set of jointly measurable F t -adapted processes with values in A(n) = {β ∈ R d : |β| ≤ n}. It is also clear that Observe that due to the boundedness and smoothness of D, there is a smooth function ψ = ψ(x) such that ∆ψ = −2 in D and ψ = 0 on ∂D. It follows that Assumption 1.7 is satisfied with α = 0. Therefore, Bellman's principle is applicable in this situation. From Theorem 1.9, one can extract more information. Indeed, we have By considering large |β| we see that in the sense of generalized functions, we have It is known that if the generalized gradient with respect to x of a function that is measurable in (t, x) is bounded by 1, then the function itself coincides [(t, x)-a.e.] with a function whose Lipschitz constant with respect to x is majorated by 1. It follows that in Q (a.e.), the function v coincides with a functionv that is Lipschitz continuous in x with the Lipschitz constant bounded by 1. We claim that v is itself Lipschitz continuous in x with the Lipschitz constant bounded by 1. To show this, take an ε > 0 and define τ α ε as the first exit time of (t, w t + α t ) from Q ε = (−ε, ε) × B ε . Also, take a random variable ξ which is uniformly distributed on [0, 1] and independent of the filtration {F t }. Then for any δ ≥ 0, δξ is a stopping time with respect to the filtration {F t ∨ σ(ξ)}. We also know that changing the probability space and filtrations does not affect the value function (see, e.g., [5,6]). Set τ α εδ = τ α ε ∧ (δξ). If (t, x) is such that (t, x) + Q ε ⊂ Q, then τ α εδ ≤ τ α,t,x for any control process α. For α t ≡ 0, by Theorem 1.9, Obviously, as δ ↓ 0, S εδ v and P εδ f tend to zero uniformly with respect to (t, x). Also, by the lower semicontinuity of v and Fatou's lemma (|v| is bounded), which, along with (1.15), shows that R εδ v → v as δ ↓ 0 on the set of (t, x) such that (t, x) + Q ε ⊂ Q. We now observe the obvious fact that the distribution of (δξ, w δξ ) has a density, so that in the definition of R εδ v, we can replace v withv, which implies that for all ε > 0 and δ > 0. This and the above prove our claim. Note that generally, since g is only assumed to be lower semicontinuous, it is easy to see that v need not be continuous inD. Also, v need not be continuous in t unless g is continuous in t. Example 1.12. In Example 1.11, the value function is most likely discontinuous because the data are unbounded. However, Assumption 1.7 does not guarantee that v is continuous in Q ∪ ∂ ′ Q, even if everything is bounded and continuous. To see that, return to Example 1.4, letting (1.8) describe the dynamics for some control α ∈ A = {1, 2} and the equation describe the process response under the other control β, where (w t , z t ) is a two-dimensional Wiener process. Also, let f α = f β = 1 and keep g = 0. Then, obviously, v is greater than the function of the same name in Example 1.4. Also, v(0, x, y) = 0 for (x, y) ∈ ∂B 2 and, thus, v(0, x, y) is discontinuous at the points (− √ 3, ±1). However, as is easily checked the function ψ(t, x, y) = 2(2 − r)(r − 1), r = x 2 + y 2 , satisfies Assumption 1.7 with α = β. Example 1.13. In Example 1.12, the function v is discontinuous only at a few points on the boundary. 
One can modify this example in such a way that Assumption 1.7 is still satisfied and the discontinuities occur inside Q. To show that, replace (1.16) with where b(r) is a smooth function on [1,2], with b(r) = −1 for r ∈ [1, 5/4] and b(r) = 1 for r ∈ [7/4, 2]. Then any smooth function ψ(t, x) such that ψ = r − 1 near ∂B 1 and ψ = 2 − r near ∂B 2 satisfies Assumption 1.7 near ∂B 1 ∪ ∂B 2 . According to Remark 1.8, one can find a new function satisfying Assumption 1.7 as it is stated. However, it is not hard to see that the new value function v coincides with the function from Example 1.4 on the set where t = 0, r ∈ [1, 5/4], x < 0 and y ∈ [−1, 0]. Since, on the other hand, v is not less than the function from Example 1.4, v(0, x, y) is discontinuous on that part of the line (x, −1), x ≤ 0, which lies in B 5/4 \ B 1 . Remark 1.14. Theorem 2.1 in Chapter V of [4] concerning Bellman's principle requires the existence of a rather smooth functionḡ inQ such that g = g on (−1, T ) × ∂D,ḡ ≥ g on {T } × D and in Q, This assumption is not satisfied in Examples 1.4 and 1.12 where g ≡ 0 because otherwise, by Itô's formula, we would have v ≤ḡ in Q ∪ ∂ ′ Q and v(0, x, y) would go to zero as (x, y) goes to ∂D. 2. Lipschitz continuity of v in x and Hölder continuity of v in t. In this section, we show that under certain additional conditions, the function v defined in Section 1 is Lipschitz continuous with respect to x and Hölder 1/2 continuous in t. Both the case that Q is a general domain and the case that Q is a cylindrical domain are treated. For some applications (see, e.g., the proof of Theorem 2.17), it is also convenient to investigate the dependence of v on parameters. Therefore, apart from our basic objects and assumptions introduced at the beginning of Section 1, we suppose that for an ε 0 ∈ [0, 1] and each ε ∈ {0, ε 0 }, we are also given having the same meaning and satisfying the same assumptions as the original σ, b, c, f . The solution of (1.1) corresponding to σ α (ε), b α (ε) will be denoted by x α,t,x s (ε) and the functions v α , v constructed from the new objects by v α (t, x, ε) and v(t, x, ε), respectively. We assume that for ε = 0, the functions in (2.1) coincide with the original ones, so that in our notation, Naturally, the operator L α constructed from σ α (ε), b α (ε) and c α (ε) is denoted by L α (ε), and by τ α,t,x (ε), we mean the first exit time of (t+s, x α,t,x s (ε)), s ≥ 0, from Q. Let λ ∈ [0, ∞), K, K 1 , T ∈ (0, ∞) be constants. The names of the following assumptions contain a parameter ε. This is done in order to provide flexibility for using the assumptions in different settings. (ii) The functions g(ε) and ψ, their first and second derivatives in x and first derivatives in t are a continuous onQ. We start by estimating the moments of the difference of solutions of (1.1) with different initial values. This notation will allow us to use our convention regarding indices with which we provide the expectation sign. By using Theorem 1.6 with γ α = τ α,t,x ∧ τ α,t,y (ε), we get |f αs (t + s, x s )e −ϕs − f αs (t + s, y s , ε)e −φs | ds, By using the inequality |e a − e b | ≤ e a∨b |a − b| and Assumption 2.3, we obtain where µ is any constant ≥ 0. Upon applying Theorem 2.4, we get To estimate I 2 , we observe that either (t + γ, x γ ) or (t + γ, y γ ) is on ∂ ′ Q. Due to Lemma 2.5, in the first case, we have A similar argument is valid in the second case. 
Thus, by Theorem 2.4, for any α ∈ A, we have After combining ( In particular, if g and ψ are Hölder-1/2 continuous in Q ∪ ∂ ′ Q with respect to t, then so is v. Proof. Observe that if both points (s, x) and (t, x) are on ∂ ′ Q, then the left-hand side of (2.9) is less than the first term on the right and so there is nothing to prove. However, if one of them is in Q, then x can be slightly moved in such a way that they both fall into Q and by Theorem 2.6, this leads to an insignificant modification of the left-hand side of (2.9). We see that it suffices to concentrate on (s, x), (t, x) ∈ Q. Next, we assume that t ≤ s and set γ α,t,x = (s − t) ∧ τ α,t,x . Note that by Bellman's principle and Itô's formula (as usual, we drop indices α, t, x from objects behind the expectation sign), Since v − g + K 1 ψ = 0 on ∂ ′ Q, so that by Theorem 2.6, Furthermore, well-known estimates of stochastic integrals, combined with the assumption that σ and b are bounded and that s − t ≤ √ s − t, imply that Next, according to Lemma 2.5, we have (v − g + K 1 ψ)(s, x) ≥ 0. It follows that That is proved similarly by considering v − g − K 1 ψ and noting that this function is negative on Q. The theorem is thus proved. Next, we consider the case where is a cylindrical domain in R d+1 under weaker assumptions on the boundary data. Let D be a bounded domain and let ψ(t, x), g 1 (ε) = g 1 (t, x, ε) and g 2 (ε) = g 2 (x, ε) be functions onQ. Assumption 2.8 (ε). (i) The functions g 1 (ε) and ψ, their first derivatives with respect to (t, x) and their second derivatives with respect to x are continuous onQ, ψ > 0 in Q and ψ vanishes on (−1, T ) × ∂D. (ii) We have (iii) For each α ∈ A on Q, we have Observe that Itô's formula immediately yields the following: Lemma 2.9. Let Assumptions 2.8(ε) and 2.2(ε) be satisfied for some The following theorem can be proven in almost the same way as Theorem 2.6. By "the assertion of Theorem 2.6" in Theorem 2.10 we mean that which follows "Then" in the statement of Theorem 2.6. Theorem 2.13 should be read similarly. Indeed, we can reproduce the proof of Theorem 2.6 except that we use Theorem 1.9 in place of Theorem 1.6 and while estimating I 2 instead of (2.6), we write where as before, the first term on the right is less than the last term in (2.6) and the second is majorated by I γ=T −t times Remark 2.11. In Theorem 2.10, we required ψ to satisfy Assumption 2.2 in Q. As in Remark 1.8, one may show that we actually need this assumption only near (−1, T ) × ∂D. Using Theorem 7.4 of [7] (or the corresponding results in [5]) and the above results immediately yield the following: Corollary 2.12. Suppose that the assumptions of Theorem 2.10 or Theorem 2.6 are satisfied with ε = 0. Then for any α ∈ A, (1.7) holds true in Q in the sense of generalized functions, that is, for any nonnegative χ ∈ Our next result concerns the Hölder continuity of v in t. Theorem 2.13. Under Assumptions 2.8(0), 2.2(0) and 2.3(0), suppose that (2.8) holds for any α ∈ A in Q. Then the assertions of Theorem 2.7 are valid with g 1 in place of g in (2.9) and v is Hölder-1/2 continuous in Q ∪ ∂ ′ Q with respect to t. The proof of this theorem follows that of Theorem 2.7 almost word for word; of course, we replace g with g 1 in that proof. In the following remark, we state an analog of one of the assertions of Theorem 2.1 in Chapter V of [4]. As everywhere in the article, we remain within the framework introduced in Section 1. Remark 2.14. 
Let Q = (−1, T ) × D and let ψ be a function onQ which is continuous along with its first derivatives with respect to (t, x) and the second derivatives with respect to x. Also, let Assumption 2.2(0) be satisfied, let ψ > 0 in Q and let ψ = 0 on (−1, T ) × ∂D. Assume that A = A(1) and g is continuous. It then turns out that v is continuous inQ \ ({−1} ×D). Indeed, the fact that A = A(1) guarantees the validity of (2.8) and Assumption 2.3(0)(i). Furthermore, having in mind approximations using mollifiers, we may assume that c and f are Lipschitz continuous in x uniformly with respect to other variables and that g is infinitely differentiable (see more about this in [5]). Then it only remains to observe that v is continuous inQ \ ({−1} ×D) by Theorems 2.10 and 2.13 [of course, in these theorems, we take g 1 (ε) = g(ε) = g and g 2 (ε) = g(T, ·)]. Note that our requirement that Assumption 2.2 be satisfied is, in fact, very similar to condition (1.17) imposed in Theorem 2.1 in Chapter V of [4]. However, we only need it for g ≡ 0, albeit with 1 in place of f α . Inequality (2.12) follows from (2.15) and elementary properties of mollifiers which also imply that in H(δ). By recalling (2.14), we see that it only remains to prove (2.11). The theorem is thus proved.
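In schematic form, the continuity results of Section 2 (Theorems 2.6, 2.7, 2.10 and 2.13) give estimates of the following type; this is only a summary sketch, and the precise hypotheses and the dependence of the constant are those stated in the theorems themselves.

% Schematic statement of the Section 2 regularity estimates; an assumed
% summary, not a quotation of the exact inequalities in the theorems.
\[
|v(t,x) - v(t,y)| \le N\,|x - y|, \qquad
|v(s,x) - v(t,x)| \le N\,|s - t|^{1/2},
\]
for points in $Q \cup \partial' Q$, with a constant $N$ determined by the data as in Theorems 2.6 and 2.7.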
2014-10-01T00:00:00.000Z
2005-12-09T00:00:00.000
{ "year": 2005, "sha1": "bd898ec893208e9a2363345be47be7d9431000bb", "oa_license": "implied-oa", "oa_url": "https://doi.org/10.1214/009117906000000395", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "bd898ec893208e9a2363345be47be7d9431000bb", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
54173745
pes2o/s2orc
v3-fos-license
Insight into High-order Harmonic Generation from Solids: The Contributions of the Bloch Wave-packets Moving on the Group and Phase Velocities We study numerically the Bloch electron wavepacket dynamics in periodic potentials to simulate laser-solid interactions. We introduce a new perspective in the coordinate space combined with the motion of the Bloch electron wavepackets moving at group and phase velocities under the laser fields. This model interprets the origins of the two contributions (intra- and interband transitions) of the high-order harmonic generation (HHG) by investigating the local and global behavior of the wavepackets. It also elucidates the underlying physical picture of the HHG intensity enhancement by means of carrier-envelope phase (CEP), chirp and inhomogeneous fields. It provides a deep insight into the emission of high-order harmonics from solids. This model is instructive for experimental measurements and provides a new avenue to distinguish mechanisms of the HHG from solids in diffrent laser fields. I. INTRODUCTION The techniques in attosecond sciences, traditionally applied to atoms and molecules in the gas phase [1,2], have been extended to the bulk solids recently [3][4][5][6][7][8][9]. A crucial difference between bulk solids and gas targets is the localization of the initial electron wave-packet, which is spatially confined in isolated atoms and molecules but can be delocalized in solids. The effect of electronic distribution on wave-packet dynamics of laser-solid interaction remains elusive. A semiclassical model [10] is proposed, which is in analogy with the three-step model for highorder harmonics generated from the atomic and molecular systems in the coordinate space [11,12] by requiring that the electron-hole pair have the same displacement, i.e., x c -x v = 0. Our theoretical work also introduces a quasiclassical [13] model to investigate the electron dynamic processes under the laser fields in the wavevector k space, based on the delocalization of the wave-packet. However, both the two models can not reveal the origins of HHG from the time-dependent evolution of the Bloch electron wave-packet between neighboring atomic sites in the coordinate space. In order to understand the process of the HHG from solids intuitively, a further picture in the coordinate space is required. Theoretically, HHG in crystal solids is divided into intra-and interband contributions in the wavevector k space [14][15][16]. However, the key role of these two contributions remains intensively debated [16,17]. A deep perspective is desired to understand intra-and interband contributions at an intuitive level. In this work, we provide a novel insight into the process of HHG in crystal solids by focusing on the two underlying nonlinear currents, which are caused by the motion of the Bloch electron wave-packets moving at group and phase velocities in the coordinate space, respectively. This model reveals that the two nonlinear currents (j group and j phase ) correspond to the global and local oscillation motion of the wave-packet in the coordinate space respectively. Pictures in k space show a good agreement with those in the coordinate space. II. THEORETICAL APPROACH During the laser fields interacting with solids, Bloch electrons in the valence band have probabilities to tunnel to conduction bands, i.e. Zener tunneling [18][19][20]. But the tunneling probabilities exponentially decay with the increase of energy gap. 
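The tunneling argument presupposes the field-free band structure ε_n(k) of the periodic lattice. One standard way to obtain it — not necessarily the authors' Bloch-state/B-spline scheme described below — is plane-wave diagonalization of the Bloch Hamiltonian for the Mathieu-type potential V(x) = −V0[1 + cos(2πx/a0)], using the values V0 = 0.37 a.u. and a0 = 8 a.u. quoted just below in the text. This is a hedged, illustrative sketch.

```python
import numpy as np

# Plane-wave diagonalization of H = p^2/2 + V(x) (atomic units) for
# V(x) = -V0 [1 + cos(2*pi*x/a0)]. Parameter values follow the text;
# the plane-wave basis is an illustrative choice, not the authors' scheme.
V0, a0 = 0.37, 8.0
G = 2 * np.pi / a0
M = 15                               # plane waves m = -M..M
m = np.arange(-M, M + 1)

def bands(k, n_bands=5):
    H = np.diag(0.5 * (k + m * G) ** 2 - V0)   # kinetic + constant part of V
    off = -0.5 * V0 * np.ones(2 * M)           # cos term couples m and m +/- 1
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_bands]

ks = np.linspace(-np.pi / a0, np.pi / a0, 101)     # first Brillouin zone
eps = np.array([bands(k) for k in ks])             # eps[:, n] = epsilon_n(k)
direct_gaps = eps[:, 1:] - eps[:, :-1]
print("minimal direct gaps between adjacent bands (a.u.):", direct_gaps.min(axis=0))
```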
Only a small portion of electrons, which are populated on top of the valence band near the wavevector k = 0 with minimal band gap, can tunnel to conduction bands with the laser parameters used in the current work. So we choose an initial wavefunction in the valence band which is superposed by the ∆k Bloch eigenstates near k 0 = 0 [21]. We can regard the initial Bloch wave-packet as a quasiparticle. Based on the assumption, the quasiparticle wave-packet can be written as [22] where u n k (x) is a function with period in the lattice constant a 0 and ǫ n (k) represents eigenvalue of the energy. The Bloch wave-packet at a given wavevector k 0 in band n can be superposed by the wavefunctions of ∆k near the k 0 in the same band. It can be represented as The Taylor expansion of eigenenergy ǫ n (k) near k 0 can be expressed as and the amplitude modulation factor u n k (x) changes slowly with k. So the Eq. (2) can be rewritten as we finally come to where ζ = x − 1 ( ∂ǫn(k) ∂k ) k0 t. The wavefunction can be divided into two parts naturally. Electronic probability at atom sites in coordinate space is defined by It implies that the electronic probability is the amplitudes of the periodic lattices (|ψ n k0 (x, t)| 2 ) modulated by the envelope (|Φ(x, t)| 2 ). The envelope involves the information of the energy bands. We describe the light-solid interaction in one dimension, along the polarization direction of the laser fields. In the length-gauge treatment, the time-dependent Hamiltonian is written aŝ whereĤ 0 =p 2 2m + V (x), and V (x) is a periodic lattice potential. In our calculations, we choose the Mathieutype potential [7,21,23]. The specific form is V (x) = −V 0 [1 + cos(2πx/a 0 )], with V 0 = 0.37 a.u. and lattice constant a 0 = 8 a.u., respectively. The energy band structure and time-dependent Schrödinger equation (TDSE) can be solved by using Bloch states in the k space and B splines in the coordinate space respectively. For details we refer readers to Refs. [24,25]. After obtaining the time-dependent wave function Ψ n k0 (x, t) at an arbitrary time, we can calculate the laser-induced currents by dividing it into two contributions according to Eq. (5). It can be written as and where the N and x s are the index of the lattice site and the coordinates of the periodic lattice, respectively. Picture of the Eq. (8) and Eq. (9) implies that the two nonlinear currents corresponding to the Bloch wave-packet moving at phase and group velocities in the laser fields respectively. The current j phase is caused by the electron polarization between each two neighboring lattice sites, which is shown in the inset of the top panel in the Fig. 1(b). Based on the physical picture, we combine the time-dependent electron population and energy band dispersion of the each band, the Eq. (9) can be reduced to where ρ n and ∂ǫn ∂k represent the population and group velocity of the band n respectively. A(t) is the vector potential of the laser fields. The HHG power spectrum is proportional to |j(ω)| 2 , the modulus square of Fourier transform of the time-dependent current in Eqs. (8) and (10). III. RESULTS AND DISSCUSSION We study the electron time-dependent wave-packet evolution process during the laser-solid interaction. Fig. 1(a) shows the full view of the electron wave-packet evolution in the fields. The wave-packet oscillations between the lattice sites are shown in the Fig. 1(b). Timedependent envelope function (the orange dash line) of the electron wave-packet depicts the nonlinear current in Eqs. 
(9) and (10), which correspond to the intraband current in k space. The electron wave-packet amplitude difference between each two neighboring atom sites in the time-dependent periodic fine structure, as shown in the inset of the Fig. 1(b), describes the charge density polarization under the laser fields. The time-dependent polarization can be obtained by integrating the current in Eq. (8). It gives rise to the HHG, which corresponds to the picture of the interband polarization in the k space. In summary, the oscillations of the envelope function and periodic fine structure between each two lattice sites give rise to the HHG emission, which are pictured in the intraand interband contribution respectively. A. Validity of the model The harmonic spectra generated by the two nonlinear currents are shown in Fig. 2. The total harmonic spectrum is depicted by the solid black line, which characterizes a rapid decay and double-plateau structure. One can find that the currents j group and j phase play key roles in the HHG process in the below gap and the plateau zones respectively. Several theoretical models have been proposed for solid-state HHG, such as interband polarization combined with dynamical Bloch oscillations [26][27][28], intraband electron dynamics [29,30] and time-dependent diabatic process [31]. However, a unified predictive theory that captures the essential feature of HHG in solids remains elusive. Theoretically, the plateau area is contributed by the interband polarization in the previous works under the laser parameters adopted here [10,21]. The Fig. 2 shows that the current j phase dominates the contribution of the HHG at the plateau area. The HHG spectrum calculated by the j phase shows a good agreement with the total HHG spectrum. The comparison of the current model and previous models reveals the physical picture of the currents j group and j phase which correspond to the intraband Bloch oscillations and interband transition dynamics, respectively. The new insight into the HHG process provides an intuitive understanding on the role of the dominant contribution in the laser fields with the wavelength ranging from mid-infrared light to Terahertz (THz) region. B. Contribution of wave-packets on group and phase velocities In order to further investigate the mechanisms of HHG, we reinterpret the intensity enhancement in the HHG process by regulating the laser parameters such as the spacial nonhomogeneity, CEP and chirp. We firstly perform an analysis of the HHG yield enhancement in solids under the nonhomogeneous ( plasmon-enhanced ) fields, as shown in Fig. 3. It has been reported theoretically [24] and experimentally [32][33][34] recently. The spatial dependence of the enhanced laser electric field can be described approximately as (similar to Taylor expansion) where γ ≪ 1 is a parameter characterizing the spatial nonhomogeneity. We show the harmonic spectra in the case of the homogeneous and nonhomogeneous fields with a nonhomogeneity parameter γ = 0.0004 in Fig. 3(a). The doubleplateau structure of the harmonic spectra is shown in both the homogeneous and nonhomogeneous fields. However, the second HHG plateau exhibits yield enhancement by two to three orders under the nonhomogeneous fields. The mechanisms of the yield enhancement had been previously interpreted with the populations and transition probabilities enhancement of the high-lying conduction bands [24]. Here, we turn to the new insight on the picture of the currents of j phase and j group . The Fig. 
3(b) shows the distinction of the contributions in the HHG spectrum under the nonhomogeneous fields. One can observe that the contribution of j phase dominates the double-plateau region. It implies that the main contribution of the HHG plateau has no changes between show that the currents (j1 phase and j2 phase ) contribute to the HHG spectra of the first and second plateau, respectively. The laser parameters are the same as those in Fig. 2. nonhomogeneous and homogeneous fields by comparing with the results in Fig. 2. A further insight is required to explain the only yield enhancement of the second HHG plateau by focusing on the current j phase . We divide the current j phase into j1 phase and j2 phase by projection to the eigenstates of the first and high-lying conduction bands, respectively, as shown in Fig. 3(c). On the top of the Fig. 3(c), it illustrates that the j1 phase has the same magnitude in the case of homogeneous and nonhomogeneous fields, which explains why the change of the yield enhancement of the first plateau is not obvious. However, in the bottom of the Fig. 3(c), one can clearly see that the current j2 phase has a dramatic increment at the center of the laser pulses, which could give rise to two to three orders of yield enhancement of the second HHG plateau. The increment of the current j2 phase suggests that the intensity of the electronic polarization between each two atomic sites is enhanced in the case of nonhomogeneous fields, which leads to the enhancement of the second plateau high-order harmonic radiation. Then, we focus on the effects of CEPs and chirps [25,35] on the HHG spectra presented in Figs. 4 and 5. The form of the laser fields is expressed as with different CEPs. The intensity change and cutoff extension of the HHG spectra are marked with Z ↓ (Z ↑ ) and ∆N1 (∆N2), respectively. Panel (b) shows the currents (j1 phase and j2 phase ), which contribute to the HHG spectra of the first and second plateaus, respectively. The intensity, wavelength, and duration of the driving laser pulses are 0.56 TW/cm 2 , 3.2 µm, and two optical cycles, respectively. The chrip parameter β is zero. where φ(t) = β( t τ ) 2 , and β is a chirp parameter. τ is fixed to 610 a.u. φ CEP represents the CEP phase. ω and f (t) are the frequency and envelope function of the laser fields respectively. The cutoff extensions of the two plateaus are obvious and marked with ∆N 1 and ∆N 2 in Fig. 4(a). Due to the bigger wavevector k in the laser pulses with φ CEP = π/2, as shown in the inset of Fig. 4(a), the cutoff extensions can be clarified easily based on the previous quasi-classical analysis of the dynamics. One can also find that the intensity of double-plateau HHG changes dramatically in the case of the laser pulses with different CEPs. The second HHG plateau has a magnitude enhancement of six to seven orders (Z ↑ ), however, the yield of the first HHG plateau decreases by one to two orders (Z ↓ ) in the case of the fields with φ CEP = π/2. In order to clarify the mechanisms of this phenomenon, we adopted the model mentioned above by distinguishing the currents j1 phase and j2 phase from the dominated contribution current j phase , as illustrated in Fig. 4(b). The amplitude of the current j1 phase shows a small decrement in the fields with φ CEP = π/2, which leads to one to two orders intensity decrement of the first plateau. 
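The CEP and chirp enter through the driving field E(t) = f(t) cos(ωt + φ(t) + φ_CEP) with φ(t) = β(t/τ)², as defined above. The sketch below builds such a pulse and its vector potential A(t) = −∫E dt′ for the quoted parameters (0.56 TW/cm², 3.2 µm, two optical cycles, τ = 610 a.u.); the sin² envelope and the intensity-to-field conversion are assumptions not specified in the text, and the values φ_CEP = π/2 and β = −1.8 are the cases discussed in the surrounding text.

```python
import numpy as np

# Hedged sketch of E(t) = f(t) cos(w t + phi(t) + phi_CEP), phi(t) = beta (t/tau)^2,
# and A(t) = -∫E dt (atomic units). Envelope shape is an assumption.
I0_TWcm2 = 0.56
E0 = np.sqrt(I0_TWcm2 * 1e12 / 3.51e16)        # peak field in a.u. (1 a.u. ~ 3.51e16 W/cm^2)
omega = 45.563 / 3200.0                        # photon energy in a.u. for lambda = 3200 nm
duration = 2 * (2 * np.pi / omega)             # two optical cycles
tau, beta, cep = 610.0, -1.8, np.pi / 2

t = np.linspace(0.0, duration, 4000)
envelope = np.sin(np.pi * t / duration) ** 2           # assumed envelope f(t)
E = E0 * envelope * np.cos(omega * t + beta * (t / tau) ** 2 + cep)
A = -np.cumsum(E) * (t[1] - t[0])                      # A(t) = -∫_0^t E dt'
print("peak |E| =", abs(E).max(), "a.u.;  peak |A| =", abs(A).max(), "a.u.")
```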
However, the j2 phase is enhanced obviously at the center of the laser pulses in the case of φ CEP = π/2, which gives rise to six to seven orders intensity enhancement of the sec- ond plateau. We also investigate the laser chirp effect on the highorder harmonic emission, as shown in Fig. 5. Fig. 5(a) shows an intensity enhancement phenomenon of the sec-ond HHG plateau, which can also be attributed to the enhancement of the nonlinear current j2 phase with a chirp parameter β = -1.8 in Fig. 5(b). One could conclude that the effects of CEP and chirp regulate the two mode intensities of the electron polarizations between neighboring lattice sites, which leads to the yield decrease or enhancement of the HHG plateau. Finally, we investigate the mechanisms of the HHG process in the THz fields [17,27,36], as presented in Fig. 6. One can find that the dominant contribution of the HHG spectrum originates from the current j group [17], which implies that the Bloch wave-packet oscillates back and forth in the coordinate space with a group velocity under the THz fields. The instantaneous oscillation between two lattice sites can be neglected in the THz fields. As a result, the current j phase caused by the electronic polarization between two neighboring atomic sites can be ignored. Consequently, the mechanisms of the HHG in the THz fields differentiate from those in the mid-infrared laser fields. It is in agreement with recent experimental measurements in Ref. [36]. The picture can be comprehended in the k space, as shown in the inset of Fig. 6. A THz driver field induces photoionization (vertical orange arrow), transferring electrons to conduction bands, creating holes in valence band and driving the electron and hole wave-packet dynamics in the conduction and valence bands, and then oscillating separately back and forth (shown by blue and black arrows), which gives rise to the emission of the high-order harmonics. It reveals that the dominant mode of the wave-packet oscillation decides the mechanisms of the HHG in the laser fields which range from mid-infrared to THz fields. IV. SUMMARY In summary, this work reveals a new model on the HHG from solids by focusing on the dynamics of the Bloch wavepacket, which moves at group and phase velocities in coordinate space. The physical picture of this model shows a good correspondence to the model in momentum space with intra-and interband dynamic processes. It is a universal way to deal with the chirp, CEP and nonhomogeneous laser fields. It is valid ranging from mid-infrared to THz fields. It provides an instructive scheme for experimental measurements to determine the mechanisms of the HHG by distinguishing the dynamic modes of the wave-packets.
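Throughout the paper the HHG spectrum is obtained as |j(ω)|², the modulus squared of the Fourier transform of the time-dependent current. A small post-processing sketch is given below; the Hann window is an illustrative choice to suppress spectral leakage and is not prescribed by the text.

```python
import numpy as np

def hhg_spectrum(j, dt, omega0):
    """Return harmonic orders and |j(omega)|^2 from a sampled current j(t).

    The Hann window is an illustrative choice; the text only states that the
    spectrum is proportional to |j(omega)|^2.
    """
    j = np.asarray(j) * np.hanning(len(j))
    jw = np.fft.rfft(j)
    omega = 2 * np.pi * np.fft.rfftfreq(len(j), d=dt)
    return omega / omega0, np.abs(jw) ** 2

# usage with a synthetic current containing the fundamental and 3rd harmonic
dt, omega0 = 0.5, 0.014
t = np.arange(0, 20000) * dt
j_test = np.cos(omega0 * t) + 1e-3 * np.cos(3 * omega0 * t)
orders, power = hhg_spectrum(j_test, dt, omega0)
print("strongest peak near harmonic order:", orders[np.argmax(power)])
```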
2017-07-18T15:36:47.000Z
2017-07-18T00:00:00.000
{ "year": 2017, "sha1": "7ecd77d28ea65c8affd532a1f9ee55aaa2628fbd", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1707.05689", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7ecd77d28ea65c8affd532a1f9ee55aaa2628fbd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
89614016
pes2o/s2orc
v3-fos-license
Propagating wave in the flock of self-propelled particles We investigate the linearized hydrodynamic equations of interacting self-propelled particles in two-dimensional space. It is found that the small perturbations of the density and polarization fields satisfy hyperbolic partial differential equations that admit analytical propagating wave solutions. These solutions shed light on the much-debated traveling band formation in the flocking state of self-propelled particles. Below the critical noise strength, an unstable disordered state (random motion) undergoes a transient vortex and evolves to an ordered state (flocking motion) as unidirectional traveling waves. Two longitudinal wave patterns are possible depending on the noise strength: a single band in the stable state and multiple bands in the unstable state. A comparison of theoretical and experimental studies is presented. The onset of collective motion can be found in various systems of self-propelled objects, ranging from macromolecules to microorganisms, animals, humans, and swarming robots (see Ref. [1] and references therein). The physics of this phenomenon is an active research topic. The minimal paradigm that successfully describes these dynamics is the Vicsek model [2]. In this model, point-like self-propelled particles (SPP) move at constant speed in the direction of their orientation unit vector [2]. The heading direction of each particle is aligned by a noisy mutual polar interaction. For sufficiently small noise strength, below a critical value, the particles transit from random motion (disordered state) to flocking (ordered state), where they form coherent clusters whose individual members tend to move together in the same direction. The Vicsek model can be viewed as a flying XY spin model, for which phenomenological hydrodynamic equations at the continuum level were proposed by Toner and Tu [3,4]. More recently, hydrodynamic equations of SPP have been derived from specified individual-based dynamics using several coarse-graining frameworks, such as the Smoluchowski equation [5][6][7] and the Boltzmann equation [7][8][9][10]. Based on hydrodynamic theory, sound-wave-like propagation of long-wavelength fluctuations of the density and velocity fields in SPP was predicted by Tu and Toner [11]. Later, moving bands of the ordered state against a disordered background were found in large-scale simulations of SPP [9,[12][13][14]. Two distinct robust patterns of traveling waves in SPP have been explored: the solitary moving band [9,15,16] and moving multi-stripes [17].
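A minimal discrete-time sketch of the Vicsek-type update just described (constant speed, alignment with neighbours within a fixed radius, plus angular noise) is given below. Parameter values are arbitrary, and this is the generic Vicsek rule rather than the specific continuous-time variant analysed later in the text.

```python
import numpy as np

# Minimal Vicsek-style update (illustrative; parameters arbitrary).
rng = np.random.default_rng(1)
N, L, v0, R, eta = 500, 20.0, 0.5, 1.0, 0.3   # particles, box size, speed, radius, noise
pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

def step(pos, theta, dt=1.0):
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                   # periodic boundary conditions
    neigh = (d ** 2).sum(-1) < R ** 2          # neighbours within radius R (incl. self)
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v0 * dt * np.column_stack((np.cos(theta), np.sin(theta)))) % L
    return pos, theta

for _ in range(200):
    pos, theta = step(pos, theta)
polarization = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print("global polarization after 200 steps:", round(polarization, 3))
```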
Apart from wave patterns, fluctuating flocking states [17] and stationary radially symmetric asters [15] in SPP have been also presented. The analytical works have been carried out in order to gain deep insight into wave propagating dynamics in SPP. The standard method is the linear stability analysis of the hydrodynamic equations [9,15,17]. Several authors agree that the emergence of traveling waves in SPP is from instability of the homogeneous states [9,15,17]. However, the linear stability analysis provides only the dispersion relation [9] that is inadequate to characterize the spatiotemporal wave patterns. In more rigorous study, the propagative Ansatz, in which the wave profile travels along a direction with constant speed, has been postulated to be a solution for hydrodynamic equations of SPP in one-dimensional space (1D) [9,[17][18][19]. This approach reduces the hydrodynamic equations from nonlinear partial differential equations (PDEs) to nonlinear ordinary differential equations (ODEs). The nonlinear ODEs can be recast further into equivalent Newton's equation of motion for single particle moving in potential field in the presence of a friction by using a dynamical framework [18,19]. This approach seems likely to classify the three different types of propagating patterns in SPP successfully, including solitary wave, multi-stripes wave and polar-liquid droplet. Nonetheless, the exact wave profiles are unable to solve explicitly by using this framework due to its nonlinearity. As classified in the textbook of Whitham [20], there are two classes of wave solutions for the linear or nonlinear PDEs which consist of hyperbolic wave and dispersive wave solutions. The difference is that the hyperbolic wave propagates in two opposite directions along arbitrary axis, in which the speeds are unnecessary to be equal [21][22][23]. Obviously, the previous analyses rely on the dispersive wave solution that propagates in a direction [9,15,[17][18][19]. In contrast to, it is found in our research here that the linearized hydrodynamic equations of SPP can formulate the hyperbolic-type PDEs. Therefore, the previous propagating wave assumption, which belongs to the dispersive wave solution [9,15,[17][18][19], is still incomplete wave feature of SPP. In this work, we investigate the linearized hydrodynamic equations of SPP which can be combined into the linear wave PDEs [21][22][23]. Instead of performing mode analysis as in the conventional works [9,15,17], we solve for the exact space and time dependent solutions of these equations by using the Riemann method [21][22][23]. These linear analytical solutions are plausible to capture the dynamics of SPP in vicinity of early and final state of system. Especially, they can be used to classify the wave pattern formation in the flocking state of SPP clearly. In this paper, we consider a particular variant Vicsek model that has been studied by Farrell et al. [24]. Adapted from Ref. [24], the hydrodynamic equations, that describe evolution of particle number density field ρ(r, t) and polarization field W (r, t) in two-dimensional space (2D), are given by where v 0 is particle moving speed, ǫ describes noise strength and γ describes the strength of alignment. The polarization field is associated with the particle ve- . These equations are coarse-gained dynamics of a N point-like SPP system that particles move at constant speed v 0 in the direction of their orientation unit vector and interact to each other with noisy alignment rule [2]. 
The position r i (t) and the orientation angle θ i (t) of ith particle at time t evolve with the following equations of motion: where the unit vector p i (θ i ) = cos θ ix + sin θ iŷ and η i (t) is a white noise with zero mean and unit variance. The local pairwise alignment interaction is given by F (θ, r) = γ sin(θ)/(πl 2 ), if |r| ≤ l (otherwise F = 0), where l is interaction range [24]. The advantage of this variant model is that it can map the microscopic physical parameters into the hydrodynamic equations through a coarse-grained process explicitly. Although differences in physical parameters, Eq. (1) and Eq. (2) have identical form of the phenomenological model proposed by Toner and Tu [3,4] and the coarse-grained equations obtained by using the Boltzmann theory [9]. The homogeneous states of Eq. (1) and Eq. (2) admit arbitrary constant density ρ 0 with two possible values of polarization W 0 , given by where ε 0 = γρ 0 /2 which is defined as the critical noise strength value. Above the critical point (ε > ε 0 ), the system is in disordered state with zero polarization where the SPP move in random direction. Whereas below the critical point (ε < ε 0 ), the system undergoes ordered state where the SPP tend to move together in the same direction with nonzero polarization, called the flocking. Now we study the dynamics of SPP in vicinity of the homogeneous states. We suppose that the homogeneous polarization aligns in x-direction. Thus we define the solutions as follows: ρ(r, t) = ρ 0 + n(r, t) and W (r, t) = W 0x + u(r, t), where n(r, t) and u(r, t) respectively are small perturbations in density and polarization fields, called the perturbations for short. Substituting these solutions into Eq. (1) and Eq. (2) by retaining the first-order terms, then we obtain the linearized hydrodynamic equations of SPP, The vector field h tends to drive the polarization field to the mean direction and has the effect only in the flocking state (W 0 = 0). Operating Eq. (4) with ∂ t and using Eq. (5) (similarly, operating Eq. (5) with ∂ t and using Eq. (4)), we evaluate that where c = v0 16ε . Noting that the third-order derivative terms in Eq. (6) and Eq. (7) can be ignored since we are interested in evolution of large flocking cluster or long wavelength (λ) mode, that λ ≫ πv0 2 √ |εα0| . Obviously, Eq. (6) and Eq. (7) belong to the wave equations or hyperbolic PDEs [21][22][23]. As shown in Eq. (6) and Eq. (7), the perturbations around the flocking or the ordered state (W 0 = 0) trend to be biased to the mean direction by vector field h. It is observed, at least in simulations, that the moving bands are unidirectional waves [9, 12-14, 18, 19]. Such a symmetry-broken field, we rewrite u(r, t) = w(r, t)x + v(r, t)ŷ, where w and v are x-and y-component of the small perturbed polarization field, respectively. Now we consider the longitudinal mode, where the wave profiles propagate in the same direction of mean polarization (n y = w y = v y = 0). From Eq. (6) and Eq. (7), the wave equations in this case are provided by Noting that v is decoupled and trends to decay to eventually small value by a bias-diffusion process. Next, we consider the transverse mode where the wave profiles propagate in perpendicular direction of mean polarization (n x = w x = v x = 0). From Eq. (6) and Eq. (7), the wave equations for the ordered state in this case read Noting that w is assumed to relax to zero in the ordered state. By neglecting the third-order derivative term, Eq. (11) and Eq. 
(12) are the plane wave equations that have the well-known d'Alembert solution-where the initial condition splits into two waves that propagate in opposite directions along the y-axis with the speed of sound c [21][22][23]. Since the perturbations do not change shape from the initial conditions for this sort of wave, we ignore this mode in this study. We are now looking for the analytical space and time dependent solution of longitudinal waves. By dropping the third-order derivative term, we rewrite Eq. (9) or Eq. (10) where φ can refer to either n or w, since all equations are in identical form. The initial conditions for Eq. (13) are given by φ( . Eq. (13) is a secondorder PDE whose the characteristic equation is given by [21][22][23]. From the characteristic equation, obviously, Eq. (13) is a hyperbolic-type PDE and it can be reduced to a canonical form by introducing the curvilinear coordinates: where c ± = √ ν 2 + c 2 ± ν. So that, ν is exactly the collective speed of SPP induced by the alignment interaction. Applying the transformations in Eq. (14), we rewrite Eq. (13) in ηξ-plane where . Now the solutions of Eq. (15) depends on the two wave variables, φ(x, t) = φ(η, ξ). According to ν > 0 in the ordered state, the wave speeds c ± are always positive so that η and ξ, respectively, are left-and right-propagating wave variables. In the presence of collective motion, the wave speeds in the flocking state are larger than the speed of sound in disordered phase. This supports the supersonic wave structure as pointed out by Ihle [16]. Finding the solution of Eq. (15) subjected to the initial data is called a Cauchy problem, which can be solved by using the Riemann method [21][22][23]. This approach can solve the general form of linear hyperbolic PDE in 1D, but case study in the presence of ν term is rare in many textbooks [21][22][23]. Therefore, we provide the procedure for solving Eq. (15) in the Supplemental Material [26]. From [26], the analytical wave solution of Eq. (15) in space and time variables is provided by the propagators F and G are given by where s(x, t) = √ c 2 t 2 + 2νxt − x 2 and Γ = k + − k − . Eq. (16) is valid only in the interval [x − c − t, x + c + t], called domain of dependence [22,23], that supports the finite bands formation and discontinuous front as found in simulations [9,13,16]. From Eq. (16), the analytical solution indicates that initial profiles of the small perturbed density and polarization fields lose their configuration and propagate in both positive and negative direction of x-axis with unequal speed. Due to c + > c − , the propagation in the positive direction is faster than in the negative one. Below the critical point, ε < ε 0 , we found that k + > 0 when ε < 7 11 ε 0 and k + < 0 when ε > 7 11 ε 0 while k − is always negative [26]. As t ≫ 0, according to Λ > 0, the leftpropagating initial profile trends to decay to zero whereas the right-propagating wave grows for ε > 7 11 ε 0 (unstable regime) and decays for ε < 7 11 ε 0 (stable regime) [26]. Therefore, the propagating waves in SPP trend move in the direction of mean polarization vector as found in simulations [9, 12-14, 18, 19]. To this point, there exists another transition noise strength at 7 11 ε 0 that separates the spatiotemporal pattern formation of the propagating wave in the flocking state of SPP into two regimes as mentioned by Chaté et al. [13]. Let us consider the unstable regime where 4k − k + = k 2 > 0. 
The asymptotic Bessel function is given by J m (ks) = 2 πks cos(ks − π 4 − mπ 2 ) for s ≫ 1. With this character, it shows that the perturbations of the unstable ordered state propagate as waves with spatial oscillatory pattern or multiple-bands in 2D, that has been observed in simulations [13,[17][18][19]. The wave profiles grow fastest in the position of the leading front and grow slower for the tandem position [26], at least in the early stage. Therefore, k is equivalent to wavenumber which relates to the wavelength λ w as follow: where ε ′ = ε ε0 . The wavelength in Eq. (19) can be used to approximate the stripes width and it shows that the particle moving speed v 0 has a role on regulation of the bands width. In the opposite situation, for the stable regime that 4k − k + = −k 2 < 0, the propagators change to G(x, t) = 1 Λ I 0 (ks(x, t)). From the asymptotic form of the modified Bessel functions, given by I m (ks) ∼ 1 √ 2πks e ks for s ≫ 1, it shows the non-oscillatory wave patterns or a single band in 2D. In the stable ordered state, the perturbations decay to eventually smaller values thus our linear approximation should be valid in long time scale. In long time scale t → ∞, we approximate s ≃ ct + ν c x − 1 2 Λ 2 c 3 x 2 t . Thus, the perturbations converge to the homogeneous ordered state as biased Gaussian waves for below noise threshold. However, the homogeneous ordered state does not observe in the simulation and this state is replaced by the fluctuating flocking state [13,17]. In conclusion, based on the linearized hydrodynamic theory of self-propelled particles, the small perturbed density and polarization fields are governed by the hyperbolic partial differential equations. Our analytical hyperbolic wave solutions reveal some different aspect of spatiotemporal pattern formations in self-propelled particles, as opposed to the previous analytical studies, that rely on the dispersive wave solution. Below critical noise strength, the homogeneous disordered state is unstable, growing to the ordered state, and generates the vortex flow of perturbation polarization field. The perturbations in the homogeneous ordered state evolve as two possible unidirectional longitudinal propagating waves, separated by a threshold noise strength. This includes single band in the stable state below the threshold value and multiple-bands in the unstable state above the threshold value. We believe that these special case solutions could provide the basic knowledge for study the dynamics of generic self-propelled particles in the future work. RIEMANN METHOD For convenience in further calculation, we rewrite equation (15) in the main text where L is linear operator. From equation (14) in the main text, we have that t = η−ξ c − +c + and x = c + η+c − ξ c − +c + . When t = 0, we have η = ξ = x, which is the straight line in the ηξ-plane. Therefore, the initial conditions in ηξ-plane (Cauchy data) are transformed to where we define operator M[ * ] = ∂ η * −∂ ξ * .
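For illustration, the splitting of an initial perturbation into a fast right-moving and a slow left-moving pulse can be reproduced with the skewed wave operator ∂²_t + 2ν ∂_t∂_x − c² ∂²_x, whose characteristic speeds are exactly the c± = √(ν² + c²) ± ν quoted above. This operator is an assumed leading-order form chosen to match those speeds; the growth/decay terms (k±, Λ) of the full solution are omitted, so only the propagation geometry is shown.

```python
import numpy as np

# d'Alembert-type solution of phi_tt + 2*nu*phi_tx - c^2*phi_xx = 0 with
# phi(x,0) = phi0(x) and zero initial time-derivative. The profile splits into
# a fast right-moving pulse (speed c_plus) and a slow left-moving pulse (c_minus).
nu, c = 0.8, 1.0                       # illustrative collective speed and sound speed
c_plus = np.sqrt(nu**2 + c**2) + nu
c_minus = np.sqrt(nu**2 + c**2) - nu

phi0 = lambda x: np.exp(-x**2)         # initial perturbation
x = np.linspace(-10, 30, 2001)

def phi(x, t):
    return (c_minus * phi0(x - c_plus * t) + c_plus * phi0(x + c_minus * t)) / (c_plus + c_minus)

for t in (0.0, 5.0, 10.0):
    prof = phi(x, t)
    right = x[x > nu * t][np.argmax(prof[x > nu * t])]
    left = x[x <= nu * t][np.argmax(prof[x <= nu * t])]
    print(f"t={t:4.1f}: pulse centres ~ {left:6.2f} (slow, left) and {right:6.2f} (fast, right)")
```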
2018-12-29T01:33:06.000Z
2018-05-13T00:00:00.000
{ "year": 2018, "sha1": "dba39104dcd877f6e040f3fd5e56ac4a04eec875", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1805.04971", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "dba39104dcd877f6e040f3fd5e56ac4a04eec875", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
202554362
pes2o/s2orc
v3-fos-license
Effects of dietary crude protein levels on ammonia emission, litter and manure composition, N losses, and water intake in broiler breeders This study determined the effects of different dietary crude protein (CP) levels on ammonia emission (NH3), litter and manure composition, nitrogen (N) losses, and water intake in broiler breeders. A total of 480 females and 64 males (Ross 308) 20 wk of age were randomly allotted to 2 dietary treatments with 8 replicates of 30 females and 4 males per replicate. Birds were fed either high CP (CPh) or low CP diets (CPl) supplemented with free amino acids (AA). Both diets consisted of 3 sub-diets; 1 for each phase of the laying period. Diets were formulated to be iso-caloric and calculated CP content of the CPl diets was 15 g/kg lower than the CPh diets (Breeder 1 (23 to 34 wk): 135 vs. 150, Breeder 2 (35 to 46 wk): 125 vs. 140 and Breeder 3 (47 to 60 wk of age): 115 vs. 130 g/kg, respectively). Pens consisted of an elevated slatted floor (25% of the floor surface) and a litter floor. Water and feed intake were recorded daily. Litter (floor) and manure (below slatted floor) composition and ammonia concentration were measured at 34, 44, and 54 wk of age. Ammonia concentration was measured using a flux chamber on top of the litter or manure. Estimated N losses were calculated. Dietary protein level did not affect water intake and dry matter (DM) content of the litter or manure. Compared to birds fed the CPh diets, the litter and manure samples of broiler breeders fed the CPl had 8% lower total-N and 13% lower ammonia-N content resulting in a 9% lower ammonia concentration, 9% lower ammonia emission, and 11% lower total-N losses. In conclusion, this study shows that reducing CP level in the diet of broiler breeders reduces ammonia emission and total N-losses from litter and manure. INTRODUCTION Ammonia (NH 3 ) is a volatile nitrogen compound present in poultry houses from the microbial degradation of uric acid and undigested proteins from their diet in the droppings of the birds (Groot Koerkamp, 1994;Ferguson et al., 1998a). Inside poultry houses, concentrations of gaseous ammonia (above approx. 10 ppm) negatively affect the health, welfare, and productivity of the birds (Targowski et al., 1984;Kristensen and Wathes, 2000;Miles et al., 2004), as well as the health of workers (Donham and Cumro, 1999;Omland, 2002). Once emitted from the poultry houses, ammonia contributes to acidification, eutrophication, and reduced biodiversity in natural ecosystems (Lekkerkerk et al., 1995). Furthermore, ammonia reacts with other gases to form secondary particulate matter which contributes to air pollution as particles within the PM 2.5 fraction (Brunekreef et al., 2015). Therefore, enhancements are needed to reduce ammonia concentration within poul-try houses and to lessen the nitrogen pollution of the poultry industry. Commercial breeder layer diets often contain high levels of protein (150 to 160 g/kg) (de Beer, 2009) for growth rate and muscle deposition, hatching egg production, and protein turnover. This however, leads to a high N excretion (Lopez and Leeson, 1995a,b). 
In the last decades, several experiments have been carried out investigating effects of decreased dietary crude protein (CP) and/or amino acid (AA) levels on reproduction performance of breeders (e.g., Lopez and Leeson, 1995b;van Emous et al., 2013van Emous et al., , 2015van Emous et al., , 2018 without a negative effect on the reproductive performance of broiler breeder hens fed reduced CP diets (van Emous et al., 2018). Until now, however, no research has been carried out on the effect of low dietary CP on ammonia emission in breeders and only few papers (Lopez and Leeson, 1995a,b) have been published on the effect on N excretion of breeders. The latter authors carried out 3 experiments with 4 treatments consisting of diets with 20 to 14% CP (exp. 1), 15 to 9% CP (exp. 2) and 16 to 10% CP (exp. 3). N content of the excreta was reduced by 4 to 8% per 10 g/kg reduction of dietary CP level. More research on this topic has been done in the progeny of breeders (i.e., broilers) (Moran et al., 1992;Elwinger and Svensson, 1996;Ferguson et al., 1998a,b;Gates et al., 2000;Khajali and Moghaddam, 2006;Kamran et al., 2010;Ospina-Rojas et al., 2012;Hernandez et al., 2013;van Harn et al., 2017). These studies, in general, found a reduction of N excretion of 3 to 10% per 10 g/kg reduction of dietary CP level. Several studies with broilers showed a 10% reduction of ammonia emission per 10 g/kg reduction in dietary CP level (Elwinger and Svenson, 1996;Ferguson et al., 1998a,b;Ospina-Rojas et al., 2012;Hernandez et al., 2013). Furthermore, in broiler experiments, a decreased water intake due to a reduced CP intake has been found (Elwinger and Svensson, 1996;Hernandez et al., 2013;van Harn et al., 2017). This is caused by the decreased need to excrete the protein surplus from the body (in the form of uric acid). Furthermore, a lower water intake increases the DM content of the litter which in turn reduces the incidence of skin dermatitis like footpad lesions, hock burns, and breast blisters (van Harn et al., 2017). A reduced CP content in broiler breeder layer diets can be a feasible solution to reduce N losses in broiler breeder houses. Due to current concerns about the effect of ammonia emission on the environment the development of strategies to reduce ammonia pollution from broiler breeders is necessary. Therefore, the current experiment was conducted to determine the effects of dietary CP levels in broiler breeder during the laying period on ammonia emission, litter and manure composition, N losses, and water intake in broiler breeders. Effects on reproduction performance from the present experiment have been reported in another paper (van Emous et al., 2018). Experimental Design A completely randomized experimental design consisting of 2 dietary CP levels with 8 replicates of 30 females and 4 males per replicate was applied. The breeders received 1 of 2 diets with either a high CP level (CPh) or a low CP level (CPl) ( Table 1) and both diets consisted of 3 sub diets; 1 for each phase of the laying period. The CPh diets were formulated to be iso-caloric [2,800 kcal/kg nitrogen corrected apparent metabolize energy (AME n )] based on the recommendations of the breeder company (Aviagen-EPI, 2015 (Lys, Met, Thr, Trp, Arg, Ile, and Val) to meet the minimum essential AA levels recommended by the breeder company (Aviagen-EPI, 2015). Birds of both diets were fed the same daily amount of feed to provide a different daily nutrient (particularly CP and AA) intake (more details: van Emous et al., 2018). 
Males received a standard male diet (2,600 AME n kcal/kg; 13.0% CP; 0.45% dig. Lys; 0.5% dig. M+C; 1.0% Ca; 0.3% aP). During the first 3 wk birds received a standard pre-breeder diet according to the recommendation of the breeder company (Aviagen-EPI, 2015). Birds, Housing, and Management A flock of 480 female and 64 male Ross 308 broiler breeders at 20 wk of age (PoultryPlus, Ambt Delden, The Netherlands) were allotted to 16 floor pens with 34 birds per pen (30 females and 4 males). Pens were located in 4 separated experimental rooms (4 pens per room; both treatments were applied in each room). Each room was mechanically ventilated to remove the heat, moisture, carbon dioxide, and ammonia from the birds and ensure proper climate. Pens measured 1.8 m × 4.0 m (7.2 m 2 ; 4.7 birds/m 2 ). They consisted of a litter floor (75% of the floor surface; 2 kg/m 2 of wood shavings as bedding material) and an elevated floor with plastic slats (1.8 × 1.0 m = 1.8 m 2 ; 25% of the floor surface). Each pen was equipped with 4 nests boxes (94 × 33 cm) and were available to the breeders from 23 wk age onwards. Water was provided ad libitum via 7 drink cups above the slats. Females were fed in 2 feeding troughs with a male exclusion system (3 m each) at the walls of the pens. Males were fed in 1 feeding pan positioned at a height of 50 cm to prevent female access. Birds were maintained on the same target BW and feed allocation was adjusted to the predetermined body growth curve during rearing and a combination of the predetermined body growth curve and egg production (Aviagen-EPI, Roermond, The Netherlands). Breeders were photostimulated with 11 h of light (40 lx) at 21 or 23 wk of age. After photostimulation, day length was gradually increased by 1 h (later 0.5 h) per week to a photoperiod of 14L:10D. This was maintained until the end of the experiment at 60 wk of age, with lights on from 0330 to 1730 h (40 lx). Temperature was maintained at 20°C during the entire period by means of the mechanical ventilation. The protocol for the experiment conformed to the standards for animal experiments and was approved by the Ethical Committee of Wageningen UR, the Netherlands (protocol number: 2016012). Observations Water and Feed Intake Water intake was measured daily by visual inspection of the scaling on the water bins. Water−feed ratio was calculated by dividing the daily water intake by the daily feed intake per pen. Daily feed rations were weight weekly, and birds fed the CPl diet received the same daily amount of feed than those fed the CPH. Litter and Manure Composition At 34, 44, and 54 wk of age 4 litter sub samples and 2 manure sub samples (below slatted floor) per pen of approximately 200 g were taken from the full depth of the litter and manure layer (away from feeders and drink cups). Sub for 4 h at 550°C (±10°C) (NEN 7432, 1996). Total-N and ammonia-N (NH 4 +) was analyzed by distillation (NEN 7433, 7434, and 7438, 1998). At the same days, the pH was measured at room temperature (approximately 20°C) with an Orion 9104SC. Ammonia Concentration and Emission Ammonia concentration was measured at 34, 44, and 54 wk of age using a flux chamber placed on the litter or on the manure under the slatted floor. The flux chamber measured 60 × 40 × 15 cm (length × width × height; surface area: 0.24 m 2 ; volume: 0.036 m 3 ). On both sides of the flux chamber a tapered air duct guided the air over the emitting surface. 
Air entered and left the flux chamber through a round flexible tube with a diameter of 0.35 m (P-super, Panflex, Ede, the Netherlands). A ventilator (Fancom FMS 35; 3,000 m 3 /h max. ; Fancom, Panningen, the Netherlands) with ventilation control unit (Fancom FCTA; Fancom, Panningen, the Netherlands) was used to pull the air through the flux chamber and over the emitting surface. Ventilation was set at a constant value of 30% of the maximum capacity (i.e., 900 m 3 /h) resulting in an average air velocity across the emitting surface of 0.57 m/s. Incoming air was taken from outside the experimental facility to guarantee clean air. Ventilation rate was measured using a fan wheel anemometer placed upstream of the ventilator in the outgoing air flow. Both incoming and outgoing air were sampled using a PE sampling line. Concentration of ammonia in the incoming and outgoing air was measured using 2 photo-acoustic multi gas monitors (Innova 1312; LumaSense Technologies, Santa Clara, USA). Concentrations were measured during 30 min. During the last 15 min, concentrations of ammonia of both incoming and outgoing air were also measured by sucking an air sample (1,000 mL/min restricted flow) through 2 glass impingers placed in serial, each containing 100 mL of 0.5 M sulfuric acid solution. Halfway the 30 min measuring period, concentrations of ammonia were checked using gas detection tubes (Kitagawa; type No. 105SD 0.2 to 20 ppm; Kitagawa, Komyo Rikagaku Kogyo, Japan). The ammonia flux (Q; mg/h per m 2 ) from the emitting surface was calculated by equation (1): where C in (mg/m 3 ) is the concentration of ammonia entering the flux chamber; C out (mg/m 3 ) is the concentration of ammonia leaving the flux chamber; φ (m 3 /h) is the ventilation rate; and A (m 2 ) is the emitting surface. Estimate of Nitrogen (N) Losses Environmental total N losses per hen housed were calculated by equation (2): Environmental gaseous N losses per hen housed were calculated by equation (3): where N litter start is the N present in the litter at the start of the experiment (assumed to be 6.8 g/kg; N' Dayegamiye and Isfan, 1991), N intake female is the N consumed by the females (calculated from the feed intake and the feed N content); N intake male is the N consumed by the males (calculated from the feed intake and the feed N content); N eggs is the N deposited in the eggs, calculated from the total mass of eggs produced and the egg N content (the latter was assumed to be 19.3 g/kg; Jongbloed and Kemme, 2005); N female deposited = deposition of body N of the females from start until end of the experiment, including deposition in dead birds (assumed to be 33.4 and 31.5 g/kg live weight at 20 and 60 wk of age, respectively, independent of dietary CP level; Nonis and Gous, 2016); N male deposited = deposition of body N of the females from start until end of the experiment, including deposition in dead birds (assumed to be 34.5 and 35.4 g/kg live weight at 20 and 60 wk of age, respectively, independent of dietary CP level; Jongbloed and Kemme, 2005); N litter end = N present in the litter at the end of the experiment, calculated from the amount of litter present (calculated via the ash balance, see below) and the N content of the litter as measured at 54 wk of age; N manure end = N present in the manure at the end of the experiment, calculated from the amount of manure produced (calculated via the ash balance, see below) and the N content of the manure as measured at 54 wk of age. 
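The explicit right-hand sides of equations (1) to (3) did not survive in this extraction, but the variable definitions given above suggest a straightforward reconstruction: the flux as the concentration difference times ventilation rate per unit of emitting area, and the total N loss as an input-minus-retention-minus-residual budget. The sketch below uses the quoted chamber settings (A = 0.24 m², φ = 900 m³/h) and otherwise made-up numbers; it is a hedged reconstruction, not the study's verbatim equations.

```python
# Hedged reconstruction of the flux and N-balance bookkeeping from the variable
# definitions in the text (the explicit equations were lost in extraction).
def ammonia_flux(c_in_mg_m3, c_out_mg_m3, phi_m3_h=900.0, area_m2=0.24):
    """Eq. (1) as reconstructed: flux Q [mg/h per m^2] from the flux chamber."""
    return (c_out_mg_m3 - c_in_mg_m3) * phi_m3_h / area_m2

def total_n_loss(n_litter_start, n_intake_female, n_intake_male, n_eggs,
                 n_female_dep, n_male_dep, n_litter_end, n_manure_end):
    """Eq. (2) as reconstructed: total environmental N loss per hen housed [g N] =
    inputs (start litter + feed N) minus retained N (eggs, body deposition)
    minus N remaining in litter and manure at the end."""
    n_in = n_litter_start + n_intake_female + n_intake_male
    n_retained = n_eggs + n_female_dep + n_male_dep
    return n_in - n_retained - (n_litter_end + n_manure_end)

# usage with made-up numbers purely to show the bookkeeping (not study data)
print(ammonia_flux(c_in_mg_m3=0.05, c_out_mg_m3=0.08), "mg/h per m2")
print(total_n_loss(10, 1600, 90, 500, 70, 10, 300, 150), "g N per hen housed")
```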
The quantity of litter and manure was estimated by calculating the ash balance. The total amount of ash in the litter and manure was calculated by subtracting the ash retention in the birds (ash retention per kg growth from Gous et al., 1999) and the amount of ash in eggs (ash content from and van den Brand et al., 2004 andMatt et al., 2009) from the ash intake in the feed plus the amount of ash in wood shavings (ash content from Alakangas, 2005). The amount of litter and manure was estimated under the assumption of a litter/manure ratio equal to 2 to 1 (van Emous, non-published data). Statistical Analysis The data were analyzed using Genstat statistical software (Genstat, 2015). Statistical significance was declared at P < 0.05, with 0.05 < P < 0.10 considered as a tendency. Response variables with regard to litter composition, manure composition, and ammonia emission were analyzed using the REML (REstricted Maximum Likelihood) directive of GenStat according the following model: where Y ijkm is the response variable, μ the overall mean, R i the random effect of room (i = 1…4), CP j the effect of CP diet (CPh, CPl; j = 1, 2), A k the effect of age (34, 44, 54 wk of age; k = 1…3), L m the effect of sampling location in the pen (litter, manure under slatted floor; m = 1, 2), and Ɛ ijkm the residual error term. Pen was the experimental unit. Interaction effects between CP j , A k and L m were tested for significance but excluded from the final model when not significant. Significant differences between levels of a factor were determined using t-tests. Response variables with regard to feed and water intake, N balance and N loss were analyzed using Analysis of Variance (ANOVA) with room as block and CP diet (CPh, CPl) as treatment. Ammonia Concentration and Emission The results on concentrations and emissions of ammonia as affected by dietary protein level are shown in Table 2. Ammonia concentration was 9.2% lower (4.76 vs. 5.24 ppm; P = 0.039), and ammonia emission was 9.0% lower (103.6 vs. 113.8 g/h per m 2 ; P = 0.017), in pens with birds fed the CPl diet in comparison to pens with birds fed the CPh diet. This reduction applied to both the litter and the manure under the slatted floor. Ammonia production and volatilization from litter and manure depends on temperature, pH, water availability and physical-chemical interactions in the litter and manure, but the level of N excreted from the birds as undigested proteins and uric acid to the litter and manure is the basis for differences in ammonia production (Groot Koerkamp, 1994;Fergusson et al., 1998a). The results from the present work further supports evidence from previous studies that reducing dietary CP level in broilers reduces ammonia concentration (Ferguson et al., 1998b;Gates et al., 2000;Ospina-Rojas et al., 2012;Hernandez et al., 2013) and emission (Elwinger and Svensson, 1996;Fergusson et al., 1998a). Several studies in broilers have shown an effect between dietary CP level and excretion of N (Moran et al., 1992;Elwinger and Svensson, 1996;Ferguson et al., 1998a,b;Khajali and Moghaddam, 2006;Kamran et al., 2010;Ospina-Rojas et al., 2012;van Harn et al., 2017). The reduction of ammonia emission in the present study (6% per 10 g/kg lower dietary CP level) was lower than expected based on previous studies in broilers by Ferguson et al. (1998b), Ospina-Rojas et al. (2012), and Hernandez et al. (2013). These authors found reductions of ammonia emission in the range of 8 to 14% per 10 g/kg reduction of dietary CP level. 
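The REML mixed model described above (random room effect; fixed effects of CP diet, age and sampling location with their interactions) can also be expressed outside GenStat. Below is a hedged sketch using statsmodels; the data file and column names are hypothetical, the sketch omits the pen-level structure and the stepwise removal of non-significant interactions that the study applied.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hedged re-expression of the REML model from the text, fitted with statsmodels
# instead of GenStat. File and column names are hypothetical placeholders.
df = pd.read_csv("litter_manure_composition.csv")

model = smf.mixedlm(
    "total_n ~ C(cp_diet) * C(age_wk) * C(location)",  # fixed effects and interactions
    data=df,
    groups=df["room"],                                  # random room effect
)
fit = model.fit(reml=True)
print(fit.summary())
```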
An explanation for this could be that, in the present study, factors other than dietary CP (temperature, pH, and moisture content) were very similar between the diets in both litter and manure (Table 3). Litter Composition The results of litter and manure composition as affected by dietary CP level are shown in Table 3. Total-N was 8% lower (33.2 vs. 36.2 g/kg DM; P < 0.001) and ammonia-N was 13% lower (5.6 vs. 6.4 g/kg DM; P = 0.014) in the litter and manure samples of the birds fed the CPl diet as compared to the birds fed the CPh diets. The dietary CP level did not affect the DM content (P = 0.59), ash content (P = 0.064), and pH (P = 0.73) of litter and manure. In the present study with broiler breeders, total-N in the litter and manure was 6% lower per 10 g/kg reduction of dietary CP level. This result can be compared with those from experiments with their offspring, i.e., broilers (Moran et al., 1992;Elwinger and Svensson, 1996;Ferguson et al., 1998a,b;Khajali and Moghaddam, 2006;Kamran et al., 2010;Ospina-Rojas et al., 2012;van Harn et al., 2017). These studies showed a 3 to 10% lower total-N content in the litter per 10 g/kg reduction of dietary CP level. Total-N content was 10% lower (32.8 vs. 36.5 g/kg DM; P < 0.001) and ammonia-N content was 14% lower (5.5 vs. 6.5 g/kg DM; P = 0.002) for the manure samples as compared to the litter samples (Table 3). This is surprising given the fact that litter is manure too which originated from the same birds. Apparently, the net result of all factors influencing microbial degradation of uric acid and undigested proteins to ammonia were more favorable in the manure than in the litter. These factors can be a higher water content in manure (477 vs. 399 g/kg), more aerobic conditions in the generally more crumbly litter, and possibly a higher temperature in the manure than in the litter due to composting. The ash content of the litter and manure in the present study was not affected by dietary CP level (Table 3), which is in agreement with earlier studies in broilers (Moran et al., 1992;Kamran et al., 2010). Furthermore, the ash content did not differ between litter and manure which can be explained by the fact that equal amounts of feed (i.e., the main source of ash) were fed to all birds. The ash content of litter and manure increased with the age of the birds, most likely due to drying which is reflected in the increase in DM content with age (i.e., in time). Research of Taraba et al. (1980) showed that, besides other factors, the pH of litter has a large influence on ammonia production. Normally, the pH of litter ranges between 6.5 and 8.5 (Anthony et al., 1994;Ferguson et al., 1998a;Khajali andMoghaddam, 2006, van Harn et al., 2017). A pH below 7.0 (neutral) reduces the uricolytic bacterial population responsible for ammonia production and increases the population of other bacteria which absorb ammonia, resulting in less ammonia volatilization to the environment (Ferguson et al., 1998a,b). In the present study, dietary CP level did not affect the pH of the litter and manure (Table 3). This finding is in agreement with those from broiler experiments by Ferguson et al. (1998a) and van Harn et al. (2017) but contrasts with those from broiler experiments by Ferguson et al. (1998b) and Khajali and Moghaddam (2006). The latter studies found a lower pH of litter when broilers were fed diets with lower CP levels. Thus, the literature is inconclusive on this matter. 
Possibly, the absence of an effect of dietary CP level on pH in the present study and previous ones (Ferguson et al., 1998a;van Harn et al., 2017) has been caused by the presence of cakes on the litter. Litter covered with such a top layer definitely shows different characteristics, however the literature is inconclusive on this matter. The pH of the manure (under the slatted floor) was lower (7.4 vs. 7.8; P < 0.001) as compared to the litter on the floor (Table 3) which coincided with a lower ammonia content (5.5 vs. 6.5 g/kg; P = 0.002). This finding is consistent with prior studies of Carr et al. (1990) and Ferguson et al. (1998a,b). It is postulated by Elliot and Collins (1982) that especially the combination of moisture content and pH controls the release of ammonia from manure. Relative small changes in pH (−log[H + ]) results in large changes in [H + ] concentration which, in turn, largely affects the free (unionized) ammonia content of the manure. Over time, pH of the litter and manure samples decreased from 8.5 to higher than 7.0, reaching a stable situation from 44 wk of age onwards (Table 3). This is generally in agreement with the previous discussion about the correlation between pH and ammonia-N content in this paper. Table 4 shows that birds fed the CPl diets showed a 11% lower loss of total N (644 vs. 727 g N per hen housed; P < 0.001) and a 14% lower loss of gaseous N (294 vs. 340 g N per hen housed; P = 0.008) as compared to the birds fed the CPh diet. Total N losses (N losses in the excretion) in the present study was 7% lower calculated for a 10 g/kg lower dietary CP level which is in agreement with previous studies with breeders and broilers. An 8% lower total N loss per Nitrogen Balance and Losses 10 g/kg lower dietary CP level has been found in breeders (Lopez and Leeson, 1995a,b) and a 9% lower total N loss per 10 g/kg lower dietary CP level has been found in broilers (Elwinger and Svensson, 1996;Kamran et al., 2010;Hernandez et al., 2013). The difference between the CPh and CPl diets in calculated gaseous N losses was higher (14%) as compared to the reduction in ammonia emission (9% ; Table 3). This may have been caused by underestimation of the N content of manure and litter because of the use of measured N content values of the manure and litter from 54 wk of age instead of 60 wk of age. Moreover, the total amount of litter and manure was calculated via the ash balance method which included assumptions (values from literature) on the ash content of eggs, birds, and bedding material. Water and Feed Intake and Water/Feed Ratio The results on water intake, feed intake, and water/feed ratio as affected by dietary CP level are shown in Table 5. Dietary CP level affected neither of the 3 variables. Studies in broilers, however, did find a lower water intake and water/feed ratio with lower dietary CP levels (Elwinger and Svensson, 1996;Bailey, 1999;Hernandez et al., 2013;van Harn et al., 2017). Elwinger and Svensson fed broilers diets with CP levels of 22 (control), 20 or 18% during the entire growth period. They found that water intake was 3.5 and 7.0% lower for the 20 and 18% CP diet as compared to the control (22%), respectively, whereas water/feed ratio was 2 and 5% lower. This finding is in agreement with research in broilers by van Harn et al. (2017) with 4 different treatment groups (control, −10, −20, and −30 g/kg CP). The latter authors found that water intake and water/feed ratio decreased linearly with dietary CP level. 
This lowering effect on water intake is explained by the lower protein intake, which decreases the amount of metabolites that need to be excreted in the urine, a process that otherwise enhances water intake (Elwinger and Svensson, 1996; van Harn et al., 2017).

CONCLUSIONS

Results from the present study in broiler breeders show that reducing the dietary CP level by 15 g/kg (on average from 140 to 125 g/kg, depending on the breeder layer diet) reduces nitrogen excretion in the litter and manure by 8%, ammonia emission by 9%, and total N losses by 11%, and does not affect water or feed intake. Overall, reducing the CP level in the diet of broiler breeders reduces ammonia emission from litter and manure by 6% per 10 g/kg reduction of CP.

ACKNOWLEDGMENTS

This research was conducted within the framework of the public-private partnership "Breeders In Balance (BIB)". It was funded in part by the Dutch poultry industry, in part by the Ministry of Agriculture, Nature and Food Quality, and in part by Evonik Nutrition & Care GmbH, Germany. We thank the staff and animal keepers of the poultry facilities Carus (Wageningen, the Netherlands) for taking care of the animals and for their assistance in performing the study, and Klaas Blanken and Guus Nijeboer for carrying out the measurements. Furthermore, we thank the anonymous reviewers for providing useful comments on the draft manuscript.
Prevalence of Human Papillomavirus Genotypes in Patients with Genital Warts in Gorgan, Iran

Background and objectives: Low-risk and high-risk human papillomavirus (HPV) genotypes are the main cause of anogenital warts. The present study aimed to determine the prevalence of HPV genotypes in patients with anogenital warts in Gorgan, northeast of Iran.

Methods: In this cross-sectional study, 40 biopsy samples were taken from patients with anogenital warts in Gorgan, Iran. After DNA extraction, multiplex polymerase chain reaction was carried out to detect HPV genotypes 54, 18, 16, and 6. Demographic characteristics of the subjects, including gender, age, education level, marital status, smoking, and method of contraception, were also collected. Data were analyzed in SPSS 16 software at a significance level of 0.05.

Results: The mean age of male and female patients was 31.81±6.9 and 27.95±6.92 years, respectively. The frequency of HPV-6, HPV-16, and HPV-54 was 77.5%, 15%, and 7.5%, respectively. HPV-18 was not detected in the collected specimens. Co-infection of HPV-54 with HPV-6 and HPV-16 was also observed in some cases. No significant association was found between HPV infection and age, gender, smoking, contraceptive method, or education level.

Conclusions: Similar to previous studies in Iran and other countries, HPV type 6 is the predominant cause of genital warts in Gorgan, Iran. Further studies with a larger study population are needed to explore the role of other contributors to HPV-induced genital warts.

INTRODUCTION

Human papillomavirus (HPV) is the leading cause of cervical cancer, the fourth most common cancer among women worldwide (1). It is also involved in the development of anogenital tumors and warts, and it has an established role in the etiology of oropharyngeal cancers (2). To date, more than 200 HPV genotypes have been identified, although only about one fifth of them are associated with anogenital infections. Genital HPV genotypes are divided into high-risk and low-risk groups based on their carcinogenic potential (3,4). The high-risk HPV genotypes include 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73, and 82, whereas HPV types 6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, and 89 are considered low-risk genotypes (5-7). HPV-16, -18, and -51 are the predominant oncogenic genotypes involved in the development of the majority of cervical cancer cases (8,9). Almost 4.5% of all cases of head and neck cancer as well as anogenital cancers are caused by HPV (10). In Iran, high-risk HPV types 16 and 18 are the predominant genotypes, with frequencies of 77.5% and 32.4% in cervical cancer and head and neck cancer cases, respectively (11). Low-risk HPV types are commonly associated with benign anogenital warts (12). Although most genital warts arise from infection with HPV types 6 or 11, high-risk genotypes have also been isolated from patients with genital warts (13,14). Alongside HPV-11 and HPV-6, HPV-54 has also been isolated from clinical samples of anogenital warts (15,16). The incidence of HPV genotypes in Iran (77.5%) is almost identical to that in other parts of the world (11). Low-risk HPV genotypes, including types 6, 11, and 54, are more prevalent among Iranian men (17). A similar distribution of HPV genotypes 18, 16, 11, and 6 exists among Iranian women (18,19). Therapeutic approaches for the treatment of genital warts include podophyllin, trichloroacetic acid, cryotherapy, electrocautery, imiquimod, carbon dioxide laser, and surgery.
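Before turning to treatment outcomes and the present study, the high-risk/low-risk grouping described in this introduction can be summarized as a simple lookup. The snippet below is only an illustrative restatement of the genotype lists quoted above; it is not part of the study's methods, and the function name is hypothetical.

```python
# Illustrative lookup of genital HPV genotypes by carcinogenic potential,
# restating the grouping quoted in the introduction above.
HIGH_RISK = {16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73, 82}
LOW_RISK = {6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, 89}

def risk_group(genotype: int) -> str:
    """Return the risk group of a genital HPV genotype, or 'unclassified'."""
    if genotype in HIGH_RISK:
        return "high-risk"
    if genotype in LOW_RISK:
        return "low-risk"
    return "unclassified"

# The four genotypes targeted by the multiplex PCR in this study:
for g in (6, 16, 18, 54):
    print(f"HPV-{g}: {risk_group(g)}")
```

Of the four genotypes assayed in this study, HPV-6 and HPV-54 thus fall in the low-risk group, while HPV-16 and HPV-18 fall in the high-risk group.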
Despite treatment, the recurrence rate of genital warts ranges between 40% and 90% (13). In this study, we investigated the frequency of HPV infection in patients with genital warts in Gorgan, northeast of Iran.

DNA extraction and HPV genotyping

A genomic DNA extraction kit (Macherey-Nagel, Germany) was used for DNA extraction from sections of anogenital warts. A multiplex polymerase chain reaction (PCR) assay was designed to investigate the presence of low-risk (types 6 and 54) and high-risk (types 16 and 18) HPV genotypes. HPV types 16 and 18 are the predominant high-risk genotypes in Iran, while HPV-6 is the most common low-risk HPV genotype in anogenital specimens; however, the prevalence of the low-risk HPV genotype 54 is not clear. Four primer pairs encompassing the HPV L1 coding sequence were designed (Table 1). PCR was performed as follows: initial denaturation at 94 °C for 5 minutes, followed by 35 cycles of 94 °C for 30 seconds, 58 °C for 30 seconds, and 72 °C for 40 seconds. PCR products were electrophoresed on a 1% agarose gel. An HPV-positive specimen and a specimen-free sample were used as the positive and negative controls, respectively.

Statistical analysis

Descriptive analysis was performed using the SPSS 16.0 software package. Crosstabulation and the chi-square test were used to evaluate the association of HPV infection with the demographic factors (an illustrative sketch of such a test is given further below). Results are reported as frequency percentages among patients with HPV-positive genital warts. A p-value of less than 0.05 was considered statistically significant.

RESULTS

The mean age of men and women was 31.81±6.9 and 27.95±6.92 years, respectively. The frequency of HPV genotypes among patients with HPV-positive genital warts is shown in Table 2. The highest and lowest genotype frequencies were found for HPV-6 (77.5%) and HPV-54 (7.5%), respectively. HPV-18 was not detected in any of the samples. Co-infection with two or three HPV genotypes was also observed: co-infection of HPV-6 with HPV-54 and of HPV-16 with HPV-54 was detected in two patients, and co-infection of HPV-6, HPV-16, and HPV-54 was observed in one married woman. As shown in Table 2, HPV-6 was more prevalent in men than in women (P > 0.05). In contrast, HPV-16 was only detected among women. The prevalence of HPV-16 was 60.7% among patients aged 20 to 30 years. Overall, 29 patients were married and 13 patients were cigarette smokers. Of the 40 patients, five had primary education, 15 had a high school diploma, 18 had a bachelor's degree, and four had post-graduate education. Two patients had hypertension and a high blood low-density lipoprotein level. Condom use (n=38) was the most common contraceptive method, followed by tubectomy (n=1) and contraceptive pills (n=1). There was no significant relationship between HPV infection and the demographic characteristics.

DISCUSSION

Genital warts are benign epithelial cell growths caused by sexually transmitted HPV infection (21,22). Genotyping of genital HPV is of great clinical significance for developing treatment plans as well as follow-up and prevention strategies (23). Low-risk HPVs are frequently associated with anogenital warts, or condylomata acuminata. Depending on test sensitivity, HPV-6 can be detected in more than 90% of clinical samples of genital warts (24). In this study, 77.5% of HPV-positive samples were infected with HPV-6. In a previous study on 100 tissue specimens from women with HPV genital warts in Iran, HPV types 6 and 11 were found in 49% and 67% of patients, respectively.
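Association claims like these, in the present study and in the studies cited here, rest on chi-square tests of cross-tabulated counts, as described in the statistical analysis above. The sketch below only shows the form of such a test: the 2x2 counts are purely hypothetical (they are not the study's data), and SciPy is used merely as a stand-in for the SPSS crosstabulation analysis.

```python
# Illustrative chi-square test of association between HPV-6 positivity and gender.
# The counts are hypothetical and do NOT come from the study; SciPy stands in for
# the SPSS crosstabulation/chi-square analysis described in the methods.
from scipy.stats import chi2_contingency

# Rows: HPV-6 status; columns: gender (men, women). Hypothetical counts only.
observed = [[12, 8],   # HPV-6 positive
            [9, 11]]   # HPV-6 negative

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```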
That previous study also found no significant association between marriage and HPV-11 (14). These reports demonstrate the high prevalence of low-risk HPV types 6 and 11 in anogenital warts. In recent reports, 20-50% of patients with genital warts harbored high-risk HPV genotypes. In the present study, however, we found no HPV-18-positive samples and seven (15%) HPV-16-positive samples among patients with anogenital warts. After low-risk HPV types 6 and 11, HPV-16 is reported to be the most frequent HPV type isolated from anogenital warts. Consistent with our results, Ma et al. found HPV-16 in 10.6% of patients with genital warts (25). Similarly, Jalilian et al. found a high frequency of HPV-6 (64.8%) and HPV-16 (9.2%) in patients with genital warts in the west of Iran (18). In the present study, co-infection with HPV types 6, 16, and 54 was observed in some cases. Another study also reported a high rate of HPV-6 and -11 co-infection in genital warts (17). HPV-54 is mainly isolated from genital tumors (26, 20). In a study by Chen et al., a large proportion of low-risk HPV genotypes was detected in patients under 20 years of age (30). We found no association between HPV infection and gender, smoking, condom use, or education level. Further studies with a larger study population could help clarify the role of environmental factors in the incidence of HPV infection. In the present study, HPV-6 was more prevalent in men than in women. In a study of 66 patients with anogenital warts, Ozaydin-Yavuz et al. found a relationship between HPV type distribution and age, gender, place of residence, and number of sexual partners. Similar to our findings, they detected a higher frequency of HPV genotypes in men with genital warts (13). Another study in Iran also reported a higher frequency of HPV-6 in HPV-positive genital specimens from men (17). However, geographical factors can affect the distribution of HPV infection in men and women.

CONCLUSION

Based on these results, HPV-6 is the predominant cause of anogenital warts in Gorgan, northeastern Iran. In addition, the rate of HPV infection was slightly higher in patients under 30 years of age. Further studies with a larger study population are needed to explore the role of other contributors to HPV-induced genital warts.

ACKNOWLEDGMENTS

The Department of Microbiology of Golestan University of Medical Sciences is acknowledged for its support in conducting this study.