The interrelatedness of cognitive abilities in very preterm and full-term born children at 5.5 years of age: a psychometric network analysis approach
Background: Very preterm (VP) birth is associated with a considerable risk for cognitive impairment, putting children at a disadvantage in academic and everyday life. Despite lower cognitive ability at the group level, there are large individual differences among VP born children. Contemporary theories define intelligence as a network of reciprocally connected cognitive abilities. Therefore, intelligence was studied as a network of interrelated abilities to provide insight into interindividual differences. We described and compared the networks of cognitive abilities, including the strength of interrelations between abilities and their relative importance, of VP and full-term (FT) born children and of VP children with below-average and average-high intelligence at 5.5 years.

Methods: A total of 2,253 VP children from the EPIPAGE-2 cohort and 578 FT controls who participated in the 5.5-year follow-up were eligible for inclusion. The WPPSI-IV was used to measure verbal comprehension, visuospatial abilities, fluid reasoning, working memory, and processing speed. Psychometric network analysis was applied to analyse the data.

Results: Cognitive abilities were densely and positively interconnected in all networks, but the strength of connections differed between networks. The cognitive network of VP children was more strongly interconnected than that of FT children. Furthermore, VP children with below-average IQ had a more strongly connected network than VP children with average-high IQ. Contrary to our expectations, working memory had the least central role in all networks.

Conclusions: In line with the ability differentiation hypothesis, children with higher levels of cognitive ability had a less interconnected and more specialised cognitive structure. Composite intelligence scores may therefore mask domain-specific deficits, particularly in children at risk for cognitive impairments (e.g. VP born children).
Introduction
Meta-analyses have shown that very preterm (VP, <32 weeks' gestation) born children have on average up to 13 points lower IQ than their full-term (FT) born peers (Allotey et al., 2018; Brydges et al., 2018; Twilhaar et al., 2018). Between 20 and 40 weeks' gestation, multiple rapid and complex developmental processes occur in the brain that are highly vulnerable to disruption caused by preterm birth and associated pathogenetic factors (e.g. inflammation, hypoxia, ischemia; Volpe, 2019). This leads to injury and dysmaturation of white and grey matter (Volpe, 2019) and subsequent cognitive deficits (Anderson et al., 2017). These deficits are evident as early as preschool and persist into adulthood (Arpi et al., 2019; Eves et al., 2021; Weisglas-Kuperus et al., 2009). VP born children are therefore at a significant lifelong disadvantage in both academic and everyday life, as intelligence is associated with a variety of outcomes, including academic achievement, income, life satisfaction, and mental and physical health (Brown, Wai, & Chabris, 2021). However, there are large interindividual differences in cognitive outcomes among VP born children. Heeren et al. (2017) provided more insight into this heterogeneity in a sample of extremely preterm (EP, <28 weeks of gestation) born children by identifying four distinct cognitive profiles that differed in severity and in the abilities affected. The aetiology of these differences, however, remains unclear.
For decades, researchers have tried to explain individual differences in intelligence. Cognitive tests are known to correlate positively with each other: someone who scores high on one cognitive test tends to also score high on other cognitive tests. This phenomenon is called the positive manifold, and its strength varies across individuals. The ability differentiation hypothesis states that higher cognitive ability is associated with a weaker positive manifold. Different cognitive abilities are thus less interrelated, resulting in a more differentiated cognitive structure in which abilities are more specialised and distinctly recognisable (Breit, Brunner, & Preckel, 2020). Historically, the positive manifold has been ascribed to a single underlying general factor, g (Spearman, 1904).
More recently, the existence of g as a psychological attribute has been questioned and alternative theories of intelligence have been proposed. According to the mutualism model, cognitive abilities reciprocally influence each other during development (van der Maas et al., 2006). Specifically, growth in a certain cognitive ability results from autonomous growth of that ability and from reciprocal influences of growth in other cognitive abilities. As a result, cognitive abilities become positively interrelated over the course of their development. However, growth is restricted by ability-specific limiting capacities. These capacities vary across individuals as a function of genetic and environmental factors, giving rise to individual differences in abilities (van der Maas et al., 2006;van der Maas, Kan, Marsman, & Stevenson, 2017). Process Overlap Theory (POT) assumes that any cognitive test requires both domain-general and domain-specific processes. Domain-general processes include primarily executive processes (e.g. goal maintenance, updating, inhibition) that are involved in a variety of tasks, whereas domain-specific processes are particularly involved in certain types of tasks (e.g. verbal, spatial, numeric). Positive correlations between tests arise because of overlapping domain-general executive processes and domain-specific processes that are involved in these tests (Kovacs & Conway, 2016). Domain-general executive processes are involved in most tests and constrain performance to various extents because of individual differences in these processes. For example, individuals with deficits in executive processes are more likely to perform poorly across test items, despite unaffected domain-specific processes (e.g. spatial reasoning) involved in some parts of the test.
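The mutualism mechanism described above can be illustrated with a small simulation. This is a minimal sketch under assumed, illustrative parameters (the coupling strength, capacity ranges, and integration settings below are not taken from the cited work): logistically growing abilities with positive reciprocal coupling and person-specific limiting capacities yield positively correlated abilities across individuals, i.e. a positive manifold.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mutualism(n_persons=1000, n_abilities=5, a=1.0, m=0.1,
                       steps=2000, dt=0.01):
    """Euler integration of the mutualism model (van der Maas et al., 2006):
    dx_i/dt = a*x_i*(1 - x_i/K_i) + a*m*x_i*sum_{j!=i} x_j / K_i.
    Limiting capacities K vary across persons (genetic/environmental
    variation); positive coupling m makes abilities correlate."""
    K = rng.uniform(5.0, 15.0, size=(n_persons, n_abilities))
    x = np.full((n_persons, n_abilities), 0.1)  # small starting levels
    for _ in range(steps):
        coupling = x.sum(axis=1, keepdims=True) - x  # sum over j != i
        dx = a * x * (1.0 - x / K) + a * m * x * coupling / K
        x = x + dt * dx
    return x

abilities = simulate_mutualism()
corr = np.corrcoef(abilities, rowvar=False)
off_diag = corr[np.triu_indices_from(corr, k=1)]
```

Even though each person's capacities are drawn independently per ability, the reciprocal coupling makes every pairwise correlation between final ability levels positive, mirroring the positive manifold the theory sets out to explain.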
In line with these contemporary theories and with criticisms of g being merely a statistical artefact of latent factor analysis, the present study considered the structure of intelligence as a system of interrelated abilities without presuming a single underlying general factor (Schmank, Goring, Kovacs, & Conway, 2019; van der Maas et al., 2017). These theories are compatible with psychometric network analysis, which was applied in the current study as a viable alternative to factor models. Our main objective was to provide insight into the intelligence structure and to increase our understanding of individual differences in VP and FT born children at 5.5 years of age. To this end, we described and compared the networks of cognitive abilities, including the strength of interrelations and the relative importance of abilities, in VP and FT born children and in VP children with lower compared to higher IQ. In line with ability differentiation, it was hypothesised that abilities would be more strongly interrelated in VP than in FT children and in VP children with lower compared to higher IQ. Based on the central role proposed for working memory (WM) by mutualism (van der Maas et al., 2017) and on a previous network analysis of intelligence in adults (Schmank et al., 2019), WM was expected to be one of the most central abilities in the network across samples.
Subjects
EPIPAGE-2 is a prospective population-based cohort study of infants born preterm with a gestational age (GA) between 22 and 34 weeks in France (Lorthe et al., 2021). Participants were recruited between March 28 and December 31, 2011. The present study focuses on EP and VP born children (GA < 32 weeks) at 5.5 years of age, of whom 2,253 children with available follow-up data and no chromosomal and/or severe congenital abnormalities were eligible for inclusion. Infants born between 22 and 26 weeks GA were recruited during an 8-month period and those born between 27 and 32 weeks GA during a 6-month period (Lorthe et al., 2021). Detailed information about the inclusion and exclusion from birth to 5.5 years is presented in Figure 1.
Figure 1
Flowchart for very preterm born children from birth to follow-up at 5.5 years

A total sample of 578 FT born peers, born between 37 and 40 weeks GA and with available follow-up data, were included as a reference sample. FT children with chromosomal and/or severe congenital abnormalities were excluded from the analysis. The FT children were part of the larger population-based ELFE study (N = 18,040; Charles et al., 2020). For financial and organisational reasons, a subsample of 600 of these children was subjected to the same assessments as the VP children, which was a sufficient number to ensure good precision of test scores (Pierrat et al., 2021).
Cognitive assessment
The Wechsler Preschool and Primary Scale of Intelligence, Fourth Edition (WPPSI-IV) (Wechsler, 2012), for the older age group (i.e., 4:0 to 7:7) was used to assess cognitive abilities at 5.5 years of age. WPPSI-IV assesses five areas of cognitive functioning, namely verbal comprehension index (VCI), visuospatial index (VSI), fluid reasoning index (FRI), working memory index (WMI) and processing speed index (PSI). These primary indices are composite scores, each made up of two core subtests, with a mean of 100 and a standard deviation of 15. The descriptions of these subtests and the abilities they measure can be found in Table 1.
Procedure
Follow-up at 5.5 years of age was conducted between September 2016 and December 2017. Written informed consent for participation was obtained from both parents. A set of neuropsychological tests, including the WPPSI-IV, were administered by trained psychologists.
Statistical analyses
Missing data evaluation. R (version 4.1.1; R Core Team, 2021) was used for data analysis. Missing data were analysed and visualised with the R packages VIM (Kowarik & Templ, 2016), mice (Van Buuren & Groothuis-Oudshoorn, 2011), and naniar (Tierney, Cook, McBain, & Fay, 2021). Perinatal and socio-economic characteristics of the VP sample with complete WPPSI-IV data, with one or more missing WPPSI-IV subtest scores, and those lost to follow-up were compared. ANOVA and independent samples t-tests were used to compare the means of continuous data, and the χ² test to compare frequencies of categorical variables. Means, SDs, percentages, t-tests, ANOVA, and χ² tests were weighted by sampling weights to account for differences in recruitment duration in the VP sample (Pierrat et al., 2021).
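The weighting of descriptive statistics amounts to replacing plain means and SDs with their weighted counterparts. A minimal sketch of the computation (the study itself used R survey-weighting routines; the scores and weights below are made up for illustration):

```python
import numpy as np

def weighted_mean_sd(x, w):
    """Weighted mean and SD, e.g. with sampling weights compensating for
    the longer recruitment window of one gestational-age stratum."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    mean = np.average(x, weights=w)
    sd = np.sqrt(np.average((x - mean) ** 2, weights=w))
    return mean, sd

# illustrative scores, with larger weights for an under-sampled stratum
scores = np.array([90.0, 95.0, 100.0, 105.0])
weights = np.array([2.0, 2.0, 1.0, 1.0])
mean, sd = weighted_mean_sd(scores, weights)
```

Here the weighted mean (≈95.8) sits below the unweighted mean of 97.5 because the under-sampled, lower-scoring cases are up-weighted.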
Cognitive outcomes. Differences in mean full-scale IQs, index scores, and specific subtests between the VP and FT samples were tested with an independent samples t-test. Cohen's d was used to quantify effect sizes, with .2, .5, and .8 indicating small, medium, and large effect sizes, respectively (Cohen, 1988). Cases with missing data for all subtests were excluded from the analyses. For cases with incomplete WPPSI-IV data, missing data were handled by multiple imputation by chained equations with predictive mean matching. In total, 50 imputed datasets were generated (5 iterations each). Neonatal characteristics, parental socioeconomic status and cognitive scores were included as predictors.
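The effect sizes can be sketched with the standard pooled-SD formula for Cohen's d (a generic implementation, not the study's code):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: mean difference divided by the pooled standard
    deviation; .2/.5/.8 are conventionally read as small/medium/large."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1)
                  + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)
```

For instance, two groups whose means differ by half a pooled SD yield d = .5, the "medium" benchmark cited above.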
To define VP subsamples with below-average and average-high intelligence levels, a cut-off point of 93 was used as described in Pierrat et al. (2021), corresponding to 1 SD below the mean of the FT sample after weights were applied to improve the sample's representativeness (Charles et al., 2020).
Psychometric network analysis. In a psychometric network, observed variables are presented as nodes, while edges and edge weights represent statistical associations and the strength of these associations between nodes, respectively (Epskamp, Borsboom, & Fried, 2018). The 10 core subtests were used as nodes. Although these subtests involve multiple abilities, we refer to them as single cognitive abilities for simplicity. Network estimation was performed using qgraph (Epskamp, Cramer, Waldorp, Schmittmann, & Borsboom, 2012) as implemented in the bootnet package. We estimated four Gaussian graphical models, where edges represent partial correlation coefficients (Epskamp, Kruis, & Marsman, 2017), using regularisation (EBICglasso) to identify the cognitive networks of VP and FT born children and of VP children with below-average and average-high IQ. The EBICglasso estimator was used because it works well in retrieving an overall structure that resembles the true network while depicting non-prominent edges in faded colours or setting them to zero, thus reducing the risk of spurious connections (Isvoranu & Epskamp, 2021). Using the graphical lasso, multiple regularised networks were estimated, with the level of sparsity dictated by the tuning parameter lambda. The best-fitting model was then chosen using the extended Bayesian information criterion (EBIC), where the hyperparameter gamma determines how conservative EBIC will be. Gamma was set to 0.5 according to guidelines (Foygel & Drton, 2010). Missing data were handled by full information maximum likelihood estimation.
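The estimation step can be sketched in Python. The study used qgraph's EBICglasso in R; as an approximation, this sketch uses sklearn's GraphicalLassoCV, which selects the lasso penalty by cross-validation rather than EBIC, and the subtest scores are simulated from a single common factor:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)

# Simulated stand-in for 10 WPPSI-IV subtest scores: one common factor
# plus noise reproduces the positive manifold seen in real data.
n_children, n_subtests = 600, 10
g = rng.normal(size=(n_children, 1))
scores = 0.6 * g + rng.normal(size=(n_children, n_subtests))

# Regularised Gaussian graphical model; the penalty (lambda) is chosen
# by cross-validation here, where qgraph's EBICglasso would use EBIC.
model = GraphicalLassoCV().fit(scores)

# Edge weights are partial correlations, read off the precision matrix:
# rho_ij = -P_ij / sqrt(P_ii * P_jj).
prec = model.precision_
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)
```

In a dense positive-manifold structure like this, most off-diagonal partial correlations come out small and positive, with the lasso penalty shrinking the weakest edges toward (or exactly to) zero.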
Model fit was evaluated based on the RMSEA, TLI, and CFI indices, with RMSEA <.06-.08 and TLI/CFI values ≥.95 indicating good fit (Schreiber, Nora, Stage, Barlow, & King, 2006). To identify the most important nodes in the networks, strength centrality was computed using bootnet. Strength centrality corresponds to the combined strength (edge weights) of a node's connections (Opsahl, Agneessens, & Skvoretz, 2010). To quantify the degree of centrality, a strength z-score ≥1 SD above the mean was defined as strong centrality (Simpson-Kent et al., 2021). Furthermore, node predictability, which is based on the proportion of explained variance (R²), was calculated and visualised to assess how well a certain cognitive ability is predicted from other abilities that are directly linked to it, thereby giving further insight into the relevance of its connections (Haslbeck & Waldorp, 2018).
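Strength centrality itself is a simple computation on the weighted adjacency matrix. A sketch with a hypothetical 4-node network (the matrix below is made up for illustration):

```python
import numpy as np

def strength_centrality(weights):
    """Strength = summed absolute edge weights per node; nodes with a
    z-score >= 1 SD above the mean count as 'strongly central'."""
    w = np.abs(np.asarray(weights, dtype=float))
    np.fill_diagonal(w, 0.0)  # ignore self-loops
    s = w.sum(axis=1)
    z = (s - s.mean()) / s.std()
    return s, z

# hypothetical 4-node network: node 0 is linked to every other node
W = np.array([[0.0, 0.3, 0.3, 0.3],
              [0.3, 0.0, 0.1, 0.0],
              [0.3, 0.1, 0.0, 0.0],
              [0.3, 0.0, 0.0, 0.0]])
s, z = strength_centrality(W)  # node 0 has the highest strength
```

Node 0's strength is 0.9 against 0.3-0.4 for the others, and its z-score exceeds 1, so it would be flagged as strongly central under the criterion above.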
To compare differences in global strength of connectivity, network structure, and centrality across samples, the Network Comparison Test from the NetworkComparisonTest package was used (Van Borkulo et al., 2022). Based on a simulation study by Van Borkulo et al. (2022), high power (≥0.8) can be expected when the number of nodes in the network is low (i.e., 10) and the sample size of one network is at least 500, which resembles our conditions. The Bonferroni-Holm method was used to correct for multiple testing.
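The logic of the Network Comparison Test's global strength test can be sketched with a permutation procedure. This simplified version uses unregularised partial correlations instead of EBICglasso networks, so it only illustrates the resampling idea, not the package's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

def global_strength(X):
    """Sum of absolute partial correlations: the global connectivity
    statistic compared across groups (unregularised sketch)."""
    prec = np.linalg.inv(np.corrcoef(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 0.0)
    return np.abs(pc).sum() / 2  # each edge counted once

def nct_global_strength(X1, X2, n_perm=500):
    """Permutation p-value for a difference in global strength: group
    labels are reshuffled and the statistic recomputed each time."""
    observed = abs(global_strength(X1) - global_strength(X2))
    pooled = np.vstack([X1, X2])
    n1 = len(X1)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        diff = abs(global_strength(pooled[idx[:n1]])
                   - global_strength(pooled[idx[n1:]]))
        exceed += diff >= observed
    return (exceed + 1) / (n_perm + 1)

# two samples drawn from the same distribution: no true difference
p_null = nct_global_strength(rng.normal(size=(200, 5)),
                             rng.normal(size=(200, 5)), n_perm=200)
```

Under the null of identical networks, the observed difference is unremarkable relative to the permutation distribution, so the p-value tends to be non-significant.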
The accuracy and stability of the estimated network models were assessed using the bootnet package. The accuracy of estimated edge weights was determined by estimating 95% non-parametric bootstrapped confidence intervals (CIs). Stability of strength centrality was estimated by a non-parametric case-dropping subset bootstrap, to assess whether the order of nodes based on their strength remains stable when the number of cases in the sample is decreased. This was quantified by the correlation stability (CS) coefficient, which indicates the maximum proportion of cases that can be dropped while retaining a correlation of at least 0.7 between centrality values of the original and subset samples. Values above 0.5 indicate that the order of strength centrality can be interpreted, whereas values below 0.25 indicate that it is not interpretable.
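The case-dropping bootstrap behind the CS coefficient can be sketched as follows. This is a simplified version using unregularised partial-correlation networks, a small grid of case-dropping proportions, and simulated data with a clear strength ordering:

```python
import numpy as np

rng = np.random.default_rng(3)

def node_strength(X):
    """Node strength from an unregularised partial-correlation network."""
    prec = np.linalg.inv(np.corrcoef(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 0.0)
    return np.abs(pc).sum(axis=0)

def cs_coefficient(X, keep_props=(0.9, 0.7, 0.5, 0.3),
                   n_boot=100, r_min=0.7):
    """Largest proportion of cases that can be dropped while, in >= 95%
    of subset bootstraps, node strength still correlates >= r_min with
    the full-sample values (sketch of the CS coefficient)."""
    full = node_strength(X)
    n = len(X)
    cs = 0.0
    for keep in keep_props:
        ok = 0
        for _ in range(n_boot):
            idx = rng.choice(n, size=int(keep * n), replace=False)
            r = np.corrcoef(full, node_strength(X[idx]))[0, 1]
            ok += r >= r_min
        if ok / n_boot >= 0.95:
            cs = max(cs, 1.0 - keep)
    return cs

# simulated subtests with distinct factor loadings, so the strength
# ordering is well separated and should survive case dropping
n = 800
g = rng.normal(size=(n, 1))
loadings = np.array([0.9, 0.8, 0.7, 0.5, 0.4, 0.3])
X = g * loadings + rng.normal(size=(n, 6))
cs = cs_coefficient(X)
```

bootnet's real implementation searches a finer grid of dropping proportions and works on the regularised networks, but the interpretation is the same: the larger the proportion that can be dropped, the more trustworthy the centrality ordering.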
To explore the specific role of VP birth, network analyses were repeated in IQ-matched balanced samples. Samples were matched on FSIQ with optimal pair matching, using the R package MatchIt (Ho, King, Stuart, & Imai, 2011). In addition, sensitivity analyses excluding children with cerebral palsy and/or moderate-severe neurosensory impairments were performed to evaluate the robustness of the main findings against the influence of cases at high risk for intellectual impairment or compromised test performance.
Missing data evaluation
A total of 1,906 of the 2,253 VP born children participating in the 5.5-year follow-up completed all WPPSI-IV subtests. In total, 15% of WPPSI-IV subtest data were missing: 303 VP children (13%) had no subtest data available and 44 (2%) had partial data. Furthermore, 570 of the 578 FT born controls completed all WPPSI-IV subtests. For half (n = 4) of the FT children with missing data, all subtests were missing. A total of 1,950 VP and 574 FT born children with available WPPSI-IV data were used in the network analysis.
Comparison of VP children with complete (n = 1,906) and incomplete WPPSI-IV data (i.e., 1 or more subtests not completed; n = 347), as well as those lost to follow-up at 5.5 years (n = 941), is presented in Table 2. Regarding neonatal characteristics, VP children who did not participate in follow-up were born to younger mothers, and the percentage of multiple births in this group was lower compared to children who participated in follow-up. Furthermore, VP children who were lost to follow-up were more frequently born to mothers who were born outside Europe and who had a lower level of education at birth compared to VP children who participated in follow-up. The percentage of parents with a low educational level at 5.5 years was higher for children who did not complete one or more WPPSI-IV subtests than for children who completed all subtests. Additionally, a higher percentage of children with an incomplete cognitive assessment had cerebral palsy, and these children had significantly lower WPPSI-IV index scores compared to children with a complete assessment.
Cognitive outcomes
All WPPSI-IV scores were significantly lower in the VP sample (Table 3). The most affected cognitive domains were visuospatial (d = .8) and verbal (d = .8) abilities, whereas a medium effect size was observed for WM (d = .5). On the individual level, 38% of the VP born children had a below-average IQ (i.e., more than 1 SD below the FT mean; <93), 57% had an average IQ (i.e., within 1 SD of the FT mean; 93-119; Pierrat et al., 2021) and 5% had an above-average IQ (i.e., more than 1 SD above the FT mean; >119). Tables S1A-D show the full correlation matrices for all groups.
Network visualisation, description, and comparison
Very preterm and full-term sample. The network models showed good fit (Table S2). The VP and FT networks (Figure 2, left panel) were both densely connected (edge density VP = 0.93, FT = 0.91), with many positive links between abilities from different cognitive domains. In both networks, the strongest connections were observed between subtests within the same cognitive domain. This was especially prominent for nodes relating to verbal, visuospatial, and processing speed abilities (Figure 2, Figure S1). Due to wide confidence intervals, estimated edge weights of the FT sample should be interpreted with caution (Figure S1). The majority of abilities in the VP sample were strongly predicted from their connected abilities, in contrast to those in the FT sample. In the FT sample, for instance, the degree of explained variance was highest for Similarities (R² = .27) and Information (R² = .24), which is approximately half as much as in the VP network (Table S3).
The most strongly connected abilities in the VP network (Figure 2) were Similarities (z = 1.01) and Information (z = 0.95), which measure verbal ability, followed by processing speed (e.g., BS [z = 0.94]) and visuospatial ability (e.g., BD [z = 0.92]). Strength centrality varied across abilities in the VP sample, in which certain abilities had significantly higher strength than others (Figure S2). Far fewer differences were observed in the FT sample. WM had the lowest strength centrality. In the FT sample, the order of node strength could not be reliably interpreted because of the low CS coefficient (see the Network stability paragraph). The Similarities subtest (verbal ability) showed strong centrality (i.e., z ≥ 1 SD above the mean) in both the VP and the FT sample.
Network comparison: The network of VP born children was more strongly interconnected than the network of FT born children (distance measure S = 0.58, p < .001). No statistically significant differences in network structure (i.e., in individual edges) were found (distance measure M = 0.13, p = .19). Statistically significant differences in strength centrality were found for Information (p = .02) and Cancellation (p = .02) between the two networks, which were less strongly connected in the FT compared to the VP network.
Very preterm sample: below-average vs. average-high IQ. The fit of both models was good (Table S2).
The network of VP born children with below-average IQ (Figure 3, top left panel) was densely connected (edge density = 0.84), whereas the network of VP born children with average-high IQ (Figure 3, bottom left panel) was the least densely connected network (edge density = 0.67) of all estimated networks. Individual abilities in the latter were also the least strongly predicted from their connections, with the proportion of explained variance ranging from R² = .01 for WM (i.e., ZL) to R² = .13 for verbal ability (i.e., SI). The most strongly connected abilities in the network of children with below-average IQ were processing speed (e.g., CA [z = 0.91]), followed by visuospatial (e.g., OA [z = 0.89]) and verbal abilities (e.g., IN [z = 0.86]). In contrast, the most strongly connected abilities in the network of VP children with average-high IQ were visuospatial ability (e.g., OA [z = 0.63]) and fluid reasoning (e.g., PC [z = 0.59]). Again, WM was least strongly connected to other abilities in both networks. The Bug Search and Cancellation subtests (processing speed) showed strong centrality in the VP sample with below-average IQ, whereas the Object Assembly subtest (visuospatial ability) had strong centrality in the VP sample with average-high IQ. Within samples, the degree of centrality varied across abilities in children with below-average IQ, which was generally not the case for children with average-high IQ (Figure S2).
Network comparison:
The network of children with below-average IQ was more strongly interconnected than the network of children with average-high IQ (S = 1.32, p < .001). The test on invariant network structure was statistically significant (M = 0.20, p < .001). Specifically, the networks differed in three edges: Information-Similarities (p = .02), Matrix reasoning-Picture concepts (p = .03) and most remarkably Information-Cancellation (p < .001), which showed a positive correlation (bootstrapped edge-weight = .20; 95% CI [.14, .27]) in the group with below-average IQ but were unrelated in the group with average-high IQ. Moreover, the relative importance of all cognitive abilities, except for those measured with Block Design, was significantly higher in the sample with below-average IQ.
Network stability. Variability was observed in edge-weight accuracy (Figure S1). In the VP network, for example, IN-SI, BS-CA, and BD-OA were the most accurately estimated edges, whereas the CIs of other edges were wider. Estimated edge weights were least precise for the FT network. Therefore, the strength of edges with wide CIs should be interpreted with caution.
The stability of strength centrality was highest in the VP network (CS = 0.75), meaning that the order of node strength could be interpreted (Figure S3). Furthermore, the stability of strength was acceptable in the networks of VP children with below-average IQ (CS = 0.67) and with average-high IQ (CS = 0.52). In contrast, the order of node strength could not be reliably interpreted in the FT network, as correlations with the original sample decreased steeply in the subsamples with dropped cases (CS = 0.36).

IQ-matched samples. Despite similar FSIQs, processing speed and visuospatial ability were considerably lower in the VP compared to the FT sample (Table S4). Comparison of the cognitive networks of these matched samples yielded no differences in network structure, strength, or centrality (Figure S4).
Sensitivity analysis. Exclusion of children with cerebral palsy and/or moderate-severe neurosensory problems (n = 140) did not alter the main results. VP born children (n = 1,824) had more strongly interconnected networks than FT born children (n = 570) (S = 0.50, p < .001), whereas no differences in network structure were observed. Strength centrality differed only for Information (p = .01), which was more strongly connected to other subtests in the VP than in the FT group.
Discussion
This study is the first to provide insight into the structure of intelligence in large population-based samples of VP and FT born children at 5.5 years of age. In both samples, cognitive abilities formed a strongly interrelated network at this age. Nevertheless, important differences in the strength of connectivity in the networks were observed between groups. Cognitive abilities were more strongly interrelated in VP compared to FT born children.
Within the VP group, the cognitive network of children with below-average intelligence levels was more strongly interrelated than that of children with average to high intelligence levels. WM had the least central role in all networks, whereas processing speed, visuospatial, and verbal abilities were most interconnected. The presence of exclusively positive edges between abilities in our four network models of intelligence reflects the positive manifold. Although associations between some abilities were weak or non-existent, simulation studies of the mutualism model show that even when edge weights are sparse, including zero or weak edges, they can still give rise to a positive manifold (van der Maas et al., 2006).

Network models of intelligence, including ours, have been found to provide a good fit to intelligence data (Kan, van der Maas, & Levine, 2019; Schmank et al., 2019). Overall, these results support modelling intelligence as a network in line with contemporary theories of intelligence. From a mutualism perspective, positive interrelations are seen as causal interactions between abilities that were measured by different cognitive tasks. Applying this to the cognitive networks in this study, fluid reasoning abilities, for example, are thought to develop in part because of growth in verbal comprehension and visuospatial abilities that reciprocally influence each other. Following POT, positive interrelations mainly result from domain-general executive attentional processes that are involved in each of the domain-specific tasks. The differences in strength of connectivity between networks can be interpreted in light of ability differentiation, where higher cognitive ability is associated with weaker correlations between cognitive tests (Spearman, 1927).
This was indeed shown in our study: connectivity was stronger in VP than in FT born children, as well as in VP children with below-average IQ compared to VP children with average-high IQ. Support for ability differentiation in children in the literature is scarce and inconsistent due to varying methodological approaches. In a systematic review, Breit et al. (2021) made a distinction between grouping and model-based methods. In grouping methods, the sample is usually split into high and low ability groups to compare average intercorrelations. Such approaches have been criticised for the arbitrary division of the cognitive ability spectrum, raising the concern that results may be biased by arbitrarily chosen cut-off points. To overcome this, model-based methods using confirmatory factor analysis have been developed. Studies using grouping methods showed mixed findings, whereas four of five more recent model-based studies found consistent support for ability differentiation. Furthermore, Breit et al. (2021) found differentiation effects for verbal but not figural and numeric factors, suggesting that ability differentiation might be domain specific. However, these studies used factor models to model intelligence, which may limit a direct comparison with our findings, obtained using psychometric network analysis and comparing VP, FT, below-average, and average-high IQ groups. One explanation for the more differentiated cognitive structure in children with higher levels of intelligence is offered by POT. According to this theory, executive processes serve as bottlenecks, constraining performance across tests and giving rise to the positive manifold. The bottleneck effect becomes stronger with decreasing levels of executive functioning (EF), resulting in higher correlations between tests.
VP born children are at risk for deficits in EF and attentional control processes (Brydges et al., 2018; Twilhaar, Belopolsky, de Kieviet, van Elburg, & Oosterlaan, 2020; Van Houdt, Oosterlaan, van Wassenaer-Leemhuis, van Kaam, & Aarnoudse-Moens, 2019). These deficits have been found to underlie lower IQ and academic performance in VP compared to FT born children (Twilhaar, Belopolsky, et al., 2020). Network analysis has also been used to study the brain connectome, where nodes correspond to voxels or regions of interest and edges represent structural or functional associations between pairs of nodes (Wang, Zuo, & He, 2010). Preterm birth has been found to affect the brain connectome. In EP and VP born school-age children, structural networks were more segregated and less integrated compared to full-term born peers, possibly resulting from white matter abnormalities (Fischi-Gomez et al., 2016; Thompson et al., 2016). Without directly studying this link, it remains speculative how the cognitive networks in our study relate to the alterations in brain connectivity in VP born children. Based on a combined behaviour-brain multilayer network, Simpson-Kent et al. (2021) concluded that such relations are complex and not necessarily straightforward.
Working memory had the least central role in the networks across samples, whereas verbal, processing speed, and visuospatial abilities were most central. Similar findings were reported in a cohort of 5-18-year-old children with learning difficulties (Simpson-Kent et al., 2021). This contradicts mutualism and POT, which propose (the executive component of) WM as one of the most central or domain-general processes giving rise to the positive manifold (Kovacs & Conway, 2016; van der Maas et al., 2017). However, the strength of interrelations between, and the importance of, cognitive abilities may change throughout development. Cowan (2021) showed that the correlation between the WPPSI WM subtests and other subtests varied across ages between 2.5 and 7.6 years in a wave-like pattern. According to Demetriou et al. (2018), there are four main stages of cognitive development, in which the centrality of cognitive processes varies depending on the developmental priority of a specific stage. Attentional control, processing speed, and linguistic awareness were found to be more central and more interrelated with general ability between 5 and 8 years, whereas reasoning and WM became more important between 9 and 12 years of age (Demetriou et al., 2014; Demetriou, Mougi, Spanoudis, & Makris, 2022; Demetriou, Spanoudis, Makris, Golino, & Kazi, 2021). This might be related to the developmental trajectories of cognitive processes and their neural correlates. Whereas some processes, such as language, start to develop very early on and reach equilibrium sooner, others start to develop and reach a steady state later in life (Demetriou et al., 2022).
Indeed, research shows that WM continues to develop well into adulthood, when it reaches a steady state (Funahashi, 2017; Gathercole, Pickering, Ambridge, & Wearing, 2004; Gómez et al., 2018). The brain infrastructure supporting these functions shows similar trajectories. In particular, highly centralised and strategically located regions, or hubs, are initially located in primary networks, including the sensorimotor, visual, and auditory networks, but move toward regions implicated in higher-order cognition later in life (Cao, Huang, & He, 2017; Zhao, Xu, & He, 2019). In line with the aforementioned studies, we have shown that verbal and processing speed abilities are more central in early childhood, reflecting the cognitive demands at that stage as suggested by Demetriou et al. (2018), whereas WM may become more central later on, as shown in the adult network of intelligence (Schmank et al., 2019). In contrast to the present findings, WM and attentional control processes were found to play an important role in impairments in intelligence and academic performance in VP born adolescents (Twilhaar, Belopolsky, et al., 2020). Altogether, the incompatibility of our findings with mutualism and POT demonstrates the need for further theory development, integrating findings from the cognitive and biological sciences while also taking developmental dynamics into account. The theory of evolving networks of human intelligence (Savi, Marsman, & van der Maas, 2021) presents such a multilevel and dynamical view on intelligence and should be considered in future research.
In VP born children with below-average intelligence levels, processing speed had a particularly strong connection with other abilities, which was not found in VP children with average-high intelligence levels. In light of POT, this may indicate that processing speed functions as a bottleneck in VP children with impaired intelligence by restricting performance in tests of other abilities, resulting in lower overall test performance. Rather than the level of processing speed per se, the extent to which it is linked to other cognitive abilities seems particular to this group. However, Clark et al. (2014) showed limited discrimination between processing speed and attentional control processes in pre-schoolers. Moreover, WPPSI processing speed subtests tap multiple processes, including attentional control. VP birth is associated with attentional control deficits and impaired task performance mainly when attentional control demands are high (Twilhaar, Belopolsky, et al., 2020; Twilhaar, de Kieviet, van Elburg, & Oosterlaan, 2019). This suggests that the strong interrelatedness of processing speed with other tasks may in part be explained by the overlapping demands of these tasks on attentional control processes. Further research into these relations is needed.
The present study contributes to the literature by using a novel approach grounded in contemporary views on intelligence, allowing for individual differences. To our knowledge, it is the first application of psychometric network analysis to WPPSI-IV data in large population-based neurotypical (FT) and neurodiverse (VP) groups. Our study also has several limitations. Firstly, selective drop-out of children from less favourable social backgrounds and children with disabilities limits the generalizability of our findings. Similarly, weighting procedures to correct for non-representativeness of the FT sample (Charles et al., 2020) were incompatible with network analyses. Unequal sample sizes limit a direct visual comparison between the networks in Figures 2 and 3. Moreover, stability decreased in networks with smaller sample sizes, resulting in larger variability in edge-weight estimation and less accurately estimated overall strength. Further studies with adequately large sample sizes are therefore warranted to replicate our findings. Although our findings can be interpreted in line with ability differentiation, our study should not be seen as a direct test of this hypothesis, because of the disadvantages associated with grouping based on IQ (Breit et al., 2021). Furthermore, the cross-sectional analyses did not take the dynamic character of intelligence, as proposed by the mutualism model, into account. Regarding centrality, we focused only on strength centrality, since other centrality indices are generally unstable (Bringmann et al., 2019). This limits our comprehension of the networks' most important abilities. Lastly, connections between abilities describe partial correlations rather than causal interactions, as proposed by mutualism. Therefore, it remains to be further explored whether interventions targeting central abilities would lead to meaningful improvements in other abilities.
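For readers unfamiliar with the method, the edges in such psychometric networks are partial correlations from a Gaussian graphical model, and node importance is summarised by strength centrality (the sum of absolute edge weights attached to a node). The sketch below shows how these quantities follow from a correlation matrix; the four-subtest matrix is hypothetical, and a real analysis would typically add regularisation (e.g., a graphical lasso with information-criterion model selection) in dedicated software, which is not reproduced here.

```python
import numpy as np

# Hypothetical correlation matrix for four cognitive subtests
# (e.g., verbal, visuospatial, processing speed, working memory).
R = np.array([
    [1.00, 0.45, 0.40, 0.30],
    [0.45, 1.00, 0.42, 0.28],
    [0.40, 0.42, 1.00, 0.25],
    [0.30, 0.28, 0.25, 1.00],
])

def partial_correlation_network(R):
    """Edge weights of a Gaussian graphical model: partial correlations,
    obtained by standardising the inverse of the correlation matrix."""
    P = np.linalg.inv(R)                              # precision matrix
    scale = np.sqrt(np.outer(np.diag(P), np.diag(P)))
    W = -P / scale                                    # partial correlations
    np.fill_diagonal(W, 0.0)                          # no self-loops
    return W

def strength_centrality(W):
    """Strength centrality: sum of absolute edge weights per node."""
    return np.abs(W).sum(axis=0)

W = partial_correlation_network(R)
strength = strength_centrality(W)
# A common summary for comparing groups: total network strength,
# i.e. the sum of all absolute edge weights in the upper triangle.
overall_strength = np.abs(W[np.triu_indices_from(W, k=1)]).sum()
```

Comparing a quantity like `overall_strength` between groups is, in essence, what a between-network strength comparison does, usually together with bootstrap or permutation procedures to judge stability and significance.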
Despite these limitations, our findings have several important implications. Cognitive abilities are strongly interrelated in early childhood, particularly in children with difficulties. This means that VP born children with below-average intelligence levels are likely to suffer from difficulties across multiple cognitive domains. The differences in network strength between VP and FT born children do not seem to be specific to VP birth, as no differences were observed in the cognitive networks of VP and FT born children that were matched on IQ. As suggested before by Tucker-Drob (2009), the more differentiated cognitive structure at higher levels of intelligence implies that composite IQ scores may not well reflect domain-specific abilities. This is particularly relevant for VP born children. Our matched subsample still showed lower levels of processing speed and visuospatial abilities in VP compared to FT born children, despite similar FSIQ scores. Such specific difficulties may be masked when focusing on general ability (i.e., FSIQ). This emphasises the importance of assessing specific abilities in addition to general cognitive ability in VP born children, both in clinical and research settings. At 5.5 years of age, verbal and processing speed abilities, and not WM, were the most central abilities. This suggests that efforts to promote the development of these abilities may benefit the development of other cognitive abilities. This requires longitudinal research to study the dynamics of the relations shown in the present cross-sectional networks and whether improvement of certain abilities actually leads to improvement of other abilities. Kievit, Hofman, and Nation (2019) showed that children (6-8 years) and adolescents (14-25 years) with better vocabulary subsequently showed larger gains in reasoning ability.
This mutualistic coupling was strongest in young children (Kievit et al., 2019) and emphasises the importance of verbal abilities as a building block for the development of other cognitive abilities in early childhood, as also suggested by our findings and by Demetriou et al. (2021, 2022). Although further research is required, verbal abilities seem an important target for early interventions to improve cognitive outcomes after VP birth.
Conclusions
At 5.5 years of age, cognitive abilities are densely and positively interrelated in both VP and FT born children. This is particularly true for children with lower levels of intelligence. Our study confirmed the value of psychometric network analysis for studying cognition in neurotypical and neurodiverse groups of children and highlights the importance of considering the interrelatedness of cognitive abilities in future studies. The present analyses should be extended by longitudinal network analyses to consider the dynamics of cognitive development and to provide further crucial knowledge for the development of interventions.
Supporting information
Additional supporting information may be found online in the Supporting Information section at the end of the article:
Table S1. Correlation Matrix.
Table S2. Fit statistics for estimated network models.
Table S3. Node-predictability indicated by the explained variance (R²) across networks.
Table S4. Comparison of WPPSI-IV scores between very preterm (VP) and full-term (FT) born children who were matched on full-scale IQ.
Figure S1. 95% bootstrapped confidence intervals of estimated edge-weights for the estimated networks of cognitive abilities for the very preterm sample (A), full-term sample (B), very preterm sample with below-average IQ (C), and very preterm sample with average-high IQ (D).
Figure S2. Bootstrapped difference tests (α < .005) for node strength of the ten cognitive abilities for the very preterm sample (A), full-term sample (B), very preterm sample with below-average IQ (C), and very preterm sample with average-high IQ (D).
Figure S3. Stability of strength centrality for the very preterm sample (A), full-term sample (B), very preterm sample with below-average IQ (C), and very preterm sample with average-high IQ (D).
Figure S4. Network models of cognitive abilities for very preterm and full-term born children who were matched on IQ.
"year": 2023,
"sha1": "a6f823e7a820fb21c1bb20ad886fc33488fa1c4c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1111/jcpp.13816",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a29bd6945d6d8943f6166afd41257f94f08497e7",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Understanding multi-stakeholder needs, preferences and expectations to define effective practices and processes of patient engagement in medicine development: A mixed-methods study
Abstract

Background: The holistic evolution of patient engagement in medicines development requires a more detailed understanding of the needs of all involved stakeholders, and one that better accounts for the specific needs of some potentially vulnerable patient populations and key stages in medicines development.

Objective: The purpose of this convergent mixed-methods study was to better understand the needs of different stakeholders concerning patient engagement at three key stages in medicines development: research priority setting, clinical trial design and early dialogues with Health Technology Assessment bodies and regulators.

Design: This study brought together findings from three sources: (i) an online questionnaire; (ii) face-to-face consultations with two potentially vulnerable patient populations and a workshop with Health Technology Assessment bodies; and (iii) a three-step modified Delphi methodology.

Results: Overall, stakeholders still need additional and varied support mechanisms to undertake, sustain or measure the value of patient engagement. Health Technology Assessment bodies need a better rationale for patient engagement in early dialogue and tools to support its implementation. Improved awareness and understanding of the need and value that involving patients, who are often considered as potentially vulnerable, can bring is needed, as is better accommodation of their specific needs. Similarly weighted Delphi categories were as follows: aims and objectives, and sustainability. Several additional themes were common across the three key stages in medicines development.

Conclusion: This broad-reaching study provides the blocks needed to build a framework for patient engagement in medicines development.

Patient or Public Contribution: Patients were involved in the review and interpretation of data.
| INTRODUCTION
There is increasing consensus among stakeholders that patient engagement (PE) in medicines development is critical to fostering patient access to better designed innovative therapeutic solutions and delivering more effective health outcomes for patients. 1,7 While there are many initiatives to involve patients across that continuum, inconsistency and fragmentation remain the norm. 1,8-10 This is especially so for the meaningful engagement of potentially vulnerable patient populations, who often have additional or specific needs (such as people with dementia and young people). 11,12 Patient engagement also needs to be understood better at key upstream stages of medicines development that are comparatively poorly serviced by current efforts, such as research priority setting, clinical trial design and early dialogues with health technology assessment (HTA) bodies and regulators. Achieving systematic and meaningful PE is challenging and must meet the needs, expectations and context 13,14 of the different stakeholders involved. 3,8,15,16 A better understanding of multi-stakeholder needs, preferences and expectations for patient engagement would provide a more solid foundation for empowering actions, such as the creation of tools and frameworks that can holistically enhance effective, meaningful and sustainable PE.
| Aims and objectives
We aimed to generate criteria for effective PE in medicines development, focused on three key stages where systematic PE is generally less mature compared to other stages, and taking into consideration the specific needs of, and support that should be in place for, potentially vulnerable patient populations, which are not always appropriately addressed in current PE approaches. These key stages and patient populations were as follows: (i) research priority setting (RPS): providing opinion, providing evidence and/or being part of a group that decides what is important to research; (ii) clinical trial design (CTD): designing protocols, discussing patient burden, discussing patient-related outcomes; (iii) early dialogues with regulators and HTA bodies (ED): early discussions between industry, HTA bodies and regulators (and in some contexts with payers) regarding development plans for a medicinal product, to ensure they meet requirements; and (iv) potentially vulnerable patients, who in the context of this project include (but are not limited to) people with dementia and their carers, and young people.
| DESIGN AND METHODS
This work was conducted within the context of the PARADIGM project, funded by the Innovative Medicines Initiative (IMI), which developed ways to ensure that patients are always meaningfully involved in the development of medicines. 7 This study took a convergent mixed-methods approach combining quantitative methods and a qualitative approach, using consultations set within the framework of patient and public involvement (PPI). The learning and key emerging themes from each stage were incorporated into each following stage. This brought together three components: (i) an online questionnaire to capture the overall identification of needs, expectations and preferences for effective PE from all involved stakeholders; (ii) separate face-to-face consultations with two specific groups of potentially vulnerable patient populations, plus a separate workshop with representatives from HTA bodies, to gather greater insight on the needs and preferences of these groups beyond the results of a survey; and (iii) a modified Delphi exercise to identify and prioritize the minimum agreed criteria for effective and meaningful PE at the three key stages of medicines development (Figure 1). PARADIGM consortium partners AE and FSJD sourced the patients involved in the two consultations from pre-existing working groups within each respective umbrella organization. Each consultation was led by the organization's own staff and followed its own standard institutional processes for organizing PPI activities. 17,18 Delphi panel experts were identified from the PARADIGM consortium networks, and standard institutional processes were followed for undertaking the Delphi exercise.
| Stage 1: Survey
This survey aimed to identify current needs and expectations for PE across medicines development. The survey was constructed in two phases. Firstly, issues identified from existing literature on PE in medicines development 8,15,16,19,20 were prioritized during a face-to-face workshop involving a multi-stakeholder working group. This informed the structure of a survey which was piloted over two weeks using respondents from each respective stakeholder group.
The final survey was made up of 15 general questions that all stakeholder groups completed. Within this general section, two questions were constructed as matrices allowing more than one choice per row, six questions were structured as visual analogue scales (assigning 0-100 points based on respondents' impressions of PE) and four questions were multiple-choice, allowing for more than one option to be chosen. Each stakeholder group (the patient community, industry, regulators, policymakers, HTA bodies and payers, research funders and HCPs (clinical academics)) also had an additional separate section comprising matrices and multiple-choice questions. Within the survey, we initially sought to gain a broad benchmark of PE across the medicines lifecycle, including dialogues with regulators and HTA bodies that involve the licensing of medicines, HTA assessment, and pricing and reimbursement. Hereafter, the workshop with HTA bodies and the Delphi panels explored only the definition of early dialogues with regulators and HTA bodies.
The survey was administered in English in an online tool (SurveyGizmo) over a four-week period in 2018. A snowball technique was utilized to cascade the survey within consortium members' internal and external networks to reach an estimated population of 10,000; at a 95% confidence level, the minimum required sample size was calculated to be 370.
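The reported minimum of 370 is what the standard Cochran sample-size formula with a finite population correction yields for a population of 10,000. The sketch below assumes a 95% confidence level (z = 1.96), a 5% margin of error and maximum variability (p = 0.5), which are conventional defaults rather than parameters stated in the text:

```python
import math

def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula with a finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(required_sample_size(10_000))  # → 370
```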
All key stakeholders in medicines development were targeted: regulators, HTA bodies, industry (pharmaceutical, biotechnology and medical technology companies), health-care professionals (clinical academics), patients and patient representatives (from disease-specific or disease-agnostic organizations, and non-affiliated individual patients), policymakers, research funders, and research and academia (research institutes and universities). Findings are described as total responses or percentage of respondents to a given question or theme.
| Stage 2: Face-to-face consultations and workshops
The specific needs, expectations and preferences of young people, and people with dementia and their carers were explored through separate face-to-face consultations, involving respective experienced staff, in order to better understand the specific PE needs of these groups of patients.
In the case of people with dementia, members of Alzheimer Europe's (AE) European Working Group of People with Dementia (EWGPWD) 21 and their carers participated in a one-day consultation.

Figure 1: Infographic of the three-stage convergent methodology. The method contained three components, each informing the next: (i) an online questionnaire to capture needs, expectations and preferences; (ii) face-to-face consultations with two groups of potentially vulnerable patient populations, and a separate workshop with representatives from HTA bodies; and (iii) a three-step modified Delphi exercise.
| Stage 3: Three-step RAND modified Delphi methodology
The modified Delphi methodology is a recognized method for prioritization of diverse variables through consensus. 25,26 Delphi questionnaires were developed based on the outputs from stages one and two, combined with a review of existing frameworks for PE. 10 Experts for each of the Delphi panels were convened using a snowball method. Experts had to hold recognized expertise and experience in relation to PE within their topic panel, to guarantee that the group reflected the view of a majority (initial expert group sizes: RPS = 24, ED = 26, CTD = 31; Table 1).
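The round-by-round consensus logic of a modified Delphi can be sketched as follows. The rating scale, agreement band, 70% consensus threshold and example ratings are illustrative assumptions for exposition, not the parameters used in this study:

```python
from statistics import median

# Hypothetical panellist ratings (1-9 scale) for three draft criteria.
ratings = {
    "aims agreed up front by all participants": [8, 9, 7, 8, 9, 8, 7],
    "single point of contact available": [7, 8, 6, 9, 7, 8, 8],
    "outcomes shared with all participants": [5, 8, 3, 9, 6, 4, 7],
}

def round_summary(ratings, agree_band=1, consensus_threshold=0.70):
    """For each criterion: the median rating, the fraction of panellists
    rating within +/- agree_band of the median, and a consensus flag."""
    out = {}
    for criterion, scores in ratings.items():
        m = median(scores)
        agreement = sum(abs(s - m) <= agree_band for s in scores) / len(scores)
        out[criterion] = {
            "median": m,
            "agreement": agreement,
            "consensus": agreement >= consensus_threshold,
        }
    return out

summary = round_summary(ratings)
# Criteria without consensus are fed back to panellists, together with
# the group summary, for re-rating in the next Delphi round.
```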
Each panel was balanced as far as possible for geographical coverage and sex (see Table 1).
| RESULTS
Here we present the findings of each of the three stages and the overall conclusions. Additional results are available at https://imiparadigm.eu/our-work/. 28

Table 2: Survey respondents by stakeholder group, from the total of 372 English-language respondents. Note: The stakeholder group 'patient community' is differentiated into patients (including carers)† and patient advocates and patient organizations‡. Health-care professionals (HCP): clinical academics. Health Technology Assessment bodies (HTA). Industry: pharmaceutical companies (Pharma), small and medium enterprises (SME) and biotechnology companies (Biotech). Other§ includes, but is not limited to, individuals identifying their primary affiliation as charity, consultant, independent expert, think tank, industry association, social association, funder or NGO, or having multiple relevant affiliations.
| Survey general characteristics
A total of 372 respondents completed the survey in English (Table 2). The largest respondent (stakeholder) group was the patient community (patients (including carers), patient advocates and/or patient organizations) (35.8%), followed by industry (34.9%). Respondents completed the survey from 48 countries, with a majority from the UK and United States (28.2% and 16.9%, respectively) (Figure 2). The group 'Other' comprised 36 countries, both within and outside the European Union (18%, n ≤ 5 respondents from each).
| What is the status quo of PE today?
Current perception of PE is low but ideal expectations are high.
| What are the desired outcomes of PE?
Greater patient-centric input, patient-relevant outcomes and communication are desired the most.
Respondents were asked to indicate up to three most desired outcomes of PE from a separate list of predefined outcomes for each of the three stages. The top three are as follows (see also Supplementary Table 1).
Consultations: Increasing and supporting involvement of potentially vulnerable populations in PE.
Incorporating the lived experience and accommodating reasonable adjustments help to recognize and acknowledge the value patients bring.
Several common barriers to increasing and sustaining patient involvement emerged from the consultations with young people, and people with dementia and their carers, along with recommendations to overcome these (Table 3).
Understanding and incorporating the 'lived experience' of the patient (and not just the parent or carer) was considered to be paramount in adding genuine value to a given PE activity and in reflecting the expected outcomes. This can also serve to redress many misconceptions of people with dementia and young people, namely that they are not able or willing to be involved, an incorrect understanding of the condition(s) that they are living with, the added value that they can contribute, and the specific considerations (physical, mental, socioeconomic) of these populations.

Figure 7: What is required to do more effective patient engagement (total respondents). All stakeholders responded, and respondents could select more than one option. In the 'Other' category, additional resources highlighted included 'a clear framework and guidelines on how to engage with patients' and 'funding/financial support from other stakeholders, particularly in providing funding/reimbursement/tokens of gratitude for patients'.
The diversity of the patients involved in any PE activity was considered important to allow for the broad range of experiences patients could contribute and to provide equal opportunities for those who can be involved to do so, for example, across different age groups.

Table 3: Major themes identified from two separate consultations (people with dementia (and their carers) and young people) and how to improve patient engagement (PE) with these populations, together with meaningful methods to enhance PE in each patient population according to the input provided. Other themes and full results are available online. 28

Voice of the person with the condition
- People with dementia: The people with the condition are the 'experts by experience' and their input is unique.
- Young people: The people with the condition are the 'experts by experience' and their input is unique.

Redress misconceptions
- People with dementia: Myths/misconceptions surround the type of disease, the ability/willingness to participate and the need for support.
- Young people: Misconceptions exist about young people being unable or unwilling to contribute properly to PE. Early engagement in research priority setting is feasible; the experience and/or support young patients can provide in the design of clinical trial protocols and in participation in regulatory activities is feasible if a suitable framework is established.

Diversity of patients involved and inclusion
- People with dementia: Account for differences such as age group, country, and type and stage of dementia (mild, moderate and advanced).
- Young people: Account for age group, country, and different or complementary expertise to that of parents.

Equal opportunities for patients to participate by ensuring accessibility needs are met
- People with dementia: The structure and format of the PE activity should consider the needs of the person with dementia: the accessibility of the materials received, the use of plain language and avoiding jargon, acronyms and highly technical terms.
- Young people: Include age-appropriate formats and language.

Raising awareness of PE opportunities
- People with dementia: Provide relevant information, support and training to patients, carers and other stakeholders interacting with them.
- Young people: Promote autonomy, respect and equality. Ethical principles and children's rights need to be considered in the design of involvement activities for children and young people.

Reasonable adjustments (travel/accommodation, accessible information, training sessions/personal support, financial support/reimbursement)
- People with dementia: Travel and accommodation costs incurred should be covered for both the person with dementia and his/her carer. The organizer of the PE activity should designate a 'named person' or single point of contact with whom the person with dementia can speak freely. Provide accessible and understandable information about the PE activity.
- Young people: Travel and accommodation costs incurred should be covered for both the young person and their parents. Timing should not interfere with school lessons. Educational material and information (written and verbal) should match education level, with appropriate language. A facilitator from a YPAG should be available to provide the right personal support to a young person both in preparation for and during the PE activity.

For the other two key stages, Sustainability was ranked equally low.
Conversely, the category Aims and objectives (and the related category Key elements of practice design, which included aims and objectives) was ranked similarly high across all three key stages. Criteria within this category centred on the principle that the aims and objectives of PE practices should meet the expectations of patients and/or focus on patients' needs and interests. They should also be agreed upon up front by all and be understandable to all involved participants (see Supplementary Tables 2-4).
At the criteria level overall, there were differences in wording and definitions across the three key stages (Table 5); the main difference, however, was in their respective weightings. The following general criteria were thus fairly consistent in their definition, across categories and the three key stages:
| Involvement and participation
Patients' participation should be properly planned, taking into account timing requirements, accessibility and vulnerability. More specifically, it should consider specific patients' circumstances and characteristics linked to, but not limited to, possible physical or mental impairments, cultural background, age and other relevant features (eg recordings, virtual communication and use of language, format of meetings, the venue and information provided). Additionally, an up-to-date single point of contact or a named person, with whom patients can communicate when needed for information and/or support, should be made available throughout their involvement in the activity (eg during ED).
| Legal and ethical considerations that govern PE, including a code of conduct
It is necessary to have any relevant policy directives, legal, ethics, governance requirements and/or regulatory framework about how to engage patients included as part of PE practices. More specifically, ensuring that codes of conduct are adhered to and conflicts of interest are addressed and managed through accountability and transparency. Notably, participants raised as yet unresolved considerations of how to effectively balance conflicts of interests with suitable patient participation.
| Building capacity to support the PE process
The competencies required to perform the PE activity itself should account for the backgrounds and circumstances of potentially vulnerable patients involved in setting research priorities.

Table note: The final Delphi round aimed to (i) reach consensus on the disagreement items not resolved after the first and second rounds; (ii) merge and rephrase criteria and categories; (iii) weight categories; and (iv) weight individual criteria. Starting from a list of approximately 50 questions for each decision point, after round 3 this was reduced to an agreed 20-25 criteria for each key stage of medicines development, positioned under 8 or 9 major categories. All weightings across categories, and across criteria within each category, sum to 100. The full breakdown of round 1 and 2 results is available online. 28
| Evaluation of the PE process
Generally, methods, tools and monitoring systems should be in place to evaluate the PE practice, including a framework. An evaluation of the outcomes should be linked to the aims and objectives of the PE practice in each setting. Additionally, the outcomes should be shared with all the participants involved, and procedures should be in place so that the conclusions of the evaluation are used to support a continuous improvement process. Evolution here will likely proceed at a different rate than for industry and regulators. From the reported perspective of HTA bodies, the proposed 'building block' approach allows work to progress in parallel and at a different rate to other stakeholders and geographies, while remaining linked and sympathetic to an evolving ecosystem approach to PE.
| Limitations
The online survey was completed only by those who are familiar with and have access to the technology, and who have a good understanding of English. Finally, there were dropouts of Delphi panellists during the process for a variety of reasons. In the CTD Delphi group, many respondents were from North America; hence, some bias in interpretation of responses and weightings was inevitable. Despite this, the overall number and expertise balance within and between each group was monitored, and stakeholders' representativeness was not at risk.
| CONCLUSION
This multi-stage study adds significant value by building on some of the previously identified gaps in stakeholder-specific understanding of the importance of PE, and in who should be responsible for leading efforts to improve it. 8,15,31 These findings provide detailed and tangible building blocks for all involved stakeholders as to where stakeholder needs and expectations lie, and where greater alignment could occur. The work of the PARADIGM consortium and its partners continues to address many of these signals holistically through multi-stakeholder mechanisms 35,37,57 in the continued evolution of meaningful, ethical and sustainable PE.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge all those PARADIGM consortium members who helped design, test and validate the survey and Delphi materials. We thank all those survey participants and Delphi panellists who dedicated their time to help make these a success, and those participants who took part in the face-to-face consultations facilitated by Alzheimer Europe.
AUTHOR CONTRIBUTIONS
All authors contributed to the design of this study and the writing of this paper.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
"year": 2021,
"sha1": "8a468e7d29e0434d18d8ea8d759c0f4c2cb1f6ba",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/hex.13207",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd5b31cad0ee05fded3a61c5b22117c5fcc50560",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
High optical efficiency and photon noise limited sensitivity of microwave kinetic inductance detectors using phase readout
We demonstrate photon noise limited performance in both phase and amplitude readout in microwave kinetic inductance detectors (MKIDs) consisting of NbTiN and Al, down to 100 fW of optical power. We simulate the far field beam pattern of the lens-antenna system used to couple radiation into the MKID and derive an aperture efficiency of 75%. This is close to the theoretical maximum of 80% for a single-moded detector. The beam patterns are verified by a detailed analysis of the optical coupling within our measurement setup.
In the next decades millimeter and sub-mm astronomy will require large format imaging arrays 1 to complement the high spatial resolution of the Atacama Large Millimeter/submillimeter Array 2. The desired sensors should have a background limited sensitivity and a high optical efficiency, and enable arrays of up to megapixels in size. The most promising candidates to fulfill these requirements are microwave kinetic inductance detectors (MKIDs) 3, due to their inherent potential for frequency domain multiplexing. MKIDs are superconducting resonators, thousands of which can be coupled to a single feedline. Each resonator is sensitive to changes in the Cooper pair density induced by absorption of sub-mm radiation. By monitoring the change in either phase or amplitude of the complex feedline transmission at the MKID resonance one can measure the absorbed photon power. Using amplitude readout, photon noise limited performance has been shown 4. However, for practical applications two key properties need to be demonstrated: (1) photon noise limited operation in phase readout; (2) a measurement of the aperture efficiency 5, which describes the absolute optical coupling of a MKID imaging array to a plane wave. In this letter we present antenna coupled hybrid NbTiN-Al MKIDs designed for ground-based sub-mm astronomy. We show that these devices achieve photon noise limited performance in both amplitude and phase readout. Through a detailed analysis of the optical coupling within our setup we validate the simulation of the lens-antenna far field beam pattern. From this we derive an aperture efficiency of 75%, close to the theoretical maximum of 80% for a single-moded detector. The device design, shown in Fig. 1, aims to simultaneously maximize the phase response and minimize the two-level system (TLS) noise contribution 8. The device is an L ≈ 5 mm long quarter wavelength coplanar waveguide (CPW) resonator consisting of two sections.
The first section (∼ 4 mm), at the open end of the resonator, is a wide CPW made entirely from 200 nm thick NbTiN. NbTiN has 10 dB lower TLS noise than conventional superconductors such as Al 9. The TLS noise is further reduced by the width of the CPW 9, 23.7 µm and 5.4 µm for the CPW gap and central line, respectively. The second section (1 mm), at the shorted end of the resonator, is a narrow CPW with NbTiN groundplanes and a 48 nm thick Al central line. The Al is galvanically connected to the NbTiN central line (Fig. 1 inset) and the NbTiN groundplane at the resonator short. The NbTiN is lossless for frequencies up to the gap 2∆0/h = 1.1 THz (Tc ≈ 14 K). Any radiation with a frequency 0.09 < ν < 1.1 THz is therefore absorbed in the Al (Tc = 1.28 K) central line of the second section. The optically excited quasiparticles are trapped in the Al, because it is connected to a high gap superconductor. This quantum well structure confines the quasiparticles in the most responsive part of the MKID and allows us to maximize the response by minimizing this active volume. Therefore, we use a narrow CPW in section two, 2.3 µm and 3.7 µm for the central line and slots, respectively.

FIG. 1. Scanning electron micrograph of the antenna-coupled hybrid NbTiN-Al MKIDs used. A wide NbTiN CPW resonator is used to minimize the two-level system noise contribution. At the shorted end, where the planar antenna is located, one millimeter of CPW is reduced in width and the central line is made from thin Al. The Al is galvanically connected to the NbTiN at both ends (inset). A white noise spectrum is observed, the level of which is constant with loading power, for P350GHz > 100 fW. The roll-off above 1 kHz is due to the quasiparticle lifetime, which is reduced by an increasing optical load. Note that the photon noise level is 16 dB higher in phase readout. Nevertheless, at a given loading the NEP is the same for phase and amplitude readout, as shown in (b).
Using a narrow Al line at the shorted end of the MKID does not increase the TLS noise significantly, because of the negligible electric field strength in this part of the detector. At the shorted end of the resonator light is coupled into the device through a single polarization twin-slot antenna, which is optimized for ν = 350 GHz. The advantage of using antenna coupling is that it can be designed independently from the distributed CPW resonator. The disadvantage is that the antenna occupies only ∼ 1% of the total pixel footprint. To achieve a high filling fraction we use elliptical lenses to focus the light on the antennas. The design presented here is an improvement on that by Yates et al. 4. Our design has a wider body in section one, which provides ∼ 7 dB reduction of the TLS noise. In addition we have thinner Al, 48 nm instead of 80 nm.
This increases the kinetic inductance fraction by 45% to α = 0.09 and reduces the volume by 40%. Both give a linear increase in the phase response. An array of 24 pixels has been fabricated 10,11 on a high resistivity (> 10 kΩ cm) (100)-oriented Si substrate. All pixels are capacitively coupled to a single feedline with a coupling Qc ∼ 58k, which is matched to the Qi expected for an optical loading of ∼ 10 pW. After mounting an array of 16 laser machined Si lenses with a diameter of 2 mm on the central pixels, the array is evaluated using a pulse tube pre-cooled adiabatic demagnetization refrigerator with a box-in-a-box cold stage design 12. In this design the array is fully enclosed in a 100 mK environment with the exception of a 2 mm aperture, which is located 15.05 mm above the approximate center of the MKID array. This aperture is isotropically illuminated by a large temperature-controlled blackbody 6. Two metal mesh filters provide a minimum rejection of 20 dB at all wavelengths outside the 50 GHz bandpass centered on ν = 350 GHz. This allows us to create a variable unpolarized illumination over a wide range of powers. Fig. 2(a) shows the amplitude and phase noise spectra measured for a typical device as a function of the optical power absorbed in the Al, P350GHz. In this figure we observe two characteristics that prove our device is photon noise limited: 1. The noise spectra in both phase and amplitude are white, with a roll-off given by the quasiparticle lifetime, τqp, or the resonator ring time, τres. For our devices we observe a white noise spectrum for P350GHz ≥ 100 fW, which has a roll-off due to τqp, because τqp > τres.
2. When the optical loading is reduced from the photon noise limited situation, one should observe, at negligible power levels, a transition to a noise spectrum that is limited by the intrinsic noise sources of the detector. At a negligible optical loading (P350GHz = 4 fW) the phase noise spectrum is no longer white, and the noise level in both phase and amplitude readout is lower than for P350GHz > 4 fW. Fig. 2 shows three more features, which may be present in a photon noise limited MKID.
The photon noise level of the spectra observed in Fig. 2(a) is independent of the optical loading, because the product of quasiparticle number and quasiparticle lifetime is constant 6,13 and the loaded Q did not change.
By fitting a lorentzian roll-off 6 to the spectra presented in Fig. 2(a) we derive the quasiparticle lifetime as a function of the optical loading. Fig. 3 shows that the quasiparticle lifetime obtained from phase and amplitude readout is equal for all loading levels. For P350GHz > 100 fW the quasiparticle lifetimes show a τqp ∝ P350GHz^(−0.50±0.02) relation, which matches the expected τqp ∝ 1/√P350GHz relation 13-15 for a homogeneously illuminated superconductor. At P350GHz = 4 fW we observe τqp = 150 µs, which deviates from the trend set by the photon noise limited regime and is significantly lower than the τqp ≈ 2 ms observed by de Visser et al. 16 in similar Al. Measurements on hybrid NbTiN-Al MKIDs with a varying length of the Al section show that the quasiparticle lifetime increases with the Al length. Based on this we tentatively conclude that the reduced lifetime we observe is due to poisoning by quasiparticles entering from the NbTiN. Fig. 2(b) and 2(c) show the observed Noise Equivalent Power (NEP) 6,7, the level of which only depends on the optical loading and is thus equal in amplitude and phase readout. The NEP follows the NEP ∝ √P350GHz relation expected for photon noise limited MKIDs 4,13,17. Given the equal NEP values, Fig. 2(a) shows why phase readout is preferred for practical background limited systems. The photon noise level in phase readout is 16 dB higher than that in amplitude readout, thereby relaxing the dynamic range requirements of the readout electronics. We estimate 18,19 that with a state-of-the-art readout system based upon the E2V EV10AQ190 analog-to-digital converter (ADC) we can simultaneously read out approximately 1800 of the presented NbTiN-Al MKIDs in phase readout, if we accept a 10% degradation of observing time due to the noise added by the ADC alone. Using the same electronics, amplitude readout would only allow 30 pixels.
In the photon noise limited regime it is favorable to have a high optical or aperture efficiency 5, ηA, because the observation time required to achieve a given signal-to-noise follows tσ ∝ 1/ηA. The most reliable way to determine the aperture efficiency of a MKID is through the measurement of the photon noise. Yates et al. 4 used this approach and determined the aperture efficiency by comparing the measured NEP to the NEP expected for a perfect absorber with the same area as a single pixel. The latter was determined using the geometrical throughput between the detector and the illuminating aperture. However, this approach is only valid if the gain of the antenna is equal to its maximum value within the entire angle spanned by the aperture. We determine the aperture efficiency from the full far field beam pattern, which is obtained from a simulation of the complete lens-antenna system using CST Microwave Studio. This gives us the freedom to adjust the design as required and allows a calculation of ηA independent of the measurement setup. However, the simulated beam pattern does require experimental verification. We will first show how we verify the coupling efficiency, Cν, which describes the reflection losses due to mismatches between the antenna and the resonator CPW, and the gain pattern 20, Gν(Ω), as a function of the angular direction, Ω, from the microlens focus. The gain pattern of our lens-antenna system is shown in Fig. 4(b). After the verification we will use Cν and Gν(Ω), which we obtain from CST Microwave Studio, to calculate ηA. We verify Cν and Gν(Ω) using the photon noise limited optical NEP, NEPopt, as the experimental observable. We compare NEPopt to the photon noise limited NEP we expect, NEPcalc, from the resonator.
To calculate NEPcalc we need to know the power coupled to the lens-antenna system, Pcalc, which is given by

Pcalc = (1/2) ∫ dν Fν Bν(TBB) (c²/ν²) ∫_{A_ap} [Cν Gν(Ω)/4π] dΩ.

Here c is the speed of light, Fν the filter transmission and Bν(TBB) Planck's law for a blackbody temperature TBB. The factor 1/2 takes into account that we receive only a single polarization. The second integral evaluates the solid angle Ω over the aperture area, A_ap. The rest of the detector enclosure is a 100 mK absorber that has a negligible emission at 350 GHz. Included in this calculation is the experimentally measured lateral shift between each pixel and the aperture. The effect of lateral deviations from co-alignment is shown in Fig. 4(a). The contours in this figure show the predicted reduction in received power as a function of the lateral translation between the lenses' optical axis and the aperture center. The contours are normalized to a co-aligned system. The circles in Fig. 4 indicate the positions of the 16 lensed pixels. The color indicates the relative frequency change between 5 and 90 fW of optical loading, which roughly approximates the relative absorbed power for all 16 pixels. The qualitative match is striking and assures us of the shape of Gν(Ω).
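As a sanity check on the order of magnitude of Pcalc, the coupling integral described above can be evaluated numerically for an ideal top-hat band. The sketch below is ours; the blackbody temperature, aperture solid angle and (frequency-independent) antenna gain are illustrative stand-ins, not the measured values:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K
C = 299792458.0      # speed of light, m/s

def planck_bnu(nu, t):
    """Planck spectral radiance B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * t))

def coupled_power(t_bb, nu0=350e9, bw=50e9, omega_ap=0.014,
                  gain=40.0, c_nu=0.98, steps=200):
    """Single-polarization power coupled through an aperture of solid angle
    omega_ap to a lens-antenna of constant gain, for an ideal top-hat band
    [nu0 - bw/2, nu0 + bw/2]:
        P = 1/2 * int dnu B_nu(T) * (c/nu)^2 * C_nu * G * Omega_ap / (4*pi).
    All geometry and gain numbers here are illustrative assumptions."""
    p, dnu = 0.0, bw / steps
    for i in range(steps):
        nu = nu0 - bw / 2 + (i + 0.5) * dnu
        p += (0.5 * planck_bnu(nu, t_bb) * (C / nu) ** 2
              * c_nu * gain * omega_ap / (4.0 * math.pi) * dnu)
    return p

print(coupled_power(20.0))  # a few hundred femtowatts for these toy numbers
```

With these toy numbers the received power lands in the femtowatt-to-picowatt range probed in the experiment, and it increases monotonically with the blackbody temperature, as it must.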
We can now define the power error ratio ǫ as the discrepancy between the calculated and measured photon noise limited NEP,

ǫ = (NEPcalc / NEPopt)².
The calculated photon noise limited NEP is

NEPcalc = √[ 2 Pcalc hν (1 + mB) + 4 ∆ Pcalc / ηpb ].

Here hν is the photon energy of the incoming radiation, (1 + mB) the correction to Poisson statistics due to wave bunching 6, ∆ the superconducting energy gap of the absorbing material and ηpb = 0.57 the pair breaking efficiency inside this material. NEPopt is equal to the measured NEP at a modulation frequency of 200 Hz, NEP200Hz, after correction for any detector noise contribution to the NEP, NEPdet. In amplitude readout this is the amplifier noise contribution, which we estimate from the NEP value at a modulation frequency of 300 kHz (black diamonds in Fig. 2(b)). In phase readout NEPdet = 0, as both the frequency independent amplifier noise contribution, observed above the roll-off frequency, and the 1/√f TLS noise contribution, observed below 10 Hz, are insignificant at 200 Hz. We expect ǫ = 1 if the description of the optical power flow is complete and the simulated beam patterns are correct. Fig. 2(c) shows the measured optical NEP, NEPopt, as a function of Pcalc for phase (magenta dots) and amplitude (black dots) readout. The measured photon noise NEPs from phase and amplitude are within 2σ of each other at all loading levels. The solid black line shows the best fit for the expected linear relation, NEPopt = NEPcalc/√ǫ. For the presented MKID, numbered 16, ǫ = 1.06 ± 0.06 if we disregard the lowest loading. For a different MKID, numbered 3, the above analysis yields ǫ = 1.09 ± 0.13. From the verified far field beam pattern we can determine ηA, which is mathematically defined as

ηA = Cν Gν(Ω0) λ² / (4π A).

Here A is the physical area covered by the pixel; λ and ν are the wavelength and frequency of the observed radiation, respectively; and Ω0 is the direction of the maximum gain. Using the circular area of the lenses A = π mm², C350GHz = 0.98 and the gain of the CST beam pattern at broadside, Gν(Ω0) = 5.0 dB, an aperture efficiency of 75% is determined for a single pixel.
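The photon-noise NEP expression described above amounts to a few lines of code. A minimal sketch, with an illustrative occupation factor mB and an Al-like gap ∆ (both assumptions, not the measured values):

```python
import math

def photon_noise_nep(p_opt, nu=350e9, m_b=0.4, delta=3.1e-23, eta_pb=0.57):
    """Photon-noise-limited NEP (W/sqrt(Hz)) of the form described in the
    text: NEP^2 = 2*P*h*nu*(1 + mB) + 4*Delta*P/eta_pb.
    The occupation mB and the gap Delta are illustrative values only."""
    h = 6.62607015e-34  # Planck constant, J s
    return math.sqrt(2.0 * p_opt * h * nu * (1.0 + m_b)
                     + 4.0 * delta * p_opt / eta_pb)

# Both terms scale linearly with P, so NEP ~ sqrt(P):
# quadrupling the loading doubles the NEP.
print(photon_noise_nep(100e-15))  # of order 1e-17 W/sqrt(Hz)
print(photon_noise_nep(400e-15) / photon_noise_nep(100e-15))  # ≈ 2
```

The √P scaling is exactly the linear relation fitted in Fig. 2(c), since both the wave-bunching and recombination terms are proportional to the absorbed power.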
The maximum achievable aperture efficiency of a circular antenna illuminated by a single-moded gaussian beam is ηA = 0.80 21. For the measured array, the filling fraction of the square packing means we have a total array aperture efficiency of 57%. Using an array with hexagonal packing, the total array aperture efficiency can be increased to 66%.
In conclusion, we present hybrid NbTiN-Al MKIDs, which are photon noise limited in both phase and amplitude readout for loading levels P350GHz ≥ 100 fW, with an aperture efficiency of 75%. The photon noise level will allow us to simultaneously read out approximately 1800 pixels using state-of-the-art electronics to monitor the phase. Given these specifications, hybrid NbTiN-Al MKIDs should enable astronomically usable kilopixel arrays for sub-mm imaging and moderate resolution spectroscopy.
"year": 2013,
"sha1": "422ce3fae9c7725c351fce61cc0cbb003c8afce0",
"oa_license": null,
"oa_url": "https://pure.rug.nl/ws/files/63620778/1.4829657.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "422ce3fae9c7725c351fce61cc0cbb003c8afce0",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Cumulants, Zeros, and Continuous Phase Transition
This paper explores the use of a cumulant method to determine the zeros of partition functions for continuous phase transitions. Unlike a first-order transition, which has a uniform density of zeros near the transition point, a continuous transition is expected to show a power law dependence of the density, with a nontrivial slope for the line of zeros. Different types of models and methods of generating cumulants are used as a testing ground for the method. These include the exactly solvable DNA melting problem on hierarchical lattices, heterogeneous DNA melting with randomness in the sequence, and Monte Carlo simulations of the well-known square lattice Ising model. The method is also applicable to the zeros closest to the imaginary axis, as these are needed for dynamical quantum phase transitions. In all cases, the method is found to provide the basic information about the transition and, most importantly, avoids root finding methods.
I. INTRODUCTION
Phase transitions have been a major playground of statistical mechanics. A phase transition is generically defined as a point of non-analyticity of a quantity of interest, like the free energy for an equilibrium system, and an understanding of phase transitions is built around keywords like critical exponents, order parameter, correlation length and, not least, the transition point [1]. One of the innovative ways of locating and finding the nature of a transition is the method of finding the zeros of the partition function, by allowing the temperature, or similar intensive field-like quantities, to be complex. This method was first introduced by Lee and Yang to study the liquid-gas phase transition in the complex fugacity plane and the 2d Ising model in the complex magnetic field plane [2,3]. The failure of the cluster expansion method of Mayer and coworkers in describing the properties of the liquid phase motivated Lee and Yang to study the liquid-gas phase transition. Of late, studies of zeros of partition functions, or their equivalents, have become relevant in many fields, like quantum dynamics, polymers, QCD, and the analysis of experimental data, to name a few [4][5][6]. The method of Lee-Yang zeros is not confined to equilibrium systems; it has also been applied successfully to systems out of equilibrium [7].
Finding the zeros of the partition function, analytically or by numerical techniques, is not always possible in the thermodynamic limit, due to the increasing memory requirement. The zeros can fall on curves, can cover some planar region, or can occur in some other complicated fashion. Although the pattern of the zeros can be visually appealing, it is only the zeros near the positive real axis (or the imaginary axis in the case of a Dynamical Quantum Phase Transition (DQPT) [4]) that bear useful information. Recently, a cumulant method has been proposed to locate the leading zero, taking the first order transition of the zipper model of DNA as an example [8].
Although things seem to be well behaved in the case of a first order transition, more detailed analysis is called for in cases where the zeros do not distribute uniformly. It is difficult to realize complex parameters in experiments, which made it hard to observe the Lee-Yang zeros experimentally for quite some time [9][10][11]. Recently there have been attempts to explore the Lee-Yang zeros of an Ising-type spin bath [9]. It has been shown that for continuous transitions, the angle at which the zeros approach the real axis near the limit point is related to the critical exponent for the diverging length, the ratio of the specific heat amplitudes on either side of the critical point, and the specific heat exponent [12].
We present a detailed study of different types of continuous transitions to test the effectiveness of the cumulant method on different types of data, generated by exact methods or by Monte Carlo simulation, and to see to what extent they provide useful information. This work is organised as follows. In Sec. II and Sec. III we define our model of dsDNA on the hierarchical lattice and the cumulant method, showing how well the critical temperature βc can be estimated from the closest zeros of some lower generations (or smaller system sizes). The second closest zeros are found with somewhat less accuracy and, finally, the exponent for the diverging length is calculated. In Sec. IV we probe the problem of binary random disorder (motivated by the fact that the energies of the two types of bonds in a dsDNA are not the same) by introducing two types of interaction energies randomly. The critical temperature then no longer remains unique and forms a distribution. By observing how the width of the distribution varies with the system size, we conclude that there is a unique Tc in the thermodynamic limit. Finally, in Sec. V and Sec. VI we check the known results for the well studied Ising model in 2d and discuss how to find the closest zero which meets the imaginary axis for the Ising model in 1d.
II. DNA ON HIERARCHICAL LATTICES
On untwisting the helical structure of a dsDNA, it takes the shape of a railway track where the ties represent the bonds shared by the two strands. This structure of bond sharing can be mimicked by putting the two strands on a hierarchical lattice with the two endpoints fixed, while they can wander at intermediate points. The lattice is generated iteratively by replacing each bond in the (n − 1)th generation with a motif of λb bonds to get the new nth generation, where λ and b are the bond scaling factor and the bond branching factor, respectively. The effective lattice dimension in the thermodynamic limit will be given by [13]

d = ln(λb) / ln λ = 1 + ln b / ln λ.

In this paper we shall choose λ = 2 and b = 4, so that d = 3. A hierarchical lattice with discrete scaling allows exact implementation of real space renormalization (RG). Thus one can write down recursion relations for the partition function and for the zeros of the partition function. The contact energy is defined to be −ǫ (ǫ > 0) when the two strands share a bond. This constitutes our DNA model [13]. It is known that dsDNA undergoes a phase transition from a bound state to an unbound state (dsDNA melting) under variation of the temperature field, the critical point being known as the duplex melting point. From the RG relations the critical point is found to be y* = b − 1, with y = exp(ǫβ) being the Boltzmann factor. Exact recursion relations followed by the single chain (C_n) and double chain (Z_n) partition functions can be written (for λ = 2) as

C_n = b C_{n−1}^2,    Z_n = b Z_{n−1}^2 + b(b − 1) C_{n−1}^4,

with the initial conditions taken as C_0 = 1 and Z_0 = y. The zeros of the partition function then follow the recursion relation

q = ± [ b q̃_j − (b − 1) ]^{1/2},

where q̃_j is one of the zeros of the (n − 1)th generation, giving rise to two new zeros in the nth generation [13].
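The recursions can be followed numerically through the ratio u_n = Z_n/C_n², whose one-step RG map (our reconstruction, assuming λ = 2 and the initial conditions C_0 = 1, Z_0 = y) makes the fixed-point structure and the melting point y* = b − 1 easy to check:

```python
def u_map(u, b=4):
    """One generation of the RG flow for the ratio u_n = Z_n / C_n^2.
    With lambda = 2, the recursions C_n = b*C^2 and Z_n = b*Z^2 + b*(b-1)*C^4
    give u_n = u^2/b + (b-1)/b (our reconstruction of the model's map)."""
    return u * u / b + (b - 1) / b

def flow(y, generations=30, b=4):
    """Iterate from u_0 = Z_0/C_0^2 = y; diverging flow means the bound phase."""
    u = y
    for _ in range(generations):
        u = u_map(u, b)
        if u > 1e12:  # flow has escaped towards the bound phase
            return float("inf")
    return u

# y* = b - 1 = 3 is the unstable fixed point, i.e. the duplex melting point.
print(flow(3.0))  # 3.0 (sits on the fixed point)
print(flow(2.9))  # ~1.0 (flows to the unbound fixed point u = 1)
print(flow(3.1))  # inf (flows to the bound phase)
```

The map has fixed points at u = 1 (unbound) and u = b − 1 (melting); linearising at u = b − 1 gives the map derivative 2(b − 1)/b > 1, confirming it is the unstable, critical fixed point.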
III. THE CUMULANT METHOD
We can define cumulant-like quantities by taking derivatives of the logarithm of the partition function with respect to the Boltzmann factor y, instead of β itself, and study the zeros in the complex y plane. The nth order cumulant can be defined as

κ_n(y) = ∂^n ln Z / ∂y^n.

The partition function of the dsDNA model on the hierarchical lattice is of the form Z_n = a_0 + a_1 y^2 + · · · + a_n y^{2^n}, which, being a polynomial with all coefficients positive and only even powers present, cannot have any real zeros. According to the Weierstrass factorization theorem, any entire function can be written in terms of a product over its zeros [14]. Thus, the partition function can be written as

Z(y) = Z(0) ∏_k (1 − y/y_k),

where y_k is the kth zero in the complex y plane; these zeros occur in complex conjugate pairs. Thus the free energy becomes a sum over all the zeros,

ln Z(y) = ln Z(0) + Σ_k ln(1 − y/y_k).

The nth order cumulant is then completely expressed by the zeros as

κ_n(y) = −(n − 1)! Σ_k 1/(y_k − y)^n.   (10)

The cumulants are real quantities since the zeros come in complex conjugate pairs. Eq. (10) can be further reduced to

κ_n(y) = −2(n − 1)! Σ_k cos(nφ_k)/ρ_k^n,   (11)

where the sum now runs over conjugate pairs, ρ_k is the distance to the kth zero from the point on the real y axis at which the cumulants are being calculated, and φ_k is the angle that the vector from that point to the zero makes with the real axis. In the higher orders, the cumulants are dominated by the closest-zero contribution only, i.e.,

κ_n(y) ≈ −2(n − 1)! cos(nφ_0)/ρ_0^n.   (12)

Thus, from four such successive cumulants in the higher orders we can solve for the leading pair of zeros, ρ_0 and φ_0, using the matrix equation

( B_n      −B_{n−1} ) ( 2 cos φ_0 / ρ_0 )   ( B_{n+1} )
( B_{n+1}  −B_n     ) (   1 / ρ_0^2     ) = ( B_{n+2} ),   (13)

where B_n ≡ −κ_n/(n − 1)! [8]. The closest zeros found for a few smaller generations are shown in Fig. 2. Once we have the closest pair of zeros, we can calculate approximants U_n of the cumulants using only the closest pair. Now, in the higher order cumulants the closest zero mostly dominates, while in the lower orders the other zeros also contribute. It must be somewhere in between that the closest zero takes over, and just before that the largest contribution must come from the closest and second closest zeros.
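A toy numerical check of this closest-zero extraction (our own sketch, with planted zeros, not the authors' code): the 2×2 linear system below encodes the two-term recursion obeyed by power sums of a conjugate pair, and recovers the closest zero from four successive cumulant-like quantities:

```python
import numpy as np

# Plant two conjugate pairs of zeros; the goal is to recover the closest pair.
zeros = np.array([0.5 + 0.2j, 0.5 - 0.2j, 1.5 + 1.0j, 1.5 - 1.0j])
y = 0.0  # point on the real axis where the "cumulants" are evaluated

def B(n):
    """B_n = sum_k (y_k - y)^(-n), i.e. the nth cumulant up to -1/(n-1)!."""
    return np.sum((zeros - y) ** (-float(n))).real

n = 20  # high order, so the closest pair dominates the sums
A = np.array([[B(n), -B(n - 1)], [B(n + 1), -B(n)]])
rhs = np.array([B(n + 1), B(n + 2)])
p, q = np.linalg.solve(A, rhs)  # p = 2*cos(phi0)/rho0, q = 1/rho0^2

rho0 = 1.0 / np.sqrt(q)                   # distance to the closest zero
re = p / (2.0 * q)                        # its real part (relative to y)
im = np.sqrt(max(rho0**2 - re**2, 0.0))   # its imaginary part
print(re, im)  # ≈ 0.5, 0.2
```

At order n = 20 the contamination from the farther pair is suppressed by (ρ_0/ρ_1)^n ≈ 3 × 10⁻¹¹, so the planted closest zero 0.5 ± 0.2i is recovered to many digits.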
Thus, we have to look for this crossover region, which will provide us with an upper bound on the second closest zero. The upper bound will depend upon the error introduced in calculating the closest pair of zeros, which then propagates into the approximant calculated from the closest pair alone. Let us consider the region where the nth cumulant consists mainly of the first and second closest zeros. Then we may write

κ_n ≈ −(n − 1)! [ 1/(y_0 − y)^n + 1/(y_1 − y)^n + c.c. ],

where y is the point in the complex y plane at which the cumulant is being calculated, y_0 and y_1 being the closest and second closest zeros respectively. Consider

y_0 = y_0* + δy,

where y_0* is the exact value of the closest zero and δy = |δy| exp(−iψ) is the error introduced when it is calculated from the exact higher order cumulants. The approximant U_n is then built from y_0 rather than y_0*. Subtracting the contribution of the closest zero from the cumulant, we get κ_n − U_n, which contains the second-closest-zero term, proportional to cos(nφ_1)/ρ_1^n, together with an error term, proportional to n|δy| cos[(n + 1)φ_0 + ψ]/ρ_0^{n+1}, where ψ is the phase factor coming from the complex δy. Now, the first term dominates when, roughly,

(ρ_1/ρ_0)^n ≪ ρ_0 / (n |δy|).

This signifies that as |δy| → 0, n → ∞, i.e. the second closest zero can contribute at higher orders as well.
In Fig. 4(a) and Fig. 4(b) we plot the real and imaginary parts of the second closest zeros, calculated at three different temperatures. Clearly, we did not consider the very low order points, since all the zeros contribute there. There appears to be a crossover region after which there is a constant contribution, corresponding to the first closest zero. In between there is a flat plateau, which is the region of our interest. Thus, we fit our curve to all the points before the crossover region and up to the flat plateau to find our best value of the second closest zero. The thickness of the shaded region in the flat plateau serves as an error bar on our value. While the closest zeros can be determined with a decent accuracy of four decimal places for smaller generations, the next-to-closest pair of zeros can be calculated with a somewhat lower accuracy of one place after the decimal.
Next, we find the angle at which the zeros meet the real axis in the thermodynamic limit; see Fig. 4(c). For this, we have taken the average of the angles made by the first and second closest zeros with the critical point (since the zeros do not fall on a smooth straight line in the lower generations) and plotted it against the inverse of the generation number. The results vary substantially if we do not include the second closest zero. This angle contains the information about the order of the phase transition. The angle at which the zeros meet the real y axis is related to the specific heat exponent by

tan[(2 − α)φ] = [cos(πα) − A_−/A_+] / sin(πα),   (20)

where φ is the angle between the tangent to the line of zeros at the limit point and the real axis of y, ν is the critical exponent for the diverging length, α is the specific heat exponent and A_± are the amplitudes of the specific heat on the low and high y sides of the transition [12]. For our problem with double stranded DNA, it is known that A_−/A_+ → ∞, as A_+ = 0. Therefore, the angle is given by

φ = π / [2(2 − α)].   (21)

Once we have φ, we can find the critical exponent ν, but in general we need to know ν separately. To determine ν one can make use of the RG transformation properties of the Hamiltonian, to see how the partition function changes under a scaling [12], and obtain the following scaling equation for the distance of the kth zero from the critical point:

∆_k ≈ F^{−1} (k/L^d)^{1/(dν)},   (22)

where d is the dimension, ∆_k is the distance of the kth zero from the critical point, L is the size of the system and F^{−1} is a constant. To determine ν we plot ln ∆_k vs ln L for the closest zeros (k = 1) for generations n = 5, 6, 7, 8; see Fig. 4(d). Our estimate of the length exponent, ν_estimate = 1.67, compares reasonably well with the independent estimate from the angle relation above.
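The finite-size extraction of ν can be illustrated on synthetic data obeying ∆_1 ∝ L^{−1/ν}; a minimal sketch of ours, with a hypothetical ν = 1.67:

```python
import numpy as np

# Synthetic check of the scaling Delta_k ~ (k/L^d)^(1/(d*nu)) ~ L^(-1/nu)
# for fixed k: generate closest-zero distances for several sizes with a
# hypothetical nu, then recover nu from the slope of a log-log fit.
nu_true = 1.67
L = np.array([32.0, 64.0, 128.0, 256.0])
delta = 0.8 * L ** (-1.0 / nu_true)  # 0.8 stands in for the constant prefactor

slope, intercept = np.polyfit(np.log(L), np.log(delta), 1)
nu_est = -1.0 / slope
print(nu_est)  # ≈ 1.67
```

On noiseless data the fit is exact; for real generations the scatter of the points around the fitted line gives the uncertainty on ν.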
IV. EFFECTS OF BINARY RANDOM DISORDER
In order to show the usefulness of the cumulant method, we now use it for a heterogeneous DNA. The non-uniformity in the base sequence of a DNA can be handled by specifying the base pair energies along the chain. A simpler situation is a random sequence where the interaction energy is randomly chosen from a specified distribution [15][16][17][18].
With the introduction of randomness, the obvious question is whether a critical point for melting exists. Since the partition function is different for different realizations of the randomness, one of the important distributions is that of the partition function. And since the partition function can be written completely in terms of its zeros in the complex β plane, there is a probability distribution of the closest zero. In the thermodynamic limit, a well-defined melting point requires a sharp probability distribution: the width of the distribution should vanish in the large length limit ("self-averaging"). That alone is not enough, though; for a transition to occur the zeros must touch the real axis. This is, however, not equivalent to the traditional quenched averaging. The quenched average transition temperature can be determined from the average specific heat, as discussed below.
Assuming two different interaction energies, ǫ1 = 1.0 and ǫ2 = 0.5, and a uniform distribution of the random energy, we show that the width of the probability distribution of the closest zeros decreases with an increase in the size of the lattice, and that the angle from the cumulant calculating point on the real axis of the β plane to the closest zero goes to zero (in other words, cos φ → 1) in the thermodynamic limit, signifying that the closest zeros pinch the real axis, which is a necessary condition for a transition to occur; see Fig. 5. The randomness depends only on the longitudinal direction and has no dependence on the transverse direction. The mean critical temperature for a particular generation of the hierarchical lattice is determined from the distribution of the closest zero, calculated from the four successive cumulants of order n = 19, 20, 21, 22. Fig. 5(a) shows that the average of cos φ over samples approaches 1, which indicates that there is a limit point on the real β axis; see Fig. 5(b). Fig. 5(c) and Fig. 5(d) show the distribution of ρ and cos φ for successive generations n = 6, 7, 8, and how the width of the distribution goes to zero in the thermodynamic limit, respectively. Fig. 5(e) shows the comparison between the melting temperature found from the intersection of the specific heat curves scaled by the length of the DNA (transparent blue circled region), found from the partition function for 500 different random samples, and our estimated value from the partition function zeros (black dot), with the boxes (green and red) along the β axis representing the uncertainty.
In the thermodynamic limit, the melting temperature is estimated to be βc = 1.23 ± 0.004 from the extrapolation of the closest zeros, whereas the intersection of the specific heat curves gives the melting temperature βc = 1.35 ± 0.03. Thus, although it can be confirmed that there is a unique melting point in the thermodynamic limit, the melting point predicted from the distribution of zeros differs from that calculated from the intersection of the specific heat curves for successive generations; see Fig. 4(e).
V. ISING 2D
In the previous sections III and IV, we determined the closest as well as the second closest zeros from the higher order energy-like cumulants of the partition function. There we had the privilege of finding the exact cumulants, since the partition function is known exactly. In simulations, however, the higher energy cumulants can be calculated more easily, albeit approximately, than the partition function itself. One such example is the two dimensional Ising model. The Hamiltonian of the 2d Ising model with interaction between nearest neighbours, in the absence of an external magnetic field, is

H = −J Σ_{⟨ij⟩} σ_i σ_j,

where σ_i ∈ {−1, 1} is the spin at site i, J is the coupling constant between neighboring spins and ⟨ij⟩ indicates nearest neighbors. It is known that for such a two-dimensional spin system there is a phase transition from the paramagnetic to the ferromagnetic phase at a finite temperature. The energy cumulants were calculated from the energy moments, which in turn were calculated from Monte Carlo simulation; using Mathematica [19] we found the expression for the nth order cumulant in terms of the moments. Since our aim throughout this work has been to determine quantities in the thermodynamic limit by studying the zeros of small system sizes, we have calculated the zeros for n × n square lattices with n = 5, 6, 7, 8, 9, 10, 11 on the complex β plane and then extrapolated the results to the infinite limit. To find the zeros we have calculated up to the 30th moment of the energy using a quadruple precision data type. Fig. 6(a) shows the transition point of the 2d Ising model in the absence of any external magnetic field in the thermodynamic limit on the complex inverse temperature plane, βc. Fig. 6(b) shows that the imaginary part of the closest zero goes to zero in the thermodynamic limit, indicating a phase transition. Fig. 6(c) shows how the distance between the closest zeros and the transition point scales with the system size.
The inverse of the slope of the fitted line gives the critical exponent ν of the diverging correlation length, according to Eq. (22).
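The fit described above is a straight line in log-log coordinates. A minimal sketch with synthetic data (the prefactor and the ν = 1 input here are illustrative assumptions, not the paper's fitted numbers):

```python
import numpy as np

def fit_nu(sizes, distances):
    """Estimate nu from |beta_1(L) - beta_c| ~ L^(-1/nu) by a straight-line
    fit in log-log coordinates; the exponent nu is -1/slope."""
    slope, _ = np.polyfit(np.log(sizes), np.log(distances), 1)
    return -1.0 / slope

# Synthetic check: exact power-law data with nu = 1 (the known 2d Ising value)
L = np.array([5, 6, 7, 8, 9, 10, 11], float)
d = 0.3 * L ** (-1.0)          # |closest zero - beta_c| for nu = 1
print(fit_nu(L, d))            # recovers nu ~ 1.0
```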
Results for the 2d Ising model were found to be in excellent agreement with the exact results. A table comparing them with the exact results in the thermodynamic limit is given below.
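The moment-to-cumulant conversion used above (done with Mathematica in the paper) follows a standard recursion; a minimal sketch:

```python
from math import comb

def cumulants_from_moments(m):
    """Convert raw moments m[0] = <E>, m[1] = <E^2>, ... into cumulants via
    the standard recursion  c_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) c_k m_{n-k}."""
    c = []
    for n in range(1, len(m) + 1):
        cn = m[n - 1] - sum(comb(n - 1, k - 1) * c[k - 1] * m[n - k - 1]
                            for k in range(1, n))
        c.append(cn)
    return c

# Check on a fair coin (all raw moments equal 1/2): cumulants 1/2, 1/4, 0, -1/8
print(cumulants_from_moments([0.5] * 4))   # [0.5, 0.25, 0.0, -0.125]
```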
VI. IMAGINARY ZEROS IN ISING 1D
Motivated by the fact that a phase transition can occur with time as the parameter, as in a DQPT where the zeros meet on the imaginary axis, we have found the zeros closest to the imaginary axis for the 1d Ising model, which does not have a transition at any non-zero temperature [4]. The partition function of the 1d Ising model is known exactly:

Z = λ_+^N + λ_−^N,

where λ_± are the eigenvalues of the transfer matrix and N is the number of spins. In the absence of an external magnetic field the eigenvalues become

λ_+ = 2 cosh κ,   λ_− = 2 sinh κ,

where κ = J/(k_B T), and the zeros of the partition function on the complex κ plane are found to satisfy

tanh κ_n = exp[iπ(2n + 1)/N],

with n = 0, 1, 2, · · ·, N − 1. The zeros in the complex κ plane are not distributed symmetrically for all lengths: a symmetrical distribution about the imaginary axis is found only for particular system sizes, while for other sizes the zeros either lie on the imaginary axis or are distributed asymmetrically. When finding the zeros with the cumulant method at a point on the imaginary κ axis, we find that the method works only for zeros with a symmetrical distribution.
Assuming a symmetric distribution about the imaginary axis, and since the free energy is a real quantity, we can write

F = −(k_B T / 2) ln (Z Z*),

where Z and Z* are the partition functions evaluated at a complex temperature and at its complex conjugate, respectively, so that the free energy is real. In terms of the zeros on the complex κ plane,

ln (Z Z*) = Σ_k [ln (κ_k − iκ) + ln (κ_k* + iκ)] + (terms analytic in κ).

Retaining only the terms of the closest zeros,

U_n ≈ (−∂_{iκ})^n [ln (κ_1 − iκ) + ln (κ_2 − iκ) + ln (κ_1* + iκ) + ln (κ_2* + iκ)],   (30)

U_n = 4(−1)^{n−1} (n − 1)! cos (nφ_0) / (iρ_0)^n,

which requires only a small modification of the matrix equation: putting iρ_0 in place of ρ_0. Here, φ_0 is the angle between the imaginary axis and the vector extending from the cumulant-calculating point on the imaginary axis to the zero closest to it. The zeros meet the imaginary axis periodically owing to the −i factor inside the logarithm in Eq. (26). The schematic diagram in Fig. 8 shows that determining the closest zeros along the imaginary axis is possible only if the cumulant-calculating point is chosen judiciously. The closest zeros of the different branches lie at a distance of π/4 along the imaginary κ axis. The shaded regions in Fig. 8, for example, represent the points along the imaginary κ axis at which placing the cumulant-calculating point yields the closest zero of that particular branch (shown by the blackened top of the shaded region).
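These zero locations can be checked numerically. The sketch below assumes periodic boundary conditions, so that Z = λ_+^N + λ_−^N and the zeros satisfy tanh κ_n = exp[iπ(2n + 1)/N]:

```python
import cmath

def ising1d_zeros(N):
    """Partition-function zeros of the zero-field 1d Ising chain on the complex
    kappa = J/(k_B T) plane, assuming periodic boundary conditions."""
    return [cmath.atanh(cmath.exp(1j * cmath.pi * (2 * n + 1) / N))
            for n in range(N)]

def Z_over_lambda_plus(kappa, N):
    """Z / lambda_+^N = 1 + tanh(kappa)^N, which must vanish at a zero."""
    return 1 + cmath.tanh(kappa) ** N

N = 8
for k in ising1d_zeros(N):
    assert abs(Z_over_lambda_plus(k, N)) < 1e-12
print("all", N, "zeros verified")
```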
VII. SUMMARY
To summarize, we have found the leading pair of zeros of the partition function for dsDNA on a hierarchical lattice, where the zeros are exactly solvable, as well as for the disordered case. We chose such a system in order to have a direct comparison with the exact results. The results agree well with the exact theoretical results, giving us a new way to probe systems in the thermodynamic limit using small system sizes. Although the zeros from the lower generations may not always be helpful, the method proved to be quite powerful. Moreover, through this cumulant method, results from Monte Carlo simulation proved equally good at reproducing the known results for the critical exponent and the critical-point estimation for the Ising model in 2d as well as in 1d.
Lifelong single-cell profiling of cranial neural crest diversification
The cranial neural crest generates a huge diversity of derivatives, including the bulk of connective and skeletal tissues of the vertebrate head. How neural crest cells acquire such extraordinary lineage potential remains unresolved. By integrating single-cell transcriptome and chromatin accessibility profiles of cranial neural crest-derived cells across the zebrafish lifetime, we observe region-specific establishment of enhancer accessibility for distinct fates. Neural crest-derived cells rapidly diversify into specialized progenitors, including multipotent skeletal progenitors, stromal cells with a regenerative signature, fibroblasts with a unique metabolic signature linked to skeletal integrity, and gill-specific progenitors generating cell types for respiration. By retrogradely mapping the emergence of lineage-specific chromatin accessibility, we identify a wealth of candidate lineage-priming factors, including a Gata3 regulatory circuit for respiratory cell fates. Rather than multilineage potential being an intrinsic property of cranial neural crest, our findings support progressive and region-specific chromatin remodeling underlying acquisition of diverse neural crest lineage potential.

Highlights:
Single-cell transcriptome and chromatin atlas of cranial neural crest
Progressive emergence of region-specific cell fate competency
Chromatin accessibility mapping identifies candidate lineage regulators
Gata3 function linked to gill-specific respiratory program

Graphical Abstract
specialized type of gill cartilage distinct from that in the rest of the head, as well as pillar and tunica media cells and putative gill progenitors. We also recovered smooth muscle, perivascular, and stromal cells (see Table S1 for cluster marker genes and Fig. S9-10 for in situ validation).
In addition to skeletal and gill populations, we recovered a distinct type of fibroblast enriched for the cell adhesion molecule chl1a and wnt5a. Strikingly, these fibroblasts are also enriched for genes encoding enzymes for all steps of phenylalanine and tyrosine breakdown (Fig. 1f, Fig. S11). In situ hybridization for two of these genes (hpdb and pah) reveals that these fibroblasts are in the dermis between the skin epidermis and runx2b+/sp7+ osteoblast lineage cells (Fig. 1g,h). Humans with mutations in HGD, which encodes an intermediate enzyme in the Phe/Tyr catabolic pathway, develop Alkaptonuria, or black bone disease, due to accumulation and pathological aggregation of homogentisic acid 16. As the abundant melanocytes in the zebrafish skin use high levels of Tyr to synthesize melanin, one possibility is that these specialized dermal fibroblasts function to protect the skeleton by removing damaging Phe/Tyr metabolites.

Progressive emergence of CNCC derivatives and region-specific progenitors

To understand lineage decisions of CNCC mesenchyme across time, we first used the STITCH algorithm 17 to connect individual stages into developmental trajectories for scRNAseq and snATACseq datasets (Fig. 2a,b). As early as 3 dpf (particularly apparent for snATACseq), we observe divergence of CNCCs into skeletogenic versus gill lineages. A hyal4+ perichondrium population precedes branches for tendon/ligament, periosteum, and osteoblasts (Fig. S9), and an fgf10b+ gill progenitor population appears at 5 dpf and precedes branches for gill cartilage, pillar, and tunica media cells (Fig. S10). We also observe a distinct trajectory to dermal fibroblasts by 3 dpf (Fig. S11), as well as to cxcl12a+ stromal cells (Fig. S9) and teeth. We do not observe CNCC contributions to cardiomyocytes (Fig. S8), in contrast to reports for amniotes 18.
By creating an index for ectomesenchyme-enriched gene expression at 1.5 dpf, a stage preceding the onset of differentiation, we found no evidence for retention of ectomesenchyme identity at later stages, as shown by aggregated ectomesenchyme gene expression and the early ectomesenchyme marker nr2f5 19 (Fig. S12). Although formation of CNCC ectomesenchyme involves a reacquisition of the pluripotency network 14, we also did not observe expression of the pluripotency genes pou5f3 (oct4), sox2, nanog, and klf4 at any stage of post-migratory ectomesenchyme, with the exception of lin28aa, which displays broad expression at 1.5 dpf and is rapidly extinguished by 2 dpf (Fig. S12). Rather than maintenance of a multipotent ectomesenchyme population, our data point to progressive emergence of specialized hyal4+ perichondrium, cxcl12a+ stromal, and fgf10a/b+ gill populations at 3 dpf and beyond (Fig. S12).

For gill clusters, cell distribution from 5 to 14 dpf revealed two primary trajectories (Fig. 2f-h, Fig. S13). In the first, cxcl12a+/ccl25b+ stromal cells give rise to mesenchyme associated with retinoic acid metabolism (aldh1a2+/rdh10a+), with in situ hybridization revealing these cell types restricted to the base of secondary filaments (Fig. S10). In the second, fgf10a+ cells are connected to fgf10b+ cells, which then diverge into gill cartilage, pillar, tunica media, and perivascular populations. To test whether fgf10b+ cells are progenitors for specialized gill subtypes, we used CRISPR/Cas9 to insert a photoconvertible nuclear EOS protein into the endogenous fgf10b locus. We found fgf10b:nEOS to be robustly expressed in the forming gills, with expression becoming progressively restricted to the tips of gill filaments over time, similar to endogenous fgf10b expression (Fig. S10, S14).
We then used UV light to convert fgf10b:nEOS fluorescence from green to red in a small number of filaments at 7 dpf and observed contribution to gill chondrocytes and pillar cells 3 days later, with new fgf10b:nEOS cells (i.e. green only) being generated at the tips of growing filaments (Fig. 2i). Similar results were seen in adult gill filaments (Fig. S14). These data support fgf10b+ cells being progenitors for gill-specific cell types from larval through adult stages.
To understand how CNCC mesenchyme changes from embryogenesis to adulthood, we next interrogated patterns of gene usage and chromatin accessibility (Fig. 2j, Fig. S15-16, Table S2). Gene Ontology (GO) analysis of ectomesenchyme at 1.5 and 2 dpf revealed terms linked to cell division and metabolism, consistent with early expansion of this population. We also find enrichment of transcription factors for early ectomesenchyme (dlx2a, twist1a, nr2f6b) and arch patterning (pou3f3b, hand2), as well as transcription factor binding motifs for several types of nuclear receptors, in accordance with known roles of Nr2f members in ectomesenchyme development 19. The hyal4+ population contains skeletal-associated terms (collagen fibril organization, skeletal system development, regulation of ossification, cartilage development), consistent with being a common progenitor for cartilage, tendon, ligament, and bone in pseudotime analysis. The hyal4+ population is enriched for transcription factors implicated in perichondrium biology (mafa, foxp2, foxp4) 23,24 and cartilage formation (barx1, sox6, emx2) 25-27, and motifs for Bmp signaling (SMAD) and transcription factors (NFAT, RUNX) known to regulate cartilage and bone 28. For gill fgf10a/b+ progenitors, we recover terms for general growth (e.g. …).

Highly resolved embryonic spatial expression domains from integrated datasets

We next sought to understand the developmental origins of distinct cell types and lineage programs in CNCC ectomesenchyme. To do so, we first examined the ability of integrated transcriptomic and chromatin accessibility datasets to predict the expression patterns of potential ectomesenchyme patterning genes at 1.5 dpf, a stage before overt cell type differentiation. […] and pitx1 (oral mandibular) 25,33,34, revealed tight correlation to reported expression, including zebrafish-specific overlap of dlx5a and hand2 in the mandibular arch (Fig. 3d).
We also identified a previously unappreciated oral-aboral axis of the mandibular arch in zebrafish, marked by pitx1 and nr5a2 respectively, which we validated by in situ hybridization for nr5a2 (Fig. 3e). […]

Chromatin accessibility predicts cell type competency in early arches

We next sought to understand how the establishment of cell fate competency is linked to the earlier activity of arch patterning genes. To do so, we first computed unique patterns of chromatin accessibility ("peaks") for each cell cluster at 14 dpf (Fig. 4a, Table S3). Modules of the top enriched peaks for each cell type were then mapped onto UMAP projections of SnapATAC data at 1.5, 2, 3, and 5 dpf (Fig. S19). To understand when cluster-specific peaks become established, as well as cluster relatedness, we developed the bioinformatics pipeline "Constellations". First, we calculated whether projections of cluster-specific peak modules are skewed toward particular regions of UMAP space at each earlier time point, suggesting establishment of cluster-specific chromatin accessibility (a proxy for cell type competency). We then computed the relatedness of peak module projections in two dimensions for each mapped cluster at each stage (Fig. 4b). Analysis of cell competency trajectories shows that cell types can be grouped into five main classes: skeletogenic cells (including hyal4+ perichondral and postnb+ periosteal cells), stromal cells, dermal fibroblasts, gill cell types, and cartilage. Constellations analysis also reveals a temporal order of cell type competency establishment, with unique chromatin accessibility for cartilage and dermal fibroblast lineages emerging at 1.5 dpf; bone and perichondrium at 2 dpf; and periosteum, tendon and ligament, and gill progenitors and pillar cells at 3 dpf (Fig. 4c).
This analysis suggests that chromatin accessibility prefiguring diverse CNCC cell types is progressively established rather than being inherited from earlier multipotent CNCCs.
Constellations analysis reveals candidate transcription factors for lineage priming
To discover potential transcription factors for establishing cell type competency, we analyzed the Constellations dataset for transcription factors whose expression and predicted binding motifs were co-enriched in particular clusters. We identified 287 transcription factor expression/motif pairs showing enrichment (Fig. S20, Table S4). The FOXC1 motif and foxc1b gene body activity were highly enriched in the cartilage trajectory, and LEF1/lef1 in the dermal fibroblast trajectory (Fig. 5a). Projection of FOX motifs and merged Fox gene activity (foxc1a, foxc1b, foxf1, foxf2a, foxf2b) and LEF1/lef1 onto SnapATAC UMAPs at 1.5 dpf reveals close correlation to mapping of the 14 dpf peak modules for cartilage and dermal fibroblasts at this stage (Fig. 5b,c), as well as the known fate map of cartilage precursors in the arches 35 (Fig. 5d,e). This confirms genetic evidence for roles of Foxc1 and Foxf1/2 in cartilage formation in zebrafish and mouse 36,37, and more specifically Foxc1 in establishing accessibility of cartilage enhancers in the developing face 28. It also raises the possibility that Wnt signaling, mediated in part by Lef1, may play a role in early dermal fibroblast specification, consistent with enrichment of wnt5a in this population (Fig. S11).
We also find GATA3/gata3 to be highly enriched in gill populations, with SnapATAC UMAP projections of GATA3 motif and gata3 gene body activity at 5 dpf correlating with 14 dpf gill progenitor peaks (Fig. 5f). The enrichment of ETS2/ets2, which plays a role in endothelial […]. Although previous work had shown that gata3 is expressed in and required for initial gill bud formation in zebrafish, larval lethality had precluded analysis of gill subtype differentiation 40. We find gata3 expression to be maintained in gill populations through adult stages in scRNAseq data, which we validated by in situ hybridization at 14 dpf and 2 years of age (Fig. S21). We then identified a non-coding region ~143 kb downstream of the gata3 gene, itself containing a predicted GATA3 binding site, that was selectively accessible in posterior arch CNCCs by 3 dpf, gill progenitors and pillar cells by 5 dpf, and gill cartilage cells by 14 dpf (Fig. 6a, Fig. S22). This gata3-P1 element was sufficient to drive highly restricted GFP expression in posterior arch CNCCs starting at 1.5 dpf, which continued in gill progenitors, pillar cells, and chondrocytes through 60 dpf (Fig. 6c-e, Fig. S21).
Gill cartilage has a markedly distinct expression and chromatin accessibility profile from hyaline cartilage of the jaw, as shown by selective expression of ucmaa in gill cartilage versus ucmab in hyaline cartilage (Fig. S23). We identified a non-coding region ~5 kb upstream of the ucmaa gene that was selectively accessible in gill cartilage starting at 14 dpf and contained a predicted GATA3-binding site (Fig. 6b, Fig. S22). This ucmaa-P1 element drives highly restricted GFP expression in gill chondrocytes at 11 and 23 dpf, in contrast to a previously described ucmab enhancer 28 driving GFP expression in hyaline but not gill cartilage (Fig. 6f, Fig. S23). Although functional assays are needed to confirm Gata3 dependence, our findings are consistent with GATA factors establishing a positive autoregulatory circuit in posterior arch CNCCs that maintains gata3 expression and promotes the later differentiation of gill-specific cell types (Fig. 6g).

For enhancer transgenic lines, we synthesized peaks for gata3 (chr4:24918100-24918770) and ucmaa (chr4:7836670-783720) using IDT gBlocks and cloned these into a modified pDest2AB2 construct containing an E1b minimal promoter, GFP, and an eye-CFP selectable marker 28 using In-Fusion cloning (Takara Bio). We injected plasmids and Tol2 transposase RNA (30 ng/uL each) into one-cell stage zebrafish embryos, raised these animals, and screened for founders based on eye CFP expression in the progeny. Two independent germline founders were identified for each construct that showed similarly specific activity in the gills.

Libraries were sequenced on an Illumina NextSeq or HiSeq machine at a depth of at least 75,000 reads per nucleus for each library. Both read1 and read2 were extended to 65 cycles.
Cellranger ATAC v1.2.0 (10X Genomics) was used for alignment against the genome (built with GRCz11.fa, JASPAR2020, and GRCz11.98.gtf), peak calling, and peak-by-cell count matrix generation with default parameters.

We included biological replicates at several stages to test the reproducibility of library preparation and increase the depth of data. For scRNAseq, we performed two replicates at 5 and 14 dpf, and three replicates at 3 and 150 dpf. For snATACseq, we performed two replicates at 2, 3, and 14 dpf.

SnapATAC for peak refinement and gene activity matrix imputation

To refine the peak profile for better representation of diverse cell types across libraries, we performed a second round of peak calling using the packages Snaptools (v1.2.7) and SnapATAC (v1.0.0) 15. We first removed low-quality cells and cell doublets by setting cutoffs based on the percentage of reads in peaks (> 30 for 60 dpf, > 45 for 210 dpf, and > 50 for the rest) and the number of fragments within peaks (5,000-30,000 for 5 dpf, 1,000-11,000 for 14 dpf, and 1,000-20,000 for the rest). Potential cell debris or low-quality cells were removed by setting hard fragments-in-peak number cutoffs. Using the SnapATAC package, we then generated "pseudo-multiome" data at each stage. To recover every aligned fragment, we binned the genome into 5 kb sections and […]

The tissue module scores of the snATACseq data were calculated based on the enriched peak sets and their module scores for each cluster identified at 14 dpf using the R packages Seurat and Signac. The enriched peak sets were calculated by the FindAllMarkers function using a two-sided likelihood ratio test with fragment numbers in the peak region as latent variables. We used the peaks with adjusted p values smaller than 0.001 as the enriched peaks for a cluster.
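The per-nucleus QC filtering described above (percentage of reads in peaks plus a fragments-in-peaks range) can be sketched as follows; the thresholds shown are those quoted for 5 dpf, while the helper itself is illustrative, not SnapATAC's API:

```python
def pass_qc(frac_in_peaks, n_frag_in_peaks, frac_cutoff, frag_range):
    """Keep a nucleus only if its %reads-in-peaks exceeds the stage-specific
    cutoff and its fragments-in-peaks count falls inside the allowed range."""
    lo, hi = frag_range
    return frac_in_peaks > frac_cutoff and lo <= n_frag_in_peaks <= hi

# Stage-specific cutoffs from the text (5 dpf: >50% reads in peaks, 5,000-30,000 fragments)
print(pass_qc(62.0, 12_000, 50, (5_000, 30_000)))   # True: kept
print(pass_qc(40.0, 12_000, 50, (5_000, 30_000)))   # False: too few reads in peaks
print(pass_qc(62.0, 2_000, 50, (5_000, 30_000)))    # False: too few fragments
```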
As there are 23 clusters (tissues) identified at 14 dpf, we ended up with 23 peak sets, which we applied to calculate the tissue module scores at earlier time points (1.5, 2, 3, and 5 dpf) using the AddChromatinModule function. To determine whether a tissue score at a time point is distributed in a statistically significant, and hence biologically interesting, way, we calculated the skewness of the distribution of a tissue score with the R package parameter (v0.12.0). We considered a tissue score to be distributed in a meaningful way if it was strongly right-skewed, using a hard cutoff of skewness greater than 1. For 1.5 dpf, the cutoff was lowered to 0.4 to accommodate the overall lower skewness at that time point, but with an additional filter of max module score > 15 to avoid tissue module scores with extremely low values.
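The skewness filter described above can be sketched in a few lines (the function names are illustrative; the original pipeline computes skewness in R):

```python
def skewness(xs):
    """Population skewness m3 / m2^(3/2) of a list of module scores."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def is_meaningful(scores, cutoff=1.0, min_max_score=None):
    """Right-skew filter: keep a tissue score whose distribution is strongly
    right-skewed (and, optionally, whose maximum exceeds a floor)."""
    if min_max_score is not None and max(scores) <= min_max_score:
        return False
    return skewness(scores) > cutoff

# A strongly right-skewed toy distribution: mostly zeros, a few high scorers
scores = [0.0] * 90 + [10.0] * 10
print(skewness(scores))        # 8/3 ~= 2.667, well above the cutoff of 1
print(is_meaningful(scores))   # True
```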
An interaction and feedback mechanism-based group decision-making for emergency medical supplies supplier selection using T-spherical fuzzy information
Selecting a supplier for emergency medical supplies during disasters can be considered a typical multiple attribute group decision-making (MAGDM) problem. MAGDM is an intriguing common problem that is rife with ambiguity and uncertainty. It becomes much more challenging when governments and medical care enterprises adjust their priorities in response to the escalating problems and the effectiveness of the actions taken in different countries. As decision-making problems become increasingly complicated nowadays, a growing number of experts are likely to use T-spherical fuzzy sets (T-SFSs) rather than exact numbers. T-SFS is a novel extension of fuzzy sets that can fully convey ambiguous and complicated information in MAGDM. The objective of this paper is to propose a MAGDM methodology based on interaction and feedback mechanism (IFM) and T-SFS theory. In it, we first introduce T-SF partitioned Bonferroni mean (T-SFPBM) and T-SF weighted partitioned Bonferroni mean (T-SFWPBM) operators to fuse the evaluation information provided by experts. Then, an IFM is designed to achieve a consensus between multiple experts. In the meantime, we also find the weights of experts by using T-SF information. Furthermore, in light of the combination of IFM and T-SFWPBM operator, an MAGDM algorithm is designed. Finally, an example of supplier selection for emergency medical supplies is provided to demonstrate the viability of the suggested approach. The influence of parameters on decision results and comparative analysis with the existing methods confirmed the reliability and accuracy of the suggested approach.
www.nature.com/scientificreports/

…parameter 'q' into the constraints, such that the sum of the qth powers of MD, AD, and NMD must not exceed one; then, by taking q ≥ 3, the above example holds: (0.7)^3 + (0.5)^3 + (0.6)^3 = 0.684 < 1. If the value of the parameter is one or two, then T-SFSs reduce to PFSs and SFSs, respectively. T-spherical fuzzy sets with zero abstinence degree are mathematically equivalent to q-ROFSs. On the other hand, T-SFSs with null abstinence degree are considered generalized forms of IFSs and PyFSs in the cases where the parameter is one and two, respectively. SFSs and T-SFSs are now the focus of decision-system research. For example, in 46, the authors designed and extended the MULTIMOORA method using SFSs. Garg 47 proposed power aggregation operators under T-SF information. Yang et al. 48 developed a novel information error-driven T-spherical fuzzy cloud method to assess small and medium-sized businesses' innovative digital transformation strategies. Debnath et al. 49 defined power-partitioned neutral aggregation operators for T-SFSs. Furthermore, T-SFSs have recently been used in MAGDM methods. Ullah et al. designed a correlation coefficient-based MAGDM technique using T-SFSs and used it to solve the clustering problem 50. In 51, a T-SF power Muirhead mean operator-based MAGDM method is established to tackle decision problems of daily life. Interested researchers are directed to some decent literature on MAGDM using T-SFSs [52-57]. This analysis of the literature revealed that it may be challenging for MAGDM method-based research to effectively capture the features of emergency decision-making.
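The admissibility condition for a T-SFN can be checked numerically; a small sketch (function names are illustrative):

```python
def is_valid_tsfn(mu, eta, nu, q):
    """Check the T-spherical fuzzy admissibility condition
    mu^q + eta^q + nu^q <= 1 (membership, abstinence, non-membership)."""
    return mu ** q + eta ** q + nu ** q <= 1.0

def smallest_valid_q(mu, eta, nu, q_max=20):
    """Smallest integer q (up to q_max) making the triple admissible, else None."""
    for q in range(1, q_max + 1):
        if is_valid_tsfn(mu, eta, nu, q):
            return q
    return None

# The example from the text: (0.7, 0.5, 0.6) fails for q = 1, 2 but holds for q = 3
print(is_valid_tsfn(0.7, 0.5, 0.6, 2))   # False: 0.49 + 0.25 + 0.36 = 1.10 > 1
print(smallest_valid_q(0.7, 0.5, 0.6))   # 3: 0.343 + 0.125 + 0.216 = 0.684 <= 1
```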
However, there is still a dearth of MAGDM exploration on the assessment of EMS supplier selection, and the relevant literature still suffers from several problems, broadly categorized into three types: (1) Prior studies on emergency decision-making did not take into consideration the interaction of experts with one another, which is commonplace nowadays. Realistic circumstances frequently involve complex causal relationships between several factors, and at the same time experts' preferences, educational backgrounds, and consensus levels always differ. Therefore, it can be challenging for current MAGDM approaches to effectively capture the incomplete reasoning of experts and the peculiarities of the criteria used to select suppliers for emergency medical supplies 18. (2) Few academics have thoroughly considered how to balance evaluation criteria in emergency decision-making, such as subjective and objective elements, weights of experts and criteria, etc. The accuracy of the results may be affected by the single-weighting method's tendency to strengthen or weaken the influence of various elements 58,59. (3) Numerous existing MAGDM studies use fuzzy sets as their foundation, sometimes extended fuzzy sets that use the membership degree and non-membership degree to describe the assessments of experts. However, when experts do not wish to give any response about an object, neutrality is involved, and literature on the role of neutrality in emergency decision-making is not available. When choosing a supplier for emergency medical supplies, these fuzzy sets may have numerous obvious drawbacks, which can lead experts to an erroneous assessment procedure.
In order to fill this research gap, our analysis suggests an interaction and feedback mechanism-based MAGDM methodology using T-SFSs to reach group consistency among multiple experts. The proposed analysis framework is novel in that it is based on T-SF information and uses an interaction and feedback mechanism to obtain group consistency between experts for EMS supplier selection. At the same time, the weights of experts are obtained by using the consistency degree and the Hamming distance measure. This study makes five major contributions, which are as follows: (1) Considering the partitioned structure among criteria, we extend the partitioned Bonferroni mean operator to accommodate the T-spherical fuzzy environment, and two aggregation operators are developed to integrate the evaluation information of experts, namely, the T-spherical fuzzy partitioned Bonferroni mean operator and the T-spherical fuzzy weighted partitioned Bonferroni mean operator. (2) We improve the emergency decision-making process using a unique quantitative weighting method to find the weights of experts. (3) To overcome the difficulty of selecting emergency medical supplies suppliers, we propose a decision-making framework that obtains the final alternative ranking by using an interaction and feedback mechanism to disclose the conflicts between experts while they provide their assessments. This framework reconciles differing expert opinions in light of the limitations of existing studies and the complex nature of decision problems. (4) To demonstrate the viability of the designed algorithm, the advised technique is applied to a supplier selection example for emergency medical supplies. (5) Finally, a sensitivity analysis confirms that the suggested technique is robust, and comparison with current approaches further demonstrates the suggested method's efficacy and superiority.
The study is structured as follows: "Preliminaries" section elaborates on the basic knowledge of T-SFSs, including its definition, basic operations, score and accuracy functions, Bonferroni mean operator, and partitioned Bonferroni mean operator. In "T-spherical fuzzy partitioned Bonferroni mean operator" section, two aggregation operators are introduced, and some of their prominent properties are discussed. "Group decision-making framework based on IFM and aggregation operators for T-SFN" section presents a group decision-making framework based on IFM and aggregation operators for T-SFN. In "Numerical example and comparative analysis" section, the methodology proposed in the previous section is applied to an actual instance to obtain ranking results, and the results are examined from two aspects. The section also includes a comparative analysis of the proposed methodology with existing methods. Finally, "Conclusion" section provides a brief conclusion of the study.
Preliminaries
In this section, some basic concepts related to T-SFSs are recalled to aid understanding of the research ahead.

Let U be a universal set. A Pythagorean fuzzy set (PyFS) P on U can be represented as P = {(σ, (µ_P(σ), v_P(σ))) : σ ∈ U}, where µ_P(σ), v_P(σ) ∈ [0, 1] are called the MD and NMD of the element σ ∈ U with respect to P. For every σ ∈ U, the condition 0 ≤ µ_P^2(σ) + v_P^2(σ) ≤ 1 is satisfied. Additionally, π_P = √(1 − µ_P^2(σ) − v_P^2(σ)) is known as the degree of indeterminacy of σ ∈ U. For simplicity, (µ_P(σ), v_P(σ)) is called a PyF number (PyFN) and is denoted p = (µ, v).

Definition 2.3 36. Let U be a universal set. A q-rung orthopair fuzzy set (q-ROFS) Q on U can be represented as Q = {(σ, (µ_Q(σ), v_Q(σ))) : σ ∈ U}, where µ_Q(σ), v_Q(σ) ∈ [0, 1] are called the MD and NMD of the element σ ∈ U with respect to Q. For every σ ∈ U, the condition 0 ≤ µ_Q^q(σ) + v_Q^q(σ) ≤ 1 is satisfied, and π_Q = (1 − µ_Q^q(σ) − v_Q^q(σ))^{1/q} is known as the degree of indeterminacy of σ ∈ U. For simplicity, (µ_Q(σ), v_Q(σ)) is called a q-ROF number (q-ROFN) and is denoted Q = (µ, v).

Definition 2.4 45. Let U be a fixed set. A spherical fuzzy set (SFS) S on U can be represented as S = {(σ, (µ_S(σ), η_S(σ), v_S(σ))) : σ ∈ U}, where µ_S(σ), η_S(σ), v_S(σ) ∈ [0, 1] are called the MD, AD, and NMD of the element σ ∈ U with respect to S. For every σ ∈ U, the condition 0 ≤ µ_S^2(σ) + η_S^2(σ) + v_S^2(σ) ≤ 1 is satisfied, and π_S = √(1 − µ_S^2(σ) − η_S^2(σ) − v_S^2(σ)) is known as the degree of indeterminacy of σ ∈ U. For simplicity, (µ_S(σ), η_S(σ), v_S(σ)) is called an SF number (SFN) and is denoted s = (µ, η, v).

45 Let U be a fixed set. A T-SFS A on U can be represented as A = {(σ, (µ_A(σ), η_A(σ), v_A(σ))) : σ ∈ U}, where µ_A(σ), η_A(σ), v_A(σ) ∈ [0, 1] are called the MD, AD, and NMD of the element σ ∈ U with respect to A. For every σ ∈ U, it satisfies the condition 0 ≤ µ_A^q(σ) + η_A^q(σ) + v_A^q(σ) ≤ 1 for q ≥ 1.
Definition 2.8 61. Suppose that r ≥ 0, s ≥ 0 with r + s > 0, and that a_k (k = 1, 2, ..., n) is a collection of non-negative numbers. The BM operator is defined as

BM^{r,s}(a_1, a_2, ..., a_n) = ( (1/(n(n − 1))) Σ_{i,j=1; i≠j}^{n} a_i^r a_j^s )^{1/(r+s)}.
The BM operator has the capability to take into consideration the interaction between any two inputs. Depending on how the arguments relate to one another, the input variables may be divided into a number of autonomous groups; these groups may exhibit interaction internally while remaining independent of one another.

Definition 2.9 62. Suppose that r ≥ 0, s ≥ 0 with r + s > 0, and that a_k (k = 1, 2, ..., n) is a collection of non-negative numbers partitioned into d distinct groups P_1, P_2, ..., P_d. The PBM operator is defined as

PBM^{r,s}(a_1, a_2, ..., a_n) = (1/d) Σ_{h=1}^{d} ( (1/(|P_h|(|P_h| − 1))) Σ_{i,j∈P_h; i≠j} a_i^r a_j^s )^{1/(r+s)},
where |P_h| is the cardinality of P_h, d is the number of independent groups, and Σ_{h=1}^{d} |P_h| = n.
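For crisp (real-valued) inputs, the two operators can be sketched directly from the definitions; the T-spherical fuzzy versions developed in the next section replace these arithmetic operations with T-SF operational laws. A minimal illustration:

```python
def bm(a, r, s):
    """Bonferroni mean BM^{r,s}: averages interactions over every ordered pair i != j."""
    n = len(a)
    total = sum(a[i] ** r * a[j] ** s
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** (1.0 / (r + s))

def pbm(a, r, s, partition):
    """Partitioned Bonferroni mean: BM inside each group, arithmetic mean across groups.
    `partition` is a list of index lists (each of size >= 2) covering 0..len(a)-1."""
    inner = [bm([a[i] for i in group], r, s) for group in partition]
    return sum(inner) / len(inner)

a = [0.2, 0.5, 0.9, 0.4]
print(bm(a, 1, 1))                        # plain BM over all inputs
print(pbm(a, 1, 1, [[0, 1, 2, 3]]))      # single partition: identical to BM
print(pbm(a, 1, 1, [[0, 1], [2, 3]]))    # two independent criterion groups
```

Note that with a single partition containing all inputs, PBM reduces to BM, and both operators are idempotent (all inputs equal c gives c).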
T-spherical fuzzy partitioned Bonferroni mean operator
Considering the partitioned structure among criteria, we extend the PBM operator to accommodate the T-spherical fuzzy environment. Based on the results of the definitions discussed above, we give the definitions of the T-spherical fuzzy partitioned Bonferroni mean (T-SFPBM) operator and the T-spherical fuzzy weighted partitioned Bonferroni mean (T-SFWPBM) operator in this section.

Definition 3.1 Let a_k = (µ_k, η_k, v_k) (k = 1, 2, ..., n) be a collection of T-SFNs divided into d distinct groups P_1, P_2, ..., P_d. For any r, s ≥ 0 with r + s > 0, the T-SFPBM operator is defined as follows:

$\mathrm{T\text{-}SFPBM}^{r,s}(a_1, \ldots, a_n) = \frac{1}{d} \bigoplus_{h=1}^{d} \left( \frac{1}{|P_h|} \bigoplus_{i \in P_h} \left( a_i^{r} \otimes \left( \frac{1}{|P_h| - 1} \bigoplus_{\substack{j \in P_h \\ j \ne i}} a_j^{s} \right) \right) \right)^{\frac{1}{r+s}},$

where ⊕ and ⊗ denote the sum and product operations on T-SFNs, $|P_h|$ is the cardinality of $P_h$, d is the number of independent groups, and $\sum_{h=1}^{d} |P_h| = n$.
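Definition 3.1 can be sketched by combining the usual T-SFN operational laws; the sum (⊕), product (⊗), scalar multiple, and power used below follow the forms commonly assumed in the T-SFS literature, so treat this as an illustration rather than the paper's exact implementation:

```python
# A T-SFN is a tuple (mu, eta, v); q is the global rung parameter.
def tsfn_add(a, b, q):                      # a (+) b
    return ((a[0]**q + b[0]**q - a[0]**q * b[0]**q) ** (1 / q),
            a[1] * b[1], a[2] * b[2])

def tsfn_mul(a, b, q):                      # a (x) b
    return (a[0] * b[0],
            (a[1]**q + b[1]**q - a[1]**q * b[1]**q) ** (1 / q),
            (a[2]**q + b[2]**q - a[2]**q * b[2]**q) ** (1 / q))

def tsfn_scale(lam, a, q):                  # lam . a
    return ((1 - (1 - a[0]**q) ** lam) ** (1 / q), a[1]**lam, a[2]**lam)

def tsfn_pow(a, lam, q):                    # a ^ lam
    return (a[0]**lam,
            (1 - (1 - a[1]**q) ** lam) ** (1 / q),
            (1 - (1 - a[2]**q) ** lam) ** (1 / q))

def tsfpbm(tsfns, partitions, r, s, q):
    """T-SFPBM sketch: Bonferroni aggregation inside each partition (each group
    assumed to contain at least two criteria), then averaged over the d groups."""
    groups = []
    for part in partitions:
        acc = None
        for i in part:
            inner = None                    # (+) over j != i of a_j^s
            for j in part:
                if j != i:
                    t = tsfn_pow(tsfns[j], s, q)
                    inner = t if inner is None else tsfn_add(inner, t, q)
            inner = tsfn_scale(1 / (len(part) - 1), inner, q)
            prod = tsfn_mul(tsfn_pow(tsfns[i], r, q), inner, q)
            acc = prod if acc is None else tsfn_add(acc, prod, q)
        groups.append(tsfn_pow(tsfn_scale(1 / len(part), acc, q), 1 / (r + s), q))
    total = None
    for g in groups:
        total = g if total is None else tsfn_add(total, g, q)
    return tsfn_scale(1 / len(partitions), total, q)

# Idempotency sanity check: aggregating identical T-SFNs returns that T-SFN.
print(tsfpbm([(0.7, 0.3, 0.4)] * 4, [[0, 1], [2, 3]], r=1, s=1, q=3))
```

A useful sanity check for any such operator is idempotency: aggregating n copies of the same T-SFN returns that T-SFN, which the final print confirms up to floating-point rounding.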
Group decision-making framework based on IFM and aggregation operators for T-SFN
Group decision-making usually involves two or more experts solving a decision problem. However, because different experts have distinct social experiences and personal beliefs, their decisions frequently reflect bias. It can therefore be challenging for specialists to agree upon a course of action in group decision-making. Experts' evaluations of any object can be expressed in terms of T-SFNs. In this section, an IFM using T-SF information is proposed to assist experts in reaching consistency when dealing with a group decision-making problem. The flowchart of the decision-making process is shown in Fig. 1.
To find the best alternative in group decision-making, an IFM-based technique is designed in the T-SF setting. The computing steps are executed as follows:

Step 1. Obtain the T-SF decision matrices $R^{(l)} = (a^{(l)}_{ij})_{m \times n}$ ($l = 1, 2, \ldots, L$) concerning the experts' opinions, where $a^{(l)}_{ij}$ denotes the assessment of the l-th expert for alternative $x_i$ under criterion $G_j$.

Step 2. (Determination of the weights of experts) To determine the weights of the experts, the following procedure is given:

Step 2.1. First, we find the consistency degree between the evaluation matrices provided by the experts. Let $R^{(1)} = (a^{(1)}_{ij})_{m \times n}$ and $R^{(2)} = (a^{(2)}_{ij})_{m \times n}$ be any two T-SF preference relation matrices. Then, the consistency degree between $R^{(1)}$ and $R^{(2)}$ is defined as follows 63:

$CD(R^{(1)}, R^{(2)}) = 1 - \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} d\left(a^{(1)}_{ij}, a^{(2)}_{ij}\right),$

where d is the Hamming distance.
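A sketch of Step 2.1 follows; the normalized Hamming distance used here is one common form for T-SFNs and is an assumption on our part, since the paper's exact Eq. (15) is not reproduced above:

```python
def hamming(a, b, q):
    """A normalized Hamming distance between two T-SFNs (an assumed common form)."""
    return 0.5 * (abs(a[0]**q - b[0]**q) + abs(a[1]**q - b[1]**q) + abs(a[2]**q - b[2]**q))

def consistency_degree(R1, R2, q):
    """CD(R1, R2) = 1 - average Hamming distance over all m x n matrix entries."""
    m, n = len(R1), len(R1[0])
    total = sum(hamming(R1[i][j], R2[i][j], q) for i in range(m) for j in range(n))
    return 1.0 - total / (m * n)

# Two tiny 1 x 2 preference matrices of T-SFNs (illustrative values).
R1 = [[(0.7, 0.3, 0.4), (0.6, 0.2, 0.5)]]
R2 = [[(0.7, 0.3, 0.4), (0.5, 0.3, 0.6)]]
print(consistency_degree(R1, R1, q=3))  # 1.0: identical matrices agree fully
print(consistency_degree(R1, R2, q=3))
```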
Step 2.2. 63 Using the definition of the consistency degree, the overall consistency level $CD_l$ of expert $E^{(l)}$ is defined as the average of its consistency degrees with the other experts.

Step 2.3. 63 The weights of the experts are calculated by normalizing the overall consistency levels:

$w_l = \frac{CD_l}{\sum_{k=1}^{L} CD_k}, \quad l = 1, 2, \ldots, L.$

Step 3. Aggregate the matrices $R^{(l)} = (a^{(l)}_{ij})_{m \times n}$ ($l = 1, 2, \ldots, L$) into the T-SF group decision matrix by utilizing the weights of the experts 63.

Step 4. Normalize the group T-SF decision matrix. If a criterion is of benefit type, do nothing; if it is of cost type, convert it into a benefit-type criterion.

Step 5. Compute the consistency degree $CD_l$ ($l = 1, 2, \ldots, L$) between each expert's evaluation matrix and the group decision matrix by utilizing Eq. (15). Obviously, $CD_l \in [0, 1]$: the larger the value of $CD_l$, the higher the consensus among the experts. In general, achieving unanimous agreement among experts is impossible. Therefore, a soft consensus is adopted in the CRP, and a consensus threshold $\delta \in [0, 1]$ is predefined to measure the consistency degree among the experts.

Step 6. If $CD_l > \delta$, then a consensus is achieved and we move directly to Step 8. Otherwise, the feedback process is carried out to promote consensus.

www.nature.com/scientificreports/

Step 7. (Feedback mechanism) If $CD_l$ is less than the predefined consensus threshold, then, taking the evaluation matrix of the expert with the highest consistency degree as a reference, the expert with the lowest consistency degree adjusts his assessment values to form a new decision matrix, and we return to Step 2 to recalculate the weights of the experts and reapply the whole procedure to compute $CD_l$.
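Steps 2.2–2.3 and the consensus test of Steps 6–7 can be sketched given the pairwise consistency degrees (all names and the example numbers below are illustrative, not the paper's data):

```python
def expert_weights(pairwise_cd):
    """Steps 2.2-2.3 (sketch): the overall consistency of expert l is the average
    of its pairwise consistency degrees with the other experts; the weights are
    these values normalized to sum to one (assumed normalization)."""
    L = len(pairwise_cd)
    overall = [sum(pairwise_cd[l][k] for k in range(L) if k != l) / (L - 1)
               for l in range(L)]
    total = sum(overall)
    return [c / total for c in overall]

def needs_feedback(cd_to_group, delta):
    """Steps 6-7: experts whose consistency with the group matrix is not above
    the threshold delta must revise their assessments (feedback mechanism)."""
    return [l for l, cd in enumerate(cd_to_group) if cd <= delta]

# Illustrative pairwise consistency degrees for three experts; the first expert
# agrees least with the others, so it receives the smallest weight.
pairwise = [[1.00, 0.62, 0.60],
            [0.62, 1.00, 0.80],
            [0.60, 0.80, 1.00]]
w = expert_weights(pairwise)
print(w, needs_feedback([0.65, 0.85, 0.80], delta=0.72))  # expert 0 must revise
```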
Step 8. When $CD_l$ exceeds the predefined threshold, we apply the T-SFWPBM^{r,s} operator of Eq. (9) to obtain the aggregated decision matrix.

Step 9. The score function of Eq. (5) is calculated to identify the best among all the alternatives: the alternative with the highest score value is the best one.
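Step 9 can be sketched as follows; the score form µ^q − v^q is an assumption standing in for the paper's Eq. (5), and the aggregated values are made up for illustration:

```python
def score(tsfn, q):
    """An assumed T-SFN score function (the paper's Eq. (5) may differ): mu^q - v^q."""
    mu, _, v = tsfn
    return mu**q - v**q

def rank_alternatives(aggregated, q):
    """Step 9: order alternative indices by descending score value."""
    return sorted(range(len(aggregated)), key=lambda i: score(aggregated[i], q),
                  reverse=True)

# Illustrative aggregated T-SFNs for alternatives x1..x4.
alts = [(0.8, 0.2, 0.3), (0.5, 0.3, 0.6), (0.6, 0.2, 0.4), (0.7, 0.2, 0.4)]
print(rank_alternatives(alts, q=3))  # [0, 3, 2, 1], i.e. x1 > x4 > x3 > x2
```

With these illustrative values the ordering happens to match the x_1 > x_4 > x_3 > x_2 ranking reported later in the paper.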
Numerical example and comparative analysis
This section demonstrates the usability and efficiency of the technique suggested above through a numerical example of supplier selection for emergency medical supplies. The impact of the parameters on the decision results and a comparative analysis with existing methods are also given.
Numerical example. Emergency medical supplies (EMS) are a collection of essential medical items that are prepared and kept in advance for use during times of crisis or emergency. These supplies are typically intended to provide medical care to individuals who have suffered injuries, illnesses, or other medical emergencies during a natural disaster, accident, or other unforeseen events that could disrupt access to medical services.
Some common examples of emergency medical supplies include: • First aid kit with necessary medical supplies such as bandages, gauze, antiseptics, and pain relievers • Automatic external defibrillator (AED) for cardiac emergencies • Oxygen tanks and masks for respiratory emergencies • Splints and braces for broken bones or fractures • Tourniquets and wound dressings for heavy bleeding • Medications such as epinephrine for allergic reactions or naloxone for opioid overdoses • Portable suction units for airway management • Intravenous (IV) fluids and supplies for dehydration or shock.
The specific items that make up emergency medical supplies may vary depending on the nature of the emergency, the location, and the level of medical expertise available. However, the goal of emergency medical supplies is to provide medical care to individuals who have suffered injuries or illnesses until they can receive proper medical treatment.
The need for EMS has increased in recent years due to the regular occurrence of disasters, including epidemics, landslides, and earthquakes. According to a survey conducted by the World Health Organization, the need for medical protective equipment has increased since the COVID-19 epidemic. Emergency supplies are essential for dealing with natural disasters, public health emergencies, and public safety events. The availability of emergency medical supplies in the case of a public emergency is crucial for ensuring social stability, protecting people's lives, and providing a considerable level of assurance for preventing the spread of disasters. How to rapidly and efficiently choose emergency supply suppliers has significant practical importance for the timely supply of emergency medical supplies in light of the ambiguity, complexity, and fuzziness of the information environment. It is also an important issue for all sectors to deal with new challenges and construct an up-to-date emergency support model. Now, we suppose that three experts D_1, D_2, D_3 assess four alternative emergency medical supply suppliers (x_1, x_2, x_3, x_4). The five attributes (Fig. 2) that are effective in prioritizing these alternatives are as follows: Supply capacity (G_1): Supply capacity is the promise made by suppliers to always have enough capacity to produce goods as agreed with enterprises. Product cost (G_2): The cost of medical supplies is a significant factor in supplier selection. Consider the supplier's pricing, discounts, and payment terms, as well as the total cost of ownership. Logistic speed (G_3): The speed at which emergency medical supplies are delivered by a supplier is a crucial factor to consider when selecting a supplier, particularly during crisis situations where time is of the essence.
To evaluate a supplier's logistic speed for emergency medical supplies, consider factors such as their delivery time, inventory management practices, distribution network, tracking and communication capabilities, and emergency response capabilities. By assessing these factors, healthcare facilities can choose a supplier that has the necessary speed and capabilities to quickly deliver high-quality emergency medical supplies during emergency situations. Product quality (G 4 ): The quality of medical supplies is essential for patient safety and healthcare outcomes. Consider the supplier's reputation, certifications, and quality assurance procedures, as well as product testing and validation. Financial stability (G 5 ) : Selecting a financially stable supplier for emergency medical supplies is crucial to ensure a reliable supply of essential products, particularly during crisis situations. To evaluate the financial stability of potential suppliers, consider their payment terms, creditworthiness, insurance coverage, contract terms, and business continuity planning. By assessing these factors, healthcare facilities can choose a supplier that has a stable financial foundation, reducing the risk of supply chain disruptions during emergency situations.
By carefully evaluating suppliers based on these factors, healthcare facilities can select high-quality suppliers that can provide reliable and cost-effective medical supplies even during times of crisis or disaster. Furthermore, emergency management involves a systematic approach to prevent, prepare for, respond to, and recover from disasters or emergency situations. In this paper, the phases of emergency management are divided into four sections, which we present as 'd', as follows: Mitigation (1): It involves facility protection systems implementation, network architecture, resource allocation, and vital supply routes. It also covers the placement of early warning systems and help facilities. Buying flood and fire insurance for your home is a mitigation activity. Mitigation activities take place before and after emergencies. Preparedness (2): The preparedness phase is an integral part of the emergency management cycle and is aimed at enhancing the capacity and readiness of individuals, communities, and organizations to respond to an emergency or disaster. Its primary objective is to mitigate the impact of such situations by proactively preparing to manage and respond to them. During the preparedness phase, various activities are carried out, including emergency planning, training and education, resource management, and conducting exercises and drills. Emergency planning involves identifying potential hazards and risks, determining roles and responsibilities, and establishing procedures for communication and decision-making. Training and education are provided to emergency responders, community members, and public officials to equip them with the necessary knowledge and skills for effective response. Resource management strategies are developed to manage personnel, equipment, and supplies required during an emergency response. Exercises and drills are conducted to test emergency plans, identify areas for improvement, and enhance readiness.
Overall, the preparedness phase ensures that individuals, communities, and organizations are prepared and equipped to respond to emergencies or disasters, thereby minimizing their impact and saving lives. Effective preparedness efforts are critical for reducing the impact of emergencies or disasters.
Response (3): In certain circumstances, it starts when early warning systems or risk-monitoring alerts warn the authorities of an impending disaster. Overall, the goal of the response phase is to stabilize the situation, protect lives and property, and begin the process of recovery. Successful response efforts are critical to minimizing the impact of a disaster and ultimately reducing its overall cost.
Recovery (4):
The recovery phase is the final phase in the emergency management cycle and begins after the emergency or disaster has been brought under control. Its main goal is to restore the community and its infrastructure to its pre-disaster state or a new, improved state. The recovery phase typically involves activities such as assessing the damage caused by the disaster, cleaning up and restoring damaged infrastructure and property, supporting the psychological well-being of those affected by the disaster, and rebuilding the affected community to be more resilient to future disasters. Recovery efforts may take a considerable amount of time and require coordination among various organizations such as government agencies, non-governmental organizations, businesses, and community groups.
The nature of the emergency itself dictates these choices and actions, which presents a unique set of obstacles and challenges when creating or constructing management support systems. By following a structured approach to emergency management that encompasses all four phases, organizations can help mitigate the impact of disasters or emergencies, protect lives and property, and ensure that the affected communities recover as quickly as possible. Here we take q = 3, r = 1, s = 1, five attributes that are independent of each other with weighting vector (0.20, 0.35, 0.10, 0.24, 0.11), and the consistency threshold δ = 0.72. The evaluation information provided by the three experts D_1, D_2, D_3 for the four alternatives (x_1, x_2, x_3, x_4) with respect to the attributes (G_1, G_2, G_3, G_4, G_5) is given in the form of T-SFNs in Tables 1, 2, 3. The proposed method is applied to select the best supplier for EMS; the concrete steps are as follows.
Step 1. Construct the T-SF decision matrices according to the Eq. (16).
Step 2. The weights of experts are computed as follows: Step 2.1 By using Eqs. (15), (16), and (17), the consistency degree between the evaluation matrices is obtained,
Step 2.2
By using Eq. (18), the overall consistency degree is computed, which is given as follows:
Step 4. To obtain the normalized group T-SF decision matrix, Eq. (21) is utilized; the result is given in Table 5 (Normalized group decision matrix $M_1 = (a_{ij})_{4 \times 5}$).

Step 6. We already set the consistency threshold δ = 0.72, and we can observe that CD_1 < δ: the consistency level of the first expert is low. So we apply the feedback mechanism to reach a consensus between the individuals; for that, we enter Step 7.
Step 7. We can see that the consistency level of the second individual is greater than that first and third, i.e. CD 2 > CD 3 > CD 1 . So, we can use the second expert's evaluation information as a reference to adjust the assessment information of the first expert. The revised/adjusted assessment matrix is shown in Table 6.
Then, we return to Step 2 and reapply Steps 2.1, 2.2, and 2.3 to compute the weights of the experts, which are given as follows: w_1 = 0.3361, w_2 = 0.3322, w_3 = 0.3316. Now, by utilizing these weights, the new group decision matrix $M_2 = (a_{ij})_{4 \times 5}$ is obtained in Table 7, and Table 8 gives the normalized group decision matrix.
Furthermore, the consistency measures between the individual decision matrices and the group decision matrix are computed by Eq. (22). We can see that CD_l > δ for every expert, which satisfies the consistency-level condition, so M_2 is the final group decision matrix. Now we move to the next step to obtain the finest alternative.
Influence of parameters on the final result.
In some situations PFSs and SFSs fail to deal with information data because of their restrictions, which are MD + AD + NMD ≤ 1 and (MD)² + (AD)² + (NMD)² ≤ 1, respectively; the T-SFS then appears as a useful tool to deal with ambiguous and uncertain data under the condition (MD)^q + (AD)^q + (NMD)^q ≤ 1. In this section, we analyze the effects of different values of q, r, and s on the decision results to test the flexibility and sensitivity of the parameters. First, we observe the impact of the parameter q. PFSs and SFSs cannot be employed on the information data provided by the experts because of the restrictions defined for these two sets; for this reason, we take 3 ≤ q ≤ 10 while the parameters r and s remain unchanged. The results are shown in Table 9 and graphically interpreted in Fig. 3. According to the score values obtained by using different values of the parameter q and reapplying all the steps of the proposed methodology, there is a change in the score values of the alternatives, but the best alternative x_1 and the worst alternative x_2 remain the same. The ranking order of the alternatives also remains unchanged for the different parameter values, namely x_1 > x_4 > x_3 > x_2. It is also worth noting that as the value of q increases, the score values decrease. In this study, the complexity of the decision environment and conditions is represented by the value of q: the greater the value of q, the greater the complexity involved, so q should be chosen according to the complexity of the situation to achieve appropriate decision results.

Table 10. Influence of parameters 'r' and 's' on the ranking of alternatives.
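The observation that the score values shrink as q grows can be sketched under an assumed score form µ^q − v^q (the paper's Eq. (5) may differ in detail):

```python
a = (0.8, 0.2, 0.3)   # a fixed illustrative T-SFN assessment

def score(tsfn, q):
    mu, _, v = tsfn
    return mu**q - v**q   # assumed score form; grades below 1 shrink as q grows

scores = [score(a, q) for q in range(3, 11)]
print(scores)   # strictly decreasing as q grows from 3 to 10
```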
By considering the influence of parameter changes, we can observe that changes in the parameters q, r, and s have no appreciable effect on the final decision results of the proposed methodology. The best alternative is x_1, which demonstrates the resilience of our suggested decision model and its ability to produce stable outcomes despite adverse environmental changes. In actual decision-making situations, experts are advised to choose the parameter values according to the situation to obtain more accurate results.

Comparative analysis. Fuzzy sets 31 can only deal with the MD of the elements and fail to deal with situations when the NMD of the elements is also involved. IFSs 34 can tackle situations where both MD and NMD are involved. The invention of PFSs brought many applications of fuzzy set theory to decision-making issues of daily life, since they can deal with MD, AD, and NMD. Afterward, Mahmood et al. 45 came up with T-SFSs by relaxing the constraints of PFSs, which cannot deal with scenarios in which the sum of MD, AD, and NMD becomes greater than one. T-SFSs allow experts to give their assessments in a broader way with minimal restrictions. For this reason, we used T-SFSs to obtain the evaluation information from the experts in this study.
In this section, we compare our proposed approach with the T-SF weighted averaging (T-SFWA) operator 60, the T-SF weighted geometric (T-SFWG) operator 60, the T-SF power weighted averaging (T-SFPWA) operator 47, the T-SF power weighted geometric (T-SFPWG) operator 47, and the T-SF Einstein interactive averaging (T-SFEIA) operator 64. The final results can be seen in Table 11 and are graphically interpreted in Fig. 5.
From Table 11, it can be observed that with different existing methods the ranking order of the alternatives is slightly different, but the best alternative x_1 is the same. Furthermore, a qualitative analysis is given in Table 12, which shows a qualitative review of the various methods discussed above. Each of these existing methods has its own advantages. However, the detailed comparative analysis given above reveals that the technique proposed in this study is more effective, operational, and robust; in addition to being easy to use and calculate, it can flexibly aggregate information by adjusting specific parameters to meet the preferences of the experts. At the same time, the presented technique fully considers the interaction of the experts and provides a feedback mechanism to obtain a consistency degree between them and decrease conflicts. Therefore, the designed methodology makes the evaluation outcomes of supplier selection for EMS more accurate and scientific.

Table 11. Ranking of alternatives with existing methods. T-SFWA 60: SCR(a_1) = 0.1356, SCR(a_2) = 0.1030, SCR(a_3) = 0.1137, SCR(a_4) = 0.1178; ranking x_1 > x_4 > x_3 > x_2.
Conclusion
To effectively manage public emergencies and save human lives, a rational and scientific evaluation of EMS suppliers is crucial. The selection of EMS suppliers is a hot research topic in MAGDM. The primary goal of this work was to create a new paradigm for efficiently addressing MAGDM problems. First, some fundamental information on T-SFSs, the BM operator, and the PBM operator is presented. Then, the T-SFPBM operator is established to integrate the individuals' information. At the same time, a method is proposed to find the experts' weights and use them to obtain the group decision information provided by the experts. Moreover, an IFM- and T-SFPBM-operator-based MAGDM methodology is designed to rank the given alternatives. Finally, a real-life example of selecting an emergency medical supply supplier is discussed to verify the proposed approach's applicability. The stability and validity of the proposed technique are further evidenced by a sensitivity analysis and a comparative analysis with several existing approaches. In conclusion, the method used in this paper is more adaptable and has a broad range for conveying uncertain information. It is a suitable tool for combining vague and ambiguous data in decision-making because the provided information can be stated more precisely and definitively within it. Although the proposed approach has been successfully applied to the problem of selecting an emergency medical supplies supplier, there are still some limitations that can be addressed in future studies as a continuation of this work. For example, special attention has been paid to the use of attribute weights in the aggregation functions of MAGDM methods. This is a very important issue, since different calculations of the attribute weights can change the ranking of the possible alternatives; new methods can be proposed to obtain the weights of the attributes. Moreover, experts usually provide their assessments qualitatively, which we have not considered within this research.
Given the above limitations, this study presents several directions. Future work can use other fuzzy decision-making methods, such as complex T-SFSs 65, Lt-SFSs 26, and q-rung orthopair hypersoft fuzzy sets 66, to give experts more freedom when providing their evaluation information. In this paper, the designed approach is applied to the selection of EMS suppliers during disasters. For future studies, we can apply this method to other real-world decision problems and expand its applications, such as green supplier selection 16,67,68, medical diagnosis 69,70, and safety risk assessment 71.
"year": 2023,
"sha1": "b44c57fefd2c97a1ddae4633cb2210d28e6fbc92",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "769b8ff86c4dd7c2465368f95f6457c27f2c2221",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evaluation of concept drift adaptation for acoustic scene classifier based on Kernel Density Drift Detection and Combine Merge Gaussian Mixture Model
Based on the experimental results, all concept drift types have their respective hyperparameter configurations. Simple and gradual concept drift share a similar pattern that requires a smaller α value than recurring concept drift because, in these types of drift, new concepts appear continuously, so high-frequency model adaptation is needed. In recurring concepts, however, a new concept may repeat in the future, so lower-frequency adaptation is better. Furthermore, high-frequency model adaptation can lead to an overfitting problem. Implementing the CMGMM component pruning mechanism helps to control the number of active components and improves model performance.
Introduction
Usually, in machine learning and predictive analysis, the training and testing data are random variables generated independently from an underlying stationary distribution [1]. However, in many applications, especially in acoustic scene classification, the system is confronted with different data distributions for several reasons, such as changes in user behavior or environment, nonstationary noise, diverse sound events, audio events overlapping in time or frequency, and echoic or reverberant operating conditions. The situation where the data distribution changes is known as concept drift [2]. As a result, models undergo performance degradation in this situation [3].
In [4], we proposed the Combine Merge Gaussian Mixture Model (CMGMM) and Kernel Density Drift Detection (KD3) as methods to detect and adapt to the concept drift problem in acoustic scene classification. CMGMM is an algorithm based on the Gaussian Mixture Model that can adapt to concept drift by adding or modifying components to accommodate the emerging concept.
KD3 is a concept drift detection algorithm based on the kernel density estimation (KDE) method, which estimates the probability density function of one random variable. By comparing two density functions, it is possible to establish the degree of variation between the corresponding variables: the greater the variation, the more evidence for concept drift. In addition to detecting concept drift, KD3 also acts as a data collector for the adaptation process by marking a warning zone where the data begin to show concept drift symptoms.
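The density comparison behind KD3 can be sketched as follows; this is an illustrative reimplementation (the bandwidth, grid, and area-between-densities statistic are our choices, not the exact KD3 algorithm):

```python
import math
import random

def kde(samples, x, bandwidth=0.3):
    """Gaussian kernel density estimate of the samples, evaluated at point x."""
    norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples) / norm

def window_variation(win_a, win_b, lo=-6.0, hi=6.0, points=200):
    """Area between the two estimated densities; a large value suggests drift."""
    step = (hi - lo) / points
    grid = [lo + k * step for k in range(points + 1)]
    return sum(abs(kde(win_a, x) - kde(win_b, x)) for x in grid) * step

rng = random.Random(0)
win1 = [rng.gauss(0, 1) for _ in range(200)]
win2 = [rng.gauss(0, 1) for _ in range(200)]   # same concept, new sample
win3 = [rng.gauss(2, 1) for _ in range(200)]   # shifted distribution: drifted concept
print(window_variation(win1, win2), window_variation(win1, win3))
```

Comparing two windows drawn from the same distribution yields only sampling noise, while the shifted window produces a much larger variation, which is the signal a drift detector thresholds against.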
In [4], we showed that KD3 plays an essential role in model adaptation. The algorithm requires three hyperparameters, namely α, β, and ɦ. Hyperparameter α is the margin of window variation used to detect concept drift, β is the margin for accumulating the window variation over time, and ɦ is the window length. In [4], a grid search was performed to find the optimal hyperparameters: the window length ɦ was set to 45, the drift margin α to 0.1, and the warning margin β to 0.0001. Those hyperparameters were obtained from datasets containing random concept drift in three scenarios.
Based on the experimental results, hyperparameter α can significantly affect the performance of the model. Reducing α makes KD3 more sensitive to changes. A sensitive KD3 can lead to an overfitting problem: the number of components grows significantly, and redundant mixture components located close together are introduced. Overfitting can have an adverse impact on predictions and then degrade model performance.
This paper aims to improve and evaluate the CMGMM and KD3 accuracy by tuning the hyperparameter in several concept drift types and scenarios.
2 The component pruning algorithm

CMGMM tends to increase the number of components because it works by merging two mixture models (the current and the adapted model). This mechanism leads to an overfitting problem as the frequency of adaptation increases due to a sensitive KD3 hyperparameter.
As a solution to this problem, we propose a component reduction mechanism in CMGMM based on the component weight and covariance (Σ). The components to be reduced are identified by the ratio of the squared weight to the squared covariance being close to zero. In practice, components with weight very close to zero are ignored by the model, and components with large covariance tend to overlap with other components.
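A minimal sketch of such a pruning step follows, assuming per-component weights and scalar covariances; the thresholds are illustrative, not the paper's tuned values:

```python
def prune_components(weights, covariances, weight_eps=1e-3, max_cov=5.0):
    """Drop components whose weight is near zero or whose covariance is large
    enough to overlap neighbours, then renormalize the surviving weights.
    (Illustrative thresholds; not the paper's exact criterion.)"""
    keep = [k for k, (w, c) in enumerate(zip(weights, covariances))
            if w > weight_eps and c < max_cov]
    total = sum(weights[k] for k in keep)
    return keep, [weights[k] / total for k in keep]

weights = [0.50, 0.30, 0.0004, 0.1996]
covs = [0.8, 1.1, 0.9, 12.0]          # the last component is overly broad
keep, new_w = prune_components(weights, covs)
print(keep, new_w)  # components 0 and 1 survive with renormalized weights
```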
Experiment Setup
This experiment evaluates CMGMM and KD3 accuracy systematically using combinations of the hyperparameters α from 0.1 to 0.001, β from 0.0001 to 0.000001, and ɦ from 45 to 300. Furthermore, we also evaluate the performance of the component pruning mechanism using prequential (interleaved-test-then-train) evaluation with a maximum of 100 instances per test or train step.
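The prequential protocol can be sketched as follows; the toy majority-class model merely stands in for CMGMM to show how each batch is tested before it is used for adaptation (all names and the example stream are ours):

```python
class MajorityModel:
    """Toy incremental classifier standing in for CMGMM: it predicts the most
    frequent label seen so far and updates its counts on partial_fit."""
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else None
    def partial_fit(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

def prequential_accuracy(stream, model, batch_size=100):
    """Interleaved test-then-train: each batch is first predicted, then used to
    update the model; per-batch (window) accuracies are returned."""
    window_acc = []
    for start in range(0, len(stream), batch_size):
        batch = stream[start:start + batch_size]
        correct = sum(model.predict(x) == y for x, y in batch)
        window_acc.append(correct / len(batch))
        for x, y in batch:
            model.partial_fit(x, y)      # adapt only after the batch was tested
    return window_acc

# A stream whose label switches mid-way, mimicking a simple concept drift.
stream = [(None, "park")] * 150 + [(None, "metro")] * 150
acc = prequential_accuracy(stream, MajorityModel(), batch_size=100)
print(acc)  # accuracy collapses in the batch right after the drift
```

The accuracy dip in the final batch mimics the performance degradation a drifted concept causes before the model has had a chance to adapt.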
Dataset
The dataset for training and evaluating KD3 and CMGMM was generated by selecting 15 types of scenes from the TUT Acoustic Scenes 2017 dataset [5] and the TAU Urban Acoustic Scenes 2019 dataset [6]. We then generated four types of concept-drifted datasets in three event audio placement scenarios, namely T1, T2, and T3. In T1, several types of unique event sounds related to each scene were overlaid several times with random timing and gains. In scenario T2, several sounds were randomly selected from a particular group of event sounds, making the concept drift of this scenario more complicated than that of scenario T1. Scenario T3 was quite similar to scenario T2; the difference was that a group of event sounds could exist in several scenes. The overall dataset consisted of 12,000 audio segments of ten seconds each, equally distributed between 15 different scenes and annotated with their ground-truth labels.
As shown in Fig. 1, four types of concept drift (simple, gradual, recurring type 1, and recurring type 2) were used to generate the dataset.

Effects of hyperparameters β and ɦ

Figure 2 shows the average accuracy across all concept drift types according to hyperparameters β and ɦ. Judging from the results, we found that hyperparameters β and ɦ have no significant effect on accuracy, whereas hyperparameter α affects performance significantly. Therefore, in the rest of the paper, we discuss the experimental results for hyperparameter α; in those experiments, β and ɦ are set to 0.001 and 45, respectively. Table 1 lists the experimental results for the four concept drift types and event audio placement scenarios. In general, each concept drift type has its respective hyperparameters according to its characteristics. For example, simple and gradual concept drift have similar characteristics in that no concepts repeat, so their hyperparameter patterns are also similar. Furthermore, how the audio events are placed has no significant effect: the simple concept drift hyperparameter on T1 is the same as that for T2 and T3.
In simple concept drift, the model with hyperparameter α = 0.05 shows the best results. The value α = 0.05 is quite sensitive, because the number of concept drifts detected increases significantly. If the hyperparameter α is increased, the frequency of model adaptation decreases. Therefore, by accelerating the update frequency, the loss received by the model is reduced. This pattern occurs in the T1, T2, and T3 scenarios. Furthermore, in cases where new concepts replace old concepts that do not reappear, the model must increase the model adaptation frequency.
Gradual concept drift requires a more sensitive hyperparameter than simple concept drift because the emerging concept increases gradually. Therefore, reducing α to increase the adaptation frequency increases model performance. The optimal hyperparameter is α = 0.01: although T1 accuracy is not the best for this type of concept drift, the average accuracy is higher and the number of adaptations is better than with α = 0.005. Recurring concept drift type 1 differs from the simple and gradual concept drift patterns. In this type of concept drift, a small α that is sensitive to change tends to perform poorly. The experimental results also show that the optimal hyperparameter values are similar to those used in [4] (α = 0.1, β = 0.001, and ɦ = 45). One advantage of this configuration is that the large difference between α and β creates a larger warning zone, so the model has enough data for adaptation. The results obtained in this experiment are also in line with those obtained in previous studies, where the dataset contained a random pattern; in random-pattern concept drift, a new concept that emerges is likely to repeat itself in the future.
Recurring concept drift type 2 is a combination of recurring and gradual concept drift, and it follows the same pattern as recurring concept drift type 1. In this type, the best results are also obtained with α = 0.1, and performance decreases if we reduce the hyperparameter α.
For all types of concept drift, reducing α below 0.01 makes the number of components increase significantly. Table 2 shows that in the initial training (E1), the average number of components for simple concept drift (A) is 5.67 per scene; by the end of the experiment (En), the average had roughly quadrupled to 21.8 per scene. This significant growth in the number of components degrades the model's performance, because a GMM with too many components is prone to overfitting.
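A minimal sketch of weight-based component pruning (an assumed rule for illustration; the paper's exact pruning criterion may differ): components whose mixture weight falls below a threshold are dropped and the remaining weights renormalised, capping the component growth reported in Table 2.

```python
import numpy as np

def prune_components(weights, means, covariances, min_weight=0.02):
    """Drop GMM components with weight below min_weight; renormalise the rest."""
    weights = np.asarray(weights, dtype=float)
    keep = weights >= min_weight                 # components worth keeping
    pruned_w = weights[keep]
    pruned_w = pruned_w / pruned_w.sum()         # weights again sum to one
    pruned_means = [m for m, k in zip(means, keep) if k]
    pruned_covs = [c for c, k in zip(covariances, keep) if k]
    return pruned_w, pruned_means, pruned_covs
```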
Effects of the component pruning algorithm
As figure 3 shows, applying component pruning to counter the overfitting problem yields better results for almost all drift types and scenarios. For simple concept drift, accuracy increases by 0.99% in T1, 0.7% in T2, and 0.4% in T3; for gradual concept drift, by 5.55% in T1, 1.29% in T2, and 0.4% in T3. In models that require high-frequency adaptation, such as simple and gradual concept drift, component pruning controls the number of active components and improves accuracy. However, accuracy decreased by 1.27% in T2 of recurring concept drift type 1 and by 0.18% in T1 of recurring concept drift type 2. Figure 4 shows detailed results for simple concept drift in the T1 scenario in terms of mean and window accuracy, where window accuracy is the accuracy measured over each prequential batch of 100 data points. The figure shows a clear performance gain in certain ranges when component pruning is used; for example, from 1700 to 2900, CMGMM with component pruning achieves better window accuracy than plain CMGMM.
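The window accuracy used in Figure 4 can be sketched as follows (a straightforward reading of the definition above; function and variable names are ours):

```python
def window_accuracy(y_true, y_pred, batch_size=100):
    """Accuracy over consecutive prequential batches of batch_size predictions."""
    assert len(y_true) == len(y_pred)
    accuracies = []
    for start in range(0, len(y_true), batch_size):
        yt = y_true[start:start + batch_size]
        yp = y_pred[start:start + batch_size]
        correct = sum(t == p for t, p in zip(yt, yp))
        accuracies.append(correct / len(yt))
    return accuracies
```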
Based on the experimental results, each concept drift type has its own hyperparameter configuration. Simple and gradual concept drift follow a similar pattern and require a smaller α value than recurring concept drift, because in these types a new concept appears continuously, demanding high-frequency model adaptation. In recurring concepts, by contrast, a new concept may repeat in the future, so lower-frequency adaptation is better. Moreover, high-frequency model adaptation can lead to overfitting; implementing the CMGMM component pruning mechanism helps control the number of active components and improves model performance.
A Mathematical Construction of an E6 Grand Unified Theory
Of the five exceptional groups, $\mathrm{E}_6$ is considered the most attractive for unification due to the following reasons: (i) it contains both $\mathrm{Spin} (10) \times \mathrm{U}(1)$ and $\mathrm{SU} (3) \times \mathrm{SU}(3) \times \mathrm{SU}(3)$ as maximal subgroups, each of which admits embeddings of the Standard Model; (ii) uniquely among the exceptional groups, it admits complex representations; in particular, its 27 dimensional fundamental representation accommodates one generation of left-handed fermions under the usual charge assignments; (iii) all of its representations are anomaly-free. In this master's thesis, written in the spirit of Baez and Huerta's "The Algebra of Grand Unified Theories", we rigorously show how an $\mathrm{E}_6$ grand unified theory is mathematically constructed. Our modest contribution to the literature includes an explicit check that the $\mathbb{Z}_4$ kernel of the homomorphism $\mathrm{Spin} (10) \times \mathrm{U}(1) \to \mathrm{E}_6$ acts trivially on every fermion; we also formulate symmetry breaking, in particular the symmetry breaking of the exotic $\mathrm{E}_6$ fermions under $\mathrm{Spin} (10) \to \mathrm{SU}(5)$, using a different approach than the usual Dynkin diagrams: we explicitly embed $\mathfrak{su}(5) \hookrightarrow \mathfrak{so}(10) \cong \mathfrak{spin} (10)$ and solve the related eigenvalue problem. Phenomenological aspects of grand unified theories are also discussed.
Declaration

I hereby declare that I am the sole author of this thesis. Where the work of others has been consulted, this is duly acknowledged in the references that appear at the end of this text. I have not used any other unnamed sources. All verbatim or referential use of the sources named in the references has been specifically indicated in the text. This work was not previously presented to another examination board and has not been published.
21st September 2017, München

Acknowledgements

I am indebted to Prof. Hamilton for his guidance and patience during the writing of this thesis, and for making me a better mathematician and researcher in the process. Gratitude is also owed to Dr. Robert Helling for many enlightening discussions, and for providing the inspiration for the final chapter of this paper.
I was lucky to have the finest colleagues at LMU, some of whom deserve a special mention: Alex Tabler and Danu Thung for their companionship, and the endless conversations about mathematics, physics, LaTeX, and everything in between; Martin Dupont for his camaraderie and careful proofreading of the final draft, and Agnes Rugel, who gently walked me through my many existential crises. Finally, I must thank my parents, to whom I owe everything, and my brother, for being a constant source of joy in my life.

… able to make more precise the connection between the SU(5) fermion representation Λ*C^5 and the spinor representations Δ_± of Spin(10). Chapters 3.2 and 3.3 contain our modest contributions to the literature. In the former, we explicitly check that the Z_4 kernel of the homomorphism Spin(10) × U(1) → E_6 acts trivially on every fermion; this is absolutely essential (in the cascade of unified theories that we consider) for E_6 to extend the Spin(10) theory, and hence the Standard Model. We believe the reason that this result does not appear anywhere in the (predominantly physics) literature on the subject is the same reason that the Z_6 kernel of the homomorphism G_SM → SU(5) is rarely mentioned: physicists are often content to deal with these symmetry groups at the level of Lie algebras, which are indifferent to finite quotients of Lie groups. This affection for Lie algebras extends to their discussions of symmetry breaking in grand unified theories, which are almost universally analysed using Dynkin diagrams and related techniques. While computationally preferable, we felt that following this method would break with the spirit of the rest of the paper, so we attempted to understand symmetry breaking, in particular the symmetry breaking of the exotic E_6 fermions under Spin(10) → SU(5), using a different approach: we explicitly embedded su(5) ↪ so(10) ≅ spin(10), and then solved the related eigenvalue problem; this is the work of chapter 3.3.
The result of this calculation is table 3.1, where one sees how the Standard Model fermions and their new exotic compatriots fit into the fundamental representation of E_6. This apparent bounty of new physics was the impetus for the final chapter, on the phenomenology of grand unified theories.
To avoid getting lost in quantum field theory, we restricted ourselves to the following question in chapter 4: are there any predictions of grand unified theories that come solely out of representation theory? One of the most famous is certainly the Weinberg angle, and we treat this in section 4.1. We also consider in some detail, because it has a rather nice mathematical interpretation, the issue of anomaly cancellation; this is not a phenomenological prediction of grand unified theories per se, but rather, a requirement on their fermion representations: in section 4.2, we present Okubo's proof [70] that all representations of E 6 are anomaly-free. We devote the final section of this paper to a brief but general discussion on the signatures of grand unified theories, and their present outlook.
The Standard Model
Any discussion of grand unification must begin with the Standard Model of particle physics. This rather uninspiringly-named theory is in fact a theory of almost everything: it describes three of the four known fundamental forces in the universe (the electromagnetic, weak, and strong interactions), as well as classifying all known elementary particles. It was developed in stages throughout the latter half of the twentieth century, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. The history of this development is a fascinating subject in its own right, featuring brilliant scientists, and set against the backdrop of some of the darkest periods of the last century. We refer the reader to [21] for the history, and to [51] for a remarkable collection of scientific essays from the pioneers of the field. Since those early days, experimental confirmation of Standard Model predictions has only added to its credence: highlights include the discovery of the top quark in 1995, the tau neutrino in 2000, and the Higgs boson in 2012. Indeed, it can be said that the stunning experimental success of the Standard Model is often a cause for frustration among modern theoretical physicists, many of whom are holding out for evidence of new particles to lend support to the many projects of physics "beyond the Standard Model"; a highly-readable overview of the major contenders can be found in [61].
We have already encountered some of the shortcomings of the Standard Model: it does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the accelerating expansion of the universe as possibly described by dark energy; the model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology; it also does not incorporate neutrino oscillations and their non-zero masses. Understanding these difficulties is beyond the scope of this paper, but it is nevertheless clear that they are a strong motivation to look for other, hopefully more complete theories. In any case, the Standard Model lies at the heart of all model-building, of which grand unified theory is a part, so we absolutely must understand it before we move on.
P
For a treatment more geared towards physicists, the book of Fuchs and Schweigert [31] is an excellent resource.
In section 1.1.2, we will formulate and motivate the two fundamental principles of the representation theory of particle physics. Though these rules are surprisingly easy to state and work with, their origins do require some preparation to appreciate, since they are best encountered within the framework of mathematical gauge theory. References for this field abound; we mention but three: for the mathematically inclined, there is the venerable text by Bleecker [17], and the lecture notes by Hamilton [45]; the physicist can turn to Nakahara [68] for his concise and clear presentation.
B D M G
We follow [4] in these paragraphs. A Lie group is a group G which is also a smooth manifold, such that the maps G × G → G, (g, h) ↦ gh, and G → G, g ↦ g^{-1}, are smooth.
A homomorphism ρ: G → H of Lie groups is a homomorphism of groups which is also a smooth map. A subgroup H ⊂ G is said to be normal if and only if gHg^{-1} = H for all g ∈ G; a Lie group is called simple if it possesses no non-trivial connected normal subgroups.
A representation of G on V, where V is a finite-dimensional vector space over a field K = R or C, can be thought of as a map G × V → V, (g, v) ↦ g · v, such that for all g, h ∈ G, v ∈ V, we have • e · v = v and g · (h · v) = (gh) · v, and • g · v is a continuous function of g and v, which is additionally K-linear in v.
By choosing a basis for V, we get isomorphisms V ≅ K^n for some integer n and End(V) ≅ K^{n×n}. A linear subspace W ⊂ V is called G-invariant if g · w ∈ W for all g ∈ G, w ∈ W. A representation is said to be irreducible if its only G-invariant subspaces are the trivial ones, 0 and V; else, it is said to be reducible. Irreducible representations will be fundamental in what follows; for brevity, we will call them irreps.
The general linear group GL(V) of V is the group of invertible endomorphisms of V; it is an open subset of End(V), and hence a smooth manifold. The product and inverse maps in GL(V) are smooth, so GL(V) is a Lie group. If the dimension of V over K is n, we will write GL(n, K) for GL(V).
We can choose on V a Hermitian form ⟨·, ·⟩. Since Lie groups possess the structure of a manifold, it is sensible to talk about tangent vectors; if the Lie group also happens to be modelled on some vector space V, as the classical groups are, then the tangent spaces at each point will in fact be isomorphic to V. More will be said once we make the following Definition 1.1.1 (Lie Algebra). A Lie algebra g is a vector space over a field K equipped with an operation [·, ·]: g × g → g called a Lie bracket, which satisfies the axioms of bilinearity, antisymmetry, and the Jacobi identity. We will follow the standard convention of denoting the Lie algebra of a group by the same letters in lower-case Fraktur font, e.g. the Lie algebra of G is denoted g. (iv) For any representation V of G, V is a representation of g.
(v) For a matrix group G acting on V as a subgroup of the endomorphism group, its Lie algebra g acts in the same way, by matrix multiplication.
(vi) If G acts on V and we are given a homomorphism ρ: H → G of Lie groups, so that H acts on V, the resulting action of h on V is X · v = (dρ(X)) · v, where X ∈ h.
The proof of these statements can be found in any of the references listed at the beginning of this section. We note that in item (iv) above, a representation of a Lie algebra g is defined in the obvious way: a map g × V → V that is linear in X ∈ g and respects the Lie bracket, [X, Y] · v = X · (Y · v) − Y · (X · v).
The final definition in this section is of significant importance to us, and it goes as follows.
G acts on itself by conjugation, ψ_g: G → G, h ↦ ghg^{-1}, and this is clearly a homomorphism. We hence obtain by item (iii) in the theorem above, for each g ∈ G, a corresponding Lie algebra automorphism dψ_g: g → g. This is the adjoint representation of G on its Lie algebra, Ad: G → Aut(g). The differential of this representation gives the adjoint representation of the Lie algebra on itself; this map is ad: g → gl(g), ad(X)(Y) = [X, Y].
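As a concrete illustration (our own, using SU(2) as the matrix group), one can verify numerically that the conjugation action Ad(g)X = gXg^{-1} respects the Lie bracket, i.e. that each Ad(g) is a Lie algebra automorphism:

```python
import numpy as np

def Ad(g, X):
    # adjoint action of the group on its Lie algebra: X -> g X g^{-1}
    return g @ X @ np.linalg.inv(g)

def bracket(X, Y):
    # matrix Lie bracket [X, Y] = XY - YX
    return X @ Y - Y @ X

theta = 0.7
g = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # an element of SU(2)
X = np.array([[1j, 0], [0, -1j]])                 # i*sigma_3, in su(2)
Y = np.array([[0, 1], [-1, 0]], dtype=complex)    # i*sigma_2, in su(2)

# Automorphism property: [Ad(g)X, Ad(g)Y] = Ad(g)[X, Y]
lhs = bracket(Ad(g, X), Ad(g, Y))
rhs = Ad(g, bracket(X, Y))
```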
S
In the mathematics of particle physics, all vector spaces must be complex because of foundational axioms in quantum mechanics (see the Dirac-von Neumann axioms [24, 86]). With this in mind, the two fundamental principles of the representation theory of particle physics can be stated in a few words: given a gauge symmetry group G of a theory, the fermions (matter fields) of this theory are basis vectors of unitary irreps of G, while the gauge bosons, which mediate forces, are basis vectors of its complexified adjoint representation (which is irreducible if G is simple). In this section, we will try to motivate these claims, assuming some prior knowledge of gauge theory, the arena in which classical field theory plays out. For a field theory on a spacetime, the Lagrangian is the name given to a function that describes the dynamics and all the interactions of the fields. There are a few core principles that one must follow when attempting to write down such a function, of which we will only consider symmetry: the Lagrangian, and hence the laws of physics, should be invariant under the transformations of some specified symmetry group G (such as the Poincaré group, which encodes the symmetry of Minkowski spacetime). It is clear at the outset that the fields (particles) out of which the Lagrangian is built must live in irreps of G, to keep the invariance under G manifest. The unitarity requirement on the irreps then arises quite naturally from the desire to compute observables (matrix elements). For example, a state |ψ⟩ ↦ U|ψ⟩ under a Poincaré transformation U, so an observable would transform as ⟨φ|A|ψ⟩ ↦ ⟨φ|U† A U|ψ⟩. Therefore, we need U†U = Id, i.e. U must be unitary, if the matrix element is to be invariant. It is more involved to see why the gauge bosons live in the adjoint representation; we begin with a plausibility argument from physics known as minimal coupling. Consider a matter field ψ(x): for local gauge transformations, ψ(x) ↦ g(x)ψ(x), and ordinary derivatives transform as ∂_μ ψ(x) ↦ g(x) ∂_μ ψ(x) + (∂_μ g(x)) ψ(x), that is, inhomogeneously.
What we would like instead is a gauge-covariant derivative D_μ, which transforms as D_μ ψ(x) ↦ g(x) D_μ ψ(x). To achieve this, we define D_μ = ∂_μ + A_μ, where A_μ is a Lie algebra-valued 1-form written in a local basis. As the notation suggests, this is our gauge boson, and it is forced to have the transformation law A_μ ↦ g A_μ g^{-1} − (∂_μ g) g^{-1}. The first term is the desired adjoint transformation of A_μ. The second term vanishes if g(x) is taken to be constant locally; this is referred to as a rigid gauge transformation. The rigid gauge transformations form a group isomorphic to G.
By way of motivation, the above should suffice (and is often the last word in physics textbooks). But let us go deeper. In mathematics, matter fields like ψ(x) are sections of the vector bundle P ×_ρ V → M associated to a G-principal bundle P → M by a representation ρ: G → GL(V). The derivative D above is usually denoted ∇ in mathematics, and is in fact the covariant derivative on the vector bundle associated to a connection 1-form on the principal bundle; in coordinate-free notation, for a vector field X ∈ Γ(TM), it outputs a section of the associated bundle [44]. In quantum field theory, particles in general are described as excitations of a given vacuum field; in the case of a gauge field, one has to declare the vacuum field to be a certain specific connection 1-form A_0 on the principal bundle, with reference to which all other gauge fields are then described; by the result stated in the previous paragraph, this difference (excitation) A − A_0 can then be identified with a 1-form on the spacetime M, with values in Ad(P), that hence transforms in the adjoint representation of G. Now that we have the transformation rules for all the particles in our theory, a natural question arises: how can the gauge bosons be said to "mediate forces"? The mathematical mechanism is in fact quite straightforward. When we say that a force is invariant under the action of some group, this corresponds to the statement that any physical process caused by this force should be described by an "intertwining operator", which is a linear operator that respects the action of the group under consideration. More precisely, suppose that V and W are finite-dimensional Hilbert spaces on which some group G acts as unitary operators. Then a linear operator f: V → W is an intertwining operator if f(g · v) = g · f(v) for every g ∈ G and v ∈ V. Now we saw in theorem 1.1.3 that a representation ρ: G × V → V of a group gives rise to a representation of its Lie algebra g on V; we think of this as the linear map dρ: g ⊗ V → V.
It is easy to check that this map is an intertwining operator, and it hence gives the gauge bosons agency to act on particles.
The Fundamental Forces
We begin our brief exposition of the Standard Model proper with the representation theory of quantum chromodynamics (QCD), since it is the most straightforward application of the principles that we encountered in the previous section. Many great minds contributed to the development of this theory, but it was the trio of Fritzsch, Gell-Mann and Leutwyler who formulated the concept of colour as the source of a "strong field" in a Yang-Mills theory in 1973 [29]. This will be followed by a description of the weak force, which will then be expanded to include electromagnetism in the section on the electroweak interaction; this milestone in the history of unification was due to independent work by Glashow [38], Salam [78] and Weinberg [88], for which they were jointly awarded the Nobel prize in 1979. We will follow the article of Baez and Huerta [8] in this section, and in the one following, on the fermion representation of the Standard Model.
The Strong Interaction
Let us begin with the nucleons of high-school chemistry, the protons and neutrons. It turns out that they are not fundamental particles, but are instead made up of other particles called quarks, which come in a number of different flavours. It takes two flavours to make protons and neutrons, the up quark u and the down quark d: the proton can be written as p = uud, and the neutron, n = udd (the notation will be clarified momentarily). It follows from the charges of the proton (+1) and neutron (0) that u has charge 2/3, and d, −1/3.
Quark confinement is one of two defining characteristics of QCD, the other being asymptotic freedom. The latter is unfortunately outside the scope of this paper; we refer the reader to a review article by Gross, one of the discoverers of asymptotic freedom [39]. Now quark confinement, loosely speaking, is the statement that the force between quarks does not diminish as they are separated; thus, they are forever bound into hadrons such as the proton and the neutron. Let us try to understand this mathematically. Each flavour of quark comes in three different states called colours: red (r), green (g), and blue (b).
This means that the Hilbert space for a single quark is C^3, with r, g, and b the standard basis vectors. The colour symmetry group SU(3) acts on C^3 in the obvious way, via its fundamental representation. Since both up and down quarks come in three colour states, there are really six kinds of quarks in matter: three up quarks, u_r, u_g, u_b ∈ C^3, spanning a copy of C^3, and similarly for down quarks. The group SU(3) acts on each space. All six quarks taken together span the vector space C^3 ⊕ C^3 ≅ C^2 ⊗ C^3, where C^2 is spanned by the flavours u and d. Confinement amounts to the following decree: all observed states must be white, i.e. invariant under the action of SU(3). Hence, we can never see an individual quark, nor particles made from two quarks, because there are no vectors in C^3 or C^3 ⊗ C^3 which transform trivially under SU(3). But we do see particles made up of three quarks, such as nucleons, because there are unit vectors in C^3 ⊗ C^3 ⊗ C^3 fixed by SU(3). Indeed, as a representation of SU(3), C^3 ⊗ C^3 ⊗ C^3 contains precisely one copy of the trivial representation: the antisymmetric rank-three tensors, Λ^3 C^3. This one-dimensional vector space is spanned by the wedge product of all three basis vectors, r ∧ g ∧ b ∈ Λ^3 C^3, so up to normalisation, this must be the colour state of a nucleon. We also now see that the colour terminology is well-chosen, since an equal mixture of red, green, and blue light is white. Hence, confinement is intimately related to colour. An explanation of the quark flavours is postponed until the next section.
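The invariance of Λ^3 C^3 under SU(3) can be checked numerically; the following sketch (our illustration) transforms the totally antisymmetric tensor by an element of SU(3) and recovers the same tensor, since ε'_{abc} = ε_{ijk} U_{ia} U_{jb} U_{kc} = det(U) ε_{abc} and det(U) = 1:

```python
import numpy as np

# The Levi-Civita tensor spanning Lambda^3 C^3
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[i, j, k] = -1.0

# A real rotation embedded in SU(3): unitary with det(U) = 1
t = 0.4
U = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0,          0,         1]])

# Transform all three tensor slots by U
eps_transformed = np.einsum("ijk,ia,jb,kc->abc", eps, U, U, U)
```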
We will have much more to say about the quarks, but as an introduction, what we have above suffices: the strong force is concerned with the quarks; the up and down quarks together span the representation C^2 ⊗ C^3 of SU(3), where C^2 is trivial under SU(3). In the previous section, we took the trouble to understand how gauge bosons transform and act, and now we reap the fruits of that labour: from the standpoint of representation theory, all there is to say is that the strong force is mediated by the gluons, usually denoted by g, which live in C ⊗ su(3) = sl(3, C), the complexified adjoint representation of SU(3). They act on quarks via the standard action of sl(3, C) on C^3.
The Weak Interaction
Our story of the weak force begins, interestingly enough, in early attempts to describe the strong force, particularly in the work of Heisenberg in 1932 [49]. He hypothesised that the proton and neutron were the two possible observed states of a nucleon; a nucleon would hence live in the simplest Hilbert space possible for such a setup: C^2 = C ⊕ C. Shortly thereafter, in 1936, Cassen and Condon [20] suggested that the C^2 space of nucleons is acted upon by SU(2), emphasising the analogy with the spin of an electron, which is also described by vectors in C^2 acted upon by SU(2). The property that distinguishes the proton from the neutron was hence dubbed isospin: the proton was declared to be isospin up, I_3 = +1/2, and the neutron isospin down, I_3 = −1/2. The charge and the isospin of the nucleons were seen to be related in the following simple way: Q = I_3 + Y/2, where Y is a quantity called hypercharge which depends only on the "family" of the particle. For the moment, this simply means that Y is required to be constant on representations of the isospin symmetry group, SU(2). To now understand how all of this relates to the modern theory of weak interactions, we have to introduce a new particle.
Along with the electron e^- and the up and down quarks, the neutrinos ν form the first generation of fundamental fermions. They carry no charge and no colour, and interact only through the weak force, first proposed by Enrico Fermi in 1933 [26]. The weak force is chiral, i.e. it cares about the handedness of particles: every particle thus far discussed comes in left- and right-handed varieties, which we will denote by subscripts L and R respectively. Remarkably, the weak force interacts only with left-handed particles, and right-handed antiparticles. We have been silent about antiparticles until now, but they are quite simple to introduce: to each particle, there is a corresponding antiparticle, which is just like the original particle, but with opposite charge and isospin; mathematically, this just means that we pass to the dual representation. Returning to the weak interaction, when the neutron decays, for example, we always have n → p + e^-_L + ν̄_R and never n → p + e^-_R + ν̄_L.
This parity violation of the weak force, proposed by Yang and Lee in 1956 [58], is still startling; no other physics, classical or quantum, looks different when viewed in a mirror. One important corollary of this oddity is that the right-handed neutrino has never been observed directly; we will discuss this particle in the context of grand unified theories in sections 2.5 and 4.3.
The isospin mentioned above is an extremely useful quantity since it is conserved during quantum interactions; as such, we would like to extend it to weak interactions. First, for the proton and neutron to have the right isospins of ±1/2, we must define the isospins of the up and down quarks to be ±1/2 respectively (making these particles the up and down states at which their names hint). A quick check then shows that isospin is not automatically conserved in weak interactions; for example, in the above neutron decay, the left-hand side has I_3 = −1/2 while the right-hand side has I_3 = +1/2. What is needed is an extension of the concept of isospin to the leptons, i.e. the particles which do not feel the strong force, e^- and ν; simply setting I_3(ν) = 1/2 and I_3(e^-) = −1/2 does the trick. This extension of isospin is called weak isospin, and unlike the isospin of the nucleons, is an exact symmetry. We will simply refer to it as isospin from now on.
We come to the description of the weak force. This is a theory with the isospin symmetry group SU(2); the particles in the same representation are paired up in doublets, (ν, e^-)_L and (u, d)_L, with the particle with the higher I_3 on top; this is just a shorthand way of writing that these particles live in (and span) the same irrep C^2 of SU(2). The fact that only the left-handed particles are combined into doublets reflects the fact that only they participate in weak interactions. Every right-handed fermion, on the other hand, is trivial under SU(2): they are called singlets, and span the trivial representation C.
The particles in the doublets interact via the exchange of the so-called W bosons, W^+, W^0 and W^-. As we would expect, these span the complexified adjoint representation of SU(2), sl(2, C), and they act on each of the particles in the doublets via the action of sl(2, C).
We close this section with the afore-promised explanation of quark flavour splitting. Recall that colour is related to confinement; in much the same way, flavour is related to isospin. Indeed, we can use quarks to explain the isospin symmetry of the nucleons: protons and neutrons are so similar, with nearly the same mass and strong interactions, because u and d quarks are so similar, with nearly the same mass, and truly identical colours. As mentioned above, the isospin of the proton and neutron arises from the isospin of the quarks, once we define I_3(u) = 1/2 and I_3(d) = −1/2; we see that the proton p = uud obtains the right I_3 = 1/2 + 1/2 − 1/2 = 1/2, and a quick check shows the same for the neutron. This is a good start, but what we really need to do is confirm that u and d span a copy of the fundamental representation C^2 of SU(2). It turns out that the naive states u ⊗ u ⊗ d and d ⊗ d ⊗ u do not span a copy of the fundamental representation of SU(2) inside C^2 ⊗ C^2 ⊗ C^2; what is needed, for the proton for instance, is some linear combination of the I_3 = 1/2 flavour states which are made of two u's and one d. The exact linear combination required to make p and n work also involves the spin of the quarks, which is outside the scope of our discussion. What we can do, however, is see that this is at least possible, i.e. that C^2 ⊗ C^2 ⊗ C^2 really does contain a copy of the fundamental representation C^2 of SU(2). First note that any rank-2 tensor can be decomposed into symmetric and antisymmetric parts, C^2 ⊗ C^2 ≅ Sym^2 C^2 ⊕ Λ^2 C^2. Now Sym^2 C^2 is the unique 3-dimensional irrep of SU(2), and Λ^2 C^2, as the top exterior power of the fundamental representation C^2, is the trivial 1-dimensional irrep; as a representation of SU(2), we therefore have C^2 ⊗ C^2 ⊗ C^2 ≅ C^2 ⊗ (Sym^2 C^2 ⊕ C) ≅ (C^2 ⊗ Sym^2 C^2) ⊕ C^2. So indeed, C^2 is a subrepresentation of C^2 ⊗ C^2 ⊗ C^2. As a final remark, we note that the NNG formula still works for quarks, provided we define the hypercharge for both quarks to be Y = 1/3.
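The decomposition C^2 ⊗ C^2 ≅ Sym^2 C^2 ⊕ Λ^2 C^2 can likewise be verified numerically (our illustration): the symmetric and antisymmetric projectors have ranks 3 and 1, and both commute with the diagonal SU(2) action U ⊗ U, so each summand is an SU(2) subrepresentation.

```python
import numpy as np

# Swap operator on C^2 (x) C^2, with e_i (x) e_j at index 2*i + j
SWAP = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        SWAP[2 * j + i, 2 * i + j] = 1.0   # e_i (x) e_j -> e_j (x) e_i

P_sym = (np.eye(4) + SWAP) / 2             # projector onto Sym^2 C^2
P_alt = (np.eye(4) - SWAP) / 2             # projector onto Lambda^2 C^2

t = 0.3
U = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])    # an element of SU(2)
UU = np.kron(U, U)                         # diagonal action on the tensor square
```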
The Electroweak Interaction
All the fermions have now been grouped into SU(2) representations based on their isospin. Let us now consider the other piece of the NNG formula, hypercharge. Just as we did for isospin, we can extend the notion of hypercharge to encompass the leptons, calling this new quantity weak hypercharge. It is a matter of simple arithmetic to see that we must have Y = −1 for both left-handed leptons; for right-handed leptons, since I_3 = 0, we must set Y = 2Q. How can we understand hypercharge? Let us frame the discussion by briefly discussing isospin again: it is an observable, and we know from quantum mechanics that it hence corresponds to a self-adjoint operator Î_3; indeed, from an eigenvalue expression like Î_3 ν = (1/2) ν, it is easy to see how this operator must act on each particle. The story with hypercharge is similar: corresponding to hypercharge is an observable Ŷ, which is also proportional to a gauge boson, although this gauge boson lives in the complexified adjoint representation of U(1).
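With the isospin and hypercharge assignments now in hand, the NNG formula Q = I_3 + Y/2 can be checked across the particles discussed so far (a bookkeeping sketch; the particle list and names are ours):

```python
# (I_3, Y, Q) for first-generation particles and the nucleons,
# using the assignments stated in the text
particles = {
    "nu_L":    ( 1/2,  -1,    0),
    "e_L":     (-1/2,  -1,   -1),
    "u_L":     ( 1/2,  1/3,  2/3),
    "d_L":     (-1/2,  1/3, -1/3),
    "proton":  ( 1/2,   1,    1),
    "neutron": (-1/2,   1,    0),
}

def nng_charge(i3, y):
    # the Nishijima-Nakano-Gell-Mann formula Q = I_3 + Y/2
    return i3 + y / 2

all_consistent = all(abs(nng_charge(i3, y) - q) < 1e-12
                     for (i3, y, q) in particles.values())
```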
The details are as follows. Particles with hypercharge Y span irreps C_Y of U(1); by C_Y we denote the one-dimensional vector space C with the action of α ∈ U(1) given by α · z = α^{3Y} z. The factor of three is inserted because Y is not guaranteed to be an integer, but only an integer multiple of 1/3. For example, the left-handed leptons ν and e^- both have hypercharge Y = −1, so each spans a copy of C_{−1}. Hence, ν, e^- ∈ C_{−1} ⊗ C^2, where the C^2 is trivial under U(1). Now, given a particle ψ ∈ C_Y, to find out how the gauge boson in C ⊗ u(1) ≅ C acts on it, we can differentiate the U(1) action above; we obtain that the Lie algebra element i ∈ u(1) acts as multiplication by 3iY. Following convention, we set the so-called B boson equal to Ŷ; particles with hypercharge interact by exchanging this boson. Note that the B boson is a lot like the familiar photon, and the hypercharge force which B mediates is a lot like electromagnetism, except that its strength is proportional to hypercharge rather than charge. The unification of electromagnetism and the weak force is called the electroweak interaction. This is a U(1) × SU(2) gauge theory, and we have now encountered it in full detail: the fermions live in representations of hypercharge U(1) and weak isospin SU(2), and we tensor these together to get representations of U(1) × SU(2). These fermions interact by exchanging B and W bosons, which span C ⊕ sl(2, C), the complexified adjoint representation of U(1) × SU(2).
We close with a word on symmetry breaking. Despite electroweak unification, electromagnetism and the weak force are very different at low energies, including most interactions in the everyday world. Electromagnetism is a force of infinite range that we can describe by a U(1) gauge theory with the photon as gauge boson, while the weak force is of very short range and mediated by the W and Z bosons: we "define" the photon γ and the Z^0 boson by the following relations: γ = cos θ_w B + sin θ_w W^0 and Z^0 = −sin θ_w B + cos θ_w W^0. We have introduced here the weak mixing angle, or Weinberg angle θ_w; it can be thought of as the parameter that characterises how far the B − W^0 boson plane has been rotated by symmetry breaking, which is the mechanism that allows the full electroweak U(1) × SU(2) symmetry group to be hidden away at low energies, and replaced with the electromagnetic subgroup U(1). Moreover, the electromagnetic U(1) is not the obvious factor U(1) × 1, but another copy, wrapped around inside U(1) × SU(2) in a manner given by the NNG formula. Unfortunately, the dynamics of electroweak symmetry breaking is outside of our scope; we refer the reader to [79, Ch. 29.1] for the details. We will discuss symmetry breaking from a representation-theoretic viewpoint in section 3.3, and return to the Weinberg angle in section 4.1.
Since U(1) is abelian, all of its irreps are one-dimensional.
The Standard Model Representation
We are now in a position to put the whole Standard Model together in a single picture. It has the gauge group G_SM = U(1) × SU(2) × SU(3), and the fundamental fermions described thus far combine in representations of this group. We summarise this information in the table below.
Name                      Symbol           G_SM-representation
Left-handed leptons       (ν_L, e⁻_L)      C_{−1} ⊗ C² ⊗ C
Left-handed quarks        (u_L, d_L)       C_{1/3} ⊗ C² ⊗ C³
Right-handed neutrino     ν_R              C_0 ⊗ C ⊗ C
Right-handed electron     e⁻_R             C_{−2} ⊗ C ⊗ C
Right-handed up quarks    u_R              C_{4/3} ⊗ C ⊗ C³
Right-handed down quarks  d_R              C_{−2/3} ⊗ C ⊗ C³

All the representations of G_SM in the right-hand column are irreducible, since they are made by tensoring irreps of this group's three factors. On the other hand, if we take the direct sum of all these irreps, we get a reducible representation containing all the first-generation fermions in the Standard Model; we call it F, the fermion representation. If we take the dual of F, we get a representation F* describing all the antifermions in the first generation. Taking the direct sum of these spaces, F ⊕ F*, we get a representation of G_SM that we will call the Standard Model representation; it contains all the first-generation elementary particles in the Standard Model. The fermions interact by exchanging gauge bosons that live in the complexified adjoint representation of G_SM. Notice that we thus have a pattern in the Standard Model: there are as many flavours of quarks as there are of leptons. The Pati-Salam model explains this pattern by unifying quarks and leptons, but we will unfortunately not treat this theory here; the interested reader is referred to [8, Ch. 3.3].
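The dimension count behind the Standard Model representation can be verified directly. The sketch below assumes the hypercharge conventions of [8]; the isospin and colour dimensions are standard, but the labels are ours.

```python
# First-generation fermions as (hypercharge-rep dim, isospin dim, colour dim).
fermions = {
    "left-handed leptons  (nu_L, e_L)": (1, 2, 1),   # C_{-1} (x) C^2 (x) C
    "left-handed quarks   (u_L, d_L)":  (1, 2, 3),   # C_{1/3} (x) C^2 (x) C^3
    "right-handed neutrino nu_R":       (1, 1, 1),
    "right-handed electron e_R":        (1, 1, 1),
    "right-handed up quark u_R":        (1, 1, 3),
    "right-handed down quark d_R":      (1, 1, 3),
}
dim_F = sum(a * b * c for a, b, c in fermions.values())
assert dim_F == 16           # the fermion representation F
assert 2 * dim_F == 32       # F + F*: the full Standard Model representation
```

The total of 32 will reappear below: it is exactly the dimension of the exterior algebra of C⁵.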
The second and third generations of quarks and charged leptons differ from the first by being more massive and able to decay into particles of the earlier generations. The various neutrinos do not decay, and for a long time it was thought they were massless, but it is now known that some, and perhaps all, are massive. This allows them to change back and forth from one type to another, a phenomenon called neutrino oscillation; the Standard Model explains this phenomenon by recourse to the famous "Higgs mechanism". For our purposes, however, the generations are identical: as representations of G_SM, each generation spans another copy of F, with the corresponding generation of antifermions spanning a copy of F*. All told, we thus have three copies of the Standard Model representation, F ⊕ F*. We will only need to discuss one generation, so we find it convenient to speak as if F ⊕ F* contains particles of the first generation. This redundancy in the Standard Model, three sets of very similar particles, remains a mystery. [8, p. 32]

In this chapter, we will encounter the earliest attempts to probe these questions. We begin by introducing some additional concepts in representation theory and motivating the exceptional Lie groups, after which we will elucidate which groups are to be considered potential grand unification groups. Then we will turn, both from necessity and because of its intrinsic interest, to the SU(5) grand unified theory. Thereafter, we will focus our attention on Clifford algebras; it is a short step from there to the Spin groups; once we understand their representations, we will prove that Spin(10) extends the Standard Model in section 2.5.
Characters, Weights and Roots
The study of characters, and of root and weight systems, is fundamental to representation theory. Our modest goals in this section, of simply defining these terms and stating the main results, will doubtless do a severe injustice to this branch of mathematics; we point to [43, Ch. 8] for a lucid introduction and additional references. We will follow [4] here. Particle physics demands that we restrict to complex representations, so let us do so right at the outset, reaping the added benefit that over C, every irrep of a compact abelian group is 1-dimensional.
Remark 2.1.1 (Structure Maps). We note that this restriction involves no sacrifice of generality. A representation V over the quaternions H is certainly a representation over C; together with a conjugate-linear structure map j : V → V such that j² = −1 and j commuting with the group action, this complex representation does in fact return the original H-representation. On the other hand, a representation U over R gives U ⊗_R C, and this carries a conjugate-linear structure map j : u ⊗ z ↦ u ⊗ z̄ such that j² = 1; we can regard U as the +1 eigenspace of j (or the −1 eigenspace).
Definition 2.1.2 (Characters).
Suppose V is a representation of G over C. Then its character is given by

χ_V(g) = tr(ρ_V(g)).

It follows from the definition that characters are class functions, i.e. χ(ghg⁻¹) = χ(h) for all g, h ∈ G. Also, χ_{V ⊕ W} = χ_V + χ_W and χ_{V ⊗ W} = χ_V · χ_W. The following result clarifies the importance of characters. A proof is found in [3, pp. 46-52]. Consider now a torus T. Because T is a compact, connected abelian group, the exponential map exp : t → T is a homomorphism, and we can thus regard T as t/Γ, where Γ := ker exp is a discrete subgroup of t, called the integer lattice of T. Homomorphisms f : T → T′ are easily described. We need only check for a linear map F : t → t′ such that F(Γ) ⊂ Γ′, and if so, then f = F̃ : t/Γ → t′/Γ′. All continuous homomorphisms arise in this way, and all 1-dimensional representations of T arise from linear maps θ : t → u(1). Here we encounter the first connection to representation theory: given a representation V of T, there are linear maps θ_j : t → u(1) such that V decomposes as a direct sum of non-zero sub-representations V_{θ_j}, where exp(X) acts on V_{θ_j} by multiplication by e^{θ_j(X)} for X ∈ t. Definition 2.1.4 (Weights). The linear maps θ_j on t are called the weights of V. The dimension of V_{θ_j} is the multiplicity of θ_j.
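For a concrete instance of these definitions, restrict the fundamental representation of SU(2) to its maximal torus diag(e^{it}, e^{-it}); the character is 2 cos t and the weights are ±1. A minimal numerical sketch:

```python
import numpy as np

# Restrict the fundamental rep of SU(2) to the torus diag(e^{it}, e^{-it});
# its character is 2 cos(t), the sum of the two weight phases e^{+-it}.
def torus_element(t):
    return np.diag([np.exp(1j * t), np.exp(-1j * t)])

t = 0.7
g = torus_element(t)
chi = np.trace(g)
assert abs(chi - 2 * np.cos(t)) < 1e-12   # character = sum of weight phases

# Characters are class functions: invariant under conjugation.
h = np.array([[0, 1], [-1, 0]], dtype=complex)   # an element of SU(2)
assert abs(np.trace(h @ g @ np.linalg.inv(h)) - chi) < 1e-12
```

Here the two 1-dimensional summands are the weight spaces with weights ±1, each of multiplicity one.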
Simple Compact Connected Lie Groups
The remarkable history of the more than one-hundred-and-fifty years of Lie theory is studied in [84]; the Killing-Cartan classification of Lie groups is arguably the highpoint of this story, and certainly a significant achievement of modern mathematics. This section is the briefest of summaries of this classification scheme, and is important for us for two reasons: (i) it motivates the existence of the exceptional Lie groups, and (ii) the roots and weights of the classical Lie groups that we will derive along the way will be instrumental in constructing E 6 .
Definition 2.1.5 (Maximal Torus). A maximal torus in a compact connected Lie group
is a subgroup which is (i) a torus, and (ii) maximal, i.e. if ⊂ ⊂ for a torus, then = .
Example 2.1.6. The maximal tori of the classical Lie groups are as follows.
(i) In U(n), consider the subgroup T of matrices diag(e^{2πiθ_1}, . . . , e^{2πiθ_n}), for θ_j ∈ R. This is a maximal torus in U(n): any matrix in U(n) which commutes with all diagonal matrices must be diagonal, and hence in T. Thus, T is maximal among all abelian subgroups, connected or not.
(ii) In SO(2n), take block-diagonal matrices whose diagonal 2 × 2 blocks are the plane rotations R(θ_1), . . . , R(θ_n); these will form a maximal torus in SO(2n). (iii) The same blocks, padded with an extra diagonal entry 1, give a maximal torus in SO(2n + 1).
(iv) In SU(n), we take the matrices of (i) subject to θ_1 + · · · + θ_n = 0 to get a maximal torus.
Maximal tori are fundamental in representation theory, as the following results demonstrate; their proofs can be found in [3]. Hence the weights (together with their multiplicities) of a representation V of G determine V up to equivalence. We also have Corollary 2.1.9. Given two maximal tori T, T′ in a compact connected Lie group G, there exists an inner automorphism of G taking T to T′.
It follows from this corollary that any property of G defined by reference to a maximal torus is independent of the choice of torus. The most important example of this is the following Definition 2.1.10 (Rank). The rank of a compact connected Lie group G is the dimension of a maximal torus of G. We will usually write l = rank G.
Suppose now that T ⊂ G is a torus (not necessarily maximal). Then G acts on its Lie algebra g via the adjoint representation, so T acts on g by restriction, and g ⊗ C splits as a sum of 1-dimensional representations of T, with T acting trivially on t ⊂ g. Thus the trivial 1-dimensional representation occurs at least k = dim T times. In fact, we have
Proposition 2.1.11. If G is compact, then T is maximal if and only if the trivial 1-dimensional representation occurs exactly k times.
A proof of this result can be found in [3, p. 83]. Henceforth, we suppose that T is maximal and set l = dim T.
Definition 2.1.12 (Roots). The roots of a compact connected Lie group G are the weights of the adjoint representation, excluding 0 (which occurs l times).
The roots are thus R-linear functions on t, that is, elements of t*. Since the adjoint representation of G is real, the 1-dimensional summands of g ⊗ C occur in conjugate pairs and the roots occur in pairs ±θ. Example 2.1.13. The roots of the classical Lie groups are as follows.
(i) For U(n), the complexification u(n) ⊗ C is gl(n, C), with conjugate-linear structure map X ↦ −X*, whose +1 eigenspace is u(n). Now take the basis {e_j} of standard column vectors for C^n; for j ≠ k, define linear maps E_{jk} by E_{jk} e_k = e_j and E_{jk} e_l = 0 for l ≠ k. The matrix of E_{jk} has a 1 in the (j, k)-th place and zeroes elsewhere. The E_{jk} are eigenvectors of the adjoint action of T with eigenvalues exp(2πi(θ_j − θ_k)), so the θ_j − θ_k are the roots for the action of T on u(n). (Here, we are taking T to be the diagonal matrices diag(e^{2πiθ_1}, . . . , e^{2πiθ_n}), θ_j ∈ R, and θ_j ∈ t* is given by θ_j(diag(x_1, . . . , x_n)) = x_j.) QED • The roots of the other classical matrix groups are computed in the same fashion. Definition 2.1.14 (Weyl Group). The Weyl group W of a compact, connected Lie group G is the group of those automorphisms of a maximal torus T which are given by inner automorphisms of G. The Weyl group acts on t and permutes roots. If we regard the roots as elements of t*, they form a configuration with great symmetry and very distinctive properties [3, Ch. 5]. The Dynkin diagram encodes this configuration, as we proceed to describe.
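The computation in (i) is easy to verify numerically; the sketch below checks, for U(3), that each matrix unit E_jk spans a root space of the adjoint action of the diagonal torus, with eigenvalue given by the root θ_j − θ_k.

```python
import numpy as np

# Adjoint action of the diagonal torus of U(3) on the matrix units E_jk:
# Ad(t) E_jk = e^{i(a_j - a_k)} E_jk, exhibiting the roots a_j - a_k.
angles = np.array([0.3, 1.1, -0.4])
t = np.diag(np.exp(1j * angles))

roots = []
for j in range(3):
    for k in range(3):
        if j == k:
            continue
        E = np.zeros((3, 3), dtype=complex)
        E[j, k] = 1.0
        AdE = t @ E @ np.linalg.inv(t)
        assert np.allclose(AdE, np.exp(1j * (angles[j] - angles[k])) * E)
        roots.append(angles[j] - angles[k])

assert len(roots) == 6   # U(n) has n(n-1) roots; they occur in pairs +-(a_j - a_k)
```

The diagonal matrix units are omitted precisely because they span t ⊗ C, the zero-weight space.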
We may choose on t* a positive-definite inner product invariant under W, so that we can define the lengths of roots and the angles between them. For each pair of roots ±θ, the kernel ker θ = ker(−θ) is a hyperplane in t called a root plane. Conversely, it can be shown that each root plane comes from only one pair of roots ±θ. The root planes form a figure in t called the (infinitesimal) Stiefel diagram.
The root planes divide t into convex open sets called Weyl chambers, and the Weyl group permutes these chambers simply transitively. We choose one and call it the fundamental Weyl chamber (FWC); we denote it by C. A root θ is positive (resp. negative) if θ > 0 (resp. θ < 0) on C. A positive root is simple if it defines a wall of C; in the Stiefel diagram of SO(5) shown in figure 2.1, the roots θ_1 and θ_1 − θ_2 are simple. The Dynkin diagram has one node for each simple root (i.e. for each wall of the fundamental Weyl chamber). These nodes are joined by a number of bonds determined by the angle between the corresponding simple roots: no bond for 90°, and one, two or three bonds for 120°, 135° and 150°, respectively. (i) For U(n), take C to have θ_1 < θ_2 < · · · < θ_n. We obtain the simple roots θ_{j+1} − θ_j. SU(n) has the same Dynkin diagram as U(n), which is traditionally labelled A_{n−1}.
We now finally have a rule which associates to a compact Lie group a Dynkin diagram. The upshot: if G and G′ are compact Lie groups, then g is isomorphic to g′ if and only if g ⊗ C is isomorphic to g′ ⊗ C; thus the Dynkin diagram determines g, and hence G, locally. In particular, corresponding to each Dynkin diagram, there is a unique compact, connected, simply connected Lie group, because to each of the diagrams in the Killing-Cartan classification, i.e. example 2.1.16 plus the exceptional Dynkin diagrams below, there is a unique simple Lie algebra (in fact, every complex simple Lie algebra is isomorphic to one of the algebras in this classification scheme), and hence a unique connected, simply connected, compact, simple Lie group. As we saw above, the groups SU(n + 1), n ≥ 1, and Sp(n), n ≥ 3, correspond to A_n and C_n respectively; the groups Spin(2n + 1), n ≥ 2, and Spin(2n), n ≥ 4, correspond to B_n and D_n. All these groups have rank n and are pairwise non-isomorphic. The non-classical or "exceptional" Dynkin diagrams are as follows; the notation and conventions are explained in [4, Ch. 9]. We are primarily interested in the Lie group corresponding to the diagram E_6, but to arrive at it, we will need to construct the group E_8; we do so in section 3.1. Additionally, we describe the construction of the smallest exceptional Lie group G_2 in an appendix, by way of an illustrative example.
Possible Grand Unification Groups
By way of motivating grand unified theories, we have already raised several questions about the unsatisfactory aesthetics of the Standard Model representation. We introduce now considerations of a more technical nature, which will help us "classify" grand unification groups as it were, i.e. understand which groups are preferred from the plenitude of available Lie groups that contain embeddings of the Standard Model.
Coupling Constants in Gauge Theories

Definition 2.2.1 (Killing Form). Let g be a Lie algebra. The map

B(X, Y) := tr(ad X ∘ ad Y)
is called its Killing form.
The symmetry and bilinearity of this form are easy to check; it is also immediately clear that the Killing form of an abelian Lie algebra is zero. More interesting is the fact that the Killing form is invariant under every Lie algebra automorphism φ, i.e. B(φX, φY) = B(X, Y). Proof. Since φ is a Lie algebra automorphism, we have ad(φX) = φ ∘ ad X ∘ φ⁻¹. We hence compute B(φX, φY) = tr(φ ∘ ad X ∘ ad Y ∘ φ⁻¹) = tr(ad X ∘ ad Y) = B(X, Y). We introduce some more terminology: if g is the Lie algebra of a compact Lie group G, it is in turn called compact; a subspace i ⊂ g is called an ideal if it is closed under the Lie bracket, and satisfies [g, i] ⊆ i; a Lie algebra is called simple if its only ideals are 0 and itself. Now, one can show that for g compact and simple, the negative of the Killing form is a positive-definite inner product; moreover, it turns out that up to a positive constant, it is the unique such form. The proof of this is not very hard, and can be found in [31, Ch. 8.1], for example. From this, one can deduce the following result (see [44, Ch. 2.10] for a proof), which will in turn finally allow us to make the definition that we are after: any Ad-invariant positive-definite scalar product on a compact Lie algebra u(1) ⊕ · · · ⊕ u(1) ⊕ g_1 ⊕ · · · ⊕ g_s (with the g_i simple) decomposes as a sum of scalar products on the summands. The scalar product on the abelian part is determined by a positive-definite symmetric matrix, and the scalar products on the simple summands are determined by positive constants relative to some fixed Ad-invariant positive-definite scalar products on the corresponding Lie algebras (such as the negative Killing form).
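The Killing form can be computed directly from the definition. The following sketch does so for su(2) in the basis iσ_k/2 (our choice of basis and normalisation), confirming that the form is negative definite, as it must be for a compact simple Lie algebra; for su(n) one has B(X, Y) = 2n tr(XY), so here B should come out as −2 times the identity.

```python
import numpy as np

# Killing form B(X, Y) = tr(ad X o ad Y) for su(2), basis {i*sigma_k/2}.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [0.5j * s for s in (s1, s2, s3)]

def ad_matrix(X):
    """Matrix of ad X = [X, -] in the chosen basis of su(2)."""
    cols = []
    for Y in basis:
        Z = X @ Y - Y @ X
        # Expand Z in the basis via the Hilbert-Schmidt inner product.
        cols.append([np.trace(E.conj().T @ Z).real / np.trace(E.conj().T @ E).real
                     for E in basis])
    return np.array(cols).T

K = np.array([[np.trace(ad_matrix(X) @ ad_matrix(Y)) for Y in basis] for X in basis])
assert np.allclose(K, -2 * np.eye(3))   # negative definite: su(2) is compact
```

The negative of K is then the (essentially unique) Ad-invariant inner product on su(2), up to the positive scale that will shortly be interpreted as a coupling constant.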
Definition 2.2.4 (Coupling Constants). The constants that determine the positive-definite Ad-invariant scalar products on the abelian ideal u(1) ⊕ · · · ⊕ u(1) and the simple summands g_i, relative to some standard Ad-invariant scalar products, are called coupling constants.
Some insight from physics is in order. Gauge couplings are simply numbers, determined by experiment, that fix the interaction strength of the field they correspond to. They are encountered most directly in pure Yang-Mills theories, which lie at the heart of both electroweak unification and QCD. We consider them briefly, returning to the framework of gauge theory; we follow [44, Ch. 7.2]. Let π : P → M be a principal bundle with the structure group G compact and finite-dimensional. Further, fix an Ad-invariant positive-definite scalar product on g as in the theorem above, and an orthonormal basis for g. For a connection 1-form A with curvature 2-form F ∈ Ω²(P, g), in a local gauge s : M ⊃ U → P, the field strength is given by the pullback F_s = s*F, and the Yang-Mills Lagrangian is then (up to normalisation conventions) L_YM = −(1/2g²)⟨F_s, F_s⟩. In the case that g is simple, for instance, there is a single coupling constant g > 0, and it is clear that this "determines the field strength" in the sense that it directly scales the inner product that appears in the Lagrangian. This brings us to unification. The coupling constants of the strong, weak, and electromagnetic interactions are known to be different at low energies (≈ 1 GeV); to wit, the strong interaction is observed to be much stronger (obviously) than the weak and electromagnetic couplings. However, in principle there is no reason that the couplings cannot be unified at high energies, because in quantum field theory these constants are in fact not constant; they depend on the energy scale. This phenomenon is known as renormalisation group running. Calculations show (see [66, Ch. 5.5]) that if the coupling constants are normalised as in the previous paragraph, i.e. taken to be orthonormal with respect to the Killing form on g_SM, the renormalisation group equations indicate that they roughly converge at high energies.
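The rough convergence can be illustrated with the standard one-loop renormalisation group equations, 1/α_i(μ) = 1/α_i(M_Z) − (b_i/2π) ln(μ/M_Z). The numerical inputs below (couplings at the Z mass, with the U(1) coupling in the GUT normalisation) are approximate textbook values, not taken from the text; the point is only that the spread between the three inverse couplings shrinks dramatically near a typical unification scale.

```python
import numpy as np

# One-loop running of the three Standard Model gauge couplings.
b = np.array([41 / 10, -19 / 6, -7])          # one-loop beta coefficients
alpha_inv_mz = np.array([59.0, 29.6, 8.5])    # approximate 1/alpha_i at M_Z
M_Z = 91.2                                    # GeV

def alpha_inv(mu):
    """1/alpha_i(mu) from the one-loop renormalisation group equation."""
    return alpha_inv_mz - b / (2 * np.pi) * np.log(mu / M_Z)

spread_low = np.ptp(alpha_inv(M_Z))           # spread at the Z mass
spread_high = np.ptp(alpha_inv(1e15))         # spread near a typical GUT scale
assert spread_high < spread_low / 5           # couplings roughly converge
```

At two loops, and especially with supersymmetric particle content, the convergence is known to be much tighter; the one-loop sketch suffices for the plausibility argument made here.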
This is a plausibility argument for a grand unification group with a single coupling constant, unifying the three forces of the Standard Model at high energies; this can only occur if the unification group is simple, or a product of identical simple groups where the coupling constant for each factor is set equal by forcing the theory to have some sort of discrete symmetry. This is the first demand that we will make of any potential grand unification group.
Chirality and Complex Representations
In mathematics, the term "complex representation" simply refers to a group representation on a complex vector space; the term as used in physics denotes something different, related to chirality. As we have seen, the weak force, and hence the Standard Model, is chiral. This unexpected feature detracts significantly from the symmetry of the rest of the theory, and one might expect that grand unified theories behave more naturally, or at least somehow explain this parity violation. But this is in fact not the case: Georgi [34] and Barbieri et al. [10] have argued that the fermions that would have to be introduced into an achiral grand unified theory, to recover the chirality of the Standard Model on symmetry breaking, would be unacceptably heavy; this is an instance of the Survival Hypothesis, which we will discuss in more detail in section 4.3. For the moment, we will content ourselves with defining a complex representation, and seeing how it is connected with chirality.
Definition 2.2.5 (Complex Representation). Two representations ρ_1 : G → GL(V_1) and ρ_2 : G → GL(V_2) are equivalent if there is an isomorphism T : V_1 → V_2 such that T ρ_1(g) = ρ_2(g) T for all g ∈ G. The complex conjugate of a representation ρ is defined on the conjugate vector space by ρ̄(g) = \overline{ρ(g)} (entrywise conjugation in any basis). A representation of a group is said to be complex if it is not equivalent to its complex conjugate representation.
The connection to handedness is fairly straightforward. We know that the way to get the antiparticle representation from the particle representation is simply to pass to the dual, which for a unitary representation is equivalent to the complex conjugate (for example, the dual of the isospin doublet C² is again C², by the self-duality of C² under SU(2)). So if we take the direct sum of the representations of all the left-handed fermions, call it F_L, it stands to reason that the direct sum of all the right-handed fermion representations is given by F_R = F̄_L. Therefore, if F_L ≅ F̄_L, i.e. if F_L is real, the theory is manifestly achiral, since the right-handed particles transform as the left; such theories are called vectorlike. On the other hand, if F_L is complex, the theory is chiral. We will hence demand that our grand unification groups admit complex representations, to preserve this feature of the Standard Model. Let us summarise our work in the following Definition 2.2.6 (Possible Unification Group). We call a Lie group G a possible unification group if it satisfies the following properties.
• G is simple, or a product of several copies of the same simple group.
• G contains (perhaps up to a finite quotient) the Standard Model gauge group G_SM.
• G admits complex representations.
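To see the content of the complex-representation condition of definition 2.2.5 in a concrete case, one can check that the fundamental representation of SU(2) is not complex: it is equivalent to its own conjugate via the intertwiner S = iσ_2. A minimal numerical sketch (the quaternionic parametrisation of SU(2) below is our own choice):

```python
import numpy as np

# The fundamental rep of SU(2) is self-conjugate: S conj(U) S^{-1} = U
# for every U in SU(2), with S = i*sigma_2.
S = np.array([[0, 1], [-1, 0]], dtype=complex)   # i * sigma_2

rng = np.random.default_rng(0)
for _ in range(5):
    # Random SU(2) element from a normalised quaternion (a, b, c, d).
    a, b, c, d = rng.normal(size=4)
    n = np.sqrt(a*a + b*b + c*c + d*d)
    a, b, c, d = a/n, b/n, c/n, d/n
    U = np.array([[a + 1j*b, c + 1j*d], [-c + 1j*d, a - 1j*b]])
    assert np.allclose(S @ U.conj() @ np.linalg.inv(S), U)
```

This is why a theory built from SU(2) alone could never be chiral; the chirality of the Standard Model comes from how the hypercharge and colour factors twist the isospin representations.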
Of these, the only ones for which we have not explicitly computed the rank are the exceptional groups G_2, F_4 and E_6. Since we will momentarily eliminate F_4 as a possible unification group, we will not bother with this computation; the rank of E_6 is computed in the proof of theorem 3.1.1, and that of G_2 in the appendix. Mehta and Srivastava have classified the complex representations of all the classical Lie groups: only the SU(n)'s for n > 2, the Spin(4n + 2)'s for n ≥ 1, and E_6 admit complex representations [64, 65]. Together with the fact that the gauge group of the Standard Model, G_SM = U(1) × SU(2) × SU(3), has rank 1 + 1 + 2 = 4, and the further requirement of simplicity from definition 2.2.6, we can immediately thin down the above list significantly.
In section 4.2 we will consider the issue of anomaly cancellation and its consequences for grand unified theories. This rather subtle requirement from quantum field theory is hard to motivate from a representation-theoretic standpoint alone (though it does have a nice interpretation in the same), and was hence omitted in this section. In any case, it has no bearing on our list of possible unification groups.

Proposition 2.2.7. The only possible grand unification groups with rank less than 7 are the following:
• rank 4: SU(3)² and SU(5);
• rank 5: SU(6) and Spin(10);
• rank 6: SU(3)³, SU(4)², SU(7) and E_6.
We provide references for these grand unified theories, where they exist. In the same paper [36] in which they proposed the SU(5) theory, Georgi and Glashow ruled out an SU(3)² theory for physical reasons, leaving SU(5) the unique rank 4 unification group; we will turn to this theory in the next section. A theory with unification group SU(6) was suggested in 2005 by Hartanto and Handoko [47], while the Spin(10) grand unified theory was put forward by Georgi in 1974 [33] and Fritzsch and Minkowski in 1975 [30]. Finally, a theory with SU(3)³ as gauge group, called trinification, was proposed by De Rújula et al. in 1984 [22], an SU(7) grand unification theory was studied by Umemura and Yamamoto in 1981 [83], and the subject of this thesis, the E_6 grand unified theory, first appeared in a 1976 paper by Gürsey et al. [40].
The SU(5) Grand Unified Theory
Georgi and Glashow's SU(5) extension of the Standard Model was the first grand unified theory, and is still considered the prototypical example of the same. Unfortunately, this theory has since been ruled out by experiment: it predicts that protons will decay faster than the current lower bound on proton lifetime allows. Our focus here will be simply to show what exactly we mean when we say that SU(5) is a grand unified theory; the questions we will ask and the methodology we will develop will be highly instructive for us when we later consider the Spin(10) theory, and eventually that of E_6. We closely follow [8] in this section.
Consider the subgroup S(U(m) × U(n)) ⊂ U(m) × U(n), consisting of those pairs of unitary matrices whose determinants multiply to 1. This Lie group is naturally a subgroup of SU(m + n) under the block-diagonal embedding (g, h) ↦ diag(g, h). The key to the whole SU(5) theory is the following: the subgroup S(U(2) × U(3)) is isomorphic to G_SM, modulo a finite subgroup. More precisely, consider the map

φ : (α, g, h) ↦ (α³g, α⁻²h);

this is clearly a homomorphism from G_SM to S(U(2) × U(3)). Equally clear is the fact that it is not injective: its kernel consists of all elements of the form (α, α⁻³I, α²I). This kernel is Z_6, because the scalar matrices α⁻³I and α²I live in SU(2) and SU(3) simultaneously if and only if α is a sixth root of unity. So in short order, we have obtained S(U(2) × U(3)) ≅ G_SM/Z_6. Detailed studies and reviews of this theory abound in the literature; see [66, 76] and references therein.
This sets up a test that the SU(5) theory must pass for it to have any chance of success: not all representations of G_SM factor through G_SM/Z_6, but all those coming from representations of SU(5) must do so. In particular, we have to check that Z_6 acts trivially on all the irreps inside F ⊕ F*, that is, it must act trivially on all fermions (and antifermions, but that amounts to the same thing). For this to be true, some non-trivial relations between hypercharge, isospin and colour must hold. Consider for example the left-handed electron e⁻ ∈ C_{−1} ⊗ C² ⊗ C; for any sixth root of unity α we need (α, α⁻³I, α²I) to act trivially on this particle. We compute

(α, α⁻³I, α²I) · e⁻ = α^{3·(−1)} α⁻³ e⁻ = α⁻⁶ e⁻ = e⁻,

since α is a sixth root of unity. In principle, there are 15 other such cases to check, but these can be reduced to just four hypercharge relations that must be satisfied:
• for the left-handed quarks, Y = even integer + 1/3;
• for the left-handed leptons, Y = odd integer;
• for the right-handed quarks, Y = odd integer + 1/3;
• for the right-handed leptons, Y = even integer.
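These relations can be checked mechanically. In the sketch below (with hypercharges as in standard conventions, reproduced here since table 1.1 lies outside this excerpt), the generator of Z_6 contributes a phase α^{3Y} from hypercharge, α⁻³ on an isospin doublet, and α² on a colour triplet; triviality amounts to the total exponent being divisible by 6.

```python
from fractions import Fraction as F

# Exponent of alpha picked up under the Z_6 element (alpha, alpha^-3 I, alpha^2 I):
# hypercharge contributes 3Y, a doublet -3, a colour triplet +2.
particles = {
    "left-handed leptons":   (F(-1),    True,  False),
    "left-handed quarks":    (F(1, 3),  True,  True),
    "right-handed neutrino": (F(0),     False, False),
    "right-handed electron": (F(-2),    False, False),
    "right-handed up":       (F(4, 3),  False, True),
    "right-handed down":     (F(-2, 3), False, True),
}
for name, (Y, doublet, triplet) in particles.items():
    exponent = 3 * Y + (-3 if doublet else 0) + (2 if triplet else 0)
    # alpha^6 = 1, so the action is trivial iff 6 divides the exponent
    assert exponent % 6 == 0, name
```

One verifies easily that the four bullet-point relations above are exactly the condition "6 divides the exponent" in each of the four particle classes.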
A glance at table 1.1 shows that all of these relations hold, so our SU(5) theory has passed its first test. We remark here that not only is Z_6 contained in the kernel of the Standard Model representation, it is in fact the entire kernel. Hence, one could say that G_SM/Z_6 is the "true" gauge group of the Standard Model. Our next order of business is to find a representation of SU(5) that extends the Standard Model representation, and there is a beautiful choice that works: the exterior algebra Λ*C⁵.
We have to check that pulling back the representation Λ*C⁵ from SU(5) to G_SM using φ gives the Standard Model representation F ⊕ F*. Our strategy will be to use the fact that, being representations of compact Lie groups, both F ⊕ F* and Λ*C⁵ are completely reducible and can be written as direct sums of irreps; we will then match them up one irrep at a time.
We already know what the decomposition of F ⊕ F* into irreps is, so let us look at Λ*C⁵. Any element g ∈ SU(5) acts as an automorphism of the exterior algebra:

g(v ∧ w) = gv ∧ gw,

where v, w ∈ Λ*C⁵. Since we know how g acts on vectors in C⁵, and these generate Λ*C⁵, this rule is enough to tell us how g acts on all of Λ*C⁵. This action respects grades, so each exterior power Λ^k C⁵ is a subrepresentation. More than that, they are all irreps of SU(5), though this is not so easy to see; we refer the reader to [32, Ch. 15.2] for a proof.
Λ⁰C⁵ and Λ⁵C⁵ are both trivial irreps of G_SM, and there are exactly two trivial irreps in F ⊕ F*, namely ⟨ν⟩ and ⟨ν̄⟩ (we use the angle brackets to denote the Hilbert space spanned by a vector or vectors). Hence, these irreps must match up; we fix a choice of matching for Λ⁰C⁵ and Λ⁵C⁵ for reasons that will be clear in a moment. Consider next the irrep Λ¹C⁵ = C⁵. The group G_SM acts on C⁵ via φ; just by inspection, we see that this action preserves a splitting of C⁵ into C² ⊕ C³, with the C² part transforming in the hypercharge representation C_1, and the C³ piece transforming in C_{−2/3}. From table 1.1, then, we can read off which fermions span Λ¹C⁵, where we once again use the self-duality of C² under SU(2).
The remainder of the irrep matching is similarly straightforward. The final result: Λ*C⁵ ≅ F ⊕ F*, as desired. Notice that our choice for Λ⁰C⁵ has led to a rather pleasing pattern: the left-handed particles transform in the even grades, while the right-handed ones transform in the odd grades. At the level of the SU(5) theory, this is nice but not essential; for the Spin(10) theory, it is the only possibility; we will return to this point in section 2.5.
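The dimension bookkeeping behind this matching is a one-liner:

```python
from math import comb

# Dimensions of the exterior powers of C^5: binomial coefficients C(5, k).
dims = [comb(5, k) for k in range(6)]
assert dims == [1, 5, 10, 10, 5, 1]
assert sum(dims) == 32                           # dim Lambda* C^5 = 2^5
assert sum(dims[0::2]) == sum(dims[1::2]) == 16  # even/odd grades: 16 + 16
```

The even/odd split into 16 + 16 matches the left-handed/right-handed split of the 32-dimensional Standard Model representation noted above.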
We have now shown everything we needed to show: the matching above defines a linear isomorphism F ⊕ F* → Λ*C⁵ between representations of G_SM, i.e. these representations are the same when we identify S(U(2) × U(3)) with G_SM/Z_6 using the isomorphism induced by φ. This can be neatly summarised in a commuting diagram, the main result of this section.
Clifford Algebras
To approach the Spin(10) grand unified theory, we need to understand Clifford algebras, which are the most natural environment in which to study the Spin groups. Moreover, many of the results that we will obtain will be required to construct E_6 in due course. Clifford algebras are a generalisation of the complex numbers, quaternions and octonions: indeed, they are sometimes constructed in the literature by adjoining the required number of square roots of −1 to the algebra of the real numbers. We will not take this route, pursuing the more formal (and standard) treatment found in [4] and [32], for example. In what follows, V is a finite-dimensional vector space over K = R or C. Definition 2.4.1 (Tensor Algebra). For a non-negative integer k, we define the k-th tensor power of V to be the tensor product of V with itself k times: V^{⊗k} = V ⊗ · · · ⊗ V. By convention, V^{⊗0} = K. Then the tensor algebra is given by

T(V) = ⊕_{k≥0} V^{⊗k}.

We define multiplication as follows: if v_1 ⊗ · · · ⊗ v_k ∈ V^{⊗k} and w_1 ⊗ · · · ⊗ w_l ∈ V^{⊗l}, then their product is v_1 ⊗ · · · ⊗ v_k ⊗ w_1 ⊗ · · · ⊗ w_l ∈ V^{⊗(k+l)}. For example, if V has a basis {x, y}, then T(V) has a basis {1, x, y, xx, xy, yx, yy, . . .} (the tensor product symbol has been omitted for brevity). In general, T(V) is a free associative algebra. Given a symmetric bilinear form ⟨·, ·⟩ on V, let I be the two-sided ideal of T(V) generated by the elements v ⊗ v − ⟨v, v⟩ · 1, and set Cl(V) := T(V)/I; this is the Clifford Algebra over (V, ⟨·, ·⟩).
It is clear that this is equivalent to the characterisation one usually sees, namely, that the Clifford algebra is the associative algebra freely generated by V with relations vw + wv = 2⟨v, w⟩. The generators v ⊗ v − ⟨v, v⟩ · 1 lie in the even part T⁰(V), so I = I⁰ + I¹, where I^i = I ∩ T^i(V), and we set Cl^i(V) = T^i(V)/I^i. Hence, Cl(V) is a Z_2-graded algebra.
is its Clifford algebra.
The proof of this standard result can be found in, for example, [4, pp. 14-15]. Its corollaries bridge the gap between our abstract construction of Clifford algebras, and the motivation of generalising the complex numbers.
Structure Maps on Clifford Algebras
We define structure maps on Clifford algebras, analogously to remark 2.1.1.
The Spin Groups
We now possess the machinery to introduce the Spin groups. We wish to see this Lie algebra as the Lie algebra of a Lie group, acting on V by conjugation, ρ(x) : v ↦ x v x⁻¹. It is easy to see that ρ is a homomorphism; let us try to identify its kernel. Suppose that x ∈ ker ρ. Then xv = vx for all v ∈ V, so by the lemma below, x must be a scalar. But x x̄ = 1, so x² = 1 ⟹ x = ±1 ⟹ ker ρ ⊂ {±1}; since the inclusion certainly holds in the other direction, we conclude that ker ρ is exactly {±1}. Lemma 2.4.11. If ⟨·, ·⟩ on V is non-singular and x ∈ Cl(V) is such that xv = vx for all v ∈ V, then x is a scalar.
Proof. By Schur's lemma, the central element −1 acts on any irrep as either 1 or −1. The irreps in which it acts as 1 are representations of E⁰/{±1}, which is an abelian group of order 2^{m−1}, so there are exactly 2^{m−1} 1-dimensional representations of E⁰ in which −1 acts as 1. Since the kernel of E⁰ → E⁰/{±1} has exactly two elements, the conjugacy classes in E⁰ are either one element (if the element is central) or two elements ±x. For E⁰, the centre is {±1} if m = 2n + 1, and {±1, ±e_1 · · · e_{2n}} if m = 2n; we can see this as follows. If we conjugate a basis monomial x = e_{i_1} · · · e_{i_k} by a suitable basis vector e_j, we change its sign, unless x = ±1 or ±e_1 · · · e_m. So if x is in the centre, x = ±1 or ±e_1 · · · e_m; the latter is central only for m even.
Recalling that the isomorphism classes of irreps (over C) of a finite group are in 1:1 correspondence with its conjugacy classes, we see that E⁰ has one (resp. two) more irreducible class(es) of representation(s) than E⁰/{±1} if m = 2n + 1 (resp. m = 2n). Proof. We have to prove the isomorphism, in the various senses, of Δ* with Δ, where Δ denotes the spinor representations in the proposition statement. Consider that, by definition, E⁰ acts on a functional h in the dual representation Δ* = Hom_C(Δ, C) as (x h)(v) = h(x⁻¹ v) for v ∈ Δ; generalising this, we see that Cl⁰(C^m) acts on Δ* accordingly. From the discussion in the final two paragraphs of the proof of proposition 2.4.15, it is clear that we have an isomorphism of the representations Δ and Δ* of the finite group E⁰, which is an isomorphism of Cl⁰(C^m)-modules. But Spin(m) ⊂ Cl⁰(C^m), so the isomorphism preserves the action of the elements x of Spin(m), provided this action is defined by (x h)(v) = h(x⁻¹ v), which is the usual action. QED We now in fact have almost everything we need to discuss the Spin(10) grand unified theory. But before we do so, we end this section by stating a technical result, required to construct E_6 (and also G_2, in the appendix). In particular, we need to understand how the representations of the Spin groups behave under certain inclusions. To this end, we first note that the inclusion K^m ↪ K^{m+1} induces an inclusion Cl(K^m) ↪ Cl(K^{m+1}),
(i) Under the inclusions
The proofs of these inclusions are found in [4, pp. 23-24]. We will henceforth denote, as we have here, the direct sum of vector spaces (and representations) by a simple + instead of an ⊕.
The Spin(10) Extension of SU(5)
Let us revisit the SU(5) theory. Viewed in a different light, the core idea behind the embedding of G_SM into S(U(2) × U(3)), which subsequently split each irrep of SU(5) into an isospin and a colour piece (each twisted with hypercharge), can be stated as follows: since the Standard Model representation is 32-dimensional, each particle or antiparticle in the first generation of fermions can be named by a 5-bit code. Roughly speaking, these bits are the answers to five binary queries.
There are subtleties when we answer "yes" to both of the first two questions, or to more than one of the last three, but we ignore this problem here; it has no bearing on our argument.
• Is the particle isospin up?
• Is it isospin down?
• Is it red?
• Is it green?
• Is it blue?
This binary code interpretation of the SU(5) theory requires the dimension of F ⊕ F* to be 32, and this raises some questions, as we shall see now.
At the time of writing, there is no direct experimental evidence for the existence of the right-handed neutrino, even though it is extremely desirable theoretically, as it could account for several phenomena that have no explanation within the Standard Model. The right-handed neutrino has a direct bearing on our grand unified theories; in particular, it presents a mystery for the SU(5) theory. SU(5) does not require us to use the full 32-dimensional representation Λ*C⁵. It works just as well with the smaller representation obtained by discarding the trivial summands, which is less aesthetically pleasing and, moreover, clearly does not allow for the existence of the right-handed neutrino. It would be nicer to have a theory that required us to use all of Λ*C⁵; better still, if our theory were an extension of SU(5), our explanation for the arbitrary hypercharges of the Standard Model particles would live on. The Spin(10) grand unified theory is an attempt at such an extension; [30] and [33] are the original references for the same.
In proposition 2.4.15, we constructed the spinor representations of Spin(2n), Δ±, each of dimension 2^(n−1). It is perhaps not immediately apparent from the somewhat technical proof of that result, but these irreps are intimately related to Λ*C^n, and we will exploit this fact to forge a path to the SU(5) theory.
Let V be a complex vector space with dim V = 2n, equipped with the standard inner product ⟨·,·⟩. Write V = W + W′, where W and W′ are n-dimensional isotropic subspaces for ⟨·,·⟩. In fact, under ⟨·,·⟩, we can simply take W to be spanned by the first n standard basis vectors, and W′ by the last n. We want Λ*W to be a module over Cl(V), i.e. we want an action of V on Λ*W satisfying v·v·ω = ⟨v, v⟩ω for any v ∈ V and ω ∈ Λ*W. To do this, we will "deform" the usual wedge product on the exterior algebra (this is sometimes referred to as Clifford multiplication of forms). For each w ∈ W, w′ ∈ W′ and ω ∈ Λ*W, define

For thorough reviews of the current theoretical and phenomenological status of this elusive particle, see [25,85] and references therein.
Recall that a space is isotropic when the chosen symmetric bilinear form restricts to the zero form on it.
where ι_{w′}: Λ^k W → Λ^(k−1) W is the usual contraction by w′; on a basis vector it acts as ι_{w′}(w_{i1} ∧ · · · ∧ w_{ik}) = Σ_j (−1)^(j+1) ⟨w′, w_{ij}⟩ w_{i1} ∧ · · · ∧ ŵ_{ij} ∧ · · · ∧ w_{ik}, where the hat denotes omission. It is immediately clear that the squares of the wedge and contraction operators vanish on their domains, and it is a straightforward exercise to check that equation (2.5.1) holds. Finally, one confirms that the resulting map Cl(V) → End(Λ*W) is an isomorphism by computing it on a basis set. QED The maps w ↦ w ∧ · and ι_{w′} are far more important than they perhaps appear. The first clue is that if we extend them to all of V = C^(2n), they are in fact adjoint with respect to the inner product induced on Λ*C^n by ⟨·,·⟩, i.e. for v ∈ C^n and ω, μ ∈ Λ*C^n, ⟨v ∧ ω, μ⟩ = ⟨ω, ι_v μ⟩. Adjoint operators are the bread and butter of quantum mechanics, so one might ask if these maps have a physical interpretation; indeed, there is one readily available. In the parlance of physics, particles are vectors, so a*_v(ω) = v ∧ ω can be said to "create" a particle of type v by wedging; analogously, a_v = ι_v "destroys" a particle of type v by contraction. In other words, these maps return, for each v, the corresponding creation and annihilation operators. It is customary to denote the creation and annihilation operators corresponding to the basis vectors of C^n by a*_j and a_j respectively, and we will do so below.
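The creation and annihilation operators are easy to realise on a computer: represent a basis form by the set of its wedge factors, with a sign determined by the ordering. The following sketch (our own encoding, not the thesis's notation) verifies the canonical anticommutation relations on all of Λ*C^3:

```python
from itertools import combinations

N = 3  # work in Lambda^* C^3; basis states are subsets of {0, 1, 2}

def create(i, state):
    """a*_i: wedge with e_i; returns (sign, new_state) or None if it kills the state."""
    if i in state:
        return None
    sign = (-1) ** sum(1 for j in state if j < i)
    return sign, state | {i}

def annihilate(i, state):
    """a_i: contract with e_i; returns (sign, new_state) or None if it kills the state."""
    if i not in state:
        return None
    sign = (-1) ** sum(1 for j in state if j < i)
    return sign, state - {i}

def anticommutator(op1, op2, state):
    """Apply op1∘op2 + op2∘op1 to a basis state; collect nonzero coefficients."""
    out = {}
    for first, second in [(op2, op1), (op1, op2)]:
        r = first(state)
        if r is not None:
            s1, mid = r
            r2 = second(mid)
            if r2 is not None:
                s2, final = r2
                key = frozenset(final)
                out[key] = out.get(key, 0) + s1 * s2
    return {k: v for k, v in out.items() if v != 0}

# Check the canonical anticommutation relations on every basis state:
# {a_i, a*_j} = delta_ij.
basis = [set(s) for k in range(N + 1) for s in combinations(range(N), k)]
for st in basis:
    for i in range(N):
        for j in range(N):
            ac = anticommutator(lambda t: annihilate(i, t),
                                lambda t: create(j, t), set(st))
            expected = {frozenset(st): 1} if i == j else {}
            assert ac == expected
print("CAR verified on Lambda^* C^3")
```

The sign bookkeeping is exactly the (−1)^(j+1) of the contraction formula above; the relations {a_i, a*_j} = δ_ij are the algebraic shadow of the Clifford relation v·v = ⟨v, v⟩.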
Consider now the splitting Λ*W = Λ^even W + Λ^odd W into the sum of even and odd exterior powers; Cl⁰(V) clearly respects this splitting. Hence, we deduce that there is an isomorphism Cl⁰(V) ≅ End(Λ^even W) + End(Λ^odd W).
Restricting now to the case n = 5, we conclude that since Spin(10) ⊂ Cl⁰(C^10), the above Clifford modules, i.e. the even- and odd-graded parts of the exterior algebra Λ*C^5, are representations of Spin(10). Moreover, by proposition 2.4.15, they are irreducible. Elements of these two irreps, Δ+ and Δ−, are called left- and right-handed Weyl spinors respectively, while elements of their direct sum, Λ*C^5, are called Dirac spinors.
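As a quick consistency check on the dimensions (nothing more than binomial arithmetic, consistent with proposition 2.4.15):

```python
from math import comb

n = 5  # Spin(10) = Spin(2n) with n = 5
dim_even = sum(comb(5, k) for k in range(0, 6, 2))  # Lambda^even C^5
dim_odd  = sum(comb(5, k) for k in range(1, 6, 2))  # Lambda^odd  C^5

# The two Weyl spinor irreps each have dimension 2^(n-1) = 16,
# and together they fill out the 32-dimensional Dirac spinors.
assert dim_even == dim_odd == 16 == 2 ** (n - 1)
assert dim_even + dim_odd == 32
```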
We are tantalisingly close now to the Spin(10) grand unified theory; there remains but one question. Does the Dirac spinor representation of Spin(10) extend the representation of SU(5) on Λ*C^5? Or more generally, does the Dirac spinor representation of Spin(2n) extend the representation of SU(n) on Λ*C^n? Recall that this latter representation acts as the fundamental representation on Λ¹C^n ≅ C^n and respects wedge products. The question is answered in the affirmative by the following theorem, which appears in a classic paper by Atiyah, Bott and Shapiro, wherein they also founded the abstract theory of Clifford modules [7].
Theorem 2.5.2. There exists a Lie group homomorphism SU(n) → Spin(2n) that makes this triangle commute:
We follow the proof as laid out in [8]. We claim that the required relations (2.5.3) hold on all of Λ*C^n. To see this, first recall that the SU(n) representation preserves wedge products; differentiating this condition, we see that the image of a Lie algebra element must act as a derivation. Since both the derivative and taking wedge products are linear, derivations on Λ*C^n are determined by their action on Λ¹C^n; hence, for equations (2.5.3) to hold on Λ*C^n, it suffices to check that all the operators on the right hand side of the equation are derivations. The annihilation operator is given by contraction, which acts on a wedge product by a Leibniz-like rule with a sign (−1)^p, where p is the degree of the form; this is almost a derivation, but not quite. On the other hand, the creation operators act in a completely different way:
The E6 Grand Unified Theory
A grand unified theory based on the exceptional group E6 first appeared in a 1976 paper by Gürsey, Ramond and Sikivie [40]. The authors were motivated by the fact that E6 has SU(3) × SU(3) × SU(3) as a maximal subgroup: they took these components to be, respectively, the symmetry groups of the left- and right-handed quarks, and the colour group of the quarks, and considered two assignments of this subgroup into a 27-dimensional irrep of E6. We will not follow their treatment in this chapter, choosing instead to focus on the following "cascade" of theories [11,42,48]: We will first construct E8 and E6 in section 3.1 below. In the process, we will see how the group (Spin(10) × U(1))/Z4 arises naturally as a maximal subgroup of E6, which will lead us directly into the proof that E6 extends the Standard Model in section 3.2. Thereafter, we will analyse the new fermions that appear in the E6 theory.
The Construction of E8 and E6
We closely follow [4] in this section. Our strategy will be the following: to describe an unknown group G, it is useful to find a known subgroup H ⊂ G of maximal rank and to give an account of G/H. The main theorem of this section is the following, the proof of which will be in stages.
Here, C denotes the fundamental representation of U(1) on C. (ii) In Spin(16)/Z2, the Z2 is generated by the product e_1 · · · e_16. The first column we will fill is that of dim G/H, proceeding thereafter to find groups which have the required representations of these dimensions. We begin with the construction of the Lie algebra of E8.
See the construction of G 2 in the appendix for a prototypical example.
The Construction of the Lie Algebra of Type E8
For E8, there is no representation of smaller degree than Ad, so let us use this fact. Take so(16) + Δ+, and consider this simultaneously over R and C; its degree is 120 + 2^7 = 248, as required. For a while we can work with Spin(2n); let us try to define a suitable inner product on its Lie algebra so(2n). By proposition 2.4.10, so(2n) ⊂ Cl⁰(C^2n) has a basis {e_i e_j | i < j}, and Δ+ is a representation of Spin(2n), and hence of so(2n) over R, i.e. for all x ∈ so(2n) and ψ ∈ Δ+, we have [x, ψ] ∈ Δ+ satisfying the Jacobi identity, where the multiplication is Clifford multiplication. Assume now that 2n ≡ 0 mod 8 and consider Δ+ as a real representation of Spin(2n). Choose an invariant inner product (·,·) on Δ+. We now transpose the action so(2n) ⊗ Δ+ → Δ+ to get a map Δ+ ⊗ Δ+ → so(2n).
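The dimension bookkeeping here is simple enough to spell out (a sketch; the formulas dim so(2n) = n(2n−1) and dim Δ+ = 2^(n−1) are the standard ones used above):

```python
n = 8  # Spin(16) = Spin(2n)
dim_so16 = (2 * n) * (2 * n - 1) // 2  # skew-symmetric 16x16 matrices: 120
dim_spinor = 2 ** (n - 1)              # half-spinor rep of Spin(16): 128

# so(16) + Delta^+ has the degree of the adjoint representation of E8.
assert dim_so16 == 120 and dim_spinor == 128
assert dim_so16 + dim_spinor == 248   # = dim E8
```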
The Construction of the Lie Group of Type E8
Our construction of the simple, connected, compact Lie group with Lie algebra of type E 8 proceeds according to the following steps.
(i) Take the Lie algebra so(16) + Δ+ (over R or C).
(ii) Take the group of automorphisms of this Lie algebra; this is a closed subgroup of GL(so(16) + Δ+) preserving the Lie bracket.
(iii) Take the identity component and call it E8. (In fact, the result of step (ii) is already connected.) All our constructions are invariant under Spin(16), over R or C, so we get a map Spin(16) → Aut(so(16) + Δ+), and since Spin(16) is connected, we get a homomorphism into E8. To find the kernel, note that e_1 e_2 · · · e_16 ∈ Spin(16) acts as (−1)^8 = 1 on Δ+. It covers −Id ∈ SO(16), so it acts as −1 on R^16, but it acts as 1 on so(16). Therefore it acts as 1 on so(16) + Δ+. This and the identity are the only elements which act as 1 on so(16) + Δ+, so we get an embedding Spin(16)/Z2 → E8. We now check that E8 has the required properties.
Let A be a finite dimensional algebra over R or C (for example, a Lie algebra) and let Aut(A) be the group of automorphisms of A, that is, linear bijections f: A → A such that f(xy) = f(x)f(y). Then Aut(A) is a closed subgroup of GL(A), hence a Lie group. Proof. Immediate from the preceding three lemmas. QED
This leads to a result of paramount importance for us.
The E6 Extension
We only need a few more things to be able to write down a theorem for E6 as a grand unified theory. Firstly, we have not shown that the above 27-dimensional representations of E6 are irreducible. This is in fact the case, but the proof of this result is unfortunately quite involved, and we will omit it in this paper; the interested reader is referred to [4, Ch. 11].
The second thing that we need to check is that these representations are unitary. This seems problematic, since we have no direct description of them; the only thing we know is their dimension, and how they restrict along Spin(10) × U(1) → E6. Fortunately, there is a way to circumvent this difficulty. We have used several times already that an equivalent characterisation of a unitary representation of a group G on a space V is the requirement that the action of G on V is an isometry; indeed, this is sometimes taken to be the definition. With this in mind, we have the following handy result, often referred to as Weyl's unitarian trick. It requires the notion of a Haar measure, which we do not define here; see [19, Ch. 1.5].
Proposition 3.2.1. Any representation of a compact group G possesses a G-invariant inner product.
Proof (Sketch). Let ⟨·,·⟩: V × V → C be any inner product, and define ⟨u, v⟩_G = ∫_G ⟨gu, gv⟩ dg, where the integral is normalised. ⟨·,·⟩_G: V × V → C is then linear in u, conjugate linear in v, G-invariant since the integral is left-invariant, and positive definite since the integral of a positive continuous function is positive. QED The direct sum of the two 27-dimensional representations, endowed with this natural E6-invariant inner product, is thus a direct sum of unitary irreps of the compact group E6. To extend theorem 2.5.3 and prove that E6 is a grand unified theory, however, we need to check something still further: we need a homomorphism from Δ+ + Δ− into this direct sum, as unitary representations of Spin(10) and E6 respectively. But since Spin(10) ↩→ Spin(10) × U(1) → E6, and we know how the 27-dimensional representations restrict to Spin(10) × U(1), it suffices to produce a homomorphism between Δ+ + Δ− and these restricted representations. But first, let us run through our usual checklist. Are the restricted representations unitary? (i) C was defined to be the fundamental representation of the unitary group U(1), so there is nothing to check here. (ii) From proposition 2.5.1, the spinor representations can be seen to be unitary: recall that these are defined via the creation and annihilation operators, which are adjoint to one another; this gives the required invariance, so these are indeed unitary representations. (iii) Spin(10) × U(1) acts unitarily on the complex representation Λ¹10 ⊗ C², where we simply use the definitions of the tensor product of representations and Hilbert spaces, and the fact that Spin(10) and U(1) each act by isometries on these representations. We use the word "restriction" here a little loosely. What we mean is that we obtain a representation of Spin(10) × U(1) as it is homomorphic to the subgroup (Spin(10) × U(1))/Z4 of E6. We will pick up this point in the next section.
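For a finite group, the Haar integral in Weyl's trick is just an average over group elements, and the whole argument can be checked by hand. A minimal sketch (the particular matrices are our own toy example, a representation of Z/2 that is not an isometry of the standard inner product):

```python
# Weyl's unitarian trick for a finite group: average the inner product.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
# A representation of Z/2 on R^2 that is NOT orthogonal: M has order 2
# but is not an isometry of the standard inner product.
M = [[1.0, 1.0], [0.0, -1.0]]
assert mat_mul(M, M) == I2
group = [I2, M]

# Averaged inner product <u, v>_G = (1/|G|) * sum_g <gu, gv>;
# its Gram matrix is B = (1/|G|) * sum_g g^T g.
B = [[sum(mat_mul(transpose(g), g)[i][j] for g in group) / len(group)
      for j in range(2)] for i in range(2)]

# Every group element is now an isometry of the averaged inner product.
for g in group:
    assert mat_mul(transpose(g), mat_mul(B, g)) == B
print("averaged inner product is G-invariant")
```

The compact-group case replaces the finite average by the normalised Haar integral; nothing else in the argument changes.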
So, (i), (ii) and (iii), together with the fact that the tensor product of unitary representations is again unitary, means that we are done, and we can write down the following commuting diagram: We have but one final check. Recall that the homomorphism Spin(10) × U(1) → E6 has kernel Z4; it is hence incumbent on us to verify, just as we did for the SU(5) theory, that this kernel acts trivially on every fermion. On Δ±, the Spin(10) components of the kernel elements act through phases, and coupling this with the fact that the U(1) components act as the inverse phases, each kernel element acts as 1; the square of the generator, for example, acts on Δ+ as (−1) ⊗ (−1) = 1. The other cases work out just as easily. The final representations we need to consider are Λ¹10 ⊗ ±2, where Spin(10) acts by conjugation. Once again, the first and third kernel elements pose no problem. To tackle the second and fourth, we will need the following Claim 3.2.2. The element e_2n · · · e_1 ∈ Spin(2n) acts on v ∈ Λ¹C^2n as v ↦ −v. Proof. This is a direct computation. Since Clifford multiplication is linear, it suffices to show this for v = e_j, for some 1 ≤ j ≤ 2n. Consider then
The New Fermions
We are now in uncharted territory: this latest extension of the Standard Model has, for the first time, yielded new particles. We started with a 32-dimensional representation of all the Standard Model fermions and antifermions, and found that they fit exactly into the representation Λ*C^5 of SU(5); this was in turn shown to be isomorphic to the spinor representation Δ+ + Δ− of Spin(10). But now, we have added a significant number of dimensions: the E6 fermion representation is 27 + 27 = 54 dimensional, which means that we have 11 new fermions and antifermions. How can we understand them?
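The count of 11 follows from the branching of the 27 under Spin(10) implicit in the representations used above (a spinor, a vector, and a singlet piece); a quick sketch:

```python
# Branching of the 27 of E6 under Spin(10), as used in the text:
# a half-spinor, a vector, and a singlet.
dim_spinor_16 = 2 ** (10 // 2 - 1)  # Delta^+ of Spin(10): 16
dim_vector_10 = 10                  # Lambda^1 C^10
dim_singlet = 1
assert dim_spinor_16 + dim_vector_10 + dim_singlet == 27

# The 16 holds one generation of Standard Model fermions (with nu_R);
# the remaining states are the new fermions of the E6 theory.
new = dim_vector_10 + dim_singlet
assert new == 11
print(f"{new} new fermions per generation")
```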
Let us think again about the SU(5) grand unified theory. There, we matched irreps, one by one, of SU(5) and SM; this perhaps obscured the fact that the particles of the SU(5) theory, as such, are not characterised by the SM charges. Said another way, if we lived in a universe governed by an unbroken SU(5) theory, there would be no need to think of the Standard Model charges, in the same way that the representation theory of the strong force is remarkably simple because its symmetry group SU(3) is unbroken at the vacuum. But more often than not, we find ourselves in the opposite situation, and so out of necessity, we characterise particles based on how they transform under the broken symmetry of our vacuum, SM. In short, to understand these new fermions, we need to think about symmetry breaking, and in particular, we need to understand what charges they carry under SM.
Happily, one can see a whole lot at the level of representation theory, without venturing into the (complicated) dynamics of symmetry breaking; indeed, without saying so explicitly, we have laid most of the groundwork. Consider again the irrep matching of the SU(5) theory, equation (2.3.1). Once we confirmed that the Z6 kernel of the homomorphism from the Standard Model group to SU(5) acted trivially on the fermions, matching irreps was precisely the act of understanding how the SU(5) symmetry broke down to a SM theory. In much the same way, theorem 2.5.2 was the attempt to see how Spin(10) broke to SU(5). In both cases, we had no need for any new charges; with E6, the situation is different. The proof that E6 is a grand unified theory rested on the inclusion Spin(10) ↩→ Spin(10) × U(1) → E6, so a new U(1) charge seems to be demanded by the mathematics; let us denote the group U(1)′ to differentiate it from the U(1) of electromagnetism. Then since we have no obvious reason not to do so, let us simply declare that each particle now carries the U(1)′ charge dictated by the superscript of the representation to which it belongs. For example, the particles which live in the representation 1 ⊗ −4 of Spin(10) × U(1) will carry a charge of −4.
The ease with which we were able to incorporate a new symmetry into our theory should not obscure the fact that this has huge physical implications: if this U(1)′ symmetry were to remain unbroken at the vacuum, this would imply the presence of a new force (similar to electromagnetism) mediated by a hitherto unobserved massless boson (akin to the photon). No such force has been detected to date, so let us take this into account and posit the following cascade of theories: Following the discussion in the previous paragraphs, we have introduced here an extended SU(5) theory in which the particle representations are now tensored with an additional U(1)′ factor: for the left-handed electron, for example, we would write e⁻ ∈ Λ⁴C⁵ ⊗ −1, since in the Spin(10) theory, e⁻ lives in Δ+, and this is now tensored with −1. In fact, it should be clear that this analysis works for all the Standard Model fermions: we know already which Weyl spinor representation they live in, so it is a simple matter to assign to them a charge of ∓1, according to whether they are in Δ±, respectively. The first legitimately new particles appear in 1 ⊗ ±4, but these are easy to understand since they do not transform under any group other than U(1)′. Hence, at the level of SU(5) × U(1)′, we can simply state that they are the sole elements of the one-dimensional representations 1 ⊗ ±4; under this assignment, they would be antiparticles of each other, and not interact with any of the Standard Model particles. We will return to this interesting point in section 4.3.
The representations Λ¹10 ⊗ ±2 will take the most work to sort through. Clearly, the first step is to understand how the Spin(10) representation Λ¹10 breaks to SU(5). We make the following Claim 3.3.1. Under the inclusion SU(5) ↩→ Spin(10), the representation Λ¹C^10 decomposes as the direct sum of the fundamental representation of SU(5) and its complex conjugate. The proof of this will be in stages. The first thing we will do is to ask whether it suffices to consider the same question at the level of Lie algebras, since in that case we have the explicit embedding (and corresponding eigenvalue problem) φ: su(n) → so(2n), A_1 + iA_2 ↦ [[A_1, −A_2], [A_2, A_1]], (3.3.1) where A_1 and A_2 are real n × n matrices such that A_1ᵀ = −A_1, A_2ᵀ = A_2, and tr A_2 = 0. The result that we will need comes from a classic query in the theory of representations: can every representation of the Lie algebra of a Lie group be associated with a representation of the group itself, where we moreover require that the differential of the group representation returns the one of the algebra? The answer turns out to be in the affirmative in the case where the Lie group is simply connected [87, p. 105], which works out nicely for us since both SU(n) and Spin(2n) are indeed simply connected: Spin(2n) is simply connected by virtue of being the universal cover of SO(2n); for a proof for SU(n), see [92]. Now as we saw above, Spin(2n) acts on Λ¹C^2n by conjugation; the differential of this action is the commutator, x · v = [x, v], for x ∈ so(2n), v ∈ Λ¹C^2n. Note that the multiplication on the right hand side of the equation is Clifford multiplication, where we canonically embed both Λ¹C^2n ≅ C^2n and so(2n) ≅ spin(2n) = span{e_i e_j ∈ Cl(C^2n) | 1 ≤ i < j ≤ 2n} into Cl(C^2n). How do we now relate this to our other embedding, (3.3.1)? Masiero's paper [63] considers some of the phenomenological implications of such an extension to the SU(5) theory. The article by King [54] is a general reference for extended SU(5) theories. Some of these extensions are still viable as grand unified theories [1].
Lemma 3.3.2. For x ∈ so(2n) and v ∈ C^2n, the standard action of x on v agrees with the commutator [x, v] computed in Cl(C^2n): on the left we have the standard action of so(2n) on C^2n, and on the right, Clifford multiplication.
Proof. As with all linear algebra results, it suffices to check this on a basis. As we have seen, a natural one for so(2n), the space of skew-symmetric matrices, is given by the matrices E_ij − E_ji, where E_ij is the 2n × 2n matrix with 1 in the (i, j) entry, and 0 everywhere else. We have, for a standard basis vector e_k of C^2n, (E_ij − E_ji) e_k = δ_jk e_i − δ_ik e_j. From proposition 2.4.10, the isomorphism between so(2n) and spin(2n) sends E_ij − E_ji to e_i e_j / 2; we thus compute [e_i e_j / 2, e_k] = δ_jk e_i − δ_ik e_j. QED Therefore, we now have an honest-to-goodness eigenvalue problem for the complex structure J = [[0, −I], [I, 0]] ∈ so(2n), which commutes with the image of (3.3.1). A quick calculation shows that the two n-dimensional eigenspaces of this matrix are spanned by the vectors (v, ±iv), v ∈ C^n; whence we conclude that SU(5) ↩→ Spin(10) does indeed act as its fundamental representation (resp. complex conjugate fundamental representation) on one n-dimensional eigenspace (resp. the other). This completes the proof of claim 3.3.1.
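The "quick calculation" on the eigenspaces is easy to verify numerically; the following sketch assumes the block form J = [[0, −I], [I, 0]] (the name J and the test vector are ours):

```python
# J acts on (v, w) in C^n + C^n as (v, w) -> (-w, v).
n = 2

def apply_J(x):
    v, w = x[:n], x[n:]
    return [-wi for wi in w] + list(v)

v = [1 + 2j, -3 + 0.5j]  # arbitrary test vector in C^n
plus = list(v) + [1j * vi for vi in v]    # (v, +iv)
minus = list(v) + [-1j * vi for vi in v]  # (v, -iv)

# (v, +iv) and (v, -iv) are eigenvectors with eigenvalues -i and +i:
# the two n-dimensional eigenspaces that carry the fundamental rep
# of su(n) and its conjugate.
assert apply_J(plus) == [-1j * z for z in plus]
assert apply_J(minus) == [1j * z for z in minus]
print("eigenspaces (v, +iv) and (v, -iv) confirmed")
```

Since J commutes with every matrix in the image of (3.3.1), each eigenspace is an invariant subspace for su(n), which is the splitting used in the proof.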
We are almost done. The last step we must make is to understand how the fundamental representation of SU(5) and its conjugate break down to SM/Z6, so we can assign the Standard Model charges to these particles; but this is easy. Indeed, we use the homomorphism from before (recall that because of how the hypercharge representation C was defined, we have to divide the exponent by 3). Let us consider a quick example to see how we might catalogue these particles: to the particles in the C¹ ⊗ C² doublet of U(1) × SU(2), we would assign as usual the isospins ±1/2, and they would each carry a hypercharge of 1. In addition, at the level of the SU(5) theory and beyond, they would carry a new charge of ±2, according to whether they came from Λ¹10 ⊗ +2 or Λ¹10 ⊗ −2. Finally, we note that for the antiparticle representation, one simply passes to the complex conjugate of the representation on the bottom right of the commuting diagram above.
We summarise all of the information in this section in table 3.1; the SU(3) representations are written down explicitly. The electromagnetic charge can be obtained from the hypercharge and isospin columns via the NNG formula. The final column gives the corresponding U(1)′ representation that should be tensored with the representations of SM, SU(5) or Spin(10). Finally, the corresponding table for the antifermions is easily obtained from this one by passing to the dual representations and taking the opposite charges throughout.
Up to the sign of the new charge (he chooses the opposite convention), we have reproduced Table 21 in [81], for example.
Aspects of Phenomenology
As mathematically interesting as grand unified theories are, they are ultimately statements about the real world. So in this section, we pose the following question: at the level of group (representation) theory, what can we say about the phenomenology of these theories? Clearly, a healthy amount of physics is required to motivate and supplement any such discussion, but the aim is to stay as close to mathematics as possible; the relevant physics is introduced where necessary.
In the first section, we discuss a proper prediction of grand unified theories, the weak mixing angle, which has a simple closed formula in terms of the eigenvalues of the intertwining operators T̂3 and Y. Following this, we will discuss anomalies, which are not so much a phenomenological prediction as they are a basic physical requirement on unification groups. They have a rather nice interpretation in terms of a certain Casimir operator on the Lie algebras of said groups, so this issue is completely reduced to a mathematical property that we can understand fairly easily, given the machinery we have already built. Finally, section 4.3 functions as something of a survey section, where we consider other expected signatures of the E6 theory, and discuss its outlook.
The Weak Mixing Angle
One of the unambiguous predictions of grand unified theories is the weak mixing angle or Weinberg angle, which we have already encountered in section 1.2.3. Recall that equation (1.2.1) offered a rather geometric interpretation of this angle, as the parameter that characterised the rotation of the W⁰-B boson plane after symmetry breaking; it can also be written in terms of the gauge couplings g2 and g1, of the SU(2) and U(1) groups of the electroweak theory respectively, as sin² θ_w = g1² / (g1² + g2²). (4.1.1) In 1974, Georgi, Quinn, and Weinberg derived a formula for the weak mixing angle θ_w in grand unified theories [37]. The only assumption that they needed in the proof thereof was that the U(1) × SU(2) group of the electroweak theory is embedded in the grand unification group in such a way that the NNG formula still holds. We have assumed this throughout, so this theorem is applicable to all the grand unified theories we have analysed; let us hence state and prove their result: sin² θ_w = Σ T3² / Σ Q², the sums running over all fermions in the theory. Proof. We follow the lecture notes of Bjorken [16]. The W⁰ and B bosons, appropriate to the broken SU(2) and U(1) theory, are gauge bosons of the full gauge group G; the coupling of W⁰ to any fermion is proportional to T3, and the coupling of the B boson is proportional to the hypercharge; call the corresponding normalised charge y. Because W⁰ and B are both gauge particles for the group G, we must have, for any representation of G, Σ y² = Σ T3², (4.1.2) since there is a symmetry operation of the group that can transform W⁰ into B, but that transforms the representation into itself. Completing the proof is now a matter of simple algebra. From equation (1.2.1), we must have that the electric charge is given by Q ∝ (y cos θ_w + T3 sin θ_w); in order to have the difference of Q between two members of the same isospin doublet be ±1, we must set the constant of proportionality to (sin θ_w)⁻¹, i.e.
We now square this equation, and sum over all fermions. The cross term Σ T3 y vanishes, because the only non-zero contributions to this sum come from isospin doublets, and for each doublet this term is zero (since y is constant on a doublet, while the T3's come with opposite signs). We hence obtain Σ Q² = Σ T3² + cot² θ_w Σ y². Utilising equation (4.1.2) above, we obtain the formula stated in the theorem. QED In the same paper, Georgi et al. immediately applied this result to the SU(5) theory, leading to the famous prediction sin² θ_w = 3/8. It should be clear that since the Spin(10) theory introduces no new fermions, the prediction for the Weinberg angle is the same as for SU(5). In our E6 theory, however, we do have new fermions, so we should see a different value; indeed, on consulting table 3.1 and doing the necessary arithmetic, we obtain the corresponding prediction. As far as representation theory goes, this is all we can say. But it is too tempting to not compare such a definite phenomenological prediction with the real world; unfortunately, the comparison is none too comforting: one standard estimate [67] for the weak mixing angle is sin² θ_w = 0.2223. Is there a way to fix this massive discrepancy?
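The arithmetic behind the famous 3/8 is short enough to carry out explicitly. A sketch for one Standard Model generation, using the standard (T3, Q) assignments:

```python
from fractions import Fraction as F

# One generation of Standard Model Weyl fermions as (T3, Q, multiplicity).
# Left-handed doublets have T3 = +-1/2; right-handed singlets have T3 = 0.
fermions = [
    (F(1, 2),  F(0),     1),  # nu_L
    (F(-1, 2), F(-1),    1),  # e_L
    (F(1, 2),  F(2, 3),  3),  # u_L (3 colours)
    (F(-1, 2), F(-1, 3), 3),  # d_L
    (F(0),     F(-1),    1),  # e_R
    (F(0),     F(2, 3),  3),  # u_R
    (F(0),     F(-1, 3), 3),  # d_R
]

# Georgi-Quinn-Weinberg: sin^2(theta_w) = sum T3^2 / sum Q^2.
t3_sq = sum(m * t3 ** 2 for t3, q, m in fermions)
q_sq = sum(m * q ** 2 for t3, q, m in fermions)
sin2_theta = t3_sq / q_sq
assert sin2_theta == F(3, 8)
print(sin2_theta)  # 3/8
```

Exact rational arithmetic makes the cancellations transparent: Σ T3² = 2 and Σ Q² = 16/3 for one generation.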
The most plausible answer comes from renormalisation theory, a catch-all term for techniques used to deal with the infinities that plague quantum field theory. We have neither the desire nor the pages to get into any details here (we point the reader to [74,79,90] or any standard quantum field theory reference), but we would like to at least state a result from Marciano [62] which succinctly accounts for renormalisation effects on the value of sin² θ_w in grand unified theories. What follows is hence necessarily sketchy; the reader is encouraged to consult the original paper for an excellent discussion. The main assumption that he needed in its derivation goes back to an earlier paper of Weinberg's [89]: all gauge bosons in the grand unified theory must have large masses (on the order of some superheavy scale M, say) compared with the W± and the Z (the order of which we denote by m), and also compared with the Standard Model fermions in the theory (also of order m). The motivation for this was mostly phenomenological: effects mediated by these gauge bosons had eluded detection thus far. Of course, there was a further technical aspect to this assumption, but since it involves some quantum field theory, we relegate it to a footnote dedicated to the interested reader. In any case, Marciano's result expresses the low-energy value of sin² θ_w in terms of the ratio M/m through a logarithm. So for example, if we take the measured values sin² θ_w = 0.2223 and m_W = 80.385 GeV, we see that the superheavy mass scale for the E6 theory is of the order M = 3.592 × 10^16 GeV. We caution that the value of M obtained from the formula is quite sensitive to changes in the value of sin² θ_w because of the logarithm; it decreases by about 50% for each increase He leaves open the possibility that there might be exotic fermions in the theory with masses on the order of M. As we will see in section 4.3, this is the case with E6. See also [75].
Even today, the lower bounds on the masses of grand unified theory gauge bosons are at least two orders of magnitude larger than the known masses of the W and Z bosons [73]. The argument, as seen in [37], runs as follows. The gauge couplings, of the grand unified symmetry group G and of the Standard Model subgroups U(1), SU(2) and SU(3), are functions of the momentum scale, which we denote by μ; in particular, equation (4.1.1) only holds when μ is much larger than the superheavy boson masses, where the breaking of G may be neglected. However, the observed values of the gauge couplings refer to much smaller values of μ, of the order of the W± and Z masses, or even smaller. The problem is therefore to bridge the gap between superlarge values of μ, where G imposes relations among the gauge couplings, and ordinary values of μ, where the gauge couplings are observed. In order to deal with this, Georgi and collaborators employed a theorem from Appelquist et al. [6], which proved that all matrix elements involving particles with masses much less than the superheavy scale could be calculated in an effective renormalisable theory. In this case, one could simply consider the original theory with all the superheavy particles omitted (but with coupling constants that could depend on the superheavy masses). All other effects of the superheavy particles are suppressed by factors of an ordinary mass divided by a superheavy mass.
We note that this formula is specifically for theories whose tree-level value of sin² θ_w differs from 3/8. of 0.005 in sin² θ_w. One final remark is that we fixed the relevant Higgs parameter in the formula to 1 (keeping with Marciano), since in general, Higgs scalars are often considered the ugliest features of gauge theories, and one would prefer to have as few of them as possible; if this restriction is relaxed, there is some wiggle room in the above formulae to increase the value of sin² θ_w by increasing this parameter, and this in turn obviously has a direct bearing on M; table II in [62] estimates the size of this effect.
Anomalies and Casimirs
In section 1.1.2, we discussed Lagrangian symmetries, and saw their paramount importance; the transformation laws that we considered there were indeed the foundation for everything that came after. We return to this theme now, but with a different question as our starting point: which classical symmetries of the Lagrangian are elevated to quantum symmetries?
The business of quantising a classical Lagrangian is a messy one. By way of illustration, consider the simplest case: given a Lagrangian ℒ of a (real) scalar field φ, one defines the generating functional as Z[J] = ∫ Dφ exp(i ∫ d⁴x (ℒ + Jφ)), where J suggestively denotes a source term, akin to electromagnetism. The measure of integration, Dφ, represents an integration over all possible field configurations. We can now define the effective action as the Legendre transform of W[J] = −i ln Z[J]. The first anomaly that we will consider is the chiral anomaly; to introduce the same, we will need the Dirac equation. We have encountered in some detail already the Dirac spinors in chapter 2, as elements of certain irreps of the Spin groups. This is a description free of dynamics, and therefore, far removed from Dirac's original conception of these particles. The equation describes all spin-1/2 massive particles for which parity is a See section 4.3 for references for the Higgs mechanism in E6. See [55, § 28-30] for a cogent presentation of the same. It is common knowledge that foundational questions about the mathematical validity of this definition, and about the path integral formalism in general, remain; see [5]. For example, if our Lagrangian includes a potential V(φ), at low temperatures, the quantum field will not settle in a local minimum of V(φ) as in the classical case, but rather in a local minimum of the effective potential.
In the 1928 paper [23] presenting his equation for the first time, he begins by asking "why Nature should have chosen this particular model for the electron, instead of being satisfied with the point charge;" his remarkable solution to this quandary was a theory that, for the first time, fully accounted for special relativity in the context of quantum mechanics. For a captivating account of the history and a lucid derivation of the equation, see [90, Ch. 1.1]. symmetry, i.e. the leptons. In symbols, for a field ψ, which we take to be massless, and a gauge boson A_μ = A_μ^a T_a, written in a basis of generators T_a of the compact semi-simple symmetry group G, the theory is described by the Lagrangian ℒ = ψ̄ iγ^μ D_μ ψ. The γ^μ, 0 ≤ μ ≤ 3, are the gamma matrices, which form a basis for the Clifford algebra of Minkowski spacetime. As with any other Lagrangian symmetry, the chiral symmetry corresponds to a current, which in this case can be shown to be j^μ_5 := ψ̄ γ^μ γ⁵ ψ.
Recall that we need our theory of leptons to be chiral. The question of the hour is therefore the following: does this classically conserved quantity (i.e. ∂_μ j^μ_5 = 0) stay conserved when we pass to the quantised Dirac Lagrangian? The answer turns out to be no. Unfortunately, deriving this result is a nuanced, technical calculation in quantum field theory, far outside the scope of this paper; we list some references in a footnote. The final result of this computation states that ∂_μ j^μ_5 is proportional to a term quadratic in the gauge field strength. This is not particularly illuminating as it stands. One can show however [14, § 4] that the right hand side can be rewritten such that it contains the term tr({T_a, T_b} T_c)_L − tr({T_a, T_b} T_c)_R (recall that the A_μ's are written in terms of the group generators T_a), where the subscript L (resp. R) denotes the representation of the left-handed (resp. right-handed) fermions under consideration; our theory is said to be "anomaly-free" if this quantity vanishes.
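The simplest piece of this condition to check by hand is the [U(1)]³ part, where the generators are the hypercharges themselves. A sketch for one Standard Model generation (assuming the usual convention Q = T3 + Y/2 for the hypercharges):

```python
from fractions import Fraction as F

# Hypercharges of one Standard Model generation, as (Y, multiplicity).
left = [(F(-1), 2),      # lepton doublet
        (F(1, 3), 6)]    # quark doublet (2 isospins x 3 colours)
right = [(F(-2), 1),     # e_R
         (F(4, 3), 3),   # u_R
         (F(-2, 3), 3)]  # d_R

def tr_Y3(states):
    return sum(m * y ** 3 for y, m in states)

# The [U(1)]^3 piece of tr({T_a, T_b} T_c)_L - tr({T_a, T_b} T_c)_R:
# both traces equal -16/9, so the anomaly cancels within a generation.
anomaly = tr_Y3(left) - tr_Y3(right)
assert anomaly == 0
print("U(1)_Y^3 anomaly cancels:", anomaly)
```

That this delicate cancellation is automatic in the SU(5), Spin(10) and E6 theories is one of the structural points in their favour.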
uniquely if it is known on a basis set; therefore one can define the Lie bracket, and hence the Lie algebra 𝔤, abstractly through the expansions
[X_a, X_b] = f_{ab}^c X_c,
where the f_{ab}^c are called the structure constants of 𝔤.
Because of the antisymmetry of the Lie bracket, the structure constants satisfy f_{ab}^c = −f_{ba}^c; from the Jacobi identity, we further have
f_{ab}^d f_{dc}^e + f_{bc}^d f_{da}^e + f_{ca}^d f_{db}^e = 0.
The next idea that we wish to consider requires the tensor algebra T(𝔤) of the vector space (definition 2.4.1) over which 𝔤 is defined. To endow this very general product with the structure that 𝔤 carries, we make an obvious identification: an element of the form (we suppress the ⊗ symbol for brevity)
x₁ x₂ ⋯ x_i x_{i+1} ⋯ x_n − x₁ x₂ ⋯ x_{i+1} x_i ⋯ x_n ∈ T(𝔤)
is identified with
x₁ x₂ ⋯ [x_i, x_{i+1}] ⋯ x_n.
This quotient still has the structure of an associative algebra (with a unit element), and is called the universal enveloping algebra of 𝔤. Obviously, {X_a} is a vector operator, but it is not in general the only one. We can also define vector operators {Y_a} for a given N-dimensional representation of 𝔤, if {X_a} and {Y_a} are N × N matrices satisfying the structure equation and the relation
[X_a, Y_b] = f_{ab}^c Y_c.
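These two identities are easy to verify numerically. The following sketch (a NumPy illustration, not part of the paper) extracts the structure constants of su(2) from commutators of the Pauli generators and checks both the antisymmetry and the Jacobi identity.

```python
import numpy as np

sigma = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
T = [s / 2 for s in sigma]       # su(2) generators, tr(T_a T_b) = delta_ab / 2

# Extract the structure constants from [T_a, T_b] = i f_{abc} T_c using
# the orthogonality of the basis: f_{abc} = -2i tr([T_a, T_b] T_c).
f = np.zeros((3, 3, 3))
for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        for c in range(3):
            f[a, b, c] = np.real(-2j * np.trace(comm @ T[c]))

print(np.allclose(f, -f.transpose(1, 0, 2)))   # antisymmetry: True
# Jacobi identity: f_{abd} f_{dce} + f_{bcd} f_{dae} + f_{cad} f_{dbe} = 0
jac = (np.einsum('abd,dce->abce', f, f)
       + np.einsum('bcd,dae->abce', f, f)
       + np.einsum('cad,dbe->abce', f, f))
print(np.allclose(jac, 0))                     # True
print(f[0, 1, 2])                              # epsilon_{123} = 1
```

As expected, the recovered constants are the Levi-Civita symbol, f_{abc} = ε_{abc}.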
Let us restrict to the case that ρ is an irrep of 𝔤, and moreover demand that 𝔤 be simple. Then Okubo showed in [71] that there is a simple relationship between the number of all linearly independent vector operators on the representation ρ, and the highest weight Λ of that representation. This latter quantity is defined by
Λ = Σ_{i=1}^{l} n_i Λ_i,
where l is the rank of 𝔤, the Λ_i's are its roots (definition 2.1.12), and the n_i's are nonnegative integers specified uniquely by the representation ρ. Let us denote the number of n_i's which are zero by q₀(ρ); then the number of linearly independent vector operators N(ρ) is given by
N(ρ) = l − q₀(ρ).
We have technically only defined the notions of rank and roots for Lie groups, but it should be clear that these concepts can be extended quite naturally to Lie algebras. For the details, see [43, Ch. 6].
It is not at all obvious that such a unique decomposition in terms of roots should exist for an arbitrary irrep of 𝔤; we refer the reader to [43, Ch. 7] for a proof of this, the so-called highest weight theorem.
In other words, N(ρ) is equal to the number of n_i's which are positive. We can apply this theorem immediately: for the standard Lie algebras, the roots (and weights) have been studied and tabulated [72]; consulting these, we see quickly that the algebras with Dynkin diagram type A_l, for l ≥ 2, have N(ad) = 2, and N(ad) = 1 for all other algebras. This fact will be extremely important in what follows.
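Okubo's count is mechanical once the Dynkin labels of the highest weight are known. A small illustrative function (hypothetical helper, taking the standard Dynkin labels of the adjoint representations, i.e. of the highest root, as input):

```python
def n_vector_operators(dynkin_labels):
    """Okubo's count of linearly independent vector operators on an irrep:
    N(rho) = l - q0(rho), where l is the rank and q0 the number of
    vanishing Dynkin labels of the highest weight."""
    return len(dynkin_labels) - sum(1 for n in dynkin_labels if n == 0)

# Standard Dynkin labels of the highest root (= highest weight of the adjoint):
print(n_vector_operators([2]))              # A_1 = su(2): N(ad) = 1
print(n_vector_operators([1, 1]))           # A_2 = su(3): N(ad) = 2
print(n_vector_operators([1, 0, 1]))        # A_3 = su(4): N(ad) = 2
print(n_vector_operators([0, 1, 0, 0, 0]))  # D_5 = so(10): N(ad) = 1
```

The A_l adjoints (l ≥ 2) have labels [1, 0, …, 0, 1] with only two nonzero entries, which is exactly why they, and only they, support a second vector operator.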
Let us consider the adjoint representation of 𝔤 in some more detail. Set F_a = ad X_a, so that the bc-th entry of this matrix is given by (F_a)_{bc} = f_{ac}^b; the n × n matrices {F_a} clearly satisfy the structure equation
[F_a, F_b] = f_{ab}^c F_c.
Recall now the Killing form, definition 2.2.1; we will denote the same by g_{ab} = tr(F_a F_b). Introduce now the vector operator {D_a} on the adjoint representation; by definition, it satisfies [F_a, D_b] = f_{ab}^c D_c. We further introduce the triple linear forms D_{abc} built by tracing products of these matrices against the F's. In particular, for {D_a} = {F_a}, the second equation reduces to D_{abc} = f_{ab}^d g_{dc} =: f_{abc}; so f_{abc} is completely antisymmetric in its indices. Now, Okubo showed that an equivalent way of writing the vector operator equation (4.2.3) is a set of linear constraints on the triple forms D_{abc}; he further showed that the D_{abc} satisfying these equations can be chosen to be either completely symmetric or completely antisymmetric, and moreover, that the completely antisymmetric D_{abc} must be proportional to the structure constants f_{abc}. Hence, by the result in the previous paragraph, we conclude that for all simple Lie algebras other than the A_l's, the D_{abc}'s must be antisymmetric, because {D_a} must be proportional to {F_a}, the one and only vector operator on the adjoint representation. For the A_l's, there is an additional vector operator.
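For su(2), the adjoint matrices and the Killing form can be written down explicitly; the following NumPy sketch (an illustration, not from the paper) checks the structure equation and shows that the Killing form is proportional to the identity in this basis.

```python
import numpy as np

# Structure constants of su(2): f_{abc} = epsilon_{abc}.
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c], f[b, a, c] = 1.0, -1.0

# Adjoint matrices (F_a)_{bc} = -i f_{abc}; they satisfy the same
# structure equation as the defining generators.
F = [-1j * f[a] for a in range(3)]

commF = F[0] @ F[1] - F[1] @ F[0]
print(np.allclose(commF, 1j * F[2]))       # [F_1, F_2] = i F_3: True

# Killing form g_{ab} = tr(F_a F_b): nondegenerate and proportional
# to delta_{ab} in this basis, as expected for a compact simple algebra.
g = np.array([[np.trace(F[a] @ F[b]).real for b in range(3)] for a in range(3)])
print(np.allclose(g, 2 * np.eye(3)))       # True
```

With g proportional to the identity, raising and lowering indices is trivial here, and f_{abc} = f_{ab}^d g_{dc} inherits the full antisymmetry of ε_{abc}.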
In order to obtain the "upper-indices" version of the form D_{abc}, we do the obvious thing, raising indices with the Killing form: D^{abc} = g^{aa′} g^{bb′} g^{cc′} D_{a′b′c′}; the form we recover must clearly also be either symmetric or antisymmetric. If we then set C₃ = D^{abc} X_a X_b X_c, an element of the universal enveloping algebra, we obtain a third-order Casimir invariant of 𝔤. We are now ready to understand the claim that the calculation of an n-fermion closed-loop diagram is related to a study of the n-th order Casimir invariant of 𝔤.
Observable Signatures
Since grand unified theories have ever been within the purview of physicists, it is only fitting that we devote this final section of the paper to these tireless inquirers, for whom it is never good enough that a theory be beautiful, and rightly so; they demand that it be predictive, and hence, falsifiable. So in the following paragraphs, we will attempt to paint in broad strokes the answers to some of the questions that have naturally arisen during our analysis, but on which we have hitherto been silent; it is scarcely necessary to add that we are striving neither for comprehensiveness nor exhaustiveness here. Let us begin with the simplest question: what signatures might we expect from the E₆ theory? We quote from a recent survey paper precisely about this topic that will set the tone for the rest of the discussion: Signatures of E₆ include [an] extension of the Higgs sector; existence of neutral gauge bosons at masses above the electroweak scale. . . ; the production of new vector-like quarks and leptons, and manifestations of the neutral fermion. . . through its mixing with other neutral leptons, giving rise to signatures of "sterile" neutrinos. Up to now, with the possible exception of weak evidence for sterile neutrinos there has been no indication of the extra degrees of freedom entailed by the 27-plet of E₆. [53] Setting aside for the moment that the outlook for the E₆ theory is rather bleak, let us try and understand the terms above that we have not yet encountered.
We have refrained from speaking about the Higgs mechanism thus far, and unfortunately, our silence on the same will continue; references [9, 11, 18, 41, 42] are some early papers that examine this mechanism within the context of mass scales in the E₆ model. One thread that runs through them all is worth examining, since it is directly concerned with the most obvious thing one would think to look for to ratify the E₆ theory, namely, new fermions. Here we introduce the Survival Hypothesis [10, 34]: stated succinctly, it says that low-mass fermions are those that cannot receive G_SM-invariant masses. To understand what this means, recall from the previous section that mass terms spoil the invariance of the Lagrangian under chiral symmetry; the survival hypothesis thus postulates that when the grand unification symmetry group is broken down to the Standard Model gauge group, the fermions which do not acquire mass are those that cannot receive mass terms invariant under G_SM; in particular, this means that all the particles that do admit such a mass term will receive a superheavy mass, since the symmetry breaking occurs at grand unification scales. (We have of course shown more: we have shown that all representations of every algebra other than SU(n) for n ≥ 3 are anomaly-free.) Put another way, most fermions in a grand unified theory should have masses on unification scales (in the previous section, for instance, the superheavy mass scale was found to be on the order of 10¹⁶ GeV); those that do not are associated with one of the two unbroken gauge symmetry groups G_SM or U(1) × SU(2), since both of these demand chiral symmetry. The upshot is the following: we do not see the new fermions of the E₆ theory because they are phenomenally heavy, many orders of magnitude outside the reach of even the most powerful detectors. For a thorough examination of mass scales in grand unified theories in general, and E₆ in particular, see [75]. We note that this discussion is of course not valid only for E₆, but is formulated as a general principle in grand unified theories; this is the "fermion desert". In Georgi's words, "If [the above] picture is correct, physics between 300 GeV [the Standard Model mass scale] and 10¹⁴ GeV is boring. There is a grand plateau in momentum scale on which the world is well-described by an SU(3) × SU(2) × U(1) gauge theory. . . there will be no new interactions below 10¹⁵ GeV." [34]
Let us turn our attention now to the aforementioned bosons. A detailed analysis of their phenomenology is outside the scope of this paper, but they arise quite naturally in representation theory, and this we can certainly understand. If gauge bosons live in the complexified adjoint representation of the symmetry group G, it is clear that there is something special about the maximal set of elements in 𝔤 that commute with each other, i.e. the Cartan subalgebra of 𝔤. In general, these commuting elements make for good quantum numbers (charges); the best way to see this is by choosing the Cartan-Weyl basis for 𝔤, which we describe now. Let us consider the complexified adjoint representation of 𝔤 (while suppressing the use of the ad operator notation): if 𝔤 has rank l and dimension n, let {H_i}, for i = 1, . . .
, l, be a basis of the Cartan subalgebra; then one can show that the set {H_i} can be completed to a basis {H_i, E_α} of 𝔤 such that
[H_i, E_α] = α(H_i) E_α,
where the eigenvalue α(H_i) is non-vanishing for at least one value of i. This is the Cartan-Weyl basis for 𝔤, sometimes called the canonical or standard basis. The most pertinent example of this construction is something that we have already encountered in the SU(2) weak force. The relevant Lie algebra, 𝔰𝔲(2, C), is of rank 1 and spanned by the W bosons; they form a Cartan-Weyl basis since they satisfy
[W⁰, W^±] = ±2W^±, [W⁺, W⁻] = W⁰.
Hence, we see that the basis for the 1-dimensional Cartan subalgebra is indeed given by a quantum number operator, the isospin matrix T̂₃ = W⁰/2. To extend the notion of charges to gauge bosons, recall first this aspect of T̂₃: in a standard (doublet) basis for the space of weak-theory fermions C², the isospin of a particle was simply given by the eigenvalue of the action of the T̂₃ operator on said fermion. It should be clear that the correct generalisation of the above concept is the following: since the gauge bosons transform in the adjoint representation of the symmetry group G, they can act on each other through the adjoint action of 𝔤 on itself; moreover, in the Cartan-Weyl basis, the charges of the gauge bosons are given (up to normalisation) by their roots. In the case of the SU(2) theory for instance, this yields the correct isospin for W^±, i.e. ±1, since [T̂₃, W^±] = ±W^±. But notice that we now have a highly interesting statement about these number-generating gauge bosons themselves: all of their quantum charges must vanish since they belong to the Cartan subalgebra (and hence commute with each other). (They could also be Higgs SU(2) doublets, but we ignore this possibility here.) In other words, the number of neutral gauge bosons in a theory is given by the rank of its symmetry
There are of course many different choices of a Cartan subalgebra for a given semisimple Lie algebra 𝔤, but all of these are related by automorphisms of 𝔤, so we abuse terminology and speak here as though our choice were unique.
Clearly, this action is non-trivial only if 𝔤 is non-abelian.
group [59]. The bearing of this discussion on grand unified theories is straightforward: when we break from E₆ → Spin(10) → SU(5) → G_SM, we break from a group of rank 6 to one of rank 5, then to rank 4, and then once again to rank 4. Hence, while additional neutral gauge bosons are forbidden in the (standard) SU(5) theory, Spin(10) and E₆ each obtain one additional neutral gauge boson when their symmetry is broken. This is where representation theory ends and quantum field theory begins. The literature on neutral bosons is vast and varied, and we will not undertake a review of the same here; we point the reader instead to [57, 59] and references therein for summaries of the physics and the phenomenology respectively; both cover E₆ in some detail. Reference [73] has the current exclusion limits on the masses of these bosons for the Spin(10) and E₆ theories: the lower bounds are all on the order of 10³ GeV.
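The SU(2) charge bookkeeping above can be made completely explicit with Pauli matrices. In the sketch below (an illustration with the conventional choices W⁰ = σ₃ and W^± = (σ₁ ± iσ₂)/2, assumed rather than taken from the text), the commutators reproduce the isospin charges ±1 of the W^± and the neutrality of the Cartan element.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

T3 = s3 / 2                      # Cartan element: the isospin operator
Wp = (s1 + 1j * s2) / 2          # raising operator, "W+"
Wm = (s1 - 1j * s2) / 2          # lowering operator, "W-"

def comm(A, B):
    return A @ B - B @ A

# Cartan-Weyl relations: the charge of each gauge boson is its root.
print(np.allclose(comm(T3, Wp), +Wp))     # [T3, W+] = +W+  -> isospin +1
print(np.allclose(comm(T3, Wm), -Wm))     # [T3, W-] = -W-  -> isospin -1
print(np.allclose(comm(T3, T3), 0 * T3))  # the Cartan element is neutral
```

The last line is the rank-one instance of the general statement: members of the Cartan subalgebra commute with each other, so their charges all vanish.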
Let us now consider sterile neutrinos. We have assumed throughout our analysis that these exist, incorporating them into the SU(5) theory, and then consequently into the Spin(10) and E₆ theories. Rosner [77] recently carried out a phenomenological analysis of the sterile neutrinos in the E₆ theory, of which there are three, one from each of the three copies of the 27. In his framework, the traditional candidates for sterile neutrinos obtain extremely heavy masses and become unimportant, while the Spin(10) singlets (the final entry in table 3.1) acquire light masses and are promoted to sterile neutrino status. The other exotic fermions in the theory remain heavy, and mix only weakly with the Standard Model fermions, as per the survival hypothesis. Finally, he notes that only two of these three Spin(10) singlets are required to account for present data in neutrino oscillation experiments, leaving one neutrino free to be a candidate dark matter particle. This would fit neatly into the picture of "dark electromagnetism" proposed by Ackerman et al. [2]: in this scenario, dark matter particles interact via a new gauge boson corresponding to some U(1) theory that is unbroken at the vacuum; the U(1) gauge group that arises in the breaking of the E₆ theory is a natural candidate for the same. Schwichtenberg, however, argues in [80] that the Spin(10) singlet is not the correct choice for a candidate dark matter particle in the E₆ theory. Instead, he makes a case for the exotic neutrino in the Λ 1 5 ⊗ 2 representation of SU(5) × U(1) (this is the third entry from the bottom in table 3.1). Diving into the details of this interesting debate is unfortunately beyond our scope here; we simply wanted to note that the exotic fermions in the E₆ theory provide a playground to explore such ideas.
We end with a brief discussion of perhaps the most famous prediction of grand unified theories: proton decay. Simply put, since each inclusion of G_SM into the unification gauge groups that we have been considering involves significant jumps in dimension (from 12 to 24 to 45, and finally to the 78-dimensional E₆), we obtain at each step a huge number of new gauge bosons. These mediate new interactions between particles, one of which is the decay of the proton, which is stable in the Standard Model; the review article [56] by Langacker treats the subject of proton decay in depth. The original SU(5) model predicted a maximum proton lifetime on the order of 10³¹ years [28], which was subsequently disproved by the Super-K(amiokande) experiment. Following a long period where the Spin(10) theory was thought to be as dead as the SU(5), Bertolini et al. [13] re-examined proton decay in Spin(10) and discovered that it was still viable. For the E₆ theory, [60] and [75] are two early references that go into great detail regarding proton decay via many possible symmetry breaking chains of E₆; one take-away point is that in almost every case, the proton decay rate for the SU(5) theory is a lower bound for the same in the E₆ theory, with the possibility of extending the proton lifetime by some orders of magnitude above the SU(5) bound depending on how certain parameters in the theory are chosen.
This brings us to our final point: the parameter space of grand unified theories is (usually) large enough for all manner of tinkering and fine-tuning to match data. (Reference [1] is the most recent publication from the Super-Kamiokande collaboration, summarising data from the 20 years (!) that this experiment has been running.) In some sense then, they still have a shot at corresponding to reality. And yet, we steadily seem to be approaching a point where testing them is getting difficult to the point of being infeasible. As a recent article notes,
But while [the aforementioned] Super-K could suddenly strike gold in the next few years and confirm one of these models, it could also run for another 20 years, nudging up the lower limit on the proton's lifetime, without definitively ruling out any of the models.
Japan is considering building a $1 billion detector called Hyper-Kamiokande, which would be between 8 and 17 times bigger than Super-K and would be sensitive to proton lifetimes of 10³⁵ years after two decades. It might start seeing a trickle of decays. Or it might not. "We could be unlucky," [S. M.] Barr said. "We could build the biggest detector that anyone is ever going to build and protons decay just a little bit too slow and then we're out of luck." [27] Indeed. So while the incompleteness and seeming arbitrariness of the Standard Model remain strong motivators to seek a more complete, natural physics, it seems that the best we can do right now, at least as regards grand unified theories, is simply to wait. It would appear that nature is not so keen to give up her secrets just yet.
One of the inventors of the flipped SU(5) theory [12].
Appendix: The Classical Groups
The reference for this appendix is [4]. For large values of l, the Dynkin diagrams of type A_l, B_l, C_l, D_l are distinct. For small values of l, we have the possibility of exceptional isomorphisms between the classical groups, as follows: Spin(5) ≅ Sp(2), Spin(6) ≅ SU(4), Spin(3) ≅ SU(2) ≅ Sp(1), and Spin(4) ≅ SU(2) × SU(2).
We prove the first two isomorphisms.
Proof. We do both cases in parallel. Spin(5) has the representation Δ of degree 4 over C and degree 2 over H. We can impose a Hermitian form, invariant under the compact group Spin(5), giving us a homomorphism Spin(5) → Sp(2), which we also denote by Δ. Similarly, Spin(6) has the representation Δ⁺ of degree 4 over C, and we have Δ⁺ : Spin(6) → U(4). We first wish to show that Im Δ⁺ ⊂ SU(4). Let t ∈ T ⊂ Spin(6), where T is a maximal torus. Then t acts with eigenvalues defined by the weights ½(±x₁ ± x₂ ± x₃) with an even number of minus signs; these add to zero, so the eigenvalues multiply to 1, and t must act with determinant 1. Hence any conjugate gtg⁻¹ acts with determinant 1, and Δ⁺ maps to SU(4).
"year": 2021,
"sha1": "d7ab3229b15de84e82769f656bd4fb87289ee2f2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9563e5d8bb71ac615c98363e09c806df65d1bb79",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
141501658 | pes2o/s2orc | v3-fos-license | Spin-polarization effects of an ultrarelativistic electron beam in an ultraintense two-color laser pulse
Spin-polarization effects of an ultrarelativistic electron beam head-on colliding with an ultraintense two-color laser pulse are investigated comprehensively in the quantum radiation-dominated regime. We employ a Monte Carlo method, derived from the recent work of [Phys. Rev. Lett. 122, 154801 (2019)], to calculate the spin-resolved electron dynamics and photon emissions in the local constant field approximation. We find that electron radiation probabilities in adjacent half cycles of a two-color laser field are substantially asymmetric due to the asymmetric field strengths, and consequently, after the interaction the electron beam can obtain a total polarization of about 11% and a partial polarization of up to about 63% because of radiative spin effects, with currently achievable laser facilities, which may be utilized in high-energy physics and nuclear physics. Moreover, the considered effects are shown to be crucially determined by the relative phase of the two-color laser field and robust with respect to other laser and electron beam parameters.
I. INTRODUCTION
As one of the intrinsic properties carried by electrons, the spin has been extensively studied and utilized in high-energy physics [1-3], materials science [4], and plasma physics [5,6]. As is known, relativistic polarized electrons are commonly generated via two methods. The first extracts polarized electrons from a photocathode [7] or spin filters [8-10], and then employs a conventional accelerator or a laser wakefield accelerator [11] to accelerate them into the relativistic realm. The second directly polarizes a relativistic electron beam in a storage ring via the radiative polarization effect (Sokolov-Ternov effect) [12-16]. However, the latter typically requires a long polarization time of minutes to hours because of the low static magnetic field at the Tesla scale.
Recently, the rapid development of ultrashort (duration ∼ tens of femtoseconds) ultraintense (peak intensity ∼ 10²² W cm⁻², with a corresponding magnetic field ∼ 4 × 10⁵ Tesla) laser techniques [17,18] is providing opportunities to investigate electron polarization effects in such strong laser fields, analogous to the Sokolov-Ternov effect. Plenty of theoretical work has been performed on nonlinear Compton scattering; see, e.g., [19-23] and the references therein. However, only a small polarization can be obtained in a monochromatic laser field [24] or a laser pulse [25]. A setup of strong rotating electric fields [26,27] yields a rather high polarization when the electrons are trapped at the antinodes of the electric field. Unfortunately, this case may only occur for linearly polarized laser pulses of intensities ≳ 10²⁶ W cm⁻² [28], which is much beyond currently achievable laser intensities. Recently, a scheme with an elliptically polarized laser pulse has been proposed to split the electrons with different spin polarizations through spin-dependent radiation reaction [29], and consequently, to reach a polarization above 70%. Also, a similar setup can be used to generate a positron beam with a polarization up to 90% due to asymmetric spin-dependent pair-production probabilities [30].
Previous works indicate that the total polarization of all electrons in monochromatic laser pulses is negligible because of the symmetric laser field. In other words, asymmetric laser fields may result in a considerable polarization. The well-known asymmetric two-color laser configuration has been widely adopted in the generation of terahertz radiation [31-34], high-harmonic generation [35,36], and laser wakefield acceleration [37]. Recently, it has also been proposed to generate polarized positron beams through multiphoton Breit-Wheeler pair production [38]. However, employing such a two-color laser configuration to directly polarize an ultrarelativistic electron beam via nonlinear Compton scattering is still an open challenge.
In this work, the polarization effects of an ultrarelativistic electron beam head-on colliding with a currently achievable ultraintense two-color laser pulse are comprehensively investigated in the quantum radiation-dominated regime (see the interaction scenario in Fig. 1). During the interaction, the radiation probabilities of electrons in the positive and negative half cycles of the two-color laser field are substantially asymmetric. Thus, after the interaction, considerable total and partial polarizations can be obtained. We find that the relative phase φ of the two-color laser pulse is crucial in determining the polarization effects. In particular, when φ = π/2, the laser field strengths in the negative half cycles are much higher than those in the positive ones, and consequently, more photons of higher energies are emitted in the negative half cycles. Accordingly, the electron spins more probably flip to the direction antiparallel to the laser magnetic field in the electron's rest frame, assumed to be the instantaneous spin quantization axis (SQA) [29], and those electrons have lower remaining energies due to radiation-reaction effects [40]. As φ changes, the considered effects are weakened until they disappear completely in the case of φ = 0. Moreover, the impacts of the laser and electron beam parameters on the considered effects are studied, and optimal parameters are analyzed. This paper is organized as follows. Section II presents the employed Monte Carlo simulation model. In Sec. III, the polarization effects of the ultrarelativistic electron beam in the two-color laser pulse are shown and analyzed, and the impacts of the laser and electron beam parameters on the polarization effects are also investigated. Finally, a brief summary is given in Sec. IV.
II. THE THEORETICAL MODEL
The quantum electrodynamics (QED) effects in the strong field are governed by the dimensionless and invariant QED parameter χ ≡ (eℏ/m³c⁴)√(−(F_{μν}p^ν)²) [41], where F_{μν} is the field tensor, p^ν the electron 4-momentum, and the constants ℏ, m, e and c are the reduced Planck constant, the electron mass and charge, and the velocity of light, respectively. The normalized laser field amplitude parameter ξ ≡ eE₀/(mcω_L) ≫ 1 and the QED parameter χ ≳ 1 are considered, to ensure that the coherence length of the photon emission is much smaller than the laser wavelength [41]. Here E₀ and ω_L are the laser field amplitude and angular frequency, respectively. The spin-dependent probability of photon emission in the local constant field approximation (summed over photon polarization and over the electron spin after emission) takes the form derived in [29,42], hereafter Eq. (1), where K_ν is the modified Bessel function of order ν, y = 2u/[3(1 − u)χ], u = ε_γ/ε_e, ε_e is the electron energy before radiation, ε_γ the emitted photon energy, and α the fine structure constant. The last term in Eq. (1) is a spin-dependent addition, where S_i is the initial spin vector of an electron before photon emission, and ζ = β × â; β is the electron velocity normalized by c, and â = a/|a| is the direction of the electron acceleration. By averaging over the initial electron spin S_i, the widely employed spin-free radiation probability is recovered [43-47]. The spin vector S = (S_x, S_y, S_z), with |S| = 1. The stochastic photon emission by an electron can be calculated via the conventional QED Monte Carlo algorithm [45] with the spin-dependent radiation probability given by Eq. (1). The electron dynamics in the external laser field is described by the classical Newton-Lorentz equation, and its spin dynamics is calculated according to the Thomas-Bargmann-Michel-Telegdi equation [48-51].
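For orientation, the peak value of χ for the parameters used below can be estimated with the head-on approximation χ ≈ 2γ₀E₀/E_cr (a back-of-the-envelope sketch, not the paper's simulation; for the fundamental pulse alone it already gives χ of order unity):

```python
import numpy as np

# Constants (SI)
m_e, c, e_ch, hbar = 9.109e-31, 2.998e8, 1.602e-19, 1.055e-34

wavelength = 1.0e-6                      # fundamental wavelength (1 um)
omega = 2 * np.pi * c / wavelength
xi = 100.0                               # normalized amplitude xi = eE0/(m c omega)
gamma0 = 2935.0                          # 1.5 GeV electrons

E0 = xi * m_e * c * omega / e_ch         # peak field from xi
E_crit = m_e**2 * c**3 / (e_ch * hbar)   # Schwinger field, ~1.32e18 V/m

# Head-on geometry: the electron sees E + cB ~ 2E, so chi ~ 2 gamma E0 / E_crit.
chi = 2 * gamma0 * E0 / E_crit
print(round(chi, 2))   # ~1.4 for the fundamental alone; order unity, as in the text
```

This confirms that the chosen beam and laser parameters sit squarely in the χ ∼ 1 quantum regime where spin-dependent emission matters.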
After photon emission, the electron spin is assumed to flip either parallel or antiparallel to the instantaneous SQA (along ζ) with a probability given in Ref. [29]. Note that, as shown in the last term of Eq. (1), when the spin vector S_i is antiparallel to the instantaneous SQA, the electron has a higher probability to emit a photon.
A. Simulation setup
In our simulations, the fundamental laser pulse of wavelength λ₀ = 1.0 µm and the second-harmonic pulse have the same duration, transverse profile, and linear polarization along the x direction. They propagate along the +z direction, and their combined electric field can be expressed as E_x ∝ [ξ₁ sin(ω_L η) + ξ₂ sin(2ω_L η + φ)], where ξ₁ and ξ₂ are the normalized amplitudes of the fundamental and second-harmonic pulses, respectively, η = t − z/c, and φ is the relative phase. We employ a three-dimensional description of the tightly focused laser pulse with a Gaussian temporal profile, accurate to fifth order in the diffraction angle σ₀/z_r [52], where z_r = k_L σ₀²/2 is the Rayleigh length, k_L = ω_L/c the wave vector, and σ₀ the waist radius.
In our first simulation, we take the laser peak amplitudes ξ₁ = 2ξ₂ = 100 (corresponding to the peak intensities I₁ = 4I₂ = 1.37 × 10²² W cm⁻²) and a full width at half maximum (FWHM) duration τ₀ = 10 T₀ (33 fs), where T₀ is the laser period. Considering the different Rayleigh lengths of the two-color laser pulses, we first take the waist radius as infinity for simplicity, and discuss finite-waist effects later. Our simulations will show that the results in the plane-wave case are very close to those with σ₀ ≥ 5 µm. An unpolarized cylindrical electron beam is employed, including 10⁷ electrons with initial mean energy ε₀ = 1.5 GeV (corresponding to the relativistic factor γ₀ ≈ 2935), energy spread Δε₀/ε₀ = 10%, a transversely Gaussian profile with radius r₁ = 3 µm, and a longitudinally uniform profile with length r₂ = 5 µm. This kind of electron bunch can be obtained by laser wakefield accelerators [53,54]. During the head-on collision, one can assume the momenta of the ultrarelativistic electrons to be approximately along the initial moving direction, i.e., the −z direction, due to γ₀ ≫ ξ₁. Hence, the magnetic fields experienced by the electrons in their rest frames are along the y axis. Note that "spin-up" and "spin-down" indicate the electron spin parallel and antiparallel to the +y axis, respectively.
B. Electron polarization via radiative spin effects
The combined electric field of the two-color laser pulse has highly asymmetric envelope profiles in the positive and negative half cycles when φ = π/2, as shown in Fig. 2(a). The electrons in the negative half cycles, with higher field strengths, have a larger QED parameter χ, which causes more photons with higher energies to be emitted than in the positive half cycles. In the negative half cycles, the instantaneous SQA (along ζ = β × â) is along the −y direction; therefore, after photon emission the electron spin is more probably antiparallel to the SQA, i.e., along the +y direction [29]. This results in the generation of more spin-up (with respect to the +y direction) electrons, as shown in Fig. 2(d). Accordingly, the total polarization of the whole electron beam is about 11%. Moreover, due to radiation-reaction effects, more spin-up electrons have lower energies [see Fig. 2(b)]. In the region of |p_z| < 160 mc marked by the black dotted box, the polarization of 14% of the electrons is above 40%. Further, if one filters out high-energy electrons, the polarization of the remaining electrons with |p_z| < 100 mc is up to about 63%, as shown in Fig. 3. Obviously, the energy-dependent polarization could provide a way to generate a highly polarized electron beam by selecting the electron energy, and it may present an experimental scheme to verify the theory of spin-dependent radiation reaction. Note that the polarization of laser-driven ultrarelativistic electron beams can be measured via the polarimetry of nonlinear Compton scattering [39].
For φ = 0, the combined electric field has symmetric envelope profiles in the positive and negative half cycles, as shown in Fig. 2(e). Such a laser field cannot generate an excess of spin-up or spin-down electrons via nonlinear Compton scattering, as observed in Fig. 2(h), because the polarizations induced in the positive and negative half cycles cancel each other. One can notice in Figs. 2(f) and (g) that the electrons can acquire a non-zero drift velocity in such a field configuration due to the asymmetry in the laser vector potential [33,34] and radiation reaction [55]. Besides, it is shown in Figs. 2(d) and (h) that the energy spectra of the spin-up and spin-down electrons both become broader compared with the initial quasi-monoenergetic spectrum, because the electrons lose energy via stochastic photon emissions.
To analyze the origin of the polarization effects, Fig. 4 shows the details of the evolution of the electron spin flips in the two-color laser field with φ = π/2. When interacting with the laser field, electrons emit photons, and the spin flips either parallel or antiparallel to the instantaneous SQA [29]. The resulting electron polarization can significantly affect the photon emission according to the last term in Eq. (1). With S_i · ζ = −1, i.e., the electron spin antiparallel to the instantaneous SQA, the emission probability could be enhanced by about 30%; conversely, it could be reduced by about 30% with S_i · ζ = 1, as shown in Fig. 4(a).
In Fig. 4(b), we demonstrate the probability that an electron spin flips to the direction antiparallel to the instantaneous SQA after emitting a photon. One can see that the spin-flip probability depends on both the electron spin direction and the emitted photon energy. With S_i · ζ < 0, the electron spin very likely flips even though the emitted photon has a low energy. With S_i · ζ > 0, the spin flip arises with a high probability only when the emitted photon energy is high enough. Basically, the electron spin tends to flip to the direction antiparallel to the SQA. Note that the above analysis holds at high laser intensities [χ ≈ 1.1 is employed in Figs. 4(a) and (b)]. When the laser intensity is low and the resulting QED parameter χ ∼ ξ is also small, the photon energy is usually much lower than that of the electron, u = ε_γ/ε_e ∼ χ. Hence, contributions of the electron spin term to the spin-flip probability, as well as to the radiation probability given by Eq. (1), can be ignored. In Fig. 4(c), we show the ratios of the spin-up and spin-down electron numbers to the total electron number, respectively. When the electron beam collides with the rising edge of the laser pulse at t ≲ 7 T₀, the electrons gradually flip to spin-up or spin-down with nearly the same probability, due to the low laser field strength and small χ. As the electrons approach the laser pulse peak around t ≈ 10 T₀, χ grows to about 1.1, and more spin-up electrons are generated, accompanied by emitted photons of higher energy. Similar results can be found in Fig. 4(d), in which we randomly choose 2000 electrons and track their dynamics. It is clearly shown that in the strong-field region the spin flips are significant. In the negative half cycles of the electric field, the instantaneous SQA is along the −y direction, and the electrons incline to flip to spin-up, i.e., the +y direction. Oppositely, they tend to flip to spin-down, i.e., the −y direction, in the positive half cycles.
Because the field is stronger in the negative half cycles, more electrons flip to spin-up, and consequently a polarized electron beam is obtained.
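The sorting mechanism can be illustrated with a deliberately simplified Monte Carlo sketch (not the full spin-resolved QED simulation used here). It assumes that the final spin is set by the half cycle in which the last photon emission occurs, that each emission falls in the negative half cycle with probability proportional to that half cycle's peak field, and it uses the illustrative peak-field values 1.5 and 0.75 (in units of ξ_1) for φ = π/2 with ξ_1 = 2ξ_2:

```python
import random

def final_polarization(n_electrons, n_photons, e_neg, e_pos, seed=1):
    """Toy model: each emitted photon falls in the negative half cycle with
    probability e_neg / (e_neg + e_pos); the spin ends up antiparallel to
    the SQA of the half cycle of the last emission (SQA is along -y in the
    negative half cycles, so emissions there leave the spin 'up', +y)."""
    rng = random.Random(seed)
    p_neg = e_neg / (e_neg + e_pos)
    n_up = 0
    for _ in range(n_electrons):
        last_in_neg = False
        for _ in range(n_photons):
            last_in_neg = rng.random() < p_neg
        if last_in_neg:
            n_up += 1
    return 2.0 * n_up / n_electrons - 1.0  # (N_up - N_down) / N_total

# Symmetric half cycles -> no net polarization; peak fields 1.5 vs 0.75
# (phi = pi/2, xi_1 = 2*xi_2) -> expected polarization near +1/3.
```

In this toy picture the polarization reduces to the field-weighted asymmetry (E_neg − E_pos)/(E_neg + E_pos); the real dynamics also involve the spin- and energy-dependent rates of Eq. (1).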
C. Impacts of the laser and electron beam parameters on the total polarization of the electron beam

We further study the impacts of the laser and electron beam parameters on the total polarization of the electron beam. In Fig. 5, we vary the relative phase φ for different waist radii σ_0. When σ_0 approaches infinity, i.e., the plane-wave case shown by the black curve with diamonds, the total polarization is zero at φ = 0, increases gradually to its maximum at φ = π/2, and then decreases back to zero around φ = π. Within the range of φ between π and 2π, the same behavior is observed except that the polarization turns negative, i.e., more spin-down electrons are generated. This is because the laser field in the negative half cycles is stronger for φ ∈ (0, π), while the field in the positive half cycles is stronger for φ ∈ (π, 2π). The dependence of the polarization on φ roughly follows sin(φ), similar to the φ-dependence of THz generation [31], and results from how the envelope asymmetry between the positive and negative half cycles varies with φ.
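The sin(φ)-like behavior can be checked directly on the two-color waveform. The sketch below is illustrative only: it assumes a carrier of the form e(t) ∝ sin(ωt) + (ξ_2/ξ_1) sin(2ωt + φ) with ξ_1 = 2ξ_2, and compares the peak field reached in the positive and negative half cycles of the fundamental:

```python
import math

# Illustrative two-color waveform e(t) = sin(t) + rel_amp*sin(2t + phi),
# with rel_amp = xi_2/xi_1 = 0.5.  Compare the peak |e| in the positive
# half cycle of the fundamental (t in [0, pi]) with that in the negative
# half cycle (t in [pi, 2*pi]).
def half_cycle_asymmetry(phi, rel_amp=0.5, n=200_000):
    pos = neg = 0.0
    for i in range(n + 1):
        t = 2.0 * math.pi * i / n
        e = abs(math.sin(t) + rel_amp * math.sin(2.0 * t + phi))
        if t <= math.pi:
            pos = max(pos, e)
        else:
            neg = max(neg, e)
    return (neg - pos) / (neg + pos)

# phi = 0 or pi: the two half cycles are mirror images (asymmetry ~ 0);
# phi = pi/2: peak fields 0.75 vs 1.5, i.e. asymmetry +1/3;
# phi = 3*pi/2: same magnitude, opposite sign.
```

The asymmetry vanishes at φ = 0 and π, peaks at φ = π/2, and reverses sign at φ = 3π/2, matching the qualitative sin(φ) dependence of the polarization described above.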
When we take the laser waist radius as σ_0 = 5 µm, the dependence of the polarization on φ is still close to the plane-wave case. However, as the waist radius is further decreased to 2 µm and 1 µm, the dependence deviates gradually from the plane-wave case. The maxima of the polarization no longer appear at φ = π/2 and φ = 3π/2, and their values are reduced significantly. These features can be explained by the different Rayleigh lengths of the fundamental laser pulse and the second-harmonic one. As the pulses propagate, the envelope of the combined laser field as well as the ratio of the two laser amplitudes walk off; they match the plane-wave case only at the laser envelope peak. Therefore, the asymmetry of the laser field with φ = π/2 is weakened as the waist radius decreases. To obtain a considerable polarization, the laser waist radius should be taken as σ_0 ≳ 5 µm.
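The walk-off argument can be made quantitative with textbook Gaussian-beam formulas. In the sketch below, the fundamental wavelength (1 µm) and the equal waists of the two colors are assumptions made for illustration; the 2ψ_1 − ψ_2 combination for the effective ω/2ω relative phase is the form commonly used in two-color schemes:

```python
import math

# Gaussian-beam walk-off between the fundamental (wavelength lam, waist w0)
# and its second harmonic (lam/2, same waist assumed).  Rayleigh length
# z_R = pi * w0**2 / lam, so the second harmonic has twice the Rayleigh
# length of the fundamental and stays focused over a longer distance.
def rayleigh_length(w0, lam):
    return math.pi * w0**2 / lam

def spot_size(w0, z, z_r):
    return w0 * math.sqrt(1.0 + (z / z_r) ** 2)

lam = 1.0e-6          # assumed fundamental wavelength: 1 um
w0 = 1.0e-6           # assumed common waist: 1 um
zr1 = rayleigh_length(w0, lam)        # ~3.14 um
zr2 = rayleigh_length(w0, lam / 2.0)  # ~6.28 um = 2 * zr1

z = zr1  # one fundamental Rayleigh length from the focus
# On-axis amplitude scales as 1/w(z), so the second-harmonic fraction of
# the combined field grows away from the focus:
amp_ratio_drift = spot_size(w0, z, zr1) / spot_size(w0, z, zr2)  # ~1.26
# The effective omega/2omega relative phase drifts with the Gouy phases
# psi = atan(z / z_R) as 2*psi1 - psi2:
phase_drift = 2.0 * math.atan(z / zr1) - math.atan(z / zr2)      # ~1.1 rad
```

Even one Rayleigh length from the focus, the amplitude ratio has drifted by about 26% and the effective relative phase by more than 1 rad, which is why a tight focus washes out the carefully prepared φ = π/2 asymmetry.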
Furthermore, we investigate the impacts of the laser peak intensity and pulse duration on the considered effects, as presented in Fig. 6. We employ φ = π/2, σ_0 = 5 µm, and ξ_1 = 2ξ_2. When the laser duration is τ_0 = 10 T_0 (FWHM ∼ 33 fs), the polarization first increases and then decreases as ξ_1 (and ξ_2) is enhanced. Similar results are observed for longer durations, e.g., τ_0 = 15 T_0 and 20 T_0; however, the peak appears at a lower ξ_1 for a longer duration. As the duration is decreased to τ_0 = 5 T_0 and 3 T_0, only a monotonic increase appears within the ξ_1 range considered; the polarization is expected to decay if a higher ξ is adopted. One can also observe that, in the increasing region, the polarization is higher for a longer duration at fixed laser amplitude ξ_1. The polarization first grows with both the pulse duration and the amplitude because the probabilities of photon emission and electron spin flip scale as ∼ χτ_0 ∼ ξτ_0. Due to photon emission, however, the electrons lose energy. If the laser pulse duration is too long, the electrons lose most of their energy in the rising edge of the pulse, and the effective laser fields they experience are much lower than the field at the pulse peak. This causes the polarization to decay as ξ_1 increases further. Finally, we study the combined role of the initial electron energy ε_0 and the laser peak amplitude, as shown in Fig. 7. It is found that a high laser amplitude (e.g., ξ_1 ≳ 100) is necessary to obtain a high total polarization. With a high laser amplitude, the electron beam energy can be chosen flexibly over a large range, from hundreds of MeV to a few GeV. On the other hand, even when a high electron beam energy is taken (e.g., ε_0 ≈ 4 GeV), the total polarization is relatively low.
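The non-monotonic trend with ξ_1, and its shift toward lower ξ_1 for longer pulses, can be reproduced by a toy energy-loss model (all constants here are illustrative, chosen only to place the turnover near ξ_1 ∼ 100; this is not the simulation used in the paper). Since the radiated power scales as χ² with χ ∝ γξ, the electron energy obeys dγ/dt ≈ −k ξ(t)² γ², which integrates in closed form:

```python
import math

# Toy radiation-reaction model.  With chi ~ gamma * xi and radiated power
# ~ chi**2, the energy loss obeys dgamma/dt = -k * xi(t)**2 * gamma**2,
# which integrates to 1/gamma(t) = 1/gamma0 + k * Int xi(t)**2 dt.  We ask
# what chi the electron retains on reaching the pulse peak after crossing
# the rising edge of a Gaussian envelope of duration tau (in cycles).
def chi_at_peak(xi0, tau, gamma0=2000.0, k=5e-9, chi_norm=1e-4):
    # k and chi_norm are illustrative constants, not physical values
    # integral of xi(t)**2 over the rising half of a Gaussian envelope,
    # up to an O(1) factor
    loss_integral = xi0**2 * tau * math.sqrt(math.pi) / 2.0
    gamma_peak = 1.0 / (1.0 / gamma0 + k * loss_integral)
    return chi_norm * gamma_peak * xi0

def best_xi(tau):
    # amplitude that maximises chi at the pulse peak, on a coarse grid
    return max(range(10, 301, 5), key=lambda x: chi_at_peak(float(x), tau))

# chi_at_peak rises with xi0 and then falls once the losses on the rising
# edge dominate; the turnover sits at a lower xi0 for a longer pulse,
# mirroring the polarization trend in Fig. 6.
```

Maximising χ(ξ_0) = χ_norm ξ_0 / (1/γ_0 + k A ξ_0²) with A ∝ τ gives an optimum at ξ_0 ∝ 1/√τ, which is why the polarization peak moves to lower ξ_1 as the duration grows.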
IV. CONCLUSION
In summary, we have investigated the spin polarization effects of an ultrarelativistic electron beam colliding head-on with an ultraintense two-color laser pulse. The asymmetry of the laser field in the processes of photon emission and electron spin-flip transitions causes considerable total and partial polarization. The polarization strongly depends on the relative phase φ of the two-color laser pulse: at φ = π/2 the polarization degree reaches its peak, and at φ = 3π/2 the same degree is achieved with the opposite sign. Moreover, the spin-dependent radiation reaction results in high polarization of relatively low-energy electrons, which provides a way to generate a highly polarized electron beam by selecting the electron energy, and may serve as a signature of the spin-dependent radiation reaction in the QED regime.
The appropriateness of a realist review for evaluating the South African Housing Subsidy Programme
Conducting meta-reviews of government programmes has become common practice. In South Africa, the national Department of Human Settlements and the national Department of Performance Monitoring and Evaluation recently commissioned a team to review the extent to which the Housing Subsidy Programme had provided assets to municipalities and the poor and whether these assets had helped poor households escape from poverty. A realist approach was employed to conduct the review. We argue that, given the complex nature of housing programmes, the realist review methodology was an appropriate approach to follow in answering the review questions. We explored how the realist review method allowed us to work with the uneven and contested nature of the housing literature and how the review nonetheless enabled elucidation of the factors that had contributed to the expected outcomes. Because this case was the first time that this method was used in a government-commissioned evaluation of housing, there were some practical challenges involved in its use. Some of the challenges were related to the nature of the questions that were asked. At the time of the review, the Department of Human Settlements was in the process of reviewing the 1996 White Paper and, to inform this process, the Housing Subsidy Programme review included a copious number of questions set by the Department of Human Settlements and Department of Performance Monitoring and Evaluation, which made the review rather large and, in some cases, complicated the analysis. In some cases, because the Departments wanted clear-cut answers, the commissioners perceived the theoretical strength of the method, such as offering explanatory instead of conclusive judgement, as a weakness. The paper reveals some limitations of the realist review method for evaluating the multifaceted outcomes of a complex programme, particularly the practical difficulty of dealing with large quantities of data.
We do, however, consider this method to have potential for further reviews.
Significance:

• Housing research in South Africa is uneven, which makes any review process difficult.
• The review was unable to offer judgement on the effect that the Housing Subsidy Programme has had on the asset base of the poor.
• The review was useful for making clear which factors will help the Programme to achieve the intended outcomes, and for pointing out where government should focus in order to build assets for the urban poor.
Introduction
Evaluation and review of policy has become a common government practice across the globe. Many of these reviews take the form of meta-reviews, in effect studies of studies, in which the literature pertaining to specific policy concerns is closely examined. The demand for policy reviews has spawned an array of review methods: systematic, realist, scoping, critical, mapping, to mention but a few. In this paper, we assess the 'realist review' method, originated by Pawson and Tilley 1 . For simplicity we have chosen to use the term 'realist review', while noting that this method is also referred to as 'critical realist review', as it stems from critical realist philosophy.
Globally, there is a growing body of work evaluating conventional review methods such as systematic reviews. 2 Some common criticisms are that the available evidence is often 'mixed or conflicting' and provides 'little or no clue as to why the intervention worked or did not work when applied in different contexts' 3 , that there are difficulties in striking a balance between rigour and relevance, and that 'few review types possess prescribed explicit methodologies and many fall short of being mutually exclusive' 2 . Substantially more work is needed to evaluate review methods 4 , particularly in the health sciences 5 .
In South Africa, as elsewhere, evaluations and policy reviews have become the norm now that policy is increasingly expected to be evidence based. 6 The national Department of Performance Monitoring and Evaluation (DPME), established in the Office of the President in 2010, has been mainstreaming reviews of policies and programmes in various line departments.
By the end of 2016, the DPME had completed 65 evaluations, 2 of which were meta-reviews. In 2014, the national Department of Human Settlements (DHS) and the DPME commissioned a review of South Africa's Housing Subsidy Programme. The review was to investigate the extent to which the Programme had succeeded in providing assets to the poor and whether these assets had helped poor households escape from poverty. We initially suggested the use of systematic review methodology to answer the review questions. Discussions with both the DPME and the DHS alerted us to the limitations of the systematic review approach in regard to the Housing Subsidy Programme. Because this Programme is implemented non-uniformly by the nine provinces - with each province using different implementation protocols in response to particular local contexts, and moreover doing so in a variety of communities - our main problem was that we needed a review methodology that would be more flexible and would emphasise different contexts. In the end, we opted for the realist review method.
Apartheid planning left South African cities not only with large numbers of informal settlements and housing backlogs but also with municipalities that were ill prepared to accommodate rapid growth. The Housing White Paper released in 1995 was one of the first post-apartheid policy responses to the housing challenges faced by South African communities. Although multifaceted, the policy chiefly emphasised three things: ownership, a focus on the poor (only households with incomes below ZAR3500 per month are able to access the subsidy) and a fixed-amount capital subsidy. (In 1995, the USD:ZAR exchange rate was 1:3.61, and about 1:13 at the time of writing in July 2017.) The original capital subsidy amount in 1995 was ZAR15 000 for those households with the lowest incomes. A revised policy, namely 'Breaking New Ground: A Comprehensive Housing Plan for the Development of Integrated Sustainable Human Settlements' 7 , has retained the above three elements while re-emphasising informal settlement upgrading and rental accommodation, and drawing attention to the need to establish sustainable settlements and to develop the property market. The South African Housing Subsidy Programme has delivered approximately four million housing opportunities (subsidised houses and site-and-services) in slightly more than two decades, mostly by providing a capital subsidy and homeownership to households at the lower end of the market.
Despite the growing number of reviews and internal evaluations in South Africa, there has been virtually no critical assessment of their methods. Against the above background, we critically assess the method we used and then discuss its appropriateness in terms of evaluating the multifaceted outcomes of the Housing Subsidy Programme. The fact that we as the authors represent both the commissioning department (the first author) and an academic department should ensure a balanced view. While we acknowledge that our closeness to the review process influenced our evaluation of the review, we did attempt to take a step back. We reflected with hindsight on what had helped or hindered the review process and its outcomes. In this paper, we discuss the limitations, and some benefits, of the realist review method.
Realist reviews: An overview
Realism is a school of thought that lies between positivism and constructivism. 9 Pawson and Tilley 1,5 are credited with applying realist philosophy to programme and policy evaluation. The value of the realist method lies in its ability to deal with complexity 3 , to synthesise evidence while accepting that 'no deterministic theory can always explain or predict outcomes in every context' 10 . Evidence-based policy development is commonly described as wanting to determine 'what works'. However, in a realist review, we ask a more complex question: What is it about this programme that works, for whom, in what circumstances? 3 In a realist review, the reviewers are able to engage with context and the human element in the implementation of interventions. There is an acceptance that different conditions contribute to programme success or failure 1,7,11 and that, while diverse results are problematic, various outcomes are inevitable because the mechanisms that create change are not necessarily embedded within a specific programme but are often present in the thought processes of the programme's participants. 1 These diverse results must thus be explored rather than controlled. Realist reviewers engage with evidence by studying the interaction between contexts, mechanisms and outcomes, in what are called CMO (context-mechanism-outcome) configurations. 8,9 A CMO configuration is 'a proposition stating what it is about an initiative that works', in other words, an hypothesis to be tested. 13 Conventionally, evaluators find it difficult to deal with how context mediates and moderates the results of a programme. Context is both perceived and treated as a threat to the external validity of evaluation, where evaluators are concerned with isolating how programme interventions produced observed outcomes. 14 Realist review methodology, however, allows evaluators to explore a variety of contexts, and they try not to be judgemental.
Understanding how context mediates and moderates programme performance is thus core to realist reviews. Mechanism is another central component of realist reviews. A realist review looks at the underlying causes of change that are not directly observable. 7 Mechanisms could involve multiple individuals engaged in a sequence of processes. 3 Mechanisms connect programmes to their outcomes. Realist review sees the outcomes as the result of interaction between the resources or opportunities the programme provides, the reasoning of its target population, and the context. The change process is studied to provide explanations for how change happens, not just to state what change has been observed.
Other principles besides the CMO configurations underpin a realist review. Firstly, a realist evaluator sees programmes as theories. 1 People design programmes on the basis of their beliefs about the nature of the problem and how change happens. This design is then translated by practitioners who are responsible for delivering services to programme beneficiaries. Thus, programmes are always inserted into existing social systems that have produced the negative conditions that necessitated the programme. 1 Because an intervention may involve multiple theories, using traditional review methods is difficult. In this regard, Pawson et al. 3 note that 'the review question must be carefully articulated so as to prioritize which aspects of which interventions will be examined'. Programme motivations and designs usually make statements about how the programme or policy should be implemented and what results can be expected. Because a realist review usually starts by adopting the programme or policy design as the theoretical base, it must therefore consider the theory's underlying assumptions.
Secondly, as programmes are embedded in social systems, it is 'through the workings of entire systems of social relationships that any changes in behaviours, events and social conditions are effected' 1 . A realist review therefore recognises and accepts the existence and interplay of multiple social systems. To understand the process of change, the reviewer needs to investigate beyond what the programme offers so as to understand how the wider social systems affect the programme. Traditional review methods are often unable to deal with this multiplicity and with interconnections in society. The realist review accepts that the relationship between mechanisms and outcomes does not have to be linear; in many cases it could be a reverse relationship. In accepting the existence of non-linear relationships, the realist reviewer notes and examines the 'flows, blockages and points of contention' 3 . For example, the outcomes in societies that emphasise self-help might prove to be totally different from those in societies in which the state is required to play a dominant role. A second example relates to the fact that while the South African Housing Subsidy Programme grants individual households decision-making status, the decisions that households make might not be all that similar.
Thirdly, programmes are active. Implementation of a programme requires the active participation of individuals. 1,7 This principle is important and has methodological implications. For the realist reviewer, there is no need to control and remove the human influence. Instead, the reviewer needs to explore and understand how the human influence produces change in the intended programme. 1 In a realist review the literature review can therefore be broader than in a traditional review, in which control and adherence to predefined programme components, population, types of studies, and so on, are critical. A realist review includes literature on the basis of relevance rather than restricting itself to a pre-identified finite set of sources. It generally uses a simple search strategy based on purposive sampling, but multiple search strategies can also be used, and grey literature can be given a more important role than in other review types.
Lastly, because programmes are open systems, realist reviewers accept that externalities will always influence the way in which a programme is implemented, with benefits varying according to location. The programme implementer is an active agent in the implementation of the programme, and context will constrain what is implemented. 1,7 Programmes can also be self-transformational: as a programme is implemented, it may be altered according to lessons learnt and may be adapted to context changes that have resulted from the introduction of the programme. A realist review must therefore be able to account for this adaptability. This aspect was important in the review that is the topic of the present paper, as the Housing Subsidy Programme policy had evolved significantly since 1994. From an initial focus on starter houses in whose growth households were required to invest, the policy now makes provision for fully built houses of good quality that are aimed at incentivising market take-off. 15 Realist reviews are not free of limitations. Realist review methods have been criticised for not being able to provide definitive answers to policy issues. The practical applicability of the realist approach has also been called into question, with some arguing that although, theoretically, the method offers useful lenses with which to look at programmes, it is difficult to apply these lenses with the methodological rigour and precision required of evaluators. A widely contested issue is how realist reviewers define and interpret causation. Realist reviews tend to emphasise contextual knowledge (what works for whom in what context) over normative positions; and then, too, the nature of causation is often debatable.
Effectively, realist reviews should pay attention to how existing world views influence specific studies and researchers' interpretations of the results. The danger further exists that researchers will choose literature that is in line with their own epistemological and ontological assumptions. A further criticism is that there is too little emphasis on the question 'does it work?' (as opposed to what works under what conditions) and an over-emphasis on contextual factors. 13 It is these very criticisms that have necessitated this paper, which reflects on the practical use of the method while attempting to answer a policy question in a complex government programme.
Background to the programme and implications for the review
South Africa's government-subsidised Housing Subsidy Programme is a complex intervention both in design and in mechanisms for implementation (Figure 1). It is complex firstly because it has to respond to dysfunctionalities inherited from the apartheid government. The Group Areas Act of 1950 moved most black people from the core urban areas to impoverished and marginalised townships. Landownership for black people was revoked during the 1950s and only selectively reinstalled in the second half of the 1980s. The resulting inequality between black and white households should not be underestimated. As a result, the Housing Subsidy Programme was central to the political negotiations during the transition from apartheid to democracy and was important for restorative justice. 18 Housing is now both a constitutional right that the state has an obligation to realise progressively (as affirmed in the Constitutional Court case of the Government of the Republic of South Africa vs Grootboom in 2000) and an individually owned asset that functions in the property market. 15,19,20 Responding to apartheid property-ownership biases (in urban areas), the Housing Subsidy Programme adopted an ownership model designed to redistribute wealth, ensure the participation of the poor (particularly black and coloured people formerly denied ownership in urban areas), and enable households to access and benefit from the workings of the property market. 21,22 The intervention logic or the theory of change was thus always more than the mere provision of accommodation. The provision of accommodation was a means to reduce asset poverty, address the failings of the market, give the poor equitable access to the property market and create wealth for those previously excluded (Figure 1). 23 The 2004 Human Settlements Strategy added to this a clear focus on asset creation as a means of poverty alleviation.
The theory of change was thus a market-based approach to asset building. Furthermore, when the review started, we had to accept this theory of change because it was the policy position adopted by the DHS. Later in the paper we note that during the review process we started to question this one-dimensional asset-building approach.
A second source of complexity is that the outcomes of the Housing Subsidy Programme are contingent on factors beyond its control or influence. Among these factors are macroeconomic conditions (employment, interest rates, and so on), concomitant investment in public spaces by local government, provision of municipal services, and the socio-economic conditions of the beneficiaries.
Thirdly, the intervention is complex because of its delivery arrangements. Nine provincial Departments of Human Settlements annually deliver housing by means of thousands of construction projects, using a range of delivery arrangements with municipalities and private contractors.
The nine provinces vary considerably in the way in which they package housing projects, select and appoint building contractors, monitor adherence to policy objectives, work with local governments to secure the spatial planning and other planning approvals necessary for project delivery, and provide bulk services such as water and sanitation. They also vary in the way they plan development so as to integrate low-income households with the rest of the municipality. A further complication is that architects and town planners make decisions about settlement design and land-use schemes (which in turn influence the development trajectory of a settlement). These decisions are made on a project-to-project basis so as to optimise the effective use of land and other resources.
Finally, the households that benefit from government housing subsidies vary in terms of economic circumstances, size and composition, level of education, and so on. To qualify for a subsidy a household must have a combined monthly income of no more than ZAR3500. But households in this income category may be unemployed and dependent on government grants, or formally employed with the possibility of upward economic mobility. They may be single-parent or two-parent households. The type of household determines or influences the extent to which a house will be an asset to that household and how well it will use the resources provided by the Programme. Variation in outcomes is thus only to be expected. Isolated studies on whether housing is elevating people out of poverty are likely to reach different conclusions.
All these complexities had implications for the review. In addition to the ideological context, we had to know the background of papers on housing delivery, such as in which province the research was conducted and the terms of the contractual relationships between developers, contractors and the provincial governments. We also had to take into account the fact that most housing research is currently being done in urban contexts, and chiefly in four or five of the largest metropolitan areas, which, although not necessarily a negative, could give our review an urban bias. These factors significantly influence the ability of the Housing Subsidy Programme to achieve its policy objectives and tend to make the delivery mechanisms unduly dependent on context.
The review
The review was commissioned by the DHS and the DPME as part of the cabinet-approved National Evaluation Plan of 2013/2014. The DPME is the custodian of the Plan, as part of the implementation of the National Evaluation System. After 20 years of implementing the Housing Subsidy Programme, the DHS reviewed its housing policy to respond to the transition to a broader human settlements approach, initiated by the 2004 Breaking New Ground strategy and mandated in 2009 with the name change from 'Department of Housing' to 'Department of Human Settlements'.
Our review was one of seven evaluations that the DHS conducted in partnership with the DPME, intended to influence and inform this policy review process. The need for a review emanated from this policy need.
The review questions
The review's specific focus was to 'determine if the provision of state subsidised housing [had] addressed asset poverty for households and created assets for municipalities'. More specifically, the review questioned whether subsidised houses were 'growing in value' and whether beneficiaries were indeed obtaining and benefitting from this growth. A set of 14 secondary questions pertained to the theoretical and conceptual understanding of housing and assets, asset generation for individual households, and asset generation for municipalities (see Appendix 1 in the supplementary material for a full list of questions). We had two difficulties with the review questions. Firstly, the focus on asset generation for both households and municipalities required us to combine two methods. Whereas to assess housing assets we could refer to the existing literature, to assess municipal assets we had to do new empirical work because little had been done. While these two types of assets are obviously linked, they are distinctly different issues for which a range of different assumptions exists. Secondly, each of the 14 secondary questions added a different emphasis. Although most of these questions were related, during the review process it proved difficult to devote sufficient attention to all of them. For example, the question about whether title deeds do indeed provide poor people with a platform for market access was a specific focus that required much attention - one that proved to be difficult to answer given that title deeds had to date been issued to only 50% of those households which had received a housing asset as part of the Housing Subsidy Programme. The wide range of questions necessitated a wide range of literature searches, on the assumption that a considerable body of research is already available on each of the issues.
The review process
The main research question of the review was whether the Housing Subsidy Programme had provided assets to the poor and whether these assets had helped poor households to escape from poverty. The review process evolved in four phases over an originally envisaged period of 6 months. In the end, the process took more than 1 year to complete. In phase 1, the DHS framed the questions in collaboration with the DPME and an evaluation steering committee, and subsequently appointed an external review team based at the University of the Free State to conduct the review. The review team had to suggest a review method. Originally, the review team proposed the idea of conducting a systematic review to the commissioning departments. In the inception phase of the project, the limitations of the proposed method were pointed out by the commissioning departments; the weaknesses of this approach soon became apparent in the initial literature scan conducted by the review team. Most of the literature in housing was to be found in grey literature sources and not in academic studies. The existing research also varied in design, so that while many case studies had rich qualitative data, they suffered from a lack of randomised control trials or other impact-evaluation measures - a situation often encountered in health-related research. This situation provided further justification for the review team to change the initial method, and a critical realist review was thus proposed to the commissioning departments. The commissioning departments, in approving this methodology, noted that it provided the necessary flexibility and also presented a methodologically defensible approach to respond to the review questions. Phase 1 also saw the introduction of the review team to the evaluation steering committee, which was established by the commissioning departments in line with the requirements of the National Evaluation Plan. The evaluation steering committee comprised staff from the DPME, the DHS, National Treasury, a number of officials from local municipalities and a number of handpicked academic researchers. The DPME also appointed two external peer reviewers to comment on the work of the review team at different stages of the review process.
Phase 2: Conceptualisation and search strategy
Once the review team was familiar with the terms of reference, the team familiarised itself not only with the housing theory of change pertaining to asset building but also with the various theories of asset building. The review team had to indicate from which paradigm it would view asset building. The team argued that it largely accepted the framework of asset building portrayed by the theory of change. Yet, it was also made clear that it would adopt a more critical and normative stance in this regard. The main point is that, as reviewers, we had to work with the theory of change prescribed by the Housing Subsidy Programme. In line with the realist position that programmes should be regarded as theory, the review team at this stage also spent time with the commissioning departments in reviewing and attempting to understand the theory of change that had been developed by the commissioning departments. After this, the review team developed a detailed methodology chapter in which it set out the literature search strategy, where the search would be conducted and how the information would be synthesised. This was an expansion of what the review team had presented to the commissioning departments during the project inception phase. In line with the realist approach, the search strategy comprised a set of search terms, databases to be searched and other information. The strategy, however, allowed the review team to use other manual search processes, like reference lists of studies reviewed and word-of-mouth suggestions by experts in the field of housing, which enabled the process to remain open and flexible as new literature was found and added to the review.
Phase 3: Search process
Phase 3 was a structured literature search using not only various databases but also documents provided by the DHS. In line with the realist review method, we formulated the following CMO configurations, i.e. hypotheses, directly related to the theory of change provided by the DPME and the DHS:
• Housing subsidies improve social networks and create social capital.
• Housing subsidies improve health outcomes.
• Housing subsidies improve educational outcomes.
• Housing subsidies create security of tenure for women.
• Housing subsidies create security of tenure for the aged.
• Housing subsidies create security of tenure for the disabled.
• Housing subsidies improve household stability.
• Housing subsidies result in a higher degree of citizenship responsibility.
• The Capital Housing Subsidy results in a feeling of improved security of tenure.
• Housing subsidies engender feelings of belonging.
• Housing subsidies improve social inclusiveness and integration.
• Housing subsidies result in positive attitudes towards one's own 'asset' (house).
• Housing subsidies help restore people's dignity.
• Housing subsidies allow households to trade their units.
• Housing subsidies enable households to 'climb the housing ladder'.
• Housing subsidies allow people to raise collateral for other business activities.
• Housing subsidies make it possible to obtain mortgage finance.
• Housing subsidies reduce expenditure on transport if the houses are well located.
• Housing subsidies have a positive impact on home-based enterprises.
• Housing subsidies help increase household income.
• Housing subsidies can result in rental income.
• Housing subsidies lay the foundation for increased investment in housing.
• Housing subsidies lead to an increase in the property values of units.
• The informal trading of subsidised housing units mitigates their potential value.
• Housing subsidies improve households' access to employment.
• Housing subsidies increase poverty.
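A realist review tracks, for each hypothesis, which sources support or refute it and under which conditions. As a minimal bookkeeping sketch (the class, field names and source identifiers below are hypothetical illustrations, not artefacts of the actual review), each CMO configuration could be recorded like this:

```python
from dataclasses import dataclass, field

@dataclass
class CMOConfiguration:
    """One context-mechanism-outcome (CMO) hypothesis with logged evidence."""
    hypothesis: str
    context: str = ""        # e.g. settlement type, local market conditions
    mechanism: str = ""      # how the subsidy is expected to produce change
    outcome: str = ""        # the change the hypothesis predicts
    supporting: list = field(default_factory=list)  # IDs of supporting sources
    refuting: list = field(default_factory=list)    # IDs of refuting sources

    def verdict(self) -> str:
        """Crude evidence tally: supported / refuted / contested / untested."""
        s, r = len(self.supporting), len(self.refuting)
        if s == 0 and r == 0:
            return "untested"
        if s > r:
            return "supported"
        if s < r:
            return "refuted"
        return "contested"

cmo = CMOConfiguration("Housing subsidies can result in rental income")
cmo.supporting += ["source-014", "source-052"]
cmo.refuting.append("source-101")
```

In practice the synthesis is qualitative rather than a vote count, but an explicit structure like this makes it visible which configurations remain thinly tested.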
The search process we followed was iterative and flexible, and continued throughout all the stages of the review. Unlike conventional review methods, in which literature searches cover a specific period and follow a strict process articulated in a search strategy, in our review literature was included as and when it came to the notice of the review team. This iteration enriched the review process and ensured that no important seminal studies were left out.
Applying the realist approach meant contending with four main ideological viewpoints: the neoliberal, the Marxist, the American welfare-policy view and the developing-country asset-accumulation view. The neoliberal view sees housing and asset building largely in terms of the market, whereas the Marxist view is that housing should in no way be commodified. Between these two extremes we find two main schools of thought: one originating from research on asset building in the USA, emphasising the importance of investing in housing to pay for education and retirement 25 , and the other from research in developing countries, emphasising the importance of asset building for poor people in urban areas, for health, employment and stability, and particularly for the stability of migrants 20 .
These ideological presuppositions dominate much of the research on housing. In contrast to practice in the health professions, housing research findings do not originate from randomised control trials but mainly from case studies, and are influenced by the researchers' ideological presuppositions. Given South Africa's apartheid past, a large portion of housing research is situated within critical theory, which is known to be sceptical of markets. In reviewing the literature we thus also had to understand the researchers' ideologies. We often had to make decisions about the value of a contribution solely on the basis of its authors' ideological presuppositions, or had to take into account ideologically opposite findings. Overall, we could divide the studies into two categories: theoretically thorough work based on rather scant empirical results, and work based on large empirical data sets but theoretically shallow and, moreover, riddled with methodological concerns. The ideological problem was further complicated by the fact that the theory of change was based on the assumption of an ideal condition: increased access to the housing market for the poor. Table 1 shows, by means of an overview of the main findings from our sources, how we tested some of the CMO configurations.
We found approximately 1160 relevant sources with which to test our hypotheses; some sources were relevant to more than one hypothesis.
The DHS also provided existing research and evaluations that it had previously commissioned. We then examined and assessed the titles and available abstracts for relevance to the review questions, and found 320 research reports and papers to be relevant. These sources included both academic and grey literature identified by means of the process described above. Towards the end of the search, we added new papers found during the research process, a practice commonly followed in realist reviews.
Phase 4: Hypothesis testing
In Phase 4 we used these sources to test our 12 hypotheses. We identified the specific research contexts, mechanisms and outcomes related to each source linked to a specific hypothesis. In our review, we noted the extent to which housing practice as revealed by these sources was based on specific case studies and was therefore not necessarily generalisable. Finally, the DPME asked the project team to test whether the data collected supported the existing hypotheses.
Having done this, we then sought further clarification through interviews with the authors of the initial texts. Because the review was part of the National Evaluation Plan, which adopted utilisation-focused evaluations, the participation of users of evidence was important.
The review was therefore carried out with the active participation of the implementing departments and their key stakeholders, including National Treasury. Like the previous deliverables, the results from Phase 4 were presented in an evaluation report that was submitted to the DPME and the DHS for review. To test our analysis, this report was also presented at a number of workshops attended by government officials, prominent academics working in this specific field and people in the NGO sector.
Although we, as the reviewers, had a certain level of independence, the stakeholders shaped the review questions and the different outputs of the review, including the interpretation, analysis and recommendations.
Analysis
Having provided an overview of the process, we turn to an analysis of the review method.
Working with a contested theory of change
The national housing theory of change has a number of outcomes on which there is not always consensus. Although the literature on housing and the theory of change describe a number of pathways through which households that receive fully subsidised houses are able to escape poverty and build wealth, one pathway has been dominant in research and evaluation. This pathway argues that a functional property market will be created in the following ways: the subsidised housing appreciates in value; subsidised houses are incorporated into the property market; subsidised houses enter municipalities' rates rolls; the value of the poor's share of the property market grows; and the poor move up the housing ladder. The dominance of this particular pathway could, theoretically and from a measurement point of view, be ascribed to the fact that it is relatively well established in the literature. However, the theory also acknowledges several factors that block this pathway: racially skewed participation in the property market (because apartheid delineated suburbs along racial lines); biased distribution of resources and wealth; high levels of poverty and unemployment; minimal private sector investment in low-income areas; and a dearth of research on how black people, with little experience of dealing in the property market because apartheid prevented black ownership of property, function in the property market. On the positive side, the theory acknowledges factors that clear the pathway: well-located land, effective planning and deeds registration, the creation of functional neighbourhoods, access to private sector finance, and good-quality housing.
The tension between the intended outcome and the contextual limitations entailed the risk that the reviewers could easily align with either a pro-market or an anti-market perspective. Probably more problematic is the fact that some of these inhibiting factors could prove so overwhelming that the theory of change might not be practically possible. The theory also provides only a single mechanism by means of which asset building can take place, namely the housing market. However, existing research suggests a range of alternative ways of creating assets 26 , such as education, settlement stability and intergenerational transfers. Focusing a theory of change only on the market does not engender a holistic understanding of assets. The review team pointed this out early on, and the DPME and the DHS accepted a broader understanding of housing assets. This revision highlights the importance of reaching agreement on the theory of change on which the review process is focused.
The importance of review questions
Pawson and Tilley 1 argue that reviews need clear policy questions suited to the approach. The review questions with which we were working were not developed with the realist evaluation approach in mind, and there were also too many of them. Because the review was commissioned by government departments, there was furthermore no flexibility to adjust the review questions. Having too many questions meant that a large number of CMO configurations needed to be tested. This turned out to be a challenge, and we were not always able to subject the CMO configurations to thorough testing. Also, the combination of a review and questions requiring primary research made the project difficult to manage. The lesson is that even when reviews have to respond to pressing policy questions, the questions should be streamlined, and the commissioners of the research should not expect the reviewers to respond to all the pressing policy questions at the same time.
Synthesising and reporting issues
Although the realist review method is theoretically sound, in practice the analysis of the relationships between context, mechanism and outcome requires much effort; this is a limitation that Pawson and Tilley acknowledge. The idea of programmes as open systems is, for example, theoretically useful because it allows the evaluators to see the programme as part of a broader social and economic system. This idea does, however, make the boundaries of the programme wide and thus not very definitive. In our case, this meant that a wide range of articles could be considered in the review. Also, because a realist review can potentially include a range of studies with different paradigms and methods that test a number of hypotheses, it can be intellectually enormously challenging. There are no simple tick-box solutions for presenting findings. Synthesising across more than 400 studies, testing 12 hypotheses underpinned by four theoretical/philosophical views, was not always easy. The review team had to work through a considerable volume of data, and this volume, combined with answering more than one review question and synthesising across many studies, considerably complicated our task.
Reconciling methodological values and commissioners' expectations
The commissioners of the review hoped it would offer judgement on the effect that the Housing Subsidy Programme has had on the asset base of the poor and on the effect that housing has had on poverty. However, this was a difficult task. The ability of the Housing Subsidy Programme to produce assets that the poor can use to help them escape poverty is contingent on so many factors that it was difficult to declare with certainty what effect the Programme had had, in which context and for which category of beneficiaries. Perhaps, too, the expectation was too high. As Pawson et al. 3 suggest, effort must be directed at securing an understanding from the commissioners of government evaluations as to the kinds of answers the approach will generate. The commissioners moreover have to invest in unpacking and interrogating the evaluation findings to understand the implications for policy. In this case, the evaluator really walks alongside policymakers 27 and plays the role of, as Pawson and Tilley put it, 'alerting the policy community to the caveats and considerations that should inform decisions' 28 .
Contested programme outcomes
Because different theoretical paradigms, analytical lenses and various academic schools of thought are involved in research on the Housing Subsidy Programme, the existing literature yields no clear-cut normative position. Although the realist approach offers some ways of dealing with this ambiguity, it nevertheless remained difficult to synthesise the body of evidence on any of the CMO configurations that were tested. Obviously, the normative positions in many of the research papers were determined by the ideological positions of the researchers. Although we attempted to factor this into the analysis, doing so did not create a better understanding; in fact, the ideological divide just became bigger. Obviously, the contested nature of housing research reflects the contested nature of housing itself. Yet, from an evaluation point of view, it remains difficult to reconcile different conclusions about the same reality. There is evidence of both market failure and of some housing asset generation. If the market has failed 80% of the people who traded, is it a valid conclusion that market failure should necessarily lead to the abandonment of the programme?
Dealing with limitations in primary studies
Housing research in South Africa is heterogeneous, and the research landscape is dominated by regional, qualitative case studies. Systematic evaluations, particularly of government programmes, are even more limited. In housing, very few studies can be classified as 'programme evaluations', and even fewer have established the effects of programmes with sufficient methodological rigour. Any form of review that discounts qualitative findings is sure to bypass the bulk of the research in the South African housing sector. We were able to use the realist review method's CMO configurations to identify some regularities and patterns across the different local case studies. However, case studies and other cross-sectional studies were not adequate to address the issue of asset creation as thoroughly as the commissioners would have liked. The criticism that the quality of research matters in critical reviews remains important, and the uneven nature of the existing research was problematic in our review. This was not a weakness of the methodology itself but pointed to a lack of investment in theory development in most research and to weaknesses in how housing research agendas are crafted. There is limited synergy between policy issues and the kind of research being done by academics and other partners.
Despite the challenges reflected above, the approach was useful for clarifying which factors will help the Programme to achieve the intended outcomes and for pointing out what government should focus on to build assets for the urban poor. It was also useful for clarifying what not to focus on; for example, the focus on the poor prevents market access for poor households in the secondary market.
Conclusion and final reflections
We have shared our experience of using the realist review method to evaluate a government programme. We have explained some of the difficulties in responding to broad policy questions and the way in which this method helped us to assess a complex social programme and deal with research of inconsistent quality (mostly small studies using qualitative methods). A realist review can help in explaining what change is happening, for whom and how, and in showing which aspects of a programme create enabling conditions for results. These attributes make it a useful framework for reviewing politically important programmes like the South African government's Housing Subsidy Programme. The government does not intend to abandon the Programme; it is a central element of the country's democracy. The review in which we took part was intended to strengthen elements of the Programme that are not functioning properly, to enhance performance and help achieve results. The findings of the realist review alert government to those components of the Programme that need strengthening and help it to respond to a context that is complex and evolving.
This methodology has much potential in reviews and evaluations of other large, complex government interventions. However, in our case the review was complicated by a number of issues. Firstly, the theory of change with which we were provided emphasised one pathway of change, namely a market-orientated approach to asset building, while a substantial portion of asset building takes place outside market processes. Also, because this particular theory of change is contested, the findings from different studies and the comments and inputs from different sector experts were sometimes irreconcilable. Secondly, too many review questions inhibited a focused review, and the analysis of the relationships between context, mechanism and outcome was difficult. This was further complicated by the fact that there are many different ways in which the poor use housing to escape poverty, which necessitated testing a number of CMO configurations and considerably complicated the synthesis and presentation of findings. Thirdly, the ideological divide within housing research was a dominant factor in assessing the literature. In the end, it turned out that very few studies used asset generation as an important point of departure, which in turn made answering the review questions more difficult. Fourthly, a further challenge was that of reconciling the explanatory nature of the findings from the realist review with the commissioners' expectations of conclusive findings as regards the impact of housing on asset creation. Lastly, South African housing research is not always empirically sound, and most studies tend not to address issues that are relevant to policy. As synthesis relies on existing research studies, this shortcoming created its own challenges.
From our experience, however, realism offers potential as an alternative to conventional review methods, not only in evaluation synthesis but also in primary programme evaluations. The challenges faced in this review should not deter those who want to explore the use of realism in assessing housing programmes or any other complex programmes. We have highlighted areas in which evaluators will need to re-think their practice to improve the application of the methodology in programme evaluations.
Figure 1: Population-intervention-comparison-outcome-context: assessing whether the Housing Subsidy Programme created assets through the ownership programme.
Table 1: Overview of the main findings within the review framework

The existing research was found to have a number of shortcomings. Firstly, most of the existing research focused on the early stages of housing development processes. Earlier studies tended to focus on variables or on the immediate outcomes of the housing development processes for households and neighbouring communities. Longer-term assessments were few. Because asset generation is a long-term activity, the absence of long-term assessments was a major shortcoming; evaluations over more than one generation are more likely to reflect on issues pertaining to asset building. Secondly, two paradigms dominate South African housing research. The first pole, critical theory, has been instrumental in challenging apartheid housing policies. The second pole contains research largely based on a positivist research paradigm or, in some cases, 'ideologically neutral' research. Housing research is generally either conceptually or theoretically rich but empirically underdeveloped, or empirically rich but conceptually poor. Thirdly, the notions of housing and asset accumulation are not a prominent research direction in South Africa. Asset-based welfare or asset-based development has received scant attention in South Africa, and the majority of the research on asset generation has to date originated from NGOs and individuals not affiliated to universities. The research has moreover hitherto been narrowly focused on housing as an economic asset; asset building is not viewed in a more holistic framework, which happens to be the conceptual framework used in the present review. Lastly, because housing research in metropolitan areas dominates the housing research landscape, we also know very little about housing issues in smaller urban settlements.
Highlights
• Finite-element modeling of a system of two encapsulated ultrasound contrast agents.
• For equal-sized agents, resonance decreases and amplitude increases.
• For unequal-sized agents, the smaller bubble's resonance increases and amplitude decreases.
• A larger microbubble strongly influences the response of the smaller bubble.
• Bubble coupling leads to significant nanobubble vibrations at clinical frequencies.
Introduction
Small gas-filled microbubbles, typically ranging in size from 1 to 8 µm and encapsulated with a thin, flexible and biocompatible stabilizing shell, are currently employed as diagnostic ultrasound contrast agents [1-3]. Microbubbles vibrate within an ultrasound beam about their equilibrium radius, with scattering cross-sections several orders of magnitude larger than those of a size-matched solid particle [4]. Through resonant oscillations and nonlinear harmonic and subharmonic emissions [5], the microbubble signal enables the detection and separation of echoes originating from the blood, to which microbubbles are confined due to their size, from the much stronger echoes of the surrounding tissue [6]. This vasculature-specific signal enables the quantification of blood flow and has many applications spanning detection, diagnosis and therapy monitoring in cardiology and oncology [3,7,8]. More recently, ultrasound-stimulated microbubbles have been exploited to deliver local and targeted bioeffects under specific acoustic stimulus [9,10]. Microbubble-mediated shear stress and microstreaming are among the mechanisms behind these targeted therapies, which include the transient opening of the blood-brain barrier [11], site-specific drug/gene delivery [12-14], vascular shutdown therapy [15] and sonoreperfusion [16].
For both diagnostic and therapeutic techniques, an understanding of ultrasound-driven microbubble dynamics is critical to ensure robust and repeatable application. As has been well documented, microbubble behaviour is a function of both its intrinsic features [5,17-19] (e.g. bubble size and shell properties) and extrinsic environmental factors [20-24], including fluid viscosity, fluid temperature, local boundaries and the presence of neighboring microbubbles. Indeed, there have been many mechanistic studies investigating the physics of vibrating microbubbles to elucidate the role of these factors in bubble behaviour as it relates to imaging and therapeutic potential, the majority of which were performed on individual microbubbles [18,22,25-28]. These investigations have explored unique physical and biophysical phenomena at the individual-bubble scale, yielding new insights into contrast imaging [29,30] and ultrasound-mediated cellular therapies [31-33].
While it is a challenge to estimate local concentrations of contrast agent in vivo, microbubbles may not be in isolation when used diagnostically or as a therapeutic agent. Order-of-magnitude estimates suggest that clinical agent doses (~1:5000 dilution) possess an average inter-bubble spacing of 80 µm, which can decrease due to i) acoustic radiation forces [34], ii) ultrasound-induced bubble coalescence [35] and iii) complex fluid flow patterns [36]. Furthermore, smaller ultrasound-sensitive agents are currently being investigated for both diagnostic and therapeutic application, including phase-shift nanodroplets that can be acoustically vaporized into in-situ microbubbles [37], and stabilized nanobubbles, encapsulated bubbles on the order of several hundreds of nanometers in radius [38,39]. Assuming a volume-limited dose similar to that of clinically used micron-sized bubbles, a decrease in size by a factor of 10 translates to a 1000-fold increase in local bubble density [40].
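The two order-of-magnitude claims above (an 80 µm mean spacing and a 1000-fold density increase) follow from the cube relation between mean spacing and number density; the quick check below takes the 80 µm figure as given:

```python
def mean_spacing(number_density):
    """Mean inter-bubble spacing for a uniform suspension, d = n**(-1/3)."""
    return number_density ** (-1.0 / 3.0)

# An 80 um mean spacing corresponds to a number density of (80 um)**-3,
# i.e. roughly 2e12 bubbles per m^3 (~2e6 per mL).
d_micro = 80e-6           # mean spacing [m]
n_micro = d_micro ** -3   # number density [bubbles per m^3]

# At a fixed (volume-limited) gas dose, shrinking the bubble radius 10-fold
# multiplies the bubble count, and hence the number density, by 10**3 = 1000.
n_nano = n_micro * 10 ** 3
d_nano = mean_spacing(n_nano)  # mean spacing shrinks 10-fold, to 8 um
```

Because spacing scales as the inverse cube root of density, a 1000-fold increase in bubble count only reduces the mean spacing by a factor of 10, but at 8 µm the bubbles are close enough for acoustic coupling to matter.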
To begin to address this, there are limited studies exploring the physics of bubble clusters, generally performed using analytical modifications of a second-order ODE describing bubble wall motion (e.g. Rayleigh-Plesset-type equations [41]). The majority of these studies focus on bubbles without a material encapsulation or do not take into account the fluid dynamics of the surrounding medium [42,43]. In this study, we examine the effect of bubble proximity in a system of two encapsulated microbubble contrast agents using a finite-element approach that ensures two-way coupling between bubble vibrations and the local fluid environment. Specifically, we examine the coupling between different microbubble sizes and inter-bubble spacings with a view towards the resonance response of the system, as resonance is one of the key features that make microbubbles ideal ultrasound agents for imaging and therapy.
Fluid domain
In the present study, the radial oscillations of two individual microbubbles in free space are considered, situated a distance h apart (see Fig. 1). The fluid domain surrounding the microbubbles is modeled as a Newtonian fluid. Given that the acoustic wavelength is much larger than the microbubble size and that the fluid velocity is much slower than the speed of sound, the fluid was further assumed to be incompressible [44]. Under these circumstances, the fluid motion is governed by the incompressible Navier-Stokes equations (Eqs. (1) and (2)), where v is the fluid velocity, ρ is the fluid density, μ is the dynamic viscosity of the fluid and p is the fluid pressure.
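For an incompressible Newtonian fluid, the continuity and momentum equations referenced here take the standard form:

```latex
\nabla \cdot \mathbf{v} = 0, \qquad (1)
```

```latex
\rho\left(\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v}\right) = -\nabla p + \mu \nabla^{2}\mathbf{v}. \qquad (2)
```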
Microbubble dynamics
The gas inside each microbubble is assumed to be spatially uniform and is modeled as an ideal gas via a polytropic process [44].

Fig. 1. Finite-element model environment and data analysis description. A) A representative example of the mesh grid placement on an individual bubble, where the bubble is divided into 6 sections to allow for spatially dependent application of Eq. (3); simulations were performed in an axisymmetric environment (units in micrometers). B) Schematic view of the two-microbubble system; h denotes the center-to-center distance between the two microbubbles (units in micrometers). C) A sample plot of the radial response of a microbubble at a given transmit frequency; both the maximum radius R_max and the minimum radius R_min were used to calculate the radial excursion. D) The frequency of maximum response (f_MR) and amplitude of maximum response (A_MR) of an individual microbubble.

The pressure difference across the bubble wall, P_B, results from the combined effects of surface tension, the surrounding fluid viscosity and the pressure contributions from the viscoelastic encapsulation, and is given by Eq. (3), where p_0 is the ambient pressure, σ_0 is the initial surface tension at the gas-liquid interface, k is the polytropic index, P_v is the vapour pressure, which is considered negligible compared to the gas pressure (P_v = 0), R and Ṙ represent the bubble radius and wall velocity, respectively, P_visc and P_elas are the pressure contributions due to the viscosity and elasticity of the shell, respectively, and P(t) is the externally applied acoustic pressure at the bubble wall. Multiple models have been proposed to explain the behaviour of microbubbles characterized by a thin viscoelastic shell by incorporating elastic and viscous terms [5,45,46].
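A Marmottant-type expression consistent with the symbols defined above is sketched below as a reconstruction of Eq. (3); the authors' exact arrangement of terms may differ:

```latex
P_B = \left(p_0 + \frac{2\sigma_0}{R_0} - P_v\right)\left(\frac{R_0}{R}\right)^{3k}\left(1 - \frac{3k\dot{R}}{c}\right) + P_v - P_{\mathrm{elas}} - P_{\mathrm{visc}} - P(t) \qquad (3)
```

Here R_0 is the equilibrium bubble radius, and the shell terms P_elas = 2σ(R)/R and P_visc = 4κ_S Ṙ/R² are defined in the text.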
Perhaps the most successful nonlinear bubble models to date incorporate phospholipid monolayer dynamics: experimental lipid research highlights that the surface tension of a lipid monolayer, such as those commonly employed in contrast microbubble synthesis, decreases with increasing compression (i.e. decreasing intermolecular area) [47]. Incorporating this physics into simple Rayleigh-Plesset-type bubble models [48,49] has been shown to predict unique microbubble vibrational signatures that have been observed experimentally, including 'compression-only' behaviour [30,50]. Given this, we chose to implement an encapsulation model that considers a radially dependent surface tension, manifested through the elastic pressure contribution P_elas = 2σ(R)/R, with the radially dependent surface tension σ(R) given by Eq. (4), where χ is the shell elasticity, and R_b = R_0(σ_0/χ + 1)^(-1/2) and R_r = R_b(σ_w/χ + 1)^(1/2) are the 'buckling' and 'rupture' radii, respectively, with R_0 the equilibrium radius of the microbubble. These are the radial limits within which the shell contribution has a quadratic dependence on radius [48]. Indeed, Eq. (4) models the repartitioning of phospholipid molecules as manifested through alterations in surface tension. Further, we consider the viscous contribution P_visc = 4κ_S Ṙ/R², with κ_S defined as the surface dilatational viscosity of the monolayer [5]. Note that the compressibility term proportional to Ṙ/c in Eq. (3), which was included despite our assumption of an incompressible fluid, does not play a large role in our simulation results (as Ṙ ≪ c); it was, however, incorporated for validation against well-known models (see below). Note also that Eq. (4) was originally derived from surface-area arguments but is incorporated into the current study in the form presented, as has been done previously [51].
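The piecewise surface tension of Eq. (4) is straightforward to implement. The sketch below follows the buckling/elastic/rupture regimes described above, with the paper's parameter values as defaults; the rupture-radius expression R_r = R_b(σ_w/χ + 1)^(1/2) is inferred from continuity at σ(R_r) = σ_w and may differ in detail from the authors' form:

```python
def surface_tension(R, R0, chi=1.0, sigma0=0.01, sigma_w=0.072):
    """Radius-dependent surface tension sigma(R) of the lipid shell (Eq. 4).

    Buckled shell (R <= R_b): zero tension.
    Elastic regime (R_b < R < R_r): quadratic in R.
    Ruptured shell (R >= R_r): clean gas-water interface value sigma_w.
    """
    R_b = R0 * (sigma0 / chi + 1.0) ** -0.5   # buckling radius
    R_r = R_b * (sigma_w / chi + 1.0) ** 0.5  # rupture radius (inferred form)
    if R <= R_b:
        return 0.0
    if R < R_r:
        return chi * ((R / R_b) ** 2 - 1.0)   # elastic regime
    return sigma_w                             # ruptured interface

sigma_eq = surface_tension(2e-6, 2e-6)  # at R = R0, recovers sigma0
```

By construction, σ(R_0) = σ_0 and σ varies continuously across both regime boundaries.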
Model description and method of solution
The boundary conditions imposed along each bubble free surface require that the velocity across the boundary remain continuous and that the fluid pressure at the interface match P_B, the pressure at the bubble wall given by Eq. (3). Note that we impose a no-slip velocity condition and negligible shear stress in the tangential direction along this interface. In this manner, these conditions enforce two-way coupling between the bubble wall motion and the surrounding fluid. In order to allow slight perturbations deviating from spherical oscillations to influence the fluid domain, each bubble free surface was divided into 6 sections (Fig. 1), with each section subjected to the local boundary conditions given above. This allows the local curvature, approximated as spherical and spatially averaged over 1/6 of the microbubble, to contribute to the neighboring fluid motion. Pilot studies using 12 segments did not yield significantly different results. The final microbubble dynamic curve calculated in our model is the average of the radial responses of the different sections of a given microbubble.
The governing equations subject to the above boundary conditions, along with the boundary condition of constant p_0 along the edges of the simulation domain, were solved computationally using the finite-element method (FEM) with COMSOL Multiphysics 5.8 (COMSOL AB, Burlington, MA). Fig. 1B illustrates the geometry of the model and Fig. 1A is a sample of the mesh grid in our FEM simulation for an individual bubble. Due to the symmetry of the two microbubbles and the computational domain, only half of the simulation space was calculated in an axisymmetric environment to minimize the computational time. The mesh size was selected to be 8-20 times smaller than the smallest bubble radius. This results in a mesh size that is much smaller than the wavelength of the acoustic wave. Further, the mesh density was much higher in the neighborhood of the microbubble wall, in order to capture the salient physics of interest, and decreased further from the bubbles, where we do not expect any significant effects. The moving microbubble free surface was described using a moving-mesh arbitrary Lagrangian-Eulerian (ALE) algorithm. This allows the computational mesh to move arbitrarily to optimize the shape of the elements, and the mesh nodes to track the moving boundary. The microbubbles are considered to be inside the focal volume of a conventional ultrasound transducer. Based on the size of a typical focal volume (~mm³) and the size of contrast agent microbubbles (~µm), the microbubbles were considered to be inside a uniform domain of ultrasound energy. This energy contribution enters our model as a change in acoustic pressure on the microbubble surface (P(t) in Eq. (3)). The transmit pressure employed was a Tukey-windowed (tapered cosine) 10-cycle pulse at a sampling frequency of 500 MHz. The other parameters in this study were held constant: ρ = 1000 kg/m³, k = 1.095, μ = 0.001 Pa·s, χ = 1 N/m, κ_S = 1.5×10⁻⁹ kg/s, σ_w = 0.072 N/m and σ_0 = 0.01 N/m.
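The transmit pulse described above can be sketched as follows. The cycle count, window type, and sampling rate are from the paper; the taper fraction `alpha` of the Tukey window is our assumption, since the paper does not state it.

```python
import math

def tukey_pulse(fc, n_cycles=10, fs=500e6, p_a=30e3, alpha=0.25):
    """Tapered-cosine (Tukey) windowed tone burst at center frequency fc [Hz].

    alpha is the fraction of the pulse spent inside the cosine tapers
    (an assumption; the study specifies the window type but not the taper).
    """
    n = int(round(n_cycles / fc * fs))
    pulse = []
    for i in range(n):
        x = i / (n - 1)                    # normalized position in [0, 1]
        if x < alpha / 2:                  # rising taper
            w = 0.5 * (1 + math.cos(2 * math.pi / alpha * (x - alpha / 2)))
        elif x > 1 - alpha / 2:            # falling taper
            w = 0.5 * (1 + math.cos(2 * math.pi / alpha * (x - 1 + alpha / 2)))
        else:                              # flat top
            w = 1.0
        pulse.append(p_a * w * math.sin(2 * math.pi * fc * i / fs))
    return pulse
```

At fc = 1 MHz and fs = 500 MHz this yields 5000 samples, tapering smoothly to zero at both ends.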
We acknowledge that recent work has demonstrated transmit frequency/bubble size dependent shell properties [52,53]. Given that the current study explores many transmit frequency and bubble size combinations, the shell parameters adopted here were chosen to lay well within the range of previous experimental reports on phospholipid-encapsulated contrast agent microbubbles.
To study the effect of frequency-dependent microbubble vibration, we performed our simulations using individual tone bursts with a transmit frequency ranging from 1 to 8 MHz in increments of df = 25 kHz. The microbubble diameter (d = 2R_0) range investigated in this study spanned 0.5 ≤ d ≤ 4 µm to cover both nanobubble and traditional microbubble size ranges [52], with a bubble center-to-center distance varying from 2 to 32 µm and a peak-negative pressure ranging from 1 to 120 kPa. This parameter range was selected for its relevance to clinical imaging and therapeutic studies.
Analysis of radial oscillations
For a given microbubble radial profile, the radial excursion was calculated based on the average of R_max and R_min over the 6 regions of the microbubble, which represent the maximum and minimum dynamic radius, respectively (Fig. 1C). To study frequency-dependent microbubble vibrations, the radial excursion was calculated for each bubble at each transmit frequency to generate a resonance curve. The metrics extracted from this curve, as shown in Fig. 1D, were the amplitude of maximum response (A_MR) and the frequency of maximum response (f_MR). Indeed, f_MR represents the frequency at which the damped radial oscillations are maximal (i.e. the resonance frequency of the nonlinear damped microbubble system), not to be confused with other closely related 'resonance' frequencies, including the frequencies at which maximal scattered pressure or scattering cross-sections are observed [52,54].
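The two curve metrics can be extracted with a few lines of post-processing. The exact per-segment combination used for the excursion is our reading of the text (average of the per-segment half peak-to-peak excursions over the 6 surface segments).

```python
import math

def radial_excursion(segment_traces):
    """Average the per-segment (R_max - R_min)/2 over the bubble's surface
    segments (6 in this study); the precise combination is an assumption."""
    return sum((max(tr) - min(tr)) / 2.0 for tr in segment_traces) / len(segment_traces)

def resonance_metrics(freqs, excursions):
    """Return (f_MR, A_MR): the frequency and amplitude of maximum radial response."""
    a_mr = max(excursions)
    return freqs[excursions.index(a_mr)], a_mr
```

Sweeping the transmit frequency (1-8 MHz in 25 kHz steps, as in the study) and applying `resonance_metrics` to the resulting excursion list gives one (f_MR, A_MR) pair per bubble configuration.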
Validation
We employed four different metrics to validate our numerical model. Firstly, our model was validated in the limit of a single microbubble in free fluid under low acoustic pressures and compared to the well-known analytical Rayleigh-Plesset equation (RPE) under the same acoustic conditions [44,48]. Fig. 2A shows the radial oscillation profile for a microbubble (R_0 = 1.5 µm) driven at f = 1 MHz and 45 kPa. The graphs show that the result of our simulation (solid; red) and the RPE (dashed; black) are in excellent agreement, with an average percent error of 0.3 %. A second validation (Fig. 2B) was performed by assessing the resonance response of an individual microbubble as a function of acoustic pressure from 10 to 30 kPa. Our simulation generates the expected strain-softening behaviour of decreasing resonance frequency with increasing pressure and a skewing of the resonance curve, as has been observed both experimentally [29,53,55] and through numerical modeling [56]. Thirdly, under low-amplitude driving conditions (~1 kPa), where the bubble experiences small deviations about its equilibrium radius, all bubble models reduce to a similar expression for the size-dependent resonance frequency [57]. In this limit, our model results over the range of 2.5 ≤ d ≤ 5 µm (red dots in Fig. 2C) show excellent agreement with this well-known expression, with an average percent error of 3.8 %. Finally, the fourth validation was conducted by simulating an individual microbubble adjacent to a rigid wall. Indeed, by modifying the RPE via a 'method-of-images' approach [44], it can be shown that the f_MR of an individual microbubble decreases in the presence of the wall; this behaviour is reproduced in Fig. 2D, resulting in shifts of f_MR and A_MR of ~13 % and ~10 %, respectively, in the expected direction, for a bubble sitting at h = 4 µm from the rigid wall.
While we note that the 'method-of-images' does not capture the complex fluid dynamics at the boundary and may not strictly serve as a validating tool, it has been employed in more simplistic microbubble modeling scenarios [58]. Indeed, as a rigid wall is not a biologically relevant boundary, we did not explore this arrangement any further.
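For reference, a minimal single-bubble integrator of a shelled Rayleigh-Plesset equation of the kind used in these validations can be sketched as follows. The incompressible form without the Ṙ/c term is used, and parameter values follow the study; the explicit RK4 time-stepping and the step size are our choices, not the paper's FEM scheme.

```python
import math

RHO, MU, P0 = 1000.0, 0.001, 101325.0   # water density, viscosity; ambient pressure [SI]
K, CHI, KS = 1.095, 1.0, 1.5e-9         # polytropic exponent, shell elasticity/viscosity
SIG_W, SIG0 = 0.072, 0.01               # water and initial surface tension [N/m]

def sigma(R, R0):
    """Piecewise (buckling/elastic/ruptured) shell surface tension."""
    Rb = R0 / math.sqrt(SIG0 / CHI + 1.0)
    Rr = Rb * math.sqrt(SIG_W / CHI + 1.0)
    if R <= Rb:
        return 0.0
    if R >= Rr:
        return SIG_W
    return CHI * ((R / Rb) ** 2 - 1.0)

def accel(t, R, Rdot, R0, pg0, drive):
    # rho*(R*Rddot + 1.5*Rdot^2) = p_B - p0 - p_drive(t)
    pg = pg0 * (R0 / R) ** (3.0 * K)
    pB = pg - 2.0 * sigma(R, R0) / R - 4.0 * MU * Rdot / R - 4.0 * KS * Rdot / R ** 2
    return (pB - P0 - drive(t)) / (RHO * R) - 1.5 * Rdot ** 2 / R

def simulate(R0, drive, T, dt=1e-10):
    """Integrate R(t) with classical RK4; returns the radius trace."""
    pg0 = P0 + 2.0 * sigma(R0, R0) / R0          # equilibrium gas pressure
    R, Rdot, t, out = R0, 0.0, 0.0, []
    while t < T:
        k1r, k1v = Rdot, accel(t, R, Rdot, R0, pg0, drive)
        k2r = Rdot + 0.5 * dt * k1v
        k2v = accel(t + 0.5 * dt, R + 0.5 * dt * k1r, k2r, R0, pg0, drive)
        k3r = Rdot + 0.5 * dt * k2v
        k3v = accel(t + 0.5 * dt, R + 0.5 * dt * k2r, k3r, R0, pg0, drive)
        k4r = Rdot + dt * k3v
        k4v = accel(t + dt, R + dt * k3r, k4r, R0, pg0, drive)
        R += dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6.0
        Rdot += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        out.append(R)
    return out
```

With zero drive the bubble sits at its equilibrium radius, and a 30 kPa tone burst produces a finite, bounded oscillation, which is the qualitative behaviour the FEM model is checked against.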
Results
We examine the frequency-dependent response of a two-microbubble system in three different scenarios: i) the effect of the presence of an identical, size-matched microbubble, ii) the effect of the presence of a nearby smaller microbubble, and iii) the effect of the presence of a nearby larger microbubble. In all scenarios, the frequency-dependent radial resonance response is investigated for varying inter-bubble distances h, and the response of the individual microbubble in free-space (i.e. in isolation) is shown for comparison in green to better appreciate the contributions due to the second microbubble. Fig. 3 shows the result of a simulation of a system of two identical microbubbles (d_1 = d_2) with diameters of 2, 3, and 4 µm. Microbubbles were subjected to a series of tone bursts at a constant peak-negative pressure of 30 kPa and simulated with center-to-center distances of h = 8, 16 and 24 µm. In all examined scenarios, the results show that when a given microbubble approaches another microbubble of the same size, each bubble experiences a decrease in f_MR and an increase in A_MR. Further, the extent of this effect amplifies as the microbubbles get closer to each other, with the maximal effect shown here at h = 8 µm. Taking into account all sizes investigated here, the maximum shift for two closely-positioned microbubbles at h = 8 µm apart, as compared to the response in free space, is a decrease in f_MR ranging from 7 to 10 % and an increase in A_MR from 9 to 11 %. Note here that the small secondary peaks due to harmonic coupling (e.g. 3-4 MHz for d = 2 µm in Fig. 3A) also exhibit the same trend as the primary resonance peaks, albeit at a lower amplitude. Indeed, the presence of these harmonic peaks is a well-known and established feature of resonant bubble systems [56,59,60].
A microbubble in the presence of a smaller microbubble
In the following two subsections, we examine the results of two unequal sized microbubbles (d_1 ≠ d_2). Fig. 4 highlights the resonance curves for the larger microbubble d_1. The following four combinations were examined: a d_1 = 2 µm bubble in close proximity to a d_2 = 0.5 µm bubble (Fig. 4A), a d_1 = 3 µm bubble in close proximity to a d_2 = 2 µm bubble (Fig. 4B), a d_1 = 4 µm bubble in close proximity to a d_2 = 2 µm bubble (Fig. 4C), and a d_1 = 4 µm bubble in close proximity to a d_2 = 3 µm bubble (Fig. 4D). For the system depicted in Fig. 4A, the microbubbles were insonicated at 120 kPa, and all others were insonicated at 30 kPa. We simulated the system with center-to-center distances of 2, 4, 8, 16 and 24 µm. The results presented here indicate that, for all combinations examined, the presence of the smaller microbubble d_2 has negligible influence on the vibration physics of the larger microbubble d_1. Within the frequency resolution employed here, there is no change in f_MR and only a slight shift towards lower A_MR (2-3 %) as compared to its free, isolated response.
A microbubble in the presence of a bigger microbubble
As opposed to the results shown in Fig. 4, there is a significant effect on the smaller microbubble d_2 due to the presence of a neighboring larger microbubble d_1. Fig. 5 shows the results of the following bubble size combinations: a d_2 = 0.5 µm bubble in close proximity to a d_1 = 2 µm bubble (Fig. 5A), a d_2 = 2 µm bubble in close proximity to a d_1 = 3 µm bubble (Fig. 5B), a d_2 = 2 µm bubble in close proximity to a d_1 = 4 µm bubble (Fig. 5C), and a d_2 = 3 µm bubble in close proximity to a d_1 = 4 µm bubble (Fig. 5D). As in the scenario above, the results in Fig. 5A were simulated at 120 kPa, while the others were insonicated at 30 kPa. We simulated the system with center-to-center distances of h = 2, 4, 8, 16 and 24 µm. The influence of the larger bubble is most strongly felt as the two bubbles approach each other. For all combinations of microbubble sizes examined here, the results consistently indicate that the smaller microbubble of size d_2 exhibits a strong and significant increase in f_MR ranging from 7 to 11 %, and a decrease in A_MR ranging from 38 to 52 %, as compared to its isolated response. Looking at Fig. 5B, for example, the isolated, free f_MR of a d_2 = 2 µm microbubble (green curve) at the simulated pressure is approximately 6.5 MHz. In the presence of either a neighboring d_1 = 3 µm (Fig. 5B) or d_1 = 4 µm (Fig. 5C) microbubble, this peak exhibits a drastic decrease in amplitude and a shift to higher frequency (≈ 7 MHz in panel C). Note that the primary resonance of the d_2 = 0.5 µm microbubble (i.e. a nanobubble) is well out of the range of examined frequencies (>8 MHz) and is thus not visible in Fig. 5A.
Another striking and significant result stemming from the influence of a larger microbubble is the presence of a secondary, off-resonance peak that is distinct from the harmonic peak. Indeed, this secondary peak in the frequency-dependent response exhibited by the smaller bubble d_2, observed in all combinations of bubbles examined here, corresponds precisely to the f_MR of the large microbubble d_1 and thus represents a nonlinear coupling between the two bubbles. As previously stated, while the primary resonance response of the nanobubble is not depicted, the off-resonance peak due to the neighboring d_1 = 2 µm bubble is clear, with its influence becoming stronger as the bubbles approach each other (Fig. 5A). This peak, which is maximal at h = 2 µm, appears precisely at a frequency of 4.5 MHz, in excellent agreement with the f_MR of the d_1 = 2 µm microbubble shown in Fig. 4A. This is also readily observed in the other three panels as the inter-bubble spacing is decreased, with the f_MR of the larger bubble (d_1 = 3 µm in panel B; d_1 = 4 µm in panels C and D) corresponding to 3.5 MHz and 2.1 MHz, respectively. Indeed, the final two panels highlight that this peak, derived from the off-resonance nonlinear coupling of the larger bubble vibrations, is distinct from the harmonic peak (a peak observed even in isolated, individual bubbles; see Fig. 2A for example). While these peaks overlap in Fig. 5D due to the specific sizes of the microbubble pair, they are clearly separated in Fig. 5C (d_2 = 2 µm), where the harmonic peak expectedly shifts up due to the smaller d_2 in Fig. 5C versus that of Fig. 5D, whereas the off-resonance peak at 2.1 MHz remains consistent between these two scenarios since the larger microbubble size is constant between these two panels (d_1 = 4 µm). Further, this nonlinear coupling effect can result in a large magnitude effect that rivals or even exceeds the A_MR of the primary resonance peak (e.g. Fig. 5C, black curve).
Finally, Fig. 6 highlights the influence of a larger neighboring microbubble on bubble response with a particular emphasis on the transmit frequency (the panels represent the same two-bubble system as those described in Fig. 5). Indeed, clinical applications of ultrasound are conducted at a fixed transmit center frequency, varying from the lower end of the MHz range for deep targets (e.g. 1-2 MHz for abdominal imaging), to mid-range for more superficial parts (e.g. 6-10 MHz for breast imaging, carotid imaging) [6]. While clinical pulses are shorter in length (and thus more broadband) than the pulses employed here, it is still apparent that depending on the clinical application, the direction and magnitude of the influence exerted by the two-bubble system shifts as the inter-bubble spacing decreases. The fixed frequencies here are chosen to align with the main (e.g. primary) and off-resonance coupling peaks.
Discussion
The results presented here indicate that the presence of a neighboring microbubble influences the radial resonance response of an individual microbubble. We note here that a subset of studies performed on 'clean', unencapsulated microbubbles yields similar relationships regarding f_MR and A_MR. This phenomenon plays a role not only in ascertaining the resonance response of these bubbles at clinically relevant doses, but also in the application-dependent (i.e. transmit frequency-dependent) response of a system of bubbles. Specifically, the magnitude and direction of the shift in response due to bubble proximity is a strong function of the transmit frequency, a direct result of the changes in f_MR and A_MR. For the simplest and idealized case of two equal sized bubbles, the frequency of maximum response for both of them shifts to lower frequencies while the amplitude of maximum response increases. This type of behaviour is similar to the effect of a rigid wall (i.e. a non-biologically relevant boundary) on the response of a single microbubble, which generates the same potential flow as two symmetrically positioned microbubbles oscillating in phase, as shown theoretically using the method of images [61][62][63].
Of perhaps more interest is the situation of unequal microbubble sizes. In this type of two-bubble system, the smaller sized microbubble exhibits a strong shift towards higher f_MR and a drastic decrease in A_MR, in stark contrast to the equal-sized bubble scenario described above. Further, when the two bubbles are in very close proximity, the smaller microbubble exhibits a strong off-resonance response that corresponds to the resonance frequency of its larger companion microbubble, while this larger microbubble exhibits no detectable change in its radial response, neither in f_MR nor in A_MR. These effects are shown specifically in Fig. 6, which is the result of a fixed-frequency simulation for the conditions of Fig. 5. Indeed, only small differences in bubble size are required for this drastic change in overall response. As shown in Figs. 5 and 6, only a relatively small difference in bubble diameter of 0.5 µm is required to switch the observed effects from those demonstrated for a two-bubble system of equal sized bubbles to those of unequal sized bubbles. This is especially of interest when considering practical application of contrast microbubbles. Clinically and commercially available microbubbles (e.g. Definity, SonoVue) are characterized by polydisperse microbubble populations (e.g. [52,64]). While there is ongoing research on the design of monodisperse microbubble formulations with a view to improving contrast image sensitivity, these are still characterized by typical coefficients of variation on the order of 5 % [65,66], which results in an increased likelihood of the situation presented in Figs. 5 and 6: unequal sized microbubbles. The phenomenon observed here also sheds insight into the recent development and characterization of sub-micron bubbles (i.e. nanobubbles).
Indeed, while possessing resonance frequencies much larger than the clinical frequency range on account of their small size (linear estimates beyond f = 30 MHz [40]), robust acoustic measurements have recently provided evidence of nonlinear scattering [67,68], contrast imaging, and therapeutic potentiation [69] from nanobubble populations within clinical and pre-clinical ultrasound frequency ranges. The results presented here, specifically for the nanobubble dataset (d = 0.5 µm), suggest a possible mechanism for this off-resonance behaviour, namely strong acoustic coupling from a neighboring micron-sized bubble (Figs. 5A, 6A). The 'contaminating' microbubble need not be an artefact of bubble synthesis but can also arise from ultrasound-induced bubble coalescence within typical imaging and therapeutic pulsing schemes. In this scenario, numerous off-resonantly driven nanobubbles, in addition to neighboring resonant microbubbles, would contribute to the observed echo at clinical frequencies. Indeed, for ultrasound therapeutics, it is the oscillation amplitude examined here that is relevant, as it can be linked to sonoporation and other bioeffects (e.g. [12]). In fact, there are many current investigations into nanobubble-based therapeutics [39,40,70].

(Fig. 6 caption: The direction and magnitude of the proximity effect is highly dependent on the transmit frequency (i.e. clinical application) of interest. The panels correspond to the same two-microbubble systems as described in Fig. 5. The red curves denote a transmit frequency near the primary resonance response, while the blue curves denote a frequency near the off-resonance peak corresponding to the f_MR of the larger microbubble.)

However, for imaging purposes, we can estimate the far-field scattered pressure P_s at a distance r via the following relation [44]:

P_s(r, t) = (ρ/r)(R²R̈ + 2RṘ²),

which, under low driving conditions, reduces to a maximum pressure of

P_s,max ≈ ρω²R_0²∊/r,
where ω is the angular frequency and ∊ is the radial excursion. From the above equation, for a fixed frequency and bubble size (as is the case in Fig. 5A), the maximum scattered pressure scales proportionally to the radial excursion, and thus we expect a corresponding change in scattered pressure between a nanobubble in free-space (green curve in Fig. 5A) and a nanobubble close to a microbubble (black curve in Fig. 5A). It is insightful here to place our numerical, finite-element model within the framework of the very limited experimental data investigating the influence of a neighboring microbubble and/or a planar boundary on the radial response of an individual ultrasound contrast agent. In perhaps the only dataset directly comparable to our model, Garbin et al. [23] measured the influence of a bigger microbubble (d_1 = 4.8 µm) on the radial dynamics of a smaller one (d_2 = 4.5 µm) by employing a combination of optical trapping and ultrafast full-frame microscopy [71]. In this single frequency (f = 2.25 MHz), 8-cycle acquisition, the vibrational response of the smaller bubble d_2 was significantly lower when placed h = 8 µm away from the larger bubble as compared to its free, isolated response (Fig. 3B in Garbin et al. [23]). Our simulated result for this system is consistent with the measured data, with the presence of the larger bubble resulting in a 2 % decrease in maximum radius R_max and an 8 % decrease in minimum radius R_min as compared to its free response (Fig. 7). While this does not directly provide conclusive evidence of the bubble-proximity based f_MR and A_MR shifts observed in the present manuscript (since no such experiment has yet been conducted), it is consistent at this individual transmit frequency. Further, while the individual shell parameters for Garbin et al.'s data were not known, our simulation predicts a similar trend over a wide range of lipid shell parameter estimates.
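The linearized reduction invoked above can be sketched as a back-of-envelope helper: treating the bubble wall as R(t) = R_0 + ∊ sin(ωt) with small ∊ and keeping only the leading monopole term gives the estimate below. This is our illustrative reduction, not part of the FEM model.

```python
import math

def scattered_pressure_max(rho, omega, R0, excursion, r):
    """Linearized far-field peak scattered pressure of a small-amplitude
    monopole oscillator: p_max ~ rho * omega^2 * R0^2 * excursion / r."""
    return rho * omega ** 2 * R0 ** 2 * excursion / r
```

The estimate is linear in the excursion, which is the scaling used in the text to relate the change in radial excursion of a coupled nanobubble to its change in scattered pressure.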
It is also worth noting here that our model does not incorporate bubble coalescence, nor the effects of the secondary Bjerknes force. While this is a noted limitation of the model, this force is likely not the dominant bubble-bubble interaction under the acoustic forcing conditions imposed here (single 10-cycle burst, ~30 kPa). Indeed, in one of the only comparable experimental datasets, Garbin et al. [72] noted no significant translation (on the order of 100-200 nm) between two lipid-encapsulated agents situated h = 12.5 µm apart from each other when subjected to 150 kPa, a pressure higher than the transmit pressures used in the present manuscript.
Conclusions
For two identical microbubbles vibrating in close proximity to each other, our results show that the frequency of maximum response (f_MR) decreases (7-10 %) and the amplitude of maximum response (A_MR) increases (9-11 %) as the microbubbles approach one another. For a two-bubble system of different microbubble sizes, the larger bubble shows no change in f_MR and a slight shift of A_MR (2-3 %). However, the smaller bubble exhibits an increase in f_MR (7-11 %) and a significant decrease of A_MR (38-52 %). Furthermore, in very close proximity, smaller bubbles exhibit a secondary resonance peak corresponding to the f_MR of the larger bubble, with amplitudes comparable to their primary resonance peak. These results have implications for both contrast imaging and microbubble-mediated therapeutic applications.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
Data will be made available on request.
"year": 2022,
"sha1": "cdde9b76f86941c99fa1ce428ffb656d6c58ccc0",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ultsonch.2022.106191",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d054214b493a43a8449c1c567098693c92127f28",
"s2fieldsofstudy": [
"Medicine",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Revisiting the stability of computing the roots of a quadratic polynomial
We show in this paper that the roots $x_1$ and $x_2$ of a scalar quadratic polynomial $ax^2+bx+c=0$ with real or complex coefficients $a$, $b$, $c$ can be computed in an element-wise mixed stable manner, measured in a relative sense. We also show that this is a stronger property than norm-wise backward stability, but weaker than element-wise backward stability. We finally show that there does not exist any method that can compute the roots in an element-wise backward stable sense, which is also illustrated by some numerical experiments.
Introduction
In this paper we consider the very simple problem of computing the two roots of a quadratic polynomial p(x) := ax² + bx + c (1) where the coefficients a, b, c are either in R or in C and where a ≠ 0 in order for the equation to indeed have two roots. This is a very classical problem for which the solution is well known, namely

x_{1,2} = (−b ± √(b² − 4ac)) / (2a).    (2)

But the straightforward implementation of the above formula is quite often numerically unstable for special choices of the coefficients a, b, c. One would like, on the other hand, to have a computational scheme that produces computed roots x̂1 and x̂2 which correspond to an element-wise backward stable error, i.e. the relative backward errors are of the order of the unit roundoff u for each individual coefficient a, b and c. In fact, we can assume that a is not perturbed in this process. We will call this Element-wise Backward Stability (EBS):

a(x − x̂1)(x − x̂2) = ax² + b̂x + ĉ,  |b − b̂| ≤ Δ|b|, |c − ĉ| ≤ Δ|c|, Δ = O(u).
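The instability of the textbook formula is easy to exhibit. In the illustrative example below (our choice of coefficients), the small root is destroyed by cancellation when computed directly, while the standard remedy of computing the large root first and recovering the small one from the product c/a is accurate.

```python
import math

a, b, c = 1.0, 1e8, 1.0            # roots are approximately -1e8 and -1e-8
disc = math.sqrt(b * b - 4 * a * c)
x_naive = (-b + disc) / (2 * a)    # textbook formula: catastrophic cancellation
x_big = (-b - disc) / (2 * a)      # larger-magnitude root: no cancellation
x_small = c / (a * x_big)          # small root recovered from x1*x2 = c/a
```

Here `x_naive` is wrong by tens of percent, whereas `x_small` agrees with the exact small root to full double precision.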
We will see that this can not be proven in the general case, but instead, we can obtain the slightly weaker result of Element-wise Mixed Stability (EMS), which implies that the computed roots x̂1 and x̂2 are close, in a relative sense, to the exact roots x̃1 and x̃2 of a nearby polynomial:

a(x − x̃1)(x − x̃2) = ax² + b̂x + ĉ,  |x̃i − x̂i| ≤ Δ|x̂i|, |b − b̂| ≤ Δ|b|, |c − ĉ| ≤ Δ|c|, Δ = O(u),

which means that the computed roots are close to the roots of a nearby polynomial, all in a relative element-wise sense.
This last property is also shown to be stronger than the so-called Norm-wise Backward Stability (NBS), which only imposes that the vector of perturbed coefficients is close to the original vector in a relative norm sense:

a(x − x̂1)(x − x̂2) = ax² + b̂x + ĉ,  ‖[b̂ − b, ĉ − c]‖ ≤ Δ‖[b, c]‖, Δ = O(u).

This problem was already studied by several authors, but we could not find any conclusive answer on the EBS of any of the proposed algorithms.
In this paper, we will first consider the case of real coefficients since it is more commonly occurring and the results are slightly stronger. We then show how it extends to the case of complex coefficients. We end with a section on numerical experiments where we also show that there does not exist a method that is EBS for all quadratic polynomials.
Real coefficients
Before handling the general case where all three coefficients are nonzero, we point out that when b and/or c are zero the proof of EBS is rather simple.
A zero coefficient
Case c = 0. If c = 0, then the roots can be computed as

x1 = −b/a, x2 = 0,

which is element-wise backward stable since, under the IEEE floating point standard, the computed roots satisfy

x̂1 = −(b/a)(1 + δ), |δ| ≤ u, x̂2 = 0,

where u is the unit round-off of the IEEE floating point standard (see [1]). The backward error then indeed satisfies the relative element-wise bounds |b̂ − b| ≤ u|b|, |ĉ − c| = 0·|c|.
Case b = 0. If b = 0, then the roots can be computed as

x1,2 = ±√(−c/a),

which is also element-wise backward stable since, under the IEEE floating point standard, the computed roots satisfy the element-wise bounds x̂1,2 = x1,2(1 + δ) with |δ| of the order of u. Notice that if sign(c) = sign(a), the roots are purely imaginary. The backward error for this computation satisfies the relative element-wise bounds |b̂ − b| ≤ 0·|b|, |ĉ − c| ≤ γ3|c|, where γ_k := ku/(1 − ku).
Preliminary scaling
We can thus assume now that all coefficients are nonzero. We start by reducing the problem to a simpler "standardized" form in order to simplify the computational steps.
Scaling the polynomial p(x)
We scale the polynomial coefficients so that it is monic: b1 := b/a, c1 := c/a, which can be performed in a backward and forward stable way since we assumed a ≠ 0. According to the IEEE floating point standard, the computed values b̂1 = fl(b1) and ĉ1 = fl(c1) satisfy the relative element-wise bounds |b̂1 − b1| ≤ u|b1| and |ĉ1 − c1| ≤ u|c1|. This implies we can as well consider the monic polynomial p1(x) = x² + b1x + c1.

Scaling the variable x. We transform the variable x to y := −x/α, where |α| := |c1|^(1/2) and sign(α) = sign(b1), and consider the polynomial q(y) := p1(−αy)/α², which is now monic in y:

q(y) = y² − 2βy + e, where β ∈ R+ and e = ±1.

The formulas to compute α, β and e are

α = sign(b1)·|c1|^(1/2), β = |b1| / (2|c1|^(1/2)), e = sign(c1).

Since the sign function is exact under relative perturbations, e is computed exactly. It then follows that the computations of α and β can be performed in a backward and forward stable way: the computed values α̂ = fl(α) and β̂ = fl(β) satisfy relative element-wise bounds of the order of the unit round-off, and e is computed exactly. This implies we can as well consider the polynomial q(y) = y² − 2βy + e. We recapitulate this in a formal lemma.
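The standardization step can be sketched as a small function (our rendering of the formulas, for real nonzero coefficients):

```python
import math

def standardize(a, b, c):
    """Map ax^2 + bx + c (real a, b, c, all nonzero) to the standardized
    q(y) = y^2 - 2*beta*y + e under the substitution x = -alpha*y."""
    b1, c1 = b / a, c / a
    root_c = math.sqrt(abs(c1))
    alpha = math.copysign(root_c, b1)   # |alpha| = sqrt(|c1|), sign(alpha) = sign(b1)
    beta = abs(b1) / (2.0 * root_c)     # beta >= 0
    e = 1.0 if c1 > 0 else -1.0         # e = sign(c1), computed exactly
    return alpha, beta, e
```

For p(x) = x² − 3x + 2, for example, the substitution gives β = 3/(2√2) and e = 1, and q(−x/α) vanishes at the original roots x = 1, 2.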
Lemma 1 The forward map (b1, c1) ↦ (α, β, e) and the backward map (α, β, e) ↦ (b1, c1) are both element-wise well-conditioned maps.

Proof. If we define the relative perturbations δα, δβ of the results of the forward map in terms of the relative perturbations δb, δc of the data, then the above discussion says that δα and δβ are O(u) if δb and δc are O(u). The same reasoning can be applied to the perturbations of the backward map, since b1 = 2αβ and c1 = eα². This lemma implies that relative small perturbations in the coefficients of q(y) can be mapped to relative small perturbations in the coefficients of p(x), both element-wise and norm-wise.
Calculating the roots
The roots of the polynomial q(y) := y² − 2βy + e are given by y_{1,2} = β ± √(β² − e). The way these roots are computed now depends on the values of β and e.
Case 1: e = −1 (real roots): y1 = β + √(β² + 1), y2 = −1/y1.
Case 2: e = 1 and β ≥ 1 (real roots): y1 = β + √((β − 1)(β + 1)), y2 = 1/y1.
Case 3: e = 1 and β < 1 (complex conjugate roots): y_{1,2} = β ± i√((1 − β)(1 + β)).

Let us now check that the roots are computed in a forward stable manner. The error analysis for the operations performed in the IEEE floating point standard gives, in each of the three cases above, computed roots of the form ŷi = yi(1 + δi) with |δi| = O(u). Notice that these bounds imply forward stability for all these computations. Combining this with Lemma 1, we have thus shown the following theorem.
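The three cases can be assembled into a complete root finder for real coefficients; the largest-root formulas combined with the product trick for the companion root are our rendering of the scheme.

```python
import math

def roots_standardized(beta, e):
    """Roots of y^2 - 2*beta*y + e for beta >= 0 and e = +/-1 (three cases)."""
    if e == -1.0:                                # Case 1: real roots, product -1
        y1 = beta + math.sqrt(beta * beta + 1.0)
        return y1, -1.0 / y1
    if beta >= 1.0:                              # Case 2: real roots, product +1
        y1 = beta + math.sqrt((beta - 1.0) * (beta + 1.0))
        return y1, 1.0 / y1
    im = math.sqrt((1.0 - beta) * (1.0 + beta))  # Case 3: complex conjugate pair
    return complex(beta, im), complex(beta, -im)

def quadratic_roots(a, b, c):
    """Roots of ax^2 + bx + c (real coefficients) via the standardized form."""
    b1, c1 = b / a, c / a
    if c1 == 0.0:
        return -b1, 0.0
    if b1 == 0.0:
        s = math.sqrt(abs(c1))
        return (complex(0.0, s), complex(0.0, -s)) if c1 > 0 else (s, -s)
    root_c = math.sqrt(abs(c1))
    alpha = math.copysign(root_c, b1)
    y1, y2 = roots_standardized(abs(b1) / (2.0 * root_c), 1.0 if c1 > 0 else -1.0)
    return -alpha * y1, -alpha * y2
```

The companion-root divisions avoid the cancellation of the textbook formula, so even widely separated roots are recovered to full relative accuracy.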
Theorem 1
The computed roots ŷi, i = 1, 2, of the polynomial q(y) satisfy the relative forward bounds ŷi = yi(1 + δi) with |δi| = O(u), and the transformed roots x̂i = fl(−α̂ŷi), i = 1, 2, satisfy the mixed bounds of EMS: they are O(u)-close, in a relative element-wise sense, to the exact roots of a polynomial whose coefficients b and c carry relative perturbations of order O(u). We can therefore also evaluate the backward bound by recomputing the sum and product of the computed roots. We first point out that the sum and product will be real, because even when the two computed roots ŷ1 and ŷ2 are complex, they will be exactly complex conjugate.
Since the product of the exact roots is e = ±1, and the computed roots are forward stable, we obviously have that the product of the computed roots satisfies ŷ1ŷ2 = e(1 + O(u)), which is element-wise backward stable in a relative sense.
For the sum of the computed roots, it is more problematic. Since |y1| ≥ |y2| and both roots are computed in a forward stable way, we will have that

ŷ1 + ŷ2 = 2β + O(u)·|ŷ1|,    (3)

but ŷ1 can be much larger than β, and the backward error will then be much larger than β·O(u). Let us analyze the three cases. For Case 3 the sum of the computed roots is exactly 2β, since this is a representable number. In Case 2, y1 ≤ 2β, and (3) then implies backward stability for the element β. But in Case 1, when β ≪ 1, we can not obtain a sufficiently small backward error from (3), since the recomputed sum has an error that is of the order of O(u)·ŷ1 ≫ O(u)·β. It is in this special case that element-wise backward stability gets lost.
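This loss can be demonstrated numerically. In the illustrative example below (our choice of coefficients, a Case 1 instance with β ≈ 5·10⁻¹¹ after standardization), each computed root is forward accurate to machine precision, yet the coefficient b recovered from the root sum carries a backward error many orders of magnitude above the unit round-off.

```python
import math

# Case 1 instance with beta << 1: exact roots are close to +1 and -1,
# while their sum is the tiny quantity -b = -1e-10.
a, b, c = 1.0, 1e-10, -1.0
disc = math.sqrt(b * b - 4 * a * c)
x1 = (-b - disc) / (2 * a)     # larger-magnitude root: no cancellation
x2 = c / (a * x1)              # companion root via the product c/a
rel_err_b = abs(-a * (x1 + x2) - b) / abs(b)   # backward error on b (root sum)
rel_err_c = abs(a * x1 * x2 - c) / abs(c)      # backward error on c (root product)
```

Here `rel_err_c` is at roundoff level, while `rel_err_b` is huge relative to u ≈ 1.1e-16: the O(u) errors in the order-one roots dominate their tiny sum, exactly as predicted by (3).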
Complex coefficients
The cases where b and/or c are zero are again easy to handle, but the relative error bounds are slightly larger. Since exact error bounds are more difficult to describe, we prefer to just indicate their order of magnitude. Let us first treat the cases of zero values.
If c = 0, then the roots can be computed as x1 = −b/a, x2 = 0, which is element-wise backward stable since, under the IEEE floating point standard, the computed roots satisfy (see [1]) x̂1 = −(b/a)(1 + δ), x̂2 = 0, with |δ| = O(u) (the complex division incurs a slightly larger, but still O(u), relative error). The backward error then indeed satisfies the relative element-wise bounds |b̂ − b| ≤ |δ||b|, |ĉ − c| ≤ 0·|c|, |δ| = O(u).
If b = 0, then the roots can be computed as x1,2 = ±√(−c/a), which is also element-wise backward stable since, under the IEEE floating point standard, the computed roots satisfy (see [1]) x̂1,2 = x1,2(1 + η) with |η| = O(u). The backward error then satisfies the relative element-wise bounds |b̂ − b| ≤ 0·|b|, |ĉ − c| ≤ |η|·|c|, |η| = O(u).
When there are no zero values, we again first apply a scaling of the problem.
Scaling the polynomial p(x). As in the real case, we scale the coefficients as follows: b1 := b/a, c1 := c/a, which can be performed in a backward and forward stable way since a ≠ 0. According to the IEEE floating point standard, we have indeed that b̂1 = b1(1 + δb) and ĉ1 = c1(1 + δc), with |δb|, |δc| = O(u). This implies that we can as well look at the monic polynomial p(x)/a = p1(x) = x² + b1x + c1.
Scaling the variable x
This becomes more complicated in the case of complex coefficients. We now have y := −x/α, where |α| := |c1|^(1/2) and arg(α) = arg(b1). This implies that we can consider again the polynomial

q(y) = y² − 2βy + e, where β ∈ R+ and |e| = 1.

The formulas to compute α, β and e are

α = |c1|^(1/2)·e^(i·e_b), β = |b1| / (2|c1|^(1/2)), e = e^(i(e_c − 2e_b)),

where e_b := arg(b1) and e_c := arg(c1). For computational reasons, we will also compute the square root f of e, i.e. f² = e, via f = e^(i(e_c/2 − e_b)).
We again have a similar lemma describing the transformation between the coefficients of the polynomials p₁(x) and q(y): both maps, in which a is not perturbed, are element-wise well-conditioned.
Proof. The proof is very similar, except that the quantities are now complex, apart from β, which is real, and f, which can be parameterized by a real angle. □ This lemma again implies that relatively small perturbations in the coefficients of q(y) can be mapped to relatively small perturbations in the coefficients of p(x), both element-wise and norm-wise.
Calculating the roots
The roots of the polynomial (4) are now given by y₁,₂ = β ± √(β² − f²). We need only consider the case where e = f² is not real, since otherwise we can apply the analysis of the previous section. The algorithm for computing the two roots is to first compute y₁ as the root of largest modulus, and then to compute y₂ from y₂ = f²/y₁. If we compute the square root of the complex number β² − f² as γ = √((β − f)(β + f)), then the roots are given by y₁ = β + γ, y₂ = f²/y₁. The rounding errors of these operations can be written as factors of the form (1 + δᵢ), where all |δᵢ|, i = 1, 2, 3, are of the order of the unit round-off u. These formulas imply that y₁ and y₂ can be computed in a forward stable way.
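The two-step evaluation order, largest-modulus root first and the second root recovered from the product of the roots, can be sketched for real monic quadratics as follows (a minimal illustration of the idea, not the paper's routine):

```python
import math

def stable_roots(b, c):
    """Roots of x^2 + b*x + c with real coefficients and b*b >= 4*c.
    The largest-modulus root y1 is computed without cancellation by
    choosing the sign opposite to b; the other root is recovered from
    the product of the roots, y1 * y2 = c."""
    d = math.sqrt(b * b - 4.0 * c)
    y1 = (-b - d) / 2.0 if b >= 0.0 else (-b + d) / 2.0
    y2 = c / y1
    return y1, y2

# x^2 + 1e8*x + 1: evaluating (-b + d)/2 directly would suffer severe
# cancellation for the small root; the product trick keeps it accurate.
y1, y2 = stable_roots(1e8, 1.0)
```

Here the large root is about −10⁸ and the small one about −10⁻⁸, and the small root retains full relative accuracy.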
The backward error analysis of these operations becomes problematic when β is much smaller than |f|. This leads to the same conclusions as in the case of real coefficients: when the sum of the roots is much smaller than the roots themselves, the relative backward error on the sum can be large, despite the fact that the forward errors on the computation as a function of β and f are small.
Comparing the different stabilities
In this section we compare the different types of stability in terms of the constraints that they impose on the computed roots. First of all, it is obvious that EBS implies EMS, since EMS follows from EBS by simply bounding all element-wise perturbations by their maximum. We now prove that EBS implies NBS, which is slightly more involved.
Lemma 3 Let the computed roots x̂₁ and x̂₂ of p(x) = ax² + bx + c satisfy the element-wise backward error bounds of EBS; then they also satisfy the norm-wise bound of NBS.
Proof. Switching to norms and using the triangle inequality then yields bounds on the perturbations 0, |b − b̂| and |c − ĉ|. Because of Lemma 4 in the appendix, we also obtain the required norm-wise bound.
□
We then need to show that in general, EBS can not always be satisfied, i.e. there does NOT exist any algorithm that achieves this. A counterexample is given by the polynomial y² − 2βy − 1, where β = 2^(−t) + 2^(−2t) with 2^(−2t) ≤ u/2 and 2^(−t) ≈ √u. One easily checks that β is a representable number and that the roots of the polynomial are given by the expansion y₁,₂ = β ± √(β² + 1). Their exactly rounded values are representable numbers whose sum is again a representable number, but one that yields a relative error of the order of √u with respect to the exact sum 2β! Moreover, all other representable numbers in the neighborhood of y₁ and y₂ lie on a grid of size u, and all possible combinations of their sums will still have a comparable relative error. It is thus impossible to find representable numbers that would satisfy the EBS property.
Numerical results
We tested this routine by measuring the relative backward errors on three sets of 1000 random quadratic polynomials. We first took random real polynomials, then random complex polynomials, and finally random real polynomials with a very small sum of the roots (of the order of √u). The test results are given below.
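Such a test can be sketched as follows; this is our illustrative harness, not the authors' code. It draws random monic quadratics (with |b| bounded away from zero, so the sum of the roots is not tiny) and measures the relative errors of the recomputed sum and product of the computed roots:

```python
import cmath
import random

def stable_roots(b, c):
    # roots of the monic quadratic x^2 + b*x + c: the largest-modulus
    # root is computed first, the other from the product of the roots
    d = cmath.sqrt(b * b - 4 * c)
    y1 = (-b - d) / 2 if abs(b + d) >= abs(b - d) else (-b + d) / 2
    return y1, c / y1

def backward_errors(b, c):
    # relative errors of the recomputed sum (should be -b) and product (c)
    y1, y2 = stable_roots(b, c)
    return abs((y1 + y2) + b) / abs(b), abs(y1 * y2 - c) / abs(c)

random.seed(1)
trials = [backward_errors(random.uniform(0.5, 2.0) * random.choice((-1.0, 1.0)),
                          random.uniform(-1.0, 1.0))
          for _ in range(1000)]
worst_sum = max(t[0] for t in trials)
worst_prod = max(t[1] for t in trials)
```

With |b| bounded away from zero, both recomputed coefficients stay accurate to a modest multiple of the unit round-off; shrinking b toward zero reproduces the loss of backward stability on the sum discussed above.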
The first plot clearly shows EBS, since the relative errors of the recomputed sums and products of the roots are of the order of the unit round-off u. The second plot shows the same results for polynomials with complex coefficients. The third plot shows that for real polynomials q(y) with a very small (but nonzero) coefficient β, EBS cannot be ensured by our algorithm. This is consistent with our analysis, which shows that no algorithm can ensure EBS for such polynomials.
Effect of extrusion ratio on microstructure and mechanical properties of Mg-6Sn-3Al-1Zn alloy
The Mg–6Sn–3Al–1Zn (wt%) alloy was prepared by casting and then deformed by hot extrusion at 350 °C with different extrusion ratios (9:1, 13:1, 20:1) at an extrusion rate of 20 mm min⁻¹. The microstructure of the as-extruded alloy was analyzed by XRD and EBSD, and the tensile properties were tested with a universal testing machine. The results showed that work hardening and recrystallization softening occurred simultaneously during thermal deformation. Dynamic recrystallization (DRX) mainly occurred at the boundaries of the deformed grains; as the extrusion ratio increased, the volume fraction of dynamically recrystallized grains increased. When the extrusion ratio was less than or equal to 13:1, the average grain size decreased with increasing extrusion ratio; when the extrusion ratio reached 20:1, the average grain size increased. A {0001} basal plane texture formed after extrusion, with the basal plane parallel to the extrusion direction, and the texture intensity first decreased and then increased as the extrusion ratio increased. As the extrusion ratio increased from 9:1 to 20:1, the tensile properties first increased and then declined. Among all the tested alloys, the alloy with an extrusion ratio of 13:1 exhibited the optimum mechanical properties: the yield strength, tensile strength and elongation were 320 MPa, 371 MPa and 13.5%, respectively, the texture intensity was 8.26, and the average grain size was 1.5 μm.
Introduction
Magnesium and its alloys are known as '21st century green engineering materials'. Magnesium alloys have the characteristics of low density and high specific strength and stiffness, which enable lightweighting, reduced energy consumption and reduced environmental pollution when used in transportation, aerospace and other fields [1, 2]. In addition, magnesium alloys offer good electromagnetic shielding and environmental friendliness, and are widely used in the 3C field. Magnesium and its alloys are often used as degradable bone tissue materials in surgery owing to their good biocompatibility, absorbability, degradability, safety and non-toxicity [3]. Magnesium alloy has a hexagonal close-packed (HCP) crystal structure; its deformation modes at room temperature mainly include slip and twinning, and owing to the lack of independent slip systems its plastic deformation ability is poor compared with steel and aluminum alloys.
The Mg-3Al-1Zn alloy has been widely used in industry; however, its strength is very low. The comprehensive properties of the Mg-6Sn-3Al-1Zn alloy are better than those of Mg-3Al-1Zn because, after Sn is added to the magnesium alloy, the coarse columnar crystals are transformed into uniform equiaxed crystals and the grain size is effectively refined. In addition, the Mg₂Sn phase forms in the microstructure; because Mg₂Sn particles have high hardness, a high melting point and good thermal stability, they exert an effective dispersion-strengthening effect on the Mg matrix. Zhang et al found that adding Sn to AZ31 alloy can improve its mechanical properties [4], but the mechanical properties of the Mg-6Sn-3Al-1Zn alloy are still low and need to be improved further.
Through severe plastic deformation, the grain size of magnesium alloys can be refined to the micron or submicron scale and the basal texture is weakened; therefore, the mechanical properties are improved [5-7]. The processing methods of magnesium alloys mainly include high-pressure torsion, extrusion, rolling, friction stir processing, etc [8-13]. Magnesium alloys have poor plastic deformation ability; extrusion can produce a three-dimensional compressive stress state, refine the grains and improve the forming ability [14]. The alloy is subjected to high pressure during extrusion, which helps eliminate defects such as porosity in the ingot and thus effectively improves the formability of the alloy [15]. In addition, owing to its low cost, extrusion has become a widely used plastic processing method [16]. Extrusion temperature, extrusion ratio and other process parameters have a great effect on the microstructure and mechanical properties of the alloy, especially the extrusion ratio. Liu et al studied the effect of extrusion ratio on the microstructure and mechanical properties of Mg-8Li-3Al-2Zn-0.5Y and found that the grains are significantly refined as the extrusion ratio increases, while the mechanical properties first increase and then decrease [17].
Taek-Soo Kim found that the tensile strength and elongation of the Mg₉₅Zn₄.₃Y₀.₇ alloy increase as the extrusion ratio increases [18].
In this paper, the effect of extrusion ratio on the microstructure and mechanical properties of the Mg-6Sn-3Al-1Zn alloy is researched. The alloy is extruded at 350 °C with different extrusion ratios (9:1, 13:1, 20:1); the microstructure of the extruded alloy is characterized by x-ray diffraction (XRD) and electron backscatter diffraction (EBSD), and the tensile properties are tested at room temperature with a universal testing machine.
Preparation of specimens
The Mg-6Sn-3Al-1Zn alloy was cast from commercially pure Mg (99.99 wt%), Sn (99.99 wt%), Al (99.99 wt%) and Zn (99.99 wt%). The preparation of the specimens involved five steps: pretreatment, melting, solution treatment, extrusion and annealing. The specific experimental steps were as follows. (1) Pretreatment. Firstly, the surfaces of the Mg, Al, Zn and Sn bulks were polished with a polishing machine, and the raw materials were then weighed on an analytical balance according to the weight ratio of the alloying elements; the chemical composition of the studied alloy is shown in table 1. Secondly, the crucible was soaked with dilute hydrochloric acid to remove impurities from its inner wall. After the impurities were removed, the crucible was dried with a hair dryer, and a viscous liquid mixture of talcum powder and water-soluble silicate was coated on the inner walls of the cleaned crucible and mold and on the surface of the stirring rod. Thirdly, the weighed raw materials, refining agents, crucible, mold and stirring rod were placed in the mold heating furnace and kept at 200 °C for 2 to 3 h; the furnace was then closed and drying continued using the residual heat.
(2) Alloy smelting. Before smelting, the mold was placed in the mold heating furnace and kept warm at 350 °C. The dried magnesium ingot was put into the crucible, the furnace wall temperature was set to 800 °C, and a protective gas with a volume ratio of SF6:Ar = 1:99 (vol%) was introduced. When the magnesium ingot had melted, the Al, Zn and Sn bulks were added into the crucible sequentially, in order of melting point from high to low. Heating was continued until all the elements had melted; when the temperature of the melt reached 730 °C, 36 g of refining agent was added and fully stirred with the stirring rod, so that the impurities and residual oxides rose to the top of the melt and could be skimmed off with a ladle. The melt was then heated to 750 °C and held for 20 min, after which the furnace was turned off; the melt was skimmed a second time when it had cooled to 730 °C, and then poured into the mold, which had been preheated to 350 °C.
Casting was carried out on a dry site, and protective gas was introduced during the pouring process. After cooling for a period, a cylindrical ingot with a diameter of 60 mm and a height of 100 mm was obtained. (3) Solution treatment. The ingot was machined into a cylinder with a diameter of 44 mm and a height of 80 mm, placed on a plate and buried with toner, held in a resistance furnace at 450 °C for 8 h, and then water quenched.
(5) Annealing. The extruded bar was wire-cut into 3 mm × 5 mm pieces along the extrusion direction for microstructure characterization, and the tensile samples were wire-cut into a dog-bone shape along the extrusion direction. The samples were placed in a ceramic crucible and buried with toner to prevent oxidation, annealed in a resistance furnace at 350 °C for 10 min, and then water quenched.
Microstructure characterization
(1) X-ray diffraction (XRD) analysis: The samples were ground with 400#, 800#, 1200#, 3000# and 5000# sandpaper, respectively, placed in a beaker filled with alcohol, cleaned with an ultrasonic cleaner for 30 min, and then dried with a hair dryer. The phase composition of the samples was analyzed with a Bruker D8-Advance XRD. The radiation source was Cu Kα; the scanning range was 20°-90°, the scanning rate 7° min⁻¹, the scanning step 0.02°, and the operating voltage and current 40 kV and 40 mA, respectively.
(2) Electron backscatter diffraction (EBSD): The samples were ground with 400#, 800#, 1200#, 3000#, 5000# and 7000# sandpaper, respectively, and then electropolished in a 7% perchloric acid alcohol solution; liquid nitrogen was added to cool the electrolyte to about −30 °C before electrochemical polishing, and the polishing voltage, current and time were 20 V, 0.15 A and 2 min, respectively. After polishing, the sample was placed in alcohol to prevent surface corrosion and then dried with a hair dryer. Argon ion polishing was carried out after electrolytic polishing: first the polishing voltage, angle and time were set to 3 kV, 4° and 1 h, respectively, and then the polishing angle was adjusted to 3° and polishing continued for 1 h. After polishing, the microstructure of the as-extruded alloy was observed by EBSD at an operating voltage of 20 kV with a scanning step of 0.21 μm, and the {0001} basal plane and {11-20} and {10-10} prismatic plane pole figures were measured.
Mechanical properties tests
The dimensions of the standard tensile specimen (according to Chinese national standard GB/T-2002) are shown in figure 2. The specimen was mounted vertically on an Instron 5982 universal testing machine; the prestress was adjusted to 10 N, the diameter and gauge length of the sample were input into the computer, and the load was then relieved. The tensile rate was 0.2 mm min⁻¹. Three specimens were tested for each extrusion ratio, and their average values were taken as the final tensile results.
Phase analysis of alloys
The XRD patterns of the as-extruded alloy with different extrusion ratios are shown in figure 3. The intensity of a diffraction peak in an XRD pattern reflects the number of diffracting crystal planes, and the strongest diffraction peak represents the preferred orientation of the crystal planes [19]. As can be seen from figure 3, the {0002} basal plane texture is formed after extrusion at all extrusion ratios; the basal plane is parallel to the extrusion direction (ED), and the {10-10} prismatic plane is parallel to the transverse direction (TD). The extrusion ratio has no effect on the preferred orientation of the crystals or on the phase composition. The alloy consists of the α-Mg matrix and the Mg₂Sn phase after extrusion at all extrusion ratios.
EBSD microstructure analysis of the as-extruded alloys with different extrusion ratios
The microstructures of recrystallized, substructured and deformed grains of the as-extruded Mg-6Sn-3Al-1Zn alloy with different extrusion ratios are presented in figure 4. The blue, yellow and red regions represent recrystallized grains, substructured grains and deformed grains, respectively. As shown in figure 4(a), when the extrusion ratio is 9:1, the original grains are significantly elongated along the extrusion direction and form a fibrous structure, and the average grain size of the alloy is large. There are many fine recrystallized grains around the original coarse grains, and local areas inside the metal accumulate enough dislocations to form a cellular substructure. When the extrusion ratio increases to 13:1, the as-extruded alloy contains fewer deformed grains and more equiaxed ones; the numbers of recrystallized and substructured grains increase, and the grains are small and uniform, as shown in figure 4(b). When the extrusion ratio further increases to 20:1, the recrystallized grains grow, the average grain size increases, the cellular substructure inside the original grains increases, and the grain boundaries are clear, as shown in figure 4(c). The volume fractions of recrystallized, substructured and deformed grains of the Mg-6Sn-3Al-1Zn alloy with different extrusion ratios are shown in figure 5. When the extrusion ratio is 9:1, a large number of deformed grains exist in the microstructure. As the extrusion ratio increases, the volume fraction of deformed grains decreases sharply; at extrusion ratios of 13:1 and 20:1 there are almost no deformed grains, and the microstructure consists mainly of recrystallized and substructured grains.
When the extrusion ratio is 9:1, 13:1 and 20:1, the volume fraction of deformed grains is 65.1%, 1.1% and 1.0%, respectively; the volume fraction of recrystallized grains is 11.2%, 34.1% and 30.1%, respectively; and the volume fraction of substructured grains is 23.7%, 63.8% and 68.9%, respectively. The driving force for recrystallization comes from two aspects, the extrusion temperature and the extrusion ratio; when the extrusion ratio is 13:1, the volume fraction of recrystallized grains reaches its maximum, indicating that dynamic recrystallization under this driving force is essentially complete. During extrusion, the increase of dislocations inside the grains forms dislocation walls; to relieve the internal stress, subgrain boundaries form, and the merging of subgrain boundaries finally produces subgrains. The increase of subgrains indicates that the dislocation density increases with the extrusion ratio. Figure 6 illustrates the grain size statistics of the Mg-6Sn-3Al-1Zn alloy after extrusion with different extrusion ratios. As the extrusion ratio increases, the average grain size of the alloy first decreases and then increases. When the extrusion ratio is 9:1, grains smaller than 40 μm account for more than 95% of the total, and the average grain size is 15.4 μm, as shown in figure 6(a). When the extrusion ratio increases to 13:1, most grain sizes range from 0 μm to 4 μm, the grain size is evenly distributed, and the average grain size is reduced to 1.5 μm, as shown in figure 6(b). When the extrusion ratio is 20:1, the grain sizes are concentrated below 5 μm and the average grain size is 2.2 μm, as shown in figure 6(c). Figure 7 shows the pole figures of the {0001} basal plane and the {11-20} and {10-10} prismatic planes of the Mg-6Sn-3Al-1Zn alloys with different extrusion ratios. A {0001} basal texture forms, and the texture type does not change after hot extrusion.
Xo refers to the transverse direction (TD), and the direction perpendicular to Xo refers to the extrusion direction (ED). After extrusion, the pole density points of the {0001} pole figure are distributed along the TD; that is, most of the <0001> normals of the {0001} basal planes are perpendicular to the extrusion direction. This indicates that most of the grains rotated during extrusion, making the {0001} basal plane parallel to the extrusion direction, which is consistent with the XRD results. When the extrusion ratio increases from 9:1 to 13:1, the texture intensity decreases from 11.30 to 8.26 owing to dynamic recrystallization; when the extrusion ratio further increases to 20:1, the volume fraction of recrystallized grains decreases and the texture intensity increases to 15.54. Figure 8 shows the grain boundary types of the Mg-6Sn-3Al-1Zn alloys with different extrusion ratios. Black grain boundaries are high-angle grain boundaries (HAGBs) with a misorientation greater than 10°, and green grain boundaries are low-angle grain boundaries (LAGBs) with a misorientation less than 10° [20]. Dislocations rearrange and form LAGBs in the grain boundary migration regions; the transition energy from LAGBs to HAGBs mainly comes from the dislocation energy, which is determined by the number of dislocations between adjacent grains, so the boundary energy of LAGBs increases with the dislocation density. When the extrusion ratio is 9:1, a large number of LAGBs are distributed near the boundaries of the deformed grains, and the LAGB density is very high, as shown in figure 8(a).
When the extrusion ratio increases to 13:1 and 20:1, the LAGBs are mainly distributed at the original boundaries of the unrecrystallized grains, and the LAGB density decreases sharply, as shown in figures 8(b) and (c), indicating that the number of recrystallized grains increases. As the extrusion ratio increases, the degree of grain fragmentation increases and leads to dislocation accumulation, resulting in a decrease of the low-angle grain boundaries; in addition, the high-angle grain boundaries increase and the average grain size decreases as dynamic recrystallization progresses. Moreover, the pinning effect of the Mg₂Sn phase hinders dislocation movement and grain boundary migration, thereby reducing the stacking fault energy and refining the grains. Grain refinement greatly reduces the distance between grain boundaries and makes it easier for dislocations to reach and accumulate at the grain boundaries during extrusion; finally the dislocations are annihilated at the grain boundaries, which increases the boundary misorientation so that the boundaries become high-angle grain boundaries. Figure 9 shows the twin boundary types of the Mg-6Sn-3Al-1Zn alloys with different extrusion ratios. The red boundaries represent {10-12} tension twin boundaries, and the blue boundaries represent {10-11} compression twin boundaries. The volume fractions of the {10-12} tension and {10-11} compression twin boundaries of the Mg-6Sn-3Al-1Zn alloy extruded at different extrusion ratios are shown in table 2. The volume fraction of {10-12} tension twin boundaries first increases and then decreases with increasing extrusion ratio; when the extrusion ratio is 9:1, 13:1 and 20:1, it is 0.0123%, 0.43% and 0.276%, respectively.
The volume fraction of {10-11} compression twin boundaries first decreases and then increases with increasing extrusion ratio; when the extrusion ratio is 9:1, 13:1 and 20:1, it is 0.0676%, 0.0202% and 0.0989%, respectively. The total volume fraction of twin boundaries first increases and then decreases; when the extrusion ratio is 9:1, 13:1 and 20:1, it is 0.089%, 0.4502% and 0.3749%, respectively. Twins play a very important role in strengthening the alloy: an increase in the number of twins leads to an increase in the tensile strength.
Grain boundary analysis of alloys
The inverse pole figures (IPF) and misorientation angle distributions of the as-extruded Mg-6Sn-3Al-1Zn alloys with different extrusion ratios are shown in figure 10. From figures 10(a), (b) and (c), when the extrusion ratio increases from 9:1 to 13:1, the volume fraction of the necklace-like microstructure increases; the magnesium alloy undergoes significant continuous dynamic recrystallization (CDRX), and a large number of fine recrystallized grains form at the original grain boundaries and subgrain boundaries. When the extrusion ratio rises further to 20:1, new orientations are generated that are not conducive to continued deformation, and local shear deformation occurs, causing the recrystallized grains of the magnesium alloy to rotate into different orientations; this phenomenon is known as rotational dynamic recrystallization (RDRX). When the extrusion ratio is 9:1, the volume fraction of low-angle grain boundaries is as high as 70%, as shown in figure 10(d). When the extrusion ratio increases to 13:1, the volume fraction of LAGBs decreases sharply to 24%, as shown in figure 10(e); when the extrusion ratio further increases to 20:1, the volume fraction of LAGBs increases to 34%, as shown in figure 10(f). As seen in the misorientation angle distributions, {10-12} tension twins and {10-11} compression twins appear at 86.1° and 56°, respectively; related studies have reported that these twins have a special orientation relationship with the parent crystal ({10-12} twin 86.1° and {10-11} twin 56°) [21, 22]. With increasing extrusion ratio, the {10-12} tension twin boundaries first increase and then decrease, and the {10-11} compression twin boundaries first decrease and then increase.
Mechanical properties
The stress-strain curves, elongation, tensile strength and yield strength of the Mg-6Sn-3Al-1Zn alloy extruded with different extrusion ratios are shown in figure 11. The tensile strength and yield strength of the extruded alloy first increase and then decrease with increasing extrusion ratio. When the extrusion ratio is 9:1, 13:1 and 20:1, the tensile strength is 337 MPa, 371 MPa and 348 MPa, respectively, and the yield strength is 294 MPa, 320 MPa and 280 MPa, respectively. The elongation increases with the extrusion ratio: when the extrusion ratio is 9:1, 13:1 and 20:1, the elongation is 7.4%, 13.5% and 17.4%, respectively. After extrusion with a ratio of 13:1, the comprehensive mechanical properties of the alloy are optimal.
4. Discussion

4.1. Effect of different extrusion ratios on grain size of Mg-6Sn-3Al-1Zn alloy

The extrusion temperature is 350 °C, which is higher than the recrystallization temperature of the magnesium alloy, so the extrusion process belongs to thermoplastic deformation. During thermoplastic deformation, some regions of the magnesium alloy accumulate enough distortion energy for dynamic recrystallization to occur, which proceeds by the merging or annexation of subgrains. Two opposite processes occur simultaneously during thermoplastic processing: plastic deformation based on dislocation movement, and dynamic recrystallization based on nucleation and growth. After extrusion, some grains undergo dynamic recrystallization to form small equiaxed grains, while the remaining grains are stretched into fibres parallel to the extrusion direction. When the extrusion ratio is 9:1, the distortion energy stored in the metal is small because the degree of plastic deformation is small, and dynamic recrystallization can only occur in regions with a large degree of deformation; therefore, the alloy is mainly composed of deformed grains and recrystallized grains. When the extrusion ratio increases to 13:1, the degree of deformation increases and sufficient dislocations accumulate in the alloy, which is conducive to recrystallization nucleation; new equiaxed crystals form, the grains are small and uniformly distributed, and the volume fraction of recrystallized grains increases. When the extrusion ratio increases to 20:1, the distortion energy further increases, which prompts the recrystallized grains to grow again and leads to an increase of the average grain size.
When the degree of plastic deformation increases, the dislocations become entangled with each other to form a cellular substructure, so the proportion of subgrains gradually increases with the extrusion ratio. The extrusion ratio also affects the grain boundaries. When the extrusion ratio is 9:1, the volume fraction of recrystallized grains is low, the dislocation density is high, and dislocations accumulate at the boundaries of the deformed grains, so there are many low-angle boundaries at the grain boundaries. When the extrusion ratio rises to 13:1, the degree of dynamic recrystallization becomes larger; dynamic recrystallization proceeds through the merging or annexation of subgrains and occurs preferentially at grain boundaries with high dislocation density and irregular arrangement, so the proportion of low-angle boundaries decreases rapidly as they convert to high-angle boundaries. When the extrusion ratio continues to rise to 20:1, the work hardening effect exceeds the recrystallization softening effect; therefore the dislocation density increases, and the proportion of low-angle grain boundaries increases. The volume fraction of twin boundaries is very low at all extrusion ratios, so twin boundary effects are not considered for this alloy.
4.2. Effects of different extrusion ratios on the texture of Mg-6Sn-3Al-1Zn alloy

The {0001} basal plane texture is easily formed in magnesium alloys during extrusion, with the basal plane preferentially parallel to the extrusion direction; the texture intensity first decreases and then increases with increasing extrusion ratio. The texture intensity decreases when the extrusion ratio is 13:1 because the volume fraction of dynamically recrystallized grains increases, giving rise to texture weakening. When the extrusion ratio increases to 20:1, the volume fraction of dynamically recrystallized grains decreases, which leads to an increase of the texture intensity.
4.3. Effects of different extrusion ratios on mechanical properties of Mg-6Sn-3Al-1Zn alloy
The Hall-Petch relation demonstrates that grain refinement is an effective method for improving yield strength. When the extrusion ratio is 13:1, the average grain size of the alloy is the smallest and the yield strength is the highest; when the extrusion ratio increases to 20:1, the average grain size increases, which weakens the grain refinement effect, so the yield strength decreases. The elongation increases with the extrusion ratio: when the extrusion ratio increases from 9:1 to 13:1, grain refinement increases not only the yield strength but also the elongation; when the extrusion ratio is 20:1, the grains grow slightly and the elongation increases further.
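As a rough illustration of the Hall-Petch trend σ_y = σ₀ + k·d^(−1/2), the sketch below evaluates the relation at the three measured average grain sizes. The constants σ₀ and k are hypothetical round numbers of the right order for a magnesium alloy, not values fitted to this study, so only the qualitative trend is meaningful:

```python
def hall_petch(d_um, sigma0_mpa=120.0, k_mpa_um05=280.0):
    """Estimated yield strength (MPa) for average grain size d (micrometres).

    sigma0_mpa (friction stress) and k_mpa_um05 (Hall-Petch slope) are
    hypothetical illustrative constants, NOT fitted to this alloy."""
    return sigma0_mpa + k_mpa_um05 * d_um ** -0.5

for d in (15.4, 1.5, 2.2):  # the three measured average grain sizes
    print(f"d = {d:5.1f} um -> sigma_y ~ {hall_petch(d):5.0f} MPa")
```

The estimate reproduces the observed ordering: the 1.5 μm microstructure (13:1 ratio) gives the highest predicted yield strength, and the 15.4 μm one (9:1 ratio) the lowest.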
Conclusions
(1) The extrusion ratio had a great impact on the grains, texture, misorientation and mechanical properties of the extruded alloys. With increasing extrusion ratio, the texture intensity and average grain size of the alloy first decreased and then increased, the yield strength and tensile strength first increased and then decreased, while the elongation increased monotonically.
(2) The basal texture intensity was the weakest when the extrusion ratio was 13:1, at 8.26, and the average grain size of the alloy was the smallest, at 1.5 μm; its comprehensive mechanical properties were the best, with a yield strength of 320 MPa, a tensile strength of 371 MPa, and an elongation of 13.5%.
(3) The volume fraction of recrystallized grains first increased and then decreased with increasing extrusion ratio; when the extrusion ratio was 13:1, it was the highest, at 34.1%. The volume fraction of subgrains increased and that of deformed grains decreased with increasing extrusion ratio, and the grain boundaries were dominated by low-angle grain boundaries at all extrusion ratios.
(4)Twin grain boundary formed after thermoplastic deformation, and its volume fraction increased first and then decreased with the increased of the extrusion ratio, which was positively correlated with the strength of alloy. The volume fraction of twin grain boundary reached the highest about 0.45% when the extrusion ratio was 13:1, however, the overall volume fraction of twin grain boundary was low in this alloy, therefore, the effect of twin grain boundary on mechanical properties is limited.
Author contributions: Conceptualization, PJ; methodology, JW; investigation, JW; writing-original draft preparation, SZ; writing-review and editing, FW; all authors have read and agreed to the published version of the manuscript.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files). Source data are available from the corresponding author upon reasonable request. | 2022-09-17T15:17:26.029Z | 2022-09-15T00:00:00.000 | {
"year": 2022,
"sha1": "86eeac3c2c47e2a1eeed5363ff6733f1595b5e18",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2053-1591/ac9271",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7ae4a8d79704da3b350259906103f85252b135ba",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
53589255 | pes2o/s2orc | v3-fos-license | The bidirectional ballot polytope
A bidirectional ballot sequence (BBS) is a finite binary sequence with the property that every prefix and suffix contains strictly more ones than zeros. BBS's were introduced by Zhao, and independently by Bosquet-M{\'e}lou and Ponty as $(1,1)$-culminating paths. Both sets of authors noted the difficulty in counting these objects, and to date research on bidirectional ballot sequences has been concerned with asymptotics. We introduce a continuous analogue of bidirectional ballot sequences which we call bidirectional gerrymanders, and show that the set of bidirectional gerrymanders form a convex polytope sitting inside the unit cube, which we refer to as the bidirectional ballot polytope. We prove that every $(2n-1)$-dimensional unit cube can be partitioned into $2n-1$ isometric copies of the $(2n-1)$-dimensional bidirectional ballot polytope. Furthermore, we show that the vertices of this polytope are all also vertices of the cube, and that the vertices are in bijection with BBS's. An immediate corollary is a geometric explanation of the result of Zhao and of Bosquet-M{\'e}lou and Ponty that the number of BBS's of length $n$ is $\Theta(2^n/n)$.
INTRODUCTION
In [Zh1], Zhao introduced a family of combinatorial objects called bidirectional ballot sequences, defined as follows.
Definition 1.1. A finite 0-1 sequence is a bidirectional ballot sequence (BBS) if every prefix and every suffix contains strictly more ones than zeros. Let B n denote the number of bidirectional ballot sequences of length n.
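Since B_n has no simple closed form, a brute-force enumeration is a handy sanity check. The sketch below is our own illustration (not code from the paper): it tests the prefix/suffix condition of Definition 1.1 directly, and the small values it prints are easy to confirm by hand.

```python
from itertools import product

def is_bbs(bits):
    """True iff every nonempty prefix and suffix of the 0-1 sequence
    contains strictly more ones than zeros."""
    def every_prefix_positive(seq):
        s = 0
        for b in seq:
            s += 1 if b else -1
            if s < 1:
                return False
        return True
    return len(bits) > 0 and every_prefix_positive(bits) and every_prefix_positive(bits[::-1])

def count_bbs(n):
    """B_n by brute force (exponential in n; fine for small n)."""
    return sum(is_bbs(bits) for bits in product((0, 1), repeat=n))

print([count_bbs(n) for n in range(1, 8)])  # [1, 1, 1, 1, 2, 3, 5]
```

Note how slowly the counts grow at first: any BBS must begin and end with a 1, and short sequences leave little room for zeros.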
Bidirectional ballot sequences have a natural interpretation in terms of lattice paths. Suppose we start at (0, 0) and take a finite number of steps either of the form (1, 1) or (1, −1). We call such a path a standard lattice path. We define the length of the path to be the number of steps it contains. We define the height of a point in the lattice path to be its y-coordinate. Bidirectional ballot sequences of length n are in bijection with standard lattice paths of length n whose unique minimum height is attained at the first point in the path, and whose unique maximum height is attained at the last point in the path. The bijection is given by identifying the digit '0' in a BBS with a step of the form (1, −1) and the digit '1' with a step of the form (1, 1) (for an example of this, see Section 4).
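The lattice-path bijection can be checked mechanically. In this sketch (our illustration), a sequence is mapped to the heights of its standard lattice path, and the BBS property becomes "unique minimum height at the first point, unique maximum height at the last point".

```python
def path_heights(bits):
    """Heights of the standard lattice path: start at height 0,
    step +1 for each '1' and -1 for each '0'."""
    heights = [0]
    for b in bits:
        heights.append(heights[-1] + (1 if b == "1" else -1))
    return heights

def unique_min_first_unique_max_last(heights):
    """The lattice-path characterisation of a bidirectional ballot sequence."""
    return (all(h > heights[0] for h in heights[1:]) and
            all(h < heights[-1] for h in heights[:-1]))

print(unique_min_first_unique_max_last(path_heights("11011")))  # True: 11011 is a BBS
print(unique_min_first_unique_max_last(path_heights("10101")))  # False: the prefix "10" ties
```
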
From this perspective, bidirectional ballot sequences were independently introduced by Bosquet-Mélou and Ponty [BP] as a special type of what they call culminating paths. In particular, an (a, b)-culminating path is a sequence of lattice points starting at (0, 0) such that each step is of the form (1, a) or (1, −b) and such that the unique minimum height is achieved at the first point and the unique maximum height is achieved at the last point. Thus bidirectional ballot sequences are in bijection with (1, 1)-culminating paths. In [BP] it is noted that (1, 1)-culminating paths had been used in [FGK] with connections to theoretical physics, and general (a, b)-culminating paths had been used in [AGMML], [CR], and [PL] with connections to bioinformatics.
In both [Zh1] and [BP], it is noted that unlike other easy to define classes of lattice paths (e.g. Dyck paths), the enumeration of BBS's is tricky; there is no obvious recursive structure to such paths. Both authors focused on the asymptotics of B n . In particular, [BP] obtained a generating function in n for the number of (a, b)-culminating paths of length n with fixed height k (the generating function for the (1, 1) case was found in [FGK]). Furthermore, they showed that B n ∼ 2 n /4n. Independently, [Zh1] showed that B n = Θ(2 n /n) and stated without detailed proof that B n ∼ 2 n /4n. Additionally in [Zh1], the author conjectured an even finer asymptotic expression for B n . This conjecture was later proved by Hackl, Heuberger, Prodinger and Wagner [HHPW], who refined the asymptotic expression even further using techniques from analytic combinatorics.
The motivation for the study of culminating paths in [BP] was the observation that such paths had been independently introduced and utilized in disparate contexts (theoretical physics and bioinformatics) as well as a general interest in understanding subfamilies of lattice paths. However, the motivation in [Zh1], as well as our original motivation for studying BBS, arises from additive combinatorics. Let A ⊂ Z be a finite set of integers. We define the sumset A + A as those elements in Z expressible as a + b with a, b ∈ A. Similarly, the difference set A − A is those elements expressible as a − b with a, b ∈ A. We say that A is a more sums than differences (MSTD) set if |A + A| > |A − A|. Because of the commutativity of addition, one may intuitively expect that in general |A − A| ≥ |A + A|. This intuition turns out to be correct in some contexts (see [HM]), in particular if each element in [n] := {1, 2, . . . , n} is independently chosen to be in A with some probability p(n) tending to zero). Let ρ n be the proportion of subsets of [n] which are MSTD. In [MO], it was shown that ρ n > 2 × 10 −7 for n ≥ 15, and in [Zh2] it was shown that lim n→∞ ρ n converges to a positive number; experimental data suggests this limit to be of order 10 −4 . Thus, in this sense, a positive proportion of sets are MSTD. However, the techniques in [MO] are probabilistic, and to date no known constant density family of MSTD subsets of [n] as n → ∞ is known.
The best density explicit construction of MSTD sets is due to Zhao in [Zh1] using BBS's. Let B be a binary sequence of length n. We can associate to B the set A ⊆ [n] defined as A := {i : B i = 1}. For example if B = 01101, then A = {2, 3, 5}. Those subsets A of [n] arising from BBS's have the property that A + A = {i : 2 ≤ i ≤ 2n}, which is to say that the sumset is as large as possible (similarly it turns out that the difference set is also as large as possible). Using this property, Zhao was able to translate those subsets of [n] arising from BBS's and append extra elements to the fringes to obtain an MSTD set for each set arising from a BBS. From this, one immediately gets a density Θ(1/n) family of MSTD sets.
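The full-sumset property underlying Zhao's construction is easy to verify exhaustively for small n. The check below (our sketch) confirms that every subset of [7] arising from a BBS has A + A = {2, ..., 14} and A − A = {−6, ..., 6}.

```python
from itertools import product

def is_bbs(bits):
    def prefixes_positive(seq):
        s = 0
        for b in seq:
            s += 1 if b else -1
            if s < 1:
                return False
        return True
    return prefixes_positive(bits) and prefixes_positive(bits[::-1])

n = 7
checked = 0
for bits in product((0, 1), repeat=n):
    if is_bbs(bits):
        A = {i + 1 for i, b in enumerate(bits) if b}  # A := {i : B_i = 1}
        # both the sumset and the difference set are as large as possible
        assert {a + b for a in A for b in A} == set(range(2, 2 * n + 1))
        assert {a - b for a in A for b in A} == set(range(-(n - 1), n))
        checked += 1
print(f"verified full sumset and difference set for all {checked} BBS-derived subsets of [7]")
```
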
Motivated by the use of BBS's in additive combinatorics, in this paper we study the natural analogue of BBS's in a continuous setting, which we call bidirectional gerrymanders; in the related paper [MP], we use similar ideas as in this paper to study the analogue of MSTD sets in a continuous setting. We first set some notation and then describe our main results. Let I n denote the set of all subsets of R consisting of exactly n disjoint open intervals such that the leftmost interval starts at 0. Suppose A ∈ I n . If we translate A, then the sumset and difference set merely translate as well. Thus, when studying additive behavior, we do not lose any generality by restricting our attention to collections of intervals such that the leftmost interval starts at zero. We can topologize I n by identifying it with R 2n−1 ≥0 , the non-negative orthant: let A = I 1 ∪ I 2 ∪ · · · ∪ I n ∈ I n with I i to the left of I j for i < j. Suppose I j = (a j , b j ). We then identify A with the vector v A = [b 1 − a 1 , a 2 − b 1 , b 2 − a 2 , a 3 − b 2 , . . . , b n − a n ]. Thus the first entry is the length of the first interval, the second entry is the size of the gap between the first and second intervals, the third entry is the length of the second interval, etc. We shall find it convenient to restrict our attention to the following set: let J n ⊂ I n be the set of collections of n non-overlapping intervals such that the leftmost interval starts at zero, the length of each interval is between 0 and 1, and the gap between adjacent intervals is between 0 and 1 (if we scale A ∈ I n by α ≠ 0, then the sumset and difference set scale by α as well, so αA has the same essential additive behavior as A; note that up to scaling, every element of I n is an element of J n ). We can topologize J n by identifying it with C 2n−1 = [0, 1] 2n−1 , the 2n − 1 dimensional unit cube. For other ways to topologize I n and related spaces, see [MP].
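The identification of A ∈ I_n with the vector v_A is mechanical; a small sketch (ours, not from the paper) going back and forth:

```python
def intervals_to_vector(intervals):
    """intervals: sorted list of (a_i, b_i) with the leftmost interval starting at 0.
    Returns v_A = [len(I1), gap1, len(I2), gap2, ..., len(In)]."""
    v = []
    for k, (a, b) in enumerate(intervals):
        if k > 0:
            v.append(a - intervals[k - 1][1])  # gap to the previous interval
        v.append(b - a)                        # length of this interval
    return v

def vector_to_intervals(v):
    """Inverse map, anchoring the leftmost interval at 0."""
    intervals, t = [], 0.0
    for i in range(0, len(v), 2):
        intervals.append((t, t + v[i]))
        t += v[i]
        if i + 1 < len(v):
            t += v[i + 1]
    return intervals

A = [(0.0, 0.75), (1.0, 1.5), (2.25, 3.0)]
print(intervals_to_vector(A))  # [0.75, 0.25, 0.5, 0.75, 0.75]
```
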
The bidirectional gerrymanders in J n form a convex, compact polytope contained in C 2n−1 which we call the bidirectional ballot polytope, P n . This polytope has a number of extraordinary combinatorial features. In Section 2 we formally define this polytope and show that C 2n−1 can be partitioned into 2n − 1 disjoint isometric copies of P n , which in particular shows that the volume of P n is 1/(2n − 1). In Section 3 we show that the vertices of P n are vertices of C 2n−1 . Finally in Section 4 we show that the vertices of P n are in bijection with B 2n+3 , and that a particular subset of the vertices are in bijection with B 2n−1 . From this we are able to immediately rederive geometrically that |B n | = Θ(2 n /n), i.e., there are positive constants α and β such that for all n sufficiently large we have α2 n /n ≤ |B n | ≤ β2 n /n.
THE BIDIRECTIONAL BALLOT CONE AND POLYTOPE
We first set some notation. Let m = 2n − 1 for some n ∈ N. (2.2) We define V n , the set of ballot vectors, as V n = L n ∪ R n .
Definition 2.2. The bidirectional ballot cone, B n , is the set of x ∈ R m such that x · w ≥ 0 for all w ∈ V n . When the value of n is obvious, we simply refer to it as B.
We now define the continuous analogue of BBS's, and show in Proposition 2.4 that it is the right generalization.
Proposition 2.4. Suppose A = I 1 ∪ · · · ∪ I n ∈ I n with endpoints ordered as before. Suppose the right endpoint of Proof. Clearly if these measure conditions hold, then A is a bidirectional gerrymander, as setting t to be left and right endpoints of the I i yields the nonnegativity conditions of pairing with the ballot vectors. The condition µ takes a local minimum only if t is a left endpoint of an interval I i . Hence if v A · w ≥ 0 for all w ∈ L n , then the function is nonnegative at its minima and so the first measure condition holds. Similarly, the second measure condition holds as well by the nonnegativity of pairing with the right ballot vectors.
A BBS in the sense of [Zh1] is a binary sequence for which any subsequence truncated on the left or right contains more 1's than 0's, and Proposition 2.4 shows that a bidirectional gerrymander is a subset of R contained in [0, a] for which any subset obtained by truncating on the left or right contains "more" points (in a measure theoretic sense) in the original set than points not in this set. It is thus clear that they are a natural analogue, but, as we shall see, what is surprising is that they can be used to prove results about standard (discrete) BBS's.
Definition 2.5. The bidirectional ballot polytope, P n , is defined as B n ∩ C m . Equivalently, it is the set of vectors v A such that A ∈ J n is a bidirectional gerrymander. When the value of n is obvious, we shall refer to it simply as P.
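The explicit list of ballot vectors from Definition 2.1 is garbled in this copy, but Proposition 2.4 pins down the inequalities: every left-truncation and right-truncation of A must contain at least as much measure inside A as outside it. The sketch below encodes that inferred formulation of membership in P_n; it is our reading of the definitions, not code from the paper.

```python
def in_ballot_polytope(v):
    """v = [l1, g1, l2, g2, ..., ln] in [0,1]^(2n-1): interval lengths at odd
    positions, gaps at even positions.  Membership in P_n is checked via the
    cube bounds plus the left/right prefix inequalities suggested by
    Proposition 2.4 (an inferred formulation)."""
    if not all(0 <= x <= 1 for x in v):
        return False
    n = (len(v) + 1) // 2
    lengths, gaps = v[0::2], v[1::2]
    for k in range(1, n):
        if sum(lengths[:k]) < sum(gaps[:k]):              # left-truncation condition
            return False
        if sum(lengths[n - k:]) < sum(gaps[n - 1 - k:]):  # mirror condition from the right
            return False
    return True

print(in_ballot_polytope([1, 0, 1]))        # True: two unit intervals with no gap
print(in_ballot_polytope([0.5, 0.6, 0.7]))  # False: the gap outweighs the first interval
```

For n = 2 these inequalities reduce to l1 ≥ g1 and l2 ≥ g1, matching the picture of P_2 inside C_3 in Figure 1.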
FIGURE 1. The polytope P 2 (red) sitting inside C 3 . Notice that adding two additional copies of P 2 , rotated about the main diagonal of the cube by 2π/3 and 4π/3 respectively, would result in a partition of C 3 (neglecting overlap of boundaries).
Proof. Let τ = ρ 2 ∈ Z m be the cyclic shift by two places. Because m is odd, τ generates Z m . In particular, we see that the set of left and right ballot vectors V n as defined in Definition 2.1 is equal to Then for each ℓ we have that Intersecting with C gives the corresponding result for P.
) and τ ℓ+1 ≠ τ k+1 , then (because taking the interior simply changes the inequalities defining B τ ℓ+1 to strict ones) we have both This is a contradiction, so the interiors of distinct regions B τ ℓ+1 are disjoint, and it follows immediately that the interiors of distinct regions P τ ℓ+1 are disjoint.
Corollary 2.8. The unit cube C m equals σ∈Zm P σ . Furthermore, for σ 1 = σ 2 , the interiors of P σ 1 and P σ 2 are disjoint. Consequently, the volume of P is exactly 1/m.
Proof. Intersecting the nonnegative orthant and the translates B σ with C m , Theorem 2.7 yields that C m is partitioned into m regions produced by permuting the coordinates of P. Because the matrix representing τ = ρ 2 has determinant 1, it leaves volume invariant. Therefore, Vol(P σ ) = Vol(P) for all σ ∈ Z m , so Vol(P) = 1/m.
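The volume claim of Corollary 2.8 can be sanity-checked by Monte Carlo sampling of the cube. This is our sketch; the membership test uses the inequality formulation we inferred from Proposition 2.4, since Definition 2.1 is garbled in this copy.

```python
import random

def in_ballot_polytope(v):
    # left/right prefix inequalities inferred from Proposition 2.4
    n = (len(v) + 1) // 2
    L, G = v[0::2], v[1::2]
    return all(sum(L[:k]) >= sum(G[:k]) and sum(L[n - k:]) >= sum(G[n - 1 - k:])
               for k in range(1, n))

random.seed(0)
n = 3                       # so m = 2n - 1 = 5 and Vol(P_3) should be 1/5
m, trials = 2 * n - 1, 200_000
hits = sum(in_ballot_polytope([random.random() for _ in range(m)])
           for _ in range(trials))
print(round(hits / trials, 3))  # close to 1/5 = 0.2
```
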
If furthermore these are all positive, then σ is unique.
One interpretation of the above corollary is as follows. Suppose you have a necklace with an odd number of beads. On each bead you write a non-negative number. Then there exists some place where you can cut the necklace such that when you lay out the necklace and think of the sequence of values on the beads as a vector in R m , this vector is a bidirectional gerrymander. Furthermore, if the numbers you write on the beads are "generic", in the sense that the inequalities corresponding to (2.9) and (2.10) are strict, then there is exactly one such place you can cut the necklace.
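The necklace interpretation is concrete enough to test directly. In this sketch (ours), generic beads are modelled as independent uniform values, so ties occur with probability zero, and the cone inequalities are the formulation we inferred from Proposition 2.4.

```python
import random

def is_gerrymander(v):
    # left/right prefix inequalities inferred from Proposition 2.4
    n = (len(v) + 1) // 2
    L, G = v[0::2], v[1::2]
    return all(sum(L[:k]) >= sum(G[:k]) and sum(L[n - k:]) >= sum(G[n - 1 - k:])
               for k in range(1, n))

def valid_cuts(beads):
    """Positions where cutting the necklace yields a bidirectional gerrymander."""
    m = len(beads)
    return [i for i in range(m) if is_gerrymander(beads[i:] + beads[:i])]

random.seed(1)
for _ in range(200):
    beads = [random.random() for _ in range(5)]  # odd number of beads
    assert len(valid_cuts(beads)) == 1           # generically exactly one cut
print("every random 5-bead necklace admitted exactly one valid cut")
```

For m = 3 the statement is transparent: the unique valid cut is the rotation that places the smallest bead in the middle.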
VERTICES OF THE BIDIRECTIONAL BALLOT POLYTOPE ARE VERTICES OF THE CUBE
In this section we show that the vertices of P n are also vertices of C m , the unit cube. We had previously defined P n as the intersection of the unit cube with the ballot cone, which is equivalent to the set of vectors [ℓ 1 , g 1 , . . . , g n−1 , ℓ n ] satisfying the below inequality: (3.1) The first collection of rows in the above matrix is necessary to ensure that we only deal with points inside of the unit cube. Thus we call any vector of the form [0, . . . , 0, ±1, 0, . . . , 0] a cube vector.
Before proving the main result of this section, we must review a few concepts related to convex polytopes. We follow the terminology of [BT].
Then, we say that the i th constraint is active at x * . Definition 3.2. A vector x * ∈ R n is called a basic solution if out of all of the constraints that are active at x * , there is some collection of n of them which is linearly independent. If x * is a basic solution that satisfies all of the constraints, then it is called a basic feasible solution.
Part of what makes the study of convex polytopes interesting is that there are several equivalent but strikingly different ways of defining what the vertices of a polytope are. In particular, one definition is that a point v is a vertex if and only if it is a basic feasible solution.
The following shorthand will be helpful in the proof of the main theorem of this section.
Definition 3.3. A matrix/vector is called flat if all of its entries are 0, 1, or -1.
Let Q n denote the set of vertices in the polytope P n . Let S n denote the set of vertices of the unit cube C m . The main result of this section is the following.
Theorem 3.4. All of the vertices of the bidirectional ballot polytope P n are also vertices of the unit cube C m ; i.e., Q n ⊂ S n .
Proof. By the above discussion, we know that we must show that all basic feasible solutions are vertices of the cube. Throughout this proof, we let n be fixed, and let m = 2n − 1. Thus we unambiguously let P = P n , C = C 2n−1 , Q = Q n , and S = S n . Notice that Z m ∩ P ⊂ S. From this observation, we now describe the strategy for proving the theorem. Suppose x * is a basic solution whose corresponding constraints are a i 1 , . . . , a im . Then x * satisfies the linear system (3.2), whose rows are a i 1 , . . . , a im .
Let A be the matrix in (3.2). Let b * be the vector on the right hand side in (3.2). Thus x * = A −1 b * . Note that b * ∈ Z m since it is some subset of the entries in the vector on the right hand side of (3.1). If we can show that det(A) = ±1, it will imply that A −1 has integer entries, and thus that A −1 b * ∈ Z m . From the earlier observation, if x * is a basic feasible solution, then we must have that A −1 b * = x * ∈ S, which would prove the theorem. Now we must show that if A is invertible, then it has determinant ±1. In order to show this, we keep track of what happens to the determinant in the process of carrying out Gaussian elimination, which converts A into the identity matrix. In particular, we show that at every step, the determinant changes by a factor of ±1. Since the identity matrix has determinant 1, we could then conclude that A has determinant ±1. The only elementary row operation which potentially changes the absolute value of the determinant of a matrix is multiplying a row by a scalar. Thus it suffices to show that when Gaussian elimination is performed on A, no row is ever multiplied by a scalar other than ±1. In Gaussian elimination, a row is multiplied by a scalar to convert some non-zero entry in that row to a one. If every non-zero entry in that row is ±1, then we would simply need to multiply by ±1. Thus, we shall instead prove the stronger hypothesis that at every step of Gaussian elimination, the intermediate matrix is flat, and hence all of its non-zero entries are ±1. This is the content of Lemma 3.5.
Before proving Lemma 3.5, we include an example to illustrate the method (worked matrices omitted). Here we omit row swapping for clarity, and we obtain a permutation matrix, which has determinant ±1. At each step, the leading nonzero term in the bolded row is used to clear the corresponding column. Lemma 3.5. At every step of Gaussian elimination performed on the matrix A, the intermediate matrix is flat. Proof. We proceed by induction. Let A k denote the matrix resulting from the k th step of Gaussian elimination (i.e. the matrix obtained after "clearing" the first k columns). We shall show that for each k, every row of the matrix A k is of exactly one of six types depending on the form of the first k entries of that row and the last m − k entries of that row (in the sequel, we will refer to this as saying that every row is one of the six types with respect to k).
We now describe these six types. Let α n denote any sequence of length n consisting of alternating plus ones and minus ones (e.g. α 3 = [−1, 1, −1] or α 1 = [1]). Let β n denote the sequence of length n consisting of all zeros. Let γ n denote any binary sequence of length n containing exactly one one (e.g. γ 4 = [0, 0, 1, 0]). Let ⊕ refer to the operation of vector concatenation (e.g. [1, 2, 3] ⊕ [4, 5] = [1,2,3,4,5]). The six types (with respect to k) are listed in Table 1. We now go through the inductive argument. For the base case, notice that when k = 0, the cube vectors are type 1, the left ballot vectors are type 2, and the right ballot vectors are type 3. Thus the claim is proven in the base case. Now for the inductive step, we shall show that if all rows of A k are of one of the above types with respect to k, then all rows of A k+1 are of one of the above types with respect to k + 1. As described in the proof of Theorem 3.4, at step k we must first find some row whose first k entries are zero, and whose k + 1 entry is ±1. We see then that we must select some row of type 2, call it T . We then subtract T from all other rows whose k + 1 entry is non-zero. Thus the only types we must worry about are types 2 and 5. Notice that when we subtract T from a row of type 2, we get a row either of type 1, type 2, or type 3 with respect to k + 1. When we subtract T from a row of type 5, we get a row either of type 4, 5, or 6 with respect to k + 1. All other rows remain the same. Thus when we catalog the new rows with respect to k + 1, we get that those of type 1 become either type 1 or type 2. As mentioned before, those of type 2 become those of type 1, 2, or 3, except for row T which becomes of type 4 or 5. Type 3 becomes type 2 or 3. Type 4 remains type 4 or becomes type 5. As mentioned before, type 5 becomes type 4, 5, or 6. Lastly, type 6 becomes type 5 or type 6.
Thus, by induction, we have proven the desired statement, implying in particular that the matrix is flat at every step.
VERTICES OF THE CUBE IN THE BALLOT REGION
In this section, we demonstrate that bidirectional ballot sequences of length 2n − 1 correspond in a natural way to Q n , and we rederive the growth rate given in [Zh1] and [BP].
Example 4.2. The bidirectional ballot sequence 11011001111 corresponds to the path (figure omitted). This is a bijection from binary sequences of length m to graphs of functions f λ with λ ∈ {±1} m . Recall from Section 1 that the graphs which correspond to bidirectional ballot sequences are those of functions f λ where f λ (0) < f λ (t) < f λ (m) for all 0 < t < m. Now we will draw a correspondence between Q n and B 2n+3 through these graphs, as well as a correspondence between a certain subset of Q n and B 2n−1 , by describing a way to interpret vectors v ∈ C 2n−1 = [0, 1] 2n−1 as paths as in the discrete case in such a way that the vertices of the ballot polytope are realized as exactly the graphs above. Given a vector v = [v 1 , . . . , v 2n−1 ] ∈ C 2n−1 , define the slope vector λ v = [λ 1 , . . . , λ 2n−1 ] by λ i := (−1) i−1 (2v i − 1), and associate to v the graph of the function f λv .
Example 4.3. The gap-parametrization vector v = [3/4, 1/3, 1/2, 2/3, 1] ∈ [0, 1] 5 gives the slope vector λ v = [1/2, −1/3, 0, 1/3, 1], which gives the following graph of the function f λv , where the values next to the points indicate the distance above the x-axis (figure omitted). Although the function f λv in Example 4.3 has the property that it achieves global minimum and maximum values at its left and right endpoints (respectively), we will see that this is not always the case (see Example 4.6). We determine this behavior more precisely now.
and similarly (4.2) One can see now that, even if v ∈ P n , it is possible for the graph to fail the property stated above, i.e., to achieve a global maximum or minimum at a point in the interior of its interval of definition (again, see Example 4.6 for an explicit example). However, one can also see that if v ∈ P n , it cannot fail this property to a great extent; namely, the values at the left and right endpoints will be within a distance of 1 from the maximum and minimum values, since the large sums in the RHS of (4.1) and (4.2) will be non-negative. Nonetheless, we would like the graphs of the functions f λv with v ∈ Q n to match the graphs of bidirectional ballot sequences in B 2n+3 , and for that reason we give a way to modify a vector v ∈ Q n before associating it to a graph. Namely, we will add a sort of buffer to each side of the vector, so that the left and right endpoints get a leg up.
We now present two correspondences, the first stated more naturally, and the second proven more naturally, which are nonetheless very closely related. The first correspondence is as follows.
Theorem 4.5. The set Q n is in bijection with B 2n+3 , induced by the map v → f λ α(v) . (4.3) Before we prove Theorem 4.5, we give an example of the process that induces the bijection. Example 4.6. Consider the vertex v = [0, 0, 1, 0, 0] ∈ Q 3 , with slope vector λ v = [−1, 1, 1, 1, −1] and associated graph of f λv (figure omitted). This is not the graph of a bidirectional ballot sequence. Namely, the graph passes below the x-axis and above the line y = f λv (5). Let's now consider α(v) = [1, 0, 0, 0, 1, 0, 0, 0, 1] ∈ [0, 1] 9 , which gives slope vector λ α(v) = [1, 1, −1, 1, 1, 1, −1, 1, 1] and leads to the graph of f λ α(v) (figure omitted). The portion of the graph between the vertical dotted lines is simply the graph of f λv translated in the plane by the vector [2, 2]. This graph does correspond to a bidirectional ballot sequence, namely 110111011. We now prove that this process gives a bijection as in the statement of the theorem.
Proof of Theorem 4.5. By the correspondence between bidirectional ballot sequences and graphs of certain functions given in Example 4.2, it suffices to show that the map of (4.3) puts Q n in bijection with F , the set of graphs corresponding to bidirectional ballot sequences in B 2n+3 . If v ∈ C 2n−1 is any gap-parametrization vector, then, in light of (4.1), (4.2), and the fact that f λv achieves maxima and minima only at integer values, we have that f λv (0) − 1 ≤ f λv (t) ≤ f λv (2n − 1) + 1 for t ∈ [0, 2n − 1] if and only if v is a bidirectional gerrymander. Furthermore, if v is a vertex of the cube C 2n−1 , then α(v) is a vertex of C 2n+3 = [0, 1] 2n+3 so that f λ α(v) takes integers to integers. Since for any v ∈ C 2n−1 we have for all t ∈ (0, 2n + 3) if and only if v ∈ Q n . It follows then that, since λ α(v) ∈ {±1} 2n+3 when v ∈ Q n , we indeed have that f λ α(v) ∈ F , and so the map in (4.3) does indeed take Q n to graphs of bidirectional ballot sequences in B 2n+3 . Injectivity of the map is clear. To show that the map is surjective, we provide an inverse. For a bidirectional ballot sequence b = b 1 · · · b 2n+3 of length 2n + 3, we define the vector w = [w 1 , . . . , w 2n−1 ], where It is easily verified that the graph of f λ α(w) is the one associated to b. Moreover, the two statements directly following (4.4) imply that, since w ∈ {±1} 2n−1 and the graph of f λ α(w) is that of a bidirectional ballot sequence, we must have that w ∈ Q n . It is clear that this map is both a right- and left-inverse of the map given by (4.3).
We now give the second correspondence. Let Int(B n ) denote the interior of B n in R 2n−1 . Let T n = Int(B n ) ∩ Q n , i.e., those vertices of P n in the interior of B n .
Corollary 4.7. The set T n is in bijection with B 2n−1 , induced by the map v → f λv . (4.6) Proof. The proof here is essentially the same as that of Theorem 4.5. The point here is that, when v ∈ T n , we already have f λv (0) < f λv (t) < f λv (2n − 1), following similar reasoning as in the statements directly following (4.4).
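Both correspondences can be verified exhaustively for small n. Since Theorem 3.4 places all vertices of P_n among the cube's 0-1 vectors (and every 0-1 vector lying in a convex subset of the cube is automatically an extreme point of it), Q_n is just the set of binary vectors satisfying the ballot inequalities, and T_n is those satisfying them strictly. The sketch below is ours and uses the inequality formulation we inferred from Proposition 2.4; it reproduces |Q_n| = B_{2n+3} and |T_n| = B_{2n−1}.

```python
from itertools import product

def is_bbs(bits):
    def prefixes_positive(seq):
        s = 0
        for b in seq:
            s += 1 if b else -1
            if s < 1:
                return False
        return True
    return prefixes_positive(bits) and prefixes_positive(bits[::-1])

def B(n):
    """Brute-force count of bidirectional ballot sequences of length n."""
    return sum(is_bbs(bits) for bits in product((0, 1), repeat=n))

def ballot_slacks(v):
    """Left/right prefix-inequality slacks (inferred from Proposition 2.4)."""
    n = (len(v) + 1) // 2
    L, G = v[0::2], v[1::2]
    return ([sum(L[:k]) - sum(G[:k]) for k in range(1, n)] +
            [sum(L[n - k:]) - sum(G[n - 1 - k:]) for k in range(1, n)])

for n in (2, 3):
    verts = list(product((0, 1), repeat=2 * n - 1))
    Q = [v for v in verts if all(s >= 0 for s in ballot_slacks(v))]
    T = [v for v in verts if all(s > 0 for s in ballot_slacks(v))]
    print(n, len(Q), B(2 * n + 3), len(T), B(2 * n - 1))
# prints: 2 5 5 1 1  and  3 15 15 2 2
```
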
Lastly, we use these correspondences along with our previous analysis of P n and its translates to obtain the growth rate in [Zh1].
Proof. By Corollaries 4.8 and 4.9, we know that for ℓ odd, the growth rate is Θ(2 ℓ /ℓ). The only additional insight needed is that for all ℓ, B ℓ+1 ≥ B ℓ . To see this, note that given a BBS of length ℓ, by appending a 1 to the end of it, we obtain a BBS of length ℓ + 1. Thus up to fixed constants, the inequalities in Corollaries 4.8 and 4.9 are correct for even ℓ as well. Thus, for all ℓ, B ℓ grows like Θ(2 ℓ /ℓ).
CONCLUSION
Our methods reveal a rich combinatorial structure underlying bidirectional ballot sequences. In previous papers on BBS's ( [Zh1], [BP], [HHPW]), analytic techniques were used to obtain asymptotics, but our techniques reveal a geometric interpretation for the Θ(2 n /n) growth rate. Interestingly, in the final section of [Zh1], Zhao states without detailed proof that nB n /2 n goes to 1/4, but claims his proof is "calculation-heavy". He then posits that "[t]here should be some natural, combinatorial explanation, perhaps along the lines of grouping all possible walks into orbits of size mostly n under some symmetry, so that almost every orbit contains exactly one walk with the desired property." Zhao's statement is strikingly similar to the ideas presented in our paper. Though we have made some effort, we have not been able to derive that nB n /2 n → 1/4 using the techniques of our paper, but we feel that there is hope for such a proof.
The second, more general takeaway from this paper is the potential for the ideas originally presented in [MP]. The ideas in this paper in fact evolved from the ideas in [MP]. In passing to the continuous setting, several additive number theory and combinatorial problems reveal a rich structure which was not otherwise visible. We believe that there is even greater potential still in such ideas and techniques. | 2018-08-19T15:33:46.000Z | 2017-08-08T00:00:00.000 | {
"year": 2018,
"sha1": "04145006f220bf3116431c88a67c0fc9938248bf",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "04145006f220bf3116431c88a67c0fc9938248bf",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
267197848 | pes2o/s2orc | v3-fos-license | Social Support and the Association Between Certain Forms of Violence and Harassment and Suicidal Ideation Among Transgender Women — National HIV Behavioral Surveillance Among Transgender Women, Seven Urban Areas, United States, 2019–2020
Violence and harassment toward transgender women are associated with suicidal thoughts and behaviors, and social support might moderate such association. This analysis explored the association between certain forms of violence and harassment and suicidal ideation and moderation by social support. Better understanding of these associations could guide mental health services and structural interventions appropriate to lived experiences of transgender women. This cross-sectional analysis used data from CDC’s National HIV Behavioral Surveillance Among Transgender Women. During 2019–2020, transgender women were recruited via respondent-driven sampling from seven urban areas in the United States for an HIV biobehavioral survey. The association between experiencing certain forms of violence and harassment (i.e., gender-based verbal and physical abuse or harassment, physical intimate partner abuse or harassment, and sexual violence) and suicidal ideation was measured using adjusted prevalence ratios and 95% CIs generated from log-linked Poisson regression models controlling for respondent-driven sampling design and confounders. To examine moderation, the extents of social support from family, friends, and significant others were assessed for interaction with certain forms of violence and harassment; if p interaction was <0.05, stratified adjusted prevalence ratios were presented. Among 1,608 transgender women, 59.7% experienced certain forms of violence and harassment and 17.7% reported suicidal ideation during the past 12 months; 75.2% reported high social support from significant others, 69.4% from friends, and 46.8% from family. Experiencing certain forms of violence and harassment and having low-moderate social support from any source was associated with higher prevalence of suicidal ideation. 
Social support from family moderated the association between experiencing certain forms of violence and harassment and suicidal ideation (p interaction = 0.01); however, even in the presence of high family social support, experiencing certain forms of violence and harassment was associated with higher prevalence of suicidal ideation. Social support did not completely moderate the positive association between experiencing violence and harassment and suicidal ideation. Further understanding of the social support dynamics of transgender women might improve the quality and use of social support. Policymakers and health care workers should work closely with transgender women communities to reduce the prevalence of violence, harassment, and suicide by implementing integrated, holistic, and transinclusive approaches.
Introduction
A high proportion of transgender persons considered or attempted suicide at some point during their lives, often higher than in the general population (1), with notably higher prevalence among young transgender persons and transgender persons from racial and ethnic minority groups (2)(3)(4)(5). In the 2015 U.S. Transgender Survey, 82% of respondents ever considered and 40% ever attempted suicide; 48% of respondents considered and 7% attempted suicide during the past year (2). Further, transgender women are more likely to report suicidal thoughts than transgender men, nonbinary persons, and other gender diverse groups (6). On the basis of CDC's National HIV Behavioral Surveillance Among Transgender Women (NHBS-Trans) during 2019-2020, a total of 18% considered and 4% attempted suicide during the past 12 months (7).
Similarly prevalent among transgender persons are experiences of violence and harassment. Studies have reported a wide range of lifetime violence among transgender persons (7%-89%), which limits understanding of the true prevalence in this population (8). Violence and harassment against transgender persons come in many forms (e.g., verbal, physical, sexual, occupational, economic, and emotional) and from many sources (e.g., interpersonal, partner or nonpartner, and structural) (8). Transgender women in particular are more frequently victimized than other transgender and gender diverse groups (9); this is often attributed to transmisogyny, an intersectional stigma based on trans identity and feminine expression (10). Violence and harassment have been associated with higher risk for HIV infection (11), mental health conditions (5,12), and death, often from suicide (13)(14)(15). The association between violence and harassment and increased suicidal thoughts and behaviors among transgender persons is consistent across studies (3,13,14,16,17). Social support might attenuate the association, although studies exploring such a hypothesis among transgender persons are scant, and violence and harassment were not analyzed separately from other adverse social experiences (15,(18)(19)(20).
Scientific gaps remain because most previous studies were not focused on transgender women and examined violence and harassment, suicidal ideation, and social support separately. The analyses in this report examined the association between experiences of certain forms of violence and harassment and suicidal ideation among transgender women and explored moderation of the association by perceived social support. A thorough understanding of the intersectionality of these factors could help guide recommendations for mental health services and structural interventions tailored to the lived experiences of transgender women.
Data Source
This report includes survey data from NHBS-Trans conducted by CDC during June 2019-February 2020 to assess behavioral risks, prevention usage, and HIV prevalence. Eligible participants completed an interviewer-administered questionnaire and were offered HIV testing. Information and referrals to appropriate services, which were identified as available and acceptable to the population during formative assessment, were provided to participants who reported experiences of violence and harassment and suicidal thoughts and behaviors. Additional information about NHBS-Trans eligibility criteria, data collection, and biologic testing is available in the overview and methodology report of this supplement (21). The NHBS-Trans protocol questionnaire and documentation are available at https://www.cdc.gov/hiv/statistics/systems/nhbs/methods-questionnaires.html#trans.
Applicable local institutional review boards in each participating project area approved NHBS-Trans activities. The final NHBS-Trans sample included 1,608 transgender women in seven urban areas in the United States (Atlanta, Georgia; Los Angeles, California; New Orleans, Louisiana; New York City, New York; Philadelphia, Pennsylvania; San Francisco, California; and Seattle, Washington) recruited using respondent-driven sampling. This activity was reviewed by CDC, deemed not research, and was conducted consistent with applicable Federal law and CDC policy.*
Measures
The gender minority stress model (14) underpinned the conceptual framework for the analysis (Figure). This model posits that being part of a gender minority contributes to multiple stressors, including violence and harassment, that negatively affect health outcomes, including suicidal ideation, among transgender women (14). Social support was analyzed as a resilience factor that could moderate the association between violence and suicidal ideation.
The outcome assessed was suicidal ideation during the past 12 months (Table 1). The exposure assessed was experiences with certain forms of violence and harassment, which was operationally defined as gender-based verbal or physical abuse or harassment, physical abuse or harassment by an intimate partner, or sexual violence during the past 12 months. The creation of this composite variable (22) was determined by the high co-occurrence of multiple forms of violence and harassment among transgender populations across different studies (8) and in the current analytical sample. The moderator assessed was perceived social support, measured using the Multidimensional Scale of Perceived Social Support, dichotomized as low-moderate (mean <3.57) and high (mean ≥3.57) (23).
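As a concrete illustration of this coding step, the sketch below dichotomizes one Multidimensional Scale of Perceived Social Support subscale at the cut-point given above; the function name and the list-of-item-scores input format are hypothetical and not part of the survey instrument.

```python
def support_level(item_scores):
    """Dichotomize an MSPSS subscale: subscale mean < 3.57 ->
    'low-moderate', otherwise 'high'.  Cut-point from the text (23)."""
    mean = sum(item_scores) / len(item_scores)
    return "low-moderate" if mean < 3.57 else "high"
```

Each of the three subscales (family, friends, significant others) would be coded separately this way before being entered into the models.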
Analysis
The association between certain forms of violence and harassment and suicidal ideation was examined using log-linked Poisson regression models with generalized estimating equations with an exchangeable correlation matrix and robust variance estimators. Bivariable models were used to determine factors associated with suicidal ideation, and the associations were described as crude prevalence ratios with 95% CIs. Multivariable models, controlled for confounding factors, were used to examine the association of certain forms of violence and harassment and social support with suicidal ideation, and the associations were described as adjusted prevalence ratios with 95% CIs. All bivariable and multivariable models accounted for the respondent-driven sampling methodology by adjusting for network size and city and by clustering on recruitment chains. Moderation by social support subscales was assessed using interaction terms of dichotomized social support subscale scores and certain forms of violence and harassment on the multiplicative scale in separate multivariable models (25). The interaction between family social support and certain forms of violence and harassment was statistically significant (p<0.05); hence, stratified adjusted prevalence ratios by extent of family social support were calculated (25). Statistical analyses were conducted using SAS (version 9.4; SAS Institute).
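The authors fit these models in SAS; as a rough illustration of the same estimator family, the NumPy sketch below fits a log-linked Poisson model by Newton/IRLS and computes robust (sandwich) standard errors, exponentiating coefficients into prevalence ratios with 95% CIs. It uses an independence working model rather than the exchangeable correlation structure described above, and all names are illustrative, not the authors' code.

```python
import numpy as np

def poisson_pr(X, y, groups=None, n_iter=25):
    """Fit a log-linked Poisson model and return prevalence ratios
    with robust (sandwich) 95% CIs; intercept is column 0."""
    X = np.column_stack([np.ones(len(y)), X])      # prepend intercept
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):                        # Newton/IRLS updates
        mu = np.exp(X @ beta)
        H = X.T @ (mu[:, None] * X)                # expected information
        beta += np.linalg.solve(H, X.T @ (y - mu))
    mu = np.exp(X @ beta)
    resid = y - mu
    bread = np.linalg.inv(X.T @ (mu[:, None] * X))
    if groups is None:                             # independent observations
        meat = X.T @ ((resid ** 2)[:, None] * X)
    else:                                          # cluster-level score sums
        meat = np.zeros((X.shape[1], X.shape[1]))
        for g in np.unique(groups):
            m = groups == g
            s = (resid[m, None] * X[m]).sum(axis=0)
            meat += np.outer(s, s)
    se = np.sqrt(np.diag(bread @ meat @ bread))    # sandwich SEs
    pr = np.exp(beta)                              # prevalence ratios
    ci = np.exp(np.column_stack([beta - 1.96 * se, beta + 1.96 * se]))
    return pr, ci
```

A multiplicative interaction term (e.g., violence x family support) is simply an extra column of X formed as the elementwise product of the two indicator columns; stratified prevalence ratios follow from combining the main-effect and interaction coefficients.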
Results
Among transgender women in the sample (N = 1,608), many were aged <40 years (59.5%), were Hispanic or Latina (Hispanic) (40.0%) or Black or African American (Black) (35.4%), lived at or below the Federal poverty level (62.7%), were ever incarcerated (58.1%; 17.2% during the past 12 months), and had experienced homelessness during the past 12 months (41.6%) (Table 2). (Persons of Hispanic origin might be of any race but are categorized as Hispanic; all racial groups are non-Hispanic.) Most were currently taking gender-affirming hormonal therapy (71.5%) and wanted gender-affirming surgery but had not received procedures (52.2%); 41.0% tested positive for HIV. During the past 12 months, 59.7% experienced certain forms of violence and harassment: 53.4% reported gender-based verbal abuse or harassment, 26.6% reported gender-based physical abuse or harassment, 15.3% reported being physically abused or harassed by an intimate partner, and 14.8% reported sexual violence (not mutually exclusive). Among all participants, 75.2% reported high social support from significant others, 69.4% from friends, and 46.8% from family.
§ All bivariable and multivariable models were controlled for city and network size and accounted for clustering by respondent-driven sampling recruitment chain.
The multivariable models for the exposure and each moderator variable were fit separately and were adjusted for age, race and ethnicity, education, poverty, GAHT status, gender-affirming surgery status, disability, HIV status, incarceration, illicit drug use (excluding marijuana), and homelessness.
¶ Statistically significant at p<0.05.
** Persons of Hispanic or Latina (Hispanic) origin might be of any race but are categorized as Hispanic; all racial groups are non-Hispanic. "Other" includes persons who identified with non-Hispanic ethnicity and identified as American Indian/Alaska Native, Native Hawaiian or other Pacific Islander, Asian, or multiracial (i.e., more than one racial group).
†† 2019 Federal poverty level thresholds were calculated on the basis of U.S. Department of Health and Human Services Federal poverty level guidelines (https://aspe.hhs.gov/topics/poverty-economic-mobility/poverty-guidelines/prior-hhs-poverty-guidelines-federal-register-references/2019-poverty-guidelines).
§§ Participants with a reactive rapid National HIV Behavioral Surveillance HIV test result supported by a second rapid test or supplemental laboratory-based testing.
¶¶ Serious difficulty hearing, seeing, doing cognitive tasks, walking or climbing stairs, dressing or bathing, or doing errands alone, based on the U.S. Department of Health and Human Services disability data standard (https://aspe.hhs.gov/reports/hhs-implementation-guidance-data-collection-standards-race-ethnicity-sex-primary-language-disability-0).
*** Excludes marijuana.
††† Held in a detention center, jail, or prison for >24 hours.
§§§ Any reports of gender-based verbal and physical abuse or harassment, physical intimate partner abuse or harassment, and sexual violence during the past 12 months.
During the past 12 months, 17.7% reported suicidal ideation. The prevalence of suicidal ideation was higher among those who were aged 18-24 years, were White, had at least a high school education, had an unmet need for gender-affirming surgery, had HIV-negative test results, reported drug use, had a disability, were currently experiencing homelessness, did not report a history of incarceration, reported low-moderate social support from any source, and experienced certain forms of violence and harassment (p<0.05) (Table 2). In the multivariable analyses, both experiencing certain forms of violence and harassment and having low-moderate social support from any source were associated with higher prevalence of suicidal ideation.
The interaction between social support from family and experiencing certain forms of violence and harassment was significant (p interaction = 0.01) (Table 3). However, even among those with high family social support, certain forms of violence and harassment were significantly associated with increased prevalence of suicidal ideation. The interactions between social support from friends and from significant others and experiencing certain forms of violence and harassment were not statistically significant.
Discussion
Six in 10 transgender women experienced certain forms of violence and harassment during the past 12 months, and approximately one fifth reported suicidal ideation during the past 12 months. Most transgender women reported high social support from friends or significant others. Experiencing certain forms of violence and harassment and having low-moderate social support were associated with increased prevalence of suicidal ideation. Even in the presence of high family social support, certain forms of violence and harassment were still associated with higher prevalence of suicidal ideation.
The prevalence of suicidal ideation during the past 12 months in this analysis was lower than in other studies among nonrandom samples of transgender persons (2,16) and young transgender women (5) but was disproportionately higher than in a randomly selected sample of the general population in the United States (1). Contrary to other studies (2,4,13,16,24,26), suicidal ideation was not associated with gender-affirming therapy or poverty. Suicidal ideation was highest among those currently experiencing homelessness, consistent with other studies (16,27). The lower prevalence of suicidal ideation among those with a history of incarceration >12 months ago was consistent with another study (28). The prevalence of certain forms of violence and harassment during the past 12 months in this analysis was similar to the high estimates among transgender persons in a large cross-sectional study in the United States (2) and in a systematic review (8). The prevalences of physical intimate partner abuse or harassment and sexual violence during the past 12 months in this analysis were higher than those among cisgender women in the United States (29) but comparable to the intimate partner violence prevalence among cisgender women of low socioeconomic status (30).
This analysis contributes to existing research linking certain forms of violence and harassment with increased suicidal ideation among transgender women (13,16,17); these studies likewise used the gender minority stress model (14) to explain their findings. This model suggests that certain forms of violence and harassment are often enacted upon those not conforming to heterosexual and cisgender norms and are underpinned by sociocultural, political, and legal marginalization of gender minorities (2,8), emphasizing the role of social determinants influencing health disparities among transgender women (8).
The findings in this report indicate that lack of high social support was associated with suicidal ideation, a finding consistent with other studies (15,18,26). However, the association between certain forms of violence and harassment and higher suicidal ideation was not moderated by social support from friends and significant others, and the association remained despite high social support from family. Collectively, these results suggest that the association of certain forms of violence and harassment with higher suicidal ideation remained regardless of social support from any source. Mediating factors between experiences of violence and harassment and suicidal ideation (e.g., incarceration, homelessness, and poor access to education and health care) might exist such that social support alone could not adequately reduce the risk for suicidal ideation (18). Previous moderation studies have demonstrated mixed results (12,15,18,20,26). Certain studies found that the association was not moderated by social support from family (12,18,20) or from friends (15,20). Other studies found that social support from significant others (15) and parental support specific to gender identity (26) buffered the increased suicidal ideation associated with violence and harassment. This report contributes to the limited number of studies exploring the relations among these variables (12,15,18,20,26) and, unlike those studies, analyzed these variables together.
Social support dynamics of transgender women are multifaceted. Family could be a source of social support (31), abuse and harassment (32), or both. Amid the frequent reports of rejection from family, social support from friends and significant others might fill such gaps (33). Moreover, the findings suggest that the effectiveness of social support as a buffer might depend on the quality and the context in which the support was provided. Not all social support might be productively helpful (34), and certain transgender persons report adverse experiences while receiving social support, such as microaggressions (33,35,36) and corumination (18,37). Microaggressions are subtle behaviors of gender-based discrimination from various perpetrators (33); these might even come from supportive family, friends, significant others, and persons who belong to sexual and gender minority groups (33,35,36). Corumination is the unproductive processing and repeated experiencing of trauma with a person who shares lived experiences (37). Although both were associated with poor mental health outcomes (33,36,38), microaggressions and corumination do not discount the protective effects of social support in general on mental health (15,16,23). Nonetheless, further understanding of the social support dynamics among transgender persons, including improving how researchers operationalize and measure social support, is warranted (19).
Addressing violence, harassment, and suicidal ideation among transgender women requires integrated multisectoral interventions (https://www.cdc.gov/suicide/pdf/preventionresource.pdf). Violence and harassment prevention can be delivered through community-led awareness and cultural changes in existing programs (39), such as transinclusivity in schools (39), homeless shelters (40), the criminal justice system (8), and health care (8). Holistic approaches addressing underlying socioecological factors (e.g., gender norms, economic dependence, and public attitudes toward violence and harassment) have been recommended (41,42). Moreover, because transgender persons experiencing violence and harassment were more likely to access support from family, friends, and significant others than from health care providers (43), interventions improving the quality of social support, such as family-based interventions (44), life-course-appropriate tools (31), peer-delivered support groups (19), and bystander engagement (43), could be considered. Designing interventions with the transgender community is essential because transgender persons have values and strategies (45) for effectively building their social capital.
Limitations
General limitations of NHBS-Trans are available in the overview and methodology report of this supplement (21). The findings in this report are subject to at least four additional limitations. First, the cross-sectional design precludes inferences on causality among violence and harassment, suicidal ideation, and social support. Second, measurement of variables might be limited by information bias. Measured violence and harassment excluded physical and verbal abuse or harassment that was not specific to participants' gender identity or presentation and other forms of violence (e.g., psychological and economic violence). Measured social support pertained to individual support to the participants and was not specific to structural or community levels of support. Family might pertain to family of origin, chosen family, or both; social support from significant others might be subject to the nonspecificity and transience of significant others. The survey did not assess whether sources of social support were also perpetrators of violence. Third, most data were self-reported and might be subject to recall and social desirability biases and influenced by trauma, which could underestimate the reports of suicidal thoughts and experiences of violence and harassment. Finally, the sample is not representative of transgender women residing outside of the seven urban areas. Because transgender women are hard to reach, the data might not be representative of all transgender women residing in the seven urban areas.
The surveillance included an incentivized peer recruitment; therefore, participants might have been more likely to have similar characteristics, including socioeconomic status and experiences of violence (22).
Conclusion
Many transgender women experience certain forms of violence and harassment, and these experiences are associated with suicidal ideation. Although social support might be protective against suicidal ideation, such support does not seem to completely buffer the association between certain forms of violence and harassment and suicidal ideation. Integrated and holistic approaches to violence, harassment, and suicide prevention designed by and for transgender women are needed.
TABLE 3. Association between suicidal ideation and experiences of certain forms of violence and harassment* and the moderating effect of family social support - National HIV Behavioral Surveillance Among Transgender Women, seven urban areas,† United States, 2019-2020
TABLE 1. Variables, questions, and analytic coding for social support and the association between certain forms of violence and harassment and suicidal ideation among transgender women - National HIV Behavioral Surveillance Among Transgender Women, seven urban areas,* United States, 2019-2020
Columns: Variable | Question | Analytic coding
Race and ethnicity§ | Do you consider yourself to be of Hispanic, Latino/a, or Spanish origin? Which racial group or groups do you consider yourself to be in? You may choose more than one option. | Black or African American, White, Hispanic, or other
Poverty¶ | What was your household income last year from all sources before taxes? Including yourself, how many people depended on this income? | Above Federal poverty level, or at or below Federal poverty level
Education | What is the highest level of education you completed? | <High school, high school diploma or equivalent, or >high school
GAHT status | Have you ever taken hormones for gender transition or affirmation? Are you currently taking hormones for gender transition or affirmation? Would you like to take hormones for gender transition or affirmation? | Do not want to take GAHT, currently taking GAHT, or want to take GAHT
Gender-affirming surgery status | Have you ever had any type of surgery for gender transition or affirmation? Do you plan or want to get additional surgeries for gender transition or affirmation? Do you want to have surgery for gender transition or affirmation?* |
Disability** | Are you deaf or do you have serious difficulty hearing? Are you blind or do you have serious difficulty seeing, even when wearing glasses? Because of a physical, mental, or emotional condition, do you have serious difficulty concentrating, remembering, or making decisions? Do you have serious difficulty walking or climbing stairs? Do you have difficulty dressing or bathing? Because of a physical, mental, or emotional condition, do you have difficulty doing errands alone, such as visiting a doctor's office or shopping? | Yes or no
Illicit drug use (excluding marijuana) | Have you ever in your life shot up or injected any drugs other than those prescribed for you? How many days or months or years ago did you last inject? In the past 12 months, have you used any drugs that were not prescribed for you and that you did not inject? | Yes or no
TABLE 1. (Continued) Variables, questions, and analytic coding for social support and the association between certain forms of violence and harassment and suicidal ideation among transgender women - National HIV Behavioral Surveillance Among Transgender Women, seven urban areas,* United States, 2019-2020
Columns: Variable | Question | Analytic coding
Incarceration | Have you ever been held in a detention center, jail, or prison for more than 24 hours? During the past 12 months, have you been held in a detention center, jail, or prison for more than 24 hours? |
Not a questionnaire item in National HIV Behavioral Surveillance Among Transgender Women; this is a composite analysis variable from the responses to the actual survey questions on verbal abuse or harassment, physical abuse or harassment, physical intimate partner abuse or harassment, and sexual violence.
§ Persons of Hispanic or Latina (Hispanic) origin might be of any race but are categorized as Hispanic; all racial groups are non-Hispanic.
¶ 2019 Federal poverty level thresholds were calculated on the basis of U.S. Department of Health and Human Services Federal poverty level guidelines (https://aspe.hhs.gov/topics/poverty-economic-mobility/poverty-guidelines/prior-hhs-poverty-guidelines-federal-register-references/2019-poverty-guidelines).
** To assess difficulty in six basic domains of functioning (hearing, vision, cognition, walking, self-care, and independent living), based on the U.S. Department of Health and Human Services disability data standard (https://aspe.hhs.gov/reports/hhs-implementation-guidance-data-collection-standards-race-ethnicity-sex-primary-language-disability-0).
Social support subscale scores were dichotomized as low-moderate (mean <3.57) and high (mean ≥3.57) (Cronbach's alpha = 0.97) (23). All three social support subscales (family, friends, and significant others) were assessed separately. The instrument demonstrated good construct validity and internal consistency among transgender persons (15). Confounding factors, determined a priori (15,16,24), included age, race and ethnicity, poverty, education, HIV testing result, hormonal and surgical gender-affirmation status, illicit drug use, disability, incarceration, and homelessness.
TABLE 2. Number and percentage of transgender women experiencing certain forms of violence and harassment, by reported suicidal ideation and selected characteristics - National HIV Behavioral Surveillance Among Transgender Women, seven urban areas,* United States, 2019-2020
Columns: Characteristic | Total (N = 1,608) | Reported suicidal ideation during the past year
TABLE 2. (Continued) Number and percentage of transgender women experiencing certain forms of violence and harassment, by reported suicidal ideation and selected characteristics - National HIV Behavioral Surveillance Among Transgender Women, seven urban areas,* United States, 2019-2020
Columns: Characteristic | Total (N = 1,608) | Reported suicidal ideation during the past year
GAHT = gender-affirming hormonal therapy; NA = not applicable; PR = prevalence ratio; Ref = referent group.
* Atlanta, GA; Los Angeles, CA; New Orleans, LA; New York City, NY; Philadelphia, PA; San Francisco, CA; and Seattle, WA.
† Numbers might not sum to 1,608, and column percentages might not sum to 100% because of missing values.
Numbers might not sum to 1,608 because of missing values.
** Log-linked Poisson regression models using generalized estimating equations with an exchangeable correlation matrix and robust variance estimators, with a significant interaction term between family social support and certain forms of violence and harassment (p interaction = 0.01).
†† Models were adjusted for respondent-driven sampling design and confounding factors, including age, race and ethnicity, education, poverty, gender-affirming hormonal therapy status, gender-affirming surgery status, disability, HIV status, incarceration, illicit drug use (excluding marijuana), and homelessness.
Engagement, Acceptability, Usability, and Preliminary Efficacy of a Self-Monitoring Mobile Health Intervention to Reduce Sedentary Behavior in Belgian Older Adults: Mixed Methods Study
Background Although healthy aging can be stimulated by the reduction of sedentary behavior, few interventions are available for older adults. Previous studies suggest that self-monitoring might be a promising behavior change technique to reduce older adults’ sedentary behavior. However, little is known about older adults’ experiences with a self-monitoring–based intervention aimed at the reduction of sedentary behavior. Objective The aim of this study is to evaluate engagement, acceptability, usability, and preliminary efficacy of a self-monitoring–based mHealth intervention developed to reduce older adults’ sedentary behavior. Methods A mixed methods study was performed among 28 community-dwelling older adults living in Flanders, Belgium. The 3-week intervention consisted of general sedentary behavior information as well as visual and tactile feedback on participants’ sedentary behavior. Semistructured interviews were conducted to explore engagement with, and acceptability and usability of, the intervention. Sitting time was measured using the thigh-worn activPAL (PAL Technologies) accelerometer before and after the intervention. System usage data of the app were recorded. Quantitative data were analyzed using descriptive statistics and paired-samples t tests; qualitative data were thematically analyzed and presented using pen profiles. Results Participants mainly reported positive feelings regarding the intervention, referring to it as motivating, surprising, and interesting. They commonly reported that the intervention changed their thinking (ie, they became more aware of their sedentary behavior) but not their actual behavior. There were mixed opinions on the kind of feedback (ie, tactile vs visual) that they preferred. The intervention was considered easy to use, and the design was described as clear. Some problems were noticed regarding attaching and wearing the self-monitoring device. 
System usage data showed that the median frequency of consulting the app widely differed among participants, ranging from 0 to 20 times a day. No significant reductions were found in objectively measured sitting time. Conclusions Although the intervention was well perceived by the majority of older adults, no reductions in sitting time were found. Possible explanations for the lack of reductions might be the short intervention duration or the fact that only bringing the habitual sedentary behavior into conscious awareness might not be sufficient to achieve behavior change. Trial Registration ClinicalTrials.gov NCT04003324; https://tinyurl.com/y2p4g8hx
Introduction
The aging population continues to expand rapidly. Estimates indicate that the global number of adults over the age of 65 years will nearly double from a current population of about 800 million to approximately 1.5 billion in 2050 [1]. This unprecedented population boom poses a major public health challenge. Aging will present an economic burden on society because of increased needs resulting from age-related decline of physical, mental, and cognitive health [2]. To maintain the quality of life of older adults while living independently, healthy aging has become a main priority in the field of public health.
Up until now, the majority of efforts to facilitate healthy aging have been focused on increasing moderate-to vigorous-intensity physical activity [3] but have neglected sedentary behavior. However, both physical inactivity and high levels of sedentary time have been shown to be significantly related to detrimental health effects, like an increased risk for all-cause mortality, noncommunicable diseases [4], and geriatric syndromes, such as physical and cognitive impairments [5,6]. Research has indicated that an increase in moderate-to vigorous-intensity physical activity is often not sufficient to offset the negative health consequences of high levels of sedentary behavior [7]. Given the negative health consequences and the high prevalence of sedentary behavior in older adults (ie, 60 years and over) [8], creating interventions specifically focusing on the reduction of sedentary behavior is recommended to promote healthy aging.
Existing sedentary behavior interventions have mainly focused on social-cognitive models of behavioral change (eg, theory of planned behavior) [9,10]. However, most of these models are based on an expectancy-value framework in which behavior is determined by expected outcomes and the value that is placed on them [11]. As such, these models do not adequately capture processes underlying unintentional and habit-like behavior. Given that a large part of older adults' sedentary behavior is habitual, specific strategies are needed to better control sedentary behavior. One might, for example, change the circumstances, so that habit cueing does not occur anymore [12], or alter external cues that lead to habit execution [13]. These strategies are rather manipulative and often impossible; therefore, they are not always ethical [14,15]. Another way to disrupt undesired habits is preferred, namely by bringing habitual behavior and its context into conscious awareness. This might be achieved by means of self-monitoring [16].
Self-monitoring, which is defined as keeping a record of a specified behavior as a method for changing behavior [16], has been identified as a promising behavior change technique to reduce sedentary behavior in adults [10,17]. A recent meta-analysis, in which interventions including self-monitoring were summarized that aimed to reduce sedentary behavior, showed a significant reduction in total sedentary time [17]. Specifically, an overall mean difference of 34.37 min/day (95% CI 14.48-54.25) was found for total sedentary time between intervention and control groups. It is important to note, however, that the majority of the included interventions targeted young and middle-aged adults. Only four studies targeted older adults with a mean age above 60 years. Of these four studies, only one used an electronic self-monitoring device to provide information on older adults' sedentary behavior, namely the Fitbit One. As the Fitbit One is worn on the wrist, the validity of the sedentary behavior information can be questioned.
Given the limited quantity and quality of existing research on this topic, it remains unclear how older adults experience and use self-monitoring-based mobile health (mHealth) interventions specifically developed to reduce sedentary behavior. However, this information is essential to inform decisions on the development of future interventions. The conceptual model by Perski et al has indicated that user engagement (ie, the combination of subjective experiences characterized by attention, interest, and affect, and objectively measured intervention usage) is assumed to moderate the influence of the mHealth intervention on the mechanisms of action [18]. Next to user engagement, other aspects of acceptability (ie, how well older adults perceived the intervention and the extent to which the intervention met their needs), such as perceived relevance, satisfaction, and perceived usefulness, as well as usability (ie, the extent to which the intervention could be used by other older adults to reduce their sedentary behavior), also contribute to an individual's motivation to continue using the app [19].
Therefore, the overall aim of this study is to gain insight into older adults' experiences with, and the use of, a self-monitoring-based mHealth intervention specifically developed to reduce sedentary behavior. As both qualitative and quantitative data are essential to fully understand concepts such as user engagement, a mixed methods study is used. Moreover, preliminary efficacy of the intervention on older adults' objectively assessed sedentary behavior is examined to get a first indication of the effect size.
Participants and Design
A convergence model with a triangulated mixed methods approach was conducted to gain in-depth comprehension in the user engagement, acceptability, and usability of an mHealth intervention aimed at the reduction of sedentary behavior. This methodology allowed us to compare, corroborate, or relate quantitative data (ie, system usage and activity monitor data) and qualitative data (ie, interview data). Quantitative and qualitative data were analyzed separately, followed by an integrated interpretation of the results. Participants in the mixed methods study were recruited in Flanders, Belgium, from February to May 2019 using convenience sampling. Recruitment continued until data were saturated (ie, until no new themes emerged in additional interviews). Firstly, an advertisement was distributed via Facebook, and secondly, the advertisement was electronically sent to older adults who were included in a previous study by our research group and who had expressed interest in future studies. To be eligible for this study, participants needed to (1) be at least 60 years old, (2) be Dutch speaking, (3) be able to walk 100 meters without severe difficulties, and (4) have a smartphone. The study was registered at ClinicalTrials.gov (Identifier: NCT04003324) and was approved by the Committee of Medical Ethics of the Ghent University Hospital (Belgian registration number: 2019/0398). All participants provided written informed consent.
Procedure
The study procedure is explained in Figure 1. Concretely, older adults who agreed to participate were contacted by phone to make an appointment for a first home visit. During this home visit, they received an information letter explaining the purpose of the study and an informed consent form. After signing the informed consent form, baseline measures were collected. Specifically, a structured interview was conducted to assess participants' sociodemographic characteristics. Moreover, an accelerometer, the activPAL (PAL Technologies), was attached to the participants' thighs to objectively measure their sedentary behavior. Participants were instructed to wear the accelerometer for 1 week and to fill out the accompanying diary. After 1 week, a researcher visited the participants again at their homes to collect the accelerometers. After baseline measurements, the self-monitoring-based mHealth intervention was introduced to the participants (see Self-Monitoring mHealth Intervention section). At the end of the intervention (ie, after 3 weeks), participants were asked to complete a semistructured interview that included questions on user engagement with the intervention and perceptions regarding usability and acceptability. At the end of this last home visit, participants were instructed to wear the accelerometer for another week (ie, postmeasurements). Participants were given a prestamped envelope and were asked to send the accelerometer back by postal mail.
Self-Monitoring mHealth Intervention
The intervention consisted of general sedentary behavior information as well as visual and tactile feedback on participants' sedentary behavior. General sedentary behavior information was provided to participants by means of a 10-minute presentation. The presentation was given by an expert in the field during the second home visit. Visual and tactile feedback were provided using a novel self-monitoring device, the Activator (PAL Technologies). The Activator has recently been validated by Gill et al [20]. The Activator is worn on the front of the thigh, either in a pants pocket or attached with an elastic band to clothing covering the upper thigh (eg, trousers, jeans, shorts, leggings, tights, or dresses), and provides visual and tactile feedback [21]. Visual feedback is presented through a smartphone app via Bluetooth connection. Both real-time feedback and a 7-day historical overview are presented based on participants' sedentary time, upright time, and number of steps (see Figure 2). Visual feedback is constantly available and can be viewed whenever and as often as participants want. Tactile feedback is provided by means of a strong, but comfortable, vibration of the Activator device itself each time a participant is sitting for 30 uninterrupted minutes. If a participant remains sedentary, the vibration is repeated after another 30 minutes. Participants were able to turn the vibration function on and off.
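The tactile-feedback rule described above (a vibration after every 30 uninterrupted sedentary minutes, with the counter reset whenever the sitting bout is broken) can be sketched as follows. The function name and the per-minute input format are illustrative assumptions, not the actual Activator firmware:

```python
def vibration_minutes(postures, threshold=30):
    """Return the minutes at which the device would vibrate.

    `postures` is a per-minute sequence of states ('sit', 'stand', 'step').
    A vibration fires after every `threshold` uninterrupted sedentary
    minutes; any non-sitting minute resets the counter.
    """
    alerts = []
    streak = 0  # current run of consecutive sedentary minutes
    for minute, state in enumerate(postures, start=1):
        if state == 'sit':
            streak += 1
            if streak % threshold == 0:  # 30, 60, 90, ... minutes of sitting
                alerts.append(minute)
        else:
            streak = 0
    return alerts
```

For example, 65 consecutive sedentary minutes would trigger vibrations at minutes 30 and 60, whereas a single standing break resets the 30-minute window.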
Structured Interview
Participants' sociodemographic characteristics were collected by a trained researcher during the first home visit. Sociodemographic characteristics included age, gender, family situation (ie, being single or a widow or widower, having a partner but living independently, living with a partner, or being married), number of children, number of grandchildren, residential area (ie, countryside, village, city suburb, or city), educational level (ie, no education, primary education, vocational secondary education, technical secondary education, general secondary education, college, or university), and employment status (ie, employed or not employed).
Activity Monitor
Total sedentary time, sit-to-stand transitions, standing time, and number of steps were objectively estimated by means of the activPAL accelerometer. The accelerometer was attached on the midline of the right anterior thigh. Participants were instructed to wear the accelerometer for 7 consecutive days (24 h/day), both at baseline and at postmeasurement. The activPAL accelerometer summarizes data in 15-second intervals and has shown to be a valid and reliable measure for estimating the time spent sitting, standing, and stepping [19]. The activPAL data were downloaded using activPAL3 software, version 7.2.38, and were then processed using ProcessingPAL, version 1.1 (University of Leicester, UK). This software uses a validated algorithm to separate valid waking wear data from sleep and nonwear data. A day was considered invalid if there was limited postural variation (ie, ≥95% of wear time in one activity), a limited number of steps (<500 steps/day), or fewer than 10 hours of valid waking wear time [22,23]. Summary data from the algorithm were quality checked using heat maps against participants' diaries, and corrections were made where needed [22,23]. Only participants with at least 5 days of valid activPAL data on both time points were included in the analyses [24].
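The validity criteria above amount to a simple per-day filter followed by a per-participant inclusion rule. A minimal sketch is given below; the function and field names are hypothetical, and the actual ProcessingPAL implementation differs:

```python
def is_valid_day(waking_wear_hours, steps, activity_shares):
    """Apply the per-day validity criteria to one activPAL day.

    `activity_shares` maps each posture ('sit', 'stand', 'step') to its
    fraction of wear time. A day is invalid with fewer than 10 hours of
    valid waking wear, fewer than 500 steps, or >=95% of wear time spent
    in a single activity (limited postural variation).
    """
    if waking_wear_hours < 10:
        return False
    if steps < 500:
        return False
    if max(activity_shares.values()) >= 0.95:
        return False
    return True


def include_participant(baseline_days, post_days, minimum=5):
    """Keep a participant only with >=5 valid days at both time points.

    `baseline_days` and `post_days` are lists of booleans produced by
    `is_valid_day` for each measured day.
    """
    return sum(baseline_days) >= minimum and sum(post_days) >= minimum
```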
Diary Log
Participants were asked to indicate sleep time (ie, time they went to bed and got up) and nonwearing time of the activPAL in a diary during the 7 days of baseline measurement and postmeasurement.
Semistructured Interview
Semistructured face-to-face interviews were conducted by trained researchers to explore (1) user engagement with the intervention and (2) the usability and acceptability of the intervention. User engagement was defined as the subjective experience of older adults with the intervention characterized by attention, interest, and affect. Acceptability was assessed by asking questions on how well the older adults perceived the intervention and by evaluating the extent to which the intervention met their needs. Usability included questions on the extent to which the intervention could be used by other older adults to reduce their sedentary behavior. The interview guide (see Multimedia Appendix 1) was developed by the first author (SC) based on an extensive literature search and on previous research by our research group examining user engagement, acceptability, and usability of eHealth and mHealth interventions [25]. Conceptual frameworks identified from the literature search, such as the conceptual framework of direct and indirect influences on engagement with digital behavior change interventions (DBCIs) by Perski et al [18] and the behavioral intervention technology (BIT) model of Mohr et al [26], guided the construction of the interview guide. The DBCI-related framework is an integrative conceptual framework involving potential direct and indirect influences on engagement and relationships between engagement and intervention effectiveness. The BIT model conceptually defines BITs, from the clinical aim to the technological delivery framework.
After thorough discussion with the last author (DVD), the interview guide was revised and pilot-tested with two older adults. Based on the pilot test, some minor changes were made, such as paraphrasing and simplifying some vocabulary. By doing so, the clarity of the questions was verified and the duration of the interview was estimated. Interviews were audio recorded (mean duration 11.11 minutes, SD 5.94) and transcribed verbatim, producing a document of 122 pages in length, using Calibri font, size 11.
System Usage Data
System usage data of the Activator (ie, the app) were stored on the cloud server of PAL Technologies and used to objectively estimate user engagement; data included (1) the number of days the Activator was worn and (2) the number of times the app was accessed.
Data Analysis
Descriptive statistics were used to describe the baseline characteristics of the participants and to assess the extent of usage (ie, engagement). Qualitative data were thematically analyzed, using the NVivo 12 software package (QSR International), using the six-phase approach by Braun and Clarke [27] to gain insight into participants' subjective experiences with the intervention (ie, engagement) and acceptability and usability of the intervention. More specifically, two researchers outside the project team (Charlotte Meersseman and Siel Mechelinck) read and reread the transcripts multiple times to become familiar with the data (phase 1). They independently coded the data line by line and defined an initial coding scheme using an inductive approach (phase 2). The coding schemes were then discussed with the first author (SC). By doing so, the triangulation technique was applied and the trustworthiness and validity of the findings were promoted. Based on the coding schemes, themes were searched (phase 3), reviewed (phase 4), and defined (phase 5). Subsequently, pen profiles (ie, diagrams of composite key emergent themes, frequency data, and verbatim quotes) were constructed based on the defined themes and results were written up (phase 6). This increasingly utilized technique is considered appropriate for presenting qualitative outcome data in a clear and useful manner [28]. Paired-samples t tests were performed to determine preliminary efficacy of the intervention on older adults' sedentary time. All quantitative analyses were conducted in SPSS Statistics for Windows, version 25.0 (IBM Corp).
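The paired-samples t test used for the pre/post comparison can be reproduced with a short standard-library sketch. The data below are synthetic illustrative values, not study data (the study itself used SPSS):

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for pre/post measurements.

    Returns (t, df); a two-sided p value would then be read off a
    t distribution with df = n - 1 degrees of freedom.
    """
    diffs = [b - a for a, b in zip(pre, post)]  # post minus pre, per participant
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error of the mean difference
    return mean / se, n - 1

# Hypothetical sedentary minutes/day for four participants, pre and post
t, df = paired_t([600, 620, 580, 640], [590, 615, 585, 630])
```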
Participants
A total of 36 older adults expressed interest in participation. Out of these 36 participants, 2 (6%) of them could not be reached to make an appointment for the first home visit and 4 (11%) decided to withdraw from the study after receiving detailed study information. Reasons for withdrawal were health problems (2/36, 6%), lack of time (1/36, 3%), and death of a spouse (1/36, 3%). As such, 30 older adults completed the baseline measurements. Out of these 30 participants, 2 (7%) of them were excluded, as baseline data showed that they did not fulfill the inclusion criteria (ie, they were not able to walk 100 meters without severe difficulties). In addition, 2 (7%) participants dropped out during the intervention period due to health problems (1/30, 3%) and lack of motivation (1/30, 3%). Consequently, posttest data were collected from 26 out of 28 participants (93% retention).
Baseline characteristics of the participants are presented in Table 1. Just over half of the participants (15/28, 54%) were female, the average age was 65.0 years (SD 4.6), and the mean BMI was 25.4 kg/m 2 (SD 3.9). The majority of the participants were highly educated and were married or lived with a partner.
User Engagement
Qualitative data on user engagement were thematically analyzed and are presented in Figure 3. The main themes that emerged were positive and negative feelings about the intervention, preferences for the kind of feedback, and the pattern of use. The participants mainly reported positive feelings, such as being motivated, surprised, and interested. Only a minority (3/28, 11%) indicated that they thought the intervention was not interesting and not helpful. There were mixed opinions on the preferred kind of feedback (ie, tactile vs visual). Some thought the vibrations were more useful, whereas others favored the visual information on the app. System usage data showed that the median number of days the self-monitoring device was worn by the participants was 20 out of 21 days (range 15-21). More than half of the participants (16/28, 57%) reported that they accessed the app on a daily basis. This finding was confirmed by system usage data (see Multimedia Appendix 2), showing that 8 out of 28 participants (29%) consulted the app every day, while 5 participants (18%) consulted the app on at least 80% of the days. Some participants reported that they consulted the visual feedback multiple times a day. Accordingly, system usage data showed that the daily frequency of consulting the app ranged from 0 to 20 times (see Multimedia Appendix 2). Especially in the evening, and after doing physical activities, participants reported that they viewed the visual feedback. They indicated that the main reasons to access the app were out of curiosity, to go through their day, and to see the impact of certain physical activities. Participants also emphasized that they consulted the app more frequently in the beginning of the intervention period, compared to the end of the intervention period.
This finding was also in line with the system usage data, which show that the median frequency of consulting the app declined from 3 or 4 times a day at the beginning of the intervention to 1 or 2 times a day at the end (see Figure 4).
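Summaries such as the median number of wearing days and each participant's median daily app accesses can be derived from raw usage records along these lines. The log format shown is a hypothetical illustration, not the PAL Technologies cloud export:

```python
from statistics import median

def engagement_summary(usage_log):
    """Summarise objective engagement from per-participant usage records.

    `usage_log` maps a participant id to a list of (worn, app_accesses)
    tuples, one per day of the 21-day intervention (assumed format).
    Returns the median number of days worn across participants and a
    per-participant median of daily app accesses on worn days.
    """
    days_worn = [sum(worn for worn, _ in days) for days in usage_log.values()]
    med_access = {pid: median(n for worn, n in days if worn)
                  for pid, days in usage_log.items()}
    return median(days_worn), med_access

# Two hypothetical participants: one missed the last day, one wore it daily
log = {'p1': [(True, 3)] * 20 + [(False, 0)],
       'p2': [(True, 1)] * 21}
days, acc = engagement_summary(log)
```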
Acceptability and Usability
Results of the thematic analysis on acceptability and usability of the intervention are presented in Figure 5. The main themes that were identified were the design and the ease of use, wearing preferences, problems and solutions, the focus, and the perceived relevance. The intervention was considered easy to use, and most participants described the design as clear. The only remark on the design concerned the colors assigned to the behaviors. Participants frequently cited that it would be more logical if sedentary behavior (ie, the behavior that should be limited) were displayed in red and the number of steps (ie, the behavior that should increase) in green. Participants expressed mixed preferences regarding the way to wear the device. Some participants (11/28, 39%) preferred to use the elastic band, whereas others (13/28, 46%) preferred to wear it in their pockets. Frequent problems that older adults, especially women, experienced included small or loose pockets, loss of the device, and the imprint of the elastic band on their clothes after wearing it. Out of the 28 older adults, 5 (18%) indicated that they used a handkerchief to ensure that the device was fixed in their pocket and could not flip over. Despite the fact that the aim of the intervention was to reduce sedentary behavior, only 2 participants (7%) mainly focused on the sedentary behavior information. Although the majority of older adults rated the intervention as highly relevant, some older adults were not convinced about the relevance. The most important reasons for limited perceived relevance were (1) the fact that they do not spend a lot of time sitting, or at least think they are not sitting a lot, and thus have no need to reduce their sedentary time and (2) the fact that they often see no other option but sitting to perform certain tasks.
Preliminary Efficacy
Participants commonly reported that the intervention changed their thinking (ie, they became more aware of their sedentary behavior) but not their actual sedentary behavior. The latter result was supported by quantitative data derived from the activity monitor (see Table 2). Sitting and standing time were very similar at pre- and postmeasurements. There was a small improvement of around 400 steps per day. This improvement was not significant, probably because of the small sample size.
Principal Findings
This study provides novel and in-depth insights into the potential of a self-monitoring-based mHealth intervention in older adults to reduce sedentary behavior. Overall, our results indicated that the intervention was generally well perceived by older adults, but preliminary analyses showed no reduction in sedentary time after the 3-week intervention period.
Previous research has shown that building sustained user engagement over time is challenging in mHealth interventions [29]. Low user engagement results in limited exposure to the intervention and, in turn, small or no intervention effects [30]. Therefore, gaining insight into the user engagement of mHealth interventions is crucial. Both objective usage data and subjective experiences showed that older adults were highly engaged with this study's intervention. The participants generally expressed positive feelings, and the majority consulted the feedback frequently. They all agreed that the intervention made them more aware of their sedentary behavior, but the intervention did not result in a decrease in sedentary time.
The lack of a decrease in sedentary time is not entirely surprising given the following reasons. Firstly, the intervention period of 3 weeks was probably too short to actually change habitual behavior. Changing habits takes a long time [31] and, thus, it is likely that participants still need the cues to interrupt and/or reduce their sedentary behavior after the intervention has ended. Ending the cues might have resulted in relapse into their old and unhealthy habitual sedentary behavior [32]. Secondly, the intervention mainly targeted automatic processes underlying sedentary behavior by bringing the habitual behavior into conscious awareness. However, dual-process theories of motivation posit that both controlled and automatic processes regulate our sedentary behavior [33]. Thus, additional behavior change techniques (eg, goal setting, action planning, and coping planning) should be included in the intervention to affect the controlled processes and to actually achieve behavior change. Given that participants often mentioned that they saw no other options to reduce their sedentary behavior, it might be worth including concrete examples on how to reduce sedentary behavior. Thirdly, physical activity information (ie, number of steps) was also provided in the app, notwithstanding that the only aim of this intervention was to reduce sedentary behavior. The physical activity information could not be removed from the Activator app before the start of the study. 
Existing literature has indicated that participants of interventions targeting both sedentary behavior and physical activity simultaneously are more likely to focus on increasing physical activity due to (1) the clearer guidelines for physical activity (ie, 150 minutes of moderate-to vigorous-intensity physical activity a week) compared to sedentary behavior (ie, sit less), (2) the better-known negative health consequences of too little physical activity compared to too much sedentary behavior, and (3) the fact that physical inactivity is often still considered a synonym for sedentary behavior [34]. The latter was confirmed by the results of the semistructured interviews: participants often mentioned that they wanted to increase the number of steps in order to reduce their sedentary time. Objective physical activity data showed that the average daily number of steps increased by approximately 400, or 10%, over the 3-week intervention period. Although this increase was not significant, this indicates that Activator feedback is more likely to affect the number of steps than the sedentary time. This finding is in line with the results of previous Activator studies [21,35] and suggests that more efforts should be made to clarify the difference between sedentary behavior and physical inactivity and to emphasize the importance of standing and light-intensity physical activity.
Despite the fact that common aging-related barriers (eg, visual impairment, reduced working memory, limited motivation, and reduced mobility) can influence the use of mHealth in older adults [36], general perceptions on the acceptability and usability of this study's intervention were positive. The app was easy to use and the design was clear. This is of great importance, as previous research has indicated that simplicity is one of the key principles for the design of mHealth interventions for older adults [37-39]. Although the Activator could be worn in different ways (ie, in the pants pockets or with an elastic band), the wearing of the device was often mentioned as challenging, especially among women. More research is therefore required to determine the ideal manner of attaching and wearing the Activator, especially when wearing pants without pockets or without deep pockets.
Strengths and Limitations
Strengths of this study include the innovativeness of the research. To our knowledge, this is the first study examining older adults' experiences with an electronic self-monitoring device specifically developed to reduce sedentary behavior. Moreover, by collecting both qualitative and quantitative data, a comprehensive view was obtained on self-monitoring-based mHealth interventions to reduce older adults' sedentary behavior. An important limitation of this study was the sampling method. Participants were not randomly selected and, therefore, selection bias may have occurred. The majority of the participants were highly educated, whereby generalization of the results to lower-educated groups might be limited. Moreover, no control group was included, as the main aim was to gain in-depth knowledge on participants' perceptions with the intervention. Although data saturation was achieved in the qualitative analysis, the small sample size was only sufficient to get a first indication on effect sizes and was not meant to provide sufficient statistical power for the quantitative analysis. Finally, the intervention lasted only 3 weeks and, thus, no conclusions can be drawn on the long-term adherence to the intervention. Based on these limitations, future studies should endeavor to recruit a larger, more generalizable sample and should use a randomized controlled trial design to draw firm conclusions on the effectiveness of a self-monitoring tool to reduce older adults' sedentary behavior. Furthermore, we believe that adding behavior change techniques to the mHealth intervention, ones that can affect the controlled processes underlying sedentary behavior, and extending the intervention duration might be recommended in future studies.
Conclusions
Results of this study suggest that the innovative self-monitoring-based mHealth intervention holds potential for the reduction of sedentary behavior in older adults. The intervention was considered interesting, helpful, and easy to use, and was able to increase awareness among older adults of their sedentary behavior. Despite the positive perceptions, no reductions in objective sedentary time were found in this study's sample. Hence, the intervention was probably of insufficient intensity to reduce the sedentary behaviors of participants. In order to effectively achieve behavior change, a number of modifications to the intervention are suggested, such as the addition of behavior change techniques that target controlled processes underlying sedentary behavior.
Phenomenology of the VVP Green's function within the Resonance Chiral Theory
We analyse the odd-intrinsic-parity effective Lagrangian of QCD valid for processes involving one pseudoscalar with two vector mesons described in terms of antisymmetric tensor fields. Substantial information on the odd-intrinsic-parity couplings is obtained by constructing the vector-vector-pseudoscalar Green's three-point function, at leading order in 1/N_C, and demanding that its short-distance behaviour matches the corresponding OPE result. The QCD constraints thus enforced allow us to predict the decay amplitude ω → πγ, the O(p^6) corrections to π → γγ and the slope parameter in π → γγ*.
Introduction
Effective field theories of QCD have provided efficient ways to explore hadron dynamics in those regimes where we are not able to solve the full theory. In the very low-energy domain, chiral perturbation theory (χPT) [1,2] has achieved a remarkable success in describing the strong interactions among pseudoscalar mesons. Moving up to the 1 GeV region, the effects of vector resonances become dominant and must be accommodated in the theory. Several works [3,4] have provided a sound procedure to include resonance states within the chiral framework, namely the Resonance Chiral Theory (RχT). As the couplings entering the effective Lagrangian are not fixed by the symmetry alone, one should rely on the phenomenology or, alternatively, construct theoretical tools that could provide a meaningful way to compare the results of the effective theory with those of QCD. The pioneering work of Ref. [4] indicated that the analysis of Green's functions and form factors of QCD currents yields valuable information on the resonance sector.
Recently, several authors have pushed forward this direction, either by using a Lagrangian with explicit resonance degrees of freedom or within the framework of the lowest-meson-dominance (LMD) approximation to the large-number-of-colours (N_C) limit of QCD [5-9]. In particular, the authors of Ref. [5] undertook a systematic study of several QCD three-point functions which are free of perturbative contributions from QCD at short distances. Therefore, their OPE expansion should be more reliable when descending to energies close to the resonance region. Under this hypothesis, it was shown [5] that while the ansatz derived from the LMD approach automatically incorporates the right short-distance behaviour of QCD by construction, the same Green's functions as calculated with a resonance Lagrangian, in the vector-field representation, are incompatible with the OPE outcome. Moreover, the authors put forward that these discrepancies cannot be repaired just by introducing local counterterms from the chiral Lagrangian L_χ^(6), as it was done at O(p^4) [4]. This result severely questions the usefulness of the resonance effective theory beyond the initial work of Ref. [4], and deserves further investigation.

* Talk given by P. D. Ruiz-Femenía at the High-Energy Physics International Conference in Quantum Chromodynamics (QCD 03), Montpellier, France, 2-8 July 2003.
With this aim, we have reanalysed the vector-vector-pseudoscalar three-point function, this time with the vector mesons described in terms of antisymmetric tensor fields. This requires the introduction of an odd-intrinsic-parity effective Lagrangian in the formulation of Ref. [3] containing all allowed interactions between two vector objects (currents or resonances) and one pseudoscalar meson. The details of the calculation can be found in Ref. [10].
RχT and the odd-intrinsic-parity sector
The low-energy behaviour of QCD for the light-quark sector is ruled by the spontaneous breaking of chiral symmetry. The corresponding effective realization of QCD describing the interactions among the Goldstone fields is χPT, given at O(p^2) by the lowest-order chiral Lagrangian. The inclusion of resonances as explicit degrees of freedom in the chiral framework was carried out in Ref. [3] for the even-intrinsic-parity sector (L_V). For the odd-intrinsic-parity sector, different sources might contribute to the VVP Green's function: (i) the Wess-Zumino-Witten functional, which is of O(p^4) and fulfils the chiral anomaly, and (ii) chiral-invariant ε_μνρσ terms involving vector mesons. Within the antisymmetric formalism, a basis of odd-intrinsic-parity operators can be written that comprises all possible vertices involving two vector resonances and one pseudoscalar (VVP), and vertices with one vector resonance and one external vector source plus one pseudoscalar (VJP). The resonance Lagrangian built from these operators is denoted L_V^odd in our evaluation.
In summary, we will proceed by considering the relevant effective resonance theory (ERT), whose generating functional Z_ERT contains a piece Z_Vχ^odd generated by L_χ, L_V and L_V^odd.
Short-distance information on the oddintrinsic-parity couplings
The vector-vector-pseudoscalar QCD three-point function VVP is built from the octet vector current and the octet pseudoscalar density, with the four-vector r = -(p + q). When both momenta p, q in Π_VVP become simultaneously large, the QCD calculation within the OPE framework gives, in the chiral limit and up to corrections of O(α_s), the result of Eq. (7) [7], where ⟨ψ̄ψ⟩_0 is the single-flavour bilinear quark condensate. At leading order in the 1/N_C expansion of QCD, the three-point correlator in the effective resonance theory given by Z_ERT is evaluated from the tree-level diagrams shown in Fig. 1. The LMD approximation, which assumes that a single resonance in each channel saturates the requirements of QCD, is sufficient to satisfy the short-distance constraint (7) up to order 1/λ^4, provided the conditions of Eq. (8) among the L_V^odd couplings hold. As the couplings of the effective Lagrangian do not depend on the masses of the Goldstone fields, the constraints apply for non-zero pseudoscalar masses too. Actually, our VVP three-point function fully reproduces the LMD ansatz suggested in Ref. [7], Eq. (9), which has been successfully tested in previous works [7,9]. The authors of Ref. [5] found that the same agreement with the short-distance QCD behaviour could not be reached working with the resonance Lagrangian in the vector representation, not even at the expense of introducing local contributions from the O(p^6) chiral Lagrangian. They then suggested that the problem may be inherent to the effective Lagrangian approach and unlikely to be fixed just by using other representations for the resonance fields; our result, derived in the antisymmetric tensor-field formulation with an odd-intrinsic-parity sector, contradicts this assertion, at least in what concerns the VVP Green's function.
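For reference, a sketch of the standard momentum-space definition of this correlator follows; conventions and normalisations vary between references, and the form below is the commonly used one rather than a quotation from this text:

```latex
\left(\Pi_{VVP}\right)^{abc}_{\mu\nu}(p,q)
  \,=\, \int d^4x \, d^4y \; e^{\,i (p\cdot x + q\cdot y)}\,
  \langle\, 0 \,|\, T\!\left\{ V^a_\mu(x)\, V^b_\nu(y)\, P^c(0) \right\} \,|\, 0 \,\rangle\,,
\qquad
V^a_\mu = \bar{q}\,\gamma_\mu \tfrac{\lambda^a}{2}\, q\,,
\quad
P^a = \bar{q}\, i\gamma_5 \tfrac{\lambda^a}{2}\, q\,,
```

with r = -(p + q) the momentum flowing through the pseudoscalar leg, as stated above.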
ω → πγ
At tree level, the intrinsic-parity-violating transition ω → πγ receives contributions from both the VJP and VVP terms of L^odd_V (direct and ρ-mediated diagrams, respectively). Plugging in the QCD constraints, Eq. (8), we obtain a full prediction for this process. The direct and the ρ-exchange diagrams contribute to a similar extent, which means that, contrary to what we would expect from VMD, the ωρπ coupling does not saturate the decay ω → πγ. This has immediate consequences for other channels where VMD alone was thought to be the relevant decay mechanism, such as ω → π⁺π⁻π⁰, where the direct amplitude competes in size with the intermediate-meson-exchange term [10].
π → γγ
In the chiral limit, the amplitude for the π → γγ process is non-vanishing and exactly predicted by the ABJ anomaly. The odd-intrinsic-parity interactions among vector resonances introduced in Section 2 generate O(p⁶) chiral corrections to this process. Only the two-resonance-driven diagram survives after the short-distance conditions are applied. The correction ∆ induced in the π → γγ width (which carries an overall factor of 4π²/3) is a tiny ∼1% effect, perfectly compatible with the experimental uncertainty, Γ(π → γγ)|_exp = (7.7 ± 0.6) eV.
π → γγ *
The π → γγ* amplitude is usually written in terms of a slope parameter α that modifies the on-shell behaviour, F_πγγ*(k*²) ≃ F_πγγ(0) (1 + α k*²), where k* is the off-shell photon momentum. The interactions contained in L^odd_V yield a contribution α_odd to the slope that is smaller than the VMD estimate, α_VMD = 1/M_V² ≃ 1.68 GeV⁻². The chiral-loop contribution to this slope, α_χ ≃ 0.26 GeV⁻², was calculated in Ref. [14]. Adding both contributions gives m²_π α ≃ 0.029, to be compared with the averaged value m²_π α|_exp = 0.032 ± 0.004 in the PDG [13]. α_odd has been extended beyond the LMD approximation through the inclusion of a second vector resonance in the VVP ansatz, Eq. (9), in Ref. [5]. The latter is in fact needed to obtain the correct 1/k*² behaviour at large k* [15,16] in the form factor F_πγγ*(k*²).
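As a quick numerical cross-check of the slope values quoted above (a sketch only; the ρ(770) mass used for M_V and the neutral-pion mass are assumed inputs, not fixed in the text):

```python
# Cross-check of the VMD slope estimate alpha_VMD = 1/M_V^2 and the
# implied total slope from m_pi^2 * alpha ~ 0.029 quoted in the text.
M_V = 0.7755   # GeV, rho(770) mass (assumption)
m_pi = 0.1350  # GeV, neutral pion mass (assumption)

alpha_vmd = 1.0 / M_V**2          # GeV^-2; cf. ~1.68 GeV^-2 quoted
alpha_total = 0.029 / m_pi**2     # GeV^-2; total slope implied by the text
alpha_chi = 0.26                  # GeV^-2, chiral-loop piece from Ref. [14]

print(f"alpha_VMD   = {alpha_vmd:.2f} GeV^-2")
print(f"alpha_total = {alpha_total:.2f} GeV^-2")
print(f"alpha_odd implied = {alpha_total - alpha_chi:.2f} GeV^-2 (< alpha_VMD)")
```

With these inputs α_VMD ≈ 1.66 GeV⁻², close to the quoted 1.68 GeV⁻², and the implied resonance contribution indeed comes out below the VMD estimate.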
Stomatal Function Requires Pectin De-methyl-esterification of the Guard Cell Wall
Summary Stomatal opening and closure depends on changes in turgor pressure acting within guard cells to alter cell shape [1]. The extent of these shape changes is limited by the mechanical properties of the cells, which will be largely dependent on the structure of the cell walls. Although it has long been observed that guard cells are anisotropic due to differential thickening and the orientation of cellulose microfibrils [2], our understanding of the composition of the cell wall that allows them to undergo repeated swelling and deflation remains surprisingly poor. Here, we show that the walls of guard cells are rich in un-esterified pectins. We identify a pectin methylesterase gene, PME6, which is highly expressed in guard cells and required for stomatal function. pme6-1 mutant guard cells have walls enriched in methyl-esterified pectin and show a decreased dynamic range in response to triggers of stomatal opening/closure, including elevated osmoticum, suggesting that abrogation of stomatal function reflects a mechanical change in the guard cell wall. Altered stomatal function leads to increased conductance and evaporative cooling, as well as decreased plant growth. The growth defect of the pme6-1 mutant is rescued by maintaining the plants in elevated CO2, substantiating gas exchange analyses, indicating that the mutant stomata can bestow an improved assimilation rate. Restoration of PME6 rescues guard cell wall pectin methyl-esterification status, stomatal function, and plant growth. Our results establish a link between gene expression in guard cells and their cell wall properties, with a corresponding effect on stomatal function and plant physiology.
(Supplemental figure legend, panel E) Epidermal peels were taken from WT or pme6-1 leaves and incubated for 2 hours in the light (300 µmol m⁻² s⁻¹) in opening buffer supplied with CO2-free air before addition of ABA to 1 or 10 µM, as indicated. Apertures were measured after incubation for a further 2 hours. Each column indicates the mean stomatal aperture, with error bars indicating s.e.m. A t-test was performed on each pair of measurements (WT vs pme6-1), with a single asterisk (*) indicating a significant difference at p = 0.05 and a double asterisk (**) indicating a significant difference at p = 0.01.
Plant material
Seeds were surface-sterilised in bleach diluted in water (1:5 v/v) containing 0.05% (v/v) Tween 20 and stratified at 4 °C for 7 days. Seeds were then transferred to square pots (6 cm wide, 8 cm deep) containing a 1:3 mix of perlite:soil and moved to a controlled-environment chamber, where plants were grown under 12 h light (200 µmol m⁻² s⁻¹) at 22 °C day temperature, 16 °C night temperature and 60% humidity. Plants used for immunolabelling were taken at 21 days, and plants used for aperture analysis and gas exchange at 28 days after transfer to the growth chamber.
For creation of the PME6 promoter GUS reporter line, a region approximately 1200 bp upstream of the ATG translational start codon of PME6 was amplified from genomic DNA with primers 5′-CACCTGGGATCCAAAATGATTG-3′ and 5′-TGTGGGATATTGTTTTCTTAGGG-3′ and KOD DNA polymerase, inserted into the pENTR-D-TOPO entry vector (Invitrogen), and recombined with the pKGWFS7 destination vector [S1] before transfer into Agrobacterium tumefaciens C58 cells and transformation into Col-0 Arabidopsis by floral dip [S2]. Seeds were selected on 50 µg ml⁻¹ kanamycin and insertion was confirmed by PCR using a forward ProPME6-specific primer and a GUS gene reverse primer (5′-TGCTCAGGTAGTGGTTGTCG-3′).
The pme6 T-DNA insertion line (SGT6342) was obtained from NASC (Nottingham, UK) and confirmed as homozygous for the insertion by PCR using primers 5′-TCTGAGTCGTGTAAACGAGCC-3′ and 5′-CCTCTTCGTATTCAAAGTATTTCCC-3′. To create the pme6-1 line complemented with PME6, the coding region of PME6 was amplified from the vector pUNI-At1g23200 (U18916; ABRC) with primers 5′-CACCAACCTAAACAAAAAAACC-3′ and 5′-GATGACAACCGATTAAATTAATAAC-3′ and recombined into the pENTR-D-TOPO vector. This was then recombined by LR reaction into pMDC32 [S3] and excised with AscI. The pENTR-D-TOPO vector containing the PME6 promoter (described above) was cut with AscI and the PME6 coding region was ligated 3′ of the promoter. The plasmid was recombined by an LR reaction with pHGW [S1] to create the proPME6::PME6 construct before transfer into Agrobacterium tumefaciens C58 cells and transformation into the pme6-1 background by floral dip [S2]. The complemented pme6-1 line is referred to as proPME6::PME6. Transformants were selected on 0.5× MS (Murashige and Skoog) medium with 1.5% (w/v) sucrose containing 15 mg L⁻¹ hygromycin, and plants from the T3 generation were analysed.
Gene expression analysis and immunolabelling
For analysis of PME6 expression, RNA was extracted from seedlings using a Qiagen RNeasy kit and reverse-transcribed into cDNA using an oligo(dT) primer and SuperScript II (Invitrogen). PCR was carried out on cDNA to determine whether any transcript was detectable. Primers 5′-GGAAGATTCCAAAACTACGGC-3′ and 5′-GCCGTCCTAAATAAGTTTCCG-3′ were used to detect PME6 transcript; RUB1 (AT4G36800) primers were used as a positive control (5′-GCGAACTTCGTCTTCACAA-3′ and 5′-GGAAAAAGGTCTGACCGACA-3′). Histochemical staining for GUS activity was carried out on leaves of T2 seedlings in 50 mM potassium phosphate, 1 mM potassium ferrocyanide, 1 mM potassium ferricyanide, 0.2% (v/v) Triton X-100, 2 mM 5-bromo-4-chloro-3-indolyl-β-D-glucuronic acid and 10 mM EDTA at 37 °C after vacuum infiltration. Leaves were decolorized overnight with 70% (v/v) ethanol and washed in 10% glycerol. Images were captured with an Olympus BX51 microscope connected to a DP70 digital camera. Expression patterns shown were typical of several independently transformed lines.
Sections were incubated with 3% (w/v) milk protein (Marvel, Premier Beverages, UK) in phosphate-buffered saline solution (PBS, pH 7.2) (hereafter PBS/MP). Sections were then incubated with a ten-fold dilution of primary monoclonal antibody in PBS/MP for 1 h at room temperature. Samples were washed 3 times with PBS and secondary antibody was added (100-fold dilution in PBS/MP) for 1 h. Samples were kept in the dark from this step onwards. For the JIM- and LM-series antibodies, anti-rat IgG (whole molecule) coupled to fluorescein isothiocyanate (FITC) was used; for the 2F4 antibody, anti-mouse IgG (whole molecule) coupled to FITC was used. Samples were counterstained with 0.25% (w/v) Calcofluor White solution diluted ten-fold in PBS for 5 min before mounting on slides with Citifluor AF1 anti-fade solution (Agar Scientific, UK). Samples were visualised on an Olympus BX51 microscope fitted with epifluorescence optics, and images were captured using a DP51 camera. FITC was visualised using a filter set with a 460-490 nm excitation filter, a 510-550 nm emission filter and a 505 nm dichroic mirror. Calcofluor White was visualised using a 400-410 nm excitation filter, a 455 nm emission filter and a 455 nm dichroic mirror.
The reproducibility of antibody labelling patterns was assessed by a scoring technique. Fifty stomata were assessed and the pattern of immunolabelling was classed in terms of its prevalence in the guard cells. Guard cells fully labelled with antibody, as typified by JIM7 labelling, were classed as "Fully" labelled; guard cells with some signal that was not distributed throughout the whole cell were classed as "Partial"; and stomata with no labelling inside the guard cell but with signal at the junctions between guard cells and their neighbouring cells were classed as "Junctions only". No stomata analysed fell outside these three categories.
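The scoring procedure above amounts to a simple tally over the three classes; a minimal sketch, where the category names follow the text but the example counts are hypothetical:

```python
from collections import Counter

# Labelling classes as defined in the scoring scheme above.
CATEGORIES = ("Fully", "Partial", "Junctions only")

def score_labelling(observations):
    """Tally per-stoma labelling classes and return the percentage of
    stomata in each category."""
    counts = Counter(observations)
    unknown = set(counts) - set(CATEGORIES)
    if unknown:
        raise ValueError(f"Unexpected classes: {unknown}")
    n = len(observations)
    return {cat: 100.0 * counts[cat] / n for cat in CATEGORIES}

# Hypothetical score sheet for 50 stomata:
obs = ["Fully"] * 35 + ["Partial"] * 10 + ["Junctions only"] * 5
print(score_labelling(obs))
# {'Fully': 70.0, 'Partial': 20.0, 'Junctions only': 10.0}
```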
Electron Microscopy
For cryo-scanning electron microscopy (cryo-SEM), leaves were carefully removed with forceps, placed flat on a brass stub, stuck down with a cryo-glue preparation consisting of a 3:1 mixture of Tissue-Tec (Scigen Scientific, USA) and Aquadag colloidal graphite (Agar Scientific, Stansted, UK), and then plunge-frozen in liquid nitrogen under vacuum. For cryo-fracture preparation, leaves were placed vertically in recessed stubs held by the cryo-glue preparation. Frozen samples were then transferred under vacuum to the preparation chamber of a PT3010T cryo-apparatus (Quorum Technologies, Lewes, UK) and maintained at −145 °C. Surface ice was removed using a sublimation protocol of −90 °C for 3 min. For cryo-fracture, no sublimation was carried out; instead, a level semi-rotary cryo-knife was used to randomly fracture the leaf. All samples were sputter-coated with platinum to a measured thickness of 5 nm. Samples were then transferred, kept cold under vacuum, into the chamber of a Zeiss EVO HD15 SEM fitted with a cryo-stage. Images were taken using a gun voltage of 6 kV, a probe current of 460 pA, an SE detector and a working distance of 5-6 mm. For transmission electron microscopy, leaves were dissected into 3% (w/v) glutaraldehyde (Sigma-Aldrich) in 0.1 M phosphate buffer. Further fixation and processing were as described previously [S4].

Stomatal aperture measurements

Abaxial epidermal peels of mature leaves were removed at least 2 hours into the photoperiod and floated onto opening buffer (10 mM KCl, 10 mM MES, pH 6.2). Samples were maintained at 22 °C with 200 µmol m⁻² s⁻¹ of light. For CO2 responses, air containing either 0 ppm CO2 (CO2-free treatment), ambient CO2, or 1000 ppm CO2 was bubbled into the opening buffer. For mannitol response samples, 0.5 M mannitol was added to the opening buffer.
For ABA responses, epidermal peels were incubated in opening buffer supplied with CO2-free air for two hours before ABA was added to the buffer. For mannitol responses, peels were incubated in 0.5 M mannitol in buffer (10 mM MES, pH 6.2). Epidermal peels were imaged after 2 hours using an Olympus BX51 microscope and DP70 digital camera, and stomatal apertures were measured. Forty stomatal apertures were measured for each treatment in each of three independent experiments. For each experiment, epidermal peels were taken from at least 3 plants of each genotype.
Thermal imaging
Infrared images were taken using a FLIR SC660 camera (FLIR Systems) positioned 1 m above the leaf rosette. Plants were imaged at 24 days old under well-watered conditions, at which point water was withheld. Plants were then imaged again at 29 days, under strong drought conditions. Six plants of each genotype were imaged, and subsequent analysis was conducted using ThermaCAM Researcher v2.10 Professional (FLIR Systems).
Gas exchange analysis
CO2 shifts were conducted on 28-day-old plants using mature non-senescent leaves. Analysis was started 2 hours into the photoperiod of the growth chamber and did not continue into the last 3 hours of the photoperiod. Measurements were taken using a LI-6400 infrared gas exchange analyser with a leaf fluorometer chamber (LI-COR Inc.) with a 2 cm² circular measurement area. Temperature was held at 21 °C and humidity was kept between 58% and 65%. Photon flux density was held at 300 µmol m⁻² s⁻¹ with 10% blue light. In cases where the leaf did not fill the chamber, leaf area was measured and a correction made in subsequent analysis. To assess the stomatal response to CO2, conductance was stabilised at 500 ppm for 40 minutes; CO2 was then shifted to 1000 ppm for 50 minutes to stimulate stomatal closure, and then to 100 ppm for 50 minutes to stimulate stomatal opening. A/Ci response curves were measured on young fully expanded leaves at 21 °C leaf temperature, 1200 µmol m⁻² s⁻¹ PPFD and approximately 60% relative humidity. Once leaves were acclimated to chamber conditions, measurements were taken at 400, 250, 150, 100, 80, 60 and 40 ppm CO2 every 2-3 minutes at a 200 µmol s⁻¹ flow rate, then at 400, 500, 600, 800, 900, 1000, 1200, 1400 and 1600 ppm CO2 every 3-5 minutes at a 300 µmol s⁻¹ flow rate.
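The leaf-area correction mentioned above (for leaves that do not fill the 2 cm² chamber) amounts to rescaling the per-area flux by the ratio of chamber area to actual leaf area; a minimal sketch with hypothetical numbers:

```python
def correct_flux(flux_chamber_basis, chamber_area_cm2, leaf_area_cm2):
    """Rescale an area-based gas-exchange flux when the leaf does not
    fill the measurement chamber.

    The instrument reports fluxes assuming the full chamber area
    (2 cm^2 here); the true per-area flux is recovered by multiplying
    by chamber_area / leaf_area.
    """
    if leaf_area_cm2 <= 0:
        raise ValueError("leaf area must be positive")
    return flux_chamber_basis * chamber_area_cm2 / leaf_area_cm2

# Hypothetical example: assimilation of 6.0 umol m-2 s-1 reported on a
# 2 cm^2 chamber basis, with only 1.5 cm^2 of leaf inside:
print(correct_flux(6.0, 2.0, 1.5))  # 8.0
```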
Analysis of stomatal size and density
For stomatal density analysis, fully expanded non-senescent leaves were harvested from 35-day-old seedlings. Leaves were fixed in 4% (v/v) formaldehyde in PEM buffer (0.1 M PIPES, 2 mM EGTA, 1 mM MgSO4, adjusted to pH 7) for 8 hours. Leaves were then washed twice in 70% (v/v) ethanol for 30 minutes per wash. Tissue was cleared by incubating twice in chloral hydrate (2.5 g mL⁻¹) in 30% (v/v) glycerol for 8 h each. Samples were then mounted in 30% (v/v) glycerol solution and imaged on an Olympus BX51 microscope under the 40× objective using Nomarski illumination; images were captured with an Olympus DP70 camera and the number of stomata was counted. Four fields of view per leaf and 3 leaves per plant were analysed.
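Stomatal density from such counts is simply stomata per field divided by the field area, averaged over fields; a minimal sketch (the 40× field area and the counts are assumed values, not given in the text):

```python
def stomatal_density(counts, field_area_mm2):
    """Mean stomatal density (stomata mm^-2) across several fields of
    view. `field_area_mm2` depends on the microscope and objective and
    is an assumed input here."""
    if not counts:
        raise ValueError("no fields counted")
    per_field = [c / field_area_mm2 for c in counts]
    return sum(per_field) / len(per_field)

# Hypothetical counts for 4 fields x 3 leaves, with an assumed
# 0.09 mm^2 field at 40x magnification:
counts = [18, 21, 20, 19, 22, 18, 20, 21, 19, 20, 18, 22]
print(round(stomatal_density(counts, 0.09), 1))
```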
For stomatal size analysis, abaxial epidermal peels were taken and floated onto opening buffer (10 mM KCl, 10 mM MES, pH 6.2). Samples were maintained at 22 °C with 200 µmol m⁻² s⁻¹ of light. CO2-free air was bubbled through the buffer to promote stomatal opening. Epidermal peels were imaged after 2 hours using an Olympus BX51 microscope with a DP70 digital camera, and stomatal complex length was measured.
Analysis of rosette area
Mature Arabidopsis plants were photographed at 30 days old from a height of 30 cm using an Olympus E-PL1 digital camera. Rosette area was measured in ImageJ using the colour-threshold tool to isolate the rosette in the image.
Genetically engineered FGF1-sericin hydrogel material treats intrauterine adhesion and restores fertility in rat
Abstract Endometrial injury can cause intrauterine adhesions (IUA) and induce endometrial fibrosis, leading to infertility and miscarriage. At present, there is no effective treatment for severe IUA with uterine basal-layer injury and an adhesion area larger than one-third of the uterus. In this study, we prepared an FGF1 silk sericin hydrogel material (FGF1-SS hydrogel) to treat endometrial injury and prevent endometrial fibrosis. Compared with the plain silk sericin hydrogel material (WT-SS hydrogel), the FGF1-SS hydrogel significantly promoted the migration and infiltration of endometrial stromal cells (ESCs). More importantly, the FGF1-SS hydrogel released FGF1 stably over a long period and inhibited fibrosis in an ESC injury model through the TGF-β/Smad pathway. In an IUA rat model, FGF1-SS hydrogel treatment effectively restored the number of uterine glands and the uterine wall thickness, with a fertility rate of 65.1% ± 6.4%. These results suggest that the FGF1-SS hydrogel is a promising candidate for preventing IUA.
Introduction
The endometrium is a highly active tissue, divided by function into an outer functional layer and an inner basal layer [1]. The functional layer is shed every menstrual cycle, and basal stromal cells differentiate and regenerate to form the epithelium on the surface of the endometrium [2]. Intrauterine adhesions arise when uterine surgery damages the basal layer of the endometrium; during repair of the basal layer, endometrial fibrosis occurs and the functional and basal layers can no longer be distinguished [3]. As a result, endometrial receptivity is decreased, and the uterine cavity and/or cervical canal become partially or completely occluded [4], leading to amenorrhea, secondary infertility and other conditions [5][6][7]. At present, the most effective treatment for intrauterine adhesions (IUA) is transcervical resection of adhesions, but the rate of postoperative re-adhesion is high [8]. Yu et al. reported that in severe IUA the postoperative re-adhesion rate was as high as 65% [9]. A systematic review by Hooker et al. [10] showed that the incidence of moderate and severe IUA was 48% when pregnancy was terminated in the first trimester. Therefore, effectively preventing postoperative re-adhesion of the uterine cavity remains a difficult point in IUA treatment. Although current treatments exist, such as estrogen therapy [11], intrauterine devices [12] and sodium hyaluronate [13], their therapeutic effect is not obvious in moderate to severe IUA, and they do not significantly increase the pregnancy rate.
The main pathological feature of IUA is endometrial fibrosis. Therefore, the main strategy for treating IUA is to promote endometrial regeneration and inhibit endometrial fibrosis. The physical barrier effect of biological scaffold materials shows good prospects in the treatment of endometrial injury [14]. Exploring treatment options and drug-delivery strategies for preventing IUA, and finding safer and more effective therapies, have therefore attracted attention [15]. Sericin is a polymeric biomaterial with excellent biocompatibility that is mainly used as a wound dressing [16][17][18]. Zhang et al. [19] found that a sericin hydrogel combined with stem cell factor can effectively generate blood vessels and promote collagen deposition. The crosslinked porous network of sericin materials can encapsulate growth factors for drug delivery [20][21][22], yielding a functionalized hydrogel with a slow-release effect [23][24][25]. FGF1 plays a role in angiogenesis and wound healing, but its short half-life and low concentration at the injured site limit its use [26]. In this study, we prepared an FGF1 silk sericin hydrogel material (FGF1-SS hydrogel) [27] that can be implanted into the injured site for the treatment of IUA. The material provides both physical barrier support and stable, sustained release of FGF1, prolonging the half-life and raising the local concentration of FGF1. We show that the FGF1-SS hydrogel promotes the proliferation of endometrial stromal cells (ESCs) and inhibits endometrial fibrosis over a long period, and we evaluate its effect on the restoration of uterine structure and fertility.
Materials and methods
Fabrication of the FGF1-SS hydrogel materials using the genetically engineered silk fibers

In this study, the FGF1 transgenic silkworm B10 line [28], constructed using the piggyBac-based transgenic vector phShFGF1Sv40, was kindly provided by the Biological Science Research Center, Chongqing Key Laboratory of Sericultural Science, Chongqing Engineering and Technology Research Center for Novel Silk Materials, Southwest University. Cocoons of the genetically engineered FGF1 transgenic silkworm were ground, and the cocoon powder was dissolved at 0.5% (w/v) in 8 M urea solution to extract sericin. After extraction, the FGF1 content in the sericin aqueous solution was quantified by optical density analysis of FGF1 on a Western blot using FGF1 standards. The sericin and FGF1 were then dialyzed against deionized water at 4 °C, with the water replaced at least six times within 3 days to completely remove urea and salt. FGF1-SS hydrogel formed within 7 days of continuous dialysis. After the same processing, wild-type silkworm cocoons were used to make WT-SS hydrogel (WT). The total protein content of both the WT-SS hydrogel and the FGF1-SS hydrogel (WT+FGF1) was 1.2 mg/ml, and the FGF1 content of the FGF1-SS hydrogel was 1.1 µg/ml.
Scanning electron microscopy
Sericin hydrogel was freeze-dried using a freeze dryer (Alpha 1-2, Martin Christ, Germany), vacuum-coated with a platinum layer (NeoCoater MP-19020NCTR) and placed under a scanning electron microscope (SEM, JSM-5610LV, Japan). The substructure morphology of the sericin hydrogel was observed at an accelerating voltage of 10 kV at room temperature. Image Pro Plus (version 6.0.0.260) was used to calculate the average pore size of the sericin hydrogel from 50 random pores.
Fourier transform infrared spectroscopy analysis
The sericin hydrogel was frozen in liquid nitrogen for 5 min, and the water in the hydrogel was removed in a lyophilizer for Fourier transform infrared spectroscopy (FTIR) testing. A Fourier transform infrared spectrometer was used to evaluate the secondary structure of the dried sericin hydrogel samples, with a ZnSe ATR cell in the spectral range 4000-650 cm⁻¹. Omnic, PeakFit v4.12 and OriginPro 8 software were used to analyze the data; for each sample, the average of independent deconvolutions from at least 30 independent tests was taken.
Release of FGF1 from silk fibers and sericin hydrogels
500 µl of FGF1-sericin hydrogel was added to the wells of a 24-well plate together with 500 µl of phosphate-buffered saline (PBS, pH 7.4), and the plate was kept at 37 °C. At each time point, the supernatant of the well was transferred to a test tube and the same volume (500 µl) of fresh PBS (pH 7.4) was added back to the same well. The content of FGF1 in the supernatant was determined by ELISA.
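Because the full 500 µl supernatant is exchanged at every sampling point, the cumulative release curve is just the running sum of the mass removed at each time point; a minimal sketch with hypothetical ELISA readings:

```python
def cumulative_release_ng(concentrations_ng_per_ml, sample_volume_ml=0.5):
    """Cumulative FGF1 release when the entire supernatant is replaced
    with fresh PBS at every sampling point.

    With full-volume exchange, the cumulative mass released is the
    running sum of (ELISA concentration x sampled volume).
    """
    total = 0.0
    curve = []
    for c in concentrations_ng_per_ml:
        total += c * sample_volume_ml
        curve.append(total)
    return curve

# Hypothetical ELISA readings (ng/ml) at successive time points:
print(cumulative_release_ng([20.0, 15.0, 10.0, 8.0]))
# [10.0, 17.5, 22.5, 26.5]
```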
Viscosity measurement of sericin hydrogel
Viscosity measurements of the FGF1-sericin hydrogel were performed on a rheometer with parallel plates (60 mm) in continuous-flow mode. The viscosity of the FGF1-sericin hydrogel at 25 °C was recorded over shear rates of 1-100 s⁻¹.
In vitro degradation of FGF1-SS hydrogels

An average of 100 mg of dried sericin hydrogel was incubated in 1 ml PBS (pH 7.4) at 37 °C with or without 10 U/ml lysozyme. The PBS was changed daily, and samples were removed at fixed times to be dried and weighed. Samples for each time point were prepared in triplicate.
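Percent mass loss at each time point follows directly from the dry weights; a minimal sketch (the 52 mg remaining weight is hypothetical, chosen to mirror the ~48% loss reported later in the Results):

```python
def percent_mass_loss(initial_mg, remaining_mg):
    """Percent mass lost relative to the initial dry weight."""
    if initial_mg <= 0:
        raise ValueError("initial mass must be positive")
    return 100.0 * (initial_mg - remaining_mg) / initial_mg

# Hypothetical: 100 mg dried hydrogel, 52 mg remaining after 10 days.
print(percent_mass_loss(100.0, 52.0))  # 48.0
```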
In vivo experiments on FGF1-SS hydrogels
To determine the retention time of the FGF1-SS hydrogel in rats, we injected 50 µl of FGF1-SS hydrogel in situ into IUA rats and sacrificed the rats at 1, 6, 12, 24 and 48 h and at 7 and 14 days after injection. Immunohistochemistry with an FGF1 antibody was performed on the rat uterus to assess the residence time of the FGF1-SS hydrogel.
Isolation of rat ESC
The uteri of rats at day 9 of gestation were removed in a sterile environment and washed three times with Hank's Balanced Salt Solution. The uterus was cut into 1 mm² pieces, 3 ml of 1 mg/ml type I collagenase (17018029, Gibco, USA) was added, and the tissue was digested at 37 °C for 60 min with stirring every 15 min. The mixed cell suspension was filtered through a 40 µm cell strainer, collected and centrifuged at 800 rpm for 3 min, resuspended in DMEM/F-12 medium with 10% fetal bovine serum (01-172-1ACS, BI, Israel), seeded into a 25 cm² culture flask and placed in a 37 °C, 5% CO2 incubator. After 3 days, cell growth was observed. When the ESCs had overgrown the culture flask, the cells were digested with 0.25% trypsin, seeded onto glass slides at 1 × 10⁵ cells, fixed with 4% paraformaldehyde solution for 30 min, and the expression of vimentin was detected by immunofluorescence.
Hydrogen peroxide (20 µM H2O2, Sinopharm, China) was used to damage ESCs for 24 h to establish an endometrial injury cell model. The WT+FGF1 hydrogel was co-cultured with H2O2-injured ESCs for 24 h to test the therapeutic effect of the material on the cell injury model.
IUA rat model
Animal experiments were approved by the Animal Ethics Committee of the National Research Institute for Family Planning. Female Sprague-Dawley rats aged 7-8 weeks were purchased from Huayikang Biotechnology Co., Ltd to establish an endometrial injury model [29]. The rats were raised in an SPF environment with free access to water and food. A total of 100 rats were randomly divided into five groups of 20: a normal group (control), a sham group (sham), a model group (model), a WT treatment group (WT) and a WT+FGF1 treatment group (WT+FGF1). Rats in the control group were fed normally; rats in the sham group underwent surgery and were injected with physiological saline; rats in the model group were anesthetized by intraperitoneal injection of 3% sodium pentobarbital (0.3 ml/100 g), and the lower abdomen was opened to expose the uterine horns. After 0.3 ml of 95% ethanol was injected to injure the uterus for 3 min, the uterus was washed twice with PBS to establish the injury model. In the WT and WT+FGF1 groups, 50 µl of WT or WT+FGF1 hydrogel, respectively, was injected in situ into the unilateral uterus after uterine injury. The day of modeling was recorded as Day 0. At 30 and 60 days after modeling, five rats from each group were randomly selected to be sacrificed and five rats were mated.
Determination of endometrial thickness
Endometrial thickness (from the basement membrane to the apical surface of the epithelium) was determined by measuring the distance along transverse sections of the proximal and distal ends of the endometrium, at what was visually judged to be the thickest single site [30]. All samples were measured; five fields of view were taken for each section, and the average value was recorded.
Fertility test
On the 30th and 60th day after modeling, five female rats from each group were randomly selected to mate with male rats of proven fertility at a 1:1 ratio. Uterine function was evaluated by assessing the pregnancy capacity of the uterus. The morning on which a vaginal plug was present was considered day 0.5 of pregnancy. On the ninth day of pregnancy, female rats were sacrificed for uterine examination and the number of fetuses in each uterus was counted.
Histological and immunohistochemistry analysis
Uterine tissue was fixed overnight with 4% paraformaldehyde, dehydrated through an ethanol gradient and embedded in paraffin. Uterine tissue sections (4 µm) were stained with hematoxylin and eosin (H&E), Masson stain and Sirius red stain to observe the structure of the uterus. A TE2000-U inverted microscope (Nikon, Tokyo, Japan) was used to observe uterine morphology and endometrial thickness. Three high-power fields randomly selected per image were averaged, and the number of uterine glands was counted. ImageJ software was used to analyze the degree of endometrial fibrosis [31]. The fibrotic area was determined using the following formula, using at least five rats per group:

Fibrotic area (%) = (total area of endometrial fibrosis per field / total area of endometrial stroma and glands) × 100.

The type I collagen area was determined using the following formula, using at least five rats per group:

Type I collagen area (%) = (total area of endometrial type I collagen per region / total area of endometrial stroma and glands) × 100.

For immunohistochemistry, paraffin tissue sections were subjected to antigen retrieval under high pressure in 10 mM citrate buffer, and primary antibody was added at 4 °C overnight. The primary antibodies used were anti-vimentin (V6630, Sigma-Aldrich, 1:200), anti-α-SMA (ab124964, Abcam, 1:200) and anti-FGF1 (SAB1405808, Sigma, 1:200). After washing three times with PBS, horseradish peroxidase-conjugated anti-rabbit IgG (ZB-2306, ZSGB-BIO, 1:1000) was added and incubated at room temperature for 1 h. DAB was used to develop color for 3 min, and sections were counterstained with hematoxylin, dehydrated and mounted. ImageJ software was used to quantify the brown-stained area [31].
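Both area formulas above share the same form (feature area over the combined stroma-plus-gland area); a minimal sketch of the calculation on hypothetical ImageJ area measurements:

```python
def area_percent(feature_area, stroma_area, gland_area):
    """Percent of the endometrial area occupied by a stained feature
    (fibrosis or type I collagen), per the formulas above: the feature
    area divided by the summed stroma and gland area, times 100."""
    denom = stroma_area + gland_area
    if denom <= 0:
        raise ValueError("reference area must be positive")
    return 100.0 * feature_area / denom

# Hypothetical ImageJ measurements (pixel^2) for one field:
print(round(area_percent(12_500, 40_000, 10_000), 1))  # 25.0
```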
Quantitative real-time polymerase chain reactions
Rat uterine RNA extracted with Trizol (Invitrogen, CA, USA) was reverse-transcribed to cDNA using a reverse transcription kit (Transgen Biotech, Beijing, China). Quantitative real-time PCR (qRT-PCR) was performed using an mRNA qPCR detection kit (SYBR Green, Bio-Rad, CA, USA) on a CFX96 Touch deep-well real-time PCR detection system (Bio-Rad, CA, USA). Each sample in each group was tested in triplicate, and the experiment was repeated at least three times. qRT-PCR primer sequence:
Cell proliferation, migration and infiltration ability detection
Cell Counting Kit-8 (CCK-8) was used to detect the proliferation of ESCs. ESCs were seeded in a 96-well plate at 10⁴ cells/well and co-cultured with WT-SS/FGF1-SS hydrogel in a 37 °C incubator for 24 h; 10 µl of CCK-8 solution was then added to each well and incubated for 1 h. A microplate reader (BioTek, Vermont, USA, Gen5) was used to measure the absorbance at 450 nm, and the average absorbance of five wells was taken to obtain the cell proliferation rate. The experiment was repeated three times.
ESCs co-cultured with WT-SS/FGF1-SS hydrogel for 24 h were seeded into the Transwell chambers of a 24-well plate at 1 × 10⁵ cells/ml, and cell migration was assessed after 12 h of culture. For the cell infiltration experiment, Transwell chambers coated with Matrigel (#354248, BD Biosciences, NJ, USA) were used, and ESCs were seeded at 1 × 10⁵ cells/ml into the chambers of a 24-well plate; infiltration was assessed after 20 h of culture. Before observation, the cells were fixed with 4% paraformaldehyde and stained with 0.1% crystal violet. The total number of cells in five fields of view was recorded under a 40× objective and averaged. The experiment was repeated three times.
Statistical analysis
SPSS version 20.0 was used for statistical analysis. Quantitative data are presented as mean ± standard deviation. Significant differences between groups were determined by analysis of variance using GraphPad software version 5. P < 0.05 was considered statistically significant.
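A minimal sketch of the one-way ANOVA underlying the group comparisons (the study used SPSS/GraphPad; the per-group values here are hypothetical, with five animals per group as in the study design):

```python
def one_way_anova_F(*groups):
    """F statistic for a fixed-effects one-way ANOVA: between-group
    mean square over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical endometrial-thickness values (um), five rats per group:
control = [520, 540, 510, 530, 525]
model = [310, 295, 320, 305, 300]
treated = [470, 455, 480, 465, 460]
print(f"F = {one_way_anova_F(control, model, treated):.1f}")
```

The resulting F value would then be compared against the F(k−1, n−k) distribution to obtain the p-value, as the statistical packages do internally.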
Preparation and characterization of FGF1-SS hydrogel
The harvested FGF1 silk powders were dissolved in 8 M urea at a 0.5% (w/v) ratio for the simultaneous extraction of the sericin and the FGF1. The obtained WT-sericin and FGF1-sericin aqueous solutions had total protein concentrations of ≈1.20 and ≈1.14 mg/ml, respectively. The content of FGF1 protein in the FGF1-sericin aqueous solution was further estimated to be 1.10 µg/ml. The FGF1-SS hydrogel formed during the dialysis processing for the removal of the urea and the salts. The FGF1-SS hydrogel is injectable and can pass through a syringe needle as small as 0.7 × 35 mm (Fig. 1A). SEM analysis of the FGF1-SS hydrogel showed an interconnected lamellar and porous microstructure (Fig. 1B). Three strong peaks were found in the similar FTIR spectra of the WT-SS and FGF1-SS materials: the amide I (1590–1699 cm⁻¹), amide II (1480–1570 cm⁻¹) and amide III (1200–1310 cm⁻¹) protein bands (Fig. 1C). In addition, absorption peaks representing β-sheets were found at 1625, 1530 and 1230 cm⁻¹ in both materials, indicating a dense intermolecular hydrogen-bond network in the material. The rheological properties of the FGF1-SS hydrogel were examined; the viscosity of the FGF1-SS hydrogel decreased with increasing shear rate (Fig. 1D), revealing the macroscopic homogeneity and convenient injectability of the fabricated material. The FGF1-SS hydrogel can continuously and stably release FGF1, and the cumulative release of FGF1 from 100 µl of FGF1-SS hydrogel was 78.23 ± 9.37 ng (Fig. 1E).
In vivo experiments on FGF1-SS hydrogels
FGF1 sericin hydrogels degraded rapidly in vitro, with 48.08% ± 2.47% material loss (in PBS, pH 7.4) over the first 10 days; the degradation rate then decreased, with a total loss of 60.78% ± 1.36% within 14 days (Fig. 1F). To examine the retention behavior of the FGF1-SS hydrogel in vivo, uteri of IUA rats injected in situ with FGF1-SS hydrogel were sampled at 1, 6, 12, 24 and 48 h and at 7 and 14 days after injection. Because the FGF1 in the FGF1-SS hydrogel is of human origin, human FGF1 could be distinguished from rat FGF1 in the rat uterus with an anti-human FGF1 antibody. Immunohistochemistry showed that FGF1 entered the uterine cavity epithelium at 1 h and then migrated to the endometrium; after 24 h it had completely entered the endometrium, with no FGF1 expression remaining in the luminal epithelium. The expression of FGF1 gradually decreased over time (Fig. 1G and H). We therefore speculate that the FGF1-SS hydrogel is not cleared quickly from the uterus, but acts by infiltrating the uterine cavity epithelium and then entering the endometrium.
The effect of FGF1-SS hydrogel on ESCs
Because the basal layer of the endometrium is damaged and the endometrium cannot repair itself, restoring the cellular function of ESCs is essential. To test the effect of the FGF1-SS hydrogel on ESC function, we isolated rat ESCs; immunohistochemical identification showed positive vimentin and negative keratin staining (Fig. 2A and B). The FGF1-SS hydrogel was co-cultured with ESCs for 24 h. CCK-8 detection revealed that the cell proliferation rates of the WT+FGF1 group (0.89 ± 0.1) and the WT group (1.36 ± 0.1) were significantly higher than that of the control group (0.48 ± 0.04) (P < 0.05, n = 4) (Fig. 2C). The cell migration and infiltration rates of the WT+FGF1 group (54 ± 7, 64 ± 6) were higher than those of the WT group (42 ± 5, 52 ± 2), and both were significantly higher than those of the control group (37 ± 4, 36 ± 3) (P < 0.05, n = 3) (Fig. 2D–G). These results suggest that the FGF1-SS hydrogel significantly promotes the proliferation, migration and infiltration of ESCs; in particular, the FGF1 released from the hydrogel promotes ESC migration and infiltration more strongly than the WT material does. To further study the therapeutic effect of the FGF1-SS hydrogel on ESC injury, we selected treatment at 20 µM for 24 h to construct an ESC injury model (Fig. 2H). Because fibrosis is the main cause of IUA, we used western blotting to detect the inhibitory effect of the FGF1-SS hydrogel on fibrosis in the ESC injury model. The results showed that the expression of α-SMA in the WT+FGF1 group was significantly down-regulated compared with the model group, as was the expression of TGF-β, pSmad2 and pSmad3 (P < 0.05) (Fig. 2I and J). This suggests that FGF1-SS hydrogel treatment can reduce fibrosis in injured ESCs and acts by inhibiting the TGF-β/Smad signaling pathway.
FGF1-SS hydrogel improves the appearance and shape of the uterus
The uteri of the rats in the control and sham operation groups were smooth and tough. The uterus in the model group was markedly atrophic and edematous and had lost elasticity. Compared with the model group, the uterus in the WT+FGF1 group showed a relatively smooth and plump surface at 30 and 60 days after modeling (Fig. 3A). The uterine structure was observed by H&E staining (Fig. 3B–E). In the model group, the uterine cavity structure disappeared after modeling, and the endometrial thickness at 30 and 60 days after modeling was 356 ± 21 and 339 ± 30, respectively, significantly lower than in the control group (674 ± 55, 681 ± 40, n = 5, P < 0.05). At the same time, the number of glands in the model group (0.5 ± 1, 2.67 ± 1, n = 5) was significantly lower than in the control group (19 ± 2, 19 ± 1, n = 5, P < 0.05). The WT group maintained a complete uterine cavity structure, and the endometrial thickness (478 ± 71, 467 ± 35, n = 5) and the number of glands (12 ± 2, 9 ± 1, n = 5) were significantly increased compared with the model group. At 30 days after modeling, the uterine cavity structure in the WT group was basically intact and the cells in the functional layer were neatly arranged; at 60 days, however, the functional-layer cells in the WT group were disordered and the number of glands was significantly lower than in the WT+FGF1 group. In the WT+FGF1 group, the uterine cavity structure was complete at both 30 and 60 days after modeling, the endometrial structure was relatively intact, and the endometrial thickness (526 ± 40, 570 ± 22, n = 5) and the number of glands (14 ± 1, 15 ± 1, n = 5) were significantly up-regulated compared with the model group (P < 0.05).
Among them, 60 days after modeling, the number of uterine glands and endometrial thickness in the WTþFGF1 group were significantly up-regulated compared with the WT group, suggesting that FGF1-SS hydrogel has a better therapeutic effect on endometrial injury.
FGF1-SS hydrogel treatment improves rat fertility
IUA can cause endometrial fibrosis, thereby reducing the rate of embryo implantation. We measured the recovery of uterine function by calculating the embryo implantation rate in rats. As shown in Fig. 4A, at 30 days after modeling the embryo implantation rate in the WT group (40% ± 5%, n = 5) was significantly higher than that in the WT+FGF1 group (18% ± 1%, n = 5) (Fig. 4B). At 60 days after modeling, the embryo implantation rate in the WT+FGF1 group (65.1% ± 6.4%, n = 5) was significantly higher than that in the WT group (18% ± 1%, n = 5) (Fig. 4C). These results show that the FGF1-SS hydrogel can repair the uterine injury in IUA rats and maintain its therapeutic effect.
FGF1-SS hydrogel inhibits endometrial fibrosis
A large amount of collagen deposition can lead to fibrosis. To evaluate the inhibitory effect of the FGF1-SS hydrogel on fibrosis, we used Masson staining (Fig. 5A, B and E) and Sirius red staining (Fig. 5C, D and F) to analyze collagen deposition in the uterus. Masson staining showed that the degree of fibrosis in the model group (73% ± 5%, 74% ± 3%, n = 5) was significantly higher than that in the control group (41% ± 4%, 40% ± 6%, n = 5). The degree of fibrosis in the WT group (51% ± 3%, 53% ± 8%, n = 5) and the WT+FGF1 group (52% ± 2%, 46% ± 5%, n = 5) was significantly lower than that in the model group (P < 0.05). At 60 days after modeling, the degree of fibrosis in the WT+FGF1 group was significantly lower than that in the WT group (P < 0.05), suggesting that the FGF1-SS hydrogel has a long-term therapeutic effect. Sirius red staining was used to detect type I collagen; the WT group (40% ± 1.3%, 45.72% ± 2%, n = 5) and the WT+FGF1 group (43% ± 1%, 37% ± 5%, n = 5) showed type I collagen deposition significantly lower than that in the model group (77% ± 2%, 87% ± 2%, n = 5, P < 0.05). These results show that the FGF1-SS hydrogel can significantly inhibit uterine fibrosis at 30 and 60 days after modeling, and the effect is significantly better than that of WT.
The inhibitory effect of FGF1-SS hydrogel on fibrosis
As shown in Fig. 6A–D, the expression of the pro-fibrotic factors PDGFβ and TGF-β in the WT+FGF1 group was significantly down-regulated compared with the model group at 30 and 60 days after modeling (P < 0.05), while the expression of FGF1, which inhibits fibrosis, and of the endometrial stem cell marker SUSD was significantly up-regulated (P < 0.05). These results show that the FGF1-SS hydrogel inhibits the expression of uterine fibrosis factors. Western blot results (Fig. 7A–D) showed that the expression of TGF-β1, pSmad2, pSmad3 and α-SMA in the model group was up-regulated compared with the WT and WT+FGF1 groups, indicating that the FGF1-SS hydrogel may suppress pSmad2 and pSmad3 by inhibiting TGF-β expression, thereby inhibiting uterine fibrosis. Immunohistochemistry (Fig. 7E) showed that the endometrium of the model group stained brown, indicating abundant α-SMA expression and endometrial fibrosis. In contrast, the endometrium of the WT+FGF1 group showed no obvious positive α-SMA staining, indicating that the FGF1-SS hydrogel inhibits the formation of endometrial fibrosis and has a good therapeutic effect on IUA.
Discussion
Bombyx mori silk is mainly composed of fibroin and sericin; fibroin accounts for about 75% of the weight of the cocoon silk, while sericin accounts for the remaining 25% [32]. Although fibroin, the major component of silk, has been used as a textile material for thousands of years and more recently as a desirable biomaterial for many tissue-engineering applications, sericin, once regarded as a residual product of silk degumming [33], has recently been found to be a potential biomaterial for biological applications owing to its biocompatibility, UV-protective property, antibacterial activity, antioxidant and antityrosinase activities, coagulant features and moisturizing capability [34–37]. In addition, because sericin is hydrophilic and distributed in the outer layer of silk, it is easier and more convenient to dissolve than fibroin when preparing sericin-based biomaterials. Therefore, in this study, we focused on strategies to functionalize sericin to expand its applications in biomedicine.

Normal uterine function is the foundation of women's reproductive health, and maintaining the normal physiological structure of the endometrium, and restoring it after injury, is key to maintaining uterine function. Uterine fibrosis is usually a complication of uterine surgery, including surgery on the uterine cavity during childbirth, and negative-pressure suction, forceps and curettage, mid-term labor induction and evacuation of incomplete abortion during pregnancy termination [38–40]. Surgical operations on the uterine cavity can damage the basal layer of the endometrium; scarring leads to the formation of fibrous bridges between the opposing surfaces of the uterine cavity, increasing the risk of uterine fibrosis and even obliterating the uterine cavity entirely and causing amenorrhea [41–43].
Uterine adhesions are treated by hysteroscopic surgery to remove the adhesions, but this method carries a high risk of recurrence [44]. Studies have shown that the incidence of intrauterine re-adhesion after surgery is 3.1–23.5%, and the recurrence rate of severe IUA is as high as 62.5% [45]. Therefore, effectively preventing postoperative intrauterine re-adhesion is the key challenge in IUA treatment.
Recent studies have shown that sericin materials can promote wound healing [46]. In a wound, the cells lose contact and the production of growth factors and cytokines occurs [47]. Fibroblast migration regulates cell proliferation and collagen regeneration and is a key step in tissue repair [48], while sericin increases the number of fibroblasts entering the damaged area [49]. At present, the main cause of endometrial fibrosis is believed to be abnormal migration and proliferation of uterine epithelial and stromal cells, resulting in abnormal secretion of extracellular matrix proteins and cytokines and leading to the deposition of type I collagen in the endometrium [50]. Therefore, the repair of endometrial injury includes endometrial epithelial regeneration, repair of blood vessels and glands, and growth factor stimulation. This study used 95% to construct a rat endometrial injury model. Samples were taken 30 and 60 days after modeling, and the uterine structure was observed by H&E staining. The uterine cavity of the rats in the model group was completely atretic and could not repair itself, indicating that the endometrial injury model was successfully constructed. In the WT+FGF1 group, the uterine cavity structure was complete, the endometrial epithelial cells were arranged neatly, and the numbers of uterine glands and the uterine thickness were significantly restored compared with the model group. For the treatment of endometrial injury, the FGF1-SS hydrogel swells rapidly in water without dissolving and releases the FGF1 factor [15]. In the uterus, it provides physical support to prevent adhesions, retains the FGF1 factor, and treats the basal layer of the uterus over a long period. Immunohistochemistry showed that FGF1 entered the uterine cavity epithelium at 1 h and then migrated to the endometrium.
After 24 h, it had completely entered the endometrium and the luminal epithelium showed no FGF1 expression; FGF1 expression then gradually decreased over time. From the endometrial thickness, gland number and fibrosis data, WT itself had a repair effect on the damaged endometrium equal to or even better than that of the FGF1-SS hydrogel on Day 30, but a worse repair effect on Day 60. Thus WT itself repairs endometrial damage, but its effect is short-lived, and the addition of FGF1 significantly prolongs the repair effect: in the short term WT+FGF1 may perform the same or even slightly worse, but in the long term the FGF1-containing hydrogel is better. In the fertility test, uterine fertility was restored, suggesting that the FGF1-SS hydrogel has a significant restorative effect in the treatment of endometrial injury in rats. Masson staining showed that the area of uterine fibrosis in the WT+FGF1 group was significantly lower than that in the model group, indicating that the FGF1-SS hydrogel has a therapeutic effect on endometrial injury. In Sirius red staining, the accumulation of type I collagen in the model group was significantly higher. Among the many hypotheses about the cause of endometrial fibrosis, MET [51] and endometrial stem cell regeneration and differentiation have been studied. In the IUA clinical study of Zhou et al. [52], estrogen therapy significantly inhibited the expression of TGF-β in the uterus and inhibited the occurrence of fibrosis; other studies, such as that of Se-Ra et al., found that stimulating endometrial stem cells can restore their differentiation and migration [53]. Therefore, in this study, the expression of the endometrial stem cell marker SUSD and the myofibroblast marker α-SMA was examined to investigate the ways in which the FGF1-SS hydrogel inhibits uterine fibrosis and its therapeutic effect on uterine fibrosis.
The results showed that, compared with the model group, the expression of the endometrial stem cell marker SUSD in the WT+FGF1 group was significantly up-regulated and the expression of TGF-β was significantly down-regulated. Examination of the TGF-β/Smad signaling pathway showed that the FGF1-SS hydrogel inhibited the expression of α-SMA by inhibiting the expression of pSmad2 and pSmad3. We used qRT-PCR and western blotting to examine signaling pathways related to uterine fibrosis. qRT-PCR showed that the expression of PDGFβ and TGF-β in the WT+FGF1 group was significantly lower than in the model group. Since PDGFβ and TGF-β are associated with organ fibrosis, this indicates that treatment with sericin materials can slow the development of uterine fibrosis. Western blot results showed that the sericin hydrogel and the FGF1-SS hydrogel inhibit the expression of α-SMA by inhibiting the TGF-β/Smad pathway, thereby reducing fibrin deposition on the wound surface.
Sericin has natural cell-adhesion properties and can support cell adhesion and long-term cell growth on its surface or within a scaffold [54]; accordingly, the cell proliferation rates in the WT and WT+FGF1 groups were significantly up-regulated compared with the control group. ESCs may undergo epithelial transformation under the induction of FGF1 [55,56], which may explain why the proliferation rate in the WT+FGF1 group was lower than that in the WT group. At the same time, FGF1 enhanced chemokine signaling and significantly promoted the migration and infiltration capacity of ESCs (Fig. 2D–G). An in vitro ESC oxidative damage model was constructed to examine the therapeutic effect of the FGF1-SS hydrogel on damage to the uterine functional layer. In this model, the FGF1-SS hydrogel inhibited fibrosis by inhibiting the TGF-β/Smad pathway. Cell function experiments showed that the FGF1-SS hydrogel significantly up-regulated the proliferation, migration and infiltration capacity of ESCs, suggesting that it may treat oxidative damage by modifying ESC function.
These results show that the FGF1-SS hydrogel has a significant therapeutic effect on rat endometrial injury and can restore uterine function, suggesting that it can be used as an adjuvant material after uterine cavity surgery to prevent endometrial fibrosis.
Conclusion
The FGF1-SS hydrogel inhibits uterine fibrosis through the TGF-β/Smad pathway and blocks the progression of endometrial fibrosis in rats, thereby restoring uterine function. The FGF1-SS hydrogel has a long-term therapeutic effect on endometrial injury and can be used as an adjuvant material for postoperative IUA treatment.
Ethics approval
The animal-related experiments, including the isolation of human umbilical cord mesenchymal stem cells and the mouse premature ovarian failure modeling, were approved by the National Research Institute for Family Planning, China (Ethics Number 2011-10). All applicable institutional and national guidelines for the care and use of animals were followed.
"year": 2022,
"sha1": "dc0ce199d225f22f46777ac6fce98334dfe44292",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/rb/advance-article-pdf/doi/10.1093/rb/rbac016/42779048/rbac016.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e5c18f908599cd7ac3f5e7b16395539330d940e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Energy-Efficient Partial LDPC Decoding for NAND Flash-Based Storage Systems
A new decoding method for low-density parity-check (LDPC) codes is presented to lower the energy consumption of LDPC decoders for NAND flash-based storage systems. Since the channel condition of NAND flash memory is reliable for most of its lifetime, it is inefficient to apply the maximum-effort decoding with the full parity-check matrix (H-matrix) from the beginning of the lifespan. As the energy consumption and the decoding latency are proportional to the size of the H-matrix used in decoding, the proposed algorithm starts the decoding with a partial H-matrix selected by considering the channel condition. In addition, the proposed partial decoding provides various error-correcting capabilities by adjusting the partial H-matrix. Based on the proposed partial decoding algorithm, a prototype decoder is implemented in a 65 nm CMOS process to decode a 4 KB LDPC code. The proposed decoder reduces energy consumption by 93% compared to the conventional LDPC decoding architecture at maximum.
Introduction
NAND flash memory is extensively used in many storage solutions, such as solid-state drives (SSDs) and secure digital (SD) cards, due to its fast accessibility, low power consumption, and compact size [1,2]. Recently, advanced structures such as 3D-stacked NAND flash, which store more information in a limited area, have been widely employed, providing a more error-prone environment [3–5]. In storage systems built with NAND flash memories, error-correction codes (ECCs) are commonly applied to ensure data reliability. Algebraic codes such as BCH and RS codes have widely been employed because of their guaranteed performance and moderate hardware complexity, but these codes are not adequate when the NAND flash channel worsens. For that reason, the LDPC code has recently been employed in many NAND flash-based storage systems, as its error-correcting capability resulting from iterative belief propagation is far superior to that of the algebraic codes. However, LDPC decoding necessitates high computational complexity and frequent memory accesses, and consumes considerably more energy than BCH and RS decoding [6,7].
Since the NAND flash channel is reliable for most of its lifetime, maximum-effort decoding with the full parity-check matrix (H-matrix) is inefficient when the channel is reliable. Providing multiple error-correcting capabilities may be a solution, since the error-correcting capability can then be adjusted depending on the channel condition. As a matter of fact, multi-rate LDPC codes are commonly used to provide various error-correcting capabilities in wireless communication systems [8,9]. The use of multi-rate codes is effective only when the channel condition at the time of encoding is consistent with that at decoding, which means that traditional multi-rate codes are not suitable for storage systems, in which data writes and reads can occur far apart.
A new partial decoding is proposed to provide various error-correcting capabilities with a single H-matrix. The decoding strength is adjusted by changing the column degree of the partial H-matrix, which is verified through intensive simulations over the additive white Gaussian noise (AWGN) channel. Based on partial decoding, we present a novel energy-efficient decoding algorithm. The proposed algorithm starts the decoding with a partial H-matrix selected by considering the channel condition at the time of decoding. When the decoding with the partial H-matrix fails, the proposed algorithm increases the size of the partial H-matrix to enhance the decoding strength and tries the decoding again. Since the energy consumption of an LDPC decoder is mainly related to the size of the H-matrix used for decoding, the proposed algorithm can reduce the energy consumed in the decoding process. In addition, it is effective in reducing the decoding latency and enhancing the decoding throughput.
The rest of this paper is organized as follows. Section 2 provides the background on LDPC decoding algorithms and quasi-cyclic LDPC codes. Section 3 introduces the proposed partial decoding of the LDPC codes and Section 4 analyzes simulation results of the proposed energy-efficient decoding algorithm. Theoretical analysis is explained in Section 5. The details of the hardware design and the implementation results are presented in Sections 6 and 7, respectively, and conclusions are made in Section 8.
Backgrounds
This section provides an overview of LDPC decoding algorithms, including an in-depth explanation of the Sum-Product algorithm (SPA) [10] and the Min-Sum algorithm (MSA) [11]. Moreover, the structure of the Quasi-Cyclic (QC) LDPC H-matrix is introduced to explain the proposed algorithms.
LDPC Decoding Algorithms
The SPA and the MSA are two prominent methods used for decoding LDPC codes, which are essential for error correction in NAND flash-based storage systems. They are also known as the belief propagation (BP) algorithms of LDPC codes, and operate by passing probabilistic messages along the edges of a Tanner graph to estimate the likelihood of bit values. Both algorithms iteratively update the likelihood messages until they converge to a stable solution or reach a predefined number of iterations. In each iteration, the SPA combines the messages from neighboring nodes using a product operation, followed by a normalization process to update the beliefs of each bit's value. In contrast, the MSA estimates these probabilities by considering the minimum value of the incoming messages, hence the name. This method, while an approximation, significantly reduces the need for complex calculations without drastically affecting decoding accuracy. The SPA typically requires floating-point precision and involves trigonometric functions, making it computationally intensive.
The advantage of using the MSA lies in its simplicity, as it can be implemented using integer arithmetic and simple comparison operations, making it suitable for hardware with limited processing capabilities. Both algorithms benefit from the inherent error detection and correction capabilities of LDPC codes, which feature a redundant structure enabling the identification and rectification of errors in data transmission. The practical implementation of these algorithms also considers factors such as channel noise characteristics and the required level of error correction. Tailoring the algorithm to specific needs can result in various modifications and optimizations, such as the normalized MSA and the offset MSA, which aim to bridge the performance gap with the SPA.
Quasi-Cyclic LDPC Codes
The array LDPC code is suitable for adjusting the column degree of the H-matrix, as it is a regular LDPC code with fixed column and row degrees. Moreover, it is a quasi-cyclic (QC) LDPC code composed of shifted identity matrices of the same size [12]. Therefore, the number of check nodes can be controlled easily by eliminating some block-rows, each of which has the same size as the identity matrix. Three parameters, w_c, w_r, and p, define an array LDPC code, where p is a prime number denoting the size of the identity matrix, and w_c and w_r represent the column and row degrees of the H-matrix, respectively. The H-matrix of the (p, w_r, w_c) array LDPC code is

H = | I  I          I            ...  I                    |
    | I  A          A^2          ...  A^(w_r-1)            |
    | I  A^2        A^4          ...  A^(2(w_r-1))         |
    | :  :          :                 :                    |
    | I  A^(w_c-1)  A^(2(w_c-1)) ...  A^((w_c-1)(w_r-1))   |      (1)

where I is the p × p identity matrix and A is a matrix obtained by shifting every row of I cyclically by one. When p = 3, for example, the corresponding matrix A is

A = | 0 1 0 |
    | 0 0 1 |      (2)
    | 1 0 0 |

Based on (2), A^2 is calculated as

A^2 = | 0 0 1 |
      | 1 0 0 |      (3)
      | 0 1 0 |
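The block structure of (1) can be realized directly in code. The sketch below is illustrative (dense 0/1 row lists, not a hardware representation, and the function name is ours): block (i, j) of H is A^(ij), and A^k is simply the identity with each row cyclically shifted by k positions.

```python
def array_ldpc_H(p, w_r, w_c):
    """Build the (p, w_r, w_c) array-code H-matrix as a list of 0/1 rows.

    Block (i, j) is A^(i*j), where A^k[r][c] = 1 iff c == (r + k) mod p.
    """
    H = [[0] * (w_r * p) for _ in range(w_c * p)]
    for i in range(w_c):              # block-row index, 0 .. w_c-1
        for j in range(w_r):          # block-column index, 0 .. w_r-1
            k = (i * j) % p           # cyclic shift of this block, A^(i*j)
            for r in range(p):
                H[i * p + r][j * p + (r + k) % p] = 1
    return H
```

Each block is a permutation matrix, so every column of H has exactly w_c ones and every row exactly w_r ones, which is the regularity the partial decoding later exploits.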
Proposed Partial Decoding of LDPC Codes
The partial decoding of an LDPC code is newly introduced to provide various error-correcting capabilities, which can be adaptively applied according to the channel condition. The decoding strength is adjusted by changing the number of check nodes to be used for decoding. The number of check nodes relevant to a variable node is called the column degree of the H-matrix. Since each variable node collects the local messages coming from the connected check nodes, the LDPC decoding works normally even with some check nodes removed. Therefore, the error-correcting capability can be adjusted by changing the column degree.
Construction of a Partial H-Matrix
The H-matrix shown in (1) can be decomposed into w_c sub-matrices, h_1 to h_(w_c), where h_i is a p × (w_r · p) sub-matrix denoting

h_i = [ I  A^(i-1)  A^(2(i-1))  ...  A^((w_r-1)(i-1)) ]      (4)

To support various error-correcting capabilities, a partial H-matrix is organized by including some of the above sub-matrices, h_1 to h_x. Such a set of sub-matrices is denoted as H_x, where x is an integer ranging from 2 to w_c, since the column degree of a partial H-matrix should be at least 2 in order to decode an LDPC code. When w_r = 4, for example, the partial H-matrix H_3 is constructed as

H_3 = | I  I    I    I   |
      | I  A    A^2  A^3 |      (5)
      | I  A^2  A^4  A^6 |
Decoding of a Partial H-Matrix
The message is encoded with the full H-matrix of the LDPC code, while the received codeword is decoded by using a partial H-matrix in the partial decoding. Iterative decoding algorithms such as the SPA or MSA can be used to update variable nodes and check nodes based on the partial H-matrix. Before starting a decoding iteration using the partial H-matrix, the syndromes of the updated codeword are checked with respect to the full H-matrix. If the syndromes are all zeros, then the codeword is correct and the decoding process finishes. Otherwise, the decoding iteration is repeated until the number of maximally allowed iterations (MAI) is reached. The detailed procedure of the partial decoding is described in Algorithm 1.
Algorithm 1: Partial decoding of an LDPC code
1: Initialization: load the initial LLR values into each variable node.
2: Iterative decoding: update the variable and check nodes of the partial H-matrix in accordance with the SPA or MSA.
3: Syndrome check: for all check nodes included in the full H-matrix, compute the syndromes of the updated codeword; if all syndromes are zero, output the codeword and stop; otherwise, go to step 2 until the MAI is reached.

The error-correcting capability resulting from a partial H-matrix is investigated based on a (149, 61, 6) array LDPC code that is designed to protect a message of 1 KB. The SPA is employed to decode the received codeword with the MAI set to 30. Figure 1 shows how the error-correcting capability changes over the channel SNR. The uncorrected bit-error rate (BER) performances resulting from H_2 and H_6 correspond to the weakest and strongest error-correcting capabilities, respectively. The decoding strength is stronger when the partial H-matrix becomes larger. Therefore, it is possible to support diverse error-correcting capabilities by constructing several partial H-matrices from a single H-matrix. Though H_2 shows the weakest decoding strength, it removes two thirds of the memory accesses compared to the full H-matrix. This enables a tradeoff between decoding capability and energy consumption, since the number of memory accesses dominates the energy consumption of an LDPC decoder [13].
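Algorithm 1 can be sketched as follows with a flooding-schedule normalized min-sum decoder (the scaling factor 0.75 and the function name are our illustrative choices, not the paper's implementation). Messages are exchanged only over the rows of the partial H-matrix, while the stopping syndrome check always uses the full H-matrix; for an array code, H_x is simply the first x·p rows of the full matrix.

```python
def partial_min_sum_decode(H_partial, H_full, llr, max_iter=30, alpha=0.75):
    """Decode with H_partial; declare success only when the FULL syndrome is zero."""
    n = len(llr)
    checks = [[j for j in range(n) if row[j]] for row in H_partial]

    def syndrome_ok(hard):
        return all(sum(hard[j] for j in range(n) if row[j]) % 2 == 0
                   for row in H_full)

    hard = [1 if l < 0 else 0 for l in llr]
    if syndrome_ok(hard):                       # check before the first iteration
        return hard, True
    v2c = {(i, j): llr[j] for i, nb in enumerate(checks) for j in nb}
    for _ in range(max_iter):
        c2v = {}
        for i, nb in enumerate(checks):         # check-node update (min-sum)
            for j in nb:
                others = [v2c[(i, k)] for k in nb if k != j]
                sign = -1 if sum(o < 0 for o in others) % 2 else 1
                c2v[(i, j)] = alpha * sign * min(abs(o) for o in others)
        total = [llr[j] + sum(c2v[(i, j)] for i, nb in enumerate(checks) if j in nb)
                 for j in range(n)]
        hard = [1 if t < 0 else 0 for t in total]
        if syndrome_ok(hard):
            return hard, True
        for i, nb in enumerate(checks):         # variable-node update
            for j in nb:
                v2c[(i, j)] = total[j] - c2v[(i, j)]
    return hard, False
```

For instance, with a (7, 4) Hamming parity-check matrix used as both the partial and the full matrix, a single flipped bit with strong channel LLRs is corrected within one iteration.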
Proposed Energy-Efficient Decoding of a Partial H-Matrix
The proposed decoding algorithm increases energy efficiency in LDPC decoding and is effective in reducing the energy consumption of storage systems built with NAND flash memory, since NAND flash is reliable in its beginning stage. Applying high voltages to a cell repeatedly to program or erase it decreases the SNR of the flash channel monotonically [3,4]. As the wear-leveling technique makes the SNR of one page almost the same as that of the others [5], the NAND flash channel is reliable for a considerable amount of time. Since the NAND flash channel in the beginning does not induce many erroneous bits, maximum-effort decoding with the full H-matrix is inefficient. Therefore, the proposed algorithm selects a proper partial H-matrix depending on the channel condition.
Considering the channel SNR, the proposed algorithm selects a specific partial matrix from the set of partial H-matrices defined as

S = { H_2, H_3, ..., H_(w_c) }      (6)

The selected partial H-matrix is the initial partial H-matrix that is first used for decoding. The initial partial H-matrix for a specific SNR can be determined in advance by conducting simulations over the flash channel or by analyzing the decoding algorithm. The proposed energy-efficient decoding algorithm is described in Algorithm 2.
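Algorithm 2 is not reproduced in this excerpt, but its control flow as described — pick an initial H_x from the channel SNR, then enlarge the partial matrix on failure — can be sketched as follows. The SNR thresholds and the decode_fn callback are placeholders of ours; real thresholds would come from the flash-channel simulations mentioned above.

```python
def select_initial_x(snr_db, thresholds):
    """Pick the smallest x whose partial matrix H_x suffices at this SNR.

    thresholds maps x -> minimum SNR (dB) at which H_x matches full-H
    performance (hypothetical values in the caller).
    """
    for x in sorted(thresholds):        # try the smallest (cheapest) x first
        if snr_db >= thresholds[x]:
            return x
    return max(thresholds)              # worst channel: fall back to the full H

def escalating_decode(H_full, p, w_c, x0, llr, decode_fn):
    """Try H_x0 first and enlarge the partial matrix on failure."""
    x = x0
    while x <= w_c:
        # H_x of an array code is the first x*p rows of the full H-matrix
        hard, ok = decode_fn(H_full[:x * p], H_full, llr)
        if ok:
            return hard, x
        x += 1
    return hard, w_c                    # decoding failure even with the full H
```

Tracking the iteration count of previous decodes, as the simulation section suggests, would let a controller lower or raise x0 over the device's lifetime.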
Simulation Results
The (149, 61, 6) array LDPC code is used to validate the proposed energy-efficient LDPC decoding algorithm. The average number of iterations required to decode a codeword, obtained by applying the SPA with the MAI set to 30, is shown in Figure 2. In the simulation, the flash memory is modeled as an AWGN channel. The SNR is defined as σ²/N, where σ² is the signal power and N is the noise power. The distribution for a Single-Level Cell is assumed to be similar to that of Binary Phase-Shift Keying (BPSK). The error rate is computed under the assumption that the all-zero codeword is transmitted as '1'; a decoded bit is counted as erroneous if the outcome is non-zero. For a specific SNR, there are partial H-matrices that provide almost the same decoding performance as the full H-matrix. At an SNR of 6 dB, for example, decoding with H_3 requires almost the same number of iterations as the full H-matrix. Based on the simulation results, the proposed algorithm selects the initial partial H-matrix with which decoding starts. Decoding may continue with H_2, but if the average number of iterations begins to increase, the decoder can switch to H_3. This policy is derived from simulation results and can be implemented by an SSD controller that tracks the number of iterations at the end of the previous decoding process.
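The simulation's channel model can be sketched as below, treating the SLC flash cell as BPSK over AWGN with the all-zero codeword sent; the unit-energy signal mapping is our assumption:

```python
import numpy as np

def slc_channel_llrs(bits, snr_db, rng=None):
    # BPSK mapping with unit energy: bit 0 -> +1, bit 1 -> -1 (assumption).
    # SNR = signal power / noise power, as defined in the text.
    rng = rng or np.random.default_rng(42)
    snr = 10.0 ** (snr_db / 10.0)
    sigma2 = 1.0 / snr                  # noise power N for unit signal power
    y = (1 - 2 * bits) + rng.normal(0.0, np.sqrt(sigma2), bits.shape)
    return 2.0 * y / sigma2             # channel LLRs fed to the SPA

# All-zero codeword: a hard decision errs wherever the LLR goes negative.
llrs = slc_channel_llrs(np.zeros(10_000, dtype=int), snr_db=10.0)
raw_ber = float((llrs < 0).mean())
```

At 10 dB the channel LLRs have mean ≈ 2·SNR = 20 and variance ≈ 40, i.e. twice the mean, which is consistent with the N(λ, 2λ) model used later in the theoretical analysis.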
The energy consumption of an LDPC decoder is mainly dominated by the memory accesses caused by frequent updates of the internal messages exchanged between variable and check nodes [13]. Reducing the number of memory accesses is therefore highly effective in lowering the overall energy consumption; it also decreases the decoding latency and increases the decoding throughput. As the number of memory accesses is proportional to the size of the H-matrix used in decoding, reducing that size lessens the energy consumed in the LDPC decoder. The average numbers of memory accesses resulting from the proposed partial decoding algorithm and from the conventional algorithm that decodes with the full H-matrix are compared in Figure 3. The proposed algorithm considerably reduces the number of memory accesses when the SNR is not small. Since a large number of memory accesses leads to high energy consumption, the proposed decoding algorithm significantly reduces the energy consumed in the high-SNR region. For the (149, 61, 6) array LDPC code, the energy consumption caused by memory accesses is reduced to 33.1% even compared to a conventional decoding algorithm that employs the early-stopping method [14]. As the memory accesses are mainly required to calculate V2C and C2V messages, the computational operations are also reduced in proportion to the reduction in memory accesses, so the energy consumption of the LDPC decoder shrinks by roughly the same ratio.
In addition, both the decoding latency and the decoding throughput are improved. The latency of the proposed partial decoding algorithm, normalized to that of the conventional one, is shown in Figure 4. Since the number of variable nodes connected to each check node is constant, the number of clock cycles taken to process a check node is the same for all partial H-matrices; the decoding latency is therefore determined by the number of check-node operations. As that number is proportional to the size of the partial H-matrix, the decoding latency can be reduced effectively by reducing the size. In Figure 4, the decoding latency is reduced to 35.5% at best compared to the conventional architecture [15]. Since the decoding throughput is inversely proportional to the decoding latency, the proposed partial decoding algorithm can boost the throughput significantly in the early life of the flash.
Theoretical Analysis
The proposed decoding algorithm is analyzed theoretically to explain the existence of a partial H-matrix that yields almost the same decoding performance as the full H-matrix, and it is shown that the theoretical prediction of the required number of iterations is consistent with the simulation results. The appropriate partial H-matrix can be determined by examining the number of iterations. To calculate the number of iterations required for a specific SNR, we estimate how the BER of the decoded outputs changes over decoding iterations. The LLR distribution obtained by the internal message-tracking technique called density evolution [16] is used to estimate the BER of the decoded outputs; the distribution of the LLR values over all variable nodes is examined in each iteration. The SPA is assumed for this analysis because its internal steps can be described in mathematically closed forms.
Calculation of the LLR Distribution
The LLR distribution in the l-th iteration is analyzed by using the mathematically closed forms of the SPA. For an H-matrix H, the set of variable nodes connected to the m-th check node is denoted as N_m = {n | h_mn = 1}, where h_mn represents the element of the H-matrix in the m-th row and n-th column.
Similarly, the set of check nodes connected to the n-th variable node is M_n = {m | h_mn = 1}. For a regular LDPC code, the numbers of elements in N_m and M_n are w_r and w_c, respectively. The set that excludes element n from N_m is denoted N_m\n, and the set excluding m from M_n is similarly denoted M_n\m. The LLR value of the n-th variable node after l iterations is denoted L_n^(l), and the C2V message of the m-th check node after l iterations is denoted C_{m→n}^(l); the means of L_n^(l) and C_{m→n}^(l) over all n and m are denoted λ^(l) and µ^(l), respectively.
In previous works [16,17], the distribution of L_n^(l) over all n is known to be approximately Gaussian, N(λ^(l), 2λ^(l)) [16], where N(µ, σ²) represents the Gaussian distribution with mean µ and variance σ², and C_{m→n}^(l) is likewise approximately Gaussian [17]. Therefore, the LLR distribution can be obtained by tracking λ^(l) in each iteration. The variable- and check-node update equations are used to chase the mean of the LLR distribution. In the SPA, the variable-node update is expressed as

L_n^(l) = L_n^(0) + Σ_{m ∈ M_n} C_{m→n}^(l),   (11)

where L_n^(0) is the initial LLR. The corresponding C2V message for the l-th iteration is

C_{m→n}^(l) = 2 tanh^{-1}( Π_{n' ∈ N_m\n} tanh(L_{n'}^(l-1)/2) ).   (12)

For convenience, Equation (12) is rewritten as

tanh(C_{m→n}^(l)/2) = Π_{n' ∈ N_m\n} tanh(L_{n'}^(l-1)/2).   (13)

Taking the expectations of both sides,

E[tanh(C_{m→n}^(l)/2)] = (E[tanh(L_{n'}^(l-1)/2)])^{w_r - 1}.   (14)

For the sake of simple expression, Ψ(x) is defined as

Ψ(x) = E[tanh(y/2)],  where y ∼ N(x, 2x),   (15)

so Equation (14) can be rewritten as

Ψ(µ^(l)) = (Ψ(λ^(l-1)))^{w_r - 1}.   (16)

Taking the expectations of both sides of (11), we obtain

λ^(l) = 2E_c/σ² + w_c µ^(l),   (17)

where E_c is the energy consumed to transmit a bit of a codeword and σ is the standard deviation of the AWGN channel; a bit of zero or one transmitted over the AWGN channel is mapped to +√E_c or -√E_c, respectively, and the all-zero codeword is assumed to be sent. By substituting (17) into (16), we have

Ψ(µ^(l)) = (Ψ(2E_c/σ² + w_c µ^(l-1)))^{w_r - 1},   (18)

which is rewritten as

µ^(l) = Ψ^{-1}( (Ψ(2E_c/σ² + w_c µ^(l-1)))^{w_r - 1} ).   (19)

By substituting Equation (19) into Equation (17), we finally have the mean of the LLR distribution,

λ^(l) = 2E_c/σ² + w_c Ψ^{-1}( (Ψ(2E_c/σ² + w_c µ^(l-1)))^{w_r - 1} ),   (20)

where the sequence of means can be recursively calculated from (19) with the initial condition µ^(0) = 0. The mean of the LLR distribution λ^(l) is determined only by the column degree w_c, the row degree w_r, and the channel SNR E_c/σ². Therefore, the LLR distribution after l iterations can be estimated from the mean expressed in (20).
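The mean-tracking recursion of (19)–(20) can be sketched numerically. This is an illustration of the technique, not a reproduction of the paper's exact derivation: Ψ is evaluated with Chung et al.'s well-known curve fit rather than the exact expectation, the channel-LLR mean 2·E_c/σ² is our assumption, and textbook density evolution would use the extrinsic mean with w_c − 1 in place of w_c inside the check-node update:

```python
import math

def phi(x):
    # Chung et al.'s curve fit for phi(x) = 1 - E[tanh(y/2)], y ~ N(x, 2x).
    return math.exp(-0.4527 * x ** 0.86 + 0.0218) if x > 0 else 1.0

def psi(x):
    # Psi(x) = E[tanh(y/2)] = 1 - phi(x).
    return 1.0 - phi(x)

def psi_inv(v):
    # Inverse of Psi via the curve fit, valid for 0 < v < 1.
    return ((0.0218 - math.log(1.0 - v)) / 0.4527) ** (1.0 / 0.86)

def llr_means(snr_db, wc, wr, iters):
    # Track lambda^(l), the mean of the N(lambda, 2*lambda) LLR model,
    # starting from mu^(0) = 0; mu0 is the mean channel LLR (assumption).
    mu0 = 2.0 * 10.0 ** (snr_db / 10.0)
    mu, lams = 0.0, []
    for _ in range(iters):
        mu = psi_inv(psi(mu0 + wc * mu) ** (wr - 1))   # check-node mean, per (19)
        lams.append(mu0 + wc * mu)                     # variable-node mean, per (17)
    return lams

# Illustrative parameters: wc = 6 as in the code studied; wr = 30 is a
# placeholder row degree, not the code's actual one.
lams = llr_means(5.0, wc=6, wr=30, iters=8)
```

The resulting sequence of LLR means grows monotonically with the iteration count; the iteration at which the implied BER crosses the target gives curves of the kind shown in Figure 6.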
Calculation of the Number of Iterations
For a specific SNR, the LLR values of all variable nodes follow the Gaussian distribution N(λ^(l), 2λ^(l)) [16]. The mean of the LLR distribution λ^(l) is obtained from (20) by adjusting the column degree w_c according to the size of the partial H-matrix. To decide the success or failure of the decoding, the BER is estimated from the calculated LLR distribution.
Assuming that the transmitted codeword is all zeros, a correctly decoded codeword has positive LLR values at all bit positions, whereas an uncorrected codeword has some negative LLR values. The ratio of the negative area to the total area of the distribution can therefore be regarded as the uncorrected BER for a given number of iterations. Since the LLR values are Gaussian distributed, the BER after l iterations is calculated as

BER^(l) = Q( λ^(l) / √(2λ^(l)) ) = Q( √(λ^(l)/2) ),   (21)

where Q(x) is the tail probability of the standard Gaussian distribution. The LLR distribution and the estimated BER for the (149, 61, 6) array LDPC code at an SNR of 5 dB are shown in Figure 5. The full H-matrix is used for decoding, i.e., w_c is 6. As the number of iterations increases, the mean of the LLR distribution moves to higher values, reducing the BER. The estimated BER is used to compute the number of iterations needed for successful decoding: the left tail of the LLR distribution in Figure 5, which falls into the negative region, represents the proportion of erroneous bits, and the area of that tail is calculated with the Q-function as in (21). When the calculated BER falls below 10^-15 in a certain iteration, a criterion widely accepted in the storage market, decoding is considered successful in that iteration. We therefore analyze the theoretical number of iterations needed for successful decoding over a range of SNRs; the numbers are depicted in Figure 6. Since the curves resemble the simulation results shown in Figure 2, the proposed decoding algorithm is consistent with the theoretical analysis. In addition, the existence of an initial partial H-matrix that provides the same error-correcting performance as the full H-matrix at a specific SNR is explained theoretically.
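A sketch of the BER estimate and the stopping criterion; it assumes the N(λ, 2λ) LLR model, under which the negative-tail mass is Q(√(λ/2)):

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2)).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_estimate(lam):
    # P(LLR < 0) for LLR ~ N(lam, 2*lam): Q(lam / sqrt(2*lam)) = Q(sqrt(lam/2)).
    return q_func(math.sqrt(lam / 2.0))

def iterations_to_success(lam_per_iter, target=1e-15):
    # First iteration whose estimated BER clears the storage-market criterion.
    for l, lam in enumerate(lam_per_iter, start=1):
        if ber_estimate(lam) < target:
            return l
    return None
```

Feeding in the sequence of LLR means λ^(1), λ^(2), ... produced by the mean-tracking recursion yields the iteration counts plotted against SNR in Figure 6.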
Hardware Architecture
The proposed algorithm can be supported by a simple modification of existing decoder hardware. While maintaining the basic structure of the existing architecture, adding the capability to dynamically select the optimal partial H-matrix based on the channel state significantly reduces energy consumption while maintaining decoding accuracy. Through such a simple modification, the proposed decoding method can be easily integrated into existing systems, offering improved performance and energy efficiency.
Dedicated Syndrome Check Module
LDPC decoders that utilize soft information are generally required to perform the first decoding iteration, because generating the soft information itself incurs significant latency; it is therefore more advantageous in several respects to proceed with an initial decoding iteration than to perform a separate syndrome check. However, the proposed decoding algorithm, which also uses soft information, requires decoding with partial H-matrices of various sizes. To accommodate this, a separate syndrome-check module is incorporated. Employing an independent syndrome-check module can significantly reduce decoding latency, especially under good channel conditions.
Typically, the full H-matrix is not necessary for a syndrome check that verifies the integrity of a codeword; the check only needs to cover the entire message. The dedicated syndrome-check module can therefore be very compact and implemented with minimal effort. Table 1 shows the gate count of the syndrome-check logic for LDPC codes of various sizes in a 65 nm CMOS process. For a commonly used 4 KB LDPC code with a rate of 0.9, it requires only 22 k equivalent gates, about 1% of the total decoder area. Incorporating this logic into an existing decoder thus incurs minimal overhead and can be applied to any decoding architecture.
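Functionally, the dedicated check is just a sparse GF(2) matrix-vector product. A minimal sketch, using a small stand-in parity-check matrix for illustration (which rows the hardware module actually covers is an assumption; as noted above, they only need to span the message):

```python
import numpy as np

def syndrome_clean(h_rows, hard_bits):
    # The codeword passes iff every selected parity check is satisfied
    # over GF(2), i.e. H_rows @ c = 0 (mod 2).
    return not np.any((h_rows @ hard_bits) % 2)

# (7,4) Hamming parity checks as a tiny stand-in H for illustration.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
valid = np.array([1, 1, 1, 0, 0, 0, 0])   # a valid Hamming codeword
```

In hardware this reduces to XOR trees over the hard-decision bits, which is why the module costs only a few percent of the decoder area.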
Decoding Architecture
A block diagram of the proposed decoding architecture is shown in Figure 7. Except for the dedicated syndrome-checking (SYN) unit, the architecture is identical to the conventional layered min-sum decoder [18]. Each decoding function unit (DFU) performs an independent check-node operation in parallel, and the corresponding LLR values and intermediate C2V values are stored in the LMEM and C2V memories, respectively. The detailed architecture of the DFU is shown in Figure 8. Through the shuffle network, the appropriate LLR and C2V values are fetched, followed by number-system conversion and addition and subtraction operations. For a fair evaluation of the implementation, the most efficient of the existing approaches has been applied for the minimum-search logic [19,20].
The shuffle and de-shuffle networks align the LLR and C2V values. In the conventional architecture, all syndromes are checked in each DFU operation, since the conventional decoding algorithm always uses the full H-matrix. The proposed partial LDPC decoding, however, uses partial H-matrices instead of the full H-matrix when the channel is reliable. Since decoding with a partial H-matrix does not compute all check-node equations and syndromes, the dedicated SYN unit, which checks the remaining syndromes, is added. As a result, the proposed partial LDPC decoding can be applied by adding a simple SYN unit to any existing structure.
Conclusions
This paper has presented a new energy-efficient LDPC decoding method, called partial LDPC decoding, that takes into account the characteristics of the NAND flash channel. The proposed algorithm decodes with a portion of the full H-matrix in order to save the energy consumed in decoding. Partial decoding provides a range of error-correcting capabilities by adjusting the size of the partial H-matrix, enabling a trade-off between energy consumption and error-correcting capability. The existence of a partial H-matrix that achieves almost the same decoding performance as the full H-matrix at a specific SNR has been analyzed theoretically and confirmed by intensive simulations. A prototype decoder implementing the proposed algorithm has been developed for 4 KB LDPC codes in a 65 nm CMOS process; it reduces energy consumption by up to 93% compared to recent LDPC decoding architectures.
Figure 2 .
Figure 2. The average number of iterations simulated for various partial H-matrices of the (149, 61, 6) array LDPC code.
Figure 3 .
Figure 3. The comparison of memory accesses resulting from the conventional and proposed decoding algorithms for the (149, 61, 6) array LDPC code.
Figure 4 .
Figure 4. The decoding latency of the proposed algorithm normalized by that of the conventional one for the (149, 61, 6) array LDPC code.
Figure 5 .
Figure 5. The probability distribution of LLR values and the estimated BERs of the (149, 61, 6) array LDPC code when the SNR is 5 dB.
Figure 6 .
Figure 6. The theoretically calculated number of iterations for the various partial H-matrices of the (149, 61, 6) array LDPC code.
Figure 7 .
Figure 7. The prototype decoder for the proposed energy-efficient partial LDPC decoding algorithm for the (607, 60, 6) array LDPC code.
Algorithm 2 (fragment):
1: Input: S = {H_2, H_3, ..., H_{w_c}}, MAI, and channel SNR
2: j = index of a partial H corresponding to the channel SNR
Table 1 .
Areas of the Syndrome Checking Logic for Various Sizes of LDPC Codes in 65 nm CMOS.
"year": 2024,
"sha1": "d5a60d083bd2d1fbba0da08c3ba33fc17f04ff52",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/13/7/1392/pdf?version=1712570315",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eb1be89d8f7f6bcb2cf4bde2550efa7d4d13ffbf",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
} |
Serum CA19-9 as a marker of circulating tumor cells in first reflux blood of colorectal cancer patients
Circulating tumor cells (CTCs) are used for metastasis surveillance in cancer patients, but low detection rates limit their use in colorectal cancer (CRC). We investigated the distribution of CTCs in peripheral and portal blood of CRC patients and analyzed the relationship between the serum tumor markers CEA/CA19-9 and blood CTC levels. CTC levels detected in first reflux/portal vein blood were higher than in peripheral blood, and the liver reduced the amount of CTCs. CTCs-positive patients had increased serum CEA and CA 19-9 levels, and the CEA and CA 19-9 levels correlated with the CTC levels. Even in non-metastatic CRC patients with barely detectable CTCs in peripheral blood, serum CA 19-9 levels correlated with the CTC levels in first reflux/portal vein blood. These results demonstrate that CTC detection in first reflux vein/portal vein blood is more sensitive than in peripheral blood, suggesting that clinical diagnosis using the CellSearch System should be based on CTC detection in first reflux vein blood because of the higher detection rates. In addition, our results indicate that serum CA 19-9 levels may serve as a diagnostic marker for further evaluation of CTC levels in portal blood.
INTRODUCTION
Colorectal cancer (CRC) is one of the most commonly diagnosed cancers and a leading cause of cancer death worldwide [1]. Metastasis is the leading cause of CRC-related mortality, responsible for about 90% of CRC patient deaths [2]. About 50% of CRC patients have synchronous (15%~20%) [3,4] or metachronous liver metastases (20%~30%) [5]. Compared with the overall CRC 5-year survival rate of 65% and 10-year survival rate of 58%, the 5-year relative survival rate in non-metastatic CRC patients (39% of cases) is about 90% [6]. When patients are diagnosed at late stages, with colorectal liver metastases (CRLM), the 5-year progression-free survival (PFS) and 5-year overall survival (OS) rates decrease dramatically [7,8]. The lethal factor in CRC prognosis is thus metastasis, especially liver metastasis.
Circulating tumor cells (CTCs) contribute to metastases by being released into the blood from primary tumors [9]. The Veridex CellSearch system is the only CTCs detection method approved by the U.S. FDA and Chinese CFDA for clinical CTCs detection. The Veridex CellSearch system captures CTCs using magnetic beads coated with an epithelial cell adhesion molecule antibody (anti-EpCAM); the CTCs are then identified using cytokeratin (CK) 8/18/19+/DAPI+/CD45- staining. Detection of CTCs using the CellSearch system has been used as a clinical marker for prostate cancer [10], metastatic breast cancer [11], and colon cancer [12]. However, the CTCs-positive rates using the CellSearch system are low. For example, the median CTC count was 0 in 7.5 mL of peripheral blood of 413 metastatic CRC patients [12]. In addition, CTCs were barely detectable using the CellSearch System in non-metastatic CRC patients [13]. Even with the low detection rates, CTC is still the strongest prognostic factor in non-metastatic CRC patients [14,15], and the CellSearch System remains the only method for CTCs detection approved by the US FDA and Chinese CFDA. Thus, a more sensitive and accurate CTCs detection method using the CellSearch system is urgently needed for CRC patients, particularly for non-metastatic CRC patients [13].
[Published in Oncotarget, 2017, Vol. 8, No. 40, pp. 67918-67932]
In the past, CTCs have been isolated almost exclusively from peripheral blood. Thus, the low detection rate might have been caused by an uneven release of CTCs into the circulatory system and their uneven distribution. Indeed, in pancreatic cancer, studies have shown higher CTC numbers in portal vein blood than in peripheral blood [16,17]. In addition, portal blood CTCs-positive patients had a higher liver metastasis rate than CTCs-negative patients after 3 years of follow-up [18]. Like pancreatic cancer liver metastasis, colorectal liver metastasis (CRLM), the most frequent CRC metastatic site, occurs through the portal vein [19]. Tumor drainage (mesenteric) blood and portal blood of CRC patients had higher rates and numbers of CTCs than peripheral blood [20]. Furthermore, in CRLM patients, hepatic venous (HV) CTC counts >3, but not peripheral vein (PV) counts, were associated with shorter PFS and OS [21]. However, there have been few studies comparing CTCs detected in portal venous blood versus peripheral blood in CRC patients, and the relationship between CTCs and clinicopathological serum CRC markers is not known.
In this study, we investigated the distribution of CTCs in peripheral and portal blood of CRC patients, and we analyzed the relationship between serum tumor markers and CTCs counts in peripheral and portal blood.
Study population
From December 2015 to January 2017, 101 patients were enrolled prospectively into the study. The patients were divided into three groups: un-paired non-metastatic CRC patients (UP, n = 77; 42 patients were analyzed using peripheral blood and 35 using first reflux vein blood), paired CRLM patients (n = 14), and paired non-metastatic CRC patients (NM, n = 10). The clinicopathological features of the patients are listed in Tables 1 and 2.

CTCs detection in first reflux vein blood is more sensitive than in peripheral blood in un-paired non-metastatic (UP) patients

42 CRC patients had peripheral vein blood collected preoperatively for CTCs detection (Table 1). Consistent with previous studies, the number of CTCs detected in peripheral blood of CRC patients by the CellSearch system was low (Figure 1A): only 7% of patients (3/42) had detectable CTCs.
We wanted to evaluate whether CTCs detection in the portal vein was more sensitive. To avoid the influence of drainage blood from the superior mesenteric vein, splenic vein, and other reflux veins, we collected blood samples from the first branch vein belonging to the primary lesion. First reflux vein blood was collected from the ileocolic vein for ascending colon or hepatic flexure cancer (Figure 2A), the middle colic vein for transverse colon cancer (Figure 2B), the left colic vein (upper or lower branch) for descending colon or splenic flexure cancer (Figure 2C), the sigmoid vein for sigmoid colon cancer patients (Figure 2D), and the superior rectal vein for rectal cancer patients (Figure 2E). The first reflux vein was isolated and the blood sample was collected by bloodless dissection to prevent tumor cells and epithelial cells of the tumor bed from contaminating the circulation. Then, 10 mL of portal vein blood was collected intraoperatively from the first reflux vein of 35 CRC patients during CRC resection (Table 1), and 7.5 mL of this portal blood was analyzed for CTCs with the CellSearch system.
CTCs detection in first reflux vein blood (16/35) (Figure 1B) was more sensitive than in peripheral venous blood (3/42) (Figure 1A). Compared with 7% in peripheral vein blood, the CTCs detection rate in first reflux vein blood was 46% (P<0.0001, two-sided Fisher's exact test, ORs=11.32) (Figure 1C). Consistent with the high CTCs-positive rate, CTCs in first reflux vein blood were detected at a significantly higher number than in peripheral vein blood (mean 2.77 vs 0.24, P<0.0001, two-tailed Mann-Whitney test) (Figure 1D). These findings suggest that CTCs detection in first reflux vein blood is more sensitive than in peripheral vein blood in un-paired non-metastatic (UP) patients.
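The rate comparison above can be reproduced from the reported counts (16/35 positive in first reflux vein blood vs 3/42 in peripheral blood); note that SciPy's `fisher_exact` returns the sample odds ratio (about 10.9 here), whereas the reported ORs=11.32 presumably comes from a different estimator:

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = blood source, columns = CTCs-positive / CTCs-negative.
table = [[16, 35 - 16],   # first reflux vein blood
         [3, 42 - 3]]     # peripheral vein blood
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
```

The same construction applies to the other group comparisons reported below, with the corresponding positive/total counts substituted into the table.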
CTCs detection in first reflux vein blood is more sensitive than in peripheral blood in paired CRLM patients
As described above, CTCs detection in first reflux vein blood was more sensitive than in peripheral vein blood, but the un-paired comparison is not conclusive evidence. Thus, peripheral blood and first reflux vein blood were analyzed in 14 paired colorectal cancer liver metastases (CRLM) patients (Table 1).
CTCs were detected at a higher rate (ORs=15.04) (Figure 3A) and at a significantly higher number (mean 12.43 vs 1.57, P=0.0024, two-tailed Wilcoxon signed-rank test) (Figure 3B) in first reflux vein blood than in peripheral venous blood of the 14 CRLM patients. Next, we analyzed the change in CTC counts between peripheral blood and first reflux vein blood in the paired CRLM patients: the CTC counts decreased in peripheral blood compared with first reflux vein blood (Figure 3C). Combined analysis of UP and CRLM patients also showed that CTCs detection in first reflux vein blood was more sensitive than in peripheral venous blood (28 [57%] vs 7 [13%], P<0.0001, two-sided Fisher's exact test, ORs=8.871) (Figure 3D), with higher CTC numbers (mean 5.53 vs 0.57, P<0.0001, two-tailed Mann-Whitney test) (Figure 3E). These findings indicate that CTCs detection in first reflux vein blood is more sensitive than in peripheral blood in CRLM and UP patients.
CTCs detection in first reflux vein blood is more sensitive than in peripheral blood in paired non-metastatic (NM) patients
To determine whether CTCs detection is more sensitive in first reflux vein blood than in peripheral blood of NM CRC patients, 10 NM patients were analyzed (Table 2). As expected, CTCs were detected at a higher rate (7 [70%] vs 2 [20%], P<0.0001, two-sided Fisher's exact test, ORs=9.333) (Figure 4A) and at a significantly higher number (mean 11.3 vs 0.2, P=0.0223, two-tailed Wilcoxon signed-rank test) (Figure 4B) in first reflux vein blood than in peripheral venous blood. The variation in CTC counts between peripheral blood and first reflux vein blood was also analyzed in all paired NM patients: the CTC counts decreased in peripheral blood compared with first reflux vein blood (Figure 4C). Combined analysis of CTC levels in all three groups (UP, CRLM, and NM) also showed that CTCs detection in first reflux blood was more sensitive (35 [59%] vs 9 [14%], P<0.0001, two-sided Fisher's exact test, ORs=8.840) (Figure 4D), with higher CTC levels (mean 6.51 vs 0.52, P<0.0001, two-tailed Mann-Whitney test) (Figure 4E) than in peripheral blood. These findings indicate that CTCs detection in first reflux vein blood is more sensitive than in peripheral blood in UP, CRLM, and NM patients, suggesting that clinical diagnosis using the CellSearch System should be based on CTC detection in first reflux vein blood.
CTCs amounts are not associated with primary cancer position or TNM stage in CRC patients
Because of the differences between colon and rectal cancer, we investigated whether CTCs in peripheral or portal blood were associated with the position of the primary cancer; however, there was no statistically significant difference between colon and rectal cancer. In the peripheral blood of UP patients, only 1 of 21 colon cancer patients and 2 of 21 rectal cancer patients were CTCs-positive. In the first reflux vein blood of UP patients, 7 of 12 colon cancer patients and 9 of 23 rectal cancer patients were CTCs-positive. Similar results were found in the paired patients. In CRLM patients, 1 of 7 colon cancer patients and 3 of 7 rectal cancer patients were CTCs-positive in peripheral blood, whereas 6 of 7 colon cancer patients and 6 of 7 rectal cancer patients were CTCs-positive in first reflux vein blood. In NM patients, 1 of 3 colon cancer patients and 1 of 7 rectal cancer patients were CTCs-positive in peripheral blood, whereas 2 of 3 colon cancer patients and 5 of 7 rectal cancer patients were CTCs-positive in first reflux vein blood.
Next, we analyzed the relationship between CTCs and TNM stage. However, there was no statistical difference. Together, our results indicate that the amounts of CTCs are not associated with primary cancer position or TNM stage in CRC patients.
High CEA/CA19-9 levels indicate high CTCs levels both in peripheral and first reflux vein blood in CRC patients
To identify reliable and non-invasive prognostic markers for CTCs, we analyzed the relationship between CTCs and two traditional serum tumor markers, carcinoembryonic antigen (CEA) and carbohydrate antigen (CA) 19-9. Peripheral blood was collected preoperatively for CEA and CA 19-9 analysis. In peripheral blood, all CTCs-positive patients had high CEA levels (mean 195.77 vs 50.11, P=0.0089, two-tailed Mann-Whitney test) and high CA19-9 levels (mean 264.74 vs 90.52, P=0.1325, two-tailed Mann-Whitney test) (Figure 5A). Although CTC counts did not correlate with CEA/CA19-9 levels in first reflux vein blood, CTCs-positive patients had high concentrations of CEA (mean 93.54 vs 17.90, P=0.5021, two-tailed Mann-Whitney test) and CA19-9 (mean 183.43 vs 30.14, P=0.2766, two-tailed Mann-Whitney test) (Figure 5B).
DISCUSSION
In this study, we compared CTC levels in portal and peripheral blood in 101 Chinese CRC patients divided into three groups: un-paired non-metastatic CRC patients (UP, n = 77), paired CRLM patients (n = 14), and paired non-metastatic CRC patients (NM, n = 10). Consistent with previous studies, CTC levels were significantly higher in portal vein than in peripheral blood in UP (Figure 1), CRLM (Figure 3), and NM (Figure 4) patients. Moreover, 57% of CRLM patients (Figure 3C) and 50% of NM patients (Figure 4C) were CTCs-positive only in the portal vein. Among CTCs-positive patients, 67% of CRLM patients (Figure 3C) and 71% of NM patients (Figure 4C) were positive only in the portal vein; these patients would be missed by analyzing peripheral blood alone. No patient was CTCs-positive in peripheral blood but negative in portal blood among the paired CRLM (Figure 3C) and paired NM patients (Figure 4C). These findings provide new evidence that CTCs detection in first reflux vein blood is more sensitive than in peripheral blood in UP, CRLM, and NM patients. Further analysis revealed that the CTC counts decreased in peripheral blood compared with first reflux vein blood in CRLM (Figure 3C) and NM patients (Figure 4C), providing new direct evidence that the liver reduces the amount of CTCs. These results suggest that CTC levels in CRC patients should be analyzed in first reflux vein/portal vein blood rather than in peripheral blood.
The high sensitivity of CTCs detection in first reflux vein blood may be attributed to two factors. First, CTCs from the primary tumor site are released into the first reflux vein/portal vein as the "seeds of metastases" [19], resulting in increased CTC levels in first reflux vein/portal vein blood. Second, the liver, as the unique organ through which portal blood must pass before reaching the peripheral circulation, may serve as a gatekeeper or filter of CTCs released into the peripheral veins [22]. However, the serum liver markers AST/ALT did not differ between CRLM patients and non-metastatic CRC patients (data not shown). Given the small sample size and the low accuracy of the AST/ALT assay, the relationship between liver lesions and CTCs in peripheral blood should be confirmed in more paired patients with better molecular markers of liver lesions.
A recent study has revealed that high CTCs counts in portal, but not in peripheral blood, are a significant prognostic predictor for liver metastases and DFS/OS [21]. However, CTCs in portal blood have few applications in clinical practice since portal blood samples cannot be easily obtained before surgery, and the clinical CTCs detection is still expensive. To develop a non-invasive, reliable, and affordable CTCs marker, we analyzed the relationship between traditional serum tumor markers and CTCs in CRC patients.
Serum tumor markers such as carcinoembryonic antigen (CEA) and carbohydrate antigen (CA) 19-9 are widely used for cancer detection in clinical practice [23,24]. CEA and CA 19-9 have a prognostic role in several cancers, including gastric, pancreatic, bile duct, and bladder cancer and CRC [25][26][27][28][29]. High serum CEA levels correlate with the prognosis of CRC patients [30,31]. As recommended by the American Society of Clinical Oncology, CEA levels should be measured after curative surgery for recurrence surveillance in patients with stage II and III CRC [32][33][34]. Previous studies have demonstrated that the preoperative serum CA 19-9 level is a prognostic indicator in CRC patients [35][36][37]. CA 19-9 correlates with tumor cell-induced platelet aggregation [38] and with adhesion of tumor cells to the endothelial cells of blood vessels [39], thus contributing to the distant metastasis of CRC. In pancreatic cancer, CTCs-positive patients had higher CEA [18] and CA 19-9 levels than CTCs-negative patients. Increased serum levels of CEA and CA19-9 were associated with detection of CTCs in the peripheral blood of stage IV CRC patients [20]. Preoperative CEA and CA 19-9 levels were associated with CTCs-positivity in peripheral blood and with shortened PFS/OS in CRC patients [14]. A high tumor burden in the liver and high baseline serum CEA levels were associated with high CTC counts in stage IV CRC patients [41]. However, no study has investigated the possible relationship between portal blood CTCs and the serum tumor markers CEA/CA19-9 in the peripheral blood of CRC patients.
We found that CTCs-positive patients had higher CEA/CA19-9 levels both in peripheral blood (Figure 5A) and in first reflux vein blood (Figure 5B), and that patients with high CEA/CA19-9 levels had a higher CTCs-positive percentage both in peripheral blood (Figure 5C) and in first reflux vein blood (Figure 5D). Our results indicate that high CA19-9 levels in peripheral blood may be used as a marker of CTCs in the portal blood of CRC patients. Furthermore, CTCs-positive patients had increased CA19-9 levels regardless of CEA levels (Figure 6D), suggesting that CA19-9 levels alone may serve as a potential marker for high CTCs levels in non-metastatic CRC patients.
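As a sketch of the kind of association test used for these comparisons, the link between dichotomized CA19-9 status and CTCs positivity can be checked with a two-sided Fisher's exact test. The 2x2 counts below are purely illustrative, not the study's data:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (NOT the study's data):
# rows = CA19-9 status (high, normal); cols = CTC status (positive, negative)
table = [[18, 7],
         [9, 16]]

# Two-sided Fisher's exact test; the statistic is the sample odds ratio.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

An odds ratio well above 1 with a small p-value would support an association between high CA19-9 and CTCs positivity in such a cohort.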
Our results show that CTCs detection in first reflux vein/portal vein blood is more sensitive than in peripheral blood. Furthermore, our results suggest that high CA 19-9 levels may serve as an early marker for selecting patients for CTCs analysis in portal blood. These findings open a new path for CTCs detection in CRC patients. In the clinic, analysis of CTCs in peripheral blood should be avoided in CRC patients because of the low detection rate and high cost. CRC patients with high CA 19-9 levels should instead be tested for CTCs levels in first reflux blood. Inexpensive and convenient traditional serum testing of CA 19-9 levels may thus signal CTCs in first reflux/portal vein blood.
The chief limitation of our study is the small sample size. In the future, we plan to investigate the correlation of portal CTCs positivity with liver metastases, recurrence-free survival, and overall survival. The mechanisms by which the liver removes most CTCs from portal vein blood should also be investigated. Finally, a large-scale study with subgroup analysis is needed to confirm that peripheral serum tumor markers are related to portal CTCs and liver metastases.
Patients' recruitment
This single-institution study prospectively recruited patients meeting the following criteria: a confirmed diagnosis of colorectal cancer at any stage and complete clinical, pathological, and biochemical test results. Patients who had received any cancer-related treatment (including blood transfusion, preoperative radio-chemotherapy, or immunotherapy) within 1 month before blood sample collection and tumor detection were excluded, as were those who underwent emergency surgery or surgery for recurrent disease. Furthermore, we excluded patients with a history of another malignancy diagnosed or treated within the past 5 years. Patients were recruited into three groups: un-paired non-metastatic CRC patients (UP group, n = 77; 42 patients were analyzed using peripheral blood and 35 using portal blood), paired CRLM patients (CRLM group, n = 14), and paired non-metastatic CRC patients (NM group, n = 10).
Enrollment of all patients in this study was approved by the Ethics Committee. Written informed consent was obtained and signed by all patients prior to sample collection.
Clinical and pathological data recording
Clinical and pathological data, including primary tumor pathological characteristics, were collected by reviewing electronic records. CEA and CA19-9 serum levels were measured using the Roche Elecsys 2010 system. Imaging diagnosis of liver or other organ metastases was made in a multidisciplinary conference. The clinical TNM stage of patients was determined according to the AJCC 7th edition criteria.
Blood sample collection and CTCs counting
Before surgery, 7.5 mL of blood was drawn from the forearm peripheral vein and/or the first reflux vein (portal vein) of each CRC patient. Collected blood samples were immediately transferred to CellSave® preservative tubes (Janssen Diagnostics, LLC, Raritan, NJ, USA) and analyzed within 3 days on the CellSearch System, according to the standard CellSave® protocol, using the CTC Epithelial Cell Kit (Veridex). The first reflux vein was isolated and the blood sample was collected by bloodless dissection to prevent tumor cells and epithelial cells from the tumor bed from contaminating the circulation.
Statistical methods
All statistical analyses were performed with GraphPad Prism 5.0. Values are expressed as means ± SEM. The two-sided Fisher's exact test was used to compare ratios, and continuous variables were analyzed using the two-tailed Wilcoxon signed-rank test for paired patients or the two-tailed Mann-Whitney test for unpaired patients. A probability value (p) < 0.05 was considered significant. Sensitivity and specificity calculations were performed using IBM SPSS Statistics version 19.0.0 (IBM Corporation, NY).
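The paired and unpaired comparisons described above can be sketched with SciPy. The study itself used GraphPad Prism; the CTC counts below are hypothetical and only illustrate the choice between the Wilcoxon signed-rank test (same patients sampled at both sites) and the Mann-Whitney test (independent groups):

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

# Illustrative CTC counts per 7.5 mL (hypothetical, NOT the study's data):
portal     = np.array([5, 8, 3, 6, 9, 4, 7, 5, 6, 2, 8, 5, 4, 7])  # portal-vein samples
peripheral = np.array([1, 2, 0, 1, 3, 0, 2, 1, 1, 0, 2, 1, 0, 2])  # peripheral samples

# Paired patients (both sites sampled in the same patient): Wilcoxon signed-rank.
stat_w, p_w = wilcoxon(portal, peripheral)

# Unpaired comparison (treating the arrays as independent groups): Mann-Whitney U.
stat_u, p_u = mannwhitneyu(portal, peripheral, alternative="two-sided")

print(f"Wilcoxon p = {p_w:.5f}, Mann-Whitney p = {p_u:.5f}")
```

Both tests are two-tailed here, matching the analysis described in the text.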
New Trends in Emerging Novel Nanosponges Drug Delivery
Medical professionals have long had trouble delivering medications to the correct location in the body and controlling their release to prevent overdose. Nanosponges are novel, complex carriers that can alleviate these challenges. Utilizing nanosponges allows for precise drug delivery to the desired location. Nanosponge technology improves patient compliance by delivering drugs to strategic locations while also extending dosing intervals. Various drug delivery routes, such as parenteral, transdermal, oral
INTRODUCTION
Nanosponges are approximately the size of a virus, with a 'backbone' (a scaffolding structure) made of biodegradable polyester. Cross-linkers are tiny molecules that attach to specific regions of the polyester and 'cross-link' polyester portions into a spherical shape containing multiple drug pockets (cavities). The polyester biodegrades in a predictable way, so it breaks down steadily in the body and releases its drug payload in a predictable pattern. These microscopic sponges can move through the body until they come into contact with the intended target, where they adhere to the surface and begin to release the medication. (1)

Background

Nanosponges are a novel class of substances composed of microparticles with cavities a few nanometers in size that can encapsulate a variety of compounds. These particles can carry both lipophilic and hydrophilic compounds and enhance the solubility of poorly water-soluble compounds. Nanosponges comprise small mesh-like structures with the potential to revolutionize the effective treatment of disease. Preliminary studies showed that this technology was nearly five times more effective than traditional techniques of administering medications for breast cancer. Poly(isobutylcyanoacrylate) (IBCA) nanocrystals are also used to enclose nanoparticles; they can effectively absorb drug molecules in their hydrophilic core. The second group consists of complex nanoparticles that attract molecules through electrostatic interaction. Conjugated nanoparticles, the third type, form covalent connections with drugs. These nanosponges are a new type of nanoparticle created from naturally occurring compounds. Compared to other nanoparticles, they are insoluble in both organic and aqueous solvents, porous, non-toxic, and resistant to elevated temperatures of up to 300 °C. Because of their three-dimensional structure, which comprises cavities of nanometric size and variable polarities, they can acquire, transfer, and controllably release a wide range of molecules. (2)

Key Properties of Nanosponges

Nanosponges are available in a variety of sizes (1 µm or even less) and polarities (3). By altering the cross-linker-to-polymer proportion, nanosponges of variable sizes and polarities can be formed (3). Depending on the process parameters, they may be crystalline or paracrystalline. The crystal structure of nanosponges is critical to their drug complexation: the drug encapsulation capacity is largely determined by the degree of crystallization, and the loading capacity of paracrystalline nanosponges varies (4). They are harmless, porous particles, insoluble in most polar solvents, that can withstand temperatures of up to 300 °C (5). Their three-dimensional structure makes it possible to collect, transfer, and precisely release a wide variety of compounds. Because various functional groups can be attached to them, they can be targeted to different environments; chemical linkers allow nanosponges to attach preferentially to specific sites. With various medications, they form inclusion and non-inclusion complexes (6). This technology offers minimized adverse effects and the ability to entrap a wide range of substances.
Advantages & Disadvantages of Nanosponges

• Nanosponges can be paracrystalline or crystalline in nature.
• They offer greater elegance, better formulation flexibility, and good stability.
• The degree of crystallisation has a considerable impact on the load-bearing capacity of nanosponges.
• The loading capacity of paracrystalline nanosponges can vary.
[Flow scheme: ethyl cellulose + drug dissolved in dichloromethane → added to the aqueous phase containing polyvinyl alcohol → stirred at 1,000 rpm for 2 h → nanosponges recovered by filtration and oven-dried for 24 h]
Polymer
The polymer selected can affect the manufacture and performance of nanosponges. The cavity must be large enough to accommodate the drug molecule. The polymer is chosen based on the desired release profile and the drug to be encapsulated, and it must be able to bind to the selected ligands. (9)

Agent for cross-linking

Cross-linking agents are chosen on the basis of the polymer structure and the drug to be formulated. Diaryl carbonates, dichloromethane, diphenyl carbonate, and diisocyanates are some examples. (7,8)

Drug component

• Molar mass in the range of 100 to 400 Daltons.
• No more than five condensed rings in the drug molecule.
• Solubility in water lower than 10 mg/mL.
• Melting point below 250 °C.

Procedure for the Preparation of Nanosponges

Emulsion Solvent Diffusion Method
Nanosponges can be made using different quantities of ethyl cellulose (EC) and polyvinyl alcohol (PVA). The dispersed phase, consisting of ethyl cellulose and the drug dissolved in 20 mL of dichloromethane, was added gradually to 150 mL of an aqueous continuous phase containing a specific quantity of polyvinyl alcohol.

The reaction mixture was stirred at 1,000 rpm for two hours. The formed nanosponges were recovered by filtration and dried in an oven at 40 °C for 24 hours. To ensure that all residual solvent was eliminated, the dried nanosponges were kept in a vacuum desiccator.
Ultrasound-Assisted Synthesis
Nanosponges can be produced by combining polymers and cross-linkers under sonication, without the use of a solvent; this process yields spherical, uniformly sized nanosponges. The polymer is combined with an appropriate solvent, such as a polar aprotic solvent.

This composition is added to a larger amount of the cross-linker, at a target cross-linker-to-polymer molar ratio of 1:4.

The reaction is carried out for 1 to 48 hours at 100 °C, the reflux temperature of the solvent.

Once the reaction is finished and the solution has cooled to room temperature, a sizable amount of distilled water is added to the end product.

Loading of Drug into Nanosponges (14,15,16)

Pre-treatment of nanosponges is required to achieve mean particle sizes below 500 nm for drug delivery. To avoid the formation of aggregates, the nanosponges are sonicated in water and the suspension is centrifuged to obtain the colloidal fraction. The supernatant is separated and freeze-dried. An aqueous suspension of the nanosponges is then prepared, excess drug is dispersed in it, and the suspension is stirred continuously for the duration of the complexation process. After complexation, the uncomplexed (undissolved) drug is separated from the complexed drug by centrifugation. Solid crystals of drug-loaded nanosponges can then be obtained by solvent evaporation or freeze-drying. The crystal structure of the nanosponges is important for drug complexation: one study found that paracrystalline and crystalline nanosponges have distinct loading capacities, with crystalline nanosponges achieving higher drug loading. In weakly crystalline nanosponges, drug loading occurs mechanically rather than through inclusion-complex formation.

Mechanism of Drug Release

1. The active ingredient is incorporated into the vehicle in encapsulated form because there is no continuous membrane around the nanosponges.
2. The encapsulated active ingredient can travel freely from the particles into the vehicle until the vehicle is saturated and equilibrium is reached.

3. When the product is applied to the skin, the vehicle becomes unsaturated, disturbing this equilibrium. An active flow therefore starts from the sponge particles into the vehicle and onto the skin, continuing until the vehicle has dried or been absorbed. The release of the active ingredient continues for a considerable time even when the nanosponge particles are removed from the stratum corneum of the skin.

CHARACTERIZATION OF NANOSPONGES [45-50]

Inclusion complexes formed between the drug and nanosponges can be characterized by the following methods.
CONCLUSIONS

Nanosponges can deliver a drug to the desired site in a regulated manner and can transport both lipophilic and hydrophilic compounds. Because of their small particle size and spherical shape, they can be formulated as oral, parenteral, and topical treatments. Nanosponge technology entraps chemicals, resulting in fewer adverse effects, higher stability, greater elegance, and increased formulation flexibility. Nanosponges can be efficiently incorporated into a topical drug delivery system for dosage-form retention on the skin, and can also be used for oral drug delivery with bioerodible polymers, particularly for colon-specific delivery and controlled-release drug delivery systems. Thus, nanosponge technology delivers drugs to particular locations while also extending dosing intervals, improving patient compliance. The formulation of nanosponges may be the best approach for resolving various nano-related challenges in the pharmaceutical business. (2) With major discoveries and new scientific challenges, the topic of nanosponges continues to gain attention. Nanosponges are used in a variety of drug delivery systems, including oral, topical, intravenous, and immunosuppressive applications. Nanosponge particles can also be used in targeted drug delivery systems acting through the lungs, liver, and spleen. Some approaches can also be used to direct nanosponges to disease sites, such as Crohn's disease, autoimmune disease, and cancers affecting distinct organs or tissues.
• Particle size and polydispersity index: laser light scattering technique
• Surface charge: zeta potential measurement
• Loading efficiency: quantitative estimation of drug loaded into nanosponges
• Drug release: in vitro diffusion cell, dialysis bag
• Interaction between nanosponges and drug molecules: infrared spectroscopy
• Inclusion complexation in the solid state: X-ray diffractometry
• Complex formation between the drug and nanosponges: thin-layer chromatography

Nanosponges are now utilised in gastro-retentive medication delivery devices.
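The loading (entrapment) efficiency mentioned above is commonly expressed as the percentage of the added drug that ends up complexed with the nanosponges, with the uncomplexed drug quantified after the centrifugation step. A minimal sketch of this generic calculation, using hypothetical masses (the formula is a common convention, not one stated in this review):

```python
def entrapment_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
    """Percent of the added drug complexed with the nanosponges.

    free_drug_mg is the uncomplexed drug recovered after
    centrifugation (hypothetical assay values).
    """
    entrapped = total_drug_mg - free_drug_mg
    return 100.0 * entrapped / total_drug_mg

# Hypothetical batch: 50 mg drug added, 12 mg recovered uncomplexed.
print(f"{entrapment_efficiency(50.0, 12.0):.1f}%")  # → 76.0%
```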
Figure 1: Classification of nanoparticles using the drug-associating approach (1). Nanosponges and nanocapsules are examples of the first category: nanosponges, such as alginate nanosponges, are nanoparticles resembling sponges with numerous pores that carry drug molecules, and poly(isobutylcyanoacrylate) (IBCA) nanocrystals are also used to enclose nanoparticles, absorbing drug molecules in their hydrophilic core. The second group consists of complex nanoparticles that attract molecules through electrostatic interaction. The third type, conjugated nanoparticles, form covalent connections with drugs.
Table 3: The use of antifungal drug-loaded nanosponges and its disadvantages
The Relative Age Effect and Talent Identification Factors in Youth Volleyball in Poland
Previous studies in team sports have not reported evidence regarding the relative age effect (RAE) in relation to the talent identification (TI) process in volleyball, which is organized and controlled by a national federation. Volleyball is a non-contact team sport in which a player’s physique does not directly affect other players in the game but is considered one of the most critical factors in the TI process. The aims of the present study were (1) to determine the differences in the quarterly distribution of age between Polish youth volleyball players from the Olympic Hopes Tournament (OHT) and the general population, (2) to investigate the quarterly differences in anthropometric characteristics and motor test results in OHT participants, and (3) to identify the criteria that determine selection for the National Volleyball Development Program (NVDP). The present study identified the RAE in young male (n = 2,528) and female (n = 2,441) Polish volleyball players between 14 and 15 years of age who competed in the elite OHT in 2004–2015. The study included anthropometric characteristics, motor test results, and selection for the NVDP. The multivariate analysis of covariance demonstrated no significant main effect for birth quarter or calendar age in any of the OHT female players or in male players selected for the NVDP. In the group of non-selected NVDP male players, the analysis demonstrated significant differences by birth quarter as a covariate for body height (F = 0.01, p < 0.001), spike reach (F = 7.33, p < 0.05), and block jump (F = 0.02, p < 0.001). Significant differences by calendar age as a covariate were observed for body mass (F = 0.53, p < 0.01), spike jump (F = 2.64, p < 0.05), block jump (F = 0.4, p < 0.01), and zigzag agility test results (F = 0.01, p < 0.01). The results showed a significant overrepresentation of early-born participants in the OHT and NVDP subsamples.
The classification model demonstrated that a combination of four characteristics optimally discriminated between players selected for the NVDP and those who were not selected. This combination of variables correctly classified 77.7% of the female players and 71.8% of the male players in terms of their selection for the NVDP. The results of this study show that jumping ability and body height are crucial in the TI and selection process in youth volleyball.
INTRODUCTION
The requirements of youth sports lead to the age banding of players into training groups and teams; sports administrators age-band players into training groups relative to cutoff dates (e.g., the start and end of the calendar year; Cobley et al., 2009). The assessment of players by trainers during the talent identification (TI) process can be disrupted by differences in the players' biological development (Ramos et al., 2019) and sociological factors (Hancock et al., 2013). Players born closer to the starting point of their age group relative to their peers may be older by as much as 2 to 5 years (Johnson et al., 2017), and the selection of more mature and stronger players will result in an overrepresentation of players born in the first part of the selection period (e.g., quarter). As a consequence, in youth ball sports, later-born and less mature players are strongly underrepresented, especially at the elite level (Hill and Sotiriadou, 2016). This phenomenon is a well-documented selection bias and is known as the relative age effect (RAE; Musch and Grondin, 2001).
The presence of an RAE has been observed at the senior and youth levels in the following contact team sports: basketball (Arrieta et al., 2016;Werneck et al., 2016), soccer (González-Víllora et al., 2015;Skorski et al., 2016), and handball (Schorer et al., 2009). In contrast, the RAE was not found in other team sports such as rugby (Jones et al., 2018) and water polo (Barrenetxea-Garcia et al., 2018). In line with this, the findings in the existing literature on the RAE in contact team sports have so far been inconsistent. Nevertheless, it is reported that discrimination against players born in the last quarter of a calendar year differs depending on the position, gender, and age of the player (Salinero et al., 2013;Lidor et al., 2014) and the expertise level (Praxedes et al., 2017). Volleyball, however, is a non-contact team sport in which a player's physique does not directly affect other players in the game. It was reported that more than two-thirds of all points scored in volleyball are due to short dynamic bouts that mainly depend on players' vertical jump and body height (Silva et al., 2014). Interestingly, only a few works have considered the RAE in terms of birth-date discrimination in volleyball. An overrepresentation of players born in the first quarter of the year compared to other quarters was observed in a group of young male and female players and in players in the under-19 to under-23 age groups in the men's World Volleyball Championship (Okazaki et al., 2011;Nakata and Sakamoto, 2012;Campos et al., 2016). In addition, the RAE in volleyball has been identified in school competitions (Reed et al., 2017). Research by Lupo et al. (2019) emphasizes the different nature of the RAE in volleyball compared to other elite team sports in Italy.
Considering the aforementioned, it is clear that the RAE manifests itself in such team games according to the physical characteristics of the player. Previous studies about the potential advantage in physical and motor abilities of early-born players over their counterparts were carried out mostly in other team sports. For example, in youth soccer, possible differences in biological maturation and anaerobic characteristics were observed between players born in the first and fourth quarters of the year (Deprez et al., 2013). Nevertheless, a pilot study by Papadopoulou et al. (2019) showed no quarterly differences in anthropometric and physiological characteristics in youth female volleyball players. In contrast, late-born youth basketball players have a "double disadvantage" in body height compared to their peers (Rubajczyk et al., 2017). In addition, advanced maturity status and being relatively older affected players' game-related specific fitness (Duarte et al., 2019). However, the RAE has not been thoroughly explored in volleyball, especially with regard to the TI process.
The TI process in volleyball may be challenging for practitioners. In general, successful discrimination between talented and untalented-identified junior volleyball players is multidimensional and is based on the assessment of skill attributes, a tactical understanding of the game (Jager and Schollhorn, 2007), or game intelligence (Rikberg and Raudsepp, 2011), perceptual-cognitive skills (Alves et al., 2013), motor abilities, and anthropometric and physical characteristics (Marcelino et al., 2014). Despite this, body height is considered a key criterion in the TI process used to assess youth players (Aouadi et al., 2012;Carvalho et al., 2020). Thus, the failure to estimate the adult body height of an athlete will significantly hinder the effective TI process in volleyball (Baxter-Jones et al., 2020). In addition, maturity-associated variation in performance (Sandercock et al., 2013), and sex differences in the onset of puberty (Malina, 2014;Kwieciński et al., 2018) may indicate an ineffective TI process and maintain the existence of the RAE phenomenon in youth sports. Furthermore, in a non-contact team sport such as volleyball, earlier age at the start of peak height velocity and player body height may not be important performance factors but can be decisive factors in TI.
An example of the TI process in volleyball, which is organized and controlled by a national federation, is the Olympic Hopes Tournament (OHT). The OHT, which was first organized in 2004, exemplifies the difficulty of identifying talent in the pool of youth players. This event is organized by the Polish Volleyball Federation (PVF) for elite 14-year-old (born in the corresponding calendar year) Polish male and female players. Tournament participants represent 16 Polish voivodeships and are previously selected via regional PVF divisions. Unfortunately, players' data from their regional PVF clubs before selection for OHTs are not available. Eight of the 12 players who qualify for the OHT from each voivodeship are obligated to meet the minimum body height requirements: 185 cm for male players and 175 cm for female players. All teams play three matches at the group stage and one or more matches in the knockout phase. The PVF sets the net height at 243 cm for boys and 223 cm for girls. During the tournament, experienced PVF coaches assess the players separately by gender and identify the players who will be offered full-time scholarships by the National Volleyball Development Program (NVDP). The final result of the tournament is the selection of male and female players for the NVDP. To the best of our knowledge, there are no reports related to the determination of the RAE or TI factors in youth volleyball tournaments similar in scale to the OHT.
Therefore, the aims of the present study were (1) to determine the differences in the quarterly age distribution between Polish youth volleyball players in the OHT and the general population, (2) to investigate the quarterly differences in anthropometric characteristics and motor test results in OHT participants and (3) to identify the criteria that determine selection for the NVDP. We hypothesized that the players selected for the NVDP would exhibit a taller body height and higher jumping ability than the unselected players would. We also hypothesized that the RAE would be most apparent in the group of males and females selected for the NVDP because of the significant role of player height in volleyball.
Data Collection
This study included 2,528 male (aged 14.51 ± 0.32 years) and 2,441 female (aged 14.48 ± 0.31 years) players who participated in the OHT in 2004-2015 and were selected from the official database of the PVF. The data obtained were dates of birth, anthropometric characteristics, and the results of fitness tests. Data on differences in the quarterly distribution of birth dates in the Polish population (PP) were obtained from the Central Statistical Office. These data corresponded to the birth dates of the players who participated in the OHT (1989-2001). In the PP, there was no significant difference in the shape of the relative quarterly distribution of age among the studied years. All data were obtained according to the Data Protection Act in Poland, and all procedures were approved by the Research Ethics Committee of the University School of Physical Education in Wrocław.
Procedures
To determine the quarterly birth distribution, birth-date data were listed according to the four quarters of the calendar year: Q1 (January-March), Q2 (April-June), Q3 (July-September), and Q4 (October-December). The birth dates of the male and female populations in Poland between 1989 and 2004, which correspond to the birth dates of the players participating in the OHT, were similarly arranged. The OHT competition lasted 3 days: day 1-data collection, anthropometric measurements, and motor tests; day 2-group stage and quarter-final matches; and day 3-semifinal and final matches. The day after the final OHT match, a list of players nominated for the NVDP was published.
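The quarterly listing described above amounts to a simple date-to-quarter mapping with a January 1 cutoff; a minimal sketch (the birth dates below are hypothetical):

```python
from collections import Counter
from datetime import date

def birth_quarter(d: date) -> str:
    """Map a birth date to Q1-Q4 of the calendar year (cutoff = January 1)."""
    return f"Q{(d.month - 1) // 3 + 1}"

# Tally an illustrative (hypothetical) list of birth dates:
births = [date(1995, 1, 5), date(1995, 2, 20), date(1995, 8, 9), date(1995, 12, 1)]
tally = Counter(birth_quarter(b) for b in births)
print(tally)  # → Counter({'Q1': 2, 'Q3': 1, 'Q4': 1})
```

The resulting quarterly counts are the observed frequencies later compared against the population distribution.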
All anthropometric and fitness data were obtained by PVF employees trained in performing the measurements. In the 12 tournaments from which the results were obtained, the measurements carried out by PVF employees were supervised by the same person. Before the beginning of the tests, a standardized warm-up was carried out. All measurements were taken under the same external conditions in a sports hall and at a similar time of year (October or November). For the anthropometric measurements, the players wore only shorts, and for the performance tests and jumps, they wore shorts, t-shirts, and volleyball-specific shoes. All testing conditions were standardized for all measurement points, including test order, hydration, and pre-assessment food intake.
Anthropometric Characteristics
An electronic scale (kg) and a stadiometer (cm) were used for the anthropometric measurements. Standing reach stature was measured to the nearest centimeter using a yardstick vertical jump device (VolleySystem, Poland). Players were asked to stand with their feet flat on the ground, to extend their arms and hands, and to mark their standing reach. Two measurements were made, corresponding to one- and two-arm standing reaches. The intraclass correlation coefficients for test-retest reliability (test-retest period of 1 h) in 30 youth male players were 0.24 (p < 0.01) for body weight, 0.83 (p < 0.01) for body height, and 1.18 for standing reach, with technical errors of measurement of 0.1 kg, 0.1 cm, and 1 cm, respectively.
Vertical Jump and Block Reach
Vertical jump height was calculated as the highest point reached during a countermovement jump with an arm swing from a standing position. Block reach was measured to the nearest centimeter, and the best value obtained from three trials of countermovement jumps with arm swings was used for the analysis for both male and female players. The players were instructed to stand on a mark and to leap as high as possible with both legs, displacing as many vanes on the yardstick as possible. All jumps were performed using a yardstick vertical jump device (VolleySystem, Poland). The intraclass correlation coefficient for test-retest reliability (test-retest period of 1 h) in 30 youth male players was 1.97 (p < 0.01) for vertical jump and 0.64 for block reach (p < 0.01). The technical error of measurement was 1 cm.
Spike Reach and Spike Jump
The players were asked to stand with their feet flat on the ground, extend their arms and hands, and mark their standing reach. They were then instructed to take a run-up or spike approach and to leap as high as possible with both legs, displacing as many vanes on the yardstick as possible (VolleySystem, Poland). A 5-min break between jumps was applied. The best result out of two trials was recorded. The spike jump values were calculated as the difference between the heights of the jump and the standing one-arm reach. The intraclass correlation coefficient for test-retest reliability (test-retest period of 1 h) in 30 youth male players was 0.66 for spike reach (p < 0.01). The technical error of measurement was 1 cm.
Zigzag Agility Test
The zigzag agility test consisted of running at maximal speed through a 7 × 7-m zigzag course (Figure 1). Timing began with a sound signal and stopped when the subject passed through a timing gate (SECTRO Timing System, Jelenia Gora, Poland); the time was measured in hundredths of a second. A 5-min break between trials was applied. The best result out of two trials was recorded. The intraclass correlation coefficient for test-retest reliability (test-retest period of 1 h) in 30 youth male players was 0.46 (p < 0.01) for the zigzag agility test. The technical error of measurement was 0.01 s.
Statistical Analysis
Assessment of the normality of the variable distributions was performed using the Kolmogorov-Smirnov test with Lilliefors correction. Homogeneity of variance was checked, and no violations were found. The χ² test was used to determine the differences between the observed and expected frequencies of a birth-date quartile. The effect size was quantified with Cramér's V. The threshold values for V were set according to Cohen's (1988) guidelines for df = 3, as follows: ≥0.06 (small), ≥0.16 (medium), and ≥0.29 (large). An independent-samples t test was conducted to determine the differences in anthropometric characteristics and fitness test results between selected and unselected players for each birth quarter. In addition, multivariate analysis of covariance (MANCOVA) with chronological age as a covariate and anthropometric characteristics and motor test results as dependent variables was used to examine differences among birth quarters (independent variable). The significance level (α) was set at 0.05. Threshold values for effect size statistics were 0.01, 0.06, and 0.14 for small, medium, and large effect sizes, respectively (Cohen, 1988). To support univariate analyses, the Bonferroni post hoc test was used where appropriate.
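The χ² goodness-of-fit test against a uniform birth-quarter distribution, together with Cramér's V for a one-way table (V = √(χ²/(n·(k−1)))), can be sketched as follows. The quarter counts are invented, and 7.815 is the standard χ² critical value for df = 3 at α = 0.05:

```python
import math

# Hypothetical birth-quarter counts (Q1..Q4) for one subsample of players
observed = [420, 380, 310, 250]
n = sum(observed)
expected = n / 4  # uniform distribution across the four quarters

# Chi-square goodness-of-fit statistic, df = k - 1 = 3
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Cramer's V for a one-way (goodness-of-fit) table with k categories
k = len(observed)
v = math.sqrt(chi2 / (n * (k - 1)))

critical = 7.815  # chi-square critical value for df = 3, alpha = 0.05
print(round(chi2, 1), chi2 > critical, round(v, 3))  # → 50.0 True 0.111
```

Here the overrepresentation of Q1-Q2 births drives a significant χ², while V ≈ 0.11 corresponds to a small-to-medium effect on the scale given above.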
Performance characteristics were analyzed using a stepwise discriminant function analysis to determine which combination of the measured characteristics optimally explained the selection of qualifying players to join the NVDP. In this analysis, the group (selected for the NVDP vs. not selected) was the dependent variable, and performance characteristics, birth quarter, and calendar age were the independent variables. The calculation included the cases for which complete data were provided. The analysis did not include the medicine ball throw because of its exclusion from the battery of tests in 2012. All calculations were performed using IBM SPSS statistical software (version 22.0, Armonk, NY, United States).

RESULTS

Table 1 shows the χ² test results (χ² = 7.9, p < 0.05, V = 0.06, a small effect for males; χ² = 1.2, p > 0.05, V = 0.05, no effect for females), percentage deviations, and standardized residuals for the comparison of the OHT players and the players selected for the NVDP. The observed quarterly distributions of players selected and not selected for the NVDP were significantly different from the uniform distribution (p ≤ 0.001). Furthermore, an overrepresentation of young volleyball players born in Q1 and Q2 was reported for both genders. In contrast, an underrepresentation of players born in Q3 and Q4 was observed. In addition, only 6.03% of male players and 11.42% of female players selected for the NVDP were born in the last 3 months of the year. A medium effect size of the RAE was observed in each of the subsamples of volleyball players.
Anthropometric characteristics and results of the zigzag agility test across the four birth quarters or calendar age for each subgroup are shown in Table 2. The MANCOVA demonstrated no significant main effect of birth quarter or calendar age in all OHT female players or in male players selected for the NVDP. In the group of non-selected male players, the analysis demonstrated significant differences according to the quarter of birth for body height (F = 0.01, p < 0.001), spike reach (F = 7.33, p < 0.05), and block jump (F = 0.02, p < 0.001). Significant differences within calendar age were observed for body mass (F = 0.53, p < 0.01), spike jump (F = 2.64, p < 0.05), block jump (F = 0.4, p < 0.01), and zigzag agility test results (F = 0.01, p < 0.01). In addition, Table 2 shows the differences between the selected and unselected players according to birth quarter. Significant differences were found for all anthropometric variables in both genders. The selected NVDP players were taller (all p values < 0.001) and heavier (p values from <0.05 to <0.001) and jumped higher (p values from <0.05 to <0.001) than the unselected players. Regarding the mean time obtained in the volleyball agility test, the analyzed groups did not differ significantly in any birth quarter.
The stepwise discriminant analysis results are presented in Tables 3 and 4. The model determined that a combination of four characteristics optimally discriminated between the players selected and not selected for the NVDP for each gender. Vertical jump (0.82 for females, 0.87 for males), body height (0.8 for females, 0.85 for males), and body mass (0.8 for females, 0.84 for males) were included in both models. Spike reach (0.84) and spike jump (0.81) were the fourth variables in the male and female models, respectively. This combination of variables correctly classified 77.7% of the female players and 71.6% of the male players in terms of their selection versus non-selection for the NVDP (Table 5).
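The classification percentages above come from SPSS's stepwise discriminant procedure. The core idea of deriving a cut-off from group statistics and counting correct classifications can be illustrated with a one-variable sketch; all jump heights below are invented, and a simple midpoint-of-means cut-off stands in for the full Wilks' lambda machinery:

```python
# Hypothetical vertical-jump heights (cm) for selected vs. non-selected players
selected = [58, 62, 55, 60, 57, 63]
not_selected = [48, 52, 50, 46, 56, 49]

# One-variable discriminant: cut-off at the midpoint of the two group means
mean_sel = sum(selected) / len(selected)
mean_not = sum(not_selected) / len(not_selected)
cutoff = (mean_sel + mean_not) / 2

# Count players falling on the correct side of the cut-off
correct = sum(x > cutoff for x in selected) + sum(x <= cutoff for x in not_selected)
total = len(selected) + len(not_selected)
print(f"{100 * correct / total:.1f}% correctly classified")  # → 91.7% correctly classified
```

The real analysis does the same counting, but projects several variables (vertical jump, body height, body mass, spike reach/jump) onto a single discriminant axis before applying the cut-off.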
DISCUSSION
This study confirms the presence of an RAE in young Polish volleyball players who participate in the OHT as part of a controlled and organized TI process carried out by the national federation. As predicted, a skewed quarterly age distribution was observed in the groups selected and not selected for the NVDP. Contrary to what was hypothesized, a similar effect size of the RAE was observed regardless of whether the players were selected for the NVDP. A significant difference between the observed and expected frequencies of birth dates among the players selected for the NVDP compared to the OHT sample was observed. Additionally, the results showed differences in quarterly comparisons between selected and non-selected NVDP players. Nevertheless, the multivariate analysis showed no main effects for females or for selected NVDP male players. Moreover, the discriminant analysis identified the factors affecting the TI process in a group of 15-year-old volleyball players. The identification of the RAE in Polish youth volleyball is consistent with the results of other researchers (Okazaki et al., 2011; Campos et al., 2016). However, the unexpected overrepresentation of early-born male players selected for the NVDP may be explained by gender differences in biological development and the onset of puberty (Schorer et al., 2009; Baptista et al., 2016). In 15-year-old adolescents, sex differences at puberty are pronounced, with a gap of up to 1 year in the age at onset of peak height velocity (Koziel and Malina, 2018). In line with this, the tests and measurements used by the PVF for the TI process appear to be applied to groups of players at significantly different stages of biological development. In addition, the two-stage selection process (call-ups to voivodeship teams and selection for the NVDP after the OHT) may affect the magnitude of the RAE in Polish youth volleyball.
Unfortunately, one limitation of this study is the lack of documentation regarding preselection by regional clubs and PVF coaches. On the other hand, the results of this study showed a different pattern in youth OHT participants compared to previous studies reporting the absence of an RAE in international volleyball. An equal birth-quarter distribution was reported at the highest senior level in Dutch volleyball (van Rossum, 2006), in Israeli volleyball, including female players (Lidor et al., 2014), and in Brazilian volleyball (Parma and Penna, 2018). Nevertheless, in a similar context, only in the research carried out by Papadopoulou et al. (2019) did the participants' ages correspond with the data obtained in this study, but that study was conducted with small samples (clubs from one city). The effect size of the RAE reported in this study was equal in each group, but there was a trend of stronger discrimination against late-born male ball-game players.
The unexpected overrepresentation of early-born male players among those selected for the NVDP may arise not only from physical development but also from differences in game demands between male and female volleyball. Previous studies have shown significant gender differences in volleyball game-related statistics (Joao et al., 2010; Nikolaidis et al., 2015). Men's volleyball is characterized by a strength-based style of play, in contrast with the more technical nature of the women's game. A study by Pion et al. (2015) reported that motor coordination differentiates elite Belgian female players from subelite players. This argument is further supported by the results of Vargas et al. (2018), which indicated that players can achieve success in women's volleyball even if their physical characteristics differ from those typical of male players (e.g., lower body height).
Interestingly, differences in anthropometric characteristics and motor test results related to the quarter in which a player was born were observed only in players who were not selected for the NVDP. However, quarter-by-quarter comparisons of the mean anthropometric variables of selected and non-selected players showed differences among the female players. These findings are supported by a recent study by Carvalho et al. (2020) comparing the morphological profiles of Portuguese adult female players at different levels. They suggest that "higher body mass, body height … are important for top-level performance …," which is in line with research indicating that body height and spike jump reach are the decisive factors for the selection of junior national female volleyball players (Tsoukos et al., 2019). Conversely, previous studies have shown that anthropometric data are inefficient for discriminating between successful and unsuccessful talent-identified junior volleyball players (Gabbett et al., 2007). Note that the discriminant analysis in the present study was conducted with a decidedly larger sample.
The results of the abovementioned studies show that jumping ability, body height, and body mass are crucial for selection for the NVDP regardless of gender. This is consistent with reports showing that a high block jump characterizes the best male volleyball players (Sattler et al., 2015). However, the discriminative models presented in this study did not account for body composition (e.g., fat-free mass), which may indicate significant errors in the predictability and efficiency of the TI process in adolescents, in whom relative body weight seems to be more important (Chung, 2015). (Note to Tables 3 and 4: at each step, the variable that minimizes the overall Wilks' lambda is entered; the maximum number of steps is 20, the minimum partial F for inclusion is 3.84, and the maximum partial F for removal is 2.71; variables whose F level, tolerance, or VIN was insufficient were excluded from further computation.) Some aspects of the present study need to be put into perspective. One limitation of this study is the closed setting of the zigzag agility test, which may not directly reflect the game-related demands of volleyball. A player who changes direction quickly and efficiently is not necessarily effective in the game, for example, in his/her reaction to a ball flying at high speed (Young, 2015). Previous studies likewise found no significant difference between selected and unselected players in test results based on planned change of direction (Gabbett et al., 2007; Tsoukos et al., 2019). Our findings support this thesis, as no significant difference in zigzag agility test results was reported between selected and non-selected players. Nevertheless, the ability to change direction efficiently may be a factor for TI in female volleyball players, but only in relation to open tasks and decision-making processes (Balser et al., 2014). We suggest including open-skilled agility tests in national federation and club TI processes for youth volleyball.
It is worth highlighting that a strength of the study was the use of a large, representative data sample taken from the whole country over 14 years. However, it was impossible to consider quantified assessments of the volleyball skills of the OHT players because of the lack of documentation by the PVF. Another limitation of this study is the lack of data regarding the players' positions on the court. Such a difference may be caused by the earlier discrimination of relatively later-born players, who can play in youth volleyball only as defensive players. Previous studies reported differences in somatotypes between setters and centers in elite adult volleyball players (Duncan et al., 2006; Giannopoulos et al., 2017). In line with this, future studies of the TI process in youth volleyball using similar sample sizes should include players' positions on the court.
Considering the findings and limitations of this study, several practical implications can be drawn for policymakers and trainers in the context of the TI process and the RAE in youth volleyball. First, we suggest a rethinking of the TI model in youth volleyball to account for the complexity of the RAE phenomenon and gender differences. It seems unreasonable to adopt the same criteria for assessing groups at different stages of biological development. Second, national federations and clubs should attach greater importance to the consistent collection of information from the TI process. Third, open-skilled agility tests tend to have more value in identifying talented players than tests based only on change of direction.
CONCLUSION
The results of this study confirm the existence of an RAE in youth volleyball and highlight a trend toward the selection of male athletes with greater body weight and height and better jumping ability than their unselected counterparts. We suggest that the TI process in youth volleyball be designed around the complexity of the RAE phenomenon, gender differences in maturity, and the different anthropometric and motor demands of each player's position on the court.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material.
ETHICS STATEMENT
All data were obtained according to the Data Protection Act in Poland, and all procedures were approved by the Research Ethics Committee of the University School of Physical Education in Wrocław. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
KR: Conceptualization, investigation, and writing original draft. AR: Formal analysis. KR and AR: Funding acquisition, supervision, writing -review and editing. All authors contributed to the article and approved the submitted version.
FUNDING
All funding pertaining to the realization of this study was received internally by the authors' organization (KR's and AR's departmental funding; Department of Team Games Sport, University of Physical Education, Wrocław, Poland). No additional external funding was received for this study.
New insights into the pathogenesis and treatment of non-viral hepatocellular carcinoma: a balancing act between immunosuppression and immunosurveillance
Abstract

Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related deaths worldwide. HCC initiates as a consequence of chronic liver damage and inflammation caused by hepatitis B and C virus infections, excessive alcohol consumption, or non-alcoholic fatty liver disease (NAFLD). Until recently, no effective treatments for advanced HCC were available and the 5-year survival rate had remained below 8% for many years. New insights into the mechanisms that drive the development of NAFLD-related HCC indicate that loss of T-cell-mediated immunosurveillance plays a cardinal role in tumor growth and malignant progression, in addition to previously identified inflammation-driven compensatory proliferation. Recently completed groundbreaking clinical studies have shown that treatments that restore antitumor immunity represent a highly effective therapeutic option for approximately 20% of advanced HCC patients. Understanding the causes of inflammation-driven immunosuppression and immune system dysfunction in the 80% of patients who fail to reignite antitumor immunity despite treatment with checkpoint inhibitors should lead to further and even more dramatic improvements in HCC immunotherapy.
Introduction
Hepatocellular carcinoma (HCC) is the most common form of liver cancer and the second leading cause of cancer-related deaths worldwide. 1 Despite a general decline in cancer-related deaths, HCC incidence in the USA and its associated mortality continue to grow at an alarming rate, with a tripling of HCC-related mortality in the past 30 years. 2 Historically, the main causes of HCC were hepatitis B and C virus (HBV, HCV) infections, but the incidence of virus-related HCC is predicted to decline within the next generation because of the development of effective and economical HBV vaccines and the recent introduction of highly effective anti-HCV drugs. 1,3 By contrast, the incidence of non-viral hepatitis continues to rise and it is already a leading cause of cryptogenic cirrhosis and liver transplantation in the USA and Europe. 4,5 Major causes of non-viral hepatitis include the obesity epidemic, which has greatly increased the incidence of non-alcoholic fatty liver disease (NAFLD), and rampant alcohol abuse, which results in alcoholic liver disease (ALD). 6 Surgical resection and liver transplantation represent effective treatment options for early, localized HCC, but unfortunately the majority of HCC cases are diagnosed at a rather advanced stage and are frequently associated with loss of liver function. Compromised liver function means that advanced HCC cannot be treated with high doses of chemotherapy or ionizing radiation, and the only evidence-based targeted therapeutic approved for first-line HCC treatment is the pan-kinase inhibitor sorafenib, which extends life by less than 3 months without an impact on 5-year survival rates, which have persisted at <8% for stage 4 patients. 7 Furthermore, sorafenib is a highly hepatotoxic drug that is not suitable for all patients.
Until recently, the only meaningful improvements in HCC treatment have been: 1) the establishment of strict criteria for performance of liver resection and liver transplantation, 8,9 and 2) patient classification and stratification, which help identify those who will benefit most from first-line sorafenib treatment. 10,11 This lack of progress in HCC treatment is most likely because of the absence of reliable biomarkers that allow for effective screening of high-risk patients and early disease detection. This poor state of affairs is likely to change. Recent clinical studies have demonstrated that single-agent treatment with immune checkpoint inhibitors such as nivolumab and pembrolizumab, antibodies that block engagement of the inhibitory receptor PD-1, which is expressed by exhausted CD8 + T cells, led to a significant decrease in tumor burden in ~20% of advanced HCC patients. 12,13 These findings are entirely consistent with the results of a recent preclinical study which showed that loss of T-cell-mediated immunosurveillance from inflammation-driven immunosuppression plays a cardinal role in the development of NAFLD-associated HCC. 14 This incredible and rare convergence between mechanistic preclinical research and empirical clinical investigation strongly suggests that immunotherapies aimed at restoring HCC immunosurveillance will revolutionize the treatment of this highly aggressive malignancy and may also lead to its prevention in high-risk individuals.
Preclinical models for understanding HCC etiology and pathogenesis
Currently, the majority of HCC cases in the USA and Europe are caused by HCV hepatitis, non-alcoholic steatohepatitis (NASH), or alcoholic steatohepatitis (ASH). 1 However, in the not-too-distant future, the prevalent HCC etiologies in Western society are predicted to be associated with non-viral hepatitides (especially NASH and ASH) driven by the ever-expanding obesity epidemic 4 and excessive alcohol consumption. HBV- or HCV-induced HCC has been difficult to study in mice, because these viruses do not replicate in non-primate mammals. Early attempts to study HBV- and HCV-induced HCC in mice were based on transgenic expression of whole virus genomes or parts of them from liver-specific promoters. [15][16][17][18] Although HBV and HCV transgenic mice develop HCC after a considerable latency, it is questionable whether the tumorigenic mechanisms operating in these mice are related to those that function in HBV- or HCV-infected humans. While HBV is a DNA tumor virus that is thought to induce HCC through insertional mutagenesis, 19 HCV is a non-integrating RNA virus that does not code for any oncogene. 20 It has been speculated that HCV replication within hepatocytes causes endoplasmic reticulum (ER) stress that leads to induction of hepatic steatosis, chronic liver damage, and inflammation, outcomes that have also been detected in HCV transgenic mice. 21,22 Curiously, ER stress has also been linked to NASH, the most severe manifestation of NAFLD, 23-26 suggesting that HCV infection and NASH may lead to HCC development through a common pathogenic mechanism that involves ER stress. To determine whether hepatocyte ER stress can contribute to HCC development, we chose to use MUP-uPA transgenic mice, which express urokinase plasminogen activator (uPA) from the liver-specific major urinary protein (MUP) promoter.
26 By exceeding the folding and secretory capacity of hepatocytes in the newborn liver, uPA expression results in ER stress and transient liver damage that subsides after 6 weeks of age because of transgene extinction. 26,27 By placing 6-week-old MUP-uPA mice on a high-fat diet (HFD), we were able to reignite the ER stress response and cause activation of sterol response element binding proteins (SREBP), thereby enhancing the accumulation of liver triglycerides (TG) and cholesterol within hepatocytes. 28,29 Whereas TG accumulation leads to simple hepatic steatosis, the buildup of free, unesterified cholesterol acts in conjunction with TNF, which is produced by activated liver macrophages, 29 to amplify the extent of liver inflammation and damage. Thus, both free cholesterol and TNF serve as critical factors in controlling the transition from hepatic steatosis or early NAFLD to NASH in both mice and humans. [30][31][32][33] Indeed, within several months of HFD initiation, MUP-uPA mice develop extensive liver inflammation manifested by massive leukocytic infiltration, chronic liver damage, and hepatocyte death, which result in compensatory proliferation, a "chicken-wire" pattern of fibrosis, and accumulation of Mallory-Denk Bodies (MDB), which are inclusion bodies composed of p62-containing protein aggregates. Of note, MDB serve as a typical sign of chronic liver diseases associated with increased risk of HCC development, 29,34 but as discussed below, their main constituent, p62, is also a cause of hepatic tumorigenesis. All of these responses are characteristic of human NASH 24 and are entirely dependent on TNF signaling via the type I TNF receptor (TNFR1) that is expressed on the surface of hepatocytes. Genetic ablation of TNFR1 or TNF titration using Enbrel (etanercept) completely prevents NASH development in HFD-fed MUP-uPA mice. 
29 Anecdotal clinical reports indicate that Enbrel or Remicade (infliximab) treatment leads to improvement of NASH symptoms in both animal models and human patients, [35][36][37][38] although this should be contrasted with the failure of such drugs in alcoholic hepatitis. 18 It should be noted, however, that ASH patients are much more likely to develop severe infections than NASH patients, thereby precluding the use of anti-TNF drugs, which otherwise had shown promising results. 39 Importantly, within 9 months of HFD initiation, at least 85% of HFD-fed MUP-uPA male mice (HCC is much more common in males than in females 40 ) show numerous, poorly differentiated HCC nodules, with about 50% of them having the appearance of steatotic HCC. 29 Consistent with early results published by the Pikarsky group, 41 TNFR1 signaling via IκB kinase β (IKKβ) also plays an important role in NASH to HCC progression by stimulating the proliferation of HCC progenitor cells (HcPC). 29 Another important player in NASH to HCC progression is p62/SQSTM1. Hepatocyte-specific ablation of the Sqstm1 gene largely attenuates HCC development in HFD-fed MUP-uPA mice. 34 In addition to its propensity for forming protein aggregates, MDB, and hyaline granules, 42 p62 is an important signaling protein 43 whose tumorigenic activity is exerted via activation of transcription factor NRF2. A point mutation in the KIR motif of p62, which prevents its binding to KEAP1, the negative regulator of NRF2, 44 also blocks the ability of overexpressed p62 to induce HCC development. 34 p62-mediated NRF2 activation also plays an important role in the development of pancreatic cancer, stimulating the malignant progression of preneoplastic PanIN1 lesions. 45 Gain-of-function mutations in the NRF2-encoding NFE2L2 gene and loss-of-function mutations in the KEAP1 gene, both of which lead to constitutive NRF2 activation, were detected in up to 12% of human HCC specimens.
46,47 In addition to sharing identical pathological features and common oncogenic signaling pathways, HCCs that have appeared in MUP-uPA mice are essentially identical to human HCC in their mutational signature, which exhibited a marked enrichment for C to T transitions. 14 The mutational load of mouse NASH-driven HCC is also quite similar to that of human HCC, averaging 50-100 coding region point mutations per tumor. Many of these mutations affect oncogenic drivers that were first detected in human HCC. 14,46,47

The oncogenic role of inflammation-driven immunosuppression

Another important feature of both NASH and ASH that is closely associated with liver fibrosis is the presence of high amounts of circulating immunoglobulin A (IgA). 48,49 In both NASH and ASH, the serum concentration of IgA is directly proportional to the extent of liver fibrosis. 48 HFD-fed MUP-uPA mice and other mouse models of liver fibrosis or NASH exhibit a positive correlation between circulating IgA and liver fibrosis. 14 Circulating IgA in NASH-afflicted mice is produced by liver-infiltrating plasma cells that have undergone IgM to IgA class switch recombination (CSR) in response to TGF-β and other cytokines, such as IL-21 and IL-33, whose expression is elevated in response to chronic liver inflammation. 14 Using freshly collected liver biopsies, we confirmed the presence of liver-infiltrating IgA-expressing plasma cells in human NASH and have shown that these cells, as well as their progenitors, IgA + plasmablasts, also express the inhibitory ligand PD-L1 and the immunosuppressive cytokine IL-10. 14 Using the MUP-uPA + HFD model we have developed, we demonstrated that these cells, collectively referred to as immunosuppressive plasmocytes (ISPs), are the principal source of PD-L1 in NASH-driven HCC and are directly responsible for inducing the exhaustion of HCC-directed CD8 + T cells.
14 Ablation of the IgA locus or inhibition of ISP generation through interference with TGF-β signaling resulted in attenuation of HCC development and concomitant reinvigoration of HCC-directed cytotoxic T cells (CTLs). Depletion of CTLs using a CD8 neutralizing antibody restored NASH-induced HCC development in IgA-deficient mice. 14 A dramatically reduced tumor load was observed after treatment of HCC-bearing MUP-uPA mice with a PD-L1 blocking antibody, but the few tumors that did develop in ISP-deficient mice were completely refractory to PD-L1 blockade. 14 These results indicate that IgA + ISP, which accumulate in response to chronic liver damage, inflammation, and fibrosis, are the most critical source of PD-L1 in HCC, a tumor whose development is strongly dependent on the suppression of CTL-mediated immunosurveillance. In support of these conclusions, HCC development is highly accelerated in MUP-uPA/Cd8a −/− mice, which lack CTL. 14 HCC development is also accelerated in MUP-uPA/Rag1 −/− mice, whose reconstitution with T cells in the absence of B cells inhibits tumor development. We conclude that HCC-directed liver-infiltrating CD8 + T cells are potent inhibitors of HCC emergence because of their ability to recognize and kill HCC progenitors. These findings are summarized in Fig. 1.
The PD-1 checkpoint and the therapeutic effect of its inhibition

PD-L1 is the main ligand for the inhibitory receptor PD-1 (programmed death-1). PD-1 is expressed by CD8 + cytotoxic T cells as well as CD4 + T follicular helper cells (Tfh), whereas PD-L1 can be expressed by many different cell types, including macrophages, B cells, and epithelial cells. 50,51 The engagement of PD-1 on the surface of CD8 + T cells inhibits their proliferation and all of their effector functions, thereby resulting in a dysfunctional state that is commonly referred to as exhaustion. 52,53 Importantly, this inhibitory response is needed to prevent the rampant activation of antiviral CTLs, and its absence can result in severe collateral damage and even mortality in response to viral infections. 54 In fact, myeloid and epithelial cells begin to express PD-L1 in response to interferon γ (IFN-γ) that is produced by activated CTLs, thereby constituting a negative feedback loop. Engagement of PD-1 on the surface of Tfh cells also inhibits cell proliferation, but it also leads to induction of IL-21 and other molecules through which Tfh stimulate the maturation of plasma cells, including IgA-expressing plasma cells 55,56 (Fig. 1).
In addition to the control of antiviral immunity and Tfh function, it was found that PD-1:PD-L1 interactions play a cardinal role in antitumor immunity. Many types of cancer express PD-L1 on their surface and thereby lead to the exhaustion of tumor-directed CTL. 57,58 Consistent with these observations, it was found that treatment of tumor-bearing mice with antibodies against either PD-1 or PD-L1 that block the interaction between the two molecules triggers tumor regression and concomitant reactivation/reinvigoration of tumor-directed CD8 + T cells. 59,60 These impressive preclinical results and the success of blocking antibodies directed against another T-cell checkpoint regulator, CTLA4, 61 quickly led to clinical trials of the first fully human PD-1 blocking antibody, nivolumab, which resulted in an objective response rate (ORR) of 17% in non-small cell lung cancer. 62,63 Since then, nivolumab, pembrolizumab, and the PD-L1 antagonistic antibody atezolizumab have received approvals for the treatment of melanoma, lung cancer, bladder cancer, kidney cancer, head and neck cancer, and Hodgkin lymphoma. 50,64 It was initially postulated that only cancers with high mutational load can be treated with checkpoint inhibitor drugs, including PD-L1 inhibitors, 63,65,66 but this short-lived dogma has not prevented oncologists from testing these drugs in cancers with lower mutational loads, such as renal cell carcinoma 67 and HCC. 12 Surprisingly, the response rates to all PD-1:PD-L1 inhibitors were found to cluster around 15-25% and do not seem proportional to the total number of mutations a particular tumor harbors. 50 Clearly, mutational load is not the only factor that affects the response to checkpoint inhibitors. Another factor suggested to affect the response to PD-1:PD-L1 interaction inhibitors is the level of PD-L1 expression on tumor cells.
50,62 Although PD-L1 expression is needed for activation of the PD-1 checkpoint and to make the involved tumor responsive to PD-1:PD-L1 inhibitors, the degree of PD-L1 expression by cancer cells themselves was not found to directly correlate with response rates. Furthermore, the source of PD-L1 is highly variable, and many cancers show PD-L1 expression by components of the tumor microenvironment rather than the malignant cells themselves. 68,69 Correlating PD-L1 expression with response rates to PD-1:PD-L1 antagonists has not been highly reliable because of the mediocre quality of the reagents and methodology used to assess PD-L1 expression. Given that in non-viral HCC the most critical source of PD-L1 is the IgA + ISP, 14 it is not surprising that no correlation was found between the response to nivolumab and PD-L1 expression on HCC cells. 12 We postulate that the presence of elevated serum IgA and liver-infiltrating IgA + ISP can be a much more accurate parameter for predicting responsiveness to PD-1/PD-L1 targeting agents in human HCC. Elevated circulating IgA has also been detected in HBV- and HCV-infected patients [70][71][72][73] and is much easier to measure than PD-L1 expression in tissue/tumor sections.
PD-1:PD-L1 blockade: a revolution in HCC treatment
Figure 1. Key molecular elements that dictate the balance between tumor immunity and inflammation-driven immunosuppression. Tfh, follicular helper T cells; ISP, immunosuppressive plasmocytes.

As mentioned above, PD-1/PD-L1 blocking therapies were thought to be irrelevant for the treatment of HCC because the typical HCC mutational load is lower than the cutoff value postulated to be needed for anti-PD-1/PD-L1 responsiveness. 63,66 Furthermore, hepatitis is a common side effect of checkpoint inhibitor therapy 74,75 and it was therefore assumed that advanced HCC patients would not be able to tolerate such a complication. These considerations, however, did not prevent El-Khoueiry and colleagues from conducting the CheckMate 040 trial, the first clinical study that clearly demonstrated the effectiveness of nivolumab monotherapy in advanced HCC. 12 The phase 1/2 CheckMate 040 clinical study enrolled 262 patients with histologically confirmed advanced HCC, including patients with non-viral HCC, HCV-infected, and HBV-infected patients. Of these groups, patients with non-viral hepatitis or HCV hepatitis were observed to exhibit an ORR of 20-25%, whereas HBV-infected patients exhibited an ORR of 14%. 12 Although the study was insufficiently powered to correlate response rates with etiology, the results suggest that HBV-infected patients are actually less responsive to PD-1 blockade therapy than non-virus-infected patients or HCV-infected patients. This is somewhat counterintuitive because HBV infection should result in expression of viral antigens that are potent T-cell activators. Of note, the CheckMate 040 study found no correlation whatsoever between membrane expression of PD-L1 by HCC cells and the response to PD-1 blockade, 12 further demonstrating the weakness of the hypothesis according to which the response to anti-PD-1 drugs is determined by PD-L1 expression on the surface of cancer cells. The clinical findings reported by El-Khoueiry et al.
are fully consistent with the findings of our preclinical study of NASH-driven HCC and its control by the interaction between IgA + ISP and CD8 + CTLs. 14 The mouse studies demonstrated that PD-L1-mediated exhaustion of HCC-targeting CD8 + T cells plays a critical role in HCC development, suggesting that the PD-1 checkpoint is key to this process. Despite low PD-L1 expression by HCC cells, the use of nivolumab resulted in disease control in 67% of patients with non-viral HCC or HCV-related HCC and 55% of patients with HBV-related HCC. These striking results should be compared with response rates of 2-3% in HCC patients treated with first-line sorafenib. 76,77 The median duration of the response to nivolumab was as high as 17 months in the dose-escalation phase of the trial, 12 far exceeding the 3-month extension in survival offered by sorafenib. Even the early fear of nivolumab-induced hepatitis has not panned out. Only two patients out of 202 who completed the trial experienced acute hepatitis, and the overall rate of adverse events was not any higher than in any other population of similarly treated cancer patients. Thus, there is no question that anti-PD-1/PD-L1 therapies will become the game-changers that revolutionize HCC treatment. Shortly after completion of the CheckMate 040 trial, nivolumab was approved for the treatment of advanced HCC when used following prior treatment with sorafenib. Hopefully, future studies will show that nivolumab, pembrolizumab, and similar drugs are suitable for first-line HCC treatment without prior administration of sorafenib, which has no effect on the rate or duration of the therapeutic response. 12 Similar and more extensive responses were seen in HCC-bearing MUP-uPA mice that were treated with a PD-L1 blocking antibody. 14 We initiated anti-PD-L1 treatment after 7 months of HFD feeding, a time point at which the majority of MUP-uPA mice exhibit visibly detectable liver tumors.
After 8 weeks of 3 injections per week with two different PD-L1 antibodies, only one of which is a functional blocking antibody, we observed that PD-L1 blockade led to a 60% reduction in tumor load relative to control. 14 The therapeutic effect of PD-L1 blockade was most noticeable on larger tumors, most of which had disappeared. In addition to demonstrating the utility of PD-L1 blockade in a mouse model that is amenable to detailed mechanistic analysis, this experiment has taught us several very important lessons. First, tumor-bearing mice that lack PD-L1-expressing ISP did not show any response to PD-L1 blockade, indicating that in NASH-driven HCC, at least in mice, the critical functional source of PD-L1 is the ISP. Although HCC tumor cells express small amounts of PD-L1, that particular PD-L1 is not functionally important. These results are entirely consistent with those of the CheckMate 040 trial. Second, tumors in MUP-uPA/Cd8a −/− mice also did not respond to anti-PD-L1 treatment, indicating that the targets for PD-1/PD-L1 blockade are the exhausted CD8 + T cells. Indeed, anti-PD-L1 treatment of wild-type MUP-uPA mice decreased the liver content of exhausted CD8 + T cells and increased the number of proliferating and degranulating effector CD8 + T cells that express TNF, IFN-γ, granzyme B, and perforin. 14 These results indicate that CD8 + T cells not only prevent HCC initiation through immunosurveillance, but are also responsible for the rejection of established tumors in response to treatment with PD-1:PD-L1 blocking antibodies. In other words, tumors that do not contain exhausted CD8 + T cells are unlikely to be responsive to PD-1/PD-L1 blockade.
The preclinical mouse studies may have also provided us with a precious clue that could explain why no more than 25% of HCC patients mount a response to nivolumab treatment. We observed that the few HCC-bearing mice that did not show a response to PD-L1 blockade contained tumors that were surrounded by an envelope of activated hepatic stellate cells (HSC), the kind of cells that are responsible for extracellular matrix deposition during liver fibrosis. 14 In addition to the envelope of activated HSC, these tumors contained very few infiltrating T cells, and most of the reinvigorated CD8 + effector T cells stayed outside of the tumor. In this respect, these treatment-refractory tumors resembled pancreatic cancer, which contains an extensive stroma of activated pancreatic stellate cells and is devoid of invading CD8 + T cells. 78 Of note, we found that, similar to HCC, both mouse and human pancreatic adenocarcinomas contain PD-L1-expressing IgA + plasmocytes. The significance of these findings remains to be determined, but they suggest that somehow HSC or pancreatic stellate cells interfere with the activation of CTL or their ability to penetrate the tumor and recognize their target.
Conclusions and future directions
The rare confluence of preclinical and clinical studies described above strongly establishes the relevance of antitumor immunity to the development and treatment of HCC, one of the most common and difficult-to-treat malignancies. It has been known for many years that HCC development is dependent on chronic liver damage and inflammation, but until now it was assumed that the main pro-tumorigenic effect of liver damage is compensatory proliferation, a biological response that stimulates the division of transformed hepatocytes. 79,80 The new studies reviewed above indicate that another important effect of chronic liver damage and the ensuing inflammatory response is the suppression of CTL-mediated immunosurveillance. As long as it is left unperturbed, immunosurveillance provides very strong protection against the growth of nascent HCC lesions, which emerge from mutated pericentral hepatocytes. 81 Given the key pro-tumorigenic effect of inflammation-driven immunosuppression, a process that depends on accumulation of PD-L1-expressing ISP, it is no wonder that drugs that block the binding of PD-L1 to PD-1 and restore CTL-mediated antitumor immunity are so remarkably effective in the treatment of advanced HCC.
Despite the incredible clinical advance in HCC treatment represented by PD-1:PD-L1 interaction inhibitors, the average objective response to this class of drugs is approximately 20%. 12 Undoubtedly, this rate needs to be increased, but how can this be accomplished? I suggest that, first and foremost, we need to understand the factors that render the remaining 80% of HCC patients non-responsive to PD-1/PD-L1 blockade. Given the dependence of the response to PD-L1 blockade on the presence of IgA + ISP, 14 it is plausible that some of the non-responsive patients may have low amounts of liver- and HCC-resident ISP. As circulating IgA is easy to measure and known to directly correlate with liver IgA, 14 one simple study that needs to be carried out in the very near future is a correlative study between serum IgA and the response to PD-1/PD-L1 blockade. It is also important to determine how many of the non-responsive patients exhibit excessive accumulation of activated HSC around their tumors and defective tumoral invasion of reinvigorated CD8 + T cells. If the findings made in mice are validated in a significant portion of the anti-PD-1 non-responsive patient population, it will become important to test the effect of clinically approved drugs that are capable of inhibiting HSC activation. At this point two classes of such drugs come to mind: 1) phosphodiesterase 5 (PDE5) inhibitors, and 2) vitamin D analogs. PDE5 inhibitors, such as tadalafil (Cialis) and sildenafil (Viagra), were previously observed to inhibit HSC activation and prevent the accumulation of prostate and lung myofibroblasts, which are highly similar in their properties to activated HSC. 82,83 In fact, tadalafil has been approved for the treatment of benign prostatic hyperplasia and pulmonary hypertension, two diseases that depend on myofibroblast activation.
84,85 Likewise, the vitamin D analog calcipotriol was found to attenuate HSC activation and interfere with their ability to express numerous chemokines and other molecules, 86,87 including CXCL13, a B-cell chemoattractant, that may account for the immunosuppressive activity of HSC and other types of myofibroblasts. Importantly, drugs in both groups are safe and free of side effects that could reduce the effectiveness of PD-1/PD-L1 targeting drugs. Another important line of future research pertains to the involvement of other checkpoint inhibitors, such as antibodies to TIM-3, LAG-3, and CTLA4, in controlling the immune response to HCC. So far such studies have not been reported, but it is plausible that blockade of additional inhibitory receptors may result in more sustained reinvigoration of tumor-directed CD8 + T cells than has been seen with PD-1 blockade alone. 88
Note on the PCR threshold standard curve
The PCR threshold standard curve is based on an exponential model of the initial phase of a PCR run where template replication efficiency is constant cycle to cycle. As such it requires that a threshold is at a level of amplified template not higher than where replication efficiency falls from its initial value. A second requirement is that all amplification profiles, both calibration and test, have the same initial efficiency. However, whether these requirements are met may not be checked, and there seems an apparent awareness that thresholds can be set higher than where efficiency has dropped from the initial value without compromising result validity. The objective of this study is to reconcile using the method without satisfying the requirement that amplification is exponential at threshold level. Substituting the more general requirement that profile shapes be congruent to threshold level, except for translation along the cycle axis, and a derivation of the standard curve that includes cycles beyond the exponential phase accomplishes the objective without affecting usage of the method or any prior results and enables a practicable way to verify that the second requirement for same initial efficiency is satisfied.
Introduction
The threshold standard curve method, regarded as a gold standard for quantitative PCR, is based on an exponential model of the initial phase of a PCR run. The requirement that the threshold is at a level where amplification is exponential, that is, in the initial, constant efficiency phase of a run, is explicit in derivations of the straight line model of the threshold standard curve presented in articles, tutorials, commercial guides, etc. Many sources, e.g. [1], include the requirement in discussing the method, but some also downplay the importance of where a threshold level is set. There seems a tacit understanding, perhaps engendered by the great number of valid analyses where the requirement is not verified, or is even recognized to not be met, e.g. [2], that strict conformance is not critical. However, an explanation why calibrations and analyses using a threshold set at a level higher than where the exponential phase of the run has ended can be valid and a derivation of the model for the standard curve that avoids the requirement are lacking.
We show in this note that the validity of a standard curve, predicted target amounts in unknown samples based on it, the efficiency estimated from its slope and the linear relationship between C T and the logarithm of target amount in a calibration sample do not depend on threshold level provided a more general requirement is met. We make a logical case for replacing the requirement regarding efficiency with a less restrictive requirement, and we derive the line modeling the standard curve in a way that includes cycles beyond the exponential (constant efficiency) phase of a run. Our model of the standard curve remains a straight line representation of C T as a function of the logarithm of calibration target amount and does not change how a calibration is constructed or used to estimate unknown target amounts.
Analysis
Our analysis identifies a characteristic of amplification profiles implied by the requirement of constant efficiency but more general, substitutes it for requiring that thresholds are within the exponential phase, and extends the derivation of the threshold standard curve to incorporate the new requirement.
Requiring shape congruency instead of constant efficiency to threshold
Amplification profiles having the same initial efficiency and maintaining that efficiency to a threshold level will have the same shape to that threshold except for translation along the cycle axis. Two profiles with different initial efficiencies cannot be superimposed by shifting along the cycle axis. If a threshold is set higher than is reached within the exponential phase but at a level where the profiles have the same shape, differences in C T from one profile to the next (and slopes of standard curves) will be the same as for any lower threshold. Thus substituting shape congruency to a threshold for the requirement that thresholds be set within the exponential phase also allows setting thresholds beyond the exponential phase.
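This shape-congruency claim can be checked numerically. The sketch below is illustrative only (the efficiency model, parameter values, and thresholds are all hypothetical, not taken from any cited data): per-cycle efficiency is made to depend only on the current amplicon amount, so profiles starting from different template amounts share one shape up to a shift along the cycle axis, and the C_T difference between two such profiles is essentially the same at a threshold inside the exponential phase and at one well beyond it.

```python
import math

def simulate(x0, e1=0.9, xmax=1e12, cycles=60):
    """Amplify with an efficiency that depends only on the current amplicon
    amount, so every profile has the same shape up to a shift along the
    cycle axis (the congruency condition in the text)."""
    xs = [x0]
    for _ in range(cycles):
        e = e1 * max(0.0, 1.0 - xs[-1] / xmax)
        xs.append(xs[-1] * (1.0 + e))
    return xs

def ct(xs, threshold):
    """Interpolated threshold-crossing cycle (log-linear between cycles)."""
    for k in range(1, len(xs)):
        if xs[k] >= threshold:
            f = ((math.log(threshold) - math.log(xs[k - 1]))
                 / (math.log(xs[k]) - math.log(xs[k - 1])))
            return (k - 1) + f
    raise ValueError("threshold never crossed")

a = simulate(1e2)
b = simulate(1e4)          # 100-fold more starting template

low, high = 1e8, 3e11      # inside vs. well beyond the constant-efficiency phase
d_low = ct(a, low) - ct(b, low)
d_high = ct(a, high) - ct(b, high)
print(d_low, d_high)
```

Both printed differences come out near log(100)/log(1.9) ≈ 7.17 cycles, the shift expected from the 100-fold difference in starting template, even though the higher threshold sits where efficiency has already fallen substantially.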
Modifying the straight-line model of the standard curve
We briefly recapitulate the derivation of the equation for the standard curve in order to show how to incorporate the new requirement that extends the model to include cycles beyond the exponential phase.
Derivation of the standard curve for threshold within exponential phase
Denoting efficiency at cycle k by $E_k$, the amount of starting target DNA by $X_0$, and a threshold amount of amplicon $X_T$ corresponding to a number of cycles $C_T$, any PCR run is represented by the general equation

$X_T = X_0 \prod_{k=1}^{C_T} (1 + E_k)$  (1)

This equation requires no assumptions; it represents a growth process in which the amount of amplicon increases by the proportional amount $E_k$ at each cycle. Adding the usual assumption that the threshold is set within the exponential cycle range where efficiency is at the initial value ($E_1$) and constant, $E_k = E_1$ for all cycles up to $C_T$, (1) becomes the usual starting point for deriving the equation for the standard curve in PCR guides and publications,

$X_T = X_0 (1 + E_1)^{C_T}$  (2)

Log-transformation and re-arrangement of (2) gives the standard curve,

$C_T = -\frac{\log(X_0)}{\log(1 + E_1)} + \frac{\log(X_T)}{\log(1 + E_1)}$  (3)

a straight line in $\log(X_0)$ with intercept equal to the logarithm of the threshold amount of amplified target divided by the logarithm of $1 + E_1$, and slope equal to minus the reciprocal of the logarithm of $1 + E_1$. $C_T$ is a value interpolated between the cycles bounding the threshold crossing. The slope and intercept in (3) are determined by fitting a line to a set of $C_T$ values measured for samples with known $X_0$. With the slope and intercept determined, the target amount $X_0$ in an unknown sample is computed from a measured $C_T$ and the slope and intercept.
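As a worked example of using (3), here is a minimal least-squares fit of a standard curve; the dilution series and C_T values are invented for illustration (they are perfectly linear by construction, so the fit is exact) and are not data from any cited study.

```python
import math

# Invented calibration data: 10-fold dilution series and illustrative C_T values
x0_vals = [1.88e1, 1.88e2, 1.88e3, 1.88e4, 1.88e5]   # known starting amounts
cts     = [34.1, 30.7, 27.3, 23.9, 20.5]             # measured C_T per sample

logx = [math.log10(x) for x in x0_vals]
n = len(logx)
mx, my = sum(logx) / n, sum(cts) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(logx, cts))
         / sum((a - mx) ** 2 for a in logx))
intercept = my - slope * mx

# Eq. (3): slope = -1/log10(1+E1), hence E1 = 10**(-1/slope) - 1
e1 = 10 ** (-1.0 / slope) - 1.0

# Quantify an unknown sample from its measured C_T
ct_unknown = 25.6
x_unknown = 10 ** ((ct_unknown - intercept) / slope)
print(slope, e1, x_unknown)
```

With these made-up numbers the slope is −3.4 C_T units per decade, giving a first-cycle efficiency $E_1$ of about 0.97.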
Derivation of the standard curve for threshold after exponential phase
The revised model, which includes cycles beyond the constant efficiency range, continues to imagine a threshold set corresponding to a point in the cycle sequence at or before the end of the constant efficiency phase. This threshold may be too low to observe and is not set explicitly. However, we regard the crossing of this threshold at cycle $C_{T0}$ to occur at an interpolated value, just as a usual threshold crossing is a value interpolated between bounding cycles. Measured $C_T$ values are determined in the usual way from a working threshold set without concern that it is within the exponential range. Requiring that the threshold be set at a level where all profiles to be analyzed have the same shape implies that in the cycles following $C_{T0}$ the cycle-to-cycle decreases in efficiency are the same in all profiles. It follows that the cycle distance from $C_{T0}$ to $C_T$ is a whole number and the same for all profiles. We denote this difference by $\delta = C_T - C_{T0}$ and rewrite (2) to include these cycles,

$X_T = X_0 (1 + E_1)^{C_{T0}} \prod_{i=C_{T0}+1}^{C_{T0}+\delta} (1 + E_i)$  (4)

Because the decrease in efficiency through these cycles is the same in every profile, we can replace the product term in (4) with the geometric mean of $(1 + E_i)$ over cycles $C_{T0} + 1$ to $C_{T0} + \delta$ raised to the power $\delta$. Denoting the geometric mean by $\langle 1 + E_\delta \rangle$, we get

$X_T = X_0 (1 + E_1)^{C_{T0}} \langle 1 + E_\delta \rangle^{\delta}$  (5)

The measured variable $C_T$ is re-introduced into (5) using $C_T = C_{T0} + \delta$,

$X_T = X_0 (1 + E_1)^{C_T - \delta} \langle 1 + E_\delta \rangle^{\delta}$  (6)

which is log-transformed and re-arranged to get the new standard curve equation,

$C_T = -\frac{\log(X_0)}{\log(1 + E_1)} + \frac{\log\!\left(X_T \left(\frac{1 + E_1}{\langle 1 + E_\delta \rangle}\right)^{\delta}\right)}{\log(1 + E_1)}$  (7)

The only difference between this standard curve and the usual one (3) is in the intercept coefficient, in which in (7) the amount of amplicon at threshold is multiplied by the factor $\left(\frac{1 + E_1}{\langle 1 + E_\delta \rangle}\right)^{\delta}$. Because this factor is $\geq 1$, $X_T$ estimated from the intercept of the usual model will be over-estimated if the threshold is set higher than reached within the constant efficiency part of the PCR run.
This difference will not matter when the intercept value is used to compute the amount of target in a test sample, as the value of the intercept, determined by the data, is the same whatever model is used; only what the intercept represents is different. The level where $(1 + E_1)/\langle 1 + E_\delta \rangle$ becomes greater than one marks the end of the exponential phase and can be estimated by comparing $X_T$ calculated from intercepts of standard curves constructed from thresholds at various levels. The efficiency estimated from the slope is the same as in (3), and from both curve models is a first-cycle efficiency whatever the threshold level.
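The intercept-comparison procedure for locating the end of the exponential phase might look like this in code; the value of $E_1$ and the fitted intercepts are made up to mimic the doubling-threshold behavior described in the text and are not measured values.

```python
def apparent_xt(intercept, e1):
    """Eq. (3) reads intercept = log(X_T')/log(1+E1), so X_T' = (1+E1)**intercept.
    Beyond the exponential phase, X_T' carries the extra factor
    ((1+E1)/<1+E_delta>)**delta and grows faster than the threshold itself."""
    return (1.0 + e1) ** intercept

thresholds = [500, 1000, 2000, 4000]       # fluorescence units (illustrative)
intercepts = [9.43, 10.47, 11.75, 13.10]   # fitted intercepts (illustrative)
e1 = 0.95

xts = [apparent_xt(b, e1) for b in intercepts]
pairs = list(zip(thresholds, xts))
flags = []
for (t_lo, x_lo), (t_hi, x_hi) in zip(pairs, pairs[1:]):
    ratio, expected = x_hi / x_lo, t_hi / t_lo
    # while amplification is exponential, doubling the threshold doubles X_T'
    flags.append("exponential" if abs(ratio - expected) < 0.1 * expected
                 else "beyond exponential")
print(flags)
```

With these illustrative intercepts the first doubling preserves the ratio (still exponential), while the later doublings inflate it, placing the end of the exponential phase between the second and third threshold levels.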
Discussion
We demonstrate the extended model using data published by Rutledge and Stewart [2] who in a study of methods for estimating efficiency presented data that are an example of the premise of this analysis. We also briefly note other insights afforded by the revised equation for the standard curve line.
Applying the new standard curve to data
Rutledge and Stewart [2] published an example plot of PCR profiles generated from five quantities of lambda gDNA ranging from 1.88 × 10^1 to 1.88 × 10^5 genomes; we apply the revised model to their data to demonstrate its use and how it reveals information about the data not revealed by the usual analysis. Because we wanted to estimate standard curve parameters from threshold levels in addition to those examined by Rutledge and Stewart, we manually de-plotted $C_T$ values from their Figure 2A for the seven threshold levels listed in Table 1. Slopes, and efficiencies estimated from slopes, of the standard curves constructed from the de-plotted $C_T$ values for each threshold level were essentially the same as displayed for five of the threshold levels in an inset to their Figure 2B in [2]. The second and third columns of Table 1 contain values of $E_1$ and $X_T \left(\frac{1 + E_1}{\langle 1 + E_\delta \rangle}\right)^{\delta}$ derived from the slopes and intercepts of the standard curves. Actual values of threshold target amounts, column 4 in Table 1, were estimated assuming linear instrument response and that amplification was within the constant efficiency part of the run at the lowest threshold, 500 fluorescence units (FU). Doubling the threshold to 1000 FU also doubled the intercept, confirming that the two lowest thresholds were below the amplicon level where efficiency begins to fall off, but on doubling the threshold again to 2000 FU the intercept more than doubled. The ratio $(1 + E_1)/\langle 1 + E_\delta \rangle$ was now greater than one, indicating that $E_\delta < E_1$ and amplification was beyond the exponential phase. The fifth column was computed from the third and fourth columns.
$\delta$ in the next column was estimated visually from the plotted data, and $E_\delta$ in the rightmost column was computed from values in the second, fifth and sixth columns.
The equivalence of E δ with E 1 at the 1000 FU threshold level and the decrease in E δ at 2000 FU tell us that the constant efficiency portion of the run ended at an amplicon amount corresponding to a signal level between 1000 and 2000 FU, consistent with the analysis in [2]. The progressive decrease in E δ for higher thresholds shows the continuing decrease in efficiency after the 2000 FU threshold is passed but still several cycles before the abrupt decrease in efficiency signaling the onset of the plateau phase which was 10,000 FU or higher.
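The $E_\delta$ values in the rightmost column follow from inverting the intercept ratio, ratio $= ((1+E_1)/\langle 1+E_\delta\rangle)^\delta$. A small sketch with illustrative numbers (not the de-plotted values from Table 1):

```python
def e_delta(e1, ratio, delta):
    """Solve ratio = ((1+e1)/(1+e_d))**delta for e_d, the geometric-mean
    efficiency over the delta cycles past the end of the exponential phase."""
    return (1.0 + e1) / ratio ** (1.0 / delta) - 1.0

e1 = 0.95
print(e_delta(e1, 1.0, 1))   # ratio of 1: E_delta equals E1, still exponential
print(e_delta(e1, 1.3, 2))   # ratio above 1: E_delta has dropped below E1
```

A ratio of 1 returns $E_\delta = E_1$; any ratio above 1 yields $E_\delta < E_1$, signalling that the working threshold sits beyond the exponential phase.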
Implications for quantification
Table 1. Attributes of calibrations at various thresholds in Figure 2A of [2]

A standard curve constructed from a higher threshold is moved higher on the $C_T$ vs. $\log(X_0)$ plot as a result of threshold crossings occurring later, but the slope and the efficiency estimated from the slope are not changed provided the profile shapes are the same to the threshold level. The caution for threshold standard curve quantification, that initial efficiency must be the same for all samples, both calibration and test, is included in the requirement that the shapes of all profiles be the same, which can alternately be stated as requiring that the initial efficiency and the fall-off in efficiency that occurs as amplicon amount increases must be the same to threshold level for all samples. Profiles developed under the same conditions will typically conform to the new requirement some cycles beyond the end of the exponential phase, but thresholds must still be set at levels where efficiency is not too low, e.g., > 50%, and before differences in profiles owing to sample effects, amplification of a second target, etc. develop. Verifying that slopes of standard curves (or efficiencies derived from the slopes) constructed from thresholds at different levels are the same validates that the calibration data, up to the highest threshold tested, meet the requirements of the method. Verifying that the predicted target amount of a test sample is consistent when estimated from two or more calibration lines constructed at threshold settings where calibration profiles are shape-congruent validates that the test sample data meet the requirements of the method.
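The slope-consistency validation suggested above is easy to automate. In this sketch the $C_T$ tables and the 0.05-slope tolerance are hypothetical choices, not values from the paper:

```python
def fit_slope(logx, cts):
    """Least-squares slope of C_T versus log10(X0)."""
    n = len(logx)
    mx, my = sum(logx) / n, sum(cts) / n
    return (sum((a - mx) * (b - my) for a, b in zip(logx, cts))
            / sum((a - mx) ** 2 for a in logx))

# One dilution series, with C_T values read off at three working thresholds
logx = [1.27, 2.27, 3.27, 4.27, 5.27]
ct_by_threshold = {
    500:  [34.1, 30.7, 27.3, 23.9, 20.5],
    2000: [36.2, 32.8, 29.4, 26.0, 22.6],
    8000: [38.5, 35.1, 31.7, 28.3, 24.9],
}

slopes = {t: fit_slope(logx, cts) for t, cts in ct_by_threshold.items()}
spread = max(slopes.values()) - min(slopes.values())
shape_congruent = spread < 0.05   # tolerance is a judgment call
print(slopes, shape_congruent)
```

Identical slopes at all three thresholds indicate the profiles are shape-congruent up to the highest threshold tested; a drifting slope would flag that some profiles have begun to diverge before that level.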
Application to methods using a standard curve not based on C T
Requiring profiles to have the same shape would also apply to methods that instead of a C T use the cycle position of a different profile attribute, e.g., a derivative maximum [3]. As an aside we note that because an exponential and all its derivatives are increasing functions, a derivative maximum is always beyond the exponential phase.
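The aside about derivative maxima can be illustrated with a toy profile; the efficiency model and parameters below are hypothetical. The cycle with the largest per-cycle increase falls several cycles after efficiency first drops from its initial value.

```python
# Toy profile: efficiency falls off linearly with amplicon amount (illustrative)
e1, xmax = 0.9, 1e12
xs = [100.0]
for _ in range(60):
    eff = e1 * max(0.0, 1.0 - xs[-1] / xmax)
    xs.append(xs[-1] * (1.0 + eff))

# cycle with the largest one-cycle increase ("derivative maximum")
diffs = [b - a for a, b in zip(xs, xs[1:])]
deriv_max_cycle = diffs.index(max(diffs)) + 1

# first cycle at which efficiency has measurably dropped below E1
effs = [b / a - 1.0 for a, b in zip(xs, xs[1:])]
falloff_cycle = next(k + 1 for k, e in enumerate(effs) if e < 0.99 * e1)

print(falloff_cycle, deriv_max_cycle)
```

In this toy run the derivative maximum lands well after the fall-off cycle, consistent with the observation that a derivative maximum is always beyond the exponential phase.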
Limitations
The revised model of the threshold standard curve removes the requirement that threshold crossings occur within the exponential phase of a run, provides a way to estimate where the exponential phase ends and enables new ways to check validity of both calibration and test sample data, but does not change how a standard curve is constructed or the calculation of unknown target amount in a sample. The limitations of the threshold method are not affected by the results of this study.
Regulation of PPARγ2 Stability and Activity by SHP-1
Abstract

The protein tyrosine phosphatase Src homology region 2 domain-containing phosphatase-1 (SHP-1) plays an important role in modulating glucose and lipid homeostasis. We previously suggested a potential role of SHP-1 in the regulation of peroxisome proliferator-activated receptor γ2 (PPARγ2) expression and activity but the mechanisms were unexplored. PPARγ2 is the master regulator of adipogenesis, but how its activity is regulated by tyrosine phosphorylation is largely unknown. Here, we found that SHP-1 binds to PPARγ2 primarily via its N-terminal SH2-domain. We confirmed the phosphorylation of PPARγ2 on tyrosine-residue 78 (Y78), which was reduced by SHP-1 in vitro, resulting in decreased PPARγ2 stability. Loss of SHP-1 led to elevated, agonist-induced expression of the classical PPARγ2 targets FABP4 and CD36, concomitant with increased lipid content in cells expressing PPARγ2, an effect blunted by abrogation of PPARγ2 phosphorylation. Collectively, we discovered that SHP-1 affects the stability of PPARγ2 through dephosphorylation, thereby influencing adipogenesis.
Introduction
The Src homology region 2 domain-containing phosphatase-1 (SHP-1) (also known as protein tyrosine phosphatase nonreceptor type 6 (PTPN6)) regulates signal transduction by dephosphorylating phospho-tyrosine residues on its target proteins. SHP-1 is mostly known for its role in hematopoietic cells [1][2][3] where it negatively regulates signaling mediated by cytokine receptors such as the interleukin-3 receptor, erythropoietin receptor, colony-stimulating factor 1 receptor, and T cell/B cell antigen receptor. 4 In addition to its established impact on immune cells, SHP-1 is also expressed in nonhematopoietic cells such as intestinal epithelial cells, 5,6 myocytes, 6 adipocytes, 7 and hepatocytes, 8,9 where it regulates receptor tyrosine kinases, 10 epidermal growth factor receptor, 11 insulin receptor substrate-1 (IRS-1), and carcinoembryonic antigen-related cell adhesion molecule 1 (CEACAM1). 12 SHP-1 possesses two N-terminally located SH2 domains, a single phosphatase domain and a C-terminal tail. 13 The C-terminal tail of SHP-1 possesses multiple tyrosine and serine residues harboring regulatory functions. 14 The structural and molecular analysis of SHP-1 has revealed that its enzymatic function is regulated by structural rearrangement of the SH2 domains, which by binding to the catalytic domain keep the phosphatase inactive. 15 The catalytic block is lifted when the SH2-domains interact with phosphotyrosine residues in target proteins, which are then transferred to the C-terminal catalytic domain for dephosphorylation. 13 [17][18] Several natural ligands such as polyunsaturated fatty acids, 19 eicosanoids 20 and synthetic ligands including thiazolidinediones like rosiglitazone 21 are known to activate PPARγ. 22,23 In addition, the activity of PPARγ is modulated by several post-translational modifications such as phosphorylation, 24,25 SUMOylation [26][27][28] and ubiquitination.
29,30 PPARγ is expressed predominantly in adipose tissue, 31 in immune cells, 32 and to a lower extent in other metabolic tissues such as the liver. 33 In adipose tissue, PPARγ plays a key role in lipid metabolism and adipocyte differentiation. For instance, fibroblasts that lack PPARγ are not able to mature into adipocytes. 34 In addition to lipid metabolism, PPARγ is also known to modulate glucose metabolism, inflammation and cell proliferation, and its functions are organ and context dependent. 33 In the liver, increased expression of PPARγ has been shown to lead to increased lipid accretion and hepatic steatosis. 35,36 Previously, we have shown that SHP-1 modulates PPARγ activity, but the underlying molecular mechanisms remain elusive. 9 Herein, we demonstrate that SHP-1 interacts with PPARγ2 to promote its dephosphorylation, which results in its degradation and subsequently in altered expression of adipogenic genes. Taken together, our data reveal a relationship between SHP-1 and PPARγ2, adding an additional opportunity to exploit this novel regulation for exploring new therapeutic strategies for limiting fat accretion in obesity.
SHP-1 interacts with PPARγ2
We first determined whether SHP-1 is expressed in the same subcellular compartment as the nuclear transcription factor PPARγ and found the presence of both proteins in the nucleus of HepG2 cells and differentiated 3T3-L1 adipocytes (Supplementary material, Figure S1A to C). This prompted us to investigate the molecular mechanisms by which SHP-1 regulates PPARγ. We first tested whether these proteins physically interact in vitro. Co-expression of FLAG-SHP-1 and MYC-PPARγ2 constructs in NIH3T3 cells, followed by immunoprecipitation of FLAG-SHP-1, indeed revealed an interaction between SHP-1 and PPARγ2 (Figure 1A). Furthermore, the binding of SHP-1 to PPARγ2 was unaffected by treatment of NIH3T3 cells with the PPARγ2 agonist rosiglitazone (Figure 1A and B), indicating that this interaction is not ligand-dependent.
After establishing that SHP-1 interacts with PPARγ2, we asked which domain of SHP-1 is responsible for this interaction. We generated SHP-1 constructs expressing either a FLAG-tagged N-terminal fragment containing only the two SH2-domains (SH2-SH2) or two FLAG-tagged C-terminal fragments without the SH2-domains, but carrying the phosphatase domain in its wild-type (PTPase WT) or substrate-trapping, dominant negative (PTPase DN, C453S mutation) form (Figure 1C). FLAG-immunoprecipitations from HepG2 cells co-transfected with these SHP-1 fragments or full-length WT or DN SHP-1 and Myc-PPARγ2 showed a slight, but not significant, increase in the interaction of PPARγ2 with full-length DN compared to WT SHP-1 (Figure 1D). A similar finding was observed with an additional substrate-trapping mutant (SHP-1 D419A) (Supplementary material, Figure S2A and B). Deletion of the N-terminus of SHP-1 strongly reduced the association between PPARγ2 and SHP-1, but the DN form of the PTPase domain still bound better to PPARγ2 than the WT version (Figure 1D and E). The N-terminal fragment retained the interaction with PPARγ2 at a similar level as full-length SHP-1 (Figure 1D and E), indicating that SHP-1 mainly interacts with PPARγ2 via its SH2-domains. To further dissect whether the N-terminal SH2 (N-SH2) or C-terminal SH2 (C-SH2) domain is responsible for the interaction with PPARγ2, we generated C-terminally FLAG-tagged GFP-SH2 domain fusions containing either the N-SH2, the C-SH2, or both SH2-domains, because the N-SH2 domain tagged only with FLAG-epitopes was unstable (data not shown). FLAG-immunoprecipitations from HepG2 cells co-transfected with these GFP fusion constructs and Myc-PPARγ2 showed that PPARγ2 interacted mainly with the N-SH2 domain (Figure 1F and G).
Collectively, these findings show that PPARγ2 is a novel binding partner of SHP-1 and that the interaction is independent of PPARγ2 activation.
SHP-1 dephosphorylates PPARγ2
Next, we investigated whether PPARγ2, which has been shown to be tyrosine-phosphorylated,37 could be a substrate of SHP-1 in vitro. To confirm the tyrosine phosphorylation of PPARγ2, cells were transfected with a FLAG-PPARγ2 construct and were either left untreated or treated with the general tyrosine phosphatase inhibitor bpV(HOpic) for 30 min. Immunoprecipitated PPARγ2 was strongly tyrosine-phosphorylated after bpV(HOpic) treatment, as detected with phosphotyrosine-specific antibodies (Figure 2A). To determine whether SHP-1 could directly dephosphorylate PPARγ2, we performed in vitro dephosphorylation assays. Phosphorylated Myc-PPARγ2 was immunoprecipitated as substrate from transiently transfected cells treated with bpV(HOpic) as described above. This substrate was incubated with FLAG-SHP-1-WT or FLAG-SHP-1-DN proteins eluted from FLAG immunoprecipitates of transfected cells. Eluates from cells transfected with an empty FLAG vector served as a negative control, and recombinant lambda phosphatase as a positive control. Similar to lambda phosphatase, SHP-1 WT, but not SHP-1 DN, was able to dephosphorylate PPARγ2. Overall, these results indicate that SHP-1 interacts with PPARγ2 and efficiently dephosphorylates it (Figure 2B and C).
Previously, protein tyrosine phosphatase 1B (PTP1B) has been implicated in PPARγ dephosphorylation.25 Our dephosphorylation assay, performed with phosphorylated PPARγ2 as substrate and SHP-1 and PTP1B as enzymes immunoprecipitated and purified from transfected cells, revealed that SHP-1 dephosphorylated PPARγ2 in vitro to a greater extent than PTP1B (Figure 2D and E). SHP-1 and PTP1B used in the dephosphorylation assay were equally efficient in dephosphorylating a non-specific tyrosine phosphatase substrate (DiFMUP), ruling out that the difference in the dephosphorylation of PPARγ2 was merely due to reduced PTP1B activity (Figure 2F). Together, these results suggest SHP-1 as a major phosphatase regulating the tyrosine phosphorylation status of PPARγ2, although additional phosphatases such as PTP1B might contribute to this effect.
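The densitometric quantifications referred to throughout (e.g. Figure 2C and E) boil down to a simple double normalization: phospho-band over total-band, expressed relative to a control lane. A minimal sketch; the function name and intensity values are hypothetical, not taken from the paper:

```python
def relative_phosphorylation(py_signal, total_signal, py_ctrl, total_ctrl):
    """Normalize the phosphotyrosine band to the total-protein band,
    then express the ratio relative to the control lane."""
    return (py_signal / total_signal) / (py_ctrl / total_ctrl)

# Hypothetical ImageJ band intensities (arbitrary units):
# phosphatase-treated lane vs. untreated control lane
print(relative_phosphorylation(1000, 4000, 2000, 4000))  # → 0.5
```

A value below 1 indicates less phosphorylation than the control after correcting for loading differences.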
SHP-1 modulates PPARγ2 stability
Next, we wanted to examine the functional relationship between SHP-1 and PPARγ2 in more detail. We generated 3T3-L1 cells with a SHP-1 knockdown (KD) and control cells using lentivirus-encoded shPtpn6 and shScramble shRNAs. During differentiation of these cells into adipocytes, SHP-1 and PPARγ protein levels in control cells increased concomitantly, implying a co-regulation of the two proteins. PPARγ protein levels showed a small increase at day 5 and 7 in SHP-1 KD cells compared to control cells, as did mRNA levels at day 7 (Supplementary material, Figure S3A to D). These results suggest that SHP-1 influences PPARγ stability, but might also play a role in its transcriptional regulation, which would corroborate our previous data showing an increase of PPARγ transcripts in livers of high-fat-fed mice with a hepatocyte-specific SHP-1 knockout compared to their wild-type counterparts.9

Figure 2.
(A) HEK293T cells were transfected with an empty FLAG vector or a FLAG-PPARg2 plasmid. After 24 h, cells were treated with bpV(HOpic) (20 µM) for 30 min. Lysates were immunoprecipitated with anti-FLAG beads and analyzed by western blot analysis using the respective antibodies. IP: immunoprecipitation, WCE: whole cell extracts, 4G10/PY20: mix of phosphotyrosine-specific antibodies (N = 3). (B) HEK293T cells were transfected with FLAG empty vector, FLAG-SHP-1-WT, FLAG-SHP-1-DN or Myc-PPARg2 plasmids. After 24 h, cells were either left untreated or treated with 20 µM bpV(HOpic) for 30 min. Myc-precipitated, tyrosine-phosphorylated Myc-PPARγ2 was used as substrate and incubated for 30 min at 37 °C with FLAG, FLAG-SHP-1-WT or FLAG-SHP-1-DN eluted from FLAG beads after FLAG precipitation, or with recombinant λ phosphatase. Tyrosine phosphorylation of Myc-PPARγ2 was determined by western blot analysis using phosphotyrosine-specific antibodies to measure SHP-1 activity. The expression of the respective Myc and FLAG constructs was verified by immunoblotting of whole cell extracts with the indicated antibodies (WT: wild-type, DN: dominant negative). (Representative of three independent experiments.) (C) Quantification of levels of phosphotyrosine Myc-PPARγ2 and total Myc-PPARγ2 was performed by densitometry using ImageJ software (N = 3). (D) NIH3T3 cells were transfected with V5 empty vector, PTP1B-V5, SHP-1-V5 or FLAG-PPARg2 constructs. After 24 h, cells were either left untreated or treated with 20 µM bpV(HOpic) for 30 min. FLAG-PPARγ2 eluted from FLAG-agarose beads was used as substrate and incubated with V5-precipitated V5, PTP1B-V5 or SHP-1-V5 at 37 °C for 30 min. Tyrosine phosphorylation of FLAG-PPARγ2 was determined by western blotting using phosphotyrosine-specific antibodies to measure PTP1B and SHP-1 activity. The expression of the respective V5 and FLAG constructs was verified by immunoblotting of whole cell extracts with the indicated antibodies. (Representative of three independent experiments.) (E) Quantification of levels of phosphotyrosine FLAG-PPARγ2 and total FLAG-PPARγ2 was performed by densitometry using ImageJ software (N = 3). (F) V5, SHP-1-V5 or PTP1B-V5 immunoprecipitates were incubated with DiFMUP (6,8-difluoro-4-methylumbelliferyl phosphate) substrate. The reaction was performed at 37 °C for 30 min. Phosphatase activity was assessed by measuring fluorescence at 455 nm (setting: Ex 358/Em 455) (N = 2). IP: immunoprecipitation, WCE: whole cell extracts.

To analyze the posttranslational regulation of PPARγ through SHP-1 more specifically, we exogenously expressed PPARγ2 in HepG2 and NIH3T3 cells (Figure 3A and B). While HepG2 cells failed to respond to PPARγ2 overexpression with or without rosiglitazone treatment (Figure 3C), NIH3T3 cells displayed a robust induction of classical PPARγ target genes (CD36 and FABP4) in the presence of PPARγ2, an effect that was further increased after treatment with rosiglitazone (Figure 3D). Therefore, we chose NIH3T3 cells for the following experiments. We generated NIH3T3 cells overexpressing PPARγ2 using retroviral transduction and knocked down (KD) SHP-1 in these and control cells using lentivirus-encoded Ptpn6 shRNA (Figure 4A). Depletion of SHP-1 resulted in a slight increase of PPARγ2 protein expression (Figure 4A and B), which was not due to higher transcription, since there was no significant difference in Pparg2 transcript levels between SHP-1 WT and KD cells (Figure 4C). To corroborate these data, we measured PPARγ2 stability in control or SHP-1-depleted cells exposed to the protein synthesis inhibitor cycloheximide. Whereas the half-life of PPARγ2 was 4–4.5 h and the protein was almost fully degraded after 6 h in SHP-1 WT cells, loss of SHP-1 significantly slowed PPARγ2 degradation, with less than a 50% decrease of PPARγ2 even after 6 h (Figure 4D and E). Although we did not observe the same significant increase in PPARγ2 levels in SHP-1 KD cells during 3T3-L1 adipocyte differentiation as in SHP-1 KD NIH3T3 cells, which might be explained by cell type-specific variations, together these results show that SHP-1 controls PPARγ2 stability.
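The 4–4.5 h half-life estimated from the cycloheximide chase corresponds to a first-order decay fit of band intensity versus time. A minimal sketch of that arithmetic, using hypothetical densitometry values rather than the study's data:

```python
import math

def half_life(times_h, intensities):
    """Estimate a protein half-life (in hours) from a cycloheximide
    chase, assuming first-order decay: fit ln(intensity) vs. time by
    least squares and convert the slope to t1/2 = ln(2)/k."""
    logs = [math.log(v) for v in intensities]
    n = len(times_h)
    t_mean = sum(times_h) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times_h, logs))
             / sum((t - t_mean) ** 2 for t in times_h))
    return math.log(2) / -slope  # decay rate k = -slope

# Hypothetical normalized PPARγ2 band intensities at 0, 1, 3 and 6 h of CHX
print(round(half_life([0, 1, 3, 6], [1.0, 0.85, 0.62, 0.38]), 1))  # → 4.3
```

The same fit applied to the SHP-1 KD time course would yield a longer half-life, reflecting the slower degradation reported in Figure 4D and E.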
SHP-1 regulates PPARγ2-mediated adipogenesis
Finally, we analyzed the molecular mechanisms and functional effects of the SHP-1-regulated PPARγ2 stability on PPARγ2 activity. Previously, tyrosine residue 78 (Y78) had been identified as the major site of tyrosine phosphorylation in PPARγ2, and Y78 phosphorylation increased the stability of PPARγ2 in HEK293 cells by reducing ubiquitin-dependent degradation.25,37 In agreement with these previous findings, we found that the protein amount of a PPARγ2-Y78F mutant, in which tyrosine 78 is replaced by the non-phosphorylatable phenylalanine, was much lower than that of PPARγ2-WT in NIH3T3 cells, despite no differences in the transcript levels of the two constructs (Supplementary material, Figure S4A and B). However, treatment of these cells with the proteasome inhibitor MG132 restored the stability of the PPARγ2-Y78F mutant to PPARγ2-WT levels (Supplementary material, Figure S4C and D), supporting the previous finding that Y78 is the main site regulating PPARγ2 stability and that loss of Tyr phosphorylation promotes PPARγ2 degradation by the proteasome. To determine whether the Y78 residue undergoes phosphorylation in NIH3T3 cells, we treated NIH3T3 cells expressing wild-type PPARγ2 or mutant PPARγ2 (Y78F) with the proteasome inhibitor MG132 to stabilize PPARγ2 protein levels, as well as with bpV(HOpic), and immunoprecipitated PPARγ2. Levels of tyrosine-phosphorylated PPARγ2 were significantly lower in the Y78F mutant compared to wild-type PPARγ, indicating that Y78 is the main residue undergoing tyrosine phosphorylation in NIH3T3 cells (Figure 5A).
To understand the role of Y78 in SHP-1-controlled PPARγ2 stability, we compared NIH3T3 cells overexpressing PPARγ2-WT or the PPARγ2-Y78F mutant in the SHP-1-WT and SHP-1-KD backgrounds (Figure 5B). As before, we found higher levels of PPARγ2-WT in SHP-1-depleted cells and low amounts of PPARγ2-Y78F in SHP-1-WT cells. Interestingly, PPARγ2-Y78F levels were unaffected by the loss of SHP-1, suggesting that the SHP-1-mediated regulation of PPARγ2 stability depends on the presence, and probably the phosphorylation, of Y78. We ruled out the possibility that the observed effects were due to differences in Pparg2 transcription, because we did not find a significant difference in the transcript levels of any of the Pparg2 constructs under SHP-1-WT and SHP-1-KD conditions, with or without rosiglitazone treatment (Figure 5C).
To determine the impact of SHP-1 on PPARγ2 activity, we examined the expression of classical PPARγ target genes (Fabp4 and Cd36). Reflecting the PPARγ2 protein levels seen in Figure 5B, Fabp4 and Cd36 transcripts were significantly increased in cells overexpressing wild-type PPARγ2 in the SHP-1-KD background compared to SHP-1-WT cells after rosiglitazone treatment (Figure 5D and E). However, there was no significant difference in the low levels of Fabp4 and Cd36 transcripts between SHP-1-WT and SHP-1-KD cells expressing PPARγ2-Y78F after rosiglitazone treatment. To analyze whether the SHP-1-mediated regulation of PPARγ2 activity also results in phenotypic changes, we treated PPARγ2-WT- and -Y78F-overexpressing NIH3T3 cells with rosiglitazone in the SHP-1-WT or -KD background. Similar to the Fabp4 and Cd36 transcript levels, and mirroring the PPARγ2 protein levels, loss of SHP-1 caused an increase in lipid content in PPARγ2-WT-, but not PPARγ2-Y78F-expressing cells, which generally showed reduced lipid content (Figure 5F and G). These findings indicate that the SHP-1-mediated regulation of PPARγ2 stability through tyrosine residue Y78 alters PPARγ2 activity and consequently affects adipogenesis.
Discussion
In the present study, we discovered PPARγ2 as a new interacting partner and substrate of SHP-1. As depicted in the model in Figure 5H, knockdown of SHP-1 stabilizes the PPARγ2-WT protein, leading to increased PPARγ2 protein expression as well as enhanced transcription of Cd36 and Fabp4, resulting in elevated adipogenesis. Substitution of tyrosine residue 78 with a non-phosphorylatable phenylalanine markedly destabilized PPARγ2, consequently dampening the SHP-1-dependent effects. Thus, SHP-1-mediated dephosphorylation at the Y78 residue results in the degradation of PPARγ2 and consequently in the modification of its activity. Since SHP-1 interacted to the same extent with PPARγ2 from untreated or rosiglitazone-treated cells, this SHP-1-mediated regulation seems to be independent of the activation status of PPARγ2.
While SHP-1 function has been extensively characterized in the immune system, a much more limited number of SHP-1 substrates have been documented in metabolic tissues, such as CEACAM1 12 and the phosphatidylinositol 3-kinase regulatory subunit p85.7 Our work provides several lines of evidence establishing PPARγ, one of the master regulators of adipogenesis, as a SHP-1 substrate. SHP-1 interacts with PPARγ2 via its N-terminal SH2 domains, which are responsible for the detection of phosphotyrosine residues in SHP-1 target proteins. PPARγ2 bound more strongly to a SHP-1 substrate-trapping mutant than to wild-type SHP-1. Choi et al. discovered that phosphorylation of PPARγ at the Y78 residue in adipocytes led to the suppression of pro-inflammatory gene expression, as well as of the secretion of chemokines and cytokines, ultimately resulting in reduced macrophage migration, and that this phosphorylation was reversed by PTP1B.25 In our study, SHP-1 dephosphorylated PPARγ2 in vitro even better than PTP1B, which had been suggested to be the phosphatase responsible for PPARγ dephosphorylation.25 The identification of PPARγ2 as a SHP-1 substrate adds another transcription factor to the short list of transcriptional regulators, including β-catenin and TonEBP/OREBP, that have been described as SHP-1 targets.38,39

Protein dephosphorylation can lead to a variety of cellular responses, including protein degradation. The relationship between protein dephosphorylation and proteasomal degradation is complex and depends on many factors, such as the specific protein being targeted and the cellular context. Several studies have demonstrated a link between protein dephosphorylation and protein degradation. For instance, Nalavadi et al. demonstrated that degradation of the fragile X mental retardation protein in dendrites by the ubiquitin-proteasome system is mediated by protein phosphatase 2A activity.40 In another report, dephosphorylation of Bcl-2 resulted in its ubiquitin-mediated degradation.41 However, to the best of our knowledge, except for one report,37 there have been no other studies linking the dephosphorylation of a tyrosine residue to the degradation of the respective protein.
The stability of nuclear receptors is influenced by their interaction with coactivators/co-repressors and by phosphorylation-induced, ubiquitin-mediated protein degradation.42,43 We showed that loss of SHP-1 increased PPARγ2 stability, an effect that is abrogated by mutation of the tyrosine 78 residue to phenylalanine. Previously, it had been demonstrated that the Y78 residue of PPARγ2 is tyrosine-phosphorylated by c-Abl kinase, which leads to PPARγ2 stabilization by inhibiting ubiquitin-dependent degradation.37 Therefore, our data imply that, by dephosphorylating the Y78 residue, SHP-1 negatively regulates the stability of PPARγ2. A compelling avenue for future investigation lies in unraveling the molecular mechanism by which changes in tyrosine phosphorylation modulate ubiquitin-mediated protein degradation. One could speculate about different potential mechanisms. First, tyrosine phosphorylation might inhibit the interaction with an E3 ubiquitin ligase, either directly or indirectly by recruiting another protein that prevents the access of an E3 ubiquitin ligase, thereby limiting ubiquitination. Conversely, tyrosine phosphorylation might recruit a deubiquitinase and stabilize PPARγ by reversing ubiquitination. Several E3 ubiquitin ligases for PPARγ, including makorin ring finger protein 1 (MKRN1),44 neural precursor cell expressed developmentally downregulated protein 4 (NEDD4) 45 and tripartite motif-containing 25 (TRIM25),46 as well as a deubiquitinase for PPARγ, ubiquitin-specific protease 22 (USP22), have been described, and future studies should provide deeper insights into the so far unknown molecular details of this novel regulatory mechanism. Previously, Xiao et al. demonstrated that SHP-1 regulates the casitas B-lineage lymphoma (Cbl)-b-mediated T cell response by controlling the tyrosine phosphorylation and ubiquitination of Cbl-b.47 It will be interesting to study in the future whether dephosphorylation by SHP-1 controls the stability and activity of other proteins.
Overall, we discovered a novel function for the protein tyrosine phosphatase SHP-1, which, by dephosphorylating PPARγ2, regulates its protein stability and thereby adipogenesis. However, the present study has certain limitations, which leave room for future investigations. Indeed, the physiological implication and relevance of tyrosyl phosphorylation of PPARγ2 remain to be further characterized. The use of an in vivo model expressing PPARγ2(Y78F) may provide further insights into the physiological significance of this site in metabolic processes. Furthermore, the specific signaling pathway(s) responsible for promoting tyrosine phosphorylation of PPARγ2 and the exact molecular mechanism by which nuclear SHP-1 is activated to dephosphorylate PPARγ2 remain elusive and need further investigation.
Cell culture, transfection, and treatment
All cell lines were cultured at 37 °C in a humidified atmosphere containing 5% CO2. The cell lines used were human embryonic kidney HEK293T cells, human hepatoma HepG2 cells and NIH3T3 mouse fibroblasts. HEK293T and NIH3T3 cells were cultured in DMEM high glucose (Wisent Bioproducts cat# 319-005-CL) supplemented with 10% fetal bovine serum (FBS). HepG2 cells were cultivated in DMEM low glucose (Wisent Bioproducts cat# 319-010-CL) supplemented with 10% FBS. 3T3-L1 cells were grown and differentiated into adipocytes as previously described.49 All cell lines were transfected using jetPRIME transfection reagent (Polyplus-transfection, New York, USA) according to the manufacturer's instructions at a cell confluence between 60 and 80%. Depending on the experiment, some cells were treated the day after transfection with 100 nM rosiglitazone (SynFine cat# 010301) before further analysis. Cells were treated with 20 µM (HEK293T and NIH3T3 cells) or 10 µM (HepG2 cells) bpV(HOpic) (Calbiochem, cat# 203701, reconstituted in water) for 30 min before harvesting. In the indicated experiments, 100 µM cycloheximide (Sigma, cat# 01810-1G) and 25 µM MG-132 (Calbiochem, cat# 474787-10MG) were added to the cell culture medium.
Virus preparation and generation of stable cell lines
Retro- and lentiviruses were prepared using HEK293T cells. Retroviruses were prepared by transfecting the WZLneo-PPARγ2 or WZLneo-PPARγ2-Y78F plasmid along with Gag-Pol and VSVg constructs. For lentiviral infection, shControl or shPTPN6 constructs were transfected together with psPAX2 and pMD2.G plasmids. Transfections were performed using jetPRIME transfection reagent as per the manufacturer's instructions. Virus-containing supernatants were collected 48 h after transfection and filtered using a 0.45 µm filter.50 NIH3T3 or HepG2 cells were infected with retroviral supernatants (empty vector or PPARγ2-expressing vector) in the presence of polybrene (8 µg/mL). After 24 h of infection, the transduction medium was replaced with fresh medium. After another 24 h, cells were split and selected using G418 (800 µg/mL). Expression of the PPARγ2 protein was determined by western blot analysis using a PPARγ-specific antibody (Cell Signaling # 2443S, (81B8), 1:1000 dilution). NIH3T3 cells overexpressing mutant PPARγ2-Y78F were generated in the same way.
PPARγ2 protein was measured by western blot analysis at 0, 1, 3, and 6 h of cycloheximide treatment.
Oil red O staining
To assess lipid content, NIH3T3 cells stably expressing PPARγ2 or PPARγ2-Y78F in the SHP-1-WT or SHP-1-KD background were cultured to confluency and treated with 100 nM rosiglitazone for 10 days. On day 10, cells were fixed and stained with oil red O (Sigma O-0625) as described previously.37 Oil red O was dissolved in isopropanol and the relative intracellular lipid content was determined by measuring the OD at 500 nm.37
RNA isolation and quantitative PCR
Total RNA was isolated using the Direct-zol RNA MiniPrep kit (R2072, Zymo Research, Irvine, USA) as per the manufacturer's instructions. Two micrograms of total RNA were reverse transcribed using the High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific, cat# 4368813). cDNA was diluted and the expression of transcripts was determined using Multicell Advanced qPCR master mix (Wisent Bioproducts, cat# 800-435-UL). Hprt1 or B2M were included as reference genes (Supplementary material, Table S1). Relative fold change was calculated using the 2^(−ΔΔCt) method.53
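The 2^(−ΔΔCt) calculation amounts to two successive normalizations: target Ct against the reference gene, then against the control condition. A minimal sketch; the gene names and Ct values below are illustrative only:

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(control)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: a target gene vs. Hprt1, treated vs. control cells
print(fold_change(22.0, 18.0, 24.0, 18.0))  # → 4.0
```

Because Ct is a log2 scale, each unit of negative ΔΔCt corresponds to a doubling of relative expression.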
Statistical analysis
All values presented in graphs are the averages of at least two independent experiments. Error bars show the standard error of the mean. Student's t test and one-way or two-way ANOVA with Tukey's post hoc test were used to determine statistical significance using GraphPad Prism software version 8. Western blots were quantified using ImageJ software.
Figure 1.
Figure 1. SHP-1 interacts with PPARγ mainly through the SH2 domains. (A) NIH3T3 cells were transfected with the indicated plasmids and were either treated with DMSO or 100 nM rosiglitazone for 1 h. Cell lysates were immunoprecipitated using anti-FLAG M2 beads. The expression of the respective proteins was determined by western blot analysis using the indicated antibodies (representative of two independent experiments). (B) Quantification of immunoprecipitated Myc-PPARγ2 and FLAG-SHP-1 was performed by densitometry using ImageJ (N = 2). (C) Schematic representation of the various SHP-1 constructs generated via recombinant DNA technology. (D) HepG2 cells were transfected with the indicated plasmids and cell lysates were immunoprecipitated using anti-FLAG M2 beads. The expression of the respective proteins was determined by western blot analysis using the indicated antibodies (representative of two independent experiments). (E) Quantification of immunoprecipitated Myc-PPARγ2 and FLAG-SHP-1 constructs was performed by densitometry using ImageJ software (N = 2). (F) HepG2 cells were transfected with the indicated plasmids and cell lysates were immunoprecipitated using anti-FLAG M2 beads. The expression of the respective proteins was determined by western blot analysis using the indicated antibodies (representative of three independent experiments). (G) Quantification of immunoprecipitated Myc-PPARγ2 and GFP-FLAG3 SH2 domain constructs was performed by densitometry using ImageJ software (N = 3). IP: immunoprecipitation, WCE: whole cell extracts.
Figure 3.
Figure 3. CD36 and FABP4 transcript levels are increased in NIH3T3, but not HepG2, cells in response to rosiglitazone treatment after overexpression of PPARγ2. (A, C) HepG2 cells and (B, D) NIH3T3 cells were transfected with either FLAG empty vector or FLAG-PPARg2. After 24 h, cells were treated with either DMSO or rosiglitazone (100 nM) for 16 h. Expression of the indicated proteins was determined by western blot analysis (A and B) using the respective antibodies. The amounts of CD36 and FABP4 transcripts were evaluated by qPCR (C and D) (N = 3).
Figure 5.
Figure 5. SHP-1 regulates PPARγ2-mediated adipogenesis through tyrosine residue 78. (A) NIH3T3 cells overexpressing Pparg2 or Pparg2 (Y78F) in the SHP-1-WT background were treated with 20 µM MG132 for 1 h. During the last 30 min of MG132 incubation, cells were treated with 20 µM bpV(HOpic). Protein lysates were prepared and immunoprecipitated using a PPARγ-specific antibody. Expression of the indicated proteins was determined by western blot analysis using the respective antibodies (N = 2). (B) Knockdown (KD) of SHP-1 in NIH3T3 cells stably expressing PPARγ2-WT or PPARγ2-Y78F using lentiviral-mediated shRNA transduction. Expression of the indicated proteins was confirmed by western blot analysis (N = 2). (C–E) mRNA levels of Pparg2 (C), Fabp4 (D) and Cd36 (E) were determined by qPCR in NIH3T3 SHP-1-WT or -KD cells stably transduced with empty vector (Control), PPARγ2-WT or -Y78F. Cells were treated with either DMSO or 100 nM rosiglitazone for 16 h (N = 3). (F) NIH3T3 cells with or without knockdown of SHP-1, stably expressing either the control retroviral vector, PPARγ2-WT or -Y78F, were cultured to confluence and then treated with rosiglitazone (100 nM). On day 10, cells were fixed and lipid content was determined by oil red O staining. (G) Quantification of the oil red O staining from cells shown in Figure 5F. Oil red O-stained NIH3T3 cells were dissolved in isopropanol and intracellular lipid content was quantified by measuring absorbance at 500 nm (N = 3). (H) Model depicting the control of PPARγ2 activity by SHP-1. This figure was partly generated using Servier Medical Art (https://smart.servier.com), provided by Servier, licensed under a Creative Commons Attribution 3.0 unported license.
"year": 2024,
"sha1": "40bcdc7b6661aa78e2c7748400e3f18c18508c0b",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/10985549.2024.2354959?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7d189b7899399deb833f7b6570c6389ed8691d5c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Universities of Applied Sciences Fachhochschulen – Hautes Ecoles Spécialisées FH HES

On-line Monitoring and Control of Fed-batch Fermentations in Winemaking
The fermentation of yeast in fed-batch mode shows great potential in winemaking because it allows the concentration of sugars to be kept low and constant throughout the process which, in turn, reduces cell stress and leads to a significant decrease in the production of unwanted secondary metabolites. The implementation of this technique requires reliable on-line analysis of sugar and a robust control strategy to maintain sugar concentrations at defined levels over the course of the fermentation. In this study, a laboratory-scale setup was used to implement and assess a fully automated fed-batch fermentation of Saccharomyces cerevisiae in grape must. Total sugar levels were monitored in-line by FT-MIR ATR spectroscopy and kept constant at 50 g/kg by a modified PI controller regulating the must feed flow rate. Good setpoint tracking and disturbance rejection were achieved in fermentations of up to four days despite occasional yeast sedimentation on the ATR crystal. The controller parameter adaptation strategy needs to be optimized for longer fermentations.
Introduction
Winemaking traditionally involves batch fermentation processes in which yeast convert the fermentable sugars contained in grape must into alcohol. Major fermentation parameters (temperature, aeration, duration) have been optimized over the course of winemaking history, mostly empirically and based on the sensory evaluation of the final product. However, the quality and sensory attributes of wine are also greatly influenced by the composition of the grape must, which is subject to vintage variations. In recent years, climate change-associated acceleration of grape maturation has led to higher grape sugar concentrations. Excessive sugar levels in grape must cause a hyperosmotic stress response in yeast, leading to increased formation of undesired fermentation metabolites, such as acetic acid and acetaldehyde, during winemaking. [1] Hyperosmotic stress in yeast cells can be minimized by actively maintaining relatively low and constant sugar concentrations in the medium during the fermentation process. This can be achieved by implementing fed-batch fermentations, whereby must is added progressively into the fermentation tank at a rate that corresponds to the metabolic rate of the yeast. Typical sugar concentration profiles in batch and fed-batch modes are shown schematically in Fig. 1.
Frohman et al. [2] noted a significant increase in cell viability and substantially lower acetic acid and acetaldehyde levels in wines produced by the fed-batch technique as opposed to those obtained using the traditional batch mode. The authors suggested that this improvement was caused by reduced cellular stress and the reutilisation of acetic acid by the yeast. In a follow-up study, Pernet et al. [3] developed a strategy for the in-line monitoring of sugars and ethanol using near-infrared (NIR) spectroscopy. This approach was successfully tested during fed-batch wine fermentations that were automatically controlled to maintain a constant concentration of total sugars.
Besides the need for reliable on-line analysis of sugar levels, the implementation of a fully automated fed-batch system requires a robust control strategy capable of dealing with nonlinearly evolving process conditions (in this case, the exponential yeast growth). In this work, closed-loop control of fed-batch wine fermentations was implemented at the laboratory scale using a Fourier-transform mid-infrared (FTIR) spectrometer for the in-line monitoring of the total sugar concentration, and a variation of the PI controller in which the proportional and integral terms are adapted exponentially to account for the specific growth dynamics. [4] An exemplary experiment is shown and discussed in this paper.
Materials and Methods
The yeast used in this work, Saccharomyces cerevisiae strain DV10, is commercially available from Lallemand Inc. (Montreal, Canada). Grape must, originating from the Domaine de Montmollin (Auvernier, Switzerland), had a total fermentable sugar concentration of 230 g/l and was supplemented with 200 mg/l of diammonium hydrogen phosphate as nitrogen source. The inoculation rate was 0.07 g/l. The fermentations were carried out at 20 °C, with a short start-up batch phase in a small initial volume followed by the fed-batch phase. A process diagram of the experimental setup is shown in Fig. 2. Specifically, the must was maintained in a temperature-controlled tank at 5 °C and fed into the fermenter using a peristaltic pump. A second peristaltic pump was used to recirculate the fermentation broth through the FTIR's attenuated total reflectance (ATR) flowcell. Measurements were sent from the FTIR to the controller which, in turn, regulated the speed of the feed pump. An autosampler was used to withdraw broth samples through a three-way valve installed on the recirculation loop for off-line reference analysis. Reference analysis of sugars and ethanol was performed by chromatographic methods (GC and HPLC).
The FTIR spectrometer (PerkinElmer Spectrum One) was calibrated for the analysis of total sugars (glucose and fructose) and ethanol using standards prepared following a 7-level partial factorial design for multivariate calibration. [5] A partial least squares (PLS) model was built and evaluated using external validation. The standard errors of validation (SEV) were 4.5 g/l for total sugars and 2.8 g/l for ethanol.
The flow rate of the feed pump, F(t), was regulated by the following proportional-integral controller, in which the feed and the corrective terms are scaled exponentially:

F(t) = e^(µt) · [F0 + K_P·e(t) + K_I·∫0..t e(τ)dτ]   (1)

where F0 is the initial flow rate, µ is the specific growth rate of the yeast, K_P and K_I are the proportional and integral gains, respectively, and e(t) is the control error. The initial flow rate was calculated based on the specific growth rate and the concentration of yeast in the fermenter at the beginning of the fed-batch phase.
The values of the controller gains were tuned empirically.
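Discretized with a fixed sampling interval, the exponentially adapted PI law can be sketched as follows. The gain values, sampling interval and output clamping are illustrative assumptions, not the empirically tuned values used in the experiment:

```python
import math

class ExpAdaptedPI:
    """PI controller whose output is scaled by exp(mu*t), so the
    baseline feed tracks exponential yeast growth while the P and I
    terms correct deviations of the measured sugar concentration."""

    def __init__(self, f0, mu, kp, ki, dt):
        self.f0, self.mu, self.kp, self.ki, self.dt = f0, mu, kp, ki, dt
        self.t = 0.0          # elapsed fed-batch time
        self.integral = 0.0   # running integral of the control error

    def step(self, setpoint, measured):
        e = setpoint - measured
        self.integral += e * self.dt  # rectangle-rule integration
        flow = math.exp(self.mu * self.t) * (
            self.f0 + self.kp * e + self.ki * self.integral)
        self.t += self.dt
        return max(flow, 0.0)         # the feed pump cannot run backwards

# Hypothetical tuning: F0 = 2 ml/h, mu = 0.05 1/h, sampled every 0.1 h
ctrl = ExpAdaptedPI(f0=2.0, mu=0.05, kp=0.1, ki=0.02, dt=0.1)
print(ctrl.step(setpoint=50.0, measured=48.0))  # sugar concentrations in g/kg
```

With zero error, the output reduces to the pure exponential feed profile F0·e^(µt); the P and I terms only correct deviations from the setpoint, which is what distinguishes this law from a plain PI controller.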
Results and Discussion
A four-day fermentation was carried out, producing approximately 3 litres of wine. Fig. 3 shows the concentration profile of sugars during the fed-batch phase for a constant setpoint of 50 g/kg, as determined in-line by FTIR.
The system showed good setpoint tracking and recovery following three major disturbances. The first disturbance, at a fermentation time of 0.9 days, was caused by the replacement of the initial 250 ml fermentation tank with a larger one to accommodate the growing volume.
The following two disturbances, occurring after 1.5 and 2.6 days of fermentation time, were caused by the accumulation of yeast on the ATR window inside the flowcell. The yeast buildup provoked the formation of a small stagnant volume inside which the concentration of sugars dropped. The controller reacted to the false signal of declining sugar levels by increasing the feed flow rate of grape must, which resulted in the accumulation of sugars beyond the setpoint value. After detecting the problem, the recirculation loop was momentarily disconnected to allow the flowcell to be cleaned. As can be seen in Fig. 3, the excess sugar in the fermenter was subsequently consumed by the culture and control was re-established.
During the last day of the experiment, a static error of approximately 10% was noted. This was most likely due to the controller gains becoming ineffective with respect to the high biomass concentration towards the end of the fermentation.
Conclusions and Perspectives
The implementation of a fully automated fed-batch process of wine fermentation was achieved at the laboratory scale over a period of four days. In-line monitoring of total sugar levels was implemented by mid-infrared spectroscopy. The control strategy proved successful, but the results showed that care must be taken to avoid yeast buildup on the ATR crystal during fermentations. Future work will focus on resolving this issue, improving the robustness of the controller, and testing the approach over extended fermentation periods and on a larger scale.
The parameters of the controller should be tuned following a more systematic approach and adapted over the course of the fermentation in order to improve the controller's long-term effectiveness and robustness. Follow-up experiments will consider assessing cell viability, acetic acid and acetaldehyde concentrations, as well as the effect of the proposed methodology on the quality of the produced wine. Finally, for a successful practical implementation of the fed-batch control approach, a low-cost alternative for the measurement of glucose in the must should be developed. Selective sensor technology found in clinical instrumentation may be a promising approach.
Fig. 2. Process diagram of the experimental setup used for fed-batch wine fermentations.
Fig. 3. Profile of total sugar concentration during fed-batch fermentation of grape must.
Conformational Flexibility and Subunit Arrangement of the Modular Yeast Spt-Ada-Gcn5 Acetyltransferase Complex*
Background: The Saccharomyces cerevisiae Spt-Ada-Gcn5 acetyltransferase (SAGA) complex regulates transcription through chromatin modification and other mechanisms. Results: The overall structure of SAGA and the arrangement of all subunits within this complex were determined. Conclusion: SAGA is flexible and is composed of core modules that support peripheral catalytic modules. Significance: Understanding the structural mechanisms of SAGA multifunctionality improves the understanding of other chromatin-modifying complexes. The Spt-Ada-Gcn5 acetyltransferase (SAGA) complex is a highly conserved, 19-subunit histone acetyltransferase complex that activates transcription through acetylation and deubiquitination of nucleosomal histones in Saccharomyces cerevisiae. Because SAGA has been shown to display conformational variability, we applied gradient fixation to stabilize purified SAGA and systematically analyzed this flexibility using single-particle EM. Our two- and three-dimensional studies show that SAGA adopts three major conformations, and mutations of specific subunits affect the distribution among these. We also located the four functional modules of SAGA using electron microscopy-based labeling and transcriptional activator binding analyses and show that the acetyltransferase module is localized in the most mobile region of the complex. We further comprehensively mapped the subunit interconnectivity of SAGA using cross-linking mass spectrometry, revealing that the Spt and Taf subunits form the structural core of the complex. These results provide the necessary restraints for us to generate a model of the spatial arrangement of all SAGA subunits. According to this model, the chromatin-binding domains of SAGA are all clustered in one face of the complex that is highly flexible. Our results relate information of overall SAGA structure with detailed subunit level interactions, improving our understanding of its architecture and flexibility.
Transcription is a highly regulated process involving the stepwise recruitment of factors to the site of transcription to facilitate RNA polymerase II activity (1). In eukaryotic cells, DNA is packaged with nucleosomes, an octameric complex of histone proteins, to form chromatin. Chromatin serves as a steric barrier against transcription, and various post-translational modifications of histones play important roles in regulating both the chromatin landscape and the recruitment of transcription factors. Histone acetylation, a modification that correlates with an "open" chromatin conformation and increased transcription, is mediated by several multisubunit histone acetyltransferase (HAT) 2 complexes (2). These HAT complexes are often recruited to specific genes by DNA-binding transcriptional activator proteins such as Gal4 and Gcn4 in yeast Saccharomyces cerevisiae (3).
The Spt-Ada-Gcn5 Acetyltransferase (SAGA) complex is a highly conserved HAT complex that activates the transcription of stress response genes in yeast (4). As the largest HAT complex, SAGA consists of 19 core subunits that associate into a stable assembly of ~1.8 MDa in overall mass, with Gcn5 serving as the catalytic subunit for acetylating histone H3 (Table 1) (5). The other subunits of SAGA confer additional functionalities to the complex. These subunits are functionally organized into four distinct modules: the HAT module, the deubiquitination (DUB) module, the SPT module, and the TAF module (6). Within the DUB module, Ubp8 catalyzes the deubiquitination of histone H2B, an important step in the progression of transcription activation (7). SPT module subunits Spt3 and Spt8 enable SAGA to bind TATA-binding protein (TBP), an important general transcription factor in the formation of the transcriptional preinitiation complexes (8), and regulate transcription of different genes (9,10). Tra1, another SPT module subunit, is targeted by different transcriptional activators to recruit SAGA to certain genes (3). The TAF module contains five histone fold-containing Taf proteins, shared with the TFIID general transcription factor complex, that serve important roles in maintaining the integrity of the complex (11).
Several studies have sought to elucidate the structural features of SAGA. Single-particle EM analysis by Wu et al. (12) provided the first step toward understanding the overall structure of this complex by generating the first three-dimensional reconstruction of SAGA and localizing nine core subunits using antibody labeling techniques. Two-dimensional analysis in this study also revealed that a subpopulation of SAGA particles possesses an additional region of density, which can adopt different conformations. A recent study mapping the DUB module to an EM structure of SAGA also observed this subpopulation (13). More recently, an EM study on human TFIID showed that this complex undergoes massive rearrangement that alters both the position and the connectivity of an entire lobe (14). Even though SAGA shares multiple core subunits with TFIID, whether the observed structural flexibility is a shared property of these complexes is not known. Furthermore, at the time of the first EM study, the H2B deubiquitination activity of SAGA and the subunits associated with this catalytic activity had not been identified (7). Finally, apart from the DUB module, there is a dearth of high resolution structural information for the other SAGA subunits, rendering our structural understanding of the complex incomplete.
By developing a modified purification strategy that enhances the stability of SAGA, we uncovered the remarkable conformational flexibility of this complex using single-particle EM. Systematic subunit deletion and mutation approaches enabled us to further dissect the role of the different modules in mediating structural rearrangement. By combining established EM-based labeling methods with chemical cross-linking of proteins coupled with mass spectrometry analysis (CXMS), we mapped and validated the spatial location of all core components of SAGA, including subunits of the DUB module. Collectively, our data enabled us to generate a model for the molecular organization of SAGA and to gain insights into the physiological relevance of its conformational flexibility.
Purification of Native SAGA-Native SAGA was purified by a traditional TAP purification (12,15) or a modified version substituting the calmodulin binding step with GraFix (18). In particular, TAP-tagged yeast cells were grown to an A600 of ~2.5, harvested by centrifugation, and the cell pellets were frozen at −80°C. 20–25 g of frozen cells were resuspended in ~80 ml of lysis buffer (40 mM HEPES, pH 7.4, 350 mM NaCl, 10% glycerol, 0.1% Tween 20, 1 mM PMSF, 50 mM NaF, 0.1 mM Na3VO4, 2 mM benzamidine, and EDTA-free protease inhibitor (Roche)) and lysed by bead beating. The lysate was then centrifuged at 30,000 × g for 30 min. Clarified lysates were incubated with 500 μl of IgG-Sepharose (GE Healthcare) at 4°C for 1.5 h. The IgG resin was washed with IPP150 buffer (40 mM HEPES, pH 7.4, 150 mM NaCl, 0.2% Nonidet P-40, and 10% glycerol) and resuspended in 750 μl of TEV-C buffer (40 mM HEPES, pH 7.4, 150 mM NaCl, 0.1% Nonidet P-40, 0.5 mM EDTA, 10% glycerol, and 1 mM DTT). Bound proteins were eluted by tobacco etch virus protease cleavage at 16°C for 1.5 h. 2 μl of 10 mg/ml RNase A was added to 500 μl of the eluate and incubated on ice for 30 min. 2 × 200 μl of the eluate were overlaid on two linear 15–30% glycerol gradients, with one containing 0.00–0.05% glutaraldehyde cross-linker prepared using the Gradient Station (Biocomp). The gradients were spun at 58,800 × g for 16.5 h at 4°C. The gradients were then fractionated using the Gradient Station. Fractions from gradients without cross-linker were TCA-precipitated for silver stain SDS-PAGE analysis, and corresponding fractions with cross-linker, concentrated using 100,000 molecular mass cutoff concentrators (Millipore) if necessary, were used for further EM analysis. For antibody binding and cross-linking mass spectrometry analyses, SAGA was purified using anti-FLAG affinity chromatography. 3×FLAG-tagged Spt7 yeast cell pellets were obtained as before.
Frozen pellets were pooled and ground using a freezer mill (SPEX SamplePrep 6870) under liquid nitrogen temperatures. 35–40 ml of the finely ground cell lysate was resuspended in ~80 ml of lysis buffer. The lysate was centrifuged at 30,000 × g for 30 min. Clarified lysates were incubated with 500 μl of α-FLAG M2 resin (Sigma) for 1 h. The resin was washed three times with 5 ml of lysis buffer without inhibitors and once with 5 ml of reduced salt (150 mM) lysis buffer without inhibitors. The resin was resuspended in 1 ml of reduced salt lysis buffer containing 2.5 μg/ml RNase A and incubated at 4°C for 30 min. The resin was washed twice with 5 ml of reduced salt lysis buffer. Bound proteins were eluted twice in 500 μl of reduced salt lysis buffer without inhibitors and 0.5 mg/ml 3×FLAG peptides.
Electron Microscopy-Negatively stained specimens for two-dimensional analysis were prepared from purified SAGA as described previously (19). Samples for three-dimensional analysis were prepared using the carbon sandwich negative staining technique to improve stain embedding and to minimize sample flattening (19). Samples were visualized using a Tecnai Spirit transmission electron microscope (FEI) operated at an accelerating voltage of 120 kV. Images were taken at a nominal magnification of 49,000× using an FEI Eagle 4K charge-coupled device camera at a defocus value of −1.2 μm under low dose conditions. For tilt pair data collection, the same parameters were used, and two images, one at 60° tilt and one untilted, were taken from the same specimen area. 2 × 2 image pixels were then averaged for a final pixel size of 4.7 Å.
Image Processing-For two-dimensional analysis, individual particle images were interactively selected using Boxer (20). The selected particles were then windowed into 128 × 128-pixel images, rotationally and translationally aligned, and subjected to K-means classification to generate class averages using the SPIDER image processing suite (21). For GFP tagging analysis, we used a preliminary round of unsupervised classification and averaging to visualize regions with possible additional density. These averages were used to create masks that focused on the vicinity of the additional density, and the areas within the masks were used to reclassify the input particle images. Averages obtained from this method were compared with untagged SAGA of a similar conformation via subtraction analysis. Images of untagged SAGA were subtracted from images of tagged SAGA, and the resulting difference image was thresholded to find signals that were either 3 or 4 standard deviations from the mean pixel value.
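The subtraction analysis described above (a difference image thresholded at 3 or 4 standard deviations from the mean) can be illustrated with a short NumPy sketch. The image size and intensities here are synthetic stand-ins for real class averages.

```python
# Sketch of the difference-image analysis: subtract the untagged class
# average from the GFP-tagged one and keep only pixels more than k standard
# deviations above the mean of the difference image.
import numpy as np

def difference_signal(tagged, untagged, k=3.0):
    diff = tagged - untagged
    thresh = diff.mean() + k * diff.std()
    return diff > thresh          # boolean mask of putative GFP density

rng = np.random.default_rng(2)
untagged = rng.normal(0.0, 1.0, size=(128, 128))
tagged = untagged.copy()
tagged[60:68, 60:68] += 10.0      # simulated extra density from a GFP tag
mask = difference_signal(tagged, untagged, k=4.0)
print(mask.sum())                 # the added 8x8 patch stands out -> 64
```

On real data the two averages would first be aligned to the same orientation, and the surviving pixels would mark the location of the GFP tag.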
For determining the de novo three-dimensional reconstructions of SAGA, 22,518 pairs of particle images were first selected using WEB (21). Particles in the untilted set were windowed into 128 × 128-pixel images, aligned, and classified into 50 classes using SPIDER. Two class averages that correspond to each of the three conformations were merged. Three-dimensional reconstructions were generated from the tilted particles of these combined averages using the backprojection and angular refinement algorithms in SPIDER. The final resolutions of the three-dimensional models were estimated by the Fourier shell correlation function using the 0.5 Fourier shell correlation criterion. The curved, arched, and donut reconstructions had resolutions of 41.7, 45.3, and 38.9 Å, respectively. Molecular docking analysis and model construction was performed using UCSF Chimera (22).
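The resolution estimate referred to above can be illustrated by a minimal Fourier shell correlation computation. This is a generic sketch, not SPIDER's implementation, and the test volume here is synthetic.

```python
# Sketch of a Fourier shell correlation (FSC) calculation: correlate two
# reconstructions shell by shell in Fourier space; the resolution is read
# off where the curve drops below the chosen criterion (0.5 in the text).
import numpy as np

def fsc(vol1, vol2):
    F1, F2 = np.fft.fftn(vol1), np.fft.fftn(vol2)
    n = vol1.shape[0]
    freq = np.fft.fftfreq(n)
    grids = np.meshgrid(freq, freq, freq, indexing="ij")
    r = np.sqrt(sum(g ** 2 for g in grids))       # radial spatial frequency
    shells = np.round(r * n).astype(int)          # integer shell index
    curve = []
    for s in range(1, n // 2):
        m = shells == s
        num = np.real(np.sum(F1[m] * np.conj(F2[m])))
        den = np.sqrt(np.sum(np.abs(F1[m]) ** 2) * np.sum(np.abs(F2[m]) ** 2))
        curve.append(num / den)
    return np.array(curve)

rng = np.random.default_rng(1)
vol = rng.normal(size=(32, 32, 32))
curve = fsc(vol, vol)
# Identical volumes correlate perfectly in every shell.
print(np.allclose(curve, 1.0))
```

For real half-maps, the shell index at which the curve first falls below 0.5 would be converted to a resolution in Å using the 4.7 Å pixel size quoted in the Methods.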
Conformation Population Analysis-Conformation population analysis similar to a previous study was conducted (23). Particle measurements were done using ImageJ (24). 100 class averages from the Spt7-TAP wild type and sgf73Δ strains were used. Only averages with unambiguous outlines corresponding to SAGA were analyzed. The length of the cleft formed in the arched conformation of SAGA and the shortest distance between the distal end of the tail and the shoulder were measured. Three independent measurements were made, and the averages were used to create combined scatter and bar plots.
Antibody Labeling of SAGA-Non-cross-linked SAGA purified from the Sgf73-TAP-tagged strain was incubated with 10 μg/ml α-TAP antibody (Thermo Scientific) at room temperature for 10 min and used for single-particle EM analysis. Particles with bound antibodies were selected for two-dimensional analysis as detailed above.
Chemical Cross-linking and Mass Spectrometry Analysis-FLAG-purified SAGA was concentrated from 2 ml to ~100 μl using 100,000 molecular mass cutoff concentrators (Millipore) and cross-linked with disuccinimidyl suberate as described (25), precipitated with 4 volumes of cold acetone, washed once with cold acetone, air-dried, and then dissolved in 8 M urea, 100 mM Tris, pH 8.5. After 5 mM tris-(2-carboxyethyl)phosphine reduction and 10 mM iodoacetic acid alkylation, the samples were digested with Lys-C at a 1:100 enzyme:substrate ratio at 37°C for 4 h. The samples were 4-fold diluted with 100 mM Tris, pH 8.5, before they were digested with trypsin (1:50, enzyme:substrate) at 37°C. After 12 h, formic acid was added to a final concentration of 5% to stop digestion. The samples were cleared by centrifugation at 14,000 rpm for 10 min and desalted with a homemade 250-μm × 1-cm C18 reverse phase column. Desalted peptides were loaded onto a 75-μm × 10-cm analytical column packed with 1.8 μm, 120 Å UHPLC-XB-C18 resin (Welch Materials Inc.) and separated over a 107-min linear gradient from 100% buffer A (0.1% formic acid) to 30% buffer B (100% acetonitrile, 0.1% formic acid), then a 10-min gradient from 30 to 80% buffer B, maintaining at 80% buffer B for 6 min before returning to 100% buffer A in 5 min and ending with a 9-min 100% buffer A wash. The flow rate was 200 nl/min. The Easy-nLC 1000 UPLC was coupled to a Q Exactive mass spectrometer (Thermo-Fisher Scientific). The MS parameters were as follows: the top 20 most intense ions in a survey full scan were selected for MS2 by HCD dissociation; r = 140,000 in full scan, r = 17,500 in HCD scan; AGC targets were 1e6 for FTMS full scan, 5e4 for MS2; minimal signal threshold for MS2 = 4e4; precursors of charge states +1, +2, >+8, or unassigned were excluded; normalized collision energy = 30 for HCD; peptide match was preferred.
Expression and Purification of Recombinant Transcriptional Activators-Coding regions of Gcn4, TBP, and the Gal4 activation domain (residues 768–881) were PCR-amplified from yeast genomic DNA (26). The PCR products were cloned into the NdeI/EcoRI sites of the pET28b-HMT vector. The BL21* (Life Technologies) Escherichia coli expression strain transformed with the resulting constructs was grown to an A600 of 0.5 and induced with 1 mM isopropyl β-D-thiogalactopyranoside for either 3 h at 37°C or overnight at 16°C. The bacteria were then harvested by centrifugation and stored at −80°C. For each purification, a frozen cell pellet was resuspended at 10 ml/g in lysis buffer (40 mM HEPES, pH 8.0, 500 mM NaCl, and 2 mM PMSF). The cells were lysed by sonication, and the lysate was clarified by centrifugation at 30,000 × g. The clarified lysates were then incubated with 500 μl of nickel-nitrilotriacetic acid-Sepharose (Thermo Scientific) for 30 min at 4°C. The resin was washed three times with 5 ml of lysis buffer and then twice with 5 ml of lysis buffer with 50 mM imidazole. Bound proteins were eluted in five rounds, using 1 ml of lysis buffer with 250 mM imidazole. Fractions containing the desired protein were concentrated using 10,000 molecular mass cutoff concentrators (Millipore) and further purified by gel filtration chromatography (GE Healthcare). Peak fractions containing pure activator proteins were flash frozen in liquid nitrogen and stored at −80°C.
Activator Pulldown Experiments and Western Blotting-Approximately 50 ng/μl of FLAG-purified SAGA was mixed with 300 μg/ml HMP and HMP-tagged Gcn4, Gal4AD, and TBP in binding buffer (40 mM HEPES, pH 7.4, 150 mM NaCl, 10% glycerol, 0.1% Tween 20, 0.5 mM DTT, and 1 mM PMSF). The mixture was incubated with 50 μl of amylose resin (New England Biolabs) for 30 min at 4°C. The resin was collected using centrifugal spin columns and washed twice with 200 μl of binding buffer. Bound proteins were eluted with 200 μl of binding buffer with 100 mM maltose. Eluates were then analyzed by SDS-PAGE and Western blot using the following antibodies: mouse α-FLAG antibody (Sigma), mouse α-His antibody (ABM), and α-mouse HRP antibody (Sigma).
Activator Binding Localization Experiments-FLAG- and glycerol gradient-purified SAGA was mixed with 4–8 μg/ml of purified activators and incubated at room temperature for 30 min before negative-stained EM specimens were prepared.
Improved Procedure for Isolating Native S. cerevisiae SAGA-
The TAP procedure is an established method for isolating native SAGA from yeast containing genomically tagged SPT7-TAP (12,15). We combined the first part of TAP with the GraFix technique, which involves subjecting the complex to limited glutaraldehyde cross-linking during glycerol gradient ultracentrifugation (18). GraFix has been shown to increase EM image quality and particle homogeneity. To ensure that SAGA is amenable to glycerol gradient ultracentrifugation, we analyzed fractions from a corresponding glycerol gradient lacking glutaraldehyde by SDS-PAGE. SAGA migrates to a single fraction with minimal contaminants (Fig. 1A). Subsequent MS analysis confirmed that this fraction contains all 19 core SAGA subunits (Table 3).
We next examined the purified SAGA using negative stain electron microscopy. We observed particles of similar size and shape as those observed by Wu et al. (12) by the conventional two-step TAP method and by non-cross-linked glycerol gradient ultracentrifugation. However, the samples from the TAP purification and glycerol gradient purification showed a high degree of subunit dissociation (Fig. 1B). Meanwhile, samples that underwent GraFix treatment not only displayed a significantly reduced level of sample heterogeneity but also preserved fine structural features of individual SAGA particles (Fig. 1B).

Fig. 1. A, SDS-PAGE analysis of yeast SAGA purified from IgG-Sepharose and 15–30% glycerol gradient ultracentrifugation. Protein bands were visualized by silver staining, and the inset on the right represents the fraction that was used for EM and mass spectrometric analysis (Table 3). B, a representative raw image of negatively stained TAP-purified or GraFix-purified SAGA. C, representative two-dimensional class averages of SAGA purified by TAP, glycerol gradient ultracentrifugation without cross-linker, and GraFix. The three GraFix class averages correspond to the three observed conformations. The side length of every class average panel is 60 nm. MW, molecular mass.

SAGA Adopts Three Distinct Conformations-To gain further insights into the structural properties of SAGA, we applied a two-dimensional single-particle EM approach that involves classifying manually selected particles according to similarities in overall morphology and aligning and calculating an average image of the particles constituting each class. The class averages of TAP-purified and glycerol gradient-purified SAGA without cross-linker were practically identical to each other and to previously published images (Fig. 1C) (12). However, class averages of SAGA purified using GraFix showed improved image quality and better resolved features (Fig. 1C). These class averages, calculated from 7,753 GraFix-purified particles, showed that SAGA consists of a prominent globular "head" and a long and slender "tail" separated by a "torso" region. The torso region can be further subdivided into two halves: a "joint" that is connected to the tail, and the "shoulder" that does not make a direct connection with the tail. Interestingly, the prominent extended tail that we observed is only found in a small population of particles in the previous EM analysis of SAGA, whereas we observed the tail in 91% of our particles. We attributed this discrepancy to the fact that the tail portion of SAGA is more labile and has a tendency to dissociate from the complex upon purification and/or during the negative staining specimen preparation procedure.
Most strikingly, our analysis revealed that the head, torso, and tail regions of SAGA are all conformationally flexible. In particular, the tail region can curve and sample a broad range of space. The coordinated movement of multiple densities within SAGA is remarkable because of the large distances covered; the tip of the SAGA tail traverses over 50 Å between different conformations. From the gallery of class averages, three major types of conformations could be distinguished based on the arrangement of the mobile tail region with respect to the rest of the complex (Fig. 1C). In the "donut" conformation, the tail curls up with its tip pointing toward the shoulder of the torso to generate an almost complete circular structure at the bottom half of the complex. In the "arched" conformation, the tail retracts from the shoulder to generate a kink and a pronounced deep cleft at the back of the torso. In the "curved" conformation, the tail adopts a gentle curvature with its tip projected away from the shoulder. Interestingly, the different tail arrangements are accompanied by changes in the morphology of the SAGA head and shoulder regions. We observed multiple class averages that would fall into intermediate states among the three major conformations. Collectively, our two-dimensional analysis suggests that SAGA is structurally dynamic, and its conformational changes involve coordinated movements and rearrangements of different subunits and modules within the complex.
Removal of Key Subunits Affects the Conformational Flexibility of SAGA-We next examined the effects of subunit and module deletions on the conformational plasticity of SAGA. Previous studies have shown that deleting the ADA2 gene dislodges the HAT module from SAGA while leaving the rest of the complex intact (6). Our mass spectrometry analysis confirmed this finding by showing that subunits of the HAT module are absent in SAGA isolated from the ada2Δ yeast strain (Table 3). Subsequent EM analysis of SAGA devoid of Ada2 revealed that the tail region is severely shortened in all class averages, suggesting that the HAT module likely constitutes a distal segment of the tail (Fig. 2B). Despite the reduced size of the tail region, the absence of the HAT module does not diminish the conformational flexibility of SAGA. In fact, the shoulder region of this mutant SAGA shows even greater mobility, translocating away from the head and toward the tail. Thus, although the HAT module does not influence the ability of SAGA to adopt different conformations, its absence induces additional structural variability in other regions of SAGA. Deletion of the SPT8 gene results in a loss of density in the shoulder region, but the mutation does not affect the conformational flexibility of the tail (Fig. 2C). Spt8 likely comprises a significant portion of the shoulder region and is a peripheral subunit because its absence does not dramatically alter the rest of this complex. Deletion of the SGF73 gene has been previously shown to dissociate the DUB module from SAGA (6). Although we confirmed that SAGA isolated from the sgf73Δ strain lacks the DUB module, the two-dimensional class averages of this mutant complex show no apparent loss of density (Fig. 2D and Table 3). We did observe an increased number of particles with the HAT module absent, suggesting that SGF73 deletion modestly destabilizes the complex.
A centrally located DUB module would render the loss of density more difficult to observe, explaining our observation. Intriguingly, the sgf73Δ SAGA mutant appears to have a lower propensity than wild type SAGA to adopt the donut conformation. To more accurately assess the shift in occupancy of the different conformational states between wild type SAGA and the sgf73Δ mutant complex, we defined the three major conformations based on two measured parameters: the shoulder-to-tail distance and the cleft length (Fig. 2D). More specifically, we defined the donut conformation to have a shoulder-to-tail distance under 20 Å and the arched conformation to have a cleft length greater than 15 Å, with precedence given to the former. Based on this analysis, we found that 28% of wild type SAGA adopted the donut conformation, compared with only 4% of the sgf73Δ mutant complex. The DUB module is therefore necessary for SAGA to efficiently adopt the donut conformation, either by stabilizing the rearrangements involved in the movement of the tail or physically mediating the connection between the shoulder and the tail. Our results show that different SAGA modules contribute to the flexibility of the complex in varying degrees.
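The stated decision rule can be written as a small classifier. Only the thresholds (shoulder-to-tail distance under 20 Å for donut, cleft length over 15 Å for arched, with donut taking precedence) come from the text; the measurement values below are invented for illustration.

```python
# Sketch of the conformation-assignment rule described in the text:
# donut if shoulder-to-tail distance < 20 A (takes precedence),
# arched if cleft length > 15 A, curved otherwise.
def classify(shoulder_tail_dist, cleft_len):
    if shoulder_tail_dist < 20.0:
        return "donut"
    if cleft_len > 15.0:
        return "arched"
    return "curved"

# Hypothetical (distance, cleft) measurements in Angstroms for four averages.
measurements = [(12.0, 5.0), (35.0, 22.0), (30.0, 8.0), (15.0, 18.0)]
counts = {}
for d, c in measurements:
    label = classify(d, c)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # -> {'donut': 2, 'arched': 1, 'curved': 1}
```

Applied to the real per-average ImageJ measurements, the resulting counts give the occupancy fractions (e.g. 28% donut for wild type versus 4% for sgf73Δ) reported above.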
Three-dimensional Reconstructions of SAGA in Three Conformations-The intrinsic heterogeneity of SAGA and the low yields from our endogenous purification procedure precluded high resolution cryo-EM analysis. Instead, we generated de novo three-dimensional reconstructions of SAGA using the random conical tilt approach (28). This approach was chosen because SAGA adopts a preferred orientation on the carbon support layer of the EM grids, precluding the use of common line techniques that require comprehensive angular representation. We selected tilted particles corresponding to the curved, arched, and donut conformations and used them to calculate three-dimensional reconstructions. Despite the intrinsic flexibility of the complex, we were able to visualize the structural rearrangements associated with the conformational shifts (Fig. 3, A-C). Notably, we observed that the tail undergoes a large degree of rearrangement between the three conformations. The arched conformation showed disconnected density in the middle of the tail, an observation that is indicative of a particularly heterogeneous region. Furthermore, the shoulder and its adjacent head region also displayed substantial rearrangements, with multiple shifting densities. In the donut conformation, the shoulder region splits into two separate densities that shifted away from each other. Despite the limited resolution, our three-dimensional analysis demonstrated that the transition between the three conformations requires large scale structural rearrangement of the subunits within the complex. A recent study by Durand et al. (13) also generated a three-dimensional reconstruction of SAGA purified without the use of GraFix. The overall configuration of the EM maps of Durand et al. corresponds most closely with our SAGA donut reconstruction (Fig. 3D), although the precise distribution of densities varies slightly between the two.
The different cross-linking method employed by their study may cause different conformations to be stabilized, resulting in the dissimilarities between the reconstructions.
Tra1 Occupies a Substantial Portion of the Head Region of SAGA-Upon examination of the SAGA three-dimensional reconstructions, we sought to evaluate the proposal by Wu et al. (12) that the Tra1 subunit resides within one half of the head region. Further evidence of this localization comes from the NuA4 HAT complex EM structure, which bears a striking resemblance to the head region of SAGA, while sharing only the Tra1 subunit (29). At 400 kDa, Tra1 is the largest subunit of SAGA and is thought to be responsible for recruitment of this complex to its target genes (3). Tra1 is a pseudokinase that belongs to the phosphatidylinositol 3-kinase-related kinase family of extraordinarily large protein kinases, whose members share a common domain organization: extensive tandem HEAT repeats at its N-terminal region, followed by the FAT, kinase, and FATC domains at the C-terminal region (30). Although there is no high resolution structural data available for Tra1, the crystal structure of the 1,174 C-terminal residues of mTOR, a phosphatidylinositol 3-kinase-related kinase protein that shares a high degree of secondary structure similarity to the predicted C-terminal domain of Tra1, has been reported (31). We used this crystal structure to evaluate the proposed location of Tra1. We found that even at less than one-third the size of full-length Tra1, the mTOR C-terminal domain was too large to fit within the region proposed by Wu et al. (Ref. 12 and Fig. 3D). Furthermore, a low resolution crystal structure of another phosphatidylinositol 3-kinase-related kinase protein, DNA-PKcs, shows a globular region with slender projections that form a ring (32), reminiscent of the arrangement of electron density within the SAGA head. 
Although our three-dimensional reconstructions are of insufficient resolution for further computational docking analysis, we believe that based on size alone, these comparisons demonstrate that Tra1 likely occupies a large proportion of both lobes of the prominent head region (Fig. 3E).
Comprehensive EM-based Mapping of Subunit Localization-Using an antibody-based approach, Wu et al. (12) deduced the positions of several subunits within SAGA. To validate these subunit locations in light of our ability to visualize SAGA with a fully extended tail and to further expand this analysis, we applied a proven labeling approach that involves introducing C-terminal GFP tags to different SAGA subunits (33). We purified the corresponding GFP-tagged SAGA complexes and located the additional electron density introduced by GFP by negative stain two-dimensional EM method.
Our earlier ADA2 deletion experiment suggested that the HAT module makes up the tail region of SAGA. In agreement with this result, SAGA containing GFP-tagged Ada2, Gcn5, or Sgf29 all display additional density centered about the tail region (Fig. 4A). Our analysis showed that Gcn5 localizes to the tip of the tail region, whereas Wu et al. (12) localized this subunit within the shoulder region. This discrepancy may be explained by the instability and potential dissociation of the tail region of non-cross-linked SAGA. The localization of the HAT activity of SAGA to the most mobile region of the complex provides a tantalizing explanation for its ability to act on a wide range of chromatin templates.
We next investigated the localization of the TAF module subunits Taf5, Taf9, and Taf10. SAGA complexes with a GFP tag fused to each of these three subunits displayed additional electron density near the torso joint region (Fig. 4B). Although SAGA is thought to contain two copies of Taf5, Taf6, Taf9, and Taf12, we did not observe two unambiguous densities for GFP-tagged Taf5 and Taf9 (12). This is likely due to a central location of the second copy of each protein, where other protein density can obstruct the visualization of the second GFP density. The TAF module therefore resides within the torso region of SAGA. Because SAGA containing a truncated tail still displays extensive flexibility, the central location of the TAF module supports our proposal that the shared TFIID subunits mediate a large degree of the conformational changes of the complex.
We next targeted the Spt3, Spt8, and Spt20 subunits of the SPT module. Consistent with our truncated Spt7 findings, the corresponding GFP-tagged SAGA complexes all displayed an additional density near the shoulder region and central torso of SAGA (Fig. 4C). The flexibility of the shoulder region, where both Spt3 and Spt8 reside, may be necessary to accommodate TBP binding and release from these subunits. Based on the size and number of the SPT module subunits, some of these proteins likely occupy the torso region near the TAF module subunits. Previous EM studies placed Spt20 on the opposite end of SAGA from Tra1 (12), a location that disagrees with our central localization. However, our observation is consistent with the subunit depletion study of Lee et al. (6) that proposed Tra1 and Spt20 to be in close proximity to each other.
When we applied the same experimental approach to map the location of subunits of the DUB module, we were unable to find any additional density clearly attributable to GFP. Furthermore, we were also unable to unambiguously fit the published crystal structure of the DUB module into the three SAGA three-dimensional reconstructions because of their low resolution (34, 35). As an alternative approach, we applied an antibody labeling method that involves incubating SAGA purified from an SGF73-TAP strain with the anti-TAP antibody. The large size and characteristic shape of antibodies are more clearly distinguishable compared with GFP. We identified a large additional density corresponding to the antibody adjacent to the torso, near the proposed TAF module location (Fig. 4D), suggesting that the DUB module shares this region with both the TAF module and parts of the SPT module. The central location of the DUB module adjacent to the conformationally flexible TAF core is consistent with its role in facilitating the donut conformation. We summarized the results of our localization studies in Fig. 4E and divided SAGA into regions where each module is likely located. Cross-linking Mass Spectrometry Analysis of apo-SAGA-Our EM-based localization studies enabled us to determine the spatial relationship among the different modules of SAGA. However, the relatively low resolution of these experiments precluded further understanding of the molecular organization of SAGA. Chemical cross-linking combined with mass spectrometry (CXMS) is a powerful technique that can deduce the subunit connectivity of multiprotein complexes with precision to the amino acid residue level (25). Notably, two peptides of different subunits can be cross-linked only when they are located on or adjacent to the interface between the two proteins. We applied CXMS to comprehensively map the various subunit interfaces of the SAGA complex.
We incubated SAGA purified from a FLAG-tagged Spt7 strain with disuccinimidyl suberate, a bifunctional cross-linker that reacts with primary amines, trypsin-digested the complex, and analyzed the resulting peptides using liquid chromatography-tandem mass spectrometry (LC-MS/MS). We searched the MS/MS data using the program pLink (25) and identified 78 unique intersubunit and 185 unique intrasubunit cross-links (supplemental Table S1). Our cross-linking results are represented in Fig. 5, emphasizing the interconnectivity between modules. Very recently, Han et al. (36) applied the CXMS approach to analyze the molecular architecture of SAGA in complex with TBP. In addition to confirming many cross-links that they identified, we were able to find interlinks involving Sgf11 and Sus1. This is likely due to better preservation of zinc finger domains in the DUB module by the FLAG affinity purification procedure in the absence of the chelating agent EGTA. Furthermore, we found unique cross-links connecting Tra1 to Spt7 and Taf6. We validated our CXMS results using available high resolution structures of SAGA subunits, combined with the ~30 Å theoretical Cα-Cα cross-linking distance that disuccinimidyl suberate is capable of bridging (34, 37-40). We found that 20 of 21 cross-linked residue pairs were within 30 Å of each other, providing a high degree of confidence for the cross-links detected (supplemental Table S1).
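The distance-validation step described above can be sketched as a short script: given Cα coordinates for a cross-linked residue pair, flag any pair whose separation exceeds the ~30 Å span of disuccinimidyl suberate. The residue identifiers and coordinates below are invented for illustration; in practice they would be read from the deposited structures of the SAGA subunits.

```python
import math

# Hypothetical C-alpha coordinates (Angstroms) for cross-linked residue
# pairs; real values would come from high resolution subunit structures.
crosslinks = [
    ("Taf5:K120", (12.1, 3.4, -7.8), "Taf6:K88", (20.5, 9.2, -3.1)),
    ("Spt7:K45", (0.0, 0.0, 0.0), "Taf10:K17", (28.0, 12.0, 5.0)),
]

MAX_DSS_SPAN = 30.0  # theoretical C-alpha to C-alpha span of DSS (Angstroms)

def ca_distance(a, b):
    """Euclidean distance between two C-alpha coordinates."""
    return math.dist(a, b)

for res1, xyz1, res2, xyz2 in crosslinks:
    d = ca_distance(xyz1, xyz2)
    status = "OK" if d <= MAX_DSS_SPAN else "VIOLATION"
    print(f"{res1} - {res2}: {d:.1f} A [{status}]")
```

Pairs flagged as violations would either indicate a false-positive identification or, as in the one outlier reported above, a conformation differing from the crystallized state.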
Results from our CXMS analysis suggest that the TAF module in combination with the SPT subunits Spt7, Spt20, and Tra1 form a central core containing highly interconnected subunits, with the remaining subunits peripherally attached to this core. These findings are consistent with our EM-based analysis of deletion mutants and GFP localization, which suggests that the TAF and SPT modules are centrally located within the complex. Interestingly, although almost all of the SPT module subunits cross-link to members of the TAF module and Tra1, these subunits appear to constitute two distinct groups because Spt7 and Spt8 do not cross-link to Spt3 or Spt20. This suggests that Spt7-Spt8 and Spt3-Spt20 are present on separate faces of SAGA, sandwiching the TAF module. On the upper edge of this "sandwich," Tra1 makes contact with the two separate groups of SPT subunits, whereas the HAT module similarly bridges the two groups on the opposite edge. Meanwhile, aside from Sgf73, no cross-links were detected between the DUB module subunits and the rest of SAGA, suggesting that Ubp8, Sgf11, and Sus1 face outwards from the complex. Sgf73, which anchors the DUB module to SAGA, cross-links to Spt20 and Taf5, suggesting that it is situated on the Spt3-Spt20 face of the SPT-TAF-SPT sandwich.
Interaction Interface with Transcriptional Activators-Using CXMS, we generated a detailed linkage map of all 19 SAGA subunits. Armed with this information, we sought to probe the functional interfaces of SAGA through investigating the binding locations of transcriptional activators. We purified recombinant His-maltose-binding protein (HMP)-tagged Gcn4, TBP, and the Gal4 activation domain (Gal4AD), and showed, by pulldown experiments, that these recombinant proteins bound FLAG-purified SAGA (Fig. 6A). We analyzed purified SAGA bound to the HMP-tagged activators by negative stain EM and observed that SAGA bound to Gcn4 or Gal4-AD contains additional electron densities near the globular half of the head region of SAGA near the head-torso junction (Fig. 6B). Because Gcn4 and Gal4 are known to bind Tra1 (3), this observation is consistent with our proposal that Tra1 spans both halves of the SAGA head region. We believe that the slight difference in position of the extra density between Gcn4 and Gal4 can be attributed to the flexibility of the linker between the activator and the HMP tag, as opposed to binding two different sites on Tra1. Meanwhile, HMP-tagged TBP bound SAGA contains additional electron density near the shoulder region where we proposed that its binding partners, Spt3 and Spt8, are located. In contrast to the globular head region, this region undergoes a large degree of conformational rearrangement. Because the transcriptional effects of Spt3 and Spt8 binding to TBP are chromatin context-specific, the flexibility of the region may facilitate this modulation of activity. Taken together, the results from our activator binding experiments further reinforce our proposed organization of the SAGA subunits as well as revealing new possibilities in the physiological role of the flexibility of the complex.
Model of SAGA Subunit Arrangement in the Context of the EM map-By combining results from our EM-based GFP labeling, CXMS, and transcriptional activator binding experiments, we generated a model for the spatial arrangement of the 19 SAGA subunits within our EM reconstruction (Fig. 7). We approximated each subunit as spheres based on their molecular weights and the average density of proteins. The CXMS results provided distance restraints between pairs of subunits, whereas the localization analyses served to map specific modules to regions of density.
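The sphere approximation described above can be made concrete with a small calculation. Assuming an average protein density of ~1.35 g/cm³ (a commonly used value; the paper does not state which figure was used), the radius of a sphere with the mass of one molecule follows from r = (3M / 4πρN_A)^(1/3):

```python
import math

AVOGADRO = 6.022e23        # molecules per mole
PROTEIN_DENSITY = 1.35     # g/cm^3, assumed average protein density
CM3_TO_A3 = 1e24           # cubic Angstroms per cubic centimetre

def sphere_radius(mw_da):
    """Radius (Angstroms) of a sphere with the mass of one protein
    molecule of the given molecular weight (Daltons)."""
    volume_a3 = mw_da / (PROTEIN_DENSITY * AVOGADRO) * CM3_TO_A3
    return (3.0 * volume_a3 / (4.0 * math.pi)) ** (1.0 / 3.0)

# Example: Tra1 (~400 kDa), the largest SAGA subunit, comes out at ~49 A,
# illustrating why it cannot fit within a single lobe of the head region.
print(f"Tra1 sphere radius: {sphere_radius(400_000):.1f} A")
```

Because radius scales with the cube root of mass, even an eight-fold difference in subunit molecular weight only doubles the sphere radius, which is why the density-based placement remains a coarse but useful constraint.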
We included two copies of Taf5, Taf6, Taf9, and Taf12 and arranged them in the same fashion as the subunits within the recently studied human TFIID core (41). We were unable, however, to confirm that both copies are present in SAGA. Our TAF subunit GFP tagging analyses did not show two additional densities within the same class average, which may be due to the alignment algorithms only focusing on one label at the expense of the other or one label being obscured by other protein density. Despite this, we believe that the composition of TFIID and the fact that the molecular weight of SAGA far exceeds the sum of all its subunits support the dimerization argument. The large size of the entire TAF module can be encompassed within SAGA if it is oriented along the long axis of the complex, with a small portion contained within the globular head region. This placement provides a broad interaction surface to every other region of SAGA, consistent with the role of the TAF module as the backbone of the rest of the complex.
Our analysis of Tra1 being too large to fit within one lobe of the head led us to place spheres corresponding to its N-and C-terminal domains within the two regions of the head. The N-terminal domain of Tra1 consists of a long stretch of HEAT repeat motifs that have the propensity to form superhelical structures thought to generate a flexible scaffold for protein binding (42). Because we observed a larger degree of rearrangement in the region of the head adjacent to the shoulder, we believe that the Tra1 N terminus localizes there. We placed the smaller, likely more globular C-terminal domain of Tra1 in the globular head region, adjacent to one arm of the TAF module. Interestingly, this region corresponds to the Gcn4 and Gal4 binding region. In an earlier UV cross-linking study, Gcn4 was shown to be cross-linked to both Tra1 and Taf12, supporting the proposed arrangement of these two subunits within the SAGA head region (43).
The CXMS results suggest that within the SPT module, Spt7 resides on the opposite side of the TAF module from Spt20 and the DUB module. We positioned the DUB module, which includes the Ubp8-Sgf11-Sus1-Sgf73 N-terminal crystal structure and a sphere corresponding to the C-terminal region of Sgf73, on the Spt20 face oriented outwards, consistent with the peripheral connectivity of the module to SAGA. A recent EM structure of SAGA suggested that the DUB module localizes to a density within the torso of SAGA (13). Our reconstructions do not contain a density that corresponds to the one encompassing the DUB module in this other SAGA EM structure. This may be due to the gradual cross-linking of the GraFix treatment capturing a different set of conformations than the direct incubation of TAP-purified SAGA with glutaraldehyde used in this other study. Despite this difference, the two studies agree on the relative location of the DUB module in the context of the fully assembled SAGA complex.
Earlier EM studies on SAGA placed Spt20 to be on the end of the complex away from Tra1 (12), which our localization studies and other MS-based experiments argue against (6,36). In our model, Spt20 is centrally located, adjacent to the SPT/TAF core, the DUB module, and Tra1. We placed Ada1 in two copies in close proximity to Taf12, because the two subunits heterodimerize through their histone fold domains (44). Spt7 is thought to bind both Spt8 and the Taf10 HFD through its C terminus (45). Our placement of the Spt8 subunit within the shoulder region adjacent to Taf10 accommodates Spt7 binding to both subunits and agrees with a recent EM study of SAGA (13). Furthermore, Spt3, Spt7, Spt8, and Spt20 are all known to be in close proximity to the TBP binding location (36). Our own TBP binding analysis showed the binding site to be near the shoulder region along one edge of the SPT-TAF-SPT sandwich, where all of the SPT subunits have access to the activator.
Both sides of the SPT module interact with Ada3, which serves as the major interface between the HAT module and SAGA. Both faces are accessible with the HAT module placed in the lower tail, underneath the SPT-TAF-SPT sandwich. These results are in direct conflict with the earlier EM-based model of SAGA, which placed the HAT module within the torso (12). We believe that our ability to preserve the tail region in the overwhelming majority of particles contributes to a more accurate investigation of the region. Conversely, the top of the sandwich provides access between Tra1 and its many cross-linking partners. The model we generated suggests a layered arrangement of SAGA modules, where the top layer contains Tra1; the middle layer contains the TAF, SPT, and DUB modules; and the bottom layer contains the HAT module.
DISCUSSION
The large size and sophisticated composition of SAGA pose immense technical challenges to characterizing its detailed molecular structure and subunit organization in relation to its multiple roles in transcriptional regulation. Using single-particle EM, Wu et al. (12) provided the first glimpse of the overall morphology of SAGA and delineated the location of several core components of this complex. Lee et al. (6) subsequently applied an approach combining systematic gene deletion and mass spectrometry to identify SAGA subunits that anchor the functional modules to the core complex. More recently, Han et al. (36) conducted CXMS analysis of SAGA in complex with TBP to determine the subunit interconnectivity. The work presented in this paper built on and further expanded these initial efforts. In particular, through developing an improved purification method that enhanced the stability of SAGA, we were able to more precisely visualize and analyze the different conformational states of SAGA and determined the contributions of the catalytic modules in mediating structural rearrangements. Our comprehensive localization strategy, which combines EM-based labeling and CXMS of SAGA, enabled us to construct a unifying model of the subunit organization of this complex, improving our understanding of the relationships between different activities within this complex.
Conformational Flexibility of SAGA-Our improved purification procedure significantly enhanced the stability of SAGA and enabled us to systematically analyze its conformational flexibility for the first time. In comparison to previously published EM studies of SAGA (12, 13), the gradual cross-linking of the GraFix method greatly preserved the presence of the extended tail of SAGA. The first SAGA EM study by Wu et al. (12) showed that the standard TAP procedure resulted in 35% of the particles not displaying the tail density. Meanwhile, a recent study by Durand et al. (13) reduced the proportion of tail-less SAGA particles to 25%. Our GraFix-purified SAGA reduced the number of dissociated tails to 9%, demonstrating the effectiveness of this treatment.
We proceeded to analyze the distribution of the different conformational states using single-particle EM methods. To investigate the degree of conformational rearrangement that SAGA undergoes, we generated three-dimensional reconstructions of each conformation. Although our reconstructions have slightly lower resolution than previously published structures (12, 13), likely because the GraFix treatment preserves fine conformational rearrangements and therefore renders alignment more difficult, they effectively demonstrate the extensive rearrangements between the three conformations. We suggest that the structural plasticity of SAGA reflects the need to adapt quickly to interact with different substrates and cofactors to mediate different physiological functions in the cell.
SAGA is not the only chromatin-related complex that displays a large degree of conformational flexibility. The chromatin structure remodeling complex adopts an open and closed conformation, where a sizable domain rearranges about a cavity in a manner reminiscent of the SAGA tail (46). TFIID undergoes an even greater degree of conformational rearrangement, where an entire lobe undergoes over 100 Å of movement and alters its connectivity to the rest of the complex (14). Given that the structural core of SAGA is comprised largely of subunits shared with TFIID, it is tempting to speculate that the molecular mechanisms behind the conformational rearrangement might be conserved between these two related complexes. Although we were able to distinguish three major conformations adopted by SAGA, delineating the physiological roles of these conformations would require the ability to isolate SAGA locked in a distinct conformation, a technical challenge that will need to be overcome in future studies.
Several factors appear to affect the conformational flexibility of SAGA, with disruption of the DUB module preventing the donut conformation from forming and the removal of the HAT module causing the shoulder region to be much more mobile. Disruption of the DUB module decreases HAT activity while only marginally affecting its structural integrity. Interestingly, it has been shown that the acetyltransferase and deubiquitination activities of SAGA show significant cross-talk, interacting both genetically and catalytically (7,47). Our finding that the absence of the DUB module affects the flexibility and presence of the tail region of SAGA, where the HAT module is located, may reveal an indirect cooperativity between the two catalytic activities.
Spatial Arrangement of SAGA Chromatin-binding Domains-We combined our three-dimensional SAGA reconstructions with subunit localization and interconnectivity data to generate a model detailing the spatial organization of all 19 SAGA subunits. We show that SAGA is likely arranged into three major layers, with the topmost layer housing Tra1, the middle layer containing the SPT-TAF-SPT sandwich and the DUB module, and the lower layer encompassing the HAT module. This layered arrangement of SAGA is supported by CXMS data from Han et al. (36) and our analyses, with very few subunits bridging the top and bottom layers. Based on the model we constructed, subunits with chromatin-binding domains are clustered along one side of the complex in close proximity to each other (Fig. 8). These domains include the Gcn5 and Spt7 bromodomains, Sgf29 Tudor domain, Ada2 SANT and SWIRM domains, and the Spt8 WD40 domain, all of which have been shown to bind different chromatin templates (48). Their proximity suggests a major interaction surface with the chromatin template within a region of SAGA that shows a large degree of flexibility. Chromatin surrounding a transcribed gene is by definition dynamic, decorated with various post-translational modifications depending upon the state of the cell. SAGA activity transitions between the different phases of transcription, from acetylation during transcription initiation to deubiquitination during elongation. The highly diverse chromatin-binding domains of SAGA are capable of binding methylated and acetylated histones, suggesting considerable versatility in template recognition. Given that transcription involves extensive remodeling of nucleosome occupancy, SAGA must be able to compensate for variations in nucleosome positions.
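The set of chromatin-binding domains named above can be organized as a simple lookup, which makes the clustering argument easy to query; the mapping below lists only the domains mentioned in the text.

```python
# SAGA subunits and the chromatin-binding domains attributed to them
# in the text (Gcn5 and Spt7 bromodomains, Sgf29 Tudor domain, Ada2
# SANT and SWIRM domains, Spt8 WD40 domain).
CHROMATIN_BINDING_DOMAINS = {
    "Gcn5": ["bromodomain"],
    "Spt7": ["bromodomain"],
    "Sgf29": ["Tudor"],
    "Ada2": ["SANT", "SWIRM"],
    "Spt8": ["WD40"],
}

def subunits_with(domain):
    """Return SAGA subunits carrying the given chromatin-binding domain."""
    return sorted(s for s, doms in CHROMATIN_BINDING_DOMAINS.items()
                  if domain in doms)

print(subunits_with("bromodomain"))
```

All five subunits cluster along one side of the complex in the model, consistent with a single flexible chromatin-interaction surface.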
The extreme diversity of context-specific activities that SAGA fulfills, compounded with the recent proposition that the complex is active in all RNA polymerase II-mediated transcription (49), provides a compelling hypothesis for a highly flexible and adaptable chromatin interaction surface. In contrast, our finding that the activator binding surface is relatively static is likely because the conserved acidic patches within the activation domains of transcriptional activators serve as universal adapters, delegating the task of chromatin binding to the DNA-binding domains of the activators (3). These hypotheses on the nature of SAGA flexibility predict that other chromatin-binding complexes will likely be flexible, a feature that is likely de-emphasized in single-particle EM studies where this property negatively affects achievable resolution. We believe that conformational variability of chromatin-binding complexes should be studied more closely, because such studies could provide intriguing new insights into the mechanism of action of these important regulators.
SAGA is a fascinating multifunctional complex that provides a paradigm to delineate the molecular mechanisms of fundamental transcriptional processes and to gain insights into how important chromatin-modifying complexes exert exquisite epigenetic control over all eukaryotic gene expression. Further understanding of SAGA mechanisms of action would require higher resolution analysis of the full complex and individual subunits, using a joint approach of various structural techniques.
Clinical outcome of arthroscopic capsular release for frozen shoulder: essential technical points in 255 patients
Background: The purpose of this study was to investigate the long-term clinical outcome of arthroscopic capsular release for frozen shoulder in 255 patients, together with the essential technical points, and the factors related to outcome with respect to the severity of adhesion of the coracohumeral (CH) ligament over the long head of the biceps (LHB). Methods: We performed arthroscopic capsular release for frozen shoulder in 267 shoulders of 255 patients (112 males and 143 females), with a mean age of 56.39 years, a mean duration of conservative treatment of 0.934 years, and a mean follow-up of 5.6 years. The frozen shoulders were divided based on the severity of adhesion between the CH ligament and the LHB: those with a slight degree of synovitis, no adhesion by obtuse rod, and slight thickness of the released capsule (type A); those with a moderate degree of synovitis, moderate adhesion of the LHB by obtuse rod, and moderate thickness of the released capsule (type B); and those with a severe degree of synovitis, severe adhesion of the LHB by obtuse rod, severe thickness of the released capsule, and a flatly shaped LHB (type C). We assessed the clinical factors related to the scoring of the shoulders by the criteria of the American Shoulder and Elbow Surgeons (ASES) and their relationship with the severity of LHB adhesion. Results: The ASES scores improved significantly at 5 years postoperatively in all three groups. The range of motion also improved significantly in all three groups. The severity of LHB adhesion over the CH ligament was confirmed to influence the ASES scores before and after arthroscopic capsular release: there was a significant difference between type A and type B (p < 0.0001) or type C (p < 0.0001) before and after surgery. Logistic regression analysis showed that disease duration, diabetes mellitus (DM), and ASES score were significantly associated with the severity type of LHB adhesion; DM in particular had a high odds ratio and was a risk factor for LHB adhesion.
There were no adverse events, including dislocation, axillary nerve injury, or recurrence, at 5 years after arthroscopic capsular release. Conclusions: The long-term results of arthroscopic capsular release in frozen shoulder were confirmed in 255 patients. The severity of LHB adhesion over the CH ligament, a pathological condition with DM as a risk factor, seems to play an important role in the functional outcome. Therefore, sufficient release of the LHB is an essential technical point of arthroscopic capsular release for frozen shoulder.
Background
While physiotherapy, analgesics for pain, steroid injection, and silent manipulation can all be effective for frozen shoulder, no long-term series of arthroscopic capsular release for frozen shoulder with more than 200 patients has been described so far. Several recent reports have shown that arthroscopic capsular release for frozen shoulder is effective and safe [1-3]. Walther et al. reported that arthroscopic capsular release should be recommended as an early treatment choice for persistent frozen shoulder in 54 patients [1]. On the other hand, Neviaser used the term "adhesive capsulitis" to reflect his findings in surgery [4]. From a pathological standpoint, a thickness of the coracohumeral (CH) ligament over 4 mm and of the joint capsule over 7 mm on MRI is important for the diagnosis of frozen shoulder [5]. In anatomical analysis, the CH ligament was divided into two parts: one part spread fibers over the rotator interval to the posterior portion of the greater tuberosity, and the other part enveloped the superior portion of the subscapularis, supraspinatus, and infraspinatus tendons. The anterior CH ligament holds the subscapularis muscle and anchors it to the coracoid process in a similar manner to the posterior CH ligament enveloping the supraspinatus and infraspinatus over the long head of biceps (LHB) tendon [6]. We previously reported a classification of arthroscopic findings for frozen shoulder based on LHB adhesion to the CH ligament in 87 patients [7]. The hypothesis of this study is that LHB adhesion to the CH ligament is associated with the long-term outcome of arthroscopic capsular release in frozen shoulder. The purpose of this study was to investigate the long-term clinical outcome in 255 patients and to extract the clinical factors related to the efficacy of arthroscopic capsular release for frozen shoulder.
Study design
Two hundred and sixty-seven consecutive frozen shoulders of 255 patients admitted to Tokyo Women's Medical University, Medical Center East underwent arthroscopic capsular release by a single surgeon (K.K.) from August 2003; the cohort included 112 males and 143 females, with a mean age of 56.39 ± 10.24 years, a mean disease duration of 0.934 ± 0.393 years, and a mean follow-up of 5.648 ± 4.060 (range, 5-13) years (Table 1). Preoperative treatments for the frozen shoulder included rehabilitation, steroid or hyaluronic acid injections, or non-steroidal anti-inflammatory drugs (NSAIDs) for at least 6 months before arthroscopic capsular release. The criteria for inclusion in this study were severe night pain concomitant with no improvement of flexion (90°) and external rotation (0°) and poor responsiveness to rehabilitation for at least 5 to 6 months prior to surgery, together with thickening of the CH ligament recognized on MRI [5]. Exclusion criteria were complete rotator cuff tear, acromioclavicular subluxation, and biceps tendon rupture on clinical and MRI findings. The frozen shoulders were divided into three types based on the severity of adhesion of the LHB to the CH ligament as assessed by arthroscopy (Fig. 1): those with a slight degree of synovitis, no adhesion by obtuse rod, and slight thickness of the released capsule (type A); those with a moderate degree of synovitis, moderate adhesion of the LHB by obtuse rod, and moderate thickness of the released capsule (type B); and those with a severe degree of synovitis, severe adhesion of the LHB by obtuse rod, severe thickness of the released capsule, and a flatly shaped LHB (type C). The frozen shoulders (n = 267) were divided into 162 type A shoulders (56.20 ± 11.20 years; range, 23-82 years), 87 type B shoulders (56.61 ± 8.06 years; range, 36-76 years), and 18 type C shoulders (57.06 ± 11.13 years; range, 35-78 years).
Disease duration with conservative treatment before surgery was 0.790 ± 0.271 years in type A, 1.075 ± 0.362 years in type B, and 1.556 ± 0.591 years in type C.
Procedure of arthroscopic capsular release and essential technical points for frozen shoulder: partial capsular release and ASD
After placing the patient in the beach-chair position under general anesthesia or interscalene local anesthetic blockade, the shoulder was examined before surgery to assess the range of motion in flexion and extension, external rotation at 0° abduction, external rotation at 90° abduction, and internal rotation at 90° abduction. After introducing a 4-mm arthroscope through a standard posterior portal and performing an initial diagnostic arthroscopy, we created an anterior portal just lateral to the coracoid process and superior to the subscapularis tendon using the outside-in technique in order to facilitate maneuvers with instruments such as shavers and a radiofrequency instrument (VAPR®; Mitek, Norwood, MA). Next, we assessed the adhesion of the LHB to the CH ligament over the shoulder joint (Fig. 2a). Our first step in the capsular release was to eliminate the adhesion of the LHB to the CH ligament using the radiofrequency instrument. Next, we removed the joint capsule just next to the labrum using the radiofrequency instrument and a rasp, from 5 o'clock to 11 o'clock of a right-side shoulder, over the LHB (Fig. 2b). Our method is a partial capsular release for frozen shoulder: we released the anterior, anteroinferior, superior, and superior-posterior capsules in addition to eliminating the LHB adhesion to the CH ligament. The inferior-posterior portion of the capsule was retained to maintain shoulder stability and avoid axillary nerve injury. A rasp conventionally used for arthroscopic Bankart repair proved quite useful in mobilizing the capsule onto the neck of the glenoid without axillary nerve complications. After arthroscopically observing the joint, we moved the scope into the subacromial space via a lateral and anterolateral portal, shaved the synovium in the subacromial bursa, and carefully observed the rotator cuff.
Arthroscopic subacromial decompression (ASD) was then performed, and the surfaces of the rotator cuff and subacromial bursa were smoothed using the VAPR® device and the rasp (Fig. 2c). After removing the scope and instruments, the shoulder was manipulated in external rotation at 0° of abduction, external rotation at 90° of abduction, internal rotation at 90° of abduction, and flexion in the plane of the scapula as well as extension. At the end of the capsular release, the range of motion obtained after the manipulation was measured. As the final step, we checked the sliding movement of the LHB and washed out the glenohumeral joint to eliminate coagulated blood and debris (Fig. 2d). If insufficient ROM was obtained, the adhesion of the LHB was released again.
As the postoperative rehabilitation protocol, passive and assisted-active exercises and stooping exercises for forward flexion and external rotation were commenced 1 day after surgery with the assistance of a physical therapist. After 2 weeks of passive exercise, the patients began active exercises to strengthen the rotator cuff and scapular stabilizers. After 4 to 6 weeks of rehabilitation, the patients returned to their normal work schedules without any limitations on daily activity. Rehabilitation was continued for 3 months after surgery to regain full muscle strength of the shoulder.
Measurement of outcome
All patients were assessed with the American Shoulder and Elbow Surgeons (ASES) score preoperatively, and the final evaluation was performed at an average of 5.648 ± 4.060 years postoperatively [8]. Preoperative and postoperative assessments of the recovery of the range of motion in forward elevation (flexion), external rotation at 0° and 90° of abduction, and internal rotation at 0° and 90° of abduction were performed in the three arthroscopic types (types A, B, and C). Informed consent was obtained from all patients, and the study protocol was approved by the ethics committee of Tokyo Women's Medical University. ASES scores were assessed in each of the three groups before and after surgery (Fig. 1), and multiple regression analysis with a logistic procedure was used to detect the clinical factors related to the severity of the LHB type. The proportion of patients with diabetes mellitus (DM) in each group was also analyzed.
Statistical analysis
We used the Wilcoxon test to compare ASES scores [8] and the range-of-motion values before and after surgery. The Mann-Whitney U test was used to compare these results between the different groups. Logistic regression analysis for LHB type severity was performed including age, disease duration, DM, and ASES scores at baseline and 5 years after surgery. The gender ratio was also calculated in each group. p values < 0.05 were considered significant. All analyses were performed with StatFlex version 6.0 (Statflex, Tokyo, Japan).
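The between-group comparison above is rank-based. As a purely didactic sketch of the idea behind the Mann-Whitney U test (average ranks for ties, no tie correction, hypothetical sample data; not the StatFlex pipeline actually used in the study):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    Ties receive average ranks; no tie correction is applied,
    so this is only an illustrative sketch of the rank-sum idea.
    """
    combined = sorted((v, 0 if i < len(x) else 1)
                      for i, v in enumerate(list(x) + list(y)))
    # Assign average ranks (1-based), handling tied values.
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    # Rank sum of the first sample.
    r1 = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    u1 = r1 - len(x) * (len(x) + 1) / 2.0
    u2 = len(x) * len(y) - u1
    return min(u1, u2)

# Hypothetical scores for two groups; complete separation gives U = 0.
print(mann_whitney_u([95, 92, 90], [70, 68, 65]))  # 0.0
```

Small U values indicate strong separation between the groups; significance is then read off the U distribution for the given sample sizes.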
Results
The ASES score improved postoperatively in all three groups, from 41.10 (Figs. 3 and 4). There was a significant difference between type A and type B (p < 0.0001) or type C (p < 0.0001) before and after surgery. The range of motion in flexion improved postoperatively in all three groups: from a mean of 80 ± 6.11 to 165 ± 8.84 in type A, from a mean of 75 ± 5.58 to 155 ± 7.96 in type B, and from a mean of 60 ± 6.38 to 140 ± 7.55 in type C. External rotation at 0° of abduction improved from a mean of −10 ± 7.32 to 45 ± 6.51 in type A, from a mean of −15 ± 7.11 to 40 ± 6.89 in type B, and from a mean of −25 ± 6.98 to 30 ± 7.45 in type C. Internal rotation improved from a mean of S1 to Th12 in type A, from a mean of S2 to L1 in type B, and from a mean of S2 to L1 in type C. Therefore, the range of motion was also confirmed to be dependent on the recovery of the LHB adhesion to the CH ligament after surgery. to the type of LHB (p = 0.0974). Therefore, LHB adhesion to the CH ligament was related to the clinical outcome and the DM ratio in frozen shoulder. There were no adverse events, including axillary nerve injury, dislocation, or recurrence, after arthroscopic capsular release in this study.
Discussion
The management of choice involves conservative treatment: non-steroidal anti-inflammatory drugs (NSAIDs), intra-articular steroid or hyaluronic acid injections, physical therapy, and silent manipulation under cervical nerve root block anesthesia are applied [9][10][11][12].
However, Cochrane reviews have demonstrated that, in the current literature, physiotherapy alone has little to no benefit compared with control groups [13]. A number of adjuncts are often used with physiotherapy, including extracorporeal shockwave therapy, electromagnetic stimulation, acupuncture, and lasers, none of which has been investigated in randomized controlled studies [14]. Even when undergoing rehabilitative treatment, patients with frozen shoulder often continue to experience severe night pain and contracture sufficient to disturb shoulder function. In some 10% of cases, indications for arthroscopic capsular release are present, and currently shoulder arthroscopic capsular release is a treatment of choice in such cases [14]. We selected arthroscopic capsular release for recalcitrant adhesive frozen shoulder after unsuccessful rehabilitation. However, a systematic review comparing manipulation and arthroscopic capsular release reported that the quality of the available evidence is low and that the data demonstrate little benefit for a capsular release instead of, or in addition to, a manipulation under anesthesia [15]. Ogilvie-Harris et al. compared manipulation with arthroscopic release in a prospective cohort of 40 patients [16]. The release comprised removal of synovium from the rotator interval, release of the anterior glenohumeral ligament and the intra-articular portion of the subscapularis tendon, and finally division of the anterior half of the inferior capsule. Their results after a follow-up of between 2 and 5 years showed a similar range of movement, but the release had a much better outcome in the review literature [17]. However, there has been no evidence of the long-term efficacy of arthroscopic capsular release in more than 200 patients.
Our first observation in the current investigation was the restriction of the dynamic sliding movement of the LHB in frozen shoulder compared with normal shoulders [7]. The LHB stands upward from the IR to the ER position during this movement. The mechanical and physiological functions of the shoulder depend closely and sensitively on this region of the LHB, especially for ER. After arthroscopic capsular release, ER improved in the patients who exhibited the dynamic sliding movement of the LHB. Our data indicate that the physiological movement of the LHB over the rotator interval plays a key role in achieving an improved range of motion in shoulders rated with high ASES scores. Furthermore, MRI findings in frozen shoulder have typically revealed a thickening of the coracohumeral ligament (CHL) [5]. CHL thickening and widespread adhesion were evident in all three types, especially in type C.
Frozen shoulder is thought to have an incidence of 3-5% in the general population and up to 20% in those with diabetes [18]. Its peak incidence is between the ages of 40 and 60; it is rare outside these age groups and in manual workers [19], and it is slightly more common in women. In this study, the DM ratio was 19.85% of all cases. In an experimental analysis of frozen shoulder, we reported that mechanical stress on the LHB and the rotator interval (RI) may induce the tissue around the LHB to activate mitogen-activated protein (MAP) kinases and express nuclear factor (NF)-κB via CD29, leading to capsular contracture and the production of matrix metalloproteinase (MMP)-3, interleukin (IL)-6, and vascular endothelial growth factor (VEGF) [20]. Accordingly, hypervascularity of the capsule in frozen shoulder was evident on arthroscopic examination. DM also induces the expression of these molecules, producing fibrous tissue in areas of mechanical stress such as the CH ligament and the LHB. DM was found to be a possible risk factor related to the severity of the abnormally widespread LHB adhesion to the CH ligament. Therefore, arthroscopic capsular release, especially around the LHB, should be managed carefully in patients with frozen shoulder and DM.
From a technical point of view, the superior release is extended to reach the long head of the biceps and is continued to release the CHL in the plane between the superior glenoid and the supraspinatus. If internal rotation or adduction of the shoulder is significantly restricted, the camera can be switched to the anterior portal for a posterior capsular release. Some surgeons complete the inferior release with a gentle manipulation, while others advocate a full 360° capsulectomy under direct vision while accepting the higher risk of iatrogenic injury to the axillary nerve [21]. Pearsall et al. performed arthroscopic release of the anteroinferior capsule, the intra-articular portion of the subscapularis tendon, the superior and middle glenohumeral ligaments, and the coracohumeral ligament in patients who had been recalcitrant to conservative treatment [22]. Among the 35 patients followed for a mean of 22 months after surgery, 83% had normal or only mildly symptomatic shoulders. These patients also received a tapered 21-day course of oral prednisolone. None of our patients were given oral steroids during treatment. We consider the first month to be the most important window for obtaining better results with rehabilitation after arthroscopic capsular release; most patients reach their final range of motion by 4 to 6 weeks after the release. We released the anterior, anteroinferior, and superior capsule in addition to eliminating the LHB adhesion to the CHL. Detailed arthroscopic assessment of the LHB adhesion revealed the clinical mechanism responsible for the decreased shoulder function associated with frozen shoulder.
Fig. 5 The ratio of patients with DM in each group. The DM ratio of type C was significantly higher than that of type A (p = 0.0012) and type B (p = 0.0302)
Limitations of this study include the lack of a control group, the need for longer-term results regarding recurrence after this procedure, and the still unclear mechanism by which DM contributes to the severity of adhesion over the LHB. We found that DM was a risk factor for the clinical outcome. It is therefore possible that frozen shoulder with DM should be separated into a pathological category distinct from idiopathic frozen shoulder. In the future, arthroscopic capsular release with less postoperative pain should be performed as day surgery for the benefit of patients with frozen shoulder.
Conclusions
The long-term results of arthroscopic capsular release in frozen shoulder were confirmed in 255 patients. The severity of LHB adhesion over the CH ligament, a pathological condition related to DM as a risk factor, seems to play an important role in the functional outcome. Therefore, the release of the LHB is an essential technical point of arthroscopic capsular release in frozen shoulder.
Millimeter Scale Track Irregularity Surveying Based on ZUPT-Aided INS with Sub-Decimeter Scale Landmarks
Railway track irregularity surveying is important for the construction and maintenance of railway lines. With the development of inertial devices, systems based on the Inertial Navigation System (INS) have become feasible and popular approaches in track surveying applications. In order to remove the requirement for high precision control points, this paper proposes a railway track irregularity measurement approach using an INS combined with the Zero Velocity Update (ZUPT) technique and sub-decimeter scale landmarks. The equations for calculating track irregularity parameters from absolute position errors are deduced. Based on covariance analysis, the analytical relationships of the track irregularity measurements with the drifts of the inertial sensors, the initial attitude errors, and the velocity and position observations are established. Simulations and experimental results show that the relative accuracy of the proposed approach for a 30 m chord can reach approximately 1 mm (1σ) with a gyro bias instability of 0.01°/h and angular random walk of 0.005°/√h, and an accelerometer bias instability of 50 μg and random noise of 10 μg/√Hz, while velocity observations are provided by the ZUPT technique at intervals of about 60 m. This accuracy meets the most stringent requirements of millimeter scale medium wavelength track irregularity surveying for railway lines. Furthermore, this approach reduces the requirement for high precision landmarks, which lightens the maintenance burden of control points and improves the work efficiency of railway track irregularity measurements.
Introduction
Railway track irregularity is one of the most important factors affecting the safe operation of the train. The irregularity can be assessed by track geometry parameters, the measurement of which plays a significant role in monitoring the track deformation and guiding the maintenance of railway lines [1]. Trains with higher speed require higher track smoothness. With the development of high-speed railways, the demands for track irregularity measurement techniques with high-accuracy and high-efficiency are increasing rapidly [2].
Traditionally, there are mainly two categories of track irregularity measurement methods, namely dynamic measurement and static measurement. Methods based on Track Recording Coaches (TRCs) are dynamic ones performed under wheel loading [1,2]. TRCs can measure long wavelength track irregularities with high work efficiency, but their availability is restricted and their measuring accuracy does not fulfil the requirements for track renewals [2]. Another category, based on manual measuring devices, is static. These kinds of devices, used for spot assessment, are surpassed by railway track surveying trolleys in terms of data amounts and time efficiency. This paper establishes the relationship between track irregularities and the absolute position deviations. Based on covariance analysis, the surveying accuracy of alignment and level irregularities is presented, and the analytical relationships of the irregularities with the precision of the inertial sensors, the initial attitudes, and the observation updates are established. Simulation and experimental results are also presented.
The rest of the paper is organized as follows: Section 2 describes the railway track irregularity and assessment. Section 3 describes the overview of the measurement system and the algorithm. Section 4 describes the design of Kalman filter and smoother. Section 5 presents the calculation method of alignment and level irregularities from absolute position as well as the covariance analysis of them. Section 6 reports the simulation and experimental results of track irregularity. Section 7 concludes this paper.
Railway Track Irregularity and Assessment
Railway tracks can be regarded as 3-dimensional curves [3,5]. Track irregularity refers to the deviation of the track from its design geometry, which is usually described by five geometry parameters, namely alignment (horizontal alignment), level (vertical alignment), cant (super-elevation or cross-level), twist and gauge [7][8][9]. As illustrated in Figure 1, the axes of the rail coordinate system are defined as follows: the x-axis points in the travelling direction, the y-axis is parallel to the running surface, and the z-axis is perpendicular to the running surface and points downwards. Alignment is the track's displacement in the horizontal plane, which can be seen as the deviation of the actual track from the design one in the horizontal plane. Level is the displacement in the vertical plane [1]. Gauge is the distance between the inner sides of the two railheads. Cant is the difference between the elevations of the running surfaces of the two rails, representing the tilting of the track in curves in order to compensate for the centripetal force. Twist is defined as the difference in cant over a given length.
In this paper, the gauges are estimated from the track gauge measurement system. Cant and twist can be calculated with simple models and will not be discussed in detail. The alignment and level irregularities will be evaluated as examples to demonstrate the measuring accuracy.
According to the railway standards [8,9], the alignment and level are measured by the vector distance value with a chord of fixed length (e.g., 30 m) on the rail surface in the horizontal and vertical directions, respectively. The magnitude of the alignment and level irregularities is calculated by the differential method of the 30 m chord. As shown in Figure 2, the red curve represents a segmentation of railway track in 3-dimensional space; the other two curves are its projections in the horizontal plane and the vertical plane, respectively. The 30 m long chord is determined by points p_0m and p_30m on the curve. Take the point p_s on the curve as an example. The distance from p_s to the chord is the vector distance of this point, represented by d_s. The vector distance of the next adjacent point p_s+5m, at a 5 m interval, is d_s+5m. The track irregularity of point p_s can be calculated by Equation (1) [8]:

∆_s = (d_s − d̄_s) − (d_s+5m − d̄_s+5m)    (1)

where d_s and d_s+5m represent the measured vector distances of points p_s and p_s+5m, and d̄_s and d̄_s+5m represent their design values. ∆_s represents the track irregularity of point p_s, whose projection in the horizontal plane is the alignment irregularity and in the vertical plane is the level irregularity.
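As a concrete illustration of the differential 30 m chord method of Equation (1), the sketch below computes the signed perpendicular (vector) distance of a point to the chord in one projection plane and then differences the measured-minus-design deviations at two points 5 m apart. The geometry and numbers are hypothetical, not taken from the paper's data.

```python
def vector_distance(p, a, b):
    """Signed perpendicular distance from point p to the chord a-b (2D projection)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    cx, cy = bx - ax, by - ay          # chord direction vector
    # Signed parallelogram area divided by chord length.
    return ((px - ax) * cy - (py - ay) * cx) / (cx * cx + cy * cy) ** 0.5

def irregularity(d_s, d_s_design, d_s5, d_s5_design):
    """Equation (1): difference of vector-distance deviations 5 m apart."""
    return (d_s - d_s_design) - (d_s5 - d_s5_design)

# Hypothetical straight design chord from 0 m to 30 m along the track.
a, b = (0.0, 0.0), (30.0, 0.0)
d_s = vector_distance((10.0, 0.004), a, b)    # measured point near 10 m
d_s5 = vector_distance((15.0, 0.006), a, b)   # measured point near 15 m
delta = irregularity(d_s, 0.0, d_s5, 0.0)     # design distances are zero here
print(round(delta * 1000, 3))  # 2.0 (mm)
```

Because Equation (1) differences two nearby deviations, any offset common to both points (e.g., a constant absolute position bias) cancels, which is what makes millimeter-scale relative accuracy achievable.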
Track Irregularity Measurement System
The track irregularity measurement system is illustrated in Figure 3. The system is equipped with a T-type trolley, a track gauge sensor, a high precision prism, an odometer, and a navigation grade IMU. The IMU consists of three high accuracy ring laser gyros (RLGs, bias instability: 0.01°/h and angular random walk (ARW): 0.005°/√h) and three high stability quartz accelerometers (bias instability: 50 μg and random noise: 10 μg/√Hz). The prism mounted on the trolley is used to provide position observations, working with a Leica optical total station (1 mm and 0.5″) based on control points. The odometer of the system is used as an aid to determine the position of the irregularity measurements along the track. The gauge sensor is used to measure the gauge of the tracks.
Figure 4 illustrates an overview of the data processing procedure of the Kalman filtering and smoothing algorithm based on the ZUPT-aided INS combined with landmarks employed in this paper. The system makes use of the measurements of the IMU (angular increments ∆θ from the gyros and specific force integrations ∆v from the accelerometers) and the initial position measured by the total station for initial alignment to calculate the initial attitudes. After that, the trolley is pushed forward manually on the track at walking speed. After it moves across a certain distance interval (60 m in this paper), the trolley stops, and a zero-velocity observation and a position observation are updated. Then the Kalman filtering and smoothing algorithm is executed to output the optimized position, velocity and attitude measurements of the interval. Since the wheels of the trolley keep continuous contact with the railway track, the 3-dimensional track geometry can be determined uniquely by the position and attitude sequences of the INS. Then the track parameters can be calculated and track irregularities can be detected.
System Equations and Measurement Equations of Kalman Filter
The error equations of the attitude, velocity and position for the railway track surveying application can be expressed as Equation (2) [10]:

φ̇ = −ω^n_in × φ − C^n_b δω^b_ib
δv̇^n = f^n × φ + C^n_b δf^b    (2)
δṙ^n = δv^n

where the i-frame is the inertial frame, the n-frame is the local level frame (North-East-Down) used as the navigation frame, and the b-frame is the body frame of the IMU (Forward-Right-Down). φ = [φ_N φ_E φ_D]^T represents the vector of attitude errors about the north, east and downward axes of the navigation frame. C^n_b represents the direction cosine matrix. ω^n_in represents the turn rate of the navigation frame with respect to the inertial frame expressed in the n-frame; it can be obtained by summing the Earth's rotation rate with respect to the inertial frame and the turn rate of the navigation frame with respect to the Earth, i.e., ω^n_in = ω^n_ie + ω^n_en. δω^b_ib represents the drift errors of the gyroscopes. δv^n is the vector of velocity errors. f^n represents the specific force in navigation axes. δf^b represents the drift errors of the accelerometers. δr^n represents the position error in navigation axes. The errors of the inertial sensors in this paper are modeled as piecewise constant values. The position coordinates of the measured track are expressed in segmentation with the n_0-frame, which is so near to the n-frame that C^{n_0}_n ≈ I and δr^{n_0} = C^{n_0}_n δr^n ≈ δr^n = [δr_N δr_E δr_D]^T. For medium wavelength (30 m chord) track irregularity surveying, the track segmentation is set to be 60 m long in this paper. Moreover, δω^b_ib and δf^b can be expressed as shown by Equations (3) and (4):

δω^n_ib = C^n_b δω^b_ib = [ε_N ε_E ε_D]^T + C^n_b [w_gx w_gy w_gz]^T    (3)

where δω^n_ib is the drift of the gyros expressed in the n-frame, ε_N, ε_E and ε_D are the equivalent gyro biases of the north, east and downward directions, and w_gx, w_gy and w_gz are the random noises of the gyros:

δf^n = C^n_b δf^b = [∇_N ∇_E ∇_D]^T + C^n_b [w_ax w_ay w_az]^T    (4)

where δf^n is the drift of the accelerometers expressed in the n-frame, ∇_N, ∇_E and ∇_D are the equivalent accelerometer biases of the north, east and downward directions, and w_ax, w_ay and w_az are the random noises of the accelerometers.
Since the track of high speed railways is almost level and straight with a very large radius of curvature, trolley maneuvers are rather weak when moving on the track at [3,6]. Some error parameters of the INS are coupled together with others, for example, the orientation error is coupled with the equivalent east gyro bias, and the level errors are coupled with equivalent horizontal accelerometer biases, so the equivalent east gyro bias ε E and the equivalent horizontal accelerometer biases ∇ N and ∇ E are unobservable. They will not be estimated as error states in the Kalman filter. Since the trolley moves in walking speed (less than 8 km/h [8]) and the length of the measurement interval is short, the terms of ω n en and δω n in can be ignored as shown in Equation (2).
Considering the analysis above, a typical Kalman filter with 12 error states is established in this paper. The system error model and the observation model can be expressed as Equation (5):

ẋ(t) = A x(t) + G w(t),  z(t) = H x(t) + υ(t)    (5)

where the error state vector x(t) can be written as in Equation (6):

x(t) = [φ_N φ_E φ_D δv_N δv_E δv_D δr_N δr_E δr_D ε_N ε_D ∇_D]^T    (6)

According to Equation (2), the system error matrix A and the system noise matrix G can be expressed in simplified forms by Equations (7) and (8). z(t) is the filter observation vector; the velocity and position of the trolley are used as update information in the Kalman filter. The measurement matrix H is defined by Equation (9):

H = [0_{6×3}  I_{6×6}  0_{6×3}]    (9)

w(t) and υ(t) are the system noise and the measurement noise, whose Power Spectral Densities (PSDs) are Q(t) and R(t), respectively. They can be expressed as Equation (10):

E[w(t)w^T(τ)] = Q(t)δ(t − τ),  E[υ(t)υ^T(τ)] = R(t)δ(t − τ)    (10)
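The effect of a zero-velocity observation on the filter can be seen even in a one-dimensional toy version of the model in Equation (5): a random-walk velocity error whose variance grows during the run and collapses at a ZUPT. The noise values below are illustrative only, not the system's actual tuning.

```python
def zupt_filter(n_steps, q=1e-4, r=1e-6, p0=0.0):
    """Scalar Kalman filter for a random-walk velocity error.

    Propagates the error variance for n_steps, then applies a single
    zero-velocity update (measurement z = 0 with variance r).
    """
    x, p = 0.5, p0            # initial velocity-error guess and variance
    for _ in range(n_steps):  # prediction: variance grows by q each step
        p += q
    p_prior = p
    k = p / (p + r)           # Kalman gain for the ZUPT measurement
    x = x + k * (0.0 - x)     # zero-velocity observation pulls x toward 0
    p = (1.0 - k) * p
    return x, p, p_prior

x, p, p_prior = zupt_filter(100)
print(p_prior, p)  # variance collapses from ~1e-2 to ~1e-6 at the ZUPT
```

This is exactly why stopping the trolley every 60 m bounds the error growth: each ZUPT resets the velocity-error uncertainty to near the measurement noise floor.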
Smoothing Algorithm
The position errors and their covariance between two observation updates increase with time due to the residual system errors. This is even more serious when the observations are few and not very precise. In order to obtain optimal position estimates during the update outages, a smoothing algorithm must be applied, utilizing all past, current and future measurements [11,12]. This paper employs the well-known RTS smoothing algorithm to estimate the states in the measurement intervals. The RTS smoother consists of a common forward Kalman filter and a backward smoother; the backward sweep begins at the end of the forward Kalman filter. Figure 5 illustrates the computation procedure of the RTS smoother.
Figure 5. The RTS smoothing algorithm computational process.
The forward Kalman filter is the common one and can be expressed in discrete form as Equation (11) shows:

x̂⁻_{f,k+1} = Φ_k x̂⁺_{f,k}
P⁻_{f,k+1} = Φ_k P⁺_{f,k} Φ_k^T + Q_k
K_k = P⁻_{f,k} H_k^T (H_k P⁻_{f,k} H_k^T + R_k)^{−1}    (11)
x̂⁺_{f,k} = x̂⁻_{f,k} + K_k (z_k − H_k x̂⁻_{f,k})
P⁺_{f,k} = (I − K_k H_k) P⁻_{f,k}

where x̂⁺_{f,k} and P⁺_{f,k} represent the updated estimate of the state vector and its corresponding covariance matrix of the forward filter at epoch k, x̂⁻_{f,k+1} is the optimal predicted estimate and P⁻_{f,k+1} represents its covariance matrix, H_k is the measurement matrix, K_k is the gain matrix of the forward Kalman filter at epoch k, and Φ_k is the system state transition matrix, which can be calculated from the matrix A.
The backward smoother can be expressed in discrete form as shown by Equation (12) [13,14]:

A_k = P⁺_{f,k} Φ_k^T (P⁻_{f,k+1})^{−1}
x̂_k = x̂⁺_{f,k} + A_k (x̂_{k+1} − x̂⁻_{f,k+1})    (12)
P_k = P⁺_{f,k} + A_k (P_{k+1} − P⁻_{f,k+1}) A_k^T

where x̂_k is the optimal smoothed estimate of the state vector at epoch k, P_k is the error state covariance matrix of the smoother, and A_k is the smoothing gain matrix.
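A minimal scalar sketch of the forward filter of Equation (11) and the backward sweep of Equation (12) is given below (state transition Φ = 1, measurement H = 1, hypothetical noise values and measurements). It illustrates the key property of the RTS smoother: smoothed variances never exceed the filtered ones.

```python
def rts_smoother(zs, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Scalar RTS smoother: forward Kalman filter + backward sweep (Phi = H = 1)."""
    xm, pm, xp, pp = [], [], [], []       # prior/posterior estimates and variances
    x, p = x0, p0
    for z in zs:                          # forward filter, Equation (11)
        x_pred, p_pred = x, p + q         # predict (Phi = 1)
        k = p_pred / (p_pred + r)         # Kalman gain
        x = x_pred + k * (z - x_pred)     # measurement update
        p = (1.0 - k) * p_pred
        xm.append(x_pred); pm.append(p_pred); xp.append(x); pp.append(p)
    xs, ps = xp[:], pp[:]                 # backward sweep, Equation (12)
    for i in range(len(zs) - 2, -1, -1):
        a = pp[i] / pm[i + 1]             # smoother gain A_k
        xs[i] = xp[i] + a * (xs[i + 1] - xm[i + 1])
        ps[i] = pp[i] + a * a * (ps[i + 1] - pm[i + 1])
    return xs, ps, pp

xs, ps, pp = rts_smoother([0.1, 0.2, 0.15, 0.3])
print(all(s <= f + 1e-12 for s, f in zip(ps, pp)))  # True
```

At the final epoch the smoothed and filtered solutions coincide; at interior epochs the smoother uses the future measurements to tighten the estimate, which is why it is applied over each 60 m interval between updates.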
Alignment Irregularity and Level Irregularity Calculated from Absolute Poisition Devition
In order to assess the track irregularity, the relative geometry parameters of alignment and level should be calculated after obtaining the absolute position of the track. As Figure 6 illustrates, an arbitrarily 30 m chord on the 60 m track segmentation is determined by two points p 0m and p 30m whose coordinates are respectively marked in n 0 -frame as r n 0 p 0m = r p 0m N r p 0m E r p 0m D The measurement values of their coordinates are marked as r n 0 p 0m = r p 0m N r p 0m E r p 0m D T and r n 0 p 30m = r p 30m N r p 30m E r p 30m D T respectively. p s represents an arbitrary point on the trajectory for this chord, coordinates of which are marked as r n 0 p s = r p s N r p s E r p s D T for the true value and r n 0 p s = r p s N r p s E r p s D T for the measurement value. d s is the vector distance from p s to the chord, projections of which in horizontal plane and vertical plane are alignment and level, respectively.
As shown in Figure 6, the track segmentation is 60 m long for ZUPT and absolute position update. For an arbitrary 30 m chord on the track segmentation, we define a new frame, the c-frame, whose x-axis is identical with the chord and can be obtained by a rotation θ_a about the z-axis and a rotation θ_l about the y-axis of the n_0-frame sequentially. We can calculate the vector distance values for every point of a trajectory by transforming the coordinates from the n_0-frame to the c-frame.
The relationship between the n_0-frame and the c-frame can be written as Equation (13). According to the definition of alignment and level together with Figure 6, r^c_ps,y represents the alignment and r^c_ps,z represents the level, and they can be calculated by Equation (14):

r^c_ps,y = −(r^n0_ps,N − r^n0_p0m,N) sin θ_a + (r^n0_ps,E − r^n0_p0m,E) cos θ_a
r^c_ps,z = (r^n0_ps,N − r^n0_p0m,N) sin θ_l cos θ_a + (r^n0_ps,E − r^n0_p0m,E) sin θ_l sin θ_a + (r^n0_ps,D − r^n0_p0m,D) cos θ_l

In addition, we can express θ_a and θ_l by coordinate values as shown in Equation (15) according to Figure 6. The deviations of r^c_ps,y and r^c_ps,z can be calculated by the variational method as expressed by Equations (16) and (17). Substituting Equations (16) and (17) into Equation (1) yields the alignment irregularity and the level irregularity. Since the gradient of a railway track is very small (25 m/1000 m for the largest gradient) and the turning radius is very large (2000 m) in general, the alignment irregularity and the level irregularity can be simplified by ignoring the small terms, as Equations (18) and (19) show, where l_c represents the length of the chord and l_5m = 5 m is the distance between points p_s and p_s+5m. The derivation processes of Equations (18) and (19) are shown in Appendix A. As Figure 7 illustrates, even though the absolute position measurements may have deviations larger than centimeter scale, the relative deviations can still reach millimeter scale due to the common offset shared by the adjacent points p_s and p_s+5m.
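The chord projection of Equations (14) and (15) can be checked with a small numeric sketch. The angle conventions below (θ_a measured from north toward east, θ_l chosen so the chord endpoint's z-offset vanishes) are our reading of Figure 6, so treat the signs as assumptions.

```python
import numpy as np

def chord_angles(p0, p1):
    """Azimuth theta_a and pitch theta_l of the chord p0->p1 in the NED
    n0-frame (our reading of Equation (15))."""
    d = np.asarray(p1, float) - np.asarray(p0, float)
    theta_a = np.arctan2(d[1], d[0])       # rotation about the z (down) axis
    lh = np.hypot(d[0], d[1])              # horizontal chord length
    theta_l = np.arctan2(-d[2], lh)        # rotation about the y axis
    return theta_a, theta_l

def alignment_level(p0, p1, ps):
    """Lateral (alignment) and vertical (level) offsets of a point ps from
    the chord p0->p1, following the structure of Equation (14)."""
    ta, tl = chord_angles(p0, p1)
    dN, dE, dD = np.asarray(ps, float) - np.asarray(p0, float)
    y = -dN * np.sin(ta) + dE * np.cos(ta)
    z = dN * np.sin(tl) * np.cos(ta) + dE * np.sin(tl) * np.sin(ta) + dD * np.cos(tl)
    return y, z
```

By construction both offsets vanish at the chord endpoints, so only the deviation of the intermediate trajectory points survives, exactly what the alignment and level parameters are meant to capture.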
Covariance Analysis
The track irregularity measurement accuracy can be represented by its variance. According to Equations (18) and (19), in order to calculate the variance of the track irregularity, the covariances among the position errors must be calculated. We will first carry out the covariance analysis theoretically for the simplified situation ignoring the system noises, and then numerically for the general situation with system noises. Since the projections of the Earth angular velocity through the attitude errors are small and remain constant over a short time interval, they can be treated as equivalent gyro drifts. The terms of Coriolis acceleration are so small that they can also be ignored over a short time interval. Therefore, the system matrix can be further simplified as expressed in Equation (20) for the simplified situation [10], where g is the value of gravity. In addition, in order to simplify the solving process, we suppose without loss of generality that the railway track is a straight track in the north direction. Under these conditions, Equations (18) and (19) can be simplified as expressed by Equations (21) and (22).

For a ZUPT-aided INS with landmark integration, the distribution of trolley stop points and position observations is crucial to ensure the surveying accuracy [5]. In general, higher-frequency observation updates will result in better accuracy. However, higher-frequency observations mean more stop points, which reduces the work efficiency. Therefore, in this paper, the distance between two stop points is 60 m for measuring position and providing zero velocity, and the observation updates are only provided at the end of every 30 m chord interval, as Figure 6 shows.
For the measurement of every 30 m interval, the observation updates are measured at the end time epoch. For the forward filtering process, the optimal estimate of the error state vector and its covariance matrix at the other time epochs with no observations can be expressed as functions of the initial values in continuous form as Equation (23) shows, according to Equation (11) [13], where Φ(t, 0) is the system state transition matrix, which can be calculated as Equation (24) shows, and the initial variance matrix can be expressed as Equation (25). When the observations update at time epoch T, the updated estimate of the state vector and its covariance can be obtained by the discrete Kalman filter as shown in Equation (26). For the backward smoothing process, the initial optimal smoothed estimate of the state vector and its covariance are x̂(T) = x̂+_f,N and P(T) = P+_f,N. The optimal smoothed estimate and its covariance at an arbitrary time epoch t can be expressed in continuous form as Equation (27) shows. The error of the optimal smoothed estimate of the state vector can be obtained by subtracting the true value from its optimal smoothed estimate as Equation (28) shows. From Equation (28), we can obtain the position errors δr^n0_pi,E and δr^n0_pi,D, where the time relationship is t_s+5m = t_s + (l_5m/l_c)T = t_s + kT. Substituting δr^n0_pi,E and δr^n0_pi,D into Equations (21) and (22) respectively and calculating the variances of Δ_ps,y and Δ_ps,z, we obtain Equations (29) and (30). Calculating the partial derivatives of Equations (29) and (30) with respect to their variables, we can verify that they are all positive. This means that the variances of the alignment irregularity and the level irregularity are monotonically increasing functions of these variables. Only when the variances of the initial state errors and the observations are less than certain values can the measurements of track irregularity satisfy the surveying accuracy demands. According to Equation (30), the position error (δr and σ²_r) has no effect on the level irregularity.
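In the noise-free case the variance of the relative irregularity is a quadratic form in the initial error covariance, so the monotonicity claimed above can be checked numerically. The sketch below uses a single-channel error chain δr' = δv, δv' = gδθ + ∇, δθ' = ε as a stand-in for the simplified model of Equation (20); this A is nilpotent, so Φ(t) = I + At + A²t²/2 + A³t³/6 is exact.

```python
import numpy as np

G = 9.8  # gravity (m/s^2)

# One-channel error chain, stand-in for the simplified model (20):
# x = [dr, dv, dtheta, accel_bias, gyro_drift]
A = np.array([[0, 1, 0, 0, 0],
              [0, 0, G, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0]], dtype=float)

def Phi(t):
    """Exact transition matrix: A is nilpotent (A^4 = 0), so the series stops."""
    A2, A3 = A @ A, A @ A @ A
    return np.eye(5) + A * t + A2 * t**2 / 2 + A3 * t**3 / 6

def irregularity_var(P0, t_s=10.0, t_s5=15.0):
    """Noise-free variance of dr(t_s) - dr(t_s+5m) (5 m spacing at 1 m/s):
    the quadratic form e P0 e^T, with e the dr-row difference of Phi."""
    e = Phi(t_s)[0, :] - Phi(t_s5)[0, :]
    return float(e @ P0 @ e)
```

With a diagonal P0 the quadratic form makes monotonicity in each input variance obvious; and since Phi(t)[0, 0] = 1 at all times, the initial position variance cancels in the difference, which mirrors the paper's finding from Equation (30) that the position error does not affect the relative irregularity.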
As a matter of fact, the influence of the position error on the alignment irregularity is also so much smaller than the other error terms that it can be ignored. When σ²_r → ∞, which means that there is no position observation, the variance of the alignment irregularity converges as Equation (31) shows. We can verify that the value of lim P_Δps,y is very small by a numerical method.

Considering that P_Δps,y is a monotonically increasing function of σ²_r, it is feasible to implement track irregularity surveying tasks without position observation updates for the ZUPT-aided INS. As a result, the requirements for high-precision landmarks are reduced to a large extent. The landmarks need only be used to determine the track segmentation, for which sub-decimeter accuracy meets the demand. They may also be replaced by a sub-decimeter INS/odometer integration system over short time intervals.
For the general situation, the system noises cannot be ignored and we carry out the covariance analysis by a numerical method. The measurement accuracies of the alignment irregularity and the level irregularity are related to the variances of the initial error states, the accuracy of the inertial sensors and the accuracy of the observation updates. Here, we suppose without loss of generality that the railway track is a straight track in the direction of north by east 45 degrees. The system matrix takes the full form of Equation (7), without ignoring the projections of the Earth angular velocity through the attitude errors or the terms of Coriolis acceleration. Considering the previous analysis, we will only make use of the velocity observation to update the Kalman filter.
Firstly, we assess the influence of the observation accuracy on the irregularity measurement accuracy without position observation. We set the tilt error to 0.006° and the orientation error to 0.06°, the gyro bias instability to 0.01°/h with an ARW of 0.005°/√h, and the accelerometer bias instability to 50 µg with a random noise of 10 µg/√Hz. The measurement accuracy is also affected by the measurement time of every interval, i.e., by the velocity of the trolley: a shorter measurement time means less integration time for the errors, and results in higher measurement accuracy. Here we set the trolley velocity to 1 m/s (8 km/h at most for a track surveying trolley), so 30 s are consumed for every 30 m distance interval. The relationships between the railway track irregularity measurement accuracy and the velocity observation accuracy as well as the initial position accuracy are illustrated in Figure 8.
According to Figure 8, under the conditions supposed above, the observation accuracy of the velocity has a larger influence on the track irregularity measurement accuracy than the initial position error. The initial position error has no effect on the track irregularity, which is consistent with the earlier theoretical analysis. In order to satisfy the relative accuracy demand of 1 mm, the accuracy of the velocity observation should be better than 0.15 mm/s, and ZUPT can satisfy this velocity accuracy demand. A higher grade of inertial sensors than the system above should be employed to satisfy the high-speed railway accuracy demand of 0.5 mm.
Secondly, we assess the influence of the random noises of the inertial sensors on the irregularity measurement accuracy. We set the tilt error to 0.006° and the orientation error to 0.06°, the gyro bias instability to 0.01°/h, the accelerometer bias instability to 50 µg, the accuracy of the initial position to 10 cm and the velocity observation to 0.1 mm/s. The relationships between the track irregularity measurement accuracy and the random noises of the gyro and accelerometer are illustrated in Figure 9: under the supposed conditions, the ARW of the gyro should be less than 0.0071°/√h and the random noise of the accelerometer should be less than 14.7 µg/√Hz to satisfy the demanded accuracy of 1 mm, both with and without position observation.
Thirdly, the influences of the tilt errors and the orientation error on the irregularity measurement accuracy have been assessed, with the other parameters fixed at the values described previously. The relationships between the track irregularity measurement accuracy and the attitude errors are illustrated in Figure 10.

As illustrated in Figure 10, the attitude errors have no effect on the level irregularity, which is consistent with the theoretical analysis as Equation (30) shows. Since the orientation error is much bigger, it has a larger effect on the alignment irregularity than the tilt errors.
Finally, the influences of the equivalent biases of the gyros and accelerometers on the irregularity measurement accuracy are assessed, as illustrated in Figure 11, with the other parameters again fixed at the previously described values. As illustrated in Figure 11, the gyro biases have no effect on the level irregularity, which is consistent with the theoretical analysis as shown in Equation (30), and the influence of the accelerometer bias on the level irregularity is small. In addition, the accelerometer bias has no effect on the alignment irregularity, as Equation (29) shows.

Figure 11. (a) The relationship between level irregularity measurement accuracy and the biases of inertial sensors; (b) The relationship between alignment irregularity measurement accuracy and the biases of inertial sensors.
Simulations
Monte Carlo simulations of the alignment irregularity and level irregularity surveying accuracy for the proposed approach have been implemented based on the real random noises of the INS. The simulated trajectory is a straight line in the direction of north by east 45 degrees. The random noises of the gyros and accelerometers were measured with the mentioned INS in the static state. The ARW of the RLG gyro in this paper is about 0.005°/√h, and the bias instability is set to 0.01°/h. The random noise of the accelerometer is about 10 µg/√Hz, and the bias is set to 50 µg. According to the accuracy of the inertial sensors, the initial attitude errors are set to 0.006° and the initial orientation error is set to 0.06°. The position standard deviation is set to 10 cm, and 0.1 mm/s for the velocity observation; the high-precision velocity observation can be provided by the ZUPT technique. The velocity of the trolley is set to 1 m/s, and the length of the trajectory is set to 30 m. The observation updates are provided at the beginning and the end of the trajectory. We take the maximum value of the track irregularity error as the statistical accuracy measure. Five hundred groups of Monte Carlo simulation results based on the ZUPT-aided INS approach without position observation are shown in Figure 12. According to Figure 12, the Root Mean Square (RMS) of the measurement accuracy is about 0.70 mm for the level irregularity and 0.99 mm for the alignment irregularity. This is consistent with the result calculated by the covariance analysis previously. The results of the Monte Carlo simulation based on the ZUPT-aided INS approach with position observation are the same.
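A stripped-down Monte Carlo run can illustrate why relative accuracy survives a large absolute drift. The sketch below simulates only the vertical channel with white accelerometer noise (the bias is assumed estimated away by the ZUPT-aided smoother) and approximates the chord correction by a linear detrend between segment endpoints; both are simplifications relative to the paper's full 500-run simulation, so the numbers are illustrative only.

```python
import numpy as np

def run_once(rng, T=30.0, dt=0.01, noise_psd=10e-6 * 9.8):
    """One realization of the vertical position error: white accelerometer
    noise (10 ug/sqrt(Hz)) double-integrated over a T-second pass."""
    n = int(T / dt)
    a_err = (noise_psd / np.sqrt(dt)) * rng.standard_normal(n)
    v_err = np.cumsum(a_err) * dt
    return np.cumsum(v_err) * dt          # position error trajectory (m)

def relative_error(r_err, dt=0.01, v=1.0, spacing=5.0):
    """Max |d_s - d_s+5m| surrogate: detrend linearly between the segment
    endpoints (crude chord correction), then difference 5 m apart."""
    t = np.arange(len(r_err)) * dt
    trend = r_err[0] + (r_err[-1] - r_err[0]) * t / t[-1]
    d = r_err - trend
    k = int(spacing / (v * dt))           # samples per 5 m at 1 m/s
    return float(np.max(np.abs(d[:-k] - d[k:])))

rng = np.random.default_rng(2017)
errs = [relative_error(run_once(rng)) for _ in range(200)]
rms = float(np.sqrt(np.mean(np.square(errs))))   # millimetre scale, while
                                                 # the absolute drift is much larger
```

Even this crude model shows the key effect: adjacent points share a common position offset, so the 5 m differences stay at millimetre scale while the absolute position error grows without bound.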
Experimental Results
Real tests were carried out on an experimental railway line. The railway track is about 120 m long as shown in Figure 13. The absolute position is provided by a Leica optical total station with a high precision prism mounted on the trolley based on Control Points (CPIII) as shown in the figure.
At the beginning of the tests, the trolley is placed on the track for a 15 min static initial alignment, and the initial position measured by the total station is loaded. The trolley is then pushed forward along the track at walking speed (about 1.5 m/s) while the track irregularities are measured. Two different experiments were carried out.
The first experiment is a comparison of accuracy between the proposed approach and the total station. For this group of tests, the trolley stops at every 60 m distance interval and the velocity observation provided by ZUPT is updated in the INS. The track irregularities measured by the ZUPT-aided INS are compared with the measurements provided by the total station. Since the high-precision position measurements are taken by the total station at 3 m intervals, the distance between the two adjacent points used to calculate the irregularity in Equation (1) is chosen as 6 m. For the 30 m chord, the deviation of the measurement results between these two approaches is shown in Figure 14. As illustrated, the RMS of the alignment irregularity is about 0.82 mm and that of the level irregularity is 1.02 mm. The 3D spatial trajectories of the first 60 m track segmentation measured by the total station and the ZUPT-aided INS are illustrated in Figure 14c. As shown in the figure, even though the absolute deviations between these two approaches are bigger, the relative deviations can still achieve millimeter scale.
The second experiment is the repeatability test. For this experiment, six groups of measurements of the same track segment were carried out. Since the designed vector distance is unknown, we simply calculate the difference of two points at 5 m intervals, namely d_s − d_s+5m, to estimate the repeatability of the track irregularity. The comparison of the track irregularity sequences obtained by the ZUPT-aided INS in six runs is illustrated in Figure 15. As shown in the figure, the distance between two adjacent sample points is 0.5 m and only the track irregularities of the first 30 m chord are plotted.
The irregularity differences at the same railway track points between different runs indicate the repeatability of the measurement. The results of the statistical deviation of the alignment and level irregularities are listed in Table 1. As Table 1 shows, the standard deviations of the differences in alignment irregularity and level irregularity are approximately 1 mm, which is consistent with the theoretical analysis as well as the simulation results.
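The repeatability statistic in Table 1 is simply the spread of pairwise differences between runs at the same track points. A minimal sketch, with synthetic data standing in for the six measured sequences:

```python
import numpy as np

def repeatability_std(runs):
    """Standard deviation of the pairwise differences of irregularity
    sequences measured in repeated runs over the same track points."""
    runs = np.asarray(runs, dtype=float)
    diffs = [runs[i] - runs[j]
             for i in range(len(runs)) for j in range(i + 1, len(runs))]
    return float(np.std(np.concatenate(diffs)))

# Synthetic stand-in: a common 'true' profile plus 0.5 mm measurement noise
rng = np.random.default_rng(0)
true_profile = 2.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 61))   # mm
runs = [true_profile + 0.5 * rng.standard_normal(61) for _ in range(6)]
sigma = repeatability_std(runs)   # near sqrt(2) * 0.5 mm for this noise level
```

Because each difference subtracts two independent measurements, the standard deviation of the differences is about √2 times the per-run noise, which is worth keeping in mind when reading Table 1 as a per-run accuracy estimate.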
Conclusions
The measurement of railway track irregularity plays a significant role in monitoring track deformation and guiding the maintenance of railway lines. This paper makes use of a ZUPT-aided INS for track irregularity measurement. The RTS smoothing algorithm is employed to improve the performance of the surveying system.
The equations for calculating the track irregularity parameters from absolute positions have been derived in the paper. Based on the covariance analysis, the analytical relationships between the track irregularity and the drifts of the inertial sensors, the accuracy of the attitude, the accuracy of the velocity observations and the accuracy of the initial position are established. The theoretical and numerical analyses show that the position observation of the Kalman filter has no effect on the measurement accuracy of the alignment irregularity and the level irregularity, so track relative geometry surveying can be implemented with the ZUPT-aided INS without position observation updates. The landmarks need only be used to determine the track segmentation, for which sub-decimeter accuracy satisfies the track surveying demand.
Simulations and experimental results show that the relative accuracy over a 30 m chord of the proposed approach for track irregularity surveying can reach approximately 1 mm (1σ) with a gyro bias instability of 0.01°/h and random walk noise of 0.005°/√h, and an accelerometer bias instability of 50 µg with random noise of 10 µg/√Hz, while only velocity observations are provided by the ZUPT technique at about every 60 m interval. This accuracy can meet the most stringent requirements of track irregularity surveying for railway lines. For higher accuracy demands of irregularity surveying, inertial sensors of a higher grade than those in this paper should be employed.
This paper proposes a relative geometry parameter measuring approach for the railway track. It significantly reduces the requirement for high-precision landmarks and lightens the maintenance burden of control points to a large extent. In addition, it can also improve the work efficiency of railway track irregularity measurement tasks.
Assessment of the strength and durability properties of the structural elements of a reinforced concrete structure can be of great importance both for tracing potentially damaged areas and for carrying out reliability analysis. Composite materials have these days become an integral part of any construction, and most structures are built using concrete as the construction material. In spite of the strong and durable nature of concrete as a material, RCC structures still face common structural issues that include cracking, spalling, rusting of reinforcement, dampness, acid attack, carbonation, etc. If these issues are left unattended during the initial stages, they may cause serious distress in the structures. This paper reports the condition assessment of the structural elements of a residential building using non-destructive techniques. The hardness and strength of the cover concrete have been assessed using the rebound hammer test, and the structural integrity and homogeneity by the ultrasonic pulse velocity test. Additionally, a carbonation test and a resistivity test were conducted to assess the extent of corrosion. The results of the assessment have shown a noteworthy decay in the strength and durability properties of the structural elements.
Introduction
In spite of having great resistance to environmental deterioration, concrete as a construction material shows significant signs of distress [1][2][3]. Various deteriorating agents such as acids, salts, chlorides, sulphates in water and carbon dioxide make reinforced concrete structures structurally deficient [4][5][6][7][8]. A number of techniques are available for assessing the condition of reinforced concrete structures without actually damaging them [9,10]. These non-destructive and semi-destructive techniques include strength tests, durability tests, performance tests, integrity tests and chemical tests [11,12]. The in-situ strength and quality of the concrete can be determined using these NDT methods to precisely detect the distress in the structure and its causes [13][14][15]. The ultrasonic pulse velocity test determines the homogeneity and integrity of the concrete [16][17][18]. The rebound hammer test, along with the carbonation test, helps in assessing the compressive strength of the cover concrete [19,20]. Electrical resistivity measurement techniques are gaining relevance among researchers for the assessment of the durability of concrete. The concrete can be evaluated for its performance using the electrical resistivity method, which is much easier than RCPT [21]. In the Nernst-Einstein equation, the value of the resistivity is directly related to the chloride diffusion coefficient of concrete [22]. Many mechanisms and phenomena are responsible for concrete deterioration, but the most prominent is the corrosion of reinforcement, which drastically damages the strength and durability of concrete structures [23]. Chloride ions in concrete constitute a major source of durability issues distressing reinforced concrete that is exposed to the environment.
When a sufficient amount of chloride ions accumulates around the reinforcing steel, localized corrosion, in which small holes and cavities start developing, is liable to occur unless the environmental surroundings are intensely anaerobic [24]. The loss of strength of the reinforcing steel due to corrosion, bond slip, etc. makes the structure unsafe under earthquake forces. A number of research works have focused on improving the bond between the reinforcing bars and the concrete [25][26][27][28]. Several studies have targeted sulphate ion ingress as one of the reasons for the reduced durability of reinforced concrete structures [29][30][31]. In order to reduce the cement content and to improve the durability of concrete structures, different types of industrial and agricultural waste based pozzolans have been utilized by various researchers [32][33][34][35]. This paper presents the condition assessment of a multi-storeyed residential building. The assessment was done by first visually inspecting the entire building to scrutinize the type, extent and source of damage and to locate the test points. Then, a non-destructive investigation was carried out to check the concrete quality, corrosion of the reinforcing bars, carbonation of the concrete and ingress of salts into the concrete. A total of 16 reinforced concrete columns at different locations were tested using non-destructive testing, and 6 columns were tested for the presence of chlorides and sulphates in the concrete.
Ultrasonic Pulse Velocity Test
This test is conducted in accordance with IS 516 (Part 5/Sec 1): 2018 [36]. The quality of the structural elements, which indicates the level of workmanship (uniformity, presence or absence of internal flaws, cracks, segregation, etc.), can be easily assessed using this test. The apparatus used is the TICO of Proceq Testing Instruments with 54 kHz transducers. The ultrasonic pulse velocity depends mainly on the elastic modulus of the concrete.
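A small helper for reducing the UPV readings: the velocity is simply v = L/t, and the quality grading thresholds below are the ones commonly quoted from IS 13311 (Part 1); they should be verified against the standard before use.

```python
def pulse_velocity_km_s(path_length_mm, transit_time_us):
    """Direct-transmission pulse velocity v = L / t, returned in km/s."""
    return (path_length_mm / 1000.0) / (transit_time_us * 1e-6) / 1000.0

def concrete_quality(v_km_s):
    """Concrete quality grading commonly quoted from IS 13311 (Part 1);
    check the standard before relying on these cut-offs."""
    if v_km_s > 4.5:
        return "excellent"
    if v_km_s > 3.5:
        return "good"
    if v_km_s > 3.0:
        return "medium"
    return "doubtful"
```

For example, a 300 mm path across a column with a 75 µs transit time gives 4.0 km/s, which grades as "good" under these cut-offs.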
Rebound Hammer Test
This test was conducted as per the specifications given in IS 13311-2:1992 [37]. A Schmidt N-type hammer was used in the present study. As a general guideline, the higher the rebound number, the greater the strength of the cover concrete. The surface hardness of the concrete, and hence the rebound number, may be considered a measure of the strength of the concrete. A number of factors, such as cement type, aggregate type, moisture, age of the structure and the degree of carbonation, influence the rebound number.

Carbonation Test

This test was conducted to assess the risk of corrosion of the reinforcement by measuring the pH of the concrete. In this study, a rainbow indicator was used for estimating the extent and depth of carbonation. Carbonation of the concrete reduces the pH value of the water present in the pores of the concrete to about 8.5.
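Rebound surveys of this kind average several hammer readings per test point after discarding stray values. A minimal sketch of that averaging is given below; the ±6-unit outlier filter is an assumed illustration, and the governing rejection rule and any strength-correlation curve must be taken from IS 13311-2 and the hammer manufacturer's chart.

```python
from statistics import mean


def average_rebound(readings, tolerance=6):
    """Average rebound number, discarding readings far from the raw mean.

    The +/- `tolerance` filter is an assumed illustration; the actual
    rejection rule should follow IS 13311-2 / the hammer manual.
    """
    raw = mean(readings)
    kept = [r for r in readings if abs(r - raw) <= tolerance]
    return mean(kept)
```

With nine readings such as [30, 32, 31, 29, 30, 31, 30, 45, 30], the stray 45 is discarded and the reported average becomes 30.375 rather than 32.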
Embedded steel reinforcement will become prone to corrosion once the depth of carbonation reaches the depth of reinforcement.
Resistivity Test
The Resipod resistivity meter [38], which works on the principle of the Wenner probe, was used for measuring the resistivity of the concrete. The meter consists of four probes: a current is applied through the two outer probes, and the potential difference is recorded between the two inner probes. The resistivity values can be interpreted from Table 3.
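The calculation underlying such four-probe meters is the Wenner formula, rho = 2*pi*a*V/I, where a is the probe spacing. The sketch below applies it and bands the result by corrosion risk; the band limits are commonly quoted guideline values (assumed here for illustration) and should be checked against the Table 3 actually used in the assessment.

```python
import math


def wenner_resistivity_kohm_cm(spacing_cm, voltage_v, current_a):
    """Apparent resistivity rho = 2*pi*a*V/I for a Wenner array.

    spacing_cm is the probe spacing a in cm; the result is in kilo-ohm-cm.
    """
    rho_ohm_cm = 2 * math.pi * spacing_cm * voltage_v / current_a
    return rho_ohm_cm / 1000.0


def corrosion_risk(rho_kohm_cm):
    """Band resistivity by corrosion risk (assumed guideline limits)."""
    if rho_kohm_cm >= 100:
        return "negligible"
    if rho_kohm_cm >= 50:
        return "low"
    if rho_kohm_cm >= 10:
        return "moderate"
    return "high"
```

Under these assumed bands, the 14.12-39.77 kΩ·cm readings reported later in the paper fall in the "moderate" band, consistent with the moderate-to-high corrosion rate stated there.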
UPV, Rebound Hammer and Carbonation Test
All tests were performed on a total of 16 columns. At each column, 3 readings were taken for UPV and 9 readings for the rebound number to arrive at their average values. A core was taken from each column, and the depth and extent of carbonation were determined by spraying the rainbow indicator over the extracted cores (Figures 1 and 2).
Estimated and Corrected Compressive Strength test results
Exposed concrete was found to be carbonated. The carbonated concrete should be provided with anticarbonation coating [41,42] if the spalling of cover concrete has not started. If the spalling of cover concrete is taking place the same should be repaired by treating the affected reinforcement and repairing the cover with micro concrete.
Concrete Resistivity
The testing was carried out on all 16 selected columns using the Resipod resistivity meter, which works on the principle of the Wenner probe. The potential difference arising between the inner probes was measured after applying a current through the outer probes. From Figure 5, it can be seen that the resistivity values of the reinforced cement concrete, as determined using the Resipod resistivity meter, varied from 14.12 kΩ·cm to 39.77 kΩ·cm. These low values of resistivity indicate a moderate to high rate of corrosion in the embedded steel reinforcement [43].
Conclusions
Concrete, the composite material used in building construction, despite having high resistance against environmental deterioration, nevertheless suffers a negative impact on its characteristic properties. Observing the damaged condition of the material in the outer columns of the building, it can be concluded that these columns may require full-height repair, and almost all the columns also require jacketing up to the second floor. Exposed concrete was found to be carbonated. The carbonated concrete should be provided with an anti-carbonation coating if spalling of the cover concrete has not yet started. Owing to the effect of corrosion, spalling was observed in these columns, so it is necessary to repair the structure so that it can resist the combination of loads for which it was designed. The spalling concrete of the columns should be repaired with micro-concrete.
Autophagy inhibition augments resveratrol-induced apoptosis in Ishikawa endometrial cancer cells
Resveratrol (RSV), a polyphenolic compound derived from red wine, inhibits the proliferation of various types of cancer. RSV induces apoptosis in cancer cells, while enhancing autophagy. Autophagy promotes cancer cell growth by driving cellular metabolism, which may counteract the effect of RSV. The present study aimed to elucidate the correlation between RSV and autophagy and to examine whether autophagy inhibition may enhance the antitumor effect of RSV in endometrial cancer cells. Cell proliferation, cell cycle progression and apoptosis were examined, following RSV exposure, by performing MTT assays, flow cytometry and annexin V staining, respectively, in an Ishikawa endometrial cancer cell line. Autophagy was evaluated by measuring the expression levels of light chain 3, II (LC3-II; an autophagy marker) by western blotting and immunofluorescence. Chloroquine (CQ) and small interfering RNAs targeting autophagy related (ATG) gene 5 (ATG5) or 7 (ATG7) were used to inhibit autophagy, and the effects in combination with RSV were assessed using MTT assays. RSV treatment suppressed cell proliferation in a dose-dependent manner in Ishikawa cells. In addition, RSV exposure increased the abundance of the sub-G1 population and induced apoptosis. LC3-II accumulation was observed following RSV treatment, indicating that RSV induced autophagy. Combination treatment with CQ and RSV more robustly suppressed growth inhibition and apoptosis, compared with RSV treatment alone. Knocking down ATG5 or ATG7 expression significantly augmented RSV-induced apoptosis. The results of the present study indicated that RSV-induced autophagy may counteract the antitumor effect of RSV in Ishikawa cells. Combination treatment with RSV and an autophagy inhibitor, such as CQ, may be an attractive therapeutic option for treating certain endometrial cancer cells.
Introduction
Endometrial cancer is the most common gynecologic malignancy, and its incidence is increasing worldwide (1). A strong association exists between endometrial cancer and metabolism. Individuals with diabetes mellitus or obesity have 1.8 or 1.5-fold higher relative risks for developing endometrial cancer, respectively (2,3). In addition, metabolic modifiers, including metformin (an oral antidiabetic drug for type-II diabetes mellitus), have been reported to induce antitumor effects in endometrial cancer (4,5).
Resveratrol (RSV) is a natural polyphenol found in a variety of plant-based foods and beverages, such as red wine (6). RSV is able to regulate various physiological functions, such as blocking inflammation and protecting against cardiovascular dysfunctions and obesity (6)(7)(8). These activities suggest that RSV may serve as a promising metabolic modifier in endometrial cancer. Indeed, an antitumor role of RSV has been reported in endocrine-associated cancers, including endometrial cancer (9)(10)(11). However, the mechanism underlying its antiproliferative effect is debated. The effects of RSV have been suggested to be dependent on estrogen, epidermal growth factor downregulation, protein kinase B (AKT) inactivation, and adenosine monophosphate-activated protein kinase (AMPK) activation (11)(12)(13)(14). Loss of AMPK activity can promote oncogenesis (15). Metformin is known to activate AMPK through liver kinase B1 (LKB1) phosphorylation, and this activation is suggested to be involved in its antitumor effect (16). RSV was previously revealed to activate sirtuin 1 (SIRT1) (17). SIRT1 is able to deacetylate certain proteins that regulate longevity and cellular stress, such as tumor protein p53 (TP53) (18,19). Thus, various factors are associated with the antitumor effects of RSV. In addition, cytostatic and cytotoxic effects have been observed following RSV treatment in cancer cells (20).
By contrast, RSV may also induce oncogenesis. Notably, RSV is associated with autophagy induction (21)(22)(23)(24) and activation of the Raf/MEK/ERK signal transduction cascade (25). Autophagy, which literally means 'self-eating' is a major degradation system that promotes the lysosomal digestion of organelles and cytoplasmic components (26). Autophagic activity is commonly assessed through measuring the expression levels of microtubule-associated protein 1 light chain 3 (LC3). LC3-II is a standard marker of autophagic flux and localizes to autophagosomes. Autophagy-related (ATG) genes 5 (ATG5) and 7 (ATG7) directly regulate autophagic processes (26). Autophagy has been suggested to promote cancer progression through driving cell metabolism (27). Activation of AMPK and/or extracellular signal-regulated kinase (ERK) signaling was demonstrated to induce autophagy in human cancers (28,29), which may induce the antitumor effect of RSV on cancer cells.
Chloroquine (CQ) is an autophagy inhibitor with an antimalarial effect (30). In addition, CQ and its derivative, hydroxychloroquine, have been used to treat connective tissue diseases, including rheumatoid arthritis, systemic lupus erythematosus and Sjögren's syndrome (31)(32)(33). CQ exhibits antitumor effects in vitro and in vivo by inhibiting autophagy, and various clinical trials have been conducted using CQ in certain types of cancer (34,35). We recently reported that autophagy inhibition by CQ suppressed endometrial cancer cell proliferation, and improved cisplatin sensitivity (36). Therefore, autophagy inhibition may potentiate the antitumorigenic effects of RSV in endometrial cancer cells.
The purpose of the present study was to investigate the effects of RSV on endometrial cancer cell proliferation and autophagy. In addition, the study also addressed whether autophagy inhibition enhances the effect of RSV, which would suggest a potential new treatment strategy for endometrial cancer.
MTT assays. Ishikawa cells (3,000 cells/well) were seeded 24 h prior to RSV treatment. Subsequently, the cells were grown for 72 h in DMEM, which contained increasing doses of RSV (0.1-200 µM). At the endpoint, 10 µl of the Cell Counting kit-8 reagent containing the tetrazolium salt WST-8 was added to the wells, according to the protocol of the manufacturer (Dojindo, Molecular Technologies, Inc., Kumamoto, Japan), and absorbance (450 nm) was measured in a microplate reader (BioTek Instruments, Inc., Winooski, VT, USA). Proliferation was normalized to absorbance measurements observed in control cells treated with dimethyl sulfoxide alone.
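The normalization to the vehicle (DMSO) control and the IC50 readout used in dose-response experiments of this kind can be sketched as below. The log-linear interpolation between the two doses that bracket 50% viability is a simplified stand-in for the non-linear curve fit usually performed, and the dose/viability numbers in the example are hypothetical.

```python
import math


def normalize(absorbances, control):
    """Express each well's absorbance as % of the vehicle-control absorbance."""
    return [100.0 * a / control for a in absorbances]


def ic50_log_interp(doses_um, viability_pct):
    """Interpolate the dose giving 50% viability on a log-dose axis.

    Assumes viability decreases monotonically with dose; a stand-in for
    a four-parameter logistic fit.
    """
    points = list(zip(doses_um, viability_pct))
    for (d0, v0), (d1, v1) in zip(points, points[1:]):
        if v0 >= 50.0 >= v1:
            frac = (v0 - 50.0) / (v0 - v1)
            log_ic50 = math.log10(d0) + frac * (math.log10(d1) - math.log10(d0))
            return 10 ** log_ic50
    raise ValueError("50% viability not bracketed by the tested doses")
```

For hypothetical doses [1, 10, 100] µM giving viabilities [90, 60, 20] %, the interpolated IC50 is about 17.8 µM.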
Cell cycle analysis. Ishikawa cells (5x10 5 cells/60-mm dish) were grown in the presence of RSV (25 µM) for 72 h. Cell cycle analysis was performed as previously described (36) in three independent experiments.
Apoptosis measurements by double staining with annexin V and propidium iodide (PI).
Ishikawa cells were plated in 60-mm dishes for 24 h prior to 24 h incubations at 37˚C with the indicated drugs and/or small interfering RNAs (siRNAs), at the indicated doses. As described previously (36), the cells were trypsinized, washed two times with phosphate-buffered saline (PBS), and stained with PI and fluorescein isothiocyanate (FITC)-conjugated annexin V, using the FITC Annexin-V Apoptosis Detection kit I (BD Biosciences, San Jose, CA, USA), as directed by the manufacturer. Apoptotic cells were measured as double-positive cells in three independent experiments using a BD FACSCalibur flow cytometer, and expressed on a percentage basis.
Western blot analysis. Soluble proteins from Ishikawa cell lysates were extracted as described previously (36), followed by western blot analysis with the aforementioned primary antibodies (1:1,000) at 4˚C overnight. Bands were detected using the BioRad Blotting system (BioRad Laboratories, Inc., Hercules, CA, USA) with the ECL Select Detection Reagent (GE Healthcare, Little Chalfont, UK).
Immunofluorescence. Ishikawa cells were cultured in DMEM in 6-well plates, on glass coverslips coated with PBS containing 0.1% gelatin. After 24-h incubation at 37˚C, the medium was replaced with DMEM alone (control cells) or DMEM supplemented with 25 µM RSV. The cells were then incubated for an additional 48-h. Subsequently, the cells were washed in PBS, fixed with 4% paraformaldehyde, and permeabilized with 0.2% Triton X-100 prior to blocking in 6% bovine serum albumin (Thermo Fisher Scientific, Inc.). The cells were then incubated overnight at 4˚C with a primary anti-LC3 antibody (diluted 1:200). On the following day, the cells were incubated for 1 h at room temperature with a secondary Alexa Fluor 488-conjugated goat, anti-mouse IgG antibody (1:200). Nuclei were counterstained with Hoechst 33342 dye at a 1:1,000 dilution. The slides were analyzed by confocal fluorescence microscopy (BX50; Olympus Corporation, Tokyo, Japan).
Gene silencing. Ishikawa cells were grown in culture for 24 h prior to gene-silencing experiments conducted with Stealth RNAi siRNAs against ATG5 or ATG7 (Invitrogen; Thermo Fisher Scientific, Inc.), using Lipofectamine RNAiMAX (Invitrogen; Thermo Fisher Scientific, Inc.). A negative control siRNA was used as a control (Invitrogen; Thermo Fisher Scientific, Inc.). siRNA transfections were performed as described previously (36).
Statistical analysis. The data were presented as the mean ± standard error from at least three independent determinations. The significance of differences between ≥3 samples were analyzed by one-way analysis of variance and post-hoc testing, whereas the significance between two samples were analyzed by a Mann-Whitney U test, using GraphPad Prism, version 6.0 (GraphPad Software, San Diego, CA, USA). P<0.05 was considered to indicate a statistically significant result.
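The two-sample Mann-Whitney comparison mentioned above rests on rank-sum arithmetic, which can be sketched in self-contained form as follows; in the study itself the test (and its p-value) was computed with GraphPad Prism, so this sketch shows only the U statistic, with average ranks assigned to ties.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    Ties receive the average of the ranks they span; returns the
    smaller of the two U values.
    """
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    r_x = sum(ranks[: len(x)])  # rank sum of the first sample
    u_x = r_x - len(x) * (len(x) + 1) / 2
    return min(u_x, len(x) * len(y) - u_x)
```

For completely separated samples such as [1, 2, 3] vs. [4, 5, 6] the statistic is 0, its minimum; for interleaved samples it grows toward n1*n2/2.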
RSV suppresses the proliferation of Ishikawa cells by apoptosis induction.
MTT assays were performed in Ishikawa endometrial cancer cells to assess the antitumor activity of RSV. RSV inhibited the proliferation of Ishikawa cells in a dose-dependent manner (Fig. 1A). The half-maximal inhibitory concentration (IC50) was 20 µM. Cell cycle analysis was also performed to elucidate whether growth inhibition by RSV was attributable to cell cycle arrest or cell death. Cell cycle analysis demonstrated that RSV caused a significant increase in the abundance of the sub-G1 population of Ishikawa cells (Fig. 1B). In addition, annexin V-PI double staining showed a significant accumulation of double-positive cells following RSV treatment in Ishikawa cells (Fig. 1C), indicating that RSV induced apoptosis in Ishikawa cells. These results suggested that RSV inhibits the growth of Ishikawa cells, mainly via its cytotoxic effect.
RSV induces autophagy in Ishikawa cells.
To elucidate which proteins are associated with growth inhibition by RSV, immunoblotting was performed against cell growth-associated proteins expressed in Ishikawa cells. RSV markedly increased the expression of p-AMPKα and p-ERK (Fig. 2A). However, RSV did not increase SIRT1 expression, or decrease the expression of p-AKT (Fig. 2A). RSV induced LC3-II expression, and LC3-immunofluorescence experiments revealed autophagosome accumulation in the cytosol of Ishikawa cells following 20 µM RSV treatment (Fig. 2A and B). These data strongly suggest that RSV activates AMPK and ERK signaling in Ishikawa cells, with an induction of autophagy.
Pharmacologic autophagy inhibition by CQ augments RSV-inducible apoptosis in Ishikawa cells.
Next, we addressed whether RSV-mediated autophagy affects the RSV antitumor effect in Ishikawa cells, by adding CQ in combination with RSV. Cell viability was significantly suppressed by combination treatment (25 µM RSV and 5 µM CQ), compared with RSV treatment alone at 25 µM (Fig. 3A). Combination treatment induced significant cleaved PARP accumulation, compared with RSV treatment alone, as determined by western blot analysis (Fig. 3B). In addition, combination treatment showed a trend towards an increased population of double-positive (apoptotic) cells in the annexin V-PI double staining assays (Fig. 3C). These data indicated that combination treatment with RSV and CQ may induce greater cytotoxicity in Ishikawa cells, as compared with RSV treatment alone.
Autophagy inhibition by ATG5 and ATG7 siRNAs augments RSV-induced apoptosis in Ishikawa cells.
To elucidate whether RSV-induced autophagy counteracts the antiproliferative effect of RSV, the core ATGs, ATG5 or ATG7, were knocked down in Ishikawa cells using two independent siRNAs for each gene. The efficacy of gene silencing and autophagy inhibition by these siRNAs was confirmed in our previous report (36). MTT assays revealed that the cells were more sensitive to RSV when either ATG5 or ATG7 was knocked down (Fig. 4A). Moreover, annexin V-PI double staining revealed that RSV-induced apoptosis was enhanced by silencing ATG5 or ATG7, whereas the knockdown of ATG5 or ATG7 alone did not affect apoptosis in cells without RSV treatment (Fig. 4B).
Discussion
RSV is an active compound in foods that can prevent cell proliferation of various types of cancer cells. However, RSV also induces autophagy, which can promote stress tolerance and cell survival by maintaining energy production. Therefore, RSV-associated autophagy may hamper its antitumor effect. In this study, we focused on i) antitumor activity and apoptosis induction by RSV, ii) autophagy induction by RSV, and iii) the efficacy of combined autophagy inhibition and RSV treatment in Ishikawa endometrial cancer cells.
Initially, the results demonstrated that RSV suppressed the proliferation of Ishikawa cells. The IC50 value of 20 µM for RSV in the Ishikawa endometrial cancer cells was lower than those of cervical, bladder, breast and liver cancer cells (37)(38)(39). This result implies that at least certain endometrial cancer cells may be more sensitive to RSV treatment than other types of cancer cells. The antiproliferative effect of RSV on the tumor cells was revealed to be primarily cytotoxic, not cytostatic. Although the mechanism underlying RSV induction of apoptosis remains unclear, AMPK-dependent signaling pathways may be associated with its ability to induce apoptosis (40). Indeed, RSV markedly increased the expression of p-AMPKα in this study. Although a previous report indicated that RSV attenuated cancer cell proliferation in a SIRT1-dependent manner (41), SIRT1 did not accumulate following RSV treatment in Ishikawa cells. Therefore, RSV-induced apoptosis may be independent from SIRT1. Further investigation is warranted to elucidate the mechanism underlying apoptosis induction by RSV.
In addition, autophagy was induced by RSV treatment in Ishikawa cells, results which were concordant with previous findings in ovarian and cervical cancer cells (21,23). To our knowledge, this is the first report of RSV-mediated autophagy in endometrial cancer cells. Activation of either AMPK or ERK has also been reported to induce autophagy (29,42). AMPK activation inhibits the mammalian target of rapamycin (mTOR) signaling pathway, which is frequently activated via phosphatase and tensin homolog mutations in endometrial cancers, including Ishikawa cells (43,44). As activation of mTOR signaling is associated with autophagy inhibition (45), AMPK activation by RSV may counteract mTOR-dependent autophagy inhibition (thereby promoting autophagy) in Ishikawa cells. ERK activation is also associated with autophagy induction, as well as cell proliferation (29). Although the effect of RSV-mediated autophagy on cancer cells is thought to be cancer-type specific (i.e., tumor-suppressive in glioma and esophageal cancer (46)(47)(48), or tumor-promoting in ovarian and cervical cancer cells (21,23)), the results of the present study suggest that RSV-mediated autophagy may serve a protective role against apoptosis in endometrial cancer cells.
Finally, autophagy inhibition by CQ augmented RSV-induced apoptosis in Ishikawa cells. Moreover, specific autophagy inhibition by siRNAs against either ATG5 or ATG7 significantly enhanced apoptotic cell death by RSV. We previously reported that CQ treatment alone caused apoptosis in endometrial cancer cells (36). The results indicate that combined RSV and CQ treatment may be a promising therapeutic strategy through autophagy inhibition and apoptosis induction.
This study has several limitations. The precise mechanism underlying RSV-induced apoptosis and autophagy remains unclear. Autophagy induction may also be mediated by other factors that are independent of AMPK and ERK signaling. Biomarkers for predicting sensitivity to RSV or combined treatment (RSV+CQ) should be identified for clinical applications. In addition, the safety and efficacy of combination RSV and CQ therapy should be examined in in vivo studies.
In conclusion, the results of the present study revealed that RSV increased apoptosis, and that RSV-mediated autophagy hindered its apoptotic function in Ishikawa cells. Combined autophagy inhibition with RSV treatment significantly augmented apoptosis. Considering that CQ is widely used in clinical settings, combination RSV/CQ therapy may be a viable option for treating endometrial cancer.
Cigarette smoke and calcium conspire to impair CFTR function in airway epithelia
To maintain health and function in response to inhaled environmental irritants and toxins, the lungs and airways depend upon an innate defense system that involves the secretion of mucus (i.e., mucin, salts, and water) by airway epithelium onto the apical surface to trap foreign particles. Airway mucus is then transported in an oral direction via ciliary beating and coughing, which helps to keep the airways clear. CFTR (cystic fibrosis transmembrane conductance regulator) is a cAMP-regulated Cl- channel in the apical membrane of epithelium that contributes to salt and water secretion onto the luminal surface of airways, thereby ensuring that secreted mucus is sufficiently hydrated for movement along the epithelial surface. Dehydration of airway mucus, as occurs in cystic fibrosis, results in a more viscous, less mobile secretion that compromises the lung’s innate defense system by facilitating a build-up of foreign particles and bacterial growth. Related to this situation is chronic obstructive pulmonary disease (COPD), which is a leading cause of death globally. A major cause of COPD is cigarette smoking, which has been reported to decrease the cellular levels of CFTR in airway epithelia. In their recent article, Rasmussen and coworkers now report that exposure to cigarette smoke elevates cytosolic free Ca2+ in airway epithelium, leading to decreased surface localization and cellular expression of CFTR and reduced levels of secreted airway surface liquid. Blocking this increase in cytosolic Ca2+ largely prevented CFTR loss in airway epithelium and surprisingly, cellular lysosomes appear to be a major source for smoke-induced Ca2+ elevation.
Experimentally, the authors tracked native CFTR expression/localization in human bronchial airway cells or recombinant CFTR expressed in established cell lines (i.e., BHK, CALU3, and HEK293T) either by western blot analysis or immunocytochemistry. Cytosolic free Ca2+ was monitored either by Fura-2 imaging or GFP-based Ca2+ indicators targeted to select sub-cellular organelles (i.e., mitochondria, lysosomes). Exposure of cultured cells to the vapor phase of cigarette smoke was used to mimic that experienced by a typical smoker (i.e., 1 puff per min for 10 min (acute exposure) or 10 puffs per 2 h for 8 h (chronic exposure)).
Using a variety of pharmacologic inhibitors to block major signal transduction pathways, the authors discovered that the cigarette smoke-induced decrease in cellular CFTR was insensitive to agents that interfered with protein phosphorylation (i.e., the cAMP/PKA inhibitor H89, the broad-spectrum kinase inhibitor staurosporine, the phosphatase 1/2A inhibitor okadaic acid and the PI-3 kinase inhibitors LY294002 and wortmannin). In contrast, chelation of intracellular free Ca2+ by BAPTA-AM largely prevented the smoke-mediated CFTR reduction, indicating an essential role for cytosolic free Ca2+ in this response. Interestingly, cigarette smoke exposure did not disrupt the cellular expression of the Ca2+-sensitive Cl- channel Ano1 (aka TMEM16A), suggesting that cigarette smoke may not cause broad impairment of membrane ion channels.
Whereas BAPTA-AM prevented CFTR downregulation in response to cigarette smoke exposure, treatment of cells with the Ca2+ ionophore ionomycin mimicked the decrease in CFTR expression, suggesting that elevated cytosolic Ca2+ per se was an important condition. Monitoring intracellular Ca2+ dynamics in airway epithelial cells further revealed that cigarette smoke exposure evoked a slowly rising and prolonged increase in cytosolic free Ca2+ that reached its peak within 5-10 min and occurred with a delay of 1-2 min following smoke exposure. This unusual response profile contrasted with that observed following the UTP/ATP-dependent activation of endogenous G-protein-coupled purinergic receptors, which evoked a rapid and transient increase in cytosolic Ca2+ that was not associated with the loss of CFTR. Whereas this latter response was typical of Ins(1,4,5)P3-mediated Ca2+ release from the endoplasmic reticulum and involved STIM1 activation, the authors could find no evidence that cigarette smoke evoked a similar clustering of activated STIM1 or an increase in cellular second messengers known to elevate cytosolic free Ca2+ (i.e., Ins(1,4,5)P3, cyclic ADP-ribose or NAADP). The smoke-induced Ca2+ elevation was also insensitive to the established SERCA inhibitor thapsigargin, whereas this agent reduced/prevented Ca2+ elevations in response to UTP.
Although it was evident that cigarette smoke exposure could elevate cytosolic free Ca2+ in airway epithelium, the source of the Ca2+ remained a puzzle. Acute removal of external Ca2+ did not influence either the amplitude or kinetics of the smoke-mediated increase, suggesting release from an internal pool. Mitochondria are known to contain millimolar levels of Ca2+, but treatment of cells with CCCP, an agent that decreases mitochondrial Ca2+ uptake by uncoupling the electron transport chain, did not interfere with or desensitize cigarette smoke-induced Ca2+ elevation or CFTR loss.
In contrast to the above manipulations, the authors observed that pre-treatment of cells with the lysosomal inhibitor bafilomycin A reduced the cigarette smoke-mediated elevation in cytosolic free Ca2+, along with the loss of CFTR, both at the cell surface and whole-cell levels. By blocking the vacuolar H+-ATPase, bafilomycin A reduces acidification of lysosomes, leading to a loss of internal Ca2+. Using a FRET-based, genetically-encoded Ca2+ indicator coupled to the lysosomal protein LAMP1, the authors further noted that cigarette smoke exposure evoked Ca2+ elevations in the proximity of lysosomes, consistent with the possible release of Ca2+ from this organelle upon smoke exposure. It was further observed that bafilomycin A treatment could prevent the reduction of airway surface liquid secretion evoked by cigarette smoke exposure; this result thus provides an important functional correlate for the preceding molecular and cell biological data describing changes in CFTR protein levels.
Although the exact mechanisms by which cigarette smoke exposure and lysosomal release of Ca2+ lead to loss of epithelial CFTR remain unclear, the authors speculate that cigarette smoke may compromise the integrity of CFTR structure and/or the cellular protein quality control machinery regulating CFTR levels, leading to CFTR internalization and aggregation in a detergent-insoluble cellular compartment. Based on the results of the study, it also remains unclear how elevated cytosolic Ca2+ contributes to CFTR loss, as the authors did not elaborate a clear temporal relation or mechanistic link between intracellular Ca2+ mobilization and CFTR expression. Identifying cellular events/processes induced by prolonged vs. transient Ca2+ elevations may provide important insights into this issue. It is also suggested that cigarette smoke-induced elevations in cytosolic Ca2+ may be involved in long-term events (e.g., gene transcription) that could promote epithelial cell survival in response to the toxic insults associated with smoking. Although the connection between cigarette smoking and COPD is well-established from a health care perspective, the results of this study increase our knowledge of the underlying molecular pathology evoked by chronic cigarette smoke exposure and point to additional targets/strategies that may mitigate the airway dysfunction associated with this disease.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
SARS-CoV-2 E and 3a Proteins Are Inducers of Pannexin Currents
Controversial reports have suggested that SARS-CoV E and 3a proteins are plasma membrane viroporins. Here, we aimed at better characterizing the cellular responses induced by these proteins. First, we show that expression of SARS-CoV-2 E or 3a protein in CHO cells gives rise to cells with newly acquired round shapes that detach from the Petri dish. This suggests that cell death is induced upon expression of E or 3a protein. We confirmed this by using flow cytometry. In adhering cells expressing E or 3a protein, the whole-cell currents were not different from those of the control, suggesting that E and 3a proteins are not plasma membrane viroporins. In contrast, recording the currents on detached cells uncovered outwardly rectifying currents much larger than those observed in the control. We illustrate for the first time that carbenoxolone and probenecid block these outwardly rectifying currents; thus, these currents are most probably conducted by pannexin channels that are activated by cell morphology changes and also potentially by cell death. The truncation of C-terminal PDZ binding motifs reduces the proportion of dying cells but does not prevent these outwardly rectifying currents. This suggests distinct pathways for the induction of these cellular events by the two proteins. We conclude that SARS-CoV-2 E and 3a proteins are not viroporins expressed at the plasma membrane.
Introduction
SARS-CoV-2 is the third virus of the genus Beta-coronavirus of the Coronaviridae family to be responsible for a Severe Acute Respiratory Syndrome in this century, after SARS-CoV-1 in 2002-2003 [1] and MERS-CoV in 2012 [2]. As a result, it is of great importance to characterize coronaviruses and their associated pathophysiologies as thoroughly as possible, with the hope that new treatments will emerge to complement vaccine approaches for people who cannot access the vaccines or are not responsive to them. In addition to Paxlovid, which is already available but associated with bothersome side-effects [3], many potential anti-COVID-19 treatments are in development, but it is too soon to tell how efficient they will be, particularly in view of the continuous emergence of new variants, and whether their cost will be reasonable [4]. Viroporins, i.e., ion channels encoded by a virus genome, are potential targets for antiviral agents, as demonstrated by the case of amantadine, which inhibits the acid-activated M2 channel of the Influenza A virus [5]. Several studies have led to the suggestion that two proteins of SARS-CoV are viroporins. The SARS-CoV-2 Envelope (E) protein is a one-transmembrane-domain membrane protein (75 amino acids) almost identical to the SARS-CoV-1 Envelope protein (95% identity). The SARS-CoV-2 ORF3a (3a) protein is a larger three-transmembrane-domain membrane protein (275 amino acids) relatively similar to the SARS-CoV-1 3a protein (73% identity).
Regarding the ion channel function of these proteins, there are clearly several contradictory studies: some of them raise intriguing issues, while others do not confirm these reports. Concerning in vitro incorporation of purified E or 3a protein into lipid bilayers, ion channel activity has reportedly been associated with these viral proteins [6][7][8][9][10][11]. However, a review article soundly outlined the lack of robust data and raised ethical concerns, casting doubt on the validity of these scientific messages [12]. Concerning viral protein expression in cells, the expression of SARS-CoV-1 E protein also led to conflicting results [13,14]. Pervushin et al. managed to identify plasma membrane currents generated by heterologous expression of SARS-CoV-1 E protein in HEK-293 cells [13], but Nieto-Torres et al. did not [14]. In Pervushin's study, the strongest evidence that E protein expressed at the plasma membrane forms ion channels was the finding that hexamethylene amiloride (HMA) (i) inhibits the induced current and (ii) directly binds to the E protein.
A recent study also detected current after injection in Xenopus laevis oocytes of any RNA among four different RNAs encoding SARS-CoV-2 proteins, including the E protein [15]. On the other hand, in other studies, expression of SARS-CoV-2 E protein did not lead to interpretable ionic currents in HEK-293S cells or Xenopus laevis oocytes [16,17]. In an attempt to favor plasma membrane targeting and reveal a putative current, a C-terminal predicted ER retention signal of SARS-CoV-2 E protein was replaced by a Golgi export signal from the Kir2.1 channel. The expression of this chimera could then be associated with the generation of a non-rectifying and cation-selective current [16]. This current was thus quite different from the outwardly rectifying current observed by Pervushin and collaborators [13]. Furthermore, another study using a membrane targeting sequence, fused to the N-terminus of the SARS-CoV-1 E protein, provided a non-rectifying current that was 100-fold larger than the one observed in the two previous studies [18]. This suggests that such modifications of either the N- or C-terminus are too drastic to faithfully report the actual activity of the native proteins.
The SARS-CoV 3a protein was also investigated. Confocal immuno-imaging detected the expression of WT 3a protein in both plasma membrane and cytoplasm. Membrane expression was reduced for a mutant that showed less current when expressed in HEK-293 cells [19]. Expression of the wild-type (WT) protein in HEK-293 cells but also Xenopus laevis oocytes was associated with a poorly selective outwardly rectifying current in both models, resembling the one observed upon expression of the E protein [15,[20][21][22]. However, again, these observations were not replicated by other laboratories [23].
To summarize, there is no unequivocal evidence that SARS-CoV E and 3a proteins are viroporins active at the plasma membrane of the host cell. However, on one hand, it was recently reported that SARS-CoV-2 E and 3a proteins can promote cell death [24,25]. On the other hand, apoptosis is associated with an increase in the outwardly rectifying current conducted by pannexins [26][27][28] and VRAC channels [29]. This led us to reinvestigate the actual function(s) of SARS-CoV-2 E and 3a proteins in mammalian cells in the frame of the cell toxicity of these proteins.
In this study, CHO cells expressing either SARS-CoV-2 E or 3a protein tended to develop a round shape and to detach from the Petri dish, a process exacerbated compared to control conditions. This cell phenotype is consistent with cell death [30,31], and we confirmed via flow cytometry experiments that expression of E or 3a protein does indeed promote cell death. Transfected cells still attached to the Petri dish (adhering cells) had unchanged basal currents, indicating that E and 3a proteins are unlikely to act as plasma membrane channels. In contrast, recording whole-cell currents on round-shaped and detached cells, we observed large outwardly rectifying currents only in E or 3a protein-expressing cells but not in control dying cells. This current is reminiscent of those observed in previous publications using HEK-293 cells and oocytes expressing SARS-CoV-1 proteins [13,[20][21][22]. The application of carbenoxolone and probenecid, two inhibitors of pannexin channels, suggests for the first time that these currents are pannexin-mediated conductances, potentially activated by altered morphology or apoptosis. In conclusion, both SARS-CoV-2 E and 3a proteins are most likely triggers of endogenous conductances.
Construction of E and 3a Protein-Encoding Plasmids
SARS-CoV-2 E and 3a nucleotide sequences containing a Kozak sequence added right before the ATG (RefSeq NC_045512.2) were synthesized by Eurofins (Ebersberg, Germany) and subcloned into the pIRES2 vector with eGFP in the second cassette (Takara Bio Europe, Saint-Germain-en-Laye, France). Truncated ∆4 and ∆8 E proteins as well as ∆10 3a protein constructs lacking the last 12, 24, and 30 nucleotides, respectively, were also synthesized by Eurofins. The plasmid cDNAs were systematically re-sequenced by Eurofins after each plasmid in-house midiprep (Qiagen, Hilden, Germany).
E and 3a cDNA Transfection
Fugene 6 transfection reagent (Promega, Madison, WI, USA) was used to transfect WT and mutant E and 3a plasmids for patch clamp, morphology analysis, and flow cytometry experiments according to the manufacturer's protocol. The cells were cultured in 35 mm dishes and transfected at 20% confluence for patch clamp experiments and 50% confluence for flow cytometry assays, with a pIRES plasmid (2 µg DNA) with the first cassette empty or containing wild-type or truncated SARS-CoV-2 E or 3a protein sequences. For morphology analysis, cells were cultured in ibidi µ-Slide 8-well dishes and transfected at 20% confluence with the same plasmids. In pIRES2-eGFP plasmids, the second cassette (eGFP) is less expressed than the first cassette, guaranteeing the expression of a high level of the protein of interest in fluorescent cells [32,33].
Electrophysiology
Two days after transfection, the CHO cells were mounted on the stage of an inverted microscope and bathed with a Tyrode solution (in mmol/L: NaCl 145, KCl 4, MgCl2 1, CaCl2 1, HEPES 5, glucose 5, pH adjusted to 7.4 with NaOH) maintained at 22.0 ± 2.0 °C. Patch pipettes (tip resistance: 2.0 to 2.5 MΩ) were pulled from soda-lime glass capillaries (Kimble-Chase, Vineland, NJ, USA) with a Sutter P-30 puller (Novato, CA, USA). A fluorescent cell was selected via epifluorescence. The pipette was filled with intracellular medium containing (in mmol/L): KCl, 100; K-gluconate, 45; MgCl2, 1; EGTA, 5; HEPES, 10; pH adjusted to 7.2 with KOH. Stimulation and data recording were performed with pClamp 10, an A/D converter (Digidata 1440A), and a Multiclamp 700B (all Molecular Devices, San Jose, CA, USA) or a VE-2 patch-clamp amplifier (Alembic Instruments, Montreal, QC, Canada). The currents were acquired in the whole-cell configuration, low-pass filtered at 10 kHz and recorded at a sampling rate of 50 kHz. First, a series of twenty 30-ms steps to −80 mV was applied using alternating holding potential (HP) values of −70 mV and −90 mV, and Cm and Rs values were subsequently calculated offline from the recorded currents. The currents were then recorded using a 1-s ramp protocol from −80 mV to +70 mV every 4 s. Regarding non-adhering cells, we considered them as having large current density when the current density measured at +70 mV was greater than the mean + 2 × standard deviation of the current density in adhering cells in the same condition.
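The last criterion is a simple outlier rule on current density. A minimal Python sketch of that rule, using hypothetical densities rather than the study's data:

```python
import numpy as np

def large_current_threshold(adhering_densities):
    """Threshold from the text: mean + 2*SD of the +70 mV current
    density (pA/pF) measured in adhering cells of the same condition."""
    d = np.asarray(adhering_densities, dtype=float)
    return d.mean() + 2.0 * d.std(ddof=1)

def classify_nonadhering(nonadhering_densities, threshold):
    """Flag non-adhering cells whose +70 mV density exceeds the threshold."""
    return [x > threshold for x in nonadhering_densities]

# Hypothetical densities (pA/pF), for illustration only
adhering = [2.1, 3.0, 3.8, 4.2, 3.4]
thr = large_current_threshold(adhering)
flags = classify_nonadhering([3.5, 12.0, 45.0], thr)
```

Whether the sample (ddof=1) or population standard deviation was used is not stated in the text; the sketch assumes the sample form.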
Cell Morphology Assay
Cell roundness was estimated using the Analyze Particle function of the Fiji software (v 1.53), as described in Supplementary Figure S1.
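Fiji's Analyze Particles function can report the "Round" shape descriptor, defined as 4·Area/(π·MajorAxis²). A minimal sketch of that formula (input values hypothetical):

```python
import math

def roundness(area, major_axis):
    """Fiji/ImageJ 'Round' shape descriptor: 4*Area / (pi * MajorAxis^2).
    It equals 1 for a perfect circle and tends to 0 for elongated shapes."""
    return 4.0 * area / (math.pi * major_axis ** 2)

# A circle of radius r has area pi*r^2 and major axis 2r -> roundness 1
r = 5.0
circle = roundness(math.pi * r ** 2, 2 * r)
# A spindle-like ellipse of the same area with a long major axis scores low
spindle = roundness(math.pi * r ** 2, 10 * r)
```

This makes concrete why the shift from spindle-like to round cells registers as an increase in the roundness measure.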
Flow Cytometry Assay
Two days after transfection, the CHO cells were prepared for cell death detection following the user guide (https://assets.thermofisher.com/TFS-Assets/LSG/manuals/mp13199.pdf, accessed on 2 February 2021) to measure annexin V binding and propidium iodide (PI) uptake. The cells were washed with cold PBS, trypsinized, collected via centrifugation and gently resuspended in annexin-binding buffer (V13246, Invitrogen, Carlsbad, CA, USA) at 1 × 10^6 cells/mL. To each 300 µL cell suspension were added 0.5 µL of annexin V AlexaFluor 647 (A23204, Invitrogen, Carlsbad, CA, USA) and 1 µL of propidium iodide (PI) at 100 µg/mL (P3566, Invitrogen, Carlsbad, CA, USA). The CHO cells were incubated at room temperature for 15 min in the dark, then maintained on ice until flow cytometry analysis within one hour.
The cytometer BD FACSCanto (BD Biosciences, Franklin Lakes, NJ, USA) was used for sample acquisition. CHO cells transfected with an empty plasmid were used to determine the population to be analyzed. Monolabeled cells were used to establish the photomultiplier voltage (PMT) of each channel and to perform fluorescence compensation after the acquisitions. In order to detect cell death, only eGFP-positive CHO cells (FITC) were selected to study Annexin V AlexaFluor 647 (APC) and PI (PerCP) labeling. The analyses were performed using FlowJo software (v10.7.1).
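The gating logic above (select eGFP-positive events, then classify by Annexin V/PI labeling) amounts to a quadrant scheme. A minimal sketch, with hypothetical thresholds and intensity values:

```python
def classify_cell(annexin, pi, annexin_thr=100.0, pi_thr=50.0):
    """Quadrant gating of an eGFP-positive event (thresholds hypothetical):
    Annexin V-/PI- -> live; Annexin V+/PI- -> early cell death;
    Annexin V+/PI+ -> late cell death; Annexin V-/PI+ -> necrotic/damaged."""
    a, p = annexin > annexin_thr, pi > pi_thr
    if a and p:
        return "late"
    if a:
        return "early"
    if p:
        return "necrotic"
    return "live"

# One illustrative event per quadrant (arbitrary intensity units)
events = [(10, 5), (300, 10), (300, 200), (20, 200)]
labels = [classify_cell(a, p) for a, p in events]
```

In practice the thresholds are set from the empty-plasmid and monolabeled controls described above, not fixed constants.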
Results
We first focused on native E and 3a proteins. To maximize the chance of observing E and 3a protein-induced ionic currents, we chose to use pIRES plasmids, in which the protein of interest situated in the first cassette is more expressed than the eGFP reporter in the second cassette, thereby guaranteeing the expression of a high level of the protein of interest in fluorescent cells [32,33]. For the purpose of this study, we also selected CHO rather than HEK-293 cells because they express minimal endogenous currents [34]. We compared whole-cell currents recorded during a ramp protocol in cells transfected either with a control pIRES2-eGFP plasmid (pIRES) or the same plasmid containing the cDNA of the SARS-CoV-2 E protein (pIRES-E) or 3a protein (pIRES-3a). Unexpectedly, we did not observe any difference in the currents recorded for the SARS-CoV-2 protein-expressing cells compared to the control pIRES condition ( Figure 1A). However, many cells transfected with either E-or 3a-encoding plasmids developed altered morphology, shifting from spindle-like cells to more round cells ( Figure 1B), similar to what was previously observed in MDCK cells heterologously expressing SARS-CoV-1 E protein [35]. Morphology analysis with a Fiji tool confirmed an increase in cell roundness ( Figure 1C and Supplementary Figure S1). In particular, in the patch clamp experiments, some cells were coming off from the dish bottom because of loss of adhesion. Cell counting indicated that slightly more cells were losing adhesion when E or 3a proteins were expressed (3.4 ± 0.6% in non-transfected cells, 5.2 ± 1.0% in pIRES condition, 6.6 ± 0.7% in pIRES-E, p < 0.001 vs. pIRES, 6.0 ± 1.2% in pIRES-3a, p < 0.001 vs. pIRES, five replicates, z-test). As is standard, the currents shown in Figure 1A were recorded from the adhering cells, while the non-adhering cells were disregarded in this initial investigation. 
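The comparison of detachment fractions between conditions relies on a z-test on proportions. A sketch of a pooled two-proportion z-test (the cell counts below are hypothetical, not the study's raw data):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test, the kind of test used to compare
    the fraction of detached cells between conditions. Returns z."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# Hypothetical counts: 66/1000 detached cells vs 52/1000
z = two_proportion_z(66, 1000, 52, 1000)
```

The test is symmetric: swapping the two samples flips the sign of z; significance then follows from the standard normal distribution.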
Noteworthily, in each condition, both spindle-like and round adhering cells were studied (pIRES: 21 spindle-like and 17 round cells; pIRES E: 9 and 13, pIRES 3a: 18 and 11). Since both E and 3a proteins promote cell death [24,25], we hypothesized that the various cell morphological patterns (spindle-shaped, round-adhering, and round non-adhering) may correspond to the development of cell death, as described earlier in CHO and other cells [30,31]. The flow cytometry analysis performed on the eGFP-positive CHO cells ( Figure 2) showed that expression of E and 3a proteins increases the percentage of dying cells, with a more significant increase in late cell death, revealed by propidium iodide permeability (Supplementary Figure S2). The effect of 3a protein was greater than the effect of E protein. E protein-induced cell death could be reduced by the pan-caspase inhibitor QVD-OPh, while 3a protein-induced cell death could not (Supplementary Figures S3 and S4), suggesting that E protein induces apoptosis, while 3a protein activates non-conventional caspase-independent cell death. Both E and 3a proteins possess a C-terminal PDZ binding motif (PBM). E-protein PBM has been suggested to be a virulence factor [11] that binds to host cell PDZ domains, leading to abnormal cellular distribution of the bound proteins [35]. 3a PBM interacts with at least five human PDZ-containing proteins (TJP1, NHERF3 and 4, RGS3, PARD3B), suggesting that it also alters cellular organization [36]. We thus evaluated whether deletion of these domains impacts the propensity of E and 3a proteins to trigger cell death.
Two C-terminal deletions used in previous studies to remove E protein PBM, ∆4 for the last four amino acids [35] and ∆8 for the last eight residues [11], abolished the pro-apoptotic effect of E protein (Figure 2). When looking individually at early and late cell death, we observed that both truncations of E and 3a protein decreased late cell death (Supplementary Figure S2).
Since both E and 3a proteins promote altered morphology and cell death, we hypothesized that the cells starting to come off the surface may express currents induced by altered morphology and/or cell death, such as volume-regulated anion channel (VRAC) or pannexin currents [26][27][28][29]37,38]. We thus compared patch clamp recordings of adhering cells vs. non-adhering cells for three conditions: control pIRES, pIRES-E and pIRES-3a plasmids ( Figure 3). For the control pIRES condition, focusing on non-adhering cells in the 35 mm dish and using the ramp protocol, we observed an outwardly rectifying current with a mean current density of 8.5 ± 3.1 pA/pF at +70 mV, slightly higher than those of spindle- or round-shaped adhering cells (3.4 ± 0.8 pA/pF). On the other hand, in non-adhering cells expressing either the E or 3a protein, currents were much larger in the E protein condition (I +70mV = 31 ± 9 pA/pF, two-way ANOVA test on the ramp-evoked currents: p < 0.0001) and the 3a protein condition (I +70mV = 44 ± 13 pA/pF, two-way ANOVA test on the ramp-evoked currents: p < 0.0001) compared to non-adhering cells in the control pIRES condition. Noteworthily, only a fraction of the non-adhering cells exhibited large rectifying currents, as shown in Figure 3: 4 out of 43 in the control pIRES condition, 14 out of 46 in the E protein condition, and 16 out of 41 in the 3a protein condition. These experiments suggest that changes in morphology and/or cell death induced by expression of E and 3a proteins may lead to an increased membrane permeability by enhancing the expression or activity of an endogenous ion channel.
Current density is classically used to compare channel activity in single cells, but since cell morphology is affected by the expression of E and 3a proteins, we tested if membrane capacitance is modified in non-adhering cells. Supplementary Figure S5 actually shows a reduction in membrane capacitance in non-adhering cells in several conditions. In order to verify that the increase in current density observed in non-adhering cells in Figure 3 is not indirectly due to the decrease in membrane capacitance, we also compared the current amplitudes (not divided by membrane capacitance) and still observed significantly larger rectifying currents when E or 3a protein was expressed (Supplementary Figure S6).
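The control analysis above rests on the relation density = current / capacitance: a smaller Cm alone can inflate the density even when the absolute current is unchanged. The toy numbers below (hypothetical) illustrate the confound:

```python
def current_density(i_pa, cm_pf):
    """Whole-cell current (pA) normalized by membrane capacitance (pF),
    the usual proxy for membrane area, giving pA/pF."""
    return i_pa / cm_pf

# The same 200 pA absolute current yields twice the density if the
# detached cell's capacitance is halved -- which is why the raw
# amplitudes were also compared directly.
d_adhering = current_density(200.0, 20.0)
d_detached = current_density(200.0, 10.0)
```

Since the larger rectifying currents persisted in the raw amplitudes, the effect is not an artifact of this normalization.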
The outwardly rectifying currents that we observed resemble both VRAC currents conducted by swelling activated anion channels and pannexin currents that are not only apoptosis-induced but also stretch-induced [26][27][28][29]37,38]. We chose carbenoxolone (CBX), which inhibits both channels with similar affinity [39,40], and applied it on non-adhering cells that display large outwardly rectifying currents (Figure 4). We observed that CBX, applied at 50 µmol/L, inhibits the observed current, restoring current amplitudes similar to the ones observed in the control cells. Probenecid is commonly used to inhibit pannexin channels and seems quite specific for pannexin currents, showing little effect on connexin channels and no described effect on VRAC channels [40,41]. We observed that probenecid, applied at 300 µmol/L, also inhibits the large outwardly rectifying currents observed in the non-adhering E protein expressing cells ( Figure 5). Altogether, these observations suggest that the current triggered by the expression of E and 3a proteins is most probably conducted by pannexin channels.
We reported above ( Figure 2) that deleting the last four amino acids of the E protein (∆4) drastically reduced its pro-apoptotic effect. Cells expressing the ∆4 E protein showed an average roundness similar to cells expressing the WT protein, suggesting that deletion did not prevent its effect on cell morphology ( Figure 6A and Supplementary Figure S7). Additionally, when focusing on round and non-adhering ∆4 E protein-expressing cells, we could still record large outwardly rectifying currents (5 out of 20 cells), suggesting that C-terminal deletion of E protein does not abolish the induction of pannexin currents, despite the prevention of apoptosis probed by the flow cytometry experiments ( Figure 6B and Supplementary Figure S8).
We also reported in Figure 2 that the deletion of the last 10 amino acids of the 3a protein (∆10) also decreased cell death, albeit to a lesser extent. As for E protein deletion, cells expressing the truncated 3a protein showed an average roundness similar to cells expressing the WT protein ( Figure 6A and Supplementary Figure S7). Focusing on the nonadhering cells, we could still record large outwardly rectifying currents (9 out of 14 cells), suggesting that deletion of the last 10 amino acids is not sufficient to abolish the induction of pannexin-like currents ( Figure 6C and Supplementary Figure S8).
Discussion
The concept that SARS-CoV E or 3a proteins could be viroporins expressed at the plasma membrane is a seductive one, as it could help identify new therapeutic drugs against COVID-19 through a screening program based on channel activity. However, this concept is controversial, and part of the controversy about the function of E and 3a proteins is likely linked to the fact that these proteins also trigger morphological alterations and/or cell death. One could imagine, for instance, that morphological alterations and/or cell death are a way to activate the function of E and 3a proteins at the plasma membrane, but an alternative hypothesis is simply that morphological alterations and/or cell death trigger endogenous cell conductances unrelated to the cellular function of the E and 3a viral proteins [26][27][28][29]38,42].
The only way to address these issues is to confirm that both morphological alterations and cell death are induced by E and 3a proteins, to measure plasma membrane conductance, and to characterize them to get insights on their nature and probe the pharmacological agents that would match their conductance identities. We managed to solve these issues by characterizing the membrane conductances triggered by both E and 3a proteins. The fact that both proteins trigger the same conductance independently of each other was the first indication that they could not be viroporins at the plasma membrane. The second hint was that adhering cells, whether they had a round shape or not, did not exhibit any outward conductance in spite of E or 3a protein expression. Finally, the sensitivity to carbenoxolone of the outwardly rectifying currents triggered by E or 3a proteins in nonadhering cells, but also the sensitivity to probenecid of the outwardly rectifying currents triggered by the E protein, was an indication that these viral proteins trigger cellular alterations, such as morphological changes and cell death, that are inducers of pannexinlike current. Globally, these observations remain consistent with previous observations that both E and 3a proteins are mainly localized in intracellular compartments in various cell types [14,[43][44][45][46][47]. Therefore, it is fair to mention that we cannot fully conclude the viroporin nature of these viral proteins, as their localization in subcellular organelles prevents us from clearly testing their intrinsic potential for channel activity.
We showed that expression of either of these two proteins in CHO cells induces an increase in cell death, as quantified by flow cytometry experiments. It is likely, although we did not investigate this point in detail, that this cell death accompanies the change in cell morphology and Petri dish detachment. As such, our observation that pannexin-like currents are mainly observed in detached round-shaped cells indicates that major cell morphology changes, up to the level of surface detachment, are required for the induction of pannexin-like currents. Whatever the exact mechanism, the upregulation of pannexin channels upon cell death has been previously observed [42]. It is thus not so surprising, in fact, that other reports faced problems reporting and identifying the conductances triggered by E and 3a viral proteins. The conditions for observing them are indeed quite drastic and require examining cells that are in the combined dying and detachment process, something that is not naturally pursued by researchers, especially if one hopes to detect viroporin activity. To reconcile our data with earlier publications, we noticed that whole-cell currents observed by others after SARS-CoV-1 E or 3a protein expression in HEK-293 cells [13,19] were also very similar to pannexin currents: an outwardly rectifying current with a reversal potential close to 0 mV at physiological ion concentrations, indicating poor ion selectivity, and an amplitude of a few hundred pA.
One possibility is that the pannexin-like currents that we observed are due to the classical caspase-induced cleavage of pannexin [48]. Intriguingly, deletion of the C-terminal PBM of E protein abolished its pro-apoptotic effect, but cell morphology alteration and the induced outwardly rectifying currents were still present. Regarding the 3a protein, deletion of its PBM domain only decreased and did not completely abolish promoted cell death, but again, cell morphology alteration and pannexin-like currents were preserved. Altogether, these results suggest that cell morphology modification and pannexin induction may be linked and that these processes are not necessarily accompanied by cell death. One has to keep in mind that pannexin currents are activated by many stimuli in addition to cell death [48]. In particular, pannexin currents are also stretch-activated and may be enhanced in the detached cells that are undergoing major morphological alterations [26]. If pannexins are already activated by stretch, they would not be overactivated by their cleavage by caspase, which would explain the fact that E and 3a protein truncations do not prevent pannexin current, but only cell death. It may be difficult to clearly delineate the molecular nature of the outwardly rectifying currents in the absence of specific pharmacological tools [40,41,49]. Once the molecular nature of the CBX-sensitive currents is defined, it will be of interest to test if this channel is sensitive to the "viroporin" blockers that have been used elsewhere as evidence that the E/3a proteins are bona fide ion channels: amantadine, HMA, emodin, or xanthene [15,18,21].
Conclusions
In conclusion, SARS-CoV-2 native E and 3a proteins, and most likely the SARS-CoV-1 ones as well, do not act as plasma membrane ion channels, but instead trigger the activity of plasma membrane pannexin channels, most likely through morphological alteration of the cells. However, our study does not rule out potential channel activity in intracellular membranes leading to morphological alterations and/or cell death. Adding a level of complexity, pannexin currents are associated with the induction of inflammation; they may also be increased by cytokines such as TNF-alpha [50] and have thus been suggested as potential therapeutic targets [51][52][53]. Future studies will give more insight into the role of pannexin channels in COVID-19 physiopathology and treatment.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cells12111474/s1, Figure S1: Morphology analysis of cells transfected with control pIRES, pIRES-E or pIRES-3a plasmids; Figure S2: Effects of E or 3a protein expression on early (Annexin V+, PI- in A&B) and late (Annexin V+, PI+ in C&D) cell death; Figure S3: Caspase dependence of E and 3a protein-induced cell death; Figure S4: Test of the apoptosis inducers and inhibitor in CHO cells; Figure S5: Effect of expression of E or 3a protein on membrane capacitance in adhering vs. non-adhering CHO cells; Figure S6: Expression of E or 3a protein is accompanied by outwardly rectifying currents in non-adhering CHO cells only; Figure S7: Distribution of cells roundness; Figure S8: C-terminal deletion of E or 3a protein does not prevent cell characteristic changes.
Effects of Dietary Treatment, Gender, and Implantation on Calpain/Calpastatin Activity and Meat Tenderness in Skeletal Muscle of Korean Native Cattle

The objectives of this study were to examine calpain activity and meat tenderness under three different feeding patterns in Korean native cattle (KNC). A total of forty-five animals were assigned, fifteen each, to long-term restriction feeding (LTFR), long-term restriction feeding with hormone treatment (LTFR-tH), and short-term non-restriction feeding (STFNR). Concentrate was restricted based on body weight in experiments 1 and 2, but was fed ad libitum in experiment 3. In experiment 2, hormonal implants were given, M-PO for bulls and F-TO for heifers, at 18, 20, and 22 months of age. Animals were purchased (3-5 months old) from a local cattle market and managed on two local farms and a university research unit in three different years. Animals were slaughtered at 24 months in the long-term trials and at 18 months in the short-term trial. Loin and tenderloin muscles were used for calpain activity and meat quality measurements. The calpain proteolytic system was not changed by treatment; however, calpastatin activity was low in the short-term trial. Calpain and calpastatin activities are reciprocally related, so the high calpain activity may affect quality grade. Shear force values decreased with postmortem aging. Cooking loss, on the other hand, was significantly higher in the short-term than in the long-term trials and then gradually decreased with aging. Hormone implants intended to increase meat yield influenced calpastatin activity more strongly than calpain activity with respect to meat tenderness. Meat color a* did not differ significantly in the loin, while meat color b* decreased with postmortem aging time in the tenderloin. Western blots were performed to determine whether these proteins are degraded during postmortem storage and whether this degradation temporally parallels the decrease in shear force. Vinculin was detected at days 0 and 1 and was degraded after day 3. In conclusion, calpain activity had only a slight effect on meat tenderness.
The objectives of this study were to examine calpain activity and meat tenderness by three different feeding patterns in Korean native cattle (KNC). Total forty-five animals were assigned each fifteen in long term restriction feeding (LTFR), long-term restriction feeding and hormone treatment (LTFR-tH), and short term non-restriction feeding (STFNR), respectively. Concentrate was restricted based on body weight in exp 1 and 2. However, it was fed ad libitum in exp. 3. Hormonal implantation was made with MPO for bulls and with F-TO for heifers at 18, 20, 22 months of age in exp. 2. Animals were purchased (3-5 month old) from local cattle market and managed in two local farms and university research unit at three different years. Animals were slaughtered at 24 months for long-term trial and at 18 month for short term-trial. Loin and tender loin muscle was used for calpain activity and meat quality. Calpain proteolytic system was not changed by treatment. However, calpastatin activity was low in short-term trial. The calpain and calpastatin activity is reciprocal relationship, therefore, the high calpain activity may effect on quality grade. The shear force value was decreased as the processing of aging after postmortem. On the other hand, the cooking loss was significantly higher in short-term than in long-term trial, and then gradually decreased by the aging. Hormone implants to increase meat yield influenced to calpastatin activity more powerfully than calpain activity to meat tenderness. In meat color-a*, there was not significant difference in loin. Meat color-b* was decreased as postmortem aging time increased in tenderloin. Western blots were done to learn whether these proteins are degraded during postmortem storage and whether this degradation temporally parallels the decrease of shear force value. Vinculin was detected at 0 day and 1 day and degraded after 3 day. In conclusion, Calpain activity was affected slightly on meat tenderness. 
But meat tenderness was influenced by calpastatin, more effectively. (Asian-Aust. J. Anim. Sci. 2002. Vol 15, No. 11 : 1653-1658)
INTRODUCTION
As beef consumption increases, consumers want higher meat quality. Although many factors (color, tenderness, and flavor) influence meat quality, the one consumers consider most is tenderness. Calpain I and II (calcium-dependent proteases) and calpastatin (a calpain inhibitor) influence meat tenderness (Goll et al., 1989). Generally, castration improves meat tenderness in beef cattle. Cytoskeletal filaments are associated with the plasma membrane in areas of cell contact. Desmin is one of the five major groups of intermediate filaments and is found predominantly in skeletal, cardiac, and smooth muscle. Vinculin is associated with the cytoplasmic aspect of contact areas close to the membrane. It has been suggested that vinculin is a possible link between the ends of bundles of actin filaments and the plasma membrane. It has also been proposed that vinculin may be involved in the transmembrane induction of actin bundle formation (Penny, 1980; Valin, 1985). Anabolic implants and adrenomimetic compounds are used to improve growth rate and feed efficiency of cattle during finishing (Trenkle, 1987).
Calpastatin is an endogenous inhibitor of the calpain (EC 3.4.22.17, Ca2+-dependent cysteine proteinase) proteolytic system. In skeletal muscle, the calpain system has the potential to regulate growth through involvement in myogenic cell differentiation (Hong et al., 1996) and initiation of myofibrillar protein turnover (Goll et al., 1989). It is also well established that calpains are primarily responsible for postmortem proteolysis, which results in meat tenderization (Koohmaraie, 1992). Postrigor calpastatin activity, which is inversely proportional to postmortem tenderization, accounts for a greater proportion of the variation in beef tenderness (about 40%) than any other single measure (Shackelford et al., 1995).
Previous studies have assessed differences in growth and meat characteristics between bulls and steers. In general, bulls grow more rapidly (15 to 17%), utilize feed more efficiently (10 to 13%) at the same age, and produce higher-yielding carcasses with less fat and more muscle than steers. Bulls produce leaner carcasses with lower quality grades than steers do. Meat from bulls had higher shear values than meat from steers (Morgan et al., 1989). In most cases, such research has studied beef cattle on short-term feeding programs, whereas this study examined the differences between bulls and steers through carcass inspection under a long-term feeding program. No such study has yet been reported for Korean Native Cattle (KNC). Therefore, the purpose of this study was to understand the effects of different feeding systems (long- and short-term) and gender on carcass grade, calpain and calpastatin activity, and meat tenderness in KNC.
Animals and management
Animals and management were the same as in a previous report (Choi et al., 2002). Forty-five KNC were randomly assigned, five per cell, in a 3 (treatment) × 3 (gender) factorial design over the four-year experimental period. The three treatments were long-term (24 months) restricted feeding (LTFR), long-term restricted feeding with hormone treatment (LTFR-tH), and short-term (18 months) non-restricted feeding (STFNR). The trial included heifers as well as castrated and intact males. Korean native calves (about 4 months of age) were purchased from local farms, 15 calves per year, over three years (1996-1998). In LTFR-tH, an anabolic agent was implanted subcutaneously in the ear at 18, 20, and 24 months of age. Bulls and steers were treated with M-PO™ implants (progesterone 200 mg/dose, oestradiol benzoate 20 mg/dose), and heifers with F-TO™ implants (testosterone 200 mg/dose, oestradiol benzoate 20 mg/dose) (Upjohn, USA). The animals were slaughtered at 24 months of age (BW 550-650 kg) at a local slaughterhouse. Five-gram samples were taken from the loin (L) and tenderloin (TL) to determine the levels of the calcium-dependent proteases (calpains) and their inhibitor (calpastatin).
Calpain and calpastatin assay
Calpain and calpastatin were assayed following the methods of Wheeler and Koohmaraie (1991). Within 1 hour of slaughter, samples (5-6 g) were removed from the loin (Longissimus dorsi) and tenderloin (Psoas major) muscles of KNC. Muscle samples were homogenized in 10 volumes of ice-cold homogenizing buffer (40 mM Tris; 10 mM EDTA; 10 mM 2-mercaptoethanol; 0.2% Triton X-100; pH 7.5 at 4°C), centrifuged at 30,000×g for 30 min at 4°C, and filtered through glass wool (pre-washed with homogenizing buffer). For ion-exchange chromatography, the sample was adjusted to pH 7.5 with 6 N HCl and diluted with ice-cold distilled deionized water to reduce its conductivity below that of buffer A. The solution was loaded onto a 1.6 cm × 40 cm column of DEAE-Sephacel at 24 ml/h. After loading, the column was washed overnight with buffer A to remove unabsorbed proteins until the A278 approached that of buffer A. Fractions were eluted with elution buffer. After pooling, fractions 1-7 were screened for calpastatin activity, fractions 5-14 for calpain-I activity, and fractions 15-21 for calpain-II activity by absorbance at 278 nm.
Warner-bratzler shear force and cooking loss
Three steaks (two 2.45 cm thick chops per sample) were removed from the loin and tenderloin, vacuum packaged, and aged for 3, 9, 15, or 21 days. The steaks were then cooked to 75°C in a water bath and cooled for 2 h before removal of six cores (1.24 cm diameter) parallel to the longitudinal orientation of the muscle fibers. Each core was sheared once with a Warner-Bratzler shear attachment on an Instron Universal Testing Machine (Instron, Canton, MA) with a 50 kg load cell and a crosshead speed of 5 cm/min.
Meat color
Meat surface color was measured with a chromameter (Minolta CR 301). After the cut surface of each sample had been exposed to air for 30 min, meat color was expressed in Commission Internationale de l'Eclairage (CIE) L* (lightness), a* (red-green component), and b* (blue-yellow component) values.
Statistical analysis
The SAS/STAT 6.03 package was used to analyze the associations among gender, feeding pattern, and economic traits in KNC. The statistical procedure used the General Linear Model (GLM) with the least squares procedure (Harvey, 1975). Tukey's studentized range test was used for significance testing among means.
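The SAS GLM/Tukey analysis described above can be approximated in open-source tools. The sketch below is illustrative only: it uses synthetic shear-force numbers (all values hypothetical, not the study's data) and SciPy's one-way ANOVA and Tukey HSD (`scipy.stats.tukey_hsd`, SciPy >= 1.8) as stand-ins for the SAS/STAT GLM procedure.

```python
# Illustrative re-creation of the paper's significance testing in Python/SciPy.
# The study itself used SAS/STAT GLM with Tukey's test; the data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical shear-force samples (kg) for the three feeding treatments.
ltfr = rng.normal(6.0, 0.5, 15)     # long-term restricted feeding
ltfr_th = rng.normal(5.2, 0.5, 15)  # long-term restricted + hormone implant
stfnr = rng.normal(5.8, 0.5, 15)    # short-term ad libitum feeding

# One-way ANOVA F-test across treatments.
f_stat, p_anova = stats.f_oneway(ltfr, ltfr_th, stfnr)

# Tukey's HSD for pairwise comparisons (analogue of Tukey's studentized range test).
tukey = stats.tukey_hsd(ltfr, ltfr_th, stfnr)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(tukey.pvalue)  # 3x3 matrix of pairwise p-values
```

A full reproduction of the paper's 3×3 factorial model (treatment × gender) would additionally need a two-way GLM, e.g. via `statsmodels`; the one-way case above only illustrates the mechanics.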
Calpain and calpastatin activity
Calpastatin is a powerful regulator of calpain-mediated proteolysis during postmortem aging of meat (Koohmaraie, 1992). However, very little is known about the mechanisms or factors that control intracellular protein degradation in growing muscle. Calpastatin and calpain-I activities in LTFR-tH were higher than in LTFR and STFNR (Table 1). Treatment LTFR had higher calpain-II activity than the other treatments. Among genders, calpastatin and calpain-II activities were higher in bulls than in steers and heifers, while heifers had higher calpain-I activity than bulls and steers in all treatments. The ability to regulate muscle protein degradation could have a large effect on the rate of muscle growth (Goll et al., 1989), and the proteolytic capacity of the calpain system may regulate muscle protein degradation during both muscle growth and postmortem storage of meat (Wheeler and Koohmaraie, 1992). Our data support this possibility: calpain-I was higher in heifers than in bulls and steers, calpain-II was higher in bulls than in steers and heifers, and calpastatin activity was higher in bulls than in steers (p<0.05), although calpastatin activity in the 24-month feeding treatment with anabolic implants showed no significant difference. Therefore, meat quality is significantly affected by the calpain-I to calpastatin ratio between bulls and heifers in KNC. These results indicate that calpastatin activity strongly influences meat quality.
Shear force and cooking loss
As postmortem aging time increased, shear force values decreased (Table 2). Shear force in treatment LTFR was higher than in the other treatments through 15 days postmortem; thereafter, shear force in treatments LTFR and STFNR was similar and higher than in treatment LTFR-tH. In bulls and heifers, shear force values at 3, 9, and 21 days postmortem were similar and higher than in steers, while bulls had higher shear force than steers and heifers at 15 days postmortem. Similarly, Kim et al. (2001) reported that aging influenced shear force in KNC.
Cooking loss decreased as postmortem aging time increased (Table 3). In treatment STFNR, cooking loss at 3 and 15 days postmortem was higher than in the other treatments, and there was no significant difference among treatments at 9 days postmortem. Cooking loss in treatments LTFR-tH and STFNR was similar and higher than in treatment LTFR. Among genders, there was no significant difference at 3 and 9 days postmortem, but bulls had higher cooking loss than steers and heifers at 15 and 21 days postmortem.
Regarding meat tenderness, Field (1971) and Seideman et al. (1982) suggested that meat from bull carcasses was less tender than meat from steer carcasses, whereas others have been unable to detect significant differences in tenderness between young bulls and steers slaughtered at comparable ages. Although no differences in µ-calpain or m-calpain activities were observed between bulls and steers, the reduced proteolytic capacity of muscle due to increased calpastatin activity may serve as a regulator of myofibrillar protein degradation. Many reports link the growth advantages of intact males to greater amounts of androgens such as testosterone, and injecting female rats with the synthetic androgen trenbolone acetate increased muscle gain through a reduction in protein degradation (Heitzman, 1980).
Additionally, several reports have concluded that feeding β-adrenergic agonists to growing animals increases muscle mass and improves whole-body composition, due at least in part to a reduction in muscle protein degradation. These results have been observed in lambs (Bohorov et al., 1987), rats (Reeds et al., 1986), veal calves (Williams et al., 1987), chickens (Morgan et al., 1989), rabbits (Forsberg et al., 1989), and cattle (Wheeler and Koohmaraie, 1992). Our results on the hormone treatments indicated that calpastatin activity was higher in meat from bull carcasses than in meat from steer carcasses, and shear force values differed significantly with the M-PO™ and F-TO™ implants. This suggests that hormone implants, while increasing meat yield, influence meat tenderness more strongly through calpastatin activity than through calpain activity. The National Beef Quality Audit identified reduced beef quality, specifically lower marbling scores and reduced tenderness, as a consequence of implants (Smith et al., 1992).
Meat color
Table 4 shows the changes in meat color L*, a*, and b* values of KNC beef (loin and tenderloin) from pen-fed animals given a high-concentrate diet ad libitum for 18 months. In the loin, L* increased as postmortem aging time increased, and bulls had lower L* values than heifers and steers; a* showed no significant difference. In the tenderloin, b* decreased as postmortem aging time increased, and bulls had lower b* values than heifers and steers. Kang et al. (1997) reported similar results in KNC.
Muscle protein (desmin, vinculin)
Western blots were performed to determine whether these proteins are degraded during postmortem storage and whether this degradation temporally parallels the decrease in shear force. Vinculin (about 90 kDa) was detected at days 0 and 1 and was degraded abruptly after day 3 (Figure 1). Desmin (50 kDa) was progressively degraded as postmortem aging time increased.
There is ample evidence that proteolysis of key myofibrillar and associated proteins, whose function is to maintain the structural integrity of the myofibrils, is the cause of the tenderization that occurs during storage of meat at 4°C. In our study, vinculin was very susceptible to degradation in postmortem muscle: degradation began during the first day postmortem, almost half of the vinculin in the loin was degraded after day 1, and most vinculin degradation occurred after day 3, a period during which shear force decreases. Western blots showed little or no degradation of desmin in the loin of bulls during the first day postmortem; over half of the total desmin in the loin, however, was degraded between days 1 and 6 after death (Figure 1), a period during which shear force decreases dramatically. Taylor et al. (1995) reported a similar result in bovine biceps femoris and semimembranosus muscles.
In conclusion, although meat quantity was higher with short-term feeding, long-term feeding and hormone treatment produced higher meat quality grades than short-term feeding. Meat quality was also influenced by gender. Calpain activity affected meat quality only slightly, whereas calpastatin influenced it more strongly.
Table 1. Comparison of calpain and calpastatin activity (U/g). LTFR: long-term feeding by restricted supply of diets. LTFR-tH: long-term feeding by restricted supply of diets with hormone treatment. STFNR: short-term feeding with ad libitum supply of diets. a,b Means in the same row with a common superscript do not differ (p>0.05).
Table 2. Comparison of shear force (kg) among treatments and genders (n=18, LS means) in skeletal muscle of KNC.
Table 3. Comparison of cooking loss among treatments and genders (%, n=18, LS means)* in skeletal muscle of KNC.
Table 4. Change in meat color values of KNC beef in short-term feeding (n=3).* a,b Means in the same column on the same day with a common superscript do not differ (p>0.05).
"year": 2002,
"sha1": "60837303baeae4fe4a213d05ee6de961bfeb2757",
"oa_license": "CCBY",
"oa_url": "https://www.animbiosci.org/upload/pdf/15_263.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "60837303baeae4fe4a213d05ee6de961bfeb2757",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
An Inverse Correlation of Serum Fibroblast Growth Factor 19 with Abdominal Pain and Inflammatory Markers in Patients with Ulcerative Colitis
Background and Aims Bile acids (BA) play an important role in the modulation of numerous gut functions. Fibroblast growth factor 19 (FGF19) is the ileal hormone regulating BA homeostasis. The aim of the study was to evaluate serum FGF19 level and its correlation with clinical and endoscopic disease activity indices along with inflammatory biomarkers including serum CRP and fecal calprotectin levels in patients with ulcerative colitis (UC). Methods Fasting serum FGF19 level was measured using ELISA test in 16 patients with active UC (7 F, 9 M), 15 patients with nonactive UC (8 F, 7 M), and 19 healthy controls (11 F, 8 M). The disease activity was assessed based on the clinical and endoscopic evaluations as well as serum CRP and fecal calprotectin level measurement. Results The median serum FGF19 level was higher in patients with nonactive UC (175.3 pg/ml (108.7-342.3)) than in patients with active UC (114.3 pg/ml (68.9-155.3), p = 0.093). The median FGF19 level in healthy controls amounted to 151.6 pg/ml (90.6-224.2), and there were no statistically significant differences between the patients with active and nonactive UC compared to the healthy controls. An inverse correlation was observed between FGF19 level and abdominal pain intensity (R = –0.48, p = 0.007) as well as fecal calprotectin (R = –0.38, p = 0.036) and CRP levels (R = –0.36, p = 0.045). The serum FGF19 level correlated with neither clinical nor endoscopic disease activity indices. Conclusions The inverse correlations between FGF19 level and abdominal pain as well as inflammatory markers in UC may imply its potential analgesic and anti-inflammatory effects.
Introduction
The results of recent studies have shed new light on the role of bile acids (BA) in the regulation of numerous gut functions, including gastrointestinal motility, visceral sensitivity, secretion, inflammatory response, and gut barrier integrity [1][2][3]. Complex interactions between BAs and the gut microbiota participating in their transformation also play an important role [3][4][5][6]. BA malabsorption occurs in approximately 30% of patients with chronic diarrhea [7,8]. Among patients with inflammatory bowel diseases (IBD), the role of BA malabsorption has so far been proven in the pathogenesis of diarrhea in patients with Crohn's disease, particularly after resection of the ileum [9]. The overload of nonabsorbed BAs entering the colon lumen induces water and electrolyte secretion, also stimulating colonic contractility. The scarce data on BA malabsorption in ulcerative colitis (UC) remain ambiguous [9][10][11]. The role of BAs in the pathogenesis of IBD symptoms other than diarrhea is also unclear.
A better understanding of the regulatory mechanisms of BA synthesis and enterohepatic circulation has enabled the introduction of a new test for the diagnosis of their malabsorption: the evaluation of the serum fibroblast growth factor 19 (FGF19) concentration [12][13][14]. FGF19 is released from the epithelial cells of the ileum in response to farnesoid X receptor (FXR) activation by absorbed BAs. In the case of BA malabsorption, the serum FGF19 level decreases, which results in increased BA synthesis in the liver [8,15,16]. This may additionally exacerbate bowel symptoms due to increased BA concentration in the colon. Furthermore, it has been shown that inflammation inhibits FXR activation, while FXR agonists exert an anti-inflammatory effect [3]. Therefore, disturbances within the gut-liver axis and the FXR-FGF19 interaction may have significant diagnostic and therapeutic implications in IBD.
The evaluation of IBD activity includes the assessment of inflammatory markers as well as clinical features such as the intensity of diarrhea and abdominal pain. The gut immune system activation is directly associated with disturbances in intestinal barrier integrity and induction of visceral hypersensitivity. Potentially, the abovementioned anti-inflammatory effect of FXR activation resulting in FGF19 level increase [3] may contribute to the modulation of visceral pain response.
The main aim of the study was to evaluate the fluctuation of BA concentration in the active and nonactive phases of UC using serum FGF19 level measurement. Correlations between the serum FGF19 level and main UC symptoms, clinical and endoscopic activity indices, and laboratory markers of inflammation such as fecal calprotectin and serum CRP levels were also assessed.

Materials and Methods

All subjects provided stool and fasting blood samples. The disease activity was assessed based on clinical and endoscopic evaluations using the Rachmilewitz index and the Mayo Endoscopic Score, respectively. The predominant stool type and the mean level of abdominal pain intensity over the last 7 days before examination were evaluated using the Bristol Stool Form Scale and the Visual Analog Scale (VAS), respectively. The prevalence of gastrointestinal symptoms, concomitant disorders, and medications in UC patients was assessed based on a questionnaire. The following features were considered exclusion criteria: primary sclerosing cholangitis, ileal resection, and other severe conditions that could affect BA metabolism and circulation.
Quantitative Evaluation of FGF19 and Fecal Calprotectin

The quantitative evaluation of serum FGF19 and fecal calprotectin was performed by immunoenzymatic methods: Human FGF-19 ELISA (BioVendor, Laboratorni medicina a.s., Czech Republic) and EK-CAL (Bühlmann Laboratories, Switzerland), respectively. The patients were divided into active and nonactive subgroups based on a cutoff value of 250 μg/ml for fecal calprotectin.
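The subgrouping rule above is a simple threshold test. A minimal sketch follows; the function name and the handling of the exact boundary value are our own assumptions, since the paper only states the 250 μg/ml cutoff:

```python
CALPROTECTIN_CUTOFF = 250.0  # cutoff used in the study (μg/ml)

def classify_uc_activity(fecal_calprotectin: float) -> str:
    """Label a UC patient 'active' or 'nonactive' by the fecal calprotectin cutoff.

    Values strictly above the cutoff are treated as 'active'; the paper does not
    specify how a value exactly at the cutoff is assigned, so this is a guess.
    """
    return "active" if fecal_calprotectin > CALPROTECTIN_CUTOFF else "nonactive"

print(classify_uc_activity(310.0))  # -> active
print(classify_uc_activity(120.0))  # -> nonactive
```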
Statistical Analysis
Nonparametric statistics were used, and results are expressed as median along with the lower and upper quartiles (25Q-75Q). The Mann-Whitney U test was applied to compare differences in serum FGF19 and inflammatory markers between the groups. For the comparison of differences in frequency of abnormal results between the groups, the chi-squared test was used. The Spearman rank correlation coefficient (R) was also calculated to test associations between variables.
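The nonparametric toolkit described above maps directly onto SciPy. The sketch below runs the median/quartile summary, the Mann-Whitney U test, and the Spearman correlation on synthetic, hypothetical FGF19-like values (not the study's data; the paper does not name its statistics software, so SciPy is our substitution and the chi-squared step is omitted):

```python
# Sketch of the nonparametric procedures described above, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
fgf19_active = rng.lognormal(mean=4.7, sigma=0.4, size=16)     # hypothetical, ~110 pg/ml scale
fgf19_nonactive = rng.lognormal(mean=5.2, sigma=0.4, size=15)  # hypothetical, ~180 pg/ml scale

# Median with lower and upper quartiles (25Q-75Q), as reported in the paper.
q25, med, q75 = np.percentile(fgf19_nonactive, [25, 50, 75])

# Mann-Whitney U test for between-group differences (two-sided by default).
u_stat, p_mw = stats.mannwhitneyu(fgf19_active, fgf19_nonactive)

# Spearman rank correlation, e.g. FGF19 vs. abdominal pain VAS scores.
pain_vas = rng.integers(0, 10, size=16)  # hypothetical VAS scores
rho, p_rho = stats.spearmanr(fgf19_active, pain_vas)

print(f"nonactive median (IQR): {med:.1f} ({q25:.1f}-{q75:.1f}) pg/ml")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.3f}; Spearman R = {rho:.2f}")
```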
Results
The main characteristics regarding bowel symptoms in UC patients are presented in Table 1. The median VAS scores for abdominal pain during the 7 days preceding the examination amounted to 0 (0-4) in patients with nonactive UC vs. 4.5 (2-6.5) in patients with active UC (p = 0.028).
The mean score according to the Rachmilewitz index used for disease activity evaluation amounted to 1.3 ± 1.5 (median = 1) in nonactive UC and 7.6 ± 2.7 (median = 7) in active UC. Based on endoscopic assessment of disease activity using the Mayo Endoscopic Score, endoscopic remission (0 points) was found in only 40% of patients with nonactive UC; in a further 40% of subjects with nonactive UC the Mayo Score amounted to 1, and in 20% to 2 points. In patients with active UC, the Mayo Score amounted to 2 in 37.5% of subjects and to 3 points in 62.5%. The majority of subjects with active UC (75%) had pancolitis, but without backwash ileitis.
Analyzing the serum FGF19 level in UC patients, a clear tendency was revealed: the median FGF19 level was lower in active UC (114.3 pg/ml) than in nonactive UC (175.3 pg/ml) (p = 0.093). The median FGF19 level in the healthy controls amounted to 151.6 pg/ml, but there were no statistically significant differences between the patients with active or nonactive UC and the controls (Figure 1). Despite the fluctuation of the FGF19 level depending on the disease activity, in the majority of UC patients it was still within the normal range. An increased FGF19 level was found in 3 patients with nonactive UC, while a decreased FGF19 level was demonstrated in one patient with active UC and one patient with nonactive UC.
An important part of the analysis was the evaluation of correlations between the serum FGF19 level, as a new marker of disturbances in BA absorption, and the main UC symptoms, the clinical and endoscopic disease activity indices, and the inflammatory markers.
Discussion
The main finding of the study is that the serum FGF19 level in UC patients fluctuates depending on the disease activity, with a clear tendency to be lower in active UC (114.3 pg/ml) than in nonactive UC (175.3 pg/ml) (p = 0.093). Despite this fluctuation, in the majority of UC patients the FGF19 level was still within the normal range, and no statistically significant differences between any of the UC patient subgroups and the controls were revealed. Based on the available literature data, it has been estimated that BA malabsorption is present in about 1% of UC patients [17]. In two recent studies, it has been shown that the FGF19 level was normal [17] or slightly elevated [18] compared to the controls, which is consistent with our own preliminary results [19]. In the current study, primary sclerosing cholangitis was an exclusion criterion, and in none of the patients with nonactive UC were any signs of cholestasis detected. Nevertheless, it cannot be totally ruled out that the increased FGF19 level found in 3 subjects with nonactive UC could be a prodromal sign of biliary tract pathology. In physiological conditions, FGF19 is mainly released by the ileum; however, in cholestasis, this hormone is also produced in the liver [20].
The available data on the role of BAs in the pathogenesis of UC are not fully consistent, which may partially result from the heterogeneity of the patient groups, small sample sizes, and some methodological differences [11]. In an old study published in 1971, Miettinen [21] postulated that diarrhea in UC is not associated with the loss of BAs in feces, but rather with colonic mucosa injury resulting in disturbances in absorption and increased fluid secretion into the gut lumen. At the same time, the author claimed that BA malabsorption is limited only to the subgroups of UC patients with backwash ileitis and after proctocolectomy with an ileal pouch, due to shorter gastrointestinal transit time and a significantly smaller absorption surface [21]. In another study conducted in patients after ileorectal anastomosis, alterations in the fecal BA profile characterized by a decreased level of secondary BAs were detected [22]. In physiological conditions, secondary BAs are produced by the colonic microbiota. Notably, a growing body of evidence confirms a key role of the gut microbiota in BA metabolism in the gut lumen [23]. In a mouse model, it has been shown that gut microbiota modulation induced by the administration of probiotics (VSL#3) enhanced BA deconjugation and fecal excretion [23]. These effects were associated with increased hepatic BA neosynthesis resulting from repression of the FXR-FGF15 axis (FGF15 is the murine homolog of FGF19), and treatment with an FXR agonist normalized fecal BA levels in probiotic-administered mice [23]. Of note, only conjugated BAs can be actively absorbed in the ileum, while in the colon, passive transport of secondary BAs occurs [20].
The results of studies investigating serum BA levels in UC patients are likewise inconsistent. Gnewuch et al. [10], performing liquid chromatography in 161 UC patients, did not find significant differences in the serum BA profile compared to the controls, except for decreased total BA tauroconjugate and unconjugated BA levels, which constitute only a small percentage of the serum BA pool [10]. In two other studies in UC patients, an increased serum primary BA level [24] and a decreased total serum BA level [25] were reported. However, Gothe et al. [26], assessing BA malabsorption by 7α-hydroxy-4-cholesten-3-one (C4), did not reveal any significant difference between pediatric IBD patients and controls.
Based on the evaluation of colonic mucosa biopsies in UC patients with active pancolitis, downregulation of mRNA expression for the main ileal BA transporter, the apical sodium-dependent BA transporter (ASBT), was found, together with decreased activity of BA-detoxifying enzymes [27]. Such changes were not observed in nonactive UC or left-sided UC. Simultaneously, no changes in FXR expression were reported [27]. Moreover, Nijmeijer et al. [28] did not find any changes in FXR expression either, but they observed alterations in FXR activation. The decreased FXR activation may impair FGF19 production, which was also observed in the current study.
The data on the direct influence of BAs on the clinical course of different forms and phases of IBD remain scarce. Therefore, one of the main aims of this study was to analyze the correlation between the serum FGF19 level and the main UC symptoms including diarrhea and abdominal pain, the clinical and endoscopic disease activity, and the inflammatory markers. The serum FGF19 level correlated neither with the number of stools per 24 hours nor with the Bristol Stool Form Scale score. To the best of our knowledge, this is the first report on a negative correlation between the FGF19 level and abdominal pain intensity. Previously, it has been shown that activation of TGR5, a membrane-type receptor for BAs, mediates BA-induced itch and analgesia [29]. The relatively higher FGF19 level in patients with nonactive UC, despite the presence of endoscopic signs of colonic mucosa inflammation in 60% of them, could point to potential analgesic effects of FGF19.
Analyzing the correlation of the FGF19 level with the Rachmilewitz disease activity index, some trend was observed, but without statistical significance (R = -0.33; p = 0.073). Gothe et al. [26] did not reveal any correlation between the C4 level as a marker of BA malabsorption and clinical IBD activity either; however, their study was conducted in children with the use of different scales to score the disease activity. Furthermore, in our study, no correlation was found between the FGF19 level and the Mayo Endoscopic Score (R = -0.28; p = 0.126), which has not been evaluated so far.
One of the most interesting findings of the current study in UC patients is the negative correlation between FGF19 and inflammatory marker levels, including fecal calprotectin (p = 0.036) and serum CRP (p = 0.045). The lower FGF19 level in patients with active UC (although in the majority of subjects still within the normal range) could be associated with decreased BA absorption resulting in an increased BA pool in feces. In the colon, the gut bacteria participate in secondary BA production. Interestingly, the antibacterial properties of BAs depend on their profile in the fecal pool, whereas the dysbiosis present in IBD may contribute to alterations of BA transformation [6]. Moreover, BAs as ligands for transcription factors modulate the expression of genes involved in BA transformation, including FXR, which may exert a direct immunomodulatory effect. On the other hand, proinflammatory cytokines may repress FXR expression, inducing disturbances in BA absorption [30], which suggests a complex causative relation between BA malabsorption and gut inflammation intensity. Gothe et al. [26] did not reveal any correlation between the C4 level and inflammatory markers in UC. In this study, for the first time, the correlation between FGF19 and fecal calprotectin levels was evaluated, and a negative correlation between the investigated parameters has been found. Potentially, a higher FGF19 level in nonactive UC could be associated with stimulation of its excretion by the steroid therapy used to induce remission. In a rat model of IBD, steroid-dependent induction of ASBT expression has been shown [31]. Furthermore, it has been demonstrated that in healthy volunteers, 21-day treatment with budesonide induces an increase in ASBT expression (by 34%) in the ileum, resulting in increased FGF19 production [32].
The increased FGF19 release in UC remission may exert an anti-inflammatory effect as well as reduce BA synthesis in the liver, and consequently BA concentration in the colon, which may alleviate the symptoms.
Noteworthy, BAs may induce a dual effect: induction or inhibition of inflammation [3]. The effect of BA action is determined by multiple factors such as the concentration of BAs, their physicochemical properties, and interactions with the gut microbiota [2]. In a mouse model of UC, it has been demonstrated that experimental colitis may disturb BA synthesis through the negative feedback signaling within the FXR-FGF19 axis [33]. Recent findings have confirmed a crucial role of FXR in the modulation of inflammatory response and intestinal barrier integrity [34]. The results of both in vivo and in vitro studies have demonstrated that, on the one hand, inflammation reduces FXR expression, while on the other hand, the activation of FXR exerts an anti-inflammatory effect by reducing the production of proinflammatory cytokines [35]. Additionally, TGR5 membrane receptors present on enterocytes, enteric neurons, and immune cells also participate in the regulation of numerous gut functions. Therefore, the anti-inflammatory effect induced by FXR and TGR5 agonists may be of clinical significance [3].
The fluctuation of the FGF19 level shown in the current study reflects changes in serum and fecal BA concentration. Importantly, fecal secondary BAs, due to their cytotoxic effect, are considered a risk factor for colorectal cancer, also in the course of IBD. Moreover, a chronically increased FGF19 level has also been reported to increase the risk of both colorectal cancer and cholangiocarcinoma in IBD patients, which may have relevant clinical implications [36,37].
Among the limitations of the study are the relatively small sample size and the fact that the subgroups of UC patients with the active and nonactive phases of the disease constituted disjoint sets. However, the subgroups were very carefully characterized with respect to clinical and endoscopic disease activity and laboratory test results, which enabled the evaluation of numerous correlations between the investigated features and parameters. The novelty of the study lies in the pioneering reports on the negative correlations between the FGF19 level and abdominal pain intensity as well as fecal calprotectin. The evaluation of FGF19 is a useful test to detect disturbances in BA absorption and circulation. The test is easy to perform and noninvasive, but a fasting blood sample is required due to the postprandial increase in the FGF19 level [38].
Conclusions
The serum FGF19 level shows fluctuation depending on the disease activity, which indicates an association between the regulatory mechanisms of BA enterohepatic circulation and UC activity. The inverse correlations between the FGF19 level and abdominal pain as well as inflammatory markers may imply its potential analgesic and anti-inflammatory effects, either direct or due to FXR-FGF19 axis activation. The dynamics of the FGF19 level fluctuation depending on the UC phase suggest new therapeutic aims associated with the activation of FXR, which constitutes a key element of the gut-liver axis.
Data Availability
The data used to support the findings of this study are included within the article. Additional data are available from the corresponding author (agata.mulak@wp.pl).
"year": 2020,
"sha1": "0de87ab9b0d1599d2cedd2749eab04dc3e2c5291",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/grp/2020/2389312.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2651ea92e6de4d2006b5bc348cf0fc835865a38f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Study on Periodic MHD Flow with Temperature Dependent Viscosity and Thermal Conductivity past an Isothermal Oscillating Cylinder
Heat and mass transfer flow with temperature-dependent viscosity and thermal conductivity, chemical reaction, and a periodic magnetic field past an isothermal oscillating cylinder has been considered. The dimensionless governing partial differential equations have been solved numerically by an explicit finite difference method with the help of Compaq Visual 6.6a. The results of this investigation are discussed for different values of the well-known flow parameters at different time steps and oscillation angles. The effects of the chemical reaction and periodic MHD parameters on the velocity, temperature, and concentration fields, as well as on the skin friction, Nusselt number, and Sherwood number, have been studied, and the results are presented graphically. The novelty of the present problem is the study of the streamlines while taking into account a periodic magnetic field.
Introduction
In recent years, the study of unsteady free convective flows past an oscillating cylinder has played a vital role in chemical engineering, turbomachinery, and aerospace technology.
The occurrence of heat and mass transfer is very common in chemical process industries such as polymer production and food processing. The study of MHD incompressible viscous flows has several important engineering applications in devices such as MHD power generators, electrical transmission lines, cooling of nuclear reactors, geothermal systems, metal forming, crystal growing, aerodynamic processes, and heat exchanger design. MHD has also drawn the attention of a large number of researchers because of its various applications, such as pumps and bearings. An implicit finite-difference scheme of Crank-Nicolson type was used by Reddy et al. (2009) [1] to analyse radiation and mass transfer effects on unsteady MHD free convection flow of an incompressible viscous fluid past a moving vertical cylinder; they found that the velocity and temperature in the boundary layer decrease as the radiation parameter increases. Unsteady natural convection of air with variable viscosity over an isothermal vertical cylinder was solved with the help of an implicit finite-difference method by Rani et al. (2010) [2]. The objective of that work was to investigate the viscosity effects on the free convective flow of air along a semi-infinite vertical cylinder; they found that velocity profiles near the wall decrease with an increasing kinematic viscosity variation parameter, and also noted that a decrease in the viscosity-variation parameter leads to an increase in the average heat transfer rate and a decrease in the average skin friction. Conduction-radiation effects on periodic MHD flow along a vertical surface have been analysed by Siddiqa et al.
(2012) [3]. Radiation, chemical reaction, and magnetic parameters were used by Machireddy (2013) [4] to investigate chemically reactive species and radiation effects on MHD convective flow past a moving vertical cylinder. Using an implicit finite difference method, he found that the transient velocity increases with an increase in the thermal or mass Grashof number, while increasing values of the magnetic field parameter decrease the transient velocity. The Gauss-Seidel iteration method was used by Babu et al. (2014) [5] to investigate the effects of chemical reaction and radiation absorption on mixed convective flow in a circular annulus at constant heat and mass flux, considering two concentric cylinders of different radii. Unlike other fluids, Cintaginjala et al. (2014) [6] considered a Jeffrey fluid: they used the Gauss-Seidel iteration method to investigate the effects of chemical reaction and radiation absorption on mixed convective heat and mass transfer flow through a cylindrical annulus with heat-generating sources and a nonlinear density-temperature relation, likewise using two concentric cylinders. An explicit finite-difference numerical analysis with temperature as a variable has been carried out by Mondal et al.
(2015) [7]. Using the magnetic parameter, permeability parameter, Schmidt number, thermal Grashof number, mass Grashof number and accelerated parameter, they worked on free convection and mass transfer flow through a porous medium with variable temperature, assuming no effects of temperature and concentration on the fluid properties. They found that, in the case of cooling of the plate, the velocity increases with decreasing magnetic parameter and Schmidt number, whereas the velocity profiles increase with increasing permeability parameter, thermal Grashof number, mass Grashof number and accelerated parameter. A numerical study on unsteady natural convection flow past an isothermal vertical cylinder with temperature-dependent viscosity has been presented by Hossain et al. (2015) [8], who also examined the effect of the viscosity variation parameter on isotherms and streamlines. For a binary fluid mixture, a regular perturbation method was used by Sharma et al. (2015) [9] to solve MHD flow, heat and mass transfer due to a moving cylinder in the presence of thermal diffusion, radiation and chemical reactions; they concluded that the velocity decreases with increasing magnetic field parameter. In a porous medium, using the Darcy-Forchheimer model, the effects of radiation, chemical reaction, thermal-diffusion and diffusion-thermo on MHD heat and mass transfer free convection flow near the lower stagnation point of an isothermal horizontal circular cylinder have also been investigated. In the presence of Soret and Dufour effects, hydromagnetic flow of a viscoelastic fluid over a porous oscillatory stretching sheet with thermal radiation has been investigated by Ali et al.
(2016) [10]. An implicit finite-difference method of Crank-Nicolson type was employed by Rajesh et al. (2016) [11] to investigate the chemical reaction and temperature oscillation effects on unsteady MHD free convective flow over a moving semi-infinite vertical cylinder. They found that the velocity increases with increasing thermal and mass Grashof numbers and decreases with increasing values of the magnetic parameter, Prandtl number and Schmidt number. MHD flow and heat transfer of a couple stress fluid over an oscillatory stretching sheet with a heat source/sink in a porous medium have been studied by Ali et al. (2016) [12]. Chemical reaction and radiative MHD heat and mass transfer flow with temperature-dependent viscosity past an isothermal oscillating cylinder have been investigated by Ahmed et al. (2016) [13]. More recently, the magnetic field and thermal radiation effects on heat and mass transfer of air flow near a moving infinite plate with a constant heat sink have been investigated by Arifuzzaman et al. (2016) [14].
In the present work, heat and mass transfer flow with temperature-dependent viscosity and thermal conductivity, chemical reaction and a magnetic field past an isothermal oscillating cylinder has been studied. Unlike previous researchers, we apply the magnetic field periodically. The main aim of this paper is to investigate the effects of the chemical reaction and the periodic magnetic field on the velocity, temperature and concentration fields, skin friction, Nusselt number, Sherwood number and streamlines for different time steps and oscillation angles, and also to compare the results with the case of a non-periodic magnetic field. The dimensionless governing partial differential equations are solved numerically by the explicit finite difference method with the help of Compaq Visual 6.6a.
Mathematical Model
Unsteady two-dimensional free convective flow of a viscous, incompressible, electrically conducting fluid past a semi-infinite oscillating cylinder of radius r0 in the presence of a periodic magnetic field has been investigated. Here, the x-axis is taken along the axis of the cylinder in the vertical direction, and the radial coordinate r is taken normal to the cylinder. Initially, the fluid and the cylinder are at the same ambient temperature and concentration. At time t' > 0, the cylinder starts moving in the vertical direction with a uniform velocity u0.
The surface temperature and concentration of the oscillating cylinder are then raised to the wall values T'w and C'w and maintained constant thereafter. A uniform periodic magnetic field (B0) is imposed on the oscillating cylinder, as presented in Figure 1. It is further assumed that there is no applied voltage, so that the electric field is absent [4]. It is also assumed that there exists a homogeneous first-order chemical reaction between the fluid and the species concentration. The level of species concentration is assumed to be very low, so the heat generated during the chemical reaction can be neglected; hence, any convective mass transport to or from the surface, as well as viscous dissipation effects in the energy equation, is assumed to be negligible. It is also assumed that all fluid properties are constant except for the influence of density variation with temperature and concentration in the body force term. The foreign mass present in the flow is assumed to be at a low level, and Soret and Dufour effects are negligible. Under the above assumptions, the boundary-layer equations governing flow past an oscillating cylinder with the Boussinesq approximation can be expressed as the following system of equations.
Figure 1. Flow model and physical coordinate system.
The corresponding boundary conditions in terms of non-dimensional variables are applied at the surface of the cylinder and in the free stream, and the skin friction coefficient, the rate of heat transfer (Nusselt number) and the Sherwood number are expressed in the usual dimensionless form.
Numerical Technique
An explicit finite difference method has been employed to solve the nonlinear partial differential Equations (7)-(10) along with the boundary conditions (11). The finite difference forms of Equations (7)-(10) are given by Equations (15) to (18), respectively. To obtain the finite difference equations, the region of the periodic MHD flow is divided into a grid of lines parallel to the X-axis, with R taken normal to the axis of the oscillating cylinder. Here we consider the height of the cylinder to be X_max = 20.0, i.e. X varies from 0 to 20, and we regard R_max = 50.0 as corresponding to R → ∞. In Equations (15) to (18), the subscripts i and j designate the grid points along the X and R coordinates, respectively, where X = iΔX and R = jΔR, following Machireddy [4], Rani et al. [2] and Hossain et al. [8]. M = 300 and N = 450 grid points are taken in the X and R directions, respectively, with mesh sizes ΔX = 0.067 and ΔR = 0.111 and time step Δt = 0.001. To test grid independence, the spatial mesh sizes were reduced by 50% in one direction, and then in both directions, and the results were compared and found to be in good agreement. If the time step is made smaller, the computation time becomes excessively long.
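As an illustration of the explicit time-marching idea described above, the following minimal sketch advances a single 1-D diffusion-type equation on a grid with the paper's step sizes (ΔR = 0.111, Δt = 0.001, N = 450). It is a stand-in for the coupled Equations (15)-(18), not the authors' actual code; the boundary values and the model equation are illustrative assumptions.

```python
import numpy as np

# Minimal explicit finite-difference sketch: dT/dt = d^2T/dR^2, a stand-in
# for one of the coupled governing equations (momentum, energy, concentration).
# Grid parameters follow the paper: dR = 0.111, dt = 0.001, N = 450 points.
dR, dt = 0.111, 0.001
N = 450
T = np.zeros(N)
T[0] = 1.0                      # cylinder surface held at dimensionless T = 1

r = dt / dR**2                  # diffusion number; explicit scheme needs r <= 0.5
assert r <= 0.5, "explicit scheme would be unstable for this dt/dR"

for _ in range(1000):           # march forward in time to t = 1.0
    # central difference in R, forward difference in t
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0] = 1.0                  # re-impose the wall boundary condition
```

With this explicit scheme the stability restriction on Δt (here r ≈ 0.08) is the reason the authors note that smaller time steps make the computation expensive.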
Results and Discussion
In order to obtain physical insight into the problem, the velocity, temperature and concentration profiles are discussed below. Figure 2 shows that the velocity decreases as the viscosity variation parameter (γ) increases. The peak velocity for γ = 2.00 is 12.0615% greater than the peak velocity for γ = 5.00; similarly, the peak velocity for γ = 1.00 is 29.564% greater than that for γ = 2.00. The change of peak velocity between γ = 1.00 and γ = -0.50 is particularly drastic, at 113.629%. However, there is no effect of the viscosity variation parameter (γ) on the velocity at R = 4.5 (approximately), as indicated by the circle. Figure 3 shows the velocity curves for different time steps and values of the magnetic parameter (M), while for increasing Schmidt number (Sc) and magnetic parameter (M) the velocity curves take on a different shape and decrease, as indicated in Figure 4. The black long-dashed line shows the case with no periodic magnetic field, and the smoothness of the curves decreases with increasing magnetic parameter (M). Figure 5 shows that the velocity increases as the viscosity variation parameter (γ), thermal conductivity parameter (ε) and magnetic parameter (M) decrease; in descending order of time (t), the differences of velocity between the successive curves of Figure 7 are 71.512% and 19.57%, respectively. Figure 8 takes on a different shape for different values of the thermal conductivity parameter: with increasing thermal conductivity the velocity also increases, the peak velocity being 1.36589 at ε = 2.00, 1.34891 at ε = 1.00, 1.34481 at ε = 0.50, and so on.
Increasing the Schmidt number decreases the molecular diffusivity; hence the velocity curves shift downward with increasing Schmidt number (Sc), as revealed in Figure 9. In descending order of Schmidt number, the percentage differences of velocity between successive curves are 1.538, 1.905 and 45.455. Figure 10 shows the temperature profiles for different values of the Prandtl number (Pr). An increase in Prandtl number corresponds to lower thermal conductivity, so the thermal boundary layer thickness decreases and the temperature falls. There is a significant effect of Pr on the temperature profile at R = 1.55556 (approximately); in ascending order of the curves at this point, the percentage differences of temperature between successive curves are 35.514, 8.513 and 0.688, respectively. Figure 11 shows the temperature profiles for different values of the thermal conductivity parameter; in descending order of thermal conductivity (ε), the differences of temperature between successive curves are 13.497%, 12.942% and 16.52%, respectively. The temperature profiles decrease for combined increasing values of Pr, Sc and M, as indicated in Figure 12. When the Schmidt number (Sc) changes, the concentration curves take on different shapes for fixed values of the remaining parameters, as shown in Figure 13. The streamline value for ε = 2.0 is 0.40% lower than for ε = 0.5.
Finally, a comparison of the present results with the published results of Machireddy [4] is given in Table 1. The agreement of the present results is quantitatively as well as qualitatively good for the flow parameters considered.
Conclusions
A detailed numerical analysis has been performed of the effects of a first-order chemical reaction on the periodic MHD free convective flow of a gas past a moving semi-infinite oscillating cylinder. The concluding remarks are as follows:
• The velocity decreases with an increase of Schmidt number (Sc), Prandtl number (Pr) and periodic magnetic field parameter (M); a higher magnetic field also gives less smooth curves than a lower periodic magnetic field (M), i.e. the wavy curves occur only when the magnetic field (M) is imposed periodically.
• A higher oscillation angle (ϕ) gives a lower starting point of the initial velocity on the wall than a lower oscillation angle (ϕ).
• The velocity profiles increase with decreasing chemical reaction parameter (K) and viscosity variation parameter (γ), while the velocity increases with increasing thermal conductivity parameter (ε).
• The temperature increases for decreasing values of Schmidt number (Sc) and Prandtl number (Pr), and also for increasing values of the thermal conductivity parameter (ε).
• The concentration increases with decreasing values of Schmidt number (Sc), Prandtl number (Pr) and chemical reaction parameter (K).
• The Nusselt number increases for increasing values of Prandtl number (Pr) and Schmidt number (Sc), while the skin friction decreases for increasing values of the periodic magnetic field parameter (M) and Prandtl number (Pr).
• The Sherwood number increases with increasing values of Schmidt number (Sc).
• Increasing the viscosity variation parameter (γ) and thermal conductivity parameter (ε) increases the streamline values; a lower periodic magnetic field (M) also gives smoother streamlines than a higher one.
Figure 2. Velocity profiles for different values of γ.
Figure 3. Velocity profiles for different values of time steps (t) and M.
Figure 6 indicates the velocity profiles for different values of the oscillation angle (ϕ), Gr, Gc and M. For oscillation angle ϕ = π/2 the highest velocity is 1.94659 (starting from 1.00), shown by the red long-dashed line; at ϕ = π/3 the highest velocity is 2.25630, displayed by the black solid line and starting from 1.50; finally, at oscillation angle ϕ = π/4 the highest velocity is 2.38459, starting from 1.7071. With increasing time, the velocity of the fluid also increases gradually, as indicated in Figure 7.
Figure 4. Velocity profiles for different values of Pr, Sc and M.
Figure 5. Velocity profiles for different values of γ, ε and M.
Figure 6. Velocity profiles for different values of Gr, Gc and M.
Figure 7. Velocity profiles for different values of time steps (t) and Pr.
Figure 8. Velocity profiles for different values of ε.
Figure 9. Velocity profiles for different values of Sc.
Figure 10. Temperature profiles for different values of Pr.
Figure 11. Temperature profiles for different values of ε.
Figure 12. Temperature profiles for different values of Pr, Sc and M.
Figure 13. Concentration profiles for different values of Sc.
Figure 15. Skin-friction for different values of Pr.
Figure 16. Skin-friction for different values of Pr, Sc and M.
Figure 20 and Figure 21 show the streamlines for different values of the magnetic parameter. The smooth solid lines indicate the value M = 0.00, i.e. no effect of the periodic magnetic field, while a higher magnetic field (M) produces the non-smooth curves shown in Figure 21. Increasing the viscosity variation parameter (γ) and the thermal conductivity parameter (ε) increases the values of the stream function, as seen in Figure 22 and Figure 23. The streamline value for γ = 2.0 is 6.08% higher than for γ = -0.5, and at the same point the streamlines decrease for increasing values of thermal conductivity (ε).
Figure 17. Nusselt number for different values of Pr.
Figure 20. The streamlines for different values of M.
Figure 21. The streamlines for higher values of M.
Figure 22. The streamlines for different values of γ.
Figure 23. The streamlines for different values of ε.
This analysis concerned a moving semi-infinite oscillating cylinder with variable kinematic viscosity and thermal conductivity. The dimensionless governing partial differential equations were solved numerically by the explicit finite difference method with the help of Compaq Visual 6.6a, and the results were discussed for different values of the well-known flow parameters with different time steps and oscillation angles, leading to the concluding remarks given above.
Table 1. Comparison of the accuracy of the present results with the previous results.
The Impact of the Magnitude of the Group of Bullies on Health-Related Quality of Life and Academic Performance Among Adolescents
This study examines the consequences that physical and verbal/social victimization by peers and the magnitude of the group of bullies have on academic performance and on the psychological and social domains of Health-related Quality of Life (HRQoL). 1428 secondary school students in south-east Spain completed the Spanish versions of the "Adolescent Peer Relations Instrument-Bullying" and "KIDSCREEN-52" questionnaires in order to analyse, respectively, peer victimization and the psychological and social domains of HRQoL. Data on sociodemographic characteristics and academic achievement was also collected. Findings emphasise the potential of peer victimization in all its forms as a risk factor explaining poor HRQoL in psychological, social and emotional domains. The number of bullies was a strong and significant risk factor explaining worse HRQoL in the five socio-psychological dimensions studied (Odds Ratio 4.08, Odds Ratio 9.25, Odds Ratio 4.69, Odds Ratio 2.91, Odds Ratio 11.92). Nevertheless, peer victimization rarely seems to affect academic achievement. Results suggest that much prevention and intervention work is still needed to reduce peer victimization, focusing on large groups of bullies and their harmful impact on adolescents' HRQoL.
Introduction
Unfortunately, peer victimization is a societal phenomenon that has become increasingly common and problematic [1], above all in adolescence [2]. At this stage of life, the number of individuals who are the object of aggressive and/or unsolicited behaviour from peers in any of its forms (physical, verbal or social victimization) is considerable [3]. In fact, approximately 10-30% of adolescents worldwide admit that they have been involved in this type of violence, either as victims or bullies. Furthermore, this behaviour is heterogeneously distributed among different countries [4]. Specifically, in Spain, the probability of having been harassed by peers is above the European Union average and has increased from 17.1% in 2006 to 21.5% in 2014 [5].
More recent studies have shown even higher rates of peer victimization during the last years, suggesting a prevalence of 72% among the adolescent Spanish population [6]. In other words, despite all efforts put into studying and preventing peer victimization, it is still a worrying matter increasing worldwide and, specifically, in Spain. Previous authors have already highlighted the importance of prevention strategies against peer victimization [7]. However, most of them focus on the US or Scandinavian countries. Therefore, new research analysing other population groups is needed, such as the present study, which examines the Spanish adolescent population.
Cyberbullying, another variety of peer victimization, has recently appeared, caused by the incorporation of emerging technologies into our digital society [8]. However, of all types of violence, social and verbal harassment continue to be by far the most common forms of victimization [9]. Specifically, in Spain, recent studies have shown prevalence rates of social and verbal victimization of 46.1% and 29.5%, respectively, of the total types of violence suffered by adolescents [10].
Several previous studies have analysed the effects of peer victimization on adolescents' health and highlighted its significant negative impact on both physical [11] and mental health [8,12], especially when different types of harassment are combined [13]. The concept of health has been defined by the World Health Organization as not only the absence of physical or mental illness, but "a state of complete physical, mental and social well-being" [14]. In connection with this, the term "Health-Related Quality of Life (HRQoL)" has been commonly used as a multidimensional construct that covers physical, mental and social functioning [15]. In this sense, it has been clearly demonstrated that adolescents who are victimized by their peers have a poorer HRQoL than those who are not [16]. In addition, this not only affects adolescence but also usually continues during adulthood, even if the violence episodes have stopped [17]. To date, most studies on this topic have focused on studying the negative effects of victimization on overall HRQoL [18,19]. However, less is known about the outcome of victimization on each specific HRQoL domain. It would be interesting to know which domains of HRQoL are most affected in order to improve the efforts made towards the personal recovery of victimized adolescents. The aim of this study is to investigate how different types of peer victimization affect the psychological and social domains of adolescents' HRQoL.
The effects of peer victimization on victims' academic performance are controversial, as some studies indicate that they are clearly harmful [20], while others indicate that the impact is negligible or even null [21,22]. Even less is known about the impact on academic performance depending on what type of victimization the adolescent has suffered. This study aims to contribute by providing further data that may help shed light on this controversial issue and add new information about the impact that different types of victimization may have on academic achievement.
Although the consequences of peer victimization on adolescent population have been commonly studied, less attention has been paid to the impact that the group of bullies may have on adolescents' HRQoL or academic performance. The present study is one of the first to explore the importance of the magnitude of the group of bullies as a possible risk factor that may explain a further worsening of the victimized adolescents' HRQoL or academic performance.
Accordingly, this study was conducted to assess the consequences of different forms of peer victimization (physical and verbal/social) on the psychological and social domains of HRQoL. This study aims to provide further data that may clarify the impact of peer victimization on academic performance. We further sought to explore whether the magnitude of the group of bullies is a risk factor that by itself could explain a worsening of victims' HRQoL and academic performance.
General Design and Participants
A retrospective cross-sectional study was conducted. The study included all secondary school students (12-16 years old) from all schools in the town of Torre Pacheco (Southeast Spain) (N = 1476). This area was chosen due to its great ethnic variety.
Information was collected during the last 30 days of the school year from self-completed questionnaires. The questionnaires were administered by major teachers during class time. Adolescents were given 1 h to complete the entire questionnaire. All questionnaires were completed anonymously after written consent was obtained from parents via Parents' Associations in all participating schools.
Measures
Sociodemographic variables were collected by means of a questionnaire designed ad hoc. Those variables include age, sex, family structure (Nuclear, Mononuclear or No parents at home), ethnic origin (parents' birthplace: both Spanish, one Spanish, Maghreb, Latin-Ecuador or Others which included all other options) and parental educational attainment. Based on the procedure proposed by the Spanish Society of Epidemiology [23], information about social class in terms of parents' employment was also included. For both social class and parental educational attainment, the highest positions of both parents were taken as a reference.
Peer victimization was measured using an adapted-to-Spanish version of the validated Adolescent Peer Relations Instrument-Bullying (APRI), developed by Parada [24,25]. This scale contains 18 items and measures two dimensions of peer victimization: physical victimization (6 items) and verbal/social victimization (12 items). Each item is rated on a 4-point Likert scale indicating the frequency of victimization from the beginning to the end of the academic year (nine months) (0 = Never/seldom, 1 = Frequently, 2 = Very often, 3 = Constantly). The score for each dimension was calculated as the sum of the respective items. The higher the score on each subscale, the stronger the victimization suffered by the teenager. To facilitate the interpretation of this study, verbal/social victimization was categorized into tertiles, and physical victimization was divided into four distinct categories according to the number of physical violence episodes suffered by the adolescent (0, 1, 2-4, ≥ 5). In this sample, Cronbach's alpha was 0.85 for "physical victimization" and 0.92 for "verbal/social victimization". In addition to its good internal consistency, this questionnaire was selected for its applicability to the adolescent population and for its ability to differentiate between different subtypes of victimization.
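The scoring and categorization just described can be sketched as follows. The item ordering (first six items physical) and the function names are illustrative assumptions, not part of the APRI itself.

```python
# Hypothetical sketch of the APRI scoring described above: 18 items rated
# 0-3; here the first 6 are assumed to form the physical subscale and the
# remaining 12 the verbal/social subscale. Each score is the sum of items.
def apri_scores(items):
    assert len(items) == 18 and all(0 <= v <= 3 for v in items)
    physical = sum(items[:6])          # possible range 0-18
    verbal_social = sum(items[6:])     # possible range 0-36
    return physical, verbal_social

def physical_category(episodes):
    """Four categories used in the study: 0, 1, 2-4, >= 5 episodes."""
    if episodes == 0:
        return 0
    if episodes == 1:
        return 1
    return 2 if episodes <= 4 else 3
```

Higher subscale scores indicate stronger victimization, matching the interpretation given in the text.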
The magnitude of the group of bullies was assessed by calculating the total number of peers who had bullied the adolescent during the last academic year (0 = None, 1 = 1 or 2 bullies, 2 = 3 bullies, 3 = 4 or more bullies).
Health-related Quality of Life (HRQoL) was measured by analysing five out of ten domains of the validated and adapted to Spanish KIDSCREEN-52 questionnaire [26]: Psychological Well-being (6 items), Mood and Emotions (7 items), Social Support and Peers (6 items), School Environment (6 items) and Social Acceptance (3 items). Each item is rated on a five-point Likert scale corresponding to feelings of well-being over the previous week (1 = Never, 2 = Seldom, 3 = Sometimes, 4 = Often, 5 = Always). Rates were calculated independently for each dimension as T-values of the Rasch scores corresponding to the sum of the response options [27]. Higher scores correspond to a better quality of life. The internal consistency of the five KIDSCREEN-52 domains analysed in the study ranged between a Cronbach's alpha of 0.78 and 0.85. This questionnaire was selected because of its applicability in different cultural contexts and its practical use as well as its good internal consistency.
Academic performance was estimated using two indicators: Academic excellence and Academic failure in core secondary school subjects (Mathematics, Social Sciences, English Language, Natural Sciences and Spanish Language). Academic excellence was assessed by calculating the average of the scores obtained in these six subjects in a range from 0 to 4 (Poor or below F = 0, Not acceptable F = 1, Pass C = 2, Very good B = 3, Excellent A = 4). For this purpose, normalized values were used by calculating the average school score for each academic subject. The adolescent was considered to have achieved academic excellence when his or her grade point average was above one standard deviation from his/her school average. Academic failure was derived by determining the number of subjects with a grade lower than a pass mark. The adolescent was considered to have failed academically if he/she had one subject below a pass mark. Data was obtained from the information provided by the students themselves regarding the last exam they had taken.
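The two indicators described above can be sketched as follows; function and variable names are illustrative, while the 0-4 grade coding and the pass mark (C = 2) follow the text.

```python
# Sketch of the two academic performance indicators described above.
# Grades are coded 0-4 across the six core subjects (F=0/1, C=2, B=3, A=4).
def academic_indicators(grades, school_mean, school_sd):
    avg = sum(grades) / len(grades)
    # Excellence: grade point average more than one SD above the school average.
    excellence = 1 if avg > school_mean + school_sd else 0
    # Failure: at least one subject graded below the pass mark (2 = C).
    failure = 1 if any(g < 2 for g in grades) else 0
    return excellence, failure
```

For example, a student with grades [4, 4, 4, 4, 3, 4] in a school whose average is 2.5 (SD 1.0) is classed as excellent with no failure, while one with a single F is classed as having failed academically.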
Statistical Analysis
Adolescents' sociodemographic characteristics were collected using descriptive analysis by calculating frequencies and percentages.
Several multivariate analyses were performed using binary logistic regression to estimate peer victimization (physical and social/verbal) and "bullies' group density" associations with the likelihood of a poor HRQoL, academic failure or not achieving academic excellence. Each form of peer victimization and "bullies' group density" variable was taken as an independent variable, and HRQoL and academic performance as criterion variables. All models were adjusted by sociodemographic variables that had previously had p-values below 0.10 in the univariate logistic model.
For KIDSCREEN-52, the mean scores varied around 50 (SD = 10) due to T-value standardization [27]. Poor HRQoL was assigned to values below the 50th percentile (P50) or Median. Dichotomized values were used for dependent variables in logistic regression models: KIDSCREEN-52 dimensions (T-values below the 50th percentile were represented by a score of 1, and all others by zero); degree of academic excellence (1 = average score equal to or greater than one standard deviation from school average, 0 = average score less than one standard deviation from school average); degree of academic failure (1 = one or more failures, 0 = no failure).
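The dichotomization of the KIDSCREEN-52 dependent variables can be sketched as follows; this is a minimal illustration assuming the median (P50) is computed over the sample's T-scores for one dimension.

```python
import statistics

# Sketch of the dichotomization used for the logistic models: T-scores
# below the sample median (P50) are coded 1 (poor HRQoL), all others 0.
def dichotomize_hrqol(t_scores):
    p50 = statistics.median(t_scores)
    return [1 if t < p50 else 0 for t in t_scores]
```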
When half or more of the subscale items were available within each APRI domain and KIDSCREEN scale for a particular participant, missing data was replaced by the mean score of the remaining items on the same subscale. Otherwise, data was excluded for the analysis of the affected dimension [28]. The extent of missing data for each APRI and KIDSCREEN-52 domain ranged from 16 missing data for the "Physical Victimization" scale to 90 for the "School Environment" scale.
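The half-rule for missing items can be sketched as follows; this is an illustrative helper (not the SPSS procedure used by the authors), with None marking a missing item.

```python
# Sketch of the half-rule handling of missing items described above: if at
# least half of a subscale's items were answered, each missing item takes
# the mean of the answered ones; otherwise the subscale is excluded (None).
def impute_subscale(items):
    answered = [v for v in items if v is not None]
    if 2 * len(answered) < len(items):
        return None                     # more than half missing: exclude
    mean = sum(answered) / len(answered)
    return [mean if v is None else v for v in items]
```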
All analyses were performed using the Statistical Package for Social Sciences SPSS-24.0. p values < 0.05 were considered to be statistically significant.
Sociodemographic Characteristics
The participation rate reached 96.7% (n = 1428). Fifty-four adolescents did not complete the questionnaire, thus an effective rate of 95.6% was achieved (n = 1411). Participants included 745 boys (52.8%) with a mean age of 14.8 (SD 1.4) and an age range of 12-18. The majority (85.3%) came from nuclear families, in which both parents were Spanish (64.5%). Adolescents from Maghrebi (18.2%) or Latin backgrounds (9.6%) were the next two most common categories in frequency. Two-thirds of the main breadwinners worked in semi-skilled or unskilled manual jobs and only one fifth had higher education (Table 1).
Health-Related Quality of Life According to Sociodemographic Characteristics
After controlling for potential confounders (Table 2), multivariate analyses showed that girls had a lower probability of having a problematic HRQoL in the following dimensions: School Environment (95% CI 0.51-0.85) and Social Acceptance (OR 0.73, p = 0.019), but conversely had a greater risk of poorer Mood and Emotions quality-of-life values (OR 1.49, p = 0.005).
The youngest adolescents were less likely to have lower scores in most HRQoL categories, especially when compared to the middle age group in Psychological Well-being (OR 0.52, p < 0.001), Social Support and Peers (OR 0.71, p = 0.034) and School Environment values (OR 0.59, p < 0.001), and when compared to the oldest group in Psychological Well-being (OR 0.57, p = 0.002).
Social class had a significant effect only in the case of Moods and Emotions quality of life values. Adolescents from lower classes had 2.17 times more risk of having worse HRQoL than adolescents from classes I and II (p = 0.009).
Compared to adolescents with two Spanish parents, children of Maghrebi, Latin or other ethnicities had 2.08, 1.73 and 1.81 times the risk, respectively, of having worse scores on the Social Support and Peers subscale. However, Maghrebi ethnicity turned out to be an independent protective factor for HRQoL related to School Environment when compared to adolescents with two Spanish parents (95% CI 0.45-0.95).
The type of family and parental educational attainment covariates were not significant in any of the regression analyses, suggesting that they had no detectable influence on adolescent HRQoL in this sample.
Academic Performance According to Sociodemographic Characteristics
The adjusted analyses (Table 3) showed that boys, older students and those who came from non-nuclear families obtained worse results for academic performance. Also, academic achievement worsened significantly in the least privileged classes.
By contrast, good parental educational attainment acted as an independent protective factor: adolescents whose parents had secondary or higher education were more likely to achieve academic excellence (OR 1.73, p = 0.02, and OR 2.17, p = 0.007, respectively); likewise, children whose parents had received higher education were also less likely to fail academically (95% CI 0.38-0.84).
Compared to adolescents with two Spanish parents, teenagers from a Latin background had significantly worse academic performance, with a two-fold increase in the risk of academic failure (95% CI 1.26-3.38) and lower odds of achieving academic excellence (95% CI 0.05-0.53). The results also revealed that Maghrebi children were the most likely to fail (OR 1.77, p = 0.003).
Associations of HRQoL with Peer Victimization and the Magnitude of the Group of Bullies
The magnitude of the group of bullies was a significant risk factor explaining lower HRQoL in all KIDSCREEN-52 subscales. Furthermore, both types of victimization analysed (physical, verbal/social) were closely associated with worse scores on all the HRQoL subscales. The dimension most affected by all types of victimization was Social Acceptance, followed by Moods and Emotions; in the most serious cases of harassment, these dimensions showed 15.86-fold and 8.35-fold (p < 0.001) increased risks, respectively, of worse scores in the event of physical violence. In the case of verbal/social violence, increased risks of 9.06-fold and 5.48-fold (p < 0.001), respectively, were detected in these two dimensions (Table 2).
The Role of Peer Victimization and the Magnitude of the Group of Bullies in Academic Performance
Multivariate analyses (Table 3) revealed that only serious physical victimization (score ≥ 5 on the APRI scale) had an academic impact on adolescents, as physically harassed children were 1.67 times more likely to fail academically (p = 0.048). The density of the group of bullies had an independent effect on poor academic results: the odds of achieving academic excellence were 10 times lower when there were more than four bullies (OR 0.10, 95% CI 0.01-0.71, p = 0.022).
Discussion
The results of this study show that adolescent victims of peer victimization, in any of its manifestations (physical and verbal/social), have poorer health-related quality of life in psychological, emotional and social domains. Furthermore, the size of the group of bullies, an as-yet poorly studied phenomenon, clearly has a significant independent negative impact on HRQoL. Previous research has shown that peer victimization has a harmful effect on the overall HRQoL of adolescents suffering from this type of harassment [16,18,19]. This investigation demonstrates that this effect persists when examining the influence of peer victimization on specific sociopsychological HRQoL domains, with physical victimization being the most damaging. Some previous research has reported that physical victimization is the most harmful [29], while other studies suggested that social victimization is the one that most affects adolescents' psychological wellbeing [30]. This study identifies physical victimization as the form that most damages adolescents' HRQoL. One hypothesis that could explain this finding is that indirect victimization behaviours (e.g., social exclusion) are usually regarded as less serious than physical ones; the lack of visible physical injuries may result in a lack of awareness of the seriousness of these behaviours and, therefore, a less damaging impact on adolescents' HRQoL [29]. Alternatively, young people who have experienced physical victimization are more likely to have also experienced other violent behaviours (e.g., insults while being assaulted), whereas those who have suffered verbal or social victimization may not have suffered physical abuse. Previous studies have shown that experiencing multiple types of victimization is associated with worse HRQoL outcomes [11]. Therefore, those who have experienced physical victimization along with other violent behaviours would show a greater impact on their HRQoL.
In addition, as other studies have shown, this research reaffirms that the more serious and frequent the violence episodes are, the more pronounced the impact on adolescents' HRQoL is [13,31].
Results show that the psychological domains of HRQoL are severely affected in adolescents who have experienced both physical and verbal/social victimization; in fact, one of the most affected dimensions in the study is "Mood and Emotions". These findings are in line with studies suggesting that young people who have been bullied by their peers have higher rates of psychological distress [32]. It is well known that emotional/psychological distress is associated with a higher likelihood of poorer mental health [33]. The importance of these data lies in the fact that peer victimization is most frequent at secondary schools [34] and occurs at a critical age in terms of the onset of mental disorders [35], which are among the most important causes of Disability-Adjusted Life Years lost in young people [36]. The results of this study suggest that interventions for victims should focus on strengthening the psychological and emotional spheres of HRQoL. Also, efforts to prevent peer victimization should be concentrated on this period of life, since the devastating consequences mentioned can extend into adulthood [17].
The finding that the magnitude of the group of bullies acts as an independent negative factor affecting HRQoL is especially noteworthy, since it is a phenomenon that has scarcely been studied. These results are logical and expected, as one may feel more unprotected and less socially supported the more bullies are involved in the harassment. These findings also suggest that empowering adolescents not to imitate or follow bullies is a way to avoid the creation of large groups of bullies and, thus, prevent worse future consequences for adolescents' HRQoL. On the other hand, it has also been demonstrated that suffering peer victimization is a risk factor for becoming a bully [37]. People who have suffered victimization and have become bullies are known as "bully-victims", and they have a higher risk of experiencing traumatic symptoms and adversity than adolescents who are victims only [38]. For this reason, it is important to consider preventive measures to avoid peer victimization and, thus, reduce the emergence of "bully-victim" adolescents. In this way, the creation of large groups of bullies that seriously damage adolescents' HRQoL can be avoided, as well as the devastating consequences of the "bully-victim" status for their psychosocial well-being.
Unlike the findings of other studies [39], this research found that sociodemographic variables such as type of family or parents' academic level do not seem to affect children's emotional, psychological or social HRQoL domains. However, the Mood and Emotions dimension does seem to be affected in the less privileged social classes. Furthermore, ethnic origin may affect young people's quality of life [40], since Maghrebi adolescents and those from a Latin background showed a lower quality of perceived social support. Socio-ecological interventions within adolescents' communities would be necessary to modify aggressive attitudes towards these less privileged groups. In addition, prevention and intervention efforts aimed at the less privileged social classes and non-Spanish ethnic groups should focus on creating quality social support networks to promote better HRQoL. These findings are a reminder of how important it is to study HRQoL in the light of cultural and sociodemographic factors, which should also be considered when assessing the risk of low HRQoL.
In addition to the lack of clear evidence relating peer victimization and academic performance [20][21][22], there is still no research into how different types of violence independently influence academic performance. This study shows that high levels of physical victimization are related to greater academic failure in adolescents; however, verbal/social violence seems to have no effect on academic achievement. In the same way, the analysis of the influence of the group of bullies on academic performance shows that this factor is a great obstacle to achieving academic excellence. The controversy in previous studies may be explained by the fact that victimization was studied as a whole and not according to specific types of violence. This study highlights the need to draw attention to adolescents with low academic performance and explore the possible existence of physical victimization.
The link between underprivileged sociodemographic factors and impaired academic performance is in accordance with previous research [41]. This study confirms that the most disadvantaged social classes, adolescents from non-nuclear families and those whose parents have a low educational level are at greater risk of failing academically. The same occurs with young people from non-native ethnic groups, especially adolescents from South American and Maghrebi countries, possibly due to their linkage with more disadvantaged socio-demographic factors. These results suggest the need for better academic support measures for adolescents from social classes and ethnic groups at risk of exclusion.
This study had several limitations. First, only episodes of victimization that had taken place in the previous academic year were analysed. Such a short period of time precludes speculation about longer-term outcomes, as well as the detection of relevant information about episodes of peer violence from previous school years. Second, the cross-sectional nature of the study makes it difficult to clarify the cause-effect mechanism of the investigated associations. Future prospective studies with a longer follow-up time are necessary if data on as many violence episodes as possible are to be obtained and causal relationships between the studied elements are to be established. Third, the information was obtained from self-administered questionnaires, so the possibility of a recall bias cannot be ruled out. Cross-checking adolescents' information against different sources would be desirable in future studies. Fourth, the participation of all students was not possible, which may be related to the fact that peer victimization is linked to absenteeism [21], and some potential violence events may have gone unrecorded. Fifth, cyberbullying, which is increasingly important today, was not considered; a similar study including this phenomenon could be relevant for future research. Finally, the effects of peer victimization on academic performance were studied without controlling for academic achievement prior to victimization. It is therefore difficult to draw any final conclusions regarding this finding.
On the other hand, the large sample size, the wide ethnic variety, the different social classes included and the range of adolescent age groups were strengths of this study. In addition, to the best of our knowledge, this was one of the first studies to show how different HRQoL domains are affected by the type of peer victimization, as well as one of the first to study the impact of the magnitude of the group of bullies as an independent risk factor for poorer HRQoL.
Taken together, this study emphasizes the harmful impact that peer victimization, in all its forms, has on the sociopsychological domains of adolescents' HRQoL. The findings also point to large groups of bullies as a factor to consider in interventions related to peer victimization. The results of this study are relevant enough to warrant further investigation of the magnitude of the group of bullies as an individual risk factor for poorer HRQoL. Equally, this research provides fresh reasons why school violence should continue to be regarded as a major public health problem.
Summary
This study examines the consequences that physical and verbal/social victimization by peers and the magnitude of the group of bullies have on academic performance and on the psychological and social domains of Health-related Quality of Life (HRQoL). A total of 1428 secondary school students in south-east Spain completed the Spanish versions of the "Adolescent Peer Relations Instrument-Bullying" and "KIDSCREEN-52" questionnaires in order to analyse, respectively, peer victimization and the psychological and social domains of HRQoL. Data on sociodemographic characteristics and academic achievement were also collected. Multivariate analyses were performed using binary logistic regression to study potential associations. The findings emphasise the potential of peer victimization in all its varieties as a risk factor explaining poor HRQoL in psychological, social and emotional domains. Both physical and verbal/social victimization were strongly associated with low HRQoL in both psychological and social domains. The number of bullies was an important and significant risk factor explaining worse HRQoL in the five socio-psychological dimensions studied (odds ratios 4.08, 9.25, 4.69, 2.91 and 11.92). Nevertheless, peer victimization rarely seems to affect academic achievement. Academic performance was only affected in adolescents who suffered serious physical victimization (odds ratio 1.67); no influence of verbal/social victimization on academic performance was detected. Groups of bullies (three bullies or more) had an independent effect on poor academic results (odds ratio 0.10 for attaining academic excellence). The results suggest that much prevention and intervention work is still needed to reduce peer victimization, focusing on large groups of bullies and their harmful impact on adolescents' HRQoL.
We determine theoretically the interaction between two magnetic impurities embedded in a spin-split $s$-wave superconductor. The spin-splitting in the superconductor gives rise to two different interaction types between the impurity spins, depending on whether their spins lie in the plane perpendicular to the spin-splitting field (Heisenberg) or not (Ising). For impurity separation distances exceeding $\xi_S$, we find that the magnitude of the spin-splitting can determine whether an antiferromagnetic or ferromagnetic alignment of the impurity spins is preferred by the RKKY interaction. Moreover, the Ising and Heisenberg terms of the RKKY interaction alternate on being the dominant term and their magnitudes oscillate as a function of distance between the impurities.
I. INTRODUCTION
Superconductors have been experimentally demonstrated to exhibit strongly modified spin-dependent transport properties [1,2] with respect to normal metals, such as spin relaxation times [3][4][5][6] and magnetoresistance effects [7]. Consequently, superconductors have the potential to advance research on spintronic devices, in which the spin of the electron is utilized as the information carrier instead of the electronic charge [8][9][10]. Intrinsically coexisting ferromagnetism and superconductivity, proposed more than 60 years ago [11][12][13], is only possible under rather strict conditions. On the other hand, by creating hybrid structures of ferromagnetic and superconducting materials, it is possible to study the interplay between these orders by virtue of the proximity effect [14].
The Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction [15][16][17] between magnetic impurities is an exchange interaction mediated by the conduction electrons of the host material in which the impurities are embedded. This interaction has been vastly studied in different materials with spin degeneracy, including systems with Dirac fermion excitations [18][19][20] and superconducting materials [21][22][23][24][25][26]. In a clean metal, the RKKY interaction decays as $r^{-d}$, where $r$ is the distance between the impurities and $d$ is the dimension of the system. Likewise, the interaction decays faster in higher dimensions in superconducting systems.
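To illustrate the oscillatory, power-law-decaying character referred to above, here is a minimal sketch (not taken from this paper) of the textbook large-distance asymptotic form of the normal-metal RKKY coupling, $J(r) \propto \cos(2 k_F r)/r^d$; the function name and normalization are illustrative.

```python
import math

# Textbook large-distance asymptotic form of the normal-metal RKKY coupling:
# J(r) ~ cos(2 k_F r) / r^d. The overall prefactor is set to 1 here, so only
# the sign (AFM-favouring for J > 0, FM-favouring for J < 0, in the sign
# convention used in this paper) and the power-law decay are meaningful.

def rkky_asymptotic(r, k_F, dim):
    """Unnormalized asymptotic RKKY coupling at impurity separation r."""
    return math.cos(2.0 * k_F * r) / r**dim
```

The sign alternates with period $\pi/(2k_F)$, while the envelope decays as $r^{-d}$, faster in higher dimensions.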
In the presence of spin-degeneracy, the RKKY interaction between magnetic impurities is isotropic in spin space and has no preferred direction for the impurity magnetic moments. On the other hand, it has been shown that in spin non-degenerate systems, the interaction can have different terms of the types Heisenberg, Ising and Dzyaloshinskii-Moriya (DM) [27], depending on the spin structure of the host material. For instance, in a uniformly spin-polarized system the Ising term arises [28], whereas in systems with spin-orbit interactions a DM interaction term can emerge [29][30][31][32][33]. In particular, the interaction between magnetic impurities located on top of an $s$-wave superconductor with Rashba spin-orbit coupling has been found to feature an additional DM term due to the spin-orbit coupling in the superconductor [34]. Similar results have been obtained for the interaction between magnetic impurities on top of a topological insulator with proximity-induced superconductivity from an $s$-wave superconductor [35].
To the best of our knowledge, the RKKY interaction between magnetic impurities in a spin-split superconductor (see Fig. 1) has not been studied. Such superconductors have in recent years been demonstrated to give rise to interesting spin-dependent thermoelectric effects and spin diffusion properties [36]. Due to the spin-splitting, the density of states in the superconductor acquires a large spin-dependent particle-hole asymmetry. Therefore, one might expect that the RKKY interaction could be modified compared to both the purely superconducting case and the case of a superconductor with spin-orbit interaction.
In practice, a spin-split superconductor is achieved either by exposing a thin-film superconductor to a strong in-plane magnetic field or by growing a thin-film superconductor on top of a ferromagnetic insulator. In this case, the thickness of the superconductor has to be much smaller than the magnetic penetration depth $\lambda$. When the superconductor has a thickness smaller than the superconducting coherence length $\xi_S$, it can be well approximated by a superconductor coexisting with a homogeneous spin-splitting field. In this paper, we will consider the RKKY interaction between two magnetic impurity atoms embedded in a spin-split conventional $s$-wave superconductor, contrasting it to the interaction between magnetic impurities in a normal metal subject to a spin-splitting field. While the RKKY interaction in the normal-metal case is mediated by electrons, the RKKY interaction in the superconducting case is mediated by quasiparticles that are a mix of electron and hole excitations. However, in both the superconducting and normal cases, a spin-splitting field induced via proximity to a ferromagnetic insulator lifts the spin degeneracy of the system. This causes the RKKY interaction to have two parts: a Heisenberg and an Ising term. In the present context, the Heisenberg term denotes the interaction energy obtained when the impurity spins lie in the plane perpendicular to the spin-splitting field. The Ising term describes the interaction for the case when the impurity spins are collinear with the spin-splitting field. We find that it is possible to switch between an antiferromagnetic (AFM) and a ferromagnetic (FM) interaction between the magnetic impurities by adjusting the magnitude of the spin-splitting field. While this effect is in principle attainable even in the normal state of the system, it is considerably more robust in the superconducting state, where it occurs in a much larger regime of separation distances between the impurities compared to the normal state.
We discuss a possible experimental way to adjust the spin-splitting field strength in order to see this effect. Moreover, we find that the magnitudes of the Ising and Heisenberg terms of the RKKY interaction oscillate as a function of distance between the impurities, causing them to take turns on which is the dominant term.
This paper is structured as follows. We introduce the methodology used to compute the RKKY interaction in Sec. II. In Sec. III, we present a numerical evaluation of the expression for the RKKY interaction and discuss the underlying physics of its behavior. Finally, we summarize our findings in Sec. IV.
II. MODEL AND METHODS
We consider a thin-film $s$-wave superconductor in the presence of a spin-splitting field which causes a spin splitting of the electron bands, as shown in Fig. 1. The superconductor is modelled by a tight-binding Hamiltonian including an attractive interaction between the electrons,
$$H = -t\sum_{\langle i,j\rangle,\sigma} c_{i\sigma}^{\dagger}c_{j\sigma} + U\sum_{i} n_{i\uparrow}n_{i\downarrow} - h_{\mathrm{exc}}\sum_{i,\sigma}\sigma\, c_{i\sigma}^{\dagger}c_{i\sigma}. \quad (1)$$
The first term represents the nearest-neighbour hopping, with $t$ being the hopping parameter. The second term is the BCS on-site attractive interaction, with $U < 0$ being the pairing strength. In the third term, $h_{\mathrm{exc}}$ is the spin-splitting field. In our model, we consider this field to be oriented along the $z$-axis in spin space, which is assumed to lie in the film plane of the superconductor. The Meissner response of the superconductor is well known to be suppressed in a thin-film geometry when the field is applied in-plane, and we may neglect orbital effects. We consider the system to have continuous boundary conditions along both in-plane directions (the $x$ and $y$ axes here). Using a Fourier transformation,
$$c_{i\sigma} = \frac{1}{\sqrt{N}}\sum_{k} e^{i k\cdot r_i}\, c_{k\sigma},$$
where $N$ is the total number of lattice points, leads to the following form of the Hamiltonian in $k$-space,
$$H = \sum_{k,\sigma}\left(\epsilon_{k} - \sigma h_{\mathrm{exc}}\right)c_{k\sigma}^{\dagger}c_{k\sigma} + \frac{U}{N}\sum_{k,k'} c_{k\uparrow}^{\dagger}c_{-k\downarrow}^{\dagger}c_{-k'\downarrow}c_{k'\uparrow},$$
where $\epsilon_{k} = -2t\left[\cos(k_x a_x) + \cos(k_y a_y)\right] - \mu$, in which $a_x$ ($a_y$) is the lattice constant along the $x$ ($y$) axis and $\mu$ is the chemical potential. Here, we have redefined $U/N \to U$.
Performing a mean-field treatment, we introduce the superconducting gap
$$\Delta = U\sum_{k}\langle c_{-k\downarrow}c_{k\uparrow}\rangle.$$
We then obtain
$$H_0 = \sum_{k,\sigma}\left(\epsilon_k - \sigma h_{\mathrm{exc}}\right)c_{k\sigma}^{\dagger}c_{k\sigma} + \sum_{k}\left(\Delta\, c_{k\uparrow}^{\dagger}c_{-k\downarrow}^{\dagger} + \Delta^{*}\, c_{-k\downarrow}c_{k\uparrow}\right) - \frac{|\Delta|^2}{U}.$$
Using the following transformation (see Appendix A for details),
$$c_{k\uparrow} = u_k\,\gamma_{k\uparrow} + v_k\,\gamma_{-k\downarrow}^{\dagger}, \qquad c_{-k\downarrow}^{\dagger} = -v_k^{*}\,\gamma_{k\uparrow} + u_k^{*}\,\gamma_{-k\downarrow}^{\dagger}, \quad (5)$$
where
$$u_k^2 = \frac{1}{2}\left(1 + \frac{\epsilon_k}{E_k}\right), \qquad v_k^2 = \frac{1}{2}\left(1 - \frac{\epsilon_k}{E_k}\right),$$
the diagonalized form of $H_0$ will be
$$H_0 = \sum_{k,\sigma} E_{k,\sigma}\,\gamma_{k\sigma}^{\dagger}\gamma_{k\sigma} + E_g,$$
with $E_g$ a constant. Here, $E_k = \sqrt{\epsilon_k^2 + \Delta^2}$ and $E_{k,\sigma} = E_k - \sigma h_{\mathrm{exc}}$. Expressing the electron operators in terms of the quasiparticle operators, Eq. (5), the gap equation takes the form
$$\Delta = -U\sum_{k}\frac{\Delta}{2E_k}\left[1 - f(E_{k,\uparrow}) - f(E_{k,\downarrow})\right].$$
In this study, the gap equation is solved self-consistently. Further, the free energy of the system is given by
$$F = E_g - k_B T\sum_{k,\sigma}\ln\left(1 + e^{-E_{k,\sigma}/k_B T}\right). \quad (9)$$
An important characteristic length scale in the system is the superconducting coherence length $\xi_S$, which is indicative of the size of the Cooper pairs. In the BCS formalism, this quantity for an isotropic $s$-wave superconductor is given by
$$\xi_S = \frac{\hbar v_F}{\pi \Delta_0},$$
where $v_F$ is the Fermi velocity and $\Delta_0$ is the superconducting gap at zero temperature. The Fermi velocity is
$$v_F = \frac{1}{\hbar}\left|\nabla_k\, \epsilon_k\right|_{k = k_F}.$$
The main purpose of this paper is to determine the indirect exchange interaction between two magnetic impurity atoms mediated by the quasiparticles inside a superconductor described by the Hamiltonian in Eq. (1). The coupling between the quasiparticle spins and the magnetic impurities will be treated perturbatively. The total Hamiltonian can then be written as $H = H_0 + \Delta H$, in which the first part is the non-perturbative Hamiltonian given by Eq. (1) and the second part is the perturbation defined by
$$\Delta H = J\sum_{i=1,2} \mathbf{S}_i\cdot \mathbf{s}_{r_i}.$$
Here, $J$ is the strength of the interaction between the spin of an impurity atom ($\mathbf{S}_i$) and an itinerant spin ($\mathbf{s}_{r_i}$) at lattice site $r_i$. The impurity spin is treated classically, like a normal vector, whereas the itinerant spin is treated quantum mechanically and represented by the operator $\mathbf{s}_{r} = c_{r}^{\dagger}\,\boldsymbol{\sigma}\, c_{r}$. Here, $\boldsymbol{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ is the Pauli matrix vector. Performing a Fourier transformation, the perturbation term in the Hamiltonian becomes
$$\Delta H = \frac{J}{N}\sum_{i=1,2}\sum_{k,k'} e^{i(k-k')\cdot r_i}\, \mathbf{S}_i\cdot c_{k}^{\dagger}\,\boldsymbol{\sigma}\, c_{k'},$$
with $c_k = (c_{k\uparrow}, c_{k\downarrow})^{T}$. By means of Eq. (5), we change the $c_{k\sigma}$ operators into quasiparticle operators. Then, by means of a Schrieffer-Wolff transformation (SWT), the effective interaction between the magnetic impurity atoms is obtained to second order in the coupling $J$.
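The self-consistent solution of the gap equation described above can be sketched numerically as follows. This is a minimal stdlib-only illustration: the parameter values, grid size, and energy units (the hopping $t$) are illustrative choices, not the ones used in this paper.

```python
import math

# Minimal numerical sketch of the self-consistent gap iteration for a 2D
# square lattice with a Zeeman (spin-splitting) field h. Energies are in
# units of the hopping t; all parameter values here are illustrative.

def fermi(E, T):
    """Fermi-Dirac occupation with overflow guards."""
    x = E / T
    if x > 40.0:
        return 0.0
    if x < -40.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(x))

def solve_gap(t=1.0, mu=-1.0, U=-2.5, h=0.0, T=0.01, n=64, tol=1e-8):
    """Iterate Delta = -(U/N) sum_k Delta/(2 E_k) [1 - f(E_k - h) - f(E_k + h)]
    to a fixed point, with eps_k = -2 t (cos kx + cos ky) - mu."""
    ks = [2.0 * math.pi * i / n for i in range(n)]
    delta = 0.1                       # small seed gap
    for _ in range(500):
        s = 0.0
        for kx in ks:
            for ky in ks:
                eps = -2.0 * t * (math.cos(kx) + math.cos(ky)) - mu
                E = math.hypot(eps, delta)
                s += delta / (2.0 * E) * (
                    1.0 - fermi(E - h, T) - fermi(E + h, T))
        new = -U * s / n**2
        if abs(new - delta) < tol:
            return new
        delta = new
    return delta
```

With these illustrative parameters, the iteration converges to a finite gap at low temperature and zero field, while the gap collapses when the temperature or the spin-splitting field is made large, mirroring the behaviour shown in Fig. 1(c).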
To obtain the effective interaction, we consider a unitary matrix of the form $U = e^{S}$, with $S$ anti-Hermitian. The unitary transformation of the total Hamiltonian is then
$$\tilde{H} = e^{S} H e^{-S}.$$
The above equation may be expanded as
$$\tilde{H} = H + [S, H] + \frac{1}{2}[S, [S, H]] + \ldots,$$
where we take $H = H_0 + \Delta H$ and discard higher-order terms in $J$. This leads to the following effective Hamiltonian for the system,
$$\tilde{H} = H_0 + \Delta H + [S, H_0] + [S, \Delta H] + \frac{1}{2}[S, [S, H_0]].$$
We now choose the unitary transformation so that $\Delta H + [S, H_0] = 0$ and the effective Hamiltonian becomes
$$\tilde{H} = H_0 + \frac{1}{2}[S, \Delta H].$$
In order to accomplish this, we consider an Ansatz for $S$ that is linear in the quasiparticle bilinears appearing in $\Delta H$. Computing the commutator $[S, H_0]$ and requiring $\Delta H + [S, H_0] = 0$ fixes the coefficients in $S$. The final form of the effective Hamiltonian $\tilde{H}$ is obtained after calculating $[S, \Delta H]$. In this Hamiltonian, we neglect terms representing feedback from the impurity spins on the superconductor. Feedback from the impurities would ideally be included by self-consistently taking into account both the effect of the presence of the superconductor on the impurity spins and the effect of the impurity spins on the superconducting gap, giving rise to spatial variation of the superconducting order parameter. As the density of impurities in the system is very low, neglecting feedback from the impurities can be justified.
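As a quick numerical sanity check of this elimination step, consider a toy two-level example (unrelated to the paper's Hamiltonian): choosing $S$ such that $\Delta H + [S, H_0] = 0$ makes $H_0 + \frac{1}{2}[S, \Delta H]$ reproduce the exact spectrum of $H_0 + \Delta H$ to second order in the coupling.

```python
import math

# Toy 2x2 Schrieffer-Wolff check: H_0 = diag(E1, E2), Delta_H = g * sigma_x.
# The off-diagonal ansatz S_12 = g / (E1 - E2) solves Delta_H + [S, H_0] = 0,
# and (1/2)[S, Delta_H] is then diagonal, giving the familiar second-order
# level repulsion +/- g^2 / (E1 - E2).

E1, E2, g = 1.0, 3.0, 0.01

shift = g**2 / (E1 - E2)
eff_low, eff_high = E1 + shift, E2 - shift      # SWT effective eigenvalues

# Exact eigenvalues of the full 2x2 Hamiltonian for comparison:
mean, half = (E1 + E2) / 2.0, (E2 - E1) / 2.0
exact_low = mean - math.hypot(half, g)
exact_high = mean + math.hypot(half, g)
```

The residual discrepancy is of fourth order in $g$, consistent with discarding the higher-order terms of the expansion.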
Computing the expectation value of the effective Hamiltonian $\tilde{H}$ (given explicitly in Appendix B) leads to two different terms in the interaction energy between the two magnetic impurities: a 2D Heisenberg-like ($J_H$) and an Ising-like ($J_I$) interaction,
$$E_{\mathrm{RKKY}} = E_0 + J_H\left(S_1^x S_2^x + S_1^y S_2^y\right) + J_I\, S_1^z S_2^z,$$
where $E_0$ is a constant. In the following section III, we will consider these $J_H$ and $J_I$ terms in more detail analytically and then evaluate them numerically to determine the nature of the RKKY interaction in a spin-split superconductor.
III. RESULTS

A. Analytical
The physical significance of the RKKY interaction terms $J_I$ and $J_H$ is as follows. The Ising term determines the strength of the interaction between the magnetic impurities when they are oriented collinearly with the spin-splitting field. For $J_I > 0$, the interaction prefers an AFM alignment of the impurity spins; for $J_I < 0$, a FM alignment is preferred. The Heisenberg term determines the strength of the interaction between the magnetic impurities when they lie in the plane perpendicular to the spin-splitting field; the same considerations regarding the sign of $J_H$ hold as for the Ising term. The explicit expression for the RKKY Ising-like interaction between the spin of impurity atom 1 and the spin of impurity atom 2 is given by Eq. (19). Here, $r_{21} = r_2 - r_1$ is the relative distance between the two impurity atoms and $f(E_{k,\sigma}) = (1 + e^{E_{k,\sigma}/k_B T})^{-1}$ is the Fermi-Dirac distribution function. The Heisenberg-like term in the RKKY interaction energy is given by Eq. (20). In the limiting case of $h_{\mathrm{exc}} = 0$, the two terms are equal. The system then displays a normal 3D Heisenberg-like interaction between the two impurity atoms hosted by an $s$-wave superconductor, which is spin isotropic as it should be.
B. Numerical
Proceeding to a numerical evaluation of $J_I$ and $J_H$, we consider a system of $N = 800\times800$ lattice points in the $xy$ plane. We choose $U$ so that the zero-temperature superconducting gap takes the value $\Delta \approx 1.5$ meV. The lattice constants are set to $a_x = a_y = 3.5$ Å. The hopping parameter and chemical potential magnitudes are taken to be $t = 0.2$ eV and $\mu = -0.6$ eV, respectively. The chemical potential is chosen to provide us with a circular Fermi surface, as shown in Fig. 1 (b). The superconducting gap at $T = 0$ K, the Fermi velocity, the Fermi wave vector, and the coherence length take the values $\Delta_0 = 1.49$ meV, $v_F = 1.91\times10^5$ m/s, $k_F \approx 0.3$ Å$^{-1}$ and $\xi_S = 269$ Å, respectively. Fig. 1 (c) illustrates the gap versus the spin-splitting field for different temperatures. A nontrivial solution to the gap equation does not guarantee that the superconducting phase is the ground state of the system. For each temperature and field strength, the ground state of the system (either $\Delta = 0$ or $\Delta \neq 0$) has therefore been determined by computing the free energy of the system given in Eq. (9). At $T \approx 0$ K, the largest spin-splitting which allows for a superconducting ground state is approximately $h_{\mathrm{exc}} \approx 0.7\Delta_0$, which is around 1.07 meV with our set of parameters. This is consistent with the Clogston-Chandrasekhar limit. It is also seen from the figure that increasing the temperature reduces the gap until a phase transition occurs at the critical temperature, which is around $T_c = 9.829$ K for $h_{\mathrm{exc}} = 0$. A superconductor with a similar set of parameters as chosen above is niobium (Nb), with a critical temperature $T_c \approx 9.2$ K [37].
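The quoted numbers can be cross-checked against each other; the sketch below uses standard CODATA constants, and the weak-coupling relation $\Delta_0 \approx 1.764\, k_B T_c$ is a textbook BCS result rather than something taken from this paper.

```python
import math

# Consistency check of the parameter values quoted above: with the BCS
# relation xi_S = hbar * v_F / (pi * Delta_0), the quoted v_F and Delta_0
# should reproduce the quoted coherence length of about 269 Angstrom, and
# Delta_0 should be consistent with the quoted critical temperature via the
# weak-coupling BCS relation Delta_0 ~ 1.764 * k_B * T_c.

HBAR = 1.054571817e-34      # J s
EV = 1.602176634e-19        # J per eV
KB = 1.380649e-23           # J per K

v_F = 1.91e5                         # m/s (quoted)
Delta_0 = 1.49e-3 * EV               # J   (quoted: 1.49 meV)

xi_S_angstrom = HBAR * v_F / (math.pi * Delta_0) * 1e10
T_c_bcs = Delta_0 / (1.764 * KB)     # K, weak-coupling estimate
```

Both checks land close to the quoted values ($\xi_S = 269$ Å and $T_c = 9.829$ K).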
Low temperatures
We start by considering temperatures well within the superconducting phase, $T \ll T_c$, and here set $T = 1$ K. The strength of the exchange interaction between the impurity spins and the quasiparticle spins is taken to be $J = 1$ meV. For $h_{\mathrm{exc}} = 0$, the RKKY energies, Eq. (19) and Eq. (20), are presented as a function of the distance between the two impurity atoms in Fig. 2 (a). The RKKY energy goes to zero as $r_{21}$ increases, as seen in the inset of Fig. 2 (a). The effect of the superconducting gap is primarily to shift the RKKY energy above zero for distances larger than the coherence length $\xi_S$. Consequently, the interaction prefers an AFM orientation of the impurity spins at such distances. In the normal state of the system, the RKKY signal changes sign between FM and AFM alignment, also for large distances. These results are consistent with previous literature.
Considering instead the case where the spin-splitting field h_exc is present, an interesting possibility opens up with regard to the tunability of the RKKY interaction. Since the RKKY interaction is positive in the superconducting state at h_exc = 0 for r₂₁ > ξ, whereas it oscillates in the normal state, driving the system through a phase transition by increasing h_exc above its critical value will change the sign of the RKKY interaction whenever the normal-state oscillations make the interaction negative. We illustrate this in Fig. 2(b)-(e), which shows the RKKY energies at four different separation distances taken from the dashed oval region marked in Fig. 2(a). It can be seen from Fig. 2(c)-(e) that by increasing h_exc one can change the sign of the RKKY energy from AFM alignment to FM alignment and vice versa. In contrast to the normal state of the system, where the RKKY interaction varies significantly with h_exc, the RKKY interaction in the superconducting phase is practically independent of h_exc in comparison. This can be understood from the fact that the superconducting gap changes very slowly as a function of h_exc at low temperatures, as seen in Fig. 1(c). As a result, an abrupt change occurs once the phase transition to the normal state takes place, which can cause a sign change in the RKKY interaction. A sign change can in principle also occur in the normal state of the system, as shown in Fig. 2(c), but this effect is far less robust than the one observed in the superconducting state. In the normal state, the sign change can only occur at carefully chosen separation distances r₂₁, whereas in the superconducting state it occurs for a much larger set of separation distances. More precisely, when the separation distance between the impurities is larger than the coherence length, the sign change occurs in the superconducting state whenever the normal-state RKKY oscillations make the interaction negative.
In principle, above the coherence length, this corresponds to half of all separation distances.
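The rapid normal-state sign alternation invoked here can be illustrated with the asymptotic 2D free-electron range function J(r) ∝ sin(2k_F r)/r². This is a textbook form, not the lattice Green-function result of this work, and the FM/AFM sign convention is illustrative; the only input taken from the text is k_F ≈ 0.3 Å⁻¹:

```python
import math

def rkky_sign_2d(r, k_F):
    """Sign of the asymptotic 2D free-electron RKKY coupling ~ sin(2 k_F r)/r^2.
    Returns +1 (FM-favoring) or -1 (AFM-favoring); convention is illustrative."""
    return 1 if math.sin(2.0 * k_F * r) > 0 else -1

k_F = 0.3  # Fermi wave vector, 1/Angstrom (value quoted in the text)

# At separations around 300 Angstrom (beyond xi = 269 Angstrom) the normal-state
# sign still flips every half-period pi/(2 k_F) ~ 5.2 Angstrom:
signs = [rkky_sign_2d(r, k_F) for r in (300.0, 305.0, 310.0)]
print(signs)
```

Since the sign alternates on an Ångström scale, roughly half of all separation distances beyond ξ have a negative normal-state coupling, consistent with the statement above.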
It is also of interest to determine whether the interaction between the magnetic impurities favors their spins being collinear with the spin-splitting field or lying in the plane perpendicular to it. To this end, we compute the difference between the magnitudes of the Ising and Heisenberg energies as a function of the distance between the impurities for several different values of the spin-splitting field in the superconducting phase (Fig. 3). The term that is largest in magnitude dictates whether the interaction prefers the impurity spins to orient in the plane normal to the exchange field or collinearly with it. The sign of the largest term thereafter determines whether the interaction prefers the impurity spins to orient parallel or antiparallel. The difference in magnitude between the Ising and Heisenberg interaction energies oscillates as a function of separation distance, making the two interaction terms take turns being dominant.
High temperatures
In order to show the effect of temperature on the results, in this section we consider T = 4 K, taken to represent the regime where the temperature is comparable to T_c. Similarly to the previous section, we first compute the RKKY energy as a function of r₂₁ when no spin-splitting field is present, for both the normal state and the superconducting phase of the system, in Fig. 4(a). The results are qualitatively similar to the low-temperature case. For r₂₁ < ξ, the signal oscillates in both the normal and superconducting states, while above ξ the interaction between the magnetic impurities is AFM in the superconducting state.
When the spin-splitting field is present, as shown in Figs. 4(b)-(e), the RKKY interaction in the superconducting state is more strongly affected by a change in h_exc than in the low-temperature case considered in the previous section. This can be understood from the exchange field having a larger effect on the superconducting order parameter at higher temperatures, as displayed in Fig. 1(c). As a result, it becomes easier to change the sign of the RKKY interaction energies by increasing h_exc while still remaining in the superconducting phase of the system. In fact, it can be seen from Figs. 4(c)-(e) that the sign change can occur for much lower spin-splitting fields than in the low-temperature case. We also find that a sign change of the RKKY interaction becomes more difficult to achieve in the normal state of the system, and no such sign change is observed in any of the plots in Fig. 4. In fact, the sign change now occurs only at highly selective separation distances r₂₁ in the normal state, where the RKKY oscillations cause the interaction to almost vanish. Moreover, Fig. 5 shows that the interaction between the two impurity spins still oscillates between Heisenberg and Ising terms as a function of the distance between the impurities, even at this higher temperature. The magnitude of the oscillations in Fig. 5 increases with h_exc in both cases. This is reasonable, since spin-rotational invariance becomes more strongly broken with increasing h_exc, making the Ising and Heisenberg configurations more distinct in energy.
Discussion of experimental aspects
We close this section by discussing possible experimental realizations of the proposed system. The magnitude of the spin-splitting field h_exc can be readily tuned by an external magnetic field. Alternatively, the spin-splitting can be induced by proximity coupling the superconductor to a ferromagnetic insulator (FMI), as displayed in Fig. 6. An effective spin-splitting field in the superconductor then arises from quasiparticle reflections at the interface between the superconductor and the ferromagnet. The spin-splitting field can be assumed to be uniform if the thickness of the superconductor is much smaller than the coherence length. Moreover, the magnitude of the spin-splitting scales as one over the thickness of the superconducting layer [36]. The effective exchange field h_exc in the superconductor can therefore be tuned through the thickness of the superconducting layer. Fig. 6 illustrates such a setup, where several superconducting samples with varying thickness are grown on top of the same FMI layer. Magnetic impurity spins placed on the top surface of the superconductor will then couple via quasiparticles that experience different values of the effective h_exc, depending on the thickness of the superconducting layer.
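The 1/thickness scaling quoted above gives a simple mapping from layer thickness to effective field. The following sketch assumes that scaling; the reference field h0_meV and thickness d0_nm are illustrative normalization values, not parameters from the paper:

```python
def h_exc_meV(d_nm, h0_meV=1.0, d0_nm=5.0):
    """Effective spin-splitting in a thin superconductor proximitized by a
    ferromagnetic insulator, assuming the h ~ 1/d scaling quoted in the text.
    h0_meV at the reference thickness d0_nm is an illustrative normalization."""
    return h0_meV * d0_nm / d_nm

# Doubling the layer thickness halves the effective spin-splitting:
print(h_exc_meV(5.0), h_exc_meV(10.0))  # 1.0 0.5
```

In the geometry of Fig. 6, each superconducting step thickness thus selects a different point on this 1/d curve for the impurities sitting on top of it.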
For the RKKY interaction in spin-polarized systems [28], an important point to note is that the preferred direction of the impurity spins will not be solely determined by the RKKY interaction. There are also local effective anisotropy terms of the type (S_i^z)² and [(S_i^x)² + (S_i^y)²] for both impurities i = 1, 2, which are contained in the zeroth-order term in Eq. (18). Moreover, when inducing a magnetization in the superconductor, there will be a coupling between the induced magnetization and the impurities, which is first order in the perturbation parameter and therefore able to dominate over the RKKY interaction for sufficiently large spin-splitting. As the interaction between the impurity spins and the homogeneous magnetization of the superconductor is equal for both impurities, this interaction acts to align the impurity spins. If the spin-splitting arises from an external magnetic field, there will in addition be a direct Zeeman coupling to the impurity spins. This direct Zeeman coupling, which would otherwise typically be the dominant interaction determining the impurity spin orientation, can be avoided by inducing the spin-splitting through proximity to a ferromagnet.
We want to underline that, although there will be other interactions influencing the magnetic impurity configuration, the RKKY interaction is detectable in experiments as it is the only interaction that depends on the relative orientation of the impurity spins and the distance between them. A possible experiment probing the RKKY interaction could be as follows. Consider the setup in Fig. 6. The impurity spins in the superconductor will prefer to align due to the coupling to the exchange field. Using e.g. spin-polarized scanning tunneling microscopy, the energy needed to flip one of the two spins can be measured [38,39]. The energy necessary to flip this spin at a given impurity separation distance will be decided by the RKKY interaction as well as other present interactions. By subtracting the energy necessary to flip a spin in the absence of RKKY interaction (when there is no other impurity nearby), the RKKY interaction can then be determined.
FIG. 6: Possible experimental setup that can be used to test the effect on the RKKY energies of changing the effective Zeeman splitting in the superconductor. By growing several superconducting layers on top of a ferromagnetic insulator and making the thickness of each superconducting layer different, the effective spin-splitting experienced by magnetic impurities placed on top of the superconducting surfaces will differ. The thickness of the superconducting layers should in all cases be much smaller than the penetration depth and smaller than the superconducting coherence length in order to justify the approximation of a homogeneous spin-splitting field.
IV. SUMMARY
In conclusion, we have determined the RKKY interaction between magnetic impurities in a spin-split superconductor, in which case the interaction becomes anisotropic in spin space. The magnitudes of the Ising and Heisenberg terms of the RKKY interaction alternate in being the dominant term and oscillate as a function of the distance between the impurities, at both low and high temperatures. We also demonstrate that it is possible to change the preferred orientation of the RKKY interaction from an antiferromagnetic configuration of the impurity spins to a parallel one by adjusting the magnitude of the spin-splitting field h_exc. Such an effect is in principle also attainable in the normal state of the system, but it is considerably more robust in the superconducting state, where it occurs for a much larger set of separation distances between the impurities compared with the normal state.
Comparison of the neuronal differentiation abilities of bone marrow-derived and adipose tissue-derived mesenchymal stem cells
Bone marrow-derived mesenchymal stem cells (BMSCs) and adipose tissue-derived mesenchymal stem cells (ADSCs) are able to differentiate into neuron-like cells when exposed to small-molecule compounds; however, the specific differences in their neuronal differentiation abilities remain to be fully elucidated. The present study aimed to compare the neuronal differentiation abilities of BMSCs and ADSCs. BMSCs and ADSCs from the same Sprague Dawley rats were isolated and cultured for use. Proliferation capacity was assessed using a cell counting method. Following induction of BMSCs and ADSCs with four types of small-molecule compounds, the expression of various neuronal markers and the secretion of several neurotrophic factors were detected by immunofluorescence, western blotting, reverse transcription-quantitative polymerase chain reaction and ELISA. It was demonstrated that ADSCs exhibited an increased proliferation capacity compared with BMSCs, according to cumulative population doubling analyses. Following a 7-day neuronal induction period, BMSCs and ADSCs exhibited a neuron-like morphology, and were termed neuronally induced (NI)-BMSCs and NI-ADSCs. They expressed neuronal markers including β-tubulin III, microtubule-associated protein 2 and choline acetyltransferase. The number of NI-BMSCs positively expressing the neuronal markers was significantly decreased compared with NI-ADSCs, and the expression and secretion of the neurotrophic factors nerve growth factor (NGF) and neurotrophin-3 (NT-3) in NI-BMSCs were additionally decreased compared with NI-ADSCs. The findings of the present study indicated that the neuronal differentiation and neurotrophic factor secretion abilities of ADSCs were increased compared with BMSCs. ADSCs may therefore act as efficient candidates in cell transplantation therapy for diseases and injuries of the nervous system.
Introduction
Treatment of nervous system diseases and injuries remains a clinical challenge, because neurons, as terminally differentiated cells, can hardly regenerate once damaged. The search for suitable cells to replace damaged neurons has therefore long been a major topic in the cell therapy field (1,2). Mesenchymal stem cells (MSCs) have generally been considered a viable source for cell therapy due to their self-renewal and multilineage differentiation capabilities, as well as their easy availability from various sources including bone marrow, adipose tissue, cord blood and other adult tissues (3,4). As BMSCs can proliferate rapidly and differentiate into neuron-like cells under certain conditions, continuous attention has been paid to their potential application in the treatment of nerve injuries and degeneration (5,6). However, the differentiation potential, available quantity and durability of BMSCs decline with aging (7,8), whereas ADSCs are less affected by aging. In addition, an individual has abundant adipose tissue that can be harvested easily without causing significant injury (9,10). Given the same tissue mass, more cells can be obtained from adipose tissue than from bone marrow (11). ADSCs differ little from BMSCs in morphology and phenotypic characteristics (12,13), but their differences in neuronal differentiation ability under the same conditions remain unknown and need to be further explored.
Knowing that BMSCs can promote functional recovery and protect neurons in the nervous system by secreting neurotrophic factors (6), the ability of BMSCs to secrete neurotrophic factors in vitro can reflect their therapeutic ability in vivo to some extent. The aim of the present study was to investigate whether ADSCs and BMSCs have the same ability to secrete neurotrophic factors, and whether these neurotrophic factors undergo any significant changes during neuronal induction.
There are controversies over the selection of neuronal differentiation methods. For instance, neuron-like cells induced by some methods expressed only immature rather than mature neuronal markers (5,11,14,15). In addition, some studies (12,16-18) reported that induced cells differed greatly from neurons in morphology and lacked the electrophysiological characteristics of neurons. However, cells induced by the method developed by our group showed neuron-like morphology and expressed the mature neuronal marker MAP2 (19,20). In addition, they exhibited the electrophysiological characteristics of neurons and expressed sodium and potassium channels. When they were transplanted into the injured sciatic nerves of rats, an obvious effect on the recovery of nerve function was observed.
In the present study, the cell proliferation ability, the ability to differentiate into neuron-like cells, and the expression and secretion of neurotrophic factors of BMSCs and ADSCs were compared.
Materials and methods
Animals. Male SD rats aged 3 weeks and weighing 40-50 g were obtained from the Animal Center of the Second Military Medical University (Shanghai, China). All animal care and experimental procedures were approved by the Animal Research Ethics Committee of the Second Military Medical University, Shanghai, China (permit no. SYXK-2002-042).
Isolation and culture of BMSCs. Bone marrow was harvested from the bilateral femurs and tibias by cutting off both ends. The marrow cavity was flushed with 10 ml Dulbecco's modified Eagle's medium/nutrient mixture F-12 (DMEM/F12; Gibco, Grand Island, NY, USA). The flushing fluid was collected and centrifuged at 230 x g (Allegra X-12 centrifuge; Beckman Coulter, Inc., Brea, CA, USA). Cells were placed in a 25-cm² flask at a concentration of 2x10⁴ cells/cm² in Growth Medium for SD rat BMSCs (Cyagen, Guangzhou, China), and then incubated in a humidified atmosphere with 5% CO₂ at 37˚C. After 24 h of incubation, the medium was replaced and nonadherent cells were removed. Cells of passages 2-4 were used for all experiments.
Isolation and culture of ADSCs. ADSCs were obtained from the same animals used for isolating BMSCs. Adipose tissue was harvested from the inguinal region, cut into approximately 1x1x1 mm³ pieces with a sterile blade and digested in 0.15% type I collagenase (Sigma, St. Louis, MO, USA) at 37˚C for 60 min. Cells were suspended in Growth Medium for SD rat ADSCs (Cyagen, Guangzhou, China), seeded into 25-cm² flasks at a concentration of 2x10⁴ cells/cm², and cultured in a 5% CO₂ incubator at 37˚C. Cells of passages 2-4 were used for all experiments. Cell proliferation assay. Passage-4 BMSCs and ADSCs were digested into single-cell suspensions and plated into 24-well culture plates with 8 wells per plate: 4 wells for BMSCs and 4 wells for ADSCs, at a concentration of 1x10⁴ cells/well. The number of cells in each well was counted every other day using a Countstar IC 1000 automated cell counter (Inno-Alliance Biotech, USA). The mean values were used to plot the growth curves.
Quantitative real-time PCR (qRT-PCR). Both BMSCs and
ADSCs were divided into four groups: a native group, and 1-, 3- and 7-day neuronal induction groups. TRIzol was used to extract total RNA. The concentration and purity of the RNA were determined using a nucleic acid detector (NanoDrop 2000; Thermo Fisher Scientific, Wilmington, DE, USA). Reverse transcription and PCR amplification were performed according to the kit instructions. The primers were designed and synthesized by Google Biological Company (Wuhan, China); the primer sequences are listed in Table I. β-actin was used as an endogenous control to normalize gene expression levels. The reverse transcription conditions were as follows: 37˚C for 15 min, 85˚C for 5 sec, then held at 4˚C. The PCR amplification conditions were as follows: 95˚C for 3 min, followed by 40 cycles of 95˚C for 10 sec, 58.5˚C for 30 sec and 72˚C for 40 sec. Relative gene expression data were analysed using the 2^(-ΔΔCt) method (21).
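The 2^(-ΔΔCt) relative-quantification step cited above (ref. 21) reduces to a short calculation. The following sketch uses illustrative Ct values, not data from this study, with β-actin as the endogenous control as described in the text:

```python
def fold_change_ddct(ct_target_induced, ct_actin_induced,
                     ct_target_native, ct_actin_native):
    """Relative gene expression by the 2^(-ddCt) method.
    dCt = Ct(target) - Ct(beta-actin); ddCt = dCt(induced) - dCt(native)."""
    d_ct_induced = ct_target_induced - ct_actin_induced
    d_ct_native = ct_target_native - ct_actin_native
    dd_ct = d_ct_induced - d_ct_native
    return 2.0 ** (-dd_ct)

# Illustrative: target Ct drops from 27 to 24 while beta-actin stays at 18,
# i.e. a 3-cycle shift -> 2^3 = 8-fold upregulation after induction.
print(fold_change_ddct(24.0, 18.0, 27.0, 18.0))  # 8.0
```

Because Ct is on a log2 scale, each one-cycle decrease in ΔΔCt corresponds to a doubling of relative expression.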
Western blot analysis. BMSCs and ADSCs in 6-well plates were rinsed with 0.1 M PBS, lysed by the addition of radio-immunoprecipitation assay (RIPA) buffer (Google), and centrifuged at 4˚C to harvest the supernatant; the protein concentration was determined by the Bradford method. The supernatant was then boiled at 100˚C and centrifuged for later use. After preparation of the separation gel and loading of the samples, sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) was performed. An equal amount (10 mg) of protein extracted from these samples was resolved on a 4-15% polyacrylamide gradient. The protein was then transferred to a nitrocellulose membrane. The membrane was blocked in 5% skimmed milk/PBS for 1 h and then incubated with primary antibodies against nestin (rabbit anti-rat monoclonal antibody, 1:2,000), β-tubulin III (rabbit anti-rat monoclonal antibody, 1:1,000), MAP2 (rabbit anti-rat monoclonal antibody, 1:2,000), synaptophysin (rabbit anti-rat monoclonal antibody, 1:2,000), NGF (rabbit anti-rat monoclonal antibody, 1:2,000), NT-3 (rabbit anti-rat monoclonal antibody, 1:1,000), brain-derived neurotrophic factor (BDNF; rabbit anti-rat monoclonal antibody, 1:2,000; all Abcam) and GAPDH (rabbit anti-rat monoclonal antibody, 1:2,000; Wei'ao, Shanghai, China). After incubation with a horseradish peroxidase (HRP)-conjugated goat anti-rabbit secondary antibody (1:2,000; Jackson ImmunoResearch Laboratories, Inc., West Grove, PA, USA), chemiluminescence was detected by exposure to X-ray film. The bands were quantified using Odyssey v1.2 software (LI-COR Biosciences, Lincoln, NE, USA) by measuring the band intensity for each group and normalizing to GAPDH as an internal control. The western blot experiment was repeated at least three times. Table I. Sequences of primers used for reverse transcription-quantitative polymerase chain reaction analysis.
Morphology and phenotypic characteristics of BMSCs and ADSCs. Both BMSCs and ADSCs were long spindle-shaped cells with oval nuclei and grew homogeneously. Compared with ADSCs, BMSCs were thinner and longer. Neither BMSCs nor ADSCs underwent significant morphological change from the primary culture to the fourth passage (Fig. 1A-D).
To further explore the phenotypic characteristics of BMSCs and ADSCs, immunophenotypic analysis was performed by flow cytometry. Both BMSCs and ADSCs expressed CD44 and CD90 (>99%), but neither CD34 nor CD45, indicating that both expressed typical surface markers of MSCs. In addition, the purity of both mesenchymal stem cell populations was relatively high (Fig. 2).
Cell proliferation ability. Knowing that harvesting large numbers of cells within a short period is of great clinical significance in cell therapy, the proliferation ability of passage-4 BMSCs and ADSCs was compared by cell counting. The population doubling time was 17.69±2.22 h for BMSCs vs. 14.51±0.89 h for ADSCs (P<0.05). Both BMSCs and ADSCs reached the plateau phase of growth at day 15 (Fig. 3). These results suggest that ADSCs proliferate more quickly than BMSCs in vitro.
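Population doubling times like those reported above follow from the standard exponential-growth relation PDT = t·ln2 / ln(N_t/N₀). The cell counts below are illustrative, not the study's raw data:

```python
import math

def doubling_time_h(n_start, n_end, elapsed_h):
    """Population doubling time from two cell counts separated by elapsed_h hours:
    PDT = t * ln(2) / ln(N_end / N_start)."""
    return elapsed_h * math.log(2.0) / math.log(n_end / n_start)

# Illustrative: a culture growing from 1e4 to 4e4 cells in 48 h has doubled
# twice, giving a doubling time of 24 h.
print(doubling_time_h(1e4, 4e4, 48.0))
```

Applying this to counts taken every other day, as in the proliferation assay described above, yields the per-culture doubling times that are then averaged and compared between the two cell types.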
Morphological changes of BMSCs and ADSCs after neuronal induction.
Morphological changes were observed in BMSCs at day 4 after neuronal induction: the cell bodies gradually became round or oval, and the processes became thinner and longer. These changes were observed in ADSCs at day 3 after neuronal induction, indicating that morphological changes occurred earlier in ADSCs than in BMSCs. At day 7 after neuronal induction, both NI-BMSCs and NI-ADSCs exhibited a neuronal morphology (Fig. 4A-D). In NI-BMSCs, the cell bodies appeared round or oval, were much bigger in size and bore 3-5 long processes, whereas in NI-ADSCs they were round, much smaller in size and bore 2-3 short processes. The induction rates were (73.61±3.43)% and (93.01±2.65)% for BMSCs and ADSCs, respectively (P<0.05). These changes suggest that both BMSCs and ADSCs are able to differentiate into neuron-like cells morphologically; in comparison, ADSC differentiation was faster and the induction rate higher.
Expression of neuronal markers in BMSCs and ADSCs.
Immunocytochemistry was used to characterize ADSCs and BMSCs (Fig. 5). The proportion of β-tubulin III-positive cells among BMSCs (Fig. 5A) was (12.5±1.80)%, lower than the (19.5±1.50)% observed for ADSCs (P<0.05; Fig. 5D). No MAP2- or ChAT-positive cells were observed in BMSCs (Fig. 5B and C) or ADSCs (Fig. 5E and F). After neuronal induction (Fig. 5G-L), neuronal marker expression increased significantly in both NI-BMSCs and NI-ADSCs. The expression of β-tubulin III and ChAT in NI-BMSCs was significantly lower than in NI-ADSCs, whereas the expression of MAP2 in BMSCs was higher than in ADSCs.
The expression of neuronal marker genes at the transcriptional level in BMSCs and ADSCs was detected by qRT-PCR. Expression of these genes in both cell types increased significantly after neuronal induction. Expression of nestin mRNA (Fig. 6A) began to increase after 1 day of induction and did not change significantly between 1, 3 and 7 days of induction, whereas mRNA expression of β-tubulin III, MAP2 and ChAT (Fig. 6B-D) began to increase after 1 day of induction and peaked at day 7. Compared with BMSCs, the mRNA expression of nestin, β-tubulin III and ChAT in ADSCs was relatively higher both before and after induction, whereas the mRNA expression of MAP2 was lower than in BMSCs.
The western blot results were consistent with the immunofluorescence and qRT-PCR findings. After 7 days of neuronal induction, the protein expression of nestin, β-tubulin III, MAP2 and synaptophysin (Fig. 7A-E) was higher than in native BMSCs and ADSCs. The protein expression of β-tubulin III and synaptophysin in ADSCs was higher than in BMSCs both before and after neuronal induction, whereas the protein expression of MAP2 in ADSCs was lower than in BMSCs. There was no difference between BMSCs and ADSCs in the protein expression of nestin.
Expression and secretion of neurotrophic factors. The qRT-PCR results showed that both BMSCs and ADSCs constitutively expressed the neurotrophic factors NGF, NT-3 and BDNF at the transcriptional level. After neuronal induction, NGF, NT-3 and BDNF expression increased with culture time and peaked at day 7. BMSCs showed higher expression of BDNF but lower expression of NGF and NT-3 compared with ADSCs (Fig. 8A-C). We also examined glial cell line-derived neurotrophic factor (GDNF) and found no significant difference between native and neuronally induced BMSCs and ADSCs.
We quantified the protein expression of neurotrophic factors by western blot (Fig. 9A). The results confirmed that native BMSCs and ADSCs also expressed NGF, NT-3 and BDNF (Fig. 9B-D). After neuronal induction, the expression of NT-3 in NI-ADSCs was increased compared with NI-BMSCs, whereas the change in NGF was not statistically significant. However, NI-BMSCs and NI-ADSCs expressed less BDNF than their native counterparts. The protein expression of NGF and NT-3 in NI-ADSCs was greater than in NI-BMSCs, whereas the difference in BDNF protein expression between NI-BMSCs and NI-ADSCs was not statistically significant. Secretion of neurotrophic factors by BMSCs and ADSCs. The secretion of neurotrophic factors by BMSCs and ADSCs was measured by ELISA. It was found that both NI-BMSCs and NI-ADSCs secreted more NGF, NT-3 and BDNF than native cells (Table II, Fig. 10). Compared with ADSCs, native and NI-BMSCs secreted more BDNF but less NGF and NT-3.
Discussion
ADSCs have an advantage in harvesting. MSCs can be easily obtained from bone marrow and adipose tissue, but most easily from subcutaneous adipose tissue, which is more acceptable for patients. In our study, 8x10⁴ adherent BMSCs could be obtained from the bone marrow of the bilateral tibias and femurs of a 45-g SD rat after 24 h of incubation, increasing to 7-8x10⁵ cells after one week of culture, whereas 1x10⁵ adherent ADSCs were harvested from the inguinal adipose tissue of the same rat, increasing to 7-9x10⁶ after one week of culture. Compared with BMSCs, more ADSCs could easily be harvested from the same donor, and ADSCs also proliferated more rapidly than BMSCs under the same conditions. The quantity and activity of BMSCs decline markedly with aging, while ADSCs are less affected, suggesting that ADSCs are a more practical source for clinical use.
Neuronal differentiation ability. BMSCs and ADSCs are known to be able to differentiate into neurons (22,23). In our study, nestin, a neural precursor/stem cell marker, and β-tubulin III, an immature neuronal marker, were expressed in native BMSCs and ADSCs, suggesting that both types of MSCs retain a native potential for neuronal differentiation, in agreement with other studies (20). After 7 days of neuronal induction, the expression of β-tubulin III, ChAT and synaptophysin in ADSCs was significantly higher than in BMSCs, indicating that ADSCs have a greater neuronal differentiation ability than BMSCs. The reason may be that MSCs are composed of more than one type of precursor cell.
As the proportions of these precursor cells differ between BMSCs and ADSCs, their adipogenic, osteogenic and neurogenic abilities differ accordingly (24).
Ability of induced cells to secrete neurotrophic factors.
Neurotrophic factors such as NGF, NT-3 and BDNF are known as neuronal growth nutrients, which play an important role in neuroblast proliferation, maturation and phenotype maintenance (25-28). To explore whether NI-BMSCs and NI-ADSCs had neuronal functions, we detected the expression and secretion of neurotrophic factors. It was found that NGF, NT-3 and BDNF mRNA expression in both BMSCs and ADSCs increased to varying degrees after neuronal induction. Surprisingly, however, the protein expression of NGF and NT-3 changed insignificantly and that of BDNF decreased to some extent. The mechanism of this phenomenon is unknown. qRT-PCR detects the mRNA of genes at the transcriptional level, whereas western blot and ELISA operate at the protein level. The expression of neurotrophins such as NGF, BDNF and NT-3 is not only regulated at the transcriptional level, but also post-translationally modified by elaborate intracellular systems (29). They are synthesized as inactive precursor proteins, pro-neurotrophins, and are then processed into active molecules via multiple steps involving glycosylation, sorting, proteolytic cleavage and secretion (30). For example, after synthesis of BDNF mRNA, BDNF protein is initially produced as a precursor protein (proBDNF), followed by post-translational cleavage of proBDNF into the mature form of BDNF by intracellular and/or extracellular proteases (31). Another interesting finding, reported previously (17), was that the mRNA levels of NGF and BDNF in both ADSCs and BMSCs increased rapidly while their protein levels decreased during the course of neural differentiation, which was attributed to neurotrophins possibly being consumed during neural differentiation. Thus, differences may exist between the different levels. Both BMSCs and ADSCs secreted more NGF, NT-3 and BDNF after neuronal induction. Native and NI-ADSCs secreted more NGF and NT-3, but less BDNF, than BMSCs, suggesting that the two types of MSCs may express different neurotrophic factors.
Based on the results of this comparative study, we draw the following conclusions: (i) There are insignificant differences in morphological and phenotypic characteristics between BMSCs and ADSCs derived from the same SD rats, except that the cell bodies of BMSCs are larger than those of ADSCs.
(ii) Compared with BMSCs, ADSCs proliferate significantly faster. (iii) BMSCs and ADSCs can be easily induced into neuron-like cells using the four small-molecule compounds. (iv) The expression of the neuronal markers β-tubulin III, ChAT and synaptophysin in ADSCs is higher than in BMSCs, suggesting that ADSCs have a stronger capability of differentiating into neuron-like cells. (v) The expression and secretion of the neurotrophic factors NGF and NT-3 in ADSCs are higher than in both native and NI-BMSCs, suggesting that ADSCs have a better trophic effect in neuronal replacement therapy.
In summary, ADSCs differ insignificantly from BMSCs in morphology and phenotypic characteristics. However, ADSCs proliferate significantly faster, differentiate into neuron-like cells faster, and express higher levels of NGF and NT-3. ADSCs may therefore have more potential than BMSCs in the treatment of nervous system diseases.
The Role of Female Physicians in Psychosomatic Medicine: Opportunities and Challenges
Background: Female physicians are in some cases preferred by patients due to their sex-related characteristics such as softness and empathy. Psychosomatic medicine presents a compelling working environment due to its holistic approach. Methods: This brief review synthesizes the challenges encountered by female physicians in psychosomatic medicine and outlines potential strategies for overcoming these barriers. Results: The presence of female role models may constitute a crucial advancement in this process. There exists a pressing demand for specialized clinical and scientific programs in psychosomatic medicine at both national and international levels. Such programs, offered by universities and ministries, as well as comprehensive training initiatives, are indispensable in fostering the next generation of females in psychosomatics. Leading journals can lend their support by publishing special issues dedicated to female physicians. Conclusion: Strengthening female physicians throughout all positions in psychosomatic medicine can contribute ultimately to the improvement of patient care.
The Role of Female Physicians in Medicine
While it was debated half a century ago whether female physicians were preferred by female patients, it is now well established that the sex of both the physician and the patient significantly influences their interaction [1]. Some female patients prefer to communicate with female physicians instead of male physicians for cultural reasons [2], thereby speaking more about biomedical and psychosocial difficulties, making more positive statements, and focusing more often on partnership problems [3]. However, medicine was dominated by male physicians for centuries [4]. Investigations into clinical practice have unveiled noteworthy distinctions concerning gender interactions up to now [5], wherein female physicians tend to offer a higher frequency of preventive health care services and psychosocial counseling. In contrast, their male counterparts place a greater emphasis on medical history assessment and physical examinations [1]. Furthermore, with regard to medical specialization, gender disparities manifest as fewer females opting for surgical and internal medicine specialties. This phenomenon can be attributed to a multifaceted construct encompassing factors such as workload and childcare responsibilities [6]. Gender disparities in the medical field are, in part, perpetuated by gender blindness and stereotyped preconceptions [7]. Historically, and owing to specific cultural reasons (e.g., negative stereotypes of female scientists [8], family interference with work [9], unequal educational opportunities [10]), females in all scientific disciplines, not just in medicine, have had to surmount considerable obstacles and engage in persistent struggles to assert themselves in academic fields. These women, including notable figures such as Marie Skłodowska Curie, Rosalind Franklin, and Ada Hopper, achieved global recognition. Only 120 years ago, Dr. Hermine Heusler-Edenhuizen became the first woman in Germany to pass her medical exams and earn the right to practice medicine [11]. Female physicians, often associated with empathy and nurturing qualities [12], are of principal importance in all disciplines. This work aims to synthesize the obstacles female physicians face today and to demonstrate opportunities for female physicians in psychosomatics.
Psychosomatic Medicine
Psychosomatic medicine is an integral aspect of medicine [13], adhering to a biopsychosocial model of care [13]. It is recognized as an all-encompassing discipline that provides integrated care, thereby transcending the dichotomy of body and mind to treat individuals holistically [14]. Not to be conflated with psychiatry or liaison psychiatry [15], psychosomatic medicine has a long-standing tradition with roots dating back to ancient Greece, further enriched by influential figures such as Heinroth [16], Alexander [17], and Engel [18]. Globally, psychosomatic medicine is not firmly established as an independent institution. Physicians with a specific interest in psychosomatic medicine can be found within somatically oriented disciplines or in psychiatry. One major challenge for the field of psychosomatic medicine is the ongoing endeavor to define its position and gain acceptance among various disciplines. Despite the fast rise of psychosomatics in Western countries and Asia [19], based on the activities of the Japanese Professor Yujiro Ikemi in Fukuoka [20], this process was not followed by other countries in the same manner. In Africa and Egypt, there is only little knowledge of psychosomatic medicine [19]. In a global context, owing to significant historical milestones [21], psychosomatic medicine achieved independent status as a specialty and institution primarily in Germany [22]. Patients with functional disorders (e.g., irritable bowel syndrome, chronic pain, chronic fatigue syndrome, or fibromyalgia), eating disorders, the so-called "somatopsychic disorders" (e.g., psychocardiology, psychooncology), affective disorders, personality disorders, and trauma disorders are treated after consultation with a physician of psychosomatic medicine in one of around 223 hospitals for psychosomatic medicine, departments of psychosomatic medicine at general or psychiatric hospitals, or academic institutions in Germany [21,22].
Since statistical analyses of physicians in psychosomatic medicine are primarily available for Germany, this article provides an overview focusing on this European nation. Current statistics for 2021 indicate a total of 416,120 physicians, a steady increase compared with data from 1990 [23]. Nearly half of these (201,951) are female. The majority of physicians specialize in internal medicine, which in some cases incorporates psychosomatics (e.g., Tübingen and Heidelberg) [22]. Physicians who opt for specialization in psychosomatic medicine and psychotherapy constitute a minority of approximately 5,000 individuals [23]. According to the statistics, half of all physicians in psychosomatic medicine are female, with a count of 2,322 [23]. Historically, the progress toward gender equality in the psychosomatic field has followed a trajectory similar to that observed in other specialties, where the proportion of female representation has significantly increased over the past few decades [21,24].
Challenges for Female Physicians in Psychosomatic Medicine
Despite the sex balance among practicing physicians in psychosomatic medicine today, career advancement statistics in Germany reveal a striking disparity. Although females constitute half of the physicians in this field, they hold only a third of all professorships in medicine [25]. In line with this, according to the United Nations Educational, Scientific and Cultural Organization (UNESCO), females constitute a minority in research globally, with the highest proportions found in South-Eastern Europe (49%), followed by the Caribbean, Central Asia, and Latin America (44%) [26]. Recent studies indicate that, despite the implementation of programs aimed at enhancing females' participation in science via diversity and equity initiatives, sex bias remains in medicine, affecting females' grant applications, remuneration, and overall success [27].
Several variables may explain this complex phenomenon. Females are generally underrepresented in science, which may be attributed to the significant yet inadequate presence of female career role models. Early adolescent females demonstrated improved attitudes toward science when exposed to females in scientific roles [28]. Unfortunately, such role models are still scarce in psychosomatic medicine. One prominent barrier to career progression is the domestic and family responsibilities of female physicians [29], together with less work control compared with male physicians [30]. Male physicians work more hours; moreover, they are more willing to marry and have children, which spurs their career options, compared with female physicians [31]. Although increased role complexity was related to stress for both male and female physicians [32], female physicians nowadays are more often responsible for family duties than male physicians [33]. However, a change in work-life balance, with more willingness in males to parent, can be observed [34]. Additionally, earning gaps between female and male physicians persist, even over the last 20 years [35]. Female physicians show higher rates of mental illness [36] (e.g., depression [37]) and are at higher risk of suicide [38]. Regrettably, instances of sexual harassment persist in medicine and in science in general [27]. Further investigations show that female physicians face inequalities that do not disappear with age or seniority [39]. These findings apply to medicine in general but are also valid for psychosomatic medicine.
Opportunities for Young Female Physicians in Psychosomatic Medicine
In alignment with strategies for enhancing women's participation in science, various resources for young female physicians and scientists have been proposed at international and national levels by leading organizations such as the World Health Organization [40,41], the European Commission [42], and UNESCO [26]. The promotion of females in science must commence during early education, as asserted by the European Council in 2018 [43]. Some progress has been observed, though it remains insufficient [44]. However, data specific to psychosomatic medicine are lacking.
Universities can actively support young female students interested in pursuing a career in medicine in general and in psychosomatic medicine in particular. The Research Partnership on Women in Science Careers offers advice on overcoming barriers in six thematic areas: career progression, mentorship, work-life balance, pathways to leadership, pay equity, and advocacy for change [45]. These recommendations may help overcome some of the challenges. To address family responsibilities, part-time models in medicine are urgently required; such models are meanwhile becoming increasingly prominent in psychosomatic medicine, making psychosomatics an attractive discipline for female physicians. While some support exists, increased efforts are needed to foster the combination of research activities with clinical work.
Networks in psychosomatics contribute to a higher engagement of female physicians in psychosomatics (e.g., the International College of Psychosomatic Medicine [ICPM] or the European Association of Psychosomatic Medicine [EAPM]), thereby combining clinical and scientific work. In the European psychosomatic field, special interest groups, particularly for early career researchers (ECRs), support collaboration and the exchange of ideas. National initiatives such as the German Perspective Psychosomatic, the Greek International Society for Research of Interplay between Mental and Somatic Disorders, and the Spanish Ibero-American Society of Psychosomatic Studies unite the ECR psychosomatic community. Annual conferences, such as those held by the EAPM, the ICPM, and the German Psychosomatic Conference, can provide inspiration and a sense of community. Lastly, scientific journals can aid the advancement of female physicians in psychosomatic medicine. All of these opportunities seem best achieved through a clear role for policy changes and institutional support in facilitating improvements for female physicians.
Clinical-Scientific Programs
In Germany, universities offer support programs for women in medicine [46], setting a good example for other European countries. "Clinician scientist" programs, which provide personalized education plans combining clinical work and research [47,48], can be a first step toward the competitiveness of female physicians in this field. Though such programs are seldom found in psychosomatic medicine, they could serve as a template for all female physicians in this discipline, potentially offering regular work, mentorship, and, ideally, female role models as supervisors [49]. Grants that include comprehensive training and structured research programs for ECRs are, without question, one of the most significant opportunities a female physician can take in overcoming the gender gap. The Encompassing Training in fUnctional Disorders across Europe (ETUDE) program (https://etude-itn.eu/, No. 956673), where some ECR positions are held by female physicians, is a prime example. It aims to train ECRs in functional disorders within a 3-year program. As a training network, ETUDE uniquely integrates the clinical aspects of diagnostic procedures, stigmatization, and patient care into its framework, allowing female physicians to acquire essential skills for clinical practice. Functional disorders, as an umbrella term, refers to persistent somatic symptoms without reproducible pathophysiological mechanisms [50].
This Marie Skłodowska-Curie Innovation Training Network serves as an essential platform for facilitating interconnections among female scientists and for disseminating knowledge across successive generations, thereby acting as a paradigmatic example of how to overcome the existing gender gap. During their participation in the ETUDE program, female physicians can engage in clinical observations, accompany patient consultations, and collaborate with experienced clinicians. These experiences enable them to develop a deeper understanding of functional disorders and their clinical manifestations. Furthermore, the program significantly enhances the understanding of functional disorders within the medical community and will contribute to providing evidence-based guidelines for patient management that will be accessible globally.
The ETUDE program holds particular relevance for female physicians in Germany, as it provides a unique opportunity to bridge the gender gap in the field of psychosomatic medicine. By offering comprehensive training, research experience, and clinical insights, ETUDE equips female physicians with the skills and knowledge needed to expand their roles and responsibilities. With this innovative training, female physicians can enhance their competence, meet German governmental requirements for university positions, and expand their career opportunities. Furthermore, it serves as a model for integrating clinical practice and research, thus empowering female physicians to take on more significant roles in addressing complex health care challenges.
Conclusion
Strengthening female physicians in psychosomatic medicine leads to improved patient care and research advances, given the unique perspectives and approaches that female physicians can bring to the field. To address existing disparities, there is a need to implement supportive policies and programs within the field of psychosomatic medicine at both national and international levels. The ETUDE initiative, part of the Marie Skłodowska-Curie Innovation Training Network, exemplifies an optimal model for internationally advancing the careers of female physicians and scientists within the domain of psychosomatic medicine. Additionally, implementing strategies such as providing increased opportunities for part-time work or flexible schedules, promoting mentorship programs with female role models, and fostering networking and community support is essential to overcoming barriers.
Department of Experimental and Clinical Medicine, University of Florence, Firenze, Italy. *Address correspondence to: Caroline Rometsch, MD, MSc, Department of Experimental and Clinical Medicine, University of Florence, Largo Brambilla, 3, Firenze 50134, Italy, E-mail: carolina.rometsch@unifi.it. © Caroline Rometsch, 2024; Published by Mary Ann Liebert, Inc. This Open Access article is distributed under the terms of the Creative Commons License [CC-BY] (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Women's Health Reports Volume 5.1, 2024. DOI: 10.1089/whr.2023.0070. Accepted October 25, 2023 | 2024-01-21T05:06:14.825Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "64ea10600d829d4aafb4f21f836badea08a41d0b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1089/whr.2023.0070",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "64ea10600d829d4aafb4f21f836badea08a41d0b",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245609045 | pes2o/s2orc | v3-fos-license | Dataset of propylene carbonate based liquid electrolyte mixtures for sodium-ion cells
In this manuscript, we present rheology, ionic conductivity, density, chromatography, and life cycle analysis data on the PC+X electrolyte system with and without NaClO4. In particular, data are presented for electrolytes in contact with Na surfaces; in this case, photographic images of the electrolyte-sodium mixtures are also shown. The data were analyzed using OriginPro software for appropriate visualization. In our view, the data serve as comparative values, form the basis of a chromatography analysis, and are also valuable for modeling. The analysis of the data is presented in the manuscript "Comprehensive characterization of propylene carbonate based liquid electrolyte mixtures for sodium-ion cells" [1].
Specifications
Subject area: Chemistry
Specific subject area: Analytical Chemistry and Electrochemistry
Type of data: Table, Image, Figure
How the data were acquired: The data were acquired via
• density meter (Anton Paar DMA 4500M)
• rheometer (Malvern Gemini HR Nano)
• electrochemical workstation (Zahner Zennium IM6)
• gas chromatography coupled with mass spectrometry (PerkinElmer Clarus 690 including an SQ8T mass spectrometer)
• Advanced Electrolyte Model (AEM, Idaho National Laboratory)
• data taken from www.echa.eu; an overview for each specific compound is provided in Table 12 (supporting information)
Data format: Raw, Analyzed, Filtered
Description of data collection: The data were collected by measurements as well as data analysis and a software approach.
Data source location: All data are provided in the manuscript. The data were received from the following institution: Karlsruhe Institute of Technology
Value of the Data
• The data support the images and figures shown in Ref. [1]. The GC data additionally help the reader with substance identification. The data can be used for simulation and modeling of electrolyte properties
• Experimental as well as theoretical researchers can benefit from the data by using them for the described electrolytes
• The raw data can be used directly for electrolyte research. Additionally, the experimental data may serve as a data set for modeling and/or simulation of electrolyte properties
Data Description
Figs. 1 to 10 show the temperature-dependent values of the density, viscosity, and conductivity of the PC-based electrolytes. Calculated values (labeled "AEM") and experimental values (labeled "exp") are plotted for direct comparison. All calculated (AEM) values of the individual plots are listed in Tables 1a-1i (see supporting information): the data of Fig. 1 are listed in Table 1b (Table 1a refers to Fig. 2 in the research article [1]), the data of Fig. 2 in Table 1c, the data of Fig. 3 in Table 1d, the data of Fig. 5 in Table 1e, the data of Fig. 7 in Table 1f, the data of Fig. 8 in Table 1g, the data of Fig. 9 in Table 1h, and the data of Fig. 10 in Table 1i (no calculations were made for Figs. 4 and 6). Experimental as well as calculated values of the density, conductivity, and sodium diffusion coefficients at T = 25 °C are provided in Table 3a, experimental values of the dynamic viscosity are listed in Table 3b, and experimental density values are provided in Table 3c for all mixtures (all tables are provided in the supporting information). In Fig. 11, the conductivity and viscosity values at T = 25 °C and T = 50 °C are compared. The raw data of Fig. 11 are listed in Table 4 (supporting information).
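As one illustration of how the temperature-dependent conductivity data can be used for modeling, an Arrhenius fit is a common first summary. The sketch below uses made-up conductivity values (not values from this dataset) to extract an apparent activation energy:

```python
import numpy as np

# Hypothetical conductivity values (mS/cm) over the measured temperature
# range; these are illustrative numbers, NOT values from the dataset.
T_C = np.array([15.0, 25.0, 40.0, 55.0, 70.0])   # temperature, deg C
sigma = np.array([4.1, 5.6, 8.2, 11.3, 14.9])    # conductivity, mS/cm

# Arrhenius form: sigma = A * exp(-Ea / (R * T)), so ln(sigma) is linear in 1/T
R = 8.314                                        # J mol^-1 K^-1
T_K = T_C + 273.15
slope, intercept = np.polyfit(1.0 / T_K, np.log(sigma), 1)
Ea = -slope * R                                  # apparent activation energy, J/mol

print(f"apparent Ea = {Ea / 1000:.1f} kJ/mol")
```

Note that concentrated liquid electrolytes often deviate from a simple Arrhenius law, so such a fit is only a first-order description of the temperature dependence.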
In Table 6, all mixtures are shown with and without NaClO4 addition, before sodium addition and 20 days after sodium addition. In the research article [1], only exemplary individual substance classes are shown, whereas here all compounds are shown as an overview.
Fig. 11. Relative modification of conductivity and viscosity upon a temperature increase from 25 °C to 50 °C.
Table 6
Images of the samples after 20 days of storage over sodium metal. This table is modified and completed from Table 3 in Ref. [1].
Fig. 12. Chromatograms (raw data, FID) of MTBE, PC, "PC + Na", and "PC + 1 M NaClO4 + Na". Both sodium samples were measured after 150 days of storage.
All gas chromatography (GC) chromatograms are shown in Figs. 12 to 21 (with the exception of the mixture PC + EC, which is shown in Ref. [1]). All samples are directly compared with the pure solvent as well as with the GC solvent methyl tert-butyl ether (MTBE). Since FID data are shown, a pronounced solvent signal is visible in all samples. A qualitative comparison of the individual peak areas, with and without sodium perchlorate, is shown in Fig. 22, including the formation of DMC and DEC from EMC. In Fig. 23, relative peak areas of CO2 against propylene carbonate (PC) are illustrated and compared between mixtures without and with sodium perchlorate. The data of Fig. 22 are listed in Table 5a (without NaClO4) and Table 5c (all in the supporting information).
A gas analysis of the pure PC + NaClO4 electrolyte over sodium metal is shown in Fig. 24, and a short description of the individual peaks is provided in the caption. Impurities and reaction products that arise in the electrolyte stored over Na are listed in Tables 7 and 8 (supporting information); both contain information about the detection of such substances with gas chromatography. Table 9 summarizes the results of the LCA for all impact categories (supporting information). Table 10 (supporting information).
In Fig. 26, a TGA-DSC analysis of the electrolyte mixture PC + EC + 1 M NaClO4 is depicted. In addition to mass loss versus temperature, the mass loss, DSC signal, and temperature versus time are shown. Raw data are shown in Table 11 (supporting information).
Fig. 23. Gas chromatography results of electrolytes with and without NaClO4 salt, including sodium metal, after 4 months of storage. The relative intensity was calculated by relating the peak at 1.84 min (MS detector, mass extraction m/z = 44 from EI total fragmentation, CO2) to the peak at 6.80 min (PC, m/z = 87). Additionally, the PC intensity was corrected to the PC wt. content in the electrolyte.
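The relative-intensity calculation described in the Fig. 23 caption can be sketched as follows. The peak areas and PC weight fraction below are invented example values, and dividing by the weight fraction is an assumption about the direction of the "corrected to the PC wt. content" step:

```python
# Illustrative version of the relative-intensity calculation from the
# Fig. 23 caption. All numeric inputs are made-up example values, NOT
# measured data; the division by pc_wt_fraction is an assumed convention.
area_co2 = 1.2e5        # peak area at 1.84 min (m/z = 44, CO2)
area_pc = 9.6e6         # peak area at 6.80 min (m/z = 87, PC)
pc_wt_fraction = 0.50   # PC weight fraction in the electrolyte mixture

area_pc_corrected = area_pc / pc_wt_fraction
relative_intensity = area_co2 / area_pc_corrected
print(f"relative CO2 intensity = {relative_intensity:.2e}")
```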
(b) Viscosity measurements
A Malvern Gemini HR Nano rotational rheometer was used to measure the dynamic viscosity with a 40/1° cone geometry (sample filling gap: 30 μm). The mixtures were placed between cone and plate under a normal atmosphere. A protective hood was used during the measurement to avoid solvent evaporation. Measurements were carried out in a temperature range of 15-70 °C, and at each temperature a series of increasing shear rates (5 to 200 s−1) was applied to ensure that the viscosity was not dependent on shear rate.
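The shear-rate check described above, confirming that the measured viscosity does not depend on shear rate, amounts to verifying Newtonian behavior over the sweep. A minimal sketch with hypothetical readings:

```python
import numpy as np

# Hypothetical viscosity readings over the 5-200 1/s shear-rate sweep at one
# temperature; the values are illustrative, not measured data.
shear_rates = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])  # 1/s
viscosity = np.array([4.02, 4.00, 3.99, 4.01, 4.00, 3.98])     # mPa*s

# A Newtonian liquid shows (near-)constant viscosity across the sweep.
mean_eta = viscosity.mean()
max_rel_dev = np.max(np.abs(viscosity - mean_eta)) / mean_eta
newtonian = bool(max_rel_dev < 0.05)  # accept <5 % variation over the sweep

print(f"mean viscosity = {mean_eta:.2f} mPa*s, Newtonian: {newtonian}")
```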
(c) Density measurements
We measured the density values of the electrolyte mixtures using an Anton Paar DMA 4500 M instrument. Firstly, a check-up with air and water was performed to ensure a proper working of the device. Afterwards, the electrolytes were put into the device without any bubbles, and the measurement was performed (temperature range: 20 -60 °C). Approximately 1.2 ml of solution was used for each measurement.
(d) Gas chromatography measurements
A Clarus 690 GC (PerkinElmer Inc., Waltham, USA) equipped with an autosampler, an FID (flame ionization detector), and an MS (mass spectrometry) detector (SQ 8T) was used for the measurements, while the software packages Turbomass 6.1.2 and TotalChrom 6.3.4 were used for data acquisition and data analysis. The mixtures were diluted with MTBE and filled into GC vials. Measurements were then performed using the autosampler. Separation was performed with He gas 6.0, while the FID was operated with H2 gas (PG + 160, Vici DBS) and dry air. Inside the GC oven, an Optima 5 MS column (30 m length × 0.25 mm inner diameter, 0.5 μm film thickness) was used for separation. A split flow rate of 20 ml min−1 and an inlet temperature of 250 °C were used for injection. The injection volume was 0.5 μl. During the measurement, the pressure was continuously increased from 175 kPa (start pressure) with the following parameters:
- pressure-controlled mode
- oven temperature 40 °C
- oven and pressure program: 40 °C (1.5 min), heating at 20 °C min−1 (up to 320 °C)
- pressure held at 175 kPa for 2 min, then increasing at 7.8 kPa min−1 to 300 kPa.
The gas flow was divided after the separation column by a SilFlow™ GC Capillary Column 3-port splitter to capture signals in both the MS and the FID. The MS was operated with an electron ionization energy of 70 eV, an ion source temperature of 200 °C, and an MS transfer line temperature of 200 °C, while the FID setup featured 450 ml/min of synthetic air, 45 ml/min of hydrogen gas, and an FID temperature of 280 °C. The FID was used for quantitative analysis, while the MS was used for identifying all compounds. Consequently, the MS was used in scan mode with a scan range of 33 u-350 u and an event time of 0.3 s. All signals from the FID were used to determine the peak areas. First, the raw data were extracted using Turbomass 6.1.2 and then analyzed using OriginPro 2020b software. Impurities in the electrolyte solvents were identified based on an NIST search (EI fragmentation match) and pure-substance measurements wherever possible. Gas formation was tested in a PAT-Cell press (EL-Cell), from which the spring in the upper housing was detached and a beaker containing electrolyte and sodium was placed in the lower housing. After sealing, the pressure was monitored for 400 h at T = 25 °C. The gas was collected using a syringe fitted with a syringe stopper to prevent atmospheric gas from entering. The gas was then injected into a GC instrument (Arnel GC system from PerkinElmer) and analyzed qualitatively using TotalChrom 6.3.4 software.
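From the stated hold times and ramp rates, the lengths of the oven and pressure programs follow from simple hold-plus-ramp arithmetic. A small sketch, with timings derived only from the parameters given above:

```python
# Timing of the GC method, derived from the hold/ramp parameters stated
# in the text (hold at the start value, then a linear ramp to the end value).
def program_duration(start, end, hold_min, rate_per_min):
    """Initial hold time plus the linear ramp time from start to end."""
    return hold_min + (end - start) / rate_per_min

# Oven: 40 degC held 1.5 min, then 20 degC/min up to 320 degC
oven_min = program_duration(40.0, 320.0, hold_min=1.5, rate_per_min=20.0)
# Pressure: 175 kPa held 2 min, then 7.8 kPa/min up to 300 kPa
pressure_min = program_duration(175.0, 300.0, hold_min=2.0, rate_per_min=7.8)

print(f"oven program:     {oven_min:.1f} min")
print(f"pressure program: {pressure_min:.1f} min")
```

Both programs end within roughly 15-18 minutes, consistent with a single chromatographic run per sample.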
(e) Cell testing
The electrolytes were tested in coin cells to evaluate the performance and aging of the materials and electrolytes. Full cells of hard carbon versus Na0.7MnO2 (Ø = 16 mm) were assembled in CR 2032 standard coin cells under a protective atmosphere (Ar-filled glove box with humidity and oxygen content below 0.5 ppm). A glass fiber separator (Ø = 17 mm, QMA, Whatman®) wetted with 110 μl of electrolyte mixture was placed between the two electrodes. The electrodes as well as the separators were dried (vacuum, 110 °C, 24 h) before assembly. The theoretical areal capacity of the electrode sheets was 2.2 mAh cm−2 (hard carbon) and 0.5 mAh cm−2 (sodium manganese oxide), which means that the two electrodes were not capacity-balanced. Galvanostatic charge-discharge cycles were performed using the LICCY cell cycler (developed by KIT, Institute for Data Processing and Electronics). The cell tests were first performed with a series of continuously increased currents (C rates of charge/discharge as follows: 0.1C/0.1C, 0.2C/0.2C, 0.5C/0.5C, 0.5C/1C, 0.5C/1.5C, 0.5C/5C, 0.5C/7.5C, 0.5C/10C for 1 to 3 cycles each), and then the cells were tested at a reduced rate (0.2C/0.2C for 2 cycles, 0.5C/1C for 100 cycles, and 0.2C/0.2C for 2 cycles). The applied C rate was referenced to the capacity of the cathode material used (which has a lower capacity than the HC anode).
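Since the C rate is referenced to the cathode capacity, the applied currents follow directly from the stated areal capacity and electrode diameter. A back-of-envelope sketch, assuming the full Ø = 16 mm disc area is electrochemically active:

```python
import math

# Back-of-envelope currents for the coin-cell tests. The C rate is
# referenced to the cathode (0.5 mAh/cm^2, diameter 16 mm) as stated above;
# the full disc area being active is an assumption for this estimate.
diameter_cm = 1.6
area_cm2 = math.pi * (diameter_cm / 2.0) ** 2   # ~2.01 cm^2
cathode_capacity_mAh = 0.5 * area_cm2           # areal capacity x area

def current_mA(c_rate):
    """Current for a given C rate (1C = full nominal capacity in 1 h)."""
    return c_rate * cathode_capacity_mAh

print(f"cathode capacity: {cathode_capacity_mAh:.2f} mAh")
print(f"1C current:  {current_mA(1.0):.2f} mA")
print(f"10C current: {current_mA(10.0):.2f} mA")
```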
(f) Electrolyte simulation with AEM software
The Advanced Electrolyte Model (AEM) approach for calculating viscosity, density, conductivity, diffusion values, etc. has been published previously [7-9] and is available as a software tool. In principle, various physicochemical terms derived for multicomponent electrolytes are used to calculate these data. Using the INL's software package, all the values mentioned in the text were calculated, namely density, viscosity, conductivity, and diffusion constants. The Advanced Electrolyte Model software can be licensed from the Idaho National Laboratory; contact td@inl.gov for more information. In detail, the appropriate solvents and salts were chosen in the software procedure, and a range of concentrations as well as temperatures was applied. For triple-ion stability, Option 1 was used, which means [ABA+] = [BAB−]. Additionally, a contact angle of 0° and a total pore length of 0.1 μm were chosen. No surface-charge-attenuated electrolyte permittivity calculation or double-layer calculation was performed. Finally, the desired values were extracted from the calculations.
(g) Thermogravimetry analysis
A STA 449 F3 from Netzsch was used for the thermogravimetric analysis of the PC + EC + 1 M NaClO4 electrolyte mixture. 47.9 mg of the electrolyte was placed in an open Al2O3 crucible and measured between 25 °C and 700 °C under a dry air atmosphere, with the temperature ramp set to 10 K min−1. To correct for the influences of the measurement system, blank (correction) runs were performed under the same experimental conditions as for the samples. In addition, the DSC curve was recorded in parallel from T = 25 °C to T = 700 °C.
(h) Hazard traffic light analysis
The hazard traffic light (HTL) qualitative method is a color code of potential hazards for different substances, first presented by Hofmann et al. [1]. It is based on the hazard statements described in the regulation of the European Parliament on classification, labelling and packaging [2], as registered by the European Chemicals Agency (ECHA) for each material. These statements can readily be extracted using the search engine for chemicals/regulated substances on the ECHA homepage (https://echa.europa.eu/) by typing in each of the compounds presented herein. Specific details such as the European Community number and Infocard of each substance can be found in Table 12. A total of 62 hazard statements grouped into 28 hazard classes are defined, each with a code, a pictogram, and a signal word such as 'danger', 'warning', or no hazard word. An additional distinction between physical, health, and environmental hazards is also taken into account. It is often the task of producers and suppliers to classify their products following these guidelines, but for some specific substances a harmonization is done at the EU level when the perceived hazards are of major concern. Ultimately, a color is assigned to each hazard statement based on the signal word it has received, which allows a visual distinction of the potential hazardousness of a material. Red is assigned to hazards labelled 'danger', whereas yellow is used for those labelled 'warning'. Statements without a hazard word are colored gray.
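The signal-word-to-color rule described above can be expressed as a small lookup. The hazard codes used in the example are illustrative and not tied to any specific compound in this dataset:

```python
# Minimal lookup implementing the signal-word-to-color rule of the hazard
# traffic light (HTL): 'danger' -> red, 'warning' -> yellow, none -> gray.
def htl_color(signal_word):
    mapping = {"danger": "red", "warning": "yellow"}
    key = signal_word.strip().lower() if signal_word else ""
    return mapping.get(key, "gray")  # no hazard word -> gray

# Example: hypothetical hazard statements for one substance
statements = [("H225", "Danger"), ("H302", "Warning"), ("EUH210", None)]
colors = [htl_color(word) for _, word in statements]
print(colors)
```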
The life cycle assessment (LCA) method involves assessing the various environmental impacts of a product at different stages of its life cycle, i.e., raw material procurement, manufacturing, use, and final disposal. It closely follows the guidelines described in ISO standards 14040/14044. In this manuscript, a cradle-to-gate approach was used, which means that impacts are estimated only up to the final stage of electrolyte manufacture. A functional unit of 1 L of electrolyte mixture was chosen, and the analysis was carried out using the ReCiPe 2016 Midpoint impact assessment method, which describes a set of 18 impact categories, each with a specific reference unit. A calculation of the cumulative energy demand can be found in Table 9. The cumulative energy demand depicts the total energy consumed from non-renewable resources up to the final step of mixture production. Data for the preparation of the precursors were taken from published patents and literature sources, as well as from the commercial life cycle inventory database Ecoinvent 3.7.1. OpenLCA v1.10 software [2-6] was used to perform the assessment.
Ethics Statements
No ethics conflicts.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-01-01T16:16:56.247Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "d898c0ce3f2bb1c9325c0ff25a20eeca12c02d9f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2021.107775",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a323d2def52ea0efbca8fe05ab427fe432054a53",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235472790 | pes2o/s2orc | v3-fos-license | Internet-Administered Cognitive Behavioral Therapy for Common Mental Health Difficulties in Parents of Children Treated for Cancer: Intervention Development and Description Study
Background Following the end of a child’s treatment for cancer, parents may report psychological distress. However, there is a lack of evidence-based interventions that are tailored to the population, and psychological support needs are commonly unmet. An internet-administered low-intensity cognitive behavioral therapy (LICBT)–based intervention (EJDeR [internetbaserad självhjälp för föräldrar till barn som avslutat en behandling mot cancer; internet-based self-help for parents of children who have completed treatment for cancer]) may provide a solution. Objective The first objective is to provide an overview of a multimethod approach that was used to inform the development of the EJDeR intervention. The second objective is to provide a detailed description of the EJDeR intervention in accordance with the Template for Intervention Description and Replication (TIDieR) checklist. Methods EJDeR was developed through a multimethod approach, which included the use of existing evidence, the conceptualization of distress, participatory action research, a cross-sectional survey, and professional and public involvement. Depending on the main presenting difficulty identified during assessment, LICBT behavioral activation or worry management treatment protocols are adopted for the treatment of depression or generalized anxiety disorder when experienced individually or when comorbid. EJDeR is delivered via the Uppsala University Psychosocial Care Programme (U-CARE) portal, a web-based platform that is designed to deliver internet-administered LICBT interventions and includes secure videoconferencing. To guide parents in the use of EJDeR, weekly written messages via the portal are provided by e-therapists comprising final-year psychology program students with training in cognitive behavioral therapy. Results An overview of the development process and a description of EJDeR, which was informed by the TIDieR checklist, are presented. Adaptations that were made in response to public involvement are highlighted. 
Conclusions EJDeR represents a novel, guided, internet-administered LICBT intervention for supporting parents of children treated for cancer. Adopting the TIDieR checklist offers the potential to enhance fidelity to the intervention protocol and facilitate later implementation. The intervention is currently being tested in a feasibility study (the ENGAGE study). International Registered Report Identifier (IRRID) RR2-10.1136/bmjopen-2018-023708
Compared with population controls, parents of children treated for cancer report a higher prevalence of mental health difficulties, including depression, anxiety, and posttraumatic stress symptoms [6][7][8][9][10]. Despite the prevalence of mental health difficulties, parents report a number of significant barriers to accessing psychological treatment to meet their needs [18][19][20]. These barriers occur at the individual level: lack of time, putting the needs of their child first, and guilt [21,22]; provider level: lack of knowledge of mental health difficulties and willingness to diagnose and treat mental health problems; and systemic level: limited availability of trained and qualified health care providers [23][24][25].
Innovative strategies to address barriers and improve access to evidence-based psychological interventions are being implemented worldwide [26]. One such innovation is the Improving Access to Psychological Therapies (IAPT) program in England [27,28], which is now also being piloted in countries including Australia [29] and Norway [30]. The IAPT program was established in recognition that improving access to evidence-based psychological therapies required a fundamental transformation of mental health service delivery. This transformation was achieved through the delivery of psychological treatments within a stepped care service delivery model [31]. One important feature of the stepped care model is that the least restrictive evidence-based treatment available that is likely to result in a significant health gain is provided initially, for example, a treatment placing lower demands on patients in terms of cost and personal inconvenience [32,33]. At step 2, low-intensity cognitive behavioral therapy (LICBT) is provided by a psychological practitioner workforce trained in competencies to support patients to engage in LICBT interventions [34]. At step 3, high-intensity cognitive behavioral therapy (HICBT) is delivered to patients, primarily face-to-face, by traditional psychological therapists.
LICBT interventions are delivered through a range of cognitive behavioral therapy (CBT) self-help interventions, including print-based formats or e-mental health (eg, internet administered and smartphone apps) formats [35]. Using LICBT interventions to deliver specific CBT techniques enables treatment to be provided with shorter session times while ensuring that patients receive a similar dose of therapy to that delivered by HICBT therapists [34]. With HICBT, evidence-based treatment protocols specify the delivery of several CBT techniques as part of a multistrand approach, such as cognitive therapy for depression [36]. With LICBT, a single-strand approach is adopted, in which a clinical decision is made to adopt a single evidence-based CBT technique for the treatment of a specific, common mental health difficulty [34]. Given the evidence base highlighting larger effect sizes associated with guided LICBT versus those associated with self-administered LICBT [37,38], interventions are supported by a psychological practitioner workforce [34].
The evidence base for LICBT has been demonstrated in over 30 systematic reviews and 50 controlled trials [39]. Controlled trials of guided internet-administered LICBT interventions versus face-to-face psychological therapies have demonstrated equivalent overall effects [40], and acceptability has been demonstrated in usual care settings [41]. In addition to placing fewer demands on parents of children treated for cancer, guided internet-administered LICBT may represent a solution to address individual- and provider-level barriers to access [42][43][44]. An existing internet-administered CBT intervention for parents of children treated for cancer has been found to be acceptable and feasible [45]. However, this was an HICBT intervention, delivered in real time by a qualified psychologist using a group treatment format. To the best of our knowledge, there is no guided internet-administered LICBT intervention for parents of children treated for cancer.
Objectives
The objectives are twofold. The first objective is to provide an overview of the multimethod approach informing the development of a guided internet-administered LICBT intervention for parents of children treated for cancer (EJDeR [internetbaserad självhjälp för föräldrar till barn som avslutat en behandling mot cancer]), following phase I (development) of the Medical Research Council complex interventions framework [46]. The second objective is to provide a detailed description of the EJDeR intervention in accordance with the Template for Intervention Description and Replication (TIDieR) checklist [47] to overcome criticisms concerning poor and incomplete reporting of complex nonpharmacological interventions [48].
Ethics
Ethical approval for studies informing the EJDeR intervention development process was granted by the regional ethical review board.
Overview of the EJDeR Intervention
EJDeR is a guided internet-administered LICBT intervention for parents of children who ended treatment for cancer 3 months to 5 years previously. For parents, the end of treatment is a period of psychological vulnerability [6], and a subgroup reports long-term psychological distress after the end of treatment [10]. EJDeR is delivered on the Uppsala University Psychosocial Care Programme (U-CARE) portal (hereafter referred to as the portal), an in-house platform designed to deliver CBT interventions and support data collection [64,65]. EJDeR is intended to be delivered over 12 weeks and consists of 4 modules: (1) introduction and psychoeducation, (2) behavioral activation (BA), (3) worry management, and (4) relapse prevention. First, parents attend an initial assessment via videoconferencing or telephone interviews with an e-therapist. Consistent with an LICBT single-strand approach, a decision is made during the initial assessment to adopt BA to target depression or worry management for generalized anxiety disorder (GAD). Thereafter, e-therapists provide weekly written messages via the portal to guide parents to use the relevant module. Parents also receive a midintervention booster session with their e-therapist via videoconferencing or telephone. On occasions where difficulties remain after completion of BA or worry management, a collaborative decision may be reached to progress to the other LICBT technique. A detailed description of EJDeR is provided below in accordance with the items included in the TIDieR checklist [47].
TIDieR Checklist Item 1: Brief Name of the Intervention
The intervention was named EJDeR, which is a Swedish acronym for internetbaserat självhjälpsprogram för föräldrar till barn som avslutat en behandling mot cancer (an internet-based self-help program for parents of children who have completed treatment for cancer).
Overview
Theory related to the CBT model informing the development and maintenance of psychological distress was applied to understand the etiology and maintenance of distress in parents of children treated for cancer [50]. On the basis of the resulting conceptualization of distress in the population [50], depression and traumatic stress were proposed as the main psychological difficulties likely to arise in the population. Symptoms consistent with GAD (eg, persistent future-orientated worry and anxiety, fear, and health-related control behaviors) were also identified [50]. Given that depression and GAD are recommended for treatment with LICBT, and given the lack of an evidence base for LICBT for posttraumatic stress disorder (PTSD) [66], EJDeR was developed to target depression and GAD rather than PTSD. Consistent with LICBT, EJDeR comprises two separate single-strand LICBT techniques: BA [67][68][69] and worry management [54,70,71] to target depression and GAD, respectively. EJDeR is not designed to support parents with a diagnosis of severe or enduring mental health difficulties or parents who are suicidal or have a history of persistent self-harm.
BA for Depression
To prioritize their child's cancer treatment, parents of children receiving cancer treatment commonly disengage from activities that make up a normal life routine, such as decreased engagement in work, social activities, and everyday household tasks [5,49,50]. At the time of the child's illness, prioritizing their child's cancer treatment can be helpful for parents in the short term to manage the difficult situation of being a parent to a child with cancer. However, even after treatment has ended, some parents continue to disengage from these activities. This can arise as a consequence of negative reinforcement, whereby continuing to focus on their child's needs at the expense of their own and not re-engaging with previously undertaken activities can provide relief. However, failing to re-engage with previous activities, in particular those found pleasant, reduces opportunities for positive reinforcement, whereas engagement in unnecessary activities associated with their child's treatment is maintained through negative reinforcement [67,69,72,73].
To break this maintenance cycle, EJDeR adopts an LICBT BA technique [69] theoretically informed by Hopko et al [74] to overcome sources of negative reinforcement and increase engagement with pleasurable activities in a structured and graded way [67][68][69].
Worry Management for GAD
Worry in parents of children treated for cancer is commonly related to the child's disease. To help avoid potential problems during cancer treatment, or to avoid thinking about the outcome of future threats, parents may engage in worry behavior in an attempt to problem solve current difficulties and avoid future threats [75,76]. When worry is related to a practical problem and results in successful problem solving, it can be highly productive; for example, ensuring the child avoids situations that increase the risk of exposure to infectious diseases [50]. However, worry can be unhelpful when it is hypothetical and no solutions can be generated, for example, concerns related to future cancer recurrence in their child or sickness in themselves or family members without any reason [5]. On such occasions, worry may be used as a form of cognitive avoidance to reduce distress and discomfort associated with uncertainty [77,78]. When successful in reducing distress and discomfort, the use of worry behaviors becomes negatively reinforced, helping to manage an intolerance of uncertainty in the long term [77,78]. This intolerance of uncertainty is a core feature of GAD and is common among parents of children treated for cancer [79].
Behavior Change Models
To influence the degree to which patients are able to engage with the EJDeR intervention, behavior change theory [80] is integrated to supplement specific factors associated with single-strand LICBT techniques. For example, Self-Determination Theory [81] has been adopted to enhance autonomy, competence, and relatedness. A sense of autonomy is enhanced by providing a clear rationale for each LICBT technique. Clear instructions and guidance on how to complete exercises and guidance and feedback provided by an e-therapist foster competence. A sense of relatedness is established by directing significant attention to the language adopted throughout the intervention, such as the provision of empathy, normalization of common difficulties, and encouraging active engagement.
To complement Self-Determination Theory, the selection, optimization, and compensation (SOC) model [82][83][84] is embedded to support parents in re-engaging with activities that were given up while supporting their child through cancer treatment or address worry by problem solving practical difficulties faced during treatment. The SOC model has been demonstrated to be a successful strategy for managing the multiple goals associated with different life domains (eg, work, family, and leisure) in middle adulthood [85,86] that may be experienced by parents of children treated for cancer. Within BA, the SOC model is used to support parents in replacing activities that were necessary to stop by selecting other activities that are more achievable and remain of importance and value. The SOC model can help support problem solving by adapting activities in the event of experiencing changes in resources (optimization; eg, lack of time and finance) and identifying ways of achieving the activity in light of changes (compensation; eg, finding time and asking for support). Applying the SOC model enables parents to maximize desirable gains, goals, and outcomes while minimizing undesirable losses, goals, and outcomes [82][83][84].
Intervention Delivery
EJDeR is delivered on the portal and includes text, illustrations, film, audio files, and a frequently asked questions section. The About Us section presents photos and a brief biography of the EJDeR authors to verify author credibility, previously shown to be important when providing remote treatment [87]. Technical help texts are available throughout EJDeR to support parents to use all functions. To visually present how EJDeR appears to parents, sample screenshots from the intervention can be seen in Figure 3. Parents initially complete the introduction and psychoeducation module, and after the initial assessment session, e-therapists provide access to the module containing the LICBT technique best suited to their main presenting difficulty (BA or worry management). After completion of BA or worry management, a collaborative decision between the e-therapist and parent may be reached to progress to the other LICBT technique; however, parents only work with a single LICBT technique at a time. A detailed description of the module content is found in TIDieR item 4, and an overview of the structure of EJDeR is shown in Figure 4. Consistent with the LICBT approach, participant engagement with the techniques is facilitated through in-module exercises and weekly homework exercises completed on the portal and submitted to the e-therapist (see Figure 5 for an example). To provide choice, homework exercises can also be printed and completed offline, and parents subsequently complete a weekly homework review exercise on the portal. Parents can access copies of all weekly homework exercises and audio files in a web-based library in the portal.
Intervention Training
e-Therapists are provided with a portal handbook, with instructions on how to use EJDeR and training videos on the delivery of the BA and worry management techniques. e-Therapists review parent progress through the modules and any completed in-module exercises and homework exercises on the portal.
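As an illustration, the module-access rules described above (introduction first, exactly one LICBT technique unlocked after assessment, optional progression to the other technique only after the first is completed) can be sketched as a small state object. All class, function, and module names here are hypothetical and are not taken from the U-CARE portal's actual code.

```python
# Illustrative sketch of the EJDeR module-unlocking rules; names are assumptions.
INTRO, BA, WORRY, RELAPSE = "intro", "ba", "worry", "relapse"

class ParentModuleAccess:
    def __init__(self):
        self.unlocked = {INTRO}       # introduction/psychoeducation is open from the start
        self.completed = set()
        self.active_technique = None  # parents work with one technique at a time

    def assign_technique(self, technique):
        """Called after the e-therapist's assessment (or a later collaborative decision)."""
        assert technique in (BA, WORRY)
        if self.active_technique is None:
            # first technique, chosen from the initial assessment
            self.unlocked.add(technique)
            self.active_technique = technique
        elif self.active_technique in self.completed and technique not in self.unlocked:
            # progression to the other technique after completing the first
            self.unlocked.add(technique)
            self.active_technique = technique
        else:
            raise ValueError("current technique must be completed first")

    def complete(self, module):
        if module not in self.unlocked:
            raise ValueError("module not unlocked")
        self.completed.add(module)

access = ParentModuleAccess()
access.complete(INTRO)
access.assign_technique(BA)     # depression identified as the main presenting difficulty
access.complete(BA)
access.assign_technique(WORRY)  # optional later progression to worry management
print(sorted(access.unlocked))  # → ['ba', 'intro', 'worry']
```

The sketch deliberately refuses to unlock a second technique while the first is still active, mirroring the single-strand constraint described in the text.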
Module: Introduction and Psychoeducation
Parents are provided with a brief introduction of how to use EJDeR. Psychoeducation about psychological distress in the context of being a parent of a child treated for cancer is also provided. Parents are introduced to two case vignettes that are used throughout EJDeR, based on the Five Areas CBT model [88,89], to facilitate an understanding of the CBT rationale. To enhance engagement, case vignettes were informed by our previous research [5,51]. Parents (1) complete their own Five Areas CBT model; (2) identify areas of importance and value in their life; and (3) set three goals that are specific, positive, and realistically achievable. Parents are presented with the two case vignettes briefly outlining the techniques they will work with during EJDeR.
Alongside completion of this module, parents take part in an initial assessment session with an e-therapist (see TIDieR checklist Item 6) to determine the parent's main presenting difficulty. The e-therapist provides access to the BA module for parents experiencing depression and the worry management module for parents experiencing GAD.
Module: BA
The full clinical protocol for BA has been described elsewhere [67][68][69]. Activities that make up a normal life routine are categorized into three types: (1) routine (providing life structure and typically repeated during the week, such as housework and cooking); (2) pleasurable activities that provide a sense of pleasure or enjoyment that are determined by the parent; and (3) necessary activities that are recognized as having the potential for serious negative consequences if not done (eg, attending hospital appointments, taking medication, or paying a bill). Parents are gradually supported to re-engage with activities they have stopped, aiming to re-establish a balance of routine and pleasurable activities, and where required, include necessary activities. The clinical protocol includes four main steps (identifying current activities, identifying stopped activities, organizing activities, and planning activities). As an adaptation, an additional step entitled Prioritizing Activities was added, recognizing that parents commonly experience difficulties trying to balance their home, work, and family life after cancer treatment has ended [5]. Parents may need to reprioritize routine activities to gain opportunities to re-engage with neglected pleasurable activities. A case vignette is used to guide parents through BA, including examples of completed exercises and occasions where setbacks are experienced, and to provide guidance and feedback on the use of BA [60,61]. Parents are encouraged to work with BA, with the exact number of weeks required decided collaboratively between the parent and e-therapist.
Module: Worry Management
The clinical protocol for worry management has been described elsewhere [54,70,71]. Parents capture worries over a week in a worry diary and categorize worries into two types: (1) practical (eg, important and can be solved) and (2) hypothetical (eg, important but have no way of being solved, such as worries relating to past events, things that might happen in the future, or things that cannot be controlled). Parents review the types of worries they have captured and determine whether a particular type (eg, practical or hypothetical) has a greater impact and is more distressing. Parents are encouraged to use problem solving for practical worries and worry time for hypothetical worries. A case vignette is also used to guide parents through worry management. Parents continue to work with worry management, with the number of weeks decided collaboratively between the parent and the e-therapist. Parents may work with both worry time and problem solving.
Module: Relapse Prevention
This module is based on a relapse prevention protocol for LICBT [54,68] and is completed at the end of the 12-week intervention period or before if a collaborative decision is made between the parent and the e-therapist. Parents identify warning signs that may indicate relapse using the Five Areas CBT model [88,89] completed in the introduction and psychoeducation module. Next, parents identify what activities, skills, and techniques they have learned and found helpful during EJDeR to inform a staying-well toolkit. Parents are encouraged to make a written commitment to check-in with themselves, initially on a weekly basis, to consider what warning signs they may be experiencing. If parents find themselves experiencing warning signs, they should use their staying-well toolkit to identify how to address these.
TIDieR Checklist Item 5: Expertise, Background, and Specific Training Given to Intervention Providers
EJDeR is designed to be guided by e-therapists trained in the competencies required to support LICBT [90]. Within the IAPT program [27], guidance is provided by a psychological well-being practitioner workforce, where practitioners receive 9 months of graduate or postgraduate level training and are not required to have a core health or mental health professional qualification [34]. In Sweden, there is no psychological well-being practitioner workforce. Therefore, e-therapists are intended to be psychology program students in at least their fourth year of study, who have completed a term of advanced studies in CBT and have not yet gained an accredited mental health professional qualification.
A 2-day training program for EJDeR was provided to e-therapists by intervention authors PF (IAPT program LICBT national expert advisor and clinical lead, accredited cognitive behavioral psychotherapist and chartered psychologist) and JW (research psychologist, expert in LICBT, and teacher on educational programs to train mental health professionals using LICBT), a Swedish licensed psychologist, and 2 research assistants (MSc level). Training focuses on developing an understanding of (1) LICBT, (2) BA, (3) worry management, (4) difficulties commonly experienced by parents of children treated for cancer, (5) the structure of EJDeR, (6) support protocols, and (7) using the portal. e-Therapists receive weekly group clinical supervision via videoconferencing or face-to-face with a licensed psychologist with expertise in the population and internet-administered CBT.
On-demand individual supervision with a licensed psychologist is provided, if required.
The Portal
The portal [64,65] incorporates security and safety features to ensure sensitive information management, including (1) user log-in via bank ID (a citizen authentication system used in Sweden); (2) access through an encrypted connection using an HTTPS protocol; (3) protection of the webserver via Uppsala University's secure firewall, allowing only http secure traffic; and (4) storage of study data on a separate database to personal data (eg, the parent's identity and contact details) with both databases encrypted using 256-bit transparent data encryption. User action logging is enabled via action metadata management to allow user behavior analysis, including (1) log-ins; (2) log-outs; (3) opened modules; (4) section views (eg, the library); (5) opening PDFs; (6) homework entries, (7) multimedia (eg, audio and video) file consumption (including play, pause, and stop); and (8) time-stamp data. Message logging is also enabled, for example, the number of automated reminders sent via SMS text messaging or email, and the number of written messages sent between the e-therapist and the parent within the portal. A number of persuasive system design elements [91,92] are integrated to improve intervention adherence: (1) tunneling (eg, intervention content delivered in a predefined step-wise order to guide users through the intervention); (2) tailoring (eg, intervention content is personalized to user needs, ie, their main presenting mental health difficulty); (3) personalization (eg, reminder messages include the parent's first name); (4) self-monitoring (eg, mood monitoring via a visual analog scale); (5) rehearsal (eg, exercises are repeated); (6) reminders (eg, automated messages to remind parents to perform specific actions); (7) similarity (eg, use of case vignettes); and (8) liking (eg, use of professional illustrations).
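The user-action logging described above (log-ins, module opens, media play/pause/stop, all time-stamped) can be pictured as a simple append-only event log with aggregation for behavior analysis. This is a minimal sketch under assumed field and event names; the portal's actual schema is not described in this paper.

```python
# Minimal sketch of time-stamped user-action logging; event names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UserAction:
    user_id: str
    event: str                  # e.g. "login", "module_opened", "media_play"
    detail: str = ""            # e.g. module name or audio file name
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ActionLog:
    def __init__(self):
        self.entries = []

    def record(self, user_id, event, detail=""):
        """Append one action with an automatic UTC time stamp."""
        self.entries.append(UserAction(user_id, event, detail))

    def count(self, user_id, event):
        """Simple aggregation for behavior analysis, e.g. number of log-ins."""
        return sum(1 for a in self.entries if a.user_id == user_id and a.event == event)

log = ActionLog()
log.record("parent-1", "login")
log.record("parent-1", "module_opened", "behavioral_activation")
log.record("parent-1", "media_play", "relaxation_audio.mp3")
log.record("parent-1", "login")
print(log.count("parent-1", "login"))  # → 2
```

In a production system the same events would be written to an encrypted database alongside the message logs mentioned in the text, but the append-and-aggregate shape is the essential idea.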
e-Therapist Guidance
Guidance is provided to parents by a secure inbuilt videoconferencing system, written messages via the portal, and over the telephone. e-Therapists hold an initial assessment session with the parent informed by existing protocols [68] via videoconferencing or telephone. At the end of the assessment, a decision is made concerning which LICBT technique is best suited to the parent depending on their main difficulty (eg, depression or GAD). Thereafter, e-therapists provide weekly guidance via written messages within the portal, informed by evidence suggesting frequent support is associated with adherence [93]. Weekly written messages are informed by an existing brief check-in support protocol [68] and include (1) reviewing and providing feedback on weekly homework exercises; (2) reinforcement of progress made; (3) normalization of any difficulties encountered; (4) assistance with problem solving difficulties and directing the parent to advice in the EJDeR intervention; (5) setting a plan for the use of EJDeR over the coming week; and (6) encouragement to support continued motivation and engagement. The brief check-in support protocol [68] is informed by the ICBT Therapist Rating Scale [94] and designed to minimize the use of undesirable e-therapist behaviors [95]. e-Therapists may provide at-need written support via the portal if requested and are required to respond to parents within 1 working day. Parents receive a booster session via videoconferencing or telephone halfway through EJDeR to review and assess progress, identify and provide assistance for problem solving any difficulties experienced, and provide continued encouragement and motivation.
TIDieR Checklist Item 7: Location
e-Therapists were located at Uppsala University, Sweden. EJDeR can be accessed on PCs, smartphones, and tablets.
TIDieR Checklist Item 8: Timing, Duration, and Intensity
EJDeR is designed to be delivered over 12 weeks. The initial assessment session lasts approximately 45 minutes and the booster session lasts for 30 minutes. e-Therapists are expected to spend 20-30 minutes per parent each week, providing weekly written messages via the portal. Parents are expected to complete the introduction and psychoeducation module and one LICBT intervention module (eg, BA or worry management).
TIDieR Checklist Item 9: Tailoring the Intervention
Content has been closely developed alongside PRPs and has been informed by research identifying the experiences, distress, needs, and preferences for support of parents of children treated for cancer [5,6,[49][50][51][52]. Examples of tailoring for the population include (1) the use of case vignettes of parents using the intervention, which were informed by our previous research to enhance realism and relevancy [5,51]; (2) professional illustrations depicting parents throughout the intervention; (3) the inclusion of psychoeducation in the context of the situation of being a parent of a child treated for cancer (eg, fear of cancer recurrence); (4) the choice between attending the initial assessment session via telephone or videoconference [51]; and (5) the inclusion of a midintervention booster session [51].
TIDieR Checklist Item 10: Modifications of the Intervention
EJDeR is currently being tested in a single-arm feasibility study, ENGAGE [96,97] (ISRCTN 57233429), with a baseline, posttreatment (12 weeks), and 6-month follow-up, with an embedded qualitative and quantitative process evaluation to inform a future phase III definitive randomized controlled trial. Findings from the embedded qualitative process evaluation will inform future potential modifications to the intervention. Any intervention modifications during the course of the study will be reported in the ENGAGE study results.
TIDieR Checklist Item 11: Assessing Intervention Adherence (Planned)
Videoconference and telephone guidance sessions are audio-recorded with informed consent. Overall, 15% of written communication and 15% of video or telephone communication between parents and e-therapists are reviewed by a member of the research team to assess e-therapist fidelity to the clinical protocol. Parent activity on the portal is logged to examine parent adherence, including the number of log-ins, opened modules, completed in-module and homework exercises via the portal, and the number of written messages via the portal sent to e-therapists.
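The planned fidelity check, reviewing a random 15% of communications between parents and e-therapists, amounts to random sampling from the set of recorded sessions. The sketch below is illustrative; only the 15% fraction comes from the text, and the session identifiers are invented.

```python
# Illustrative 15% random sampling of recorded sessions for fidelity review.
import math
import random

def sample_for_fidelity_review(session_ids, fraction=0.15, seed=None):
    """Return a random sample of session IDs for protocol-fidelity review.

    The sample size is rounded up so that at least one session is reviewed
    whenever any sessions exist.
    """
    if not session_ids:
        return []
    k = max(1, math.ceil(fraction * len(session_ids)))
    rng = random.Random(seed)  # seed allows a reproducible audit trail
    return rng.sample(list(session_ids), k)

sessions = [f"session-{i:03d}" for i in range(40)]
reviewed = sample_for_fidelity_review(sessions, seed=42)
print(len(reviewed))  # → 6  (ceil(0.15 * 40))
```

Rounding up rather than down is a design choice for small caseloads: with only a handful of sessions, truncating 15% to zero would leave e-therapist fidelity entirely unchecked.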
TIDieR Checklist Item 12: Assessing Intervention Adherence (Actual)
Actual adherence to EJDeR will be reported in the results of the ongoing single-arm feasibility study ENGAGE [96].
Principal Findings
The detailed description of EJDeR, in line with the TIDieR checklist, can help facilitate e-therapist fidelity to the EJDeR protocol during the ENGAGE study [98]. Furthermore, if EJDeR is implemented later, clinical delivery will be replicable.
Limitations and Strengths
Although public involvement was embedded within intervention development and resulted in valuable feedback and intervention changes, involvement was at a consultation level, with feedback provided on materials already developed by the research team. Involvement may have been enhanced by greater engagement of PRPs earlier in the process, for example, by holding in-depth discussion groups, involving PRPs in writing the intervention, and developing case vignettes together to add extra authenticity. PRPs only provided feedback on a written version of EJDeR and not when EJDeR was uploaded onto the portal; the intervention was, therefore, reviewed outside of its intended context. However, an important objective of the ongoing study ENGAGE is to examine the acceptability and feasibility of EJDeR in more depth.
EJDeR does not include the collection of routine weekly clinical outcome measurements for clinical purposes, for example, to help inform treatment decisions. Instead, weekly clinical outcome measurements (depression, Patient Health Questionnaire-9; GAD, GAD-7; posttraumatic stress symptoms, PTSD Checklist for DSM-5, and PTSD Checklist-Civilian Version) were collected via the portal to inform a process evaluation for research purposes only [96]. Collection of clinical outcome measurements on a session-by-session basis is a core feature of the stepped care model to inform treatment planning [27] and a core feature of the successful implementation of internet-administered CBT in routine health care [99].
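The weekly measures named above are conventional sum scores: the PHQ-9 totals nine items each scored 0-3 (range 0-27) and the GAD-7 totals seven items each scored 0-3 (range 0-21). A minimal scoring sketch follows; the function names are illustrative and this is generic scoring logic, not the ENGAGE study's actual data pipeline.

```python
# Hedged illustration of standard sum-score computation for PHQ-9 and GAD-7.
def sum_score(items, n_items, max_item=3):
    """Validate item responses and return their total."""
    if len(items) != n_items:
        raise ValueError(f"expected {n_items} item responses, got {len(items)}")
    if any(not 0 <= i <= max_item for i in items):
        raise ValueError(f"item responses must be between 0 and {max_item}")
    return sum(items)

def phq9_total(items):
    """Depression (PHQ-9): nine items, total range 0-27."""
    return sum_score(items, 9)

def gad7_total(items):
    """Generalized anxiety (GAD-7): seven items, total range 0-21."""
    return sum_score(items, 7)

print(phq9_total([1, 2, 0, 1, 3, 0, 1, 2, 1]))  # → 11
print(gad7_total([0, 1, 1, 2, 0, 1, 0]))        # → 5
```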
Consistent with the single-strand LICBT interventions developed in England as part of the IAPT program, EJDeR was adapted to enhance acceptability for the Swedish population. Adopting a more structured framework to inform the cultural adaptation of evidence-based psychological interventions may improve acceptability and relevance [100]. Finally, to ensure consistency with the LICBT approach, EJDeR targets depression and GAD. Therefore, EJDeR does not target all mental health difficulties commonly experienced by parents of children treated for cancer, such as PTSD [50]. Future psychological interventions developed for parents of children treated for cancer may target other difficulties.
Notwithstanding these limitations, the development of EJDeR was informed by a series of iterative research studies, including evidence synthesis, conceptualization of distress, participatory action research, and a cross-sectional web-based survey, and therefore, it is strongly grounded in research on the population. Public involvement was embedded within the intervention development process, resulting in invaluable feedback and intervention changes. Development included translation by native Swedish speakers and subsequent back-translation by a professional translation company.
Comparison With Prior Work
To the best of our knowledge, this is the first LICBT intervention to be described in detail and in accordance with the TIDieR checklist [47]. Although LICBT clinical protocols have been published [68], the TIDieR checklist represents a systematic and structured approach to facilitating detailed intervention descriptions. The provision of a systematic and structured clinical protocol may be of particular importance, given that therapeutic drift [101] in supporting LICBT is commonly reported [102]. In addition, the content of LICBT interventions differs significantly [34,103] and is poorly described [104]. Furthermore, the use of the TIDieR checklist, alongside the application of further intervention fidelity measures, will facilitate determining the extent to which EJDeR is delivered as planned in the ENGAGE study, thereby increasing confidence in the results of any subsequent effectiveness trial [98].
Conclusions
Informed by phase I (development) of the Medical Research Council guidance for the development and evaluation of complex interventions [46], an overview of the development process is provided, along with a detailed description of the EJDeR intervention informed by the TIDieR checklist. The provision of a detailed and structured intervention protocol is of particular importance for the implementation of evidence-based treatments and reduction of research waste [48], providing procedures to maximize fidelity to protocols [98]. Reducing therapist drift is a core feature associated with the successful implementation of internet-administered LICBT [99].
Design quality assurance in cultural heritage restoration
This study addressed the problem of quality assurance of scientific design documentation intended for cultural heritage restoration projects. The quality assurance systems currently used by restoration companies in Russia were reviewed. Such systems have disadvantages that stem from the specific nature of restoration projects. It was proposed to introduce an additional quality assurance element, namely appraisal of design solutions by a scientific methodological council. The main purpose and principles of the council's work were formulated. The proposal was tested by implementing such a council in the quality assurance procedure of a Moscow restoration and design company in 2016-2018. The main criterion of the council's efficiency was the percentage of positive conclusions of the Historical-Cultural State Expert Review obtained on the first try. By the end of the second year of the council's work, this indicator had increased by 30% and approached 100%. The findings of this research may be of interest to restoration companies and to specialised organizations that develop quality management systems.
Introduction
In Russia, preservation of cultural heritage is a licensable activity. Russian administrative law holds company executives responsible for violating licensing requirements. Moreover, according to Section 6 Article 45 of the Federal Law "On objects of cultural heritage (historical and cultural monuments) of the peoples of the Russian Federation" No. 73-FZ dated 25.06.2002, only natural persons, qualified by the federal authorities for cultural heritage protection, can be admitted to conservation and restoration of cultural assets that are listed in the Unified State Register of Cultural Heritage Objects or have been recently discovered. There are procedures for revoking the qualification of such specialists if in their professional activity they commit violations that result in damage to a cultural heritage object. These strict rules arise from the high cost of any mistake that can lead to an irreparable loss of cultural heritage of the peoples of Russia.
The conservator-restorer community has repeatedly discussed the need to improve the quality assurance (QA) mechanisms for restoration-related design processes. This study was conceived following one such discussion at the II International Congress of Restorers in September 2015 in Kazan. We aimed to examine Russian experience in QA of research and design documentation related to cultural assets preservation, to analyse the weak points in the QAS, to develop improvement proposals, and to test the improved QA methods in the work of a restoration company. This paper presents the main results of the examination and testing.
Methods
In an attempt to examine the domestic experience in QA of research and design documentation related to cultural assets preservation, the following methods were selected:

• Questionnaire survey. Some multiple-choice questions were included to assess the presence/absence of a QAS, its content and efficiency. The total number of design organizations licensed by the Ministry of Culture of Russia could be designated as the general population for the study; in that case, the total population would exceed 1500. However, we considered it permissible to significantly reduce this value by presenting the following requirements to the survey participants:
a. hold a license to carry out activities related to cultural assets preservation for at least 5 years before the start of the survey;
b. be a noticeable participant in the restoration market, with an annual revenue of 50 mln rub or more and at least 5 state contracts signed per year;
c. be a specialized company that receives at least 75% of its total revenue from restoration of cultural heritage objects and restoration-related design activities.
As a result, the total population comprised about 90 organizations. The required sample size, calculated for a confidence level of 90% and a confidence interval of 5%, amounted to 68 organizations. The questionnaires were sent to 90 restoration companies, and replies were received from 71 recipients. The resulting confidence interval was equal to ±4.52%. Table 1 shows the geographical spread of the survey and the number of companies that returned the questionnaires.
• Face-to-face interviews with executives of the restoration companies and officers of the governmental agencies for cultural heritage protection. The number of respondents was 42 people, including 31 representatives of organizations that had previously participated in the questionnaire survey. Considering the limitations of this kind of research, no quantitative assessment of the results was made. In the interviews, we were able to assess the satisfaction of the company executives with the available QAS and to clarify the viewpoint of the governmental agencies on the existing QAS. Detailed notes were taken during each interview.
• Analysis of quotations for state tenders placed in the Unified Information System of Public Procurement. We selected state tenders, placed during the year preceding the start of this study, in which the availability of a QAS/QMS was a necessary criterion. The quotations of the restoration companies were considered in addition to the information obtained from the questionnaires and the interviews.
• Selection and analysis of proposals from companies that develop QAS/QMS. We requested commercial proposals on the development and implementation of a QAS for design documentation. The main condition was to take into account that the design documentation was intended for cultural heritage restoration projects. The purpose of this step was to assess the potential of QAS/QMS developers as it relates to design processes for cultural heritage restoration projects.

When analysing the obtained information, we sought to answer the following questions: whether QAS are used by the restoration companies; which risks associated with design development a QAS can reduce or eliminate; and, most importantly, whether the specific nature of restoration projects can be taken into account and embedded into the QAS.
The answers to these questions served to assess the efficiency of QAS applied by the restoration companies.
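The sample-size figures quoted in the Methods (finite population of about 90 organizations, 90% confidence, ±5% margin, 71 replies) can be checked with the standard formula for proportions with a finite-population correction. This is a sketch of the arithmetic only, not the authors' own procedure; the worst-case proportion p = 0.5 is an assumption.

```python
import math

Z90 = 1.645  # z-score for a 90% confidence level
P = 0.5      # worst-case (most conservative) proportion, assumed
N = 90       # finite population: qualifying restoration companies

def required_sample(e, n_pop):
    """Sample size for margin e, with finite-population correction."""
    n0 = Z90**2 * P * (1 - P) / e**2          # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / n_pop))

def margin_of_error(n, n_pop):
    """Margin of error achieved for n returned questionnaires."""
    return Z90 * math.sqrt(P * (1 - P) / n * (n_pop - n) / (n_pop - 1))

print(required_sample(0.05, N))          # 68 organizations, as in the text
print(round(margin_of_error(71, N), 4))  # ~0.045, i.e. about ±4.5%
```

With 71 replies the achieved margin works out to roughly ±4.5%, matching the reported ±4.52% (the small difference comes from rounding of the z-score).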
Further on, we determined the weak points in the QAS and proposed ways to mitigate them. The proposal was to introduce a Scientific Methodological Council (SMC) as a QAS element. It was implemented into the working procedure of the restoration and design company AK-Project, LLC (Moscow, Russia). Since design solutions in the sphere of cultural asset restoration have to be submitted for the Historical-Cultural State Expert Review, the efficiency of using the SMC in QA was evaluated by comparing the percentage of positive conclusions of the Historical-Cultural State Expert Review, obtained on the first try, before and after the introduction of the SMC.
Results and Discussion
The study showed that the majority of restoration companies (83% of respondents) applied a QAS in design development. These QAS can broadly be divided into two types.
Type I QAS were aimed mainly at checking the compliance of design documentation with the submission guidelines. In approximately 70% of cases, the controlled parameters included: completeness of documentation; correctness of execution and formatting; interconnection of sections; correctness of references in the drawings; relevance of the explanatory (written) part to the graphic content.
About 30% of the companies with a Type I QAS reported that they also evaluate the compliance of the design solutions themselves with the current regulations. It should be noted that companies using a Type I QAS, as a rule, did not engage a specialized organization to implement it; most companies developed their QA procedures independently.
Type I QAS were usually organized in the following way: the QA check was performed at the level of the design department (42%) or at the level of documentation release before delivery to the customer (34%), and only in 24% of cases was the design quality monitored at both levels.
Type II QAS (30% of respondents) were more complex, and the QA of project documentation was embedded in a general QMS that met the requirements of GOST R ISO 9001-2015 [1]. In these companies, the QMS was developed and certified by a specialized organization. It should be noted that the availability of QMS certification was an important criterion in the evaluation of quotations in the Unified Information System of Public Procurement. At the same time, the interviews with the company executives demonstrated low satisfaction with the available QMS. They were described as formal; keeping the QMS documents appeared cumbersome; designers distrusted this form of control; responses to deviations from the quality standard and elimination of their causes were slow; and the QMS required large labour costs to ensure full functioning.
Thus, the following observations emerged from the analysis of the available QAS for design documentation in restoration companies: a majority of the restoration companies used QAS for design documentation in one form or another; the most controlled parameters were the compliance of the documentation with the requirements for execution/formatting and completeness; the interconnection of sections and compliance with the current regulations were checked less often; the QAS for design documentation, implemented as a part of the QMS, were regarded as inefficient by the company executives.
In addition to the above shortcomings, another serious disadvantage of the studied QAS was revealed. In accordance with GOST R 55528-2013 [2], design documentation intended for the preservation of cultural heritage objects is categorized as scientific design documentation. The scientific component consists of a complex study of the cultural heritage object, both at the stage preceding the design development and at the stage of repair and restoration works. The complex study includes historical, cultural, architectural, engineering, chemical-technological and other types of research, the results of which form the basis for the accepted restoration and design solutions. At the same time, the statutory regulation of design solutions for restoration purposes is limited to several documents [2][3][4][5]. Hence, the final quality of design solutions largely depends on the accuracy of the conducted research, the conformity of design solutions to the research results, and the professional competencies of the designers. The available QAS for restoration-related design documentation can only solve the task of assessing the compliance of the design solutions with the research results, while the accuracy of such assessment may require an additional study. All the other factors arising from the specific nature of restoration projects remain uncontrolled, and this is the principal weakness of the studied QAS. These findings confirm that there is an urgent need to improve the QA mechanisms for restoration-related design processes.
We requested commercial proposals from 12 companies that specialize in QAS/QMS development and asked for a QA solution that specifically addressed the above weaknesses. Seven proposals were received. However, the analysis of the proposed solutions showed that they were nearly the same as the Type II QAS considered above. Therefore, the problem requires a principally new approach.
A possible solution is to include an additional element or level of control in the Type I QAS. For this purpose, it appears reasonable to create a Scientific Methodological Council (SMC) affiliated with a professional organization of restorers or an association of restoration companies. A similar practice exists in some restoration organizations, for example, at the State Unitary Enterprise "Central Scientific Restoration Design Workshops" (Moscow, Russia). Therefore, claiming no originality for the idea, this paper seeks to formulate the basic principles of SMC organization and functioning, which make it possible to use it as a QA element for scientific design documentation. The main objective of the SMC is to evaluate the quality of the adopted design solutions, based on expert appraisal and followed by a recommendation to approve the documentation for delivery to the customer, to return it for revision, or other. The expert appraisal should not cover such formal aspects as formatting, completeness, etc., which can remain under the responsibility of the regular QA procedures. The SMC should primarily evaluate the validity and essence of the design solutions.
The purpose of the SMC determines the main principles of its work:

1. The council should consist of leading experts in the industry and qualified restorers of the first and highest categories. At the same time, at least 30% of the council members should not have employment relations with the organization whose design documentation is being considered.
2. Any possibility of exerting undue influence on the council members should be eliminated, especially if such pressure comes from the management of the association that the council is affiliated to. For this, the following conditions should be met:
• when considering a design project, the council must include neither the heads of the company or department that developed the design project, nor the authors of the design project;
• the council's decision to return a design project for revision or correction cannot be re-negotiated or dismissed by the management of the restoration company that developed it;
• the financing of the council should not depend on the ratio of approved and returned projects.
3. It is advisable to divide the council into architectural and engineering sections.
4. The activities of the council should be regulated by a statute.
5. Council meetings should be held as soon as possible after a design project is submitted for consideration by the interested party. The need to appraise the design documentation should not delay its delivery to the customer.
6. In order to support the interest of the professional community in the council's work, it may be recommended to include discussions of the current situation in the industry, preparation of legislative initiatives in the field of cultural assets conservation, etc. in the council's agenda.
Having formulated these main principles, we also drafted the main documents regulating the council's work, i.e. the Statute, the MoM template, the template of SMC meeting notification and others.
In 2016-2018, the SMC was implemented into the QAS of the restoration and design company AK-Project, LLC. In the course of implementation, the following difficulties were successfully addressed. First, well-known experts, such as conservation architects of the highest category Mikhail B. Kanaev and Viktor F. Korshunov and conservation engineer of the highest category Natalia Yu. Tyutcheva, expressed interest in the council's work. The participation of such specialists immediately improved the credibility of the SMC and, hopefully, ensured the correctness of the council's decisions.
Second, the designers' concerns that the only purpose of the council was to criticize their designs eventually disappeared. Once the documentation was revised following the experts' comments, it was obvious that the design solutions had become more accurate and scientifically grounded. The designers could use the experts' comments to improve their knowledge.
Third, at first the need to review and revise the design documentation led to late delivery of documentation to the customer, and the management of the restoration company had to deliver some urgent projects without appraisal by the council. However, as the positive impact of the new QA step became increasingly obvious, the number of design projects submitted for expert appraisal increased significantly.
To assess the council's efficiency within the QAS, the percentage of positive conclusions of the Historical-Cultural State Expert Review obtained on the first try was chosen as the criterion. After 3 years of the council's work in AK-Project, this indicator increased by more than 30% and approached 100%. During the experimental period, the staff composition of AK-Project, LLC remained the same and the complexity of design problems did not vary significantly. This supports the high efficiency of introducing the SMC into QA.
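The efficiency criterion above is a first-try approval rate. As a minimal sketch of the arithmetic, the rates before and after introducing the council can be compared directly; the submission counts below are hypothetical, chosen only to match the reported magnitudes (a relative increase above 30%, a final rate approaching 100%), not the company's actual data.

```python
# Hypothetical submission counts; the paper reports only the relative
# change, so these numbers are illustrative, not the company's data.
before = {"first_try_positive": 14, "submitted": 20}  # 70% first-try rate
after = {"first_try_positive": 19, "submitted": 20}   # 95% first-try rate

def first_try_rate(d):
    """Share of submissions receiving a positive Expert Review conclusion
    on the first try."""
    return d["first_try_positive"] / d["submitted"]

rate_before = first_try_rate(before)
rate_after = first_try_rate(after)
relative_increase = (rate_after - rate_before) / rate_before

print(f"{rate_before:.0%} -> {rate_after:.0%}, +{relative_increase:.0%} relative")
# 70% -> 95%, +36% relative
```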
At the same time, we would like to draw attention to the fact that QAS used in the restoration practice of the European Union have mechanisms similar to the SMC in addition to standard control procedures [6], whereby a group of experts is involved in quality assessment. In particular, the authors had a chance to familiarize themselves with this practice during a visit to Denkmal, the European exhibition of restoration, preservation of monuments and renovation of historic buildings (Leipzig, Germany, November 2016).
Conclusion
This study of the QAS for restoration-related design documentation revealed that their disadvantages stem from an inability to take into account the specific nature of cultural assets restoration projects and, hence, to assess the quality of design solutions. Specialized organizations that develop and implement QAS/QMS appear unable to offer solutions to these problems. Thus, it was suggested to introduce appraisal of the design documentation by an SMC as an element of the QAS. To test the idea, such a council was implemented into the working procedures of one of the leading restoration and design companies. As a result, this element of the QAS was found to be efficient, since the percentage of positive conclusions of the Historical-Cultural State Expert Review obtained on the first try increased significantly.
The present findings might be useful for the conservator-restorer community. We believe that such councils may improve QA in design development intended for the preservation of cultural heritage objects.
Mannan-Binding Lectin (MBL) Deficient Individuals with the O/O Genotype are Highly Susceptible to Gastrointestinal Diseases
Background: Mannan-binding lectin (MBL) and ficolin-3 are initiators of the lectin pathway, which is important for clearance of pathogens and apoptotic cells through complement activation. MBL deficiency (MBLD) has been associated with infectious complications, but its clinical relevance in adults is unclear. The definition of MBLD is commonly linked to low serum levels, but the deficiency is mainly due to functional polymorphisms in the MBL2 gene that lead to dysfunctional MBL forms. Homozygotes for dysfunctional alleles (O/O) have the lowest serum levels (<50 ng/ml), with a defect in opsonisation and complement activation. Ficolin-3 deficiency, due to homozygosity for a frameshift mutation (1637delC) in the FCN3 gene, was recently shown to be associated with pyogenic infections, mainly in the lungs.
The physiological role of MBL has been well studied, whereas the role of ficolins and CL-11 is less well investigated. MBL has a significant role in a number of pathogenetic and homeostatic processes. It binds and eliminates (through complement activation) various microorganisms and altered self-components, including dying host cells (apoptotic/necrotic), circulating immune complexes (CICs) and immunoglobulins (agalactosylated IgG and certain forms of IgM and IgA) [7][8][9][10]. The MBL molecule itself can act as a TLR-2/6 co-receptor within the cell and direct intracellular signalling, thus mediating functions outside complement activation [11]. Additionally, MBL modulates proinflammatory cytokine production and clearance of endotoxins via Kupffer cells [12,13].
The concentration of MBL in serum can vary 10,000-fold between individuals, and this can be explained by combinations of single nucleotide polymorphisms (SNPs) in exon 1 and in the promoter region of the MBL2 gene [14]. The SNPs in exon 1 have been named variant alleles B, C and D (collectively called O), whereas the normal allele is referred to as A [14]. Individuals carrying the O allele have dysfunctional MBL forms unable to bind their ligands [15][16][17]. SNPs at positions -50 (H/L) and -221 (X/Y) in the MBL2 promoter and at position +4 (P/Q) in the 5′-untranslated portion of exon 1 are associated with different MBL levels [18][19][20]. The X variant has the strongest down-regulating effect. Genotypes XA/O and O/O are generally referred to as MBL deficiency genotypes [21]. Approximately 60% of Caucasians have been found to … MBL deficiency (MBLD) is now classified as a primary immunodeficiency (PID) by the International Union of Immunological Societies Expert Committee on Primary Immunodeficiencies [22]. The MBL cutoff serum value has been defined as ≤500 ng/ml in a large cohort (N=1642) from four separate studies, and it has been suggested that this cutoff be used in studies of MBL disease associations [21].
Ficolin-3 is the most abundant of the five LP-PRPs in serum (20 µg/ml) [33], and the FCN3 gene is the most highly expressed of the five LP-PRP genes in the liver and lungs [34]. Three cases of total ficolin-3 deficiency due to the 1637delC mutation have recently been reported [35][36][37]: an adult with recurrent severe pulmonary infections, a neonate with necrotizing enterocolitis (NEC), and a premature newborn with Streptococcus agalactiae infections. The clinical relevance of having low levels or deficiency of both MBL and ficolin-3 is unknown, and the role of ficolin-3 is still unclear. Ficolin-3 deficiency is classified as a PID by the International Union of Immunological Societies Expert Committee on Primary Immunodeficiencies [22].
To date, numerous disease-specific cohorts have been screened for MBL deficiency [38]. However, to our knowledge, no data are available in which the clinical phenotype of an adult MBLD cohort is compared to a randomly selected cohort from the general population. Thus, we collected a relatively large population of adult Caucasians with low MBL levels (≤500 ng/ml) and thoroughly evaluated each individual's clinical history in relation to randomly selected individuals, regardless of their MBL and clinical status. In addition, we determined MBL2 and FCN3 genotypes in the MBLD cohort and their potential disease predisposition.
Subjects and samples
From the diagnostic laboratory database at the department of immunology, 228 individuals (≥18 years old) were found to have MBL levels ≤500 ng/ml measured between 2006 and 2009. A total of 205 individuals were contacted and 163 of them agreed to participate in the study. Forty-three of these 163 were not able to attend the clinic within the timeframe set for blood and data collection. Data were gathered from 120 individuals: 90 women (75%) and 30 men (25%). The age range was 18-76 years, with a mean age of 44.7 years. Approximately 70% of the samples sent to our diagnostic laboratory are from ambulatory or general practice settings in Iceland, sent for various medical reasons unknown to our laboratory. The study was approved by the National Bioethics Committee of Iceland and the Data Protection Committee of Iceland. Informed consent was obtained from all participants in the study. All participants answered a detailed questionnaire focusing on health in general, including infections. The questionnaire had previously been used in our earlier studies on IgA deficiency [39][40][41][42]. The control group consisted of 63 individuals (ages 27 to 76 years) who were randomly selected from the Icelandic National Registry [39,40,42]. Serum and EDTA blood were collected from all participants. Serum was frozen at -80°C until used in ELISA (enzyme-linked immunosorbent assay), and EDTA blood was kept at -20°C until DNA was isolated. Genomic DNA was extracted from EDTA blood samples using the NucleoSpin® Blood QuickPure kit (Macherey-Nagel, cat. no. 740569.50) according to the manufacturer's instructions.
MBL serum levels
MBL serum levels were measured using a sandwich ELISA system previously described [29].
Statistical analysis
The two cohorts were compared with the Mann-Whitney U test for continuous variables, and the Kruskal-Wallis H test was applied when comparing the three genotype subgroups of the MBLD cohort. Categorical data were compared with the χ2 test or Fisher's exact test. The level of significance was set at 0.05, and the program package SPSS 11.0 (SPSS, Inc., Chicago, Ill) was used for processing the data.
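The tests named above map directly onto standard library calls. A minimal sketch with SciPy might look as follows; the serum levels are simulated and the contingency counts hypothetical (only roughly matching the proportions reported later), since the study's raw data are not available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated MBL serum levels (ng/ml); illustrative only, not study data.
control = rng.normal(1564, 400, size=63)  # control cohort, n=63
mbld = rng.normal(175, 100, size=120)     # MBLD cohort, n=120

# Two cohorts, continuous variable -> Mann-Whitney U test.
u_stat, p_mwu = stats.mannwhitneyu(control, mbld, alternative="two-sided")

# Three genotype subgroups -> Kruskal-Wallis H test (hypothetical splits).
aa = rng.normal(320, 80, size=40)
ao = rng.normal(177, 60, size=60)
oo = rng.normal(50, 10, size=20)
h_stat, p_kw = stats.kruskal(aa, ao, oo)

# Categorical data (e.g. gastroscopy yes/no per cohort) -> Fisher's exact
# test on a 2x2 table; the within-cohort splits here are hypothetical.
table = [[73, 47],   # MBLD: ~61% of 120 underwent gastroscopy
         [15, 48]]   # control: ~24% of 63
odds, p_fisher = stats.fisher_exact(table)

print(p_mwu < 0.05, p_kw < 0.05, p_fisher < 0.05)
```

With group separations of this size, all three comparisons come out significant at the 0.05 level.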
MBL serum levels in the cohorts and among the MBL2 genotypes
The distribution of MBL serum levels of the control (n=63) and MBLD (n=120) subjects is illustrated in Figure 1A. Mean MBL levels for controls and MBLD individuals were 1564 and 175 ng/ml, respectively. The mean MBL levels of the A/A, A/O and O/O genotype subgroups were 320 ng/ml (range 220-500 ng/ml), 177 ng/ml (range 50-410 ng/ml) and 50 ng/ml (no range), respectively (Figure 1B).
Gastrointestinal infections:
Surveying the gastrointestinal symptom profiles of the two groups revealed that the occurrence of oesophagitis and gastritis in the preceding 5 years was high among MBL deficient individuals and significantly higher than in the control cohort (Figure 2B). In addition, about 8% of the MBLD individuals had more than five episodes of gastritis over the last two years, whereas none of the control cohort had (p<0.0001). Furthermore, the MBLD cohort had significantly more often been subjected to gastroscopy (MBLD 61% versus control 24%, p<0.0001). Interestingly, four individuals had been diagnosed with Campylobacter infections, and they all belonged to the MBLD cohort. The two cohorts did not differ in the occurrence of ulcer (MBLD 8.5% versus control 1.6%, p=0.0642) or Salmonella infections (MBLD 3.3% versus control 1.6%, p=0.48957).
Mucosa, cutaneous and blood infections:
The MBLD individuals reported a significantly higher incidence of stomatitis, conjunctivitis and onychia than the control group (Figure 2C). In addition, gingivitis tended to be more frequent amongst MBLD individuals (MBLD 41.5% versus control 28.6%, p=0.0868). No difference was detected between the groups regarding the prevalence of external otitis and bacterial skin infections in the preceding 5 years (MBLD 12.6% versus control 7.9%, p=0.3408, and MBLD 17.8% versus control 19.0%, p=0.8377, respectively). Eleven MBLD individuals reported 1-4 episodes of sepsis in the preceding 5 years, whereas no one in the control cohort did (Figure 2C). No difference was found between the two cohorts in the recurrence (≥5 times in the last 12 months) of herpes labialis (MBLD 6% versus control 2%, p=0.840).
Urogenital infections: About 25% (8/32) of the men in the MBLD cohort reported prostatitis ≥1 times in the preceding 5 years, whereas only 5.9% (2/34) of the men in the control cohort did (Figure 2D). Episodes (1-4 times in the last 12 months) of common infections such as cystitis, urethritis and bacterial infections of the vagina were significantly more frequent in the MBLD group (Figure 2D). The cohorts differed neither in the reported frequency of vaginal yeast infections nor in nephritis (MBLD 47.8% versus control 37.9%, p=0.3582, and MBLD 6.7% versus control 3.2%, p=0.3208, respectively).
Antibiotic treatment: Antibiotic treatment during the last 12 months was significantly more common in the MBLD cohort compared to controls (Figure 3A). In addition, the MBLD individuals had at some time in their lives needed prophylactic antibiotic therapy more often than the control group (MBLD 34% versus control 14%, p=0.0001).
Recurrent and severe infections:
The infections were classified into three classes according to severity and frequency (Figure 3B). This classification is based on a study-specific classification system previously used in our studies on clinical manifestations of selective IgA deficiency [42]. The infections subjected to this analysis were common cold, pharyngitis, sinusitis, tracheitis, bronchitis, pneumonia, urethritis, prostatitis and conjunctivitis. Approximately 43% of the MBL deficient individuals suffered from recurrent and severe episodes of infections (class 3), whereas only 16% of the control group did (p=0.0003) (Figure 3B).
MBL2 and FCN3 genotypes and clinical findings
Recurrent common cold was relatively more often reported by the O/O genotype subgroup than by the A/A genotype subgroup (Figure 4A). However, no association was detected between genotype and various upper and lower respiratory infections (Figure 4A). Interestingly, among the individuals in the MBLD cohort that reported pleurisy, two … A significantly higher occurrence of gastritis was found among individuals with the O/O genotype, and they also tended to have a higher frequency of oesophagitis than the A/O and A/A genotypes (Figure 4B). In the case of gastritis, the effect of the O allele tends to be gene dose dependent. In addition, O/O genotypes had significantly more often been subjected to gastroscopy than A/O and A/A genotypes (Figure 4B). However, no association was detected between genotypes and Salmonella/Campylobacter infections or ulcer (data not shown). Stomatitis, conjunctivitis, cutaneous infections, urogenital infections and sepsis were not associated with genotype (data not shown).
Five individuals in the MBLD cohort (4.2%) were heterozygous (C/-) for the 1637delC allele in the FCN3 gene. This is a higher heterozygote frequency than we have observed previously in 500 Icelandic blood donors (2%) (unpublished data). The MBL serum levels of the C/- individuals ranged from 350 to 500 ng/ml, and their MBL2 genotypes were all A/O (three A/B, one A/C and one A/D). The MBLD individuals heterozygous for the 1637delC allele were not more susceptible to infections than the wild-type individuals (C/C), regardless of MBL2 genotype.
Discussion
In this case-control study, we found that adult MBL deficient individuals have an increased proneness to various respiratory, gastrointestinal, urogenital, mucosal, skin and blood infections compared to a randomly selected control group. In addition, we found that the infections were recurrent and severe in the MBLD cohort. Furthermore, we showed that MBLD individuals with the O/O genotype were significantly more likely to suffer from oesophagitis and gastritis, as well as to have undergone gastroscopy, than MBLD individuals with the A/O and A/A genotypes.
Our results on the high occurrence of respiratory tract infections in adult MBL deficient individuals support previous findings [44]. However, our MBLD cohort was more frequently diagnosed with sinusitis than previously reported [44]. A high incidence of tonsillectomy and adenoidectomy among MBL deficient individuals has also been observed previously [45]. In addition, we found that tonsillectomy tended to be linked to the O allele, which also supports previous results [45]. It has been shown that pneumonia caused by Streptococcus pneumoniae is linked to the O allele and that low MBL levels increase mortality due to pneumococcal infection [21,46]. Since our questionnaire was retrospective, it is not possible to identify the underlying causes of pneumonia in our study cohort.
Previous studies have also reported a significantly increased frequency of severe MBL deficiency (MBL levels ≤ 50 ng/ml) in adult patients with a history of recurrent and/or severe infections, including pneumonia and bronchitis [44]. In that study, it was ascertained that the patients neither had concomitant immunodeficiencies nor had received immunosuppressive therapy [44]. Our results (Figure 3B) support their findings; however, we cannot confirm that MBL deficiency was the only cause of the increased infection susceptibility, because concomitant immunodeficiencies were not investigated in our study.
Interestingly, 9.3% of the MBL-deficient individuals had had more than one episode of sepsis during the last 5 years, whereas none of the control group had (p=0.013). These results are in concordance with previous reports indicating that high serum MBL levels may be protective against sepsis [47,48]. The MBL2 genotypes did not differ with respect to occurrence of sepsis in our study, which contrasts with other findings suggesting that the O allele predisposes to sepsis [49,50].
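Genotype-phenotype contrasts of this kind (e.g. sepsis occurrence, p=0.013) are two-group frequency comparisons of the sort assessed with Fisher's exact test. A minimal self-contained sketch; the counts in the usage line are illustrative only, since the paper does not report the underlying 2x2 table:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    summing hypergeometric probabilities of all tables at least as
    extreme (probability <= that of the observed table)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p_table(x):  # probability of a table with top-left cell x
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical example in the spirit of the sepsis comparison above:
# 11 of 118 MBLD individuals vs 0 of 50 controls (counts are illustrative)
p = fisher_exact_2x2(11, 107, 0, 50)
```

For published tables the `scipy.stats.fisher_exact` implementation would normally be used; the hand-rolled version here just makes the hypergeometric logic explicit.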
What is perhaps the most outstanding finding in our study is the significant association between O/O genotypes and gastrointestinal inflammatory symptoms. Supporting this association is the observation that O/O genotypes had significantly more often been subjected to gastroscopy (Figure 4B). Previous studies have shown that the O allele is associated with a risk of developing more severe gastric mucosal atrophy and intestinal metaplasia in Helicobacter pylori-infected individuals [51]. Other studies have shown that the O allele is not linked to chronic gastritis, dyspepsia, duodenal ulcer, ulcerative colitis (UC) and/or Crohn's disease [52][53][54][55][56]. The cause of the high frequency of gastrointestinal inflammatory symptoms among O/O genotypes in our study, and the role of MBL in upper gastrointestinal immunity, remain unclear. These observations await further study.
The immune system in immunocompetent individuals has various compensatory mechanisms to counteract a deficiency in one pathway. Whether the LP-PRPs can compensate for each other is unknown. We screened for the ficolin-3 deficiency allele (1637delC) in the MBLD cohort. None of the five heterozygotes we found were O/O, and their MBL levels were all above 350 ng/ml. Although one cannot draw conclusions from such a small group of individuals, one might postulate that carrying both the O/O genotype and the C/- or -/- genotype could be more relevant for survival and susceptibility to infectious diseases. Elevated levels of ficolin-2 have previously been reported in O/O patients [57], an observation suggesting that ficolin-2 might compensate for MBL deficiency. The same may apply to ficolin-3. Ficolin-2 and ficolin-3 are both expressed in liver, have high structural homology, share ligand-binding affinity, circulate in serum and both initiate the LP [58]. Ficolins may substitute for MBL deficiency, which may explain why the frequency of the O/O genotype is relatively high in Caucasians (4%) and why a subset of O/O individuals are healthy [24,59]. Our results warrant further studies involving screening for both deficiency alleles (i.e. O and 1637delC) in larger and different cohorts, and investigation of the advantages and/or disadvantages for the individual.
The results of our study indicate that more clinical attention should be paid to adult patients with MBL deficiency. The MBLD cohort was highly susceptible to infections; therefore, the MBL serum levels of patients with recurrent and severe infections should be routinely determined. The O allele was only associated with oesophagitis, gastritis and gastroscopy, but not with the various respiratory, urogenital, skin, mucosal and blood infections in the MBLD cohort. Thus, more attention needs to be given to MBL2 genotyping of patients with gastrointestinal complications. Genotyping of a larger study cohort including patients with gastrointestinal inflammation symptoms is warranted to better understand the significance of MBL in gastrointestinal immunity and/or homeostasis.
Veterinary trypanocidal benzoxaboroles are peptidase-activated prodrugs
Livestock diseases caused by Trypanosoma congolense, T. vivax and T. brucei, collectively known as nagana, are responsible for billions of dollars in lost food production annually. There is an urgent need for novel therapeutics. Encouragingly, promising antitrypanosomal benzoxaboroles are under veterinary development. Here, we show that the most efficacious subclass of these compounds are prodrugs activated by trypanosome serine carboxypeptidases (CBPs). Drug-resistance to a development candidate, AN11736, emerged readily in T. brucei, due to partial deletion within the locus containing three tandem copies of the CBP genes. T. congolense parasites, which possess a larger array of related CBPs, also developed resistance to AN11736 through deletion within the locus. A genome-scale screen in T. brucei confirmed CBP loss-of-function as the primary mechanism of resistance and CRISPR-Cas9 editing proved that partial deletion within the locus was sufficient to confer resistance. CBP re-expression in either T. brucei or T. congolense AN11736-resistant lines restored drug-susceptibility. CBPs act by cleaving the benzoxaborole AN11736 to a carboxylic acid derivative, revealing a prodrug activation mechanism. Loss of CBP activity results in massive reduction in net uptake of AN11736, indicating that entry is facilitated by the concentration gradient created by prodrug metabolism.
Introduction

Development of new drugs for infectious diseases has taken on new urgency in recent years, given the emergence and spread of antimicrobial resistance [1] that threatens global health. Resource-poor countries, where the infectious disease burden is highest, are most at risk. Drug-resistant veterinary pathogens seriously compromise global food security. Animal African trypanosomiasis (AAT or nagana) affects millions of domestic animals each year [2], causing billions of dollars' worth of lost productivity in a part of the world where food scarcity impacts the population heavily. Given the rise of resistance to existing trypanocides [3], the Global Alliance for Livestock Veterinary Medicines has developed a programme to seek new drugs for AAT (https://www.galvmed.org/livestock-and-diseases/livestock-diseases/animal-africantrypanosomosis/). The leading class are the benzoxaboroles [4][5][6][7], boron-containing compounds that display versatile therapeutic potential against various infectious diseases [8]. Acoziborole is undergoing Phase II/III clinical trials for human African trypanosomiasis (HAT) [9,10], a neglected tropical disease with unmet medical needs [11]. Acoziborole may play a key role in the HAT elimination programme [12], being active against both bloodstream and CNS-involved stages of the disease after a single, oral dose [9].
More recently, another benzoxaborole, AN11736, was identified as a potential development candidate for AAT [13]. AN11736 cures cattle of both Trypanosoma congolense and Trypanosoma vivax infection as a single 10 mg/kg dose [13]. Compared to other benzoxaboroles, AN11736 is extremely potent against trypanosomes, killing at doses two to three orders of magnitude lower than that of the earlier AAT benzoxaborole candidate AN7973 [14] and of acoziborole [15], respectively.
As novel chemical entities, the benzoxaboroles are unlikely to display cross-resistance with current trypanocides. However, characterisation of the mode of action and resistance mechanisms of these compounds is only now starting to emerge. Acoziborole resistance was initially associated with multiple genetic changes [16]. Subsequently, acoziborole, AN11736 and AN7973 were shown to target the Cleavage and Polyadenylation Specificity Factor 3 (CPSF3) which, when over-expressed, reduced drug sensitivity [15]. CPSF3 has also been identified as a target for benzoxaboroles in two apicomplexan parasites [17,18], although in highly divergent organisms, i.e. bacteria and fungi, other targets have been proposed, including tRNA synthetases [19,20] and beta-lactamase [21]. In Trypanosoma brucei treated with acoziborole, metabolomics experiments revealed a profound change in methionine metabolism [22], which may relate to RNA-processing defects, given the multi-methylation of the spliced leader sequence used for trans-splicing by trypanosomatids [23]. Some processing mechanisms of particular benzoxaborole molecules by parasites have been identified. A trypanocidal benzoxaborole of the amino-methyl subclass was shown to be subject to two-step metabolic processing, involving a primary conversion by an amine oxidase in host serum to an aldehyde, which is further metabolised to a carboxylate via T. brucei aldehyde dehydrogenase [24]. More recently, the benzoxaborole AN13762 was found to be intracellularly hydrolysed in Plasmodium falciparum by a lysophospholipase homologue, whose loss of function was linked to resistance [25].
Here we report on the risk and mode of resistance to AN11736 in animal trypanosomes. Our results indicate that AN11736 acts as a prodrug that, once inside trypanosomes, is cleaved by specific serine carboxypeptidases, thus creating a concentration gradient resulting in more parent drug entering the trypanosome cell. Loss or reduction of this enzymatic activity renders trypanosomes highly resistant to AN11736 and to related benzoxaboroles containing a common peptidic linker between the boron head group and a secondary moiety.
Selection of resistance to AN11736 in T. brucei and T. congolense
Resistance to AN11736 was selected in T. brucei and T. congolense by continuous culture in escalating doses of drug. T. brucei able to grow in the presence of 9-18 nM of AN11736 were obtained after ~30 days of culture (Fig 1A). Resistant clones (one each of two independent resistant lines) were around 200-fold (TbOX R _A) and >300-fold (TbOX R _C) less sensitive to AN11736 than parent cells.
For T. congolense, in vitro cultivation for more than eight months, in the presence of increasing concentrations of the compound, was required to reach high-level resistance. One individual clone from each of two different resistant lines, able to grow in the presence of 24-50 nM of the benzoxaborole, was chosen for subsequent studies. The sensitivity of these parasites to AN11736 decreased by >50-fold (clone TcoOX R _B) to nearly 200-fold (clone TcoOX R _C) as compared to the parent line ( Fig 1B).
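The fold-resistance figures quoted above are simply EC50 ratios relative to the parent line. A quick sketch using the T. congolense EC50 values reported for Fig 1B in the text (TcoWT 0.3 nM; clones 15 and 54 nM; line names written here without subscripts):

```python
# EC50 values for AN11736 (nM), as reported in the text for Fig 1B
ec50 = {"TcoWT": 0.3, "TcoOXR_B": 15.0, "TcoOXR_C": 54.0}

# Fold-resistance of each line relative to the wild type parent
fold = {line: value / ec50["TcoWT"] for line, value in ec50.items()}
```

The resulting 50-fold and 180-fold values are consistent with the ">50-fold" and "nearly 200-fold" figures quoted in the text.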
The T. congolense resistant clones grew only slightly slower than the parent line, but the growth rate was further reduced when the trypanosomes were cultured in the presence of AN11736 (S1A Fig). The resistance phenotype of these clones was stable after three months of growth in the absence of AN11736 (S1B Fig). In mouse infections in vivo, parent but not AN11736-resistant T. congolense parasites were cleared upon treatment with 5 mg/kg AN11736 (S1C Fig). The T. congolense AN11736-resistant parasites did not demonstrate cross-resistance to drugs currently licensed for AAT, nor to other trypanocides used for HAT (S1 Table). Notably, no cross-resistance to the clinical candidate acoziborole was found (S1 Table).
Further cross-resistance analysis in T. congolense using a diverse array of benzoxaboroles revealed that AN11736-resistant trypanosomes were cross-resistant to compounds with a peptide-bond linker containing a valinate-amide motif, whereas these parasites showed no cross-resistance to benzoxaboroles without the linker (Fig 1C; see S2 Table for all data). These results agree with the absence of cross-resistance to acoziborole, which lacks the linker. Similar findings were obtained for T. brucei AN11736-resistant parasites when tested against various trypanocides and a selection of the same benzoxaborole array (S3 Table).
Resistant trypanosomes share genetic changes in a tandem array of serine carboxypeptidase (CBP) genes
Genome sequencing of the two T. brucei resistant clones TbOX R _A and TbOX R _C revealed a notable reduction in read depth in a region on chromosome 10 containing a tandem repeat of the three serine peptidases TbCBP1A, TbCBP1B and TbCBP1C (Tb927.10.1030-1050, respectively) (Fig 2A). Evidently, a deletion of one or two TbCBP alleles had occurred in TbOX R _A and TbOX R _C, within a region of apparent loss of heterozygosity.
Genome sequencing of the T. congolense resistant clones TcoOX R _B and TcoOX R _C and the parent lines (TcoWT, cultured for a limited number of passages, and TcoWT_HP, a high-passage line maintained in culture for the same time required for drug-resistance selection) revealed reduced read coverage across the syntenic region of chromosome 10. This region comprises nine annotated TbCBP1 paralogues in the T. congolense IL3000 reference genome (here referred to as TcoCBP1A-I) (Fig 2B), although the read counts across the T. congolense CBP locus suggest a different arrangement of paralogues in our experimental strain relative to the reference genome assembly. We observed a pronounced drop in read depth at the 5' end of TcoCBP1A and a rise across the gene TcoCBP1I, indicating a loss of several CBPs in resistant cells. The high sequence conservation between the genes complicates analysis. TcoCBP1A and TcoCBP1I are the most divergent from the other seven TcoCBPs (S2 Fig). Five genes (TcoCBP1C, TcoCBP1D, TcoCBP1F-H) had the fewest mappable reads and hence were most likely deleted in both lines. TcoCBP1A, TcoCBP1B and TcoCBP1I did not appear to be affected. We were not able to identify homozygous SNPs in TbCPSF3 or TcoCPSF3 that could explain the resistance phenotype.
Knockdown of serine carboxypeptidases causes AN11736 resistance in T. brucei
RNA interference (RNAi) target sequencing (RIT-seq) provides a means to identify genes whose knockdown promotes drug resistance [26]. Selecting a library of T. brucei cells containing RNAi-inducing constructs covering the whole genome in the presence of AN11736 also identified CBP knockdown as the dominant 'hit' conferring resistance (Fig 3A). Moreover, silencing the expression of the TbCBP1 genes by targeted RNAi confirmed their importance for sensitivity to AN11736, as tetracycline induction of RNAi in these trypanosomes increased the EC50 ~25-fold (Fig 3B).
Disruption of TbCBP1A-C (Tb927.10.1030-1050) function by CRISPR-Cas9 gene editing [27] corroborated these results. Cas9 programmed to target TbCBP1A-C was induced for 24 h and then cells were selected with AN11736, using two independent Cas9/sgRNA CBP1 clones. The growth profiles indicated robust drug resistance upon induction of TbCBP1A-C editing that was not observed in wild type control cells (Fig 3C). A PCR-based assay confirmed that the CBP1 locus was disrupted in both independent edited clones (Fig 3D, upper panel). Consistent with repair by single-strand annealing [28], sequencing of the products revealed recombination within blocks of identity in the 1050 (TbCBP1C) and 1030 (TbCBP1A) genes. Notably, these cells retained a chimeric copy of CBP1A and CBP1C, suggesting that the chimeric protein fails to sensitise parasites to the drug. This may also be the case for the paralogues retained by the resistant strains described above that emerged following drug selection (Fig 2). Assessment of the clones' sensitivity to AN11736 revealed a near 200-fold increase in EC50 for clone 1 and a 300-fold increase in EC50 for clone 2 (Fig 3E). Thus, CRISPR-Cas9 editing confirmed the role of these serine carboxypeptidases in sensitivity to AN11736.

Fig 1. (A) …and line TbOX R _C (EC50 97 nM) as compared to the wild type line (TbWT, 0.3 nM) (right). (B) Stepwise in vitro selection of resistance to AN11736 in independent T. congolense lines TcoOX R _B and TcoOX R _C (left) and resistance levels of two resistant clones obtained from lines TcoOX R _B (EC50 15 nM) and TcoOX R _C (EC50 54 nM) as compared to the wild type parent line TcoWT (EC50 0.3 nM) (right). (C) Cross-resistance of the T. congolense AN11736-resistant clones to other benzoxaboroles revealed the presence of a peptide-bond linker (highlighted in blue) in the highly cross-resistant compounds (>20-fold), whereas the same chemical feature was absent in non-cross-resistant compounds (<2-fold). See S2 Table for full data. Values in (A), (B) (right panels) represent means ± SEM of n ≥ 4 (A) or n = 3 (B) independent biological replicates, with data in (B) each generated from two technical replicates. https://doi.org/10.1371/journal.ppat.1008932.g001
Re-expression of serine carboxypeptidases re-sensitises AN11736-resistant trypanosomes to the drug
Re-expression of a functional copy of TbCBP1B re-sensitised T. brucei TbOX R _A to AN11736 (Fig 4A). Trypanosomes retain the catalytic triad Ser-Asp-His of carboxypeptidases, identified by alignment with other serine carboxypeptidases belonging to the S10 family, well characterised in yeast [29,30] and in another trypanosomatid, T. cruzi [31] (S4 Fig). Disruption of the T. brucei catalytic triad in the active site of TbCBP1B, by substituting the nucleophilic S179 with a hydrophobic alanine, failed to re-sensitise the cells to the drug using the same approach (Fig 4A). Re-expression of TcoCBP1A and TcoCBP1H in TcoOX R _C also partially restored sensitivity to AN11736 (Fig 4B). Heterologous re-expression of TbCBP1B in TcoOX R _C (Fig 4C) partially re-sensitised the parasites to AN11736. A similar effect was obtained when re-expressing the only predicted CBP1 serine carboxypeptidase annotated in the T. vivax genome in TbOX R _A (Fig 4D). Heterologous expression of TcoCBP1H, but not TcoCBP1A, in TbOX R _A restored sensitivity (Fig 4E).
These results prove that serine carboxypeptidases sensitise different Trypanosoma species to the benzoxaborole AN11736 and that loss of these genes renders the parasites less sensitive to the compound.
Trypanosome serine carboxypeptidases cleave AN11736 to a carboxylate derivative
Mass spectrometry analysis of T. brucei wild type and resistant parasites treated for 6 h with a high dose of AN11736 (0.9 μM, >1,000-fold EC50) showed the compound was present in the parent and both resistant lines, indicating there was no defect in uptake (Fig 5A). Further analysis revealed a compound fragment that was present in the wild type, but not the resistant lines (m/z 292.1347, retention time 10 minutes) (Fig 5B). This fragment had a boron isotope distribution (S5 Fig). In T. congolense treated for 6 h with 2 μM AN11736 we could detect AN11736 in the parent wild type, both resistant lines and the CBP1H-complemented line (Fig 5C). In T. congolense the C14H19O5NB fragment was identified as a large peak in the wild type line but at substantially lower levels in the resistant lines TcoOX R _C and TcoOX R _B (Fig 5D). The CBP1H-complemented resistant line TcoOX R _C regained higher levels of the product, proportional to regaining sensitivity to the drug.
Taken together, these data reveal that AN11736 acts as a prodrug that, once inside trypanosomes, is cleaved by serine carboxypeptidases at the ester bond to give a carboxylate derivative (m/z 292.1347, later synthesized under the code name AN14667). In resistant trypanosomes, where genes encoding for serine carboxypeptidases have been deleted or disrupted, this activation does not occur, or does so with substantially reduced efficiency.
When tested against trypanosomes, the carboxylate derivative AN14667 showed much reduced activity compared to AN11736 (~15,000-fold less active against T. brucei wild type and~800-fold less active against T. congolense wild type), most likely explained by the charged carboxylate derivative poorly traversing the parasite membrane (S6 Fig). target-sites; the red arrowheads indicate the primers used for the PCR-assay; blue indicates >99% identical regions among multiple paralogues; grey, unique to Tb927.10.1050; green, unique to Tb927.10.1030. (E) Dose-response curves for AN11736 of the two CRISPR-Cas9 edited clones analysed, both displaying a drugresistant phenotype: when CBP1 function was disrupted by Cas9 editing, T. brucei became, on average, 250-fold more resistant to AN11736. Data in (B), (E) represent means ± SD of n = 3 independent biological replicates. https://doi.org/10.1371/journal.ppat.1008932.g003
AN11736 metabolism causes accumulation of AN14667 and sustains further internalization of the parent compound in sensitive cells
Absolute quantification of AN11736 and its carboxylate metabolite AN14667 by UPLC-MS/MS in wild type and resistant parasites substantiated these findings (Fig 5E and 5F). Over a period of 6 h, relatively unchanged intracellular amounts of AN11736 (1 ng ml-1 at 0 h, essentially cells centrifuged as soon as possible after addition of drug, and 0.6 ng ml-1 at 6 h) were measured in TbWT, while in these cells levels of AN14667 were already 1,600-fold higher at time 0 h and increased further by 6 h (1,560 ng ml-1 at 0 h and 11,436 ng ml-1 at 6 h), indicating very fast processing of the parent compound. In the resistant line TbOX R _A the opposite was observed: levels of AN11736 were higher at both timepoints (6.4 ng ml-1 at 0 h and 3.85 ng ml-1 at 6 h), while AN14667 levels remained much lower (34.5 ng ml-1 at 0 h and 150.5 ng ml-1 at 6 h) than those found in TbWT (Fig 5E). Quantification of these metabolites in T. congolense corroborated the T. brucei data. Levels of AN11736 remained low and slightly decreased over time in TcoWT (1.6 ng ml-1 at 0 h and 0.6 ng ml-1 at 6 h), while over the same period levels of AN14667 were markedly higher (17,272 ng ml-1 at 0 h and 12,814 ng ml-1 at 6 h). In TcoOX R _C, AN11736 was present at higher concentrations (7.5 ng ml-1 at 0 h and 9.2 ng ml-1 at 6 h) than in the wild type line, while its metabolite levels remained lower than in the wild type (767 ng ml-1 at 0 h and 1,757 ng ml-1 at 6 h) (Fig 5F).
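The sink effect can be seen directly by taking the metabolite-to-parent ratio from the quantification above. A short sketch using the reported T. brucei concentrations (line names written without subscripts; values in ng ml-1):

```python
# Intracellular levels (ng/ml) of parent drug (AN11736) and metabolite
# (AN14667) at 0 h and 6 h, as reported for wild type and resistant T. brucei
levels = {
    "TbWT":    {"AN11736": {0: 1.0, 6: 0.6},  "AN14667": {0: 1560.0, 6: 11436.0}},
    "TbOXR_A": {"AN11736": {0: 6.4, 6: 3.85}, "AN14667": {0: 34.5,   6: 150.5}},
}

# Metabolite-to-parent ratio: a simple read-out of prodrug processing
ratio = {line: {t: d["AN14667"][t] / d["AN11736"][t] for t in (0, 6)}
         for line, d in levels.items()}
```

Even at time 0 the wild type ratio is over a thousand, and it is orders of magnitude above that of the resistant line, consistent with rapid CBP-mediated conversion trapping the charged product inside sensitive cells.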
T. brucei over-expressing CPSF3 were less sensitive to both acoziborole and AN11736 [15], indicating that both drugs act by inhibiting this target. The differential activity between acoziborole and AN11736 would, therefore, appear to be related to the metabolism of the parent drug by CBP1, creating a more potent, charged derivative that is retained in the cell, as previously shown for another benzoxaborole processed by a distinct metabolic route [24]. Hence, AN11736 enters cells down a concentration gradient that is perpetuated by drug metabolism, with the cleaved derivative accumulating to concentrations much higher than the parent drug. As absolute quantification shows, the metabolite AN14667 reaches vastly higher levels than the parent compound, supporting the superior activity of AN11736 compared to other benzoxaboroles that do not undergo enzymatic activation, owing to a far greater accumulation of active compound inside parasites.
Discussion
The benzoxaborole class of compounds has produced multiple clinical development candidates against a range of conditions, including infectious disease [4][5][6][7]. Acoziborole, for example, is in clinical trials for human African trypanosomiasis [9,10] and AN11736 is a member of a highly potent benzoxaborole subclass currently under consideration for treatment of AAT [13]. Recently, the conserved splicing factor CPSF3 was proposed as the major cellular target for benzoxaboroles in trypanosomes [14,15], supported by an earlier study that revealed, among other changes, amplification of CPSF3 selected during induction of resistance in T. brucei [16]. AN11736 and a series of related compounds have potency against trypanosomes that exceeds that of acoziborole by two to three orders of magnitude. Given the requirement for high potency (to keep costs as low as possible for use in cattle), these compounds have received particular interest.
As part of the development process, understanding the risk and mechanisms of resistance is crucial. Here we reveal that the high potency of AN11736 is related to prodrug processing: once the compound has entered trypanosomes it is cleaved by serine carboxypeptidase(s) to a carboxylate product trapped within the cell. This enables accumulation of the benzoxaborole to greatly exceed that where no prodrug conversion and entrapment occurs. The same process, however, leads to a less desirable situation where selection of resistance becomes possible due to loss of enzyme(s) involved in prodrug processing.
Resistance to AN11736 occurs by disruption of expression of serine carboxypeptidase (CBP) genes, which results in diminished AN11736 cleavage. This mechanism of prodrug activation appears analogous to one recently observed in P. falciparum, where benzoxaborole AN13762 is cleaved by esterase activity, whose loss confers resistance to the compound [25].
The CBPs have been characterised in the trypanosomatid T. cruzi [32]. In this parasite, the C group serine peptidases in the S10 serine peptidase family proteolytically cleave substrates at their C-termini at acidic pH. This cleavage happens in lysosomes [31], where the enzymes also have esterase and deamidase activities [32,33]. In T. cruzi, activation of serine peptidases may be achieved through cleavage of a pro-domain by cruzipain [34]. T. brucei serine carboxypeptidases also have a pro-domain, but it is not known whether brucipain, the cruzipain homologue, is required for T. brucei serine peptidase activation. It is possible that mutations in brucipain would result in a secondary mechanism of resistance, although this was not observed in our analysis. It is probable that the T. brucei and T. congolense CBP serine carboxypeptidases play roles similar to those in T. cruzi, where lysosomal localisation and multiple hydrolytic capabilities [32,33] likely underpin a generic role in macromolecule turnover. We have not ascertained whether the genes are essential in procyclic form parasites, which are resident in the tsetse fly. It will be important to understand this in the future, since a fitness cost in this lifecycle stage would hinder the transmission of parasites that develop resistance via this route in the mammalian bloodstream.
Gene deletions in the serine carboxypeptidase arrays of both T. congolense and T. brucei could clearly be linked to resistance to AN11736. Due to the high degree of sequence homology of the CBPs in both T. brucei and T. congolense we were unable to identify the precise CBP gene deletion(s). However, resistance to AN11736 occurred relatively quickly in T. brucei in vitro, while for T. congolense, which possesses a larger array of CBP genes, resistance took longer to emerge. Importantly, T. congolense AN11736-resistant parasites retained infectivity and resistance phenotype in mice.
The ~200-fold level of resistance obtained for both T. brucei and T. congolense indicates that, in the absence of drug processing, the potency of the otherwise hyper-potent AN11736 would be similar to that of many other benzoxaboroles, including acoziborole (500 nM against T. congolense and 270 nM against T. brucei) [22]. This suggests that maintaining a concentration in animals that would still kill, even if drug activation were lost, could be possible, albeit requiring much higher doses of AN11736, which might compromise commercial development.
Conversely, a similar cleavage of AN11736 could occur through peptidases present in the blood of treated animals, hence affecting pharmacokinetics. This possibility could reduce the amount of parent compound in circulation, an occurrence particularly important in view of potential prophylactic applications. Our data suggest that the pre-processed compound is of much reduced activity, presumably as the charged derivative is membrane impermeant, consistent with the sink effect resulting from intracellular generation of a carboxylate product.
Experiments with heterologous expression of serine carboxypeptidases suggest that benzoxaborole activation by ester cleavage identified for T. congolense and T. brucei is most likely shared with other trypanosomes, or at least with the major veterinary species T. vivax. In T. vivax the CBP locus in the Y486 strain reference genome consists of a single gene. Whether this would make resistance easier to acquire, or conversely more difficult, given the lack of redundancy, should be investigated once an in vitro culturing system for T. vivax has been developed.
As well as elucidating the resistance mechanism to a class of potent benzoxaboroles, the discovery of a particular moiety that is specifically cleaved by trypanosomal carboxypeptidases offers the potential to exploit that linker to create novel prodrugs with targeted activity against trypanosomatids, i.e. drugs that may not have the ability to traverse membranes could be linked, via the CBP cleavable bridge, to hydrophobic moieties enabling diffusion into cells where they would be cleaved to release the specific inhibitor. A risk of resistance to such compounds emerging through mutation to the CBP genes, though, could limit such a use.
Ethics statement
The mouse experiment was carried out in accordance with the Animals (Scientific Procedures) Act 1986 and the University of Glasgow care and maintenance guidelines. All animal protocols and procedures were approved by the Home Office of the UK government and the University of Glasgow Ethics Committee. Work was covered by Home Office Project Licence 60/4442.
Generation of oxaborole-resistant lines
T. congolense and T. brucei parasites were selected for resistance to AN11736 by subculturing cells in vitro in the continuous presence of increasing concentration of the compound. Multiple, independent cell lines were selected in parallel. Resistant lines were cloned by limiting dilution.
Trypanocidal activity
In vitro trypanocidal activity was measured using the Alamar Blue method as previously described [36]. T. brucei were seeded at 2×10^4 cells ml-1 and T. congolense at 2.5×10^5 cells ml-1, and EC50 values were determined after a total drug incubation time of 72 h. All experiments were carried out in duplicate and on at least three independent occasions unless stated otherwise. Although the assay measures metabolic conversion of the resazurin reagent to resorufin, and this can also be hindered by trypanostatic compounds, we believe that in this case it is a true surrogate for cell death and hence label the y-axis as percentage parasite survival in Fig 1.
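EC50s from dose-response data such as the Alamar Blue readout can be estimated, in the simplest case, by log-linear interpolation at the 50% survival crossing. A minimal sketch of that idea (not the curve-fitting software the authors used, which would typically fit a full sigmoidal model):

```python
import math

def ec50_interp(concs, survival):
    """Estimate EC50 by log-linear interpolation at the 50% survival crossing.

    concs: drug concentrations in ascending order (same units as the result).
    survival: percentage survival at each concentration (decreasing with dose).
    """
    for (c1, s1), (c2, s2) in zip(zip(concs, survival),
                                  zip(concs[1:], survival[1:])):
        if s1 >= 50 >= s2:
            frac = (s1 - 50) / (s1 - s2)  # fractional position of the crossing
            return 10 ** (math.log10(c1)
                          + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("survival never crosses 50% in the tested range")

# Synthetic data from a one-site inhibition model with a true EC50 of 0.3 nM
concs = [0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
survival = [100 / (1 + c / 0.3) for c in concs]
est = ec50_interp(concs, survival)
```

On the synthetic curve the interpolated estimate recovers the true 0.3 nM value; real assay data would normally be fitted with a four-parameter logistic instead.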
In vivo virulence of benzoxaborole-resistant trypanosomes
200 μl of 4.5×10^7 T. congolense AN11736-resistant cells and 2.8×10^7 wild type cells in fresh TcBSF-3 were injected intravenously into immunocompromised NIH female mice (5 per group). Once high parasitaemia had developed (day 16), AN11736 (prepared as a suspension in 10% DMSO) was administered i.p. at a dose of 5 mg/kg. Parasitaemia was monitored daily by tail blood examination and mice were humanely culled when parasitaemia reached 10^8 cells ml-1.
Whole genome analysis
DNA from T. brucei and T. congolense wild type and resistant clones was extracted using the NucleoSpin Tissue Kit (Macherey-Nagel). Sequencing of paired 75 bp reads was performed using the NextSeq 500 platform (Illumina). Libraries were prepared with 500 ng input gDNA using the QIAseq FX DNA library kit (Qiagen) and fragments of 300 bp, including adaptors, were selected with Agencourt AMPure XP (BeckmanCoulter), according to the manufacturer's instructions. Reads were trimmed for quality and adaptor contamination using Trim Galore! v0.6.2 (Babraham Bioinformatics) and aligned to either the T. b. brucei TREU 927 reference genome, release 43 (available from TriTrypDB, https://tritrypdb.org/tritrypdb/), or the T. congolense TcIL3000 2019 reference genome assembled from Pacific Biosciences sequencing data by the N. Hall group (also available from TriTrypDB), using Bowtie2 v2.3.5 [37]. Alignment rates for all samples were 85% for T. brucei and 98% for T. congolense. Reads were sorted and duplicates marked using SAMtools v1.9 [38] and Picard Tools v2.20.2 (http://broadinstitute.github.io/picard/). To determine depth of coverage, the number of fragments mapping to 100 bp windows along the genome was quantified using featureCounts v1.6.3 [39], with fragments mapping to multiple windows counted as 1/n (where n is the total number of windows to which a given fragment maps). For comparison of depth of coverage, the number of fragments mapping to each window was normalised for sequencing depth by dividing it by the ratio of the number of reads aligned to the parent chromosome (chromosome 10) for that sample to the mean aligned reads for that chromosome across all samples.
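The window-based coverage scheme described above (each fragment contributing 1/n to each of the n windows it maps to, then normalising by relative sequencing depth) can be sketched as follows. This is a simplified illustration of the counting logic, not the featureCounts implementation itself:

```python
from collections import defaultdict

def window_counts(fragment_alignments, window=100):
    """Depth of coverage in fixed-size windows.

    fragment_alignments: one list of alignment start positions per fragment
    (a multi-mapping fragment has several entries). Each fragment contributes
    1/n to each of the n distinct windows it maps to.
    """
    counts = defaultdict(float)
    for positions in fragment_alignments:
        windows = {pos // window for pos in positions}
        for w in windows:
            counts[w] += 1.0 / len(windows)
    return dict(counts)

def normalise(counts, sample_reads, mean_reads):
    """Normalise window counts by the ratio of this sample's aligned reads
    to the mean across samples, as in the depth comparison described above."""
    factor = sample_reads / mean_reads
    return {w: c / factor for w, c in counts.items()}

# One uniquely mapped fragment and one fragment mapping to two loci
counts = window_counts([[10], [150, 350]])
```

With a 100 bp window, the unique fragment adds 1.0 to window 0, while the multi-mapped fragment adds 0.5 each to windows 1 and 3.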
Genetic manipulation of T. congolense and T. brucei
For the re-expression of CBPs in T. brucei, the amplified open reading frame sequences were cloned into the pRM481 vector [40] using XbaI and BamHI (for the primer list see S4 Table). Due to the high sequence homology between the TbCBP genes, the Tb427.10.1040 ORF was synthesized (BaseClear) and used as a template for the PCR. The S179-encoding codon was modified in the above-described pRM481-derived vector containing wild type TbCBP1B using the Q5 Site-Directed Mutagenesis (SDM) Kit (NEB). The primers designed for this purpose were TGTTGGGGAAgcCTACGGTGGC and ACAAAGAAGTCGTTTTCAC.
A plasmid was generated that targets the tubulin locus of T. congolense and transcribes blasticidin S deaminase (BSD) and the C-terminally 6×HA-tagged trypanosomal CBP described herein. T. congolense 5' and 3' tubulin and actin intergenic sequences were amplified by PCR with Q5 polymerase (NEB) from genomic DNA. BSD was amplified from plasmid pGL2271 (for the primer list see S4 Table). Vector pRM481 was digested with AscI and the plasmid backbone was purified and used for Gibson assembly (NEB) with the PCR products. In the resulting vector the CBP genes were flanked upstream by the actin intergenic sequence and BSD. Upstream and downstream of the BSD gene were the 5' and 3' tubulin intergenic sequences that allowed integration into the tubulin locus upon AscI digest. This vector was further modified to accept the CBP genes: 3' of the BamHI site a HpaI site was introduced, and the ClaI site, separating the gene (GFP) and the actin intergenic region, was exchanged with a SalI site by SDM (for the primer list see S4 Table).
Plasmid pRM481 encoding TcoCBP1A was modified by SDM. A HpaI site was introduced 3' of the 6×HA tag and the XmaI site. A PCR was then performed to amplify the open reading frame, introducing a SalI site 5' of the TcoCBP1A ATG, with a primer that bound in the T. brucei 3' tubulin intergenic region (AAACCTACACATGGTGCGACG). TcoCBP1A was inserted into the pRM481 derivative with SalI and HpaI. Further gene exchanges were done by amplifying the ORF and inserting it into the BamHI and SalI sites.
Putative catalytic serine of TbCBP1B (S179) was identified by aligning its ORF sequence with a series of S10 serine peptidase ORF sequences downloaded from MEROPS Peptidase Database [42] including Carboxypeptidase Y from yeast, whose catalytic triad is well characterized [29,30]. Alignment was conducted in CLC Genomics Workbench.
PLOS PATHOGENS
Peptidase-activated trypanocidal benzoxaboroles

RNAi screen

Determinants of AN11736 resistance were identified using an RNAi library screen as previously described [26]. Briefly, cultures from the screen were split and supplemented with fresh AN11736 as required, and DNA was extracted from drug-resistant cells. RNAi target fragments were amplified by PCR using the LIB2f and LIB2r primers. The products were then subjected to high-throughput RIT-seq. Sequencing was carried out on an Illumina HiSeq platform at BGI (Beijing Genomics Institute). Reads were mapped to the T. brucei 927 reference genome (v9.0, tritrypdb.org) with Bowtie2 using the parameter: very-sensitive-local-phred33. The generated alignment files were manipulated with SAMtools and a custom script was used to identify reads with barcodes (GCCTCGCGA) [43]. Total and barcoded reads were then quantified using the Artemis genome browser [44].
Targeted RNAi of CBP locus
A single RNAi construct targeting all three TbCBPs in a common region of their DNA sequence was produced using plasmid pGL2084 as a backbone, and the resulting vector was transfected into T. brucei 2T1 BSF as previously described [45]. Primers to amplify the RNAi target sequence included AttB gateway flanks: Fw: GGGGACAAGTTTGTACAAAAAAGCAGGCTCGTTAATCAATGGAGCGGAT, Rev: GGGGACCACTTTGTACAAGAAAGCTGGGTGCTTTCCCCAACAACAAAGA. Genetically modified parasites were selected in HMI-11 supplemented with 0.5 μg ml^-1 phleomycin (InvivoGen) and 2.5 μg ml^-1 hygromycin B (Calbiochem). RNAi induction was obtained with tetracycline (Sigma-Aldrich) at 1 μg ml^-1, 24 h before experiments.
Metabolomics analysis
T. congolense and T. brucei metabolites were extracted for untargeted metabolomics analysis following treatment with test compounds at 10×EC 50 or with the DMSO vehicle control (below 1% v/v) for 6 hours. For each sample 1×10 8 cells were collected and their metabolism was quenched by rapidly cooling to 4˚C using a dry ice/ethanol bath. The cells were kept at 4˚C from hereon. After a wash in ice cold PBS, cells were resuspended in 200 μl of extraction solvent (Chloroform:Methanol:Water 1:3:1) and shaken at 4˚C for 1 h. Extracts were centrifuged at 17,000×g, 10 min, 4˚C and the supernatants collected and stored under argon at -80˚C until analysis by LC-MS. Four replicates of each sample were prepared. Samples were analysed on an Orbitrap Fusion mass spectrometer (Thermo Fisher Scientific) in both positive and negative modes (switching mode). Hydrophilic interaction liquid chromatography (HILIC) was carried out on a Dionex UltiMate 3000 RSLC system (Thermo Fisher Scientific) using a ZIC-pHILIC column (150 mm Å~4.6 mm, 5 μm column, Merck Sequant). HPLC mobile phase A was 20 mM ammonium carbonate in water and mobile phase B was 100% acetonitrile. The column was maintained at 30˚C and samples were eluted with a linear gradient from 80% B to 20% B over 24 minutes, followed by 8 minutes wash with 5% B and 8 minutes re-equilibration with 80% B, at the flow rate of 300 μl/minute. Orbitrap data were acquired as previously described [46]. Untargeted peak-picking and peak matching from raw LC-MS data
PLOS PATHOGENS
Peptidase-activated trypanocidal benzoxaboroles were obtained using XCMS and mzMatch respectively. Metabolite identification and relative quantitation was performed using IDEOM interface [46] and PIMP [47], by matching accurate masses and retention times of authentic standards or, when standards were not available, by using predicted retention times. p-values were adjusted for multiple testing using the Benjamini-Hochberg method. Identifications were supported by fragmentation pattern match to MzCloud database (https://www.mzcloud.org/home.aspx) and isotope distribution. The Xcalibur software package from Thermo Fisher Scientific was used for targeted peak picking and fragmentation analysis.
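The Benjamini-Hochberg adjustment mentioned above is a simple rank-based procedure. A minimal, self-contained sketch (the p-values below are made-up illustrations, not data from this study; analysis software normally does this internally):

```python
# Minimal Benjamini-Hochberg FDR adjustment: sort p-values, scale the
# k-th smallest by m/k, then enforce monotonicity from the largest down.

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values in the original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):          # walk from largest rank down
        i = order[rank - 1]
        val = min(prev, pvals[i] * m / rank)  # cap at the next-larger adjusted value
        adjusted[i] = val
        prev = val
    return adjusted

adj = benjamini_hochberg([0.01, 0.04, 0.03, 0.20])
```

A metabolite is then called significantly changed when its adjusted p-value falls below the chosen FDR threshold (e.g. 0.05).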
Intracellular quantification of AN11736 and AN14667
Metabolism studies were performed at 1 μM AN11736 with TbWT, TbOX^R_A, TcoWT and TcoOX^R_C BSF trypanosomes. At the 0 h, 1 h and 6 h timepoints, 5×10^8 parasites for T. brucei and 7×10^8 parasites for T. congolense were collected and cell pellets resuspended in 100 μl and 50 μl 1×PBS, respectively, precipitated by addition of a 2-fold volume of acetonitrile and centrifuged at 1,700×g for 10 min at room temperature. The supernatant was diluted with water to maintain a final solvent concentration of 50% and stored at -80°C prior to UPLC-MS/MS analysis, following a similar protocol to the one described by Wyllie and colleagues [48].

The sequences were blasted against the entire collection of S10 carboxypeptidases stored at the MEROPS Peptidase Database [42]. Family S10 has the residues of the catalytic triad in the order Ser, Asp and His [29,30], and carboxypeptidase Y (MER0002010) from Saccharomyces cerevisiae is the most representative gene of the family. Asterisks indicate the polar catalytic serine (S179) of the triad. This Ser was targeted in Tb927.10.1040 for site-directed mutagenesis, substituting with a hydrophobic alanine (S179A). The alignment was made with CLC Genomics Workbench. (PDF)

S5 Fig. Isotopic distribution for fragment m/z 292.1347 (metabolite AN14667). The fragment had an isotopic distribution that matched the simulated isotopic distribution of C14H19O5NB (example obtained for a TbWT replicate treated for 6 h with AN11736). (PDF)

Table. Primer sequences for integration into pRM481, generation of the T. congolense tubulin locus integration plasmid and for insertion into and modification of the T. congolense tubulin locus integration plasmid. (PDF)
Radio Frequency System, Power Converters and Cryomodule installation and tests as a Polish in-kind contribution to the European Spallation Source (ESS)
The European Spallation Source (ESS) project is currently entering the final stage of installation. Since 2017, a group of engineers and technicians from The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences (IFJ PAN) has been involved in the project. The contribution to the project can be divided into three main tasks: the Radio Frequency Distribution System (RFDS), RF (Radio Frequency) Power Stations and Cryomodules. The RFDS in the ESS project is one of the largest installations of this type, consisting of 155 RF high power systems. Engineers and technicians from IFJ PAN were responsible for the preparation, installation and RF measurements of the above-mentioned system. The team is also involved in preparing and conducting low and high power tests of the RF stations. The IFJ PAN team is furthermore responsible for the preparation as well as the vacuum and cryogenic tests of 9 Medium and 21 High Beta Cryomodules before they are installed in the tunnel. Advanced quality control and quality assurance were mandatory for this work because the costs of failures, as well as potential delays, would have a huge impact on the project realisation. Therefore, dedicated methods and approaches have been adapted to this work using the experience gained by the IFJ PAN team on previous projects such as LHC, XFEL and W7X.
Radio Frequency Power Distribution System
The RFDS system for the ESS accelerator is currently one of the largest installations of this type. The total length of the waveguide and coax lines is approximately 3.5 km.

In addition to the assembly of system components, the IFJ PAN team was also responsible for carrying out RF testing and any necessary fine-tuning to verify that the lines met the acceptance criteria. The preparation phase and the installation were described previously in other articles [2]. As of today, the RFDS system is completed, and the team is now focused on the preparation of power converters and their commissioning, as described below.
Radio Frequency Power Station
The main component of the RF system is the power amplifier, which takes the signal from the Low Level RF (LLRF) system and pulse power from the modulator and converts the power into RF waves at 352.21 or 704.42 MHz. Due to the power levels required, most of the power amplifiers at ESS will be klystrons with a peak power in the range of 1 to 3 MW. In total the linac will require 126 klystrons. The Spoke section of the linac requires 400 kW of peak power per resonator at 352 MHz [1]. In this case, it was decided to use technology based on tetrodes.
Tetrodes
The Spoke section will have 26 power stations. The main tasks of IFJ PAN are to provide technical support to the ESS experts in the installation of tetrodes and to connect the power stations (figure 2) to the waveguide systems in the gallery.
In addition, the team carries out activities such as visual inspection of components, involvement in pressure testing of cooling water systems, electrical and mechanical incoming inspection of the amplifiers, participation in the assembly process of the RF power station (installation of all its key components) and, finally, supporting the ESS experts with the high power test of each Radio Frequency Power Station (RFPS) in the test stand.
Klystrons
When the ESS first becomes operational, it will contain 78 klystrons: 6 for the NCL, 36 for Medium Beta (MB) and 36 for High Beta (HB) [3]. IFJ PAN participates in the preparation and in low and high power testing of the klystrons. Three types of klystrons are in use for MB and HB: E37504 (CANON), VKP-8292A (CPI) and TH2180 (THALES) (figure 3).
Prior to a klystron test, many activities must be carried out, such as visual inspection after delivery, electrical testing of integral parts (e.g. coils, sensors), as well as the installation of additional components including arc detectors, filament units, junction box, output waveguide etc. After positioning the klystron in its final position, the gun tank is filled with oil, the output waveguide is connected to the waveguide system and all electrical and water-cooling connections are made. Only then is the klystron ready for further operation.
Low power tests include activities such as checking electrical connections and testing all the electronic devices necessary for system operation. The procedure consists of starting all individual systems, circuits and sensors (e.g. temperature sensors, arc detectors, flow meters, pressure sensors), checking the parameters (vacuum level, current, voltage, pressure, flow, temperature etc.) and checking the reaction of the interlock system.
High power testing can be split into two main parts: conditioning and the actual test. During DC and RF conditioning, various parameters including high voltage (HV) and RF pulse length, repetition rate and drive power are changed to reach the nominal working levels. Throughout the test, all key parameters (e.g. current, vacuum level, power) are continuously monitored. During conditioning, the level of ionizing and non-ionizing radiation is monitored in accordance with safety regulations.
When both conditioning procedures are complete, the formal Site Acceptance Test (SAT) can start. All the results obtained during this part of the test are compared with those of the Factory Acceptance Test (FAT). During the SAT, tests such as the transfer characteristic, the sensitivity to variation in HV, bandwidth measurements and filament roll-off are performed.
In all the above-mentioned elements, starting from assembly of the waveguide line, connecting cables, the incoming inspection and installation of electronic devices in racks, the preparation of klystrons and finally the formal SAT, the IFJ PAN team supports experts from ESS [4].
IFJ PAN contribution for cryomodules testing
The IFJ PAN team is actively participating in the testing and preparation for installation of the cryomodules belonging to the Superconducting Cold Linac (SCL) section of the accelerator. The main responsibility of IFJ PAN is to perform the on-site SAT for the 30 elliptical cryomodules. In addition, a reception check of the 13 Spoke cryomodules is done after their transportation from the FREIA laboratory, where their SAT took place.
IFJ PAN members are involved in receiving and checking the Spoke cryomodules by performing incoming inspections at the ESS site. After inspection, the cryomodules are prepared, in cooperation with ESS personnel, for installation in the tunnel [5].
The main task of the IFJ PAN specialists is the SAT of the elliptical cryomodules, as mentioned before. IFJ PAN is responsible for preparing for the RF test at 2 K of 9 MB (figure 4) and 21 HB elliptical cryomodules [6]. At an early stage of the preparation of the cryomodules for tests, it is necessary to check whether there were any irregularities following transportation. For that reason, the first step after arrival of the cryomodule at ESS is to perform a mechanical and electrical incoming inspection in the preparation area. The instrumentation (e.g. compressed air and helium guard system, vacuum equipment, doorknobs, cabling etc.) can then be installed on the cryomodule and the cryomodule is leak tested. When the inspection and vacuum tests have passed, the cryomodule can be installed inside the test-stand bunker.
After placing the cryomodule into the test bunker, numerous mechanical and electrical connections must be made, such as: connection of the jumper/cryogenic lines with installation of the thermal shield and multi-layer insulation (MLI), connection of the waveguides and auxiliary lines, connection of the cryogenic and RF cables etc. Then, vacuum tests of the beamline and cryogenic lines are carried out. For that purpose, first the particle-free pump station is connected to the cavities string inside the clean room and residual gas analysis (RGA) and a leak check are performed for the beam line. Before the cooldown starts, the cryogenic lines are pressurized and a leak test of the insulation vacuum/beam volume is done (figure 5). Also prior to cooling down, the pump and purge procedure of the process pipes is performed, and cryogenic valves initialization, pressure sensors calibration and many other checks are carried out.

In each cryomodule two cryogenic circuits can be distinguished: 4.5 K (cavities) and 40 K (thermal shield). At nominal gradient the Superconducting Radio Frequency (SRF) cavities are operated at 2 K and 31 mbar He bath pressure, while the thermal shield is maintained at 40 K and a pressure of 14 bar. The power couplers are at 4.5 K. The cool down process usually starts with parallel helium flow through the thermal shield and 4.5 K circuits. Helium gas is provided by a dedicated cryoplant. Each cryomodule is equipped with two main cryogenic valves. In the beginning, the SRF cavity tanks are filled in parallel with liquid helium from the bottom; later there is a transition to a Joule-Thomson valve and the cavity tanks are filled from the top. Once a stable helium level has been achieved, the pumping down process to 31 mbar is performed (figure 6). For better 2 K operation, a heat exchanger is installed in the cryomodule which cools down the supplied helium gas before it is passed through the Joule-Thomson valve.
Once the temperatures and cryogenic conditions have stabilised at 2 K, all required RF tests are performed under the responsibility of the ESS SRF Section. When the RF tests at 2 K, as well as the static and dynamic heat load measurements for the cryomodule, are complete, the cryomodule is warmed up. This takes a couple of days [7]. When the cryomodule is warm, additional vacuum checks are performed before the cryomodule is disconnected from the test stand. Outside the bunker an outgoing inspection is performed and the cryomodule is secured and ready for tunnel installation [8,9].
Conclusion
IFJ PAN is a team that has been supporting ESS experts for many years in various fields, from installation to testing. The IFJ PAN team works according to a quality system created by the team itself, based on experience gained in previous projects. One of the key aspects of this quality system is a dedicated database called the Quality Database (QDB). The QDB is regularly developed based on experience and needs. The stored data is used for numerous analyses and for the preparation of reports which are shared with ESS experts.
It should be acknowledged that the participation of IFJ PAN employees in this project, as the Polish in-kind contribution to the ESS, is very valuable both for the implementation of the project and for the exchange of experience and the acquisition of new skills and knowledge.
Figure 1. Part of the waveguide system in the Normal Conducting Linac (NCL) area.

Figure 2. RFPS unit.

Figure 3. Klystrons used at ESS for MB and HB.

Figure 5. Leak test of the 2 K cryogenic volume (leak signal vs pressure).

Figure 6. Plot of cavity tank helium bath pressure and liquid helium level during pumping down to 31 mbar.
P-trac Procedure: The Dispersion and Neutralization of Contrasts in Lexicon
Cognitive acoustic cues have an important role in shaping the phonological structure of language as a means to optimal communication. In this paper we introduce the P-trac procedure in order to track the dispersion of contrasts in different contexts in the lexicon. The results of applying the P-trac procedure to the case of the dispersion of contrasts in preconsonantal contexts and in consonantal positions of CVCC sequences in Persian provide evidence in favor of the phonetic basis of dispersion argued for by the Licensing by Cue hypothesis and the Dispersion Theory of Contrast. The P-trac procedure proves to be very effective in revealing the dispersion of contrasts in the lexicon, especially when comparing the dispersion of contrasts in different contexts.
Introduction
Traditionally, it has been argued that phonological constraints account for phonological patterns. Recent studies on the role of articulatory/perceptual phonetic factors in phonological phenomena have provided consistent explanations for phonetically-based phonology (e.g. Flemming, 1995; Jun, 1995; Hamilton, 1996; Silverman, 1997; Steriade, 1997).
The Licensing by Cue hypothesis, proposed by Steriade (Steriade, 1997), accounts for the role of phonetic factors in phonology. According to this hypothesis, the maintenance of contrasts is closely related to the amount of perceptual salience a context can provide to a segment. The more the feature F of segment S is perceptually salient in context C, the more likely S will show contrasts based on possible values of F in that context. While it provides a reasonable explanation for contrast neutralization, the Licensing by Cue hypothesis has recently been challenged by some controversies. Kochetov argues that the distribution of the Russian plain-palatalized contrast in coronal stops provides evidence against a complete phonetic-perceptual explanation as claimed by the Licensing by Cue hypothesis (Kochetov, 2006).
The Dispersion Theory of Contrast presents an alternative explanation of the dispersion/neutralization of contrasts in segment sequences (Flemming, 1995, 2004, 2006). This hypothesis suggests three functional goals for phonological contrast dispersion:

• Maximization of the number of contrasts
• Maximization of the perceptual distinctiveness of contrasts
• Minimization of the articulatory effort for the production of contrasts

A higher number of contrasts helps to distinguish words for efficient communication. The more perceptually distinct segments are, the more easily they can be perceived by listeners. Finally, minimization of articulatory effort accounts for the efficiency of language production. Functional goals accounting for phonological constraints have been reported in several works (Zipf, 1949; Martinet, 1952, 1955; Lindblom, 1986, 1990), but there are arguments against the functional basis of phonology (Ohala, 1993; Labov, 1994; Trask, 1996).
Licensing by Cue hypothesis and the Dispersion Theory of Contrast share the idea that perceptual factors play a prominent role in shaping phonological processes and patterns.
Consonant perceptual cues are good candidates for probing the role of phonetics in phonology, and especially the role of perceptual cues in phonotactics. Consonants vary in the amount of contrast they reveal in different contexts. They show contrasts in prevocalic context, but they are more limited in preconsonantal context. Sonority-based explanations, although they make some predictions, fail to provide a consistent prediction in all cases. Phonetically-based phonology, on the other hand, claims to provide such consistent predictions based on the assumption that phonotactics should ensure the perceptibility of cues to segmental contrasts (Kawasaki, 1982; Ohala, 1992; Flemming, 1995; Kirchner, 1997; Steriade, 1999; Wright, 2004; Flemming, 2007). The studies show evidence against a sonority-based account of phonotactics and make strong arguments for the role of perceptual cues to segmental contrast in the phonotactics of the world's languages.
Based largely on Wright (2004), a number of cues to place, manner and voicing are introduced here. Perceptual cues to place contrasts are related to acoustic features including second formant transitions, stop release bursts, nasal pole-zero patterns and fricative noise. These features are most recoverable in the short period of transition to the next segment, especially in the case of perceptual cues to stop place contrasts. Fricatives have cues internal to the fricative signal itself. Some laterals have cues spreading over an entire syllable (Wright, 2004). Due to the greater susceptibility of stop place contrasts to cue loss, especially in noisy channels, languages put more restrictions on the position of stops in segment sequences.
The manner contrasts are recoverable by relative degree of attenuation in the signal as a perceptual cue. An abrupt attenuation in signal is a sign of a stop. A complete attenuation along with fricative noise is a manner cue to fricatives. Nasals have a small decrease in amplitude of the signal compared to fricatives. They also use nasal pole and zero as a cue to nasal manner. Manner cues are more resistant to noise masking and are more perceptually salient compared to voice and place cues (Wright, 2004).
Cues to voicing are periodicity, VOT, the presence and the amplitude of aspiration noise, and duration cues which can be irrecoverable for stops in syllable final and preconsonantal contexts (Wright, 2004).
The study of perceptual cues has revealed that cues to voicing contrasts are weaker than cues to place contrasts and cues to place contrasts are weaker than cues to manner contrasts especially in preconsonantal contexts (Wright, 2004). According to Licensing by Cue hypothesis if perceptual cues of a contrast are weak in a context it is less likely for a segment to show contrast of the feature in that context and the contrast will be subject to neutralization.
The aim of this study was to investigate the Licensing by Cue hypothesis (Steriade, 1997) in the case of contrast neutralization of consonants in preconsonantal context. Segments are predicted to show more contrasts along feature dimensions that are more perceptually salient in preconsonantal context. Voicing contrasts are weaker than place contrasts and place contrasts are weaker than manner contrasts; thus, if the hypothesis is true, consonants in preconsonantal context should show the most contrasts on the manner dimension, an intermediate amount of contrast on the place dimension and the least amount of contrast on the voicing dimension.
Another goal of this paper is to introduce the P-trac procedure, a procedure to find the distribution of contrasts in a context within the lexicon. The assumption is that if a cue is weak in a context, the contrasts related to that cue will be subject to neutralization. Diachronically, the amount of contrast in perceptually poor contexts will be reduced, while the amount of contrast associated with perceptually salient cues will be increased in the proper contexts within the lexicon. A procedure that can track the distribution of contrasts in a context can thus provide insights into the degree of perceptibility of a cue in that context.
P-trac Procedure
In this section, we will introduce the P-trac procedure. The goal of P-trac is to track the dispersion of contrasting features in lexicon. The assumption behind the P-trac procedure is that if phonology is perceptually grounded, the lexicon should be diachronically optimized so that enough cues for each contrasting feature exist in segmental contexts. As an example, cues to voicing contrasts are weaker in preconsonantal contexts than in prevocalic contexts so the amount of voicing contrast should be less frequent in preconsonantal contexts than in prevocalic contexts within a lexicon according to diachronic phonological changes. To find the dispersion of contrasting features, we need to define the notion of featural minimal pair and minimal sequence pair. A featural minimal pair is a pair of segments which are the same in all features except one contrasting feature. Segments /b/ and /p/ constitute a featural minimal pair because they share the same features except voicing feature while /b/ and /t/ don't constitute a featural minimal pair because they contrast both in voicing feature and place feature. A minimal sequence pair is defined as two sequences of segments which differ only in segments of one position while the two segments should be themselves featural minimal pairs. For example /band/ and /dand/ are a minimal sequence pair because they only differ in segments /d/ and /b/ in the starting position of the sequence and segments /b/ and /d/ are themselves a featural minimal pair which differ only in place feature. On the other hand /band/ and /tand/ don't form a minimal sequence pair because /b/ and /t/ don't constitute a featural minimal pair. The notion of featural minimal pair is different from the notion of minimal pair usually used in phonology. Two segment sequences that differ in only one phoneme and have distinct meanings are called a minimal pair in phonology literature. 
Minimal pairs are used to construct phoneme inventory of a language while featural minimal pairs are used to distinguish a contrasting feature.
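The featural-minimal-pair test defined above can be sketched directly: two segments form a featural minimal pair iff they differ in exactly one feature. The feature values below are a toy illustration of the idea, not a full Persian feature inventory:

```python
# Toy feature specifications for four stops; a real analysis would cover
# the full consonant inventory with manner, place and voice values.
FEATURES = {
    "b": {"manner": "stop", "place": "labial",  "voice": "+"},
    "p": {"manner": "stop", "place": "labial",  "voice": "-"},
    "d": {"manner": "stop", "place": "coronal", "voice": "+"},
    "t": {"manner": "stop", "place": "coronal", "voice": "-"},
}

def contrasting_features(s1, s2):
    """Return the list of features on which the two segments differ."""
    f1, f2 = FEATURES[s1], FEATURES[s2]
    return [f for f in f1 if f1[f] != f2[f]]

def is_featural_minimal_pair(s1, s2):
    """A featural minimal pair differs in exactly one contrastive feature."""
    return len(contrasting_features(s1, s2)) == 1
```

On this toy table, /b/-/p/ is a featural minimal pair (voice only), while /b/-/t/ is not, since both place and voice differ.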
Table 1: All possible featural minimal pairs in Persian, their contrastive feature and their corresponding acoustic cues (cues to voicing: periodicity, VOT, aspiration noise, duration).

The P-trac procedure starts with extracting minimal sequence pairs from the lexicon. Minimal sequence pairs should be extracted according to the subject and goals of the study. As an example, in order to study the dispersion of manner, place and voice features in different contexts in CVCC syllables of Persian, it is required to find and extract all minimal sequence pairs of CVCC-type syllables from a Persian lexicon. In another case, if the goal of the study is to find the distribution of consonant features in preconsonantal context of CVCC syllables of Persian, all minimal sequence pairs in the form of from CV syllables should be extracted. The basic idea of the P-trac procedure is given in (1).
(1) a) Extract all sequences according to the goals of the study.
    b) Find all minimal sequence pairs.
    c) For each minimal pair, designate the context and the contrasting feature.
    d) Count the frequency of occurrence for each pair (context, contrasting feature).
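The steps in (1) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy feature table and the convention of writing a context as "_X" (before segment X, or "_#" word-finally) are assumptions for the sketch.

```python
from collections import Counter
from itertools import combinations

# Toy feature values for the segments that will differ below.
FEATURES = {
    "b": {"manner": "stop", "place": "labial",  "voice": "+"},
    "p": {"manner": "stop", "place": "labial",  "voice": "-"},
    "d": {"manner": "stop", "place": "coronal", "voice": "+"},
}

def p_trac(sequences, features):
    """Steps (1a)-(1d): count (context, contrasting feature) occurrences
    over all minimal sequence pairs among the given segment tuples."""
    matrix = Counter()
    for a, b in combinations(sequences, 2):          # (1b) candidate pairs
        if len(a) != len(b):
            continue
        diffs = [i for i in range(len(a)) if a[i] != b[i]]
        if len(diffs) != 1:
            continue                                 # not a minimal sequence pair
        i = diffs[0]
        contrasts = [f for f in features[a[i]]
                     if features[a[i]][f] != features[b[i]][f]]
        if len(contrasts) != 1:
            continue                                 # segments must be a featural minimal pair
        context = "_" + (a[i + 1] if i + 1 < len(a) else "#")   # (1c)
        matrix[(context, contrasts[0])] += 1         # (1d)
    return matrix

# (1a): the extracted sequences, here the /band/-/dand/ example from the text.
matrix = p_trac([tuple("band"), tuple("dand"), tuple("pand")], FEATURES)
```

Here /band/-/dand/ contributes a place contrast in context _a and /band/-/pand/ a voice contrast, while /dand/-/pand/ is discarded because /d/ and /p/ differ in two features.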
At the end of the P-trac procedure we'll have a feature-context matrix that involves the frequency of occurrence for all contrasting features in all contexts. A general feature-context matrix is shown in (2).
(2)

The P-trac procedure can be applied to find the distribution of one or several contrasting features in one or several contexts. For example, it can be used to find the dispersion of all contrasting features before the voiced labial stop segment /b/ in b consonant clusters (a single context), or it may be used to find the distribution of voicing contrast (a single contrasting feature) in all preconsonantal contexts. Similar algorithms to the P-trac procedure have been proposed in the literature in order to find contrasting features, but they are used for goals other than tracking the perceptual optimization of a lexicon (e.g. Archangeli, 1988; Dresher, 2003).
Using P-trac procedure to find the distribution of contrasting features in preconsonantal context of CVCC segment sequences in Persian
In this experiment, we examined the distribution of consonantal contrasts in preconsonantal context. The goal of this experiment was to know whether the distribution of contrasts matches the amount of perceptibility of cues. According to Wright (2004), the perceptibility of cues decreases from cues to manner contrasts to that of place contrasts and from cues to place contrasts to that of voicing contrasts. If Licensing by Cue hypothesis (Steriade, 1997) holds, due to its diachronic effect, we should see a correlation between the amount of the perceptibility of the acoustic cues to features and the number of times the feature is used to contrast a minimal pair.
Data
Just like the CELEX lexicon (Baayen et al., 1995) for some Western languages, FLexicon is a database which contains information about the lexicon of the Persian language. This database contains 54409 words and their phonemic transcriptions. Syllable structure in Persian is very simple because the language does not allow complex onsets. The only three possible syllable structures are CV, CVC and CVCC. Therefore, the syllabification of lexemes can be done deterministically.
Applying P-trac procedure
According to the P-trac procedure, we syllabified all 54409 lexemes in the lexicon using the simple rules of Persian syllabification. In Persian, every syllable starts with an obligatory onset. Moreover, onset clusters are forbidden in Persian, so lexemes can be syllabified straightforwardly: each syllable starts with a consonant as onset followed by a vowel. Unlike the Maximal Onset Principle used to syllabify segment sequences in English, a so-called Minimal Onset Principle is used to syllabify sequences in Persian, according to which all consonants of a cluster belong to the coda except the last one, which serves as the obligatory onset of the following syllable. For example, a CVCCCV sequence is always syllabified as CVCC.CV because there must be exactly one consonant as the onset of the second syllable. Similarly, a CVCV sequence is always syllabified as CV.CV. The syllabification of Persian segment sequences is thus a simple deterministic task.
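This deterministic syllabification can be sketched as follows; the function operates on C/V skeletons rather than real phoneme strings, so treating vowel identification as already done is a simplifying assumption.

```python
def syllabify(seq):
    """Syllabify a Persian C/V skeleton by the Minimal Onset Principle:
    every syllable takes exactly one consonant as its obligatory onset,
    and all other consonants of a cluster stay in the preceding coda.
    `seq` is a string over the symbols 'C' and 'V', e.g. "CVCCCV"."""
    syllables = []
    i, n = 0, len(seq)
    while i < n:
        syl = seq[i:i + 2]       # obligatory onset C + nucleus V
        i += 2
        j = i                    # scan the consonant run up to the next V
        while j < n and seq[j] == "C":
            j += 1
        # if a vowel follows, the last consonant of the run becomes the
        # next syllable's onset; word-finally the whole run is a coda
        coda_end = j - 1 if j < n else j
        syl += seq[i:coda_end]
        syllables.append(syl)
        i = coda_end
    return ".".join(syllables)
```

For example, `syllabify("CVCCCV")` returns `"CVCC.CV"` and `syllabify("CVCV")` returns `"CV.CV"`, matching the examples in the text.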
After syllabification of lexemes, 268 distinct clusters were extracted from CVCC syllables and their frequencies were counted within the lexicon (type frequency).
For each C2C3 consonant cluster, we found all featural minimal pairs in C2 position. We defined a featural minimal pair as two phonemes distinguished by just one contrastive feature. The features we used were manner, place and voice.
The distribution of contrastive features was investigated by counting the number of times each feature was used to form a minimal consonant pair in C2 position, i.e. in the preconsonantal context (_C3) within C2C3 consonant clusters. For example, over all /_r/ consonant clusters, all minimal pairs were extracted and the contrasting feature of each pair was counted. For each feature (manner, place, voice), this yields a frequency of occurrence in the context _C3 (here /_r/). The procedure is described in (3).
Results
Diagram-1 demonstrates the result of the P-trac procedure applied to C2C3 clusters in CVCC syllables extracted from FLexicon. The consonants in C3 position provide the preconsonantal context for the C2 consonants. The diagram visualizes the distribution of the contrasting features which consonants in C2 position use to distinguish meaning in the preconsonantal context _C3. There are a total of 791 minimal sequence pairs, of which 433 pairs contrast in manner, 335 pairs contrast in place and 23 pairs contrast in the voice feature. It should be noted that, according to the P-trac procedure given in (3), the frequencies are computed using the minimum type frequency of each sequence in the lexicon. For example, if the type frequency of /bl/ is 200 (meaning there are 200 /bl/ sequences among all distinct CVCC sequences in the lexicon) and that of /pl/ is 300, we increment frequencies[_l, voice] by 200, the minimum type frequency of the two sequences.
Diagram-1: Frequency distribution of contrasting features in C2 position of C2C3 sequences
Although the results in Diagram-1 are summed over all the consonants in C3 position, the P-trac procedure provides the dispersion of the contrasts for each individual consonant. We used the output of the P-trac procedure to find in which contexts consonants in C2 position use voicing as a contrasting feature, because perceptual cues to voicing contrasts are weak in preconsonantal contexts. Diagram-2 demonstrates all the contexts in which the voicing feature has been used as the contrasting feature to distinguish meaning in FLexicon, the Persian lexicon. As shown in the diagram, the only contexts in which the voicing contrast is used are those before nasals and liquids. In other preconsonantal contexts, the voicing contrast is not used at all.
Diagram-2: Preconsonantal contexts in CVCC sequences where voicing is used as a contrasting feature
Table-2 demonstrates the minimal sequence pairs that use the voicing contrast to distinguish meaning in FLexicon. The surprising fact about these minimal sequence pairs is that all of them are either loan words, or parts of loan words, borrowed from the Arabic language.

In this second experiment, the P-trac procedure was used to find the dispersion of the manner, place and voicing features in the C1, C2 and C3 positions of C1VC2C3 syllables in FLexicon. The goal of this experiment was to find how the distribution of contrasting features is related to the perceptibility of the cues, as argued by the Licensing by Cue hypothesis (Steriade, 1997) and the Dispersion Theory of Contrast (Flemming, 1995, 2004, 2006). The phonemic transcriptions of lexemes were used again as the input to the P-trac procedure.
Applying P-trac procedure
All distinct CVCC syllables were extracted together with their type frequency, i.e. the frequency of the syllable in the lexicon. According to the P-trac procedure, all minimal sequence pairs were found, and for each context (consonants in the C1, C2 and C3 positions of C1VC2C3) the contrasting features were counted. Just as in the previous experiment, for each minimal sequence pair the entry frequencies[context, feature] was incremented by the minimum type frequency of the members of the pair. For example, for the /band/ and /pand/ minimal sequence pair, if the type frequency of /band/ is 200 and that of /pand/ is 300, frequencies[_and, voice] was incremented by 200.
Results
In Diagram-3 the distribution of the manner, place and voicing contrasts is shown. As can be seen, the voicing contrast is minimal in the preconsonantal context (C2) and maximal in onset position (the prevocalic context, C1).
Diagram-3: Frequency distribution of contrasting features in C1, C2 and C3 positions in C1VC2C3 syllables of Persian lexicon.
As can be seen, the frequency of contrasts is maximal in the prevocalic context C1 (3404 contrasts), medial in C3 position (1015 contrasts) and minimal in the preconsonantal C2 context (791 contrasts), whatever the contrasting feature is. The frequency of contrasting features gradually decreases from the manner feature to the place feature and from the place feature to the voicing feature.
Discussion
The results of the P-trac procedure applied to the case of contrasting features in the preconsonantal contexts of clusters in CVCC syllables extracted from FLexicon show that the dispersion of contrasting features exactly matches the predictions made by the Licensing by Cue hypothesis and the Dispersion Theory of Contrast. The frequency of contrasting features in preconsonantal context gradually decreases from manner contrasts to place contrasts and from place contrasts to voicing contrasts (Diagram-1). According to Wright (2004), the perceptibility of the acoustic cues to the manner, place and voicing features follows exactly the same pattern. The more salient the perceptual cue to a feature is in preconsonantal context, the more frequently that feature has been used as a contrasting feature to distinguish meaning. This statistical evidence, the result of the P-trac procedure, provides support for the phonetic basis of phonology in general, and for the Licensing by Cue hypothesis and the Dispersion Theory of Contrast in particular.
The voicing contrast has the least salient perceptual cues in preconsonantal contexts compared to the manner and place contrasts. The results show that of the 791 minimal sequence pairs, only 23 pairs use voicing as the contrasting feature. The output of the P-trac procedure shows that the only context in which the voicing contrast is used is before sonorant consonants: the voicing contrast is only used before liquids and nasals. This supports the hypothesis that voicing contrasts have more salient perceptual cues before liquids and nasals. In other contexts, such as before an obstruent, the voicing contrast neutralizes, so the voicing feature is not used as a contrasting feature before stops and fricatives. Another fact revealed by the output of the P-trac procedure is that all the words that make use of the voicing feature as a contrasting feature in preconsonantal contexts are loan words from the Arabic language (Table-2). Persian has borrowed many words from Arabic, and interestingly, all the voicing contrasts in preconsonantal contexts are related to those loan words.
The results of applying the P-trac procedure to C1VC2C3 syllable sequences of Persian show that the frequency of contrasts decreases from the onset position C1 to the last coda position C3, and from C3 to the preconsonantal C2 position. The distribution of contrasts in the three contexts again exactly matches the perceptibility of the cues to features in those contexts. The perceptibility of cues is maximal in onset position, medial in last coda position and minimal in preconsonantal position (Wright, 2004), which is exactly the pattern of the dispersion of contrasts in those contexts. This again provides support for a direct relation between the perceptual salience of the cues to features in a context and the amount of contrast in that context.
Conclusion
In this paper we introduced the P-trac procedure in order to track the dispersion of contrasts in different contexts in the lexicon. The results of applying the P-trac procedure to the case of the dispersion of contrasts in preconsonantal contexts and in the consonantal positions of CVCC sequences in Persian provide evidence in favor of the phonetic basis of the dispersion of contrasts argued by the Licensing by Cue hypothesis and the Dispersion Theory of Contrast. The P-trac procedure proved very effective in revealing the dispersion of contrasts in the lexicon, especially as it provides the means to compare the dispersion of contrasts in different contexts.
Kolaviron, isolated from Garcinia kola, inhibits acetylcholinesterase activities in the hippocampus and striatum of Wistar rats
Background Kolaviron, isolated from seeds of Garcinia kola, has been shown to possess wide pharmacological properties. Purpose The present study examined the effect of kolaviron on acetylcholinesterase activities in the hippocampus and striatum of adult Wistar rats. Methods In this study, histological and histochemical methods were used to investigate the effects of kolaviron on the histology of the hippocampus and striatum and on acetylcholinesterase activities in these brain regions. Results We showed that kolaviron produced no neurodegenerative changes in the hippocampus and striatum. Kolaviron did not significantly alter (p>0.05) neuronal density in these brain regions. Kolaviron significantly reduced (p<0.05) acetylcholinesterase staining intensity, suggesting a likely inhibitory effect on this enzyme. Conclusion To the best of our knowledge, this study provides the first evidence that kolaviron could act as an acetylcholinesterase inhibitor. Kolaviron may be developed as a herbal-based natural product with therapeutic potential in the management of neurodegenerative disorders associated with disturbed cholinergic neurotransmitter systems.
It has been reported to prevent hepatotoxicity mediated by several toxins. 2,7,8 It exhibits strong antioxidant activities in both in vivo and in vitro experimental models. 9 Acetylcholinesterase (AChE) plays a crucial role in cholinergic neurotransmitter systems. It is responsible for terminating nerve impulses at cholinergic and neuromuscular synapses by splitting the neurotransmitter acetylcholine (ACh) into choline and acetate. 10,11 ACh is a dynamic neurotransmitter, acting in both the central and peripheral nervous systems. ACh is known to classically excite hippocampal pyramidal neurons by acting as a powerful modulator of synaptic transmission at both GABAergic and glutamatergic synapses through a broad range of muscarinic acetylcholine receptors (mAChRs) and nicotinic acetylcholine receptors (nAChRs). 12 ACh exerts powerful modulatory effects in the striatum, which has been recognized as one of the brain areas with the highest concentration of markers of cholinergic neurotransmitter systems. ACh also acts via a variety of mAChRs and nAChRs in the striatum, where it affects the activity of striatal neurons both directly and through modulation of glutamate release from corticostriate terminals and of dopamine release from nigrostriatal terminals. 13 As a result of its powerful modulatory activities on striatal and hippocampal neurons, ACh plays an important role in movement, learning and memory. 14 In view of the foregoing, ACh and AChE activities are pivotal factors in the development of drugs for the management of neurodegenerative disorders. Since there is a global advocacy for increased use of herbal sources in the therapeutic management of various diseases, including neurodegenerative disorders, the present study has attempted to localize histochemically the activity of AChE in the hippocampus and striatum following kolaviron treatment, as well as to study the effect of kolaviron on the histology of the hippocampus and striatum in adult Wistar rats.
Animal management
Twenty adult male albino Wistar rats weighing between 150 g and 200 g were used for this study. The animals were housed in
Introduction
Kolaviron is the major component isolated from the seeds of Garcinia kola and contains biflavanones (GB1, GB2 and kolaflavanone). 1,2 Garcinia kola Heckel (family Guttiferae) 3 is a herb grown in Nigeria and has a striking astringent, bitter and resinous taste. It is popularly called Bitter Kola in Nigeria. Local Nigerian names include Orogbo in Yoruba, Ugolu in Ibo and Akan in Urhobo. It is used in folklore remedies for the treatment of several ailments such as liver disorders, hepatitis, diarrhoea, laryngitis, bronchitis and gonorrhoea. Flavonoids, oleoresins, tannins, saponins, alkaloids and cardiac glycosides are among the phytochemical substances that have been isolated from G. kola. 1 The pharmacodynamics behind G. kola action is based on kolaviron (Figure 1). [4][5][6] Kolaviron, the biflavonoid complex in G. kola, is responsible for the strong antioxidant properties of G. kola, which limit the oxidative conversion of amino acids by reactive oxygen species to other damaging fatty acid products. 4
Kolaviron extraction and Animal treatment
Kolaviron was isolated from G. kola as previously described. 1 In brief, powdered dried seeds of G. kola were extracted with n-hexane in a Soxhlet extractor. The defatted, dried marc was repacked and then extracted with methanol in a Soxhlet extractor. The extract was concentrated, diluted to twice its volume in distilled water and partitioned with chloroform. The concentrated chloroform fraction gave a yellow-brown solid known as kolaviron. Animals were randomly divided into four groups (A, B, C and D) of five rats each. Groups B, C and D were experimental groups and were given 200, 400 and 800 mg/kg body weight of kolaviron daily for 4 weeks. Kolaviron was dissolved in corn oil (Sigma, USA) and given orally using an intragastric tube. Group A served as control and was given the vehicle for the extract (corn oil) for 4 weeks. At the end of administration, animals were sacrificed by cervical dislocation and the brains excised. A mid-sagittal cut of the brain was made. Some of the brain tissues were fixed in 10% formal saline for histological studies and others in cold 10% formol calcium for 48 hours for histochemical studies. Tissues for histological studies were processed for routine paraffin wax embedding, sectioned on a rotary microtome at 6 µm thickness, and stained using the haematoxylin and eosin (H&E) method described by Drury and Wallington, 1980. 16
Histochemical demonstration of AChE
Serial sections of 10 µm thickness were obtained on a cryostat. Sections were processed for AChE demonstration using acetylthiocholine iodide as substrate (as previously described by Felipe and Lake, 1983) 17 in a solution containing cuprous and ferric sulphate (as modified by Ogundele et al, 2012). 18 Working solutions of the incubating medium were prepared in a clean room under standard laboratory conditions. 5 mg of acetylthiocholine iodide (Sigma, USA) was weighed using a sensitive weighing balance. 6.5 ml of 0.1 M acetate buffer (pH 6.0) was prepared by dissolving 0.605 g of acetic acid in 100 ml of distilled water, with the pH adjusted using sodium hydroxide. 0.1 M sodium citrate was prepared by dissolving 2.94 g of sodium citrate in 100 ml of water; 30 mM cuprous sulphate by dissolving 0.58 g of the salt in 100 ml of purified water; and 5 mM potassium ferricyanide by dissolving 0.165 g of the salt in 100 ml of purified water. To prepare the incubating medium, 5 mg of acetylthiocholine iodide was added to 6 ml of acetate buffer in a glass conical flask and the following reagents were added in this order: 0.5 ml of 0.1 M sodium citrate, 1 ml of 30 mM cuprous sulphate and 1 ml of 5 mM potassium ferricyanide. The mixture was continually stirred with a magnetic stirrer. The incubating medium was applied to sections, which were incubated in an oven at 37 °C for 20 minutes, rinsed in distilled water, counterstained in haematoxylin, cleared and mounted in DPX. Areas of AChE activity appear brown or red under a light microscope. All reagents used were of analytical grade.
Photomicrography, Histomorphometry, Image analysis and Statistical analysis
Sagittal stained sections were viewed under a Leica DM750 digital light microscope and, with the aid of an atlas of the rat brain, 19 the striatum and hippocampus were located and observed. Digital photomicrographs were taken with an attached Leica ICC50 camera. Photomicrographs of H&E-stained sections were imported into OpenOffice.org™ (OOo-dev 3.4.0) software for histomorphometric neuronal counts. Image Analysis and Processing for Java (ImageJ), a public domain software sponsored by the National Institutes of Health (USA), was used to analyze and quantify AChE staining intensity. Imported RGB images are converted to grayscale images in ImageJ. The software quantifies staining intensity by measuring the value of each pixel in the grayscale image, following thresholding of areas of staining activity, and converting the pixel value to a brightness value or gray value on a scale of 0 to 255, from less bright (more intense staining) to brighter (less intense staining). The percentage area of AChE activity was also measured. Data were expressed as mean ± SEM and analyzed using one-way ANOVA followed by the Student-Newman-Keuls (SNK) test for multiple comparisons. GraphPad Prism 5 (Version 5.03, GraphPad Inc.) was the statistical package used for data analysis. Significance was set at p<0.05.
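As a rough illustration of this quantification workflow (not the authors' actual ImageJ macro), the mean gray value and percentage stained area of a photomicrograph could be computed as follows; the luminance weights match ImageJ's default Rec. 601 grayscale conversion, while the threshold value of 200 is a hypothetical placeholder for the thresholding step.

```python
import numpy as np

def mean_gray_value(rgb, threshold=200):
    """Convert an RGB photomicrograph (H x W x 3, uint8) to grayscale and
    quantify staining: returns (mean gray value over thresholded pixels,
    percentage area below threshold).  Gray values run from 0 (darkest,
    most intense staining) to 255 (brightest, least intense staining).
    The threshold is a hypothetical cut-off for 'stained' pixels."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    stained = gray < threshold           # thresholded staining area
    pct_area = 100.0 * float(stained.mean())
    mgv = float(gray[stained].mean()) if stained.any() else float("nan")
    return mgv, pct_area
```

On this reading, a lower mean gray value corresponds to more intense AChE staining, so the higher mean gray values reported for kolaviron-treated animals indicate reduced staining intensity.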
Effect of kolaviron on hippocampus and striatum
In the present study, treatment with kolaviron at 200, 400 and 800 mg/kg body weight did not cause any histological alteration or neurodegenerative changes in the hippocampus and striatum. Photomicrographs of control and treated groups showed normal histology of the hippocampus, with numerous pyramidal-shaped neurons in the CA3 (Cornu Ammonis) region of the hippocampus proper (Figure 2), and also a normal striatum (Figure 3). Neurons exhibit distinct blue nuclear staining with prominent deeply stained nucleoli. Numerous oligodendrocytes are also clearly identified by their classic "fried egg" appearance, as seen in non-perfused brain tissues. Histomorphometric neuronal counts showed no significant difference (p>0.05) in the neuronal density of CA3 hippocampal neurons and striatal neurons (Table 1).
Effects of kolaviron on hippocampal and striatal AChE activities
The present study has also shown that kolaviron reduced AChE staining intensity in the hippocampus and striatum. As shown in Figure 4 and Figure 5, treatment with kolaviron at the various doses reduced the staining intensity of AChE compared to control. Further analysis confirmed a significant decrease (p<0.01) in staining intensity in kolaviron-treated animals, as shown by higher mean gray values compared to control groups in the hippocampus and striatum, though there was no significant difference (p>0.05) in the percentage area of AChE activity. No significant difference (p>0.05) in staining intensity was observed between kolaviron-treated groups (Table 2). Values are mean ± SEM of data obtained (n = 3); N – number of neurons.
Discussion
We have previously reported that kolaviron could afford some protection to hippocampal neurons following methamphetamine-induced neurotoxicity. 1 Kolaviron has also been shown to protect neurons against gamma radiation-induced oxidative
stress. 20 The present study indicates that kolaviron, at various doses, does not cause neurodegenerative changes in the hippocampus and striatum.
The results from this study also suggest that kolaviron is a likely inhibitor of AChE activity, as indicated by the reduced staining intensity of AChE. Dysregulated cholinergic neurotransmitter systems have been implicated in the pathophysiology of a variety of neurodegenerative disorders such as Parkinson's disease (PD), Alzheimer's disease (AD), Huntington's disease (HD) and schizophrenia. 13,14 Long-term treatment with acetylcholinesterase inhibitors is presently the main therapy for AD and shows potential for other neurodegenerative disorders, including PD, HD and tardive dyskinesia. AChE inhibitors also show antipsychotic properties that have advanced the development of cholinomimetic therapy for schizophrenia. 14 A decrease in the activity of cholinergic neurons is a common feature of AD. Currently, four of the five medications used in the treatment of the cognitive manifestations of AD are AChE inhibitors (donepezil, tacrine, galantamine and rivastigmine), the other being an NMDA (N-methyl-D-aspartate) receptor antagonist (memantine). The AChE inhibitors are used to reduce the rate at which ACh is broken down, in order to increase the concentration of ACh in the brain and thus compensate for the loss of ACh caused by the death of cholinergic neurons. 21,22 These drugs are not without side effects; the most common include nausea and vomiting, both of which are probably due to excessive cholinergic activity. Other less common side effects are bradycardia (decreased heart rate), reduced appetite and weight, elevated gastric acid production, and muscle cramps. 23 In view of the likely effect of kolaviron as an AChE inhibitor, kolaviron could be developed as an alternative therapy, probably with fewer side effects, in the management of AD and other neurodegenerative disorders. Kolaviron has already been shown to prevent oxidative damage to the brain following gamma irradiation.
10 Also, we have shown that kolaviron improves impaired cognitive functions following methamphetamine challenge in adult rats. 1 Methamphetamine, on the other hand, has been shown to impair cognitive functions by altering brain ACh systems. Methamphetamine alters ACh receptors in adult rats and decreases choline acetyltransferase, the enzyme responsible for synthesizing ACh, in adult humans. 24 It is thus probable that the likely ability of kolaviron to inhibit AChE activities in the brain of rats, thereby reducing the rate at which ACh is broken down, is a mechanism by which kolaviron improves methamphetamine-impaired cognitive functions.
Conclusions
In conclusion, the key finding of this study was that kolaviron reduces AChE staining intensity in the hippocampus and striatum of adult rats, thus acting as a likely inhibitor of AChE. However, this calls for further studies to corroborate these findings and elucidate the detailed mechanism of a possible kolaviron-mediated inhibition of AChE. Kolaviron may be used as a novel herbal-based natural product with therapeutic potential in the management of neurodegenerative disorders associated with dysregulated cholinergic neurotransmitter systems.
Acknowledgements
The authors acknowledge Dr. Nwoha PU of the Department of Anatomy and Cell Biology, Obafemi Awolowo University, and Prof. Farombi EO of the Department of Biochemistry, University of Ibadan, Nigeria, for valuable support and technical guidance.
Players are positive regarding injury prevention exercise programmes, but coaches need ongoing support: a survey-based evaluation using the Health Action Process Approach model across one season in amateur and youth football
Objectives Implementation of injury prevention exercise programmes (IPEPs) in sports is challenging, and behaviour change among players and coaches is essential for success. The aim was to describe players’ and coaches’ motivation and coaches’ goal pursuit when using IPEPs in amateur and youth football across a season. A secondary aim was to describe players’ motivation to engage in IPEP use in relation to presence or absence of injury. Methods The study was based on questionnaires to amateur and youth, male and female football players and coaches at baseline, mid-season and post-season in a three-armed randomised trial in 2020 in Sweden. Questionnaires were based on the Health Action Process Approach (HAPA) model with questions about the motivational phase when intention for change is created (players and coaches) and a goal-pursuit phase when intention is translated into action (coaches). Results In total, 455 players (126 male), mean age 20.1 years (SD±5.8, range 14–46) and 59 (52 male) coaches took part. Players generally gave positive answers in the HAPA motivational phase (Likert 6–7 on a 1–7 Likert scale). Differences in ratings between injured and uninjured players were minor. Coaches had positive or neutral ratings (Likert 4–6) in the motivational and goal-pursuit phases. Ratings deteriorated across the season, with less positive responses from 40% of players and 38-46% of coaches post-season. Conclusion Positive ratings in the HAPA motivational phase indicated fertile ground for IPEP use. Neutral ratings by coaches and deterioration across the season in players and coaches suggest a need for ongoing support for IPEP use. Trial registration number NCT04272047.
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Implementation of injury prevention exercise programmes (IPEPs) is dependent on behaviour change in players and coaches.
⇒ Coaches tend to modify IPEP content or dosage, and players are reported to have low motivation for injury prevention, which together challenges the successful prevention of injuries in sports.
WHAT THIS STUDY ADDS
⇒ The study covered both the motivational (players and coaches) and the goal-pursuit (coaches) phases of the Health Action Process Approach model across one football season and in relation to three different IPEPs.
⇒ Players showed high motivation for IPEP use, and differences were small between ratings from players using different IPEPs, between males and females, and between injured and uninjured players. Findings were not in agreement with previous studies describing low player motivation as a barrier to IPEP use.
⇒ Coaches were neutral or positive regarding their motivation for IPEP use and their goal pursuit to start and maintain use of IPEPs over time. Specifically, they rated neutrally on their belief in their ability to use an IPEP (action self-efficacy), their plans for instructing players, and their plans to work around barriers for continued IPEP use, suggesting a need for continuous support.
Open access

mandate to determine whether, when and how IPEPs are used, and they often lead the preventive training for the whole team. Previous studies have reported low player motivation as a barrier that potentially affects the use of IPEPs. 8 17-19 Considering the gap between efficacy and effectiveness in injury prevention, 20 21 there is a need for a better understanding of the factors that drive behaviour change in the adoption and maintained use of IPEPs. The Health Action Process Approach (HAPA) is a two-phase behaviour change model, with a motivational phase in which the intention for change is created and a volitional, goal-pursuit phase in which this intention is translated into action. 22 Risk perceptions, outcome expectancies, action self-efficacy and intention are key constructs in the motivational phase, whereas action and coping planning, maintenance and recovery self-efficacy constitute the goal-pursuit phase. 23 The HAPA model has been applied previously in amateur and youth sport contexts in football, rugby union and floorball. 10 18 24-26 By identifying a need to strengthen specific constructs, targeted measures to improve the adoption and maintained use of IPEPs may be developed. Hence, the aim was to describe players' and coaches' motivation and coaches' goal pursuit when using IPEPs in amateur and youth football across a season. A secondary aim was to describe players' motivation to engage in IPEP use in relation to the presence or absence of injury.
METHODS
This study was based on questionnaire data collected at baseline, mid-season and post-season from players and coaches in a three-armed randomised trial conducted in Sweden in 2020. 6 Teams were randomised to a further developed version of the IPEP Knee Control, the extended Knee Control, 6 or a short single-exercise adductor strength programme based on the Adductor strengthening programme. 27 Teams that already used an IPEP on a regular basis at study start were not randomised but were allocated to a comparison group that continued with their usual training throughout the season and did not receive any intervention as part of the study.
In this subanalysis, data are presented from players and coaches who responded to the baseline questionnaire and the mid-season and/or post-season questionnaires. Baseline questionnaires were distributed before the competitive season in April, mid-season questionnaires in June before the summer break and post-season questionnaires in October/November, depending on when the team's season ended. The total season ranged in length from 27 to 29 weeks. Since the study was conducted during the COVID-19 pandemic, the preseason was extended and the competitive season was postponed. However, football training was never cancelled in Sweden and injury prevention training continued throughout the season. The study was checked against the Strengthening the Reporting of Observational Studies in Epidemiology checklist. 28

Participants

Players ≥14 years of age and coaches in teams who participated in a male or female adolescent or adult league 2020 series in one football district and who had at least two scheduled training sessions per week were eligible. Teams in the randomised arms had not engaged in regular prevention training during the previous year, while teams in the non-randomised arm (comparison group) had used an IPEP regularly at least once per week during the previous year and planned to do so again in the 2020 season. All coaches for these teams whom we had contact information for were eligible. In total, 17 teams were randomised to extended Knee Control, 12 teams were randomised to the adductor programme and 17 teams were allocated to the comparison group.
Interventions
Only information relevant to this substudy is presented here; a detailed description of the interventions has been published previously. 6 Teams in the randomised groups were offered workshops or site visits where the respective intervention, extended Knee Control or the adductor programme, was introduced and the exercises were practised (Table 1). Afterwards, coaches were expected to lead the preventive training in their team. Both randomised groups also received printed and digital programme materials. Teams were instructed to begin training immediately after the workshops or site visits and to carry on with preventive training throughout the season.
Questionnaires
Web-based questionnaires were distributed via a link that was sent by email and/or short message service to players and coaches and complemented by two reminders to non-responders. The questionnaires were bespoke but based on a previous questionnaire applying the HAPA model. 18 All questions were answered on a 1-7 Likert scale, where 1 represented the least and 7 the most favourable option. Baseline questionnaires were distributed to all coaches after intervention group coaches had taken part in workshops or site visits, and to players at the same time.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ The finding of motivated players was encouraging, but in the future, other research designs, such as qualitative studies and studies outside the controlled context of a randomised trial, may enable a deeper understanding of the player perspective of injury prevention.
⇒ Strategies including workshops at the beginning of the season introducing injury prevention exercise programmes (IPEPs) may be supplemented by ongoing strategies throughout the season to support coaches in formalising plans for IPEP use over time and to support coaches' self-efficacy beliefs in relation to injury prevention. These ongoing initiatives may also potentially prevent the deterioration in motivation and goal pursuit shown across the season among both players and coaches.
Players and coaches responded to questions related to the motivational phase in HAPA at all three time points (table 2). Coaches also responded to questions relating to the goal-pursuit phase. Most questions in the goal-pursuit phase were asked at mid-season and post-season. Players received questions about the motivational phase only, since motivation for IPEP use may impact their adherence and fidelity to prevention programme use; they did not receive questions related to the goal-pursuit phase, since the decision to use an IPEP usually lies with the coach. In the interest of brevity and to reduce response burden, we strived to limit the number of questions to players and also adapted questions to better suit each group, players and coaches, based on their specific roles and responsibilities related to IPEP use. Therefore, questions are similar, but not identical, for players and coaches.
In addition to the HAPA questions, players were asked at baseline whether they had any ongoing or previous injuries (anytime previously in their career) in the hip/groin, hamstring, knee or ankle. These four injury locations were the primary prevention targets in the main randomised trial. During the season, players reported the occurrence of injury in a weekly questionnaire based on the Oslo Sports Trauma Research Centre (OSTRC) questionnaire (OSTRC-O2). 29 For this substudy, players who reported any physical complaint in any body location during the season were treated as 'injured'.
Statistical analysis
No sample size calculation was made for this subanalysis, only for the main study. 6 Results are presented descriptively with medians and IQRs for each question as well as aggregated per HAPA construct. Aggregated construct scores are presented in the main manuscript, whereas raw scores for each question are presented in the online supplemental file. The distribution of responses across the 1-7 Likert scale is presented in figures. Likert responses 1-2 are considered negative, 3-5 neutral and 6-7 positive. Responses at baseline are also presented separately based on whether the players entered the study with ongoing or previous injury or not. Post-season ratings of players who reported an injury during the season are contrasted with ratings of players who did not report an injury. When presenting ratings stratified by sex (players) or intervention group (players and coaches), we use the post-season ratings, since respondents had had the most time to experience the interventions by post-season. We also applied Paretian principles in an analysis and compared ratings within the same HAPA phase across the season for each individual, relating to whether ratings improved, were unchanged, deteriorated or were indeterminable (where deteriorations were seen in some constructs and improvements in other constructs). This has previously been described for analyses of the EuroQol 5-Dimension for health-related quality of life. 30 In our analysis, Likert responses were combined into three categories (1-2, 3-5 and 6-7), and a change was only considered when an individual's rating differed from one category to another between time points. The analysis only includes participants with data from both analysed time points (baseline and post-season or mid-season and post-season). No missing data were imputed.
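The Paretian classification described above can be made concrete with a short sketch. This is an illustrative example only: the function and construct names are our own, not taken from the study materials, and the logic follows the description in the text (collapse 1-7 Likert ratings into three categories; a change counts only when the category differs between time points).

```python
def likert_category(score):
    """Collapse a 1-7 Likert rating into negative (1-2), neutral (3-5) or positive (6-7)."""
    if score <= 2:
        return "negative"
    if score <= 5:
        return "neutral"
    return "positive"

# Ordering of categories, so category changes have a direction.
ORDER = {"negative": 0, "neutral": 1, "positive": 2}

def paretian_change(baseline, post):
    """Classify one participant's change across constructs within a HAPA phase.

    baseline, post: dicts mapping construct name -> Likert rating (1-7) at the
    two analysed time points. Returns one of 'unchanged', 'improved',
    'deteriorated' or 'indeterminable'.
    """
    directions = set()
    for construct in baseline:
        delta = (ORDER[likert_category(post[construct])]
                 - ORDER[likert_category(baseline[construct])])
        if delta > 0:
            directions.add("improved")
        elif delta < 0:
            directions.add("deteriorated")
    if not directions:
        return "unchanged"
    if directions == {"improved"}:
        return "improved"
    if directions == {"deteriorated"}:
        return "deteriorated"
    # Improvements in some constructs and deteriorations in others.
    return "indeterminable"
```

Note that a move from 4 to 5, for instance, stays within the neutral band and therefore counts as unchanged, which matches the categorised analysis described above.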
Patient and public involvement
Extended Knee Control programme development was informed by a qualitative study with coaches 8 and pilot-tested with players and coaches, 31 but players and coaches did not take part in the planning or conduct of the study.
RESULTS
In total, 455 (126 male) players (mean age 20.1 years (SD 5.8, range 14-46)) responded to the baseline questionnaire, corresponding to 91% of the players who took part in the main study. 6 In total, 59 (52 male) coaches (mean age 43.9 years (SD 9.0)) responded to the baseline questionnaire, representing 40 teams (87% of participating teams), plus an additional 2 teams whose coaches took part in workshops but ended their participation afterwards and whose players never entered the study. Fifteen coached male youth or senior teams, and 44 coached female youth or senior teams.
Ratings in the motivational phase among players
Ratings in the motivational phase were either neutral (injury risk perceptions) or generally positive (outcome expectancies, action self-efficacy and intention) among players at both baseline and across the season (figure 1, online supplemental table 1). Differences in ratings between players in different intervention groups were minor (±1 point on the Likert scale) (online supplemental table 2, online supplemental figures 1-3). At post-season, male players seemed slightly more negative than female players in all four constructs (online supplemental figure 4).
Changes in player ratings across one season
In the analysis applying Paretian principles and evaluating changes from baseline to post-season, 65 players (24%) had unchanged ratings in the motivational phase, 109 players (40%) deteriorated (rated a less positive response), 58 players (21%) improved (had more positive ratings in post-season) and 43 players (16%) were indeterminable (ie, had both improved and deteriorated ratings in different constructs). When scrutinising each separate construct in the motivational phase, most ratings in injury risk perceptions (58%), outcome expectancies (63%) and intention (61%) were unchanged from baseline to post-season (online supplemental figures 5 and 6).
Player motivation in relation to injury status
Baseline ratings from players who entered the study with or without injury are presented in figure 2. Notably, injury risk perceptions seem to differ, with more injured players rating negative responses; that is, they believed to a higher extent that they would incur an injury during the season. Differences in outcome expectancies or intention to take part in injury prevention training were negligible between injured/non-injured players. Differences in post-season ratings between players who incurred an injury during the season or not were also negligible (online supplemental figures 7 and 8).
Ratings in the motivational phase among coaches
Coaches' ratings in the motivational phase were neutral (injury risk perceptions, action self-efficacy) or positive (outcome expectancies, intention) at baseline (figure 3, online supplemental table 3). Slightly more positive ratings were seen in the extended Knee Control group, and there were fewer positive ratings in the other two groups at post-season regarding outcome expectancies and action self-efficacy (online supplemental figure 9).
Ratings in the goal-pursuit phase among coaches
In the goal-pursuit phase, coaches rated positively regarding maintenance and recovery self-efficacy but were neutral about action and coping planning (figure 4, online supplemental table 3). Slight differences were seen during post-season, with more positive responses in the extended Knee Control group and more negative responses in the comparison group (the adductor group being somewhere in between) (online supplemental figure 10).
DISCUSSION
In this study on behavioural support for the use of IPEPs, we found motivation for IPEP use among both players and coaches, and positive responses in the goal-pursuit phase among coaches, with neutral or positive ratings in each of the constructs, suggesting fertile ground for IPEP use. Ratings were similar irrespective of intervention group among players, but coaches in extended Knee Control seemed more positive than coaches in the other two groups. Overall, ratings deteriorated slightly across the season, as depicted in the analysis applying Paretian principles, which showed that 38%-46% of players and coaches had lower (that is, less positive) ratings in post-season compared with baseline (motivational phase) or mid-season (goal-pursuit phase). Differences between players who had previously incurred an injury, or incurred a new injury during the season, and players without injury were minor.
Player ratings were surprisingly positive, considering previous studies describing low motivation for IPEP use among players. 8 17-19 However, the present study was carried out in the context of a randomised trial, where possibly only the most motivated coaches and teams chose to take part, and where the amount of support for IPEP use differs compared with studies in a real-world context. Based on the differences seen in the present study, male players may be slightly more negative towards IPEP use than female players. Considering the focus on knee injuries in early studies on IPEPs, players and coaches may believe that the programmes are more for women. Hence, it would be of value to clarify to players and coaches that the programmes target injuries in the lower extremity overall, in both males and females. The high action self-efficacy, with players describing high confidence in their ability to do the preventive exercises, their effort in doing so and their ability to listen to the coach's instructions, was encouraging. However, considering the low exercise fidelity when using the Knee Control exercises in football, 11 either the players are unrealistic or they are unable to estimate their own ability in a reliable way, for example, due to insufficient information from coaches about the performance of the exercises. In summary, there seems to be more to this than the current questionnaires can capture, and further formalised development of the questionnaires, as well as establishment of their construct validity, responsiveness to change and test-retest reliability, is one way forward to learn more about the players' views. Another important way forward is to conduct qualitative studies to extend our understanding of the player's perspective.
In the motivational phase, coaches rated positively regarding outcome expectancies and intention, suggesting that they are aware of the benefits of injury prevention and intend to use IPEPs. Risk perceptions received lower ratings, but have previously been shown to be less influential than outcome expectancies and action self-efficacy when forming intention for IPEP use. 18 25 Coaches were neutral regarding action self-efficacy, suggesting that this may be an area for further improvement. This is in line with previous studies describing low self-efficacy among coaches and that they are unsure whether they are using the programme in the right way. 8 Positive effects on self-efficacy have been shown after taking part in IPEP workshops, 10 26 suggesting there is potential to improve self-efficacy. In the goal-pursuit phase, coaches rated maintenance and recovery self-efficacy high; however, they had lower ratings regarding action and coping planning. This does not align with the previous study in rugby union, where strong correlations were shown between maintenance self-efficacy and action planning. 25 This discrepancy is possibly related to the fact that the present study only covered each construct in the goal-pursuit phase with one question, making this measurement less precise and, again, suggesting a need for further development of the questionnaires. During the study, we mainly focused on supporting coaches to adopt IPEPs by offering workshops at the study start and did not specifically target maintained use of IPEPs. This may also be a reason for the deteriorations in ratings that we noticed in the analysis applying Paretian principles from baseline or mid-season to post-season. The deteriorations seen among almost half of the players are more difficult to explain but clearly indicate that coaches and players need ongoing support to ensure maintained use; we should not assume that they will continue to use the programme effortlessly. The results, with relatively minor changes in median ratings over time and between subgroups, tally with a study in school physical education classes 32 but contradict the findings from Barden et al, 25 who showed that taking part in workshops improved action self-efficacy and intention to use an IPEP among rugby coaches after the season, compared with before. In line with that study, we also included activities aiming to support high coach self-efficacy in delivering the IPEPs; however, intention and planning activities may have been given more attention in the study by Barden et al. 25 In the present study, coaches in the comparison group, who did not receive any intervention or specific support, were the ones who showed negative ratings in post-season. This emphasises the need for ongoing support also among those who regularly use IPEPs. To further elucidate how this support may be structured, qualitative studies examining the experiences of and need for support for IPEP use in more depth would be valuable.
Even though median values were high, we noticed a high spread in the responses from both players and coaches; that is, there were both positive and negative responses. This suggests that it may be futile to develop standardised support that fits all coaches and teams, and that a better way may be to develop intervention material made up of different parts that can be added or removed as needed, that is, a smorgasbord. Argumentation against standardised intervention approaches has also been published elsewhere. 33 Considering that the results of the present study are comparable with previous studies in the real world outside the controlled context of a randomised trial, 9 regarding the 11+ 18 and in youth floorball, 26 the results are probably generalisable to other team sports and IPEPs.
Limitations
The strengths of this study included the good representation of the player perspective, totalling 455 players at baseline. Another strength was the inclusion of both player and coach perspectives in the same study. Even though the questionnaires were not formally validated, they were theoretically based on a behaviour change model, the HAPA model, which is a strength. Similar questions and Likert scale ratings have also been used in previous studies. 9 10 18 24-26 The study is limited by the fact that we do not know whether statistical differences in the Likert scores between groups or over time can be regarded as clinically meaningful. For this reason, we chose not to make statistical comparisons between groups or across the season but rather present results descriptively. In circumstances when youth and senior players had the same coach and had football training together, we lack information about which league each player played in and, therefore, only present results mixed for youth and senior players. Another limitation was the relatively small sample of coaches, which does not allow for group comparisons. Considering the questionnaire, there are only a few questions per construct, and not all constructs may be properly covered by this small number of questions. To really be able to capture separate constructs, we may need to use other questionnaires, such as a self-efficacy scale, to delve more deeply into this construct. However, the present questions may still indicate target areas for future implementation strategies. Last, but not least, this study was accomplished during the first year of the COVID-19 pandemic. Even though football training continued throughout the pandemic in Sweden, players and coaches were obviously affected by the restrictions in the community and the spread of the disease, and their ratings may have been affected by this uncertainty. When comparing ratings to a 2021 study performed in the same geographical district, 9 we notice ratings in the same
HAPA constructs that are about 1 point lower on the Likert scale in the present study.
CLINICAL IMPLICATIONS
Players and coaches had positive ratings in the HAPA motivational phase, indicating fertile ground for IPEP use. Neutral ratings by coaches on action self-efficacy, action planning and coping planning, as well as a deterioration in scores across the season in both players and coaches, suggest that they should be offered ongoing support for IPEP use during the season in addition to initial workshops at the beginning of the season.
Figure 1 Distribution of player responses in the constructs in the HAPA model motivational phase across one season. n=455 at baseline, n=320 at mid-season and n=275 at post-season. Action self-efficacy was only rated at mid-season and post-season. For constructs where players responded to more than one question, the averaged aggregated responses are shown in the figure. HAPA, Health Action Process Approach; n/a, not applicable.
Figure 3 Distribution of coach responses in the constructs in the HAPA model motivational phase across one season. n=59 at baseline, n=49 at mid-season and n=48 at post-season. For constructs where coaches responded to more than one question, the averaged aggregated responses are shown in the figure. HAPA, Health Action Process Approach.
Figure 4 Distribution of coach responses in the constructs in the HAPA model goal-pursuit phase across one season. n=59 at baseline, n=49 at mid-season and n=48 at post-season. Maintenance self-efficacy, coping planning and recovery self-efficacy were only rated at mid-season and post-season. For constructs where coaches responded to more than one question, the averaged aggregated responses are shown in the figure. HAPA, Health Action Process Approach; n/a, not applicable.
Table 1
Description of the interventions in the randomised groups

Extended Knee Control:
► One-legged knee squat
► Hamstring strengthening
► Two-legged knee squat
► Core strengthening
► Lunge
► Jump/landing technique

Adductor programme, one exercise out of:
► Copenhagen adduction, long lever
► Copenhagen adduction, short lever
► Side-lying adduction
► Adductor squeeze (ball between knees, bent legs)
Table 2
Distribution of questions in relation to the two Health Action Process Approach (HAPA) phases. Post-season: 8 questions, same as mid-season but with slightly different wording focusing on next season instead of the present season; 11 questions, same as mid-season but with slightly different wording focusing on next season instead of the present season. Baseline questionnaires were distributed in April, approximately 2 months before the competitive season, mid-season questionnaires were distributed in June before the summer break and post-season questionnaires were distributed in October/November. IPEP, injury prevention exercise programme; n/a, not applicable.
Development of the Community Active Sensor Module (CASM): Forward Simulation
Abstract

Modern data assimilation frameworks require sophisticated physical and radiative models to guide assimilation and interpretation of satellite-based observations. To date, satellite-based infrared and passive microwave radiances, in various scenarios, are being assimilated operationally at multiple centers around the world (e.g., ECMWF, NOAA); however, precipitating/cloudy radiance assimilation is still under development for most observation streams. Additionally, with the advent of space-based precipitation radars (e.g., TRMM, GPM, CloudSat), active microwave scatterometers (e.g., RapidScat), and radar altimeters (e.g., JASON), interest in directly assimilating satellite-based active microwave observations is increasing. This paper describes the development of the Community Active Sensor Module (CASM), which is designed to simulate active microwave sensor observations, consistent with current and future sensors. This paper presents the forward modeling component of CASM, providing a model description, key physical elements, and sensitivity to the various inputs and implicit/explicit assumptions. CASM is also evaluated against the Global Precipitation Measurement Mission Dual-Frequency Precipitation Radar (GPM DPR) observations in both a targeted case study and a global, year-long analysis.
Introduction
Satellite data assimilation requires algorithms to properly ingest, process, and interpret a wide range of satellite-based observations. With the advent of space-based precipitation radars (e.g., TRMM, GPM, CloudSat) and active microwave scatterometers (e.g., RapidScat, ASCAT), interest in directly assimilating satellite-based active microwave observations is rapidly increasing.
The present research describes the ongoing development of the Community Active Sensor Module (CASM): a framework for efficiently simulating active microwave observations using shared libraries from a suitable radiative transfer platform. The present work uses libraries from the Community Radiative Transfer Model (CRTM) to compute atmospheric absorption, scattering, and surface reflection properties (Kleespies et al., 2004).
When provided with the appropriate physical description of the surface and atmosphere, CASM is designed to provide a standalone, unified framework to simulate the surface and atmospheric response to actively emitted microwave radiation, with a specific focus on radar, altimeter, and scatterometer simulations. When completed, CASM will provide simulated atmospheric reflectivities, path-integrated attenuation, and surface normalized radar cross-section in all-weather conditions for any active microwave sensor platform. The present paper describes the use of CASM to produce a forward simulation of nadir-viewing active radar reflectivities and path-integrated attenuation in a precipitating cloud scene.
The next development step for CASM is to compute the tangent-linear and adjoint models for the forward operator, consistent with the operation of CRTM; this will be the subject of a future publication. The computation of the Jacobians of active sensor observables will allow for accurate derivation of atmospheric and surface parameters to which the active sensors are sensitive (e.g., wind, wave height, cloud and precipitation profiles, etc.) in a variational data assimilation / 1D-VAR framework, such as the Multi-Instrument Inversion and Data Assimilation Preprocessing System (MIIDAPS) (Garrett et al., 2015). MIIDAPS provides a universal quality control and retrieval algorithm for all satellite observations. The integration of CASM within MIIDAPS (or other similar 1D-VAR frameworks) provides a self-contained active forward operator for use in applications where the assimilation of active sensor observations is desired and the adjoint and tangent-linear models are also needed.
For sensors that have both active and passive capabilities, CASM, when combined with CRTM, provides simultaneous active and passive simulation capabilities (useful for single-sensor active-passive observations, sensor quality control and cross-calibration, etc.). In the present version, only 1-D vertical profiles of reflectivity and path-integrated attenuation are provided.
Ground-based radars are not explicitly considered here, but support may be added at a future time. In the present version, there is no explicit treatment of slant-path/off-nadir radar reflectivities or radar multiple scattering enhancements (Battaglia et al., 2010).
A few prior researchers have created models for simulating satellite radar reflectivities and attenuation, such as Quickbeam (Haynes et al., 2007), and some existing research models have been modified to compute radar reflectivities, such as is described in Johnson et al. (2012) and Johnson et al. (2016). Di Michele et al. (2012, 2014) extended the "ZmVar" model for use as a lidar/radar forward operator in data assimilation at ECMWF, for use in 1D-VAR and 4D-VAR data assimilation frameworks. The present work is similarly designed with operational capabilities in mind and, as such, emphasizes computational efficiency.
CASM Technical Description
As introduced previously, the Community Active Sensor Module (CASM) is designed to simulate the active microwave response to the atmospheric and surface properties under all weather conditions. It accomplishes this using the Community Radiative Transfer Model (CRTM) libraries to provide the necessary physical, scattering, and absorption properties of the atmosphere and surface. Figure 1 depicts the target design, inputs, processes, and outputs for CASM.
A separate reference model is used for comparison with CASM simulations, the details of which can be found in Johnson et al. (2012). The following sections describe the CRTM libraries and the standalone model codes used for the computation of radar reflectivities and path-integrated attenuation.
Physical Properties of the Reference Model
The hydrometeor model, described in Johnson et al. (2012), is capable of distributing the user-prescribed water content using a four-parameter modified-gamma particle-size distribution (PSD) (Deirmendjian, 1969). For this study, we have adopted a simplified form of the modified-gamma distribution, namely the exponential PSD:

N(D) = N_0 exp(−ΛD),    (1)

where D is the liquid-equivalent diameter, N_0 is the intercept parameter, and D_0 is the median diameter of the PSD. D_0 is related to the slope (Λ) of the exponential PSD by Λ = 3.67/D_0 (Ulbrich, 1983).
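As a concrete illustration, the exponential PSD with the Λ = 3.67/D_0 relation can be evaluated numerically. This is a minimal sketch under our own conventions (the function name and units are illustrative choices, not part of CASM):

```python
import numpy as np

def exponential_psd(D, N0, D0):
    """Exponential PSD, N(D) = N0 * exp(-Lambda * D), with Lambda = 3.67 / D0
    (Ulbrich, 1983). D and D0 share the same units (e.g., mm); N0 is the
    intercept parameter in, e.g., m^-3 mm^-1."""
    lam = 3.67 / D0
    return N0 * np.exp(-lam * D)
```

For example, at D = D_0 the concentration falls to N_0 exp(−3.67), roughly 2.5% of the intercept value.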
To maintain consistency with the CRTM default scattering properties, a spherical particle shape is used for both solid and liquid phase hydrometeors. However, although not presented here, the hydrometeor model also allows realistically shaped particles, with extinction and scattering properties generated from the discrete dipole approximation (DDA) (see Johnson et al. (2016)). For spherical particles, we have selected the particle densities for snow, graupel, and hail to be consistent with what is assumed by default in CRTM. For snow, the density is 0.1 g cm−3, for graupel it is 0.4 g cm−3, and for hail it is 0.9 g cm−3.
The choice of density maps to a frequency-dependent average dielectric constant, according to either of two models for the dielectric constant for pure ice and one of three models for the dielectric constant of a mixture of ice and air (Johnson, 2007).
We then calculate radiative cross sections for individual particles using standard Mie theory (Mie, 1908), and then integrate over the specified PSD to obtain bulk radiative properties for an ensemble of particles. In the default CRTM scattering table, used here, there is no specified temperature dependence for the solid particle types. The dielectric constant for ice is from the tabulation of Warren and Brandt (2008), and the dielectric constant for liquid water comes from Liebe et al. (1991). For the dielectric mixture of ice, water, and air (as needed), we use the Bruggeman method.
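The PSD integration step described above can be sketched numerically: given single-particle cross-sections on a diameter grid, the bulk volume coefficient follows by quadrature against the exponential PSD. This is an illustrative example only (function name, units, and the trapezoidal quadrature are our own assumptions, not the CASM/CRTM implementation):

```python
import numpy as np

def bulk_coefficient(D, sigma, N0, D0):
    """Integrate single-particle cross-sections sigma(D) [m^2] over the
    exponential PSD N(D) = N0 * exp(-3.67 D / D0) to obtain a bulk volume
    coefficient [m^-1]. D in mm, N(D) in m^-3 mm^-1; uses trapezoidal
    quadrature on the supplied D grid."""
    lam = 3.67 / D0
    N = N0 * np.exp(-lam * D)
    y = sigma * N
    # Trapezoidal rule, written out explicitly for portability.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(D)))
```

With a constant cross-section, the integral reduces to sigma * N_0 / Λ (up to the truncation of the D grid), which provides a convenient analytic check.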
The Mie scattering model, developed in part by K. F. Evans, was included as part of the "RT4" package (Evans and Stephens, 2014), and has been heavily modified by the author to be more flexible and extensible to non-spherical particles.
The primary inputs are temperature, (averaged) dielectric constant, PSD slope and intercept parameters, and the wavelength of microwave radiation. Outputs are the vertically and horizontally polarized coefficients of extinction, scattering, and radar backscattering. Also produced are the scattering asymmetry parameter (degree of forward or backward scattering) and the full scattering phase function.
Simulation of Atmospheric Reflectivity and Path-Integrated Attenuation
Given the PSD-integrated optical properties, the profile of extinction, scattering, and backscattering can be translated into observable quantities using an appropriate radiative transfer model. In this case, a modified version of the RT4 package includes an adding-doubling model for simulating the thermal emission and upwelling of microwave radiation through a 1-D atmosphere. The top-of-the-atmosphere passive microwave brightness temperatures are computed, and can be compared with observations or other models. Furthermore, at each layer, the radar reflectivities for that layer are computed using the following relationship:

Z_eff [dBZ] = 10 log_10 [ 10^18 (λ^4 / (π^5 |K_w|^2)) ∫ N(D) C_back(D) dD ],    (2)

where C_back is the radar backscattering cross-section, computed using Mie theory in this study, λ is the radar wavelength, and |K_w|^2 is the dielectric factor of liquid water. Unfortunately, the standard CRTM scattering/extinction database does not explicitly provide the backscattering information needed. Instead, the scattering phase function is provided via coefficients from the Legendre expansion for the given hydrometeor category, and the backscattering is estimated from the phase function information, using an appropriate amplitude scaling.
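For reference, the standard conversion from a PSD-integrated volume backscattering coefficient to effective reflectivity in dBZ can be sketched as follows. This is an illustrative example, not CASM code; the function name and the conventional value |K_w|^2 = 0.93 for liquid water are our own assumptions:

```python
import numpy as np

def effective_reflectivity_dbz(eta, wavelength_m, K2=0.93):
    """Convert a volume backscattering coefficient eta [m^-1] (the
    PSD-integrated C_back per unit volume) to effective reflectivity in dBZ,
    using Z_e = 1e18 * lam^4 / (pi^5 |K_w|^2) * eta, which yields Z_e in the
    conventional units of mm^6 m^-3 before taking 10*log10."""
    Ze = 1e18 * wavelength_m ** 4 / (np.pi ** 5 * K2) * eta
    return 10.0 * np.log10(Ze)
```

By construction, a volume backscattering coefficient that makes Z_e = 1 mm^6 m^-3 maps to 0 dBZ, and every factor of 10 in eta adds 10 dBZ.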
CRTM Libraries and Modifications Used in CASM
CASM requires an external model to describe the single-particle scattering and extinction properties of precipitation hydrometeors. It also does not prescribe a specific particle size distribution (PSD), and relies on external libraries to provide this information. In this work, the hydrometeor model from CRTM is used.
The hydrometeor model in CRTM utilizes a static binary file as a look-up table (LUT), providing the necessary extinction, scattering, and asymmetry parameter information when provided with a frequency, effective radius (radius of an equal-volume sphere), and temperature (in the case of rainfall only). Specific care has been taken to relate the slope parameter of the exponential particle size distribution to either radius or diameter, as appropriate; these terms are used interchangeably. Within the CRTM hydrometeor library, the scattering LUT contains information for both microwave and infrared wavelengths, and for liquid and solid particles. There is one class of liquid-phase hydrometeors and four classes of ice-phase hydrometeors: snow, graupel, hail, and cloud ice, each with a pre-defined bulk density.
The outputs of the CRTM hydrometeor model are the PSD-integrated mass extinction coefficient (mass-weighted extinction cross-section, m^2 kg^−1), the scattering asymmetry parameter (ranging from −1 to 1), the single-scattering albedo (ratio of scattering to total extinction, ranging from 0 to 1), and the Legendre coefficients of the scattering phase function, which have up to 38 terms (amplitude only, no polarization). Of note is the lack of a radar backscattering cross-section in the look-up table, nor is this necessary quantity currently computed by default in the CRTM libraries.
Given this limitation, we sought to roughly estimate the backscattering cross-section given the total scattering cross-section and the intensity value of the scattering phase function at 180 degrees (i.e., "backward" scattering). Starting with the scattering phase function, p(τ, Θ), as a function of the optical path τ and the scattering angle Θ, it is expressed as the sum over the Legendre polynomials, P_n(cos Θ), with amplitude weights χ_n(τ), as follows:

p(τ, Θ) = Σ_n χ_n(τ) P_n(cos Θ).    (3)

At Θ = 180 degrees, the Legendre polynomials reduce to P_n(−1) = (−1)^n, so that

p(τ, 180°) = Σ_n (−1)^n χ_n(τ).    (4)

The scattering cross-section, C_scat, is the product of the provided mass-extinction coefficient M_ext, the single-scattering albedo ω, and the cross-sectional area of the particle A. To obtain the layer-averaged scattering cross-section, this is scaled by the geometric layer thickness δz and layer water content W:

C̄_scat = ω M_ext A W δz.    (5)

The layer-averaged backscattering cross-section is, consequently, the product of the PSD-averaged phase function (eqs. 3 and 4) and the layer-averaged scattering cross-section (see Bohren and Huffman (1983) for a detailed discussion).
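Because P_n(cos 180°) = (−1)^n, evaluating the Legendre-expanded phase function at the backscatter angle reduces to an alternating sum of the amplitude weights. A minimal illustrative sketch (function name is ours, not CASM's):

```python
import numpy as np

def phase_function_backscatter(chi):
    """Evaluate a Legendre-expanded scattering phase function at the
    backscatter angle (180 degrees). chi is an array of Legendre amplitude
    weights chi_n for n = 0..N. Uses P_n(cos 180 deg) = P_n(-1) = (-1)^n,
    so no explicit polynomial evaluation is required."""
    n = np.arange(len(chi))
    return float(np.sum(chi * (-1.0) ** n))
```

This is equivalent to evaluating the full Legendre series at cos Θ = −1, but avoids constructing the polynomials explicitly.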
Provided with a vertical profile of water content and a particular hydrometeor type, the layer-averaged radar backscattering coefficient and the layer-averaged extinction can be computed for a layer of hydrometeors, for each hydrometeor type present.
From the hydrometeor scattering and extinction perspective, there are no constraints between adjacent layers (i.e., each layer is treated as physically independent from the adjacent layers).
The two-way path-integrated attenuation (PIA) assumes a cloud top-down integration approach, where attenuation "accumulates" moving down through the cloud. It also assumes that the radar signal is attenuated in the same manner on the return trip.
Three contributors to the path-integrated attenuation are considered: (1) absorption by gases, particularly water vapor and "air"; (2) absorption by cloud liquid water; and (3) absorption plus scattering by hydrometeors, the primary contributor.
Like the backscattering cross-section, the two-way path-integrated attenuation is computed using the total extinction provided from the CRTM LUT at each layer, and the gaseous extinction provided by ancillary observations. The PIA is written as follows:

PIA = exp( -2 ∫ from r_0 to r of [ k_scat(r') + k_abs(r') ] dr' ),

where r_0 is the geometric distance to the first range gate from the radar, r is the distance from r_0 to the current range gate, and k_scat and k_abs are the unitless volume scattering and absorption coefficients, where k_* = C_*/A. The attenuation is then multiplied by Z_eff (prior to dBZ conversion) to obtain the attenuated reflectivity (Z_m), consistent with what would be observed by a satellite or aircraft radar. This simulated attenuated reflectivity can be directly compared to typical observed radar reflectivities, after conversion to dBZ. In the following sections, the term "corrected" reflectivity refers to the simulation of reflectivities without the attenuation contribution. In the case of radar observations, this correction is usually provided in the data product.
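A minimal discrete version of this top-down accumulation is sketched below. The layer geometry, the toy profile numbers, and the choice to accumulate each gate's own extinction before applying the two-way factor are illustrative assumptions, not the CASM discretization.

```python
import math

def attenuated_reflectivity(z_eff, k_ext, dr):
    """Apply two-way attenuation to a top-down profile of effective
    reflectivities (linear units, mm^6 m^-3). k_ext holds the per-layer
    one-way extinction (m^-1; gas + cloud liquid + hydrometeor
    contributions summed), dr is the layer thickness (m). Index 0 is
    cloud top; optical depth accumulates downward and is doubled for
    the two-way path."""
    z_m = []
    tau = 0.0  # one-way optical depth accumulated from cloud top
    for z, k in zip(z_eff, k_ext):
        tau += k * dr
        z_m.append(z * math.exp(-2.0 * tau))
    return z_m

def to_dbz(z_linear):
    """Convert linear reflectivity to dBZ."""
    return 10.0 * math.log10(z_linear)

# Three-layer toy profile (hypothetical numbers, not CASM output).
z_eff = [1000.0, 2000.0, 5000.0]   # effective reflectivity, linear
k_ext = [0.0, 1e-4, 2e-4]          # extinction per layer, m^-1
z_m = attenuated_reflectivity(z_eff, k_ext, dr=500.0)
print([round(to_dbz(z), 2) for z in z_m])  # -> [30.0, 32.58, 35.69]
```

Without attenuation the same profile would read [30.0, 33.01, 36.99] dBZ, so the difference at each gate is the two-way PIA expressed in dB.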
Reference Model Comparison
In order to assess the range of validity of CASM, the reference model (Johnson, 2007; Johnson et al., 2012) was used to examine the range of applicability of Mie spheres and the computed backscattering cross-sections (see section 2.1). To generate these profiles (black lines), CASM was provided with the same vertical profile of snow water contents as was used in the reference model, and simulated using the same effective radius at each frequency. The densities in the reference model span the range from 0.1 g cm⁻³ (dark blue) to 0.9 g cm⁻³ (dark red). In all cases, in spite of the coarse nature of the comparison, the breakpoint in Fig. 7 occurs at 1500 microns effective radius (vertical axis), which is also the maximum limit of effective radius in the default CRTM scattering database. This highlights a serious limitation of the existing database in CRTM.
Development is underway for an extended version of the database to allow for the accurate simulation of reflectivities (and all other parameters) at larger effective radii, and over a much wider range of microwave frequencies to support current and future satellite data assimilation efforts.
Simulation Studies and Validation
To validate the performance of CASM, GPM DPR radar reflectivity and attenuation observations are compared with CASM simulations. In the first section, a case study containing a variety of precipitation types (rain, mixed-phase, snowfall), ranging from light to heavy precipitation, was selected for comparison. The second section compares CASM simulations against a year-long global dataset of DPR observations at Ku- and Ka-band.
Case Study Comparison of CASM and GPM DPR Observations
Level 2A GPM DPR data files were obtained from the official GPM data server; there is a separate data file for the Ku- and Ka-bands, and a number of parameters are present within each file. For the present validation, the Ku-band reflectivity and attenuation measurements were used as observational data, and the mass-weighted median diameter (D_m) and intercept number concentration (N_w) were derived using the surface reference technique method described in Seto and Iguchi (2015). Using these two parameters, CASM (with the CRTM scattering library) was used to compute the effective radar reflectivity and path-integrated attenuation, as described in previous sections. Figure 8 shows a 2-D slice through the 3-D volume of these two parameters. In the melting layer region, odd behavior of D_m and N_w is evident; this is primarily due to the lack of an explicit melting layer model in the official GPM-DPR level 2 retrieval algorithm, which compensates by adjusting the PSD parameters to force a fit to the reflectivities.
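To make the forward step from PSD parameters to reflectivity concrete, the sketch below computes a Rayleigh-regime reflectivity from exponential-PSD parameters. The Rayleigh D^6 weighting, the lam = 4/D_m relation (the mass-weighted mean form of the exponential identity; the median form uses 3.67/D_0), and the sample numbers are assumptions of this illustration; CASM instead integrates LUT-based cross-sections as described above.

```python
import math

def reflectivity_exponential_rayleigh(n_w, d_m):
    """Rayleigh-regime reflectivity factor for an exponential PSD
    N(D) = N_w * exp(-lam * D), with lam = 4 / d_m (exponential
    mass-weighted mean diameter identity). Then
    Z = integral of N(D) * D**6 dD = 720 * N_w / lam**7.
    Units: n_w in mm^-1 m^-3, d_m in mm, Z in mm^6 m^-3."""
    lam = 4.0 / d_m
    return 720.0 * n_w / lam ** 7

# Marshall-Palmer-like intercept, 1.5 mm mass-weighted mean diameter.
z = reflectivity_exponential_rayleigh(n_w=8000.0, d_m=1.5)
print(round(10.0 * math.log10(z), 1))  # -> 37.8 (dBZ)
```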
Figure 9 shows that CASM can reproduce single-profile Ku-band reflectivities with high fidelity, with the notable exception of the melting layer region, where both CASM and the level 2A algorithm lack an explicit melting layer model.
The extension of this single profile to a 2-D slice is shown in Fig. 10.
This early version of CASM is performing well for cases where the vertical profile is continuous, but appears to be suffering in regions where the reflectivity column is broken or marginal (e.g., the right-hand side of Fig. 10 (c) and (d)).Further investigation into these artifacts is required, and will be a part of the next round of updates to CASM.
Validation Against a Global DPR Dataset
Extending the comparison above to a global dataset allows for a more robust statistical comparison. One year of GPM DPR level 2A (V6) data was downloaded and processed. Following the approach above, N_w, D_m, temperature, and the observed reflectivity were obtained from each DPR file. To avoid sidelobe clutter effects (Furukawa et al., 2013), only the nadir beam was selected. When these variables are processed using CASM (Fig. 11), occasional dramatic departures of D_m and N_w from nearby similar profiles are evident. This is believed to be a feature of the DPR level 2A processing algorithm, and is not under the control of the author.
Conclusions
CASM, using CRTM libraries, produces vertical profiles of radar reflectivity and path-integrated attenuation. Given the noted limitations of the CRTM scattering lookup table, particularly the maximum effective radius of 1500 microns, we find that radar reflectivity simulations suffer in cases of heavy precipitation where the effective radius exceeds this limit. Comparisons against GPM DPR observations were presented in the previous sections. Future research will explore the integration of CASM into MIIDAPS, starting with the computation of the Jacobians of active sensor observables, which will allow for accurate derivation of atmospheric and surface parameters in an analysis framework (e.g., wind, wave height, cloud and precipitation profiles, etc.). Ultimately CASM is expected to provide a full active microwave sensor simulation capability, for all-weather and all-surface conditions. The tangent-linear and adjoint components of CASM, and the subsequent Jacobian calculations, provide the capability of directly interfacing with current numerical weather prediction analysis packages, such as the Global Data Assimilation System (GDAS) at NOAA, an integral component of the operational weather prediction capability in the U.S.
Figure 1. CASM design diagram, targeting the final form of the model.
Figure 2. Example physical properties and associated PSD properties used during stand-alone testing of CASM. (a) Vertical profile of temperature, (b) relative humidity (%), (c) hydrometeor density, (d) liquid equivalent precipitation rate for graupel and snow, (e) median diameter D0 of the exponential particle size distribution, and (f) the intercept parameter N0 of the exponential size distribution.
Figure 3 shows an example of the PSD-integrated optical properties for the profile given in Fig. 2 at 13.4 GHz (Ku-band) and 35.6 GHz (Ka-band), both consistent with the Global Precipitation Measurement mission (GPM) Dual-frequency Precipitation Radar (DPR).
Figure 3. Example PSD-integrated scattering and extinction properties for the physical profiles shown in Fig. 2.
Figure 4. Comparison of CASM computed reflectivities at 10.65 GHz using the default CRTM scattering database (black lines) compared to the reference model (described in the text) as a function of bulk hydrometeor density (colored points). For consistency, in both scattering databases the bulk particle density is the same for all sizes in the integrated particle size distribution.
Figure 5. Same as figure 4 except at a frequency of 36.6 GHz.
Figure 6. Same as figure 4 except at a frequency of 89.0 GHz.
Figure 7 shows CFADs (Contoured Frequency by Altitude Diagrams) of attenuation-corrected reflectivity (Z_c): GPM DPR observed Ku- and Ka-band reflectivities in panels (a) and (b), respectively, and the CASM simulations at Ku- and Ka-band in panels (c) and (d), respectively.
Figure 8. Particle size distribution parameters Dm and Nw derived from GPM DPR Ku-band observations. These parameters are used in CASM to forward model the reflectivities and compute the attenuation.
Figure 9. Simulated reflectivity using CASM (black line) compared to observed reflectivity (dashed lines), for Ku-band (panel a) and Ka-band (panel b). For Ka-band, the derived PSD parameters from Ku-band were used.
Figure 10. Attenuation-corrected radar reflectivities: (a) the CASM simulation at Ku-band, (b) DPR corrected reflectivities at Ku-band. (c) and (d) are the same as (a) and (b), except at Ka-band. Note the significant attenuation corrections at Ka-band.
Figure 11. CFADs of GPM DPR observations (top row) and CASM simulations (bottom row) for Ku-band and Ka-band (beam matched). One year of data was used for the analysis, at nadir beam only, from 01 January 2015 to 31 December 2015. Approximately 4 million reflectivity profiles at both Ku- and Ka-band were used in the analysis.
Genes involved in pancreatic islet cell rejuvenation
The pancreas plays an important role in maintaining glucose homeostasis. The deterioration of β-cells in the pancreas is a crucial factor in the progression of diabetes mellitus; therefore, the restoration of β-cell mass and its function is of vital importance for effective therapeutic strategies. The precise mechanism for the increase in functional β-cell mass is still unknown. This review focuses on the importance of certain genes which are involved in the rejuvenation of the pancreas. These genes are divided according to their functions into three categories: they participate either in proliferation (mitotic division of differentiated β-cells), neogenesis/transdifferentiation (development from precursor cells) or inhibition of β-cell apoptosis (programmed cell death). The rate of β-cell rejuvenation is the balance among the rates of β-cell proliferation, neogenesis and apoptosis. Understanding these genes and their pathways may lead to the discovery of new drugs, target-based gene delivery and the development of safer antidiabetic drugs.
Introduction
Diabetes is a major cause of health concern in the world and is growing in epidemic proportions. It is assumed that in the next ten years it will become the number one disease of the world 1 . Type-1 diabetes mellitus (T1DM) is an autoimmune disease, while type 2 is mostly a lifestyle disease. The majority of people suffer from type-2 diabetes, which is responsible for the current diabetes explosion. The detection of early markers for the disease and its prevention is an active area of research to develop target-based novel drugs.
Dysfunctional pancreas in diabetes
Insulin, a key polypeptide hormone secreted by the pancreas, targets several tissues for the utilization of glucose and thus maintains the glucose homeostasis.
Type 2 diabetes mellitus (T2DM) develops from a combination of genetic and acquired factors (such as changes in metabolic homeostasis) that impair β-cell function on one side, and tissue insulin sensitivity on the other 2,3 . Normally, β-cell mass can adapt to changes in metabolic homeostasis. Recurrence of these changes in metabolism creates a stress on the pancreas, often predating the onset of T2DM by many years. This pancreatic stress causes β-cell mass expansion, through enhanced proliferation and neogenesis. The progression from this stress condition to a state of diabetes is inevitably associated with a decrease in the β-cell mass 2-4 . This β-cell loss arises due to an increase in β-cell apoptosis, which clearly outweighs replication and neogenesis.
The war against diabetes through the development of new drugs is an ongoing continuous process 5 .

Table I. Genes involved in beta cell proliferation
Genes/proteins: Functions (References)
Reg gene family (RegI, II, IIIα, IIIβ, IIIγ): Increase islet cell size and density; regeneration of pancreas (16,22,24,25,27)
Sox9: Stimulates proliferation and survival of pluripotent progenitors (28)
Hnf-

Figure. ...also contributes to the regeneration of the pancreas, in which genes/gene products act in conjunction with either (1) or (2). All these mechanisms may contribute synergistically to the regeneration of the pancreas. See Tables I, II and III for the genes/gene products involved in the above processes for pancreatic rejuvenation. Reproduced with permission from Nature Publishing Group, London, UK 6 .

With
the technological advancement, efforts are being made to rejuvenate the pancreatic cells or create an artificial pancreas. Pancreatic rejuvenation can happen either through proliferation of existing β-cells, differentiation of progenitor cells into β-cells 6 (Figure), or a decrease in β-cell apoptosis.
β-Cell proliferation
Islet regeneration refers to an increase in β-cell mass by proliferation and replication of existing islet cells. Several mouse studies 7-9 have shown that β-cells do not proliferate; however, lineage tracing studies [10][11][12][13][14] have confirmed that human β-cells proliferate and give rise to a population of progenitor/stem cells. Various genes and transcription factors are involved in this process, viz. Reg (Regenerating islets derived proteins), Sox9, Hnf-6, NeuroD1, Neurogenin-3 and Netrin-1 (Table I). Besides these genes, certain peptides or their analogues, such as glucagon like peptide-1/exendin-4, are also involved in islet regeneration. These observations are confirmed by using the dipeptidyl peptidase (DPP) IV inhibitor sitagliptin in mice 15 .
So far, five REG proteins that belong to the Reg gene family have been reported in humans. Some of the members of this family have been implicated in β-cell replication and/or neogenesis, as shown in in vivo studies using transgenic and knockout mice 16 . These also preserve the β-cell mass in autoimmune type 1 diabetes 17 . The Reg family of genes is expressed in both young and old mice that were subjected to partial pancreatectomy 18 . In isolated rat islets, Reg1 mRNA levels were significantly increased by glucose, amino acids, foetal serum or specific growth factors such as insulin, growth hormone and platelet-derived growth factor (PDGF) 19 . PDGF receptor signalling controls age-dependent β-cell proliferation in mouse and human pancreatic islet cells 20 . Disruption of the RegI gene resulted in a significantly decreased rate of DNA synthesis and diminished β-cell hyperplasia in response to obesity, confirming the role of endogenous RegI in islet cell growth 21 . A study conducted by Huszarik et al 22 showed upregulation of RegII during the diabetogenic process and also after adjuvant therapy in NOD mice. While all Reg family mRNAs can be detected from total pancreas, RegII and RegIIIα genes have been detected in pancreatic islet cells as confirmed by immunofluorescence 23 , and RegIIIα expression was remarkably increased during pregnancy in rats 24 . Mice overexpressing RegIIIβ were resistant to streptozotocin-induced diabetes mellitus 25 . RegIIIγ, another member of the Reg family of genes, is also found to be involved in the regeneration of the pancreas. REG III protein was found to be expressed only in regenerating islets and not in normal rat pancreas 26 , and its gene expression level was induced 10-100 fold on day 3 after pancreatectomy 27 . These data suggest that there is a strong link between the Reg gene family and the rejuvenation of pancreatic islets.
Transcription factors in β-cell proliferation
Certain transcription factors (Sox9, Hnf-6, Ngn-3 and NeuroD1) are also found to be involved in the proliferation of β-cells. SOX9 is the first specific marker and maintenance factor of multi-potential progenitors during pancreatic organogenesis. In the embryonic pancreas, SOX9 stimulates proliferation and prevents apoptosis of pluripotent progenitor cells. It controls pancreatic progenitor cell maintenance by modulating Notch signal transduction. The phenotypic alterations in the Sox9-deficient pancreas show a striking resemblance to the pancreatic defects associated with mutations in components of the Notch signalling pathway, thus establishing a possible link between Sox9 and the Notch signal transduction pathway for stem cell maintenance 28 . The hepatocyte nuclear factor 6 (Hnf-6), a homeodomain-containing transcription factor, is an important regulator of endocrine development. HNF6 is expressed in early pancreatogenesis in all endodermally derived cells, but is not detected in differentiated endocrine cells at late gestation 29 . Hnf-6 null mouse embryos showed impaired endocrine differentiation and perturbed duct morphogenesis during embryogenesis 30 . In addition to defects in endocrine development, Hnf-6 null embryos showed defects in duct development 31 . Loss of Hnf-6 from Ngn-3 expressing cells did not affect β-cell function or glucose homeostasis, suggesting that Hnf-6 is dispensable for later events of endocrine differentiation. These data confirm that HNF6 has both early and late functions in the developing pancreas and is essential for maintenance of Ngn-3 expression and proper pancreatic duct morphology 32 . NeuroD1, a downstream target of Ngn-3, carries on the endocrine differentiation programme initiated by Ngn3 and participates in the maintenance of the differentiated phenotype of the mature islet cells 33 .
During pancreatic endocrine development, Ngn-3 acts early to determine endocrine cell fate, while NeuroD1 directs endocrine cell differentiation 34 . At an early stage of life, mice lacking a functional NeuroD1 (also called BETA2) gene exhibit a striking reduction in the number of insulin-producing β-cells and fail to develop mature islets, with marked hyperglycaemia. Attempts to rescue the diabetic phenotype by administration of insulin were unsuccessful, suggesting that the mutant animals were unable to respond to insulin, had become insulin resistant, or perhaps contained additional defects 35 . Thus BETA2 is required for the expansion of the pancreatic β-cell population, as well as of other islet cell types involved in the development of endocrine cells into the islets of Langerhans 34 .
Netrins are laminin-like diffusible chemotactic proteins involved in pancreatic morphogenesis and play a role in the regulation of duct-cell and foetal islet cell migration. In the adult rat pancreas, Netrin-1 mRNA was practically undetectable. After duct ligation, its expression was very low in the head part of the pancreas, whereas it was strongly upregulated in the tail part on the 3rd, 5th and 7th days post-ligation, with maximum expression on day 5 36 . Netrin-1 mRNA was found to be expressed by islet cells and exocrine cells with ductal characteristics. These observations suggest that Netrin-1 plays a role in pancreatic morphogenesis, both prenatally and in the regenerating adult rat pancreas.
Transdifferentiation of pancreas
Islet neogenesis specifically refers to an increase in β-cell mass via transdifferentiation of adult pancreatic stem cells, putatively found in the ductal epithelium or acinar tissue. Transdifferentiation involves the conversion of alpha or delta cells of the pancreas into insulin producing β-cells. Various genes/proteins contribute to this process, including INGAP, Gastrin, MafA, Foxa2, Nkx2.2, Nkx6.1 and Pax4 (Table II).
INGAP (islet neogenesis associated protein) is a member of the C-lectin protein family that serves as the initiator of a cascade of events culminating in islet neogenesis, and can reverse diabetes in streptozotocin-induced diabetic C57BL/6J mice 37 . These studies were further confirmed in beagle dogs 38 . There was also a significant increase in insulin gene expression in the INGAP-treated animals. INGAP is also found in the human pancreas during pathological states involving islet neogenesis, which further suggests that INGAP is of primary importance in the process of islet neogenesis 39 . Gastrin, a classical gut hormone secreted by G cells in the stomach lining, is found to stimulate pancreatic β-cell neogenesis. Intravenous infusion of gastrin into the ligated duct cells resulted in a doubling of the β-cell mass in rats 40 , due to high expression of gastrin/cholecystokinin (CCK) B receptors in duct-ligated cells 41 . These observations were confirmed using EGF plus gastrin combinatorial therapy, which showed an increase in insulin-positive cells in human islets 42 . In another study using GLP-1 and gastrin in NOD mice, there was a significant reduction in blood glucose due to an increase in pancreatic β-cell mass and insulin content 43 . MafA, a member of the Maf subfamily of leucine zippers, is capable of strongly activating the insulin promoter. Maf factors play important roles in the cellular differentiation of exocrine cells to β-cells 44 . MafA is restricted to β-cells and is known to be important in the embryonic development of the pancreas 45 . It has been observed that MafA expression is decreased during the diabetic condition 46 .
Transcription factors in pancreatic neogenesis
There are several transcription factors involved both in neogenesis and replication. However, it is not clear whether these work alone or in combination with other transcription factors in a coordinated manner. The list of the transcription factors and their roles in transdifferentiation is summarized in Table II:

Gastrin: Induces islet β-cell neogenesis from pancreatic exocrine duct cells. 40
Nkx6.1: Maintains and expands the population of β-cell precursors as these progress from precursors to differentiated β-cells.
Pax4: Expressed in endocrine cell progenitors and directs formation of beta and delta cells.

Pancreatic duodenal homeobox-1 (Pdx-1), a homeobox transcription factor, besides being involved as a regulator of pancreatic development (the differentiation and gene expression in the β-cell) 47 , also turns out to be a major player in the maintenance of an adequate pool of healthy β-cells in adults 48,49 . It maintains the homeostasis between β-cell neogenesis and apoptosis. In mice with a 50 per cent reduction in Pdx1, the isolated islets showed more susceptibility to apoptosis at basal glucose concentrations, along with an impaired ability to maintain β-cell mass with age. Its expression is shown to be down-regulated during hyperglycaemic conditions 50 . The survival functions of Pdx1 may be mediated by insulin/IGF signalling acting through the forkhead transcription factor Foxo1 (forkhead/winged helix transcription factor). Foxa2 (formerly known as Hnf-3) is a key regulator of foregut development that plays an essential role in the cell type-specific transcription of the Pdx-1 gene in the pancreas 51 . On deletion of Foxa2 in mice, there is a significant downregulation of Pdx-1 mRNA 52 . This shows that Foxa2 is an essential upstream factor that regulates Pdx-1 mRNA levels in β-cells.
Nkx2.2 is another essential pancreatic transcription factor that affects the expression of ghrelin during pancreatic development 53 . Nkx2.2-null mice lose all β-cells and the majority of the α-cells, and the islet is predominantly populated by ghrelin-expressing cells. The discovery that ghrelin cells often replace the other endocrine populations in the pancreas 54 suggests a lineage relationship between the ghrelin-producing epsilon cells and the other hormone-producing populations 55 . In another study, disruption of the Nkx2.2 gene has been shown to lead to the accumulation of incompletely differentiated β-cells that express some β-cell markers, but not insulin 56 . This illustrates the role of Nkx2.2 in pancreatic endocrine cell differentiation. The phenotypic effects of Nkx2.2 mutant mice may in part result from the loss of another homeodomain transcription factor, Nkx6.1. In the pancreas, Nkx6.1 also follows an expression pattern similar to that of Nkx2.2, but restricted to the β-cells alone 57 . Deletion of the Nkx6.1 gene in mice caused a marked reduction of β-cells. In the same study, the double knockout animal model (Nkx2.2/Nkx6.1) showed the same phenotype as the Nkx2.2 single mutant 58 . This shows that Nkx6.1 functions downstream of Nkx2.2 in pancreatic development. The paired homeobox transcription factor Pax4 appears to be a strong candidate for specifying β-cell lineages. Mice deficient in Pax4 fail to develop beta and delta cells within the pancreas, suggesting that Pax4 expression commits selected endocrine precursors towards the beta and delta cell lineage 59 . The Pax4 genes are expressed during embryonic pancreatic development, but later on these are restricted to β-cells alone. Inactivation of Pax4 in mice results in the improper maturation of alpha- and β-cells 60 . This indicates the role of PAX4 in the maintenance of progenitors and in the maturation of β-cells.
Inhibition of β-cell apoptosis
Under normal circumstances, apoptosis is highly regulated to maintain normal physiological function of the cells. In diabetes, during excess stress, the pancreatic cells not only undergo apoptosis but also become necrotic and are unable to secrete insulin. A study conducted by Butler et al 61 indicated that increased apoptosis rather than decreased neogenesis/proliferation might be the main mechanism leading to reduced β-cell mass in T2DM. Thus, a decrease in the rate of apoptosis may itself increase the β-cell mass via proliferation. Several genes/proteins are involved in pancreatic apoptosis and their functions are summarized in Table III:

Pdx-1: Possesses anti-apoptotic activity that helps facilitate the maintenance of β-cell mass.
Granzymes: Serine proteases that activate Bid (a pro-apoptotic factor) essential for death-receptor induced apoptosis of islets.
Thioredoxin-interacting protein (TXNIP): A pro-apoptotic factor that plays an essential role in glucose toxicity-induced β-cell apoptosis. 84

Perforin (pore forming protein) is a cytolytic protein, initiating apoptosis by inducing minimal cell membrane damage while effectively releasing
granzymes from the endosomal compartment into the cytosol 62 . Of the granzyme family, granzymes A and B are the most common in human and mouse 63 . Granzyme A induces single-strand DNA breaks, while granzyme B cleaves specific substrates including caspases and the pro-apoptotic molecule Bid 64 . Activated Bid is targeted to the mitochondria, where it sequesters anti-apoptotic members of the Bcl-2 family, allowing the oligomerization of Bax and/or Bak, which mediates loss of mitochondrial outer membrane potential, release of cytochrome C and irreversible apoptosis 65 . During hyperglycaemic conditions its expression is elevated in β-cells, thus increasing the rate of apoptosis 51 . Deficiency in Bid prevents β-cells from undergoing the mitochondrial apoptotic pathway 66,67 . Caspase-3 has been extensively studied in various tissues due to its role as the principal executioner of apoptosis 68 . As such, Caspase-3 is an attractive target to inhibit apoptosis in diseased conditions including diabetes 69 . Caspase-3 null (Casp3−/−) mice were found to be protected from developing diabetes in a multiple-low-dose streptozotocin autoimmune diabetic model 70 . This illustrates the importance of Caspase-3 in β-cell death, and its activity is found to be increased during diabetic conditions. An attractive regulator of β-cell replication and survival after birth is Survivin. This protein blocks the functions of Caspases in the mitochondria-dependent cell death pathway, protecting cells from apoptosis 71 . Deletion of Survivin within the mouse endocrine pancreas results in diabetes manifested by hyperglycaemia and polyuria. Exogenous expression of Survivin in a streptozotocin-induced diabetic model protects the β-cells from apoptosis 72 . Thus, it may play a role in the replication and/or survival of matured β-cells 73 . Surprisingly, it is modestly upregulated in diabetic patients 74 , which may help to assuage diabetes.
Besides survivin, there are other proteins present in the pancreas which prevent apoptosis. BCL-xL, an anti-apoptotic protein coded by the "survival gene", is involved in the inhibition of apoptosis. Marked overexpression of Bcl-xL resulted in a severe defect in insulin secretion and hyperglycaemia in transgenic mice 75 . Under conditions of stress, β-cells require BCL-xL to maintain their survival in vivo 48 . Myc is a potent inducer of both β-cell proliferation and apoptosis in vitro 76 . Myc sensitizes cells to a variety of apoptotic triggers rather than directly inducing apoptosis by itself 77 . Sustained Myc activation leads to initial hyperplasia and increased apoptosis later on, suggesting that apoptosis ultimately predominates over proliferation 78 . In chronic hyperglycaemia, an increase in the expression of Myc was observed in the pancreas 79 . Thioredoxin-interacting protein (TXNIP), a regulatory protein, is involved in the inhibition of thioredoxin and thereby modulates the cellular redox state and promotes oxidative stress 80 . Its overexpression in β-cells is found to induce apoptosis 81 , and the process involves the activation of the intrinsic mitochondrial pathway, while endoplasmic reticulum (ER)-mediated cell death remains unaffected. It has been demonstrated that its mRNA expression levels are elevated during diabetic conditions 82 . In a recent study, huntingtin-interacting protein was found to possess anti-apoptotic properties in β-cells. It also plays an important role in glucose-stimulated insulin secretion 83 . Bace2 (beta site amyloid precursor protein cleaving enzyme 2), a proteolytic enzyme, acts as a negative regulator of β-cell mass via inhibiting Tmem27 expression. Overexpression of Tmem27 or ablation of Bace2 leads to an increased β-cell mass by inhibiting apoptosis 84 .
Future perspective
Several lifestyle diseases such as T2DM and obesity are increasing significantly. It has been predicted that in the near future diabetes will overtake all the current infectious and non-infectious diseases. In spite of rapid progress in our understanding of the pathophysiology of diabetes, challenges remain high due to the increased complexity of the disease contributed by genetic and environmental factors. In future, there needs to be more emphasis on the prognosis and better treatment of diabetes-associated diseases, and on the discovery of reliable biomarker(s) for its early detection. Development of new technologies, i.e. rejuvenation of the pancreas by small molecules or gene-targeted drug delivery, pancreatic transplantation, stem cell therapy and creating an artificial pancreas, etc., will be the mainstay of diabetes research. Thus, it becomes essential to have a better understanding of the various pathways at the molecular level that are involved in pancreatic islet cell rejuvenation.
Three new genera, two new species and one new combination of family Hystrignathidae (Nematoda: Thelastomatoidea) from Ceracupes fronticornis (Westwood) (Insecta: Passalidae) in China
Two new genera and two new species of the family Hystrignathidae were collected from Ceracupes fronticornis (Westwood) from Yunnan Province, China. Pseudoxyo yunnanensis gen. et sp. nov. differs from the related genera by having the cervical region armed with alternating rows of spines, with 22 spines in the first row, and by lacking the first cephalic annule. Sinospinata chitwoodi gen. et sp. nov. can be easily distinguished from the related genera by having the cervical region of females armed with irregularly arranged spines, with two or three spines clustered together at their roots in some cases. Meanwhile, Huntinema gen. nov. is proposed to replace Huntia Zhang, Yin, Carreno & Zhang, 2021 because that name was preoccupied by Huntia Gray & Thompson, 2001, erected for two new species of spiders. In addition, 18S and 28S rDNA partial sequences of the two new species were obtained.
Introduction
The members of Hystrignathidae Travassos, 1920 (Nematoda: Thelastomatoidea) are parasitic only in passalid beetles. To date, 36 genera comprising more than 100 species have been described (Zhang et al. 2021, 2022). The species are mainly distributed in North and South America, Africa, and Australia (Adamson & Van Waerebeke 1992; Morffe & García 2011, 2013a; Garduño-Montes de Oca & Oceguera-Figueroa 2020). However, recent studies on thelastomatoid nematodes from two Chinese passalid beetles uncovered 4 new genera and 5 new species of hystrignathid nematodes (Zhang et al. 2021, 2022), indicating that a great richness of nematodes most likely remains to be discovered in passalid beetles in China. In the present study, nematode specimens were collected from Ceracupes fronticornis (Westwood) from Yunnan Province, China. Two new genera and two new species of Hystrignathidae were confirmed and are described here. In addition, Huntinema is proposed to replace Huntia Zhang, Yin, Carreno & Zhang, 2021 because the name Huntia was preoccupied. Moreover, the 18S and 28S rDNA partial sequences for the new species were also generated.
Light and scanning electron microscopy
Specimens of the passalid beetle Ceracupes fronticornis (Westwood) were collected from Yunnan Province, China, dissected, and examined for nematodes. The collected nematode specimens were killed with hot water (60-70°C) and then fixed in 80% ethanol. For light microscopical examination, preserved nematodes were placed in a 5% solution of glycerin in 95% ethanol. These were left uncovered for 48 h to allow the ethanol to evaporate, thereby leaving the specimens in 100% glycerin; this was done to limit any damage to the worms caused by rapid transfer to pure glycerin. Measurements were taken with the aid of a calibrated eyepiece micrometer. De Man's ratios a, b, c, V% and V'% were calculated. All measurements are given in micrometers as range values followed by mean values in parentheses. Drawings were made with the aid of a Nikon microscope drawing attachment. For scanning electron microscopy studies, specimens were fixed in 7.5% glutaraldehyde, post-fixed in 1% OsO4, dehydrated through ethanol and acetone, and then subjected to critical point drying. The specimens were coated with gold and examined with an S-4800 field emission scanning electron microscope at an accelerating voltage of 15 kV. The specimens have been deposited in the College of Life Sciences, Hebei Normal University (HBNU), Hebei Province, China.
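The De Man ratios mentioned above are simple quotients of the body measurements. A minimal sketch of their computation follows; the formulas are the standard nematological definitions, but the numeric example values are hypothetical, chosen only for illustration and not taken from the type series:

```python
def de_man_ratios(length, width, oesophagus, tail, vulva, anus):
    """De Man's ratios from body measurements (all in micrometers).

    a   = body length / greatest body width
    b   = body length / oesophagus length
    c   = body length / tail length
    V%  = distance from anterior end to vulva, as % of body length
    V'% = the same distance, as % of the anterior-end-to-anus distance
    """
    return {
        "a": length / width,
        "b": length / oesophagus,
        "c": length / tail,
        "V%": 100.0 * vulva / length,
        "V'%": 100.0 * vulva / anus,
    }

# Hypothetical measurements (µm), not real data from the specimens:
r = de_man_ratios(length=1500, width=120, oesophagus=260, tail=200,
                  vulva=760, anus=1300)
print({k: round(v, 1) for k, v in r.items()})
# a = 12.5, b ≈ 5.8, c = 7.5 — values of this magnitude fall within
# the ranges reported below for the new species
```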
Molecular Procedures
Randomly selected samples were used for further molecular analysis. Genomic DNA from each sample was extracted using a Column Genomic DNA Isolation Kit (Shanghai Sangon, China) according to the manufacturer's instructions. The 18S sequence was amplified by PCR using the forward primer 18SF (5′-CCCGATTGATTCTGTCGGC-3′) and the reverse primer 18SR (5′-TGATCCTTCTGCAGGTTCACCTAC-3′) (Floyd et al. 2005). The 28S sequence was amplified by PCR using the forward primer D2A (5′-ACAAGTACCGTGAGGGAAAGTTG-3′) and the reverse primer D3B (5′-TCGAAGGAACCAGCTACTA-3′) (Morffe et al. 2019). The PCR reactions for both 18S rDNA and 28S rDNA were performed in a total volume of 25 µL, containing 2 µL of template, 0.5 µL each of forward and reverse primers, and 12.5 µL of 2× Taq MasterMix (Beijing Bio-Lab, China). The 28S rDNA PCR cycling parameters were as follows: an initial denaturation at 94°C for 5 min, followed by 35 cycles of 94°C for 30 s, 56°C for 30 s, and 72°C for 70 s, followed by a final extension step at 72°C for 7 min. The 18S rDNA PCR cycling parameters were as follows: an initial denaturation at 94°C for 5 min, followed by 35 cycles of 94°C for 30 s, 58°C for 30 s, and 72°C for 70 s, followed by a final extension step at 72°C for 7 min. PCR products were checked on GoldView-stained 1% agarose gels. Samples were sent to Shanghai Sangon, China for sequencing. Sequencing for each sample was carried out for both strands. Sequences were aligned using ClustalX and adjusted manually. The 18S rDNA and 28S rDNA sequences obtained were compared (using the BLASTn algorithm) with those available in the National Center for Biotechnology Information (NCBI) database.

Type material: Holotype female (HBNU-I-2021023); paratypes: 9 females (HBNU-I-2021024-2021032). Prevalence: 6.1% (7 infected out of 115 examined). Intensity: 2-10 (mean 5) specimens. Site in host: Hindgut.
Representative DNA sequences: One partial 28S and one partial 18S rDNA sequence of the new species are deposited in the GenBank database under the accession numbers ON751930 and ON751935, respectively. Etymology: The new species is named for its occurrence in Yunnan Province, China.
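The two thermocycling programs given in the Molecular Procedures can be transcribed into a small data structure, e.g. for record-keeping or for estimating instrument time. A sketch, assuming only the hold times stated above (ramp rates between steps are ignored, and the variable names are our own):

```python
# Thermocycler programs transcribed from the text: (temperature °C, hold s).
PCR_28S = {
    "initial_denaturation": (94, 300),              # 94°C, 5 min
    "cycles": 35,
    "cycle_steps": [(94, 30), (56, 30), (72, 70)],  # denature/anneal/extend
    "final_extension": (72, 420),                   # 72°C, 7 min
}
PCR_18S = {
    "initial_denaturation": (94, 300),
    "cycles": 35,
    "cycle_steps": [(94, 30), (58, 30), (72, 70)],  # annealing at 58°C
    "final_extension": (72, 420),
}

def hold_time_seconds(program):
    """Total programmed hold time, ignoring ramp times between steps."""
    per_cycle = sum(seconds for _, seconds in program["cycle_steps"])
    return (program["initial_denaturation"][1]
            + program["cycles"] * per_cycle
            + program["final_extension"][1])

print(hold_time_seconds(PCR_28S))  # 5270 s (≈ 88 min) for either program
```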
Systematics
Description. Female: Body relatively stout. Cervical cuticle bearing alternating rows of spines. Spines originating 20 µm behind head, ending at level of anus (Fig. 2A, E). First row with 22 spines, about 8 µm long; second row of spines longer than the first, about 13 µm long. Spines gradually smaller after nerve ring (Fig. 1A). Oral opening rounded, surrounded by a cuticular ring (Fig. 2C). Head bearing eight papillae arranged in 4 pairs and a pair of amphids (Fig. 2C). Length of stoma about 5 times that of head (Fig. 1A). Oesophagus consisting of a muscular, clavate corpus, short isthmus and basal bulb. Bulb rounded, valve-plate well-developed. Nerve ring encircling corpus in middle of its length (Fig. 1A). Excretory pore located posterior to basal bulb (Fig. 1D). Intestine simple, its anterior region slightly dilated. Reproductive system amphidelphic. Vulva located near mid-body (Fig. 1D). Vagina slightly extending anteriorly, connecting with two opposite uteri. Anterior ovary reflexed at excretory pore; posterior ovary reflexed at about two body widths before anus (Fig. 1C, D). Eggs ovoid, smooth-shelled (Fig. 1E). Tail conical, attenuated, sharply pointed. Male not observed.
Measurements. Female (n = 10): a = 9.8-15.6 (12.8); b = 4.8-7.0 (5.9); c = 6.2-8.0 (7.5); V opposite rows of spines, by having the first row with 22 spines instead of 16 spines, and by lacking the first cephalic annule. Pseudoxyo gen. nov. is different from Urbanonema and Xyo by having the first row with 22 spines instead of 32 spines, and by lacking the first cephalic annule. In addition, Urbanonema differs from Pseudoxyo gen. nov. by having the stoma with a dilated anterior end.
The new genus is very similar to Parahystrignathus by having females with the cervical region armed with alternating rows of pointed spines, clavate procorpus, and similar cephalic structure, however, it can be distinguished from the latter by having the first row with 22 spines instead of 16 spines.
Sinospinata gen. nov. Diagnosis: Female body relatively stout. Cervical cuticle bearing irregularly arranged spines, spines originating just behind first cephalic annule, extending to level of excretory pore. Two or three spines clustered together at their roots in some cases. First row of spines with 44 elements. Lateral alae absent. Oral opening rounded, surrounded by a cuticular ring.
Head bearing 8 papillae and a pair of amphids. First cephalic annule larger than head. Oesophagus consisting of a muscular, clavate corpus, short isthmus and basal bulb. Reproductive system amphidelphic. Vulva located near mid-body. Eggs ovoid, smooth-shelled. Tail conical, attenuated, sharply pointed. Male unknown. Description. Female body relatively stout. Cervical cuticle bearing irregularly arranged spines, spines originating just behind first cephalic annule, extending to level of excretory pore (Fig. 3A). Two or three spines clustered together at their roots in some cases (Figs. 3C, F; 4D, E). First row of spines with 44 elements. A few small spines sparsely distributed anterior to first row (Figs. 3C, 4A). After fifth row, number of spines gradually decreasing. Lateral alae absent. Oral opening rounded, surrounded by a cuticular ring. Head bearing 8 papillae and a pair of amphids (Fig. 4B, C). First cephalic annule cone-like, truncated, larger than head. Length of stoma about twice that of first cephalic annule. Anterior end of stoma dilated and spherical (Fig. 3G). Oesophagus consisting of a muscular, clavate procorpus, short isthmus and basal bulb (Fig. 3D). Bulb rounded, valve-plate well-developed. Nerve ring encircling corpus at 40% of its length. Excretory pore located just posterior to basal bulb. Intestine simple, its anterior region slightly dilated. Reproductive system amphidelphic.
Vulva located near mid-body. Vagina slightly extending anteriorly, connecting with two opposite uteri. Anterior ovary reflexed at excretory pore; posterior ovary reflexed forward at mid-region between vulva and anus. Eggs ovoid, smooth-shelled. Tail conical, attenuated, sharply pointed.

Discussion: Sinospinata gen. nov. is similar to Carlosia, Hystrignathus, Parahystrignathus, Pseudoxyo gen. nov., Urbanonema and Xyo in having females with the cervical region armed with pointed spines, a clavate procorpus and a didelphic reproductive system. However, it can easily be distinguished from these related genera by having the cervical region of females armed with irregularly arranged spines, with two or three spines clustered together at their roots in some cases.
In addition, Carlosia differs from the new genus in having only two longitudinal rows of spines in the cervical region. Sinospinata gen. nov. differs from Hystrignathus, Parahystrignathus, Pseudoxyo gen. nov. and Xyo by having the stoma with a dilated vs. a narrow anterior end.
Sinospinata gen. nov. resembles Urbanonema in that both genera share a dilated anterior end of the stoma. However, the new genus can be separated from Urbanonema by having the cervical cuticle armed with irregular vs. alternating rows of spines, and by the first row having 44 spines instead of 32.
Clinical Features of Varicella-Zoster Virus Infection
Varicella-zoster virus (VZV) is a pathogenic human herpes virus that causes varicella (chickenpox) as a primary infection, following which it becomes latent in peripheral ganglia. Decades later, the virus may reactivate either spontaneously or after a number of triggering factors to cause herpes zoster (shingles). Varicella and its complications are more severe in the immunosuppressed. The most frequent and important complication of VZV reactivation is postherpetic neuralgia, the cause of which is unknown and for which treatment is usually ineffective. Reactivation of VZV may also cause a wide variety of neurological syndromes, the most significant of which is a vasculitis, which is treated with corticosteroids and the antiviral drug acyclovir. Other VZV reactivation complications include an encephalitis, segmental motor weakness and myelopathy, cranial neuropathies, Guillain–Barré syndrome, enteric features, and zoster sine herpete, in which the viral reactivation occurs in the absence of the characteristic dermatomally distributed vesicular rash of herpes zoster. There has also been a recent association of VZV with giant cell arteritis and this interesting finding needs further corroboration. Vaccination is now available for the prevention of both varicella in children and herpes zoster in older individuals.
Introduction
Varicella-zoster virus (VZV) is a pathogenic human alpha-herpesvirus that causes chickenpox (varicella) as a primary infection, usually in children in locales where vaccination is not practiced [1]. Following the primary infection, this neurotropic virus becomes latent, primarily in neurons in peripheral ganglia throughout the entire neuraxis, including dorsal root ganglia (DRG), cranial nerve ganglia such as the trigeminal ganglia (TG), and autonomic ganglia, including those in the enteric nervous system [1-3]. Up to decades later, latent VZV may reactivate, either spontaneously or following one or more of a variety of triggering factors, to cause herpes zoster (shingles), which usually appears as a painful or pruritic cutaneous vesicular eruption in a characteristic dermatomal distribution [1,2]. This viral reactivation becomes more frequent with increasing age of the human host because of diminished cell-mediated immunity to the virus in such individuals [4,5]. Other specific triggers for viral reactivation include immunosuppression from disease or drugs, trauma, X-ray irradiation, infection, and malignancy [1]. While the main and most important complication of herpes zoster is postherpetic neuralgia (PHN), it has been increasingly recognised over the last decade that VZV reactivation causes a variety of acute, subacute, and chronic neurological syndromes, so its clinical manifestations are protean [3].
VZV is a double-stranded DNA virus with a genome of just under 125,000 base pairs containing 68 unique open reading frames (ORFs) [6]. The mechanisms of VZV latency are slowly being unravelled, but several issues remain to be clarified. It is known that during ganglionic latency, VZV DNA is located predominantly, if not exclusively, in neurons [7], in which it is present in a nonintegrated form, probably as endless episomes of unit or concatemeric length [8,9]. It has been known for some time that viral transcription during latency is highly restricted, with transcripts for VZV gene 63 being the most commonly detected [10-12], and previous work using different techniques has also reported transcription of VZV genes 21, 29, 62, and 66 [10-13]. However, a problem with many previous reports is that the ganglia obtained at autopsy were studied only 12-48 h after death, by which time the process of viral reactivation may well have already started. Indeed, when human ganglia were analysed less than 9 h after death, no transcripts for VZV were detected, though VZV ORF63 transcript levels in human TG increased with longer postmortem intervals [14]. This study suggested that expression of the other VZV genes previously detected was probably a reflection of viral reactivation, a view supported by the detection by multiplex polymerase chain reaction (PCR) of several VZV ORFs, including ones other than those corresponding to immediate-early or early transcripts [15]. On the other hand, studies of human enteric ganglia removed during gastrointestinal surgery from children immune to varicella and placed immediately in "RNAlater" solution revealed transcripts for ORFs 63, 4, and 66 [16]. One possibility is that when analysing human ganglia for VZV latency, both true latent transcripts and those indicating a degree of low-level viral reactivation are being detected unless the ganglia are studied less than 9 h postmortem.
Very recently, a unique spliced latency-associated VZV transcript was detected in human TG neurons which maps antisense to the viral transactivator gene 61 [17]. Since the latter ganglia studied had been obtained at about 6 h after death, it is clear that this could not have been detected due to viral reactivation. Given the inconsistent results in various laboratories, the molecular status of VZV during latency needs further study.
In this review, we consider the main clinical manifestations of VZV reactivation, the most common of which is generally recognised as being herpes zoster, which may be followed by PHN. Further, a wide variety of neurological features may also be caused by VZV reactivation from latency and these are mentioned here. We also review the current evidence for the benefit of VZV vaccination in both children and adults.
Varicella and Its Complications
The primary infection with VZV is varicella, commonly known as chickenpox. Varicella is highly contagious; it is most commonly seen in children under the age of 10 years in countries where live attenuated varicella vaccine is not routinely administered [1]. The major feature of the illness is a vesicular pruritic rash that occurs mainly on the trunk, head, and face, with the extremities somewhat spared. Skin vesicles are full of infectious, well-formed virions, which are aerosolized and serve to transmit VZV to others who have not had the disease previously. The skin lesions commonly occur in crops and progress from papules to vesicles to crusts over a few days. There may be anywhere from a few to many hundreds of vesicles, with an average of about 500. More severe cases manifest more severe rashes and take longer to heal. Concomitant symptoms include malaise, fever, and fatigue, and the illness usually lasts about a week. Complications include bacterial superinfection of the skin, encephalitis, and pneumonia. Adults and immunocompromised patients are more prone to severe infections than healthy children [1,18-20].
Individuals who have received live attenuated varicella vaccine may still develop varicella after an exposure to the virus (either a person with varicella or one with zoster). Patients with zoster can transmit varicella to others but with a lower attack rate of chickenpox than those with primary infection [20]. Vaccinees who nevertheless develop varicella usually have mild cases with fewer vesicles and complications. This situation is termed "breakthrough varicella" and is less contagious than primary varicella. Vaccinees who have had only one dose of vaccine are much more likely to transmit the virus to varicella susceptibles who are exposed than those who have had two doses of vaccine. When a person develops varicella despite receiving two doses of vaccine, the disease is often very minor and may be difficult to diagnose as varicella both clinically and in the laboratory [1,20].
The diagnosis of VZV infection is usually made clinically by the appearance of the skin rash. In confusing or unusual appearing cases, the diagnosis may be made by identifying VZV DNA in skin lesions by PCR. Culture of VZV from skin lesions may also be used, but it is more expensive, takes more time, is poorly available, and is less sensitive than PCR [20]. In patients with suspected meningitis or encephalitis and other complications due to VZV, the viral DNA may be demonstrable in cerebrospinal fluid and/or saliva [18,20].
The infants of women with varicella in the first 20 weeks of pregnancy are at about a 2% risk of developing the congenital varicella syndrome [1]. These infants often have a variety of severe abnormalities of their brain, eyes, extremities, and skin and most succumb in infancy or early childhood. They frequently experience recurrent VZV reactivation and may have multiple cases of clinical zoster [20]. Fortunately the syndrome is unusual in that only about 2% of women who develop varicella in pregnancy give birth to an infant with the congenital varicella syndrome [20].
Adults are more likely to experience severe varicella than children. Severe and even fatal varicella, moreover, often occurs in patients who are immunocompromised due to disease or medications such as corticosteroids or cancer chemotherapy [1,20]. These patients manifest extensive, often haemorrhagic rashes and may also develop complications such as pneumonia, hepatitis, and/or encephalitis all due to VZV. They are likely more prone to develop severe bacterial infections as well. Severe varicella may be prevented to some extent by administration of passive immunization with VariZig, a form of immunoglobulin containing high titers of antibodies to VZV; passive immunization should be administered as soon as possible after a recognised close exposure to VZV in a high-risk person who has never had varicella [20].
Routine treatment of varicella in otherwise healthy children is not uniformly recommended, although an oral form of the antiviral acyclovir is available. Otherwise healthy adults and immunocompromised patients who develop varicella should receive treatment. Severe varicella, whether established or developing, should be treated with intravenous acyclovir. For the best outcome, antivirals should be given as soon as possible to immunocompromised individuals and to anyone who seems to be developing severe varicella [20].
Patients who develop zoster should be treated as soon as possible with acyclovir, famciclovir, or valacyclovir, which are administered orally. If zoster is severe, especially in immunocompromised patients, intravenous acyclovir can be administered, especially at the start of treatment [20]. Although generally well tolerated, adverse effects of antiviral therapy for which clinical monitoring is necessary include gastrointestinal, neurological, and renal toxicity [1,21].
An important feature of varicella is the development of a viremia that just precedes the appearance of the rash. The virus is carried to the skin in T lymphocytes, where the rash develops [21]. Latency in DRG, cranial nerve ganglia (CNG), and autonomic ganglia may be established by two mechanisms: during the viremia, and as VZV travels directly from the skin to DRG and CNG by anterograde transport [1,21].
Herpes Zoster
Reactivation of VZV in neurons occurs with unknown frequency but is possibly very common [22]. Over 50 years ago, Hope-Simpson postulated that reactivation was frequent and could occur with or without symptoms [22]. The reality of subclinical reactivation was demonstrated when it was determined that one-third of astronauts developed transient reactivation of VZV during space travel [23]. The diagnosis was made by finding VZV DNA in saliva; the astronauts had no symptoms of zoster, and the viral DNA disappeared within a few weeks after return to Earth [23]. Importantly, it is very rare to isolate infectious VZV from the saliva of patients with active or subclinical VZV infections [24].
When symptomatic reactivation of VZV occurs, the condition is termed herpes zoster, often referred to as "zoster". Although it is now recognised that zoster may occur in the absence of rash, the classical presentation is appearance of a unilateral, dermatomal rash that is painful, pruritic, or both. The causes and mechanisms of reactivation remain unclear, but zoster is associated with a preceding decrease of cellular immunity (CMI) to VZV [4,5]. Vaccination against zoster is aimed at restoring CMI to VZV to prevent zoster from occurring [1].
Since the process of reactivation is not fully understood, the incubation period of zoster is unknown. Characteristically, zoster presents with a unilateral vesicular rash on the face, head, or trunk, although it can also occur on the extremities. The vesicles are full of infectious virions that can become airborne and infect nearby varicella susceptibles as chickenpox, although zoster is only about half as contagious as varicella. The zoster rash may be mild and heal quickly or it can be severe with extensive lesions that may last for weeks. The latter possibility is more likely to occur in elderly patients or immunocompromised individuals than others [1].
Postherpetic Neuralgia (PHN)
Patients who develop PHN are often those who have experienced zoster with many skin vesicles accompanied by severe pain. At some point after resolution of these symptoms, usually within about three months, the persistent pain of PHN begins in the area of the healed rash. The pain ranges from mild to extremely severe and may be very debilitating, especially in the elderly. It may last for a year or longer. Pain is often described as throbbing, burning, or shooting. A prominent symptom is allodynia, in which a mere touch of the skin even by light clothing causes intense pain. The mechanism by which PHN occurs is not entirely understood [1,20]. Two leading theories are that the excitability of ganglionic neurons is altered and/or that there is a form of persistent VZV infection (not latency) in involved ganglia [25]. Unblinded studies have described improvement in PHN with antivirals administered for months [26], but definitive information awaits the performance of a double-blind study of antiviral therapy vs. placebo. Vaccines aimed at preventing zoster (see below) are useful to prevent PHN from occurring. Although it is possible that early antiviral therapy in zoster may dampen the character of PHN that follows, there is currently general agreement that antivirals should not be used to treat established PHN [27]. There is no known cure for PHN, although gabapentin and some antidepressive medications have been tried with some success. Patients with severe PHN should be referred to a pain specialist.
Neurological Complications of VZV Reactivation
It is now recognised that VZV reactivation is associated with a wide variety of neurological complications. When there is a long delay between the VZV episode and the neurological condition, it may be difficult to prove a clear causal relationship, but this usually seems logical in the absence of an alternative explanation. The key complications are now outlined, but it should be pointed out that for many conditions, our knowledge is based on a relatively small number of reported cases, though for some, such as vasculopathy, the association is strong.
VZV Vasculopathy
Probably the most important neurological complication of VZV reactivation is a vasculopathy due to a productive viral infection of both large and small cerebral arteries, though its exact frequency is unknown [19]. However, since recent evidence has established that herpes zoster is a risk factor for stroke, and since zoster itself is frequent, occurring in about half of all individuals by the age of 85 years [19], VZV vasculopathy is probably not an uncommon complication. The clinical presentation is highly variable, including ischaemic and haemorrhagic stroke, cerebral aneurysm, temporal artery involvement (discussed below), arterial dissection, transient ischaemic attacks, cerebral venous thrombosis, and spinal and peripheral artery thrombosis [19,28,29]. Pathologically, an inflammatory infiltrate with T cells and macrophages is seen in the adventitia and intima of the affected arteries, and also in the media at a later stage [19]. Infected arteries usually contain multinucleated giant cells, Cowdry A inclusion bodies, herpes virions, and VZV DNA and antigens [30,31]. The clinical presentation of VZV vasculopathy varies considerably, but a typical case may present weeks or months after the zoster rash, or even without a rash (see below), with neurological features such as progressive cognitive impairment, seizures, and other focal signs [3,19]. Both MRI and CT may show evidence of ischaemia or haemorrhage, and angiography may suggest a vasculitis. The diagnosis may be established by the detection by PCR of VZV DNA in the cerebrospinal fluid (CSF) of the affected patient, though the presence of anti-VZV IgG antibody in the CSF can also establish the diagnosis and is more sensitive than PCR for DNA, which may sometimes be negative in VZV vasculopathy [19,28].
Treatment should be with a two-week course of intravenous acyclovir and a one-week course of oral corticosteroids, though in the case of immunocompromised patients and relapsing cases, the duration of corticosteroid and antiviral therapy should be longer than in the immunocompetent [28].
Giant Cell Arteritis and VZV
In view of the ability of VZV to cause a vasculopathy, it was reasonable to investigate whether VZV might also play a pathogenetic role in giant cell arteritis (GCA), in which there are inflammatory changes in the temporal arteries (TAs) and a serious risk of sudden blindness. Gilden and colleagues [30] examined TA sections from a total of 82 biopsied cases of GCA and detected VZV antigens in 74% of GCA-positive TAs, mainly in "skip" regions, but in only 8% of normal TAs and in 38% of skeletal muscle sections adjacent to the VZV antigen-positive areas. Subsequently, this group also analysed GCA-negative but clinically positive TAs and found that 64% of these, and also 22% of normal TAs, contained VZV antigens [31]. They concluded from these studies that, irrespective of whether or not they show characteristic histopathology, TAs from patients with clinically suspected GCA contain VZV antigens. If VZV is truly causing at least a proportion of GCA, then it follows that GCA patients with demonstrable VZV in their TAs should be treated with both corticosteroids and acyclovir. However, a recent study [32] using the same techniques detected VZV antigens in only 3/25 (12%) of TA sections from biopsy-proven GCA. It also found false-positive staining for VZV antigens in several TA biopsy sections. The problem of false-positive staining of human tissue sections with antibodies to VZV due to antibody cross-reactivity was also emphasised in another recent study [33], where caution was advised in interpreting such apparently positive staining. Besides the problems of possible nonspecific staining, the presence of VZV antigens in some normal TAs, and the varying detection rates in different studies, a critical issue is that of a cause-and-effect relationship between the viral detection and the production of the human disease [34].
Gilden [30] considered a causal relation likely and that there may be a subgroup of patients with GCA in which VZV is a critical determinant in whom corticosteroids alone in the absence of antiviral therapy may actually be deleterious since it may allow an ongoing untreated viral infection to persist. It seems likely that the only definitive way of proving a causal effect of VZV in GCA is to carry out a prospective clinical trial of corticosteroids alone vs. corticosteroids plus acyclovir in biopsy-proven GCA to determine whether the former has a substantial benefit. A recent study [35] which did not find a decreased incidence of GCA in individuals who had received VZV vaccination is certainly noteworthy but does not by itself disprove a relation between VZV reactivation and GCA. At present, the situation remains somewhat unclear and further studies need to be carried out to confirm or refute these remarkable findings of VZV in TA.
Segmental Weakness and Myelopathy
Segmental motor weakness occurring after an episode of herpes zoster is well recognised, though the interval between the rash and the weakness is highly variable, ranging from a day to several months [28]. While the rash and the weakness usually occur in the same region, there may be a topographical dissociation between the two in about 10% of cases [28]. The weakness may affect the upper or lower limbs, the diaphragm, the intercostal muscles, or the sphincters [28]. The structures involved in zoster-associated weakness may be the anterior or posterior spinal roots, the anterior or posterior spinal horns, or the brachial plexus, findings that may be confirmed by MRI and/or electrophysiological investigations [36]. The prognosis is generally thought to be quite favourable, with complete recovery occurring in about 55-75% of cases [28,36], though these figures may be somewhat optimistic in the authors' experience. When the diagnosis is certain, treatment should be with a 14-day course of intravenous acyclovir and a 5-7-day course of oral corticosteroids. Relatedly, muscle and sphincter weakness may also be caused by a zoster-associated myelitis in which the spinal cord is preferentially affected by the virus. The myelopathy thus produced may be acute or chronic, usually occurs one to two weeks after the rash, and typically presents with symmetrical bilateral leg weakness and sphincter dysfunction. The diagnosis is usually established from the prior herpes zoster, the clinical pattern, and investigations such as MRI of the relevant region of the spinal cord, which may show characteristic T2-hyperintense lesions and cord swelling, as well as a typical CSF pleocytosis and the demonstration by PCR of VZV DNA and/or of anti-VZV IgG antibody in the CSF [28]. Treatment is the same as for the zoster-associated segmental weakness described above.
VZV Encephalitis
A meningoencephalitis has also been described in association with herpes zoster, though it may precede, be simultaneous with, or follow the rash itself. This complication is comparatively rare and may be mild (and therefore under-reported), and in the opinion of Gilden, some cases of VZV encephalitis may actually be a vasculopathy [36]. However, in the authors' opinion, an encephalitis in the absence of a vasculopathy may rarely occur in association with zoster and may complicate around 0.25% of all zoster cases. The pathogenesis is not understood. The clinical picture is of a relatively mild encephalitis, of either acute or more gradual onset, or of an encephalomyelitis, with headache, fever, and neck stiffness if there is an associated meningitic element. The illness may be more serious in immunocompromised individuals such as those with AIDS. During the illness, there may also be motor weakness. There is a CSF pleocytosis, and the diagnosis may be established using PCR to detect VZV DNA in the CSF, which may also contain anti-VZV IgG antibody. The EEG (electroencephalogram) may be normal or else show nonspecific abnormalities. Treatment should be with a 14-day course of intravenous acyclovir and possibly also a one-week course of oral corticosteroids.
VZV Cranial Neuropathies
The cranial nerve that is most frequently described as affected [36] by herpes zoster is the seventh cranial nerve, also known as the facial nerve. When this occurs, it is called the Ramsay Hunt syndrome, which classically is the combination of otic zoster and ipsilateral facial paralysis as described by Hunt in 1907. The cause is thought to be herpes zoster affecting the geniculate ganglion. Clinically, the syndrome produces otalgia with tinnitus, deafness and vertigo, zoster vesicles in the external auditory meatus, and loss of taste in the anterior two-thirds of the tongue due to involvement of the chorda tympani branch of the facial nerve [36]. This syndrome may be associated with dysfunction of other cranial nerves as well as a localised brainstem encephalitis. Treatment is with a combination of oral acyclovir (or valacyclovir or famcyclovir) and corticosteroids.
When zoster affects the ophthalmic division of the trigeminal (fifth) cranial nerve, a number of local ocular complications such as keratitis, scleritis, iritis, and retinitis may follow, requiring specialist ophthalmic attention. Further, ophthalmoplegia may occur in a small proportion of cases of cephalic zoster. Most of the cranial nerves have been described at some stage as being affected by herpes zoster, either alone or together with others [36].
VZV and Guillain-Barré Syndrome (GBS)
Although GBS is a well-recognised complication of varicella [37], this neurological syndrome is thought only rarely to follow an episode of herpes zoster. The paucity of published cases, together with the fact that both GBS and herpes zoster are relatively common diseases, makes the attribution of a definite association between the two conditions somewhat problematic, as it might be fortuitous. Nevertheless, zoster does genuinely appear to be occasionally followed by GBS, with a clinical picture similar to that of the disease due to other triggering factors [37]. However, the prognosis in zoster-associated GBS appears to be poorer than in other cases, and a shorter latent period between the rash and the GBS has been reported to be associated with a worse outcome compared to cases occurring after a longer (>2 weeks) latent interval [37]. Treatment should be the same as is given in other cases of GBS. The pathogenesis is not understood, though an immune-mediated mechanism seems the most likely.
Zoster Sine Herpete
Zoster sine herpete (ZSH) is the term used when typical dermatomal pain due to VZV occurs in the absence of the characteristic rash. That VZV is indeed the cause of such symptoms was proved by Gilden and his colleagues by demonstrating the presence of VZV DNA by PCR in the CSF of two such patients in whom treatment with acyclovir improved their dermatomal pain [38]. The diagnosis of ZSH can be established by PCR detection of VZV DNA in the CSF or peripheral blood mononuclear cells or by the presence of anti-VZV IgG antibody in the CSF [3,39]. It is possible to make a diagnosis of ZSH by virtue of the presence of anti-VZV IgG antibody even when no VZV DNA can be detected. It has emerged in recent years that the spectrum of ZSH is much wider than was previously thought. Indeed, VZV vasculopathy often presents in the absence of a preceding rash [39]. While this spectrum of syndromes is increasing, it is important that the clinician suspects a diagnosis of ZSH in any patient with persistent radicular pain or who presents with apparently undiagnosed acute, subacute, or chronic cerebral or spinal cord features, especially in the absence of a rash and the presence of a CSF pleocytosis [39].
VZV and Enteric Complications
The development of an in vitro model of VZV infection with features of latency and reactivation in guinea pig enteric neurons prompted a search for latent VZV in human enteric neurons [40]. Studies of children undergoing routine gastrointestinal (GI) surgery led to the identification of latent GI VZV through the demonstration of expression of ORFs 63, 4, and 66 in enteric neurons (typical of latent infection) but of no transcripts from lytic genes such as ORF 68 [16]. The subsequent identification of VZV DNA in saliva from children and adults with zoster provided a means to detect reactivation of VZV in the GI tract in the absence of skin lesions [18]. Stomach and colonic ulcers caused by VZV were identified by VZV in saliva and by immunofluorescent assays on GI tissues [18,41]. Further studies on enteric zoster are being conducted; current results indicate that these conditions are not rare. Presumably, VZV reaches enteric ganglia during the viremia of varicella and establishes latency there. Latent vaccine virus (Oka strain) has also been demonstrated on occasion [18,41]. It is possible that asymptomatic reactivation of VZV in the GI tract, which is the largest immune organ in the body, plays some role in maintaining long-term immunity to the virus.
VZV Vaccinations
Development of a live attenuated vaccine to prevent varicella was accomplished by Takahashi in 1974 [42]. Initially, use of the vaccine was controversial. There was no doubt that it was important to prevent severe varicella, especially in immunocompromised patients; at that time, antiviral therapy was in its early days. Whether it was safe, however, to immunize children with a live virus that caused latent infection that could reactivate was problematic for many [20]. It did not take long, however, for it to be demonstrated that varicella vaccine could decrease morbidity and mortality and was safe to administer to children with leukemia in remission [43].
Given the success in immunocompromised children, it was then decided to conduct clinical trials in healthy children in the United States who were susceptible to varicella. These trials too were highly successful, and live attenuated varicella vaccine was eventually licensed in the United States for all healthy children, ultimately in a two-dose regimen [20]. Today, varicella vaccine is used all over the world, although many countries are not yet using a two-dose schedule. One dose offers about 85% protection, while two doses protect about 98% of vaccinees [44,45]. Widespread immunization of healthy infants and children has led to herd immunity in the United States, where varicella vaccine is no longer offered to immunocompromised patients. Varicella has now become rare in the United States; there has been a dramatic concomitant fall (87% decrease) in hospitalizations and deaths from varicella as well [44-46].
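The quoted one-dose (~85%) and two-dose (~98%) efficacies translate directly into expected breakthrough cases. A minimal sketch, assuming a hypothetical cohort size and attack rate (both invented numbers, used only to illustrate the arithmetic):

```python
def expected_breakthroughs(cohort, attack_rate, efficacy):
    """Expected breakthrough cases among vaccinees, assuming every
    unprotected child would be infected at the given attack rate."""
    return cohort * attack_rate * (1 - efficacy)

COHORT = 100_000       # hypothetical number of vaccinated children
ATTACK_RATE = 0.9      # hypothetical attack rate without vaccination

one_dose = expected_breakthroughs(COHORT, ATTACK_RATE, 0.85)
two_dose = expected_breakthroughs(COHORT, ATTACK_RATE, 0.98)
print(f"One-dose schedule: ~{one_dose:,.0f} expected cases")
print(f"Two-dose schedule: ~{two_dose:,.0f} expected cases")
print(f"Extra cases averted by the second dose: ~{one_dose - two_dose:,.0f}")
```

The second dose removes most of the residual risk because it shrinks the unprotected fraction of vaccinees from 15% to 2%.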
Only a few countries refuse to immunize children routinely; among them is England, where modeling studies have suggested, though far from proved, that widespread varicella immunization results in less circulation of wild-type VZV, leading to an increase in zoster in middle-aged persons. England, therefore, still has children who die of varicella every year. A controversy has grown up as to whether varicella vaccination increases the incidence of zoster due to less boosting of immunity. Most investigators reject this idea [47]. For one thing, while the incidence of zoster is increasing in the United States, this increase began in the 1950s, long before the varicella vaccine was developed. Zoster is also increasing in countries where the varicella vaccine is not being used, and the increase is probably multifactorial, reflecting increased identification of zoster, an aging population, and growing numbers of immunocompromised people, including those on biologicals to control autoimmune and other diseases [20,44].
The availability of live attenuated varicella vaccine led directly to the development of a live vaccine to prevent zoster. In order to boost cellular immunity in older individuals who had had varicella many years previously, it was necessary to use a formulation of vaccine 14 times as strong as varicella vaccine. This vaccine, known as Zostavax™ and produced by Merck and Co., provided protection against zoster and PHN in 50-60% of individuals over 60 years of age [48]. Unfortunately, this protection begins to wane, in some cases as early as the first year after immunization, and is essentially gone within eight years [49]. Boosters are not recommended. This vaccine is not guaranteed safe for immunocompromised persons, in whom it may cause serious VZV infections [50].
In order to try to develop a vaccine that would provide better protection for older individuals and be safe for vaccination of immunocompromised patients, a new vaccine, Shingrix™, was developed by GlaxoSmithKline. This is a "subunit" vaccine containing as its antigen the main glycoprotein of VZV, termed "glycoprotein E", along with an adjuvant, AS01B, that enhances innate and adaptive cellular immunity to VZV. The vaccine requires two doses, given two to six months apart. It provides about 97% protection to healthy persons as old as 70 years of age when immunized. It also provides protection against PHN, which is notoriously difficult to treat. It is currently being tested for safety and immunogenicity in immunocompromised patients [51,52]. The most challenging aspect of Shingrix™ is that it produces a high incidence of side effects in the first few days after immunization, including reactions at the injection site, fever, and malaise. Relatively serious adverse effects, however, are rare [51,52].
"year": 2018,
"sha1": "30e6bc234c922decf6365aaf0467900b323722d3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/10/11/609/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "30e6bc234c922decf6365aaf0467900b323722d3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Some Issues of the Organization of Elective Training in the Light of Modernization of the Russian Educational System
The relevance of the research is determined by the importance of the modernization of Russian education in high school, which provides specialized (profile) education and is designated as a means of differentiation and individualization of learning. The aim of the article is to study changes in the structure, content and organization of the educational process. Elective courses (electives) play an important role in the system of profile education. Unlike the optional courses currently existing at school, elective courses are obligatory. The main approaches used in this research are analysis and systematization. According to the concept of profile education at the senior level of general education approved by the Russian Ministry of Education, differentiation of learning content in high school is based on various combinations of three types of courses: basic, core, and elective. Each of these types of courses contributes to the objectives of profile education; however, one can identify the range of tasks that are a priority for each type. The main results of the study make it possible to take fuller account of the interests, aptitudes and abilities of students, creating conditions for their training in accordance with their professional interests and intentions with regard to continuing education. The materials of the study may be used by educational administrative staff when planning course schedules.
Introduction
Modern realities of Russian education require a search for ways to increase interest in learning a foreign language and to intensify the educational process. Elective courses are one of the most important engines of personalized training and, as a consequence, help in shaping learning profiles (The concept of profile training at the senior step of the general education). After all, every student is unique and has his or her own preferences.
The level of cognitive interests of senior students often goes beyond the traditional school subjects (Letter from the organization…). This explains the appearance of elective courses. It is at this age that a conscious understanding of the educational material comes, a positive attitude to knowledge of personal value to the student grows, horizons broaden, and the interests of the students take shape and develop (Ermakov, 2007). This age is characterized by depth of thought and imagination. Students seek to apply their skills on the basis of their interests, exercise choice, and determine their way of life and future profession (Ministry of Education, 2003). Pre-profile training represents a system of pedagogical, psychological, informational and organizational support for school students, promoting their self-determination upon completion of general education (Blagodarskaya, 2005).
Methodological Framework
The main methodological approaches in elective training are as follows. 1. The system approach. Essence: relatively independent components are treated as a set of interrelated elements of the pedagogical process: the purpose of education, its subjects (the teacher and the student), the educational content, and the methods, forms and means of teaching. The task of the teacher is to take into account the interconnection of these components. 2. The personal approach. Personality is treated as the aim, the subject, the result and the main criterion of efficiency of the pedagogical process; the individual is not reduced to his or her natural or social environment. The task of the teacher is the creation of conditions for self-development of the disposition and creative potential of the individual.
3. The activity approach. Linguistic activity is the basis, the means and the condition for the development of language competence. The task of the teacher is to select and organize the student's activity from the viewpoint of the subject of work and communication. This includes awareness, goal setting, planning of the activity, its organization, evaluation and introspection (reflection).
4. The polysubjective (dialogic) approach. The essence of a person is wider than his or her work. Personality is the product and the result of communicating with people and of the characteristic relations formed; i.e., it is not only the substantive result of the activity that is important but also the relational one. The fact of the "dialogic" content of the inner world of a person was hardly taken into account in the theory of teaching, though it has found its reflection in proverbs ("Tell me who your friends are…", "He who leads…"). The task of the educator is to watch over relationships, to promote humane treatment, and to establish a healthy psychological climate in the group. The dialogic approach, in unity with the personal and activity approaches, is the essence of the methodology of humanistic pedagogy.
5. The cultural approach. The justification of this approach is axiology, the doctrine of values and the value structure of the world. It is driven by the objective connection of a person with culture as a system of values.
Results
Elective courses are compulsory for students to attend. They are a part of profile education in upper school. Elective courses are realized at the expense of the school component of the curriculum and perform two functions: (1) some of them can "support" the study of basic core subjects at a given standard level; (2) others serve intra-profile specialization of training and the construction of individual educational trajectories.
The general structure of all courses in high school is as follows: 1. Basic general educational courses, which are the mandatory part of education; they are aimed at the completion of the general education of students.
2. Specialized courses, focused on the in-depth study of particular subjects and the preparation of graduates for subsequent vocational education.
3. Elective courses, aimed primarily at meeting the individual educational interests, needs and aptitudes of each student. It is on these that the creation of individual education programs is based, as a student chooses a course himself or herself, depending on his or her interests, abilities and life plans.
The organization of elective courses is the newest mechanism for updating and personalizing the learning process. With a well-developed system of elective courses, each student can get an education with a desired bias towards a particular area of expertise.
There are the following types of elective courses: 1. Courses that are "add-ons" to core courses, providing an increased level of study of a subject for the most able students.
2. Courses that provide interdisciplinary communication and give the opportunity to study academic subjects related to the profile level.
3. Elective courses which help students in core classes, where one of the subjects is studied at a basic level, to prepare for taking the exam on that subject at a higher level.
4. Elective courses focused on students' acquisition of educational outcomes needed for successful promotion in the labor market.
Elective courses must perform the following tasks:
- to increase the motivation of students;
- to acquaint them with the leading activities;
- to activate cognitive activity;
- to improve the communicative competence of students;
- to generate skills and ways of working to solve practical problems;
- to prepare students for the chosen profile class;
- to ensure the continuity of vocational guidance;
- to assist in realizing opportunities and ways to implement the chosen way of life;
- to provide a higher level of development of basic academic subjects;
- to meet educational interests and help solve vital problems;
- to provide students with skills for successful advancement in the labor market.
Motivation in choosing an elective course may be different, such as the desire:
- to prepare for the exam;
- to improve one's knowledge, increase understanding, and get deeper into the subjects chosen;
- to gain experience for future solutions to life problems;
- to define one's career, and others.
The teacher should: 1. Highlight the differences between an elective course and the basic course.
2. Identify the material and forms of work with students, thus helping them to find their way in the selection of a profile.
3. Decide what activities to use in working with students (work in pairs, in a group, individual assignments).
4. Identify the object of study and the share of students' independent work.
5. Develop evaluation criteria, tests and methods of analysis.
6. Prepare the logical conclusion of the course and various forms of reporting (final tests, interviews, creative tasks).
In drawing up the elective course program, the following structural parts should be included: 1. The theme, which should reflect the incentive nature of the study and be appreciated by the students; 2. An explanatory note identifying the designation, type of course, relevance, purpose, objectives, and forms and methods of training; 3. An educational-methodical plan; 4. The contents, specifying each topic; 5. Guidelines; 6. Evaluation criteria (cognitive and creative activity, diligence, the object of labor); 7. References; 8. Appendices (notes, scripts).
Requirements for the program of an elective course:
1) conformity to the concept of profile school; 2) practical orientation; 3) the logic of construction and supply of educational material; 4) the structure and content of the connection; 5) realistic investment of time and resources; 6) the use of active learning methods, which give students the opportunity to consciously and objectively choose to continue their education and careers; 7) novelty; 8) generalized content, allowing us to develop learning and subject skills.
Discussion
Elective courses, as the most differentiated and varied part of education, will require new solutions in their organization. According to Gubarev, organizing electives makes it possible to bring up a communicative-creative personality (Gubarev, 2006), while other courses may foster competitiveness (Mitina, 2003). A wide range of diverse electives can put an individual school in a difficult position, determined by the shortage of teaching staff and the lack of training and methodological support. In these cases, network forms of interaction between educational institutions acquire a special role. Network configurations provide for the association and cooperation of the educational potential of several educational institutions, including primary, secondary, tertiary, vocational and further education. The experience of a number of regions participating in the experiment providing specialized education shows that definite courses are created in advanced training institutes and teacher training colleges. Many of them are interesting and deserve support. In this regard, we can recommend that the regional and municipal education authorities create databases of elective courses and organize informational support and the exchange of experience in introducing elective courses.
Conclusion
Elective courses help students to achieve their goals and interests, and have an impact on the qualitative improvement of the level of language proficiency.The role of English as the language of international communication does not require advertising.
Certainly, knowledge, vocabulary and speech training should begin in high school.This is important not only for the pre-professional training of students, but also to prepare for their future independent life in society.
However, in accordance with the requirements of the federal standard, the English language is taught in high school mainly as a means of communication (General English) and as a means of learning (Academic English), which does not allow students' communication skills to be developed sufficiently or enable students to master the necessary specialized knowledge. The contradiction between the needs of students, the requirements of their future professions and the training content laid down in the federal standard accounts for the relevance of elective English courses.
Various elective courses not only improve general and language education and general and special knowledge and skills, but also encourage students to use the acquired knowledge for communication in specific situations, which is an effective incentive for verbal communication.
As a result of such a course, high school students: • master the vocabulary according to the topics and areas of communication; • learn to hold business telephone conversations in English; • learn to independently find solutions and ways out of situations that can arise in real life. Thus, we see the real practical value of elective courses for students. These courses enhance students' motivation, stimulate their cognitive activity, and, most importantly, make them more communicative.
"year": 2015,
"sha1": "4fc46f2893b529c8d4ab2aad25eb805aeb5827e7",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/jsd/article/download/50497/27127",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4fc46f2893b529c8d4ab2aad25eb805aeb5827e7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Questioning the source of identified non-foodborne pathogens from food-contact wooden surfaces used in Hong Kong's urban wet markets
In this study, a phylogenetic analysis was performed on pathogens previously identified on cutting boards in Hong Kong wet markets. Phylogenetic comparisons were made between phylotypes obtained in this study and environmental and clinical phylotypes to establish the possible origin of selected bacterial species isolated from wet market cutting board ecosystems. The results reveal a strong relationship between wet market bacterial assemblages and environmental and clinically relevant phylotypes. However, our poor knowledge of potential cross-contamination sources within these wet markets is further exacerbated by the failure to determine the exact or presumed origin of the identified pathogens. In this study, several clinically relevant bacterial pathogens, such as Klebsiella pneumoniae, Streptococcus suis and Streptococcus porcinus, were linked to cutting boards associated with pork; Campylobacter fetus, Staphylococcus aureus, Escherichia coli, and A. caviae to those associated with poultry; and Streptococcus varanii, A. caviae, Vibrio fluvialis, and Vibrio parahaemolyticus to those associated with seafood. Identifying non-foodborne clinically relevant pathogens on wet market cutting boards in this study confirms the need for safety approaches for wet market meat, including cold storage. The presented study justifies the need for future systematic epidemiological studies to determine the sources of the identified microbial pathogens. Such studies should bring about significant improvements in the management of hygienic practices in Hong Kong's wet markets and work towards a One Health goal by recognizing the importance of wet markets as areas interconnecting food processing with animal and clinical environments.
Introduction
Hong Kong's wet markets continue to be recognized as long-established zones facilitating access to fresh foods. Over the years, thanks to public health awareness, significant efforts have been made towards improving the safety and quality of processed fresh meats in these wet markets [1,2]. Nevertheless, despite the increase in food safety awareness, wet markets have repeatedly been identified as epicenters of potential public health hazards [3][4][5][6][7], primarily biological hazards.
Microbial examination of pork from local wet markets revealed the presence of Escherichia coli, molds, and Salmonella, indicating the potentially hazardous nature of the meat [1]. Reports have also suggested that pathogens such as Salmonella spp., Staphylococcus aureus, Vibrio parahaemolyticus, and Listeria monocytogenes are commonly associated with traditional Chinese processed meats known as Sui-mei and Lo mei [8,9]. Elsewhere, Laribacter hongkongensis has been linked with community-acquired gastroenteritis and travelers' diarrhea from minced freshwater fish meat [10]. Furthermore, skin injuries, such as cuts, during meat preparation have been shown to be potential entrance points for pathogens such as Streptococcus suis and Streptococcus iniae [11][12][13]. In recent years Hong Kong has witnessed noticeable increases in Vibrio parahaemolyticus food poisoning cases [14].
Wet markets are densely populated hubs characterized by a large influx of customers and regulated or unregulated meats, and the hygiene level of the wooden cutting boards used to process these meats remains poorly described. Previous reports from Hong Kong wet markets noted a significant breach in cleaning standards meant for wooden cutting boards, in which surface scraping was used in most studied cases as a traditional cleaning technique [5,15]. Further analyses revealed that these hygienic practices were incapable of guaranteeing proper surface hygiene. Clinically relevant species such as Klebsiella pneumoniae exhibiting potential resistance to an array of multiple antibiotics were isolated and identified among microbial communities found on wet market cutting boards [15].
It has been previously established that the improper hygienic maintenance of wooden cutting boards can lead to the development of biofilm niches within their cracked surface patterns [16,17]. Biofilm formation dynamics can be summarized by the initial reversible and irreversible attachment of planktonic cells when first interacting with the abiotic surface, followed by a consolidation stage where, under ideal growing conditions, the adhered cells form microcolonies. The establishment of macrocolonies characterizes the last stage of the biofilm formation dynamic, otherwise recognized as a mature biofilm [18]. Biofilm detachment can arise at different stages of the biofilm formation dynamic. In the case of the surface microcosm of cutting boards, it may lead to the release and transfer of bacterial cells onto the foods being processed [19]. The wooden cutting board surface can be described as a porous material with hydrophilic properties that can provide a suitable environment conducive to the harboring, persistence and proliferation of spoilage and diverse pathogenic organisms [9,20,21]. The processing of raw meat on cutting boards usually leaves behind an abundance of nutrients on its surface, allowing for the proliferation of microbial contaminants [22,23], consequently increasing the likelihood of spreading disease-causing microorganisms when hygiene standards are not met. Therefore, a failure to properly clean cutting boards may promote further biofilm formation, especially the development of the biofilms' most crucial attribute, its matrix of extracellular polymeric substances. This biofilm matrix, synthesized by the cells embedded within the biofilm, acts as a protective barrier against antimicrobial agents and a nutrient trap, allowing for the survival and persistence of embedded cells [24][25][26][27]. Cells are well protected in this matrix, including a wide range of microbial pathogens.
A recent microbial profiling study revealed significant differences in hygienic cleaning protocols and in access to modern meat processing facilities across 11 Hong Kong wet markets [5]. That study demonstrated that inefficient routine hygienic practices for cutting boards were responsible for harboring foodborne pathogenic organisms belonging to the Campylobacter, Clostridium, Escherichia, Staphylococcus, and Vibrio genera. Moreover, other pathogenic species such as Klebsiella pneumoniae, Enterobacter cloacae, and Vibrio vulnificus, known for causing nosocomial infections, were also found repeatedly on these same cutting boards [5].
Despite these findings, the exact source of the detected biological hazards remains unclear. Pinpointing the likely source would help clarify possible contamination paths, thereby potentially improving existing hygienic routines and cross-contamination measures via improvements in food safety regulations and public health policies. From a One Health perspective, Hong Kong wet markets can therefore be described as a hub in which multiple contamination sources could merge during food processing, ultimately leading to the potential spread of biological contaminants. Although a recent study showed that cutting board hygiene factors affected the prevalence of non-foodborne pathogens on wet market cutting boards, our study sought to further validate the origins of the identified non-foodborne pathogens via phylogenetic analyses. Here, the full-length 16S ribosomal RNA gene sequences of pathogens identified on cutting boards used in the processing of pork, poultry, and seafood in various wet markets in Hong Kong were used to construct a phylogenetic tree together with global datasets of foodborne versus clinically relevant pathogens.
Study area and sample collections
Samples were previously obtained from traditional or modern wet markets [5]. Traditional wet markets are located outdoors or in indoor environments without air conditioning. Modern wet markets generally have operational air-conditioning systems and are typically located in buildings meant for wet market activities. The exact wet market sampling locations were presented by Ngan et al. (2020) [5]. In these markets, swab samples were taken in July 2019 from wooden cutting boards meant for pork, poultry, and seafood processing.
Environmental swabs (Zymo, CA, U.S.A.) were sampled from an area of approximately 18 × 8 cm on the boards, as previously described by Lo et al. (2019) [15], with slight modifications. The swab samples were preserved in DNA/RNA shield collection tubes (R1107, Zymo), allowing the preservation of sampled DNA for up to 1 year at room temperature. For each sample, the total genomic DNA (gDNA) was extracted within one month of sampling. DNA extraction and sequencing were performed as previously described by Ngan et al. (2020) [5].
Screening of pathogens from wet market cutting boards
The Divisive Amplicon Denoising Algorithm (DADA2) [28] was used to infer amplicon sequence variants (ASVs) that differed from each other by at least a single nucleotide. The ASVs were inferred from filtered reads using version 1.12.1 of the DADA2 R package, which had been updated for the efficient processing of long amplicon reads and for appropriately modeling PacBio CCS sequencing errors [28].
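As a toy illustration of the defining property of ASVs — variants that differ by even a single nucleotide are kept distinct rather than clustered into OTUs — the following Python sketch dereplicates reads into exact variants. The `min_abundance` cutoff and the example reads are invented for illustration; real DADA2 inference additionally uses a learned error model to correct sequencing errors rather than simply discarding rare sequences.

```python
from collections import Counter

def exact_variants(reads, min_abundance=2):
    """Collapse identical reads into exact sequence variants.

    A crude stand-in for ASV inference: sequences below the abundance
    cutoff are discarded here as presumed sequencing errors, whereas
    DADA2 models per-base error rates explicitly."""
    counts = Counter(reads)
    return {seq: n for seq, n in counts.items() if n >= min_abundance}

reads = ["ACGT", "ACGT", "ACGT", "ACGA", "ACGA", "TTTT"]
variants = exact_variants(reads)
# ACGT and ACGA differ at only one position yet remain separate
# variants -- the single-nucleotide resolution described above.
```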
This study reports full-length 16S rRNA gene phylogeny in samples from wooden cutting boards used to process pork, poultry, and seafood. An earlier study reported the presence of food-associated pathogens in these samples [5]. Phylogenetic analysis was conducted to understand the affiliation of these pathogens with clinical isolates; the analyses included 52 ASVs associated with cutting boards used for pork, 17 with cutting boards used for poultry, and 13 with cutting boards used for seafood.
Phylogenetic analyses
Multiple-sequence alignments for each dataset were generated with the MUSCLE program [29] and manually edited and optimized with BioEdit [30]. Maximum likelihood trees were estimated with IQ-TREE v0.9.5 [31] using the best-fit nucleotide substitution model [32] selected by the Bayesian information criterion. Branch support was assessed with the ultrafast bootstrap approximation (UFBoot) [31,33], set to 1000 replicates, and with SH-aLRT branch tests, also set to 1000 replicates.
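The column-resampling idea behind nonparametric bootstrap branch support can be sketched in Python. This is only a schematic: UFBoot approximates support far more efficiently than naive resampling, and the three-sequence alignment below is invented for illustration.

```python
import random

def bootstrap_replicates(alignment, n_reps=1000, seed=1):
    """Build bootstrap pseudo-alignments by sampling alignment columns
    with replacement. In a real analysis, a tree is inferred for each
    replicate and a branch's support is the fraction of replicates in
    which that branch is recovered."""
    rng = random.Random(seed)
    n_cols = len(alignment[0])
    reps = []
    for _ in range(n_reps):
        cols = [rng.randrange(n_cols) for _ in range(n_cols)]
        reps.append(["".join(seq[c] for c in cols) for seq in alignment])
    return reps

aln = ["ACGTACGT", "ACGTACGA", "ACGAACGA"]
reps = bootstrap_replicates(aln, n_reps=3)
```

Each pseudo-alignment has the same dimensions as the original but a resampled set of columns, which is what lets repeated tree inference gauge how strongly the data support each branch.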
Meta-analysis of bacterial species composition on wooden cutting boards
The dominant bacterial species in samples from cutting boards used to process pork at wet markets were Aeromonas dhakensis and Escherichia coli (Fig. 1a). In contrast, those found in samples associated with poultry included A. caviae and Enterococcus gilvus (Fig. 1b), and those found in samples from seafood cutting boards were A. caviae, Vibrio vulnificus, and Vibrio parahaemolyticus (Fig. 1c). Phylogenetic analysis indicated that the 16S rRNA genes obtained via metagenomic sequencing of wet market wooden cutting board microbiomes were affiliated with clinically relevant human pathogens and food pathogens, predominantly in the Proteobacteria and Firmicutes phyla, respectively.
Phylogeny of identified pathogens associated with cutting boards used to process pork
The 16S rRNA gene phylogenetic analysis (Fig. 2a) showed that most of the ASVs clustered together with the human-associated clinical strains with high bootstrap and Bayesian support (Table 1). Thirty-five ASVs from the pork cutting boards clustered closely with the human clinically associated 16S rRNA reference sequences.
Phylogeny of identified pathogens associated with cutting boards used to process poultry
Phylogenetic analysis of the pathogenic bacteria isolated from wooden cutting boards used for poultry processing showed that they were affiliated with 10 ASVs (Fig. 2b).
Phylogeny of identified pathogens associated with cutting boards used to process seafood
For pathogens isolated from cutting boards used to process seafood, the phylogenetic analysis revealed the association of nine ASVs with the human clinical samples (Fig. 2c).
Discussion
This study aimed to characterize the presence of pathogens on cutting boards from Hong Kong's wet markets and determine their presumptive source via phylogenetic analysis. This study represents a complete phylogenetic assessment of wet market foodborne and clinically relevant bacterial pathogens. Both environmental and clinically relevant datasets of 16S rRNA were used to test the phylogenetic affiliations among pathogens from the cutting board samples. In this study, the phylogenetic analysis of the pathogenic bacterial assemblages from cutting boards used to process pork, poultry, and seafood identified a clear association with human-associated and clinically relevant phylotypes.
The porous surfaces of wooden cutting boards are perfect channels for the circulation of nutrients and water and thus provide favorable conditions for biofilm-forming communities. Furthermore, the lack of proper hygienic maintenance of these wooden cutting boards may have led to the establishment of niches harboring pathogenic bacteria that can form biofilms. In this study, cutting board pathogens were dominated by Aeromonas dhakensis, A. caviae, A. jandaei, and A. veronii. Earlier reports suggested that Aeromonas has a remarkable ability to colonize different environments through biofilm formation and cell-cell signaling [34]. Moreover, Aeromonas species have likewise been observed in mixed-species biofilms on food contact surfaces [35]. Aeromonas also plays a significant role in many health conditions, including gastroenteritis, wound infections, bacteremia, and, although less frequently, peritonitis, urinary tract infections, and ocular infections [36]. In this study, E. coli, another biofilm-forming pathogen, was linked with cutting boards associated with pork and seafood processing. It is well established that E. coli is a commensal organism predominantly associated with the gastrointestinal tract in animals and humans alike, where it thrives in complex biofilm consortia alongside a plethora of other microorganisms [37,38]. Furthermore, in clinical environments, E. coli has also been found to contaminate medical devices such as catheters, leading to catheter-associated nosocomial urinary tract infections [39]. Among the other biofilm-forming pathogens assessed, Enterococcus was predominantly recovered from cutting boards associated with pork and poultry, suggesting its persistence in the wet market food processing environment. An earlier report demonstrated enterococci's ability to form biofilms [40]. Elsewhere, in clinical settings, enterococcal biofilms associated with infections are hard to eradicate, given their high tolerance to antimicrobials [41].
The finding of foodborne and non-foodborne pathogens in wet market cutting board settings should be considered an alarming indicator of poor hygienic conditions. Recent studies have indicated that poor hygiene practices at wet markets may have exposed cutting boards to spoilage and pathogenic surface contamination [5,19]. Regular surface hygiene of such wet market cutting boards may be necessary. The food contact surface may interact with previously contaminated foods during processing, especially considering that these wet markets are usually characterized by poor storage/display conditions linked to inappropriate temperature control or lack thereof. The identification of Streptococcus suis on cutting boards used to process pork suggests poor storage conditions, leading to its proliferation over time and its transfer to cutting boards during processing (Fig. 1). Earlier reports in Hong Kong have shown that S. suis is a key bacterial pathogen responsible for various human infections in Hong Kong [42]. Although S. suis is enteric and nonpathogenic in pigs, its spread when handling raw pork products through cross-contamination increases the likelihood and risk of infections [12]. In 2019, the Hong Kong Centre for Health Protection (CHP) investigated an S. suis infection in a patient who died following fever, abdominal pain, vomiting, and diarrhea [43]. The initial investigations revealed that the patient had handled raw pig organs during the incubation period. In 2005, Hong Kong temporarily suspended all pork imports from Sichuan due to an S. suis outbreak. The Ministry of Health of China once reported 215 cases of human disease associated with the outbreak, 39 of which were fatal [44].
Another streptococcal species associated with various pathological infections in swine, S. porcinus, was identified as having potential clinically relevant associations. Earlier reports identified S. porcinus in female genitourinary tracts [45][46][47]. Other pathogens such as Staphylococcus aureus in cutting boards used for poultry also show a close affiliation to human-associated clinical phylotypes (Fig. 2). Earlier studies indicated S. aureus's ability to survive, colonize, and persist in poultry processing plants [48]. More specifically, the persistence of S. aureus was shown to be linked to its ability to adhere to different types of material [48,49] and to withstand cleaning and disinfection through the synthesis of a glucosamine-rich extracellular polymer [48,49]. Phylogenetic analysis of the cutting board pathogens showed the affiliation of ASVs in cutting boards used for processing pork and poultry meat with clinically relevant Campylobacter fetus, a foodborne illness pathogen among humans. C. fetus is also known to cause bacteremia and thrombophlebitis [50], and in rarer cases, can cause sepsis in newborn and immunocompromised individuals [51].
Evolutionary analysis of our bacterial 16S rRNA data found that several non-foodborne pathogens identified on cutting boards had a high likelihood of clinical relevance. For instance, Klebsiella pneumoniae, isolated from pork cutting boards, showed a close affinity to clinically relevant nosocomial pathogens affiliated with a human-associated phylotype (AF228920) isolated from human urine (Fig. 1). This observation may suggest that wet markets in Hong Kong either lack proper sanitary toilets and handwashing stations or are located precariously close to clinics and hospitals. In the latter case, the closeness of hospitals and wet markets is not unusual in Hong Kong: in Hong Kong's Kowloon district (KLC) (Fig. 1), one wet market was surrounded by three hospitals/clinics (Supplementary Table 1). Regardless of the location, access to handwashing stations and their proper use could reduce the spread of non-foodborne pathogens [52]. The general lack or improper use of sanitation stations at these wet markets may explain our finding of other pathogens that are phylogenetically affiliated with clinical strains, including A. nosocomialis and A. baumannii on cutting boards used to process pork; A. caviae and Enterobacter cloacae on cutting boards used to process poultry; and A. caviae and Vibrio parahaemolyticus on cutting boards used to process seafood. The significance of such non-foodborne pathogens on food processing surfaces should not be ignored, given the possibility of their additional antimicrobial resistance properties. The persistence and survival of most nosocomial pathogenic organisms in hospitalized patients' flora and the surrounding environment can be attributed to their multi-drug resistance abilities [53][54][55]. Epidemiological approaches to the characterization and tracking of pathogens have allowed safety and prevention measures to be implemented for improving public health.
In a previous outbreak at the National Institutes of Health (USA), an epidemiological investigation helped further our understanding of the spread of K. pneumoniae and of its ability to increase its antibiotic resistance [56]. Our study lacks the detailed epidemiological data needed to confirm whether these pathogens originate from nearby hospitals, or to track their source. One of the limitations of studying hospital-associated infections (HAIs) is the lack of molecular assays; all HAI confirmations thus far have relied on culturing techniques. Future work should incorporate different models, including geospatial system models, to evaluate the true origin of pathogens in wet markets. Adopting whole-genome-based approaches to quantify and characterize identified pathogens by integrating genetic and epidemiological information would systematically improve wet market surveillance routines, ultimately strengthening food safety policies.
Conclusions
This study investigated the phylogenetic relationships among bacterial communities associated with wooden cutting boards used for pork, poultry, and seafood processing in Hong Kong's wet markets. This was achieved via high-throughput metagenomic sequencing of full-length bacterial 16S rRNA amplicons; the data were then compared with environmental and clinically associated pathogens. First, the pathogens on cutting boards used for pork were more diverse than those on boards used for poultry and seafood. Second, the phylogenetic analysis indicated that the wet market wooden cutting board bacterial communities were closely affiliated with human pathogenic strains associated with clinical infections. Improvements in meat storage conditions are thus critical to avoid pathogen contamination in wet markets. Furthermore, refrigeration and cooling infrastructure at wet markets would improve the safe storage and display of raw meat; such installations would delay the growth of unwanted and pathogenic microorganisms and their dissemination into the surrounding environment via cross-contamination. Finally, cleaning and sanitation stations would also help reduce the potential spread of non-foodborne pathogens by improving general personal hygiene.
Data availability
Raw sequencing reads have been deposited in the EMBL-EBI Sequence Read Archive under the accession number PRJEB37431.
Declaration of Competing Interest
The authors declare no conflict of interest.
Clinical analysis and anatomopathologic study of the prepuce of patients submitted to postectomy
BACKGROUND AND OBJECTIVE: The objective of this study is to analyze the histologic features of the prepuce with phimosis and the incidence of lichen sclerosus (LS) causing phimosis. METHODS: Our prospective study included 40 male patients from 5 to 14 years of age, with phimosis, referred for circumcision. The patients were distributed into two groups: those with primary phimosis and those with secondary phimosis. The patients were operated on at the Joana de Gusmão Children's Hospital, and the specimens were examined separately by 3 pathologists. In accordance with their histopathologic features, patients were distributed into three groups: normal histologic findings, lichenoid infiltrate and LS. RESULTS: The clinical intercurrences most frequently reported by our patients with phimosis were balanoposthitis and urinary tract infection (UTI). Most patients (65%) did not present with histologic alterations of the skin; in 22.5% we found lichenoid infiltrate; and 12.5% had alterations typical of LS. Most cases belonged to the acquired group, representing 57.5% of the patients studied. All LS cases occurred in the acquired group. The main surgical indication in our casuistry was failure of clinical treatment (45%), followed by balanoposthitis (25%), narrowing of the prepuce (17.5%), UTI (10%) and associated urinary diseases (2.5%). CONCLUSION: Our study concluded that LS, as the cause of phimosis, had an incidence of 12.5%.
INTRODUCTION
The separation of the prepuce from the glans penis begins at around the sixth month of gestation and involves the keratinization of the prepuce and of the epithelium of the glans. The keratinization begins at opposed ends, i.e., the corona glandis (crown of the glans) and the distal margin of the prepuce, and extends from both sides. Keratinization leads to the formation of the preputial space, separating the skin from the glans. 1 At birth, the penis, like the rest of the body, is immature. In the juvenile penis, balanopreputial adherence, resistance of the preputial orifice to retraction, and greater length of the prepuce are normal and considered physiological conditions. 2 In 90% of non-circumcised boys, the prepuce becomes retractable at roughly five years of age. From this age onward, the impossibility of prepuce retraction is called phimosis. 4,5 Recently, surgical treatment for phimosis has been replaced by the use of topical substances. Some studies [6][7][8][9] have shown a 67-95% cure rate with the use of medium- and high-potency topical steroids (0.05% clobetasol or betamethasone). In the latter studies, anatomopathologic analyses of the treated prepuces were not performed, which has prevented knowledge of the real incidence of the causes of phimosis.
In cases where topical treatment is not effective and severe narrowing of the prepuce persists, surgical treatment is indicated. 10 Circumcision, or postectomy, i.e., partial or total removal of the prepuce, has been practiced as a ritual since pre-Christian times. 11 There is little evidence of an association between circumcision and good penile hygiene, and its real effectiveness is very controversial. 3,12 The reason for the appearance of the phimotic ring remains unknown, with the exception of cases consequent to recurrent inflammatory processes (balanoposthitis); local trauma leading to the appearance of fissures and secondary fibrosis; and dermatological disorders leading to preputial dystrophy, such as lichen sclerosus et atrophicus (LS). 13,14 Lichen sclerosus (LS), also known in men as balanitis xerotica obliterans, was first described by Hallopeau in 1887; in 1892, Darier formally described its histologic characteristics. This disorder may affect all age ranges, with reports ranging from six-month-old children to the elderly, Caucasians making up the majority of patients. LS mainly affects the inguinal area, in 85 to 98% of cases, and occurs more often in women, at a ratio of five women to every affected male. [15][16][17][18] Various factors have been reported in association with its pathogenesis, such as human papillomavirus (HPV) and
An Bras Dermatol, Rio de Janeiro, 79(1):29-37, jan./fev. 2004.
Borrelia burgdorferi, in association with autoimmune diseases (vitiligo, alopecia areata, diabetes mellitus, pernicious anemia and thyroid disease), trauma, and genetic factors related to the human leukocyte antigen (HLA). 15,16 A few studies 19,20 place the incidence of LS in pediatric patients with phimosis submitted to postectomy at roughly 15%. However, the data are not precise, because the tissue removed by circumcision is rarely examined histologically and the operation is usually curative, with no relapses of the condition. 16 The definitive treatment for late-onset LS of the prepuce is circumcision. Treatment with local steroids tends to be effective when the inflammatory mechanism is active and tissue damage is reversible, thereby covering the initial and intermediary histologic stages of the disease.
OBJECTIVE
To analyze the histologic characteristics of the prepuce in the presence of phimosis, as well as the incidence of lichen sclerosus as a cause of phimosis.
METHOD
The present study is a cross-sectional descriptive survey with prospective data collection.
CASUISTRY
This study included 40 pediatric male patients aged between five and 14 years (median 9.5 years) who were diagnosed with phimosis and had an indication for surgical treatment. The patients were distributed into two groups based on disease duration: primary phimosis (present since birth) and secondary (acquired) phimosis. 21 The study obtained a favorable report from the Federal University of Santa Catarina Committee on Ethics in Research Involving Human Beings, project no. 086/2001.
Procedures
Using the data collection form, the research supervisors recorded information referring to the patient (name, age, place of residence) and the disease (onset of the condition, previous treatment and its duration, and clinical intercurrences: balanoposthitis, urinary tract infection (UTI), paraphimosis), as well as whether there was a family history of phimosis. The research supervisors explained the study to the patients and had them sign the free and informed consent form.
The excised specimens were preserved in a 10% formalin solution and sent to the pathologic anatomy laboratory (IDAP - Anatomopathologic and Diagnostic Institute). Each specimen was examined separately by three pathologists. In the laboratory, the specimens were fixed in 10% formalin, embedded in paraffin blocks, sectioned with a microtome, and prepared as slides stained with hematoxylin-eosin. The histologic characteristics of the epidermis, dermoepidermal junction and dermis were then observed and described without prior knowledge of the clinical data. In accordance with the histopathologic characteristics, the patients were distributed into three groups: normal histologic findings, lichenoid infiltrate and LS. For the histologic diagnosis of LS, the following pathological alterations were considered, in accordance with Lever: 22 (1) follicular hyperkeratosis, (2) atrophy of the stratum malpighii with hydropic degeneration of the basal cells, (3) pronounced edema and homogenization of the collagen in the upper dermis, and (4) presence of an inflammatory infiltrate in the dermis. 22
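Schematically, this three-group assignment can be expressed as a simple rule. The finding names below are invented labels for Lever's four criteria, and this sketch is purely illustrative, not a diagnostic tool.

```python
# Invented labels for the four histopathologic criteria of LS (after Lever).
LS_CRITERIA = {
    "follicular_hyperkeratosis",
    "basal_cell_hydropic_degeneration",
    "upper_dermis_collagen_homogenization",
    "dermal_inflammatory_infiltrate",
}

def classify_specimen(findings):
    """Assign a specimen to one of the study's three histologic groups."""
    if LS_CRITERIA <= findings:  # all four LS criteria present
        return "LS"
    if "lichenoid_infiltrate" in findings:
        return "lichenoid infiltrate"
    return "normal histologic findings"
```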
Statistical analysis
The data were tabulated using Microsoft Excel® 97. Simple descriptive statistics were used as the statistical approach, with corresponding percentages.
The prevalence of all factors of interest was described. Data were presented in tables and charts.
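The descriptive approach used here reduces to group counts with their adjacent percentages. A minimal Python sketch, using the group sizes reported in this study's Results (n = 40), is:

```python
def prevalence_table(counts):
    """Return each group's count together with its percentage of the total."""
    total = sum(counts.values())
    return {group: (n, round(100 * n / total, 1)) for group, n in counts.items()}

# Group sizes from the Results section (40 patients in total).
groups = {"normal histology": 26, "lichenoid infiltrate": 9, "LS": 5}
table = prevalence_table(groups)
# -> {'normal histology': (26, 65.0), 'lichenoid infiltrate': (9, 22.5),
#     'LS': (5, 12.5)}
```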
RESULTS
All patients came from Santa Catarina state: 37 from the Greater Florianópolis mesoregion, two from the South Catarinense mesoregion and one from the Vale do Itajaí (Figure 1).
Forty patients with phimosis, aged five to 14 years, participated in the study. All had an indication for surgical treatment after assessment by a pediatric surgeon. None of the patients was clinically diagnosed with LS. The distribution according to age groups 23 is featured in Graph 1.
The most frequent clinical intercurrences in patients with diagnosed phimosis were balanoposthitis (30%) and urinary tract infection (10%). There were no reports of paraphimosis. As for family history, 20 of the 40 patients (50%) confirmed a positive history (Table 2). The histopathologic findings are presented in Table 1; the anatomopathologic reports issued by the three pathologists were in full agreement. Group I, with no histologic alterations of the skin, consisted of 26 patients (65%); group II, lichenoid infiltrate, included nine patients (22.5%); and group III, LS, five patients (12.5%).
The results correlating family history of phimosis with the anatomopathologic findings are shown in Table 2.
The surgical criteria for each age group are presented in Table 3.
Forty patients were studied and submitted to postectomy at the Joana de Gusmão Children's Hospital (HIJG) between May 2001 and June 2002.
The majority of patients (92%) came from the Greater Florianopolis region.
The patients who were submitted to postectomy had previously diagnosed phimosis. Given Bloom's 4 and Piro's 5 criteria, which define phimosis as the impossibility of prepuce retraction from the age of five onward (this alteration being considered normal up to the age of four), the casuistry spans ages five to 14 years. Among these patients, the highest frequency of postectomies was in the seventh year of life (17.5%), which agrees with the higher rate of phimosis in the six-to-seven-year-old age range found in the literature. 8 With respect to phimosis type, the patients were distributed into primary or congenital (n=17; 42.5%) and secondary or acquired (n=23; 57.5%). Among the causes of acquired phimosis, the literature cites LS. Considering this pattern more closely, the patients analyzed here with primary phimosis showed 76% normal histology and 24% lichenoid infiltrate; no LS was found in the primary phimosis group. In the cases of secondary phimosis, 56% were normal, 22% showed lichenoid infiltrate, and 22% showed LS, the latter comprising all of the LS cases. These results agree with the work of Meuli et al., in which 90% of patients with LS demonstrated acquired phimosis.
In relation to clinical intercurrences, the cases most frequently found in this study were balanoposthitis (30%) and UTI (10%); no cases of paraphimosis were found in these patients. These data resemble those in the literature. 26 When analyzing family history, LS has been reported in mothers and daughters, in sisters, and even in mono- and dizygotic twins. The HLA complex appears to control susceptibility to these inflammatory diseases. Still, conflicting studies do exist, with some authors finding a strong association and others none at all. 15,16,17 Among the LS cases found in this study, 50% reported a positive family history, against 50% with a negative history.
The majority of these patients (65%) showed normal skin histology. 27,28,29 Clemmensen 30 found 46.5%, and Chalmers 19 found 76%, of histology without alteration in pediatric patients with phimosis, who made up the majority of their cases.
In the 40 specimens from this study that were examined histologically, a 12.5% incidence of LS was found, which agrees with the research literature. Nonetheless, the statistical significance of these results must be evaluated with a larger number of cases in order to establish the real incidence of LS in this setting. Further to this point, in clinical practice LS has rarely been diagnosed, especially owing to the lack of specific characteristics of the disease in most cases, apart from phimosis itself. We should add that its incidence is uncertain, owing to the lack of routine anatomopathologic analysis of the excised prepuces in phimosis cases. 20 Another anatomopathologic alteration that could be verified was lichenoid infiltrate, found in 18% of patients. Ackerman 27 includes this alteration as one of the initial histopathologic findings of LS. Therefore, as the lichenoid infiltrate is possibly associated with LS, one cannot exclude the possibility that these cases belong to an evolutive pathological process whose final result is LS. Such a possibility would considerably increase the incidence of phimosis caused by LS. With respect to treating phimosis, the literature points to 80% effectiveness with topical corticosteroid use. 6,29
Graph 2: Distribution of patients regarding whether there was previous corticosteroid use.
Likewise, some authors have shown that
topical corticosteroids are effective in treating LS at its initial and intermediary stages; at more advanced stages of the disease, circumcision is the treatment of choice. 10,24 The aforementioned treatment with topical corticosteroids was used by 70% of the patients analyzed in this paper. LS cases represent 14% of the cases in which treatment failed. Nonetheless, such data are hardly precise, given that some patients underwent treatment for only a few days.
Failure of clinical treatment with corticotherapy was the main indication for surgery in this casuistry, accounting for 45% of cases. This divergence from the literature, which reports 80% effectiveness, cannot be overvalued, because the authors did not follow up this prior treatment; it is therefore impossible to determine whether the treatment was performed correctly and to obtain data comparable with the literature. The other surgical indications were: recurrent balanoposthitis (25%); severe narrowing of the prepuce (17.5%); recurrent UTI (10%); and associated urinary diseases (2.5%). The urinary disease found in the latter instance was a case of lithiasis of the lower urinary tract.
There were no complications resulting from the treatment performed, independently of the diagnosis of LS, which confirms postectomy as the definitive treatment for this condition. 31
CONCLUSION
1. The histologic characteristics of phimosis are varied; they do not show a single pattern.
2. Among the histopathologic findings, the absence of alterations, lichenoid pattern and specific alterations of LS were featured.
3. In most cases, the histopathologic findings are inexpressive, i.e. without showing any significant alterations.
4. If the lichenoid patterns encountered in some cases were considered to be initial manifestations of LS, this would significantly increase LS incidence as a cause of phimosis.
5. Lichen sclerosus, as a cause of phimosis in children, showed an incidence of 12.5% in the sample studied.
Table 1: Primary and acquired phimosis and anatomopathologic findings.
Table 2: Positive or negative family history and anatomopathologic findings. | 2018-12-11T20:15:46.561Z | 2004-02-01T00:00:00.000 | {
"year": 2004,
"sha1": "363d7179574fc74a2d08556a29a51d3df2821bf7",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/abd/a/frd6PRkz3hqZG3H8W8wn6rG/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "363d7179574fc74a2d08556a29a51d3df2821bf7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8031540 | pes2o/s2orc | v3-fos-license | Evaluation of Live Recombinant Nonpathogenic Leishmania tarentolae Expressing Cysteine Proteinase and A2 Genes as a Candidate Vaccine against Experimental Canine Visceral Leishmaniasis
Canine Visceral Leishmaniasis (CVL) is a major veterinary and public health problem caused by Leishmania infantum (L. infantum) in many endemic countries. It is a severe chronic disease with generalized parasite spread to the reticuloendothelial system, such as spleen, liver and bone marrow, and is often fatal when left untreated. Control of VL in dogs would dramatically decrease the infection pressure of L. infantum for humans, since dogs are the main domestic reservoir. In the past decade, various subunit and DNA antigens have been identified as potential vaccine candidates in experimental animal models, but none has been approved for human use so far. In this study, we vaccinated outbred dogs with a prime-boost regimen based on recombinant L. tarentolae expressing the L. donovani A2 antigen along with cysteine proteinase genes (CPA, and CPB without its unusual C-terminal extension (CPB-CTE)) and evaluated its immunogenicity and protective immunity against L. infantum infectious challenge. We showed that vaccinated animals produced significantly higher levels of IgG2, but not IgG1, and also IFN-γ and TNF-α, but low IL-10 levels, before and after challenge as compared to control animals. Protection in dogs was also correlated with a strong DTH response and low parasite burden in the vaccinated group. Altogether, immunization with recombinant L. tarentolae A2-CPA-CPB-CTE was proven to be immunogenic and induced partial protection in dogs, hence representing a promising live vaccine candidate against CVL.
Introduction

A recombinant A2-antigen vaccine adjuvanted with saponin, called LeishTec [23], was licensed for prophylaxis against canine ZVL and has been used in Brazil; a formulation related to the LiESAp vaccine was also licensed for commercialization in Europe under the name CaniLeish [30,31].
We have shown previously that prime and boost immunization with A2-CPA-CPB-CTE recombinant L. tarentolae protects BALB/c mice against L. infantum challenge and that protection was associated with high levels of IFN-γ, lower levels of IL-10, high nitric oxide production and low parasite burden [16]. In this study, we evaluated the immunogenicity and protective immunity of recombinant L. tarentolae expressing A2-CPA-CPB-CTE as a live vaccine against VL in dogs. Recombinant L. tarentolae was administered subcutaneously in both the prime and boost regimen. Vaccinated dogs were followed for almost 20 months, and different parameters, including cellular and humoral immune responses, parasite load in bone marrow, and clinical evaluations, revealed a partial protection against an infectious L. infantum challenge.

Dogs were housed individually in conventional kennels (90 × 110 × 170 cm) at the School of Veterinary Medicine, Tehran University, and fed a standard commercial diet (Nutripet, Iran). Animals were acclimated for three to four months in the animal facility, and temperature (15-20°C), light/dark cycle (12 h on/12 h off), humidity (40-60%) and food were controlled every day. Throughout the study, welfare measures including a separate cage with a soft floor mat, optimal temperature and humidity, free access to water and once-daily access to food were strictly applied. In addition, all dogs had daily access to the outside for about 30 min. The condition of the animals was followed routinely by veterinarians (including appetite, physical examination and physical activity), and every 3 months CBC and serum biochemistry tests were performed. All invasive procedures were performed following the rules of ethical procedures in animal experimentation and biosafety (WWW.ruralareavet.org).
Vaccine administration and experimental infection
Dogs were divided into three groups according to their weight, sex and age (each including 10 dogs), named G1, G2 and G3. The first group (G1) was immunized subcutaneously (SC) with 2 × 10⁷ L. tarentolae A2-CPA-CPB-CTE-EGFP. Group G2 was immunized with wild type (WT) L. tarentolae and G3 was injected with PBS. Three weeks later, all groups received a similar booster immunization. Three weeks after the boost, all groups were challenged by intravenous injection with 4 × 10⁷ L. infantum (MCAN/ES/98/LLM-877) stationary phase promastigotes.
Evaluation of humoral immune response
Sera of dogs were collected at different time points (T0: before challenge at day 41; T1: 2 months after challenge at day 60; T2: 6 months after challenge at day 180; T3: 11 months after challenge at day 330; T4: 14 months after challenge at day 420; T5: 17 months after challenge at day 510) in order to measure individually the level of specific antibody production against freeze/thawed (F/T) L. tarentolae A2-CPA-CPB-CTE-EGFP and F/T L. infantum crude lysate. Briefly, as in our previous studies [29], the plates were incubated overnight at 4°C and then blocked with 1% (w/v) BSA in PBS at 37°C for 2 h. Sera were diluted 1:100 in PBS supplemented with 0.05% (v/v) Tween 20 and 1% (w/v) BSA and added to the wells. After incubation for 2 h at 37°C, plates were washed three times with PBS containing 0.05% (v/v) Tween 20; then goat anti-dog IgG1 (Bethyl Laboratories Inc., Montgomery, TX, USA) and sheep anti-dog IgG2 (Bethyl Laboratories Inc., Montgomery, TX, USA) conjugated to peroxidase were added and incubated for 2 h at 37°C. IgG1 and IgG2 conjugates were diluted in PBS-0.05% (v/v) Tween 20-1% (w/v) BSA at 1:10,000 and 1:50,000, respectively. The plates were washed three times and binding of the conjugate was visualized with a peroxidase substrate system (KPL, ABTS). The reaction was stopped by adding 1% SDS and the absorbance value was measured at 405 nm in an automatic micro-ELISA reader. In all tests, sera from infected dogs were used as a positive control and sera from healthy dogs as a negative control.
Evaluation of cytokine production
Levels of IFN-γ, TNF-α and IL-10 were assessed before (T0) and two (T1), six (T2), eleven (T3), fourteen (T4) and seventeen (T5) months after challenge. For this purpose, peripheral blood mononuclear cells (PBMCs) were obtained from heparinized blood, mixed 1:1 with PBS at room temperature, layered over Ficoll (Histopaque 1077, Sigma, USA) and centrifuged at 2200 rpm for 30 min at room temperature. PBMCs were collected and then washed twice in DMEM medium (centrifuged at 1700 rpm for 10 min). The pelleted cells were resuspended in 1 ml DMEM medium and counted with a haemocytometer. The isolated PBMCs were resuspended in DMEM medium supplemented with 20% (v/v) heat-inactivated FCS, 10 mM HEPES, and 50 μg/ml gentamicin. 1.5 ml of cell suspension (3 × 10⁶/ml) was plated in duplicate in 48-well culture plates. Isolated PBMCs were incubated for 96 h in the presence of 10 μg/ml of PHA (as positive control), 20 μg/ml of L. tarentolae A2-CPA-CPB-CTE-EGFP (F/T), and 20 μg/ml of L. infantum (F/T), or in the absence of antigens (as negative control), at 37°C and 5% CO2. The supernatants were collected for assessing the production of IL-10 and TNF-α after 24 h and of IFN-γ after 96 h, then stored at -70°C until assayed by sandwich ELISA (Duoset ELISA Canine IFN-γ, Duoset ELISA canine TNF-α and Duoset ELISA canine IL-10; R&D Systems). For the assay, specific mouse anti-dog IFN-γ (1 μg/ml), mouse anti-dog IL-10 (2 μg/ml) and mouse anti-dog TNF-α (2 μg/ml) antibodies were used as the capture antibodies, and biotinylated goat anti-dog IFN-γ (4 μg/ml), goat anti-dog IL-10 (100 ng/ml) and goat anti-dog TNF-α (100 ng/ml) antibodies as the detection antibodies. The test was developed with the ABTS 2-Component Microwell Peroxidase Substrate system kit. The reaction was stopped by 1% SDS and the absorbance value was measured at 405 nm in an automatic micro-ELISA reader.
Standard curves for IFN-γ, TNF-α and IL-10 were generated using the respective recombinant canine proteins. Detection limits were 17.5-2000 pg/ml for canine IFN-γ and IL-10 and 8.75-1000 pg/ml for TNF-α, according to the manufacturer's kits.
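As a rough illustration of how the 405 nm absorbance readings above are converted to concentrations within the kit's detection limits, the sketch below interpolates log-concentration against optical density. The standard series, OD values and helper name are hypothetical, not the study's data:

```python
import numpy as np

def cytokine_conc(od_sample, od_standards, conc_standards, lod=(17.5, 2000.0)):
    """Estimate a cytokine concentration (pg/ml) from a blank-corrected
    ELISA absorbance by log-linear interpolation of the standard curve.
    Values outside the detection limits (here defaulting to the canine
    IFN-gamma / IL-10 range) are reported as NaN."""
    log_conc = np.interp(od_sample, od_standards, np.log10(conc_standards))
    conc = float(10 ** log_conc)
    return conc if lod[0] <= conc <= lod[1] else float("nan")

# Hypothetical 2-fold standard series (pg/ml) and their ODs at 405 nm
stds = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500, 1000, 2000])
ods = np.array([0.02, 0.03, 0.05, 0.10, 0.19, 0.37, 0.70, 1.25, 2.00])
print(round(cytokine_conc(0.5, ods, stds), 1))  # a mid-curve sample
```

In practice, sigmoidal (e.g. four-parameter logistic) fits are often preferred near the curve's plateaus; linear interpolation in log space is the simplest defensible choice for this sketch.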
Leishmanin skin test
The delayed type hypersensitivity (DTH) response was determined by intradermal injection at 11 and 16 months after challenge. Dogs were inoculated intradermally in the right shaved groin with 3 × 10⁸/ml stationary phase promastigotes of L. infantum in 0.4% phenol-saline [32]. The left shaved groin received only 0.1 ml saline (control). The largest diameter of the induced indurations and their perpendicular diameter were measured at 48 hours. Indurated areas were marked, and each time the values of the saline control were subtracted from the reaction due to the Leishmania antigen. Reactions showing diameters ≥ 5 mm were considered positive.
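The positivity rule just described (saline control subtracted, net reactions of at least 5 mm considered positive) reduces to a small helper; the function name and example diameters are illustrative only:

```python
def dth_positive(antigen_mm, saline_mm, threshold_mm=5.0):
    """Net induration = antigen-site diameter minus the saline control;
    net reactions of at least 5 mm are scored positive, as in the text."""
    return (antigen_mm - saline_mm) >= threshold_mm

print(dth_positive(12.0, 2.0))  # net 10 mm -> True
print(dth_positive(6.0, 3.0))   # net 3 mm -> False
```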
Real time PCR in bone marrow
Bone marrow from all dogs was taken at 18 months after challenge. Dogs were anesthetized with a mixture of medetomidine hydrochloride (Domitor) and ketamine (5 mg/kg). The bone marrow was aspirated from the iliac bone with a 16 mm × 25 mm Klima needle into a 20 ml syringe containing 0.5% EDTA in RPMI. Each sample was divided into three parts for quantification of parasite burden by cytology, immunocytochemistry (ICC), and real time PCR. Real time PCR was used to quantify the parasite load in the bone marrow 18 months post-challenge. One milliliter of bone marrow iliac aspirate was collected into EDTA tubes and stored at −20°C. Genomic DNA was extracted from 200 μl of bone marrow using the DNeasy Blood & Tissue kit (Qiagen). A primer pair targeting a region of the kinetoplast minicircle DNA of L. infantum, named RV1 and RV2 (forward: 5′-CTTTTCTGGTCCCGCGGGTAGG-3′; reverse: 5′-CCACCTGGCCTATTTTACACCA-3′), was used [33]. Quantification of Leishmania DNA was performed by an absolute method, comparing Ct values with those from a standard curve constructed from 10-fold dilutions of L. infantum DNA extracted from cultured parasites, from 1 × 10⁶ to 0.1 parasite equivalents/ml, using an Applied Biosystems 7500 real time PCR system. All samples were run in duplicate on every plate. For quantification of parasites in bone marrow, 200 ng of DNA was subjected to a reaction containing 5 pmol of each forward and reverse primer and 12.5 μl Qiagen QuantiFast SYBR Green Master Mix in a total volume of 25 μl. Conditions for PCR amplification were as follows: 95°C for 10 min; 40 cycles consisting of 95°C for 15 s, 58°C for 30 s, and 72°C for 40 s. Specific amplification of the target region was confirmed by gel electrophoresis of the PCR products.
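As an illustration of the absolute quantification step, the sketch below fits a standard curve (Ct versus log10 parasite equivalents over the 1 × 10⁶ to 0.1 range) and inverts it for unknown Ct values. The Ct numbers are invented for the example and are not the study's data:

```python
import numpy as np

# Hypothetical Ct values for 10-fold dilutions of L. infantum DNA
# (1e6 down to 0.1 parasite equivalents/ml), evenly spaced for clarity
log_qty = np.arange(6, -2, -1)  # log10(quantity): 6, 5, ..., -1
ct_std = np.array([15.0, 18.3, 21.6, 24.9, 28.2, 31.5, 34.8, 38.1])

# Linear fit: Ct = slope * log10(quantity) + intercept
slope, intercept = np.polyfit(log_qty, ct_std, 1)
efficiency = 10 ** (-1 / slope) - 1  # ~1.0 corresponds to ~100% PCR efficiency

def parasites_per_ml(ct):
    """Invert the standard curve to quantify an unknown sample."""
    return 10 ** ((ct - intercept) / slope)

print(round(float(slope), 2))  # slope near the canonical -3.3
print(round(parasites_per_ml(26.55), 1))
```

A slope near -3.32 indicates roughly 100% amplification efficiency; real runs would average the duplicate Ct values before inverting the curve.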
Cytology and immunocytochemistry (ICC)
Multiple aspirate smears were made on slides, air-dried or alcohol-fixed, and stained by the Wright and Giemsa methods. A specific antibody against L. donovani/L. infantum (WHO LXXVIII-2E5-A8 (D2)), available from our previous study [34], was used as the primary antibody. The slides were rehydrated and treated with 3% hydrogen peroxide solution for 10 minutes at room temperature to quench endogenous peroxidases. Antigen retrieval was conducted by pre-treatment with microwaving (power 100 for 10 min and then power 20 for 20 min) in a 10 mmol/L citrate buffer (pH 6.0) and proteinase K. The primary antibody was applied for 1 hour (diluted 1:200). The immunoreaction was detected with the Envision+ system (DakoCytomation) and developed with diaminobenzidine (DakoCytomation); 3,3'-diaminobenzidine-hydrogen peroxide was applied as the chromogen and hematoxylin was used as the counterstain. The cytological and immunocytochemical smears were examined under different magnifications. The modified scoring method described by Shirian et al. [35] was used for Leishman body burden. Samples were considered negative if amastigotes were not found at 1000X magnification (oil immersion field, OIF) in the whole slide smear. The density of amastigotes was quantified using a semiquantitative scale according to Table 1.
Endpoint culture of spleen tissues
All dogs were sacrificed by intravenous injection of thiopental sodium 33% (5 ml/kg) at the end of the study (20 months post-infection). A piece of spleen was removed under aseptic conditions and cultured in 2 ml of Schneider's Drosophila medium supplemented with 20% heat-inactivated fetal calf serum and gentamicin (0.1%). After incubation at 26°C, the cultures were examined daily for the presence of promastigotes under an inverted microscope at 40X magnification for a 1-month period.
Clinical examination and biochemical evaluations
Routine clinical evaluation of the animals was carried out every 3 months. In each evaluation, dogs were weighed and their general health status was examined by a veterinarian. At the end of the project, all dogs were clinically classified according to the presence/absence of infection signs: subpatent (clinically well and bone marrow DNA positive only), asymptomatic (clinically well, bone marrow DNA positive and spleen culture positive), or symptomatic when the dogs showed one or more clinical signs of CVL including lymphadenopathy, alopecia, weight loss, bone marrow DNA positivity and spleen culture positivity [36,37]. Biochemical analysis was performed in all animals 20 months after challenge. Serum levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), urea, creatinine, alkaline phosphatase (ALP), albumin (Alb) and total proteins were determined by a biochemistry serum analyzer (Technicon RA-1000, USA).
Statistical analysis
Statistical analyses were performed using GraphPad Prism 5.0 for Windows (GraphPad Software Inc. 2007, San Diego, USA) as well as SPSS version 18. Data were expressed as medians. Groups G1 and G2 were each compared with the PBS control group (G3); in some cases, G1 and G2 were also compared with each other. Non-parametric tests were used for all analyses, including humoral and cellular immune responses, DTH responses and parasite load, since the data were not normally distributed. The Mann-Whitney test and Fisher's exact test were used for comparisons of the different parameters between groups. The correlation between IFN-γ and IgG2 production at 14 and 17 months after challenge was calculated using the Spearman correlation method for each group (G1, G2 and G3). A p value <0.05 was considered significant.
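For concreteness, the two main non-parametric procedures named above can be run as follows (SciPy assumed available; all per-dog values are invented for illustration and do not come from the study):

```python
from scipy import stats

# Invented per-dog IFN-gamma responses (pg/ml): vaccinated (G1) vs PBS (G3)
g1 = [820, 640, 910, 700, 1150, 560, 980, 760, 690]
g3 = [210, 180, 350, 90, 260, 400, 150, 300, 120]

# Two-sided Mann-Whitney U test for a group difference
u_stat, p_value = stats.mannwhitneyu(g1, g3, alternative="two-sided")
print(p_value < 0.05)  # groups differ

# Spearman rank correlation between IFN-gamma and IgG2 ODs within one group
igg2 = [0.82, 0.61, 0.95, 0.70, 1.20, 0.55, 1.01, 0.74, 0.66]
rho, p_rho = stats.spearmanr(g1, igg2)
print(round(float(rho), 2))
```

Spearman correlation is used on ranks rather than raw values, which matches the non-normal distributions reported for these measurements.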
Vaccination regimens and clinical follow up
To assess the ability of recombinant L. tarentolae A2-CPA-CPB-CTE-EGFP to protect dogs against challenge with L. infantum, 30 outbred dogs subdivided into three groups (G1 to G3) were tested. The first group (G1) was vaccinated subcutaneously with two doses of L. tarentolae A2-CPA-CPB-CTE-EGFP, the second group (G2) received two doses of L. tarentolae wild type (WT), and the third group (G3) received two doses of PBS alone and was used as a control. Additional details about the route of vaccination, dose interval, and vaccine formulation are summarized in Table 2. Animals were followed up throughout the duration of the experiment for 20 months to evaluate clinical symptoms of leishmaniasis, as well as the development of cellular and humoral immune responses (S1 Fig). The vaccine was well tolerated and there was no local reactivity at the point of inoculation. One dog from each group died during the study for reasons unrelated to canine leishmaniasis (gastric dilatation-volvulus (GDV), and uremia with chronic kidney disease). In addition, one animal from G2 (16 months post-infection) and one from G3 (20 months post-infection) died from visceral leishmaniasis. Group 3 showed significantly higher levels of anti-L. infantum IgG1 at T2, T3 and T5 compared to G1 (Fig 1A; p<0.001, p<0.001 and p<0.05, respectively) and also at T2 and T3 in comparison to G2 (p<0.05). Interestingly, the levels of IgG1 at T4 in G2 were significantly higher than in G1 and G3 for the same period (p<0.05).
Antibody responses to leishmanial antigens
The specific levels of IgG2 against L. infantum (F/T) were significantly higher in G1 compared to the control group (G3) at T5 (p<0.05). Similarly, in G2 the most significant differences were observed at T3-T5 (p<0.01, p<0.01 and p<0.05, respectively) as compared to the PBS group (G3) (Fig 1B).

Following stimulation with L. infantum (F/T), we observed a remarkable increase in IL-10 levels in G3 at the T1-T5 intervals (p<0.05) in comparison to the G1 group, but at T2 only, a significant difference (p<0.05) between G2 and G3 was observed (Fig 2C). As shown in Fig 2D, IL-10 production in response to L. tarentolae A2-CPA-CPB-CTE-EGFP was significantly higher in G3 than in G1 at different time intervals, including T1, T2, T3 and T5 (p<0.001, p<0.01, p<0.05 and p<0.05, respectively), and only at T1 in G2 (p<0.05).
Altogether, the levels of IFN-γ and TNF-α significantly increased in the vaccinated group (G1), whereas levels of IL-10 significantly decreased in comparison to the control group (G3). Moreover, we analyzed the correlation between IFN-γ and IgG2 at fourteen and seventeen months after challenge in all groups. Our results showed that G1 had the highest correlation between IFN-γ and IgG2 production for both periods (Spearman r = 0.99, p<0.001) in comparison to the other groups, as shown in S2 Fig.
Delayed-type hypersensitivity response
Delayed type hypersensitivity (DTH) against L. infantum promastigotes was tested at 11 and 16 months post-challenge. All dogs developed a DTH response as measured 11 months after infection (Fig 3A). The size of the indurations was determined 48 hours after administration of L. infantum antigens. The G1 group showed a significantly higher (p<0.05) DTH response compared to G3 (Fig 3A). We also observed that 77% of dogs in G1 had an induration larger than 10 mm, compared with G2 and G3 in which only 33% showed this pattern. Interestingly, at 16 months post-challenge, although G1 had a higher DTH response (55% with more than 10 mm induration), there was no significant difference between the groups. Of note, one dog in G1 and two dogs in G2 and G3 did not show any DTH response at 16 months post-challenge.
Low parasite density in vaccinated groups
Bone marrow is an important lymphoid organ in clinical analyses of canine visceral leishmaniasis. In the present study, the amount of Leishmania (L. infantum) DNA was quantified by real-time PCR in bone marrow samples at 18 months post-challenge.
Cytological and immunocytochemical findings
Quantified amastigote density in Fine Needle Aspiration (FNA) and ICC smears for Leishmania was classified by two independent observers. The density of amastigotes in G1 (the vaccinated group) was grade I or II: three and six cases of G1 showed grade I and grade II, respectively, both cytologically and immunocytochemically (Table 1). The density of amastigotes in G2 and G3 varied from grade I to IV. Two cases in each of G2 and G3 had grade I; four cases of G2 and two cases of G3 were verified as grade II. One case of G2 and two cases of G3 showed grade III. Severe parasite loading (grade IV) was seen in one case of G2 and three cases of G3. Cytologically and immunocytochemically, the density of amastigotes in G1 was lower than in G2 and G3, as shown in Table 1 and Fig 5. Cytologically, there was no dog with grade III or IV in group G1, as compared to G2 and G3 (2 dogs in G2 and 5 dogs in G3 with grade III or IV). Our observations indicate that group G3 had the highest number of dogs with grades III and IV (p<0.05).
Clinical status and laboratory findings
Different criteria were used for dividing the dogs into three categories: subpatent (only positive for bone marrow PCR), asymptomatic (bone marrow PCR positive and spleen culture positive, with minor biochemistry abnormalities and minor weight loss) and symptomatic (bone marrow PCR positive, spleen culture positive, intensive weight loss and strong clinical biochemistry abnormalities). The main clinical features presented by the dogs are summarized in Table 3. Clinical signs of VL appeared at an earlier stage in dogs of the control group (G3) as compared to the vaccinated dogs in G1. In addition, 56% of dogs in the control group were symptomatic, whereas 33% of the vaccinated group (G1) and 34% of G2 were symptomatic. One animal in each of G2 and G3 presented a progressive form of VL and died, whereas none died in G1. The evaluation of different biochemical parameters showed significant differences in AST, Alb, ALP, urea, creatinine and total protein concentrations between G1 and G3 (p<0.05). Between G2 and G3, we observed a significant increase in the levels of AST and total protein in G3. There were no significant differences in ALT levels between groups. Altogether, the clinical findings showed that the control group (G3) had the most symptomatic dogs in comparison to the G1 and G2 groups.
Discussion
Here, we vaccinated dogs with a live-vectored vaccine against VL using a non-pathogenic protozoan parasite, L. tarentolae, expressing the L. donovani A2 antigen along with CPA and CPB cysteine proteinases and tested its immunogenicity and protective potential against infectious challenge. Our previous study demonstrated that vaccination of dogs with cysteine proteinases type I and II (CPB and CPA) elicited an increased expression of IFN-γ mRNA and a strong parasite-specific Th1 response and conferred protection against parasite challenge [29]. Fernandes et al. also showed that immunization with rA2 antigen was immunogenic and induced partial protection in dogs, associated with increased IFN-γ and low IL-10 levels detectable in vaccinated animals before and after challenge [23].
In this study, we evaluated both cellular and humoral immunity associated with post-vaccination protection against L. infantum. We demonstrated that vaccination with L. tarentolae A2-CPA-CPB-CTE-EGFP induced an antibody response that reacted with L. infantum. There is some experimental evidence that antibody production may play some role in protection against VL. It has indeed been reported that antibodies induced by vaccination interfered not only with parasite survival and multiplication but also with binding and/or internalization of promastigotes by macrophages [39]. In our study, the levels of IgG1 increased in response to L. infantum (F/T) in all groups after infection, but in the PBS control group (G3) the levels of IgG1 were significantly higher than in groups G1 and G2, immunized with L. tarentolae A2-CPA-CPB-CTE-EGFP and L. tarentolae WT, respectively. The post-infection level of IgG2 increased in all groups, with group G2 demonstrating the highest level. The level of IgG2 in the vaccinated group (G1) was significantly higher than in the PBS group only at seventeen months post-challenge. Several studies have demonstrated an association of high IgG2 production with asymptomatic infections and of elevated IgG1 levels with disease [39,40]. Here, we showed that the levels of Leishmania-specific IgG2 were higher than those of Leishmania-specific IgG1 in dogs vaccinated with recombinant L. tarentolae A2-CPA-CPB-CTE-EGFP. In contrast, levels of the IgG1 subclass were higher than IgG2 in dogs that received PBS only, in agreement with some previous reports [41][42][43]. Although all experimentally infected dogs in our study developed anti-L. infantum antibody responses, there is some controversy over the association between the canine IgG subclass ratio and protective cellular immune responses in canine visceral leishmaniasis [43][44][45][46][47][48]. It has been suggested that the IgG2/IgG1 ratio in dogs infected with L. infantum is an alternative measure of Th1/Th2 polarization of the immune response [29,49]. It has been reported that the IgG2/IgG1 ratio in vaccinated and protected dogs is >1, whereas a ratio <1 corresponds to canine visceral leishmaniasis (CVL) with progression towards overt disease [50]. In this study, the IgG2/IgG1 ratio at T5 was greater than 1 in all dogs of the vaccinated group (G1, 100%), in contrast to G2 (77%) and G3 (55%).
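The ratio-based interpretation cited above reduces to a simple threshold; the function name and the antibody levels in the example are hypothetical:

```python
def igg_polarization(igg2, igg1):
    """IgG2/IgG1 > 1 is read as a Th1-skewed (protective) profile per the
    cited reports; a ratio below 1 suggests Th2 skew with progression
    towards disease. Inputs are arbitrary-unit antibody levels (e.g. ODs)."""
    ratio = igg2 / igg1
    return ratio, ("Th1-skewed" if ratio > 1 else "Th2-skewed")

print(igg_polarization(0.8, 0.4))  # -> (2.0, 'Th1-skewed')
print(igg_polarization(0.3, 0.6))  # -> (0.5, 'Th2-skewed')
```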
Here, we found that the levels of IFN-γ increased after infection in the vaccinated group (G1) in comparison to the PBS group (G3). Higher levels of IFN-γ were observed at two, fourteen and seventeen months after challenge. It is worth mentioning that at fourteen and seventeen months after challenge in G1, the peak production of IFN-γ in response to vaccination occurred concurrently with the significant elevation of IgG2 and this correlation was higher than in the other groups. This suggests that the recombinant L. tarentolae A2-CPA-CPB -CTE -EGFP polarizes the immune system towards a Th1 response and that high levels of IFN-γ can stimulate macrophages to kill Leishmania amastigotes. These results are in agreement with previous studies showing that the main effector mechanism involved in protective immune response of dogs infected with L. infantum is the activation of macrophages by IFN-γ and TNF-α to kill intracellular amastigotes via the nitric oxide pathway [51]. It has been shown that NO production and anti-leishmanial activity were also detected in a canine macrophage cell line infected with L. infantum after incubation with IFN-γ, TNF-α and IL-2 [52] as well as in macrophages from dogs immunized with killed L. infantum promastigotes [53]. IFN-γ was seen to increase and correlate with protection in vaccinated dogs [54][55][56]. A large number of studies using putative protective antigens or attenuated parasites in mice have shown that protection against progressive visceral infection involves high expression of IFN-γ and decreased expression of IL-10 [57][58][59]. In dogs, low parasite burdens of L. chagasi in lymph nodes were also associated with high expression of IFN-γ and TNF-α [60]. Also, recent studies in dogs showed that live attenuated L. donovani with the centrin gene deleted (LdCen−/−) were capable of inducing protection against an infectious L. infantum challenge. 
This protection was associated with significantly higher production of IFN-γ, IL-12/IL-23p40 and TNF-α, which skewed the immune response towards type 1 and hence contributed to a remarkable reduction in bone marrow parasite load [55,56]. The elevated expression of IFN-γ during severe disease has also been described in patients with active VL [61,62]. TNF-α has also been shown to play a protective role by synergizing with IFN-γ in mediating parasite killing [63]. In the present study, levels of TNF-α were significantly higher in the vaccinated group G1 against L. infantum F/T antigen during the T1, T3 and T5 periods.
Here, we found that the IL-10 levels in the vaccinated group G1 were lower than in the PBS group during the T2-T5 periods. Our results showed that recombinant L. tarentolae A2-CPA-CPB-CTE-EGFP shifts the immune profile towards Th1. Previous studies showed that IL-10 is related to progressive disease in human visceral leishmaniasis [62] and plays a role in susceptibility to VL in hamster and murine models [64]. Other studies also showed that high IL-10 expression was associated with increased parasitic loads and progression of the disease [65,66]. Also, increased levels of IL-10 mRNA were reported in PBMCs from control infected dogs after challenge with L. infantum [29]. In human L. chagasi infection, IL-10 production has been correlated with pathology [67]. Taken together, these reports are in agreement with our findings. The IFN-γ to IL-10 ratio is another relevant indicator of successful immunization [38]. The IFN-γ/IL-10 ratio in G1 stimulated with L. infantum F/T increased significantly after challenge, from 11 to 17 months, in comparison to the PBS group. Furthermore, the IFN-γ/IL-10 ratio against recombinant L. tarentolae A2-CPA-CPB-CTE-EGFP (F/T) in G1 and G2 was significantly higher than in G3 at all tested time intervals post-challenge.
Skin delayed-type hypersensitivity (DTH) response has been used as another indicator of the immunogenicity of an immunizing antigen, measured by the presence of a specific cellular immune reaction [68][69][70]. A positive DTH response is a marker of a type 1 immune response and has been used to assess the immunogenicity of candidate vaccine antigens against leishmaniasis [68,70,71]. Our results demonstrated a DTH response in all dogs after infection, but in the vaccinated group (G1) this response was stronger both at 11 and 16 months post-challenge. It has been shown previously that, among naturally infected dogs, those that were asymptomatic and did not progress to active visceral disease had a stronger DTH response compared to dogs that progressed to active VL [72,73].
Parasite density in the bone marrow and spleen is the most reliable marker to explore the clinical status of CVL [74]. It has been shown that bone marrow parasite density can drive major phenotypic changes in peripheral blood leukocytes in canine visceral leishmaniasis. It has also been reported that dogs displaying higher bone marrow parasite density are more likely to develop severe CVL [56,75]. In this study, parasite density in the bone marrow of all dogs was evaluated by reliable methods, including direct detection (cytology and ICC) and real time PCR. Recent findings showed a high sensitivity in detection of L. infantum DNA by real-time PCR, indicating the usefulness of this method for quantification of Leishmania DNA [76,77]. The results of direct detection, particularly ICC, showed the highest density of amastigotes in G3, followed by G2, in agreement with the other results obtained in this study. High sensitivity and specificity of ICC in amastigote detection have been reported recently [35]. The vaccinated group (G1), showing partial protection, had the significantly lowest quantity of parasites compared to the PBS group (G3). In group G2, almost all dogs (with the exception of two) showed a lower quantity of parasites in comparison to G3. Our results indicate that although immunization with recombinant L. tarentolae A2-CPA-CPB-CTE-EGFP (G1) induced a significantly higher immune response in comparison to the control group (G3), G2, which was immunized with wild type L. tarentolae, demonstrated a clinical status similar to G1 at 20 months post-infection.
In conclusion, our study supports that live vaccination with recombinant L. tarentolae A2-CPA-CPB-CTE-EGFP as a prime/boost vaccine is safe and immunogenic in uninfected, unexposed outbred dogs. After experimental infection with promastigotes, all dogs progressed from subpatent infection to asymptomatic and finally to symptomatic infection. The vaccinated group (G1) had the highest percentage at the subpatent stage (34%, compared to 22% in G2 and 11% in G3) and the lowest percentage at the symptomatic stage (33%, compared to 56% in G3). It is worth mentioning that the full picture of the in vivo response in dogs is very complex, and individual markers can hardly be correlated with absolute resistance to disease. Therefore, it is important to take all parameters into account to conclude that there is protection. In our study, the experimental challenge with high numbers of metacyclic parasites may have led to an underestimate of vaccine efficacy. Although it is a matter of speculation, if a more relevant challenge (smaller dose, intradermal inoculation) were used, higher levels of protection might be observed. Our results indicate that although vaccination with L. tarentolae A2-CPA-CPB-CTE-EGFP may not prevent the disease in all cases, it could render disease development slower and milder (considering clinical observation and weight loss) in vaccinated groups. If the development of the disease can be slowed down in cases where it cannot be prevented, this could favor early treatment with better long-term survival. The work presented here is among the first lines of research using vector-based vaccination in dog models and could act as a platform for future studies in large animals. Using this strategy, it would be possible to consider a single immunization by further improving our live vaccine regimen to enhance protective and long-term immune responses, perhaps by using immune potentiators such as CpG-ODN.
In future experiments, we could also add to our live vaccine regimen an immunogenic component of the sand fly salivary gland. Recently, we showed that a combination of recombinant L. tarentolae with a sand fly salivary antigen (PpSP15) of Ph. papatasi elicited strong protective immune responses against cutaneous leishmaniasis caused by L. major infection in both resistant and susceptible mice [78].
Supporting Information S1 Fig. Experimental setup and timelines. Three groups of dogs were allocated for this experiment. According to their weight, sex and age, the dogs were divided into three groups (each including 10 dogs), named G1, G2 and G3. They were immunized twice at three-week intervals. Before challenge, both humoral and cellular immune responses were assessed. At different time periods after infectious challenge with L. infantum, besides the immune response evaluation, DTH, parasite burden, as well as cytology and immunohistochemistry assessments were carried out.

statistical analysis of the project, Elham Gholami and Sima Habibzadeh for their great assistance in blood preparation, Davoud Eravani for DTH evaluation, Negar Norouzi, Ebrahim Bijari, Mohammareza Asgari, and also Shahram Alizadeh for their technical assistance.
Author Contributions
Conceived and designed the experiments: SR BP. Performed the experiments: MS FZ TT YT SJ SS NM MH YD SHZ. Analyzed the data: MS SR. Contributed reagents/materials/analysis tools: SR BP SHZ. Wrote the paper: MS SR BP. | 2017-06-07T20:05:34.876Z | 2015-07-21T00:00:00.000 | {
"year": 2015,
"sha1": "0720c8f6b1ecd51446f876f64deb02adb07808f4",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0132794&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0720c8f6b1ecd51446f876f64deb02adb07808f4",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
250622674 | pes2o/s2orc | v3-fos-license | Carotid plaque surface echogenicity predicts cerebrovascular events: An Echographic Multicentric Swiss Study
Abstract Background and Purpose To determine the prognostic value for ischemic stroke or transient ischemic attack (TIA) of plaque surface echogenicity, alone or combined with the degree of stenosis, in a Swiss multicenter cohort. Methods Patients with ≥60% asymptomatic or ≥50% symptomatic carotid stenosis were included. Grey-scale based colour mapping was obtained for the whole plaque and for its surface, defined as the regions between the lumen and respectively 0-0.5, 0-1, 0-1.5, and 0-2 mm of the outer border of the plaque. Red, yellow and green represented low, intermediate and high echogenicity, respectively. The proportion of red colour on the surface (PRCS), reflecting low echogenicity, was considered alone or combined with the degree of stenosis (Risk index, RI). Results We included 205 asymptomatic and 54 symptomatic patients. During follow-up (median/mean 24/27.7 months), 27 patients experienced stroke or TIA. In the asymptomatic group, RI ≥0.25 and PRCS ≥79% predicted stroke or TIA with hazard ratios (HR) of 8.7 (p = 0.0001) and 10.2 (p < 0.0001), respectively. In the symptomatic group, RI ≥0.25 and PRCS ≥81% predicted stroke or TIA occurrence with HRs of 6.1 (p = 0.006) and 8.9 (p = 0.001), respectively. The best surface parameter was located at 0-0.5 mm. Among variables including age, sex, degree of stenosis, stenosis progression, RI, PRCS, grey-scale median values and clinical baseline status, only PRCS independently prognosticated stroke (p = 0.005). Conclusion In this pilot study including patients with at least a moderate degree of carotid stenosis, PRCS (0-0.5 mm), alone or combined with the degree of stenosis, strongly predicted the occurrence of subsequent cerebrovascular events.
INTRODUCTION
The degree of atherosclerotic narrowing of the extracranial carotid artery is used to predict the risk of future ischemic strokes. This strategy has limitations, as severe carotid atherosclerotic lesions may remain asymptomatic for years, while other, more moderate lesions may progress rapidly and lead to ischemic stroke. [1][2] Carotid plaque morphology has been shown to be an independent predictor of ipsilateral stroke risk. [3][4][5] Noninvasive imaging techniques such as high-resolution ultrasound have emerged in recent years for the characterization of arterial wall pathology. [5][6][7][8][9] Most studies performed in this field are based on visual plaque analysis alone, with poor inter- and intra-observer agreement. 10 As a consequence, more operator-independent approaches have been developed using computer-assisted analysis of grey-scale values. The first and most widely used method is the grey-scale median (GSM) measurement. [11][12][13][14][15][16][17][18] Various studies have demonstrated that plaques with low GSM values are associated with an increased risk of subsequent stroke.
We have reported an alternative method consisting of a more regional analysis of plaque components, in particular of the plaque surface, with the use of colour mapping. [19][20][21] In our previous study we showed that plaque surface echogenicity, alone or combined with the degree of stenosis (Risk index), made it possible to distinguish between symptomatic and asymptomatic patients with good diagnostic accuracy. 21 The aim of the present work was to determine prospectively the prognostic value for stroke or transient ischemic attack (TIA) of plaque surface echogenicity, alone or combined with the degree of stenosis, in a multicenter Swiss cohort of patients with ≥60% asymptomatic or ≥50% symptomatic carotid stenosis.
Two groups were included: patients with ischemic stroke occurring within the last 6 months and patients with asymptomatic carotid stenosis. These two groups defined the clinical baseline status. Clinical history, presence of vascular risk factors and usual treatment were assessed. The interventional management of carotid stenosis (surgery, stenting) was similar in all participating centers: an intervention was recommended in patients with symptomatic 50-99% stenosis and in patients with asymptomatic ≥80% stenosis. Patients were asked to participate in the present study when they either refused the aforementioned recommendations or were considered not to be candidates for a carotid intervention, e.g., because of reduced life expectancy.
Patients with a potential cardio-embolic source of stroke who were not under anticoagulants were excluded from the study.
All the participants gave informed consent before taking part.
Ultrasound criteria
All investigations were performed using ultrasound devices (Philips iU22, Siemens Antares, Toshiba and LOGIQ P6 GE) with 4-8 MHz transducers. The patients were examined in the supine position, with their head slightly rotated to the side opposite the carotid artery being imaged. All plaques were examined in an axial and a longitudinal plane. For analysis, however, only the longitudinal plane was considered. Probe placement and the site of plaque delineation, e.g., near or far wall, were left to the appreciation of the sonographer.
The flow velocity and stenosis rate were measured at the site of the common carotid artery, bulb, and proximal internal carotid artery (ICA). Peak systolic velocities (PSV) at the level of the stenosis and the ICA/common carotid artery (CCA) ratio were used to distinguish the different categories of degree of stenosis: 50%-59% with PSV >120 cm/sec and ICA/CCA >1.5; 60%-69% with PSV >170 cm/sec and ICA/CCA >3.2; and 70%-99% with PSV >220 cm/sec and ICA/CCA >3.7. A stenosis of >80% was assumed whenever the end-diastolic velocity was >130 cm/sec. [22][23][24][25][26] These velocity criteria, similar to the North American Symptomatic Carotid Endarterectomy Trial grading, were applied across all centers. In the presence of asymptomatic bilateral or tandem stenoses, the plaque with the highest degree of stenosis was considered.
Grey-scale based colour mapping
The spatial distribution of the grey-scale values of the pixels of the plaque was used as the measurement of echogenicity. Three colours were chosen for colour mapping according to the intensity of echogenicity: red for low, yellow for intermediate and green for high echogenicity. To facilitate the assessment of the degree of stenosis, we integrated into the colour mapping analysis a morphological measurement of diameter reduction according to the European Carotid Surgery Trial criteria for all patients. [28][29][30] The Risk index was established as a combination of the degree of stenosis, as assessed by means of the morphological measurement, and the proportion of red colour on the surface. We also investigated the correlation between the Risk index and the proportion of red colour on the surface in order to assess whether the systematic use of both parameters was necessary or whether the use of only one of them might be sufficient.
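As an illustration of the regional colour-mapping idea described above (this is a sketch, not the authors' actual software: the grey-scale cutoffs of 60 and 90 and the function name are assumptions chosen for illustration, loosely guided by the grey-scale intervals reported in the Results), the per-band colour proportions could be computed as follows:

```python
import numpy as np

def colour_proportions(grey, surface_mask, low_cut=60, high_cut=90):
    """Classify plaque-surface pixels by echogenicity.

    grey         : 2-D array of grey-scale values (0-255)
    surface_mask : boolean mask selecting the surface band
                   (e.g., lumen border to 0.5 mm into the plaque)
    low_cut / high_cut : illustrative thresholds; red = low
                   echogenicity (< low_cut), yellow = intermediate,
                   green = high (>= high_cut).
    Returns the proportions of red, yellow and green pixels in the band.
    """
    vals = grey[surface_mask]
    n = vals.size
    red = np.count_nonzero(vals < low_cut) / n
    green = np.count_nonzero(vals >= high_cut) / n
    yellow = 1.0 - red - green
    return red, yellow, green

# Hypothetical 10x10 plaque patch whose "surface band" is its top two rows
rng = np.random.default_rng(0)
grey = rng.integers(0, 256, size=(10, 10))
mask = np.zeros((10, 10), dtype=bool)
mask[:2, :] = True

prcs, _, _ = colour_proportions(grey, mask)  # PRCS = proportion of red on surface
```

In this sketch the Risk index would then combine `prcs` with the measured degree of stenosis; the paper does not give the combination formula, so it is deliberately left out.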
Colour mapping was performed by all centers, and the results were sent to the Geneva center, where all native datasets were centralized. All colour mapping images were then reanalysed by a trained nurse technician blinded to the clinical history of the patients (DW) and, in case of discordance, validated by an experienced medical doctor (RS). Values were considered discordant whenever there was a difference of more than 20% for RI and/or of more than 15% for PRCS.
Grey-scale median assessment of the whole plaque
Grey-scale median values of the whole plaque were obtained according to the method described by El-Barghouty and colleagues. 12 The GSM computation was also implemented in our software.
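A minimal sketch of a normalised grey-scale median computation in this spirit (the reference values below, blood mapped to 0 and adventitia to 190, follow the convention commonly cited for this method; they are not taken from this paper, and the function name is hypothetical):

```python
import numpy as np

def normalised_gsm(plaque_pixels, blood_ref, adventitia_ref):
    """Grey-scale median of a plaque after linear normalisation.

    plaque_pixels  : 1-D array of raw grey values inside the plaque ROI
    blood_ref      : median raw grey value sampled in the blood (lumen)
    adventitia_ref : median raw grey value sampled in the adventitia
    Raw values are rescaled so blood maps to 0 and adventitia to 190
    (a convention often used with this method), clipped to the 0-255
    range, and the median of the rescaled values is returned.
    """
    scale = 190.0 / (adventitia_ref - blood_ref)
    norm = (plaque_pixels - blood_ref) * scale
    return float(np.median(np.clip(norm, 0, 255)))
```

A low GSM value obtained this way would correspond to an echolucent plaque, the feature the studies cited above associate with higher stroke risk.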
MRI examination
In the symptomatic group, brain MRI was performed within a time span of 48-72 hours, and in the asymptomatic group, within 10 days after the detection of the carotid stenosis. When present on diffusion-weighted sequences, the lesion was considered acute. MRI scans were also used to confirm the clinical baseline status (asymptomatic versus symptomatic) and to distinguish, in those patients who became symptomatic during the follow-up period, whether they had a stroke or a TIA. 31
RESULTS
From 2008 to 2016, we included 282 patients. All patients were monitored every 6 months by ultrasound investigation during the follow-up.
Their baseline characteristics are given in Table 3. The best surface parameter was located at 0-0.5 mm and the best grey-scale interval between <60 and <90 (Tables 1 and 2). Accordingly, RI and PRCS referred to these two values.
DISCUSSION
In the present study we showed that PRCS, alone or combined with the degree of stenosis expressed as the Risk index, was a strong predictor of stroke or TIA in patients with asymptomatic or symptomatic carotid stenosis (Figure 4, Table 4). The threshold values which may be used for asymptomatic and symptomatic patients were similar regarding RI, but slightly different regarding PRCS (79% versus 81%, respectively) (Table 4 and Figures 4 and 5). Furthermore, although the majority of patients who experienced stroke or TIA during the follow-up period had both parameters above the threshold values, the systematic use of both parameters for plaque analysis is nevertheless necessary, as in some cases one value may still lie outside its respective limit (Figure 5).
In the Cox regression model, neither RI, nor the degree of stenosis, nor stenosis progression proved significant when compared to PRCS. Regarding prognosis, these findings suggest a possible superiority of the surface parameter (Table 5). Characteristic features commonly thought to be associated with an increased cerebrovascular risk include plaques with low echogenicity, also called echolucent, as opposed to echogenic or non-echolucent ones. [32][33][34][35][36] Histologically, echolucent plaques indicate the presence of a large necrotic core, whereas non-echolucent plaques reflect a predominantly fibrotic component. [5][6][7][8][9] Studies comparing carotid plaques removed from symptomatic and asymptomatic patients have revealed that the main features of unstable plaques include surface ulceration, thinning and/or rupture of the fibrous cap. 37 The size of the necrotic core has not been shown to be significantly different between these two groups. 37 Accordingly, an echolucent plaque, even though it reflects the presence of an important necrotic core, is not necessarily unstable. On the other hand, the proximity between the necrotic core and the lumen may exert a critical role with respect to plaque instability. 5,[37][38] The distance between the necrotic core and the lumen is determined by the thickness of the fibrous cap. On ultrasound, the plaque surface appears echogenic when the fibrous cap is thick, whereas it becomes more hypo- or anechogenic when the cap is thin or ruptured.
In a systematic review, Brinjikji and colleagues demonstrated that plaques with complex features, particularly those with echolucency, neovascularization, ulceration and intraplaque motion, were associated with ischemic symptoms. 39 In this meta-analysis, whole-plaque and surface parameters presented similar predictive values. (Table footnotes: GSM = grey-scale median; n = number of patients; *P value cutoff <0.05; **25 asymptomatic and 12 symptomatic patients with dual antiplatelet therapy; ***all patients under oral anticoagulants also received antiplatelet therapy.) A recent study analysed multiparametric indices, including a vulnerability index, … of asymptomatic patients (p < 0.05). 18 Other more recent studies sought to determine, in ultrasonic images of internal carotid artery plaques, the diagnostic value of the juxtaluminal anechogenic area without a visible echogenic cap. The authors found, in a multiple logistic regression model, an association between hemispheric symptoms, increasing stenosis (mild, moderate, severe), low GSM values (<15) and a juxtaluminal black (hypoechoic) area equal to or greater than 8 mm². 18,33,43 Our study was exploratory, as the thresholds for RI and PRCS were not predefined but established on the ground of the present findings (Table 4). Our study was further limited by the small number of patients; however, the duration of the follow-up period was relatively long and the number of events was sufficient to obtain significant findings. (Table 5: Prediction of stroke or transient ischemic attack for asymptomatic and symptomatic patients (n = 259) according to age, sex, Risk index, proportion of red colour on the surface, degree of stenosis, stenosis progression, grey-scale median and clinical baseline status. Footnotes: n = number of patients; CI = confidence interval; *P value cutoff <0.05; **symptomatic or asymptomatic at baseline.) We further found 12% of discordant colour mapping findings among the various centers. Although the rate of agreement was acceptable, these results nevertheless suggest that the method needs careful monitoring and cannot be used without prior training.
To conclude, we found in our cohort of patients that PRCS, alone or combined with the degree of stenosis expressed as the Risk index, was a strong predictor of ischemic stroke or TIA in patients with asymptomatic or symptomatic carotid stenosis. These findings underline the importance of plaque surface echogenicity as a potential criterion for assessing the embolic risk in asymptomatic or symptomatic carotid disease.
Furthermore as surface echogenicity may be difficult to assess visually in clinical practice, echographic computerized approaches should be preferred. | 2022-07-19T06:18:02.761Z | 2022-07-18T00:00:00.000 | {
"year": 2022,
"sha1": "abffb883705e13ae7c1c84f1ff5f866dc7e812ce",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "efa6e622dc6cdde532b2e0293173906b537274b8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268540232 | pes2o/s2orc | v3-fos-license | Recent Advances in the Epidemiology of Pathogenic Agents
The COVID-19 pandemic has underscored the pivotal role of epidemiology in studying pathogenic agents [...].
Antibiotic Resistance and Emerging Diseases
These studies on the epidemiology of pathogenic agents have shed light on the critical issues of antibiotic resistance and emerging diseases. A global systematic review and meta-analysis revealed a resurgence of syphilis, a sexually transmitted disease that was once on the brink of elimination [Contribution 1]. The study highlighted the global spread of antibiotic resistance, posing significant challenges for the treatment of syphilis. This underscores the need for improved diagnostic methods, treatment strategies, and ongoing research into the development of a vaccine for syphilis. In Taiwan, a study reported a high prevalence of rapidly growing mycobacteria (RGM) infections, with high resistance rates to available antimicrobial agents [Contribution 2]. The findings highlight the need for timely antimicrobial susceptibility testing to guide the selection of appropriate treatment for RGM infections. A study conducted in Mexico found a high prevalence of Helicobacter pylori among patients with gastric diseases, with the vacA s1m1/cagA+ genotype being the most frequent [Contribution 3]. The study also revealed a 19.8% prevalence of clarithromycin resistance-associated mutations. These findings suggest the need for routine testing for H. pylori virulence factors and clarithromycin resistance-associated mutations to guide treatment decisions. In Japan, a study investigated the serotypes and antibiotic resistance of Streptococcus pneumoniae before and after the introduction of the 13-valent pneumococcal conjugate vaccine [Contribution 4]. The key finding is that cefotaxime resistance was confirmed for the serotype 15A isolates. Future trends in the spread of these isolates should be monitored with caution.
In addition, a comprehensive analysis of SARS-CoV-2 outlines the genomic structure of the virus, demonstrating its close relation to bat-derived coronaviruses and suggesting a zoonotic origin [3]. Researchers explored the virus's receptor-binding mechanisms, highlighting similarities with SARS-CoV, which indicates that the virus utilizes the same receptor, ACE2, for cell entry. The study underscores the virus's transmission dynamics and potential cross-species transfer, laying a foundation for developing effective diagnostics, treatments, and vaccines to combat the spread of the disease, which would later be designated COVID-19. These studies highlight the urgent need for robust epidemiological surveillance and research to understand disease transmission dynamics, inform public health interventions, and protect global health. As we navigate the post-COVID-19 world, studying the epidemiology of pathogenic agents will continue to be paramount.
Tuberculosis and Hematological Parameters
Tu et al. indicated that these parameters displayed a substantial discriminatory power in differentiating between PTB and GUTB [Contribution 5], suggesting they could serve as potential markers for these tuberculosis types. Specifically, the PTB patients exhibited a higher NLR, whereas GUTB patients demonstrated increased lymphocyte and monocyte ratios, indicative of maintained cell-mediated acquired immunity. This finding aligns with others [4,5] and highlights the significance of analyzing immunological markers, hematological parameters, and biochemical profiles in tuberculosis patients for diagnostic accuracy, disease differentiation, and tailored treatment approaches. The implications include incorporating these parameters into routine assessments to enhance tuberculosis diagnosis, patient management, and treatment outcomes. The development of more targeted diagnostic strategies, personalized treatment plans based on disease manifestations, and improved monitoring techniques for patients with different forms of tuberculosis may be applied for better patient outcomes. Incorporating immunological, biochemical, and hematological parameters into clinical practice could lead to more precise and effective tuberculosis management, ultimately contributing to better patient outcomes and disease control strategies.
Outbreak Investigation and Virology
An epidemiological, clinical, and virological study was conducted on the first four cases of Monkeypox (Mpox) found in Cartagena, Colombia, during the 2022 outbreak through passive surveillance [Contribution 6]. Since Monkeypox was first isolated in 1958 from laboratory monkeys, genomic studies have characterized MPXV into Central African/Congo Basin and West African clades with differential epidemiology and clinical manifestations [6,7]. All cases in the study tested positive for MPXV, specifically identifying clade IIB and lineage B.1 from the genetic sequencing. The study yielded vital genomic, clinical, and epidemiological data that deepen the understanding of Mpox at a local level, emphasizing Cartagena's susceptibility due to its significant domestic and international connectivity. Furthermore, the research suggests that multiple introduction events possibly happened in Cartagena, with transmission likely occurring through skin-to-skin sexual contact.
In addition, a retrospective study was conducted in Romania to assess the prevalence of bacterial and fungal co- and superinfections in COVID-19 patients [Contribution 7]. The study found that Pseudomonas aeruginosa and Klebsiella pneumoniae were the most common pathogens identified in sputum samples, followed by Escherichia coli and Acinetobacter baumannii. Patients with fungal co-infections were noted to have a shorter duration between symptom onset and hospitalization, elevated lymphocyte counts and transaminase levels, and more severe complications at the time of admission compared to others. Furthermore, these patients showed lower oxygen saturations and a higher rate of mortality, particularly when the infections were multidrug-resistant. The authors emphasized the urgent need for stringent antimicrobial stewardship and infection control policies in Romania, particularly considering the widespread use of antimicrobial agents and the increased antibiotic resistance during the COVID-19 pandemic, exacerbated by over-the-counter antibiotic access.
Co-Infections and Community Health
A study assessed the prevalence of single and multiple diarrheal-causing pathogen combinations in children from rural and peri-urban communities in South Africa [Contribution 8]. A total of 275 diarrhea stool specimens were collected and analyzed using the BioFire® FilmArray® Gastrointestinal panel. The results showed that 82% of the specimens contained enteric pathogens. The most detected bacterial, viral, and parasitic pathogens were EAEC (42%), EPEC (32%), Adenovirus F40/41 (19%), Norovirus (15%), Giardia (8%), and Cryptosporidium (6%), respectively. Single enteric pathogen infections were recorded in 24% of the specimens, while multiple enteric pathogen combinations were recorded in 59%. The study demonstrated the complex nature of pathogen co-infections in diarrheal episodes, which could impact treatment effectiveness. Additionally, understanding the prevalence and combinations of co-infections could inform public health interventions to prevent diarrheal diseases, particularly in vulnerable populations such as young children in rural and peri-urban communities.
COVID-19 and Dermatological Manifestations
A case study was presented discussing a patient with genetic thrombophilia and a mutation in the MTHFR gene in Africa [Contribution 9]. Treatment with rivaroxaban and prednisone led to the resolution of dermatological symptoms and decreased D-dimer levels, indicating reduced blood clot formation. The patient was also diagnosed with COVID-19, caused by the P.2 variant of SARS-CoV-2. This case study highlights the importance of considering genetic predispositions and thrombophilic tendencies in patients with COVID-19, as these factors may influence disease presentation and outcomes, including dermatological symptoms. Future research in this area could focus on elucidating the underlying mechanisms by which genetic thrombophilic conditions interact with SARS-CoV-2 infection to manifest dermatological symptoms, as well as exploring potential implications for personalized treatment approaches and disease management strategies in thrombophilic individuals with COVID-19. Further investigations into the association between thrombophilic mutations and dermatological manifestations in COVID-19 could enhance our understanding of disease pathogenesis and inform targeted interventions for this specific patient population. However, further research is needed due to the limitations of a single case study.
In the field of epidemiology and pathogenic agents, more research and funding are much needed. For example, there is an increasing prevalence of infections caused by drug-resistant infectious agents, and bacteriocins, a group of antimicrobial peptides produced by bacteria, are being developed as a potential solution [8]. The urgency of increasing and advancing the in vivo models that both assess the efficacy of bacteriocins as antimicrobial agents and evaluate possible toxicity and side effects, which are key factors determining their success as potential therapeutic agents in the fight against infections caused by multidrug-resistant microorganisms, has been noted. Furthermore, plant extracts, essential oils, small antimicrobial peptides of animal origin, bacteriocins, and various groups of plant compounds (triterpenoids, alkaloids, phenols, flavonoids) have shown antimicrobial and antiviral activity [9,10]. Many existing studies utilize in silico and in vitro testing as an initial approach to ascertain the health advantages of natural products, both with and without a carrier system. In vitro tests might demonstrate the potential antimicrobial effect of the complex formed by the natural product and its delivery system. However, these findings should be supplemented with toxicity tests to prevent severe side effects. Subsequently, in vivo studies should be conducted using the most appropriate animal models. These studies aim to determine if certain compounds in the body could inhibit, block, degrade, or interfere with the drug.
Last but not least, the World Health Organization (WHO) is initiating a global scientific process to update the list of priority pathogens, which are agents that can cause outbreaks or pandemics. This process aims to guide global investment, research, and development, particularly in vaccines, tests, and treatments. Over 300 scientists will consider evidence on over 25 virus families and bacteria, as well as "Disease X", an unknown hypothetical pathogen that could cause a serious international epidemic [11]. The current list of priority pathogens includes COVID-19, Crimean-Congo hemorrhagic fever, Ebola virus disease, Marburg virus disease, Lassa fever, Middle East respiratory syndrome (MERS), Severe Acute Respiratory Syndrome (SARS), Nipah and henipaviral diseases, Rift Valley fever, and Zika [12]. The WHO develops R&D roadmaps and target product profiles and facilitates clinical trials for these priority pathogens. The list has become a reference point for the research community in managing future threats. It is important to note that the field of epidemiology is dynamic and constantly evolving. New pathogens can emerge and known pathogens can develop new resistance patterns. Therefore, ongoing surveillance and research are crucial for early detection and response to potential epidemics.
Conflicts of Interest:
The authors declare no conflict of interest. | 2024-03-21T15:03:20.933Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "a090848f76d701e21a70441f16f32ed02987b53a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0817/13/3/263/pdf?version=1710854385",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "779e5422d7f5bdf6c37676fa61133428efe0bec7",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
258265082 | pes2o/s2orc | v3-fos-license | The Metamorphosis. The impact of a young family member’s problematic substance use on family life: a meta-ethnography
ABSTRACT Purpose This meta-ethnography seeks to provide insight into the impact that a young family member’s problematic substance use has on family life. Background Problematic substance use (PSU) usually emerges in adolescence or young adulthood. Living with a family member with PSU is highly stressful. An understanding is needed of families’ experiences and their needs for adapted help and support, hence we have explored the impact of a young family member’s PSU on family life. Methods Systematic literature searches for qualitative research that explores the impact of PSU on family life and family relationships were conducted and the seven stages of meta-ethnography were used. Results Fifteen articles were included. The Metamorphosis was established as an overarching metaphor. Five main themes accompany this metaphor: stranger in the family; injuring chaos; no trust any more; family lock-up; and helpless societies. Conclusion The Metamorphosis reflects the all-embracing change experienced by families. Family members have felt powerless and helpless; often they wish to stay involved but do not know how. PSU at a young age can develop into lifelong chronic health challenges. Family-oriented help must be readily available in this phase as parents and siblings become deeply involved. Family involvement is seldom incorporated into routine treatment practices; such incorporation is therefore needed.
Introduction
Problematic substance use (PSU) usually emerges in adolescence or young adulthood. Most research suggests that adolescence (the period between the ages of 12 and 17) is a critical risk period for the initiation of substance use and that substance use may peak among young people aged 18 to 25 (World Drug Report, 2018). In addition, the legal substance alcohol is used by more than a quarter of all those aged 15 to 19 worldwide (WHO, 2018). PSU (e.g., alcohol and/or drug use) is a serious health problem associated with a severe threat of premature death (WHO, 2018). PSU at any age is difficult for close others, such as family members (Orford et al., 2010; Orr et al., 2014; Ray et al., 2007; Rodriguez et al., 2014). Living with a relative who excessively drinks or takes drugs is highly stressful (Lindeman et al., 2021; Orford et al., 2010). Orford (2017, p. 9) points out that although the harm experienced by all family members living with a member's PSU is similar to an extent, such family harm is variable and depends in important ways on relationship, social, and cultural factors. Orford states that it is essential to keep this variation in family members' experiences in mind. Lindeman et al. (2021) have produced a summary of the studies exploring the impact of an adult family member's PSU on family life. Their meta-ethnography states that rather than seeing the consequences for the family members simply as a "problem" or a "difficulty", the situation can be viewed as an intrusion that overshadows all other aspects of life and is a health risk to all involved family members (Lindeman et al., 2021). The studies refer to families' endless adaptation to a constantly changing intruder. Every new strategy of adaptation and coping initially brought hope to the families, but such hope soon turned to despair when it became clear that these strategies for adapting were inadequate (Lindeman et al., 2021).
This meta-ethnography focuses on the impact of PSU on families when a substance-using family member is a young family member aged 12-26. Some of the young family members aged 12-26 have already used substances for several years, but all of them are in an early phase of life. The period from adolescence to young adulthood is characterized by numerous developmental transitions, including changes in social roles, and it is a critical period for the development of substance use problems (Cadigan et al., 2019). The pathway from initiation to problematic use of substances among young people is complex and influenced by several factors (World Drug Report, 2018). These factors are both at the personal level (such as behavioural and mental health, neurological developments, and gene variations resulting from social influences), micro-level (parental and family functioning, connectedness to school staff and peers, and friend influences), and macro-level (the socioeconomic and physical environment) (Atherton et al., 2016;Moore et al., 2018;World Drug Report, 2018). Lacking connection to family, school, peers, neighbourhood and community influences adolescents' psychological well-being and predicts problems such as substance use (Jose et al., 2012). Risk factors such as trauma and childhood adversity, mental health problems, poverty and negative school climate are out of the individual's control and can make young people vulnerable to substance use (Jose et al., 2012;World Drug Report, 2018).
This meta-ethnography focuses on young family members, whether adolescents or young adults. A young family member is expected to have a different position than an adult one. In most countries, for example, parents usually help their children throughout most of their lives, and the direction of such help tends to remain stable until the parents reach the age range of 70 to 75 (Herlofson & Daatland, 2016). The tasks and responsibilities that adult family members are typically expected to carry out differ from what is expected of young family members.
In this study, family life is understood as a social process that unfolds over time, embracing the everyday life of the family and the daily experiences of relations in the family. The study contributes to what is known about how the substance use of family members aged 12 to 26 influences family life. This study aims to integrate and synthesize the research into family members' experiences of family life when a young family member's substance use is perceived as problematic. To acquire a more extensive understanding of family experiences and their needs for adapted help and support, the following research question is explored: What is the impact of a young family member's PSU on family life? Noblit and Hare's (1988) meta-ethnography for interpreting, integrating, and synthesizing qualitative studies has been chosen. We recognize and have reflected on the development of meta-ethnography and its critical discussion (Bondas & Hall, 2007a, 2007b; Britten et al., 2017; France et al., 2019b; Thorne, 2017a, 2017b). Booth (2019) describes meta-ethnography's dual heritage from both systematic reviews and primary qualitative research methodologies. The eMERGe guidelines (France et al., 2019a) are used to guide the review's reporting to achieve a transparent and accurate account (Appendix I). In meta-ethnography, the relationships between studies inform the basic analytical decisions when translating the included qualitative studies into each other. The term "translation" is understood here as taking findings from one study and identifying similar findings in another study, although these may be phrased differently (Noblit & Hare, 1988). What it strives for is a broader and deeper understanding in the form of a synthesis through themes and metaphors (Noblit & Hare, 1988). Meta-ethnographies of qualitative studies are helpful for developing both new knowledge and evidence-based practice (Bondas et al., 2017).
Search strategies and outcomes
Systematic literature searches were conducted by the first (SKL) and second (KBT) authors and an academic librarian, Marianne Nesbjørg Tvedt (MNT). In addition, an academic librarian, Gunhild Austrheim (GA), peer-reviewed the electronic search strategy. A wide search strategy was chosen to find studies with rich descriptions of family life and family relationships, which are complex and multifaceted phenomena. The following databases were considered to be relevant to the topic: CINAHL (EBSCO) (1981-), PsycINFO (Ovid) (1806-), SocINDEX (EBSCO) (1908-), Web of Science (1950-), and the Scandinavian database SveMed+. To increase coverage, the first author (SKL) conducted manual searches in the journals Journal of Substance Use, Substance Use & Misuse, Journal of Family Therapy, Family Relations, Addiction, and Nordic Studies on Alcohol and Drugs, and using a Scandinavian digital publishing platform for academic journals and books (Idunn). Backward and forward reference searches (Sandelowski & Barroso, 2007; Cooper et al., 2018) were completed twice, in January 2020 (SKL, KBT) and May 2021 (SKL). The reviewers examined the titles of all search results, then the abstracts and full texts of original qualitative articles; those considered suitable according to the research objective were included. PIOS elements were used to define eligibility criteria. PIOS is here an acronym for participants (P), intervention/phenomena (I), outcome (O) and study design (S). The inclusion criteria were based on the research question and related to family, next of kin, parent, child, sibling, and spouse (population); family members living with another family member's PSU (the phenomenon of interest); and qualitative peer-reviewed empirical studies (type of research). Studies with rich descriptions of family life and family relationships were included, while studies primarily focusing on the impact of PSU on individual family members' lives and coping without providing a description of family life were excluded.
Searches were performed without restriction, and an academic librarian (MNT) performed the search in April 2019 (Appendix II). An update search of the databases was performed in June 2020 (MNT).
The systematic search yielded 26,255 records (Figure 1). An additional 14 records were identified in the citation, reference, and journal searches. After reviewing the titles and abstracts and removing duplicates, 24,402 studies were assessed against the inclusion and exclusion criteria, and 133 studies (119 from databases and 14 from citation searches) were subsequently read in full text. At this stage, 114 studies were excluded. During both stages, the entire selection process was executed by SKL and KBT, using the Rayyan application (see Ouzzani et al. (2016) for more information about Rayyan). All full-text articles excluded at this stage of the selection process are presented in the excluded studies table, together with the reason for their exclusion (Appendix III).
Quality appraisal
The CASP checklist for qualitative research (Critical Appraisal Skills Programme, 2018) was used to critically appraise the studies that met the inclusion criteria (SKL, LL). The overall rating of each study's quality was defined according to the risk of bias. In this study, we use the term "risk of bias" to indicate the extent to which all aspects of a given study's credibility, design, and conduct have been evaluated. Questions in the checklist were answered with "Yes", "Can't tell", or "No". Studies were rated overall as follows: low risk of bias, two or fewer "Can't tell" answers; unclear risk of bias, more than two "Can't tell" answers or one "No" answer; high risk of bias, two or more "No" answers.
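The rating thresholds above form a simple decision rule. As an illustrative sketch only (not part of the original study; the function name and example data are our own), the rule could be expressed as:

```python
# Illustrative sketch of the overall risk-of-bias rating rule applied to
# CASP checklist answers. Each study's answers are a list of
# "Yes" / "Can't tell" / "No" strings, one per checklist question.

def rate_risk_of_bias(answers):
    """Return 'low', 'unclear', or 'high' per the thresholds in the text."""
    cant_tell = answers.count("Can't tell")
    no = answers.count("No")
    if no >= 2:
        return "high"        # two or more "No" answers
    if cant_tell > 2 or no == 1:
        return "unclear"     # more than two "Can't tell", or one "No"
    return "low"             # two or fewer "Can't tell", no "No" answers

# Hypothetical example: a study with two "Can't tell" and no "No" answers
print(rate_risk_of_bias(["Yes"] * 8 + ["Can't tell"] * 2))  # → low
```

Applied to the appraisal reported below, such a rule reproduces the three overall categories into which the nineteen appraised studies were sorted.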
Twelve studies were rated as having a low risk of bias and three as having an unclear risk of bias. Only two studies (Choate, 2015; Smith et al., 2018) adequately addressed the relationship between the researcher and participants. Four studies were rated as being of low methodological quality due to a high risk of bias and were hence excluded (Appendix IV).
Included studies
The 15 included articles are presented in Table I. The included studies represent the experiences of 168 persons: 135 mothers (two of them mothers of an adopted child), 22 fathers (one the father of an adopted child), five grandmothers, three stepfathers, one stepmother, and two other caretakers. However, some of the studies use the same sample (Groenewald & Bhana, 2016, 2017; Jackson et al., 2007; Mathibela & Skhosana, 2019, 2021; Usher et al., 2007).
Figure 1. The PRISMA 2020 flow diagram showing the study selection process (see Page et al., 2021).
Studies included in this meta-ethnography represent countries with different welfare systems and substance-use services. Eight studies are from South Africa, three from Australia, two from Canada, and two from Brazil. The young substance-using family members were between the ages of 12 and 26 and used different substances, including cannabis, alcohol, nyaope, and whoonga. Some studies do not differentiate between substances and use terms such as "substances", "drugs", "alcohol", and "psychoactive substances".
Data analysis and synthesis
Noblit and Hare (1988) describe meta-ethnography as seven overlapping phases. As recommended by Noblit and Hare (1988), the researchers read the included studies repeatedly in order to familiarize themselves with the data. Data extraction was conducted independently by the first, third, and last authors (SKL, LL, TB), and some issues were discussed in order to reach a consensus. Next, we analysed the relationships between the studies. Noblit and Hare (1988) state that studies can relate to each other in different ways: they can be compared as analogous (reciprocal translation) or in opposition (refutational translation), or they can be combined into "lines of argument" as aspects of the phenomenon. It was possible to analyse the studies as reciprocal translations, which involve a lengthy back-and-forth process in which the findings of each study are translated into the findings of the other studies. We also analysed whether findings could be understood as conflicting with each other and whether studies accounted for different aspects of the phenomenon (see Noblit & Hare, 1988).
A data matrix of each study's findings was constructed. After several readings and discussions within the team, we determined how the studies were related by juxtaposing the major findings (see Noblit & Hare, 1988). The translation was idiomatic, based on interpretation of meaning, and not literal, i.e., not word for word, and continued until all of the studies were translated. The refutational translation explains differences, exceptions, and inconsistencies across the studies (see Appendix V). Finally, a lines-of-argument synthesis was created. Based on the five themes, an overarching metaphor (Metamorphosis) was constructed to express the synthesis (see Noblit & Hare, 1988).
The iterative process meant interpreting interpretations of experience. We moved back and forth between the seven phases of meta-ethnography, including the original data, both citations and analysis. The authors' backgrounds have had a significant influence on this analysis. All of the authors have broad clinical and research experience in health and the social sciences, which has played an essential role in their in-depth and reflective interdisciplinary analysis and subsequent synthesis. The authors represent professions such as family therapist, clinical social worker, and nurse. All of the authors have experience or an interest in family perspectives on substance use and the mental health field. The multicultural, multiprofessional, multidisciplinary, and multimethod team collaborated throughout the process under the coordination of the first author. All of the authors contributed to critical and fruitful discussions in repeated virtual meetings on the emerging themes and the overarching metaphor, which was initially suggested by the lead author. Intra- and inter-reviewer negotiations and explications, as well as think-aloud strategies, were helpful and constructive (in accordance with Sandelowski & Barroso, 2007).
Results
Metamorphosis was adopted as an overarching metaphor based on the findings in the 15 included studies. In all of the included studies, the young family member's substance use problems were described as causing a colossal upheaval in the family. Family members did not recognize their young family member anymore, and family life changed dramatically. The Metamorphosis is one of Franz Kafka's (2006) best-known works. It tells the story of the salesman Gregor Samsa, who wakes one morning to find himself inexplicably transformed into a giant insect and subsequently struggles to adjust to this new condition. The Kafka-inspired metaphor illustrates the extreme change that all the included studies described. Five themes emerged during the translation and thematization of all the included studies: Stranger in the Family, Injuring Chaos, No Trust Any More, Family Lock-up, and Helpless Societies. To retain the readability of this article, we have chosen to present the occurrence of the themes in the included articles in Appendices VI and VII. These appendices show that most themes are present in all of the included articles.
Stranger in the family
The theme Stranger in the Family refers to how family members in all the included studies described the beginning of the metamorphosis. Family members were no longer able to see the child they had known before the substance use began. The included studies explained that the change emerged in layers. First, the young family member's behaviour changed. Subsequently, according to eight studies (Asante & Lentoor, 2017; Choate, 2015; Groenewald, 2018; Smith et al., 2018; Jackson et al., 2007; Takahara et al., 2019; Wegner et al., 2014), anger was the result, while six studies (Groenewald & Bhana, 2016, 2017; Jackson & Mannix, 2003; Kalam et al., 2018; Mathibela & Skhosana, 2019; Usher et al., 2007; Zerbetto et al., 2018) described that the young family members' attitudes changed. As one mother expressed this change in attitudes (Mathibela & Skhosana, 2019, p. 94): My son was such a respectful child but then he started to be very rude and disrespectful towards the teachers at school then we just knew something was not right. Then I confronted him wanting to understand what was going on.
As a consequence, more distance and anger developed in the family. Eventually, school results and peer connections changed, and many families experienced criminality. Consequently, parents felt that their influence on their young family member had diminished. As one mother described: "Nobody had the right to tell him what he could and couldn't do and if he wanted to do that in his home that was his business and nobody should be telling him that he can't do it" (Jackson et al., 2007, p. 327).
The included studies reported that the changes confused other family members for quite some time. It was common for parents not to see the seriousness of the situation until long afterwards and to perceive the first changes as part of normal teenage behaviour. Parents were looking for explanations other than substance use, such as mental health problems, school problems, or past events in family life. As a result, it took a long time for the family to contact substance use services. The exception was when families were confronted with direct evidence, such as overdose, hospitalization, or arrest, and took swift action.
Family members in the included studies responded to young family members' PSU in a reactive rather than a planned fashion. The included studies described parents trying to regain control through confrontations and emotional reactions, such as crying and anger. Choate (2015) states that parents who had their own experience of substance use understood the young person's problems in light of their own earlier experience, while other parents were confused and struggled to understand.
Injuring chaos
The theme Injuring Chaos reflects the families' growing desperation, stress, and increased inability to cope effectively. All of the included studies described the families trying almost anything to address their young family member's PSU. Parents attempted to manage the problems by using various strategies, such as constant vigilance, to control the young family member.
Some parents used strategies that, in retrospect, they saw as "crazy", as stated by the father and mother in the following (Choate, 2015, p. 468): For example, a father spoke of confiscating his son's drugs but then went on to say, "If you have any obligations for what I've confiscated, I'll cover it" (Participant 9, Father). A mother spoke of putting $800 in cash into an envelope so her son could pay off his drug debts that he then headed off to do, wondering "if I would ever see him again" (Participant 21, Mother).
The included studies indicated that family members often felt powerless. Everything they tried failed to make a difference, and nothing seemed to be effective.
No trust any more
All of the included studies referred to the young family members' PSU influencing their parents, siblings, and other family members such as grandparents. For many families, substance use-related problems are a multi-generational theme (Choate, 2015; Jackson et al., 2007; Kalam & Mthembu, 2018; Smith et al., 2018; Takahara et al., 2019; Usher et al., 2007; Wegner et al., 2014). Some parents had had an upbringing with substance-using parents, and some had experimented with substance use themselves (Choate, 2015; Smith et al., 2018). In some families, the grandmother had the role of primary carer (Takahara et al., 2019), and in other families the father was absent (Kalam & Mthembu, 2018; Mathibela & Skhosana, 2019, 2021). The included studies revealed an atmosphere of mistrust and tension between family members. Many of the families experienced terrifying situations, with numerous episodes of violence when the young family member was under the influence of drugs or looking for money to buy substances. Family members were constantly afraid. Parents feared their children would be killed or die: Because I, I didn't think anything, I can't think: it's 12 o'clock he didn't come home, maybe he is dead, maybe he's in hospital, maybe he is taken by the police. I think . . . I can't think because I'm feeling distracted . . . maybe he is dead, maybe he's in hospital, maybe he is taken by the police. (Groenewald & Bhana, 2017, p. 428) Some of the parents were afraid of being attacked by their children and worried about the safety of their other children: I am always scared when he needs these drugs because he becomes so violent and disruptive; you can see that he can kill anyone. (Mathibela & Skhosana, 2019, p. 99) The included studies also describe how the young family member's PSU generated family disruption and strained interpersonal relationships within the family.
Family members blamed each other or themselves, and their different ideas about how the PSU should be dealt with led to disagreement.
The included studies described siblings often being directly and indirectly affected by their sister's or brother's ongoing substance use. Examples of direct effects included being stolen from or assaulted. Some siblings felt so threatened by the drug-dependent sibling that they moved out of the family home (Wegner et al., 2014). The indirect effects included the parents' focus being solely on the substance-using child. As Choate (2015, p. 470) writes, "In some ways, the siblings lost their brother or sister as well as the family as a unit."
Family lock-up
Families who were experiencing the Metamorphosis described feelings of severe loneliness and isolation. The theme of Family Lock-up shows how family members isolated themselves from close friends, extended family, and the community. The Family Lock-up was described as being both self-selected and externally imposed. Family members felt unable to seek help or talk to other people about their problems because they felt that others could not understand their situation. They also avoided social engagement and community events because of the criminality of their substance-using family member (Asante & Lentoor, 2017). Some parents also felt isolated from neighbours: People don't want to talk to me anymore; others feel that I need to get him arrested, but how do I even do that? He has a case whereby he stole the neighbour's generator, but the magistrate said he cannot be arrested as he is underage and he was under the influence of drugs, but he should be put under my custody. My neighbour then accused me of bribing the police not to arrest him . . . I once heard from another neighbour that everyone is talking about me that I protect my son even when he steals from them. The other one told me that they are planning on beating him up if they catch him stealing again because they think I protect him. (Mathibela & Skhosana, 2019, p. 97)
Helpless societies
The theme of Helpless Societies reflects how services, communities, and societies often fail to help families who are living with the PSU of young family members. In countries where the welfare system has fewer resources and where citizens may experience insecurity, families affected by a young family member's PSU are left alone because help and support, for example from the police, child welfare, or substance use services, is rarely available, with severe consequences in the form of violence and crime. Still, across all the different cultures represented in the articles, family members seemed to be disappointed by the lack of assistance or the quality of the support provided.
Family members sought help mainly for the young family member with PSU and not for themselves. When they did seek help, the families often found that the help was insufficient or lacking. "No one had anything concrete to tell me, and no one seemed to be able to point me in the right direction" (Smith et al., 2018). The main reason for this lack of assistance was that substance-use services depended on the cooperation of the substance-using family member. "Drug rehab couldn't help because he didn't want to be helped" (Jackson & Mannix, 2003, p. 173).
Professionals might also offer solutions that conflicted with family values or needs. Parents also had the experience of substance-use services holding back information about their youth's situation due to confidentiality issues (Choate, 2015). The parents felt disempowered: It was like the counselor was reprimanding me, and I felt stupid. It is so confusing because when our children are young, we are told to protect them but there is no guidance when they become adolescents and are struggling. When I wasn't provided the help then I felt hopeless and so ashamed. (Smith et al., 2018, p. 517)
Discussion
The Metamorphosis metaphor reflects the transformative change families went through when a young family member started and continued to use substances problematically. We have chosen this Kafka-inspired metaphor to emphasize the all-embracing change in the families. In his work The Metamorphosis, Kafka (2006) shows Gregor Samsa's and his family's experience of Gregor's transformation into a giant insect. Kafka wrote that the Samsa family met a hideous fate, worse than any other they knew of. The results of this meta-ethnography show how all-embracing the consequences have been for both the young family member concerned and their families. The change that first transformed the young person continued as an avalanche that also changed family life, health, and relationships. The studies show how the families were "locked up" in the new situation and how inadequate and lacking the support they received from others was.
In this meta-ethnography, family situations are described mainly from the parents' perspective; they include the experience of mothers more than that of fathers and lack descriptions of what was experienced by siblings and the substance-using family member. Our findings are in line with those of Orford et al. (2010), who reviewed the experience of family members over the course of two decades of qualitative research. Female partners and mothers were the most represented in their study, and the males who participated were often fathers. Orford (2017, p. 14) also points out that the hardship for family members seems to be more significant in close family relations, particularly those in which the family is characterized by structural subordination with dependence and several burdens. In many of the studies included in this meta-ethnography the mother was the sole provider and had to cope with several practical and economic burdens without either a public or a private safety net. This affects the health situation of women who experience these responsibilities.
The parents' perspective on the metamorphosis is nevertheless important. In most countries, it is mainly the family that has responsibility for the children (Daatland et al., 2009). The parents' task is to support their children in their transition to adulthood, long after they have reached the age of majority. The included studies show how difficult it was for parents to realize and accept that their child had developed PSU. The parents spent a long time, sometimes several years, trying to find other reasons to explain the changes. When they could no longer rely on these other explanations, the parents often felt shame and guilt and found that their environment held them responsible. This was a new and highly stressful situation for the parents, one in which they wanted to help their child but did not know how to do so. At the same time, advice from substance use services was lacking or perceived as unhelpful. As a meta-ethnography on adults with PSU (Lindeman et al., 2021) concludes, several traits associated with PSU mean that it places incredible demands on families. Recovery from substance-use problems is a process with an unknown course, as PSU can end in recovery or in life-threatening and/or long-lasting illness. The distinction between the earlier meta-ethnography (Lindeman et al., 2021) and the current study is the impact of time. When a young family member starts using substances problematically, family members are very determined to find help and solutions to the young person's problems. They are at the beginning of the process, which often means they experience powerlessness and uncertainty about the outcome. With time, the families find ways to survive, which may mean feeling resigned and putting distance between themselves and the substance-using family member (Lindeman et al., 2021). There are also differences in the expectations of family members in terms of different positions, ages, and family roles.
In this meta-ethnography, the substance-using family member is young, a child in the family, and family life is described from the parent's perspective. The responsibility experienced in the relationship between the parent and their young child is different to that in other relationships, such as between adult siblings, of parents towards adult children, of a child towards a parent, or between partners.
The role of the parents in the life of a young substance-using family member requires a complex and nuanced discussion. The World Drug Report (2018) shows that the pathway from initiation to PSU among young people is complex and influenced by several factors. As the World Drug Report (2018) concludes, it is important to keep in mind that it is the critical combination of risk factors that are present and the protective factors that are absent at a particular stage in a young person's life that make the difference in their susceptibility to drug use. As explained in World Drug Report (2018, p. 6): Early mental and behavioural health problems, poverty, lack of opportunities, isolation, lack of parental involvement and social support, negative peer influences, and poorly equipped schools are more common among those who develop problems with substance use than among those who do not.
A lack of parental involvement and social support may nevertheless be part of the picture. For many families, PSU is a multi-generational theme, and some family members have a family history of a difficult childhood or childhood maltreatment (Zarse et al., 2019). For example, research using the Adverse Childhood Experience Questionnaire (ACE-Q) has provided substantial evidence of the link between adverse childhood experiences and mental and physical illness in adulthood (Felitti et al., 1998; Zarse et al., 2019). In our study, the multi-generational theme shows a family vulnerability, where troubles may have been part of family life for generations. Orford (2017, p. 14) offers an important hypothesis on variation in the accumulated burden for family members. The more a family member lacks financial or socio-economic resources and the more that family member faces other hardships, the greater the burden of PSU. The greater the accumulated burden that the family member bears, the more challenging it is to cope with a relative's PSU. As Orford explains the consequences for family members (Orford, 2017, p. 14): The greater the degree to which an AFM (affected family member) is exposed to family disharmony associated with a relative's addiction problem, the greater the level of AFM coping difficulty and strain. Family disharmony, or lack of family cohesion, may be a complex concept with multiple indications, but a key index of disharmony is the presence and extent of domestic violence including physical violence, emotional abuse and coercive control. This is an important wake-up call for substance use services, which still struggle to incorporate family involvement into routine treatment practices in many countries. However, there are already interventions that include family and network perspectives. For example, several systemic family-therapeutic approaches are well-suited to this (see Lorås & Ness, 2019).
The 5-Step Method for affected family members is also an acknowledged, research-based method that is suitable for reducing family-related harm from addiction. Research also shows encouraging results on the effects of family interventions on both PSU patterns and family functioning (Akram & Copello, 2013). This is also an important reminder of the need for differentiated services and support, in which those with a more significant accumulated burden and vulnerability across generations receive comprehensive help for their families. Our opinion is that intergenerational problems should not be reduced to individual problems and that it is important to keep the bigger relational picture in mind in health and social services.
This meta-ethnography also suggests how important it is to keep in mind the societal conditions of families. Several of the included studies are from South Africa. Qualitative studies from Europe, Asia, and the USA appear to be lacking, but the included studies nevertheless represent countries with different political, economic, and cultural situations. When there is a low level of safety and security in a society and the society lacks an inclusive welfare system, this exacerbates the lack of protection for both the young substance-using family member and other family members. As shown in the present study, such families faced crime, threats, and violence alone, without any assistance available to them, and for families such as these, homicide related to substance use was a daily threat. Perhaps the geographically varying interest in researching substance-using young people and their families can be linked to the extreme situations families can experience when society lacks an inclusive and easily accessed welfare system.
Strengths and limitations
A strength of this meta-ethnography is the rigorous methodology of a systematic review, with its strengths and opportunities for qualitative synthesis. Meta-ethnography allows the depth and scope to examine participants' meanings, experiences, and perspectives. Following the eMERGe reporting guidance improves the transparency and completeness of the research process, which is a quality indicator for meta-ethnography. The flexible methodology of meta-ethnography has allowed us to handle the large number of studies that the search yielded. The systematic, peer-reviewed, and extensive search strategy and the vast number of articles gave us the opportunity to see different perspectives and ways of describing family lives affected by young family members' substance use from our own interdisciplinary and multi-professional perspectives, including those derived from our personal experiences. Another strength is the fact that the meta-ethnography team includes experts in health, substance use, and family therapy, as well as experts in the meta-ethnography methodology.
The included studies varied in sample size and represented different countries and families. This ensured that there were detailed descriptions of family life and family relationships. However, it is important to keep in mind that the included studies represent parents' perspectives; the voices of members of the extended family, of siblings, and of the young family members with PSU themselves are not represented. The female perspective is also more strongly represented because more mothers than fathers are included. There also appears to be a lack of qualitative studies originating in Europe, Asia, and the USA. The studies do not provide systematic information about the participants' own problematic substance use or family history of violence.
Implications for future research
The fact that a family member's PSU affects family life and relations has been documented persuasively by several researchers (Lindeman et al., 2021), especially by Orford and his research group (Orford et al., 2013; Orford, 2017). We agree with Orford's (2017) suggestion that, while it is essential to acknowledge cross-cultural similarities in family members' situations, it is also important to look at the variations and nuances in the experience of family members. Based on the findings of the current meta-ethnography, further research is essential in order to address several critical knowledge gaps. More research is required on the impact on the family during different phases of substance use. It is also essential to include the perspectives of the family members with PSU on family life and relationships because this perspective is rarely included in the research. The young person is seen in this study only through the experience of family members, thus more research is needed in order to understand the young person's situation. As demonstrated by the results of this meta-ethnography, it is vital to include more sibling perspectives, as described both by the siblings themselves and by other family members. It is also important to remember gender perspectives and include the perspectives of fathers, brothers, male partners, and sons. More research is also required within different societies and societal conditions so as to better understand and support families with their accumulated burden. Finally, given the lack of studies from regions such as Europe, Asia, and the USA, more research from these regions is needed.
We have included all substances in this study. We think, nevertheless, that it is essential to examine the impact of specific substances on family life and to understand the differing implications that different substances (ranging from opioids, with their high risk of overdose, to cannabis, which is legal in some countries, to alcohol and doctor-prescribed medicines) may have for family life.
Family members are often most interested in getting health help for the substance-using family member. More research that focuses on and creates a nuanced understanding of young people's pathways towards the problematic use of substances and towards recovery processes is important for both families and professionals. We also need in-depth studies of intergenerational substance use problems and how to end a negative family spiral of problematic substance use. We need to know more about how to turn the course of young people's PSU towards recovery in cooperation with their families.
Conclusions
The overarching metaphor, the Metamorphosis, reflects the all-embracing change experienced by families with an adolescent or young adult family member with PSU. Substance use problems often start at this age and can develop into lifelong chronic health challenges, but they can also lead to recovery. This study shows how powerless and helpless the families often were and how alone and locked up they felt with the Metamorphosis.
Family-oriented help must be readily available in this phase of substance use problems. Parents and siblings become deeply involved when a young family member develops substance use. Family members often want to stay involved and provide support but do not know how. Family involvement is often not incorporated into routine treatment practices, and families in crisis are forced to make a big effort to get help. Kafka (2006, p. 26) wrote that, in the face of the metamorphosis, the Samsa family became so preoccupied with the problems in the present that it lost all ability to move forward. This reminds us of the included studies' accounts of the loneliness and powerlessness experienced by the family members of a substance-using young person or young adult. We especially hope that our results contribute to an increased awareness of the accumulated burden for some families. Multi-generational and multi-troubled families need extra attention because of their concerning situation. There may be fewer opportunities for them to protect and support not only substance-using young family members but also siblings, the families themselves and forthcoming generations. Another important conclusion is that complex social problems such as PSU require global political attention. The most vulnerable families and family members are often left on their own without support, as the present study indicates, and with the worst consequences of substance use problems, such as terrifying episodes of violence and other horrifying experiences.
"year": 2023,
"sha1": "53caa0a6341960e5f8beda94dfb5b3bc0437c7c0",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0645451096fd089f970b4a296a95120f30a4d368",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Preserving Data Aggregation for Smart Grid with User Anonymity and Designated Recipients
Smart grids integrate modern Internet of Things technologies with the traditional grid systems, aiming to achieve effective and reliable electricity distribution as well as promote clean energy development. Nowadays, it is an indispensable infrastructure for smart homes, wisdom medical, intelligent transportation, and various other services. However, when smart meters transmit users' power consumption data to the control center, sensitive information may be leaked or tampered with. Moreover, distributed architecture, fine-grained access control, and user anonymity are also desirable in real-world applications. In this paper, we propose a privacy-preserving data aggregation scheme for a smart grid with user anonymity and designated recipients. Smart meters collect users' power consumption data, encrypt it using homomorphic re-encryption, and then transmit the ciphertexts anonymously. Afterward, proxies re-encrypt the aggregated data in a distributed fashion so that only the designated recipients can decrypt it. Therefore, our proposed scheme provides a more secure and flexible solution for privacy-preserving data aggregation in smart grids. Security analyses prove that our scheme achieves all the above-mentioned security requirements, and efficiency analyses demonstrate that it is efficient and suitable for real-world applications.
Introduction
Electricity is important for modern civilization. However, power outages occur from time to time across the world, causing significant economic losses and social impacts. For example, in 2019, the Guri Hydropower Station, which provides 80% of Venezuela's electricity, was maliciously attacked, causing 21 out of the 23 states to experience power outages [1]. In the same year, a large-scale power outage also occurred in South America, affecting more than 40 million people in Argentina, Brazil, Uruguay, and Chile. During these outages, traffic lights ceased operation and all public transportation was suspended, throwing the affected cities into chaos [2]. One of the main reasons for such catastrophes is that the traditional power grid was designed more than a century ago and its effectiveness and robustness are far from satisfactory in the modern era [3].
In 2001, the concept of the "smart grid" was introduced, with the expectation of enhancing the traditional power grid using some of the latest information technologies, such as the Internet of Things (IoT) and computer networks [4]. In the smart grid, power transmission can be scheduled more intelligently and reliably thanks to the digitization and standardization of information [5]. Moreover, through two-way communications, the power grid can be continuously monitored in real time, reducing the probability of power outages. To alleviate the phenomenon of isolated data islands, various cryptographic primitives, such as symmetric and asymmetric ciphers, have been employed to realize privacy preservation and authentication in data sharing. Therefore, not only can the desirable security requirements be guaranteed for users' personal data, but the whole society can also benefit from more effective information utilization. Nowadays, many countries have adopted the development of smart grids as a national strategy.
As shown in Figure 1, the smart grid generally consists of three layers [6]. At the bottom layer, smart meters collect users' power consumption data and upload it into the grid system regularly. Based on this data, the electricity company can charge users for power usage. This information can also be used to set up flexible price packages that smooth power usage, e.g., higher prices in the peak period and lower prices in the other periods, enhancing the reliability and effectiveness of the smart grid system. At the middle layer, the cloud is responsible for forwarding the aggregated power consumption data to the power station. During this process, individual users' power consumption data must be kept secret from the cloud. Otherwise, users' living habits, as well as some other private information, may be leaked. Moreover, if a malicious attacker tampers with or forges the power consumption data during transmission, it will not only cause economic losses to the electricity company but also affect the power distribution of the entire grid. At the top layer, the power station generates electricity based on demand and the power is distributed through the substations. As it is expensive to store electricity, the amount of electricity generated by the power station must roughly match the real-time demand. Otherwise, it will reduce the reliability of the smart grid or even cause catastrophic events. At present, the design and implementation of a smart grid need to consider the following security features:
• Confidentiality: The data collected by the smart meters may contain users' sensitive information. If an attacker obtains this data, users' living habits could be leaked, so the power consumption data must be protected.
• Authentication: Power consumption data transmitted in the smart grid can be tampered with by a malicious adversary, so it is necessary to ensure that the adversary cannot modify, fabricate, or delete the transmitted data without being detected.
• User anonymity: The power consumption data is normally sent with the user's identity. When the cloud collects the data, users' identities may be exposed to the cloud. In many circumstances, such exposure is undesirable and users' identities should also be protected.
• No single point of trust: The decryption power should not be possessed by a single party. Otherwise, it could become a single point of trust in the system. For example, if this party is compromised, all sensitive information within the system can be read or leaked by this party. Instead, a distributed architecture should be employed.
• Designated recipients: Based on the minimum disclosure principle, fine-grained access control should be imposed on the aggregated power consumption data, e.g., its access should be strictly restricted to the designated recipients.
To address the above problems, this paper proposes a privacy-preserving data aggregation scheme with user anonymity and designated recipients. In our proposed scheme, smart meters first collect users' power consumption data, encrypt it using homomorphic re-encryption and then send the ciphertexts anonymously. The control centers aggregate the received data and re-encrypt it in a distributed fashion so that only the designated recipients can decrypt it. Moreover, novel verification techniques are employed to ensure that only legitimate users' data is accepted and an adversary cannot tamper with this data during transmission. Our main contributions can be summarized as follows.
1. Apart from the traditional security requirements, such as confidentiality and authentication, our proposed scheme also achieves user anonymity and no single point of trust. Moreover, it ensures that the aggregated data can only be accessed by the designated recipients, realizing fine-grained access control. Therefore, it provides a more secure and flexible solution for privacy-preserving data aggregation in smart grids.
2. Security analyses prove that our scheme achieves all these desirable security requirements, and efficiency analyses demonstrate that it is efficient enough to be implemented in real-world applications.
The rest of the paper is organized as follows. In Section 2, we briefly review some related works in the literature. The notations and preliminaries are outlined in Section 3. In Section 4, models and definitions are described. Then, our proposed scheme is introduced in Section 5, and its security and efficiency analyses are presented in Sections 6 and 7, respectively. Finally, we conclude in Section 8.
Related Works
Nowadays, it is widely accepted that smart grids are a fundamental infrastructure for renewable energy [7]. Smart meters are important devices to realize two-way communication in smart grids, so they are vulnerable targets for attackers [8,9]. Hence, it is worth investigating methods that securely transmit information within smart grids and build a flexible smart grid architecture [10,11]. It is necessary to build a security model to meet the security demands of a smart grid [12,13]. To address this issue, various privacy-preserving data aggregation schemes have been proposed in the literature [14][15][16]. Moreover, these works can be divided into two main categories: one protects users' power consumption data and the other protects users' identities.
In the first category, homomorphic encryption [17] is used as a popular building block, thanks to its feature of allowing operations on the ciphertexts. Lu et al. [18] have proposed a data aggregation scheme, EPPA, that uses super-increasing sequences to record multi-dimensional data and Paillier encryption to encrypt the data. The local gateway aggregates the encrypted data and sends it to the control center, which can then decrypt the aggregated data without learning any individual data. Later, Shen et al. [19] have proposed a modified data aggregation scheme in which the aggregated data of different regions can be aggregated in a hierarchical manner. Ding et al. [20] have proposed a novel encryption scheme that supports homomorphic re-encryption, in which the ciphertexts can be either decrypted or re-encrypted, both requiring two parties to operate in a distributed fashion. However, the majority of existing data aggregation solutions need to employ a trusted third party (TTP) [21][22][23][24]. To address this issue, Liu et al. [25] have proposed a scheme without a TTP. The trick is to select some users to construct a virtual aggregation area to mask the power consumption data of a particular user. Xue et al. [26] have proposed another data aggregation scheme without a TTP using secret sharing. However, it suffers from heavy communication overheads and is vulnerable to the man-in-the-middle attack.
To improve the efficiency of data sharing in smart grids, Zhao et al. [27] have introduced a fog-assisted data aggregation scheme that can reduce network bandwidth and realize smart pricing. Su et al. [28] have proposed a lightweight data aggregation scheme for smart grids with forward secrecy. However, its limitation is that if any user's data is missing, the aggregated data will become unreadable. To solve this issue, Huang et al. [29] have proposed a lightweight data aggregation scheme with fault tolerance. Xu et al. [30] have proposed a similar scheme that tolerates collusion between the aggregator and some entities, achieving a high level of fault tolerance. Although the above-mentioned schemes can achieve privacy protection for individual users' power consumption data, very few have considered fine-grained access control for the aggregated data.
In the second category, smart meters have to send the power consumption data anonymously. A pseudonym is a common technique used to achieve user anonymity. Tan et al. [31] have suggested using pseudo IDs instead of real identities, where these pseudo IDs are generated using a function with inputs of the group key, the time, and the number of smart meters. To hide the relationship between a user's identity and her pseudonym, Guan et al. [32] have suggested using the user's public key as her pseudonym. Each user can be associated with many pseudonyms, and a Bloom filter is used to verify the validity of a user's pseudonym. Liu et al. [33] have proposed a solution using a blind signature, but it does not consider the protection of individual users' power consumption data. Sui et al. [34] have proposed a method to realize strong anonymity through anonymous networks. Moreover, a reward mechanism is designed whereby a user who requests a reduction in power usage can revoke her anonymity and get some rewards. Yu et al. [35] have proposed a privacy-preserving power request scheme in which each smart meter is associated with a unique identifier and a ring signature is used to protect their identities. Cheung et al. [36] have proposed a scheme that achieves user privacy and data authentication, in which users generate a group of credentials and the control center signs them blindly. However, as the control center needs to generate many signatures, its computational overheads are very high.
Notations and Preliminaries
In this section, we describe the notations and briefly review some cryptographic primitives, such as ElGamal encryption, Schnorr signature, and Homomorphic re-encryption.
Notations
The notations used in the proposed scheme and their meanings are outlined in Table 1.

Table 1. Some notations.

Notation    Meaning
PK          The public key
m_i         User's power consumption data
M_i         The power consumption data recorded by RMM_i
M           The total amount of power usage across all areas
, S         The number of SM_i in each area and of RMM_i
RID         The smart meter's real identity
T*          The current timestamp of the RMM_i
ΔT          The allowed time delay in the system
CID         Computation identifier
L(*)        Bit length of the input data
||          The message concatenation operation
ElGamal Encryption
• Setup: Randomly choose x ∈ Z_q and compute y ≡ g^x (mod p). The public key is (y, p, g) and the private key is x.
• Encryption: Given the plaintext m, randomly choose a value r ∈ Z_q and calculate the ciphertext as C = (C_1, C_2) = (g^r, m · y^r).
• Decryption: The entity with the private key x can decrypt the ciphertext as m = C_2 · (C_1^x)^(-1) (mod p).
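A minimal, runnable sketch of these three steps; the parameters below are toy values chosen for illustration only and are far too small to be secure:

```python
# Textbook ElGamal over Z_p* with toy, insecure parameters.
import random

p, g = 467, 2                      # small prime and generator (illustrative)
x = random.randrange(1, p - 1)     # private key
y = pow(g, x, p)                   # public key y = g^x mod p

def encrypt(m):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), m * pow(y, r, p) % p    # C = (C1, C2) = (g^r, m*y^r)

def decrypt(C1, C2):
    # m = C2 * (C1^x)^(-1) mod p
    return C2 * pow(pow(C1, x, p), -1, p) % p

C1, C2 = encrypt(123)
print(decrypt(C1, C2))  # 123
```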
Schnorr Signature
• KeyGen: Randomly choose x ∈ Z_q and compute y ≡ g^x (mod p). The public key is (y, p, g) and the private key is x.
• Signing: Given the message m, the signer randomly selects k ∈ Z_q and computes r ≡ g^k (mod p), e = H(r, m) and s = xe + k (mod q). Now, (e, s) is the signature for m.
• Verifying: After receiving the signature (e, s), the verifier computes r' ≡ g^s · y^(-e) (mod p) and H(r', m). Then the following equation is verified: e = H(r', m). If it holds, the signature is accepted.
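The sign/verify equations can be sanity-checked with a toy implementation; the small subgroup parameters and the hash truncation below are assumptions for the demo, not secure choices:

```python
# Toy Schnorr signature: q | p - 1 and g has order q in Z_p* (insecure sizes).
import hashlib, random

p, q, g = 467, 233, 4

x = random.randrange(1, q)       # private key
y = pow(g, x, p)                 # public key

def H(r, m):
    # hash of (r, m), truncated into Z_q for the demo
    return int.from_bytes(hashlib.sha256(f"{r}|{m}".encode()).digest(), "big") % q

def sign(m):
    k = random.randrange(1, q)
    r = pow(g, k, p)
    e = H(r, m)
    s = (x * e + k) % q
    return e, s

def verify(m, e, s):
    # r' = g^s * y^(-e) mod p; accept iff e == H(r', m)
    r_ = pow(g, s, p) * pow(pow(y, e, p), -1, p) % p
    return e == H(r_, m)

e, s = sign("power usage report")
print(verify("power usage report", e, s))  # True
```

The verification works because g^s · y^(-e) = g^(xe+k) · g^(-xe) = g^k = r, so the recomputed hash matches.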
Homomorphic Re-Encryption
The ciphertext can be either decrypted or re-encrypted, and both operations require two entities to collaborate. The homomorphic property permits computations on encrypted data: the results remain in encrypted form, but once decrypted they equal the results of the same operations performed on the plaintexts. The re-encryption property allows a ciphertext to be transformed into another ciphertext of the same plaintext under a different public key, ensuring that only the designated recipients can derive the plaintext. The scheme works as follows:
• Setup: p and q are two safe primes, where p = 2p' + 1, q = 2q' + 1 and n = p' · q'. Denote QR as the cyclic group of quadratic residues in Z*_(n^2), and let g be a generator of QR.
• KeyGen: The data center (DC) and the access control server (ACS) generate their public and private key pairs (SK_DC = a, PK_DC = g^a) and (SK_ACS = b, PK_ACS = g^b). These two parties execute the Diffie-Hellman key exchange to obtain the system public key PK = (PK_DC)^(SK_ACS) = (PK_ACS)^(SK_DC) = g^(ab). Every designated recipient generates its public and private key pair (sk_i = k_i, pk_i = g^(k_i)).
• Encryption: Given a message m_i ∈ Z_n, one randomly chooses r ∈ [1, n/4) and generates the ciphertext [m_i]_PK = (A, B) = ((1 + m_i · n) · PK^r mod n^2, g^r mod n^2).
• Re-Encryption Phase I: DC chooses and publishes a computation identifier CID. It then computes h_1 = H((pk_j)^(SK_DC) || CID) and partially re-encrypts the ciphertext using h_1.
• Re-Encryption Phase II: After receiving the output of Phase I, ACS calculates h_2 = H((pk_j)^(SK_ACS) || CID) and completes the re-encryption, yielding the re-encrypted ciphertext [m_i]_(pk_j).
• Decryption: The designated recipient uses its private key to recompute h_1 and h_2 and decrypts [m_i]_(pk_j) to obtain m_i.
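The homomorphic property of the ciphertext form (A, B) = ((1 + m·n)·PK^r, g^r) mod n^2 can be demonstrated directly. The sketch below uses tiny, insecure parameters and, for simplicity, decrypts with the combined secret a·b rather than walking through the two re-encryption phases:

```python
# Additive homomorphism of the (1 + m*n) * PK^r mod n^2 ciphertexts.
# Tiny, insecure parameters; decryption uses the combined secret a*b.
import random

p_, q_ = 5, 11          # Sophie Germain primes (p = 11 and q = 23 are safe primes)
n = p_ * q_             # n = 55, per the Setup step
n2 = n * n

g = pow(2, 2, n2)       # a quadratic residue modulo n^2

a, b = 7, 13            # SK_DC and SK_ACS
PK = pow(g, a * b, n2)  # system public key g^(ab)

def enc(m):
    r = random.randrange(1, n // 4)
    return (1 + m * n) * pow(PK, r, n2) % n2, pow(g, r, n2)   # (A, B)

def dec(A, B):
    # A / B^(ab) = 1 + m*n (mod n^2), then recover m
    u = A * pow(pow(B, a * b, n2), -1, n2) % n2
    return (u - 1) // n

A1, B1 = enc(9)
A2, B2 = enc(17)

# Component-wise multiplication of ciphertexts adds the plaintexts.
A, B = A1 * A2 % n2, B1 * B2 % n2
print(dec(A, B))  # 26
```

This multiply-to-add behavior is exactly what lets the RMMs and GC aggregate readings later without ever decrypting individual values.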
Models and Definitions
In this section, we describe the system model, adversary model, and security requirements.
System Model
Our proposed system, as shown in Figure 2, consists of five types of participants: smart meters (SM), regional master meters (RMM), grid company (GC), operation center (OC), and power transmission units (PTU).
1. SM: It collects users' power consumption data and sends it to the RMM regularly. Note that this data needs to be sent anonymously in our proposed scheme. Moreover, each SM is assumed to contain some tamper-proof device, and its internal states can be protected.
2. RMM: It is responsible for aggregating users' power consumption data in some regions and it will forward the aggregated result to the GC.
3. GC: Once it receives the aggregated power consumption data from the RMMs, it aggregates the received data again and then performs the first phase of proxy re-encryption.
4. OC: It executes the second phase of proxy re-encryption and sends the outputs to the designated recipients.
5. PTU: They are the designated recipients of power usage data, such as power plants and data analysts. Each of them will use its private key to decrypt the received ciphertexts.
Communication Model
We assume that public channels are used to transmit data from SMs to RMMs and from RMMs to the GC. Moreover, we assume that a secure channel exists between the GC and the OC, and authenticated channels are used to transmit data from the OC to PTUs. In smart grids, there might be a large number of SMs and RMMs. Hence, it is impractical to assume that secure channels or authenticated channels exist among them. Moreover, as there is only one GC, one OC, and a few PTUs, the assumption of a secure channel between the GC and the OC, and some authenticated channels from the OC to PTUs is feasible. Note that the assumption of these channels allows us to focus on the protocol design without digging into the low level of technical details. It is well known how these channels can be implemented in practice using standard cryptographic primitives, e.g., encryption and digital signatures.
Adversary Model
In our proposed scheme, we assume that all participants are honest-but-curious. In other words, these participants will follow the protocol, but they will try to learn some sensitive information beyond their authorization. Moreover, we assume that the GC and the OC will not collude. The adversary A can eavesdrop on the exchanged messages through the public channel and the authenticated channel. In addition, it can also tamper with the data through the public channel but it neither intercepts nor falsifies the data through the secure channel.
Security Requirements
Under the above models, our design goal is to develop a privacy-preserving data aggregation scheme for smart grids with user anonymity and designated recipients. Specifically, the following security requirements are considered.
1. Correctness: If all participants follow the protocol, it will output the correct aggregated power consumption data to the designated recipients.
2. Confidentiality: The adversary A cannot learn the power consumption data of any individual user.
3. Authentication: Only data from legitimate participants will be accepted. If the data is tampered with during transmission, it can be detected.
4. User anonymity and un-linkability: The adversary A cannot extract the real identities of the smart meters. Moreover, A cannot link two messages that are sent by the same smart meter.
5. No single point of trust: The secret key is distributed among multiple entities, i.e., no single party can decrypt or leak sensitive information within the smart grid.
6. Designated recipients: The aggregated power consumption data can only be accessed by the designated recipients but no one else.
The Proposed Scheme
In this section, the privacy-preserving data aggregation scheme with user anonymity and designated recipients is introduced, which mainly consists of the following six algorithms: initialization, key generation, identity anonymization and encryption, verification and aggregation, proxy re-encryption, and decryption.
Initialization
In this phase, GC generates the system parameters. It first randomly chooses two large primes p and q , and then computes n = p · q . Denote G as a cyclic group of quadratic residues modulo n 2 , and g as a generator of G. GC also selects a secure hash function H: {0, 1} * → G.
KeyGen
All entities generate their own public and private key pairs. In addition, GC and OC jointly negotiate a key using Diffie-Hellman key exchange.
1. GC and OC randomly choose α and β respectively as their private keys. Their public and private key pairs are (SK_GC = α, PK_GC = g^α) and (SK_OC = β, PK_OC = g^β).
2. Each power transmission unit PTU_j generates its public and private key pair (sk_j = d_j, pk_j = g^(d_j)).
3. OC negotiates the key with GC to obtain the system public key PK = (PK_GC)^β = (PK_OC)^α = g^(αβ).
4. Finally, the system parameters pp = (g, G, PK) are made public.
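Step 3 is a standard Diffie-Hellman exchange; a toy sketch with illustrative (insecure) parameters:

```python
# The GC/OC key negotiation as plain Diffie-Hellman (toy group, not secure).
import random

p, g = 467, 2
alpha = random.randrange(2, p - 1)   # SK_GC
beta = random.randrange(2, p - 1)    # SK_OC
PK_GC, PK_OC = pow(g, alpha, p), pow(g, beta, p)

# Each party raises the other's public key to its own secret exponent:
PK_via_GC = pow(PK_OC, alpha, p)
PK_via_OC = pow(PK_GC, beta, p)
print(PK_via_GC == PK_via_OC)  # True: both obtain PK = g^(alpha*beta)
```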
Identity Anonymization and Encryption
In this phase, the smart meter encrypts its real identity and then sends the anonymous identity, power consumption data, and digital signature to the RMM. The following steps are executed during this phase.
1. Before smart meter SM_i sends the power consumption data m_i to RMM_i, SM_i needs to encrypt m_i and hide its real identity. SM_i generates its public and private key pair (sk_i = x_i, pk_i = g^(x_i)).
2. In each period, SM_i randomly chooses η_i ∈ Z_q and calculates H_ID_(i,1) = g^(η_i) (mod p) and H_ID_(i,2) = RID · (PK_GC)^(η_i). Then SM_i uses the public key PK to encrypt its data, producing C_(m_i), and signs the message, yielding σ_i. Finally, it sends {C_(m_i), σ_i, H_ID_i, T_i} to RMM_i, where T_i is the current timestamp.
Batch Verification and Aggregation
In this phase, RMM i checks the validity of received messages. In addition to the traditional verification methods, it also allows a batch of data to be verified simultaneously.
1. Traditional verification: RMM_i checks the validity of each received signature σ_i individually using the signature verification equation.
2. Batch verification: This check can be made more efficient using the small exponent test technique [37]: the verification equations of a batch of signatures are combined, each raised to a small random exponent, and checked in a single equation. If the combined equation does not hold, RMM_i rejects the messages.
3. Aggregation: RMM_i aggregates the encrypted data by computing the product C_(M_i) = Π C_(m_i) over all SM_i in the current area, which, by the homomorphic property, encrypts the sum of their readings. Finally, RMM_i sends C_(M_i), its corresponding signature, and the current timestamp T_j to GC.
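For illustration, the small-exponent test can be sketched with plain Schnorr signatures transmitted as (r_i, s_i); the group, hash, and message formats below are assumptions for the demo, not the paper's exact construction:

```python
# Small-exponent test over Schnorr signatures (r_i, s_i) from several meters.
import hashlib, random

p, q, g = 467, 233, 4            # toy group: q | p - 1, g of order q

def H(r, m):
    return int.from_bytes(hashlib.sha256(f"{r}|{m}".encode()).digest(), "big") % q

def sign(x, m):
    k = random.randrange(1, q)
    r = pow(g, k, p)
    return r, (x * H(r, m) + k) % q

# Five meters, each with its own key pair.
keys = [random.randrange(1, q) for _ in range(5)]
pubs = [pow(g, x, p) for x in keys]
msgs = [f"reading-{i}" for i in range(5)]
sigs = [sign(x, m) for x, m in zip(keys, msgs)]

def batch_verify(msgs, sigs, pubs):
    # Individually: g^s_i == r_i * y_i^e_i.  Batched with random small
    # exponents d_i: g^(sum d_i*s_i) == prod (r_i * y_i^e_i)^d_i.
    ds = [random.randrange(1, 2**20) for _ in sigs]
    lhs = pow(g, sum(d * s for d, (r, s) in zip(ds, sigs)) % q, p)
    rhs = 1
    for d, (r, s), y, m in zip(ds, sigs, pubs, msgs):
        rhs = rhs * pow(r * pow(y, H(r, m), p) % p, d, p) % p
    return lhs == rhs

print(batch_verify(msgs, sigs, pubs))  # True
```

The random exponents d_i prevent an attacker from crafting invalid signatures whose errors cancel out in the combined product.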
Proxy Re-Encryption
After receiving the message, GC first verifies the freshness and validity of the signature. It then aggregates and stores the received power consumption data. When a designated recipient requests electricity data, proxy re-encryption is performed.
1. GC verifies the freshness and correctness of the received data C_(M_i) and then aggregates them by computing the product C_M = Π C_(M_i) over all regional master meters.
2. PTU_j issues a request for the electricity data. After it is verified to be a legitimate designated recipient, GC and OC perform the two phases of proxy re-encryption on C_M, and OC sends the resulting ciphertext to PTU_j.
Decryption
1. Once PTU_j has received C_M from OC, it first calculates h'_1 = H((PK_GC)^(sk_j) || CID) and h'_2 = H((PK_OC)^(sk_j) || CID) using its private key sk_j.
2. Using h'_1 and h'_2, PTU_j decrypts C_M to recover the aggregated electricity data M.
3. Once PTU_j obtains the aggregated power consumption data M, it can perform dynamic power distribution according to the power consumption across the area.
Security Analyses
In this section, we analyze the security properties of the proposed scheme, proving that it meets the aforementioned security requirements.
Correctness
Theorem 1. If the data sent by the SM_i was not tampered with by the adversary A, the RMM_i will accept it.

Proof 1. Once the RMM_i receives the message {C_(m_i), σ_i, H_ID_i, T_i} from the SM_i, it can verify its authenticity by checking the signature σ_i, which always succeeds on untampered data by the correctness of the signature scheme. Therefore, if the data sent by the SM_i was not tampered with by the adversary A, the RMM_i will accept it.
Theorem 2. Given multiple messages and their corresponding valid signatures {σ_i}_(1≤i≤n) from different smart meters, the batch verification technique (3) can be used to verify their authenticity simultaneously.

Proof 2. The correctness of Equation (3) can be proved by expanding each individual verification equation and taking their product.

Theorem 3. The designated recipients can decrypt the received message with their own private keys to obtain the correct electricity data.

Proof 3. The correctness of Equation (6) can be verified by direct calculation, following the decryption phase.
User Anonymity and Un-Linkability
Theorem 4. Our proposed scheme achieves user anonymity and un-linkability, i.e., the adversary A with probability polynomial-time resources cannot link the identity sent by the same smart meter.
Proof 4.
When a SM_i sends its power consumption data to RMM_i, it first hides its identity RID to achieve anonymous transmission, computing H_ID_i = {H_ID_(i,1), H_ID_(i,2)} = {g^(η_i), RID · (PK_GC)^(η_i)}. As ElGamal encryption is semantically secure, the adversary cannot learn any plaintext information from a given ciphertext. Hence, A cannot learn the real identity of the smart meter. Moreover, the ElGamal ciphertext, which encodes the pseudo-identity, can be re-encrypted. The re-encryption can be performed multiple times and does not require knowledge of the private key. After re-encryption, the ciphertext appears random and cannot be linked to its previous form, again because ElGamal encryption is semantically secure. Therefore, if this pseudo-identity is refreshed regularly, A cannot link two identities to the same smart meter.
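The re-encryption argument can be made concrete: anyone holding only PK_GC can refresh the pseudo-identity ciphertext without changing the underlying RID. The sketch below uses toy parameters and encodes RID as a small group element:

```python
# Re-randomizing the pseudo-identity H_ID = (g^eta, RID * PK_GC^eta)
# without the private key; the refreshed ciphertext still decrypts to RID.
import random

p, g = 467, 2
x = 123                          # GC's private key (illustrative)
y = pow(g, x, p)                 # PK_GC

RID = 42                         # identity encoded as a group element
eta = random.randrange(1, p - 1)
C1, C2 = pow(g, eta, p), RID * pow(y, eta, p) % p

# Anyone can refresh using only the public key:
t = random.randrange(1, p - 1)
C1r, C2r = C1 * pow(g, t, p) % p, C2 * pow(y, t, p) % p

def dec(C1, C2):
    return C2 * pow(pow(C1, x, p), -1, p) % p

print(dec(C1r, C2r))  # 42
```

After refreshing, (C1r, C2r) = (g^(eta+t), RID · y^(eta+t)) is distributed exactly like a fresh encryption of RID, which is why the two forms cannot be linked.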
Confidentiality
In our scheme, all transmissions are encrypted, so the adversary A cannot obtain users' electricity data by eavesdropping on the smart meters.
Theorem 5.
If the semantic security of the encryption scheme [38] holds, our proposed scheme satisfies confidentiality against malicious GC or OC.
Proof 5.
Assume that there is a probabilistic polynomial-time adversary A that can break the confidentiality of our proposed scheme. Our goal is to use A to construct an algorithm S that breaks the semantic security of the encryption scheme in [38]. S is given the public parameters (n, g, pk_2 = g^a (mod n^2)), and the adversary A can construct pk_1 = g^b. The adversary A then chooses two messages m_0 and m_1 of the same length; we randomly select β ← {0, 1} and encrypt m_β as Enc(m_β) = {(1 + m_β · n)(pk_2)^r, g^r} mod n^2. The encrypted ciphertext (A, B) is sent to the adversary A. Based on (A, B), A further constructs a re-encrypted ciphertext (A', B') that encrypts m'_β = b · m_β, so A can compute the two candidate plaintexts m'_0 = b · m_0 and m'_1 = b · m_1. We can observe that (A', B') is an HRES ciphertext, and it has already been proved that if the encryption scheme in [38] is semantically secure, then the HRES scheme is also semantically secure. Since the HRES scheme is semantically secure, the adversary A cannot guess the value of β. Hence, our proposed scheme satisfies confidentiality.
No Single Point of Trust
The secret key of the system is shared by the GC and OC, and it is assumed that these two participants will not collude. Hence, neither of them can obtain the sensitive information within the system that is encrypted under the corresponding public key.
Designated Recipients
In the decryption phase, only the designated recipients can decrypt the ciphertext outputted by OC. The designated recipients have the private key sk j to compute h 1 = H((PK GC ) sk j ||CID) and h 2 = H((PK OC ) sk j ||CID). Hence, designated recipients can obtain the aggregated power consumption data M. Although A and B are transmitted over the communication network, and the adversary A is assumed to be able to intercept this information, A cannot decrypt C M because it cannot calculate h 1 = h 1 and h 2 = h 2 . Therefore, only the designated recipients can obtain the computational results, but no one else.
Comparison of Security Properties
Our proposed scheme is compared with several related schemes [20,33-35], and Table 2 presents the comparison results. As shown in Table 2, our scheme is the only one that satisfies all of the desirable security properties: user anonymity, un-linkability, confidentiality, correctness, and designated recipients.
Efficiency Analyses
In this section, we evaluate the performance of our proposed scheme in terms of computation and communication.
Computation Costs
The following notations are used to denote different operations in our scheme. Let C_e, C_m, and C_H denote one exponentiation operation, one multiplication operation, and one hash function evaluation, respectively. The bilinear pairing C_p incurs the highest computation cost; the other operations, such as hashing and addition, are much faster.
Each area contains a number of smart meters SM_i, and S is the number of regional master meters. In Table 3, the computation costs of all entities are listed, where "-", GW, OA, and CC denote not considered, gateway, trusted operation organization, and control center, respectively.
When a smart meter SM_i generates power consumption data {C_m_i, σ_i, HID_i, T_i}, the computational cost of providing user anonymity is considered negligible. Two exponentiation operations and one multiplication operation are required to encrypt the electricity data, and one hash operation is required to generate σ_i; thus, the computation cost of a smart meter is 2C_e + C_m + C_H. After receiving the power consumption data from the smart meters, the RMM first verifies the received data by performing a batch verification, which includes exponentiation, multiplication, and hash operations. In addition, the RMM aggregates the data from the different SM_i and encrypts the result, at a cost of 2C_e + C_m. The GC then aggregates the data from the different RMMs, which also costs 2C_e + C_m.
When a designated recipient requests electricity data from the OC, the OC forwards the request to the GC, which costs three exponentiation operations, one multiplication, and one hash operation. The OC then also needs to perform three exponentiations, one multiplication, and one hash operation. After the designated recipient receives the data, it spends 4C_e + 3C_m + 2C_H to perform the decryption operation. As hash functions can be computed much faster than the other operations, we ignore the computational cost of hash function evaluations.
Communication Costs
The communication overheads of our proposed scheme can be divided into two parts: power consumption data transmission and electricity data requests. In Figure 3, we compare the communication overheads of our scheme with those of related schemes, such as EPPA [18], Shen's scheme [39], and Jo's scheme [40]. We first consider the power consumption data transmission phase, in which smart meters transmit the power consumption data to the RMM. The data is of the form {C_m_i, σ_i, HID_i, T_i}, so the size of the power consumption data is S_s = |C_m_i| + |σ_i| + |HID_i| + |T_i|. A group element in G is 160 bits, and Z*_p contains elements of 160 bits. Each ciphertext is composed of two parts, so it occupies 4L(n) = 4096 bits if we choose a 1024-bit n. When we set |T_i| to 100 bits, the communication overhead of SM_i-to-RMM is S_s = 4516 bits. The communication overhead of RMM-to-GC is then S_R = |C_m_i| + |σ| + |T_j| = 4356 bits.
Next, we consider the electricity data request phase. The electricity data sent by the GC to the OC is in ciphertext, so the size of the communication overhead is S_G = 4096 bits. The OC still sends the PTU_j encrypted data after re-encrypting it, so the communication overhead is likewise S_O = 4096 bits.
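The overhead figures above can be checked with simple arithmetic. The sketch below uses only the sizes stated in the text (160-bit group elements, a 4096-bit two-part ciphertext for a 1024-bit n, and a 100-bit timestamp) to reproduce S_s, S_R, S_G, and S_O, and to show the linear growth in the number of smart meters that Figure 3 plots.

```python
# Sizes in bits, as stated in the text.
L_n = 1024
ciphertext = 4 * L_n   # two-part ciphertext: 4*L(n) = 4096 bits
sigma = 160            # signature sigma_i (group element)
hid = 160              # hashed identity HID_i
ts = 100               # timestamp T_i / T_j

S_s = ciphertext + sigma + hid + ts  # SM_i -> RMM
S_R = ciphertext + sigma + ts        # RMM  -> GC
S_G = ciphertext                     # GC   -> OC (ciphertext only)
S_O = ciphertext                     # OC   -> designated recipient
print(S_s, S_R, S_G, S_O)            # prints 4516 4356 4096 4096

# Total uplink overhead grows linearly with the number of smart meters.
for n_meters in (100, 500, 1000):
    print(n_meters, n_meters * S_s)
```

This also makes the Figure 4 observation visible: S_O does not depend on the number of regions, since the recipient always receives a single re-encrypted aggregate ciphertext.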
In Figure 3a,b, we plot the communication overheads versus the number of smart meters, which we vary from 1 to 1000 in intervals of 100. As shown in Figure 3a,b, the communication overheads in the grid increase linearly with the number of smart meters. In Figure 4, we present the relationship between the number of regions and the communication overheads. Because our scheme uses an encryption mode with a long ciphertext, its communication overheads are about twice those of the scheme proposed by Shen et al. [39]. However, an increase in the number of regions does not affect the communication overhead sent by the OC to the designated recipients.
Conclusions
In this paper, we have proposed a privacy-preserving data aggregation scheme for smart grids with user anonymity and designated recipients. The smart meters collect users' power consumption data, but this data is encrypted using homomorphic re-encryption, so that an adversary who intercepts it learns nothing and only the designated recipients can obtain the aggregated results. Moreover, users' identities are protected and there is no single point of trust. Therefore, it provides a more secure and flexible solution for privacy-preserving data aggregation in the smart grid. Performance analysis demonstrates that it is generally as efficient as the existing related schemes, while achieving more desirable security features.
In future work, we would like to investigate further how to remove the assumption that all participants are honest-but-curious, and introduce novel verification techniques to ensure that dishonest participants can be detected and identified. Moreover, the security proof for the authentication property suffers from a loose security reduction because security arguments for the Schnorr signature require the use of the Forking Lemma. In the future, we would like to explore efficient authentication techniques with a tight security reduction.
Efficacy and safety of escitalopram versus desvenlafaxine in the treatment of major depression: A preliminary 1-year prospective randomized open label comparative trial
Aim and Objective: To compare the efficacy and safety of escitalopram with desvenlafaxine in the treatment of major depression. Materials and Methods: A total of 60 patients with depression were randomized into two groups after meeting the inclusion criteria. In the first 3 weeks, escitalopram 10 mg/day was given, and then 20 mg/day for the next 3 weeks, in group 1 (n = 30). Desvenlafaxine was given as 50 mg/day in the first 3 weeks and 100 mg/day for the next 3 weeks in group 2 (n = 30). The parameters evaluated during the study were efficacy assessments by the Hamilton Scale of Rating Depression (HAM-D), Hamilton Rating Scale of Anxiety (HAM-A), and Clinical Global Impression (CGI). Safety assessments were done by the UKU scale. Results: Escitalopram and desvenlafaxine significantly (P < 0.001) reduced HAM-D, HAM-A, and CGI scores from their respective baselines. However, comparison between the groups failed to show any statistical difference at 3 and 6 weeks of treatment. Escitalopram and desvenlafaxine were both found to be safe and well-tolerated, and there was not much difference between the two groups, as evident from the UKU scale and their effects on various biochemical parameters. Conclusion: The present study demonstrated similar efficacy and safety in reducing depression and anxiety with both escitalopram and desvenlafaxine, but clinical superiority of one drug over the other cannot be concluded due to the limitation of the small sample size.
INTRODUCTION
Major depressive disorder is a prevalent and disabling illness worldwide, associated with significant impairment in physical and social functioning as well as increased morbidity and mortality. [1] At its worst, depression can lead to suicide. Fewer than 25% of those affected have access to effective treatment. The widespread use of serotonin reuptake inhibitors (SRIs) has been attributed to their improved tolerability, ease of use, and far greater safety. [2] However, the safety and tolerability of individual SRIs vary within the class, [3] and it is clear that many patients may not respond optimally to an acute course of any given treatment. Thus, there is a constant need to search for new, more effective treatments.
Hence, a preliminary 1-year prospective randomized open label trial comparing efficacy and safety of escitalopram with desvenlafaxine in the treatment of major depression was undertaken.
MATERIALS AND METHODS
A total of 60 patients from the psychiatry outpatient department of a tertiary level hospital diagnosed with major depression were included in the study after they fulfilled the inclusion criteria, with 30 patients in each group. Written informed consent was obtained from a first relative after explaining the nature and purpose of the study, along with reverse consent from the patients upon their clinical improvement. The study was approved by the institutional ethics committee, GMC, Jammu, vide number 36A/Pharma/IEC/2010/528 dated 27.10.2010. The flow diagram of the study is shown in Figure 1.
Patients with major depression of both sexes, in the age group of 18-55 years, with a Hamilton Scale of Rating Depression (HAM-D) score >20, who met the Diagnostic and Statistical Manual of Mental Disorders-IV criteria for depression, [4] with a minimum total score of 20 on the 24-item Hamilton Scale for Depression, [5] non-pregnant females, and those without any comorbid conditions such as diabetes mellitus (DM), hypertension (HT), and ischemic heart disease (IHD) were included in the study.
Exclusion criteria
Patients with the following conditions were excluded from the study: those taking any antidepressant in the last 6 weeks; intake of any drug which may cause a depressive state, psychosis, or anxiety; any chronic ailment; any substance abuse; and those patients intolerant or allergic to the drug.
In case of any exacerbation of disease or adverse drug reaction (ADR) during the study, it was decided that data from these patients would be included in the statistical analysis up to the time of their exclusion.
Patients meeting the eligibility criteria at the screening visit were randomly assigned in a 1:1 ratio to receive either escitalopram or desvenlafaxine. The total duration of the study was 6 weeks. The dose of the study medication was increased after 3 weeks in accordance with the approved labelling information. In the first 3 weeks, escitalopram 10 mg/day was given and then 20 mg/day for the next 3 weeks. The dose of desvenlafaxine in the first 3 weeks was 50 mg/day and 100 mg/day for the next 3 weeks. The primary endpoints were the HAM-D, the Hamilton Rating Scale of Anxiety (HAM-A), and the Clinical Global Impression (CGI).
Adverse event monitoring was done using the UKU scale. Furthermore, some biochemical parameters, such as blood sugar, liver function tests (LFT), and renal function tests (RFT), were also analyzed and compared between the two treatment arms for safety assessment.
Statistical analysis
All analyses were carried out with the help of the computer software SPSS version 15 for Windows. The evaluation of patients in the two groups was done by applying the HAM-D, HAM-A, and CGI, reported as mean ± standard deviation scores. The scores were reported as % change from baseline and were assessed by use of the paired/unpaired t-test or analysis of covariance, whichever was applicable. Categorical variables were reported as percentages, and their statistical analysis was done by use of the Chi-square test. A P < 0.05 was considered statistically significant. All reported P values were two-tailed. The analysis was done on an intention-to-treat basis.
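The within-group comparison against baseline described above can be illustrated with a paired t-test on synthetic scores. The values below are hypothetical, not the trial's data; the computation uses only the standard paired t statistic, with the result compared against a t-table at n-1 degrees of freedom.

```python
import math

# Hypothetical HAM-D scores for eight patients (NOT the trial's real data).
baseline = [24, 26, 22, 28, 25, 23, 27, 24]   # week 0
week6    = [12, 14, 10, 15, 13, 11, 16, 12]   # week 6

diffs = [b - w for b, w in zip(baseline, week6)]  # per-patient reduction
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)                   # paired t statistic
df = n - 1
print(round(t_stat, 2), df)  # compare against a t-table at df = n - 1
```

A t value exceeding the two-tailed critical value for the chosen alpha (e.g., 2.365 at df = 7, alpha = 0.05) corresponds to a significant reduction from baseline, which is the form of result the paper reports as P < 0.001.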
RESULTS
The baseline parameters of both groups are shown in Table 1. Both escitalopram and desvenlafaxine significantly (P < 0.001) reduced HAM-D, HAM-A, and CGI scores from their respective baselines [Table 2]. However, comparison between the groups failed to show any statistical difference at 3 and 6 weeks of treatment [Table 3].
Escitalopram and desvenlafaxine were both safe and well-tolerated, and there was not much difference between the two groups, as evident from the UKU scale [Table 4].
Escitalopram and desvenlafaxine were both safe and well-tolerated, and there was not much difference between the two groups as far as the effect on various biochemical parameters was concerned [Table 5].
DISCUSSION
In the present study, it was seen that escitalopram reduced depression levels, with a decrease from the baseline HAM-D value from 3 weeks onward till the end of the study. The above findings are in concurrence with various published reports in which these drugs, individually, were efficacious in decreasing depression and anxiety in comparison to placebo.
A similar reduction in depression by escitalopram is also reported by Burke et al. [6] and Leopolo et al., [7] who showed that escitalopram significantly improved MADRS scores in comparison to placebo, starting within one week of treatment and persisting till 8 weeks of therapy at both doses (10 mg and 20 mg).
In the present study, it was seen that desvenlafaxine produced a decrease from the baseline value of HAM-D from 3 weeks onward till the end of the study, that is, at 6 weeks. Desvenlafaxine has also been shown to cause a significant reduction in depression in several studies, [10][11][12][13] which found that mean HAM-D scores for desvenlafaxine 50, 100, 200, and 400 mg/day were significantly lower than those for placebo.
In the present study, there was a trend toward a higher overall response with escitalopram in decreasing depression and anxiety than with desvenlafaxine (F = 3.014, P = 0.057).
Our results are almost in accordance with those put forth by Soares et al., [14] who reported a similar reduction in depression with escitalopram and desvenlafaxine. Reductions in HAM-D 17 total score at the acute phase end point were similar for desvenlafaxine- and escitalopram-treated women (-13.6 vs. -14.3, respectively; P = 0.24). No significant difference was observed between groups at the continuation phase end points in the proportion of patients who maintained response (desvenlafaxine, 82%; escitalopram, 80%; P = 0.70).
Limited studies have directly compared the efficacy of escitalopram with desvenlafaxine in the treatment of major depression; otherwise, escitalopram has been compared with the SNRI class in other studies. Kornstein et al. [15] showed that the mean reduction in MADRS score from baseline to week 8 was significantly greater for the escitalopram group versus the SNRI group.
In the present study, it was seen that escitalopram produced a decrease from the baseline value of HAM-A from 3 weeks onward till the end of the study, that is, at 6 weeks, in line with previous work [16] that consistently demonstrated significant anxiolytic properties in addition to antidepressant efficacy with escitalopram. It has also shown efficacy in treating panic disorder and generalized and social anxiety disorders.
Bandelow et al. [17] documented the efficacy of escitalopram on symptoms of anxiety in patients with major depressive disorder. Similarly, escitalopram was shown to be significantly more effective than placebo in treating both anxiety symptoms and the entire depression in the total depressive population, as well as in depressive patients with a high degree of anxiety, by Cyril and Jaromir. [18] In the present study, it was seen that desvenlafaxine also produced a decrease from the baseline value of HAM-A from 3 weeks onward till the end of the study, that is, at 6 weeks. Similar reductions in anxiety with desvenlafaxine were put forth by Tourian et al.; [19] in this secondary analysis, desvenlafaxine-treated patients experienced significant improvement in anxiety symptoms compared with placebo-treated patients.
In the current study, there is no significant difference in the efficacy of escitalopram and desvenlafaxine in reducing anxiety (F = 2.596, P = 0.083). Similar results were shown by Soares et al., [14] where there were no differences in the efficacy of escitalopram and desvenlafaxine in reducing anxiety.
In the present study, it was seen that escitalopram produced a decrease from the respective baseline value of CGI from 3 weeks onward till the end of the study, that is, at 6 weeks. A similar reduction in CGI by escitalopram is also reported by Olié et al. [20] and Yevtushenko et al. [21] In the present study, it was seen that desvenlafaxine also produced a decrease from the respective baseline value of CGI from 3 weeks onward till the end of the study, that is, at 6 weeks. Similar reductions in CGI were seen in other studies also. [14,19] In the current study, there is no significant difference in the CGI scores of the escitalopram group and the desvenlafaxine group between subjects (P = 0.475). Similar results were seen by Soares et al., [14] with almost the same reduction in both the escitalopram and desvenlafaxine groups.
In the present study, escitalopram and desvenlafaxine were safe and well-tolerated. The UKU scale was used to evaluate side effects, and the mean values were 0.93 and 1.13 in the escitalopram and desvenlafaxine groups, respectively.
There is no statistically significant difference between the two groups (t = 0.96, P = 0.33). There were no discontinuations due to any other reasons, including adverse effects, in the present study. A similar side effect profile was put forth by Soares et al., [14] who showed that both desvenlafaxine and escitalopram were generally safe and well-tolerated.
The present study does have some limitations. First, it is a short-term study with a small number of patients, and the effect of the small sample size on the results of the study cannot be ruled out.
How a faith-methodology distinguishes theology from philosophy
Recent developments in both philosophy and theology have blurred the line between the two disciplines: philosophers of religion have sought to use philosophical methodology to answer explicitly theological questions, while at the same time the rise of analytic theology has led to philosophical tools increasingly being used for the theological task. This paper sets out to demonstrate that the most useful way to distinguish theology from philosophy of religion is to adopt a faith-methodology, and goes on to outline what this would look like in practice.
Tertullian's famous question 'What has Athens to do with Jerusalem?' concerns the relationship between theology (Jerusalem) and philosophy (Athens). Regardless of the answer, the question assumes some distinction between the two. Developments in modern philosophy, however, have complicated this assumption. Some philosophers of religion, for instance, have used their philosophical training to answer explicitly theological questions, 1 and the rise of 'analytic theology' is equally explicit about its desire to adopt philosophical tools for the theological task. 2 A more pressing question, then, is not what Athens has to do with Jerusalem, but whether the sprawling cities overlap such that they are no longer necessarily distinct.
One might wonder at the outset why this question matters: does it matter whether a particular work is called 'philosophy', 'theology', 'philosophy of religion', 'philosophical theology', or anything else? What is at stake, however, is whether the discipline of theology has anything unique to offer or whether it is rendered superfluous. The aim of this paper is to show how theology is distinct from philosophy of religion.
Words about God and loving wisdom: Defining theology and philosophy
In order to distinguish the disciplines of theology and philosophy, initial definitions must be offered. History or etymology can be guides but are ultimately insufficient to provide helpful definitions. 3 If philosophy is the love (philo) of wisdom (sophia), then anyone who loves wisdom is a philosopher. Similarly, if theology is simply words (logos) about God (Theos), then most human beings are theologians. This might correctly describe the kind of activity germane to each, but as definitions of a discipline they are too broad. On the other hand, defining theology or philosophy only as academic disciplines would be too narrow. 'Armchair philosophers' or 'armchair theologians' are those with an interest in the respective discipline without any academic training. 4 Good definitions of philosophy and theology will be sufficiently broad enough to include practitioners inside and outside the academy but sufficiently narrow enough to be helpfully descriptive and not include all who 'love wisdom' or 'speak about God'.
What is philosophy? Keith Yandell is right that there is no such thing as a 'noncontroversial answer' to this question. 5 He defines it as 'the enterprise of constructing and assessing categorical systems.' 6 This suggests that philosophy, broadly understood, is best defined by its methodology, or how it operates, rather than its object of study as most other disciplines do (e.g. 'biology' is the study of living things). William Lane Craig and J. P. Moreland call philosophy a 'second-order discipline' for this reason. A first-order discipline studies particular objects, but a second-order discipline studies other fields or disciplines. 7 Eleonore Stump argues that philosophy does seek something in particular (wisdom), but is distinct because what it seeks is not a concrete object but 'an abstract universal.' 8 Understood in this way, philosophy and theology are not identical since theology is, in some way, indexed to the study of God. 9 However, philosophy of religion is a sub-discipline of philosophy indexed to religious claims and practices. Yandell says it offers 'philosophically accessible accounts of religious traditions and assessing those traditions'. 10 Charles Taliaferro defines it as 'the philosophical examination of the themes and concepts involved in religious traditions' including 'alternative concepts of God or ultimate reality'. 11 The challenge, then, will be distinguishing theology from philosophy of religion.
What is theology? Like philosophy, there is no such thing as noncontroversial answer to this. By 'theology', I mean distinctly Christian theology. Andrew Torrance argues that theology in the Christian tradition is marked by a 'commitment to being "scientific."' 12 This 'refers to theology as an endeavor to understand a mind-independent object in a way that is true to the nature of that object.' 13 For him 'the task of theology should be characterized as a commitment to understanding God and all things in relation to God (GATRG) in a way that is accountable to the true nature of GATRG […] and takes into account God's self-disclosure.' 14 Torrance 'follows Aquinas' in this understanding of theology as science. In what follows, I adopt a scientific understanding of theology. 16 There are at least four features which mark the task of theology. First, theology is the study of God, a mind-independent person. Stump contrasts philosophy, which seeks the abstract universal 'wisdom', with theology, which seeks a person 'characterized by mind and will' who 'cannot be construed as an abstract universal.' 17 Second, theology depends on God's self-disclosure. 18 It is 'received in faith from a superior knowledge,' the principles of which cannot be derived without revelation. It, adds Torrance, 'is bound up with God's historical self-disclosure in the spatiotemporal order.' 19 Likewise, Thomas McCall says theology is the attempt to articulate 'what we may know of God as God has revealed himself to us.' 20 Third, the context of theology is Scripture and the church. The revelation on which theology depends is 'given to us in the scriptural revelation through the tradition of the church.' Although made possible by God, theology is a human task and occurs within this particular context. Fourth, theology is performed in faith. It must be 'received in faith'. Torrance argues that 'without the condition that is described as "faith"' the theologian 'has no recognition of GATRG.' 21
A faith-methodology: Distinguishing between theology and philosophy of religion
Using the definitions above, theology and philosophy of religion cannot be called identical. It is easy to distinguish philosophy of religion from theology. A systematic account of Buddhism, for instance, could be an example of philosophy of religion but not theology. However, distinguishing theology from philosophy of religion is more difficult. What is needed is some feature or characteristic that could be ascribed to theology but, necessarily, not to philosophy of religion. In this section I analyze three ways to distinguish theology from philosophy of religion before suggesting a fourth as a better way forward.
The first way to distinguish theology from philosophy is to argue that theology is a science and philosophy of religion is not. Torrance, for instance, argues that theology's object is 'mind-independent,' but philosophy of religion is 'mind-dependent' and reducible to 'human thoughts about GATRG.' 22 Jonathan Rutledge, however, doubts whether this approach is sufficient, since philosophy too can be defined as a science. 23 He says philosophy 'centrally involves some form of conceptual analysis' that includes concepts and propositions which, most philosophers agree, are mind-independent. Rutledge thinks Torrance's understanding of philosophy as necessarily mind-dependent demonstrates 'a fundamental misunderstanding of what philosophy is.' Since philosophy includes, for Rutledge, 'investigating mind-independent objects', 24 it can count as a science on Torrance's definition.
A second way to distinguish theology from philosophy of religion is to argue that the referent in each is different. This can take at least two forms. First, one might argue that the conception of 'God' used in philosophy of religion is different from that of theology. Theology requires one 'not merely to say things about God (or God-and-everything); it is to speak truly of God (so far as we can).' 25 This requires, adds Torrance, 'the revelatory activity of God' without which 'a person cannot know the triune God and, therefore, cannot know the one to whom theological words refer'. 26 In this form, the philosopher of religion might attempt to speak about God but fails to do so. Second, one might argue that the referents are different kinds of things. Rutledge, for instance, recognizes that the concepts and propositions used in philosophy of religion are not the same thing as 'God' because they are not a person. Theology refers to a person while philosophy of religion does not. Stump takes a similar view: […] the difference between theology and philosophy lies most centrally in this difference in what they seek. It makes a great difference to one's method of seeking and one's view of the nature of depth-in-understanding whether what one is seeking is an abstract universal such as wisdom or something with a mind and a will. 27 Philosophers of religion trade in concepts and ideas while theologians, first and foremost, study a person.
The third way to distinguish theology and philosophy of religion, similar to the second form of the second way, is based on their 'epistemological orientation.' 28 Stump argues that theology helps 'connect human persons to the person of God and to gain comprehension of him.' 29 Theology and philosophy, then, incorporate different ways of knowing; philosophy aims for knowledge-that while theology aims for personal knowledge. 30 The basic orientation of each, says Stump, is distinct 'in terms of the kind of epistemology each needs to pursue its aims'. 31 Similarly, Rutledge says that personal knowledge is 'exclusive and fundamental to the practice of theology.' 32 These ways have much to commend them, yet there remain intuitive problems. 33 The first way fails to offer a definition of science which excludes philosophy of religion. The second way says that the referent for each discipline is different, but it is difficult to see why this need be the case. The philosopher of religion, regardless of faith commitment, might refer to an all-powerful, perfectly loving person who created everything. 34 This, at least initially, appears to refer to the same person as Christian theology even if there remain significant differences. Moreover, why can the Christian philosopher of religion not, qua philosopher of religion, refer to the Triune God of Christianity in her work? The third way would require any work using concepts and propositions about the nature of God to be philosophy of religion and theology to be non-propositional. Theology, however, as a human task of speaking about God does use propositions and, without propositions, it would be difficult to consider it an academic discipline. These are not intended to be defeaters, but they are, to my mind, intuitive weaknesses of each way.
The fourth way to distinguish theology from philosophy of religion avoids these weaknesses. Theology can distinguish itself not principally in what kind of task it is (a science), nor in its object of study (God), nor in its kind of knowing (personal), but instead in how it is performed. The fourth feature of scientific theology is particularly important: theology is performed in faith. This could be understood as merely engaging in the discipline while having faith, but I understand it as something more fundamental to the task. Theology, unlike other disciplines including philosophy of religion, adopts a faith-methodology. A methodology is the mode of operation for a discipline; it is the structure or system that one operates within. Faith, for the Christian theologian, is characterized by a trust or allegiance to the Father known in Jesus Christ by the power of the Spirit. A faith-methodology, then, is a mode of operation whereby faith determines the practices and context of the discipline and is not merely incidental to it.
To clarify this further, we can see how a faith-methodology manifests itself in at least two ways. First, a faith-methodology manifests itself by inhabiting what John Webster calls 'a Christian culture.' 35 For Webster, 'a culture is a space or region made up of human activities. It is a set of intentional patterns of human action which have sufficient coherence, scope, and duration to constitute a way of life.' 36 By inhabiting a particular culture, theology remains, to some degree, a human task. In the third way of distinction, Stump and Rutledge are both correct to conclude that the task of theology is not reducible to propositional content. It is, as the study of a person, necessarily personal. Yet it continues to participate within a human culture and, therefore, continues to use human language (i.e. propositions) to describe God. 37 Theology adopting a faith-methodology remains, then, academically appropriate.
Theology, however, inhabits not just any culture but a distinctly Christian one. That is, a culture 'which seeks somehow to inhabit the world which is brought into being by the staggering good news of Jesus Christ'. 38 It cannot, then, be primarily conceived of as an academic discipline but an activity which is 'characterized by a certain regional specificity'that of the church of Jesus Christ. 39 Sarah Coakley, likewise, argues that theology is 'a form of intellectual investigation' but nonetheless a form 'in which a secular, universalist rationality may find itself significantly challengedwhether criticized, expanded, transformed, or even at points rejected.' 40 Webster thus insists that the better question for the relationship of theology and academy is not 'what does theology need to become in order to fit into the academy?' but rather 'what does the academy need to become in order to profit from Christian theology?' 41 Theology has an ecclesial vocation that is prior to, and more fundamental than, its academic vocation.
The second way a faith-methodology manifests itself is in the habits and practices germane to the method. Since the task of the theologian is primarily ecclesial, Webster argues that 'being a Christian theologian involves the struggle to become a certain kind of person, one shaped by the culture of Christian faith' 42 ; the theologian will be one continually disrupted. 43 Coakley adds that 'the task of theology is always, if implicitly, a recommendation for life. The vision it sets before one invites ongoing, and sometimes disorienting, response and change, both personal and political, in relation to God.' 44 Unlike other disciplines, theology, insists Webster, 'requires the cultivation not only of technical skills but also of habits of the soul.' 45 This means that certain practices, or habits, are not incidental to the task of theology, but fundamental to it. These practices include 46 but are not limited to:

• Prayer, in the sense that conversation with God in individual and communal prayer counts as reflection and engagement with God; 47
• Worship, in the sense that the liturgy of the church can contribute to a cognitive apprehension of God; 48
• Humility, in the sense that human language about God is subservient to the revelation of God; 49
• Submission to and engagement with Scripture and the church tradition, in the sense that the theologian perceives her task as within this particular tradition that is governed by particular norms and criteria for truth. 50

These practices, in the specific senses identified here, proceed from a faith-methodology. They are fundamental to theology because they are the way one comes to know God. Sarah Coakley points out that 'if one is resolutely not engaged in the practices of prayer, contemplation, and worship, then there are certain sorts of philosophical insight that are unlikely, if not impossible, to become available to one.' 51 Without these practices theological practice is deficient if not impossible.
It is of course true that practitioners of other disciplines might, for instance, pray while practising their discipline, but this is not a faith-methodology. In a faith-methodology, prayer can actually be a way the discipline is practised. This does not mean, however, that all of these practices, in the senses identified above, are always necessary for any theological work. 52 The theologian may, for instance, produce a work of theology without showing how worship in the liturgy is contributing to that work, but she will recognize worship as appropriate, and even normative to some degree, in the task of theology. By adopting a faith-methodology, the theologian practises her discipline in a way the philosopher of religion cannot.
One objection 53 to the faith-methodology as the distinguishing mark of theology is that the Christian philosopher of religion might adopt a faith-methodology just like the theologian. Moreland and Craig, for instance, argue that 'the task of the Christian philosopher of religion' need not differ from the theologian 'insofar as he philosophizes as a Christian'. 54 It is true that the Christian philosopher of religion, or practitioners of other disciplines for that matter, might have a deep personal faith in Jesus Christ and find that faith relevant to her work. Her methodology, however, determines the discipline in which she engages. A philosophical methodology performed by a person of faith is not the same as a faith-methodology. A philosopher can pray while practising philosophy, but the theologian prays in order to practise theology. 55 If a practitioner adopts a faith-methodology to speak about God, then the better conclusion would be that she ceases to do philosophy of religion and, instead, performs theology. There is no reason to think, after all, that an academic trained in one area (like philosophy) cannot do work in another area (like theology).
Conclusion
Theology and philosophy, or philosophy of religion, have much in common. Both operate within the academy, and both use propositions to describe the nature of God. Moreover, the work of many modern philosophers and theologians has brought the disciplines closer together. Yet they remain distinct primarily in their methodology. Theology's method is best characterized by a faith-methodology, a methodology which is determined by one's faith in Jesus Christ. By adopting this methodology, theology proves distinct from all other academic disciplines, including philosophy of religion.
understanding of theology and, as I say above, I simply assume this definition.
54 Moreland and Craig, Philosophical Foundations for a Christian Worldview, 464.
55 This is not to suggest that prayer is merely instrumental to the theologian. Webster is helpful here: 'Prayer is not to be thought of functionally or instrumentally. It is not a means to an end; it is not some kind of contemplative clearing of the mind or spirit, a positioning of oneself more accurately before the intellectual task […] Prayer is speech addressed to God in which we ask for help with an urgency and intensity which only makes sense if we really are in dire straits. Prayer […] corresponds to our incapacity, to our unsuitability for what is required of us, and therefore to utter necessity of the merciful intervention of God.' (The Culture of Theology, 143)
Ichthyotoxic Cochlodinium polykrikoides red tides offshore in the South Sea, Korea in 2014: III. Metazooplankton and their grazing impacts on red-tide organisms and heterotrophic protists
Copyright © 2017 The Korean Society of Phycology. http://e-algae.org pISSN: 1226-2617, eISSN: 2093-0860

Moo Joon Lee, Hae Jin Jeong*, Jae Seong Kim, Keon Kang Jang, Nam Seon Kang, Se Hyeon Jang, Hak Bin Lee, Sang Beom Lee, Hyung Seop Kim and Choong Hyeon Choi

Department of Marine Biotechnology, Anyang University, Incheon 23038, Korea; School of Earth and Environmental Sciences, College of Natural Sciences, Seoul National University, Seoul 08826, Korea; Advanced Institutes of Convergence Technology, Suwon 16229, Korea; Water and Eco-Bio Corporation, Kunsan National University, Kunsan 54150, Korea; Marine Biodiversity Institute of Korea, Seocheon 33662, Korea; School of Marine Life and Applied Sciences, Kunsan National University, Kunsan 54150, Korea
INTRODUCTION
Metazooplankton including copepods, cladocerans, chaetognaths, larvae of invertebrates, and hydrozoans are a major component of the marine ecosystem (Fulton 1984, Kang et al. 1996, Uye and Liang 1998, Calbet 2001, Gallienne and Robins 2001, Puelles et al. 2003, Kimmel and Roman 2004, Turner 2004, Tseng et al. 2009). They are consumers of phytoplankton and heterotrophic protists and are in turn prey for larval fish and other metazoans (Porter et al. 1985, Houde and Roman 1987, Turner et al. 1988, Stoecker and Capuzzo 1990, Sanders and Wickham 1993, Carlsson et al. 1995, Croll et al. 2005, Jeong et al. 2007, Waggett et al. 2008, Lazareva and Kopylov 2011). Thus, the dynamics of metazooplankton affect the population dynamics of diverse marine organisms. Red tides or harmful algal blooms are caused by exclusively autotrophic, mixotrophic, and heterotrophic protists (Smayda 1997, Aktan and Keskin 2017) and often cause great economic losses in the aquaculture and tourism industries (Smayda 1990, Glibert et al. 2005, Anderson et al. 2012, Fu et al. 2012). Thus, understanding the processes of red tides and predicting their outbreak, persistence, and decline are needed to minimize loss (e.g., Jeong et al. 2015). There have been many studies on the feeding of metazooplankton on red tide organisms, but fewer studies on the grazing impacts of metazooplankton on red tide organisms (Uye 1986, Turner and Granéli 1992, Turner and Tester 1997, Calbet et al. 2003). Thus, it is worthwhile to explore the effects of metazooplankton on red tide organisms in natural environments. In general, many heterotrophic protists are effective grazers on red tide organisms and are in turn prey for metazooplankton (Houde and Roman 1987, Jeong and Latz 1994, Carlsson et al. 1995, Jeong 1999, 2001, Tillmann 2004, Cohen et al. 2007, Park et al. 2013).
An assessment of predation impacts of metazooplankton on populations of heterotrophic protists that graze on red tide organisms is needed to understand the dynamics and interactions among these three components.
Red tides frequently occur in the South Sea of Korea, where there is a high concentration of aquacages (Lim et al. 2017). These red tides often cause great losses in the aquaculture industry (e.g., Park et al. 2013). The causative species for red tides in this region are Cochlodinium polykrikoides, Prorocentrum spp., Alexandrium spp., Ceratium spp., Karenia spp., and diatoms (e.g., Jeong et al. 2017).

In each cruise, the temperature, salinity, pH, and dissolved oxygen (DO) at each sampling depth were measured using a YSI Professional Plus instrument (YSI Inc.). The chlorophyll-a concentration (Chl-a) was measured as described in American Public Health Association (1995).
Metazooplankton samples were collected at every station by towing a 303-µm mesh, 45-cm diameter conical plankton net equipped with a flowmeter vertically from the bottom to the surface (or from 30 m if the bottom depth was more than 30 m) at every sampling interval from May to November 2014. Each plankton sample was poured into a 500-mL polyethylene bottle and preserved with 4% formalin. Species identification and determination of metazooplankton abundance were performed using dissecting and inverted microscopes at magnifications of ×40 and ×200.

On the first sampling day, water samples were collected from one ship at stations 101-115, whereas the other ship collected samples at stations 201-309. On the second day, one ship collected water samples at stations 501-507 and 601-608, whereas the other ship collected at stations 701-703, 801-805, and 901-904. The sampling time at the first station on each sampling day was between 07:30-08:00 h and at the last station between 14:00-15:00 h.

Water temperature and salinity in the water column were measured using two CTDs (YSI6600; YSI Inc., Yellow Springs, OH, USA and Ocean Seven; Idronaut S.r.l., Milan, Italy). The data obtained from the CTDs were calibrated.

The grazing coefficients (g, d-1) were calculated as g = CR × PC × 24, where CR (mL predator-1 h-1) is the clearance rate of the predator on a target prey at a given prey concentration and PC (predators mL-1) is the predator concentration. The CR values were calculated as CR = IR / X, where IR (cells eaten predator-1 h-1) is the ingestion rate of the predator on the target prey and X (cells mL-1) is the prey concentration. These CR values were corrected using Q10 = 3.2 (Hansen et al. 1997) because the in-situ water temperature and the temperature used in the laboratory for this experiment (20-22°C) were sometimes different.

Data on phytoplankton, including red-tide species and heterotrophic protists, were obtained from Jeong et al. (2017) and Lim et al. (2017), in which sampling was conducted at the same times and stations as in this study.
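As a worked illustration of this grazing-coefficient calculation (a minimal sketch, assuming the daily coefficient is the hourly clearance rate times predator concentration scaled by 24, with the Q10 correction of Hansen et al. 1997; the ingestion rate and concentrations below are invented example values, not data from this study):

```python
def q10_correct(cr_lab, t_insitu, t_lab, q10=3.2):
    """Adjust a laboratory clearance rate to the in-situ temperature
    using the Q10 rule: CR(T) = CR(T_lab) * Q10 ** ((T - T_lab) / 10)."""
    return cr_lab * q10 ** ((t_insitu - t_lab) / 10.0)

def grazing_coefficient(ir, prey_conc, predator_conc, t_insitu, t_lab=21.0):
    """Grazing coefficient g (d^-1) from an ingestion rate IR
    (cells eaten predator^-1 h^-1), a prey concentration X (cells mL^-1),
    and a predator concentration PC (predators mL^-1)."""
    cr = ir / prey_conc                      # CR = IR / X, mL predator^-1 h^-1
    cr = q10_correct(cr, t_insitu, t_lab)    # temperature correction
    return cr * predator_conc * 24.0         # h^-1 -> d^-1

# Invented values: 2 cells eaten h^-1 at 100 cells mL^-1 prey,
# 2,480 copepods m^-3 = 2.48e-3 copepods mL^-1, in-situ at lab temperature.
g = grazing_coefficient(ir=2.0, prey_conc=100.0, predator_conc=2.48e-3, t_insitu=21.0)
```

With these made-up numbers g is of order 10^-3 d-1, i.e., a negligible daily removal, comparable in magnitude to the small copepod grazing coefficients reported in the Results.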
Grazing impact
The calanoid copepods Acartia spp. are known to feed on various species of dinoflagellates such as Cochlodinium polykrikoides, Prorocentrum donghaiense, Polykrikos kofoidii, and Oxyrrhis marina (Jeong et al. 2001, Kim 2005). Thus, it was assumed that the ingestion rate of Acartia spp. on the dinoflagellates C. polykrikoides, P. donghaiense, P. kofoidii, and O. marina was the same as the ingestion rate of total calanoid copepods (Appendix 1). However, C. abdominalis, Neocalanus sp., S. tenellus, and E. nordmanni were only present during one sampling.
Abundance of metazooplankton
The abundance of total metazooplankton at each sampling during this study ranged from 1 to 13,131 individuals (inds.) m-3, and the mean abundance was 297-1,119 inds. m-3 (Table 1, Fig. 3). The mean abundance of total metazooplankton was high from Jun 23 to Aug 13 (Fig. 3B).
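Abundances like these are obtained by dividing net-tow counts by the volume of water filtered. A minimal sketch using the net geometry given in the Methods (45-cm mouth diameter, vertical tow), assuming 100% filtration efficiency and an invented count; in the actual sampling, the flowmeter reading would provide the filtered volume:

```python
import math

def inds_per_m3(count, net_diameter_m, tow_length_m):
    """Individuals per cubic metre from a vertical net tow:
    volume filtered = mouth area x tow length (assumes 100% filtration)."""
    mouth_area = math.pi * (net_diameter_m / 2.0) ** 2  # m^2
    volume = mouth_area * tow_length_m                  # m^3
    return count / volume

# Invented count: 1,500 individuals from a 30-m tow with a 0.45-m diameter net.
abundance = inds_per_m3(1500, 0.45, 30.0)
```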
Spatiotemporal distributions of the total metazooplankton
Metazooplankton predominantly inhabited the marginal and shallower regions (<30 m depth) of the study area. From May 7 to Aug 13, metazooplankton occurred in high concentrations (up to 13,200 inds. m-3) near the shallow waters of Goheung and Yeosu (Fig. 4). In contrast, on Sep 1, metazooplankton were regionally concentrated
Data process
The spatiotemporal distributions of each taxon of the metazooplankton communities were plotted with Surfer (Golden Software, LLC, Golden, CO, USA). The correlation coefficients between physical, chemical, and biological properties were calculated using Pearson's correlation (Conover 1980, Zar 1999). By combining field data on the abundance of predator and prey species with the ingestion rates of the predators on the prey obtained from the literature, with some assumptions, we estimated the grazing coefficients.
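The Pearson correlation used above reduces to a few lines of code; a stdlib sketch on made-up toy series (not data from this study):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy check: a perfectly linear relationship gives r = 1,
# a perfectly inverted one gives r = -1.
r_pos = pearson_r([1, 2, 3, 4], [10, 20, 30, 40])
r_neg = pearson_r([1, 2, 3, 4], [40, 30, 20, 10])
```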
Environmental properties
The mean surface water temperature during the study varied from 15.6 to 24.7°C with the highest temperature occurring in August 2014 and the lowest temperature in May ( Fig. 2A). The mean surface salinity ranged from 31.4 to 34.1 with the highest salinity occurring in May and the lowest salinity in August (Fig. 2B). The pH ranged from 8.02 to 8.22 and the Chl-a from 1.0 to 3.6 µg L -1 with a peak on Sep 1 (Fig. 2C & D). DO ranged from 7.2 to 8.3 mg L -1 (Fig. 2E).
Correlations between abundance of major metazooplankton taxa and environmental factors
The abundance of total metazooplankton was significantly positively correlated with the concentration of total phototrophic dinoflagellates, but negatively correlated with pH ( Table 2). The abundance of copepods was significantly positively correlated with salinity, but negatively correlated with water temperature (T), pH, and DO (Table 2). Moreover, the abundance of cladocerans was significantly positively correlated with DO ( Table 2).
The abundance of invertebrate larvae was significantly positively correlated with T, Chl-a, and the abundance of phototrophic dinoflagellates and tintinnid ciliates (TCI), but negatively correlated with salinity and pH (Table 2). Furthermore, the abundance of the chaetognaths was significantly positively correlated with T, but negatively correlated with salinity, pH, and DO (Table 2). The abundance of chaetognaths was also significantly positively correlated with Chl-a and TCI (Table 2). In addition, the abundance of the hydrozoans was significantly positively correlated with temperature, but negatively correlated with pH (Table 2).
Grazing impact by calanoid copepods on redtide organisms and heterotrophic dinoflagellates
When the abundance of the phototrophic dinoflagellate C. polykrikoides and co-occurring calanoid copepods was 1-2,990 cells mL-1 and 1-2,480 inds. m-3, respectively, the calculated grazing coefficient attributable to calanoid copepods on co-occurring C. polykrikoides was up to 0.018 d-1 (Jeong et al. 2017) (Fig. 6).

Chl-a concentrations coinciding with the maximum abundance of copepods were much lower than those in Masan Bay and Fukuyama Harbor (Uye and Liang 1998, Kim et al. 2013a). Thus, the lower Chl-a in this study may be partially responsible for the lower maximum abundance of copepods.
Effect of environmental factors on the abundance of metazooplankton taxa
During this study, the abundance of total metazooplankton was not significantly affected by T, S, DO, and the concentrations of NO3, PO4, and SiO2. Furthermore, it was also not significantly affected by the abundance of diatoms, euglenophytes, cryptophytes, heterotrophic dinoflagellates, tintinnid ciliates, and naked ciliates. However, it was significantly affected by the abundance of phototrophic dinoflagellates. During this study, four phototrophic dinoflagellate species, P. donghaiense, C. furca, A. fraterculus, and C. polykrikoides, formed red tides (Jeong et al. 2017). Thus, dinoflagellate red tides may positively affect the abundance of total metazooplankton (Griffin et al. 2001, Turner and Borkman 2005, Jansen et al. 2006). During this study, P. donghaiense formed red tides from June to July (Jeong et al. 2017). The abundance of the copepods C. affinis, L. euchaeta, L. rotunda, and P. parvus, barnacle nauplii, decapod zoeae, fish larvae, chaetognaths, hydromedusae, siphonophores, and the cladoceran P. avirostris is significantly positively correlated with that of P. donghaiense.
Abundance of metazooplankton
The maximum abundance of total metazooplankton obtained during this study (1.3 × 10^4 inds. m-3) is comparable to that in Jinhae Bay, Kangjin Bay, the Seomjin River Estuary, and the coastal waters of Yeosu, which are located in the South Sea of Korea, and in the Balearic Sea of Mallorca, but slightly lower than that in Masan Bay (Korea), the Newport River Estuary (UK), and the NW Mediterranean (Table 6). The Chl-a concentration in the water from which the maximum abundance of total metazooplankton was obtained (2.4 µg L-1) is also comparable to that in the Seomjin River Estuary, Yeosu, the NW Mediterranean, and the Balearic Sea of Mallorca, but much lower than that in Masan Bay (Puelles et al. 2003, Youn et al. 2010, Oh et al. 2013). Thus, in general, the maximum abundance of total metazooplankton is likely to be affected by Chl-a. Furthermore, the maximum abundance of total copepods during this study (0.4 × 10^4 inds. m-3) is comparable to or slightly lower than that in Kangjin Bay, the coastal waters of Yeosu, the NW Mediterranean, and the Balearic Sea of Mallorca, but considerably lower than that in Masan Bay, Fukuyama Harbor (Japan), the NE Atlantic Ocean, and the Pearl River Estuary (China) (Table 6).

The calculated grazing coefficients attributable to total calanoid copepods on co-occurring Polykrikos spp. were up to 0.008 d-1 (i.e., up to 0.8% of the population of Polykrikos spp. was removed by total calanoids in a day). Therefore, calanoid copepods may not control populations of Gyrodinium spp. and Polykrikos spp. During this study, the heterotrophic dinoflagellates Gyrodinium spp. were abundant during the red tides dominated by P. donghaiense (Jeong et al. 2017, Lim et al. 2017). Furthermore, the calculated grazing coefficients attributable to Gyrodinium dominans / G. moestrupii on co-occurring P. donghaiense were up to 0.58 d-1 (i.e., up to 44% of the population of P. donghaiense was consumed in 1 d). Therefore, populations of P. donghaiense might be affected by the grazing of Gyrodinium spp., but the grazing impact could be lowered by the predation of calanoid copepods. Moreover, C. polykrikoides populations were affected by grazing of Polykrikos spp. (Lim et al. 2017). However, the grazing impact may not be lowered by predation of calanoid copepods.
Red tides have occurred in coastal waters of many countries (Holmes et al. 1967, Hallegraeff 1993, Anderson 1997, Sordo et al. 2001, Jeong et al. 2017). The results of this study suggest that red tides could affect the abundance of metazooplankton and, in turn, the grazing impact by metazooplankton could sometimes affect the abundance of red-tide organisms in the South Sea of Korea. Thus, to understand interactions among red-tide organisms, heterotrophic protists, and metazooplankton, and the roles of heterotrophic protists and metazooplankton in the dynamics of red-tide organisms, the temporal and spatial variations in their distributions should be simultaneously investigated. Additionally, the grazing impact by heterotrophic protists and metazooplankton on populations of red-tide organisms and, in turn, the predation impact on populations of heterotrophic protists should be quantified.

This study was supported by grants (NRF-2015M1A5A1041806; NRF-2017R1E1A1A01074419) and the Pilot project for predicting the outbreak of Cochlodinium polykrikoides red tides funded by MSI (NRF-2014M4A1H5009428), and the Useful Dinoflagellate Program of the Korea Institute of Marine Science and Technology Promotion (KIMST) funded by the Ministry of Oceans and Fisheries (MOF).

The abundance of hydromedusae, siphonophores, and the cladoceran P. avirostris is significantly positively correlated with that of P. donghaiense. However, there have been no studies on feeding by these metazooplankton taxa on P. donghaiense, and thus it is worthwhile to explore this topic. During this study, Ceratium spp. formed red tides from July to August (Jeong et al. 2017). The abundance of the cladocerans E. tergestina and P. avirostris, barnacle nauplii, and echinoderm larvae is significantly positively correlated with that of the two Ceratium species. The cladocerans E. nordmanni, P. avirostris, and Podon intermedius are known to feed on diverse algal species including Ceratium spp. (Katechakis and Stibor 2004).
However, there have been no studies measuring ingestion rates of these metazooplankton taxa on Ceratium spp. and thus it is worthwhile to investigate.
Grazing impact by dominant metazooplankton on red tide organisms
In this study, calanoid copepods dominated metazooplankton assemblages at most sampling times. The calculated grazing coefficients attributable to calanoid copepods on co-occurring C. polykrikoides were up to 0.018 d-1 (i.e., up to 1.8% of the population of C. polykrikoides was removed by calanoids in a day). Furthermore, the calculated grazing coefficients attributable to calanoid copepods on co-occurring Prorocentrum spp. were up to 0.029 d-1 (i.e., up to 2.9% of the population of Prorocentrum spp. was removed by calanoid copepods in a day). Therefore, calanoid copepods may not control populations of C. polykrikoides or Prorocentrum spp. Furthermore, this maximum grazing coefficient is much lower than that by heterotrophic dinoflagellates and ciliates (Lim et al. 2017). In Masan Bay in 2004-2005, the grazing impact by the dominant calanoid copepod Acartia spp. on populations of Prorocentrum minimum was also much lower than that by the heterotrophic dinoflagellates and ciliates. The abundance of copepod grazers is usually much lower than that of heterotrophic protist grazers although the ingestion rates of the former are greater than those of the latter. Thus, lower copepod abundance may be partially responsible for a lower grazing impact on populations of red tide dinoflagellates.
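The percentage figures quoted with each grazing coefficient follow from its exponential-decay interpretation: a coefficient g (d-1) removes a fraction 1 − e^(−g) of the prey population per day. A quick check of the values in the text:

```python
import math

def fraction_removed_per_day(g):
    """Fraction of the prey population removed in one day
    by a grazing coefficient g (d^-1), assuming exponential decay."""
    return 1.0 - math.exp(-g)

# Coefficients quoted in the text and their percentage interpretations.
for g in (0.018, 0.029, 0.047, 0.58):
    print(f"g = {g:5.3f} d^-1 -> {100 * fraction_removed_per_day(g):.1f}% removed per day")
# prints 1.8%, 2.9%, 4.6%, and 44.0%, matching the figures in the text
```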
The calculated grazing coefficients attributable to total calanoid copepods on the co-occurring heterotrophic dinoflagellates Gyrodinium spp. were up to 0.047 d-1 (i.e., up to 4.6% of the population of Gyrodinium spp. were removed by total calanoids in a day).

This study was also supported by the Management of marine organisms causing ecological disturbance and harmful effect Program of KIMST and a Research Institute of Oceanography, SNU award to HJJ.
Intensity matching in cuttlefish
For efficient background matching it is essential that animals closely match a set of salient visual statistics of their visual surroundings. The mean intensity of a background is a key statistic, because it can be estimated across a large range of viewing distances by a simple computation. We investigated how the dynamic neuromuscular camouflage system of the cuttlefish Sepia officinalis responds to changes in the mean background intensity of uniform backgrounds. We find that cuttlefish adapt their body intensity in response to variations in the mean background intensity, yet show biases in their body intensity beyond what can be predicted from the limited dynamic range of their camouflage system. On sandy backgrounds of various reflectance values their uniform body patterns maintain a constant yellow hue. This color constancy may represent an example of a color prior in a colorblind animal because a yellow body color would be the optimal hue for camouflage on sands typically encountered in their natural environment. Cuttlefish adapt their appearance to the background via a dynamic process composed of a complex mixture of intensity transients spanning timescales from the subsecond to the minute range. In very young animals camouflaging on dark sands the masquerade strategy is preferred over background matching. Masquerade is implemented by combining partial background matching with frequent expression of disruptive components. We furthermore provide an objective definition of disruptive components using hierarchical clustering and automated image analysis, thus highlighting the role of chromatophore activity correlations in structuring the motor output of S. officinalis.
Introduction
The soft-bodied cuttlefish uses camouflage to avoid detection by its predators (1). Two features present considerable challenges to a camouflaging cuttlefish: the varied and complex nature of its visual surroundings and the keen eyesight and diversity of visual systems among its many predators (2). Many animals facing less sophisticated visual predators or less diverse visual surroundings have evolved simple yet effective heuristic solutions for camouflage. Examples include the stripes of zebras, which are effective at deceiving the visual system of tsetse flies (3), or the white fur of Arctic hares (4), which enables visual blending into their monotone Arctic surroundings. That the camouflage system of the cuttlefish is much more sophisticated can be readily appreciated by an examination of its physical structure (1). The body of a cuttlefish is tiled with millions of tiny pigment sacs called chromatophores. Each chromatophore is surrounded by a set of radial muscles whose expansion level is under direct neuromuscular control. The activity of the radial muscles determines the size of the chromatophore and allows regulation of the local intensity of a patch of skin. The dense tiling of chromatophores on the skin provides the cuttlefish with physical machinery that rivals modern printing and telecommunications technology in its capabilities. Yet such sophisticated camouflage machinery would be ineffective without a concomitantly sophisticated control system regulating the generation of camouflage patterns. Such a control system must implement a sensory-motor transformation from the visual surroundings to an appropriate body pattern. A collective body of experimental work has used the camouflage response as a sensory-motor assay to determine the many parameters of its visual environment to which cuttlefish are sensitive.
These parameters include contrast (5), the presence of edges (6), intensity (7), polarization (8), image frequency content (6) and more complex configural features such as sensitivity to illusory contours (9), but surprisingly exclude information regarding color (10). In parallel, behavioral work has identified large-scale units of chromatophores commonly called components that form recognizable units from which body patterns are assembled (11). What is required to further refine our understanding of cephalopod camouflage is a comprehensive set of quantitative rules, which define the transformation of visual features into cuttlefish body patterns. When considering background matching on visual textures, many authors have expressed hope that the theory of image statistics will provide a principled way forward towards a normative theory of camouflage (12). In the early 1970s, Béla Julesz investigated various statistics of images, beginning with the mean, variance, and autocorrelation, and put forward a conjecture that two textures which have matching values for a set of statistics will be perceptually indistinguishable (13). While the Julesz conjecture remains unproven and the necessary set of statistics unknown, results from computer vision (14) indicate that when images are analyzed with sets of local oriented filters (commonly called wavelets, similar in appearance to the receptive fields of cortical visual neurons found in mammalian visual cortex (15) but also across the animal kingdom (16)) and the resulting analysis coefficients are summarized in a set of a few hundred statistics, the resulting numbers can be effectively used to synthesize arbitrary textures to generate a very good perceptual match between template and synthetic images.
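The first-order statistics Julesz started from are simple to compute; a toy sketch for a small greyscale patch (the pixel values are invented):

```python
def texture_stats(patch):
    """Mean, variance, and lag-1 horizontal autocorrelation of a 2-D
    greyscale patch given as a list of equal-length rows of intensities."""
    pixels = [p for row in patch for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    # lag-1 autocovariance along rows, normalised by the variance
    pairs = [(row[i], row[i + 1]) for row in patch for i in range(len(row) - 1)]
    cov1 = sum((a - mean) * (b - mean) for a, b in pairs) / len(pairs)
    return mean, var, (cov1 / var if var > 0 else 0.0)

patch = [[10, 12, 11],
         [13, 12, 14],
         [11, 13, 12]]
mean, var, rho1 = texture_stats(patch)   # mean 12.0, variance 4/3
```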
The systematic method of pattern generation by a cuttlefish might therefore involve the calculation of the salient statistics of its visual surroundings followed by choosing from among its many body patterns the one that minimizes the discrepancy between surround and body pattern statistics or defaulting to a body pattern such as disruptive (which uses the eponymous mode of camouflage rather than background matching) in case no good match is found. Even with this rather specific hypothesis about the method of cuttlefish camouflage, we are left with many ambiguities the most important of which is the question regarding the necessary set of statistics. It is not even known whether the set of statistics is unique for any given wavelet transform and little is known about the effects that a change of basis has on the necessary set of statistics. In an effort to find a suitable starting point for investigating a normative theory of camouflage under all these uncertainties, we reasoned that the mean intensity of a texture would be the most relevant statistic for two reasons. First, cuttlefish are likely to be viewed by predators from a large variety of distances. The angular resolution of the visual system is limited due to blurring by the optics of the eye and the limit imposed by the finite size of photoreceptor angular spacing. Thus, increasing viewing distances would progressively degrade access to information contained within the high frequency components of an image. Information about the mean is contained within the lowest frequency bands and thus could be estimated from the greatest range of viewing distances. Secondly, the mean is a statistic that should be easy to estimate in most predator visual systems as its estimation involves a simple summation of photoreceptor activities within the appropriate retinal region. 
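The claim that the mean survives increasing viewing distance can be checked numerically: block-averaging an image (a crude stand-in for optical blur and coarser photoreceptor sampling at a distance) destroys all local detail yet leaves the global mean unchanged. A numpy sketch on a random toy texture:

```python
import numpy as np

rng = np.random.default_rng(0)
texture = rng.uniform(0.0, 1.0, size=(64, 64))   # toy image, intensities in [0, 1]

def block_average(img, k):
    """Downsample by averaging non-overlapping k x k blocks,
    modelling the loss of high-frequency content at a distance."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

far_view = block_average(texture, 8)             # 8 x 8 coarse view
# Average pooling preserves the global mean (up to float rounding).
assert np.isclose(texture.mean(), far_view.mean())
```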
We thus chose to investigate whether and how well cuttlefish match this most important statistic by placing them on a series of seven backgrounds of uniform sands with increasing intensities varying from black to white. We found that cuttlefish varied their mean intensity as function of background intensity, but showed a series of systematic biases not dictated by the capabilities of their motor system. They were consistently brighter than the background at low background intensities and dimmer than background on the brightest backgrounds. Despite being exposed predominantly to grey backgrounds their color was a low saturation yellow at all intensities. Their adaptation to the background was a gradual process spanning several hundreds of seconds, despite their well-known capability to dramatically transform their appearance within less than a second. Finally, juvenile animals displayed an increasing activity of their disruptive components as background intensity decreased thus showing evidence of using a mode of camouflage different from their older conspecifics.
Results
We prepared four sand backgrounds of uniform intensity (white, brown, grey, and black), granularity, and illumination to investigate animal responses to changes in mean background intensity. The intensity response of the animal to a background was characterized by calculating the mean intensity of the animal in 6 frames located 5, 9, 13, 17, 21, and 25 minutes from the start of the experiment. The animal response was then estimated as the average of those six measurements. The intensities were calculated in camera units of intensity (see Methods), and we used the green intensity channel because the spectral sensitivity of that channel most closely matches the sensitivity of the cuttlefish visual pigment (the qualitative conclusions of the following analysis also held true when images were converted from RGB to greyscale prior to analysis). For each background, between 6 and 10 animals approximately three months in age were tested, and the animal responses were averaged over the number of animals tested to calculate the population intensity for each background. The plot of population intensity versus background intensity clearly demonstrates that animals adapted their intensities to the intensities of their background (Figure 1A, B). Surprisingly, the animals displayed biases that were significantly different from background intensity on each of the four backgrounds tested. For the three darkest backgrounds, animals showed patterns brighter than the background, whereas for the lightest backgrounds the animals were darker than the background. We first speculated that the limited dynamic range of the camouflage system might cause the biases on the darkest (black) and brightest (white) backgrounds. We determined the brightest possible pattern of the camouflage system by exposing five animals to magnesium chloride, an anesthetic agent that causes a paling of the animals and is presumed to act as a muscle relaxant (17).
The mean intensity of animals thus anesthetized was intermediate between the intensity of the white background and the population response on the white background. The difference between the anesthetized and white-background population responses was statistically significant (p=0.007), demonstrating that the bias shown on the brightest background was only partly accounted for by the restricted dynamic range of the motor system. We estimated the dark limit of the chromatomotor system by finding the darkest individual pattern among the samples from which the population means were calculated. This response was similarly intermediate between the intensity of the black background and the population response on the black background, demonstrating that motor system limitations again do not fully account for the biases shown on the darkest backgrounds. For the intermediate-intensity backgrounds from grey to light brown, the background intensity values lay within the dynamic range of the motor system, yet biases were still shown on these backgrounds. Periods of motion have been reported to be correlated with reduced periods of camouflage (18). We hence automatically tracked all animals and excluded periods of motion from the analysis of mean intensity (the first 5 minutes after introduction were excluded from analysis for reasons of comparability and stationarity, see below). Excluding periods of motion gave mean animal intensity values that were only 3% lower than those calculated over the whole time period, and the difference between the two values was not statistically significant. We conclude that the biases found represent a robust experimental finding explained neither by animal motion nor by the restricted dynamic range of the motor system. The function and origin of these biases presently remain unknown. In addition to biases in mean intensity, the animals also displayed systematic biases in their color.
Cuttlefish possess two dominant color classes of chromatophores (yellow and black (1)). Because each color class can be controlled independently, the system has two degrees of freedom and the space of possible mean intensities in RGB space is expected to be two-dimensional. When we plotted the mean luminance of 154 different patterns in RGB space, we surprisingly found that most of the points fell onto a straight line (Figure 1C). Principal component analysis confirmed that a single dimension could account for 99% of the observed variance in the data. It thus appears that during intensity matching the activities of the yellow and black chromatophores remain tightly correlated. The result of such a correlation is that the animals retain a low-saturation yellow hue across their full dynamic range of body intensities. The dynamics of the chromatomotor system allow the animal to dramatically change its appearance in less than a second, and such fast appearance transformations have indeed been observed in the context of interspecific threat signaling (1) and dynamic wave displays (19). Yet little is known about the timescales on which animals adapt to their environments in the context of camouflage. We measured the dynamics of intensity changes of our experimental animals for ten minutes following introduction to the experimental tank (3 Hz sampling rate). The population as a whole adapted to the background with an exponential decay with a 100-second time constant (Figure 2A). At the level of individual animals, the time course of intensity changes was much more diverse and irregular (Figure 2C). Individual animals displayed intensity transients ranging from the subsecond to the minute timescale (Figure 2C,D).
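The dimensionality check reported above (a single principal component accounting for 99% of the variance in per-pattern RGB mean intensities) can be reproduced in outline. The points below are synthetic stand-ins for the 154 measured patterns: correlated yellow/black chromatophore activity makes them fall near a line in RGB space, and the axis direction is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-pattern mean (R, G, B) intensities near a line in RGB
# space, as expected when the two chromatophore classes stay correlated.
t = rng.uniform(0.0, 1.0, 154)
direction = np.array([1.0, 0.9, 0.6])   # a yellowish axis (illustrative)
rgb = np.outer(t, direction) + rng.normal(0.0, 0.01, (154, 3))

# PCA via SVD of the mean-centred data; squared singular values are
# proportional to the variance captured by each component.
centred = rgb - rgb.mean(axis=0)
singular_values = np.linalg.svd(centred, compute_uv=False)
explained = singular_values**2 / np.sum(singular_values**2)

print(f"first component explains {explained[0]:.1%} of variance")
```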
When we tested camouflage responses on four sands in younger animals around 1 month in age, we were able to confirm our findings of intensity biases (Figure 3A,B), which were qualitatively similar on all four backgrounds (a quantitative comparison of biases is not appropriate because the younger animals produced different classes of patterns). On black and dark grey backgrounds we surprisingly found that the animals frequently preferred to adopt a strongly disruptive rather than a uniform pattern. We found a background-intensity-dependent increase in the tendency of young animals to display disruptive patterns, which could also be quantified as an increase in energy in low-frequency bands of the spectra (5) of young animals. Despite being clearly identifiable visual elements on the skin of cuttlefish, an objective definition of a disruptive component has been lacking in the literature. We utilized the tendency of young cuttlefish to express disruptive components to assemble a library of 170 disruptive patterns. We subsequently morphed all 170 images to conform to a common cuttlefish template and subjected the morphed image ensemble to hierarchical clustering (the head was excluded from analysis due to its frequent partial occlusion by the mantle edge). The resulting clusters clearly resembled the disruptive components and their edges (Figure 4). Because image segmentation can be based on both pixel intensity correlations and edges (20), we also computed the edge density map over the image ensemble. The map of edge density showed clear density maxima around the borders of disruptive components. We thus propose that disruptive components can be defined as large territories of correlated chromatophore activity with sharp borders between the regions.
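The proposed definition — territories of correlated chromatophore activity — can be illustrated with a toy calculation: across an ensemble of patterns, skin locations driven by the same component remain correlated and can be grouped together. The sketch below uses six synthetic "pixels" and a simple threshold-based single-linkage grouping as a stand-in for the full hierarchical clustering with a Pearson-correlation distance used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ensemble: 170 "patterns", 6 pixels. Pixels 0-2 share one
# driving signal (one territory), pixels 3-5 another (illustrative).
drive_a = rng.normal(size=170)
drive_b = rng.normal(size=170)
pixels = np.column_stack([drive_a + 0.05 * rng.normal(size=170) for _ in range(3)]
                         + [drive_b + 0.05 * rng.normal(size=170) for _ in range(3)])

corr = np.corrcoef(pixels.T)   # pixel-pixel Pearson correlation

def cluster(corr, threshold=0.8):
    """Group pixels whose pairwise correlation exceeds the threshold
    (a single-linkage simplification of hierarchical clustering)."""
    n = corr.shape[0]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

labels = cluster(corr)
print(labels)
```

On this synthetic input the first three pixels receive one label and the last three another, mirroring how clustering recovers component territories.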
Discussion
We investigated how cuttlefish matched the mean intensity of uniform backgrounds. To our surprise, we found systematic biases in the responses of cuttlefish with respect to both mean intensity and color. The biases in mean intensity were not caused by limited dynamic range or by a motion-dependent break in camouflage. A finding of biases in intensity matching is not unique to cuttlefish (21). An adaptive pigmentation system must balance camouflage requirements against other physiological functions such as thermoregulation (22) or predation risk (21). It is presently unclear what, if any, functional purpose is served by intensity biases in cuttlefish.
In our experiments, we exposed cuttlefish to grey sands yet found that the animals retained a low-saturation yellow hue across their entire observed dynamic range of body intensity. On grey sands of varying intensity, such a response represents a systematic color bias. In their natural habitat, however, the situation is rather different. Cuttlefish face the challenge of having to control their body color for camouflage despite being colorblind (10). They must thus assume the color of their background based on evolutionary knowledge. In the natural environment of cuttlefish, yellow sands predominate as the typical substrate. The color biases the animals exhibit in our experiment might thus represent evolutionarily adaptive prior knowledge about the predominant colors of sands in their natural habitats. By developing an automatic tracking system, we were able to quantitatively monitor animal intensity as the animals adapted to dark backgrounds. Individual animals displayed considerable variation in their adaptation behavior and showed intensity transients spanning a 100-fold range of timescales. In contrast, the population as a whole relaxed towards the steady-state intensity in a regular exponential decay process. Exponential relaxations are characteristic of negative feedback control systems and stochastic Poisson processes. Some combination of negative-feedback-based adaptation (23) and a stochastic stepwise relaxation process (24) may explain both the irregularity of individual waveforms and the regularity across the population. While cephalopods are able to adapt to backgrounds in the absence of visual feedback (1), the observation of exponential decay suggests that negative feedback may play a role in guiding camouflage responses under some conditions. The camouflage strategies adopted by cuttlefish on uniform sands were age-dependent. While older animals employed uniform or weakly mottled patterns, young animals frequently produced disruptive patterns.
The tendency of young animals to express disruptive components on uniform backgrounds seems at first sight counterproductive, because such components cause a sharp break between the visual statistics of an animal's body and its uniform visual surroundings. Nor is our result explained by the fact that a typical sand grain is larger relative to body size in younger individuals: studies on checkerboard backgrounds have shown that, to elicit disruptive patterns, a checker must exceed 40% of the size of the animal's white square and have a contrast larger than 0.54, whereas our backgrounds had sand grain sizes below 20% of white-square size and a coefficient of variation of less than 10%, values fully consistent with a clearly uniform body pattern on checkerboard backgrounds (5). A possible resolution of the apparent paradox lies in considering alternative modes of camouflage. Age-dependent expression of disruptive components represents a clear bias from the point of view of matching background visual statistics, but it represents adaptive behavior if we interpret the behavior of young cuttlefish as a form of masquerade (25-27). Disruptive components of the cuttlefish have often drawn comparison to pebbles found on the seabed. Thus, by expressing disruptive components on part of their body while adapting the intensity of the rest of their body to the background, young cuttlefish bear a close resemblance to a pebble on the seabed. Why would masquerade behavior decrease with age? Masquerade has been found to be more effective at deceiving predators when the inanimate objects that the camouflaging animals resemble are abundant in the environment (26). The statistics of pebble sizes in natural environments demonstrate a decreasing abundance of exemplars with increasing pebble size (28).
Masquerade is thus predicted to be more effective in younger, smaller animals simply because smaller pebbles outnumber larger pebbles in most environments.
Conclusion
Our studies uncovered results that prompt a re-examination of several commonly held views about the rules of cuttlefish camouflage. Our findings of systematic intensity and color biases, and of the use of disruptive components on uniform backgrounds by younger animals, all demonstrate that the hypothesis of optimal matching of background statistics is by itself insufficient to explain the observed responses. Perceptual limitations, motor system dynamic range, alternative modes of camouflage and expected statistics (based on evolutionary knowledge) rather than the actually observed statistics of the environment may need to be considered to explain animal responses on naturalistic backgrounds. Our work also highlights problems in drawing conclusions across different modes of chromatophore-based behavior. Although chromatophore-driven intensity transients can be rapid (less than 1 second in duration), as found in the context of social communication and dynamic wave displays, the typical adaptation to a background for camouflage proceeds gradually over a timescale of several minutes.
Intensity matching protocol
Cuttlefish eggs collected from the North Sea were reared in tanks filled with recirculating artificial seawater until tested at 1 or 3 months of age. Each animal was tested on all backgrounds. Cuttlefish were placed into a rectangular tank whose bottom was uniformly covered with a layer of loose sand. The sand layer was around half a centimeter thick, sufficient to encourage the animals to settle at the bottom of the tank and dig into the sand, but not so thick as to allow the animals to dig in to the point where their bodies would be covered by sand and obscured from view. The illumination of the bottom of the tank was uniform (less than 10% coefficient of variation) and constant across all conditions. The lamps were placed outside the tanks below the height of the water surface to avoid obscuring the image with reflections. The animals were recorded with a Sony camera acquiring images at 3 frames per second. After placement in the experimental tank, the animals were left undisturbed in the experimental room for 30 minutes. To determine the lightest achievable intensity, five animals were anesthetized in a magnesium chloride solution isotonic to artificial seawater; each animal was then observed at six random positions in the experimental tank containing white sand, and the lightest intensity was estimated from these observations.
Image analysis
All image analysis was performed using custom-written code in Matlab. The sensitivity of the camera to object luminance was determined from a calibration curve to approximately follow a power law with an exponent of 0.65. Such a power law implies diminishing sensitivity to changes with increasing intensity, reminiscent of the luminance sensitivity of animal visual systems (which generally obey logarithmic sensitivity). We thus elected to present all our intensity measurements in camera units unless otherwise noted. For analysis of intensity matching, animals were manually segmented from the background and their mean intensity was then calculated. For analysis of the adaptation time course, the difference between the red and blue image channels was calculated and low-pass filtered with a 50×50 square kernel, and the resulting image histogram was subjected to k-means clustering to detect the animal. The center of mass and mean intensity were calculated for each frame. Velocity was inferred from the motion of the center of mass. The algorithm was tested and produced reliable segmentation on the four darkest backgrounds. For analysis of disruptive components, a library of 170 images of sepia patterns was assembled. Each cuttlefish was segmented from the background and initially morphed so that the length and width of the mantle were equal for all images. Sixteen reliably identifiable reference points (see supplementary Figure 1) were then marked on each image, from which a triangulation was assembled; each triangle was then subjected to an affine transformation followed by bilinear interpolation to generate a perfect match between target and template. The morphed library was subjected to hierarchical clustering using the built-in Matlab routine hclust with a Pearson-correlation-based distance metric.
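The animal-detection step described above (red-blue channel difference, low-pass filtering, then clustering into animal and background) can be sketched in simplified form. The study's pipeline ran in Matlab on full video frames with a 50×50 kernel; here the image size and kernel are scaled-down assumptions, and a minimal two-cluster split stands in for the k-means step.

```python
import numpy as np

def box_filter(img, k=5):
    """Low-pass filter with a k*k box kernel via a padded cumulative sum
    (the study used a 50*50 kernel on full-resolution frames)."""
    padded = np.pad(img, k // 2, mode="edge")
    c = padded.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))
    h, w = img.shape
    return (c[k:k+h, k:k+w] - c[:h, k:k+w] - c[k:k+h, :w] + c[:h, :w]) / (k * k)

def two_means_mask(values, iters=20):
    """Split pixel values into two clusters (a minimal k-means, k=2);
    returns True where values fall in the high cluster (the animal)."""
    c = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(values[..., None] - c).argmin(-1)
        for j in (0, 1):
            if np.any(assign == j):
                c[j] = values[assign == j].mean()
    return assign == np.argmax(c)

# Synthetic frame: a yellowish animal (high red-blue difference) on sand.
rng = np.random.default_rng(2)
red = rng.normal(0.2, 0.02, (40, 40))
blue = rng.normal(0.2, 0.02, (40, 40))
red[10:25, 12:30] += 0.3          # animal region raises the R-B difference
mask = two_means_mask(box_filter(red - blue))
print(mask.sum(), "animal pixels detected")
```

The centre of mass and mean intensity of the detected region can then be computed per frame, as in the original analysis.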
One-dimensional spectra of juvenile cuttlefish were calculated by morphing all animals to have the same width and height, Fourier transforming the images, binning the 2D Fourier coefficients according to the moduli of their x and y frequencies, and summing the moduli of the coefficients within a bin to yield the frequency-energy distribution.

Figure 2: A, Plot of the time course of the population intensity response on the two darkest backgrounds (black curve; grey region shows s.e.m.). The mean intensity of an animal was sampled at 3 Hz for 10 minutes following the introduction of the animal to the experimental tank lined with black or dark grey sand. Each time point represents the average over 16 animals. The population as a whole adapts to the background with an exponential decay, depicted in red. The best-fit value for the time constant is 100 seconds.
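The exponential fit quoted above (a 100-second time constant for the population relaxation) can be reproduced in outline on synthetic data. The steady-state value, amplitude, and noiseless trace below are illustrative assumptions; the fit is a simple log-linear regression given a known steady state, not necessarily the study's fitting procedure.

```python
import numpy as np

# Synthetic population trace: exponential relaxation toward a steady
# state, sampled at 3 Hz for 10 minutes (values are illustrative).
tau_true, steady, amplitude = 100.0, 60.0, 40.0
t = np.arange(0, 600, 1 / 3)
trace = steady + amplitude * np.exp(-t / tau_true)

def fit_time_constant(t, trace, steady):
    """Estimate tau by linear regression of log(trace - steady) on t,
    assuming the steady-state intensity is known."""
    slope, _ = np.polyfit(t, np.log(trace - steady), 1)
    return -1.0 / slope

print(round(fit_time_constant(t, trace, steady), 1))  # -> 100.0
```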
Figure 3: Differences in intensity matching between juveniles (1 month old) and young adults (2.5 month old)
A1-4 Example juvenile animals camouflaging on white, light grey, dark grey and black sands. Note the disruptive patterning seen in the animals on dark grey and black sands. B Summary graph of intensity matching for juvenile animals. The red curve illustrates unbiased intensity matching; the blue curve illustrates the behavior exhibited by the animals. Note that juveniles, like young adults, adapt to the background intensity and show similar biases. The green curve plots the average disruptive index (calculated from visual annotation; varies between 0 for uniform and 1 for fully disruptive) as a function of background intensity. Darker backgrounds increase disruptive tendencies. C Quantification of body pattern differences on different sands, demonstrated here as an increase in disruptive band energies in the animals' Fourier spectra as background intensity decreases (color-background correspondence: black-black, blue-dark grey, red-white, green-brown).

Figure 4: Automated extraction of disruptive patterns

A1-4 Four example animals showing patterns illustrating a uniform (A1) and three different kinds of disruptive (A2-4) body patterns. B Components extracted after 170 patterns were morphed to a common template and then analyzed by hierarchical clustering. The resultant components closely map onto disruptive components previously identified from behavioral observations. C The average edge density map on the body of a cuttlefish outlines the borders between the disruptive components. Two frontal eye spots are also visible.
Shedding of membrane-associated LDL receptor-related protein-1 from microglia amplifies and sustains neuroinflammation
In the CNS, microglia are activated in response to injury or infection and in neurodegenerative diseases. The endocytic and cell signaling receptor, LDL receptor-related protein-1 (LRP1), is reported to suppress innate immunity in macrophages and oppose microglial activation. The goal of this study was to identify novel mechanisms by which LRP1 may regulate microglial activation. Using primary cultures of microglia isolated from mouse brains, we demonstrated that LRP1 gene silencing increases expression of proinflammatory mediators; however, the observed response was modest. By contrast, the LRP1 ligand, receptor-associated protein (RAP), robustly activated microglia, and its activity was attenuated in LRP1-deficient cells. An important element of the mechanism by which RAP activated microglia was its ability to cause LRP1 shedding from the plasma membrane. This process eliminated cellular LRP1, which is anti-inflammatory, and generated a soluble product, shed LRP1 (sLRP1), which is potently proinflammatory. Purified sLRP1 induced expression of multiple proinflammatory cytokines and the mRNA encoding inducible nitric-oxide synthase in both LRP1-expressing and -deficient microglia. LPS also stimulated LRP1 shedding, as did the heat-shock protein and LRP1 ligand, calreticulin. Other LRP1 ligands, including α2-macroglobulin and tissue-type plasminogen activator, failed to cause LRP1 shedding. Treatment of microglia with a metalloproteinase inhibitor inhibited LRP1 shedding and significantly attenuated RAP-induced cytokine expression. RAP and sLRP1 both caused neuroinflammation in vivo when administered by stereotaxic injection into mouse spinal cords. Collectively, these results suggest that LRP1 shedding from microglia may amplify and sustain neuroinflammation in response to proinflammatory stimuli.
Microglia constitute 8-12% of the cells in the brain (1,2). These cells are regulators of innate immunity and are related in function to cells of the monocyte-macrophage lineage (1,2). A principal function of microglia is surveillance. In response to injury or infection, microglia become activated and express cytokines and other mediators, which help orchestrate the inflammatory response. However, in various forms of neurodegeneration, including Alzheimer's disease, chronically activated microglia may accelerate disease progression (1-6). Injury to peripheral nerves activates microglia in the spinal cord, which promotes central sensitization and neuropathic pain (7-9). Understanding pathways that control microglial activation is an important problem.
LDL receptor-related protein-1 (LRP1) is an endocytic and cell signaling receptor with over 100 ligands, including proteins released from injured and dying cells (10-14). The structure of membrane-anchored LRP1 includes the 515-kDa α-chain, which is entirely extracellular and coupled to the cell surface by non-covalent interactions with the 85-kDa β-chain. The α-chain is responsible for most of the ligand-binding activity of LRP1. The β-chain includes an ectodomain, a transmembrane domain, and the intracellular tail that becomes phosphorylated when LRP1 functions in cell signaling.
LRP1 is shed from cell surfaces by the metalloproteinases ADAM10, ADAM17, and MMP-14 (28-31). In shed LRP1 (sLRP1), the β-chain is truncated; however, the entire α-chain is intact and detectable in plasma, brain, cerebrospinal fluid, and the peripheral nervous system (29,32,33). The concentration of sLRP1 is increased in the plasma of mice treated with lipopolysaccharide (LPS) and in humans with rheumatoid arthritis or systemic lupus erythematosus (30). sLRP1 also is increased in osteoarthritic cartilage and in broncho-alveolar lavage fluid from patients with adult respiratory distress syndrome (31,34). Although the biological activity of sLRP1 remains incompletely understood, sLRP1 promotes expression of inflammatory mediators by macrophages (30). sLRP1 also may bind biologically active proteins such as ADAMTS-5, MMP-13, and TIMP-3, preventing their endocytosis (31,35). Once sLRP1 is released from the cell surface, the residual LRP1 fragment may be further processed by γ-secretase to generate an intracellular product that opposes inflammation (17). Phylogenetically, LRP1 shedding is conserved throughout mammalian, avian, and reptilian species (36), supporting the hypothesis that shedding may be physiologically significant.
Herein, we demonstrate that LRP1 is shed from microglia exposed to the proinflammatory mediators LPS and RAP. We also identify calreticulin (CRT), a known LRP1 ligand, as an activator of microglia, which induces LRP1 shedding. Purified sLRP1 was potently proinflammatory when added to primary cultures of microglia and when injected into spinal cords in mice. When LRP1 shedding was inhibited with GM6001, expression of proinflammatory mediators in response to RAP was largely attenuated. These results suggest a model in which LRP1 shedding converts an anti-inflammatory receptor into a proinflammatory product. sLRP1 may amplify and sustain neuroinflammation.
LRP1 gene silencing modestly increases cytokine expression by microglia
Microglia were isolated from 8-week-old mice that were homozygous for the floxed LRP1 gene (LRP1 fl/fl) and LysM-Cre-positive (20). LysM-Cre drives expression of Cre recombinase in monocytes, macrophages, neutrophils, and microglia, although the level of Cre recombinase expressed in microglia may depend on whether the cells are activated or in culture (37,38). As a control, microglia were isolated from LRP1 fl/fl-LysM-Cre-negative mice. RNA and protein were harvested from cells without culturing. Fig. 1A shows that LRP1 mRNA was decreased 72 ± 8% in cells from LRP1 fl/fl-LysM-Cre-positive mice. LRP1 protein was decreased 85 ± 8%, as determined by immunoblot analysis and densitometry (Fig. 1, B and C).
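Relative mRNA quantities of the kind reported here (e.g., a 72% decrease in LRP1 mRNA) are conventionally derived from RT-qPCR data with the Livak 2^(-ΔΔCt) method. Whether this exact method was used in the study is an assumption, and all Ct values in the sketch below are invented for illustration.

```python
# Relative quantification of qPCR data with the Livak 2^(-ΔΔCt) method.
# All Ct values below are invented; the method choice is an assumption.

def relative_quantity(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """RQ of a target gene in a sample versus a calibrator sample,
    normalised to a reference (housekeeping) gene."""
    delta_ct_sample = ct_target - ct_reference
    delta_ct_calibrator = ct_target_cal - ct_reference_cal
    return 2.0 ** -(delta_ct_sample - delta_ct_calibrator)

# Example: the target Ct rising ~1.85 cycles relative to the reference
# gene corresponds to roughly a 72% decrease in mRNA.
rq = relative_quantity(ct_target=26.85, ct_reference=18.0,
                       ct_target_cal=25.0, ct_reference_cal=18.0)
print(f"RQ = {rq:.2f}")   # about 0.28, i.e. ~72% lower
```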
RAP robustly increases expression of proinflammatory mediators by microglia
Next, we studied the effects of RAP on expression of proinflammatory mediators by microglia. For these studies, microglia were isolated from wild-type mouse pups and established in primary culture. Cells were treated with 150 nM RAP, LPS (1 µg/ml), or vehicle (phosphate-buffered saline/PBS) for 24 h. Because RAP is expressed as a glutathione S-transferase (GST) fusion protein, as an additional control, cells were treated with purified GST (150 nM).

Figure 1: A-F, microglia were isolated from LRP1 fl/fl-LysM-Cre-positive (black bar) and LysM-Cre-negative (open bar) adult mice. A, RT-qPCR was performed to quantify expression of LRP1 mRNA. B and C, cell extracts were immunoblotted to detect LRP1 β-chain. GAPDH was used as loading control. Densitometry analysis was performed to determine the relative level of LRP1 protein, standardized against the loading control, in extracts from Cre-negative (N) and Cre-positive (P) cells (mean ± S.E.; n = 4; **, p < 0.01, paired t test). D-F, RT-qPCR was performed to compare relative quantities (RQ) of mRNA for TNF-α, IL-6, and IL-1β in microglia from Cre-positive (P) and Cre-negative (N) mice (mean ± S.E.; n = 4; paired t test). G-K, microglia were isolated from C57BL/6J mouse pups and transfected with LRP1-specific or NTC siRNA. RT-qPCR was performed to determine mRNA levels for LRP1 (G) and TNF-α (H) (mean ± S.E.; n = 4; ***, p < 0.001; paired t test). I, cells transfected with LRP1-specific siRNA, and NTC siRNA were allowed to condition medium for 48 h. CM was recovered, and ELISAs were performed to quantify TNF-α protein (mean ± S.E.; n = 3). NC shows medium that was "not conditioned." J and K, mRNA levels were determined for IL-6 and IL-1β (mean ± S.E.; n = 4; *, p < 0.05; paired t test).
RAP robustly increased expression of TNF-α mRNA (p < 0.001), as did LPS but not GST (Fig. 2A). RAP and LPS also substantially increased the level of TNF-α protein in CM (Fig. 2B). Pre-boiling LPS at 100°C for 5 min had no effect on its ability to increase TNF-α protein secretion, as anticipated (21). By contrast, pre-boiling RAP eliminated its ability to stimulate TNF-α protein expression, arguing against LPS contamination as contributing to the activity of RAP. Pre-boiling RAP also completely blocked its ability to induce expression of TNF-α mRNA (Fig. 2C).
Treating microglia with RAP increased expression of IL-6 mRNA (Fig. 2D), IL-6 protein (Fig. 2E), and IL-1β mRNA (Fig. 2F). In each case, the response elicited by RAP was either equivalent in magnitude to that elicited by LPS or only slightly decreased. No response was observed with GST. Pre-boiling RAP completely blocked its ability to induce expression of IL-6 and IL-1β. Pre-boiling LPS did not significantly decrease its activity. The increases in expression of IL-6 and IL-1β, observed in microglia treated with RAP for 24 h, were 100-1,000-fold greater than the increases observed in LRP1 gene-silenced cells.
Inducible nitric-oxide synthase (iNOS) is a proinflammatory enzyme expressed by activated microglia (39). Fig. 2G shows that RAP robustly increased iNOS mRNA expression in microglia. The response was similar in magnitude to that caused by LPS. To assess iNOS activity, we measured nitrite in CM. RAP significantly increased nitrite levels in CM (p < 0.01), mimicking the response observed with LPS (Fig. 2H). Boiling RAP blocked its ability to increase nitrite production, but boiling LPS had no effect. Purified GST did not increase nitrite levels.
In RAP-treated macrophages, TNF-α functions as an early mediator that increases expression of other proinflammatory cytokines, such as IL-6, in a secondary wave (20). Fig. 2I shows that the protein synthesis inhibitor, cycloheximide (CHX), failed to inhibit the increase in TNF-α mRNA observed 6 h after adding RAP. By contrast, CHX significantly attenuated the increase in IL-6 mRNA caused by RAP (Fig. 2J). CHX decreased TNF-α protein (Fig. 2K) and IL-6 protein (Fig. 2L) in CM from RAP-treated cells, as anticipated. The effects of CHX on IL-6 mRNA expression suggest that a protein intermediate or intermediates are involved in the pathway by which RAP increases expression of this cytokine.
RAP regulates microglial cell morphology, proliferation, and migration
Non-activated microglia demonstrate a ramified morphology, which upon activation transforms into a more rounded, amoeboid shape (40). We compared the morphology of microglia after incubation with RAP, LPS, or GST by phalloidin staining. Fig. 3A shows that LPS and RAP induced similar changes in microglial morphology. In response to both agents, the cells adopted a more rounded, amoeboid shape.
Next, we compared the effects of LPS and RAP on microglial proliferation and migration. Increased proliferation and migration are characteristic of microglial activation (1,2,40). Fig. 3C shows that treating cells with RAP or LPS for 48 h significantly increased microglial cell proliferation (p < 0.001). RAP and LPS also apparently promoted microglial cell migration. Representative images of Transwell membranes, showing cells that migrated through membrane pores, are shown in Fig. 3D. The results of four separate experiments are summarized in Fig. 3E. Because RAP and LPS promoted microglial cell proliferation, we cannot exclude the possibility that the measured effects of these reagents on cell migration were artifactually increased due to an increase in the number of cells present during the course of the 16-h assay.
LRP1 deficiency protects microglia from the proinflammatory effects of RAP
Members of the LDL receptor gene family in addition to LRP1 bind RAP (41). We therefore conducted RAP ligand-blotting studies, as described previously (42), to identify RAP-binding proteins in microglia. Microglia from LRP1 fl/fl-LysM-Cre-positive and Cre-negative mice were compared. In LRP1-expressing microglia from LysM-Cre-negative mice, a single band with a mass of ~500 kDa was detected, consistent with the known mass of the LRP1 α-chain (Fig. 4A). The absence of bands with lower molecular masses indicated that other RAP-binding receptors, such as the VLDL receptor and ApoER2, were not present in substantial quantity. In LRP1-deficient microglia, isolated from LRP1 fl/fl-LysM-Cre-positive mice, the 500-kDa band was nearly absent, confirming the identity of that band as the LRP1 α-chain. When equivalent blots were probed with purified GST instead of GST-RAP, no bands were detected. These results demonstrate that LRP1 is the principal RAP-binding protein in microglia and the most likely target for RAP in cultured microglia, as reported previously by Pocivavsek et al. (27).
RAP is generally considered an LRP1 antagonist, which blocks binding of other ligands to LRP1, including ligands added exogenously or produced endogenously by cells in culture (11,41). We therefore conducted experiments to test why the effects of RAP on inflammatory mediator expression by microglia appeared so much greater than the effects of LRP1 gene silencing or deletion. At first, we hypothesized that the modest effects of LRP1 gene silencing and deletion reflected residual LRP1. To test this hypothesis, microglia from LRP1 fl/fl-LysM-Cre-positive and LysM-Cre-negative mice were treated with RAP. The goal of this experiment was to neutralize residual LRP1 in cells from LysM-Cre-positive mice. Fig. 4, B and C, shows that RAP increased TNF-α mRNA and TNF-α protein in LRP1-deficient and -expressing cells. Following RAP treatment, the levels of TNF-α mRNA and protein were not significantly different in the two cell types.
RAP increased IL-6 mRNA expression in microglia from LysM-Cre-positive and -negative mice; however, unexpectedly, the quantity of IL-6 mRNA detected in RAP-treated LRP1-deficient cells remained significantly decreased compared with that detected in RAP-treated LRP1-expressing cells (Fig. 4D). RAP also was substantially less effective at inducing expression of IL-6 protein (Fig. 4E) and IL-1β mRNA (Fig. 4F) in LRP1-deficient cells from LysM-Cre-positive mice, compared with LRP1-expressing cells from LysM-Cre-negative mice. Taken together, these results demonstrate that the modest effects of LRP1 gene silencing and deletion on cytokine expression are not entirely due to incomplete LRP1 neutralization. Furthermore, LRP1 gene deletion is at least partially protective against the proinflammatory effects of RAP.
RAP and CRT promote LRP1 shedding
It is reported that the proinflammatory mediators, LPS and interferon-γ, decrease LRP1 mRNA expression in microglia (22,25). We treated microglia with 150 nM RAP or 1.0 μg/ml LPS for 24 h and demonstrated that LRP1 mRNA levels are significantly decreased (Fig. 5A). Both treatments also decreased the abundance of LRP1 protein, as determined by immunoblot analysis and densitometry (Fig. 5, B and C).
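Relative mRNA levels in RT-qPCR experiments like these are conventionally expressed with the 2^(−ΔΔCt) method. The excerpt does not state the exact quantification scheme used, so the sketch below, with hypothetical Ct values, is illustrative only.

```python
# 2^(-ddCt) relative quantification, the standard way RT-qPCR results like
# the LRP1 mRNA decrease are expressed. Ct values here are hypothetical;
# the excerpt does not state the exact quantification method used.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_treated = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical: LRP1 Ct rises ~1 cycle after LPS, reference gene unchanged:
print(f"{fold_change(25.0, 18.0, 24.0, 18.0):.2f}-fold vs. vehicle")  # 0.50-fold
```

A one-cycle increase in target Ct relative to a stable reference gene corresponds to an approximately 50% reduction in transcript abundance.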
Next, we tested whether LPS and RAP induce LRP1 shedding by subjecting CM to RAP ligand blotting. Fig. 5D shows that both LPS and RAP induced time-dependent shedding of a high molecular mass RAP-binding protein consistent with the known mass of the LRP1 α-chain (~515 kDa). The abundance of the high molecular mass protein in CM from LPS- and RAP-treated cells was similar. No other RAP-binding species were detected.
To confirm that the RAP-binding protein was sLRP1, CM samples were subjected to immunoblot analysis. LRP1 α-chain was detected; the mobility of the α-chain was equivalent to that of the protein detected by RAP ligand blotting. The 85-kDa LRP1 β-chain was not detected using an antibody that recognizes an intracellular epitope, arguing against contamination of CM with cells or cell fragments that have full-length cellular LRP1. RAP-binding proteins and LRP1 α-chain were absent in CM from microglia treated with PBS instead of RAP or LPS. Densitometry analysis summarizing the results of three separate LRP1 shedding experiments is presented in Fig. 5, E and F.

(Figure 5 legend: A, RT-qPCR was performed to determine LRP1 mRNA (mean ± S.E.; n = 4; ***, p < 0.001, one-way ANOVA followed by Dunnett's post hoc test). Cell extracts were subjected to immunoblot to detect LRP1 and β-actin (B), and densitometry analysis was performed (C) (mean ± S.E.; n = 4; **, p < 0.01, one-way ANOVA followed by Dunnett's post hoc test). Microglia were treated with LPS (1 μg/ml), RAP (150 nM), or vehicle (PBS) for up to 24 h (D). At the indicated times, CM was recovered. sLRP1 was detected in CM by RAP ligand blotting, as described in Fig. 4A (upper panels). CM also was subjected to immunoblot analysis to detect LRP1 α-chain (middle panels) and LRP1 β-chain (lower panels). Molecular mass standards are shown to the left of each blot. Results are representative of three experiments. E and F, densitometry analysis was performed on RAP ligand blots to quantify the increase of sLRP1 in the CM after treatment with LPS (E) and RAP (F) (mean ± S.E.; n = 3; *, p < 0.05; **, p < 0.01, one-way ANOVA followed by Dunnett's post hoc test). G, microglia were treated with increasing concentrations of RAP. After 24 h, CM was recovered and subjected to RAP ligand blotting to detect sLRP1. Results are representative of two independent experiments.)

Fig. 5G shows that RAP induced detectable LRP1 shedding at concentrations down to 15 nM.

tPA and α2M* are LRP1 ligands that induce anti-inflammatory responses in peripheral macrophages (20,21). Fig. 6A shows that enzymatically-inactive tPA (EI-tPA) (12 nM) and α2M* (10 nM) both failed to induce IL-6 mRNA expression in microglia. Fig. 6B shows that EI-tPA and α2M* also failed to induce expression of IL-1β mRNA. The concentrations of EI-tPA and α2M* studied in these experiments were selected to match those that generate maximum anti-inflammatory responses in macrophages (20,21). When microglia were treated with 12 nM EI-tPA or 10 nM α2M* for 24 h, LRP1 shedding was not observed (Fig. 6C). In separate experiments, we studied EI-tPA at concentrations up to 100 nM; again, LRP1 shedding was not observed (results not shown). These results suggest that LRP1 ligands do not, in general, induce LRP1 shedding.
LRP1 shedding from microglia is proinflammatory
Although lactoferrin generates proinflammatory responses in macrophages (20,21), in microglia, lactoferrin, at concentrations up to 100 nM, did not significantly regulate TNF-α mRNA expression or induce LRP1 shedding (results not shown). We did not explore why microglia do not respond to lactoferrin; however, we did study another LRP1 ligand with known proinflammatory activity. CRT is a heat-shock protein known to bind directly to LRP1, induce LRP1-dependent inflammatory responses in antigen-presenting cells, and function together with LRP1 in efferocytosis (43-46). Fig. 6, D and E, shows that CRT increased expression of TNF-α protein and IL-6 protein in CM from microglia. CRT also induced LRP1 shedding (Fig. 6F). These results suggest that the ability of a reagent to stimulate LRP1 shedding from microglia correlates with its ability to stimulate a proinflammatory response. Stimulation of shedding is a property of some but not all LRP1 ligands.
Shed LRP1 activates microglia in vitro
To test whether sLRP1 regulates cell signaling and gene expression in microglia, sLRP1 was purified from human plasma. A single major band with a mobility consistent with that of the 515-kDa LRP1 α-chain was detected by SDS-PAGE (Fig. 7A). RAP ligand blotting confirmed that the high molecular mass band was the LRP1 α-chain.
Wild-type microglia were treated with increasing concentrations of purified sLRP1 in 0.5% serum-supplemented medium for 6 h. sLRP1 robustly increased expression of TNF-α mRNA (Fig. 7B) and stimulated TNF-α protein accumulation in CM (Fig. 7C). sLRP1 also increased expression of the mRNAs for IL-6 (Fig. 7D), IL-1β (Fig. 7E), and iNOS (Fig. 7F). In all four mRNA expression studies, the responses were sLRP1 concentration-dependent and statistically significant with 60 ng/ml sLRP1 (0.15 nM). A significant increase in TNF-α protein accumulation in CM was observed with 30 ng/ml sLRP1. Boiling sLRP1 at 100°C for 5 min neutralized its ability to induce cytokine expression at the mRNA and protein levels, excluding LPS contamination as an explanation for the activity of sLRP1.
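The mass-to-molar conversions quoted here (e.g., 60 ng/ml given as 0.15 nM) depend on the molecular mass assumed for sLRP1. A minimal sketch of the arithmetic, using the ~515-kDa α-chain mass reported above as the assumed mass:

```python
# Convert a mass concentration (ng/ml) to nanomolar, assuming the ~515-kDa
# alpha-chain mass from the ligand blots (an assumption; the exact nM value
# shifts with the molecular mass used).
def mass_to_molar_nM(conc_ng_per_ml, mw_kda):
    grams_per_liter = conc_ng_per_ml * 1e-6      # 1 ng/ml = 1e-6 g/L
    molar = grams_per_liter / (mw_kda * 1e3)     # (g/L) / (g/mol) -> mol/L
    return molar * 1e9                           # mol/L -> nM

for dose in (30, 60):
    print(f"{dose} ng/ml sLRP1 ~ {mass_to_molar_nM(dose, 515):.2f} nM")
```

At 515 kDa, 60 ng/ml works out to roughly 0.12 nM, in the same range as the 0.15 nM quoted in the text; either way, the active doses are well below 1 nM.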
Next, we compared the ability of sLRP1 to increase cytokine expression in LRP1-expressing and -deficient microglia, isolated from LRP1 fl/fl-LysM-Cre-positive and LysM-Cre-negative mice. sLRP1 increased expression of TNF-α (Fig. 7G), IL-6 (Fig. 7H), and IL-1β (Fig. 7I) similarly in LRP1-expressing and -deficient cells. These results suggest that membrane-anchored LRP1 is not essential in the pathway by which sLRP1 induces expression of proinflammatory mediators in microglia.
To rule out the possibility that species differences between the cells (mouse) and sLRP1 (human) contributed to the results observed, we repeated the cytokine expression studies using full-length LRP1, purified from mouse liver (mLRP1). In bone marrow-derived macrophages, sLRP1 and mLRP1 are equally effective at inducing inflammatory responses (30). Purified mLRP1 robustly increased expression of the mRNAs for TNF-α (supplemental Fig. 1A), IL-6 (supplemental Fig. 1B), IL-1β (supplemental Fig. 1C), and iNOS (supplemental Fig. 1D). Boiling purified mLRP1 blocked or significantly inhibited its activity. sLRP1 and mLRP1 were purified by affinity chromatography using GST-RAP covalently coupled to Sepharose (30). Because of the covalent coupling method, it is unlikely that RAP coeluted with and contaminated purified LRP1 preparations. The low concentrations of sLRP1 and mLRP1 necessary to induce cytokine expression (100 pM or less) further argue against RAP contamination as an explanation for the activity of these proteins. When purified sLRP1 and mLRP1 (up to 1.0 μg of each protein) were subjected to immunoblot analysis to detect GST-RAP, no signal was observed (supplemental Fig. 2).
sLRP1 amplifies the microglial response to RAP
We hypothesized that LRP1 shedding contributes to the proinflammatory response observed in RAP-treated microglia. To test this hypothesis, first we examined cell signaling in microglia treated with RAP or sLRP1. Yang et al. (25) demonstrated that RAP activates NF-κB and JNK in microglia.
To examine RAP-initiated cell signaling in an unbiased manner, we treated microglia with RAP or vehicle for 1 h and identified protein phosphorylation events using the phosphoprotein proteome-profiler from R&D Systems, which profiles 43 distinct protein phosphorylation events. IκBα is not represented in the array; however, we did observe increased phosphorylation of c-Jun N-terminal kinase (JNK), together with its downstream substrate c-Jun (Fig. 8, A and B). ERK1/2 and p38 MAPK were phosphorylated, as was Akt at Ser-473 and its downstream target GSK3β. Phosphorylation of GSK3β by Akt results in GSK3β inactivation (47). The transcription factor, cAMP-response element-binding protein (CREB), which is a target for multiple kinases (48), also was phosphorylated.
The results of the phosphoprotein array experiment were confirmed in separate immunoblotting studies. Fig. 8C shows that RAP caused phosphorylation of p38 MAPK, ERK1/2, c-Jun, and Akt Ser-473. The results of three separate immunoblotting experiments are summarized in Fig. 8D.
Next, microglia were treated with sLRP1 (60 ng/ml) for up to 8 h. Fig. 8E shows that p38 MAPK was phosphorylated, and this response was sustained. ERK1/2 activation also was observed; this response was apparent throughout the time course but appeared to maximize at 2 h. Ser-473 in Akt was phosphorylated transiently within the 8-h incubation. These results demonstrate overlap in the phosphorylation events caused by RAP and sLRP1.
To test whether inhibiting LRP1 shedding may attenuate the response of microglia to RAP, microglia were pre-treated with the general metalloproteinase inhibitor GM6001 (50 μM) and then with RAP. LRP1 shedding was largely blocked when assessed 12 h after adding RAP (Fig. 9A) and remained substantially decreased 24 h after adding RAP (Fig. 9B). Cell viability was not compromised (results not shown). Next, we examined expression of the cytokines IL-6 and IL-1β. GM6001 markedly inhibited expression of IL-6 mRNA (Fig. 9C) and IL-1β mRNA (Fig. 9D) in cells treated with RAP for 12 or 24 h.

When sLRP1 is released from cells, the residual membrane-anchored LRP1 fragment may be further processed by γ-secretase to release a cytoplasmic fragment, which has been reported to attenuate inflammation (17). We therefore tested whether inhibiting γ-secretase further increases the proinflammatory response to RAP in microglia. Cells were pre-treated with 10 μM N-[N-(3,5-difluorophenacetyl)-L-alanyl]-S-phenylglycine t-butyl ester (DAPT) for 2 h and then with RAP or vehicle. The concentration of DAPT selected for this experiment was previously shown to block processing of LRP1 by γ-secretase in mouse macrophages (17). Although DAPT slightly increased expression of TNF-α (Fig. 9E), IL-6 (Fig. 9F), and IL-1β (Fig. 9G) in RAP-treated cells, the increases were not statistically significant.
RAP and sLRP1 induce neuroinflammation when injected into the spinal cord
To test whether RAP induces neuroinflammation in vivo, 120 pmol of RAP (2 μl of 60 μM stock solution) or vehicle (PBS) was injected directly into the right dorsal horn of the spinal cord (T10–T11) of wild-type adult mice using a stereotaxic instrument. Tissue was harvested 24 h later and immunostained for Iba-1 to assess microgliosis (Fig. 10A). RAP induced a significant increase in the density of Iba1-immunopositive cells, as determined by image analysis (Fig. 10B).
Next, 120 pmol of RAP, 0.2 pmol of sLRP1 (2 μl of 0.1 μM stock solution), or vehicle was injected into spinal cords, using the equivalent procedure. RNA was isolated from the ipsilateral side 24 h later. Expression of proinflammatory mediators was determined by RT-qPCR. RAP significantly increased expression of TNF-α, IL-6, IL-1β, and iNOS (Fig. 10, C–F). sLRP1 also increased expression of TNF-α, IL-6, IL-1β, and iNOS (Fig. 10, G–J). sLRP1 is thus capable of inducing neuroinflammation in vivo. Overall, our results support a model in which LRP1 shedding from microglia converts an anti-inflammatory receptor into a proinflammatory product in the CNS (Fig. 10K).
Discussion
LRP1 gene deletion is embryonic lethal in mice, implying a critical function for LRP1 in development (49). In adult mammals, the function of LRP1 remains incompletely understood; however, there is increasing evidence that LRP1 regulates the activity of cells that respond to tissue injury (11,14). Previous studies suggest that in microglia, LRP1 may function to oppose activation (24-27). Our results suggest a more complicated model in which the effects of LRP1 on microglial activation reflect a balance between the opposing activities of membrane-anchored and shed LRP1. The balance may be controlled by signals in the microglial microenvironment that either promote or attenuate LRP1 shedding.
Our work that led to the identification of LRP1 shedding as a proinflammatory pathway was initiated in an attempt to explain why RAP treatment was so much more robust at inducing cytokine expression in microglia, compared with LRP1 gene silencing or deletion. A key observation was the ability of RAP to cause LRP1 shedding, like LPS. We then showed that sLRP1 is potently proinflammatory against cultured microglia. When LRP1 shedding was inhibited with GM6001, expression of IL-6 and IL-1β in response to RAP was attenuated. Similarly, expression of IL-6 and IL-1β in response to RAP was substantially decreased in LRP1-deficient microglia isolated from LRP1 fl/fl-LysM-Cre-positive mice. We interpret this result to reflect a decreased capacity for LRP1-deficient microglia to generate sLRP1.
In addition to RAP and LPS, we showed that CRT also induces LRP1 shedding from microglia. CRT was selected for study because it is known to bind to LRP1 and also to trigger proinflammatory cell signaling in antigen-presenting cells (43,45). We did not confirm that CRT-induced LRP1 shedding resulted from an interaction with LRP1. CRT may interact with other cell surface-associated proteins such as C1q as well, either independently of LRP1 or as part of an LRP1-containing multiprotein complex (50,51). Results obtained with RAP, LPS, and CRT suggest that LRP1 shedding may represent a common pathway by which diverse proinflammatory mediators promote microglial activation.

(Figure 8 legend, panels C–E: C, to confirm the results of the array, immunoblot analysis was performed to detect the indicated phosphorylated proteins, including phospho-p38 MAPK, phospho-ERK1/2, phospho-c-Jun, and phospho-Ser-473 in Akt. Blots were probed to detect β-actin as a loading control. D, immunoblots like those shown in C were subjected to densitometry. The results of three separate experiments were averaged (mean ± S.E.; *, p < 0.05; ***, p < 0.001, unpaired t test). E, microglia were cultured in low serum medium for 30 min and then treated with sLRP1 (60 ng/ml) for the indicated times. Immunoblot analysis was performed to detect the phosphorylated forms of p38 MAPK, ERK1/2, and Akt Ser-473. Blots were re-probed for β-actin as a loading control. Results are representative of two independent experiments.)
Purified sLRP1 was potently proinflammatory at concentrations under 1.0 nM in experiments with cultured microglia. The mechanism by which sLRP1 activates cell signaling in microglia and induces cytokine expression remains to be determined. Because sLRP1 binds ligands similarly to membrane-anchored LRP1, sLRP1 may serve as a "receptor decoy," competing for endogenously-produced ligands (31,35,52), including proteins that stimulate anti-inflammatory responses if they bind to cells. Alternatively, sLRP1 may interact directly with microglia. This interaction, if essential, does not appear to require membrane-anchored LRP1. The response to sLRP1 is similar in microglia and peripheral macrophages (30); however, in Schwann cells, sLRP1 has an apparently opposite effect, attenuating the cellular activation that induces cytokine expression and recruits macrophages to injured peripheral nerves (33). Direct binding of sLRP1 to Schwann cell surfaces was demonstrated (33). Furthermore, Schwann cells may be pre-conditioned by sLRP1 and resist subsequent challenges with inflammatory agents in the absence of sLRP1. It is therefore likely that the mechanism by which sLRP1 regulates Schwann cell physiology is different from that which is operational in microglia and macrophages.
To test whether RAP and sLRP1 induce neuroinflammation in vivo, these proteins were injected directly into spinal cords in mice. Induction of cytokine expression was observed. In mice injected with RAP, we observed microglial activation; however, microglia may not be the only cells responsible for the changes in gene expression in the spinal cord. Diverse cells in the CNS express LRP1, including neurons and astrocytes (53-55), and thus may respond to RAP. Furthermore, cells in the CNS in addition to microglia may be targets for exogenously-administered sLRP1 and contribute to the proinflammatory responses observed.
Chuang et al. (24) demonstrated that LRP1 expression is increased in microglia in association with multiple sclerosis lesions and that microglial LRP1 is protective in experimental autoimmune encephalomyelitis. Because multiple sclerosis lesions are considered the focus of inflammation, the study by Chuang et al. (24) emphasizes the importance of identifying mediators that regulate microglial LRP1 expression in vivo. If LRP1 expression is increased in vivo, the quantity of substrate available for shedding also may be increased, and the resulting sLRP1 may contribute to the chronicity of inflammation. In macrophages, LRP1 expression is up-regulated by colony-stimulating factor-1 and then decreased by factors such as interferon-γ and LPS (56-58). This example of dynamic regulation of gene expression suggests that the abundance of LRP1 may be fine-tuned to the degree of differentiation and activation in cells of monocytic lineage.
Although our data indicate that sLRP1 contributes to the potency of RAP in activating microglia, the mechanism by which RAP initially triggers a cellular response remains to be determined. In this study, we added a fairly high concentration of RAP (150 nM), assuming that RAP functions by blocking autocrine LRP1 signaling initiated by endogenously-produced ligands. However, RAP at concentrations as low as 15 nM was effective at inducing LRP1 shedding. The ability of RAP to activate cell signaling in microglia, as demonstrated here and elsewhere (25), raises the possibility that RAP regulates microglial cell physiology autonomously and directly through LRP1, as opposed to functioning as an antagonist of other ligands. This model is supported by results obtained with macrophages (20). In these cells, IκBα phosphorylation was observed within 5 min of adding RAP. For all LRP1 ligands that trigger cell-signaling responses, there is now evidence that diverse essential co-receptors may be involved (21,59,60). The presence or absence of an essential co-receptor may explain why the response to a specific ligand, such as lactoferrin, may be cell type-specific.
Overall, we envision two pathways by which the cell-surface abundance of LRP1 may be regulated in microglia in response to proinflammatory mediators. First, LRP1 gene expression may be down-regulated (22,25). Second, LRP1 is subject to shedding. In both cases, an anti-inflammatory receptor is removed from the cell surface. With LRP1 shedding, a proinflammatory agent is also added to the microglial microenvironment. The same proinflammatory mediators may stimulate both processes simultaneously.
In conclusion, we have identified sLRP1 as a biologically active product, capable of activating microglia and promoting neuroinflammation. The opposing activities of membrane-anchored and shed LRP1 suggest that proteinases, which release LRP1 from the cell surface, also may be potent regulators of microglial activation. Understanding the function of this novel biochemical system is an important goal for future work.
Proteins and reagents
RAP was expressed as a GST fusion protein in bacteria and purified as described (61). As a control, free GST was expressed in bacteria transformed with the empty vector, pGEX-2T. GST fusion proteins were subjected to chromatography on Detoxi-Gel endotoxin-removing columns (Pierce). Recombinant human CRT was purchased from Sino Biological. LPS, serotype O55:B5, was from Sigma. DAPT and GM6001 were from EMD Millipore. Primers and probes for RT-qPCR experiments were purchased from Thermo Fisher Scientific. sLRP1 was purified from human plasma by the method of Gorovoy et al. (30). In brief, fresh-frozen human plasma was supplemented with proteinase inhibitors, dialyzed against 50 mM Tris-HCl, 150 mM NaCl, pH 7.5, with 1 mM CaCl2 (TBS-Ca) for 12 h at 4°C, and then subjected to affinity chromatography on GST-RAP covalently coupled to Sepharose 4 Fast Flow (GE Healthcare). RAP-associated proteins were eluted in 0.1 M sodium acetate, 0.5 M NaCl, pH 4, and neutralized by rapid mixing with 50 mM Tris-HCl, pH 8.0. Each sLRP1 preparation was examined for integrity and purity by SDS-PAGE with Coomassie staining and by RAP ligand blotting. mLRP1 was purified from mouse livers, as described previously (30). EI-tPA was from Molecular Innovations. α2M was purified from human plasma and activated for binding to LRP1 as described previously (56).
Mice
Wild-type C57BL/6J mice were from The Jackson Laboratory. Mice in which the promoter and first two exons of the LRP1 gene are flanked by loxP sites were originally generated by Rohlmann et al. (62). Mice that are homozygous for the floxed LRP1 gene (LRP1 fl/fl ) were bred with mice that express Cre recombinase under the control of the lysozyme-M promoter (LysM-Cre) in the C57BL/6J background to generate LRP1 fl/fl -LysM-Cre-positive mice. Littermate controls were LRP1 fl/fl and LysM-Cre-negative. All animal experiments were approved by the Institutional Animal Care and Use Committee at University of California San Diego.
Microglia also were isolated from 8-week-old male LRP1 fl/fl -LysM-Cre-positive and LysM-Cre-negative mice. Brains were harvested and dissociated using the neural tissue dissociation kit. Myelin debris was removed using Myelin Removal Beads II (Miltenyi Biotec). Microglia were then isolated by magnetic cell sorting using CD11b microbeads (Miltenyi Biotec). Cells were plated in medium supplemented with 10% FBS and 5 ng/ml GM-CSF (R&D Systems) and studied within 5 days.
In RAP ligand-blotting studies, blocked PVDF membranes were incubated with 100 nM GST-RAP in 5% nonfat milk for 1 h at 22°C. As a control, equivalent membranes were incubated with 100 nM free GST. The membranes were washed three times and then incubated with GST-specific antibody coupled to horseradish peroxidase (84-814; Genesee Scientific). Conjugated antibody was detected with ECL Plus™ (GE Healthcare) and HyBlot CL autoradiography film (Denville Scientific). Blots were scanned (Canoscan), and densitometry was performed using ImageJ software.
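Densitometry with ImageJ yields raw integrated band intensities; a typical downstream calculation, as applied to blots like those in Figs. 5 and 8, normalizes each target band to its loading-control band and expresses the result as fold-change over the vehicle lane. A minimal sketch with hypothetical intensity values (not taken from the actual blots):

```python
# Typical densitometry workflow (values hypothetical): normalize each band
# to its loading control, then express as fold-change over the vehicle lane.
raw = {                       # ImageJ integrated densities: (target, actin)
    "vehicle": (1200.0, 5000.0),
    "LPS":     (2400.0, 4800.0),
    "RAP":     (2100.0, 5100.0),
}

normalized = {k: target / actin for k, (target, actin) in raw.items()}
fold = {k: v / normalized["vehicle"] for k, v in normalized.items()}
for cond, f in fold.items():
    print(f"{cond}: {f:.2f}-fold vs. vehicle")
```

Normalizing to the loading control before computing fold-change corrects for unequal protein loading between lanes.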
Analysis of conditioned medium
Microglia were allowed to condition medium that contained 0.5% FBS. TNF-α and IL-6 in CM were determined with mouse Quantikine ELISA kits (R&D Systems). sLRP1 was determined by immunoblot analysis without concentrating samples. Nitric oxide was determined by measuring nitrite in CM with the Griess reagent system (Promega). In these studies, CM (50 μl) was incubated with 100 μl of Griess reagent in 96-well plates for 30 min at 22°C. The absorbance was determined at 540 nm using a SpectraMax M2e microplate reader (Molecular Devices).
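In the Griess assay, nitrite concentrations are read off a linear standard curve of absorbance at 540 nm versus known nitrite standards. A sketch of that calculation with hypothetical standards and a hand-rolled least-squares fit (the actual standards and readings are not given in the excerpt):

```python
# Reading nitrite off a Griess standard curve (illustrative values): fit
# A540 vs. known nitrite standards by least squares, then invert for samples.
standards_uM = [0, 6.25, 12.5, 25, 50, 100]
absorbance   = [0.02, 0.08, 0.15, 0.29, 0.55, 1.08]  # hypothetical A540

n = len(standards_uM)
mean_x = sum(standards_uM) / n
mean_y = sum(absorbance) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(standards_uM, absorbance))
         / sum((x - mean_x) ** 2 for x in standards_uM))
intercept = mean_y - slope * mean_x

def nitrite_uM(a540):
    """Invert the standard curve: absorbance -> nitrite concentration."""
    return (a540 - intercept) / slope

print(f"A540 = 0.40 -> {nitrite_uM(0.40):.1f} uM nitrite")
```

Sample absorbances are then converted to micromolar nitrite by inverting the fitted line; readings above the highest standard would require dilution and re-measurement.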
Phosphoprotein array studies
Phosphorylated proteins were detected in an unbiased manner using the human Phospho-kinase Array Proteome Profiler™ (R&D Systems). Although this system was originally developed to examine human proteins, we have demonstrated its effectiveness using rodent cell extracts (21,64). Microglia in 0.5% serum-supplemented medium were treated with RAP (150 nM) or vehicle (PBS) for 1 h. Protein extracts were prepared and applied to the membranes. The membranes were developed using ECL, as described by the manufacturer. Blots were scanned (Canoscan), and densitometry was performed using ImageJ software.
Cell proliferation assay
Microglia were plated in 96-well plates at a density of 10⁵ cells per well and cultured in low-serum medium for 48 h in the presence of LPS (1 μg/ml), RAP (150 nM), GST (150 nM), or vehicle (PBS). Cell proliferation was determined using the Cayman WST-1 assay according to the manufacturer's instructions. Briefly, after 48 h, cells were incubated at 37°C for 2 h with WST-1 mixture. Absorbance at 450 nm was measured using a SpectraMax M2e microplate reader (Molecular Devices).
Transwell cell migration assays
Microglia (1 × 10⁴) were treated with LPS (1 μg/ml), RAP (150 nM), GST (150 nM), or vehicle (PBS) for 10 min at 37°C and then added, with the same reagents in 0.5% FBS-supplemented medium, to the top chamber of 24-well Transwell units with 8.0-μm pores (Corning Glass). The underside of each membrane was coated with 10 μg/ml fibronectin (Millipore). The bottom chamber contained 10% FBS. Cells were allowed to migrate at 37°C in 5% CO₂ for 16 h. Non-migrating cells were removed from the upper surface using a cotton swab. The lower surfaces were stained with Hema 3 (Thermo Fisher Scientific). Stained membranes were mounted on microscope slides and imaged using a Leica DMIRE2 microscope. The number of migrated cells was determined in four representative fields, selected by a blinded investigator, using ImageJ software. Three separate membranes from four independent experiments were analyzed for each condition.
Stereotaxic injection of spinal cords
Mice were anesthetized and placed in a stereotaxic frame. Under sterile conditions, an incision was made from thoracic vertebrae T8 to T12, and the spinal cord was exposed by laminectomy between T10 and T11. A total volume of 2 l of each experimental solution (RAP, sLRP1, vehicle) was slowly injected into the right dorsal horn of the spinal cord using a Hamilton neuro-syringe. The needle was withdrawn after 5 min to avoid efflux of the injected solution. The wound was closed with 6-0 nylon suture.
Immunohistochemistry
Spinal cords were harvested 24 h after stereotaxic injections. Mice were deeply anesthetized and subjected to intracardiac perfusion with fresh PBS followed by 4% paraformaldehyde. Tissues were paraffin-embedded, and 4-μm sections were prepared (at least three per tissue). Tissue sections were incubated with 10% nonfat milk and then with primary antibody against Iba-1 (019-19741; Wako) for 1 h. Next, sections were incubated with anti-rabbit antibody conjugated to HRP and developed with 3,3′-diaminobenzidine. Control sections were treated with secondary antibody only. Light microscopy was performed using a Leica DFC420 microscope with Leica Imaging Software 2.8.1 (Leica Microsystems). IHC studies were subjected to image analysis using ImageJ software (National Institutes of Health). The total number of Iba1-positive cells on the ipsilateral side of the spinal cord was determined by a blinded investigator.
Fluorescence microscopy
Microglia were cultured on Nunc™ Lab-Tek™ II CC2™ chamber slides (Thermo Fisher Scientific). The cells were fixed in 4% paraformaldehyde, permeabilized in 0.3% Triton X-100 (Sigma), and blocked with 10% normal donkey serum (Sigma). Cells were stained with Oregon Green-phalloidin (Molecular Probes). Slides were mounted using ProLong Gold Antifade reagent with DAPI (Molecular Probes) and viewed under an inverted fluorescence microscope.
Statistics
Statistical analysis was performed using GraphPad Prism 5.0 (GraphPad Software Inc.). All results are expressed as the mean ± S.E. Comparisons between two groups were performed using paired or unpaired t test. Results from more than two groups were analyzed by one-way ANOVA followed by Tukey's or Dunnett's post hoc analysis as stated. p < 0.05 was considered statistically significant.
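The one-way ANOVA used for the multi-group comparisons is the ratio of between-group to within-group mean squares. A minimal sketch with hypothetical data (Prism additionally applies Tukey's or Dunnett's post hoc tests, which are not reproduced here):

```python
# One-way ANOVA F statistic, as underlies the multi-group comparisons.
# Group values below are hypothetical, not taken from the paper's data.
def one_way_anova_F(groups):
    k = len(groups)                              # number of groups
    N = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / N      # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

vehicle, lps, rap = [1.0, 1.1, 0.9], [2.9, 3.1, 3.0], [2.4, 2.6, 2.5]
print(f"F = {one_way_anova_F([vehicle, lps, rap]):.1f}")
```

A large F indicates that the variation between group means dwarfs the variation within groups; the post hoc tests then identify which specific groups differ.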
"year": 2017,
"sha1": "d080090c1e117b36357a060db8678c51f21f7a51",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/292/45/18699.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "24724a5260bf7b3cd04b78ef9821808a7548a208",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Kidney Outcomes and Trajectories of Tubular Injury and Function in Critically Ill Persons with and without Coronavirus-2019
Background: Coronavirus disease-2019 (COVID-19) may injure the kidney tubules via activation of inflammatory host responses and/or direct viral infiltration. Most studies of kidney injury in COVID-19 lacked contemporaneous controls or measured kidney biomarkers at a single time point. To better understand mechanisms of AKI in COVID-19, we compared kidney outcomes and trajectories of tubular injury, viability, and function in prospectively enrolled critically ill adults with and without COVID-19.

Methods: The COVID-19 Host Response and Outcomes (CHROME) study prospectively enrolled patients admitted to intensive care units in Washington state with symptoms of lower respiratory tract infection, determining COVID-19 status by nucleic acid amplification on arrival. We evaluated major adverse kidney events (MAKE), defined as a doubling of serum creatinine, kidney replacement therapy, or death, in 330 patients after inverse probability weighting. In the 181 patients with available biosamples, we determined trajectories of urine kidney injury molecule-1 (KIM-1) and epidermal growth factor (EGF), and urine:plasma ratios of endogenous markers of tubular secretory clearance.

Results: At ICU admission, mean age was 55 ± 16 years; 45% required mechanical ventilation; and mean serum creatinine concentration was 1.1 mg/dL. COVID-19 was associated with a 70% greater incidence of MAKE (95% CI 1.05, 2.74) and a 741% greater incidence of KRT (95% CI 1.69, 32.41). The biomarker cohort had a median of three follow-up measurements. Urine EGF, secretory clearance ratios, and eGFR increased over time in the COVID-19-negative group but remained unchanged in the COVID-19-positive group. In contrast, urine KIM-1 concentrations did not significantly change over the course of the study in either group.

Conclusions: Among critically ill adults, COVID-19 is associated with a more protracted course of proximal tubular dysfunction.
INTRODUCTION
Coronavirus disease-2019 (COVID-19) is a viral syndrome caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Clinical manifestations range from mild upper respiratory illness to acute respiratory distress syndrome (ARDS), multi-organ system failure, and death. (1,2) Evidence suggests that SARS-CoV-2 infection may cause injury to the kidney tubules, either via direct viral infiltration and/or secondary activation of inflammatory host responses. In cell culture, SARS-CoV-2 directly infects proximal tubular cells, endothelial cells, and podocytes via the angiotensin-converting enzyme 2 (ACE2) receptor. (3,4) Relatively high incidences of acute kidney injury (AKI) and kidney replacement therapy (KRT) are reported among hospitalized persons with COVID-19, (5-8) and markers of tubular injury such as kidney injury molecule-1 (KIM-1) are elevated early in the course of disease. (6,9) Moreover, case series have described a syndrome of proximal tubular dysfunction among some patients with COVID-19 based on impaired reabsorption of beta-2-microglobulin, glucose, and uric acid. (10,11) On the other hand, most previous human studies of COVID-19 have lacked contemporaneously enrolled control persons without SARS-CoV-2, conflating potential kidney effects of this infection with the underlying severity of illness and temporal differences in care. Detectable SARS-CoV-2 is relatively uncommon in the blood (12) or urine (13) of patients with COVID-19, challenging the clinical relevance of direct kidney infection observed in cell culture. Yet, the mechanisms and natural course of injury to the proximal tubules remain poorly understood.
To that end, we sought to better define the patterns of and longitudinal changes to the proximal tubules attributable to COVID-19 infection in critically ill patients. In this study, we compared the incidence of AKI outcomes and the trajectories of tubular injury, viability, and function in prospectively enrolled and comparably ill patients from intensive care units with and without COVID-19.
Study population
The COVID-19 Host Response and Outcomes (CHROME) study prospectively enrolled 380 critically ill adults from intensive care units (ICU) at the University of Washington Medical Center, Harborview Medical Center, and Northwest Hospital, all in Seattle, WA, between April 2020 and May 2021 (14). Enrollment criteria were age >18 years, fever, hypoxemia (defined as requiring any supplemental oxygen or an oxygen saturation of <94% on ambient air), and symptoms of lower respiratory tract infection that prompted SARS-CoV-2 testing. Subsequent COVID-19 status was defined based on the results of rapid nucleic acid amplification testing (NAAT) of nasopharyngeal swabs, which were performed within 24 hours of ICU admission. The prospective enrollment of critically ill patients based on clinical suspicion for COVID-19 was designed to yield comparably ill cohorts of patients with and without the disease and minimize temporal differences in care. The CHROME study excluded persons who were pregnant, transferred from another ICU after more than 24 hours, had a history of solid organ transplantation, were institutionalized, or were unlikely to survive for more than 24 hours.
For the present study, we excluded 23 CHROME patients who had a history of end-stage kidney disease (ESKD), six who had received dialysis prior to study enrollment, 20 with an admission serum creatinine concentration corresponding to an estimated glomerular filtration rate (GFR) <15 mL/min/1.73 m2, and one without a collected urine sample, leaving 223 SARS-CoV-2 positive and 107 negative patients for analyses ("Clinical cohort"). We then measured biomarkers of kidney injury, viability, and secretory clearance in a subsample of 117 SARS-CoV-2 positive and 64 negative patients who had at least one paired plasma and urine sample for analysis ("Biomarker cohort").
Ethical Statement
Study procedures were approved by the Institutional Review Board (IRB), with consent obtained from all patients or waived by the local regulatory board early in the pandemic. All procedures were followed in accordance with the ethical standards of the responsible committee on human experimentation and with the Helsinki Declaration of 1975.
Measurements of clinical study data
Study coordinators prospectively abstracted demographic data, respiratory status, vital signs, laboratory results, and the receipt of kidney replacement therapy (KRT) from electronic medical records. We calculated baseline Acute Physiology and Chronic Health Evaluation (APACHE) III and Sequential Organ Failure Assessment (SOFA) scores based on available clinical and laboratory data (15). We determined the presence of acute respiratory distress syndrome (ARDS) based on the ratio of inspired to arterial oxygen concentration and adjudication of chest radiographs by an attending radiologist (16). Kidney outcomes were assessed over the course of hospitalization and included (1) the major adverse kidney event (MAKE), defined by at least a doubling of the serum creatinine concentration from baseline, requirement for kidney replacement therapy, or death (17); (2) individual components of the MAKE outcome; and (3) any stage of acute kidney injury (AKI), defined by the Kidney Disease Improving Global Outcomes (KDIGO) criteria as an absolute increase of ≥0.3 mg/dL or a ≥50% increase in serum creatinine from baseline (18). We defined the baseline serum creatinine concentration as the clinically obtained value closest to, and before, the time of study enrollment on ICU day one. For one patient who did not have a serum creatinine measurement before ICU day one, we used the first available clinical value within 24 hours after day one.
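The outcome definitions above can be expressed compactly in code. This is an illustrative sketch only; the function names and data layout are assumptions, not from the study's analysis code.

```python
def kdigo_aki(baseline_cr: float, current_cr: float) -> bool:
    """Any-stage AKI per the KDIGO criterion described above: an absolute
    increase of >= 0.3 mg/dL or a >= 50% increase in serum creatinine
    from baseline."""
    return (current_cr - baseline_cr >= 0.3) or (current_cr >= 1.5 * baseline_cr)


def make_event(baseline_cr: float, peak_cr: float,
               received_krt: bool, died: bool) -> bool:
    """MAKE: at least a doubling of serum creatinine from baseline,
    kidney replacement therapy, or death."""
    return (peak_cr >= 2.0 * baseline_cr) or received_krt or died


# A rise from 1.0 to 1.5 mg/dL meets the KDIGO threshold but not the
# doubling required for the creatinine component of MAKE.
assert kdigo_aki(1.0, 1.5)
assert not make_event(1.0, 1.5, received_krt=False, died=False)
```

Note that a MAKE event can occur without meeting the AKI criterion (e.g., death without a creatinine rise), which is why the components are tracked separately.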
Measurement of kidney biomarkers
Study coordinators collected blood and spot urine samples within 24 hours of ICU admission (day 1) and then subsequently on hospital days 3, 7, 10, and 14 if the patient remained hospitalized. Blood and urine samples were centrifuged at 3,000 RPM for 10 minutes at room temperature. We measured urine concentrations of kidney injury molecule-1 (KIM-1) and epidermal growth factor (EGF) using commercially available immunoassays (Enzo Life Sciences and R&D Systems, respectively). The inter-assay variability is 6.2% for urine KIM-1 and 5.5% for urine EGF. We indexed measurements of KIM-1 and EGF to urine creatinine to account for variation in urinary concentration. We measured plasma concentrations of creatinine and cystatin C, and urine concentrations of creatinine and albumin, using the Beckman-Coulter DxC UniCel 600. We estimated GFR in the biomarker cohort using the 2021 combined CKD-EPI equation based on plasma concentrations of creatinine and cystatin C (19). We estimated tubular secretory clearance based on plasma and urine concentrations of endogenous secretory solutes using a targeted liquid chromatography/mass spectrometry assay, as previously described (20). Plasma samples were precipitated in organic solvent and extracted using solid-phase extraction; urine samples underwent two parallel solid-phase extractions. Dried extracts were reconstituted in 80 µL of 5% acetonitrile/0.2% formic acid in H2O and passed through a large-pore filter plate (MSBVN1210; Millipore). Labeled internal standards were used to reduce sample-specific matrix effects, and single-point external calibration was used to determine concentrations and reduce between-batch variability. We calculated the urine-to-plasma ratio for each solute as an approximation of its secretory clearance (21). To facilitate interpretation and provide a single metric of secretory clearance, we also created a summary score by first standardizing each secretory ratio to a 0-100 scale (20):

Standardized ratio X = 100 × [ln(Ux/Px) − min(ln(Ux/Px))] / range(ln(Ux/Px)),

where ln(Ux/Px) represents the log-transformed urine-to-plasma ratio of each solute, min(ln(Ux/Px)) represents the minimal value in the distribution, and range(ln(Ux/Px)) represents the range of these measurements. We then computed the summary score as the mean of the eight standardized ratios.
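As a concrete illustration of the standardization just described, the sketch below computes a per-patient summary score: each solute's ln(U/P) ratio is rescaled to 0-100 across the cohort and the rescaled values are averaged across solutes. The function name and the dict-of-lists data layout are assumptions for illustration; the code also assumes each solute's ratios are not all identical (otherwise the range is zero).

```python
import math


def summary_secretion_score(urine, plasma):
    """Compute the mean of per-solute standardized ln(U/P) ratios.

    `urine` and `plasma` map solute name -> list of concentrations,
    one entry per patient (same order in both dicts)."""
    scores_per_patient = None
    for solute in urine:
        ln_ratios = [math.log(u / p)
                     for u, p in zip(urine[solute], plasma[solute])]
        lo = min(ln_ratios)
        rng = max(ln_ratios) - lo            # assumes rng > 0
        std = [100.0 * (r - lo) / rng for r in ln_ratios]
        if scores_per_patient is None:
            scores_per_patient = [[s] for s in std]
        else:
            for row, s in zip(scores_per_patient, std):
                row.append(s)
    # Mean across solutes for each patient
    return [sum(row) / len(row) for row in scores_per_patient]
```

Because each solute is min-max scaled over the cohort, the score is a relative ranking within the study population rather than an absolute clearance measurement.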
Analytic plan
We tabulated baseline characteristics according to COVID-19 status using means and standard deviations for normally distributed data or medians and interquartile ranges for variables with skewed distributions. To increase the degree of similarity between SARS-CoV-2 positive and negative patients, we created a propensity score for SARS-CoV-2 positivity using logistic regression with the following clinical data at baseline: age, race, sex, body mass index (BMI), APACHE III score, SOFA score, admission source, extracorporeal membrane oxygenation, sepsis, trauma, pneumonia, history of hypertension, heart failure, chronic obstructive pulmonary disease, cancer, and diabetes, and use of beta blockers and diuretics. To assess covariate balance after weighting, we calculated weighted means and standard deviations (for continuous variables) and weighted proportions (for categorical variables) and then compared the standardized differences between covariates. Standardized differences below 0.25 are generally considered to indicate appropriate matching (22). For the MAKE outcome, patients were followed from the time of ICU admission until they either incurred a component of MAKE or their data were censored at hospital discharge. For the outcomes of AKI, doubling of serum creatinine, and KRT, patients were censored at in-hospital death. We used weighted log-linear Poisson regression with robust Huber-White standard errors to estimate associations of baseline COVID-19 status with each clinical outcome. Models were weighted by the inverse probability of the COVID-19 propensity score and additionally adjusted for the baseline serum creatinine concentration to control for confounding.
To model the trajectories of biomarkers over the course of hospitalization, we employed weighted generalized estimating equations with an independent working covariance structure to account for the correlation within person (23). To account for selection bias that may arise from informative censoring, for each post-baseline sample collection we constructed inverse probability of censoring weights (IPCW) by modeling the probability that the sample collection occurred with logistic regression, as a function of COVID-19 status and baseline covariates, including baseline measures of kidney function. At each time point, the weights were the product of the baseline inverse probability of treatment weights (IPTW) divided by the probability of sample collection at the current and prior time points (i.e., the IPCW). Within each group, we estimated the mean daily change in kidney biomarkers using the slope from linear regression.
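The two building blocks of the weighting scheme described above can be sketched numerically: inverse-probability weights derived from a fitted propensity score, and the weighted standardized difference used as the balance diagnostic. The propensity model itself is omitted, and these helper names are illustrative rather than taken from the study's code.

```python
import numpy as np


def ip_weights(propensity, treated):
    """Inverse-probability-of-treatment weights: 1/p for COVID-19 positive
    patients and 1/(1-p) for negative patients, where p is the fitted
    propensity of SARS-CoV-2 positivity."""
    p = np.asarray(propensity, dtype=float)
    t = np.asarray(treated, dtype=bool)
    return np.where(t, 1.0 / p, 1.0 / (1.0 - p))


def standardized_difference(x, treated, w):
    """Weighted standardized mean difference of covariate x between groups.
    Values below ~0.25 are commonly taken to indicate adequate balance."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    t = np.asarray(treated, dtype=bool)
    m1 = np.average(x[t], weights=w[t])
    m0 = np.average(x[~t], weights=w[~t])
    v1 = np.average((x[t] - m1) ** 2, weights=w[t])
    v0 = np.average((x[~t] - m0) ** 2, weights=w[~t])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2.0)
```

In the longitudinal analysis, the time-point weight would then be the baseline IPTW divided by the cumulative probability of sample collection, as stated above.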
Baseline characteristics of the clinical study cohort
The clinical study cohort included 223 COVID-19 patients and 107 SARS-CoV-2 negative control patients (Table 1). The mean age at ICU admission was 55 ± 16 years; 45% required mechanical ventilation; and 35% required vasopressors. The mean admission serum creatinine concentration was 1.1 mg/dL in each group. After propensity matching, baseline characteristics of COVID-19 patients and control patients were similar, including severity of illness scores and baseline serum creatinine concentrations. Baseline medication use was similar after propensity scoring (Supplemental Table 2a).
Clinical kidney outcomes
In the clinical study cohort, the median hospital length of stay for the MAKE outcome was 10 days (IQR 5-19 days). The cumulative incidence of MAKE was 40% among COVID-19 patients (82 events) and 20% among negative controls (25 events; Figure 1). After inverse probability weighting by propensity score and additional adjustment for baseline serum creatinine, SARS-CoV-2 positivity was associated with an estimated 70% greater incidence of MAKE (Table 2; relative risk 1.70; 95% CI 1.05-2.74; p-value = 0.03). SARS-CoV-2 positivity was associated with an estimated 7.4-fold higher incidence of KRT (relative risk 7.41; 95% CI 1.69-32.41) and a nearly 1.8-fold higher incidence of death (relative risk 1.79; 95% CI 1.06-3.00). The associations of COVID-19 with MAKE were statistically similar after further adjusting for vasopressor use at study admission (Supplemental Table 3).
Baseline markers of tubular injury, viability, and function
The biomarker cohort included 117 COVID-19 patients and 64 SARS-CoV-2 negative control patients (Supplemental Table 1). Patients in the biomarker cohort had modestly greater APACHE III and SOFA scores compared with those in the clinical cohort. The median urine albumin:creatinine ratio at baseline was 72.1 mg/g (IQR 24.7-143.7) in COVID-19 patients and 48.2 mg/g (IQR 21.9-197.9) in control patients. Nephrotic-range proteinuria was present in only one patient, who was SARS-CoV-2 negative. COVID-19 negative patients tended to be on more home medications, although these differences were small after propensity score weighting (Supplemental Table 2b). Baseline urine concentrations of KIM-1 tended to be modestly lower, and the summary secretion score modestly higher, in COVID-19 positive compared with COVID-19 negative patients (Table 3 and Supplemental Table 2). There was no association between COVID-19 status and baseline secretory solute urine:plasma ratios (Supplemental Table 4).
Longitudinal changes in markers of tubular injury, viability, and function
There was a median of three follow-up measurements in the biomarker cohort: 125 patients had at least two follow-up measurements, 93 had at least three measurements, and 61 had four or five measurements. After propensity-score inverse probability weighting and adjustment for informative censoring, urine KIM-1 concentrations did not change significantly over time in either the COVID-19 positive or the COVID-19 negative patients (Figure 2 and Table 4). In contrast, urine EGF concentrations increased by an average of 7% per day (95% CI 4.1%-10.0% per day) in the COVID-19 negative group but by only 0.5% per day (95% CI -1.1% to +2.2% per day) in the COVID-19 positive group (p-value for interaction <0.001). Similar trends were observed for trajectories of the summary secretion score and estimated GFR, with modest increases over time in the COVID-19 negative group but negligible changes in the COVID-19 positive group. Individual secretory solute urine:plasma ratios tended to increase in COVID-19 negative patients and decrease in COVID-19 positive patients, with the most significant differences displayed by kynurenic acid and tiglylglycine (Supplemental Figure 1).
DISCUSSION
Herein we have shown differential trajectories of markers of tubular injury, viability, and secretion between prospectively enrolled, critically ill patients with and without COVID-19. This study adds unique insight into the mechanisms of kidney injury in COVID-19 by illustrating patterns of tubular function over time in comparison with contemporaneously enrolled control persons without the disease. Among control patients, urine EGF concentrations, secretory clearance ratios, and eGFR increased over the course of the study, consistent with a pattern of kidney recovery. In contrast, these markers did not appreciably change in comparably ill patients with COVID-19. These findings suggest that COVID-19 may cause a more protracted and severe course of tubular dysfunction. Similar to other studies, we found COVID-19 to be associated with greater risks of kidney replacement therapy and death.
Proposed pathways of AKI in SARS-CoV-2 infection include a protracted inflammatory response, overstimulation of pro-thrombotic pathways, and direct viral infection of the kidneys (24). A postmortem study found more extensive tubular necrosis and microvascular thrombosis in COVID-19 cases compared with bacterial sepsis (25). Direct kidney infection by SARS-CoV-2 requires viremia, which is relatively uncommon and limited to severe cases of COVID-19; however, more sensitive methods have detected SARS-CoV-2 in urine sediments, suggesting kidney infection may be more common than previously appreciated (12,13,26). Proximal tubule reabsorption defects have been reported in hospitalized patients with COVID-19, including phosphate loss, hypouricemia, and urine glucose wasting (10,11). However, these studies lacked suitable control groups or longitudinal measures of function. We found that urine KIM-1 concentrations were similar between patients with and without COVID-19 over the course of this study, suggesting comparable tubular injury. Yet, patients without SARS-CoV-2 tended to recover eGFR and had a positive trend in EGF and tubular secretory clearance compared to patients with COVID-19, suggesting a slower pattern of kidney recovery in COVID-19, which is consistent with clinical observation (5,27). Previous case series have reported relatively high incidences of AKI and KRT in critically ill patients with COVID-19. For example, the incidences of AKI and KRT were 51% and 19%, respectively, in a multicenter study of 3,309 persons with COVID-19 from ICUs across the United States (7). Similarly high incidences of these outcomes have been reported in individual ICU-based studies of COVID-19 (28,29). In one of the few studies with a control group, the relative risks for AKI and KRT were 1.5 and 3.1, respectively, in 3,345 patients with COVID-19 and 1,265 patients without COVID-19 from the New York City area (30). Another study comparing hospitalized patients with COVID-19 versus patients with a positive test for influenza found that COVID-19 was associated with a 2.1-fold greater incidence of ≥ stage 2 AKI and a 53% lower chance of kidney recovery at discharge (27). In the only prospective study, a single center in Switzerland enrolled 507 consecutive adults who presented with symptoms of respiratory infection. The incidence of AKI over the course of hospitalization was 2.5-fold higher in patients who tested positive for SARS-CoV-2 compared to those with another etiology of their respiratory illness (31). Among these, our study is unique in focusing on critically ill persons with COVID-19 and comparing them to a matched control group of patients with symptoms of a respiratory infection; in particular, we identified a substantially greater risk for KRT in COVID-19 compared to controls of comparable illness severity in the ICU.
Strengths of the current study include prospective enrollment of critically ill patients based on a clinical indication for SARS-CoV-2 testing and the use of propensity matching within the cohort to increase similarity between COVID-19 patients and negative controls. Longitudinal assessment of tubular injury, viability, and secretory clearance markers provides objective measures of these processes over the course of hospitalization. Several limitations of the study warrant comment. Despite matching on indication and propensity score, unmeasured differences between groups may have distorted associations with the trajectories of tubular markers and outcomes. We used statistical methods to account for differential dropout given the competing risk of death; nonetheless, unmeasured differences in surviving patients may have biased the observed associations. The selected markers of tubular injury, viability, and secretory clearance (KIM-1, EGF, and secretory solute ratios) may incompletely reflect these underlying biological processes. Individual secretory markers have differing affinities for tubular transporters, which in aggregate are intended to summarily reflect tubular secretion in the absence of a true gold standard; the summary secretion score was created for ease of interpretation, although there may be a more optimal combination of markers. Calculating eGFR while creatinine and cystatin C are not in steady state may limit accuracy in monitoring kidney function trajectories. Finally, evolution of prevalent viral strains and practice patterns since the data collection period may limit generalizability.
In summary, we found SARS-CoV-2 infection to be associated with more severe AKI and a pattern of prolonged tubular dysfunction in comparably ill ICU patients with and without this infection.
Figures
Figure 1. Association of COVID-19 status with in-hospital kidney outcomes.
Table. Association of COVID-19 status with in-hospital kidney outcomes in the clinical cohort. MAKE is defined by at least a doubling of the serum creatinine concentration, kidney replacement therapy, or death.
Table. Baseline kidney measures by COVID-19 status in the biomarker cohort.
1. Relative risk compares COVID-19 positive with COVID-19 negative patients using propensity-score inverse probability weighting and additional adjustment for baseline serum creatinine concentration. 2. After propensity-score inverse probability weighting and adjustment for informative censoring.
Covariance of resonance parameters ascribed to systematic uncertainties in experiments
In resonance analyses, experimental uncertainties affect the accuracy of resonance parameters. The resonance analysis code REFIT can consider the statistical uncertainty of the experimental data when evaluating the resonance parameter uncertainty. However, since the systematic uncertainties are not independent at each measured energy, they must be treated differently from the statistical uncertainty. In the present study, we developed a new method to incorporate the systematic uncertainty coming from the sample thickness into the uncertainty of resonance parameters. We applied this method to the transmission of natural zinc measured at ANNRI of MLF in J-PARC and derived the systematic uncertainty of resonance parameters. We found that some resonance parameters have larger systematic uncertainties than statistical ones.
Introduction
The experimental uncertainties of cross-section measurements with the neutron time-of-flight (TOF) method consist of statistical and systematic uncertainties. Furthermore, the systematic uncertainty can be separated into neutron-energy-dependent and -independent terms. The neutron-energy-dependent term contains the uncertainties of, for example, the incident-neutron beam spectrum and the correction for the self-shielding effect. On the other hand, the neutron-energy-independent term includes the uncertainties of the sample thickness and normalization, among others. These independent uncertainties can each be represented by a single value and give a uniform relative uncertainty over the whole energy region.
Although the resonance analysis code, REFIT [1], can treat the statistical uncertainty, it cannot consider the systematic uncertainty in the resonance analysis. Some methods to treat the systematic uncertainties have been proposed in the literature [2][3][4]. However, many studies of cross-section measurements have not yet included the systematic uncertainty in their resonance analyses.
We propose a new method to evaluate the systematic uncertainty and correlation of resonance parameters using REFIT. In particular, the uncertainty of sample thickness is discussed because it gives the largest uncertainty in many cases of transmission measurements. In the present work, the resonance analysis of the transmission of natural zinc (Zn) is used as an example.
Measurement
The transmission measurement was performed at the Accurate Neutron-Nucleus Reaction measurement Instrument (ANNRI) of the Materials and Life Science Experimental Facility (MLF) in the Japan Proton Accelerator Research Complex (J-PARC). The accelerator in J-PARC, with a proton beam power of 700 kW, injected two proton pulses (so-called double-bunch mode) with an interpulse spacing of 0.6 µs into the mercury target to generate neutrons. The moderated neutrons were used for the present TOF measurements. A natural Zn sample with dimensions of 50 × 50 × 6 mm and an areal density of n t = 3.92 ± 0.05 atoms/barn was used. For the measurements, two different types of Li-glass detectors were employed. A 6Li-enriched glass detector was used to measure transmitted neutrons, whereas a 7Li-enriched glass detector was utilized to estimate the background events due to gamma-rays. The details of the transmission measurements are given in Ref. [5].
Transmission analysis
The transmission analysis was performed in the same manner as described in the past analysis in Ref. [5]. Figure 1 shows the pulse-height spectra of the 6Li- and 7Li-glass detectors. The events in the filled color region, where single-hit and double-hit events from 6Li(n,α) reactions were found, were adopted for the present analysis. The dead-time correction was applied using the extended dead-time model [5,6]. The frame-overlap backgrounds were evaluated by fitting the TOF spectrum between 37 and 40 ms with the function p1 exp(−p2 t) + p3. The TOF spectra of the two Li-glass detectors after dead-time correction and the estimated frame-overlap spectrum are shown in Fig. 2. To remove gamma-ray backgrounds, the TOF spectrum of the 7Li-glass detector was subtracted from that of the 6Li-glass detector. The TOF spectrum of the 7Li-glass detector was normalized by a factor of 2.2 ± 0.2, derived from a black-resonance measurement with a notch filter inserted, to correct for the difference in detection efficiencies. The transmission was obtained by dividing the sample-in spectrum by the sample-out spectrum. The obtained transmission is shown in Fig. 3. The reduced total cross section, which includes the resolution function of the MLF and the Doppler broadening, can be calculated by σ_tot = −ln(T)/n t, where T is the transmission and n t is the areal density. The relative uncertainties of the total cross section are listed in Table 1 at two neutron energies. The other systematic uncertainty contains the uncertainties of the dead-time correction, beam intensity, and the spectrum normalization factor of the 7Li-glass detector.
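The reduced total cross section follows from the attenuation law T = exp(−n_t σ_tot), so σ_tot = −ln(T)/n_t. A minimal sketch using the sample's quoted areal density (the function name is illustrative):

```python
import math


def total_cross_section(T: float, n_t: float = 3.92) -> float:
    """Reduced total cross section in barns from transmission T,
    with n_t the areal density in atoms/barn (3.92 atoms/barn for
    the natural Zn sample described above)."""
    return -math.log(T) / n_t


# Sanity check: a transmission of exp(-n_t) corresponds to a 1-barn
# cross section, and T = 1 to zero cross section.
assert abs(total_cross_section(math.exp(-3.92)) - 1.0) < 1e-12
assert total_cross_section(1.0) == 0.0
```

Note that the relative uncertainty in n_t (0.05/3.92 ≈ 1.3%) propagates directly into σ_tot as a uniform relative uncertainty, which is exactly the neutron-energy-independent systematic term discussed in the introduction.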
Resonance analysis and covariance evaluation
The resonance analysis was made using REFIT. As mentioned in Sec. 1, since REFIT does not currently have the ability to evaluate the uncertainty of resonance parameters caused by the systematic uncertainty, we derived sets of resonance parameters with varying sample thickness for the obtained transmission data. The sample thickness in the REFIT input was changed from n t − α∆n t to n t + α∆n t, divided into N cases. The systematic uncertainty was calculated by Eq. (2), where Γ η,i is the obtained resonance parameter for the i-th sample thickness; Γ η is the obtained resonance parameter for the nominal sample thickness; and w i is the weight calculated by Eq. (3), where β i is given by Eq. (4) and means that the i-th sample thickness is n t + β i ∆n t. The correlation between resonance parameters Γ η and Γ ζ was determined by Eq. (5). Applying this method, the systematic uncertainty and correlations were estimated. In this estimation, we used α = 4 and N = 9, i.e., sample thicknesses n t − 4∆n t, n t − 3∆n t, · · ·, n t + 4∆n t. The neutron width and resonance energy were fitted while fixing the gamma width to the value in JENDL-5 [7]. Figures 4 and 5 show the fitting result and the definition of the resonance numbers (Fig. 5 shows the same as Fig. 4, but in the neutron energy region between 800 and 5000 eV). Because of the double-bunch effect in the MLF, some resonances produce two dips, as seen for resonance No. 6. Resonance No. 11 partially overlapped with resonance No. 12. The obtained correlation of the neutron widths is shown in Fig. 6. The resulting resonance parameters and uncertainties are listed in Table 2. Figure 7 shows the obtained resonance energy of resonance No. 4 for each sample thickness. The error bars represent the fitting uncertainty considering only the statistical uncertainty. The resonance energies are consistent regardless of the sample thickness. According to this study, the systematic uncertainty of the resonance energy coming from the sample thickness is negligible compared to the statistical uncertainty.
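The thickness-scan procedure can be sketched as follows. The exact forms of the paper's weighting equations are not reproduced in this excerpt, so the normalized Gaussian weighting in β used below is an assumption, and `fit` stands in for a full REFIT fit at a given thickness.

```python
import math


def systematic_stats(fit, n_t, dn_t, alpha=4, N=9):
    """Scan the sample thickness over n_t + beta*dn_t for beta in
    [-alpha, alpha] in N steps, refit at each thickness, and combine
    the fitted parameter values with normalized Gaussian weights in
    beta (an assumed weighting scheme).

    `fit` is a user-supplied function: thickness -> fitted parameter."""
    betas = [-alpha + i * (2 * alpha) / (N - 1) for i in range(N)]
    raw = [math.exp(-b * b / 2.0) for b in betas]   # assumed Gaussian weights
    total = sum(raw)
    weights = [r / total for r in raw]
    params = [fit(n_t + b * dn_t) for b in betas]
    nominal = fit(n_t)                              # nominal-thickness fit
    variance = sum(w * (p - nominal) ** 2 for w, p in zip(weights, params))
    return nominal, math.sqrt(variance)
```

As the text notes, when the fitted parameter responds roughly linearly to the assumed thickness (Fig. 8), the resulting systematic uncertainty corresponds to the slope of that response; a flat response (Fig. 9) yields a negligible systematic term.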
Figure 8 displays the neutron width of resonance No. 4 for each sample thickness. As expected, the neutron width decreases as the sample thickness used in the fitting increases. The systematic uncertainty defined by Eq. (2) corresponds to the slope of this plot. Moreover, the neutron widths for each sample thickness of resonance No. 11 are shown in Fig. 9. In this case, it is difficult to evaluate the systematic uncertainty and correlations among the resonances.
Table 2. Obtained resonance parameters. The gamma width was adopted from JENDL-5 [7]. For the neutron width, the first uncertainty represents the fitting uncertainty, and the second uncertainty represents the systematic uncertainty evaluated by Eq. (2).
Discussion
The uncertainty of the resonance energy was deduced from the uncertainties of the fitting, flight length, and initial time delay. If the experimental systematic uncertainty is not considered in the resonance analysis, the total uncertainty of the neutron width for some resonances, such as resonances No. 1, 4, and 5, is underestimated. Therefore, it is important to evaluate the systematic uncertainty in resonance analyses.
Positive correlations among many resonances were found in Fig. 6. This is an expected result from the following consideration. When the input value of the sample thickness in REFIT becomes small, the transmission calculated from a given cross section increases. To reproduce the experimental transmission, the cross section has to become larger; therefore, the resonance parameters, especially the neutron widths, become larger. Such behavior makes the correlations among many resonances positive. On the other hand, a weak negative correlation between resonance No. 11 and the other resonances was found. According to Eq. (5), the correlation with resonance No. 11 should be weak because its fitted neutron widths have a flat distribution, as seen in Fig. 9. Such a "negative" correlation may be incidental.
This technique, given reliable resonance parameters, is applicable to determining unknown sample thicknesses and sample temperatures. The sample thickness can be estimated from the χ2 distribution by varying the input sample thickness and performing a fit to the measured data. Moreover, in the same way as for the sample thickness, the sample temperature can be deduced by varying the input sample temperature; this approach makes use of the resonance broadening due to the Doppler effect, as in Kai et al. [8]. These applications are underway.
Summary
We proposed a new method to derive the systematic uncertainty and correlations among the resonance parameters in resonance analyses. This simple method obtains sets of resonance parameters by changing the input value of the sample thickness in REFIT. The results show that it is essential to consider the systematic uncertainty when deriving the resonance parameters, especially the neutron width, because its contribution to the total uncertainty might be higher than that of the fitting uncertainty.
Multiple conformations facilitate PilT function in the type IV pilus
Type IV pilus-like systems are protein complexes that polymerize pilin fibres. They are critical for virulence in many bacterial pathogens. Pilin polymerization and depolymerization are powered by motor ATPases of the PilT/VirB11-like family. This family is thought to operate with C2 symmetry; however, most of these ATPases crystallize with either C3 or C6 symmetric conformations. The relevance of these conformations is unclear. Here, we determine the X-ray structures of PilT in four unique conformations and use these structures to classify the conformation of available PilT/VirB11-like family member structures. Single particle electron cryomicroscopy (cryoEM) structures of PilT reveal condition-dependent preferences for C2, C3, and C6 conformations. The physiologic importance of these conformations is validated by coevolution analysis and functional studies of point mutants, identifying a rare gain-of-function mutation that favours the C2 conformation. With these data, we propose a comprehensive model of PilT function with broad implications for PilT/VirB11-like family members.
Type IV pilus-like (T4P-like) systems are distributed across all phyla of prokaryotic life 1,2 . T4P-like systems include the type IVa pilus (T4aP), type II secretion (T2S) system, type IVb pilus (T4bP), Tad/Flp pilus (T4cP), Com pilus, and archaellum, with the latter found exclusively in Archaea. These systems enable attachment, biofilm formation, phage adsorption, surface-associated or swimming motility, natural competence, and folded protein secretion in bacteria and Archaea [3][4][5], and thus are of vital medical and industrial importance. As many of these systems are critical for virulence in bacterial pathogens 1,[6][7][8], conserved components could have value as therapeutic targets. Despite the importance of T4P-like systems, basic questions, including how the pilus is assembled and disassembled, remain open.
All systems have at least three conserved and essential elements: a pilus polymer of subunits termed pilins, a pre-pilin peptidase, and a motor 9 . The pre-pilin peptidase cleaves the N-terminal leader peptides of pilin subunits at the inner face of the cytoplasmic membrane, leaving the mature pilins embedded in the membrane 10 . Polymerization requires extraction of pilins from the membrane by the cytoplasmic motor using energy generated from ATP hydrolysis [11][12][13]. The motor is made up of two well-conserved components: a cytoplasmic ring-like hexameric PilT/VirB11-like ATPase and a PilC-like inner-membrane platform protein 9 . As mature pilins cannot interact directly with the cytoplasmic ATPases, their polymerization requires that both the ATPase and pilins interact with the PilC-like platform protein 9 . Cryo-electron tomography (cryoET) studies of the T4aP, T4bP, archaellum, and T2S systems are consistent with localization of the PilC-like protein in the pore of the hexameric ATPase, connecting it to the pilus on the exterior of the inner membrane [14][15][16][17]. The ATPase connects to stator-like components, suggesting that a fixed ATPase moves the PilC-like protein 15 . Thus, PilT/VirB11-like family members are thought to power pilin polymerization by rotating the PilC-like protein to extract pilins from the membrane and insert them into the base of the growing pilus polymer 15,18 .
Detailed structural analysis of PilT/VirB11-like ATPases has advanced our understanding of how the T4P-like motor could insert pilins into the base of a helical pilus. They form hexamers that can be represented as six rigid subunits, termed packing units, held together by flexible linkers 18 (Supplementary Fig. 1). Adjacent packing units adopt one of two conformations: open (O) or closed (C) 18 . PilB is the ATPase that powers pilin polymerization in the T4aP. All PilB motor structures determined to date are similar in overall conformation: they exhibit C 2 symmetry with a CCOCCO pattern of open and closed interfaces around the hexamer. In this conformation, the pore of PilB is elongated. Using the heterogeneous distribution of nucleotides in ADP-bound and ADP/ATP-analog-bound PilB crystal structures, we deduced that ATP binding and hydrolysis in the CCOCCO PilB structure would propagate conformational changes leading to a clockwise rotation of the elongated pore 18 . PilC, bound in the pore, would thus be turned clockwise in 60° increments, while accompanying conformational changes in the PilB subunits would displace PilC out of the plane of the inner membrane, toward the periplasm 18 . If a pilin is inserted at each clockwise increment, these motions would build a one-start, right-handed helical pilus 18 , consistent with cryoEM structures 19,20 .
T4aP polymers can be rapidly depolymerized at the base, resulting in fiber retraction. PilT is the PilT/VirB11-like ATPase that powers T4aP depolymerization 21 . We applied the same analysis used to deduce the movements of PilB to the C 2 symmetric structure of PilT from Aquifex aeolicus (PilT Aa , PDB 2GSZ 22 ) 18 . We found that this protein had an OOCOOC pattern of interfaces, which would give the impression of counterclockwise rotation of the elongated pore and potential downward movement of PilC 18 . Thus, we proposed that PilT may act like PilB in reverse, consistent with powering pilus depolymerization 18 . This analysis highlighted the importance of clarifying the symmetry and pattern of open and closed interfaces in PilT/VirB11-like family members when interpreting their structures and defining mechanisms.
In contrast to PilB structures, which exhibit only C 2 symmetry, PilT has been crystallized in a variety of conformations with C 6 symmetry [22][23][24] . Other PilT/VirB11-like family members have crystallized in conformations with C 2 , C 3 , and C 6 symmetries 12,25,26 . The multiple potential crystallographic conformations of PilT and PilT/VirB11-like family members suggest that the OOCOOC conformation may not represent the active PilT retraction motor 22,27 . Further, the OOCOOC PilT structure was determined using a homolog from Aquificae 22 and may not reflect a conformation typical of PilT from Proteobacteria, where most of the phenotypic analyses of the T4aP have been conducted. The specific conformation adopted by the PilT motor and the details of retraction remain to be clarified.
Here, we crystallize PilT in four unique conformations, including the highest-resolution structure to date of a hexameric PilT/VirB11-like family member. These structures allow the identification of conserved open- and closed-interface contact points that are used to differentiate the conformations of available PilT/VirB11-like structures into six unique classes. To examine the conformations adopted by PilT and PilB in a noncrystalline state, we determine their structures using cryoEM. Those structures of PilT reveal a clear preference for C 2 and C 3 conformations in the absence of nucleotide or with ADP, and the C 6 conformation in the presence of ATP. These structures are validated by coevolution analysis and functional analysis of point mutants. A gain-of-function mutation with increased in vivo activity is identified, and cryoEM analysis reveals its preference for the C 2 conformation. From these data, we propose a comprehensive model of PilT function with broad implications for all PilT/VirB11-like family members.
Results
PilT crystallizes in C 3 and pseudo-C 3 symmetric conformations. To gauge the reproducibility of previously crystallized PilT conformations and to see if additional conformations could be identified, PilT4 from Geobacter metallireducens (PilT Gm ) was crystallized. PilT Gm was selected because we previously crystallized PilB from G. metallireducens to derive models for PilB-mediated extension 18 . There are four PilT orthologs in Geobacter, and PilT4 is the primary retraction ATPase in Geobacter sulfurreducens 28 . In the absence of added nucleotide, PilT Gm crystallized only after reductive methylation. These crystals diffracted to 3.3 Å, and the structure, determined by molecular replacement, revealed a C 3 symmetric hexamer in the asymmetric unit (Fig. 1a). No nucleotide was found in this structure, although density consistent with sulfate, present in the crystallization conditions, was observed in the nucleotide-binding site (Fig. 1b). By comparison with the interfaces in PilB Gm , we categorized the interfaces between the packing units of PilT Gm as alternating between open and closed. This OCOCOC conformation has not previously been reported for PilT.
We hypothesized that this conformation in the absence of a nucleotide could have resulted from the methylation process, rather than representing a physiologically relevant PilT conformation. Therefore, to crystallize PilT Gm in the absence of added nucleotide without reductive methylation, we extensively optimized the buffer (see the "Methods" section). The resulting crystals diffracted to 3.0 Å, and the structure was solved by molecular replacement with the entire hexamer present in the asymmetric unit (Fig. 1c). This structure was approximately C 3 symmetric with alternating open and closed interfaces, similar to the methylated OCOCOC PilT structure. However, small deviations make this structure pseudo-C 3 symmetric. Although a nucleotide was not added, density consistent with ADP allowed the nucleotide to be modeled with partial occupancy in all nucleotide-binding sites (Fig. 1d). These nucleotides were likely carried over from Escherichia coli during protein purification. Nucleotide was absent in the methylated PilT Gm structure, possibly due to the lengthy methylation protocol or competition with sulfate during crystallization.
An isomorphous structure with high ADP occupancy was obtained by preincubating PilT Gm with Mg 2+ and ATP, then removing the unbound nucleotide prior to crystallization (Fig. 1e, f). This structure is consistent with the OCOCOC structure reflecting a post-hydrolysis ADP-bound conformation. The isomorphic low-occupancy and high-occupancy ADP OCOCOC PilT structures have an RMSD Cα of 0.6 Å per hexamer. The RMSD Cα of these two structures with the methylated OCOCOC PilT structure is 1.9 Å per hexamer.
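The RMSD Cα values quoted for these comparisons are root-mean-square deviations over matched Cα positions. A minimal sketch of the metric, using made-up coordinates rather than data from the structures (real comparisons also require optimal superposition of the two hexamers first, which is omitted here):

```python
# Minimal sketch of the RMSD_Ca metric quoted throughout (hypothetical
# coordinates; real comparisons first superpose the two structures).
def rmsd(coords_a, coords_b):
    assert len(coords_a) == len(coords_b)
    sq = sum(sum((p - q) ** 2 for p, q in zip(a, b))
             for a, b in zip(coords_a, coords_b))
    return (sq / len(coords_a)) ** 0.5

# Two three-residue Ca traces offset by 1 A along z:
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
b = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (2.0, 0.0, 1.0)]
print(rmsd(a, b))  # 1.0
```

A uniform 1-Å offset thus yields an RMSD of exactly 1 Å, which gives a feel for the magnitudes reported here (0.6 Å for near-isomorphous hexamers, several Å between distinct conformations).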
PilT Gm also crystallizes in a C 6 symmetric conformation. Modifying the protocol so that exogenous ATP was not removed prior to crystallization yielded distinct PilT Gm crystals that diffracted to 1.9 Å, the highest resolution to date for any hexameric PilT/VirB11-like family member. The structure was solved by molecular replacement with three protomers in the asymmetric unit. In the crystal, two nearly identical C 6 symmetric hexamers could be identified (Fig. 1g). Compared with the interfaces of PilB Gm , all six interfaces in these hexamers are closed. This conformation is denoted CCCCCC.
Density consistent with Mg 2+ and, surprisingly for an active ATPase, ATP could be modeled in the active sites (Fig. 1h). The ribose moiety of ATP puckers in two alternate conformations, consistent with the small number of direct protein contacts to the O2' of ATP (Fig. 1j). These conformations are consistent with C2' exo and C2' endo low-energy ATP ribose conformations 29 . In addition, two ethylene glycol molecules from the cryoprotectant solution could be modeled in each packing-unit interface. The ethylene glycol was introduced after the crystals formed and bound to Arg-83 and Arg-278, next to the nucleotide-binding site. As these crystals formed only when the pH was less than or equal to 6.5, we hypothesized that PilT Gm may not have ATPase activity at acidic pH. When we assayed PilT Gm ATPase activity over a broad range of pH values, we could not measure ATPase activity below pH 6.5 (Fig. 1i). H230 of the HIS-box motif in PilT is predicted by Rosetta 30 to have a pKa of 6.5; thus, H230 deprotonation might be important for efficient ATP hydrolysis. The corresponding histidine in PilB coordinates the nucleotide ɣ-phosphate, but in the CCCCCC PilT structure, H230 faces away from the ɣ-phosphate in each of the nucleotide-binding sites. We propose that the protonation state of H230 affects its preferred rotamer and thus the catalytic activity of PilT. Since H230 is conserved in all PilT/VirB11-like family members 18 , this pH dependency for activity may be a conserved feature.
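If the predicted pKa of 6.5 for H230 is taken at face value, the Henderson-Hasselbalch relation illustrates how sharply the deprotonated (by this hypothesis, catalytically competent) fraction falls below that pH. This is an illustrative calculation, not an analysis from the paper; only the pKa of 6.5 comes from the text above:

```python
# Fraction of a titratable group that is deprotonated at a given pH,
# from the Henderson-Hasselbalch relation. The default pKa of 6.5 is
# the Rosetta prediction quoted above; the rest is illustrative.
def frac_deprotonated(pH, pKa=6.5):
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (5.5, 6.5, 7.5, 8.5):
    print(f"pH {pH}: {frac_deprotonated(pH):.2f} deprotonated")
```

At pH 5.5 only ~9% of H230 would be deprotonated, versus ~91% at pH 7.5, qualitatively consistent with the loss of measurable ATPase activity at acidic pH if the deprotonated rotamer is the one that supports hydrolysis.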
PilT Gm also crystallizes in a PilB-like conformation. To determine what conformation PilT Gm adopts above pH 6.5 with a non-hydrolyzable ATP analog, we also crystallized PilT Gm with Mg 2+ and ANP (adenylyl-imidodiphosphate or AMP-PNP) at pH 8. Unlike previous PilT Gm crystals that formed after 16 h and were stable for weeks, these crystals took a week to form and stayed crystalline for 2 days before dissolving. These crystals diffracted anisotropically to 4.1-, 6.7-, and 4.0-Å resolution along the a*, b*, and c* reciprocal lattice vectors, respectively. The structure was solved by molecular replacement, and a hexamer was present in the asymmetric unit (Fig. 1j). Despite the low resolution, density consistent with ANP could be modeled into four of the six nucleotide-binding sites (Fig. 1k). The density in the other two sites was consistent with ADP. It is possible, given the slow and transient crystallization, that ANP partially hydrolyzed, yielding a transient ADP/ANP mixture that facilitated formation of these particular crystals. In this case, decay to ADP is likely noncatalytic. This structure of PilT Gm is C 2 symmetric but distinct from that of the OOCOOC PilT Aa structure 22 (RMSD Cα 6.8 Å/hexamer). Surprisingly, the pattern of open and closed interfaces between packing units was PilB-like: CCOCCO.
ATP but not ADP binding correlates with closed interfaces. As PilT Gm crystallized in multiple conformations and the resolution and quality of the electron density were sufficient to resolve the bound nucleotides, we looked for correlations between ADP or ATP/ATP-analog binding and the open or closed interfaces. In the ADP-bound OCOCOC PilT Gm structures, the estimated occupancy of ADP was similar in the open and closed interfaces (Fig. 1d, f). In the CCCCCC PilT Gm structure, all interfaces are bound to ATP (Fig. 1j). In the CCOCCO PilT Gm structure, the four closed interfaces are bound to the ATP analog ANP, while the open interfaces appear to be bound to ADP (Fig. 1h). Thus, there is a correlation between bound ATP (or ATP analog) and closed interfaces, suggesting that ATP facilitates closure of interfaces in PilT Gm . In contrast, there is no correlation between bound ADP and open or closed interfaces, suggesting that both open and closed interfaces in PilT may have a similar affinity for ADP, and that ADP may be insufficient to induce or maintain closure. This scenario contrasts with PilB Gm structures, where ADP is correlated with closed interfaces 18 .
Conserved interactions facilitate open or closed interfaces. We established previously that in the closed interface of PilB Gm , T411 contacts H420, and in the open interface, R455 contacts T411 (ref. 18 ). To comprehensively define the interfaces in PilT Gm , we graphically plotted the open- and closed-interface contacts of the PilT Gm crystal structures, as well as previously published PilT structures (Fig. 2). For this analysis, a contact was defined as any atom (main chain or side chain) of a residue within 4 Å of any atom from another residue. To compensate for imperfect rotamers in low-resolution structures, the definition of contacts was expanded to also include main-chain atoms in one residue within 8 Å of a main-chain atom from another residue. Despite disparate hexameric conformations, the PilT structures make fairly consistent open- and closed-interface contacts. Residues in the closed interface that contact one another include P271-D32, G277-D32, H230-T221, T221-T133, R195-D161, and D164-R81. P271-D32 and H230-T221 are specific to the closed interface. Residues in the open interface that consistently contact one another include G339-Y259, G180-Q59, L269-T221, R195-T133, T221-T133, E178-S73, I163-S73, and D164-R81. G339-Y259, G180-Q59, and L269-T221 are specific to the open interface. Many of these residues are part of the conserved HIS (T221 through H230) and ASP (E160 through E164) box motifs 31,32 . Involvement in the open and closed interfaces explains why several residues in these motifs are conserved, even though only H230 and E164 contact ATP or magnesium, respectively 18,22 . The contacts observed in the open and closed interfaces are similar to those found in the PilB structure: T221 contacts H230 in the closed interface (T411 and H420 in PilB Gm ), while T221 contacts L269 (T411 and R455 in PilB Gm ) in the open interface.
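The two-tier contact definition above can be expressed directly. A minimal sketch, with a hypothetical coordinate layout and toy residues rather than the authors' actual analysis code:

```python
from itertools import product

# Sketch of the contact definition above: a residue is a dict mapping
# atom name -> (x, y, z) coordinates in Angstroms; main-chain atoms
# are N, CA, C, O. (Hypothetical data layout for illustration.)
MAIN_CHAIN = {"N", "CA", "C", "O"}

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def in_contact(res_a, res_b, any_atom_cutoff=4.0, main_chain_cutoff=8.0):
    """True if any atom pair is within 4 A, or -- to compensate for
    imperfect rotamers in low-resolution structures -- any
    main-chain atom pair is within 8 A."""
    for (na, xa), (nb, xb) in product(res_a.items(), res_b.items()):
        d = dist(xa, xb)
        if d <= any_atom_cutoff:
            return True
        if na in MAIN_CHAIN and nb in MAIN_CHAIN and d <= main_chain_cutoff:
            return True
    return False

# Toy case: no atom pair within 4 A, but backbone N-N is 7 A apart,
# so the relaxed main-chain criterion still scores a contact.
res1 = {"N": (0.0, 0.0, 0.0), "CA": (1.5, 0.0, 0.0), "CB": (2.5, 1.0, 0.0)}
res2 = {"N": (7.0, 0.0, 0.0), "CA": (8.5, 0.0, 0.0), "CB": (7.5, 5.0, 0.0)}
print(in_contact(res1, res2))  # True
```

In practice this pairwise test would be run over all residue pairs across a packing-unit interface to build contact maps like those in Fig. 2.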
Based on this information, closed and open interfaces can easily be distinguished by measuring the Cα distance between the interchain residues T221-H230 and T221-L269. In the closed interfaces of PilT Gm structures, the mean Cα distances for T221-H230 and T221-L269 are 5.7 ± 0.3 Å (mean ± SD) and 13.7 ± 0.8 Å, respectively. In the open interfaces of PilT Gm structures, the mean Cα distances for T221-H230 and T221-L269 are 12.7 ± 1 Å and 8.6 ± 0.4 Å, respectively. The T221-H230 and T221-L269 distances are shorter in the closed and open interfaces, respectively, permitting easy interface classification.
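Since whichever of the two distances is shorter identifies the interface type, the classification reduces to a single comparison per interface. A sketch of this rule (function names and the decision rule's exact form are illustrative, not from the paper):

```python
# Distance-based interface classification sketched from the rule above.
# Inputs are Ca-Ca distances (Angstroms) across a packing-unit
# interface: T221-H230 and T221-L269 in PilT_Gm numbering.
def classify_interface(d_t221_h230, d_t221_l269):
    # closed: T221-H230 ~5.7 A, T221-L269 ~13.7 A
    # open:   T221-H230 ~12.7 A, T221-L269 ~8.6 A
    return "closed" if d_t221_h230 < d_t221_l269 else "open"

# The six interfaces of a hexamer then yield its conformation string:
def conformation(interfaces):
    return "".join("C" if classify_interface(*d) == "closed" else "O"
                   for d in interfaces)

# Example with the reported mean distances, alternating around the ring:
hexamer = [(12.7, 8.6), (5.7, 13.7)] * 3
print(conformation(hexamer))  # OCOCOC
```

Applied to all six interfaces in turn, this yields conformation strings such as OCOCOC or CCCCCC directly from coordinates.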
Characterization of the open and closed interfaces in all PilT/VirB11-like family member structures allowed most to be placed into one of five different states based on their conformation: CCOCCO, OCOCOC, CCCCCC, OOOOOO, or OOCOOC (Fig. 3). These states reflect all possible arrangements of open and closed interfaces that maintain rotational symmetry. A GspE structure (PDB 4KSS) and all VirB11 crystal structures (PDB 2PT7, 2GZA, 1NLY, 1NLZ, and 1OPX) are CCCCCC (Fig. 3a). In addition to the ATP-bound PilT Gm structure described herein, the PilT structures from Pseudomonas aeruginosa (PilT Pa ) are also in the CCCCCC conformation (PDB 3JVV and 3JVU) (Fig. 3a). The three OCOCOC PilT structures described here are the only examples of PilT in this conformation determined to date (Fig. 3b). A FlaI and a DotB structure (PDB 4II7 and 6GEB, respectively), as well as two archaeal GspE2 structures (2OAP and 2OAQ), have OCOCOC conformations (Fig. 3b). Other GspE, FlaI, and DotB structures (PDB 4KSR, 4IHQ, and 6GEF) fall into the CCOCCO class (Fig. 3c). All available PilB structures are CCOCCO (Fig. 3c). Our CCOCCO PilT Gm structure is the only example of PilT in this conformation to date (Fig. 3c). PilT Aa is the only example of the OOCOOC conformational state (PDB 2GSZ) (Fig. 3d). Similarly, only PilT4 from G. sulfurreducens (PilT Gs ) exhibits an OOOOOO conformation (Fig. 3e). This classification scheme suggests that PilT and PilT/VirB11-like family member crystal structures have a high fidelity for open or closed interfaces and rotational symmetry.
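The claim that these five states exhaust the rotationally symmetric arrangements of open and closed interfaces can be checked with a short enumeration. This is a verification sketch, not part of the paper's analysis:

```python
from itertools import product

# Enumerate all 2^6 strings of open (O) / closed (C) interfaces around
# a hexamer, keep those fixed by some non-trivial rotation, and group
# rotationally equivalent strings into classes.
def rotations(s):
    return {s[i:] + s[:i] for i in range(len(s))}

symmetric_classes = set()
for pattern in product("OC", repeat=6):
    s = "".join(pattern)
    if any(s[i:] + s[:i] == s for i in range(1, 6)):
        # use the lexicographically smallest rotation as representative
        symmetric_classes.add(min(rotations(s)))

print(len(symmetric_classes))  # 5
```

Up to rotation, the five classes are exactly CCCCCC, OOOOOO, OCOCOC, CCOCCO, and OOCOOC, matching the states named above.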
Cross-referencing PilT structural classification with packing-unit interface contacts (Fig. 2) revealed that there are contacts unique to some conformational states. For example, the G339-R124 and G275-R124 contacts are specific to the CCCCCC and OOOOOO PilT conformational states, respectively (Fig. 2b, e). There are many contacts unique to the OOCOOC PilT structure (Fig. 2d), though this may be a result of the evolutionary distance of PilT Aa from those of Proteobacteria. There are no contacts unique to the CCOCCO and OCOCOC PilT Gm structures (Fig. 2a, c). PilT Aa was crystallized in a distinct C 6 symmetric conformation bound to ATP (PDB 2EWW) or ADP (PDB 2EWV) (Fig. 3f). The two structures can be considered isomorphic (RMSD Cα 0.6 Å/hexamer) and are distinct from other PilT/VirB11-like family member structures. In these structures, there is almost no interface between packing units, and they appear to be held in place by crystal contacts, suggesting that this conformation may be uncommon outside of a crystal lattice. The distances of T221-H230 and T221-L269 are atypically large (11 and 14 Å, respectively) (Supplementary Table 1). These distances suggest that the packing units adopt neither a closed nor an open interface, and thus we refer to this arrangement as an X-interface. Thus, these C 6 symmetric PilT Aa structures represent a distinct conformational state: XXXXXX. The H242-R229 salt bridge is a major constituent of the X-interface; these residues are H230 and T217, respectively, in PilT Gm (Fig. 2f). As R229 in PilT Aa is not conserved in PilT Gm , the X-interface or XXXXXX conformation may not be critical to function. Supporting this hypothesis, it was suggested that these XXXXXX PilT Aa structures are not in a conformation that could facilitate ATP hydrolysis 22 .
Residues in the closed interface are evolutionarily coupled. One explanation for the perceived heterogeneity in structures of PilT and PilT/VirB11-like family members is that the proteins are conformationally heterogeneous and the crystallization process selects for just one of many possible conformations. To probe this possibility, we used the EVcouplings server 33,34 to identify residues that coevolve in PilT (Fig. 2g). Residues that contact one another, and are needed for biological function, tend to be evolutionarily coupled, and therefore this analysis can be used to independently validate structural analyses 33,34 .
The evolutionarily coupled residues of PilT were consistent with its tertiary structure contacts, as well as the contacts that support the packing unit. There were also evolutionarily coupled residues clustered around G277-D32, P271-D32, H230-T221, T221-T133, R195-T133, and G339-R124 consistent with the closed interface, and I163-S73 consistent with the open or closed interface. This analysis unambiguously demonstrates phylogenetic conservation of the closed interface across PilT orthologs. The G339-R124 contact is specific to the CCCCCC PilT Gm structure, specifically validating the biological relevance of this conformation.
The quality of sequence alignments is worse near the C terminus of PilT, and thus the power to find evolutionarily conserved residues is lower at the C terminus. Also, some open- and closed-interface contacts overlap graphically with tertiary or packing-unit-forming contacts, obscuring their identification in this type of analysis. Thus, while this analysis validates the CCCCCC PilT structure and the closed interface, other noncrystallographic techniques are required to test the biological relevance of particular conformations.
2D cryoEM of PilT Gm reveals conformational heterogeneity.
To examine what conformation(s) PilT can adopt in a noncrystalline environment, we used cryoEM. We found that PilT Gm adopted preferred orientations on the EM grid with its symmetry axis normal to the air-water interface (top views), preventing calculation of 3D maps from untilted images. Nonetheless, comparison of the shape, size, and internal features of the 2D class average images with projections of PilT crystal structures (Fig. 4a) allowed for unambiguous assignment of 2D class averages into one of the six defined PilT conformational states. In the absence of nucleotide, the 2D class averages of PilT Gm were a mixture of the OCOCOC and OOCOOC conformational classes in a 45:55 ratio (Fig. 4a).
In the presence of 0.1 mM ADP and 1 mM ADP, the ratio of the particles in the OCOCOC and OOCOOC states shifted from 45:55 to 54:46 and 71:29, respectively (Supplementary Fig. 2). Thus, adding ADP favors the OCOCOC conformation, consistent with crystallographic OCOCOC structures of PilT Gm with ADP bound in every interface. There may be an unoccupied nucleotide-binding site available in the OOCOOC conformation that upon binding ADP converts to the OCOCOC conformation. This hypothetical unoccupied nucleotide-binding site might prefer to bind a different nucleotide such as ATP, since mM quantities of ADP were required to favor the OCOCOC conformation.
PilT Pa also has conformational heterogeneity. PilT Pa has been crystallized in the CCCCCC conformation (Fig. 3a) both in the absence of nucleotide and in the presence of an ATP analog 24 . We hypothesized that purified PilT Pa may also exhibit the OCOCOC or OOCOOC conformation in the absence of nucleotide in a noncrystalline environment. Thus, cryoEM analysis was performed with purified PilT Pa . Analysis of the 2D class averages was, like that of PilT Gm , consistent with the OCOCOC and OOCOOC conformations (Fig. 4a). The particle distribution ratio between the OCOCOC and OOCOOC class averages was 63:37.
3D cryoEM validates coexisting OOCOOC and OCOCOC structures. To confirm our interpretation of the 2D class averages, we tilted the specimen 35 to obtain sufficient views of the hexamers to calculate 3D maps. CryoEM specimens of PilT Gm without nucleotide were tilted by 40° during data collection. Using heterogeneous refinement in cryoSPARC 36 and per-particle determination of contrast transfer function parameters 37 , two distinct maps could be obtained from the data without enforcing symmetry: one ~C 3 map and one ~C 2 map, both at ~4.4-Å resolution. The particle distribution ratio was 48:52 between the C 3 and C 2 maps, respectively, similar to that obtained by 2D classification, validating the 2D conformation assignments. Applying their respective symmetries during refinement yielded 4.0- and 4.1-Å resolution maps, respectively (Fig. 4c, d).

Fig. 2 Contact maps for packing-unit interfaces reveal similarities between distinct PilT conformations. Inter- and intra-chain contacts between packing units are color coded based on closed (red) or open (blue) interface; the X-interface is shown in gray. For reference, the linear domain architecture of individual packing units is shown as box cartoons beside the axes, with each chain shaded distinctly. To minimize confusion when comparing between species, the residue labels represent the corresponding residue in PilT Gm . a OCOCOC conformation PilT Gm crystal structure (PDB 6OJY). b CCCCCC conformation PilT Gm crystal structure (PDB 6OJX). c CCOCCO conformation PilT Gm crystal structure (PDB 6OKV). d OOCOOC conformation PilT Aa crystal structure (PDB 2GSZ). e OOOOOO conformation PilT Gs crystal structure (PDB 5ZFQ). f XXXXXX conformation PilT Aa crystal structure (PDB 2EWV). g Evolutionarily coupled residues in PilT (light gray) are compared with the structure of PilT Gm . Tertiary structure contacts (not including intra-chain interactions between packing units) are noted in black. N2D n to CTD n+1 contacts (that create individual packing units) are noted in brown. Open- or closed-interface contacts identified in (a-f) are labeled if they overlap with evolutionarily conserved residue pairs that are not clearly accounted for by the tertiary structure or N2D n to CTD n+1 contacts. h OOCOOC conformation PilT Gm cryoEM structure (PDB 6OLL).
Molecular models could be built into these maps by fitting and refining rigid packing units of the PilT Gm crystal structures (Fig. 4c, d). Assessment of local resolution suggests that the non-surface-exposed portions of the map are at higher resolution than the rest of the complex (Supplementary Fig. 3). No density consistent with nucleotide was identified in these structures; presumably any nucleotide carried over from the E. coli expression system is present at too low an occupancy to be observed. The model built into the C 3 symmetric map is consistent with the OCOCOC PilT structure (RMSD Cα 1.3 Å/hexamer). Before symmetry was applied to this map, it more closely matched the methylated C 3 symmetric structure of PilT than the pseudo-C 3 symmetric structures, suggesting that the slight asymmetry of the latter is a crystallographic artifact. The model built into the C 2 symmetric map was not consistent with any PilT Gm crystal structure. Annotation of its packing-unit interfaces revealed that it has an OOCOOC conformation, consistent with the PilT Aa crystal structure (Supplementary Table 1). Thus, the cryoEM structures confirm that the OOCOOC and OCOCOC conformations observed for PilT Aa and PilT Gm , respectively, were not crystal artifacts. Further, these maps suggest that available crystal structures have oversimplified our view of PilT/VirB11-like family members, as they do not capture the multiple stable conformations accessible in a given condition.
While the OOCOOC PilT Gm cryoEM structure validates the conformation of the OOCOOC PilT Aa crystal structure, the two are distinct (RMSD Cα of 6.4 Å/hexamer), consistent with the evolutionary distance between species. Analyzing the packing-unit interfaces of the OOCOOC PilT Gm cryoEM structure reveals that they are nearly identical to the interfaces in the PilT Gm CCOCCO and OCOCOC crystal structures (Fig. 2h).
CryoEM of PilT Gm with ATP reveals CCCCCC conformation. Since PilT hydrolyzes ATP slowly and cryoEM samples can be frozen within minutes of sample preparation, we opted to determine the conformation of PilT Gm incubated briefly with ATP. In these conditions, the top-view 2D class averages of PilT Gm corresponded only to the CCCCCC conformational class, consistent with the ATP-bound CCCCCC PilT crystal structure (Fig. 4a). A small minority of 2D class averages appeared to be tilted or stacked side views, permitting 3D map construction. Only one map with ~C 6 symmetry could be constructed, and applying C 6 symmetry during refinement resulted in a 4.4-Å resolution map (Fig. 4e). The molecular model built from this map is consistent with the CCCCCC crystal structure (RMSD Cα 0.6 Å), and density in the nucleotide-binding sites is consistent with ATP (Fig. 4e).
In an attempt to reproduce the conditions that we postulated led to the CCOCCO PilT crystal structure, PilT Gm was incubated with mixtures of ATP and ADP. In the presence of 1 mM ATP and ADP, or 1 mM ATP and 0.1 mM ADP, only class averages consistent with the CCCCCC conformation could be identified (Supplementary Fig. 2). This analysis does not support the reproducibility of the CCOCCO PilT conformation in a noncrystalline environment, nor the OOOOOO or XXXXXX PilT conformations, which were not identified in any condition.
The cryoEM experiments suggest that in the absence of its protein-binding partners in vitro, at approximately physiological ATP and ADP concentrations, PilT Gm is predominantly found in the CCCCCC conformation.
CryoEM analysis of PilB Gm consistent with CCOCCO conformation. During the course of our studies, an 8-Å cryoEM structure of PilB from T. thermophilus (PilB Tt ) was published that revealed a CCOCCO conformation in a noncrystalline environment 27 . No conformational heterogeneity was reported 27 . To determine whether this homogeneity was observed in a Proteobacterial PilB, we performed cryoEM analysis of PilB from G. metallireducens (PilB Gm ) in the absence of nucleotide. Projection of the PilB CCOCCO crystal structure revealed that the PilB Gm top-view 2D class averages were consistent with the CCOCCO conformation (Fig. 4b). A 3D map was calculated at ~7.8-Å resolution (Fig. 4f), and the model built into this map is also consistent with the CCOCCO PilB structure (RMSD Cα 2.3 Å/hexamer, PDB 5TSG). Thus, cryoEM analysis reveals that PilB preferentially adopts the CCOCCO conformation in multiple species. This is in contrast to PilT Gm and PilT Pa , which both adopt OOCOOC and OCOCOC conformations in similar conditions. These results show that the preferred conformation(s) are conserved within, but not between, PilT/VirB11-like subfamilies, consistent with distinct conformational preferences facilitating PilB-like or PilT-like functions. It should be noted that the N-terminal domain of PilB Gm , known as the N1D, MshEN, or GSPII domain, was not observed in the 3D map, although its presence was confirmed by trypsin digest followed by mass spectrometry (98% coverage from the His tag to the C terminus). It may be that in the absence of its binding partners, this domain is disordered relative to the core motor domains of PilB Gm .
CCCCCC and OOCOOC or OCOCOC conformations are essential. The cryoEM structures of PilT Gm and PilB Gm were determined in the absence of other components of the T4aP system. To explore the functional importance of these PilT conformations in vivo, we introduced mutations targeting the packing-unit interface into P. aeruginosa, a model organism for studying PilT function. From our analysis of key contacts (Fig. 2), we mutated residues predicted to alter packing-unit interfaces and thus overall conformations. As controls, we mutated the catalytic glutamate (E204A) and the ɣ-phosphate coordinating HIS-box histidine (H229A), which eliminates twitching motility in P. aeruginosa 11 . E204A and H229A mutants lost twitching motility and accumulated extracellular PilA, while E204A also led to PO4 phage resistance, consistent with a retraction defect (Fig. 5a).
In PilT Gm structures, residue R240 participates in the closed interface; E220, D32, and D243 are at both open and closed interfaces; and F259, R296, and Q59 are at the open interface (Fig. 2). In PilT Pa these correspond to R239, E219, D31, D242, E258, R294, and K58, respectively. Mutation of most of these residues abrogated twitching motility, increased extracellular PilA, and led to PO4 phage resistance (Fig. 5a).

Fig. 3 All PilT/VirB11-like family member structures can be divided into one of six unique conformations. Structures are shown as cartoons with individual packing units (N2D n plus CTD n+1 ) uniquely colored. Black spheres, the ɑ-carbons of the residues that align with T221 from PilT Gm . Red spheres, the ɑ-carbons of the residues that align with H230 from PilT Gm . Blue spheres, the ɑ-carbons of the residues that align with L269 from PilT Gm . Top, block cartoons of PilT/VirB11-like family hexamer conformations. a-f To highlight similarities between conformations, structural elements (such as domains, ɑ-helices, or β-sheets) that are not well conserved across all PilT/VirB11-like family members are shown as thin white ribbons. The hexamers are labeled with their protein name, followed by the species of origin, followed by their PDB identifier, and finally the PDB identifiers of similar structures with the range of RMSD Cɑ (over the full hexamer) of these structures aligned with the shown structure. The interface between packing units is annotated as determined using the T221, H230, and L269 distances in Supplementary Table 1. a PilT/VirB11-like proteins with the CCCCCC conformation. b PilT/VirB11-like proteins with the OCOCOC conformation. c PilT/VirB11-like proteins with the CCOCCO conformation. d The only PilT/VirB11-like protein with the OOCOOC conformation. e The only PilT/VirB11-like protein with the OOOOOO conformation. f The only PilT/VirB11-like protein with the XXXXXX conformation.
These data suggest that the XXXXXX PilT conformation, lacking open or closed interfaces, is insufficient for PilT function. Likewise, these experiments imply that neither the CCCCCC nor the OOOOOO PilT conformation, lacking open or closed interfaces, respectively, is sufficient for PilT function. The E219K and D31K mutants had reduced stability or expression, complicating their interpretation (Fig. 5a). The side chains of these residues participate in the closed interface but not the open interface. Curiously, despite the instability of D31K and the corresponding accumulation of PilA, D31K had significantly increased twitching motility and was partially susceptible to PO4 phage infection (Fig. 5a). One interpretation of this phenotype is that PilT misfolded in most bacteria expressing the D31K mutant, leading to accumulation of extracellular PilA, but in a subpopulation of bacteria the D31K mutant protein folded properly and unexpectedly facilitated increased twitching motility. Alternatively, it is possible that the D31K mutation reduced binding of the antibody used to detect PilT. The R294E mutation, predicted to decrease the stability of the open interface, decreased twitching motility, pilin accumulation, and phage resistance, consistent with a pilus depolymerization defect. In contrast, the E258A mutation, also predicted to decrease the stability of the open interface, decreased twitching motility but allowed approximately wild-type levels of pilin accumulation and phage susceptibility, consistent with a pilus polymerization defect or, more plausibly, a moderate defect in PilT function that did not cause an obvious overabundance of extracellular pilin. The K58Q mutant of PilT Pa had wild-type twitching motility and extracellular pilin accumulation, and was sensitive to PO4 phage, consistent with this residue being glutamine in wild-type PilT Gm .
Surprisingly, the K58A mutant had twofold increased twitching motility and decreased levels of extracellular PilA indicative of hyperretraction, while the conservative K58R mutation reduced twitching motility (Fig. 5a).
Mutations were also introduced at residues that are important for particular conformations. The R124 residue in PilT Gm stabilizes the CCCCCC and OOOOOO conformations, as it forms a salt bridge with the backbone carbonyl of G339 and G275, respectively (Fig. 2). R124 in PilT Gm aligns with R123 in PilT Pa . The R123D mutation in PilT Pa eliminated twitching motility, prevented phage infection, and led to accumulation of extracellular pilins, consistent with a retraction defect (Fig. 5a). This result suggests that either the CCCCCC or OOOOOO PilT conformation is essential for retraction. In the OOOOOO conformation, T217 forms a polar interaction with H230 (Fig. 2), and its mutation to arginine is predicted to eliminate that conformation. T217 in PilT Gm aligns with T216 in PilT Pa . The T216R mutant of PilT Pa had slightly decreased twitching motility, and wild-type pilin accumulation and phage infection (Fig. 5a).

CCCCCC and the open interface are necessary for binding PilC. We hypothesized that the observed retraction defects reflected the inability of some PilT mutants to adopt a conformation compatible with PilC binding. To test this hypothesis, we used bacterial two-hybrid analysis (BACTH) to quantify the interaction between PilT mutants and PilC (Fig. 5b). BACTH has been used previously to demonstrate an interaction between PilT and PilC 38 . Each PilT mutant was capable of homomeric interactions consistent with correct protein folding, with the exception of the D31K mutant, consistent with its putative stability defect (Fig. 5b). The open-interface-targeting R294E mutant and CCCCCC-targeting R123D mutant had reduced PilC interactions (Fig. 5b). These residues are not in the pore of PilT and thus are unlikely to be important for directly contacting PilC, although confirmation of this hypothesis awaits a PilT-PilC co-structure. Accordingly, these results are consistent with CCCCCC and an open-interface-containing conformation being important for binding PilC.
Discussion
Here, we demonstrate the conformational heterogeneity of PilT. This protein can adopt conformations consistent with all PilT/VirB11-like family member conformational states defined here. We present several unique PilT Gm crystal structures and demonstrate that multiple conformations of PilT Gm and PilT Pa coexist in solution. We show that specific PilT conformations are important for in vivo function and interaction with PilC. By extrapolation, these findings have major ramifications for the interpretation of other PilT/VirB11-like crystal structures, which have individually been used to suggest idiosyncratic molecular mechanisms. Based on our ability to clearly categorize all PilT/VirB11-like family members, we predict that PilT/VirB11-like family members of T4P-like systems operate with a common mechanism. The T4SS lacks a PilC-like inner-membrane platform protein, so VirB11 or DotB may have distinct mechanisms. This study unifies the structural description and analyses across PilT/VirB11-like family members. We found that the inter-chain distances between T221-H230 and T221-L269 can be used to easily and quantitatively define open and closed interfaces. This simple definition enables the conformational state of the hexamer to be determined, and would easily be missed if only individual chains were annotated. Given the conservation of these inter-chain distances, we predict that these residues have functional significance. The catalytic glutamate E204 is thought to abstract a proton from a water molecule for subsequent hydroxyl nucleophilic attack of the ɣ-phosphate of ATP 18 . Given its location adjacent to E204, H230 may then abstract this proton and shuttle it to T221 in the closed interface. From T221, the proton could be passed directly or indirectly via T133 to the recently hydrolyzed inorganic phosphate. The requirement for T221 from an adjacent packing unit for proton shuttling would prevent efficient ATP hydrolysis in the open interface prior to its closure.
Consistent with this proposed mechanism, we found that PilT Gm ATPase activity is pH sensitive in the range consistent with histidine protonation, and that in PilT Gm structures H230 faces away from the nucleotide-binding site in most open interfaces but toward the nucleotide-binding site in most closed interfaces.
Our structure, the highest-resolution crystal structure of a hexameric PilT/VirB11-like family member determined to date, also revealed that the ribose moiety of ATP can adopt multiple conformations due to the lack of interactions with its O2' hydroxyl. It is thus not surprising that ATP analogs with fluorophores attached at the O2' position bind PilT/VirB11-like family members 39,40 . We also found two ethylene glycol molecules in the packing-unit interface, suggesting that rationally designed small molecules could target this interface. Targeting the nucleotide-binding site or packing-unit interface to inhibit ATPase activity in T4P-like systems may have therapeutic value, as these systems play a major role in virulence for many pathogens. Indeed, there are two recent reports of small-molecule inhibitors of the Neisseria meningitidis T4aP that target pilus polymerization and depolymerization dynamics to reduce virulence 41,42 . One of these drugs targets PilB directly 41 , although whether this drug binds the nucleotide-binding site or packing-unit interface is not yet clear.
Our cryoEM maps establish the coexistence of both OOCOOC and OCOCOC conformational classes of PilT in the absence of a nucleotide or with added ADP. We also identified the CCCCCC conformation in the presence of ATP or approximately physiological concentrations of ATP and ADP. Based on these analyses, we propose a model explaining how the conformation of PilT (and probably other PilT/VirB11-like family members) changes in vitro depending on the nucleotides present (Fig. 6b). We also showed by cryoEM analysis, in accordance with the recently determined PilB Tt structure 27 , that PilB Gm uniquely adopts the CCOCCO conformation in a noncrystalline state. Although the structures of these proteins were determined in isolation, the conformations observed are likely biologically relevant, as PilB and PilT are only intermittently engaged with PilC and the T4aP machinery. No core T4aP proteins are unstable in the absence of PilB or PilT 43 , cryoET analysis of the T4aP in M. xanthus shows a significant portion of T4aP systems without attached PilT/VirB11-like family members 14 , and PilB and PilT migrate dynamically in some bacteria while the core T4aP proteins are anchored in the cell envelope 44,45 . Thus, we propose that at physiological ATP and ADP concentrations, when it is not engaged with the T4aP, PilT preferentially adopts the CCCCCC conformation.
Our mutational and coevolution analyses support the in vivo importance of the CCCCCC conformation. From BACTH data, the CCCCCC-targeting R123D PilT mutation impairs PilC interaction. Given this analysis supporting its importance, it was initially surprising that the closed-interface-destabilizing D31K mutation and the open-interface-stabilizing K58A mutation promote hypertwitching. These results suggest that the active form of PilT is likely to contain open interfaces. We propose that after binding to PilC in the CCCCCC conformation, PilT converts to an open-interface-containing conformation to power pilus depolymerization (Fig. 6a). Our data suggest that the only open-interface-containing PilT conformations found in solution are OOCOOC and OCOCOC. In the absence of a co-structure of PilT and PilC, the specific motor conformation of PilT remains unclear. PilC is proposed to be a dimer in vivo 15 and PilC-like proteins have crystallized with C 2 and asymmetric-pseudo-C 2 symmetry 46,47 ; thus, when PilT binds PilC, we anticipate that the interaction would induce C 2 symmetry in PilT. Since the OOCOOC conformation of PilT is the only C 2 symmetric conformation of PilT found in solution, we propose that the active motor conformation is the OOCOOC conformation. Adopting other conformations in the presence of ATP may be a strategy to limit unnecessary hydrolysis in the absence of other T4aP proteins, and could explain why the activity of PilT/VirB11-like family members is notoriously low in vitro 11,39 . In our previous model of OOCOOC PilT function, we suggested that binding of two ATPs to opposite open interfaces caused them to close 18 . This is consistent with our observations that bound ATP correlates with the closed interface. Closure of two interfaces was predicted to open the neighboring closed interfaces to enable release of ADP 18 . Our data herein suggest that ADP may not be released immediately. 
Unlike in PilB Gm structures 18 , ADP was found in both the open and closed interfaces in PilT Gm structures. This affinity of the open interface for ADP is consistent with a model in which PilT temporarily retains ADP for an additional round of ATP hydrolysis after the closed interface opens (Fig. 6c). Such a mechanism would parallel that of PilB 18 despite its different patterns of open and closed interfaces. The consequence of this nuance would be that for PilB and PilT, only two open interfaces would be available at any one time for binding ATP. This scenario would commit PilT to a single direction of ATP binding and hydrolysis and thus a single direction of pore rotation. Consistent with the distinction between CCOCCO and OOCOOC conformations promoting PilB-like and PilT-like functions, respectively, we note that while PilB adopted the CCOCCO conformation in solution here and elsewhere 27 , the equivalent C 2 symmetric conformation adopted by PilT Gm and PilT Pa is the OOCOOC conformation. Thus, the preferred conformations of PilB and PilT in solution correlate with function. Given that neither PilT Gm nor PilT Pa crystallized in the OOCOOC conformation, individual PilT/VirB11-like crystal structures should be interpreted with caution in the absence of accompanying cryoEM analysis.
In addition to the OOCOOC conformation, PilT is also found in the C 3 symmetric OCOCOC conformation in solution. The function of this conformation remains unclear. A motor that rotates a substrate protein while switching between C 2 and C 3 symmetries during its catalytic cycle would be unprecedented. Judging by the similarity of the open and closed interfaces across conformations, it may have been prohibitively difficult during evolution to stabilize the OOCOOC conformation without also stabilizing other conformations. Perhaps evolution favored the relative stability of open versus closed interfaces rather than particular hexamer conformations. Thus, the relevant difference between a retraction and extension ATPase may be the relative stabilities of their open and closed interfaces.
Although the CCOCCO PilT conformation was not observed in solution, our finding that a single PilT ortholog can adopt both CCOCCO and OOCOOC conformations may be critical for understanding PilT/VirB11-like ATPase function and evolution. This finding suggests that PilT Gm , and potentially other PilT/VirB11-like family members, could have the capacity to switch between OOCOOC-powered counterclockwise pore rotation (i.e., pilin depolymerization) and CCOCCO-powered clockwise pore rotation (i.e., pilin polymerization), blurring the line between extension and retraction ATPases. Indeed, PilT is inexplicably essential for T4aP pilin polymerization in Francisella tularensis 48 . Similarly, some T4P-like systems, including the T2S, T4bP, T4cP pilus, and even some T4aP systems, have been shown to retract their filaments in the absence of a dedicated retraction ATPase or in PilT-deleted backgrounds [49][50][51] . A similar conformational switch could also explain how FlaI switches between clockwise and counterclockwise archaellum rotation 52 . Such a switch might easily be regulated by post-translational modifications or alternate partner-protein interactions that modulate the relative stability of open versus closed interfaces. Indeed, evidence emerged during the completion of this paper that the single PilT/VirB11-like family member from the Caulobacter T4cP system powers both pilus polymerization and depolymerization 53 . It may be that the last common ancestor of the PilT/VirB11-like family catalyzed both clockwise and counterclockwise rotation, facilitating both pilus polymerization and depolymerization, and only more recently have PilB and PilT specialized to perform separate functions.
Expression and purification. E. coli BL21-CodonPlus ® cells (Stratagene, Supplementary Table 2) were transformed with pET28a:PilT Gm , pET28a:PilT Pa , or pET28a:PilB Gm and grown in 4 L of lysogeny broth (LB) with 100 µg/ml kanamycin at 37°C to an A 600 of 0.5-0.6, then protein expression was induced by the addition of isopropyl β-D-1-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and the cells were grown for 16 h at 18°C. Cells were pelleted by centrifugation at 9000 × g for 15 min. Cell pellets were subsequently resuspended in 40 ml of binding buffer (50 mM Tris-HCl, pH 7, 150 mM NaCl, and 15 mM imidazole). Subsequent to crystallization of methylated OCOCOC PilT, the buffer was optimized to improve PilT Gm thermostability: the pH was increased, HEPES was used instead of Tris, the concentration of NaCl was increased, and glycerol was added; this optimized buffer was used hereafter as the binding buffer for PilT Gm . After resuspension in binding buffer, the cells were lysed by passage through an EmulsiFlex-C3 high-pressure homogenizer, and the cell debris removed by centrifugation for 45 min at 40,000 × g. The resulting supernatant was passed over a column containing 5 ml of pre-equilibrated Ni-NTA agarose resin (Life Technologies, USA). The resin was washed with ten column volumes of binding buffer and eluted over a gradient of binding buffer to binding buffer plus 600 mM imidazole. Purified PilT Gm was additionally purified with a HP anion exchange column pre-equilibrated with binding buffer; the flow-through contained PilT Gm . PilT Gm , PilT Pa , or PilB Gm was then further purified by size-exclusion chromatography on a HiLoad TM 16/600 Superdex TM 200-pg column pre-equilibrated with binding buffer without imidazole or glycerol. For the OCOCOC PilT structure with full-occupancy ADP, 2 mM ATP and 2 mM MgCl 2 were added just prior to size-exclusion chromatography.
For the crystallization of methylated OCOCOC PilT, the size-exclusion chromatography buffer was 50 mM HEPES, pH 7, 150 mM NaCl, 10% v/v glycerol, and subsequent to purification PilT Gm was reductively methylated overnight (Reductive Alkylation Kit, Hampton Research), quenched with 100 mM Tris-HCl, pH 7, and the size-exclusion chromatography step was repeated. All purified proteins were used immediately.
Crystallization, data collection, and structure solution. For crystallization, purified PilT Gm was concentrated to 15 mg/ml (4 mg/ml for methylated OCOCOC PilT) at 3000 × g in an ultrafiltration device (Millipore). Diffraction data were collected by using synchrotron X-ray radiation as noted in Supplementary Table 3. The data were indexed, scaled, and truncated by using XDS 54 . The CCOCCO PilT data were anisotropically truncated and scaled by using the Diffraction Anisotropy Server 55 . PHENIX-MR 56 was used to solve the structures of PilT Gm by molecular replacement with PDB 3JVV preprocessed by the program Chainsaw 57 . In every case, the resulting electron density map was of high enough quality to enable building the PilT protein manually in COOT 58 . Through iterative rounds of building/remodeling in COOT 58 and refinement in PHENIX-refine 59 , the structures were built and refined. Rosetta refinement in PHENIX 60 helped improve models early in the refinement process, while refinement in the PDB-redo webserver 61 helped improve models late in the refinement process. CCCCCC PilT was refined with individual B-factors; all other structures were refined with a single B-factor per residue. The occupancy of the nucleotides in the partial-occupancy-ADP OCOCOC PilT structure was estimated by PHENIX-refine, restricting all atoms in a nucleotide to a uniform occupancy. Progress of the refinement in all cases was monitored by using R free .

Enzyme-coupled ATPase assay. Enzyme-coupled ATPase assays were performed as done elsewhere 62 , with minor modifications. Briefly, the reaction buffer included 40 U/ml lactate dehydrogenase (Sigma), 100 U/ml pyruvate kinase (Sigma), 100 mM NaCl, 2 mM MgCl 2 , 25 mM KCl, 0.8 mM nicotinamide adenine dinucleotide, 10 mM phosphoenolpyruvate, 5 mM ATP, and 0.088 mM PilT Gm . The pH of the reaction buffer was set with a 200 mM MES and 200 mM HEPES dual buffer, at pH 5.5, 6.0, 6.5, 7.0, 8.5, or 9.0. The reaction volume was 100 µl.
Conversion of NADH to NAD + (proportional to the ADP produced by the hydrolysis of ATP) was monitored by measuring the A 340 every 2 min at 25°C for 2 h. Initial reaction rates were used. Control experiments were performed (without added PilT Gm ) spiking the reaction buffer at different pH values with 2 mM ADP; conversion of all NADH to NAD + occurred almost immediately at every pH value used herein, indicating that the reagents were not rate limiting.
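As a worked example of how an initial rate is obtained from such data, the linear decrease of A 340 can be converted to a rate of ADP production via the Beer-Lambert law. This is an illustrative Python sketch, not the authors' analysis code; the NADH molar absorptivity of 6220 M⁻¹ cm⁻¹ is the standard literature value and is not stated in the text.

```python
EPS_NADH = 6220.0  # M^-1 cm^-1, molar absorptivity of NADH at 340 nm (standard value)

def atpase_rate(times_min, a340, path_cm=1.0):
    """Initial ATP-hydrolysis rate (M ADP per minute) from the linear
    decrease of A340 as NADH is consumed, via a least-squares slope."""
    n = len(times_min)
    mt = sum(times_min) / n
    ma = sum(a340) / n
    slope = (sum((t - mt) * (a - ma) for t, a in zip(times_min, a340))
             / sum((t - mt) ** 2 for t in times_min))
    # One NADH is oxidized per ADP produced, so the rate of ADP
    # production equals the rate of NADH loss (negated slope).
    return -slope / (EPS_NADH * path_cm)
```

Only the first, linear portion of the trace should be passed in, matching the use of initial reaction rates above.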
Identifying hexamer symmetry. Each hexamer (extracted from the corresponding PDB coordinates) was aligned in the PyMOL Molecular Graphics System, version 2.2 (Schrodinger, LLC 2010) against the same hexamer six times. Specifically, all six chains were aligned with the n + 1 chains, n + 2 chains, n + 3 chains, n + 4 chains, n + 5 chains, or n + 6 chains, and the RMSD Cα of the alignments was noted. If the RMSD Cα was below 1 Å, the rotation was considered to be equivalent, and if the RMSD Cα was above 4 Å, the rotation was considered to be distinct; this was used to define the symmetry of the hexamer. For example, in the CCOCCO PilT structure, the n + 3 and n + 6 alignments were equivalent, while the n + 1, n + 2, n + 4, and n + 5 alignments were distinct, consistent with C 2 rotational symmetry. If the RMSD Cα was between 1 and 4 Å, the rotation was considered pseudosymmetric. For example, in the ADP-bound OCOCOC PilT structures, the n + 1, n + 3, and n + 5 alignments were distinct, while the RMSD Cα of the n + 2, n + 4, and n + 6 alignments was ~3.5 Å, a pattern consistent with pseudo-C 3 symmetry.
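The bucketing and symmetry inference above can be sketched in a few lines. This is an illustrative Python rendering (the function names are ours, not the paper's); pseudosymmetric rotations are bucketed but, for brevity, only strictly equivalent rotations are used to assign a point group.

```python
def rotation_class(rmsd):
    """Bucket an RMSD-Calpha (angstroms) from aligning a hexamer onto its
    n+k rotation, using the thresholds in the text:
    <1 A equivalent, 1-4 A pseudosymmetric, >4 A distinct."""
    if rmsd < 1.0:
        return "equivalent"
    if rmsd > 4.0:
        return "distinct"
    return "pseudosymmetric"

def point_symmetry(rmsds_by_k):
    """Infer rotational symmetry from RMSDs keyed by rotation k = 1..6.
    The smallest equivalent rotation k gives Cn symmetry with n = 6 // k
    (e.g. n+3 and n+6 equivalent -> C2; n+2, n+4, n+6 equivalent -> C3)."""
    equivalent = {k for k, r in rmsds_by_k.items() if r < 1.0}
    k_min = min(equivalent)  # k = 6 (identity) is always equivalent
    return f"C{6 // k_min}"
```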
Plotting the residues at the packing-unit interface. CMview 63 was used to identify residues that contact one another. The contact type was set to 'ALL' (i.e., every atom available in the structure) and the distance cutoff was set to 4 Å. To identify residues that contact one another even in low-resolution structures, where side-chain modeling is less definitive, additional residue contacts were identified by setting the contact type to 'BB' (i.e., backbone atoms) and the distance cutoff to 8 Å. To identify tertiary structure contacts, the N2D (residues 1-100 in PilT Gm ) and CTD (residues 101-353 in PilT Gm ) were loaded separately; to simplify analysis, the linker between the N2D and CTD was considered to be part of the CTD. To identify packing-unit-forming contacts, the N2D n and CTD n+1 from adjacent chains were loaded together, and the previously identified tertiary structure contacts were subtracted from this contact list. To identify contacts that form the interface between two adjacent packing units, two adjacent packing units were loaded together, and then the previously identified tertiary structure and packing-unit-forming contacts were subtracted from this contact list. For clusters of residues that are in proximity, only the most prominent contacts (salt bridges and dipolar interactions) were considered.
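The subtraction scheme for isolating interface contacts reduces to set operations on residue pairs. A simplified Python sketch follows (one representative coordinate per residue rather than the all-atom or backbone criteria used with CMview; not the authors' workflow):

```python
from itertools import product

def contacts(coords_a, coords_b, cutoff=4.0):
    """Residue-residue contacts between two coordinate dicts
    {residue_id: (x, y, z)}: a pair is a contact if the representative
    points fall within the distance cutoff (angstroms)."""
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    c2 = cutoff ** 2
    return {(i, j)
            for (i, p), (j, q) in product(coords_a.items(), coords_b.items())
            if dist2(p, q) <= c2}

def interface_contacts(all_pairs, tertiary, unit_forming):
    """Interface contacts = contacts between adjacent packing units minus
    tertiary-structure and packing-unit-forming contacts, mirroring the
    subtraction described above."""
    return all_pairs - tertiary - unit_forming
```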
Identifying open versus closed packing-unit interfaces. The open and closed interfaces of PilT Gm crystal structures were initially identified by qualitative comparison with the characterized open and closed interfaces of PilB Gm . Subsequent to our finding that the open and closed interfaces are correlated with intermolecular T221-H230 and T221-L269 distances, the α-carbon distances between these residues were measured in PilT Gm . In other PilT/VirB11-like family members, the α-carbon distances between the residues that correspond with PilT Gm residues T221, H230, or L269 were used. If the T221-H230 distance was greater than 11 Å and the T221-L269 distance was less than 11 Å, the interface was classified as O. If the T221-H230 distance was less than 9 Å and the T221-L269 distance was more than 12 Å, the interface was classified as C. Interfaces that did not meet these criteria were classified as X-interfaces.

NATURE COMMUNICATIONS | (2019) 10:5198 | https://doi.org/10.1038/s41467-019-13070-z | www.nature.com/naturecommunications
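The two distance thresholds above define a simple decision rule; concatenating the six interface labels yields the hexamer conformation strings used throughout the paper. A minimal Python sketch (not the authors' code):

```python
def classify_interface(d_t221_h230, d_t221_l269):
    """Classify a packing-unit interface as open (O), closed (C), or
    neither (X) from inter-chain alpha-carbon distances in angstroms,
    using the thresholds given in the text."""
    if d_t221_h230 > 11.0 and d_t221_l269 < 11.0:
        return "O"
    if d_t221_h230 < 9.0 and d_t221_l269 > 12.0:
        return "C"
    return "X"

def conformation(distance_pairs):
    """Join the six interface labels into a conformation string
    such as OCOCOC or CCCCCC."""
    return "".join(classify_interface(a, b) for a, b in distance_pairs)
```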
Identifying evolutionarily coupled residues. The PilT Pa amino acid sequence was analyzed by using the EVcouplings webserver 34 with default parameters. Homologs were identified, aligned, and analyzed (30,421 in total). This analysis did not include the last 60 C-terminal residues of PilT, as the alignment had too many gaps in this region and default parameters enforce a maximum of 30% allowed gaps. Relaxing this parameter to 50 and 75% maximum gaps allowed more C-terminal residues to be included in the analysis, though some of the evolutionarily coupled residue pairs identified with default parameters were not discovered. To compensate, we merged the evolutionarily coupled residues identified with default parameters, with 50% maximum gaps, and with 75% maximum gaps. This analysis yielded overall coverage from residue 18 to 347, though coupled residues in the last 60 C-terminal residues likely have a lower likelihood of being identified. Only residue pairs with a PLM score greater than 0.2 were included in subsequent analysis. To better understand the significance of these evolutionarily coupled residues, they were compared with the tertiary structure contacts, packing-unit contacts, and open- and closed-interface contacts identified in CMview.
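The merging step amounts to a union of residue pairs across the three gap-threshold runs, with the PLM score filter applied to the best score seen for each pair. A hypothetical Python sketch (data structures are ours, not EVcouplings output formats):

```python
def merge_couplings(runs, plm_cutoff=0.2):
    """Union evolutionarily coupled residue pairs from several runs
    (e.g. 30%, 50%, and 75% max-gap settings), keeping only pairs whose
    best PLM score exceeds the cutoff. Each run is a dict
    {(res_i, res_j): plm_score}; pairs are normalized so res_i < res_j."""
    best = {}
    for run in runs:
        for pair, score in run.items():
            pair = tuple(sorted(pair))
            best[pair] = max(best.get(pair, float("-inf")), score)
    return {p for p, s in best.items() if s > plm_cutoff}
```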
CryoEM analysis. Newly purified PilT Gm at 0.5 mg/ml in binding buffer without imidazole or glycerol was incubated with no additions or with 1 mM MgCl 2 plus 1 mM ATP, 1 mM MgCl 2 plus 0.1 mM ADP, 1 mM MgCl 2 plus 1 mM ADP, 1 mM MgCl 2 plus 1 mM ATP and 0.1 mM ADP, or 1 mM MgCl 2 plus 1 mM ATP and 1 mM ADP at 4°C for 10 min before preparing cryoEM grids. PilT Pa at 0.75 mg/ml or PilB Gm at 0.6 mg/ml in binding buffer without imidazole or glycerol was also incubated without added nucleotide at 4°C for 10 min before preparing cryoEM grids. Three microliters of protein sample was applied to nanofabricated holey gold grids [64][65][66] , with a hole size of ~1 µm, and blotted by using a modified FEI Vitrobot Mark III at 100% humidity and 4°C for 5.5 s before plunge freezing in a 1:1 mixture of liquid ethane and liquid propane held at liquid nitrogen temperature 67 .
CryoEM data were collected at the Toronto High-Resolution High-Throughput cryoEM facility. Micrographs from untilted specimens were acquired as movies with a FEI Tecnai F20 electron microscope operating at 200 kV and equipped with a Gatan K2 Summit direct detector camera. Movies, consisting of 30 frames at two frames per second, were collected with defocus values ranging from 1.2 to 3.0 µm. Data were recorded with an exposure rate of 5 electrons/pixel/s and a calibrated pixel size of 1.45 Å/pixel. For the 40° tilted data collection, micrographs were acquired as movies with a FEI Titan Krios electron microscope (Thermo Fisher Scientific) operating at 300 kV and equipped with a Falcon 3EC direct detector camera. Movies, consisting of 30 frames at 2 s per frame, were collected with defocus values ranging from 1.7 to 2.5 µm. Data were recorded with an exposure rate of 0.8 electrons/pixel/s and a calibrated pixel size of 1.06 Å/pixel.
All image processing of the cryoEM data was performed in cryoSPARC v2.8.0 (ref. 36 ) (Supplementary Table 4). Movie frames were aligned with an implementation of alignframes_lmbfgs within cryoSPARC v2 (ref. 68 ) and CTF parameters were estimated from the average of aligned frames with CTFFIND4 (ref. 69 ). Initial 2D class averages were generated with manually selected particles; these classes were then used to select particles. Particle images were selected and beam-induced motion of individual particles corrected with an improved implementation of alignparts_lmbfgs within cryoSPARC v2 (ref. 68 ). For the 40° tilted data, an implementation of GCTF 37 wrapped within cryoSPARC v2 was used to refine the micrograph CTF parameters while also locally refining the defocus for individual particles with default parameters (local_radius of 1024, local_avetype set to Gaussian, local_boxsize of 512, local_overlap of 0.5, local_resL of 15, local_resH of 5, and refine_local_astm set to Z-height). Particle images were extracted in 256 × 256-pixel boxes. Candidate particle images were then subjected to 2D classification. For the particles that preferentially adopted top views, comparison of 2D class averages of these top views with 2D projections of PilT structures was used to identify the corresponding conformation. 2D projections of PilT structures were generated by using genproj_fspace_v1_01 (J. Rubinstein, https://sites.google.com/site/rubinsteingroup/3-d-fourier-space). For the purposes of 3D classification, particle images contributing to 2D classes without high-resolution features were removed. For samples with tilted views, ab initio reconstruction was performed by using two to four classes. Ab initio classes consistent with hexamers were used as initial models for heterogeneous refinement; particles from 3D classes that did not converge at this stage were removed. Particles from distinct 3D classes were then subjected to homogeneous refinement.
Molecular models could be built into these maps by fitting rigid packing units of the PilT Gm crystal structures into the maps in Chimera 70 . These models were refined against the maps in Phenix-Refine 71 with reference model restraints to the ATP-bound PilT Gm crystal structure (or for the PilB Gm model, PDB 5TSH), enabling the following refinement options: minimization_global, rigid_body, simulated_annealing, and adp. The overall quality of the maps was impacted by the anisotropy from preferred orientations, so side chains were not modeled.
In vivo transcomplementation assays. P. aeruginosa PAO1 pilT::FRT (Supplementary Table 2) was electroporated with pBADGr, pBADGr::PilT Pa , or pBADGr::PilT Pa derivative mutant constructs for transcomplementation of PilT. Twitching assays were performed in 150 mm by 15 mm polystyrene petri dishes (Fisher Scientific) with 30 µg/ml gentamicin for 18 h at 37°C 72,73 . After this incubation, the agar was carefully discarded and the adherent bacteria were stained with 1% (w/v) crystal violet dye, followed by washing with deionized water to remove unbound dye. Twitching zone areas were measured by using ImageJ software 74 . Twitching motility assays were performed in six replicates.
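The zone areas were measured in ImageJ; as an illustration only, the underlying computation amounts to counting stained pixels in a thresholded image and scaling by the pixel footprint. A hypothetical Python sketch, not the workflow used here:

```python
def zone_area_mm2(binary_mask, mm_per_pixel):
    """Twitching-zone area from a thresholded image of the crystal-violet
    stain: count truthy (stained) pixels and scale each by its physical
    footprint in mm^2."""
    stained = sum(sum(1 for v in row if v) for row in binary_mask)
    return stained * mm_per_pixel ** 2
```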
Surface pili were analyzed as previously described 75 , with the exception that sheared supernatant pili and flagellins were precipitated with 100 mM MgCl 2 incubated at room temperature for 2 h prior to pelleting. These pellets were resuspended in 100 μl of 1× SDS-PAGE sample buffer, 10 μl of which was loaded onto a 20% SDS-PAGE gel. In parallel, western blot analysis was performed on the cells used to produce the pili to confirm mutant stability by using rabbit polyclonal anti-PilT antibodies (Supplementary Table 2).
In preparation for the PO4 phage infection assay, 25 ml of LB with 1.5% (w/v) agar was solidified in 150 mm by 15-mm polystyrene petri dishes (Fisher Scientific). On top of this layer, 8 ml of 0.6% (w/v) agar preinoculated with 1 ml of A 600 = 0.6 transcomplemented P. aeruginosa PAO1 pilT::FRT was poured and solidified. Three microliters of 10 4 plaque-forming units per ml PO4 phage were then spotted onto these plates in triplicate and incubated at 30°C for 16 h before images of the plates were acquired.
Bacterial two-hybrid analysis (BACTH). BACTH analysis was performed as done previously 38 with minor adjustments. Briefly, E. coli BTH101 cells (Supplementary Table 2) were co-transformed with pUT18C::pilC, or pUT18C::pilT and mutants of pKT25::pilT. Preliminary tests suggested that prolonged incubation under induction conditions at 25°C improved the signal to noise, so three colonies from each of these transformations were individually streaked onto MacConkey-Maltose agar plus 100 µM ampicillin, 50 µM kanamycin, and 0.5 mM IPTG and incubated overnight at 25°C. A single colony from these plates was then used to inoculate 500 µl of LB plus 100 µM ampicillin, 50 µM kanamycin, and 0.5 mM IPTG at 25°C until the A 600 was 1.0. Five microliters of this solution was then used to inoculate 500 µl of LB plus 100 µM ampicillin, 50 µM kanamycin, and 0.5 mM IPTG at 25°C for 16 h. Thirty-five microliters of this was then used to inoculate 600 µl of LB plus 100 µM ampicillin, 50 µM kanamycin, and 0.5 mM IPTG at 25°C for 1 h, then at 18°C until the A 600 = 0.6. All the replicates were then normalized to 600 µl and A 600 = 0.6, then pelleted at 3200 × g for 5 min. The supernatant was carefully removed, and the pellet was resuspended in 100 µl of 200 mM Na 2 HPO 4 , pH 7.4, 20 mM KCl, 2 mM MgCl 2 , 0.8 mg/ml cetyltrimethylammonium bromide detergent, 0.4 mg/ml sodium deoxycholate, and 0.54% (v/v) β-mercaptoethanol. After 5 min of incubation at room temperature, 10 µl of this solution was transferred to a 96-well clear-bottom plate. One hundred and fifty microliters of a second solution was then added: 60 mM Na 2 HPO 4 , pH 7.4, 40 mM KCl, 20 µg/ml cetyltrimethylammonium bromide detergent, 10 µg/ml sodium deoxycholate, 0.27% (v/v) β-mercaptoethanol, and 1 mM ortho-nitrophenyl-β-galactoside. The A 420 and A 550 were measured every 2 min for 30 min at 30°C.
β-galactosidase activity in Miller units was calculated by finding the slope of 1000 × (A 420 -1.75 × A 550 )/(0.6 absorbance units × 0.06 ml) over time (minutes) of the linear portion of the initial reaction. BACTH assays were performed in triplicate.
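The Miller-unit formula above can be written out explicitly as the slope of the corrected absorbance over time. A minimal Python sketch of that calculation (only the linear portion of the trace should be supplied, as stated in the text):

```python
def miller_units(times_min, a420, a550, a600=0.6, volume_ml=0.06):
    """Beta-galactosidase activity in Miller units: the least-squares
    slope over time (minutes) of 1000 * (A420 - 1.75 * A550) / (A600 * volume),
    per the formula above."""
    corrected = [1000.0 * (a4 - 1.75 * a5) / (a600 * volume_ml)
                 for a4, a5 in zip(a420, a550)]
    n = len(times_min)
    mt = sum(times_min) / n
    mc = sum(corrected) / n
    return (sum((t - mt) * (c - mc) for t, c in zip(times_min, corrected))
            / sum((t - mt) ** 2 for t in times_min))
```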
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Structural data that support the findings of this study have been deposited in the Protein Data Bank with the accession codes 6OJY, 6OJZ, 6OK2, 6OJX, 6OKV, 6OLL, 6OLK, 6OLM, and 6OLJ, as well as the Electron Microscopy Data Bank with the accession codes EMD-20116, EMD-20115, EMD-20117, and EMD-20114. The source data underlying Figs. 1i, 4a, b and 5a, b and Supplementary Fig. 2 are provided as a Source Data file. All other data are available within the paper and its Supplementary information files or are available from the corresponding author upon reasonable request.
Assessment of Coriandrum Sativum L., Trigonella Foenum Graecum L., Pimpinella Anisum L., and Their Combinations Effect on Growth Performance, Carcass Trait and Hematobiochemical Parameters in Broiler Chicken
This study was designed to examine the effects of three phytogenic feed additives (PHT) on certain zootechnical and hematobiochemical parameters in broiler chicken. The PHT were formulated from Coriandrum sativum L., Trigonella foenum-graecum L., and Pimpinella anisum L. 360 one-day-old Cobb broilers were randomly divided into 4 dietary treatment groups: a control group (CTLG), and three groups fed a basal diet supplemented respectively with 3% of coriander (PHT1G), 3% of a combination of 50% coriander and 50% fenugreek (PHT2G), and finally 3% of a combination of 50% coriander and 50% anise (PHT3G). The results showed that the birds of the PHT3 group realized the highest live body and internal organ weights. However, the weight of the abdominal fat was not affected. Broilers in the same group had a significantly (P < 0.05) higher lymphocyte level of 120 × 10³/µl, followed by the PHT2 group with 80 × 10³/µl. The level of monocytes in the PHT2 and PHT3 groups was respectively 66 × 10³/µl and 60 × 10³/µl. Concerning the granulocytes, we noted 200 × 10³/µl in the PHT2 group and 102 × 10³/µl in PHT3G. A significant difference (P < 0.05) was recorded in the uric acid levels, with 50.4 mg/l, 59.84 mg/l, and 47.29 mg/l respectively for the PHT1G, PHT2G, and PHT3G groups. Levels of uric acid were lower than the level recorded in the control group (84.36 mg/l). The use of the phytogenic feed additives we formulated may have a positive effect both on weight gain and hematobiochemical parameters in broiler chicken, especially the levels of different types of white blood cells and the uric acid rate.
Introduction
The utilization of herbs and spices has been extensively studied in poultry diets as an alternative to antibiotics and as growth promoters (Abd El-Hack et al., 2021; Kuralkar and Kuralkar, 2021; Paula et al., 2020). Feeding medicinal herbs to poultry is beneficial in responding to consumers' needs and to legislative limitations aimed at avoiding the use of antibiotic growth promoters and ionophores in modern intensive poultry production (Adhikari et al., 2019). Consumers are interested in meat quality, produced through sustainable livestock products that are free of chemicals harmful to health and that simultaneously have superior sensorial and preservation characteristics (Socaci et al., 2020). The effects of phytogenics are linked to their specific phytochemical components. The bioactive molecules improve chickens' production potential by enhancing poultry immunity (Oladokun and Adewole, 2020; Rasouli et al., 2019) and improving the digestive process. They preserve a balanced gut microflora and intestinal uptake (Kim et al., 2019), and reduce the spread of disease. These advantages can be achieved by including various medicinal plants in the feed or drinking water of broilers (Seidavi et al., 2021). Another advantage of incorporating additives is to enrich the feed with antioxidants and bioactive antimicrobial compounds (Hashemi et al., 2012). Among many spices, coriander is a medicinal and spice plant whose leaves, seeds, and fruits have many beneficial biological characteristics, such as antimicrobial, antioxidant, and anti-inflammatory activities (Silva et al., 2020; Socaci et al., 2020). Considerable research outlines that coriander seed in poultry feed has a positive impact on zootechnical performance, carcass yield, blood biochemical profile, and the mineral composition of chicken meat (Khubeiz and Shirif, 2020; Jameel, 2019). Hosseinzadeh et al. (2014) reported that coriander seed powder has been used as an alternative to antibiotics against Newcastle disease and infectious bronchitis in chicken feed.
Fenugreek is rich in flavonoids, phenols, saponins, alkaloids, and other bioactive compounds (Akbari et al., 2020). It has many interesting bioactive characteristics such as antimicrobial, antioxidant, antifungal, and antiviral properties, digestive stimulation, and immunomodulation (Srinivasa and Naidu, 2020). Recent studies in broiler chickens have shown that fenugreek supplementation significantly reduces blood cholesterol and glycemia levels, promotes the immune response, and improves plasma total protein and globulin. Either individually or as a combined mixture, phytogenic herbs and spices preserve broilers' safety and production (Hafeez et al., 2020; Meradi et al., 2020). Studies on the use of phytogenics as growth promoters in animal production are numerous, but the virtues of these natural products are still worth exploring. In this context, the present study aimed to assess the effects of Coriandrum sativum L. and its combinations with Pimpinella anisum L. and Trigonella foenum graecum L. on growth performance, carcass traits, and hematobiochemical parameters in broiler chickens.
Material And Methods
Animals and diets
A total of 360 one-day-old, non-sexed Cobb 500 broilers were purchased from a commercial hatchery and raised in litter floor pens at the Department of Agricultural Sciences, University of Biskra, Algeria. The chicks had an initial body weight of 47.33 ± 0.10 g. Four dietary treatment groups were formed: a control group fed a basal diet (CTLG), and three groups fed a basal diet supplemented with a phytogenic formulation: a group with 3% coriander supplementation (PHT1G), a group with 3% of a 50% coriander-50% fenugreek combination (PHT2G), and finally a group with 3% of a 50% coriander-50% anise combination (PHT3G). Each experimental group contained 3 replicates of 30 birds.
Feed and water were given ad libitum. The rations were formulated to be isocaloric and isonitrogenous according to NRC (1994) recommendations. Table 1 summarizes the ingredients and nutrient composition of the basal diet.
Plant material
In our study, the three plants tested were used as seeds. The seeds of the three species (Coriandrum sativum L., Trigonella foenum graecum L., and Pimpinella anisum L.) were harvested during 2020 in Biskra province, Algeria. Within 6 days after collection, the seeds were cleaned, air-dried, and stored under appropriate conditions until used as phytogenics.
Growth Performances and Carcass Trait
Zootechnical parameters were measured. The diets distributed and the feed refused were weighed regularly to determine feed intake and the feed conversion ratio, and the birds were weighed weekly to calculate the average daily weight gain. At the end of the experimental period (day 42), ten subjects per replicate were randomly taken from each group and individually weighed to determine live body weight. The selected birds were sacrificed and eviscerated. The carcasses and the internal organs (liver, proventriculus, gizzard, small intestine) were weighed, and the abdominal fat, the breast, and the leg (thigh + drumstick) were measured. The carcass yield was expressed as a percentage of live body weight.
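The two zootechnical indices described above reduce to simple ratios. A minimal sketch in Python; all numbers below are hypothetical illustrations, not the study's raw data:

```python
def average_daily_gain(initial_w_g, final_w_g, days):
    """Average daily weight gain in g/day over the rearing period."""
    return (final_w_g - initial_w_g) / days

def feed_conversion_ratio(feed_distributed_g, feed_refused_g, weight_gain_g):
    """FCR = feed actually consumed / weight gain (lower is better)."""
    return (feed_distributed_g - feed_refused_g) / weight_gain_g

# Hypothetical bird: 47 g at day 1, 2950 g at day 42, 4200 g feed offered,
# 50 g refused over the whole period
adg = average_daily_gain(47.0, 2950.0, 42)
fcr = feed_conversion_ratio(4200.0, 50.0, 2950.0 - 47.0)
```

With these assumed figures the bird gains roughly 69 g/day and converts about 1.43 g of feed per gram of gain.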
Hematobiochemical parameters
On the 39th day, ten subjects were randomly chosen from each group and blood samples were collected from the wing vein. Blood biochemistry (glycemia, total cholesterol, total proteins, uric acid, creatinine, globulin, and albumin levels) was tested, and an assessment of the blood cellular composition (red blood cells, lymphocytes, monocytes, and granulocytes) was performed.
Statistical analysis
Using SPSS, the data obtained on the various parameters were statistically analyzed by analysis of variance (ANOVA) followed by a comparison of means according to the Newman-Keuls test. The difference was considered significant when P < 0.05.
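As an illustration of this pipeline, a one-way ANOVA across the four dietary treatments can be sketched with SciPy. The group values are simulated around the reported uric acid means; they are not the study's data:

```python
import numpy as np
from scipy import stats

# Simulated uric acid values (mg/l), 10 birds per group; the means follow
# the reported group means but the individual values are hypothetical.
rng = np.random.default_rng(0)
groups = {
    "CTL":  rng.normal(84.4, 8.0, 10),
    "PHT1": rng.normal(50.4, 8.0, 10),
    "PHT2": rng.normal(59.8, 8.0, 10),
    "PHT3": rng.normal(47.3, 8.0, 10),
}

# One-way ANOVA across the four dietary treatments
f_stat, p_value = stats.f_oneway(*groups.values())
significant = p_value < 0.05
```

SciPy has no Student-Newman-Keuls routine; in practice a post hoc comparison such as statsmodels' `pairwise_tukeyhsd` would follow a significant F-test.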
Growth Performances
The analysis of the results shows that over the 42 days, the best live body weight and the highest average daily gain (ADG) were recorded in PHT3G, with 2966.98 g and 70.64 g respectively. However, feed intake did not differ significantly (P > 0.05) among PHTG3, PHTG2, and the CTLG, which reached successively 4206.
Hematobiochemical Parameters
Glycemia values were not affected by the phytobiotic compounds in any group (P > 0.05). However, total cholesterol was significantly lower in the PHTG1, PHTG2, and PHTG3 groups than in the CTLG (P < 0.05); the cholesterol levels were 0.92 g/l, 0.95 g/l, and 1.05 g/l for PHTG3, PHTG1, and PHTG2 respectively.
The total protein values in PHTG1 and PHTG2 were higher than in the control and PHTG3 (P < 0.005); both coriander alone and the PHTG2 combination caused an increase in plasma protein levels. Concerning globulin, we recorded the highest values in PHTG1 and PHTG2, but without a significant difference (P > 0.05). The highest albumin value was recorded in PHTG2, followed by PHTG1 (12.19 g/l, 11.41 g/l, and 10.81 g/l respectively). Regarding the albumin/globulin ratio, the highest values were recorded in PHTG3 with 0.86 and in the CTLG with 0.76, while the lowest ratios were recorded in PHTG1 and PHTG2 (0.37 and 0.43 respectively).
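The albumin/globulin ratio reported above is derived arithmetically: globulin is commonly obtained as total protein minus albumin. A small illustrative helper (the numbers are hypothetical, not the study's):

```python
def albumin_globulin_ratio(total_protein_g_l, albumin_g_l):
    """A/G ratio; globulin is typically total protein minus albumin."""
    globulin = total_protein_g_l - albumin_g_l
    return albumin_g_l / globulin

# Hypothetical sample: 42 g/l total protein, 12 g/l albumin
ratio = albumin_globulin_ratio(42.0, 12.0)  # 12 / 30 = 0.4
```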
We noted a significant difference in the uric acid levels (P < 0.05): the lower values were registered in the experimental groups, at 50.4 mg/l, 59.84 mg/l, and 47.29 mg/l respectively in PHTG1, PHTG2, and PHTG3. We recorded the lowest red blood cell count, 3.48×10⁶/mm³, in the CTLG, and the highest lymphocyte level, 120×10³/µl, in the PHTG3 group, followed by PHTG2 with 80×10³/µl. In our study, the improvement of growth performance with the phytogenic feed additive composed of coriander and green anise may be due to enhanced palatability and digestive enzyme activity, which are affected by the level of linalool (Brenes and Roura, 2010), and to the antibacterial action of the spices against the development of damaging microflora, through which they act as phytogenic growth promoters (Pathak et al., 2011).
Carcass Trait
Results regarding the effect of phytobiotics on carcass and internal organ performance are in agreement with the results of several researchers, who have reported that natural products improve the feed intake and live weight of birds, which affects carcass yield, liver, heart, and gut. It has also been noted that the incorporation of fenugreek in the diet has no negative effect on performance, carcass, or internal organ weights of chicken. In fact, the bioactive compounds of herbs and spices have beneficial effects on animal welfare and enhance the nutritional quality of meat. Several strategies have been adopted to enrich animal products, especially the fatty acid profile of meat, by introducing plants into animal diets to enrich them with omega-3 (Mourot, 2009). Phytobiotics promote digestion, which can influence weight gain, because digestion involves regulation and modulation of the metabolic and immune systems (Gadde et al., 2017). Currently, much research focuses on meat quality and the oxidation process. Lipid and protein oxidation is recognized as a major threat to the quality of poultry products, and the inclusion of phytochemicals in the feed or directly in the meat product is a significant solution (Akram et al., 2020).
In our study, the significant improvement of carcass parameters could be explained by the stimulation of digestive enzyme secretions, which leads to better absorption of nutrients, such as amino acids, from the digestive tract (Rahimi et al., 2011). Furthermore, the antioxidant compounds and phenolic substances in vegetable products improved the carcass breast of broilers by 1.2% (Abo Omar et al., 2016).
Hematobiochemical Parameters
Our hematobiochemical results are in agreement with those obtained by Saeid and Al Nasry (2010), who noted that the glycemia level varied between 1.09 g/l and 2.13 g/l with different incorporation levels of coriander seeds in broiler feed. Chettouh reported that coriander seeds improve the hematological composition of red blood cells, hemoglobin, and platelets, while finding no difference in white blood cells in broiler chickens.
Conclusion
The effects of the phytobiotic compounds PHTG2 and PHTG3 were more important than the effect of coriander used alone; indeed, PHTG3 was the most interesting formulation, followed by PHTG2. Live body weight, average daily gain, feed intake, FCR, and carcass yield were improved, and the percentages of the breast and the leg were enhanced, while the abdominal fat was not affected by the phytobiotic compounds. PHTG3 positively affected the weights of the liver, proventriculus, and gizzard. The association of Coriandrum sativum L. with Pimpinella anisum L. and Trigonella foenum graecum L. had a positive effect on hematobiochemical parameters in broiler chicken, especially by increasing the levels of the different types of white blood cells (lymphocytes, monocytes, and granulocytes). Furthermore, the incorporation of these natural products reduced total cholesterol and increased plasma protein levels; the albumin/globulin ratio was strongly affected, and the uric acid levels decreased significantly.
Declarations
This study followed the international guidelines of animal care and use in research and teaching (NRC, 2011). All procedures performed in this research were approved by the Scientific and Technical Research Centre on Arid Regions (CRSTRA), University of Biskra.
"year": 2022,
"sha1": "9f0d30714d22bf7b97fcaef2e19bc54bb8df5134",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1247372/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "16e87a835e74aa32d18e681490dd10ad5ea68524",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
Effects of follicular versus luteal phase-based strength training in young women
Hormonal variations during the menstrual cycle (MC) may influence trainability of strength. We investigated the effects of a follicular phase-based strength training (FT) on muscle strength, muscle volume and microscopic parameters, comparing it to a luteal phase-based training (LT). Eumenorrheic women without oral contraception (OC) (N = 20, age: 25.9 ± 4.5 yr, height: 164.2 ± 5.5 cm, weight: 60.6 ± 7.8 kg) completed strength training on a leg press for three MC, and 9 of them participated in muscle biopsies. One leg had eight training sessions in the follicular phases (FP) and only two sessions in the luteal phases (LP) for follicular phase-based training (FT), while the other leg had eight training sessions in LP and only two sessions in FP for luteal phase-based training (LT). Estradiol (E2), progesterone (P4), total testosterone (T), free testosterone (free T) and DHEA-s were analysed once during FP (around day 11) and once during LP (around day 25). Maximum isometric force (Fmax), muscle diameter (Mdm), muscle fibre composition (No), fibre diameter (Fdm) and cell nuclei-to-fibre ratio (N/F) were analysed before and after the training intervention. T and free T were higher in FP compared to LP prior to the training intervention (P < 0.05). The increase in Fmax after FT was higher compared to LT (P <0.05). FT also showed a higher increase in Mdm than LT (P < 0.05). Moreover, we found significant increases in Fdm of fibre type ΙΙ and in N/F only after FT; however, there was no significant difference from LT. With regard to change in fibre composition, no differences were observed between FT and LT. FT showed a higher gain in muscle strength and muscle diameter than LT. As a result, we recommend that eumenorrheic females without OC should base the periodization of their strength training on their individual MC.
Introduction
In past decades, it has repeatedly been verified that serum concentrations of luteinizing hormone (LH), follicle-stimulating hormone (FSH), estradiol (E2) and progesterone (Prog) fluctuate during the menstrual cycle and that the levels of androstenedione and testosterone reach their peak prior to, or at the time of, ovulation (Longcope 1986, Van Look and Baird 1980). This fluctuation of hormones during the menstrual cycle may influence 1) acute exercise performance during the respective phase, and 2) the trainability of muscle strength in a period when the hormone milieu favours gains in muscle mass (Constantini et al. 2005; Janse de Jonge 2003; Lebrun 1994).
A number of studies on the effects of the menstrual cycle on exercise performance, however, show conflicting results. In their most recent overviews of this topic, the above-mentioned reviewers revealed that some studies showed higher strength during the follicular phase than during the luteal phase, whereas other studies reported the highest strength during the mid-luteal phase, while the majority of studies could not find any alteration in muscle strength over the menstrual cycle (Constantini et al. 2005; Janse de Jonge 2003; Lebrun 1994). In the last 10 years, only three studies on the variation of muscle strength across the menstrual cycle included hormone analysis to verify the phase of the menstrual cycle in which subjects were tested. None of them found any effect of the phase of the menstrual cycle on isokinetic peak torque of the knee extensors/flexors and maximum isometric strength of knee extension (Bambaeichi et al. 2004), on maximum voluntary isometric force of the first dorsal interosseus muscle (Elliott et al. 2003), or on handgrip strength, isokinetic muscle strength of the knee extensors (peak torque), muscle endurance and a one-leg hop test (Fridén et al. 2003).
The effects of the phase of the menstrual cycle on the trainability of strength in humans are even less clear, even though the available empirical evidence is promising. The only strength training intervention study using the different hormonal milieus of FP and LP as modulators of training adaptability analysed the possibly divergent effects of training stimuli in either FP or LP on the amount of strength gain in healthy women (Reis et al. 1995). The authors described a higher trainability of strength of the one-leg knee extensor muscles in 7 women when the respective leg was mainly trained for 4 weeks in FP, compared to a training periodization that disregarded the phase of the cycle. Despite the small number of subjects and wide inter-individual variability, all subjects of this study showed higher strength adaptations during the follicular phase-based training.
One possible link between the menstrual cycle and training-induced increases in muscle mass is the fluctuation of steroid hormones throughout the cycle and their possible effects on protein synthesis. The effects of estrogens on human muscle have mainly been investigated in peri- and postmenopausal women. The striking decline in muscle strength occurring during the perimenopausal and postmenopausal period can be reversed by hormone replacement therapy (Bergström 1962; Jabbour et al. 2006). The discovery of 3 types of estrogen receptors has led to the finding that estrogen may govern the regulation of a number of downstream genes and molecular targets (Enns and Tiidus 2010; Lowe et al. 2010). Very recently, estrogen receptors α and β have been shown to be involved in muscle differentiation, including the slow myosin heavy chain (MHC) isoform, which is the dominant MHC isoform in type I fibres (Pellegrini et al. 2014), indicating that estrogens might influence muscle fibre type distribution. Further, one recent study reported that women using hormone replacement therapy had significantly greater up-regulation of pro-anabolic gene expression both at rest and following eccentric exercise compared to a control group (Dieli-Conwright et al. 2009). Estrogens may also positively influence post-damage repair processes through activation and proliferation of satellite cells, a well-known mechanism of skeletal muscle cell adaptation after a (strength) training stimulus (Enns and Tiidus 2010). Furthermore, it has recently been postulated that the beneficial effect of estrogens on muscle strength is accomplished by improving the intrinsic quality of skeletal muscle, whereby fibres are enabled to generate force, i.e. myosin strongly binds to actin during contraction, which might also lead to higher strength gains during training (Lowe et al. 2010).
Several recent clinical trials have indicated that testosterone supplementation at physiological doses in androgen-deficient women induces improvements in lean body mass, clearly indicating the pro-anabolic effects of low-level testosterone on female skeletal muscle. These physiological effects may be critical for athletic performance and strength, albeit the effects of testosterone supplementation in women with serum androgen concentrations within the normal range have not been studied (Enea et al. 2011).
Only very few data exist on the physiological effects of progesterone (P4) on the female skeletal muscle cell. Recent studies have consistently found amino acid oxidation and protein degradation to be greater in the luteal phase (LP) compared with the follicular phase (FP) at rest and during exercise. It appears that P4 is responsible for the consistent finding of increased protein catabolism in LP, while estrogens may reduce protein catabolism (Oosthuyse and Bosch 2010).
All these studies support the hypothesis that estrogen and testosterone induce anabolic effects and progesterone more catabolic effects on skeletal muscle, and that timing strength training according to hormone concentrations might affect skeletal muscle adaptations. Indeed, Reis et al. (1995) did include an analysis of steroid hormones in the above-mentioned strength training intervention study. In short, they found that estradiol during the training period was positively correlated with the muscle cross-sectional area, that estradiol before the training period was positively correlated with the development of maximal strength after one menstrual cycle, that changes in progesterone between the luteal phases were negatively correlated with the development of maximal strength, and that testosterone during the training period was positively correlated with changes in the muscle cross-sectional area. Although the authors noted that the sample was very small and that correlations do not necessarily represent cause-and-effect relationships, they concluded that their findings suggest considering not only testosterone and free testosterone but additionally the characteristic female hormones, estradiol and progesterone, when investigating the interrelations between physical performance and the endocrine system of female athletes.
Overall, the existing data indicate a more anabolic state in FP and the peri-ovulatory phase of the MC, compared to a more catabolic state in LP. No study, however, has so far combined the analysis of strength, macroscopic and cellular parameters after menstrual-cycle triggered training. Therefore the aim of the present study was to investigate the effects of menstrual cycle phase-based strength training on strength, macroscopic and microscopic muscle adaptations in a controlled training intervention study in healthy young females.
Methods
Subjects 20 healthy eumenorrheic women (mean (± SD) age: 25.9 ± 4.5 yr, height: 164.2 ± 5.5 cm, weight: 60.6 ± 7.8 kg) volunteered to participate in this study. Subjects were either untrained or moderately trained students from the university who partly participated in sport programs like aerobics or yoga. Some of them used a bicycle for transportation purposes. All subjects performed less than 2 hours of regular physical training per week. No one was experienced in or was currently performing resistance training. Subjects had not been taking oral contraceptives (OC) or any other hormonal treatments during the year prior to participation in this study and had no history of any endocrine disorders. All subjects had regular menstrual cycles (28.6 ± 2.3 days), and basal body temperature increased during the luteal phase of each cycle. Prior to the study, participants were informed about the purpose, procedures and risks of the study, however were not informed about the underlying hypotheses in order to avoid influence of motivation especially on maximal strength diagnosis. Blinding of training frequency concerning the phase of the menstrual cycle was not possible. Written informed consent was obtained from each participant. Approval for the study was obtained from the Ethics Committee of the Medical Faculty of Ruhr-University Bochum, Germany.
Experimental design
This study was a controlled trial in which the effects of two different menstrual cycle-based leg strength training programs were compared to each other (follicular phase-based training (FT) versus luteal phase-based training (LT)). Every participant performed both programs at the same time: one program with one leg, the other program with the other leg. To eliminate effects of a preference for one body side, the assignment of training programs to body sides was randomized. The duration of the study for the individual participant depended on the length of her menstrual cycle. The entire study took five MC (two control cycles followed by three training cycles), equivalent to about 140 days considering that one menstrual cycle takes approximately 28 days.
Throughout the whole study period, the integrity of each individual menstrual cycle was analysed by measurements of basal body temperature and documented in a menstrual cycle calendar. If no clear increase in body temperature could be demonstrated around the middle of any of the menstrual cycles, the subject was excluded from the study. Four subjects were excluded due to a missing mid-cycle increase in body temperature. The remaining n = 20 subjects all had detectable increments of mid-cycle basal body temperature in each of the five menstrual cycles. In the second control cycle and in each training cycle, blood samples for hormonal analysis were taken from a cubital vein on day 11 (late follicular phase) and on day 25 (late luteal phase) of the menstrual cycle when cycle length was between 27 and 29 days. When cycle length was between 30 and 31 days, venous blood was collected on day 13 and on day 27, and when cycle length was between 25 and 26 days, venous blood was collected on days 9 and 23. The conditions of blood sampling were strictly standardized: blood was taken between 8 and 9 am after overnight fasting, with the subjects in supine position. Strength of maximum isometric knee extension (Fmax) was measured around day 11 and around day 25 of the menstrual cycle, according to the individual length of the cycle. The sum of the diameters of the rectus femoris, vastus intermedius and vastus lateralis muscles (Mdm) was measured on day 25, and muscle biopsies were taken from the vastus lateralis muscle on day 27. Strength training was performed throughout the three consecutive training cycles. Fmax was repeatedly measured throughout the training period on day 14 and on day 27 of each training cycle. In the third training cycle, the investigations that had been performed in the second control cycle (blood sampling, Fmax, Mdm, muscle biopsy) were repeated on the respective days.
We did not control for diet during the intervention period. Subjects, however, were instructed not to change their normal diet pattern, and they did not report any change when asked. Further, subjects were instructed to maintain their normal physical activity pattern outside the strength training. If they planned any one-leg training outside the study, they were instructed not to perform it within the study period.
Monitoring of menstrual cycle integrity
In order to determine the exact individual training and testing schedule, subjects measured their basal body temperature every morning throughout the entire study period at the same time before getting out of bed. The occurrence of ovulation was defined when an increase in basal body temperature of at least 0.3°C was measured (Kelly 2006;Owen 1975).
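The ovulation criterion above (a rise in basal body temperature of at least 0.3°C) lends itself to a simple programmatic check. The sketch below compares each reading against a rolling follicular baseline; the 6-day baseline window and the temperature series are assumptions for illustration, not part of the study protocol:

```python
def detect_ovulation(temps, threshold=0.3, baseline_days=6):
    """Return the index of the first day whose temperature exceeds the mean
    of the preceding `baseline_days` readings by at least `threshold` °C,
    or None if no such shift occurs. A simplified stand-in for the chart
    inspection used in practice."""
    for i in range(baseline_days, len(temps)):
        baseline = sum(temps[i - baseline_days:i]) / baseline_days
        if temps[i] - baseline >= threshold:
            return i
    return None

# Hypothetical cycle: flat follicular readings, luteal shift on day 14 (index 13)
cycle = [36.4, 36.5, 36.4, 36.5, 36.4, 36.5, 36.4, 36.5, 36.4, 36.5,
         36.4, 36.5, 36.4, 36.9, 36.9, 37.0]
shift_day = detect_ovulation(cycle)
```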
Strength training program
The subjects completed three cycles of a controlled, supervised strength training program. The training was done four times a week: three times per week (Mondays, Wednesdays and Fridays) under supervision on a leg press machine and once per week (Saturdays) at home with the subject's own body weight (squats). Training participation was documented by the supervisors, and subjects additionally documented their participation in a training calendar. Owing to close supervision, the participation rate reached 92%. On the leg press, subjects performed a one-leg sub-maximal strength training (80% of the maximum strength of the respective leg) with three sets of 8-10 repetitions until exhaustion and with 3-5 minutes of recovery between sets. More weight was added for the leg press exercise when a subject performed more than 12 repetitions. At home they performed three sets of 15-20 one-leg squats with 3-5 minutes of recovery between sets.
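The load-progression rule described above ("add weight when more than 12 repetitions are performed") can be sketched as a simple function; the 2.5 kg increment is an assumed plate size, not specified in the paper:

```python
def next_session_load(current_load_kg, reps_completed, increment_kg=2.5):
    """Progression rule from the protocol: the target is 8-10 repetitions
    to exhaustion; the load is raised once more than 12 repetitions can be
    performed. The increment size is an assumption for illustration."""
    if reps_completed > 12:
        return current_load_kg + increment_kg
    return current_load_kg
```

For example, `next_session_load(80.0, 13)` raises the load to 82.5 kg, while completing 10 repetitions leaves it unchanged.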
One leg was mainly trained in FP (FT) and the other leg mainly in LP (LT). The strength training program started individually on the 1st day of the menstrual cycle. As soon as basal body temperature had increased by more than 0.3°C for 3 days, subjects changed the training leg. This scheme resulted in eight training sessions in FP and just two in LP for the follicular phase-based training (FT), and in eight training sessions in LP and just two in FP for the luteal phase-based training (LT) (Table 1). The respective two training loads in LP during FT, and in FP during LT, aimed at conserving the induced adaptation processes.
Hormone analysis
Venous blood was centrifuged after clotting and the serum kept frozen at −80°C until analysis. Each sample was analysed for estradiol (E2), P4, total testosterone (T), free testosterone (free T), and dehydroepiandrosterone sulfate (DHEA-s). E2, P4, T, and DHEA-s were assayed by immunochemistry (Elecsys® 1010 System, Roche Diagnostics GmbH), and free T was analysed by radioimmunoassay (Multi-Crystal LB 2111 gamma counter, Berthold Technologies GmbH & Co. KG).
The Elecsys 1010 analyser is a fully automatic, run-oriented analyser system for determination of immunological tests using the ECL/Origen electro-chemiluminescent process. ECL is a process in which highly reactive species are generated from stable precursors at the surface of an electrode. These highly reactive species react with one another producing light. The system measures samples in the form of serum and plasma. Depending on the test used, the results are produced either as quantitative or qualitative results. All components and reagents for routine analysis are integrated in or on the analyser. The measurement signals produced are used by the Elecsys 1010 to calculate the results. The measuring cell is a sealed chamber and consists of a working electrode, counter electrodes, a magnet and a photomultiplier. An immunological ECL test is made up of various pipetting steps, at least one incubation period and a measurement step. Generally, at least three test components (sample, reagent and microparticles) are pipetted into an assay cup. After the appropriate incubation period, the reaction mixture is aspirated into the measuring cell where the measurement process takes place. Each of these pipetting cycles is performed within a defined period (approximately 60 seconds).
The Multi Crystal LB 2111 gamma counter is a compact, robust and easy-to-use instrument for most applications where gamma-emitting isotopes are used. The 12 detectors are made of high-quality NaI well-type crystal providing best measurement geometry for gamma emitters located in the sample tubes. The principle of the analysis of free testosterone follows the basic principle of radioimmunoassay where there is competition between a radioactive and a non-radioactive antigen for a fixed number of antibody binding sites. The amount of I125-labelled testosterone analog bound to the antibody is inversely proportional to the concentration of the free testosterone present.
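The inverse count-to-concentration relationship described for the radioimmunoassay implies that sample concentrations are read off a standard curve of calibrator doses versus bound counts. Below is a simplified sketch using log-dose interpolation; real RIA software typically fits a four-parameter logistic curve instead, and all numbers here are hypothetical:

```python
import numpy as np

# Hypothetical calibrator doses (pg/ml) and bound counts (cpm); counts fall
# as dose rises, reflecting the inverse antibody-binding relationship.
doses = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
counts = np.array([9500, 9000, 8000, 6000, 4200, 2600, 1200])

def dose_from_counts(sample_counts):
    """Read a sample's dose off the standard curve by interpolating
    log-dose against counts (np.interp needs ascending x, so reverse)."""
    log_dose = np.interp(sample_counts, counts[::-1], np.log(doses)[::-1])
    return float(np.exp(log_dose))

sample_dose = dose_from_counts(5000)  # falls between the 5 and 10 pg/ml points
```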
Measurement of the strength of maximum isometric knee extension (Fmax)
Fmax of the right and left leg was measured separately once in late FP (day 11) and once in late LP (day 25) in the 2nd control cycle and in each training cycle. Fmax was determined on a leg press machine (Medizinische Sequenzgeräte, Compass, Germany) using a combined force and load cell (GSV-2ASD, ME-Messsysteme GmbH, Hennigsdorf, Germany). Prior to testing, the subjects underwent a 10-min warm-up of aerobic, low-resistance ergometer cycling and were familiarized with the test and the testing position (knee angle: 90°, ankle angle: 90°) on the leg press. Each measurement was repeated three times with 30 seconds of rest between tests, and the best result was selected for data analysis. Two subjects were not able to perform the isometric strength tests during the training cycles for personal reasons. A reliability analysis was performed for the isometric measurement. The intraclass correlation coefficient was 0.998, which indicates that the system has a high internal consistency (reliability).
Determination of muscle diameter (Mdm)
Mdm of the rectus femoris, vastus intermedius and vastus lateralis muscles of the right and left leg was measured by real-time ultrasound imaging prior to and after training on day 25 in LP of the second control cycle and of the third training cycle, analysing the distances between the outer and inner muscle fasciae with a Vivid I CE 0344 ultrasound device (GE Medical System, Solingen, Germany) and a parallel scanner (8 L-RS, 4.0-13.3 MHz), which provides 10 cm penetration depth of the sound wave and enables imaging of deep-lying muscles. The linear measurement of the shortest distance of the muscle depth was used. Previous studies showed that muscle cross-sectional area can be reliably measured using real-time ultrasound imaging (Martinson and Stokes 1991). Subjects avoided long-lasting static muscular tension for at least 30 minutes prior to the measurement in order to avoid alterations in Mdm (Reimer et al. 2004). All subjects lay supine with stretched legs on an examination couch without any pad, cushion or pillow underneath. Ultrasound images were obtained half-way between the hip bone and the knee cap, and the transducer was placed gently on the thigh to avoid compression and distortion of the underlying tissue. The transducer was held at an angle of 90° to the muscle borders to ensure a clear image. The images were displayed and frozen on the screen and photographed to measure the muscle diameter. The positions of the transducer were recorded for each muscle to reproduce the exact position after the training intervention. The means of three measurements of the M. vastus medialis, M. vastus intermedius and M. vastus lateralis were taken at the same site for all subjects, and the sum of the three muscle diameters was calculated and used for data analysis (Mdm). A reliability analysis was performed for the Mdm determination. The obtained ICC was 0.997, indicating a high reliability of the ultrasound imaging of Mdm used in this study.
Histochemical analysis of muscle samples
Muscle samples (70 mg - 300 mg) were obtained from the vastus lateralis muscle of both the right and left leg by the percutaneous needle biopsy technique (Bergström 1962). Nine subjects volunteered to participate in muscle needle biopsies on day 27 of the second control cycle and of the third training cycle (about two days after the last training). The muscle samples were removed from the needle, oriented cross-sectionally, mounted in Tissue-Tek OCT (Sakura Finetek Europe B.V., Zoeterwoude, the Netherlands) embedding medium, frozen in isopentane cooled with liquid nitrogen, and stored at −80°C for subsequent histochemistry. Thin sections (10 μm) were cut in a cryostat at −20°C and mounted on cover glasses for staining. Histochemical analysis for the determination of muscle fibre types (types I and II) was performed with adenosine triphosphatase (ATPase) staining procedures using pre-incubation at pH 4.3 and 9.6 (Brooke and Kaiser 1970). Moreover, muscle cell nuclei were stained with hematoxylin and eosin for nuclei-to-fibre (N/F) ratio analysis (Yan 2000). Fibre type counting and measurements were performed on photographs. For muscle fibre composition, an average of 288 fibres from each sample was identified, and the percentage of each fibre type was calculated. For the muscle fibre diameter, an average of 62 fibres (range 20-119) from each fibre type (types I and II) was selected and measured using Cell-D life science documentation software (Olympus Life and Material Science Europe GmbH, Hamburg, Germany).
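The fibre-type percentages and mean diameters reported later reduce to simple counting arithmetic over the photographed fibres. A minimal sketch, with invented counts and diameters rather than the study's actual data:

```python
# Fibre-type composition and mean diameter from manual counts on biopsy
# photographs; the counts and diameters below are invented for illustration.

def fibre_composition(counts):
    """counts: dict mapping fibre type -> number of fibres counted."""
    total = sum(counts.values())
    return {ftype: 100 * n / total for ftype, n in counts.items()}

def mean_diameter(diameters_um):
    """Mean of individually measured fibre diameters (micrometres)."""
    return sum(diameters_um) / len(diameters_um)

counts = {"I": 115, "II": 173}                 # e.g. ~288 fibres per sample
print(fibre_composition(counts))               # roughly 40 % type I, 60 % type II
print(round(mean_diameter([52.0, 55.5, 60.1, 48.4]), 1))  # 54.0
```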
Statistical analysis
Data are presented as mean values with SD. Normality of distributions was checked with the Kolmogorov-Smirnov test. A two-tailed paired t-test was used to evaluate changes in training workload, F max, Mdm, fibre composition, fibre diameter and N/F ratio between values before (pre) and after the training intervention (post). ANOVA with repeated measures was used to determine main effects of time, cycle phase, and time by cycle phase interaction. In all cases, P values <0.05 (two-tailed) were taken to indicate statistical significance. The intra-class correlation coefficient of repeated measurements (ICC) was determined to evaluate the reliability of the determination of F max and Mdm (McGraw and Wong 1996, 2004). Effect size and statistical power for changes in F max, Fdm, and alterations in muscle morphological characteristics were analysed post hoc as a function of significance level α, the respective sample sizes, and the means and standard deviations of differences using the G*Power power analysis program (Faul et al. 2007; Faul et al. 2009; Cohen 1988).
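The paired t statistic and the paired-design effect size (Cohen's d_z, the quantity G*Power uses for "matched pairs" designs) can be sketched as follows; the pre/post numbers are invented, and this does not reproduce the authors' actual power calculation.

```python
import math

# Paired t statistic and Cohen's d_z for pre/post differences, as used for
# the pre- vs. post-training comparisons; the data below are invented.

def paired_t_and_dz(pre, post):
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))   # compare to t distribution, df = n - 1
    d_z = mean_d / sd_d                  # effect size for paired designs
    return t, d_z

pre = [10, 12, 11, 13, 9]
post = [12, 14, 13, 15, 12]
t, d_z = paired_t_and_dz(pre, post)
print(round(t, 1), round(d_z, 2))  # 11.0 4.92
```

Note that t = d_z * sqrt(n), so the reported effect sizes and the t-tests carry the same information scaled by sample size.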
Number of training sessions
The total number of single-leg training sessions was approximately 28 per leg and did not differ between FT and LT (FT: N = 28.6 ± 1.7; LT: N = 28.1 ± 1.9; P > 0.05).
Hormone concentrations
We did not find any significant differences in the serum concentrations of E2 and DHEA-s between day 11 and day 25 of the menstrual cycle prior to training, while P4 was significantly higher (effect size: 0.995, power: 0.98), and T and free T were significantly lower on day 25 compared to day 11 (effect sizes: 0.67 and 0.81; power: 0.76 and 0.88, respectively) (Table 2). After the strength training period, E2 was significantly higher in LP compared to FP (effect size: 0.56, power: 0.61), and P4 was higher in LP compared to LP prior to the training intervention (effect size: 0.67, power: 0.77), while DHEA-s remained unchanged. The differences in T and free T between the two days were no longer detectable after the training period. T declined significantly from pre- to post-training in FP (effect size: 0.64, power: 0.73), and free T tended to decline from pre- to post-training in FP (p = 0.066, effect size: 0.54, power: 0.56).
Maximum isometric muscle strength
F max of the one-leg knee extension muscles did not differ between the FT and LT legs prior to the training period. F max of the knee extension muscles increased significantly (P < 0.05) after both types of training periodization compared to the pre-training level (Figure 1). The absolute increase in F max was significantly smaller after LT (ΔLT: 188 ± 98 N) compared to FT (ΔFT: 267 ± 101 N) (P < 0.05, effect size: 0.87, power: 0.96). F max increased progressively during FT and LT compared to the mean of both measurements in the control cycle (Figure 2).
Muscle diameter
The sum of Mdm of the three muscles increased significantly (P < 0.05) after both types of training periodization compared to the pre-training level. Increase in Mdm was significantly higher after FT (ΔFT: 0.57 ± 0.54 cm) compared to LT (ΔLT: 0.39 ± 0.38 cm) (P < 0.05, effect size: 0.47, power: 0.52, Figure 3).
Muscle fibre characteristics
Fibre type distribution remained nearly the same after both kinds of strength training periodization, with about 40% type I fibres and 60% type II fibres (Table 3). The fibre diameter (Fdm) increased significantly after FT in type II fibres (P < 0.05, effect size: 0.94, power: 0.70) and tended to increase after LT in type II fibres (P = 0.095, effect size: 0.63, power: 0.38), but remained the same in type I fibres after both FT and LT. The N/F ratio increased significantly after FT (P < 0.05, effect size: 0.90, power: 0.66) and remained unchanged after LT.
Discussion
The most important finding of our study is a significantly higher (power: 0.96) increase in F max after FT compared to LT (Figure 1). This is in line with the main finding of Reis et al. (1995), who described a higher percent increase in F max after the second training cycle in the follicular phase-trained leg compared to the regularly trained leg (33% vs. 13% increase in F max).
The second important finding of our study is a significantly higher (power 0.52) increase in Mdm after FT compared to LT, which is in line with the higher increase in F max after FT. This higher increase in Mdm after FT may be explained by a higher ratio between protein synthesis and protein breakdown during or after each strength training session in FP compared to LP (Oosthuyse and Bosch 2010).
This study is the second to address the planning of strength training with respect to hormonal fluctuations during the MC and the first to include the analysis of muscle cell parameters. In contrast to the first study (Reis et al. 1995), we analysed the effects of a longer-lasting training period and clearly varied the strength training periodization between FP and LP, which strengthens the view that menstrual cycle-induced alterations in hormone concentrations are one probable cause of the differences in the magnitude of the strength increase. Plasma hormone concentrations, however, represent a balance among production, metabolism, utilization, clearance, and plasma volume. With our measurements we considered only one variable of a complex system, and only its momentary concentration. To minimize variation in this system we strictly standardized the conditions during blood sampling, although this remains a limitation of the study.
The more pronounced increase in muscle strength and muscle diameter after FT compared to LT could be explained by the higher concentrations of T and free T during FP compared to LP in the pre-training and probably also the first training period (Table 2), although the power of these effects is not very high, and we analysed hormone concentrations in the late rather than the early follicular phase, which weakens the interpretation of the hormone values. This is another limitation of the present investigation. Data in the literature on the variation of anabolic hormones throughout the menstrual cycle, however, are consistent with our most relevant findings. Since androgen secretion from the ovary is under luteinizing hormone (LH) control, it is not unexpected that ovarian androgen secretion varies through the cycle: blood levels of T have been described as lowest in the early follicular phase, rising to their highest levels just prior to or at the time of ovulation, and then gradually falling during the luteal phase (Alexander et al. 1990; Jabbour et al. 2006; Longcope 1986). Therefore, it seems probable that they account for the differences in strength, muscle diameter, and muscle cell characteristics between follicular phase-based and luteal phase-based strength training. Furthermore, the higher increases in F max and Mdm after FT might also be explained by menstrual cycle-dependent alterations of estradiol and progesterone. It has long been demonstrated that the ovarian hormones fluctuate during the menstrual cycle (Oosthuyse and Bosch 2010; Reilly 2000). E2 peaks prior to ovulation and during LP, while P4 reaches its highest values during LP after ovulation (Van Look and Baird 1980). The ovarian hormones are known to have a noticeable influence on protein metabolism at rest and during exercise.
It appears that progesterone is responsible for the consistent finding of increased protein catabolism in LP, while estrogen might reduce protein catabolism (Oosthuyse and Bosch 2010).
The similar concentrations of E2 around day 11 and day 25 prior to the training period in our study are probably due to the fact that day 11 represents a phase very close to ovulation, when E2 is already elevated compared to the early and middle FP (Van Look and Baird 1980). A time point during the early follicular phase (around day 4 to 6) of the menstrual cycle would probably have yielded clearly lower concentrations in FP compared to LP for this hormone. This is a limitation of the study. The higher value of E2 in LP compared to FP, the increase in P4 in LP, and the decline in T and free T in FP after the strength training intervention may be due to exercise- and training-induced changes in menstrual cycle physiology, including alterations in the feedback regulation of steroid hormones. Recently, serum estradiol and progesterone have been shown to increase in the mid-luteal phase, and testosterone has been shown to decrease in the early follicular and in the mid-luteal phase after a single bout of resistance exercise in healthy young women, indicating that the responses of steroid hormones to acute resistance exercise differ between hormones and vary between menstrual cycle phases in young women (Nakamura et al. 2011). The authors concluded that the menstrual cycle state may influence the exercise training-induced skeletal muscular adaptation, and that training programs for eumenorrheic women could be timed in accordance with the menstrual cycle in order to maximize anabolic effects.
We found an increase in serum estradiol and P4 in LP after 3 months of strength training, suggesting that the acute strength training-induced increase in the luteal phase described by Nakamura et al. (2011) might chronically lead to an increase in the basal serum concentrations of estradiol and progesterone in this cycle phase. Further, we found a decline in serum testosterone in FP after 3 months of strength training, suggesting that the acute strength training-induced decline described by Nakamura et al. (2011) might chronically lead to a reduction in the basal serum testosterone concentration in this phase.
A remarkable finding of the muscle biopsy analysis was the significant increase in the type II fibre diameter after FT compared to only a tendency for an increase after LT. Resistance training has been shown to increase the volumes of myofibrils, of the interfibrillar space, of mitochondria and of lipid droplets in females (Wang et al. 1993). An increase in the number and/or size of myofibrils requires an increase in specific protein biosynthesis, which is dependent on anabolic agents such as testosterone and estrogens. Therefore, the slightly higher increase in the cell diameter of type II fibres after FT compared to LT in our study is again in line with the higher increase in muscle strength and muscle diameter after FT compared to LT, and with the well-known menstrual cycle-dependent alterations in anabolic hormones.
Interestingly, the N/F ratio increased after FT but remained unaffected after LT. Enlargement of muscle fibres is accompanied by an increase in the myonuclear number. Existing myonuclei are able to support a certain level of fibre hypertrophy. However, when the transcriptional activity of existing myonuclei reaches its maximum, an increase in the number of myonuclei is thought to become involved in the enhancement of protein synthesis (Kadi 2008). A substantial increase in the size of myofibres requires the availability of satellite cells that can provide additional myonuclei to support hypertrophy (Adams 2006). A variety of alterations in the surrounding environment of the satellite cell, including mechanical and growth factors as well as hormonal signalling including testosterone, could regulate the activation and proliferation of satellite cells (Kadi 2008). Furthermore, sex-mediated differences in muscle-fibre regeneration and satellite-cell numbers may be directly attributed to estrogenic influence, and estrogen may exert its influence on post-exercise muscle satellite-cell populations through events upstream of satellite-cell activation (Enns and Tiidus 2010). Taken together, although we interpret these data with caution to avoid over-interpretation, our results underpin a possible role of hormonal alterations, both of testosterone and of estrogens, throughout the menstrual cycle in the process of satellite-cell incorporation-induced muscle hypertrophy.
Conclusions
In conclusion, this study demonstrated that follicular phase-based strength training induced a greater effect on muscle strength, muscle and type II fibre diameters, and the nuclei-to-fibre ratio compared to luteal phase-based strength training in untrained and moderately trained women. We recommend that eumenorrheic females not using oral contraception base the periodization of strength training on their individual menstrual cycle.
"year": 2014,
"sha1": "7e2f0945df5026ab36d31b867c5befe5d0105cd2",
"oa_license": "CCBY",
"oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/2193-1801-3-668",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e2f0945df5026ab36d31b867c5befe5d0105cd2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Anaemia in HIV infected HAART naïve and HAART exposed children
Background: The 2016 UNAIDS report estimates about 2.1 million people living with HIV in India, of whom about 7 per cent are children under the age of 15 years. The primary objective of this study was to analyze the prevalence of anaemia in HIV-infected HAART-naïve and HAART-exposed children. The secondary objectives were to analyze the type of anaemia and the correlation of anaemia with dietary habits and associated opportunistic infections. The present study was a cross-sectional, observational study carried out in the pediatric ART OPD and ward of a tertiary care teaching hospital from June 2011 to May 2013. Methods: Complete haemogram, peripheral smear and CD4 counts were done on 130 children with a confirmed diagnosis of HIV infection. CDC staging was used to stratify children. We used the chi-square test to determine the association of CDC staging, HAART therapy, diet and opportunistic infection with anaemia. A p-value <0.05 was taken as significant. Results: 80% (n = 27) of the children with no immunosuppression, 86% (n = 58) of the children with moderate immunosuppression and 84% (n = 24) of the children with severe immunosuppression were anemic. There was no statistically significant relation between worsening immunosuppression and the prevalence of anaemia (p = 0.715). 88% (n = 72) of the children with no opportunistic infection were anemic, while 76% (n = 34) of the children with opportunistic infection were anemic. This difference was statistically significant (p = 0.016). 88% (n = 53) of the children on HAART were anemic, while 74% (n = 51) of the children not on HAART were anemic. Children on HAART did not have a significantly higher prevalence of anaemia when compared to children not on HAART (p = 0.99). Anaemia was significantly more common in children consuming a vegetarian diet (88%, n = 46) compared to children consuming a mixed diet (74%, n = 58, p <0.01).
Conclusions: The prevalence of anaemia is similar in children on HAART compared to HAART-naïve children and at all stages of immunosuppression. Anaemia was more common in the presence of opportunistic infections and in children consuming a vegetarian diet. Microcytic hypochromic anaemia was the most common type of anaemia, followed by normocytic normochromic anaemia.
affected populations in India. Two such major comorbidities include anaemia and poor nutrition, whose detrimental effects are magnified in the context of HIV infection. The high prevalence of anaemia in children of all age groups continues to be a major health problem in most parts of India. Anaemia is common in children with HIV infection, yet there is a lack of literature about anaemia in pediatric patients with HIV infection in India. The causes of HIV-related anaemia are multifactorial and include direct and indirect effects of HIV infection. 3 A few studies have shown a correlation between immunosuppression, HAART drugs and anaemia. However, there is a lack of studies focused on the correlation of anaemia in HIV-infected children with their dietary habits and opportunistic infections. Understanding nutritional co-morbidities will be beneficial in planning appropriate intervention strategies to reduce the overall burden of pediatric HIV in India.
The aim of this study was to determine the prevalence of anaemia in HIV-infected HAART-naïve and HAART-exposed children. We also aimed to study the correlation of anaemia with dietary habits, degree of immunosuppression and presence of opportunistic infections.
METHODS
One hundred thirty consecutive children from 6 months to 12 years of age with a confirmed diagnosis of HIV infection were enrolled from the pediatric ART OPD and ward over a two-year period.
Written informed consent was obtained from all parents/caregivers. Patients were excluded if the parents/caregivers refused to take part in the study or if the child had any known hematological abnormality, such as thalassemia, sickle cell anaemia, recent bleeding or a bleeding disorder. Permission for conducting this study was obtained from the Institutional Ethics Committee.
Detailed history, general and systemic examination was carried out. Patients were categorized as per CDC guidelines. Relevant investigations (e.g. sputum examination and chest X-ray for tuberculosis, endoscopy for esophageal candidiasis) were carried out where necessary. A detailed dietary history was taken (24-hour recall) and specific enquiry was made as to whether the child consumed non-vegetarian food. Children who consumed non-vegetarian food (chicken, mutton, beef, eggs) at least once a week were classified as taking a mixed diet, and the others were labeled vegetarian. The presence or absence of pallor was noted on general examination.
Blood samples were collected using universal precautions and tested immediately after collection. The investigations included a complete hemogram and RBC indices using an automated analyser (PCE 120, ERMA Inc.) as well as peripheral blood smear examination. Absolute CD4 cell count analysis was carried out by flow cytometry (FACSCalibur machine; manufacturer listed as Biological Diagnostics).
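The morphological classification of anaemia used later (microcytic hypochromic vs. normocytic normochromic) follows from the RBC indices in the hemogram. A minimal sketch, noting that the MCV and MCHC cut-offs below are generic illustrative thresholds and not the age-specific pediatric reference ranges the study actually applied:

```python
# Rough morphological classification from RBC indices. The cut-offs
# (MCV < 80 fL = microcytic, MCHC < 32 g/dL = hypochromic) are generic
# adult-style values used only for illustration; the study used
# age-specific pediatric reference ranges.

def rbc_morphology(mcv_fl, mchc_g_dl, mcv_low=80.0, mchc_low=32.0):
    size = "microcytic" if mcv_fl < mcv_low else "normocytic"
    chromia = "hypochromic" if mchc_g_dl < mchc_low else "normochromic"
    return f"{size} {chromia}"

print(rbc_morphology(70, 28))  # microcytic hypochromic
print(rbc_morphology(85, 34))  # normocytic normochromic
```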
Age-related normal hematological values (reference ranges) were obtained from the Nelson Textbook of Pediatrics. Anaemia was defined as haemoglobin values below the reference ranges. The CD4 percentage was calculated from the absolute CD4 count and the total lymphocyte count. Values were categorized according to CDC guidelines as no immunosuppression, moderate immunosuppression or severe immunosuppression.
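The CD4-percentage calculation and the three-way categorization can be sketched as follows; the cut-offs (≥25% none, 15-24% moderate, <15% severe) follow the 1994 CDC pediatric immunological categories and are an assumption about the exact thresholds applied in the study.

```python
# CD4 percentage and CDC-style immunological categories. The thresholds
# below follow the 1994 CDC pediatric classification and are an assumption
# about the exact cut-offs the study used.

def cd4_percentage(absolute_cd4, total_lymphocytes):
    return 100.0 * absolute_cd4 / total_lymphocytes

def cdc_category(cd4_pct):
    if cd4_pct >= 25:
        return "no immunosuppression"
    if cd4_pct >= 15:
        return "moderate immunosuppression"
    return "severe immunosuppression"

pct = cd4_percentage(600, 2000)   # hypothetical counts -> 30.0 %
print(pct, cdc_category(pct))     # 30.0 no immunosuppression
print(cdc_category(18), cdc_category(9))
```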
Data were analyzed using Microsoft Office Excel 2007, SPSS version 20 and STATA. Presence/absence of anaemia, opportunistic infections, HAART status and dietary habits (vegetarian/mixed) were analyzed in the study group as a whole and also in the different stages of the disease. The chi-square test was used to determine the association between CDC staging and abnormal hemoglobin. A p-value <0.05 was taken as significant.
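The chi-square test used throughout these comparisons reduces, for a 2x2 table, to summing (observed − expected)²/expected over the four cells. A minimal sketch with an invented cross-tabulation, not the study's actual counts:

```python
# Pearson chi-square for a contingency table (e.g. anaemia vs. a binary
# exposure such as opportunistic infection); the counts are invented.

def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    chi2 = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / total  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2  # compare to a chi-square distribution; df = 1 for a 2x2 table

table = [[30, 10],   # exposed:   anaemic, not anaemic
         [20, 40]]   # unexposed: anaemic, not anaemic
print(round(chi_square(table), 2))  # 16.67
```

In practice the same statistic is available as `scipy.stats.chi2_contingency`, which also returns the p-value directly.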
RESULTS
Children in the study group were residents of Pune or the surrounding districts of Maharashtra. The mean age of the study population was 7.35 years; 61% (n = 79) were males and 39% (n = 51) were females. Most of the study population belonged to lower socioeconomic status, and all of them had acquired the infection by vertical transmission from the mother. Out of 130 children, 34 had no immunosuppression, whereas 68 and 28 had moderate and severe immunosuppression, respectively, as per CDC staging. Fifty-three percent (n = 69) of the children in the study population were not on ART and 47% (n = 61) were on ART. Anaemia in the different stages of immunosuppression in the study population is shown in Table 1.
Eighty-three percent of the study population was anaemic. The mean hemoglobin concentration was 8.38±1.90 gm%, with maximum and minimum values of 13.1 gm% and 4.8 gm%, respectively. The commonest type of anaemia found in this study group was microcytic hypochromic anaemia, seen in 52 (40%) cases, followed by normocytic normochromic anaemia, seen in 46 (35%) cases.
In this study, 80% (n = 27) of the children with no immunosuppression were anemic, 86% (n = 58) of the children with moderate immunosuppression were anemic and 84% (n = 24) children with severe immunosuppression were anemic. There was no statistically significant relation between worsening immunosuppression and prevalence of anaemia (p = 0.715).
Microcytic hypochromic anaemia was found almost equally in all stages: in 39% (n = 13) of children with no immunosuppression, 40% (n = 27) with moderate and 42% (n = 12) with severe immunosuppression. A normocytic normochromic blood picture was most commonly found in severe immunosuppression (n = 16, 57%) compared to children with no immunosuppression (n = 8, 24%) and children with moderate immunosuppression (n = 22, 32%). Thus, normocytic normochromic anaemia became more common as the disease progressed; this difference was statistically significant (p<0.01). Once anaemia was diagnosed, these children were shifted to non-AZT-based therapy. Anaemia was significantly more common in children consuming a vegetarian diet (88%, n = 46) compared to children consuming a mixed diet (74%, n = 58, p<0.01). This may be due to the lower bioavailability and absorption of iron from vegetarian food sources compared to non-vegetarian food sources.
DISCUSSION
The present study reports a high prevalence of anaemia in HIV-infected children. Significantly, it shows that anaemia is more common in HIV-infected children consuming a vegetarian diet compared to children on a mixed diet, and in children with opportunistic infections compared to those without.
Anaemia may occur as a result of HIV infection itself, as a sequela of HIV-related opportunistic infections or malignancies, or as a consequence of therapies used for HIV infection and associated conditions. 4 In the present study, anaemia was found to be common in children with HIV/AIDS: 109 out of 130 children (83%) were anemic. The prevalence of anaemia in the study by Shet et al was 66% and in that of Adetifa et al was 77.9%. 5,6 Microcytic hypochromic anaemia was the commonest type (40%) of hematological abnormality in the study, and its frequency remained almost the same irrespective of the degree of immunosuppression. Normocytic normochromic anaemia was seen in 35% of our patients and occurred more frequently as immunosuppression increased. This may be the result of anaemia of chronic disease, probably due to a direct effect of HIV/AIDS. Erhabor et al in their studies also found that the prevalence of normocytic normochromic anaemia was higher in children with advanced and severe immune suppression than in those with mild or no immune suppression. 7 Microcytic hypochromic anaemia, on the other hand, was probably due to iron deficiency and not related to HIV status, as iron deficiency is very common in India. 8 Studies have clearly demonstrated that anaemia is associated with decreased survival and increased disease progression in adults with HIV infection. 9,10 A study of serum immunoreactive erythropoietin in HIV-infected patients in various stages of illness showed that levels of the hormone failed to rise commensurate with increasing anaemia, suggesting that insufficient amounts of erythropoietin may be one cause of anaemia in this setting. 11 Other studies have suggested that soluble factors in the serum of HIV-infected patients may inhibit hematopoiesis, or that direct HIV infection of marrow progenitor cells may play a role in producing anaemia and other hematological abnormalities associated with HIV infection.
12 Zidovudine (AZT) therapy can cause macrocytic anaemia in children on zidovudine-based ART. 13 Dapsone, used for treatment or prevention of Pneumocystis carinii pneumonia (PCP), may cause hemolytic anaemia or generalized myelosuppression. 14 None of the children in our study received dapsone. Infection with Mycobacterium avium complex (MAC) and parvovirus B19 are other common causes of anaemia in advanced HIV disease. 15,16 Anti-erythrocyte antibodies produce a positive direct antiglobulin test in approximately 20% of HIV-infected patients with hypergammaglobulinemia. 17 Opportunistic infections were found to be significantly associated with anaemia in our series. Recent studies in adults have found a higher prevalence of anaemia in adults with TB coinfection. 18 The strengths of the present study include the number of children studied and the assessment of the children's diet. The limitations include the lack of investigations into the exact etiology of anaemia (serum ferritin, B12 levels etc.) due to lack of resources.
CONCLUSION
Anaemia is very common in HIV-infected children. Children on HAART had a similar prevalence of anaemia when compared to children not on HAART. Anaemia was significantly more common in children consuming a vegetarian diet compared to children consuming a mixed diet, and in children who had opportunistic infections. Microcytic hypochromic anaemia was the commonest type, followed by the normocytic normochromic type. The frequency of normocytic normochromic anaemia increased in the advanced stage of the disease.
"year": 2018,
"sha1": "1a44edd232ca1d92eda425a461dc11e89c75be45",
"oa_license": null,
"oa_url": "https://www.ijpediatrics.com/index.php/ijcp/article/download/1865/1370",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bc785c883162c72d0d190edd023e12b86032ca9a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Alveolar Socket Preservation with Different Autologous Graft Materials: Preliminary Results of a Multicenter Pilot Study in Human
Background: The histological and histomorphometrical results of vital whole versus non-vital endodontically treated teeth used as autologous grafts in post-extractive socket preservation procedures were compared. Methods: Twenty-eight patients (average age 51.79 ± 5.97 years) with post-extractive defects were enrolled in five dentistry centers. All patients were divided into two groups: with whole teeth (Group 1) and with teeth that had received endodontic root canal therapy (Group 2). The extracted teeth were processed with the Tooth Transformer device to obtain a demineralized and granulated graft material used with a resorbable collagen membrane for socket preservation. After four months, 32 bone biopsies were obtained for histological, histomorphometric and statistical analysis. Results: During the bone healing period, no signs of infection were observed. Nineteen biopsies were obtained in group 1 and 13 biopsies in group 2. The histological analysis showed neither inflammatory nor infective reactions in either group. Autologous grafts surrounded by new bone were observed in all samples and, at high magnification, partially resorbed dentin and enamel structures were detected. No gutta-percha or cement was identified. Small, non-statistically significant differences between the groups in total bone volume (BV), autologous graft residual and vital bone percentage were detected. Conclusions: The study showed that the TT Transformer grafts were capable of producing new vital bone in socket preservation procedures. The histomorphometric results showed no statistical differences between whole and endodontically treated teeth in bone regeneration. Further studies will be carried out in order to understand the advantages of the autologous graft materials obtained from the tooth compared with current biomaterials in bone regeneration treatments.
Introduction
After tooth loss, natural bone remodeling takes place, with volumetric alveolar bone reduction (1.67-2.03 mm vertically and 3.87 mm horizontally) [1]; hard and soft tissue remodeling has been found to be greatest during the first year [2,3]. To prevent volumetric bone loss, various surgical techniques have been suggested, with or without the use of graft materials and resorbable or non-resorbable membranes [4,5].
Fresh or demineralized freeze-dried human, animal (xenograft), and artificial (allograft) materials, used alone or combined, were studied in the literature in order to reduce the volumetric contraction of hard tissues (average bone loss of 0.36 mm horizontally and 0.58 mm vertically) [4,6].
For a long time, autogenous bone was considered the "gold standard" for its osteogenic, osteoconductive and osteoinductive properties; however, it may present some problems, such as donor site morbidity and additional surgery, limited availability and, in some cases, a high resorption rate [7]. For these reasons, in the last 10 years, great effort has been devoted to developing a large number of biomaterials, with rapid or slow resorption, used as scaffolds with osteoconductive properties [8,9].
Many studies have confirmed a similar composition of hydroxyapatite in the inorganic component and of type 1 collagen, as well as other proteins, in the organic component of bone, dentin and enamel, although in different percentages [10][11][12]. In 1967, Bang et al. showed the osteoinductive potential of demineralized dentin matrix [13,14] and, in 1991, Bessho et al., in an animal model, detected the presence of bone morphogenetic proteins (BMPs) in a human dentin matrix after a demineralization process [15]. In 2017, Rijal theorized that the dentin demineralization process of autologous extracted teeth allows better bone augmentation through the increased availability of BMPs [16], and Minamizato (2018) showed the efficacy of a chairside-prepared autologous partially demineralized dentin matrix for clinical bone regeneration procedures in humans [17,18].
The aim of the present study was to evaluate the histological behavior of two different autologous graft materials, from healthy whole versus endodontically treated teeth, used in humans as demineralized graft materials produced by the innovative TT Transformer medical device in a clinical alveolar socket preservation procedure.
Materials and Methods
Between February 2018 and October 2018, 28 patients (average age 51.79 ± 5.97 years) with 34 post-extractive defects and in good general health were enrolled in five dental centers in Italy. All patients signed a written informed consent form before being included in the study, and they were assigned to two different groups: Group 1 (G1) with healthy whole teeth and Group 2 (G2) with extracted endodontically treated teeth. After tooth extraction, all teeth were cleaned, separated and automatically demineralized with the TT Transformer medical device.
All patients received the same alveolar socket preservation procedure, using the autologous tooth as grafting material covered with a resorbable porcine pericardium collagen membrane (Bego oss®).
Inclusion Criteria
The study included patients over 18 years of age, in good health (ASA-1 and ASA-2), who required tooth extraction and were able to undergo dental surgical and restorative procedures. Extractions were indicated for trauma, caries, or periodontal disease. Alveolar socket preservation was requested in order to maintain bone volume for dental implant rehabilitation after tooth extraction.
Exclusion Criteria
Pregnant subjects and patients with a history of allergies, tobacco use (within the last six months), diabetes, cancer, human immunodeficiency virus (HIV) infection, bone or metabolic diseases, use of immunosuppressive agents, systemic corticosteroids, or intramuscular/intravenous bisphosphonates, or ongoing radiotherapy or chemotherapy were excluded.
Preoperative Work-Up
Clinical and radiographic examinations were performed with cone-beam computed tomography (CBCT; Planmeca ProMax 3DS, Helsinki, Finland), periapical X-rays, or panoramic X-rays. Two weeks before the oral surgery, all patients received a professional oral hygiene session and were prescribed 0.2% chlorhexidine mouth rinses twice a day for two weeks.
Surgical Procedures and Follow-Up
Antibiotic prophylaxis (2 g of amoxicillin/clavulanic acid in a single dose 2 h before extraction) was administered. The dimensions and morphology of the post-extraction socket were recorded by direct measurement. All extracted teeth were first cleaned using a diamond drill with abundant irrigation. The G1 teeth (healthy, whole) were then cut into 5-mm-long samples.
In the G2 teeth (endodontically treated), the filling materials (gutta-percha, composite, etc.) were first carefully removed, and the teeth were then cut to the same dimensions. All materials were placed in the TT grinder device (TT Tooth Transformer srl, Milan, Italy) for an automatic, single-use demineralization procedure lasting 25 min.
Next, the bone defects were filled with the particulate demineralized dentin and enamel graft and covered with resorbable membranes (Bego oss®, Bremen, Germany). A second surgical stage was scheduled for implant fixture placement at four months. Before implant insertion, a bone biopsy was taken using cylindrical trephine drills graduated to indicate depth (from 5 to 18 mm), with abundant sterile saline irrigation.
Collection and Statistical Analysis of Data
Histological and morphometric data were collected in accordance with the protocol registered at the University of Chieti (ethical committee approval: request ID richhtnc4, protocol N° 1869, 12/12/2018; approved 21.03.2019, verb. 17, St. 638; PI: Perfetti). Statistical analysis was carried out to obtain average values and to compare the behavior of the G1/G2 groups. Outcome measures of this exploratory study were analyzed with a paired-samples t-test for pre-post differences, with time as the factor, using the Statistical Package for Social Sciences (SPSS for Windows, Version 11.5, Chicago, IL, USA) to detect significant differences between pre-test and post-test scores.
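As a minimal sketch of the analysis described above, the paired-samples t statistic can be computed in pure Python; the score values used here are hypothetical and are not taken from the study:

```python
from math import sqrt

def paired_t_statistic(pre, post):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-subject post-minus-pre differences."""
    assert len(pre) == len(post) and len(pre) > 1
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of d
    return mean_d / sqrt(var_d / n), n - 1  # (statistic, degrees of freedom)

# Hypothetical pre/post measurements for four subjects (illustrative only)
pre = [10.0, 12.0, 9.0, 11.0]
post = [12.0, 13.0, 10.0, 13.0]
t, df = paired_t_statistic(pre, post)
```

In practice the same test is available as `scipy.stats.ttest_rel`, which also returns the p value; the hand-rolled version above only shows what is being computed.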
Histological Technique
The samples were dehydrated in a graded series of alcohol solutions and then embedded in methacrylic resin. Non-decalcified sections were then obtained using a disc abrasion system (LS2, Remet, Bologna, Italy) and a high-speed diamond disc cutting system (Micromet, Remet, Bologna, Italy), yielding slides approximately 200 µm thick. The samples were ground with fine abrasive paper to progressively reduce their thickness to about 40-50 µm, stained with basic fuchsin/toluidine blue, and observed under light/polarized-light microscopy. For histomorphometric measurements, the histological images obtained with the transmitted-light microscope were digitized with a digital camera and analyzed using the IAS 2000 image analysis software; for each sample, the percentage of vital bone (VB%), the percentage of remaining graft (Graft%), and the percentage of residual bone volume (BV%) were measured.
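The three histomorphometric outcomes are simple area fractions of the digitized section. The sketch below illustrates the calculation; treating BV% as vital bone plus residual graft (mineralized tissue) is an assumption for illustration, not a detail stated in the text, and the area arguments are arbitrary:

```python
def histomorphometry(vital_bone_area, graft_area, total_area):
    """Return (VB%, Graft%, BV%) as percentages of the total tissue area
    measured on the section.  BV% is taken here as mineralized tissue =
    vital bone + residual graft (an assumption for illustration)."""
    vb_pct = 100.0 * vital_bone_area / total_area
    graft_pct = 100.0 * graft_area / total_area
    return vb_pct, graft_pct, vb_pct + graft_pct
```

For example, a section with 30 area units of vital bone and 20 of residual graft out of 100 total would give VB% = 30, Graft% = 20.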
Results
Twenty-eight subjects (10 men and 18 women; mean age 51.79 ± 5.97 years) were enrolled. Thirty-four teeth were extracted and used for alveolar socket preservation: 20 teeth in G1 and 14 in G2.
No complications occurred after surgery, and 32 biopsies (19 in G1 and 13 in G2) were performed during second-stage surgery after four months of healing. Histological analysis showed that the partially resorbed dentin and enamel graft materials were surrounded by and incorporated into layers of newly formed bone (Tables 1-4).
Table 4. Etiology for each extraction.
Discussion
Knowledge of the physio-pathological processes that follow tooth extraction indicates constant three-dimensional bone resorption in height and thickness, which is greater in the buccal than in the lingual-palatal region [19,20].
To date, the best results have been obtained with autologous bone, owing to its osteoconductive and osteoinductive characteristics [3,4]. However, these procedures require a second surgical site for bone harvesting, or a double surgical treatment, with increased patient discomfort [3,4]. To limit this discomfort, several biomaterials with slow or rapid resorption have been proposed; however, all of these materials show only osteoconductive capability [7,8].
On the basis of these considerations, and of several studies on dental tissue embryology, the first aim of research should be to verify whether the extracted tooth, currently considered waste material [21][22][23], can be used as a graft material in alveolar socket preservation procedures [24]. Furthermore, the behavior of endodontically treated teeth in bone regeneration procedures should be determined, in particular whether a tooth properly cleaned after endodontic treatment can be used in these surgical procedures [25,26].
The results of the study confirmed the high biocompatibility of demineralized dental tissue used in socket preservation procedures. No signs of inflammation or clinical failure were seen in any surgical procedure, no surgical site showed impaired healing, and no residual resorbable membranes were found at re-entry. No clinical or histological sign of inflammation or necrosis was detected in any site or sample analyzed, and no gutta-percha, composite, or cement filling material was found in any histological specimen. All histological results for the demineralized autologous tooth material showed a high proportion of vital bone around the grafts, capable of preventing volumetric bone loss in post-extraction alveolar sockets.
After alveolar socket preservation using the autologous demineralized dentin/enamel graft material, histomorphometric analysis showed higher total bone volume and vital bone percentages in Group 2 than in Group 1, whereas a higher residual graft value was detected in Group 1; however, none of these differences was statistically significant. Furthermore, the extracted tooth is completely autogenous, with a dentin structure and composition very similar to bone. After TT Transformer treatment, dentin and enamel showed features comparable to those of the heterologous or synthetic bone substitutes on the market, while requiring no additional costs or surgical procedures, and the graft was well accepted without further discomfort for the patient.
Conclusions
Further studies will be needed to establish the real impact of this innovative technology on dental and maxillofacial hard-tissue regeneration therapy. However, the very promising results of our study show a high percentage of new vital bone around the residual graft material, suggesting that the autogenous demineralized tooth graft obtained with the TT Transformer medical device can be considered a feasible alternative to the biomaterials currently used in human alveolar socket preservation procedures to promote bone healing in intraoral defects.
"year": 2020,
"sha1": "b92b2443956765c40e4b8de94072452d4b4296f4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/13/5/1153/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cb1b16d5f492381a6944c564c83b35eb4edc8d4f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Note on the Asymptotic Limit of the Four Simplex
Recently, the asymptotic limit of the Barrett-Crane models has been studied by Barrett and Steele. Here, by a direct study, I show that we can extract bivectors which satisfy the essential Barrett-Crane constraints from the asymptotic limit. Because of this, the Schläfli identity is implied by the asymptotic limit, rather than having to be imposed as a constraint.
The asymptotic limit [7], [6], [4] of the Barrett-Crane models [2], [3] has recently been studied systematically in [5]. Here, by a direct study, I show that we can extract bivectors which satisfy the essential Barrett-Crane constraints from the asymptotic limit. Because of this, the Schläfli identity is implied by the asymptotic limit, rather than having to be imposed as a constraint as in Ref. [2]. Here I focus on the Riemannian Barrett-Crane model [2] only, but the analysis can be generalized to the Lorentzian Barrett-Crane model [3].
Consider the amplitude of a four-simplex [2] with a real scale parameter λ, where θ_ij is defined by n_i · n_j = cos(θ_ij); here θ_ij is the angle between n_i and n_j. The asymptotic limit of Z_λ(s) as λ → ∞ is controlled by an action in which the q_i are Lagrange multipliers imposing n_i · n_i = 1, ∀i. My goal now is to find the stationary points of this action. The stationary values under variation of the n_j are determined by equation (1a) together with n_j · n_j = 1, ∀j, where j is held constant in the summation.
where the vector index has been suppressed on both sides.
Using equation (2) in equation (1a) and taking the wedge product of the resulting equation with n_j, the last equation can be simplified to define the bivectors E_ij. We now consider the properties of the E_ij: • Each i represents a tetrahedron. There are ten E_ij, each associated with one triangle of the four-simplex.
• The square of E_ij: E_ij · E_ij = J_ij². • The wedge product of any two E_ij vanishes if they are equal to each other or if their corresponding triangles belong to the same tetrahedron.
• The sum of all the E_ij belonging to the same tetrahedron is zero, according to equation (3).
It is clear that these properties contain the first four Barrett-Crane constraints [2]. So we have successfully extracted the bivectors corresponding to the triangles of a general flat four-simplex in Riemannian general relativity, with the n_i being the normal vectors of the tetrahedra. The J_ij are the areas of the triangles, as one would expect. Since we did not impose any of the non-degeneracy Barrett-Crane conditions [2], it is not guaranteed that the tetrahedra or the four-simplex have non-zero volumes.
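The bulleted properties can be collected compactly. The explicit expression for E_ij below is an assumption, reconstructed by analogy with the E_ijs defined later for the full triangulation, rather than a formula reproduced from the original:

```latex
E_{ij} \;=\; J_{ij}\,\frac{n_i \wedge n_j}{\sin\theta_{ij}},
\qquad n_i \cdot n_j = \cos\theta_{ij}, \quad n_i \cdot n_i = 1,
\]
\[
E_{ij}\cdot E_{ij} = J_{ij}^{\,2},
\qquad
E_{ij}\wedge E_{ij} = 0,
\qquad
\sum_{j\neq i} E_{ij} = 0 \quad \text{(closure for each tetrahedron } i\text{)}.
```

The normalization follows because |n_i ∧ n_j|² = 1 − cos²θ_ij = sin²θ_ij for unit normals, so the square of E_ij reduces to the squared triangle area J_ij².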
The asymptotic limit of the partition function of the entire simplicial manifold with triangulation Δ is as above, where I have assumed that the variable s represents the four-simplices of Δ and i, j represent the tetrahedra. The ε_ijs can be interpreted as the orientations of the triangles, and each triangle has a corresponding J_ij. The n_is denote the vector associated with the side of the tetrahedron i facing the inside of a simplex s. There is then one bivector E_ijs associated with each side, facing the inside of a simplex s, of a triangle ij, defined by E_ijs = ε_ijs J_ij (n_is ∧ n_js) / sin(ζ_ijs).
If the n_is are chosen such that they satisfy the stationary conditions, this limit can be considered to describe the Regge calculus [1] for Riemannian general relativity. The angles θ_ij are the deficit angles associated with the triangles, and the n_is are the normal vectors associated with the tetrahedra.
"year": 2005,
"sha1": "9f98248aad4394847e5d52e6fa0a2177e655f812",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9f98248aad4394847e5d52e6fa0a2177e655f812",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Proteomic Analysis Reveals a Novel Mutator S (MutS) Partner Involved in Mismatch Repair Pathway
The mismatch repair (MMR) family is a highly conserved group of proteins that function in correcting base–base and insertion–deletion mismatches generated during DNA replication. Disruption of this process results in characteristic microsatellite instability (MSI), repair defects, and susceptibility to cancer. However, a significant fraction of MSI-positive cancers express MMR genes at normal levels and do not carry detectable mutation in known MMR genes, suggesting that additional factors and/or mechanisms may exist to explain these MSI phenotypes in patients. To systematically investigate the MMR pathway, we conducted a proteomic analysis and identified MMR-associated protein complexes using tandem-affinity purification coupled with mass spectrometry (TAP-MS) method. The mass spectrometry data have been deposited to the ProteomeXchange with identifier PXD003014 and DOI 10.6019/PXD003014. We identified 230 high-confidence candidate interaction proteins (HCIPs). We subsequently focused on MSH2, an essential component of the MMR pathway and uncovered a novel MSH2-binding partner, WDHD1. We further demonstrated that WDHD1 forms a stable complex with MSH2 and MSH3 or MSH6, i.e. the MutS complexes. The specific MSH2/WDHD1 interaction is mediated by the second lever domain of MSH2 and Ala1123 site of WDHD1. Moreover, we showed that, just like MSH2-deficient cells, depletion of WDHD1 also led to 6-thioguanine (6-TG) resistance, indicating that WDHD1 likely contributes to the MMR pathway. Taken together, our study uncovers new components involved in the MMR pathway, which provides candidate genes that may be responsible for the development of MSI-positive cancers.
nuclease activity plays a critical role in 3′-5′ excision involving EXO1. EXO1 then excises nascent DNA from the nick toward and beyond the mismatch to generate a single-strand gap, which is filled by DNA polymerase δ (lagging strand) or ε (leading strand) using the parental DNA strand as a template. Finally, the nick is sealed by DNA ligase I (19,20). In addition, two MutS homologues, MSH4 and MSH5, share similar structural and sequence features with the other members of the MutS family. Recent evidence suggests that they function beyond MMR and are involved in processes such as recombination repair, DNA damage signaling, and immunoglobulin class switch recombination (21,22).
It has been well documented that impairment of MMR genes, especially MSH2 and MLH1, causes susceptibility to certain types of cancer, including hereditary nonpolyposis colorectal cancer. At the cellular level, deficient MMR results in a strong mutator phenotype known as microsatellite instability (MSI), which is a hallmark of MMR deficiency (3)(4)(5). However, a significant fraction of MSI-positive colorectal cancers express MMR genes at normal levels and do not carry detectable mutations or hypermethylation in known MMR genes (23). Similarly, certain non-colorectal cancer cells with MSI also appear to have normal expression of known MMR proteins (24,25). These observations suggest that additional factors and/or mechanisms may exist to explain these MSI phenotypes in patients.
To address this question, we performed tandem affinity purification coupled with mass spectrometry analysis (TAP-MS) to uncover MMR-associated protein complexes. Our proteomics study of the MMR family led to the discovery of many novel MMR-associated proteins, and gene ontology analysis expanded the roles of MMR in multiple biological processes. Specifically for MSH2, we uncovered a novel MutS binding partner, WDHD1, which associates with both MutSα (MSH2-MSH6 heterodimer) and MutSβ (MSH2-MSH3 heterodimer). We provide additional evidence suggesting that WDHD1 is involved in the MMR pathway and could serve as a potential biomarker for MSI phenotypes in cancer patients.
Antibodies-The anti-MSH2 antibody was obtained from Cell Signaling Technology. The monoclonal anti-FLAG M2, anti--actin, and anti-WDHD1 antibodies were purchased from Sigma-Aldrich. The anti-Myc (9E10) antibody was obtained from Covance.
Coprecipitation and Western blotting-Cells were lysed with NETN buffer (100 mM NaCl; 1 mM EDTA; 20 mM Tris-HCl; 0.5% Nonidet P-40) containing protease inhibitors on ice for 20 min. The soluble fractions were collected after centrifugation and incubated with protein A agarose beads coupled with anti-MSH2 antibody, or with S-protein beads, for 4 h at 4°C. The precipitates were then washed and boiled in 2× sodium dodecyl sulfate (SDS) loading buffer. Samples were resolved by SDS-polyacrylamide gel electrophoresis (PAGE), transferred to polyvinylidene fluoride membranes, and immunoblotted with the indicated antibodies.
Clonogenic Survival Assays-Briefly, a total of 1 × 10^3 HeLa cells were seeded onto 60-mm dishes in triplicate. Twenty-four hours after seeding, cells were treated with different concentrations of 6-TG (0, 1 µM, 3 µM, 8 µM) for 3 days, washed, and cultured in fresh medium. After 14 days, cells were stained with crystal violet and colonies were counted. Colony numbers were expressed as a percentage of the colonies formed in the absence of the drug. Results are the averages of data obtained from three independent experiments.
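The normalization just described (colony counts expressed as a percentage of the colonies formed without drug) can be sketched as follows; normalizing each condition by its own plating efficiency is a generalization added here for illustration, not a detail stated in the text:

```python
def plating_efficiency(colonies, cells_seeded):
    """Fraction of seeded cells that form a colony."""
    return colonies / cells_seeded

def surviving_fraction(colonies_treated, cells_treated, colonies_ctrl, cells_ctrl):
    """Survival at a given drug dose, as a percentage of the untreated
    plating efficiency (100% = same colony-forming ability as control)."""
    return 100.0 * plating_efficiency(colonies_treated, cells_treated) \
                 / plating_efficiency(colonies_ctrl, cells_ctrl)
```

With equal seeding numbers this reduces to the simple percentage used in the study: e.g. 50 colonies in a treated dish versus 200 in the control gives 25% survival.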
Tandem Affinity Purification-HEK293T cells were transfected with plasmids encoding various SFB-tagged MMR proteins. Stable cell lines were selected in media containing 2 µg/ml puromycin and confirmed by immunostaining and Western blotting. HEK293T cells stably expressing SFB-tagged MMR proteins were lysed with NETN buffer on ice for 20 min. After removal of cell debris by centrifugation, crude lysates were incubated with streptavidin Sepharose beads for 1 h at 4°C. The bead-bound proteins were washed three times with NETN buffer and eluted twice with 2 mg/ml biotin (Sigma) for 1 h at 4°C. The eluates were combined and then incubated with S-protein agarose (Novagen) for 1 h at 4°C. The S beads were washed three times with NETN buffer, and the bound proteins were separated by SDS-PAGE and visualized by Coomassie Blue staining. The eluted proteins were identified by mass spectrometry analysis, performed by the Taplin Biological Mass Spectrometry Facility (Harvard Medical School).
Mass Spectrometry Analysis-Gel bands were excised into small pieces and destained completely; disulfide bonds were reduced with 5 mM tris(2-carboxyethyl)phosphine (TCEP), cysteines were alkylated with 10 mM IAA, and the samples were subjected to trypsin digestion overnight at 37°C. The peptides were extracted with acetonitrile and vacuum-dried. Samples were reconstituted in HPLC solvent A (2.5% acetonitrile, 0.1% formic acid), loaded onto a Proxeon EASY-nLC II liquid chromatography pump (Thermo Fisher, Waltham, MA), and eluted with an acetonitrile gradient by increasing the concentration of solvent B (97.5% acetonitrile, 0.1% formic acid) from 6% to 30% over 30 min. The eluates entered an Orbitrap Elite MS (Thermo Fisher) operated in positive-ion, data-dependent mode, with a full MS scan from 350-1250 m/z, resolution 60,000, and an automatic gain control target of 1 × 10^6. The top 10 precursors were then selected for MS2 analysis.
The MS/MS spectra were searched with SEQUEST (ver. 28) (Thermo Fisher). Spectra were converted to mzXML using a modified version of ReAdW.exe. Database searching included all entries from the human UniProt database (March 11, 2014), concatenated with a database composed of all protein sequences in reverse order; the number of entries in the combined database was 141,456. Searches were performed using a 50 ppm precursor ion tolerance for total protein level analysis, and the product ion tolerance was set to 1 Da. Enzyme specificity was set to partially tryptic with two missed cleavages. Carboxyamidomethylation of cysteine residues (+57.021 Da) was set as a static modification, and oxidation of methionine residues (+15.995 Da) was set as a variable modification. The identified peptides were filtered at a false discovery rate < 1% based on the target-decoy method. The parameters XCorr, ΔCn, missed cleavages, peptide length, charge state, and precursor mass accuracy were considered for peptide-spectrum match (PSM) filtering using linear discriminant analysis (28,29). Single-peptide identifications were removed. The identified proteins and peptides are shown in Supplemental Tables S1 and S2.
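A minimal sketch of the target-decoy filtering idea follows: sort PSMs by score and keep the largest score cutoff at which the decoy/target ratio stays below 1%. This simple rule stands in for the study's linear-discriminant filter, which combines several parameters and is not reproduced here:

```python
def target_decoy_filter(psms, fdr_threshold=0.01):
    """Sketch of target-decoy FDR filtering.  `psms` is a list of
    (score, is_decoy) tuples; returns the scores of accepted target PSMs
    at the largest cutoff whose estimated FDR (decoys/targets) is below
    the threshold."""
    accepted = []
    targets = decoys = 0
    prefix = []
    for score, is_decoy in sorted(psms, key=lambda p: -p[0]):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        prefix.append((score, is_decoy))
        # FDR estimate at this cutoff: accepted decoys per accepted target
        if targets and decoys / targets < fdr_threshold:
            accepted = [s for s, d in prefix if not d]
    return accepted
```

A high-scoring decoy immediately inflates the estimated FDR and truncates the accepted list at the last cutoff that still met the threshold.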
Mismatch Repair Protein Interactome Analysis-For the evaluation of potential protein-protein interactions, the identified proteins and the corresponding PSM numbers (10 mismatch-repair bait proteins, with one biological repeat each) were assessed using the CRAPome methodology. The CRAPome scoring strategy is based on a quantitative comparison of the abundance (spectral counts) of co-precipitating proteins in purifications with a given bait against the distribution of prey abundances across a set of negative controls. This fold-change (FC) score includes the primary score FC-A and the more stringent score FC-B: the FC-A calculation averages the counts across all controls, whereas FC-B takes the average of the top three highest spectral counts as the abundance estimate (30). In this study, we used 233 TAP-MS datasets with randomly selected baits as the control group, and an FC-B score higher than two was taken as the threshold for potential binding proteins. To further select HCIPs, we used the proteome profiling data of HEK293T whole-cell lysate as the background to assess the specificity of each protein-protein interaction. The spectral counts of the identified proteins were normalized by total spectral counts; by comparison with this global expression background, only proteins enriched above the average enrichment fold following the TAP-MS procedure were included in the HCIP lists.
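The FC-A and FC-B scores for a single prey can be sketched as follows; the pseudocount used to avoid division by zero is a simplification introduced here, not part of the published CRAPome model:

```python
def fold_change_scores(bait_count, control_counts, pseudocount=0.1):
    """CRAPome-style fold-change scores for one prey protein.
    FC-A: bait spectral count over the mean across ALL negative controls.
    FC-B (more stringent): bait count over the mean of the three highest
    control counts.  The pseudocount is an illustrative simplification."""
    mean_all = sum(control_counts) / len(control_counts)
    fc_a = bait_count / (mean_all + pseudocount)
    top3 = sorted(control_counts, reverse=True)[:3]
    mean_top3 = sum(top3) / len(top3)
    fc_b = bait_count / (mean_top3 + pseudocount)
    return fc_a, fc_b
```

Because FC-B divides by the worst (most contaminated) controls, it is always at most FC-A; a prey passing the FC-B > 2 threshold used in the study is therefore a stricter call than the same threshold on FC-A.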
The HCIPs of the MMR proteins were analyzed with Cytoscape (31). We analyzed the network, created custom styles, and applied the yFiles organic layout with minor adjustments where necessary. Principal component analysis of the interactomes was performed with the R statistical computing software, using the normalized spectral counts of the HCIPs for each MMR protein. Gene ontology annotation with p values was performed using the Knowledge Base provided by Ingenuity Pathway Analysis software (IPA, Ingenuity Systems), which contains findings and annotations from multiple sources, including the Gene Ontology Database. False discovery rate correction of p values was used to account for multiple testing and to obtain the significantly enriched functions, with R statistical computing (32).
Experimental Design and Statistical Rationale-All the TAP-MS experiments of MMR proteins were performed with two biological replicates in HEK293T cells, derived from two independent stable clones. The proteins purified by TAP were digested with trypsin and analyzed by MS. The raw data were searched with SEQUEST and filtered at a false discovery rate < 1%, as described in the methods. The identified proteins were filtered by combining CRAPome analysis and the background-enrichment strategy, taking both biological replicates into account. Function enrichment of the HCIPs was analyzed with IPA, using false discovery rate correction of p values for multiple testing. Clonogenic survival assays were performed with at least three biological replicates, and statistical analysis was performed using Student's t test.
Proteomics Study of Mismatch Repair Protein Interactome Using TAP-MS Approach-To build the interaction network of the DNA MMR pathway, we used the well-established tandem affinity purification followed by mass spectrometry (TAP-MS) strategy (33)(34)(35), described in Fig. 1A, to identify binding proteins. In humans, the DNA MMR pathway includes 10 proteins: MSH2, MSH3, MSH4, MSH5, MSH6, PMS1, PMS2, MLH1, MLH3, and EXO1. We established HEK293T-derived cell lines stably expressing each of the triple-tagged (S-protein, FLAG, and streptavidin-binding peptide) MMR proteins. TAP experiments were performed twice for each protein using independent stable clones, and the purified proteins were digested and delivered to mass spectrometry for identification. The identified protein numbers are shown in Figs. 1B and 1C, and details of the identification results are shown in Supplemental Tables S1 and S2. In total, 131,449 peptides and 20,001 proteins were acquired from the 20 TAP-MS experiments. Analysis of our repeat purifications verified the strong reproducibility of our TAP-MS procedure (Fig. 1D), especially for the proteins identified with high PSMs, confirming the high quality of our TAP-MS data.
To obtain the high-confidence candidate interacting protein (HCIP) list, we submitted the 20 TAP-MS results with spectral-count information for the DNA MMR proteins, together with 233 controls with randomly selected unrelated bait proteins, to CRAPome analysis (30). The FC-B score was used to filter our TAP-MS dataset for HCIPs, yielding 648 proteins out of the total identification list of 14,340 with a score higher than two. Furthermore, to improve the confidence of our interacting-protein list, we adopted the proteome profiling data of the input cell lysate as background, which allowed us to remove background contaminants. Finally, we obtained 230 HCIPs as the "interactome" for all 10 MMR proteins, with 36.1% of the proteins identified as nuclear and 45.0% as cytoplasmic components (Fig. 1E). The details of the identified proteins and HCIPs for each bait protein are shown in Fig. 1C and Supplemental Table S3.
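The background-enrichment step described above can be sketched as a ratio of normalized spectral-count fractions; the exact "average enrichment fold" cutoff applied in the study is not reproduced here, and the count values in the example are hypothetical:

```python
def enrichment_fold(prey_psms, run_total_psms, background_psms, background_total_psms):
    """Enrichment of a prey in a TAP-MS run relative to its share of the
    whole-cell-lysate profiling background.  Spectral counts are normalized
    by the total counts of each experiment before taking the ratio."""
    tap_fraction = prey_psms / run_total_psms
    background_fraction = background_psms / background_total_psms
    return tap_fraction / background_fraction
```

For instance, a prey contributing 50 of 1,000 PSMs in a purification but only 10 of 10,000 PSMs in the background profile is 50-fold enriched; abundant housekeeping contaminants score near 1 and are filtered out.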
Overview of Protein-Protein Interaction Network of Human DNA Mismatch Repair Pathway-To understand the interactomes of MMR proteins, we first used IPA to reveal the function of all the identified HCIPs. The IPA analysis found that interactomes are highly enriched in proteins with reported roles in the MMR pathway, cell cycle, cellular growth and proliferation, DNA damage response, cellular development, cell morphology, and cellular assembly and organization ( Fig. 2A). Our results are in agreement with many published reports, which not only further demonstrate the high reliability of our dataset and methodology but also provide us with clues on how these proteins function in the MMR pathway.
MMR proteins do not function in isolation; there are many interactions among these MMR proteins and their HCIPs. We therefore studied the interactome network of all the HCIPs using Cytoscape (Fig. 2B). From the interaction data among the various DNA MMR proteins, we found strong binding among proteins already known to form functional complexes involved in MMR, such as the MutS and MutL complexes: MSH2 forms heterodimers with MSH6 and MSH3, called MutSα and MutSβ, respectively, while MLH1 forms heterodimers with PMS2 and PMS1, namely MutLα and MutLβ. As some of the HCIPs are shared among several MMR proteins, the identified spectral counts of the common HCIPs were compared by unsupervised principal component analysis of the 10 TAP-MS results. We generated a principal component analysis plot with the top two principal components, which explained 21.4% and 16.3% of the total data variation (Fig. 2C). As expected, our analysis validated the MutS and MutL complexes. The DNA exonuclease EXO1 has been reported to function in DNA MMR by excising mismatch-containing DNA tracts directed by strand breaks relative to the mismatch (36,37). According to our HCIP list, EXO1 interacts with both the MutS and MutL complexes, supporting active interaction and coordination among MutS, MutL, and EXO1 in the lesion recognition, incision, and excision steps of the MMR process. We also identified an interaction between MSH4 and MSH5, suggesting that they may function as a complex in the MMR pathway and/or other cellular processes.
Subinteractome Network Study of MutS, MutL, and EXO1-As EXO1 interacts with both the MutS and MutL complexes, they may form a large "MMR repairsome" involved in the MMR process; here, we further studied this subinteractome network. First, we integrated the HCIPs of the MutS complexes (MSH2, MSH3, and MSH6) and the MutL complexes (MLH1, PMS1, and PMS2) individually, and then built a subnetwork with the HCIPs of the three components MutS, MutL, and EXO1 (Fig. 3A). The proteins in the three circles around the baits or complexes are those identified as HCIPs only in the corresponding TAP-MS experiment, whereas the proteins labeled in purple are HCIPs identified with at least two baits. Some of the commonly identified HCIPs are involved in DNA repair pathways. For instance, X-ray repair cross-complementing protein 3 (XRCC3) is involved in the homologous recombination repair pathway (38). This protein was identified as an HCIP of both the MutS and MutL complexes, indicating that it may play a role in the DNA MMR pathway; of course, it is also possible that MutS and MutL associate with XRCC3 and function in the homologous recombination repair pathway.
To globally reveal the functions of HCIPs of MutS, MutL, and EXO1 identified in our TAP-MS study, we used the software IPA for the localization and function analyses (Fig. 3B). Many of the HCIPs localize in the nucleus, which include 53.73% HCIPs of MutS complex, 51.16% of EXO1, and 34.29% of MutL complex. The functional analysis illustrates that these HCIPs are highly enriched in several functional pathways, including the MMR pathway, homologous recombination repair, nucleotide excision repair, cell cycle, and DNA replication. The proteins with these functions may be involved in DNA MMR pathway and vice versa.
Validation of MSH2 Interactome Reveals a Novel MutS-Binding Partner WDHD1-To further validate our proteomics data, we decided to perform an in-depth study of the MSH2 interactome. In this interactome, we identified several known MSH2-binding proteins, including MSH3, MSH6, and EXO1 (Fig. 4A). Excitingly, we uncovered WDHD1 as a major MSH2-associated protein (Fig. 4A). To confirm that WDHD1 exists in the same complex as MSH2, we performed reverse TAP-MS analyses using SFB-tagged WDHD1 as the bait protein and identified MSH2, MSH3, and MSH6 as WDHD1-associated proteins (Fig. 4A). These data suggest that WDHD1 associates with the MutS complex. To further confirm these interactions, we co-expressed SFB-tagged WDHD1, KIAA1671, SMARCAD1, SDF4, and MeCP2 (negative control) with Myc-tagged MSH2 in HEK293T cells. The results indicated that, besides the known interactions (i.e., MSH2-MSH3, MSH2-MSH6, MSH2-EXO1), several HCIPs such as WDHD1, KIAA1671, and SMARCAD1 also bind to MSH2, thereby validating the MSH2 interactome we identified (Fig. 4B). In addition, we confirmed the MSH2-WDHD1 interaction between endogenous proteins (Fig. 4C), suggesting that these two proteins indeed associate with each other in vivo.
[Displaced Fig. 4 legend fragment: immunoprecipitation reactions were performed using S-protein beads and then subjected to Western blot analyses with the indicated antibodies; (F) WDHD1 depletion confers increased cellular resistance to 6-TG; colony-formation assays were performed as described in the Experimental Procedures; statistical analysis used Student's t test, with p < 0.05 considered significant (*); data are presented as mean ± S.E.]
Mapping the Interaction Domains of WDHD1 and MSH2-We next attempted to define the MSH2-binding region(s) on WDHD1. A series of truncation mutants of WDHD1 were coexpressed with SFB-tagged MSH2 in HEK293T cells. We were able to map the minimal MSH2-binding region to a small region at the C terminus of WDHD1 (residues 1122-1126). Interestingly, within this region, an Ala1123-to-Pro missense mutation of WDHD1 was detected in a lung cancer patient in the Catalogue of Somatic Mutations in Cancer (COSMIC). We found that mutation of the Ala1123 site alone (Ala1123Pro) or of both the Ala1123 and Phe1124 sites (Ala1123Pro/Phe1124Ala) abolished the interaction between MSH2 and WDHD1 (Fig. 4D), suggesting that disruption of this interaction may contribute to cancer development.
Next, we sought to identify the region(s) of MSH2 responsible for its interaction with WDHD1. Again, we generated a series of truncation and internal deletion mutants of MSH2. As shown in Fig. 4E, deleting the second lever domain (the D5 mutant, residues 550-620) dramatically reduced the MSH2-WDHD1 interaction, indicating that this domain of MSH2 is important for its binding to WDHD1.
WDHD1 Depletion Confers an Increased Cellular Resistance to 6-thioguanine (6-TG)-It is well documented that the levels of MSH2 inversely correlate with resistance to 6-TG and N-methyl-N'-nitro-N-nitrosoguanidine (MNNG) (39). Consistently, fewer colonies formed in parental HeLa cells upon 6-TG treatment, while knockdown of MSH2 or WDHD1 in these cells resulted in resistance to 6-TG, i.e., more colonies formed after 6-TG treatment (Fig. 4F). These results indicate that WDHD1 may not only bind to MSH2 but also function with MSH2 in the MMR pathway.

DISCUSSION

This work provides an extensive analysis of the MMR protein-protein interaction network, identifies over 230 HCIPs, and therefore greatly broadens our current understanding of the MMR pathway. We uncovered several uncharacterized partners for MMR proteins such as MutS, MutL, and EXO1 (Fig. 3). The biological significance of these interactions remains to be determined. Given that the MMR pathway is a critical genome maintenance pathway and MMR deficiency leads to MSI and cancer development, we speculate that some of the MMR-binding proteins discovered in this study may be mutated or downregulated in cancer and may therefore contribute to cancer development and to the MSI phenotypes identified in cancer patients. This possibility warrants further investigation.
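The colony-formation comparison above can be sketched numerically. The snippet computes a two-sample Student's t statistic for invented triplicate colony counts (parental versus WDHD1-knockdown cells after 6-TG) and compares it with the two-sided 5% critical value for four degrees of freedom, mirroring the p < 0.05 criterion used in the paper; the counts and the pooled-variance (equal-n) form are illustrative assumptions only.

```python
import math

def students_t(sample_a, sample_b):
    """Two-sample Student's t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))

# Invented colony counts from triplicate plates after 6-TG treatment.
parental = [12, 15, 10]
wdhd1_kd = [85, 92, 78]

t = students_t(wdhd1_kd, parental)
T_CRIT = 2.776  # two-sided 5% critical value of Student's t for df = 4
significant = abs(t) > T_CRIT
```

With these toy numbers the knockdown cells form far more colonies and the difference is clearly significant; a full analysis would report an exact p value rather than a critical-value comparison.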
In our proteomics analysis of the MMR pathway, we built a subnetwork with the HCIPs of three MMR components, MutS, MutL, and EXO1 (Fig. 3A). It is known that MMR is implicated in other repair processes, including DNA damage signaling, homologous recombination, interstrand cross-link repair, and meiotic DNA recombination (40). Indeed, the HCIPs of the three MMR components clearly indicate connections between MMR and other DNA repair pathways. For example, XRCC3 was identified as an HCIP of both the MutS and MutL complexes. This protein is implicated in homologous recombination repair (41,42), indicating that the MMR pathway may participate in homologous recombination repair through interactions with multiple factors involved in homologous recombination. MutL complexes have also been shown to participate in the repair of interstrand cross-links, with the evidence that MutLα interacts specifically with the Fanconi anemia protein FANCJ (Fanconi Anemia Group J Protein, BRCA1-Interacting Protein 1, BRIP1) to facilitate interstrand cross-link repair (43). As a matter of fact, BRIP1 was repeatedly identified in our purifications of MutL complexes (Fig. 3A). Moreover, another Fanconi anemia protein, FAN1 (FANCD2/FANCI-Associated Nuclease 1), was also identified in the MutL complexes (Fig. 3A). The specific interaction between FAN1 and MutL was further confirmed by reverse purification conducted by us and others (44,45), suggesting that MutL may participate in interstrand cross-link repair through its interaction with several Fanconi anemia proteins.
MSH2 is a central component of the MMR pathway that recognizes mismatches arising during DNA replication. The analysis of the MSH2 interactome revealed not only several known components of the MutS complex, including MSH3 and MSH6, but also several previously unidentified partners, such as WDHD1, KIAA1671, and SMARCAD1 (Fig. 4A). In particular, the WDHD1 protein contains an amino-terminal WD40 domain (tryptophan-aspartic acid (W-D) dipeptide repeats) and a carboxyl-terminal high-mobility group (HMG) motif. It has been shown that WDHD1 acts as a component of the replisome to regulate DNA replication and S phase progression (46-49). It is also well documented that MMR corrects DNA mismatches generated during DNA replication. Thus, it is reasonable to speculate that WDHD1 may function to recruit the MutS complex to chromatin during DNA replication and thus facilitate the MMR pathway in removing mismatches behind ongoing DNA replication forks. In this study, we not only validated the interaction between MSH2 and WDHD1 but also showed that a missense mutation of WDHD1, Ala1123-to-Pro, detected in a lung cancer patient (COSMIC), abolished the WDHD1-MSH2 interaction, indicating that this mutation in WDHD1 may be functionally important for lung cancer development. Of note, we also checked cBioPortal and found 163 WDHD1 mutations in colorectal cancer, endometrial cancer, bladder cancer, and others, indicating that WDHD1 may be mutated in multiple types of cancers and contribute to tumorigenesis. Moreover, similar to MSH2, knockdown of WDHD1 confers cellular resistance to 6-TG, indicating that WDHD1 likely participates in the MMR pathway. Future studies will be directed at defining whether and how WDHD1 may facilitate the loading of MSH2 during DNA replication and promote MMR.
In conclusion, our proteomics analysis of the MMR pathway provides a rich resource for further exploration of MMR functions in various DNA repair pathways, which will offer new ideas and therapeutic approaches for cancer patients.
"year": 2016,
"sha1": "2b1dc0644a409d5766351500e3ddc725c2c2bbd0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1074/mcp.m115.056093",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "7656510257930e4e1fb78f7e01be812fad9b0612",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Parallaxes and proper motions of interstellar masers toward the Cygnus X star-forming complex. I. Membership of the Cygnus X region
Whether the Cygnus X complex consists of one physically connected region of star formation or of multiple independent regions projected close together on the sky has been debated for decades. The main reason for this puzzling scenario is the lack of trustworthy distance measurements. We aim to understand the structure and dynamics of the star-forming regions toward Cygnus X by accurate distance and proper motion measurements. To measure trigonometric parallaxes, we observed 6.7 GHz methanol and 22 GHz water masers with the European VLBI Network and the Very Long Baseline Array. We measured the trigonometric parallaxes and proper motions of five massive star-forming regions toward the Cygnus X complex and report the following distances within a 10% accuracy: 1.30 ± 0.07 kpc for W 75N, 1.46 (+0.09/−0.08) kpc for DR 20, 1.50 (+0.08/−0.07) kpc for DR 21, 1.36 (+0.12/−0.11) kpc for IRAS 20290+4052, and 3.33 ± 0.11 kpc for AFGL 2591. While the distances of W 75N, DR 20, DR 21, and IRAS 20290+4052 are consistent with a single distance of 1.40 ± 0.08 kpc for the Cygnus X complex, AFGL 2591 is located at a much greater distance than previously assumed. The space velocities of the four star-forming regions in the Cygnus X complex do not suggest an expanding Strömgren sphere.
Introduction
In the early days of radio astronomy, a conspicuously strong, extended source of radio emission was found at Galactic longitude ∼80° and named the Cygnus X region (Piddington & Minnett 1952); it also stands out in infrared surveys of the Galaxy (Odenwald & Schwartz 1993; see, e.g., the spectacular Spitzer imaging in Kumar et al. 2007). All phases of star formation and stellar evolution are observed projected across the Cygnus X region, including a population of dense, massive, and dusty cores with embedded protoclusters and high-mass protostellar objects (Sridharan et al. 2002; Beuther et al. 2002; Motte et al. 2007), ultracompact HII regions (Downes & Rinehart 1966; Wendker et al. 1991; Cyganowski et al. 2003), hundreds of OB-type stars (of which ∼65 are O-type; Wright et al. 2010 and references therein), and some supernova remnants (Uyanıker et al. 2001).
The proximity of a large number of OB associations and molecular cloud complexes on the sky motivated the explanation of Cygnus X as consisting of various objects at different distances seen superposed on one another (e.g., Dickel et al. 1969, Pipenbrink & Wendker 1988, and Uyanıker et al. 2001). Recently, the CO imaging survey of Schneider et al. (2006) has rekindled the idea of Cygnus X as one large star-forming complex, which was already suggested in the sixties by, e.g., Véron (1965). Obviously, both scenarios depend strongly on the distances measured to the individual parts of the Cygnus X complex, because sources at a Galactic longitude of ∼80° could be in the Local Arm (also named the Orion or Local Spur) and nearby (∼1-2 kpc), in the Perseus Arm at ∼5 kpc, or even in the Outer Arm at distances of ∼10 kpc (e.g., the Cygnus X-3 microquasar).
Unfortunately, distances to the Cygnus X objects are very difficult to obtain and have large uncertainties. First, the Cygnus X OB associations are too far for a parallax measurement with the Hipparcos satellite, which measured distances of nearby OB associations out to a distance of 650 pc (de Zeeuw et al. 1999). Second, at the Galactic longitude of Cygnus X, the radial velocity difference between the Sun and the Cygnus X region is, for distances up to 4 kpc, close to the typical velocity dispersion of interstellar gas in a high-mass star-forming region (SFR) (1-7 km s−1, Moscadelli et al. 2002). Therefore, kinematic distances, which depend on the radial velocity, are not reliable for distances below 4 kpc toward this longitude, and most distance measurements rely on the spectroscopy and photometry of stars. However, these estimates are also affected by large uncertainties (>30%) because the extinction toward Cygnus X is very high and variable (Schneider et al. 2006). While one can find distance estimates between 1.2 and 2 kpc in the literature, the generally adopted value is 1.7 kpc, based on the spectroscopy and photometry of the stars in the Cyg OB 2 association by Torres-Dodgen et al. (1991) and Massey & Thompson (1991). More recently, a nearer distance of 1.5 kpc was obtained by Hanson (2003) using new MK stellar classification spectra of the stars in Cyg OB 2. Of course, using a Cyg OB 2 distance for the entire complex assumes that Cygnus X is a physically connected SFR.
The distance to this "mini-starburst" region is crucial for the many star formation studies performed toward it, given its richness and its short distance from the Sun. As noted above, the distances are very uncertain: for example, three OB associations (Cyg OB 1, 8, and 9) have distance estimates between 1.2 and 1.7 kpc, a difference of more than 30%. This distance range of 500 pc is almost ten times larger than the extent of the Cygnus X region on the sky (4° by 2°, or 100 by 50 pc at a distance of 1.5 kpc). Therefore, important physical parameters of objects in this region, such as luminosities and masses, are uncertain by a factor of ∼1.7 given a distance uncertainty of 30%. For some of the SFRs, the distance estimates span a much wider range, namely from ∼1 kpc (AFGL 2591) to 3 kpc (DR 21). To distinguish whether all clouds are at the same distance or are only projected close together on the plane of the sky, a direct estimate of distances to distinct objects toward Cygnus X is required.
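The factor of ∼1.7 quoted above follows directly from the inverse-square dependence of flux on distance: luminosity scales as distance squared, so a 30% distance error propagates into roughly a 70% luminosity error. A quick check:

```python
# Luminosity inferred from an observed flux scales as distance squared,
# so a 30% distance uncertainty gives a factor ~1.7 in luminosity
# (and in any mass estimate derived from that flux).
d_ratio = 1.3              # 30% distance error
lum_factor = d_ratio ** 2  # = 1.69, i.e. the factor ~1.7 quoted in the text
```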
In this context, we used strong 6.7 GHz methanol masers and 22 GHz water masers as astrometric targets to measure the distances of five distinct SFRs toward the Cygnus X complex. No previous parallax measurements were carried out toward this region. Using the terminology of Schneider et al. (2006), the Cygnus X complex is divided about the Cyg OB 2 cluster at (l, b) = (80.°22, +0.°80) into a northern region, at Galactic longitudes greater than about 80°, and a southern region, at lower longitudes. Methanol maser emission was observed toward four SFRs with the European VLBI Network (EVN): W 75N, DR 20, and DR 21 in Cygnus X North, and IRAS 20290+4052 (which is likely part of the Cyg OB 2 association, see Odenwald 1989; Parthasarathy et al. 1992; hereafter IRAS 20290) in Cygnus X South. The pioneering work presented in Rygl et al. (2010) demonstrated the capability of the EVN to achieve parallax accuracies as good as 22 µas. Water maser emission was observed toward AFGL 2591, which is projected within Cygnus X South, with the Very Long Baseline Array (VLBA).
The observations presented here comprise one year of VLBA observations and two years of EVN observations, with the addition of Japanese antennas (the Yamaguchi 32-m antenna and the Mizusawa station of the VLBI Exploration of Radio Astrometry, VERA) for several epochs to increase the angular resolution. We report the preliminary distances from the first two years of results, using only the EVN antennas, and the evidence that AFGL 2591 is only projected against the Cygnus X region and hence not part of it. To optimize the results with the long baselines afforded by the Japanese antennas, we need to develop more sophisticated calibration procedures, since several maser spots are resolved on the ∼9000 km baselines. This analysis will be included in a future paper once the observations are completed.
EVN methanol maser observations
The EVN observations were carried out in eight epochs between March 2009 and November 2010 under project EB039. These dates were scheduled near the minima and maxima of the sinusoidal parallax signature in right ascension to optimize the sensitivity of the parallax measurement. The parallax signature was followed for two years with a quasi-symmetric coverage of t = 0.0, 0.2, 0.63, and 0.64 years, sampling the minima and maxima equally. These two conditions allow one to separate the proper motion from the parallax signature. Each observation lasted 12 hours and made use of geodetic-like observing blocks to calibrate the tropospheric zenith delays at each antenna (see Reid & Brunthaler 2004; Brunthaler et al. 2005; Reid et al. 2009a for a detailed discussion).
The methanol masers were first selected from the Pestalozzi et al. (2005) database and then observed with the Expanded Very Large Array (EVLA, program AB1316) to obtain accurate positions (Xu et al. 2009b). To find extragalactic background sources to serve as position references, to verify their compact emission, and to obtain positions with subarcsecond accuracy, we observed compact NVSS (Condon et al. 1998) sources within 2° of W 75N at 5 GHz with the EVN in eVLBI mode on December 4, 2008 (program EB039A). These observations revealed two compact background sources, J2045+4341 and J2048+4310 (hereafter J2045 and J2048); we also used J2029+4636 (hereafter J2029) from the VLBA calibrator survey (Beasley et al. 2002), separated by 4.3° from W 75N. A typical EVN observing run started and ended with a ∼1 hour geodetic-like observing block, and about ten minutes were spent on fringe-finder observations of 3C 454.3 and J2038+5119. The remaining time was spent on maser/background-source phase-referencing observations. The 6.7 GHz masers in Cygnus X and the three background sources were phase-referenced to the strongest maser in W 75N, using a switching cycle of 1.5 minutes. Table 1 lists the source positions, separations from W 75N, brightnesses, and restoring beam information. DR 21A, DR 21B, and DR 21(OH) are three masers thought to belong to the same SFR, but they were observed separately because their separation was at the border of the field of view limited by time smearing (∼37 arcseconds).
The observations were performed in dual circular polarization and two-bit Nyquist sampling, for an aggregate recording rate of 512 Mbps. The data were correlated in two passes at the Joint Institute for VLBI in Europe (JIVE) using a one-second averaging time. The maser data were correlated using one 8 MHz band with 1024 spectral channels, resulting in a channel separation of 7.81 kHz and a channel width of 0.35 km s −1 . The background sources were correlated in continuum mode with eight subbands of 8 MHz bandwidth and a channel width of 0.25 MHz. The data were reduced using the NRAO's Astronomical Image Processing System (AIPS) and ParselTongue (Kettenis et al. 2006). The data were phase-referenced to the W 75N maser at v = 7.1 km s −1 and the solutions were transferred to the other masers and to the continuum background sources. More details on the reduction and calibration can be found in Rygl et al. (2010).
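The quoted channel separation and velocity width follow directly from the correlator setup via Δν = bandwidth / channels and Δv = c Δν/ν₀. A quick check, assuming the standard 6.7 GHz methanol maser rest frequency of 6668.5192 MHz:

```python
C_KM_S = 299_792.458   # speed of light [km/s]
NU0_HZ = 6.6685192e9   # 6.7 GHz methanol maser rest frequency [Hz] (assumed)

bandwidth_hz = 8e6
n_channels = 1024
dnu = bandwidth_hz / n_channels   # channel separation [Hz] -> 7812.5 Hz = 7.81 kHz
dv = C_KM_S * dnu / NU0_HZ        # velocity channel width [km/s] -> ~0.35 km/s
```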
VLBA water maser observations
We performed water maser observations using the National Radio Astronomy Observatory's VLBA in four epochs between November 2008 and November 2009 under program BM272H. The observation dates were chosen to match the minima and maxima of the sinusoidal parallax signature in right ascension for optimal parallax and proper motion analysis, as discussed in Sato et al. (2010). For the VLBA observations, the parallax signature was sampled symmetrically at t = 0, 0.49, 0.51, and 1.0 years over one year. To find suitably compact background sources near the maser target, we observed a sample of unresolved NVSS sources with the VLA in BnA configuration on October 5, 2007, at X and K band (program BM272). With these observations, we measured source compactness and spectral index and selected the strongest, most compact sources to serve as background sources (position references) for the parallax observations. Additionally, the VLA data were used to obtain subarcsecond-accurate positions for the background sources and the masers (which were also included in these VLA observations). The positions, separations from the maser target, brightnesses, and restoring beam information are given in Table 1.
The VLBA observations, performed at the water maser frequency of 22.235 GHz, included four geodetic-like blocks for calibrating the zenith delays at each antenna. Two fringe finders (3C 454.3 and 3C 345) were observed at the beginning and in the middle of each run. The water maser was observed together with four background sources: the ICRF calibrator J2007+4029 (Ma et al. 1998) and three quasars selected from the VLA observations of NVSS sources: J2033+4000 (hereafter J2033), J2032+4057, and J2033+4040. We performed phase-referencing observations by fast switching (every 30 seconds) between the maser and each of the four background sources. We used four adjacent subbands of 8 MHz bandwidth in dual circular polarization. Each subband was correlated with 256 spectral channels, giving a channel width of 0.42 km s−1. The data were correlated with the VLBA correlation facility in Socorro using an averaging time of 0.9 seconds. The calibration was carried out in AIPS following the procedure described in Reid et al. (2009a).
Parallax fitting
Generally, we detected 6.7 GHz methanol and 22 GHz water maser spots in several velocity channels each and toward multiple locations on the map (see Fig. 1 for the methanol masers, and see Sanna et al. 2011 for the water masers). All positions were determined by fitting a 2-D Gaussian brightness distribution to selected regions of the maps using the AIPS task "JMFIT". The parallaxes and proper motions were determined from the change in the positions of the maser spots relative to the background sources (position reference). We fitted the data with a sinusoidal parallax signature and a linear proper motion. When the minima and maxima of the parallax signature are sampled equally, the proper motion and parallax are uncorrelated. Maser spots with strongly nonlinear proper motions or a large scatter of position about a linear fit were discarded, since these usually reflect spatial and spectral blending of variable maser features and cannot be used for parallax measurements. Only compact maser spots with well-behaved residuals were used for the parallax fitting.
The formal position errors can underestimate the true uncertainty on the position, since they are based only on the signal-to-noise ratios determined from the images and do not include possible systematic errors (e.g., from residual delay errors). Therefore, to allow for such systematic uncertainties, we added error floors in quadrature to the position errors; i.e., we increased the positional error for all the epochs by a fixed amount until the reduced χ² values were close to unity for each coordinate. The parallax fitting was then performed following the same procedure as in Rygl et al. (2010): 1) we performed single parallax fits per maser spot to a background source; 2) we fitted all the maser spots (of one maser source) together in a combined parallax fit; 3) when a maser had three or more maser spots, we performed a fit on the "averaged data" (see Bartkiewicz et al. 2008; Hachisuka et al. 2009). However, different maser spots can be correlated, because an unmodeled atmospheric delay will affect all the maser spots of one maser source in the same way. Therefore, we multiplied the uncertainty of the combined fit by √N, where N is the number of maser spots, to allow for the likelihood of highly correlated differential positions (procedure 2). To obtain the averaged positions for a maser spot (procedure 3), we performed parallax fits on all the individual spots and removed their position offsets and proper motions, after which we averaged the positions at each epoch. The last approach has the advantage of reducing the random errors introduced by small variations in the internal spot distribution of a maser feature (e.g., Sanna et al. 2010), while leaving the systematic errors unaffected.
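The fitting described above, a sinusoidal parallax signature plus a linear proper motion per coordinate, is linear in its parameters and can be sketched as a small least-squares problem. The sketch below uses a simplified annual sinusoid as the parallax factor (the true factor depends on the source's ecliptic coordinates and the Earth's orbit) and recovers invented W 75N-like parameters from noiseless synthetic data; the epoch spacing mimics the quasi-symmetric EVN coverage quoted in the observations section.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_parallax(times, offsets, factor):
    """Least-squares fit of offset(t) = x0 + mu*t + plx*factor(t) via normal equations."""
    basis = [(1.0, t, factor(t)) for t in times]
    A = [[sum(bi[i] * bi[j] for bi in basis) for j in range(3)] for i in range(3)]
    b = [sum(bi[i] * x for bi, x in zip(basis, offsets)) for i in range(3)]
    return solve3(A, b)  # (x0, mu, plx)

# Simplified parallax factor: a pure annual sinusoid.
f = lambda t: math.sin(2.0 * math.pi * t)

# Synthetic, noiseless data with invented W 75N-like parameters:
# parallax 0.772 mas, proper motion -1.97 mas/yr (illustrative value).
truth_plx, truth_mu = 0.772, -1.97
epochs = [0.0, 0.2, 0.63, 0.64, 1.0, 1.2, 1.63, 1.64]
data = [0.1 + truth_mu * t + truth_plx * f(t) for t in epochs]

x0, mu, plx = fit_parallax(epochs, data, f)
```

Because the epochs sample both extrema of the sinusoid in both years, the linear (proper motion) and sinusoidal (parallax) terms are nearly uncorrelated and the fit recovers both cleanly.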
The EVN observations used three background sources, of which only one, J2045, was usable as a position reference: this quasar was close to the maser at the phase reference position and unresolved (Fig. 2). Quasar J2029 had too large a separation from W 75N for astrometric purposes (4.3°), which can be seen from the near-milliarcsecond scatter in the relative position versus time of J2029 with respect to J2045 (see Fig. 3). The relative position of J2048 with respect to J2045 (also Fig. 3) also shows a large scatter, owing to the resolved emission of J2048 (Fig. 2). Previously, we showed in Rygl et al. (2010) that at 6.7 GHz some quasars can mimic an apparent motion due to changes in the quasar structure. Two or more quasars allow one to quantify this effect, so that the uncertainty in the maser proper motion, which combines the error of the derived maser proper motion fit and the apparent motion of the quasar, can be estimated. From the plots of relative positions between J2045 and J2029 (J2048 was not considered because it was resolved), we estimated the apparent relative motion to be near zero: −0.1 ± 0.2 mas yr−1 in right ascension and 0.2 ± 0.2 mas yr−1 in declination.
The VLBA 22 GHz observations included four background sources, of which two were used for astrometry, J2007 and J2033 (Fig. 4). The other two background sources were discarded, one because of a non-detection (J2033+4040) and the other because of a strong structure change during the observations (J2032+4057). J2033 itself was found to be heavily resolved by the VLBA (see Fig. 4); for this quasar, the position was derived by fitting only the peak of the emission in the image, instead of its whole spatial structure.
The parallax and proper motion fitting of the water maser was done in the same fashion as for the methanol maser data. As the VLBA observations used two background sources, the "averaged data" were calculated separately for each background source. The combined fit was carried out on the two averaged data sets, and the parallax uncertainty was multiplied by √N for the number of maser spots to account for correlated differential positions (see above). In the combined fit, we treated the proper motion of the maser separately from the two background sources, taking their relative, apparent (linear) motion into account (Fig. 5). This is necessary because there was a significant apparent linear motion in the right ascension coordinate (0.32 ± 0.01 mas yr−1), but none in declination (−0.01 ± 0.1 mas yr−1).
[Displaced figure caption: the contours start at a 3σ level, namely 2.4 × 10−4, 6.6 × 10−4, and 12 × 10−4 Jy beam−1, respectively, and increase by √2; the first negative contour (−3σ) is shown dashed; the synthesized beam is shown in the bottom left corner.]
[Displaced figure caption: relative position with respect to J2045; dots mark the right ascension data points and filled triangles the declination data points; the solid and dashed lines show the right ascension and declination fits, respectively.]
With the parallax and averaged proper motion results, we calculated the 3D space velocities of the SFRs with respect to the Galactic center (Reid et al. 2009b). Each SFR has three velocity components: U, the velocity in the direction of the Galactic center; V, the velocity in the direction of Galactic rotation; and W, the velocity in the direction of the North Galactic Pole (NGP). The maser proper motions that we measure are the velocity differences between the maser source and the Sun. To obtain the 3D space velocity, we first need to subtract the solar motion and then transform the velocity vector from a nonrotating frame to a frame rotating with the Galactic rotation. In this calculation we used the solar peculiar motion obtained by Schönrich et al. (2010) and assumed a flat Galactic rotation curve with θ = 239 km s−1 and a solar distance to the Galactic center R = 8.3 kpc. These are the revised Galactic rotation parameters derived by Brunthaler et al. (2011), taking into account the recent revision of the solar peculiar motion from the Hipparcos values (U⊙, V⊙, W⊙) = (10.00, 5.25, 7.17) km s−1 of Dehnen & Binney (1998) to (U⊙, V⊙, W⊙) = (11.10, 12.24, 7.25) km s−1 of Schönrich et al. (2010). Table 2 shows the parallax and averaged proper motion results for all SFRs and their calculated space velocities, while the detailed results of the parallax and proper motion fitting are given in Table 3.
[Displaced figure caption: the contours start at a 3σ level, namely 5.1 × 10−2 and 1.2 × 10−3 Jy beam−1, respectively, and increase by √2; the first negative contour (−3σ) is shown dashed; the synthesized beam is shown in the bottom left corner.]
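Converting a measured proper motion into the linear velocity that enters U, V, and W uses the standard factor of 4.74 km s−1 per (mas yr−1 · kpc). A minimal sketch of that conversion follows; the full calculation, with solar-motion subtraction and rotation into the Galactic frame, follows Reid et al. (2009b) as described above, and the example numbers are illustrative only.

```python
# km/s produced by 1 arcsec/yr at 1 pc, equivalently by 1 mas/yr at 1 kpc
# (1 AU in km divided by 1 Julian year in seconds).
KAPPA = 4.74047

def tangential_velocity(mu_mas_yr, distance_kpc):
    """Linear (tangential) velocity [km/s] of a proper motion at a distance."""
    return KAPPA * mu_mas_yr * distance_kpc

# Example: a 1 mas/yr proper motion at the 1.40 kpc Cygnus X distance
# corresponds to a tangential velocity of ~6.6 km/s.
v = tangential_velocity(1.0, 1.40)
```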
Cygnus X North: W 75N, DR 21, and DR 20
W 75N has 14 different methanol maser features emitting in a local standard of rest velocity (VLSR) range of 3-10 km s−1, with 32 maser spots in total (see Fig. 1). After removing the maser spots with nonlinear motions, we were left with ten spots belonging to six maser features for the parallax and proper motion fitting. Figure 6 shows the parallax fit of W 75N, resulting in 0.772 ± 0.042 mas or 1.30 ± 0.07 kpc. While we usually plot the parallax fits after removing the proper motion, in this figure we also show a parallax fit to one of the maser spots of W 75N without removing the proper motion. Since different maser spots can have different proper motions, it is not instructive to plot the parallax fits to all the maser spots when the proper motion is included.
For DR 21A we found 18 maser spots belonging to seven maser features, covering an LSR velocity range of −9.4 to −5.2 km s−1 (see Fig. 1). The parallax fitting was based on seven spots in four maser features. Toward DR 21B we found seven spots in three maser features with VLSR between 3.3 and 5.0 km s−1 (see Fig. 1), but only one maser spot had a good parallax fit. We also detected methanol masers in DR 21(OH), where we used three maser spots with VLSR = −2.5 to −3.5 km s−1 for fitting a parallax signature. If we assume that the masers in DR 21A, DR 21B, and DR 21(OH) belong to the same SFR, which is likely since their parallaxes are consistent (0.686 ± 0.060 mas for DR 21A, 0.705 ± 0.072 mas for DR 21B, and 0.622 ± 0.055 mas for DR 21(OH)), the resulting parallax fit for DR 21 becomes 0.666 ± 0.035 mas, or 1.50 (+0.08/−0.07) kpc. The connection between DR 21 and DR 21(OH) has already been noted by Schneider et al. (2006), since these SFRs are part of the same elongated filament seen in CO emission. The parallax fits to the combined and averaged data for DR 21A and DR 21(OH), the fit to the individual spot of DR 21B, and their overall combination (i.e., for the DR 21 SFR) are shown in Fig. 7.
Six maser spots in the velocity range of −4.8 to −2.0 km s−1, belonging to two maser features, were found in DR 20 (see Fig. 1). The parallax fit was based on one maser feature with three maser spots. We find a parallax of 0.687 ± 0.038 mas, corresponding to a distance of 1.46 (+0.10/−0.09) kpc (see the parallax fit in Fig. 8). The second epoch was omitted from the parallax fitting, because the maser spots in DR 20 had an outlying data point in proper motion caused by large residuals in the atmospheric delay of that epoch's data.
Cygnus X South: AFGL 2591 and IRAS 20290+4052
The parallax of AFGL 2591 was based on VLBA data using 22 GHz water masers. AFGL 2591 has a very rich water maser spectrum of 80 maser features with VLSR from −34 to −0.4 km s−1 (see Sanna et al. 2011). The parallax and proper motion fits were based on six maser features and resulted in a parallax of 0.300 ± 0.010 mas, or 3.33 ± 0.11 kpc. Figure 9 shows the parallax and proper motion fit of AFGL 2591 with respect to the two background sources. The parallax fit to each background source separately was 0.302 ± 0.009 mas (J2007) and 0.299 ± 0.002 mas (J2033). The difference between the two parallax fits is due to the apparent motion observed between the two background sources; the effect on the parallax can be approximated by the difference between the two results, namely 0.003 mas. The previous distance measurements of AFGL 2591 were near 1.6 kpc. Assuming that the SFR is associated with IC 1318c (Wendker & Baars 1974), whose distance was determined by Dickel et al. (1969), AFGL 2591 was thought to be at 1.5 kpc. Alternatively, Dame & Thaddeus (1985) suggested a distance of 1.7 kpc based on kinematic distances to CO clouds. However, one can find a much wider spread of distances, from 1 to 2 kpc, in the literature (van der Tak et al. 1999). The water maser parallax puts AFGL 2591 a factor of 2.2 (assuming 1.5 kpc) farther away, at 3.33 kpc, which implies a dramatic change in the physical properties of the SFR (discussed in Sanna et al. 2011).
Finally, for IRAS 20290, we found two methanol maser features composed of a total of eight spots with V LSR from −6.2 to −3.8 km s −1 (see Fig. 1). Due to large residuals in the fit, only two maser spots, belonging to the same feature, were suitable for parallax fitting, resulting in a parallax of 0.737 ± 0.062 mas or 1.36 +0.12 −0.11 kpc. The parallax fit is shown in Fig. 10.
Notes. (a) The resulting parallax fit for DR 21 is based on the averaged data of DR 21A, DR 21(OH), and the single maser spot of DR 21B.
Is Cygnus X one region?
AFGL 2591, supposedly located in the Cygnus X South region, lies at a much greater distance than previously assumed, which implies that AFGL 2591 is not part of a single Cygnus X complex. AFGL 2591 is perhaps part of the Local Arm, which would then extend to greater distances from the Sun than currently thought. While we need more data to investigate this issue with stronger statistical support, we note that AFGL 2591 is not the only distant source found in this region (see Fig. 11). Two recent parallax measurements, namely ON2 at 3.83 kpc (Ando et al. 2011) and G75.76+0.35 at 3.37 kpc (Xu, private comm.; see also the Bar and Spiral Structure Legacy survey, BeSSeL, website 3 ), locate these SFRs close in space to AFGL 2591 (within 3 • on the sky). AFGL 2591 has a V LSR similar to other SFRs in Cygnus X North; it is projected against the Cygnus complex but is not part of it. The space motions of AFGL 2591, though, are very different from those of the sources in Cygnus X, as can be seen in Fig. 12. We note that the distance measurement of AFGL 2591 is not affected by the use of a different VLBI array and maser transition than for the other SFRs. Maser parallaxes, from both 22 GHz water and 6.7 (and 12.2) GHz methanol masers, have been shown to produce robust distance measurements: for example, the distance to W3(OH), measured with both 12.2 GHz methanol (Xu et al. 2006) and 22 GHz water (Hachisuka et al. 2006) masers, yielded the same result, and 6.7 GHz methanol maser distances (Rygl et al. 2010) also agree with VERA water maser distances (Sato et al. 2008; Nagayama et al. 2011).
The most important result of this study is that Cygnus X North is one physically related complex of SFRs, including W 75N, DR 21, DR 20, and IRAS 20290 (and therefore probably also the Cygnus OB 2 association), located at 1.40 +0.08 −0.08 kpc. This is an average of the individual distances, which range from 1.30 to 1.50 kpc; our data are consistent with a single distance for these sources within the measurement uncertainty. We note that our distance to the Cygnus X complex is similar to the photometric distance of 1.5 kpc obtained by Hanson (2003) toward the Cyg OB 2 association. Compared to the extent on the sky of the SFRs mentioned above, 25 × 60 pc, the distance spread of 200 pc is a factor of ∼3.5 wider (however, the measurement uncertainty is a factor of 1-2 times the angular extent). The parallax results can possibly be extended based on the contiguous spatial-velocity structures identified by Schneider et al. (2006) in the CO emission. Following these authors, DR 22, DR 23, DR 17, and AFGL 2620 (their CO groups I and II) should also be part of Cygnus X North and thus at the same distance.
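The combined 1.40 kpc value is consistent with a plain mean of the four individual distances quoted in this work (the near-equal uncertainties make a weighted mean essentially identical); a quick check of the mean and the 200 pc spread:

```python
# Distances (kpc) from the text: W 75N, DR 20, DR 21, IRAS 20290+4052
distances = [1.30, 1.46, 1.50, 1.36]

mean_d = sum(distances) / len(distances)                # ~1.40 kpc
spread_pc = (max(distances) - min(distances)) * 1000.0  # ~200 pc

print(f"mean = {mean_d} kpc, spread = {spread_pc} pc")
```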
We found no evidence for a southern counterpart to Cygnus X North, since AFGL 2591 turned out to be much more distant, and other SFRs toward Cygnus X South (except for IRAS 20290 at l = 79.°7, which was found to be at the same distance as Cygnus X North) were not included in this work. Schneider et al. (2006) find that most of the mass in Cygnus X South is contained in a group of molecular clouds: DR 4, DR 5, DR 12, DR 13, and DR 15 (their CO group IV). Parallax measurements to one of these SFRs could then confirm that Cygnus X South is connected to Cygnus X North, which would form one of the largest known giant molecular cloud structures in the Milky Way, of ∼ 2.7 × 10 6 M ⊙ (Schneider et al. 2006, rescaled to 1.4 kpc).
The distances of the SFRs in Cygnus X fit well with the trajectory of the Local Arm between 0.5 and 2.5 kpc in the X coordinate (see Fig. 11), defined by measurements to Cep A (Rygl et al. 2010; Nagayama et al. 2011) and G 59.7+0.1 (Xu et al. 2009a).
3 http://www.mpifr-bonn.mpg.de/staff/abrunthaler/BeSSeL/index.shtml
Two explanations have been proposed for a single connected Cygnus X region: first, a superbubble driven by the famous Cygnus Loop supernova remnant (e.g., Walsh & Brown 1955; Cash et al. 1980), a scenario recently ruled out by Uyanıker et al. (2001); and second, an expanding Strömgren sphere (McCutcheon & Shuter 1970). Our Cygnus X sources (all sources except AFGL 2591) have overlapping distance uncertainties, so we cannot use their distances to study the structure of the complex. However, the UVW space motions of the sources projected onto the Galactic plane give an impression of the dynamics of the Cygnus X complex. We plot the resulting UVW space motions toward Cygnus X in Fig. 12. Apart from the clearly different behavior of AFGL 2591, we find that W 75N and DR 21 have a dominant motion toward the NGP, while DR 20 and IRAS 20290 are moving toward the NGP and the Galactic center. These proper motions of the Cygnus X SFRs do not suggest a common expansion center, which an expanding Strömgren sphere should have, so more data are needed to understand the formation of the Cygnus X region.
Additionally, we found that the space velocity V, in the direction of Galactic rotation (after subtracting the Galactic rotation of V = 239 km s −1 ), lies between −10.5 and −8.0 km s −1 for all sources except W 75N, where V = +3.6 km s −1 . It seems that most of the star-forming gas is moving with the same Galactic orbital velocity, lagging some 9 km s −1 behind circular orbits, as found by most parallax studies of massive SFRs (e.g., Reid et al. 2009b) after taking the revised Solar motion into account (Schönrich et al. 2010).
Fig. 12. Midcourse Space eXperiment (MSX) 8 µm image of the Cygnus X region overlaid with the resulting UVW space motions for each source. The white triangles mark the water maser (AFGL 2591) and the methanol maser sources, with their distances indicated in kpc.
Summary
We measured the trigonometric parallaxes and proper motions of five massive SFRs toward the Cygnus X star-forming complex using 6.7 GHz methanol masers and, for AFGL 2591, 22 GHz water masers. We report the following distances: 1.30 +0.07 −0.07 kpc for W 75N, 1.46 +0.09 −0.08 kpc for DR 20, 1.50 +0.08 −0.07 kpc for DR 21, 1.36 +0.12 −0.11 kpc for IRAS 20290+4052, and 3.33 +0.11 −0.11 kpc for AFGL 2591. While the distances of W 75N, DR 20, DR 21, and IRAS 20290+4052 are consistent with a single distance of 1.40 ± 0.08 kpc for the Cygnus X complex, AFGL 2591 is located at a much greater distance than previously assumed. The space velocities of the SFRs in Cygnus X do not suggest an expanding Strömgren sphere. | 2012-01-03T13:26:00.000Z | 2011-11-30T00:00:00.000 | {
"year": 2012,
"sha1": "3ca06037416670ef25eb21db56a4459a6be67c95",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2012/03/aa18211-11.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "3ca06037416670ef25eb21db56a4459a6be67c95",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
98461496 | pes2o/s2orc | v3-fos-license | Use of a Silsesquioxane Organically Modified with 4-amino-5-( 4-pyridyl )-4 H-1 , 2 , 4-triazole-3-thiol ( APTT ) for Adsorption of Metal Ions
This paper describes the preparation of a nanostructured silsesquioxane, the chloropropylsilsesquioxane (S), which was organofunctionalized with 4-amino-5-(4-pyridyl)-4H-1,2,4-triazole-3-thiol (APTT); the functionalized material is referred to as SA. SA was characterized by spectroscopy in the infrared region (FTIR). After characterization, the adsorption of metal ions such as Cu, Ni and Cd at the active sites of SA was studied in three different media: aqueous, ethanol 42% and ethanol 99%. The adsorption equilibrium time was determined and, for all media and metals, averaged 35 minutes. After determining the adsorption equilibrium time of the metal ions in each system, the specific sorption capacity (Nf) was determined through adsorption isotherms. The results suggest that the sorption of metal ions on SA occurs mainly by surface complexation, and a Langmuir model allowed describing the sorption of the metal ions on SA. The excellent adsorptive capacity made possible the development of a method for determining metal ions at trace levels in real samples such as waste water, ethanol fuel and alcoholic beverages.
Hybrid materials have attracted considerable attention because their organic and inorganic moieties work together to give the hybrid properties different from those displayed by its precursors (Yin et al., 2011b). Functionalized silsesquioxanes can improve some of their properties without affecting their characteristics, for example mechanical and thermal properties (Gnanasekaran et al., 2009; Kuo & Chang, 2011; Xu et al., 2011) and oxidation resistance (Gnanasekaran et al., 2009). Currently, there is great interest in forming hybrid materials by incorporating organic groups into silsesquioxane structures. Additionally, this can increase their capacity to adsorb metal ions from solution (Paim, 2007). Metal coordination with silsesquioxanes is common when incompletely condensed silsesquioxanes, such as trisilanols and Polyhedral Oligomeric Silsesquioxanes (POSS), are used. These complexes are capable of mimicking the main characteristics of organofunctionalized silica with different metals (Dijkstra et al., 2002; Do Carmo et al., 2007; Duchateau et al., 2003; Feher & Budzichowski, 1995; Lorenz et al., 2000). The adsorptive properties of silsesquioxane-based nanomaterials can be attributed mainly to the presence of peripheral organic groups containing O, S and N in the functionalized silsesquioxane (Yin et al., 2011a, 2011b; Cai et al., 2011). In this context, the main objective of this work was to organofunctionalize octa(3-chloropropyl)octasilsesquioxane (SS) with 4-amino-5-(4-pyridyl)-4H-1,2,4-triazole-3-thiol, also known as APTT (Figure 1b), in order to evaluate its adsorption capacity for metal ions (Cu 2+ , Ni 2+ , Cd 2+ ) in different media, such as aqueous, ethanol 42% and ethanol 99%. APTT is a powerful ligand for organofunctionalization and sorption of metals. Its N and S sites can bind different transition metals (Chu et al., 2007).
Reagents
All reagents and solvents were of analytical grade (Alfa Aesar, Merck or Aldrich) and were used as purchased. Deionized water was produced with a Milli-Q Gradient system from Millipore. The solutions of sodium nitrite were prepared immediately before use.
Techniques
Fourier transform infrared spectra were recorded on a Nicolet 5DXB FTIR 300 spectrometer. Approximately 600 mg of KBr was ground in a mortar with a pestle, and sufficient solid sample was ground with KBr to make a 1 wt% mixture for producing KBr pellets. After the sample was loaded, the sample chamber was purged with nitrogen for at least 10 min prior to data collection. A minimum of 32 scans was collected for each sample at a resolution of 4 cm -1 .
Synthesis of Octa-(3-chloropropyl)silsesquioxane (SS)
For the synthesis of octa-(3-chloropropyl)silsesquioxane (SS) a procedure described in the literature was followed (Chojnowski et al., 2006).800 mL of methanol, 27.0 mL of hydrochloric acid (HCl) and 43.0 mL of 3-chloropropyliltriethoxysilane were added into a 1000 mL round bottom flask.The reaction was stirred at room temperature for 5 weeks.The solid phase was separated by filtration in a sintered plate funnel, yielded a white solid which was then oven dried at 120 ºC for 4 hours (Figure 1a).
Isotherms of Adsorption
The adsorptive capacity of the organofunctionalized material for metal ions (Cu 2+ , Ni 2+ , Cd 2+ ) in different media (aqueous, 42% ethanol and 99% ethanol) was studied using the batch technique. For each adsorption isotherm, samples containing 50 mg of SA in 50 mL of solvent with variable concentrations of metal chloride (0.25 × 10 -3 to 3.0 × 10 -3 mol L -1 ) were mechanically shaken for an average of 35 minutes at a constant temperature of 25 ± 1 o C. After shaking, the solid phase was separated and a 10 mL aliquot of the solution containing the metal ions was titrated with 1.0 × 10 -3 mol L -1 EDTA solution, using murexide as indicator. The quantity of adsorbed metal, Nf, in each flask was determined by the equation Nf = (Na − Ns)/m, where m is the mass (g) of adsorbent and Na and Ns are the initial and equilibrium numbers of moles of the metal in the solution phase, respectively.
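The batch calculation above can be sketched as follows; the concentrations in the worked example are hypothetical, chosen only to illustrate the arithmetic:

```python
def specific_sorption(na_mol, ns_mol, mass_g):
    """Nf = (Na - Ns)/m: moles of metal adsorbed per gram of adsorbent,
    where Na and Ns are the initial and equilibrium amounts (mol) of
    metal in the solution phase and m is the adsorbent mass (g)."""
    return (na_mol - ns_mol) / mass_g

# Hypothetical run: 50 mL of 1.0e-3 mol/L metal chloride (Na = 5.0e-5 mol)
# shaken with 50 mg of SA, with 3.5e-5 mol left in solution at equilibrium.
nf = specific_sorption(5.0e-5, 3.5e-5, 0.050)  # -> 3.0e-4 mol/g
```

The illustrative result, 3.0 × 10⁻⁴ mol g⁻¹, is of the same order of magnitude as the Nf values reported below for Cu²⁺ in aqueous solution.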
Results and Discussion
Figure 2a shows the spectrum of the APTT ligand, with its characteristic bands from 500 to 1600 cm -1 , corresponding to vibrations of the APTT ring. The bands at ~1310, 1415 and 1550 cm -1 correspond to the C-N axial deformation (υ C-N ), the C-N axial deformation of the ring, and the N-H angular deformation (δ N-H ) of the APTT ring, respectively. Near 1620 cm -1 there was a band attributed to the C=N axial stretching (υ C=N ). The band at ~2790 cm -1 corresponds to the vibration of the S-H bond (υ S-H ), and the intense broad band is attributed to the O-H deformation of the H 2 O molecules (υ O-H ). Other absorption bands were observed: an intense broad band between 2300 and 2600 cm -1 , which can be attributed to the C-H axial deformation (υ C-H ) of the ring, and two bands at ~3160 and ~3270 cm -1 assigned to the N-H axial deformation (υ N-H ) (Silverstein & Webster, 1996).
Figure 2b illustrates the vibrational spectrum of the functionalized material (SA), showing bands characteristic of the precursor materials S and APTT, such as the band at ~1120 cm -1 related to the asymmetric Si-O-Si stretching (υ Si-O-Si ), which corresponds to that found for a cage-shaped silsesquioxane structure, confirming that the cubic structure is maintained. The bands at ~2900 and 2950 cm -1 are attributed to the C-H bond vibration (υ C-H ) and the Si-C vibration (υ Si-C ), respectively, and the broad band can be attributed to the O-H deformation of the H 2 O molecules (υ O-H ). The bands between 1350 and 1650 cm -1 were attributed to the vibrations and deformations of the APTT ring (Silverstein & Webster, 1996). The absence of the band at 590 cm -1 related to the C-Cl vibrations was also observed, confirming the complete organofunctionalization of S with APTT. To evaluate the adsorption capacity for metal ions in different media (aqueous, ethanol 42% and ethanol 99%), adsorption isotherms were obtained by plotting N f against C, where C is the equilibrium concentration of the solute in the solution phase. Figures 3, 4 and 5 illustrate the adsorption isotherms for copper, cadmium and nickel ions from the different solvents onto the SA surface. For Cu 2+ ions, the values of N f were: aqueous solution (3.09 × 10 -4 mol g -1 ), ethanol 42% (1.03 × 10 -4 mol g -1 ) and ethanol 99% (1.86 × 10 -4 mol g -1 ), with Cu 2+ concentrations ranging from 1.74 to 29.64 mol L -1 . The values obtained for Ni 2+ ions were: aqueous solution (2.11 × 10 -4 mol g -1 ), ethanol 42% (0.97 × 10 -4 mol g -1 ) and ethanol 99% (1.05 × 10 -4 mol g -1 ), with Ni 2+ concentrations ranging from 2.25 to 33.18 mol L -1 . For Cd 2+ ions, no saturation of the adsorption sites was observed in the concentration range studied, so the maximum quantity of adsorbed metal (N f max ) was not reached. The adsorption properties decreased in the following sequence:
A schematic representation of the equilibrium that occurs between SA and MX 2 is given by Equation 1.
Based on the results, SA presented excellent potential for the adsorption of the metal ions studied in different media. Similar results using materials analogous to SA have been reported in the literature (Do Carmo & Paim, 2012; Lessi et al., 1996; Salles et al., 2004).
More information about the system behavior can be obtained from a fit of the data to the modified Langmuir equation, represented by Equation 2, from which one can obtain the linearization curve (Adamson, 1990; Langmuir, 1918):

C s /N f = 1/(kN s ) + C s /N s (2)

In this equation, C s is the concentration of the solution at equilibrium (mol L -1 ), N f the quantity of solute adsorbed by the material (mol g -1 ), N s the adsorption capacity (mol g -1 ) and k the equilibrium constant.
Plotting C s /N f against C s yields parameters that make it possible to calculate the values of k and N s . Table 1 presents the data for the adsorption of CuCl 2 , NiCl 2 and CdCl 2 from solution onto the SA surface. The results show close agreement between the experimental values and the fitted Langmuir isotherms. The high values obtained for the equilibrium constant, of the order of 10 3 L mol -1 , suggest that the complexes formed on the surface of the adsorbent are thermodynamically stable (Dias-Filho & Do Carmo, 2006; Rosa et al., 2006).
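A sketch of the linearized fit: we generate an exact Langmuir isotherm with illustrative parameters (Ns = 3.0 × 10⁻⁴ mol g⁻¹ and k = 5.0 × 10³ L mol⁻¹, both hypothetical, chosen to match the orders of magnitude discussed in the text) and recover them from the slope and intercept of Cs/Nf versus Cs:

```python
def langmuir_fit(cs, nf):
    """Least-squares line through (Cs, Cs/Nf); the linearised Langmuir
    form Cs/Nf = 1/(k*Ns) + Cs/Ns gives Ns = 1/slope, k = slope/intercept."""
    y = [c / n for c, n in zip(cs, nf)]
    m = len(cs)
    mx, my = sum(cs) / m, sum(y) / m
    slope = (sum((x - mx) * (yi - my) for x, yi in zip(cs, y))
             / sum((x - mx) ** 2 for x in cs))
    intercept = my - slope * mx
    return 1.0 / slope, slope / intercept  # Ns (mol/g), k (L/mol)

# Synthetic isotherm generated with illustrative (hypothetical) parameters
Ns_true, k_true = 3.0e-4, 5.0e3
cs = [2.5e-4 * i for i in range(1, 9)]
nf = [Ns_true * k_true * c / (1 + k_true * c) for c in cs]
ns_fit, k_fit = langmuir_fit(cs, nf)
```

Because the synthetic data follow the Langmuir form exactly, the fit recovers Ns and k to numerical precision; with real titration data the scatter of the points about the line gives the correlation coefficient r reported in Table 1.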
Conclusion
Synthesis of octa-(3-chloropropyl)silsesquioxane (SS) and its functionalization with the ligand APTT were performed successfully.
The material obtained (SA) proved to be a powerful sorbent for transition metals in several solvents. Its excellent adsorptive capacity made possible the development of a method for determining metal ions at trace levels in real samples such as waste water, ethanol fuel and alcoholic beverages.
Figure 2. (a) Spectrum in the infrared region of APTT and (b) spectrum in the infrared region of SA
Table 1. Adsorption of metal ions by SA from different solvents at 25 ± 1 o C and the corresponding correlation coefficients (r) | 2018-12-15T11:07:45.406Z | 2013-01-21T00:00:00.000 | {
"year": 2013,
"sha1": "dcab6a9a5e4cb69124bdcde059310d4ebb8cd7a2",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/ijc/article/download/23148/15290",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "dcab6a9a5e4cb69124bdcde059310d4ebb8cd7a2",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
222353855 | pes2o/s2orc | v3-fos-license | Parenting behaviors of mothers and fathers of young children with intellectual disability evaluated in a natural context
The aims of this study were to analyze the interactions of mothers and fathers with their children with intellectual disabilities, focusing on certain parental behaviors previously identified as promoting child development, and to explore the relations between parenting and some sociodemographic variables. A sample of 87 pairs of mothers and fathers of the same children were recruited from Early Intervention Centers. The children (58 male and 29 female) were aged 20–47 months. Most of the families (92%) were from the province of Barcelona (Spain), and the remaining 8% were from the other provinces of Catalonia (Spain). Parenting behaviors, divided into four domains (Affection, Responsiveness, Encouragement, and Teaching) were assessed from self-recorded videotapes, in accordance with the validated Spanish version of the PICCOLO (Parenting Interactions with Children: Checklist of Observations Linked to Outcomes). Parents were administered a sociodemographic questionnaire. The results revealed strong similarities between mothers’ and fathers’ parental behaviors. Mothers and fathers were more likely to engage in affectionate behavior than in teaching behavior. Only maternal teaching presented a significant positive relation to the child’s age. With respect to the child’s gender, no differences were observed in mothers’ parenting. Conversely, fathers scored significantly higher in Responsiveness, Encouragement and Teaching (and had higher total parenting scores) when interacting with boys. The severity of the child’s ID had a statistically significant effect only on fathers’ Teaching, which showed lower mean scores in the severe ID group than in the moderate and mild ID groups. Teaching also presented a significant positive relation to mother’s age, but father’s age was not related to any parenting domain. 
Mothers with a higher educational level scored significantly higher in Encouragement and Teaching, and the fathers’ educational level was not significantly related to any parenting domain. Mothers’ and fathers’ Teaching, and fathers’ Responsiveness, Encouragement and total parenting scores, presented a significant positive relation to family income. Finally, mothers spent more time in childcare activities than fathers, particularly on workdays. Our main conclusion is that mothers and fathers show very similar strengths and weaknesses when interacting with their children with intellectual disabilities during play.
Parenting behaviors that have been shown to support children's early development include behaviors in the domains of affection, responsiveness, encouragement and cognitive stimulation or teaching [13][14][15][16][17][18]. The literature shows that supporting children's early development is especially critical for children with a clearly established disability such as those with intellectual disabilities (IDs), or for children at risk for poor developmental outcomes [19]. For families with a child with a disability, good parenting and positive parent-child interactions may constitute a real challenge. In comparison with typically developing children, children with IDs may provide less salient cues, be less responsive or have behavioral problems [11]; they may also show less emotional expressiveness, difficulties in joint attention, difficulties in language and communication, and behavioral problems, all of which may hinder the establishment of good interaction patterns [10,20,21].
Although there is a large body of literature supporting the relationship between positive parenting and child outcomes in typically developing children [3,22], fewer studies have examined this association in parents of children with developmental disorders such as intellectual disabilities [for a review, see 23]. Positive parenting of children with disabilities is an important area of research, due to the many stressors that these parents experience [24][25][26] and their possible impact on parenting. Daily parenting stress habitually has to do with the everyday challenges and demands of providing care for children with serious developmental difficulties such as those with ID. This stress may have a critical impact on the development of parenting and, consequently, on children's psychological and developmental outcomes [27,28]. Compared with parents of children with typical development (TD), parents of children with ID may adopt more intrusive and negative parenting styles [29-33], which may not aid development. The challenges associated with raising a child with ID are numerous [34,35] and may affect parental behaviors and, at a later date, children's outcomes and behavior. Despite these difficulties, studies comparing maternal interactions with a child with disabilities and a child with typical development have shown that mothers try to adapt to the child's characteristics [36]; when interacting with a child with inhibitory control deficits such as Fragile X Syndrome, they are more directive and less conversational [37]. Mothers with a child with ID show a more directive style than mothers with a TD child, but they can be also more supportive and motivating [38]. However, this directive style does not mean that these parents are any less affectionate or positive [12,39]. 
Research has shown that parenting styles vary greatly [10], and a positive view of parenting is vital to empower parents of children with ID and to promote their perception of themselves as effective parents [40]. Furthermore, a positive parentchild relationship may be an important compensatory factor for daily parenting stress in families with children with ID [28].
Raising a child with an ID may affect mothers' and fathers' parenting in different ways, although we know that having a child with an ID impacts all family members. Family Systems Theory [41] emphasizes the dynamic and interdependent nature of the family unit, in which the experiences of one member potentially affect the entire system. Mothers, fathers and children are all elements inside this system, with interconnected patterns of actions and relationships [2].
Most of the literature on parenting has been conducted with mothers, but the amount of research including both mothers and fathers, or focusing just on fathers, has increased in recent years [28,[42][43][44][45][46][47][48][49]. These studies have highlighted the role that fathers play in their children's cognitive, emotional, and social development [50][51][52][53]. Studies comparing mothers' and fathers' parenting have found both similarities [54] and differences [55]; it seems that fathers are more likely to engage in play activities when interacting with their children, while mothers spend more time in caregiving activities [56]. During play, fathers engage in more rough-and-tumble play than mothers, especially with sons, and this kind of playfulness is beneficial for the construction of attachments to fathers and the regulation of aggression and other behavioral problems [57][58][59]. It appears that mothers use more objects and toys and more verbal and didactic play techniques during play than fathers [60]. A recent study [61] found that fathers of daughters were more attentively engaged with their child, sang more to them, and used more analytical language and language related to sadness and the body; in contrast, fathers of sons engaged in more physical play, and used language more focused on achievement. As has been suggested [62,63], maternal and paternal parental behaviors are linked to the child's developmental outcomes, implying the existence of a complementary system of parenting, with both commonalities and differences between mothers and fathers. However, far fewer studies have examined parenting behaviors in mothers and fathers of the same child with an ID at very young ages [12,28,64; see 43 for a review], even though early intervention programs are increasingly recognizing the value of engaging fathers in home visits [65,66].
The needs of fathers of children with ID have received less attention in the literature than those of mothers [42,43,67], but there is enough evidence to suggest that paternal involvement in families of children with disabilities can have similarly positive impacts on family wellbeing and child outcomes [68][69][70]. A literature review of paternal involvement in the presence of a disability [71][72][73] found it to be linked to increased child cognitive competence and enhanced social skills. Moreover, paternal support in the family has been found to reduce maternal stress in families of children with disabilities [28,74]. So, father involvement during the early years can lead to positive child and family outcomes in families of children with disabilities such as ID. These findings provide strong justification for the need to continue studying the roles of mothers and fathers of the same child with ID, since we know that the involvement of both parents is a critical ingredient of effective developmental intervention [75][76][77][78][79][80][81]. Parenting differences may be addressed in a variety of domains, all of which can contribute to understanding family functioning in the context of a child with disabilities. This paper focuses specifically on the study of parents' behavior during interactions with their children with ID in a naturalistic home context; however, other variables such as family stress or family wellbeing are equally important in order to understand the complexity of studying families with children with disabilities [43, [82][83][84].
As we noted above, previous studies of parenting in families of children with ID have focused specifically on mothers. However, some studies conducted with mothers and fathers of children with ID in the 1990s [64,85] suggested a similarity between mothers' and fathers' interactions with their children, even though the mothers were much more involved with their children. These results were later confirmed by more recent studies [42,70,86]. Also, the study by Crnic [43] indicated that both mothers and fathers of children with ID behave similarly with their children to parents of children with typical development, although as the child grows older the father's involvement in their upbringing decreases.
In relation to father play, a study carried out with fathers of children with ID [87] found it to be associated with more child exploration and symbolic play; in addition, fathers with high emotional availability exhibited more symbolic and less exploratory play than their peers with lower emotional availability.
It is true that few empirical studies have been carried out on mothers' and fathers' behavior in families with young children with ID. Nevertheless, it appears that fathers and mothers share some fundamental similarities in their parenting behavior, but that certain behavioral domains present differences. Previous research shows that although fathers may not contribute to caregiving tasks to the same degree, when they do so they are competent care providers [43,88].
The aim of the present study was thus to explore similarities and differences in the parenting behaviors of mothers and fathers of the same child with an ID in a natural context in Spain.
In relation to the existing literature, our study provides new data on parental interactions with young children with ID in Europe, a context in which public policies for conciliation of work, family and personal life have progressively improved in recent years and have helped to increase the levels of joint participation of men and women in the upbringing and education of their children [89]. Therefore, the study of how mothers and fathers from the same family contribute to child development, and especially in the case of children with developmental disabilities, is a highly relevant topic in Europe today.
We focused on the following research questions: 1. How do mothers and fathers of the same child compare on dimensions of parenting (Affection, Responsiveness, Encouragement and Teaching)?
2. Are the parenting behaviors that mothers and fathers demonstrate with their child with an ID related in any way to family-related demographic variables?
Participants
Our sample was composed of 87 pairs of mothers and fathers of the same children. Almost all families (92%) were from the province of Barcelona (Spain), and the remaining 8% from the other provinces of Catalonia (Spain). They were recruited from Early Intervention Centers (EICs), which cater for children from birth to 6 years of age who have, or are at risk of, developmental delay [90]. The following inclusion criteria were applied: (a) age between 20 and 47 months; and (b) Intellectual Disability (associated or not with another type of disability) diagnosed at least six months before the study. In Spain, Early Intervention Centers offer a universal, free public service, organized by provinces within each autonomous community. Each center has autonomy for receiving and evaluating cases, and for intervening when necessary. Access to EICs is by indication or referral by a child care service (health, educational or social services) or by direct contact made by the family. EICs are staffed by professionals from different disciplines (usually neuropaediatricians, psychologists, physiotherapists, speech therapists and social workers). The size of the team is related to the size of the population to be served, and the coordinator is usually a psychologist.
The study sample comprised 87 children, 58 male (67%) and 29 female (33%), aged from 20 to 47 months (M = 33.0, SD = 6.8). Fifty-six per cent of the children were younger than 3 years old (20 to 35 months), and 44% were 3 years old or over (36 to 47 months). The degree of ID was mild (from 33 to 64%) in 46%, moderate (from 65 to 74%) in 45% and severe (> 75%) in 9% (in Spain, the assessment of the percentage of disability is a standardized process carried out by a governmental agency, the Valuation and Guidance Services for People with Disabilities; after diagnosis, the agency issues an official certificate reflecting both the existence and degree of disability, with ID being graded as mild, moderate or severe). These services carried out the assessment and established the degree of disability. Almost all mothers (98%) were married or living with a partner. Eleven per cent had received only elementary schooling, 40% had completed secondary school, 34% had a university degree, and 15% had post-graduate studies. Most were in full-time (51%) or part-time employment (31%), while 18% cared for their children and were fully responsible for housework.
Fathers were aged 24 to 60 years (M = 38.9, SD = 5.1). Most (98%) were married or living with a partner. Twenty-two per cent had received only elementary schooling, 36% completed secondary school, 31% had a university degree, and 11% had completed post-graduate studies. Most of them were in full-time employment (91%); the rest were employed part-time (1%), or unemployed (8%). Twenty-three per cent of the families had a monthly income between €1,602 and €2,451, considered an average income in Spain [91]. Monthly income was below €1,602 in 22% of the families and above €2,451 in 44%.
Instruments
A brief sociodemographic questionnaire was used to record the child's age (in months) and gender, degree of ID (mild, moderate or severe), whether s/he attended a kindergarten or a preschool center, and the parents' age (in years), gender, marital status, educational level (1: elementary studies, 2: high school, 3: university degree, 4: Master's/PhD), employment status (full-time employment, part-time employment, unemployed/homemaker), level of monthly income according to the Socioeconomic Classification [92] (from 1: Less than €1,312 to 6: More than €3,005), and daily hours dedicated to child care. In particular, parents were asked: "On average, how many hours a day do you spend on child care (for example, taking your child to school, bathing him/her, taking him/her out for a walk, playing, cooking. . .) on weekdays? And on the weekend?"

The Spanish version [93] of the Parenting Interactions with Children: Checklist of Observations Linked to Outcomes (PICCOLO) [17, 18] was used to assess parenting. The PICCOLO is a reliable and valid 29-item measure of parent-child interactions for parents with children between the ages of 10 and 47 months. The 29 items reflect parental behaviors linked to children's developmental outcomes and the measure can be used to assess families with children with disabilities. The PICCOLO items are scored on a 3-point rating scale, from 0 (absent, no behavior observed) to 1 (barely, or some brief, minor, or emerging behavior) to 2 (clearly, definitive, strong or frequent behavior). This rating scale is similar to a behavior checklist, with a yes/no response; a score of 2 corresponds to a clear presence of the behavior and 0 to a clear absence. 1 is an intermediate score, corresponding to behaviors that appear infrequently or not consistently. The items are rated especially according to the consistency of parental behaviors in relation to the child's actions.
For example, in the case of item 3 on the Teaching scale "Repeats or expands child's words or sounds", a score of 0 reflects the total absence of that behavior and 2 its consistent presence (when, every time the child utters a sound or a word, the adult consistently repeats or expands it). A score of 1 would be awarded when the adult occasionally performs the behavior but does not respond to many of the child's utterances (missing opportunities). The advantage of the PICCOLO 3-point scale is that it is easy to score short observations (between 8 and 10 minutes or so) without counting or timing behaviors [17, 18].
As adult-child interaction is a reciprocal process, and as the child's behavior has an impact on parental behavior, it is important to stress that the PICCOLO items refer to parental behaviors in the context of interaction with the child. The assessment of parenting thus takes the child's behavior during the interaction into account. Although some items can be assessed by focusing on the adult (for example, "Smiles at the child"; "Talks to the child about characteristics of objects"), most items assess the adult's behavior in response to that of the child (for example, "Changes pace or activity to meet child's interests or needs"; "Responds to child's emotions"; "Replies to child's words or sounds"; "Verbally encourages child's efforts"). So, for example, item 3 on Responsiveness ("Is flexible about child's change of activities or interests") will score 0 when, every time the child changes his/her focus of interest, the parent is not flexible and does not accept the child's choice or initiative.
The items are grouped into four domains: (a) Affection (7 items), which involves the physical and verbal expression of affection, positive emotions, positive evaluation and positive regard; (b) Responsiveness (7 items), which includes reacting sensitively to a child's cues and expressions of needs or interests and reacting positively to his/her behavior; (c) Encouragement (7 items), which considers parents' support of children's efforts, exploration, independence, play, choices, creativity, and initiative; and (d) Teaching (8 items), which includes cognitive stimulation, explanations, conversation, joint attention, and shared play. The instrument generates a score for each dimension between 0 and 14 (0 to 16 for the Teaching dimension) and a total score between 0 and 58 (the sum of all items). The original instrument's reliability is good [17,18].
The Spanish version of the PICCOLO was recently validated using a sample of 203 mother-child dyads [93]. The results of the confirmatory factor analysis confirmed that the instrument has a four-factor structure of first-order domains (Affection, Responsiveness, Encouragement, and Teaching) that collapses into a single, second-order factor (parenting). The study also found high interrater reliability; the intraclass correlation coefficients (ICC) ranged from .69 for the Responsiveness domain to .85 for the total score. With respect to internal consistency reliability, all domain and total scores showed satisfactory Cronbach's alpha coefficients (.65 for Affection, .75 for Responsiveness, .76 for Encouragement, .72 for Teaching, and .88 for the total score). In this study, Cronbach's α values for mothers (N = 87) and for fathers (N = 87) were, respectively: .54 and .55 for Affection; .81 and .84 for Responsiveness; .80 and .84 for Encouragement; .66 and .70 for Teaching; and .88 and .90 for the total PICCOLO score.
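The internal-consistency coefficients above follow the standard Cronbach's alpha formula, which can be computed directly from an item-score matrix. A minimal pure-Python sketch (the `demo` data are hypothetical, not the study's):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose to per-item columns
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4 respondents x 3 items, rated 0-2 as on the PICCOLO scale
demo = [[1, 1, 1], [2, 2, 2], [0, 0, 0], [2, 2, 1]]
print(round(cronbach_alpha(demo), 2))  # 0.96
```

Sample (n − 1) variances are used throughout; using population variances instead gives the same alpha as long as the choice is consistent.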
Procedure
Ethical approval was obtained from the Network of Ethics Committees in Universities and Public Research Centers in Spain. Approval was given in accordance with the International Ethical Guidelines for Health-related Research Involving Humans.
Then, several EICs were contacted by letter and telephone and informed of the nature of the study, and the coordinators of the centers were asked to help in recruiting families for the study. Families were informed that their participation would be entirely voluntary and anonymous. They received a letter with information about the study, the sociodemographic questionnaire and a brief guide about how to video-record adult-child interactions during play at home. The parents signed an informed consent document. Mothers and fathers were asked to self-record, separately, between 8 and 10 minutes of a normal play session with their child at home, with the following instruction: "Interact and play with your children as you typically do". Ninety-four per cent of the videos collected in this sample were more than nine minutes long. The father's and mother's recordings could be made on the same day or on different days, within a maximum period of one week. Both parents chose what to play with their child. Some games and materials were suggested in the brief guide: for example, books, toy animals, kitchens, little dolls, or building blocks.
We opted for video recording and self-recording because video is considered advantageous compared with live observation and live coding, as it permits researchers to pause and replay the activity as often as necessary for a thorough analysis of the data [94], and self-recording avoids the interference of a third person's presence. Parents were aware of the overall aims of the research, but they did not know which specific behaviors were being analyzed.
Finally, the videos were collected and scored according to the PICCOLO criteria by a small group of psychologists and specialists in child development. The first author of this paper, who had been trained by the authors of the PICCOLO, trained the group of raters for this study. The trainees read the PICCOLO manual and watched and discussed the scores for four video recordings with the expert coder. After the training sessions, each observer scored four to six additional video recordings in order to establish reliability prior to collecting study data. Observers were considered to have completed their training when they presented an interrater agreement of 80% or more with the expert coder, following the same criteria as the PICCOLO user's guide [18]. Each coder scored roughly 20 video recordings selected randomly, including mothers and fathers from the same or different families. Only videotapes that complied with the researchers' instructions were scored (99% were deemed satisfactory).
Data analysis
Differences in mean PICCOLO item scores between mothers and fathers of the same child were compared via the Wilcoxon signed-ranks test for paired samples, while their differences on mean domain and total PICCOLO scores were compared via Student's t-test for paired samples. Effect size was calculated using Cohen's d. The relationship between mothers' and fathers' parenting scores was analyzed by computing Pearson's correlation coefficients.
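The paired comparisons described above (paired t statistic and Cohen's d) can be sketched in plain Python. The score lists below are hypothetical, and the d variant shown is the mean difference divided by the SD of the differences; the paper does not state which d formula was used:

```python
from math import sqrt
from statistics import mean, stdev

def paired_stats(x, y):
    """Paired t statistic and Cohen's d for two matched score lists.

    d here is mean(diff) / sd(diff) (sometimes called d_z); other
    variants pool the two groups' SDs instead.
    """
    diffs = [a - b for a, b in zip(x, y)]
    sd = stdev(diffs)                        # sample SD of the differences
    t = mean(diffs) / (sd / sqrt(len(diffs)))
    d = mean(diffs) / sd
    return t, d

# Hypothetical item scores (0-2) for five mother-father pairs
mothers = [2, 1, 2, 1, 2]
fathers = [1, 1, 2, 0, 1]
t, d = paired_stats(mothers, fathers)
print(round(t, 2), round(d, 2))  # 2.45 1.1
```

In practice a library routine (e.g. a paired t-test function in a statistics package) would also return the p-value; the sketch only reproduces the two statistics reported in the text.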
For categorical sociodemographic variables, mean parenting scores were compared via Student's t-test (for comparing two means) or via robust Brown-Forsythe ANOVA (for more than two means). The relationship between parenting scores and demographic variables was examined via Pearson product-moment correlation coefficients (for continuous variables) or via Spearman correlation coefficients (for ordinal variables). Missing data were handled by pairwise deletion. IBM SPSS (version 24.0 for Windows) was used for all the statistical analyses.

Results

Table 1 presents descriptive statistics (means and standard deviations) of PICCOLO item scores for mothers and fathers separately. One of the items in the Affection domain (item 5), one in the Encouragement domain (item 6), and five in the Teaching domain (items 1, 2, 5, 6 and 7) showed mean scores lower than 1 in both mothers and fathers, which indicates that the corresponding behavior was barely observed in either parent. The order of the three items with the highest means was the same for both mothers and fathers: two items in the Affection domain ("Speaks in a warm tone of voice", "Is physically close to child") and one item in the Responsiveness domain ("Pays attention to what child is doing"). This indicates that these parenting behaviors were the ones most clearly observed in the majority of mothers and fathers.
Mothers' and fathers' parenting
Differences in mean PICCOLO item scores between mothers and fathers of the same child were compared via the Wilcoxon signed-ranks test for paired samples. Mothers showed higher mean scores than fathers (p < .05) in two items from the Affection domain, one from the Encouragement domain, and one from the Teaching domain (see Table 1). This result means that positive parental behaviors corresponding to those items were more frequently observed in mothers than fathers.

Table 2 presents descriptive statistics (means and standard deviations) of PICCOLO domain and total scores for mothers and fathers. Scores were computed as means, i.e., dividing the sum score by the number of items in each domain. Thus, mean scores for all domains ranged theoretically from 0 to 2, like the item scores, so they have a common interpretation regardless of the number of items they contain. For both sets of parents in this study, all mean scores ranged between 1 (barely, brief, minor, or emerging behaviors were observed) and 2 (clear, definite, strong, or frequent behaviors were observed), except for the Teaching domain, which showed a mean lower than 1. In other words, both mothers and fathers tend to show positive parenting behaviors (Affection, Responsiveness, Encouragement) with their children, except for teaching behaviors, which were rarely observed.
PLOS ONE
Differences in means on each domain and the total PICCOLO scores between mothers and fathers of the same child were compared via Student's t-test for paired samples. As also shown in Table 2, mothers showed higher mean Affection and Teaching domain scores than fathers. However, the effect size of the differences between mothers' and fathers' mean Affection and Teaching scores can be considered small (d ≤ .20), using Cohen's benchmarks for interpreting effect sizes [95].
PICCOLO mean scores are represented graphically in Fig 1. As can be observed, mean scores on the four positive parenting domains followed a similar pattern in mothers and fathers: that is, the order of the mean scores was the same. For both parents, the highest mean score corresponded to the Affection domain, followed by the Responsiveness and Encouragement domains; the lowest mean scores were on the Teaching domain (M < 1).
The relationship between mothers' and fathers' parenting scores was analyzed by computing Pearson's correlation, which showed a statistically significant positive correlation between the two sets of scores.
Sociodemographic variables and parenting
The associations between positive parenting and each of the variables included in the sociodemographic questionnaire were analyzed. Pearson's correlation coefficients between child age (in months) and mothers' and fathers' PICCOLO scores were computed. The only statistically significant positive correlation was found between child age and scores for the Teaching domain, in mothers (r = .266; p = .013), indicating that mothers' teaching behaviors were more frequently observed with older children. In contrast, none of the fathers' parenting domains was significantly related to child age. With respect to child gender, the Student's t-test for independent samples found no statistically significant differences between boys (n = 58) and girls (n = 29) on PICCOLO mean domain and total scores for mothers (p > .05). This result shows that positive parenting interactions observed in the mothers were not related to child gender. However, the positive parenting interactions observed in the fathers (except for the Affection domain) were more frequently observed for boys than girls.
Comparison of parenting scores across the three child ID severity groups (mild, moderate or severe) was performed via Brown-Forsythe ANOVA. The results indicated that none of the mothers' mean scores on the PICCOLO domains and total scores differed significantly (p > .05) between the three severity groups. Similarly, fathers' mean PICCOLO scores did not differ across the three groups, except for the Teaching domain. The group of fathers with a child with a mild ID (N = 40) obtained mean Teaching scores of 0.94 (SD = 0.44); the mean for the second group (moderate ID, N = 39) was 0.83 (SD = 0.38); and the mean for the third group (severe ID, N = 8) was 0.44 (SD = 0.28). ANOVA results showed statistically significant differences in mean Teaching scores between at least two ID severity groups (F(2, 84) = 5.34; p = .007). Pairwise comparisons analyzed via Tukey's HSD test showed that the mean Teaching scores of the group with severe ID differed significantly from those of the mild and moderate ID groups (p < .05). However, no differences were found between the means for the mild and moderate ID groups (p > .05).
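The reported F can be approximated from the published group summaries alone. The sketch below computes a classic one-way ANOVA F from group sizes, means and SDs; because the summaries are rounded to two decimals, the result (about 5.2) only approximates the reported F(2, 84) = 5.34, which was computed on the unrounded data:

```python
def oneway_F(ns, means, sds):
    """Classic one-way ANOVA F statistic from per-group n, mean and SD."""
    k, N = len(ns), sum(ns)
    grand = sum(n * m for n, m in zip(ns, means)) / N
    # Between-groups mean square: weighted squared deviations of group means
    ms_between = sum(n * (m - grand) ** 2 for n, m in zip(ns, means)) / (k - 1)
    # Within-groups mean square: pooled sample variances
    ms_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds)) / (N - k)
    return ms_between / ms_within

# Fathers' Teaching scores by child ID severity: mild, moderate, severe
F = oneway_F([40, 39, 8], [0.94, 0.83, 0.44], [0.44, 0.38, 0.28])
print(round(F, 2))  # 5.2
```

The Brown-Forsythe variant used in the paper replaces the pooled within-groups denominator with Σ(1 − n_i/N)·s_i² and adjusts the denominator degrees of freedom, so its statistic differs slightly from the classic F shown here.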
Several parents' sociodemographic characteristics were also taken into account in the analysis. With respect to mothers' age, a statistically significant correlation was found with Teaching domain scores (r = .264; p = .013), in that teaching behaviors were more frequently observed in older mothers. However, fathers' age was not significantly related (p > .05) to any of the parenting domains.
In relation to parents' educational level (from 1: Elementary studies to 4: Master's/PhD), statistically significant Spearman correlation coefficients were found for mothers on the Encouragement (r s = .218; p = .042) and Teaching (r s = .339; p = .001) domain scores and the total PICCOLO scores (r s = .280; p = .009). Thus, mothers with higher educational levels performed more positive parenting behaviors (except for the Affection and Responsiveness domains) in their interactions with their children. However, fathers' educational level was not significantly related to their positive parenting behaviors.
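Spearman coefficients like those reported above are simply Pearson correlations computed on ranks, with tied values sharing their average rank. A minimal pure-Python sketch with hypothetical data:

```python
def _ranks(values):
    """1-based average ranks; tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: education level (1-4) vs. a Teaching domain score
edu = [1, 2, 2, 3, 4]
teach = [0.4, 0.6, 0.5, 1.1, 1.3]
print(round(spearman(edu, teach), 2))  # 0.97
```

Rank-based correlation is the appropriate choice for the ordinal predictors used here (education level, income band), since it does not assume equal spacing between categories.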
Mean parenting scores were compared across three groups of mothers' working status via robust Brown-Forsythe ANOVA, for more than two independent means. Differences between the three groups of mothers on mean parenting scores were not statistically significant (p > .05). Most of the fathers were in full-time employment (91%) and thus the sample size of the partially employed (n = 1) and unemployed/homemakers (n = 7) was too small to allow an analysis of the relationship between fathers' employment and parenting.
Marital/partner status was also included in the sociodemographic questionnaire but it was not analyzed, as almost all parents in the study sample were married or living with a partner (98%) and almost all of these couples (97%) were parents of the same child. With respect to family income (from 1: lower than €1,602, to 6: higher than €2,451), statistically significant Spearman correlation coefficients were found between family income and mothers' Teaching domain scores.
Discussion
This study aimed to contribute to the understanding of parenting constructs across gender by including the same measure for mothers and fathers. Our first aim was to explore whether mothers and fathers of children with an Intellectual Disability (ID) differed in terms of parenting dimensions when they were evaluated playing in a natural context at home. Comparative analyses showed small differences between mothers and fathers of the same child. Even though mothers scored higher than fathers on most PICCOLO items, only four items showed statistically significant differences. Previous evidence using the same measure indicated that mothers usually score higher than fathers [96][97][98][99]. What is particularly interesting from our point of view is that the items with the highest and lowest means were the same for both mothers and fathers, and that the order of the PICCOLO subscale scores, from highest to lowest, was also the same. The coincidence of high scores on the same parenting domains in mothers and fathers suggests that their patterns of parenting are similar. Previous studies of mothers and fathers of children with normative development suggest that their parenting behaviors are conceptually very similar [100][101][102][103][104]; equally, our data from mothers and fathers with young children with ID suggest that there are no separate dimensions of fathering and mothering. These results are similar to those recorded in the studies carried out with mothers and fathers of children with ID mentioned in the introduction section [28,43,64,85].
In relation to the PICCOLO's distinct domains, mothers and fathers of children with disabilities engage in more types of affective behavior (closeness, warmth. . .) and fewer teaching behaviors (conversation, play, language and cognitive stimulation. . .). We should bear in mind that this study included very young children, with cognitive and language levels below their chronological age, so these parental behaviors are probably very similar to those shown by fathers and mothers towards much younger children without disabilities [17,105]. The Affection and Teaching domains showed statistically significant differences between mothers and fathers but the effect size of these differences was small. Teaching involves activities that include cognitive stimulation, shared conversation and play, explanations, labeling and joint attention, which are essential for promoting child development [14,106,107]. However, in children with IDs these activities may be difficult to carry out. As mentioned in the introduction, children with IDs may have problems with joint attention and with language and communication, which may contribute to the difficulties establishing good interaction patterns [21]. This is something that early intervention practitioners should take into account.
Most EICs in Spain focus on a child-centered approach, mainly aimed at the rehabilitation of the problems or difficulties of children and their families. Professionals at these centers intervene with the child with disabilities far from their natural environment. Recently, there has been growing interest among professionals in increasing family participation in early intervention as a way to improve early childhood intervention outcomes [108,109]. We strongly believe that the information on behaviors identified by the PICCOLO could help guide professionals' intervention plans, and also that the Teaching domain is one of the aspects that should be emphasized in young children with IDs and their families when following family capacity-building practices [110]. The PICCOLO has been used with children with identified disabilities such as ID or Autism Spectrum Disorder in large samples in the US [11], Spain [12], Saudi Arabia [111] and Germany and Switzerland [36]. These studies have demonstrated this checklist's strong reliability and predictive validity for families with children with a disability; indeed, in two of these studies the early parenting behaviors measured with the PICCOLO predicted cognitive and language outcomes in children with disabilities [11,12]. This underlines the importance of early parenting in children with disabilities, as has also been emphasized by family-centered models in the field of early intervention [112][113][114]. Early intervention professionals should support parents in interactions with their children in natural routines through collaboration with families to promote functional learning and optimal outcomes [115]. Coaching with the PICCOLO increases positive parent-child interaction [116]. When early intervention professionals give feedback to parents about their parenting, both parent skills and child outcomes improve [117,118].
The second aim of this study was to explore the relationship between mothers' and fathers' parenting and family-related demographic variables. Our findings in relation to child age and mothers' and fathers' PICCOLO scores only showed a statistically significant positive correlation between child age and scores for the Teaching domain, in mothers, meaning that mothers' teaching behaviors were more frequently observed with older children. These results corroborate those of earlier research on mothers in the general population [17] and mothers with children with a disability [11]. However, none of the fathers' parenting domains was significantly related to child age. This means that parents' parenting behavior does not depend so much on the child's age but on the type of activity in which they engage [63]. Our results did not show an association between parenting interactions in Spanish mothers and child gender, but Spanish fathers presented more positive parenting interactions (except for the Affection domain) with boys than with girls. Some studies in families with children of normal development have shown different parenting styles for sons and daughters [119,120]; few studies have examined this difference in mothers and fathers of children with an ID. A relevant finding was that the mothers' parental behaviors did not vary according to the child's degree of disability. Almost the same could be said of fathers, with the exception of the Teaching domain, in which fathers of children with severe ID showed significantly lower scores. We believe that engaging in Teaching interactions is particularly difficult for these fathers, because it tends to be the mothers who attend the early Intervention Centers, and the service providers are more likely to provide mothers with advice regarding behaviors in the Teaching domain (for example, "Labels objects or actions to the child"; "Repeats or expands child's words or sounds"; or "Asks child for information"). 
Father's teaching is a very important parenting domain; previous research by our group has shown that it is related to cognitive development in children with ID [12]. These results suggest that specific strategies should be developed to involve fathers in this parenting domain in the intervention at the EIC. As noted above and also in previous research [121,122] EI service providers may have limited understanding of effective strategies for involving fathers, because mothers continue to be the main participants at these centers. Regarding other parental factors, the research suggests that parental age is related to parenting behaviors, in the sense that older parents seem to be more likely to raise children with greater emotional stability [123]. Our results showed that fathers' age was not significantly related to any of the parenting domains but that mothers' age was correlated with Teaching domain scores, thus corroborating the results of a similar study [124]. However, more longitudinal research is needed to validate these results, especially in older fathers and mothers of children with disabilities. Regarding level of schooling, our results showed that mothers with higher educational levels showed more positive parenting behaviors in their interactions with their children than those with lower levels. However, a higher level of schooling among fathers was not significantly related to positive parenting behaviors. These findings are at odds with those of previous reports [125,126]: they may be due in part to role specialization, whereby mothers are more likely to be engaged in childcare [127] or there may be other factors such as family income which, as our data have shown, are significant. Finally, with respect to the amount of time that parents dedicate to childcare, our results show that mothers spend more time carrying out childcare activities than fathers, although this difference is more evident on weekdays than on weekends. 
We know that fathers today spend more time with their children [128], but the amount of time that parents spend with their children depends above all on whether they work inside or outside the home. In our sample, there was a major difference between mothers and fathers in this variable: 91% of fathers worked outside the home, while 18% of mothers did not work and 31% worked only part-time in order to be able to take care of their children. These results confirm the findings of previous studies regarding the influence of the employment status of parents of children with IDs [74,129], and may account for the differences between mothers and fathers. It is a fact in Spain that when a child with a disability is born into a family, it is usually the mother who stops working to take care of the child. But this does not necessarily mean that father-child interactions are less positive; fathers can compensate for a smaller quantity of time by spending higher "quality" time with their children [54]. We are convinced of the importance of including fathers in research and in intervention programs, and in fact this study confirms the findings of previous studies regarding the similarity in the dimensions of mothers' and fathers' parenting [63,100,103]. In Spain, as in other countries [68,130,131], fathers are noticeably absent in EICs. This study highlights the need for paternal involvement in EI, since we know that fathers, jointly with their partners, have a positive impact on the development of children with disabilities [68,75]. Few studies have examined the measurement equivalence of parenting dimensions in mothers and fathers with a child with ID. This is necessary if we want to have a full picture of parenting and its influence on the development of children, especially in families with children with ID.
Limitations and future directions for research
This study has several limitations that should be taken into consideration. The first is the selection of the sample. Although participants were recruited at Early Intervention Centers through contact with the center coordinators, the procedure used to select the participants may have been conditioned by the willingness of families to participate [132]; probably, the parents who took part were the ones who were most knowledgeable about child development, and most aware of the importance of parental interactions, or even the most confident about their parenting skills. Similarly, it may be that the parents who were more worried about their child's development were reluctant to participate. Second, this is a descriptive study and there were no observational measures of parenting over time. Mothers' and fathers' patterns of parenting behavior may change as time passes depending on factors such as the age of the child, family structure, socioeconomic status and employment [43,54,129]. Neither the behavior of the child, nor its influence on the adult's behavior, was directly analyzed. Although the PICCOLO items refer to parental behaviors in the context of adult-child interaction, and therefore reflect to a certain extent the adult's responses to the child's behavior, future studies might incorporate a dyadic analysis that includes a coding of the observed child's behavior. In addition, in this study we did not analyze whether the parental behaviors of mothers and fathers predict children's subsequent cognitive and linguistic development. The relation between parenting and child development needs to be explored, as we stress in an earlier study by our group [12]. Nor did we include measures of the quality of mothers' and fathers' parenting behaviors especially if both parents live together (as they did in practically all our cases). 
We asked about the amount of time parents spent with their sons and daughters, but we did not ask about the types of activities they engaged in. In future studies the interactive effects of mothers' and fathers' parenting behavior on children should be analyzed. This is a very important issue, especially in families with children with an ID, but it is something that is difficult to assess. Further, our sample size was modest, and so our findings require replication before we can draw conclusions about the similarities and differences of parenting behaviors in mothers and fathers of the same child with ID. Finally, we stress that this study is based on a non-experimental (correlational or quasi-experimental) methodology, which is able to suggest the existence of relationships but cannot establish the direction of the causality of the relationships it identifies.
Insulin glargine/lixisenatide fixed‐ratio combination (iGlarLixi) compared with premix or addition of meal‐time insulin to basal insulin in people with type 2 diabetes: A systematic review and Bayesian network meta‐analysis
To assess the efficacy and safety of iGlarLixi, a fixed‐ratio combination of insulin glargine 100 U/mL and lixisenatide, relative to premix insulin and other insulin options through network meta‐analysis.
| INTRODUCTION
Basal insulin is commonly recommended when glycaemic control can no longer be achieved with oral or other injectable glucose-lowering drugs.
Individual randomized controlled trials (RCTs) of basal insulin report that up to 70% of insulin-naïve people with type 2 diabetes (T2D) may reach HbA1c levels of less than 7.0% (<53 mmol/mol). 1,2 However, in a pooled analysis of 45 RCTs, only 40% (7.5%-70%) of individuals achieved this. 3 This is similar to data from clinical practice, where 38% of insulin-naïve people achieve an HbA1c level of less than 7.0% in the first year after starting basal insulin, and only 8% thereafter. 4 The 2018 ADA/EASD consensus report recommends a glucagon-like peptide-1 receptor agonist (GLP-1RA) as the first injectable therapy for most patients unless basal insulin is preferred by or is more suitable for the individual patient (e.g. in those with HbA1c > 11.0% [>97 mmol/mol]). 6,7 When the HbA1c target is not achieved with either injectable therapy with or without oral glucose-lowering drugs (OGLDs), the ADA statement recommends combining the GLP-1RA and basal insulin as a preferred alternative to adding a meal-time insulin. 8 RCT data show that the combination of GLP-1RA and basal insulin therapies usually leads to effective glucose lowering with a moderate risk of hypoglycaemia and modest weight gain. 6,9,10 However, real-world evidence shows that less than 50% of patients reach their target, leaving a high unmet medical need. 11,12 The fixed-ratio combinations (FRCs) of basal insulin and GLP-1RAs provide the benefit of single administration, and may be expected to promote better adherence compared with regimens requiring separate injections of the insulin and GLP-1RA. [13][14][15][16] iGlarLixi, an FRC containing insulin glargine 100 U/mL and lixisenatide in a disposable pen-injector, reduced HbA1c and attenuated insulin-related body weight gain versus basal insulin without increasing the risk of hypoglycaemia in people whose HbA1c was inadequately controlled on basal insulin (with or without OGLDs). 
17 Although RCTs have established the benefits of iGlarLixi compared with basal insulin, 17 its efficacy and safety relative to premix insulin and to the addition of meal-time insulin have not been compared directly; a systematic review and network meta-analysis (NMA) was therefore performed. The literature search (Table S1) 19 included keywords and MeSH headings for T2D, basal insulin, premix insulin, iGlarLixi and inadequate glycaemic control (Tables S1 and S2). In addition, a search of conference abstracts via Embase (January 2014 to June 2018 inclusive) and a manual search of the reference lists of eligible studies was performed.
Trial inclusion was guided by predefined criteria for the PICO design. Trials were included if they: (a) had an adult (aged ≥18 years) T2D population in which participants had previously been treated with basal insulin alone or in combination with an OGLD, but still had an HbA1c level of ≥7.0% (≥53 mmol/mol); (b) compared iGlarLixi, premix insulin, or a basal insulin either alone or in combination with OGLDs or meal-time insulin (1-3 times per day); (c) had treatment arms of ≥20 weeks; and (d) reported at least one of the following: change in HbA1c, proportion of participants reaching an HbA1c target of ≤7.0% (≤53 mmol/mol), total insulin dose, change in body weight, hypoglycaemia or gastrointestinal adverse events. Trials were excluded if they (a) included people who had type 1 diabetes or an HbA1c level of less than 7.0% (<53 mmol/mol) at baseline or were using meal-time insulin, or (b) compared interventions of interest (iGlarLixi, premix insulin or basal insulin) with any non-intervention or placebo. Comparisons of different premix insulin, basal insulin or meal-time insulin regimens with one another were also excluded. Further details are given in Table S1.
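The screening rules above can be sketched as a simple predicate. The `Trial` record and its field names are hypothetical, introduced here only for illustration; the authoritative criteria are those stated in the text and in Table S1:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    adult_t2d_on_basal: bool      # (a) adults (>=18 y) with T2D previously on basal insulin +/- OGLDs
    baseline_hba1c_pct: float     # mean baseline HbA1c, %
    eligible_comparators: bool    # (b) iGlarLixi, premix, or basal +/- OGLDs / meal-time insulin
    arm_duration_weeks: int       # (c) treatment-arm duration
    reports_outcome: bool         # (d) reports >=1 outcome of interest
    type1_or_mealtime_at_baseline: bool = False  # exclusion (a)
    placebo_or_no_intervention: bool = False     # exclusion (b)

def include(t: Trial) -> bool:
    """Apply inclusion criteria (a)-(d) and the two exclusion rules."""
    return (t.adult_t2d_on_basal
            and t.baseline_hba1c_pct >= 7.0
            and t.eligible_comparators
            and t.arm_duration_weeks >= 20
            and t.reports_outcome
            and not t.type1_or_mealtime_at_baseline
            and not t.placebo_or_no_intervention)

# A 30-week trial in adults on basal insulin with baseline HbA1c 8.1% passes the screen
print(include(Trial(True, 8.1, True, 30, True)))  # True
```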
Standard methodology for systematic reviews, as defined in the Cochrane Handbook for Systematic Reviews of Interventions, was used. 19 Results are reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 20 Included trials were assessed for risk of bias using the Cochrane Collaboration's tool. 21
| Statistical methods
Data were analysed using Bayesian NMA to combine both direct and indirect evidence to compare the efficacy and safety of iGlarLixi relative to premix insulin, basal insulin and meal-time + basal insulin regimens. A very weak, non-biased prior was set automatically between each direct comparison in a network for each outcome analysed. 22 We performed 5000 burn-in iterations followed by 20 000 sampling iterations with a thinning of 10 for two chains. Convergence was assessed using the Brooks-Gelman-Rubin method. 23,24 Both fixed- and random-effects models were used. For the random-effects model, the standard deviation was sampled from a uniform distribution. Model selection was based on the Deviance Information Criterion (DIC). 25,26 When the DIC values were within five points of each other, the random-effects model results were preferred. 27 The baseline characteristics (age, sex, inclusion and exclusion criteria, treatment period, duration of diabetes, body mass index [BMI], weight, HbA1c and fasting plasma glucose) of the included studies were qualitatively reviewed. The direct pairwise comparison results for the two studies comparing basal-plus versus premix and for the two studies comparing intensified basal versus premix were also qualitatively reviewed. Network inconsistency was tested using the Bucher method to measure differences in the direct and indirect estimates within closed loops in the networks. 28 Change in HbA1c levels, total final insulin dose and change in body weight were modelled using a normal likelihood and identity link function and were represented as mean differences with associated 95% credible intervals (CrIs). A target HbA1c of less than 7.0% (<53 mmol/mol), hypoglycaemia and gastrointestinal events were modelled using binomial likelihoods and a logarithmic link function and were represented as risk ratios (RRs) with associated 95% CrIs.
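For orientation, the (Brooks-)Gelman-Rubin convergence check compares within-chain and between-chain variance of the sampled parameter; values near 1.0 indicate that the chains have mixed. A minimal sketch for equal-length chains follows (illustrative only; the paper performed this diagnostic via the GeMTC/JAGS tooling):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for an (m, n) array of m
    equal-length MCMC chains. Values close to 1.0 suggest convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled posterior-variance estimate
    return np.sqrt(var_hat / W)

# Two well-mixed chains drawn from the same distribution give R-hat ~ 1
rng = np.random.default_rng(0)
chains = rng.normal(loc=0.5, scale=1.0, size=(2, 20_000))
print(round(float(gelman_rubin(chains)), 2))  # ~1.0 for converged chains
```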
Probability thresholds were used to determine whether a given therapeutic approach was better, probably better, comparable, probably worse or worse. Probability of better (P.better) was calculated as the proportion of Markov chain Monte Carlo cycles in which the specific treatment estimate was numerically better than the comparator. A treatment option was taken to be 'more effective' than the comparator if the point estimate favoured the treatment and the 95% CrI did not include 0.00 (continuous outcomes) or 1.00 (binary outcomes). Other efficacy findings ('likely to be favourable', 'comparable') depended on the P.better being ≥85% or 15%-85%, respectively, but with no requirement for the CrI to exclude 0.00 or 1.00. 29 Additional details on the probability thresholds used for interpretation of the results can be found in Table S3. All analyses were conducted using R 30 (Bell Laboratories, Open Source) and the GeMTC 31 package, the latter using the JAGS program for Bayesian modelling.
| RESULTS
Eight trials were identified for inclusion ( Figure S1). 34 Complete trial and participant characteristics are given in Table 1 and Tables S4-S7.
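The interpretation rules above can be expressed as a small classifier. This is a sketch of the stated thresholds only; the full decision rules are in the paper's Table S3, and the symmetric handling of the 'worse' categories is my assumption:

```python
def classify_nma_result(point, lo, hi, p_better, null=0.0):
    """Classify a treatment-vs-comparator estimate.

    point, lo, hi : point estimate and 95% CrI bounds, oriented so that
                    values below `null` favour the treatment.
    p_better      : share of MCMC cycles in which the treatment beat the comparator.
    null          : no-effect value (0.0 for mean differences, 1.0 for risk ratios).
    """
    if point < null and hi < null:
        return "more effective"            # CrI excludes the null, favouring treatment
    if point > null and lo > null:
        return "less effective"            # CrI excludes the null, favouring comparator
    if p_better >= 0.85:
        return "likely to be favourable"
    if p_better <= 0.15:                   # assumed mirror of the 85% rule
        return "likely to be unfavourable"
    return "comparable"

# e.g. a mean HbA1c difference of -0.5% with 95% CrI (-0.8, -0.2):
print(classify_nma_result(-0.5, -0.8, -0.2, p_better=0.99))  # more effective
```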
| Evidence base
Assessments for risk of bias conducted on the eight trials 17,[32][33][34][35][36][37][38] showed that there were no low-quality trials. However, as these were studies of injectables, all were open-label, and although the primary outcome (change in HbA1c from baseline) was objective and blinded until analysis, some risk of bias is inherent. Quality assessments are given in the Supporting Information. The baseline characteristics of the included trials were qualitatively similar, and we therefore concluded that the exchangeability assumption was valid.
Deviance Information Criterion (DIC) estimates for the fixed- and random-effects models were within five points of each other ( Figure 3, Table 2, Table S9), and thus the random-effects models were favoured for the primary analysis of all outcomes. Analysis was not feasible for gastrointestinal events because of inconsistent reporting in the non-GLP-1RA trials.
| Total insulin dose
Five trials reported total insulin dose. 17
| Change in body weight
All eight trials reported body weight. Overall, weight benefit relative to premix among participants receiving iGlarLixi was −2.2 (95% CrI
| Hypoglycaemia
Different definitions of hypoglycaemia were used in the ascertained RCTs (Table S3). Four trials reported the number of participants (incidence) who experienced confirmed hypoglycaemia during the trial. 17,32,35,37 Incidence was low and thus credible intervals for relative risk were wide, such that possible differences were not confirmed.
| DISCUSSION
The 2018 Consensus Report by the ADA and EASD recommends that the combination of GLP-1RA and basal insulin may be considered for people with inadequate glucose control while taking a GLP-1RA or using basal insulin. 6,7 In such patients, an FRC can be useful, decreasing the number of medications and the complexity of therapy. When individuals cannot maintain glycaemic control with basal insulin, conventional practice has been to move to a multiple-daily insulin injection regimen or to premix insulin. 39 The latter remains a commonly used option globally. 40 The results of the current study suggest that the FRC of GLP-1RA and basal insulin may be more favourable than premix ('more effective'). It also has a high probability of an advantage in three domains (HbA1c, hypoglycaemia and weight gain) compared with adding meal-time insulin to basal insulin ('likely to be favourable'). Two relevant trials published after the search period showed results similar to those of the studies already included in the present NMA. The first compared once-daily insulin degludec/insulin aspart (premix) versus once-daily insulin glargine 100 U/mL plus once-daily insulin aspart (basal-plus) over 26 weeks. 43 This study would add to the link between the intensified basal and premix nodes (Figure 2). Similar to Tinahones et al. 37 and Vora et al., 38 Philis-Tsimikas et al. found that although both treatment regimens afforded similar glycaemic control, premix was favoured because of significantly less nocturnal hypoglycaemia than with insulin glargine 100 U/mL plus once-daily insulin aspart. 43 The second, very recent, publication compared once-daily iGlarLixi versus once-daily insulin glargine 100 U/mL, both with metformin, over 26 weeks, in a single country. 44 This study would add to the link between the iGlarLixi and intensified basal nodes ( Figure 2). The results were similar to those shown by Aroda et al., 17 where iGlarLixi was favoured over insulin glargine 100 U/mL in terms of improving glycaemic control, with no increased risk of hypoglycaemia. 44 The strengths of the current study include formal evidence extraction from the original publications and systematic data reporting and analysis. The analyses were subject to some limitations. First, the selection of outcomes was limited to those evaluated in the LixiLan-L trial. 17 Indeed, the network link between iGlarLixi and the other therapies depended on the findings of that one trial, and the other links depended on just one or two trials each. While insulin glargine 100 U/mL as basal insulin was both the common link and the more widely used insulin in that role, the premix and meal-time comparators varied between trials, as did the oral agents allowed in the study designs.
However, the sensitivity analysis did not suggest notable problems arising from inclusion of different meal-time insulin regimens in the same treatment node. Second, the RCT evidence was from comparatively short-term trials, whereas T2D is progressive over years. Overall, compared with these alternatives, iGlarLixi is at least comparable, with a higher probability of being favourable in these same outcome domains. FRCs offer patients who wish to use GLP-1RAs and basal insulin the opportunity to do so with a less complex regimen of a single daily injection, convenient dose timing and less plasma glucose monitoring than is necessary when meal-time insulin is added to basal insulin. These factors may lead to improved quality of life and treatment adherence.
ACKNOWLEDGMENTS
The authors are grateful to Doctor Evidence and Amanda Justice for medical writing support funded by Sanofi. The systematic review and statistical analysis were funded by Sanofi and performed by Doctor Evidence.
CONFLICT OF INTEREST
The
PEER REVIEW
The peer review history for this article is available at https://publons. | 2020-07-24T13:05:37.975Z | 2020-07-22T00:00:00.000 | {
"year": 2020,
"sha1": "e8a20c9c0e6fa9859507093968ff736916183ed1",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/dom.14148",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "b29ba8db550c24de541c5aa89ac14808c856671f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
35433605 | pes2o/s2orc | v3-fos-license | Lipodystrophy Due to Adipose Tissue–Specific Insulin Receptor Knockout Results in Progressive NAFLD
Ectopic lipid accumulation in the liver is an almost universal feature of human and rodent models of generalized lipodystrophy and is also a common feature of type 2 diabetes, obesity, and metabolic syndrome. Here we explore the progression of fatty liver disease using a mouse model of lipodystrophy created by a fat-specific knockout of the insulin receptor (F-IRKO) or both IR and insulin-like growth factor 1 receptor (F-IR/IGFRKO). These mice develop severe lipodystrophy, diabetes, hyperlipidemia, and fatty liver disease within the first weeks of life. By 12 weeks of age, liver demonstrated increased reactive oxygen species, lipid peroxidation, histological evidence of balloon degeneration, and elevated serum alanine aminotransferase and aspartate aminotransferase levels. In these lipodystrophic mice, stored liver lipids can be used for energy production, as indicated by a marked decrease in liver weight with fasting and increased liver fibroblast growth factor 21 expression and intact ketogenesis. By 52 weeks of age, liver accounted for 25% of body weight and showed continued balloon degeneration in addition to inflammation, fibrosis, and highly dysplastic liver nodules. Progression of liver disease was associated with improvement in blood glucose levels, with evidence of altered expression of gluconeogenic and glycolytic enzymes. However, these mice were able to mobilize stored glycogen in response to glucagon. Feeding F-IRKO and F-IR/IGFRKO mice a high-fat diet for 12 weeks accelerated the liver injury and normalization of blood glucose levels. Thus, severe fatty liver disease develops early in lipodystrophic mice and progresses to advanced nonalcoholic steatohepatitis with highly dysplastic liver nodules. The liver injury is propagated by lipotoxicity and is associated with improved blood glucose levels.
Nonalcoholic fatty liver disease (NAFLD) is a common manifestation of diabetes, obesity, and metabolic syndrome. NAFLD also occurs in human and rodent models of lipodystrophy (1)(2)(3)(4). In the context of obesity and metabolic syndrome, NAFLD may progress to include liver inflammation or nonalcoholic steatohepatitis (NASH), fibrosis, and occasionally, hepatocellular carcinoma (5), whereas not much is known about the natural history of liver disease in the context of lipodystrophy.
Metabolic syndrome is present in 80% of patients with lipodystrophy, almost universally manifesting as severe insulin resistance, profound hypertriglyceridemia, and ectopic lipid accumulation (6). One major difference between this form of metabolic syndrome and that associated with obesity is in the amount of adipose tissue, which is reduced in lipodystrophy, resulting in low levels of leptin (7), whereas obesity is associated with increased adiposity and high leptin levels. In regards to other metabolic outcomes, including ectopic fat accumulation in the liver, lipodystrophy resembles an extreme version of the obesity-associated metabolic syndrome (8).
Whether lipid accumulation in the liver is a sign of a disease or a physiologic response in patients who have minimal capacity to store lipids in the adipose tissue is unknown. In lipodystrophic patients, the liver fat decreases with leptin treatment (9,10), suggesting that at least a portion of stored liver fat may be used for ketogenesis (11). What processes mediate ketogenesis in the setting of minimal adipose tissue lipid stores are unknown, but it is interesting to note that serum levels of the ketogenic hormone, fibroblast growth factor 21 (FGF21), are elevated with lipodystrophy in patients infected with HIV-1 (12) and that the liver is the main source of elevated serum FGF21 levels in mice with lipodystrophy (13).
The liver disease in some patients with generalized lipodystrophy can progress to liver cirrhosis requiring liver transplantation (14). The most characteristic feature of fatty liver disease associated with lipodystrophy is prominent steatosis with hepatocyte balloon degeneration (8,15), pointing to lipotoxicity as an important pathway of liver injury. Chronic liver injury resulting in cirrhosis and liver failure can be associated with reduced hepatic glucose production leading to hypoglycemia (16). What effects liver lipotoxicity may have on hepatic glucose homeostasis in the setting of lipodystrophy is unknown.
Here, we show that mice lacking insulin receptor (IR) or both IR and insulin-like growth factor 1 receptor (IGF1R) in adipose tissue display a lipodystrophic phenotype associated with severe diabetes and NAFLD. The liver disease in these mice progresses to NASH, fibrosis, and highly dysplastic liver nodules. Interestingly, this progressive liver disease is associated with improvement of hyperglycemia in these mice with age, despite the persistent insulin resistance and a normal ability to mobilize stored glycogen in response to glucagon. The liver injury in these lipodystrophic mice is propagated by lipotoxicity, and high-fat diet (HFD) feeding accelerates the progression of liver disease, leading to normalization of blood glucose levels.
RESEARCH DESIGN AND METHODS
All protocols were approved by the Joslin Diabetes Center Institutional Animal Care and Use Committee and were in accordance with National Institutes of Health guidelines.
Animals and Diets
Fat-specific IR, IGF1R, and IR/IGF1R knockout (KO) mice (F-IRKO, F-IGFRKO, and F-IR/IGFRKO mice, respectively) were generated as described (17). Mice were housed at 20-22°C on a 12-h light/dark cycle with ad libitum access to water and food. For the HFD experiments, the mice were fed a chow diet (Mouse Diet 9F, PharmaServ) until 8 weeks of age and then were continued on chow or switched to a 60% HFD (D12492, Research Diets) for an additional 12 weeks.
Glucose Tolerance Tests and Glucagon Response
Glucose tolerance tests were performed on overnight-fasted mice injected with dextrose (2 mg/g i.p.). Glucagon response was assessed in overnight-fasted mice injected with glucagon (1 unit/kg; G2044, Sigma-Aldrich), with blood glucose measured at 0, 15, 30, 60, and 120 min. Glucose levels were measured using an Infinity glucose meter (US Diagnostics).
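For orientation, the weight-based dosing above translates into injection volumes as follows. This is a worked-arithmetic sketch only; the 20% (D20) dextrose solution is the one mentioned for the in vivo glucose uptake experiment, and the function name is hypothetical:

```python
def injection_volume_ml(body_weight_g, dose_mg_per_g=2.0, solution_pct=20.0):
    """Volume of dextrose solution needed for a weight-based i.p. dose.
    A 20% (D20) w/v solution contains 200 mg/mL."""
    dose_mg = dose_mg_per_g * body_weight_g
    conc_mg_per_ml = solution_pct * 10.0  # % w/v -> mg/mL
    return dose_mg / conc_mg_per_ml

# A 25 g mouse at 2 mg/g with D20: 50 mg / 200 mg/mL = 0.25 mL
print(injection_volume_ml(25.0))  # 0.25
```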
Tissue and Serum Analyses
Tissues were stored frozen or fixed in formalin, and sections were stained with hematoxylin and eosin (H&E) or periodic acid Schiff (PAS). mRNA extraction and quantification were performed as previously described (18). Serum parameters were determined by the Joslin Diabetes Research Center assay core using commercial kits. Hormones were assessed by ELISA, adipokines were measured by a multiplex assay, and alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were assessed by colorimetric assays. Triglycerides from liver samples were measured with a triglyceride quantification kit (Abnova), as previously described by Debosch et al. (19).
Immunohistochemistry and Immunofluorescence
Immunohistochemistry (IHC) was performed on formalin-fixed sections and immunofluorescence (IF) on frozen sections. In brief, paraffin-embedded, formalin-fixed sections were deparaffinized and rehydrated. Antigen was retrieved by boiling in citrate buffer (H-3300, Vector Laboratories) and incubation in Triton-X, and sections were digested with proteinase K. Sections were blocked in goat serum and incubated overnight at 4°C for IHC with primary antibodies against β-catenin (D13A1, Cell Signaling), F4/80 (ab6640, Abcam), or 4-hydroxynonenal (4-HNE) (ab46545, Abcam). Subsequently, slides were incubated with biotinylated secondary antibody, and the reaction was developed using avidin/streptavidin horseradish peroxidase (PK-4001, Vector Laboratories). For IF, slides were incubated overnight with primary antibodies against Ki67 (556003, Becton-Dickinson) and phosphatidylinositol 3,4,5-trisphosphate (PIP3) (P-0008, Echelon), followed by a secondary antibody labeled with Texas Red (Vector Laboratories) or Alexa Fluor (Invitrogen) and counterstained with DAPI mounting media (H-1500, Vector Laboratories). Dihydroethidium (DHE) staining was performed on frozen liver sections fixed with 4% paraformaldehyde and stained with a 1:1,500 dilution of DHE (10 mg/mL stock) for 10 min at 37°C. Slides were washed twice in PBS and coverslipped. Images were captured using an Olympus BX60 fluorescence microscope.
In Vivo Glucose Uptake
After 12 weeks of the HFD, control, F-IRKO, and F-IR/IGFRKO mice were injected with glucose (2 mg/g i.p., D20) combined with 0.33 μCi [14C]2-deoxyglucose per gram of body weight. After 15 min, [14C] levels in the liver and quadriceps muscle were determined, as previously published (20).
Statistical Analyses
Data are presented as mean ± SEM and were analyzed by unpaired two-tailed Student t test or ANOVA, as appropriate.
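These reporting and testing conventions can be reproduced as follows. The numbers below are made up for illustration and are not the study's raw data:

```python
import numpy as np
from scipy import stats

def summarize(group):
    """Mean ± SEM, as reported in the paper (SEM = SD / sqrt(n))."""
    g = np.asarray(group, dtype=float)
    return g.mean(), g.std(ddof=1) / np.sqrt(g.size)

# Hypothetical liver-weight data (g), for illustration only
control = [1.0, 1.1, 0.9, 1.2, 1.0]
firko   = [3.8, 4.2, 4.1, 3.9, 4.0]

for name, g in [("control", control), ("F-IRKO", firko)]:
    m, sem = summarize(g)
    print(f"{name}: {m:.2f} ± {sem:.2f} g")

# Unpaired two-tailed Student t test for two groups;
# stats.f_oneway would be the ANOVA analogue for >2 groups
t, p = stats.ttest_ind(control, firko)
print(f"t = {t:.1f}, P = {p:.2g}")
```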
Lipodystrophic Mice Develop Fatty Liver Disease and Lipotoxicity
Mice with fat-specific KO of the IR, IGF1R, or both, were generated by breeding mice with IR and IGF1R floxed alleles (18) and mice carrying a Cre-recombinase transgene driven by the adiponectin promoter (21). Whereas KO of the IGF1R had minimal effect on white and brown fat development, F-IRKO and F-IR/IGFRKO mice displayed, virtually from birth, a profound reduction in weights of subcutaneous and visceral white adipose tissue depots (17). Lipodystrophy was associated with hyperglycemia ( Fig. 1A) and severe diabetes at 12 weeks of age. At this time, liver weights of both F-IRKO and F-IR/IGFRKO mice were increased four- to fivefold above the control, whereas the liver weight of F-IGFRKO mice remained normal (Fig. 1B). Hepatomegaly in F-IRKO and F-IR/IGFRKO mice was associated with a three- to fivefold increase in liver triglyceride accumulation ( Fig. 1C and D). There were also two- to fivefold increases in the expression of genes, such as Acc1, Fas, and Scd1, involved in de novo lipogenesis (Fig. 1E). However, at this age, these mice did not display an increase in expression of Tnf-α and F4/80, although there was some increase in the macrophage marker CD11c (Fig. 1F). F-IRKO and F-IR/IGFRKO mice also did not show significant fibrosis as assessed by trichrome, Sirius red, and reticulin stains ( Supplementary Fig. 1), and mRNA levels of Tgf-β and αSma were not elevated. Col1a1 was increased in F-IR/IGFRKO mice; however, this may be a sign of stellate cell activation in the absence of other markers of fibrosis (22) (Fig. 1G). Histological assessment of liver at 12 weeks of age confirmed that lipodystrophic F-IRKO and F-IR/IGFRKO mice exhibited micro- and macrovesicular steatosis and the presence of balloon degeneration, with no significant inflammation or fibrosis (Table 1). Hepatocyte balloon degeneration is indicative of lipotoxicity and is associated with a progressive form of NAFLD (23,24). 
By 12 weeks of age, reactive oxygen species levels and lipid peroxidation, as assessed by DHE and 4-HNE stains, respectively, were also increased in the livers of F-IR/IGFRKO mice (Fig. 1H). F-IRKO and F-IR/IGFRKO mice exhibited decreased expression of one of the rate-limiting gluconeogenic enzymes, Pepck, whereas the expression of Fbp1 and Pc was actually increased compared with the controls (Fig. 1I). Despite a decrease in Pepck, some gluconeogenic potential was likely preserved because these mice were hyperglycemic compared with the controls. Thus, lipodystrophic F-IRKO and F-IR/IGFRKO mice had profound hepatic steatosis and minimal signs of liver injury, which was primarily a result of lipotoxicity. F-IGFRKO mice, which did not develop lipodystrophy, did not exhibit any of these changes.
Lipodystrophic F-IRKO and F-IR/IGFRKO Mice Can Mobilize Stored Liver Fat
In addition to marked hepatosteatosis, lipodystrophic F-IRKO and F-IR/IGFRKO mice at 12 weeks of age exhibited serum dyslipidemia with elevated circulating triglyceride, cholesterol, and free fatty acid (FFA) levels (17). To determine to what extent these lipids could be mobilized, F-IRKO mice underwent a protocol of overnight fasting and an 8-h refeeding. Fasting resulted in a decrease in blood glucose levels in all mice, although the F-IRKO mice remained hyperglycemic compared with the controls (135 ± 13 vs. 87 ± 8 mg/dL), and refeeding resulted in a return to the marked hyperglycemic levels ( Fig. 2A). Liver weight of F-IRKO mice also decreased by 38%, from 4.0 ± 0.4 to 2.5 ± 0.2 g, after fasting and returned to baseline (3.9 ± 0.2 g) after refeeding, demonstrating the ability of lipodystrophic mice to store and mobilize fat in the liver (Fig. 2B). Liver weight of control mice also decreased with fasting, from 1.1 ± 0.1 to 0.84 ± 0.02 g, and increased to 0.94 ± 0.07 g with refeeding. Serum triglyceride levels, which were elevated in randomly fed F-IRKO mice (230 ± 40 vs. 96 ± 10 mg/dL), decreased to levels even lower than control after an overnight fast (56 ± 7 vs. 108 ± 21 mg/dL), but then returned to baseline after refeeding (Fig. 2C). FFA levels increased with fasting in control mice from 0.7 ± 0.1 to 0.9 ± 0.0 mEq/L but, paradoxically, decreased in F-IRKO mice during fasting (1.2 ± 0.2 to 0.7 ± 0.1 mEq/L), probably reflecting the lack of adipose tissue lipolysis and increased FFA utilization in these mice (Fig. 2D). Insulin levels also decreased with fasting in control and F-IRKO mice, indicating normal pancreatic responses to nutrient intake and blood glucose levels ( Fig. 2E). Likewise, β-hydroxybutyrate levels significantly increased with fasting in control and F-IRKO mice (Fig. 2F), suggestive of normal ketogenesis in lipodystrophic mice, despite the almost complete absence of white adipose tissue. 
Liver FGF21 mRNA was elevated in F-IRKO and F-IR/IGFRKO mice at every age tested (Fig. 2G).
F-IRKO and F-IR/IGFRKO Mice Develop Progressive NAFLD With Aging
The profound hepatomegaly present at 12 weeks of age was already evident by 2.5 weeks of age and continued to advance throughout the lifetime. By 52 weeks of age, livers in F-IRKO mice were 6.5 times heavier, and those in F-IR/IGFRKO mice were 9.2 times heavier, than in the controls (Fig. 3A). Despite the almost complete loss of white adipose tissue in F-IRKO and F-IR/IGFRKO mice, and as a result of the massive hepatomegaly, body weight was not significantly different from the controls at up to 3 months of age and was actually 20% greater than the controls at 52 weeks of age (Fig. 3B). Hepatic triglycerides per milligram of tissue were also increased three- to fivefold in F-IRKO and F-IR/IGFRKO mice as early as 2.5 weeks of age compared with controls and remained elevated at 12 and 52 weeks of age (Fig. 3C). In parallel, steatosis was observed in F-IRKO and F-IR/IGFRKO mice at 2.5 weeks of age, which persisted at 12 and 52 weeks of age ( Supplementary Fig. 2). This correlated with two- to threefold increases in the expression of enzymes involved in de novo lipogenesis (Acc1, Fas, and Scd1) in the lipodystrophic mice compared with controls at 2.5, 12, and 52 weeks of age (Fig. 3D-F). Interestingly, the insulin-activated transcription factor Srebp1c, which regulates lipogenesis, was not elevated despite massive hyperinsulinemia ( Supplementary Fig. 3).
Excessive lipid accumulation in the liver was associated with increased serum ALT in 5-and 12 week-old F-IRKO and F-IR/IGFRKO mice (Fig. 3G). Serum AST levels were also elevated in F-IRKO mice at 5 and 12 weeks of age and further increased in F-IRKO and F-IR/IGFRKO mice at 52 weeks of age (Fig. 3H). F-IGFRKO mice did not develop hepatomegaly or increased levels of liver triglycerides, enzymes of de novo lipogenesis, or serum ALT and AST levels.
One-Year-Old Lipodystrophic Mice Develop Liver Inflammation and Fibrosis
By 1 year of age, inflammatory infiltrates were evident in F-IRKO and F-IR/IGFRKO livers on histological examination (Fig. 4A, top panel), and this was confirmed by increased staining using an antibody to the macrophage marker F4/80 (Fig. 4A, bottom panel). mRNA expression of F4/80 and other macrophage and inflammatory markers, including CD11c and Tnf-α, was also increased three- to eightfold in livers of 1-year-old F-IRKO and F-IR/IGFRKO mice (Fig. 4B), which was not observed at younger ages (Supplementary Fig. 4). Likewise, mRNA expression of the fibrogenic genes αSma, Tgf-β, and Col1a1 was increased two- to eightfold in 1-year-old F-IRKO and F-IR/IGFRKO mice (Fig. 4C), which again was largely not increased at prior assessments ( Supplementary Fig. 5).
At 52 weeks of age, the livers of all F-IRKO and F-IR/IGFRKO mice contained some hepatocytes showing balloon degeneration, and most exhibited many ballooned hepatocytes, with ballooning scores of 2.0 ± 0.0 and 1.3 ± 0.2, respectively (scale of 0-2, Table 1). At 52 weeks of age, F-IRKO and F-IR/IGFRKO mice showed increased inflammation scores of 0.6 ± 0.2 and 1.5 ± 0.2, respectively (scale 0-3, Table 1). The most striking histological difference between 12- and 52-week-old mice was seen in the degree of fibrosis. Fibrosis was not present at 12 weeks of age (Table 1); however, at 52 weeks of age, F-IRKO mice showed stage 1 fibrosis, consisting of interstitial and periportal fibrosis, whereas a more severe degree of interstitial fibrosis (stage 3 of 4) was present in F-IR/IGFRKO mice ( Fig. 4D and Table 1). Liver inflammation or fibrosis did not develop in F-IGFRKO mice at any age.
One-Year-Old F-IRKO and F-IR/IGFRKO Mice Preserve Some Gluconeogenic Potential
As lipodystrophic mice aged and developed progressive liver disease, their blood glucose levels improved (17). We assessed whether impaired hepatic gluconeogenesis caused by progressive liver disease could be responsible for the improved glucose levels in older mice. On the one hand, mRNA levels of two rate-limiting enzymes of gluconeogenesis, G6pase and Pepck, were indeed decreased in the livers of 1-year-old F-IRKO and F-IR/IGFRKO mice. On the other hand, the expression of other gluconeogenic enzymes, such as Fbp1 and Pc, was elevated (Fig. 5A). Interestingly, the relative expression of gluconeogenic enzymes did not change between 12 and 52 weeks of age, except for G6pase, which was elevated in F-IR/IGFRKO mice at 12 weeks but decreased in F-IRKO and F-IR/IGFRKO mice at 52 weeks of age (Fig. 5A). Although the reduction in G6pase and Pepck expression suggests impaired gluconeogenesis, some gluconeogenic potential was preserved, because 52-week-old F-IRKO and F-IR/IGFRKO mice did not develop hypoglycemia with fasting (Fig. 5B). In addition, F-IRKO and F-IR/IGFRKO mice were able to mobilize stored glucose in response to glucagon and even showed an exaggerated response (Fig. 5C). This is consistent with enhanced glycogen stores in the liver of lipodystrophic mice as assessed by PAS staining (Fig. 5D). Improvement in glucose was also not caused by failure of pancreatic α-cells, because serum glucagon levels were elevated in 52-week-old F-IRKO and F-IR/IGFRKO mice (Fig. 5E).
Liver Tumors Develop in Lipodystrophic Mice at 1 Year of Age
With aging, livers of F-IRKO and F-IR/IGFRKO mice developed gross nodularity (Fig. 6A, top); this occurred in the setting of fibrosis (Fig. 6A, middle, and Supplementary Fig. 6). Histological sections of liver stained for Ki67 showed clusters of proliferating cells in F-IRKO and discrete proliferative nodules in F-IR/IGFRKO livers (Fig. 6A, bottom). These nodules contained hepatocytes with increased mitotic activity and large, atypical nuclei, often with multiple, prominent nucleoli, indicative of severe cellular dysplasia (Supplementary Fig. 7A). Livers of F-IR/IGFRKO mice also contained tumor-like malformations of bile ducts, segments of bone with bone marrow elements, and areas of extramedullary hematopoiesis ( Supplementary Fig. 7B-D), as well as increased expression of the tumorigenic markers β-catenin, Afp, and cyclin D1 ( Supplementary Fig. 8A-C). Expression of Pkm2, a rate-limiting enzyme of glycolysis, was also increased in F-IR/IGFRKO livers (Fig. 6B). Pkm2 is normally found in tissues with high glycolytic activity, such as embryonic stem cells and tumor cells, but not in mature hepatocytes (24). (Figure legend: Results are mean ± SEM of five to eight mice per group. *P < 0.05, **P < 0.01, and ***P < 0.001 compared with controls; ##P < 0.01 and ###P < 0.001 compared with fed F-IRKO mice; §P < 0.05 and §§§P < 0.001 between adjacent groups.)
Whole-body VO2 was significantly decreased by 30% in 1-year-old F-IRKO and F-IR/IGFRKO mice, at least partly because of reduced activity of the mice (Fig. 6C and Supplementary Fig. 9). The respiratory exchange ratio (RER) was also lower in these mice compared with controls at 12 weeks of age, but by 52 weeks of age, the RER was above 0.9 in F-IRKO and F-IR/IGFRKO mice and was actually significantly higher than in controls (Fig. 6C). The latter finding indicates a shift in favor of greater glucose than lipid utilization in the lipodystrophic mice, which likely results from the increased glycolytic activity associated with development of the dysplastic hepatic nodules. IR levels were reduced in the livers of F-IR/IGFRKO mice (Fig. 6D), probably as a result of sustained hyperinsulinemia (25). Also, there was increased serine/threonine p-PTEN, decreased total levels of PTEN (Fig. 6D), and increased staining of PIP3 (Fig. 6E), all consistent with development of hepatic neoplasia.
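The RER discussed above is the ratio of CO2 produced to O2 consumed: values near 0.7 indicate predominantly lipid oxidation and values near 1.0 predominantly carbohydrate oxidation. A minimal sketch of the calculation (the 0.85 cutoff is an illustrative simplification, not a threshold used in the study):

```python
def rer(vco2: float, vo2: float) -> float:
    """Respiratory exchange ratio: CO2 produced / O2 consumed (same units)."""
    if vo2 <= 0:
        raise ValueError("VO2 must be positive")
    return vco2 / vo2

def dominant_fuel(r: float) -> str:
    """Crude interpretation: ~0.70 = mostly fat, ~1.00 = mostly carbohydrate.
    The 0.85 cutoff is an arbitrary midpoint chosen for illustration."""
    if r < 0.85:
        return "predominantly lipid oxidation"
    return "predominantly carbohydrate oxidation"
```

With these definitions, the aged lipodystrophic mice (RER > 0.9) fall on the carbohydrate side of the cutoff, consistent with the glycolytic shift described above.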
HFD Accelerates Blood Glucose Normalization
To test to what extent chronic lipotoxicity might be mediating the liver injury and improved glucose levels in aged mice, we fed the HFD (60% fat by calories) to a cohort of 8-week-old control, F-IRKO, and F-IR/IGFRKO mice. Control mice gained weight on the HFD throughout the 12-week study period, whereas F-IR/IGFRKO mice gained body weight initially, but this began to level off after 8 weeks of the diet. F-IRKO mice also initially gained weight but then started losing body weight after 8 weeks of the diet (Fig. 7A). Likewise, blood glucose levels at 8 weeks on the diet began trending downward, so that at 12 weeks, blood glucose levels of F-IRKO and F-IR/IGFRKO mice were not different from the control mice (Fig. 7B). After 12 weeks of the HFD, glucose tolerance of the lipodystrophic mice also improved and was not different from the control mice (Fig. 7C). Liver histology showed marked steatosis and balloon degeneration in F-IRKO and F-IR/IGFRKO mice, whereas control mice did not develop significant steatosis (Fig. 7D). Liver weight of F-IRKO and F-IR/IGFRKO mice was significantly increased compared with the controls at 12 weeks on the HFD (Fig. 7E), and hepatomegaly was as profound as the liver weight of chow-fed F-IRKO and F-IR/IGFRKO mice at 52 weeks of age.
The expression of the gluconeogenic enzymes G6pase and Pepck was again decreased in F-IRKO and F-IR/IGFRKO mice (Fig. 7F), similar to the decreased expression of these enzymes at 52 weeks of age on the chow diet. Conversely, the expression of Fbp1 was not significantly increased, whereas the mRNA levels of Pc, Gapdh, Enol1, and Aldob were increased in F-IRKO and F-IR/IGFRKO mice (Fig. 7F and Supplementary Fig. 10), suggestive of increased glucose flux. When [14C]-2-deoxyglucose uptake was assessed in vivo, uptake in the livers of F-IRKO and F-IR/IGFRKO mice tended to be higher, whereas glucose uptake was decreased in the muscle of F-IR/IGFRKO mice at 12 weeks on the HFD (Supplementary Fig. 11A). Again, normalization of blood glucose levels on the HFD correlated with increased Pkm2 expression in F-IRKO and F-IR/IGFRKO mice (Fig. 7G). Although these livers did not show gross nodularity, the liver parenchyma was heterogeneous in color (Supplementary Fig. 11B). FGF21 mRNA levels were also increased in lipodystrophic mice on the HFD (Fig. 7G), as was the expression of Acc1, Fas, and Scd1 (Fig. 7H). The HFD-challenged lipodystrophic mice also developed signs of liver inflammation, with elevated expression of F4/80, Tnf-α, and CD11c (Fig. 7I), and signs of liver fibrosis with elevated αSma and Col1a1 mRNA (Fig. 7J).
(Displaced legend for Figure 5. Gluconeogenesis at 52 weeks of age. A: mRNA expression of G6pase, Pepck, Fbp1, and Pc in control, F-IGFRKO, F-IRKO, and F-IR/IGFRKO mice at the indicated times from 2.5 to 52 weeks of age, graphed as fold-change over controls. Results are mean ± SEM of five to six animals per group. B: Random-fed and overnight-fasted blood glucose levels of chow-fed control, F-IGFRKO, F-IRKO, and F-IR/IGFRKO mice at 52 weeks of age. Blood glucose levels (C) assessed over 90 min after intraperitoneal glucagon challenge and serum glucagon levels (E) in control, F-IRKO, and F-IR/IGFRKO mice at the indicated times. Results are mean ± SEM of five to six animals per group. *P < 0.05, **P < 0.01, and ***P < 0.001 compared with controls; #P < 0.05 compared with fed mice. D: PAS-stained liver sections from control, F-IGFRKO, F-IRKO, and F-IR/IGFRKO mice at 52 weeks of age. One representative section from five mice per group is shown.)
DISCUSSION
In the current study, we show that adipose-specific deletion of IR or combined deletion of IR and IGF1R induces a generalized lipodystrophy phenotype with profound hepatomegaly, marked steatosis, and increased enzymes of de novo lipogenesis. Early on, this results in increased levels of reactive oxygen species in the liver, augmented lipid peroxidation, and hepatocyte balloon degeneration-all indicative of lipotoxicity. At this age, these mice are able to mobilize stored liver lipids and use different substrates, such that blood glucose levels and liver weight decrease with fasting. Furthermore, lipodystrophic F-IRKO and F-IR/IGFRKO mice are able to robustly increase ketogenesis, perhaps because of increased liver FGF21 expression. Over time, however, lipotoxic effects accumulate, so that lipodystrophic mice develop significant hepatic inflammation and fibrosis by 52 weeks of age. Interestingly, blood glucose levels also normalize at this age, partly because of chronic lipotoxicity with altered expression of gluconeogenic enzymes and development of highly dysplastic liver nodules that results in greater glucose utilization by liver and a dramatic increase in whole-body RER. Normalization of blood glucose levels can be accelerated by feeding mice an HFD for 12 weeks.
The lipodystrophic syndrome observed in F-IRKO and F-IR/IGFRKO mice is similar to human generalized lipodystrophy in many ways. Both are characterized by low leptin levels, marked insulin resistance, hyperlipidemia, and fatty liver disease (26,27). Leptin replacement reduces blood glucose levels and hepatic steatosis in humans with lipodystrophy (8,10), and leptin replacement also normalizes blood glucose levels in F-IRKO and F-IR/IGFRKO mice (17). The effects of leptin are at least partly secondary to reduced food intake and can be mimicked in our lipodystrophic mice with fasting alone.
In humans with generalized lipodystrophy, NAFLD often progresses to NASH and cirrhosis (15), sometimes requiring liver transplantation (14). F-IRKO and F-IR/IGFRKO mice also develop profound and progressive fatty liver disease with massive hepatomegaly (liver weight up to ~25% of body weight), with inflammation, pericellular fibrosis, and an inversion of the ALT-to-AST ratio. Indeed, by 1 year of age, the F-IRKO and F-IR/IGFRKO mice develop overt dysplastic hepatic nodules with severe large-cell dysplasia, frequent mitoses, and elevated tumor markers Afp and β-catenin. This is associated with a decrease in liver PTEN levels and increased accumulation of PIP3, changes that are often seen in liver cancer (28). Thus, the liver injury in our current lipodystrophic model describes a full spectrum of NAFLD progression, including the development of severely dysplastic hepatic nodules.
The liver phenotype in these mice contrasts with that in our previous study of fat-specific IR and IGF1R deletion driven by the aP2-cre promoter, in which mice did not develop NAFLD unless challenged with an HFD (18). The difference likely stems from the fact that deletion of IR, or of IR and IGF1R, driven by aP2-cre only results in moderately reduced adipose tissue mass, leading to improved glucose tolerance and protection from age-related and hypothalamic lesion-induced obesity (29). Lipodystrophic mice generated by KO of Srebp1c using aP2-cre also develop NAFLD with progression to NASH, but have not been reported to develop tumors or dysplastic hepatic nodules (30). However, patients with lipodystrophy are known to develop liver adenomas (personal communication from R. Brown and P. Gorden), and hepatocellular carcinoma developed in at least one patient with acquired generalized lipodystrophy (31).
Our findings indicate that stored fat in the liver is a dynamic depot. Lipodystrophic mice lose ~40% of liver weight with fasting and gain 150% above the fasted liver weight after 8 h of refeeding. Canonical thinking is that liver fat accumulation occurs over time and that additive insults lead to chronic liver injury (32). However, human studies also indicate the dynamic nature of liver fat. As an example, overfeeding for only 3 weeks can increase liver fat by 27%, while total body weight increases by only 2% (33). Furthermore, short-term (2-week) hypocaloric diets are used to reduce liver volume before bariatric surgery (34). In humans, caloric restriction to ~1,100 kcal/day for 48 h can also markedly reduce liver triglyceride content, especially with a low-carbohydrate diet (35). Stored liver fat in lipodystrophic mice is used for ketogenesis, which occurs despite the almost complete absence of adipose tissue. This may be caused by elevated liver expression of FGF21, a peptide that regulates lipid metabolism (36) and hepatic ketogenesis (37,38). Thus, the liver can function as a fat depot, at least in conditions where adipose tissue does not develop. Extensively relying on the liver for lipid storage does come at a cost, however, because hepatic lipotoxicity is present even at a young age and can be accelerated by HFD feeding.
One of the most unexpected aspects of the F-IRKO and F-IR/IGFRKO phenotype is the improvement in blood glucose levels with aging. The improvement in blood glucose is not because of regeneration of adipose tissue or a reduction in food intake (17). It is also not caused by a return of normal leptin levels or improved insulin sensitivity, because insulin levels remained elevated and pancreatic islets continued to hypertrophy. Instead, the improvement parallels the progression of liver disease from NAFLD to NASH, with inflammation, fibrosis, and severely dysplastic nodules. At least three factors appear to contribute to the improved glycemia: The first is some impairment in gluconeogenesis caused by chronic liver disease. Pepck and G6Pase were decreased in livers of 1-year-old F-IRKO and F-IR/IGFRKO mice. Attempts to directly assess gluconeogenesis with a pyruvate challenge, however, resulted in death of the mice, so the extent of impairment is difficult to quantify. Mice tolerated overnight fasting without development of hypoglycemia, and liver glycogen stores were increased.
A second potential contributory factor could have been a failure of pancreatic cells to secrete glucagon or a loss of glucagon receptors in the dysplastic liver. However, serum glucagon levels remained elevated with age, and F-IRKO and F-IR/IGFRKO mice showed a brisk glycemic response to glucagon injection.
The third and perhaps most important factor that could contribute to glucose normalization may be the chronic lipotoxicity and development of severely dysplastic hepatic nodules. Furthermore, expression of PKM2 in whole-liver lysate was increased, contributing to increased glycolysis. This is consistent with the shift from fat to carbohydrate metabolism with aging, as exemplified by the increase in whole-body RER. Lipotoxicity likely mediates the liver injury because HFD feeding accelerates blood glucose normalization. This occurred even without gross evidence of tumor development; however, the key enzyme of glycolysis, PKM2, is profoundly increased in the lipodystrophic mice after only 12 weeks of HFD feeding. Hyperinsulinemia may directly increase PKM2 levels (39) and thus may lead to glucose normalization in the absence of highly dysplastic liver nodules.
In summary, F-IRKO and F-IR/IGFRKO mice provide a unique new model to study the development and progression of fatty liver disease. Using this model, we show that lipodystrophic mice develop a full spectrum of NAFLD, which progresses to NASH, fibrosis, and ultimately, highly dysplastic liver nodules, which is associated with improvement in blood glucose levels. This liver injury can be accelerated by feeding mice an HFD. Taken together, these data indicate that lipotoxicity can play a major role in the development of liver disease. This can then contribute to altered whole-body metabolism, modifying disease pathogenesis in unexpected ways.
This study reports the construction of high density linkage maps of Japanese plum (Prunus salicina Lindl.) using single nucleotide polymorphism markers (SNPs), obtained with a GBS strategy. The mapping population (An x Au) was obtained by crossing cv. “Angeleno” (An) as maternal line and cv. “Aurora” (Au) as the pollen donor. A total of 49,826 SNPs were identified using the peach genome V2.1 as a reference. Then a stringent filtering was carried out, which revealed 1,441 high quality SNPs in 137 An x Au offspring, which were mapped in eight linkage groups. Finally, the consensus map was built using 732 SNPs which spanned 617 cM with an average of 0.96 cM between adjacent markers. The majority of the SNPs were distributed in the intragenic region in all the linkage groups. Considering all linkage groups together, 85.6% of the SNPs were located in intragenic regions and only 14.4% were located in intergenic regions. The genetic linkage analysis was able to co-localize two to three SNPs over 37 putative orthologous genes in eight linkage groups in the Japanese plum map. These results indicate a high level of synteny and collinearity between Japanese plum and peach genomes.
Introduction
The genus Prunus (family Rosaceae) contains more than 200 species, which include some relevant agricultural stone fruit crops such as peach, apricot, cherry, Japanese plum, myrobalan and European plum, among others [1,2]. Japanese plum is a self-incompatible, diploid species (2n = 2X = 16) that produces an edible drupe; it has been cultivated for 4,000 years [2] for fruit production and ornamental purposes. Different centers around the world have begun to perform breeding programs to obtain new cultivars. Usually the new cultivars show a high level of variability for some important traits such as harvest date, fruit size, flesh color and shape [3]. Some genetic studies have been carried out to investigate the inheritance pattern and heritability of flowering date, ripening date, fruit size, higher total soluble solids, sweetness and flavor in the fruit [2,3], while molecular studies have been few and focused only on the ethylene response and skin and flesh colors [4][5][6][7][8][9][10]. The construction of a linkage map is an important tool for genetic studies. This map will allow establishing the organization of a genome as well as relationships between markers and polymorphisms. The mapping information offers evidence about polymorphisms strongly associated with agronomically relevant traits in segregating populations [11,12], allowing the genetic basis of the trait of interest to be determined, and can also be a powerful tool in breeding programs for choosing parental lines and implementing marker-assisted selection strategies. The availability of a consensus reference map built for Rosaceae has facilitated the ordering and physical orientation of maps as well as the localization of new consensus molecular markers for assisted selection in different Prunus species. Several genetic maps have been published for Rosaceae species, including peach, apple, pear, raspberry, and cherry.
A comparative linkage map has even been built for Rosaceae [13,14]. Peach has been considered as a model species for the genus Prunus [15,16]. Verde et al. [17] recently published a new version of a high-quality draft genome of peach, which is a baseline for comparative analysis with Prunus species. In addition, Zhang et al. [18] have published the genome of mei (Prunus mume).
However, to date only a few genetic linkage maps have been published for plums and related species. Dirlewanger et al. [13] developed the first genetic map for myrobalan plum (P. cerasifera). Later, 144 microsatellites (or simple sequence repeats, SSR) were mapped in Prunus mume [19]. Only two genetic maps have been published for Prunus salicina. Vieira et al. [20] built a map using AFLPs and more recently, Salazar et al. [21] described QTLs using SNPs.
Next generation sequencing approaches have recently allowed the release of whole genome reference sequences for many plant species. This technique also allows the identification of thousands of single nucleotide polymorphisms (SNPs), the most abundant type of DNA marker found in eukaryotic genomes [22][23][24]. SNPs have become tremendously important as markers for genetics research in plants because they have been found to be in high frequency, display a lower mutation rate compared to SSR-based markers and they are uniformly distributed across the genome [25].
The distribution pattern of SNPs in segregating populations plus the available reference genomes have allowed analyzing the linkage relationships and the physical genome distribution of those SNPs, permitting comparisons of the collinearity between related species [26]. These properties make SNPs relevant to carry out genetic studies such as phylogenetics, genetic diversity, association analysis and genetic mapping [27].
Next generation sequencing is rapidly becoming a low-cost technology to carry out massive genetic studies, which is allowing important advances in plant genetics and breeding. Elshire et al. [28] developed a robust and low-cost genotyping method based on partial genome sequencing called genotyping-by-sequencing (GBS). This approach uses restriction enzymes to digest the genome in order to reduce its complexity. The DNA fragments are sequenced by high-throughput methods, obtaining hundreds of thousands of SNPs simultaneously. The GBS approach has been shown to be suited to genetic analysis and linkage mapping of Rosaceae species such as peach [29], sweet cherry [30,31], raspberry [32], and apple [33]. The objective of the present study was the development of a high-density linkage map which will provide important information about the genetics and genomics of Prunus salicina and its relationships with other Prunus species. To date this is the most saturated map available for this species.

Author Rolando García-González is employed by Sociedad BioTECNOS Ltda. This affiliation commenced towards the end of this study. Sociedad BioTECNOS Ltda provided support in the form of salary for author RG-G, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific role of this author is articulated in the 'author contributions' section.
Parental lines and F1 segregating population
During the spring of 2013 we crossed the cultivar "Angeleno" (An) (late harvest variety; 179 days after full blossom) as a maternal line with the cultivar "Aurora" (Au) (early harvest variety; 112 days after full blossom) as the pollen donor. An was selected as a parental line because it is the most important cultivar of Japanese plum for the Chilean fresh fruit exportation industry [34]. An is a late harvest cultivar and has a high level of productivity and a long shelf life. Its fruits can be stored at 4˚C for a long period of time (at least 45 days) without expressing physiological disorders such as woolliness or internal breakdown [35]. To obtain seedlings, after pollination and fruit set the seeds were collected and germinated at 4˚C for 75 days. Then 137 F1 seedlings were established at the experimental station of the Fundación Agro UC located at Curacaví, Chile (33˚26' South, 71˚01' West) during September, 2014. Each seedling was planted directly in the field without grafting and maintained following standard agronomic protocols (canopy pruning, fruit thinning, drip irrigation, fertilization and phytosanitary control).
Genotyping by sequencing (GBS)
GBS protocols were carried out by the Institute of Biotechnology, Cornell University, Ithaca, NY, USA according to [28] (more details can be obtained from http://www.biotech.cornell.edu/brc/genomics-facility). Young leaves from the 137 segregating F1 and the parental lines An and Au were collected for DNA isolation. High quality DNA was extracted from leaves using a standard CTAB approach modified by Carrasco et al. [36]. The double-stranded DNA concentrations were measured with a Qubit 3.0 Fluorometer (Thermo Fisher Scientific). We extracted at least 70 ng/µl of double-stranded DNA per sample according to the protocol suggested by the BRC Genome Facility (Cornell University Biotechnology Resource Center, USA). GBS libraries were developed using the restriction enzyme ApeKI (GCWGC) and two different adapters according to protocols from the Institute for Genome Diversity (IGD) at Cornell University. ApeKI was selected from a panel of restriction enzymes because it is able to produce hundreds of thousands of fragments between 150 and 500 bp. Several studies of the genomes of Prunus species have used this criterion successfully to select the restriction enzyme for GBS [21,28,31,37,38]. GBS sequencing libraries were prepared by ligating the digested DNA to nucleotide adapters (barcodes), followed by standard PCR. Sequencing was performed using an Illumina HiSeq2000. The DNA of the parental lines was sequenced three times (independent samples) in order to reduce missing data and errors during SNP calling. The separate FASTQ (raw) files were aligned to the peach genome v2.1 [17] (http://www.rosaceae.org/gb/gbrowse/prunus_persica_v2.1) using the Burrows-Wheeler Alignment tool (BWA) version 0.7.8-r441 [39]. Alignments were converted to the SAM format, then merged and sorted into one master binary alignment file with SAMtools 0.1.18 [39].
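The alignment workflow just described (per-sample BWA alignment, SAM-to-BAM conversion, then merging and sorting with SAMtools) can be sketched as a shell-command builder. File names are hypothetical; the text does not state which BWA algorithm was run, so `bwa mem` is shown for illustration, together with the old two-argument `samtools sort` syntax of SAMtools 0.1.x:

```python
REF = "Ppersica_v2.1.fasta"  # hypothetical file name for the peach v2.1 reference

def alignment_pipeline(fastq_files):
    """Build the shell commands for a BWA + SAMtools workflow:
    align each sample, convert SAM to BAM, then merge and sort."""
    cmds, bams = [], []
    for fq in fastq_files:
        stem = fq.rsplit(".", 1)[0]
        cmds.append(f"bwa mem {REF} {fq} > {stem}.sam")            # align reads
        cmds.append(f"samtools view -bS {stem}.sam > {stem}.bam")  # SAM -> BAM
        bams.append(f"{stem}.bam")
    cmds.append("samtools merge master.bam " + " ".join(bams))     # merge samples
    cmds.append("samtools sort master.bam master.sorted")          # sort merged BAM
    return cmds
```

These commands assume the reference has already been indexed with `bwa index`.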
A 'master' TagCounts file was produced and aligned to the peach genome v2.1, and a Tags on Physical Map (TOPM) file was built, containing the best genomic position of each tag. The barcode information in the original FASTQ files was used to tally the number of times each tag in the master tag list was observed in each sample ('taxon'), and these counts were stored in a TagsByTaxa (TBT) file. The information recorded in the TOPM and TBT was then used to discover SNPs and filter them based upon the proportion of taxa covered by the TagLocus, minor allele frequency and inbreeding coefficient (FIT). More details about the SNP-calling pipeline can be found in Glaubitz et al. [40] and at the IGD.
Finally, the identified SNPs corresponding to each genotype were stored as a filtered VCF file for posterior filtering with Tassel 5.2.16 [40,41] and mapping analysis using Joinmap v4.1 software [42].
SNP analysis
SNPs were identified following the nomenclature defined in the peach genome v2.1, where each SNP was located on the pseudomolecules (scaffolds) s1 to s8, followed by the physical position in base pairs (bp). Total SNPs were filtered using the VCFtools software [43]. Only biallelic SNPs with a minimum allele frequency of 0.05 were used. SNP markers with more than 10% missing data were removed, and only those SNPs that showed an even distribution along the main eight peach scaffolds were used for posterior analysis (high quality SNPs). The location of each SNP within intergenic and genic regions (exon, intron and untranslated regions (UTRs)) was determined using a Perl script (www.perl.org), determining the positions from the Prunus persica v2.1 GFF annotation.
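The filtering criteria above (biallelic sites only, minor allele frequency of at least 0.05, no more than 10% missing data) can be sketched as a simple stand-in for the VCFtools step; the genotype encoding used here is an assumption made for illustration:

```python
def minor_allele_freq(genotypes):
    """genotypes: diploid calls per sample, e.g. 'AA', 'AG', or None for missing.
    Returns 0.0 for non-biallelic sites so they fail the MAF filter."""
    alleles = [a for g in genotypes if g is not None for a in g]
    if not alleles:
        return 0.0
    counts = {}
    for a in alleles:
        counts[a] = counts.get(a, 0) + 1
    if len(counts) != 2:          # keep only biallelic sites
        return 0.0
    return min(counts.values()) / len(alleles)

def passes_filters(genotypes, maf_min=0.05, max_missing=0.10):
    """True if the SNP meets the MAF and missing-data thresholds used in the study."""
    missing = sum(g is None for g in genotypes) / len(genotypes)
    return missing <= max_missing and minor_allele_freq(genotypes) >= maf_min
```

For example, a site genotyped in 19 of 20 offspring with both alleles segregating passes, while a monomorphic site or one missing in 15% of samples is discarded.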
Linkage map construction
The VCF file of clean SNPs was converted to JoinMap format using ngsTools software [44]. SNPs that were heterozygous in only one of the parents were scored as segregation types <lmxll> (maternal) or <nnxnp> (paternal), while SNPs heterozygous in both parents were scored as segregation type <hkxhk>.
Map construction was performed using JoinMap 4.1 software [42] following a two-step strategy. Parental maps were first constructed using SNPs classified as <lmxll> or <nnxnp>. Then for consensus map construction, the integration of the parental maps was performed including the SNPs classified as <hkxhk>.
The segregation distortion of SNPs was determined by calculating chi-square (χ2) statistics in JoinMap, and SNPs with segregation distortion at p < 0.001 were removed. In addition, when one SNP had a Similarity of Loci of 1 with another SNP, only one of them was used for map construction.
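The distortion screen compares observed offspring counts with the Mendelian ratios expected for each segregation type (1:1 for <lmxll> and <nnxnp>, 1:2:1 for <hkxhk>) and discards markers deviating at p < 0.001. A sketch using tabulated chi-square critical values rather than JoinMap itself:

```python
# Upper 0.1% points of the chi-square distribution (standard table values)
CHI2_CRIT_P001 = {1: 10.828, 2: 13.816}

def chi2_stat(observed, expected_ratio):
    """Pearson chi-square statistic of observed counts against an expected ratio."""
    n = sum(observed)
    total = sum(expected_ratio)
    return sum((o - n * r / total) ** 2 / (n * r / total)
               for o, r in zip(observed, expected_ratio))

def distorted(observed, expected_ratio):
    """True if segregation deviates from the expected ratio at p < 0.001."""
    df = len(observed) - 1
    return chi2_stat(observed, expected_ratio) > CHI2_CRIT_P001[df]
```

For example, 70:67 in 137 offspring is compatible with a 1:1 ratio, whereas 120:17 would be flagged as distorted and removed.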
Maternal, paternal and consensus map constructions were performed using the regression mapping algorithm. For maternal and paternal map construction, markers were grouped using a minimum independence LOD (logarithm of the odds) score of 10.0 and linkage groups were established at a minimum LOD score of 3.0 and maximum recombination frequency of 0.40. Map distance was estimated using the Kosambi mapping function [45]. Consensus map construction was performed considering the same parameters used for parental map constructions.
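The Kosambi mapping function used above converts a recombination frequency r into an additive map distance, d = 25 ln((1 + 2r)/(1 - 2r)) cM, which accounts for partial crossover interference; a direct implementation with its inverse:

```python
import math

def kosambi_cm(r: float) -> float:
    """Kosambi map distance in centiMorgans from recombination frequency r in [0, 0.5)."""
    if not 0 <= r < 0.5:
        raise ValueError("recombination frequency must be in [0, 0.5)")
    return 25.0 * math.log((1 + 2 * r) / (1 - 2 * r))

def kosambi_inverse(d_cm: float) -> float:
    """Recombination frequency recovered from a Kosambi distance in cM."""
    return 0.5 * math.tanh(2 * d_cm / 100.0)
```

A recombination frequency of 0.10 maps to roughly 10.1 cM, slightly more than the naive 10 cM because double crossovers are partially corrected for.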
Linkage mapping
Using the GBS approach we obtained 454,912,276 reads and 4,995,919 tags; 2,367,021 tags (47.4%) were aligned to a unique position, 200,809 tags (4.0%) were aligned to multiple positions and 2,428,089 tags (48.6%) could not be aligned to the peach genome v2.1 that was used as a reference. The HapMap file containing 49,826 filtered SNPs was used for subsequent analysis. After applying filters (MAF >0.05 and 10% as maximum missing data) a total of 12,720 SNPs were obtained to be analyzed with the software JoinMap 4.1. Chi-square analyses were performed on informative SNPs to evaluate their conformity to the expected Mendelian segregation ratios. SNPs with identical segregation patterns were also discarded. Finally, 1,441 high quality SNPs were available for map construction.
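The alignment and filtering percentages quoted above can be recomputed directly from the raw counts in the text, as a quick consistency check:

```python
def pct(part: int, whole: int) -> float:
    """Percentage rounded to one decimal place, as reported in the text."""
    return round(100.0 * part / whole, 1)

TOTAL_TAGS = 4_995_919
unique_pct = pct(2_367_021, TOTAL_TAGS)     # tags aligned to a unique position
multi_pct = pct(200_809, TOTAL_TAGS)        # tags aligned to multiple positions
unaligned_pct = pct(2_428_089, TOTAL_TAGS)  # tags not aligned to peach v2.1
snp_yield_pct = pct(1_441, 49_826)          # high quality SNPs retained after filtering
```

The three alignment fractions reproduce the reported 47.4%, 4.0% and 48.6%, and the final yield of high quality SNPs comes out to 2.9% of the 49,826 discovered.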
The linkage analysis showed that the 1,441 SNPs were spread over eight linkage groups, which corresponds to the reported haploid chromosome number in Japanese plum [2]. The linkage maps spanned 588 cM and 490 cM for An (female line) and Au (male line), respectively (Fig 1a and 1b; S1 Table).
The consensus linkage map was constructed using SNPs mapped in both parental maps plus the SNPs in heterozygous configuration. The consensus map contained 732 non-redundant SNPs and spanned a distance of 617 cM (Table 1; Fig 1c; S1 and S2 Tables). The number of SNPs mapped, the total map length, the average distance between markers and the maximum gap for each linkage group are summarized in Table 1. Most of the SNPs identified using the peach genome were co-localized in the correct linkage group after linkage analysis. However, some SNPs did not follow the same physical order relative to the scaffolds of peach genome v2.1. We found that 69 (9.5%) of the 732 SNPs mapped to different linkage groups compared to the peach genetic map.
SNP distribution through the linkage groups
The location of each SNP in the intragenic and intergenic regions (Fig 2) was determined by using the physical position in peach genome v2.1. Table 2 shows the distribution of SNPs in the intragenic as well as intergenic regions of the eight linkage groups.
The genetic linkage analysis was also able to co-localize at least 1 SNP in 460 putative orthologous genes identified in the peach genome. These genes were distributed among the linkage groups as follows: LG1 = 112; LG2 = 50; LG3 = 65; LG4 = 52; LG5 = 79; LG6 = 34; LG7 = 34 and LG8 = 34 (for more details about these SNPs and genes see S2 Table). Using linkage analysis we were able to co-localize two to three SNPs simultaneously in 38 genes in the eight linkage groups of the genetic map of Japanese plum (Fig 3).
Discussion
Genetic linkage maps are used as a tool for primary localization of important genomic regions associated with the genetic control of both qualitative and quantitative traits, which can help to support a breeding program [46]. The huge amount of genetic information that can be obtained from high throughput sequencing approaches today have allowed whole-genome sequencing and linkage analysis to become powerful methodologies for the identification of genetic polymorphisms related to complex traits in different species [47,48]. GBS is able to generate abundant SNPs for a large number of individuals of a species with a relatively low cost per sample [49]. It is also an effective source of sequence tags that can be used as genetic anchors to direct contig/scaffold assembly and to map genomic fragments using a reference genome.
The GBS approach used in this study identified 49,826 SNPs to initiate the mapping analysis. However, most of them (97.1%) were not used for the construction of a linkage genetic map and were eliminated due to low minor allele frequency (<0.05), excess of missing data (>10%), SNPs with incorrect genotype, more than two alleles for one position and distortion of segregation. It is important to highlight that according to Bastien et al., [50] the restriction enzyme ApeKI used to generate the genomic library in GBS approach [28] helps to identify more SNPs than other restriction enzymes, but with a lower read coverage per marker and resulting in more missing data. After applying the filtering criteria, 1,441 high quality SNPs were available to build the genetic map in Japanese plum. Those 1,441 SNPs represented only 2.9% of the total SNPs identified, which shows the limitations of the GBS approach. Davey et al., [49] and Jiang et al., [51] suggested that the main limitation of GBS is the presence of missing data due to low coverage genotyping, inconsistency in the number of sites sequenced per sample and number of reads per site.
New methodologies to discover SNPs that bypass these difficulties could give a more consistent number of useful SNPs for carrying out genetic studies [51][52][53]. In contrast to traditional GBS [28], Specific-Locus Amplified Fragment Sequencing (SLAF-seq) [51][52][53] is a new and efficient method for SNP discovery and genotyping that does not require a reference genome; it uses two restriction enzymes (HaeIII and MseI), selecting fragments between 370 and 450 bp. SLAF-seq is able to produce a larger number of high quality SNPs for mapping purposes [53].
Few genetic linkage maps have been published for plum species. The first genetic map for plum was published by Dirlewanger et al. [13]. They reported a map of a myrobalan clone (P. cerasifera) using 93 markers (two SCARs plus 91 SSRs) distributed over eight linkage groups, covering 524.8 cM. Later, Vieira et al. [20] reported linkage maps for parental lines of Japanese plum using 56 to 84 AFLPs, which covered 905.5 to 1,349.6 cM with an average distance between markers of 16.1 cM to 16.2 cM.
Recently Salazar et al., [21] published a genetic map using SNPs obtained by GBS. They mapped a total of 981 SNPs, 479 SNPs for the female line and 502 SNPs for the male line (cv. "Angeleno"), covering 688.8 cM and 647.03 cM, respectively. The average distance between SNPs was two cM. However, Salazar et al., [21] did not report a consensus map for the two parental lines, therefore it is not possible to distinguish redundant SNPs.
In contrast, we analyzed 1,441 high-quality SNPs. Of these, 714 SNPs were mapped in the female parent (cv. "Angeleno") and 320 in the male parent (cv. "Aurora"), spanning 578 cM and 472 cM, respectively. Our consensus map was built using 732 non-redundant SNPs, covering 617 cM with an average distance between SNPs of 0.96 cM. This saturation is at least twice that reported by Salazar et al. [21], and the map is considered highly saturated according to Slate et al. [58], who defined a high-density map as one with an average interval between markers of less than 2 cM. It is important to note that both our study and that of Salazar et al. [21] used the "Angeleno" cultivar as a parental line, which could be favorable for future comparative studies.
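Summary figures such as average inter-marker spacing and maximum gap are derived directly from the sorted marker positions of each linkage group. A toy sketch follows; the positions below are invented for illustration and are not the actual Japanese plum map data:

```python
# Derive basic linkage-map statistics (map length, mean and maximum
# inter-marker gap) from centimorgan positions. Positions are invented.

def map_stats(positions_cm):
    pos = sorted(positions_cm)
    gaps = [b - a for a, b in zip(pos, pos[1:])]   # adjacent-marker intervals
    return {
        "length_cM": pos[-1] - pos[0],
        "n_markers": len(pos),
        "mean_gap_cM": sum(gaps) / len(gaps),
        "max_gap_cM": max(gaps),
    }

lg1 = [0.0, 1.2, 1.9, 3.5, 9.0, 10.1]   # hypothetical linkage group
stats = map_stats(lg1)
```

Applied per linkage group, the mean gap gives the cM/SNP density used above to compare maps, and the maximum gap flags regions (such as the large P. mume gaps discussed below) that may reflect assembly or coverage problems.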
Comparing our linkage map with those previously built for Rosaceae species (Table 2), the map length and SNP density were similar to those published for P. persica [56] and P. avium [30,31]. To date, only the P. mume map has higher saturation, with a density of 0.15 to 0.24 cM/SNP [57]. However, the P. mume map has large gaps in LG1 (24.13 cM), LG6 (12.79 cM), and LG8 (24.13 cM). In contrast, the Japanese plum map reported in this study showed a maximum gap of 13.8 cM, a little more than half that of P. mume. The presence of gaps in a linkage map needs to be analyzed carefully, because they could indicate an assembly problem in the reference genome. Plant genomes have proved difficult to assemble because of high heterozygosity, large numbers of repeat sequences, and their large size and complexity [59]. Eichler et al. [60] suggested that genome duplication is the most important cause of gaps in physical and genetic maps, because duplicated regions are difficult to assemble and markers map to multiple regions. Reducing these gaps by increasing the number of offspring and molecular markers will improve the density and resolution of linkage maps, enhancing the ability to identify QTLs and markers for assisted selection. Table 3 indicates that the density of our Japanese plum map (cM/SNP) was higher than that of previous maps developed for other Rosaceae species such as raspberry, apple, and pear. The number of offspring (137) and the number of SNPs analyzed (730) in the segregating population (An × Au) could explain this result.
The Japanese plum map revealed that most of the SNPs (90.5%) were located in linkage groups according to their physical locations in the scaffolds of the peach genome. However, some discrepancies were observed, such as differences in the relative order of the SNPs in the genetic map and differences between the physical locations of the SNPs in the peach genome and their positions in the linkage groups. Such discrepancies in SNP order have previously been reported in almond [61], sweet cherry [30,31], and mei (P. mume) [57]. Also, a few SNPs (9.5%) physically located in the peach scaffolds mapped to a different linkage group in the Japanese plum map. These discrepancies could be explained by assembly errors in the peach genome [30], mapping errors due to the number of individuals analyzed, and evolutionary processes such as genomic translocations and deletions.
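Agreement between genetic-map order and physical order of the kind discussed above can be quantified per linkage group with a rank correlation. The sketch below uses invented positions and the no-ties Spearman formula; this is an illustrative check, not a method prescribed by the study:

```python
# Concordance check between genetic-map order (cM) and physical order (bp)
# for markers on one linkage group, via Spearman rank correlation.
# Data are invented; the no-ties formula assumes all positions are distinct.

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rk, i in enumerate(order):
        r[i] = rk
    return r

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))   # no-ties Spearman formula

genetic_cm = [0.0, 2.1, 5.0, 7.3, 12.8]        # map positions on one LG
physical_bp = [100, 2500, 9000, 7000, 21000]   # one marker pair in swapped order
rho = spearman(genetic_cm, physical_bp)
```

A coefficient near 1 indicates collinear marker order; local order swaps such as the one simulated here lower the value, and markers whose physical position lies on a different chromosome would simply be flagged before this per-group comparison.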
The genus Prunus shows conserved co-linearity within the Rosaceae [13], and even with genera outside the family such as Populus [13]. Conserved synteny is important for plant breeding because it can be exploited to identify molecular markers linked to agronomically relevant traits across related species. A recent example of synteny-based development of molecular markers involved peach and almond [62]. Therefore, based on the synteny pattern observed between Prunus species, we tried to identify SNPs mapped in Japanese plum within genes localized in the peach genome v2.1. We were able to find 510 putative orthologous genes by using SNPs from the peach genome as a reference. Of these 510 putative orthologues, 38 genes co-localized with two to three SNPs after genetic linkage analysis was carried out in our segregating F1 population of Japanese plum. This result suggests synteny between the Japanese plum and peach genomes (see Fig 3). We found that 85.4% of the SNPs analyzed were located in intragenic regions. Pootakham et al. [63] and Bastien et al. [50] suggested that methylation-sensitive enzymes such as ApeKI increase the degree of enrichment in coding regions of the genome. More than 50% of the SNPs were located within or close to genic regions.
The number of SNPs identified in our study was higher than those reported by Klagges et al. [30] and Guajardo et al. [31] in sweet cherry (see Fig 2). We have obtained a highly saturated linkage map for P. salicina (Japanese plum) that will be important for breeding studies as well as for the genome sequencing efforts for this species under way around the world (Pacheco, I. and Silva, H., unpublished data; Fernandez, A., unpublished data).
Supporting information
S1 Table. Genetic distance between SNPs mapped by linkage groups. The information is given for the female, the male, and the consensus map according to peach genome version v2.
Table B: Linkage map of the male line cv. 'Aurora'. For each SNP, its original linkage group, position on the physical map, and position on the genetic map are described. Physical map positions are according to peach genome v2.1.
Table C: Consensus linkage map. For each SNP, its original linkage group, position on the physical map, position on the genetic map, and corresponding orthologous genes that co-localize 2 to 3 SNPs simultaneously (n = 38) are described. Physical map positions are according to peach genome v2.1.
Table D: Identification of the SNPs that co-localize 2 to 3 SNPs simultaneously over 38 genes. Physical map positions are according to peach genome v2.1.
Table E: Functional annotation (Phytozome) of the 38 putative orthologous genes that co-localize 2 to 3 SNPs simultaneously.
Table F
Postsynaptic movement disorders: clinical phenotypes, genotypes, and disease mechanisms
Movement disorders comprise a group of heterogeneous diseases with often complex clinical phenotypes. Overlapping symptoms and a lack of diagnostic biomarkers may hamper making a definitive diagnosis. Next-generation sequencing techniques have substantially contributed to unraveling genetic etiologies underlying movement disorders and thereby improved diagnoses. Defects in dopaminergic signaling in postsynaptic striatal medium spiny neurons are emerging as a pathogenic mechanism in a number of newly identified hyperkinetic movement disorders. Several of the causative genes encode components of the cAMP pathway, a critical postsynaptic signaling pathway in medium spiny neurons. Here, we review the clinical presentation, genetic findings, and disease mechanisms that characterize these genetic postsynaptic movement disorders.
Introduction
Movement disorders comprise a heterogeneous group of diseases characterized by either an excess of abnormal movements (hyperkinesia) or a lack of normal movements (hypokinesia) (Stoessl and Mckeown 2016). The phenotypes can be complex and overlapping, particularly in children, and can even change or evolve over time (Stoessl and Mckeown 2016;Kurian and Dale 2016). For many movement disorders, there are no biomarkers available to aid diagnosis. However, recent genetic advances have greatly contributed to improved diagnosis for patients with movement disorders (Olgiati et al. 2016;Reale et al. 2018). Over the past few years, a number of new genetic movement disorders have been identified, some of which are caused by alterations in genes involved in postsynaptic pathways. Indeed, defects in postsynaptic dopaminergic signaling in striatal medium spiny neurons are emerging as key drivers in the development of a number of genetic hyperkinetic movement disorders. In this review, we discuss the clinical presentation, management, genetic findings, and current understanding of contributory pathogenic mechanisms of such genetic movement disorders associated with striatal postsynaptic dysfunction.
Synaptic physiology
Synapses are complex neuronal structures that are organized in several cellular compartments including the axon terminal membrane of the presynaptic neuron, the synaptic cleft, and the postsynaptic density (PSD) of the adjacent neuron. Synapses contain functionally and structurally distinct molecular machineries for synaptic connectivity and neurotransmission, the essential processes that underlie brain function. Depending on the brain area, neurons interconnect with thousands of others and form dense, overlapping, and interdigitated networks that define the brain's connectivity. Synaptic signaling is characterized not only by the anatomical organization of neurons but also by distinct neurotransmitter systems, which include amino acids (e.g., inhibitory GABA, excitatory glutamate), monoamines (e.g., dopamine, serotonin), peptides, purines, trace amines, and acetylcholine (Hyman 2005). In chemical synapses, arrival of an electrical signal results in membrane depolarization and influx of calcium into the presynaptic terminal, which ultimately leads to the release of neurotransmitters into the synaptic cleft (Südhof 2013). Neurotransmission is a spatially and temporally precisely regulated process that involves the concerted interaction of specific proteins at the pre- and postsynaptic sites. Neurotransmitters are stored and transported in defined structures known as synaptic vesicles (SVs). SVs are organized in distinct pools at the presynaptic terminal including a reserve pool, a recycling pool, and a primed or readily releasable pool (Rizzoli and Betz 2005). Release of the SV content involves a dedicated molecular machinery and includes several steps: SV priming, docking, and calcium-mediated fusion to the cell membrane (Rizo and Xu 2015). To ensure repetitive and sustained transmission, SVs have to be rapidly recycled.
(Responsible Editor: Georg Hoffmann)
SV recycling is a complex process and involves several endocytic pathways for the retrieval of SV components from the plasma membrane and the regeneration of functional SVs (Kononenko and Haucke 2015;Soykan et al. 2016). Upon release, neurotransmitters diffuse across the synaptic cleft and bind to their respective receptors on the postsynaptic membrane, activating downstream signaling cascades. The receptors are anchored to the postsynaptic density (PSD), a multi-protein complex organized into distinct layers of anchoring membrane molecules, scaffolding molecules, signaling molecules, and cytoskeletal molecules. The PSD is a specific feature of glutamatergic synapses. However, PSD-95, a key component of the PSD, has also been identified in glutamatergic synapses of midbrain dopaminergic neurons (Jang et al. 2015) and in medium spiny neurons of the human neostriatum (Morigaki and Goto 2015). The function of the PSD is to receive and convert the chemical neurotransmitter signal into electrical and biochemical responses in the postsynaptic neuron (Sheng and Kim 2011). In general, the pre- and postsynaptic compartments are highly dynamic and modify their function or structure in response to specific synaptic activity.
Synaptic pathology
Given the complex molecular organization of synapses, alterations in their composition, structure, or function can have a severe impact on neuronal function, leading to neurological disorders (Waites and Garner 2011). Overall, synaptic dysfunction may occur at a number of different sites including the following: (1) the neuronal soma and axonal compartment, affecting synaptic gene expression, SV synthesis, and trafficking; (2) the presynaptic compartment, affecting SV exocytosis, endocytosis and recycling, maintenance of SV pools and proteostasis, and synaptic metabolic homeostasis involving mitochondrial function; (3) the intersynaptic compartment, affecting neurotransmission and neurotransmitter recycling; and (4) the postsynaptic compartment, affecting the function of channels, receptors, and associated downstream signaling cascades. The association of human brain disorders with aberrant synaptic function and structure has led to the new concept of "human synaptopathy" (Lepeta et al. 2016). In recent years, synaptic dysfunction has been linked to a variety of neuropathological conditions including epilepsy (Hamdan et al. 2009;Caleo 2009;Casillas-Espinosa et al. 2012), movement disorders (Quartarone and Pisani 2011;Calo et al. 2016;Schirinzi et al. 2016;Calabresi et al. 2016;Matikainen-Ankney et al. 2016;Lepeta et al. 2016), intellectual disability (Mircsof et al. 2015;Crocker-Buque et al. 2016;Zapata et al. 2017;Ung et al. 2017), autism spectrum disorders (Giovedí et al. 2014;De Rubeis et al. 2014), psychiatric disorders (Kang et al. 2012;Fromer et al. 2014), and neurodegenerative disorders (Musardo and Marcello 2017). Recent advances in next-generation sequencing technologies and subsequent functional validation of identified genetic variants in patients with distinct neurological disorders have further contributed to understanding the genetic mechanisms underlying these human synaptopathies (Baker et al. 2015;Lipstein et al. 2017;Myers et al. 2017;Guarnieri et al. 2017;Sadybekov et al. 2017).
Postsynaptic dysfunction in brain diseases
Recent isolation and proteomic profiling of the PSD of the human neocortex have revealed 1461 proteins (Bayés et al. 2011). Mutations in over 100 of these proteins cause brain diseases enriched in cognitive, affective, and motor phenotypes (Bayés et al. 2011). Over time, mutations have been identified in genes encoding postsynaptic receptors, ion channels, and components of associated signaling cascades, and the phenotypic spectrum is ever-expanding. Distinct populations of neurons often show a specific vulnerability to genetic alterations, depending on the genes, proteins, and neurotransmitters they express, and the neural circuits they are connected to. For example, GABAergic neurons are thought to play a key role in a number of genetic epilepsies. Mutations in the GABAA receptor subunit genes GABRA1, GABRB3, and GABRG2 have been identified in a broad spectrum of different epilepsy syndromes including Dravet syndrome, generalized seizures, epileptic encephalopathies, and febrile seizures (Johannesen et al. 2016;Shen et al. 2017;Niturad et al. 2017). Moreover, mutations in the GRIN2A gene, encoding the NMDA glutamate receptor GluN2A subunit, are emerging as a key genetic factor in the epilepsy-aphasia spectrum disorders (Kingwell 2013;Yang et al. 2017). Dysfunction of excitatory hippocampal neurons has been related to intellectual disability caused by mutations in genes encoding proteins of the PSD complex or interacting components (Zapata et al. 2017;Ung et al. 2017). Dysfunction of striatal medium spiny neurons (MSNs) due to alterations in genes encoding key postsynaptic proteins is associated with the pathogenesis of dystonia, dyskinesia, chorea, and parkinsonism.
Striatal MSNs receive excitatory glutamatergic input from the cortex and the thalamus and modulatory dopaminergic input from the midbrain, in particular from the substantia nigra pars compacta, which innervates the dorsolateral striatum, and from the ventral tegmental area, which innervates the medial portion of the dorsal striatum and the ventral striatum (Fisone et al. 2007). Striatal MSNs give rise to inhibitory GABAergic projections to the globus pallidus (striatopallidal pathway) and the substantia nigra pars reticulata (striatonigral pathway).
Dopaminergic signaling in striatal medium spiny neurons
According to their output projections, neurotransmitters, and receptors, MSNs can be classified into two groups. D1-type dopamine receptor (DRD1)-expressing MSNs use substance P as a co-transmitter and project direct inhibitory monosynaptic fibers to the globus pallidus internal segment (GPi) and substantia nigra pars reticulata (SNr) (Chuhma et al. 2011). D2-type dopamine receptor (DRD2)-expressing MSNs use enkephalin as a co-transmitter and project indirect, net-excitatory polysynaptic fibers to the same nuclei via the globus pallidus external segment (GPe) and subthalamic nucleus (STN) (Chuhma et al. 2011) (Fig. 1(a)). It is generally believed that direct- and indirect-pathway MSNs in the dorsal striatum exert opposite effects on the control of movement. Activation of DRD1 stimulates direct striatonigral-pathway MSNs and results in disinhibition of thalamocortical neurons, thus facilitating movement. Activation of the indirect striatopallidal pathway, in contrast, leads to inhibition of thalamocortical neurons and suppression of movement; dopamine acting on DRD2 inhibits these indirect-pathway MSNs (DeLong et al. 2007). In clinical practice, hyper- and hypokinetic features often coexist, for example, in patients with parkinsonism-dystonia; the reasons for this are not entirely clear, but may be related to developmental age, indicating a complex disruption of basal ganglia motor circuitry.
cAMP signaling pathway in striatal medium spiny neurons
Signaling through DRD1 and DRD2 in postsynaptic MSNs is mainly mediated by the G-protein-coupled receptor (GPCR) cyclic adenosine monophosphate (cAMP) cascade. GPCRs are involved in neurotransmitter action and are highly expressed throughout the brain (Gerber et al. 2016). They share a seven-transmembrane-spanning α-helical segment coupled to a heterotrimeric guanine nucleotide-binding protein (G-protein). G-proteins are composed of three subunits, α, β, and γ, and are classified into four distinct families depending on their Gα subunit: stimulatory G-proteins (Gαs, Gαolf), inhibitory G-proteins (Gαi, Gαo, Gαt, Gαz), Gαq proteins, and Gα12/13 proteins (Simon et al. 1991;Oldham and Hamm 2008). Binding of the respective neurotransmitters to GPCRs results in catalytic exchange of Gα-bound GDP for GTP and reduces the affinity of the Gα subunit for the Gβγ subunit complex, which subsequently dissociates. The Gα subunit then activates downstream signaling effectors. In striatal MSNs, Gα proteins target the enzyme adenylyl cyclase 5 (AC5), which generates the second messenger cAMP. Activation of DRD1 stimulates Gαolf-mediated AC5 enzyme activity and increases cAMP levels, whereas activation of DRD2 leads to Gαi-mediated inhibition of AC5 activity and decreases cAMP levels (Stoof and Kebabian 1981;Zhuang et al. 2000;Hervé et al. 2001;Lee et al. 2002) (Fig. 2). Intracellular levels of cAMP are linked to the activity of protein kinase A (PKA), which phosphorylates downstream effector proteins including ion channels, neurotransmitter receptors, and transcription factors (Fisone et al. 2007). In striatal MSNs, an increase in cAMP and PKA activity leads to phosphorylation of the dopamine- and cAMP-regulated phosphoprotein of 32 kDa (DARPP-32) and the transcription factor cAMP-responsive element-binding protein (CREB). DARPP-32 is phosphorylated at the Thr-34 residue and as such acts as an inhibitor of protein phosphatase-1 (PP-1) (Fisone et al. 2007).
This in turn reduces dephosphorylation of downstream target effectors including voltage-dependent calcium channels, NMDA, AMPA, and GABA A receptors, and thus has a broad impact on neuronal function (Nairn et al. 2004). The enzyme phosphodiesterase 10A (PDE10A), a dual cAMP-cGMP phosphodiesterase, constitutes another modulator of cellular cAMP and cGMP levels and is highly abundant in striatal MSNs.
Dysfunction of medium spiny neurons in movement disorders
It has become increasingly evident that disruption of the cAMP signaling pathway contributes to postsynaptic dysfunction that is associated with movement disorders such as dystonia, chorea, and parkinsonism (Table 1). It is hypothesized that altered dopaminergic signaling in striatal MSNs plays a key role in the pathogenesis of movement disorders. Dystonia is postulated to result from overactivity of direct-pathway MSNs leading to reduced GPi activity (Fig. 1(b)). Chorea and ballism may be associated with hypofunction of indirect-pathway MSNs resulting in reduced pallidal output (Fig. 1(c)). Both mechanisms ultimately lead to inadequate GABAergic inhibition of thalamocortical projections and a hyperkinetic movement disorder. Parkinson's disease, in contrast, is characterized by dopamine depletion in the substantia nigra leading to increased activity of striatal indirect-pathway neurons. This in turn results in enhanced inhibitory output from the GPi and SNr and leads to decreased activity in thalamocortical neurons and a hypokinetic movement disorder (DeLong et al. 2007).
Postsynaptic movement disorders
Adenylate cyclase 5-related movement disorders
Clinical presentation
Adenylate cyclase 5 (ADCY5)-related disorders comprise a large phenotypic spectrum and include clinical presentations that mimic dyskinetic cerebral palsy, benign hereditary chorea, mitochondrial disorders, paroxysmal dyskinesia, myoclonus-dystonia, and, recently, alternating hemiplegia of childhood (Chen et al. 2014, 2015;Carapito et al. 2015;Mencacci et al. 2015;Chang et al. 2016;Westenberger et al. 2017;Douglas et al. 2017). Disease onset occurs typically in infancy or early childhood, and rarely in early adolescence (Fernandez et al. 2001). The movement disorder is hyperkinetic, mainly characterized by generalized chorea involving the limbs, face, and/or neck. The characteristic perioral and periorbital twitches, formerly described as facial myokymia, were not confirmed by EMG studies, but rather represent a mixture of myoclonic and choreic movements (Tunc et al. 2017) manifesting as orolingual dyskinesia. Limb dystonia can be a major disease feature. Additional movement abnormalities including myoclonus and lower limb spasticity with pyramidal signs are frequently reported. Eye movement abnormalities such as saccade initiation failure and upward gaze palsy have been described in a number of patients. Abnormal movements may show marked fluctuation in severity and frequency and can be continuous or paroxysmal (Fernandez et al. 2001;Chen et al. 2014;Mencacci et al. 2015). The disease course is usually either static or mildly progressive over time.

[Fig. 1 legend (fragment): (b) ... in dystonia, and (c) hypofunction of the indirect pathway in chorea ultimately lead to disinhibition of thalamocortical neurons and hyperkinesia. SNc, substantia nigra pars compacta; GPe, globus pallidus external segment; STN, subthalamic nucleus; GPi, globus pallidus internal segment; SNr, substantia nigra pars reticulata; PPN, pedunculopontine nucleus (brainstem)]
Many patients suffer severe and painful episodic exacerbations of the movement disorder that can last minutes to hours and may be triggered by emotional stressors, intercurrent infections, or sudden action. Sleep-related worsening of the movement disorder, in particular during drowsiness and awakening, is a characteristic feature of ADCY5-related disorders. Axial hypotonia, often preceding the movement disorder, is a common finding and is rarely associated with weakness (Chen et al. 2015). Cognition is usually preserved or only mildly impaired. However, severely affected patients may manifest delayed motor and/or language milestones (Chen et al. 2014). Brain MR imaging is typically normal in ADCY5-related disorders (Chen et al. 2015).
Genetics
Mutations in the ADCY5 gene were originally identified in a single five-generation German kindred with an autosomal dominant pattern of inheritance, formerly described as familial dyskinesia and facial myokymia (Fernandez et al. 2001). To date, over 80 patients from 50 affected families have been genetically confirmed (Fernandez et al. 2001;Chen et al. 2012, 2015;Carapito et al. 2015;Mencacci et al. 2015;Chang et al. 2016;Dy et al. 2016;Westenberger et al. 2017;Meijer et al. 2017;Zech et al. 2017;Douglas et al. 2017;Tunc et al. 2017;Carecchio et al. 2017). Both autosomal dominantly inherited and de novo mutations have been reported. The p.Arg418Trp variant, along with the p.Arg418Gln and p.Arg418Gly variants, constitutes a recurrent mutation in the majority of patients and indicates a mutational hotspot at the arginine 418 residue. In vitro functional assays have demonstrated a gain of function for the p.Arg418Trp and p.Ala726Thr variants (Chen et al. 2014). Genotype-phenotype correlations suggest that the missense mutation p.Arg418Trp is associated with a more severe phenotype, while p.Arg418Gly, p.Arg418Gln, and p.Ala726Thr show a milder phenotype (Chen et al. 2015;Chang et al. 2016). Somatic mosaicism, responsible for up to 43% of apparently de novo mutations, results in a less severe phenotype, with almost complete resolution of symptoms in adulthood reported in one case (Chen et al. 2015).
Treatment
In ADCY5-related movement disorders, therapeutic trials with anticholinergics (trihexyphenidyl), antidopaminergic agents (tetrabenazine), and anticonvulsants have shown limited clinical benefit. The benzodiazepines clonazepam (0.1-0.2 mg/kg) and clobazam (0.2 mg/kg) have been reported to improve sleep-related dyskinesia and myoclonic episodes (Chen et al. 2015;Chang et al. 2016). Benzodiazepines exert an indirect inhibitory effect on AC5 activity, which might counterbalance the gain of function associated with the p.Arg418Trp mutation (Dan'ura et al. 1988;Chang et al. 2016). Acetazolamide has shown a positive effect on chorea in three patients. Treatment with bilateral GPi deep brain stimulation (GPi-DBS) elicited a positive clinical response (Dy et al. 2016;Meijer et al. 2017). Two case reports showed a significant improvement of dyskinesia and dystonia after DBS (Meijer et al. 2017). However, the long-term efficacy of DBS in this condition is largely unknown.

[Fig. 2 legend: Schematic overview of a striatal medium spiny neuron synapse. Dopaminergic signaling in striatal medium spiny neurons is mediated by the cAMP signaling pathway. Activation of D1-type dopamine receptors leads to activation of adenylyl cyclase 5 and a subsequent increase in cAMP levels, while activation of D2-type dopamine receptors results in inhibition of adenylyl cyclase 5 and reduced levels of cAMP. cAMP in turn modulates the activity of protein kinase A, which phosphorylates further downstream effectors including DARPP-32 and CREB. Arrows indicate mutations in genes involved in postsynaptic dopaminergic signaling in striatal medium spiny neurons]
Molecular mechanisms
The enzyme adenylyl cyclase 5, encoded by the gene ADCY5, constitutes the major adenylyl cyclase isoform in the brain and is enriched in the striatum, in particular the nucleus accumbens, where it accounts for 80% of AC activity (Matsuoka et al. 1997). AC5 is a membrane-bound protein that receives signals from striatal GPCRs including DRD1, DRD2, and the A2A adenosine receptor (Lee et al. 2002). AC5 converts adenosine triphosphate (ATP) into cAMP upon GPCR activation (Hanoune et al. 1997). Functional studies of ADCY5 gain-of-function mutations in an in vitro HEK293 overexpression cell model demonstrated an increase in intracellular cAMP levels (Chen et al. 2014). The AC5 knockout mouse model, in contrast, mimicking loss of function, exhibits a hypokinetic phenotype with parkinsonian features (Iwamoto et al. 2003). In Adcy5−/− mice, attenuation of DRD2 signaling was associated with abnormal coordination, while attenuated locomotor activity was due to defective DRD1 signaling (Iwamoto et al. 2003). In striatal MSNs, AC5 constitutes a key enzyme involved in the modulation of dopaminergic signals and is thus tightly associated with motor control.
PDE10A-related movement disorders

Clinical presentation
The phenotypic spectrum of PDE10A-related disorders is strongly correlated with mutation dosage. In patients carrying a single heterozygous PDE10A variant, disease onset occurs between 5 and 15 years of age. The movement disorder is characterized by chorea that tends to generalize over time. Esposito and colleagues recently described a patient with generalized, non-progressive chorea and diurnal fluctuation that gradually improved during the day and was absent at night (Esposito et al. 2017). The disease course is usually mildly progressive. Patients with dominant PDE10A mutations usually manifest normal cognition and development. Brain MR images show characteristic symmetrical bilateral T2-hyperintense lesions of the striatum (Mencacci et al. 2016;Esposito et al. 2017). In contrast, patients harboring recessive PDE10A mutations are more severely affected. They usually present with chorea in the first year of life. Facial involvement with orolingual dyskinesia was found in six patients of one kindred and resulted in severe dysarthria and drooling (Diggle et al. 2016). Reported patients with homozygous mutations had additional neurological features including delayed motor and speech development, cognitive decline, and axial hypotonia (Diggle et al. 2016). Focal epilepsy has been described in one patient (Diggle et al. 2016). Brain MRI of patients with recessive disease does not show any structural abnormalities of the basal ganglia, though investigation with a specific PDE10A PET ligand revealed significant loss of striatal PDE10A in one patient (Diggle et al. 2016).
Treatment
Management of PDE10A-related disorders is based on the symptomatic treatment of chorea. In other neurological disorders, including Huntington's disease (HD) and schizophrenia, PDE10A has long been considered a promising target for pharmacological treatment (Menniti et al. 2007;Raheem et al. 2016). In these disorders, perturbation of striatal output has been associated with disease pathophysiology (Raheem et al. 2016;Beaumont et al. 2016). In HD, dysfunction of indirect-pathway MSNs is thought to be responsible for the hyperkinetic movement disorder in the early stage of the disease, which is mainly characterized by chorea (Beaumont et al. 2016). Reduced levels of PDE10A have been found in HD patients and HD mouse models (Beaumont et al. 2016). Pharmacologic inhibition of PDE10A in particular enhanced the activity and cortical responsiveness of indirect-pathway MSNs and restored defective basal ganglia corticostriatal circuitry, thus mimicking DRD2 antagonists (Beaumont et al. 2016). Hence, PDE10A inhibitors might in the future provide a potential therapy for the hyperkinetic features of both HD and PDE10A-related disorders.
Genetics
To date, two recessive homozygous PDE10A mutations (p.Tyr107Cys and p.Ala116Pro) have been identified in eight individuals from two consanguineous families. Two recurrent de novo dominant heterozygous PDE10A missense mutations (p.Phe300Leu and p.Phe334Leu) have been reported in four unrelated individuals and in members of a family with an autosomal dominant mode of inheritance (Diggle et al. 2016;Mencacci et al. 2016;Esposito et al. 2017). Both recessive and dominant mutations result in loss of function and reduced levels of PDE10A in the striatum (Diggle et al. 2016;Mencacci et al. 2016). In silico modeling of the p.Phe300Leu and p.Phe334Leu variants demonstrated that the affected amino acids reside within the regulatory GAF-B binding domain, which stimulates PDE10A activity upon binding of cAMP (Mencacci et al. 2016). In vitro studies verified severely impaired cAMP-binding properties (Mencacci et al. 2016). As described above, genotype-phenotype correlations suggest a milder phenotype associated with dominant heterozygous mutations and a more severe phenotype related to homozygous recessive mutations.
Molecular mechanisms
PDE10A encodes the enzyme phosphodiesterase 10A, a dual cAMP-cGMP phosphodiesterase, which is highly abundant in MSNs of the striatum (Coskran et al. 2006). PDE10A catalyzes the hydrolysis of cAMP and cGMP to their corresponding nucleoside 5′-monophosphates and thus regulates both cAMP and cGMP downstream signaling cascades. PDE10A is involved in the modulation of DRD1- and DRD2-activated GPCR signaling and in the control of striatal gene expression (Strick et al. 2010;Diggle et al. 2016). Pharmacological studies revealed that inhibition of PDE10A preferentially targets indirect-pathway MSNs, resulting in suppression of movement and hypokinesia (Threlfell et al. 2009). Indeed, Pde10a-knockout mice and Pde10a-knock-in mice (p.Tyr97Cys variant) show reduced striatal PDE10A levels and manifest hypokinetic movement abnormalities (Schmidt et al. 2008;Diggle et al. 2016). In humans, biallelic mutations in the PDE10A gene are also associated with reduced striatal levels of PDE10A but, in contrast, with a hyperkinetic movement disorder. This observation may reflect species-specific effects and is reminiscent of the situation in HD. In both human patients and the corresponding HD mouse models, striatal levels of PDE10A are reduced (Beaumont et al. 2016). However, HD patients typically manifest an early hyperkinetic movement phase followed by a hypokinetic phase in the later stage of disease, and very few HD mouse models accurately recapitulate this early hyperkinetic phase (Diggle et al. 2016). As is the case for many human movement disorders, the mouse model only partially reflects the disease evident in human patients.
G protein αo (GNAO1)-related disorders

Clinical presentation
The G protein αo (GNAO1)-related phenotypic spectrum encompasses overlapping neurological phenotypes, including early-onset epileptic encephalopathy (EE), drug-resistant epilepsy with a movement disorder (chorea, athetosis, dystonia, stereotypies), and a movement disorder (mainly chorea and athetosis) without seizures. Patients with the epileptic encephalopathy phenotype usually manifest neonatal- or infantile-onset tonic seizures or infantile spasms and exhibit distinct EEG features including burst suppression or hypsarrhythmia. Affected patients exhibit severe developmental delay and may later develop a dyskinetic movement disorder (Nakamura et al. 2013; Talvik et al. 2015; Saitsu et al. 2016; Marcé-Grau et al. 2016; Danti et al. 2017). This condition is currently classified as EIEE17 (MIM no. 615473). The movement disorder phenotype is mainly characterized by progressive chorea and dystonia that usually develop in the first few years of life. Dyskinesia (in particular facial and orolingual), dystonia, and complex motor stereotypies have been commonly reported (Saitsu et al. 2016; Ananth et al. 2016; Danti et al. 2017). The onset of the movement disorder is often preceded by marked hypotonia and neurodevelopmental delay. With increasing age, many patients develop severe exacerbations and suffer from episodes of refractory chorea and ballismus ("status hyperkineticus"), often accompanied by autonomic dysfunction with tachycardia, hyperthermia, hypertension, and diaphoresis (Ananth et al. 2016). These exacerbations are often triggered by fever, intercurrent infections, heightened emotion, and stress. Attacks often arise in clusters and can last minutes to days or even weeks (Danti et al. 2017), often requiring admission to the intensive care unit. Patients with a predominant movement disorder phenotype often show mild cognitive impairment. In patients with GNAO1-related disease, brain magnetic resonance imaging is usually non-specific.
However, a thin or abnormal corpus callosum has been commonly reported (Danti et al. 2017). Atrophy of the basal ganglia and cerebral atrophy have also been described (Ananth et al. 2016; Sakamoto et al. 2017).
Treatment
For GNAO1-related disorders, tetrabenazine, in particular in combination with neuroleptics (risperidone, haloperidol), appears to be effective for the baseline treatment of chorea (Ananth et al. 2016;Danti et al. 2017). However, clinicians should be cautious about side effects including acute dystonic reactions or malignant neuroleptic syndrome. Sakamoto reported a dramatic response to the anticonvulsant topiramate (7.5 mg/kg), an effect which might be attributed to the inhibitory action on voltage-gated Ca 2+ channels (Sakamoto et al. 2017). Episodic exacerbations of movement disorder are often pharmacoresistant. It is of utmost importance to urgently refer these patients to the intensive care unit for dystonia management (increment of dystonia medication dosages, sedation, paralysis), adequate hydration, and continuous monitoring of cardiorespiratory functions, temperature, and laboratory parameters including creatine kinase and renal function to reduce the risk of hyperthermia, renal failure, and rhabdomyolysis. In the case of pharmaco-refractory chorea or dyskinesia, especially when it becomes life-threatening, (emergency) placement of a deep brain stimulator (DBS) into the globus pallidus internus has often resulted in an excellent clinical response (Kulkarni et al. 2016;Yilmaz et al. 2016;Danti et al. 2017).
Genetics
To date, GNAO1 mutations have been identified in 43 individuals (Nakamura et al. 2013; Talvik et al. 2015; Law et al. 2015; Saitsu et al. 2016; Kulkarni et al. 2016; Marcé-Grau et al. 2016; Ananth et al. 2016; Yilmaz et al. 2016; Menke et al. 2016; Arya et al. 2017; Danti et al. 2017; Sakamoto et al. 2017; Schorling et al. 2017; Waak et al. 2017; Bruun et al. 2017). Pathogenic variants are mostly missense mutations, but splice site mutations and a single case with a deletion have also been reported (Nakamura et al. 2013; Danti et al. 2017). Mutations usually occur de novo, with somatic and gonadal mosaicism described in several families (Nakamura et al. 2013; Yilmaz et al. 2016; Menke et al. 2016). The recurrence risk after one affected child has been estimated at 5-15% (Menke et al. 2016). In almost half of all patients, mutations arise at the highly conserved Arg209 and Glu246 residues, indicating mutational hotspots. In vitro functional investigations into the molecular mechanism of 15 GNAO1 pathogenic variants suggested genotype-phenotype correlations (Feng et al. 2017a). Loss-of-function variants were associated with epileptic encephalopathy, while gain-of-function variants were related to predominantly movement disorder phenotypes (Feng et al. 2017b). Menke and colleagues further reported that de novo missense mutations at GNAO1 codons 209 and 246 are predominantly associated with a movement disorder phenotype and developmental delay but without seizures (Menke et al. 2016). Based on a review of the literature, Schorling et al. described a female preponderance for the EE phenotype, suggesting that predilection for epilepsy might be a gender-specific effect in GNAO1-related disorders (Schorling et al. 2017). The movement disorder phenotype appears to affect both sexes equally.
Molecular mechanisms
GNAO1 encodes the alpha-o subunit (Gαo) of G-proteins. Go proteins are the most abundant G-proteins in brain tissue, particularly in neuronal synapses (Jiang and Bajpayee 2009). They regulate multiple intracellular effectors and associated signaling cascades including ion channels, enzymes, and small GTPases (Jiang and Bajpayee 2009). At the presynaptic level, Go proteins further mediate autoinhibitory effects of several neurotransmitters on their receptors (Brown and Sihra 2008). Gαo subunits are specifically involved in the inhibition of voltage-gated Ca2+ channels and the activation of inwardly rectifying K+ channels (Simon et al. 1991; Schorling et al. 2017). Knockout of Gαo in mice (Gαo−/−) results in hyperactive behavior and motor abnormalities including generalized tremor and impaired motor control, as well as occasional seizures, hyperalgesia, and a shortened lifespan (Jiang et al. 1998). A knock-in mutant mouse model (Gnao1+/G184S) exhibits a severe seizure phenotype and premature death (Kehrl et al. 2014). The mutant mice exhibit an elevated frequency of interictal epileptiform discharges on EEG, but no overt changes in brain morphology were seen.
G protein αolf (GNAL1)-related dystonia
Clinical presentation

G protein αolf (GNAL1)-related disorders were first reported in 2012 in adult-onset primary torsion dystonia (DYT25) (Bressman et al. 1994; Fuchs et al. 2012). Disease onset occurs in the third or fourth decade of life. Dystonia is usually initially focal and affects predominantly the craniocervical region. With ongoing disease, dystonia progresses and typically leads to more extensive cervical or laryngeal involvement and, less commonly, truncal or limb involvement. Recently, Masuho et al. identified two affected individuals in a large consanguineous kindred who presented with childhood-onset dystonia (Masuho et al. 2016). Both siblings presented with hypertonia at the age of 1 year and developed generalized dystonia over time. Initial motor and language development was normal.
Treatment
In GNAL1-associated dystonia, a therapeutic trial with levodopa was not beneficial (Bressman et al. 1994). Data on treatment with other anti-dystonic agents are scarce to date.
Genetics

GNAL1 mutations are inherited in an autosomal dominant manner with reduced penetrance (Carecchio et al. 2016). De novo heterozygous GNAL1 mutations have also been described in three patients with seemingly sporadic dystonia and negative family history (Dobričić et al. 2014; Ziegan et al. 2014). Recently, autosomal recessive homozygous missense mutations in the GNAL1 gene have been identified in a consanguineous kindred with childhood-onset dystonia (Masuho et al. 2016). In vitro functional assays have demonstrated attenuated DRD1 response for the nonsense mutant p.Ser293* and impaired association of the Gαolf subunit with the corresponding Gβγ subunit for the missense mutant p.Val137Met, thereby indicating loss of function.
Molecular mechanisms
GNAL1 encodes the stimulatory G-protein alpha subunit Gαolf. Gαolf belongs to the stimulatory G-proteins and couples "direct-pathway" DRD1 and "indirect-pathway" A2A adenosine receptors to the activation of AC5 (Vemula et al. 2013). Gαolf is enriched in striosomes, which are clusters of striatal MSNs that project to the SNpc (Crittenden and Graybiel 2011). An imbalance of striatal striosome activity in relation to the surrounding matrix has been postulated to contribute to the development of hyperkinetic movement disorders (Fuchs et al. 2012). A Gnal+/− knockout mouse model has been used to study L-DOPA-induced dyskinesia in parkinsonism (Alcacer et al. 2012). In the dopamine-denervated striatum, L-DOPA induces DRD1 signaling through the cAMP pathway, including PKA and DARPP-32. Nigrostriatal lesions in Gnal+/− mice lead to upregulation of Gαolf and induce dyskinesia upon chronic treatment with L-DOPA.
GPR88-related movement disorder

Clinical presentation and genetics
GPR88-related movement disorder has so far been described in only four individuals from one consanguineous kindred (Alkufri et al. 2016). The female siblings presented with speech delay and learning disability and developed chorea at the age of 8-9 years. The movement disorder affected mainly the face and hands, but choreiform movements were also noted in the shoulders, pelvis, and thighs. Alkufri et al. identified a homozygous nonsense mutation in the GPR88 gene, which encodes an orphan G-protein-coupled receptor (Alkufri et al. 2016).
Molecular mechanisms
GPR88 is highly expressed in both DRD1- and DRD2-expressing MSNs of the striatum (Massart et al. 2009; Quintana et al. 2012). GPR88 deficiency in a knockout mouse model (Gpr88 Cre/Cre) leads to enhanced excitability of DRD1- and DRD2-expressing striatal MSNs owing to increased glutamate receptor phosphorylation and altered GABA-A receptor composition (Quintana et al. 2012). The Gpr88 Cre/Cre mice show increased locomotion, hyperactivity in a novel environment, and stereotypic behavioral abnormalities reminiscent of striatal dysfunction (Meirsman et al. 2016).
Other genetic movement disorders associated with secondary postsynaptic dysfunction

DYT1 early-onset dystonia

Clinical presentation and genetics

DYT1 dystonia is a hereditary early-onset movement disorder caused by mutations in TOR1A encoding the protein torsin A. Patients manifest with isolated dystonia in childhood or adolescence, usually without any other associated neurological abnormalities (Ozelius and Lubarr 1993). Though not part of the initial presentation, executive dysfunction and psychiatric comorbidities such as mood and anxiety disorders have been described in DYT1 dystonia (Jahanshahi 2017). In the early course of disease, dystonia usually affects one (usually lower) limb and is often related to specific actions (action-induced or task-specific dystonia). Over time, dystonia usually progresses and becomes segmental, multifocal, or generalized in 60-70% of all patients (Ozelius and Lubarr 1993). DYT1 dystonia shows an autosomal dominant mode of inheritance and manifests with reduced penetrance, estimated at 30%. The majority of patients harbor a three-base-pair deletion (c.907_909delGAG), though three additional in-frame deletions have been reported singly in other individuals (Ozelius and Lubarr 1993).
Molecular mechanisms
Although the exact function of torsin A is yet to be fully elucidated, it is thought to shuttle between the endoplasmic reticulum (ER) and the nuclear envelope (NE) for several physiological functions including ER-associated degradation, dopamine release and metabolism, synaptic shuttling of mRNAs, and cytoskeleton dynamics (Ozelius and Lubarr 1993). Several studies have investigated the role of torsin A in dopamine neurotransmission in striatal neurons. Data from three different DYT1 transgenic mouse models suggest a role for presynaptic dysfunction in dopaminergic neurons owing to impaired dopamine release (Page et al. 2010). However, electrophysiological studies in striatal slice cultures from a transgenic DYT1 mouse model also revealed postsynaptic alterations. Activation of postsynaptic DRD2 resulted in a paradoxical excitatory effect in striatal cholinergic interneurons leading to inappropriate firing activity (Pisani et al. 2006). MSNs of transgenic mice showed decreased surface expression of postsynaptic DRD2 with deficient G-protein coupling (Napolitano et al. 2010). Further studies investigated a potential DRD2 trafficking defect due to reduced torsin A chaperone activity. This hypothesis was corroborated by data demonstrating a direct interaction between torsin A and DRD2, and by PET imaging studies showing decreased DRD2 availability in the brains of DYT1 patients (Torres et al. 2004; Carbon et al. 2009).
Genetic early-onset parkinsonism

Clinical presentation and genetics
Parkinson's disease (PD) represents the second most common neurodegenerative disorder in adults and most commonly occurs sporadically (Kalia and Lang 2015). However, approximately 5-10% of patients have a monogenic form of the disease with either an autosomal recessive or dominant mode of inheritance (Lin and Farrer 2014). In these monogenic forms, disease onset typically occurs in childhood (juvenile-onset parkinsonism, usually < 20 years) or in adulthood before the age of 40-45 (early-onset parkinsonism) (Puschmann 2013; Bonifati 2014). PD is neuropathologically characterized by progressive loss of nigrostriatal dopaminergic neurons leading to the typical clinical triad of bradykinesia/akinesia, rigidity, and tremor. In the monogenic early-onset forms of PD, additional neurological features including neurodevelopmental delay, intellectual disability, psychiatric comorbidities, and epilepsy are commonly reported. To date, several genes have been associated with juvenile, atypical parkinsonism (ATP13A2, PLA2G6, FBXO7, DNAJC6, SYNJ1) and early-onset parkinsonism (SNCA, PARK2, PINK1, DJ1) (Bonifati 2014).
Molecular mechanisms
Genes associated with early-onset parkinsonism are mainly involved in disruption of presynaptic function (Bonifati 2014). Pathogenic variants have been shown to impair protein trafficking, autophagy, and mitochondrial function, culminating in loss of dopaminergic neurons (Lynch-Day et al. 2012; Pickrell and Youle 2015; Hunn et al. 2015). Many of the affected proteins in PD may also have other effects in different synaptic compartments, which remain to be fully elucidated. Indeed, in early-onset PD, there is emerging evidence for postsynaptic alterations that may contribute to the disease pathology. For example, Parkin, encoded by the gene PARK2, has been shown to localize not only to presynaptic but also to postsynaptic terminals (Sassone et al. 2017). At the postsynaptic terminal, Parkin colocalizes with the postsynaptic density marker PSD-95. Through interaction with PSD-95, Parkin is suggested to regulate trafficking, anchoring, and clustering of membrane surface receptors (Sassone et al. 2017). Parkin is further involved in the mono-ubiquitination of PICK1, a synaptic scaffold protein that regulates the trafficking of several neurotransmitter receptors, ion channels, and enzymes (Joch et al. 2007). Further studies demonstrated that Parkin modulates postsynaptic glutamate receptors. Loss of Parkin leads to an increase in excitatory activity, which ultimately results in excitotoxic dopaminergic cell death (Sassone et al. 2017). Further studies are warranted to elucidate postsynaptic disease mechanisms in genetic early-onset PD. Overall, investigation of postsynaptic alterations in monogenic PD may provide insights into more common forms of PD.
Conclusion
Over the past few years, a number of genetic movement disorders have been identified in which defects in postsynaptic MSN function are thought to play a crucial role in disease pathogenesis. Mutations in genes such as ADCY5, PDE10A, GNAO1, GNAL1, and GPR88 affect key proteins of the postsynaptic cAMP signaling pathway, which mediate the effects of dopaminergic neurotransmission in striatal MSNs. On a molecular level, loss- or gain-of-function pathogenic variants differentially impact the signaling cascade but result in hypo- or hyperfunctional dopaminergic signaling in striatal MSNs.
From a clinical viewpoint, these genetic diseases, which align to a common disease pathway, also manifest a number of overlapping clinical features. All are characterized by prominent, early-onset movement disorders with hyperkinetic manifestations such as chorea and dyskinesia. Facial involvement is commonly reported in ADCY5-, PDE10A-, GNAO1-, and GPR88-related disorders. Despite these similarities, the course of disease and specific distinct phenotypic features may help to discriminate them clinically. Indeed, ADCY5- and PDE10A-related disorders seem to show a static or mildly progressive course, while GNAO1-related movement disorders are characterized by progressive chorea which can become life-threatening in some patients. Distinguishing clinical features may further include sleep-related phenomena and marked fluctuation in ADCY5 disease, abnormal MRI features in dominant PDE10A disease, and severe exacerbations associated with autonomic dysfunction in patients with GNAO1 mutations.
Given these substantially overlapping phenotypes, establishing a definitive diagnosis is often not straightforward. Furthermore, with increasing patient diagnoses, the molecular and clinical spectrum is likely to expand further, with the identification of atypical disease phenotypes. Implementation of next-generation sequencing techniques in clinics has already translated into better diagnostics for these rare postsynaptic disorders. For many of these disorders, a diagnostic whole-exome approach or multiple-gene panel testing may be the most efficient method of reaching a confirmatory diagnosis. Despite these genetic advances, clinicians still face the enormous unmet need for disease-specific personalized therapies, as many of these disorders are pharmacoresistant and challenging to treat with conventional, currently available drugs. Precision medicine approaches targeting the specific gene defect may provide a better long-term strategy to overcome this gap. Gene therapy and RNA manipulation techniques represent attractive new technologies to approach a patient's specific genetic condition. Future identification of specific therapies targeting the cAMP pathway, a critical cellular signaling pathway in striatal MSNs, may revolutionize the treatment of these severe genetic movement disorders.
Funding Prof. Kurian is funded by an NIHR Research Professorship and the Wellcome Trust, as well as through a project grant from the Rosetrees Trust. Dr. Abela is funded by a Swiss National Science Foundation Advanced Postdoc.Mobility fellowship.
Compliance with ethical standards
Conflict of interest Prof. Manju Kurian and Dr. Lucia Abela declare that they have no conflict of interest.
Animal rights Not applicable
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
"year": 2018,
"sha1": "33fe0c88a7d0e0a2f65fbe30efc7ff5655c14abf",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1007/s10545-018-0205-0",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "33fe0c88a7d0e0a2f65fbe30efc7ff5655c14abf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Role of calcium and calmodulin in hemidesmosome formation in vitro.
Intact epithelial sheets were removed from rabbit corneas using Dispase II, a bacterial neutral protease. The freed sheets were placed on denuded corneal basal laminae and incubated at 35°C for 3, 6, 18, or 24 h. Epithelial-basal lamina preparations were incubated in culture medium that either contained (a) varying concentrations of Ca2+ ions, (b) calmodulin antagonists, (c) exogenous calmodulin following an initial 6-h incubation in the presence of antagonists, or that lacked (d) Mg2+ ions. Tissues were processed for electron microscopy, and micrographs were taken of basal cell membranes. At least four experiments were conducted for each treatment, and for each experiment the total number of hemidesmosomes was counted along the basal membrane-basal lamina surface of eight cells. The number of hemidesmosomes formed was directly proportional to the increasing concentration of Ca2+. The presence or absence of Mg2+ ions did not change the numbers of hemidesmosomes formed. Calmodulin antagonists inhibited hemidesmosome formation, and this inhibition was reversed by the addition of calmodulin. Thus, hemidesmosome formation is Ca2+ dependent and appears to be mediated by a calmodulin-regulated mechanism.
Adhesion of one cell to another or to a substrate is a fundamental property of cells in higher organisms. Divalent cations such as Ca2+ play a major role in many diverse cell-cell adhesion systems. Grunwald et al. (1) demonstrated that dual adhesion mechanisms existed in disassociated embryonic neural retina cells, and Brackenbury et al. (2) reported that the phenomenon was also true for both neural and non-neural tissue throughout the chick embryo. A dual adhesion mechanism requires that both Ca2+-dependent and -independent adhesion co-exist and are responsible for the aggregation behavior of the cells. Chick embryonic cells will cross-adhere regardless of their tissue origin as long as they share one of these two classes of adhesion.
Hennings and Holbrook (3) and Hennings et al. (4) used epidermal cells from BALB/c mice to examine the divalent cation requirements of desmosome formation, a cell-cell adhesion junction. They observed asymmetric desmosomes when cells were cultured in low Ca2+ medium. 5 min after increasing the concentration of Ca2+ to 1.2 mM they found desmosomal plaques that had tonofilaments inserting into them, and after 2 h they observed symmetric desmosomes (desmosomal plaques opposite each other on opposing cells). By changing the concentration of Ca2+, Jones et al. (5) demonstrated that the association of tonofilament bundles with desmosome formation in primary mouse epidermal cells was Ca2+ dependent. At low Ca2+ concentrations a bundle network of tonofilaments was located in the juxtanuclear region. After Ca2+ was added to the medium, the network moved toward the cell periphery and made contact with the cell membrane. Desmosome formation then increased dramatically. In many organ and tissue systems control of the level of intracellular Ca2+ appears to be dependent on the ubiquitous Ca2+-binding protein, calmodulin (CaM) (6, 7). CaM controls a number of fundamental activities, such as cell proliferation and migration (8, 9) and Ca2+ transport (10).
It is not known whether divalent cations or CaM play a role in the maintenance and formation of the cell-substrate adhesion junctions such as hemidesmosomes (HDs). HDs are those adhesive junctions that attach basal cells of stratified squamous epithelia to their substrate, the basal lamina. In addition to providing a strong mechanical coupling, it is likely that these junctions, through their associated tonofilaments, exert tension and distribute the force throughout the cells, playing a role in the maintenance of cell shape .
1 Abbreviations used in this paper: CaM, calmodulin; HD, hemidesmosome.
Except for the ultrastructural studies of Krawczyk and Wilgram (11) and of Beerens et al. (12), there has been little information available on HD formation. Recently, Gipson et al. (13) developed an in vitro system for studying HD formation. Intact sheets of rabbit corneal epithelium were placed on denuded basal laminae and incubated. Using this procedure, the investigators found that the majority of new HD formation occurred within the first 6 h of culture. By 24 h, >90% of the number of HDs per micron of membrane found in normal intact rabbit corneas had formed. As the length of culture time increased, the percentage of immature HDs decreased as the percentage of mature HDs increased. Immature HDs could be divided into two types. Type 1 was characterized by the presence of fine filaments between the membrane and the lamina densa, and Type 2 was characterized by the presence of an electron-dense plaque on the cytoplasmic face of the membrane. Mature HDs (Type 3) were distinguished from immature HDs by the appearance of an extracellular electron-dense line parallel to the membrane and the lamina densa. In addition, at this stage intermediate filaments that inserted into the electron-dense plaque were often present. The major shift in HD maturation occurred during the first 6 h of culture. The investigators also observed that de novo HD formation occurred at sites on the basal lamina opposite existing anchoring fibrils. Anchoring fibrils insert into the lamina densa on the side opposite the basal cell plasmalemma and splay out among the collagen fibrils. The in vitro system developed by Gipson et al. (13) provides a method for examining the role of divalent cations and CaM in HD formation.
We found that HD formation is dependent on the concentration of Ca2+; development of HDs into mature stages is Ca2+ dependent; epithelial basal cell shape is Ca2+ dependent, and a change in cell shape from columnar to round decreases the extent of HD formation; and CaM antagonists reversibly inhibit HD formation.
MATERIALS AND METHODS
Animals and Tissues: Corneas from New Zealand white rabbits were used for all the experiments. A complete description of the removal of intact corneal epithelial sheets is found in Gipson and Grill (14), and the protocol for placing these epithelial sheets on basement membranes is explained by Gipson et al. (13). Briefly, a circular piece of cornea 9 mm in diameter was removed and placed in defined culture medium (15). The low Ca2+ medium contained the same additives as the control defined culture medium described above. Control medium contained 1 mM Ca2+, whereas low Ca2+ medium contained 10 μM Ca2+. EGTA concentrations of 0.5 and 2.0 mM were used to produce final free Ca2+ concentrations of 5.0 and 0.5 μM.
The medium was buffered to pH 7 .4 with monobasic sodium phosphate buffer.
1566 THE JOURNAL OF CELL BIOLOGY · VOLUME 98, 1984

HD formation in low concentrations of Ca2+ ions was compared with the data obtained from that in 1 mM Ca2+ (13).
Determination of Free and Bound Ca2+ Concentrations: The concentration of free Ca2+ was determined with a Ca2+ ion-selective electrode. For each solution, the concentration of free Ca2+ in the medium was tested before incubating the tissue and again after a 6-h incubation period. The tissue was taken after incubation in medium containing varying concentrations of Ca2+, and the total Ca2+ in the cornea was determined by atomic absorption spectrophotometry.
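As a complement to direct electrode measurement, the free Ca2+ expected from EGTA buffering can be estimated from the 1:1 binding equilibrium. This is a rough sketch, not the authors' procedure: the apparent dissociation constant below (~150 nM at pH 7.4) and the total-calcium values in the asserts are assumptions for illustration only.

```python
import math

def free_calcium(total_ca, total_egta, kd=1.5e-7):
    """Free [Ca2+] (M) for a 1:1 Ca-EGTA binding equilibrium.

    Mass balance: free + total_egta * free / (kd + free) = total_ca,
    which rearranges to the quadratic
        free**2 + (total_egta + kd - total_ca) * free - kd * total_ca = 0.
    kd is the apparent Ca-EGTA dissociation constant; ~150 nM at pH 7.4
    is an assumed round value (the true constant depends on pH,
    temperature, and ionic strength).
    """
    b = total_egta + kd - total_ca
    return (-b + math.sqrt(b * b + 4.0 * kd * total_ca)) / 2.0

# With no chelator, all calcium stays free (exact by the quadratic).
assert abs(free_calcium(1e-3, 0.0) - 1e-3) < 1e-12

# Excess EGTA clamps free Ca2+ far below the total (hypothetical totals).
assert free_calcium(1e-4, 2e-3) < 1e-6
```

In the experiments above, 0.5 and 2.0 mM EGTA were used to clamp free Ca2+ near 5.0 and 0.5 μM; a calculation of this kind can only approximate such values, since the effective Kd is strongly pH dependent, which is presumably why free Ca2+ was verified with the ion-selective electrode.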
Use of CaM Antagonists and CaM in Culture: To determine whether HD formation was CaM dependent, epithelial-basal lamina combinations were incubated for 6 h in medium containing 1 mM Ca2+ and 40 μM W7 or W5, two CaM antagonists. The antagonists were initially dissolved in dimethyl sulfoxide and then diluted with culture medium. In half the experiments the medium was changed at 6 h and the tissue was incubated for an additional 12 h in the absence of the antagonists. To further examine the effect of CaM on HD formation, the epithelial-basal lamina preparations were incubated for 6 h in the presence of the antagonist W7; after the 6-h culture, corneas were washed in three changes of defined medium and cultured for an additional 12 h in the same medium containing 2 μM CaM.
Statistical Analysis: A minimum of four experiments were conducted for each treatment. Electron micrographs were taken of basal cell membranes of eight cells for each experiment, and the total number of HDs was counted by two independent investigators. The mean number of HDs per micron of membrane was recorded. The type of HD (immature or mature) was recorded according to the designation of Gipson et al. (13). All data are presented in the form of the mean ± SEM. Mann-Whitney U tests were conducted to determine whether or not the number of HDs present for one treatment differed significantly from that for another treatment. The density of HDs per micron of membrane formed in low Ca2+-containing media was compared with that found in control medium.
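The Mann-Whitney U comparison used here can be sketched in a few lines. This is an illustrative reimplementation using the large-sample normal approximation, without tie or continuity corrections, and the sample values below are hypothetical HD densities, not data from this study.

```python
import math

def mann_whitney_u(xs, ys):
    """Return (U, z): the Mann-Whitney U statistic (smaller of the two
    sample statistics) and its normal-approximation z-score."""
    n1, n2 = len(xs), len(ys)
    pooled = sorted(xs + ys)

    def rank(v):
        # Average (1-based) rank of value v in the pooled sample;
        # tied values share the mean of the ranks they occupy.
        first = pooled.index(v)
        count = pooled.count(v)
        return first + (count + 1) / 2.0

    r1 = sum(rank(v) for v in xs)          # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2.0          # U for the first sample
    u = min(u1, n1 * n2 - u1)              # report the smaller U
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return u, (u - mu) / sigma

# Hypothetical HDs-per-micron values for two treatments (not data
# from this paper).
control = [2.0, 2.1, 2.2, 2.3]
low_ca = [0.4, 0.5, 0.6, 0.7]
u, z = mann_whitney_u(control, low_ca)
assert u == 0.0  # complete separation of the two samples
```

With samples this small, an exact-distribution table rather than the normal approximation would normally be consulted; the sketch only shows the mechanics of the statistic.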
Effect of Ca" on HD Formation
The number of HDs that had formed on basal cells of corneal epithelium after incubation on basal lamina in medium containing low Ca2+ (10 μM) for 3, 6, 18, or 24 h is shown in Fig. 1. The upper line (x) denotes formation in the control medium containing 1 mM Ca2+ (13). To determine if HD formation was correlated with changes in Ca2+ concentration, the number of HDs formed per micron of membrane in the presence of five Ca2+ concentrations was determined. At both 6 and 18 h of incubation time, the number of HDs increased with increasing concentration of Ca2+ (Fig. 2). The number of HDs formed after 6 h did not differ significantly among the three lowest Ca2+ concentrations (Figs. 2 and 3, a and b). However, a significant increase in HDs occurred when culture medium contained 0.6 mM Ca2+.
A similar increase occurred when medium contained 1.0 mM Ca2+. The number of HDs present after 18 h in six concentrations of Ca2+ ranging from 5 μM to 1 mM was observed to follow a gradual step-like transition (Fig. 2). The greatest increase in the density of HDs occurred between 0.3 and 1.0 mM Ca2+. The increase in the number of HDs with increasing Ca2+ concentration can be seen in the electron micrographs in Fig. 3. In low Ca2+ medium (Fig. 3 a) only a small number of mature HDs were present, and these were distributed sporadically along the basal lamina. The distribution of HDs along the cell membrane was more regular at higher Ca2+ concentrations (Fig. 3 d).
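A saturating dose-response of this shape is often summarized with a Hill-type curve. The sketch below is our illustration, not an analysis from the paper: the (Ca2+, HDs/μm) pairs are hypothetical values chosen only to mimic the qualitative trend, and the brute-force grid fit is a deliberately simple stand-in for a proper nonlinear regression.

```python
def hill(ca, vmax, k):
    # Hill curve with coefficient 1: response saturates at vmax and is
    # half-maximal when ca == k.
    return vmax * ca / (k + ca)

def fit_hill(data, vmax_grid, k_grid):
    """Brute-force least-squares fit of (vmax, k) over a parameter grid."""
    best = None
    for vmax in vmax_grid:
        for k in k_grid:
            sse = sum((hd - hill(ca, vmax, k)) ** 2 for ca, hd in data)
            if best is None or sse < best[0]:
                best = (sse, vmax, k)
    return best[1], best[2]

# Hypothetical dose-response points: (Ca2+ in mM, HDs per micron of
# membrane), loosely mimicking an increase that saturates near 1 mM.
points = [(0.005, 0.2), (0.01, 0.3), (0.1, 0.9),
          (0.3, 1.4), (0.6, 1.8), (1.0, 2.1)]
grid = [x / 10.0 for x in range(1, 51)]  # 0.1 .. 5.0 in steps of 0.1
vmax_hat, k_hat = fit_hill(points, grid, grid)
```

A grid search is robust but coarse; a gradient-based least-squares routine would give finer estimates of the saturation level and half-maximal Ca2+ concentration.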
HD formation did not require Mg2+ ions. In medium lacking Mg2+, the number of HDs per micron of membrane was 2.1 ± 0.27 after 6 h as compared with 2.1 ± 0.08 in control medium. The percentage of mature HDs did not differ significantly from that found when incubated in the control medium (Mann-Whitney U test, p < 0.02).
Effect of Ca 2+ Concentration on HD Maturation
Maturation of HDs depended on both Ca2+ concentration and length of incubation. Table I shows the percentage of mature HDs at 3, 6, and 18 h in varying Ca2+ concentrations. At 18 h the greatest percentage of mature HDs was found in control medium, and the percentage then decreased with decreasing Ca2+ concentration. The percentage was greater at 10 μM than at 0.1 mM; however, the difference was not significant. Although the percentage of mature HDs was lower at 6 h for all concentrations, the same trend was apparent (Table I). At 3 h, mature HDs were present only at 1 mM Ca2+. Although formation occurred when incubated in medium containing <10 μM Ca2+, only immature stages were present. The number or maturity of HDs per micron of membrane did not affect their association with anchoring fibrils located beneath the lamina densa of the basement membrane. More than 90% of the HDs present were associated with anchoring fibrils (Fig. 3). A high association of the HDs present with the underlying fibrils agreed with the data of Gipson et al. (13).
Effect of CaM Inhibitors on HD Formation
Phenothiazines, such as trifluoperazine (TFP), and naphthalenesulfonamides, such as the W series (W7 and W5), bind to CaM in a Ca2+-dependent manner and inhibit Ca2+-CaM-regulated activities (9, 18-22). Two specific antagonists, W5 and W7, were added to the culture medium to test their effect on HD formation. HD formation was negligible when 40 µM W7 was added to the control medium and incubated for either 6 or 18 h (Table II); only immature HDs were present. The number of HDs per micron of membrane was 10.7% of the control at 6 h and 9.6% of the control at 18 h. When the antagonist was removed after 6 h of incubation and the corneas were rinsed and incubated for an additional 12 h in control medium, formation occurred; it was significantly greater than in the continued presence of W7 and resembled that in cultures containing 10 µM Ca2+. When 2 µM CaM was added to the control medium for the second half of the incubation, the number of HDs per micron of membrane was significantly higher than that attained with the addition of control medium alone (Mann-Whitney U test, p ≤ 0.05); 72% of the control number of HDs per micron of membrane was present after 12 h. Formation did not occur when 2 µM CaM was added to medium containing 10 µM Ca2+ and 2 mM EGTA. W5, a control for nonspecific effects of W7 (9, 22), did not inhibit HD formation as extensively as did W7: 30% more HDs were present along the basal membrane when W5 instead of W7 was added to the control medium. (The Journal of Cell Biology, Volume 98, 1984, p. 1568.)
Influence of Low Ca2+ and CaM Antagonists on Protein Synthesis
Culture in low-Ca2+ medium or in control medium containing 40 µM W7 did not significantly affect the metabolic state in any of the three media examined (10 µM Ca2+; 1 mM Ca2+ [control]; and 1 mM Ca2+ with 40 µM W7). After a 6-h culture, [3H]leucine incorporation into trichloroacetic acid-precipitable proteins was 99% and 94.8% of the control, respectively. Thus the lower number of HDs per micron of membrane present after treatment with W7 does not appear to be the result of depressed metabolic activity.
Cell Architecture and Cell-Cell Adhesion Organelles
When incubated for 18 h, epithelial sheet architecture and cell shape were influenced by the concentration of Ca2+. At concentrations >0.3 mM Ca2+, or in Mg2+-free medium, the epithelial sheets on the basement membranes displayed normal, continuous apical-basal stratification with columnar basal cells (Fig. 4). At lower Ca2+ concentrations, or in the presence of CaM antagonists, basal cells spread along the basal lamina and stratification was focal. The lack of cell-shape maintenance at the lowest Ca2+ concentrations was seen even at 3 h. Although neither HDs nor desmosomes were present at the lowest Ca2+ concentrations, one layer of cells constantly adhered to the basal lamina (Fig. 4 b).
DISCUSSION
We have determined that HD formation, which occurs when freed sheets of rabbit corneal epithelium are placed on denuded corneal basal laminae, requires Ca2+ and is mediated by CaM. The number of HDs formed depends on the concentration of Ca2+, and HD maturation reflects not only time but also Ca2+ concentration. In addition, we have found that the shape of the epithelial basal cells affects normal HD formation and that the CaM antagonist W7 inhibits HD formation in a reversible manner. The ionic requirements for HD formation, a cell-substrate adhesion junction, resemble those of the cell-cell adhesion junctions, desmosomes. Hennings et al. (4) and Hennings and Holbrook (3) showed that desmosome formation between mouse epidermal cells required Ca2+. They established this requirement by showing the absence of formation in low-Ca2+ medium and the return of formation 2 h after the concentration of Ca2+ was restored.
After 6 h of culture, HD formation in low-Ca2+ medium occurred at a faster rate than in control medium. This contrasts with the extent of formation, which was lower than that observed in control medium. These observations may be explained by the hypothesis that the number of HDs that form is controlled by the number of available sites. Using the in vitro system, Gipson et al. (13) presented data indicating that HDs form over sites on the basal lamina where anchoring fibrils insert. They also demonstrated that >80% of the number of HDs present in control corneas had formed by 6 h. Thus, by 6 h in control medium most of the available sites had been filled, and the rate of formation leveled off because only a small number of available sites remained. In low-Ca2+ medium, since the extent of formation is lower, we hypothesize that the rate of formation is higher after 6 h because many sites remain available. It is possible that the mobilization of intracellular Ca2+ during incubation might permit the higher rate observed in the low-Ca2+ medium.
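The site-limited argument can be made concrete with a toy saturation model. If formation fills a fixed pool of sites at a rate proportional to the sites remaining, dN/dt = k(N_max − N), then a condition that fills most sites early (control) has a low residual rate after 6 h, while a slower-filling condition (low Ca2+) retains a higher rate. The rate constants below are hypothetical, chosen only to mimic the argument in the text, not fitted to the data:

```python
import math

def hd_density(t, n_max, k):
    # solution of dN/dt = k * (n_max - N) with N(0) = 0
    return n_max * (1.0 - math.exp(-k * t))

def rate(t, n_max, k):
    # instantaneous formation rate: proportional to unfilled sites
    return k * (n_max - hd_density(t, n_max, k))
```

With a control rate constant filling >80% of sites by 6 h and a slower low-Ca2+ constant, the extent at 6 h is lower in low Ca2+ while the instantaneous rate at 6 h is higher, as observed.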
Our data indicate that HD maturation is not only time dependent (13) but also Ca2+-concentration dependent. The smaller percentage of mature HDs possessing intermediate filaments inserting into their plaques in low-Ca2+ medium may be related to the observation that Ca2+ is required for intermediate filaments to associate with adhesion plaques. Jones et al. (5) recently described the behavior of intermediate filament bundles in low-Ca2+ medium in primary mouse epidermal cells and observed that the bundles were generally located in the juxtanuclear region of the cell. They also reported that intermediate filaments rearrange, move to the cell periphery, and make contact with desmosomes after the addition of Ca2+. Our data support these observations and indicate that the association of intermediate filaments with HDs is Ca2+ dependent.
Since CaM is known to regulate a number of fundamental activities such as glycogen metabolism, intracellular motility, Ca2+ uptake, and DNA synthesis (6, 7, 10, 23-25), the role of CaM in the formation of cell-substrate adhesion junctions is not surprising. The results indicate that even though the excess antagonist is removed and fresh medium added for the second half of the incubation, some of the CaM present in the cell remains bound to the antagonist in the tissue; therefore, both CaM and Ca2+ are required for further formation. The CaM that is added to the medium may either cause the antagonist to dissociate from the CaM in the cell by acting as a sink, or it may enter the cell and replace the binding capacity lost when the antagonist bound to cellular CaM. Entrance of CaM into the cell may be possible, as the membranes are presumably altered after incubation with the antagonist (26). Studies using radioactively labeled exogenous CaM need to be conducted before one can ascertain whether or not CaM enters the cell. When CaM was added to medium containing 10 µM Ca2+ and 2 mM EGTA, no HD formation occurred, because the EGTA competes with CaM for free Ca2+, thus removing it from the system. Our data indicate that HD formation is a Ca2+-CaM-regulated activity.
The shape of the epithelial basal cell appears to be of prime importance in HD formation, as HDs formed at the greatest density when the basal cells of the epithelial sheet were columnar and when the ionic requirement for Ca2+ was met. Jones et al. (5) showed that the intermediate filament bundle system of desmosomes (cell-cell junctions) is important in cell-shape maintenance. After 1 h in low-Ca2+ medium, desmosomes were not able to maintain their structural integrity, as the intermediate filament bundles moved from the cell periphery and the cell deviated from its "native" columnar shape. Once Ca2+ was returned to the medium and the desmosomes re-formed (within 30 min), the cells began to acquire a more columnar shape. Our experiments agree with the findings of Jones et al. (5) and Hennings et al. (4) that desmosome formation occurs soon after the concentration of Ca2+ in the medium is restored. However, HD formation does not occur as rapidly: it was not until 6 h of incubation in control medium after the low-Ca2+ incubation that any HD formation occurred, and formation was observed only at sites where several adjacent cells had returned to their columnar shape.
Pitelka et al. (27) have also observed contortion of mammary epithelium grown on collagen when chelators such as EGTA or sodium citrate are added. Even though cell-substrate adhesion is maintained, the distortion of these cells may be attributed to the centripetal tension within each cell. When low-Ca2+ medium was used in our organ culture system, the basal cell layer of the corneal epithelium adhered to the basal lamina and the characteristic hump shape of the cells described by Pitelka et al. (27) was present. Increasing the concentration of Ca2+ caused the corneal epithelial cells to become more columnar in shape. HD formation appears to require the precise alignment of basal cells and their initial adherence to the basement membrane; only after cell-substrate alignment and adhesion did assembly, synthesis, and maturation of HDs occur. The importance of epithelial basal cell shape and realignment is supported by the work of several investigators. Following subepidermal blister induction, Beerens et al. (12) observed that the initial step of HD formation was the realignment of the basal cells to the basement membrane. They suggested that the HD remnants from the operation were phagocytosed before formation. Krawczyk and Wilgram (11) noted that mature HDs were present only beneath fully attached, nonmigrating keratinocytes. This observation was supported by that of Buck (28), who showed that there was only a patchy distribution of HDs near the migrating marginal cells of healing mouse corneal epithelium; HD number was suppressed as far as 1.5 mm from the migrating cells.
The molecular mechanisms involved in the regulation of HD formation remain to be elucidated. Our results indicate that formation and maturation constitute a multi-step process that uses both Ca2+-dependent and Ca2+-independent mechanisms. Dual adhesion systems have been reported by several investigators (1, 2, 29) in their cell-cell molecular adhesion systems in the developing chick embryo. First, in the present system, initial adhesion of the corneal epithelial basal cell layer occurred at the lowest concentrations of Ca2+. Second, formation of HDs was inhibited by CaM antagonists in a reversible manner; in low concentrations of Ca2+, most of the cell's energy seems to be directed toward formation rather than maturation, as indicated by the longer periods of high formation rates. Third, maturation may be influenced by the mobilization of the intermediate filament bundles to the cell periphery. Our culture system allows the study of different phases of HD formation under controlled ionic conditions. Utilization of the model system of Gipson et al. (13) may facilitate the understanding of the regulation of HD formation and of the role of Ca2+ and CaM in controlling cell-substrate interactions.
We study the phononic collective modes of the pairing field $\Delta$ and their corresponding signature in both the order-parameter and density response functions for a superfluid Fermi gas at all temperatures below $T_c$ in the collisionless regime. The spectra of collective modes are calculated within the Gaussian Pair Fluctuation approximation. We deal with the coupling of these modes to the fermionic continuum of quasiparticle-quasihole excitations by performing a non-perturbative analytic continuation of the pairing field propagator. At low temperature, we recover the known exponential temperature dependence of the damping rate and velocity shift of the Anderson-Bogoliubov branch. In the vicinity of $T_c$, we find analytically a weakly-damped collective mode whose velocity vanishes with a critical exponent of $1/2$, and whose quality factor diverges logarithmically with $T_c-T$, thereby clarifying an existing debate in the literature (Andrianov and Popov, Theor. Math. Phys. 28, 829; Ohashi and Takada, J. Phys. Soc. Jpn. 66, 2437). A transition between these two phononic branches is visible at intermediate temperatures, particularly in the BCS limit, where the phase-phase response function displays two maxima.
In superconductors, the phononic nature of the Anderson-Bogoliubov mode is lost at low temperature due to long-range Coulomb interactions, which shift its energy toward the plasma energy [1]. However, in dirty superconductors (which are far in the hydrodynamic regime due to the presence of impurities), it was shown both experimentally [32,33] and theoretically [34-36] that a phononic collective mode, known as the Carlson-Goldman mode, exists close to T_c. The speed and damping rate of this collective mode were found to vanish at T_c.
In the present work, we compute the complex sound velocity of the phononic collective modes within GPF in a self-consistent nonperturbative way, which allows us to explore all temperatures from 0 to T c . We show that the GPF effective action can be rigorously expanded at low energy ω and wave number q provided one introduces a complex sound velocity u and sets ω = uq. The expansion yields an explicit equation for u that exhibits a branch cut for real u due to the coupling between phonons and fermionic quasiparticles. Following the procedure of Ref. [4] for the pair-breaking branch, we solve this equation after analytic continuation through the branch cut, and study the solutions as functions of temperature and interaction strength. In the limits T → 0 and T → T c , we perform this continuation entirely analytically. For intermediate temperatures, we develop a numerical method to perform the analytic continuation, which is based on the procedure of Nozières [37].
We find, in general, two complex roots of the dispersion equation. One root describes the Anderson-Bogoliubov sound velocity in the zero-temperature limit. Near the transition temperature, we find that there exists another phononic collective mode whose complex velocity vanishes with a critical exponent of 1/2 and whose quality factor diverges logarithmically with T_c − T. This root appears in both the phase-phase and density-density response functions as a resonance centered around ω/v_F q ≈ ∆/T_c, which sharpens when approaching T_c. At intermediate temperatures, the two phononic branches coexist and give a characteristic double-Lorentzian shape (accentuated in the BCS regime) to the phase-phase response function.
Our results are in good agreement with the existing experimental data at low temperatures. In the vicinity of T c , where the order-parameter collective mode has not yet been observed, we explain how the phase-phase response function could be measured by adapting to cold atoms the setup of Carlson and Goldman based on a Josephson junction between a cold (T → 0) and a hot (T → T c ) superfluid.
A. Gaussian fluctuation action
The present theoretical investigation of collective excitations in superfluid Fermi gases is performed in the path-integral formalism. We consider ultracold two-component Fermi gases with s-wave pairing, described [14,15,38] by an action functional in the Grassmann variables ψ_σ, ψ̄_σ, where β = 1/T is the inverse temperature (we set ħ = k_B = 1) and the chemical potential µ fixes the total fermion density. The s-wave contact interactions are characterized by the coupling constant g < 0; the ultraviolet divergence of the contact-interaction model is removed by replacing g by the s-wave scattering length a through the renormalization relation [38]
1/g = m/(4πa) − ∫ d³k/(2π)³ m/k².  (2)
The further treatment is based on the effective bosonic pair-field action obtained after the Hubbard-Stratonovich transformation with the pair fields Ψ̄, Ψ and the integration over the fermion fields, as in [14,15,38]. This leads to the effective bosonic action S_eff, depending on the pair field only, where G⁻¹(r, τ) is the inverse Nambu tensor. In the mean-field approximation, the pair field Ψ(r, τ) is replaced by a uniform static order parameter ∆, solution of the mean-field gap equation
−1/g = ∫ d³k/(2π)³ X(E_k)/(2E_k).  (5)
Here, E_k = √(ξ_k² + ∆²) is the energy of the BCS quasiparticles, with ξ_k = k²/2m − µ the free-fermion energy. The temperature dependence comes in via the function X(E_k) = tanh(βE_k/2), related to the Fermi-Dirac occupation number n(E_k) by X(E_k) = 1 − 2n(E_k). Finally, the mean-field critical temperature T_c = 1/β_c is the temperature at which the order parameter ∆ in Eq. (5) vanishes:
−1/g = ∫ d³k/(2π)³ tanh(β_c ξ_k/2)/(2ξ_k).  (7)
The Gaussian pair fluctuation approximation consists in expanding the action (3) to second order about the mean-field solution. The pair field Ψ is represented as the sum of the uniform, time-independent value ∆ and a fluctuation field ϕ,
Ψ(r, τ) = ∆ + ϕ(r, τ),  Ψ̄(r, τ) = ∆ + ϕ̄(r, τ),  (8)
and the fluctuations are taken into account up to second order.
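As a numerical illustration of the renormalized gap equation [Eqs. (2) and (5) combined], the sketch below (our own minimal example in reduced units, not code from the paper; mean field, T = 0, with x = k/√(2mµ)) solves for ∆/µ at unitarity, 1/a = 0, by bisection on a midpoint-rule quadrature:

```python
import math

def inv_kmu_a(delta, xmax=60.0, n=30_000):
    """Reduced T = 0 gap equation: returns 1/(k_mu a), k_mu = sqrt(2 m mu),
    with x = k/k_mu and delta = Delta/mu; midpoint rule plus the -1/x^2 tail."""
    h = xmax / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s += 1.0 - x * x / math.sqrt((x * x - 1.0) ** 2 + delta * delta)
    return (2.0 / math.pi) * (s * h - 1.0 / xmax)

def delta_over_mu_at_unitarity():
    """Bisect 1/(k_mu a) = 0; 1/(k_mu a) grows monotonically with delta here."""
    lo, hi = 0.5, 2.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inv_kmu_a(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

The bisection converges to ∆/µ ≈ 1.16, consistent with the textbook mean-field unitarity values ∆/ε_F ≈ 0.69 and µ/ε_F ≈ 0.59.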
Next, the pair-field action is rewritten in Fourier space with variables (q, iΩ_n), where Ω_n = 2πn/β is a bosonic Matsubara frequency. This gives the quadratic fluctuation action in matrix form, Eq. (9), with the inverse fluctuation propagator M(q, iΩ_n). The collective modes of the system are the eigenmodes of the quadratic action (9). The explicit form of the matrix elements of M, with the coupling constant renormalized according to (2), is given in Eqs. (10) and (11). Note that the "quasiparticle-quasihole" parts of the matrix coefficients, with denominator iΩ_n ± (E_k − E_{k+q}), vanish at T = 0 [where X(E_k) = 1], as can be seen by the change of variable k ↔ −k − q, and at T = T_c, since in this case E_k = ξ_k.
B. Spectrum of the collective modes
The complex energies z_q of the collective excitations can be determined as the complex poles of the fluctuation propagator z ↦ M⁻¹(q, z) or, equivalently, as the complex roots of the determinant of M:
det M(q, z_q) = 0.  (12)
One usually separates z_q into its real and imaginary parts, z_q = ω_q − iΓ_q/2, where ω_q is the mode frequency and Γ_q its damping rate. The straightforward analytic continuation of the matrix coefficients (10) and (11) by the replacement iΩ_n → z has a branch cut along the whole real axis (unlike in the T = 0 case [4], where the branch cut begins at 2∆) due to the denominators z ± (E_k − E_{k+q}). The roots of Eq. (12), even the low-energy ones, can then only be found when the determinant is analytically continued through the branch cut, following the method proposed by Nozières [4,37]. The aim of this paper is to perform this analytic continuation and track the low-energy solutions of (12) in the complex z-plane as functions of interaction strength and temperature.
C. Equation of state
In dimensionless form, the Gaussian fluctuation matrix M, and hence the collective-mode energy z_q, depend on two reduced parameters, ∆/T and ∆/µ, which both depend on temperature. One may want to replace these parameters by more usual quantities such as T/T_c and the interaction strength, measured by the product k_F a of the scattering length a and the Fermi wave vector k_F. This is done in three steps. First, one uses the number equation to express ∆/ε_F (ε_F is the Fermi energy) as a function of ∆/T and ∆/µ. The crudest approximation, which can be used only for a qualitative description of the collective excitations, is the mean-field number equation
n = ∫ d³k/(2π)³ [1 − (ξ_k/E_k) X(E_k)],  (14)
where n is the average density of the gas. Second, one relates k_F a to ∆/T, ∆/µ and ∆/ε_F by combining Eqs. (2) and (5):
m/(4πa) = ∫ d³k/(2π)³ [m/k² − X(E_k)/(2E_k)].  (15)
With these two equations, one can change the parametrization of z_q from (∆/T, ∆/µ) to (k_F a, ∆/ε_F), or equivalently to (k_F a, T/ε_F) using T/ε_F = (∆/ε_F) × (T/∆). Third, there remains to express T_c/ε_F as a function of k_F a using Eqs. (14) and (15) specified at T = T_c, that is, for ∆ = 0: Eq. (14) then yields T_c/ε_F as a function of µ(T_c)/T_c, and Eq. (15) relates this last parameter to k_F a.
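Step one can be sketched numerically. Assuming the standard mean-field number equation n = ∫ d³k/(2π)³ [1 − (ξ_k/E_k) X(E_k)] at T = 0 (where X = 1), and taking the mean-field unitarity gap ratio ∆/µ ≈ 1.162 as an external input, a bisection on µ/ε_F recovers the textbook mean-field values (a minimal sketch, not the authors' code):

```python
import math

def density_ratio(mu, delta, xmax=50.0, n=20_000):
    """RHS of the reduced T = 0 mean-field number equation (equals 1 on shell).
    Units: mu = mu/eps_F, delta = Delta/eps_F, x = k/k_F; midpoint rule plus
    the analytic delta^2/(2 x^2) tail of the integrand."""
    h = xmax / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        xi = x * x - mu
        s += x * x * (1.0 - xi / math.sqrt(xi * xi + delta * delta))
    return 1.5 * (s * h + delta * delta / (2.0 * xmax))

def mu_over_ef_at_unitarity(gap_ratio=1.162):
    """Solve density_ratio = 1 with Delta = gap_ratio * mu held fixed."""
    lo, hi = 0.2, 1.0  # the density grows with mu at fixed Delta/mu
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if density_ratio(mid, gap_ratio * mid) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

This yields µ/ε_F ≈ 0.59 and hence ∆/ε_F ≈ 0.69; T/ε_F then follows from T/ε_F = (∆/ε_F)(T/∆).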
In this process, the mean-field number equation (14) can be replaced by a more accurate one, such as the number equation obtained via renormalization-group theory [39], from Monte Carlo calculations [40,41], or from experimental data [12,42,43]. Here we use more particularly an equation of state that incorporates the Gaussian fluctuations of the order parameter [Eq. (9)] into the number equation, as proposed in Refs. [15,44-46]. This allows us to avoid the aberrant mean-field prediction of a diverging T_c in the BEC regime [46]. A major issue with these equations of state accounting for Gaussian fluctuations is that they lead to artifacts when used with the mean-field gap equation (15) near T_c: they lose the ∆ = O(√(T_c − T)) critical behavior known from Ginzburg-Landau theory and predict an aberrant first-order phase transition. As explained in Appendix A, we solve this issue by rescaling the temperature at which the gap equation is used by the ratio of the mean-field and corrected critical temperatures (a refinement of the idea of Refs. [47,48]). In this way, the zero-temperature equation of state coincides with the "GPF" scheme of Ref. [44], the critical temperature is the one computed by Nozières and Schmitt-Rink [46], and the critical behavior ∆ = O(√(T_c − T)) (which is crucial for our study of the collective modes) is preserved.
Using an improved equation of state does not qualitatively change our results on the collective modes (it is a mere rescaling of the dependence on k_F a and T/T_c) but makes them more quantitative. This strategy is used in our numerical results for collective excitations, particularly in Secs. IV and VI for the spectra of collective modes and in Sec. VII B to compare our results to measurements of the sound velocity.
A. Expansion of the M matrix for phononic energies
In the present treatment, we focus on obtaining an analytic expression for the velocity of the phononic modes in the long-wavelength limit (q → 0). Their eigenenergy is expected to behave as z_q ∼ u_s q, with u_s the complex sound velocity. The sound velocity was calculated at T = 0 [13,15,49], where the quasiparticle-quasihole branch cut vanishes, such that u_s is real and the long-wavelength expansion of the matrix elements M_{j,k}(q, z), j, k = 1, 2, presents no difficulty, i.e., the two-dimensional expansion in powers of q and z can be done successively. Predictions of the limiting behavior at the transition temperature (T → T_c) are also available [5,29] at weak coupling and will be discussed in Sec. V.
For 0 < T < T_c, the point (q = 0, z = 0) is a branch point of det M, and different limiting values when (q, z) → (0, 0) can be obtained depending on the path followed in the (q, z) hyperplane. Therefore, there exists no Taylor expansion valid everywhere in a vicinity of the point (q = 0, z = 0) [14]. An expansion can nonetheless be obtained by assuming that q and z are small yet proportional to each other. Consequently, we set z ≡ uq, where u is a complex number independent of q. An analogous trick was performed in Ref. [50].
In the q → 0 limit, it is more tractable to express the matrix elements (10) and (11) in the modulus-phase basis, where the new matrix elements are obtained by a unitary transformation [14]. The diagonal matrix elements M_{++}(q, z) and M_{−−}(q, z) correspond to the phase and modulus fluctuations, respectively; the nondiagonal matrix elements describe the mixing of modulus and phase fluctuations. The series expansion in powers of q in this basis yields coefficients (dimensionless except for the Jacobian d³k) in which v_k = k/m is the velocity associated with the wave vector k and e_c(v) = mv²/2 is the kinetic energy associated with the velocity v.
B. Reduced dispersion equation
Substituting the series expansions of the matrix elements into the determinant of M̃, we obtain the low-q dispersion equation
W(u_s) = 0,  (28)
where u_s denotes a generic solution. The real part of u_s is readily interpreted as a sound velocity, and the imaginary part gives access to the long-wavelength limit of the inverse quality factor Γ_q/ω_q. As such, the reduced dispersion equation (28) has no root: none on the real axis (u ∈ ℝ), which is entirely spanned by the branch cut caused by the resonant denominators in Eqs. (23)-(25), and none in the lower complex plane (Im u < 0), since otherwise there would also exist an unstable solution in the upper plane [because W(u) = 0 ⟹ W(−u) = 0]. Two distinct strategies can be adopted to overcome this apparent paradox.
(i) One can limit the study to the vicinity of the real axis by setting u = c + i0⁺ with c ∈ ℝ, and study the various responses of the system as functions of c. Although the response functions (defined in the next subsection) have no pole, they may exhibit resonance peaks whose position and width can be fitted to extract the real and imaginary parts of a phenomenological speed of sound. This corresponds to an experiment where the response of the gas is recorded at fixed (and low) q as a function of ω, using for example Bragg spectroscopy [12]. The disadvantage of this strategy is that it relies on a delicate choice of fitting function [29] for 1/W(c + i0⁺), in particular in the case (which we will encounter) where the function has more than one peak.
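Strategy (i) can be illustrated with a toy single-pole response: for a pole u = u_r − i·u_i in the lower half-plane, Im[1/(c − u)] is a Lorentzian in c whose maximum sits at Re u and whose half width at half maximum equals |Im u|. A minimal sketch of the peak extraction (our illustration, not the paper's fitting procedure):

```python
def pole_response(c, u_re, u_im):
    # Im[1/(c - u)] for u = u_re - i*u_im, u_im > 0: a Lorentzian in c
    return u_im / ((c - u_re) ** 2 + u_im ** 2)

def peak_and_hwhm(cs, vals):
    """Extract peak position and half width at half maximum from samples."""
    imax = max(range(len(vals)), key=vals.__getitem__)
    half = 0.5 * vals[imax]
    c_right = next(cs[i] for i in range(imax, len(cs)) if vals[i] <= half)
    return cs[imax], c_right - cs[imax]
```

Scanning c over a grid around the peak recovers (Re u, |Im u|) up to the grid spacing; with two nearby poles the single-Lorentzian fit degrades, which is precisely the delicate case mentioned above.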
(ii) One can instead look for true solutions of the dispersion equation (28) in the analytic continuation through the branch cut. Knowledge of the poles of 1/W(u) in the complex plane makes it easy to devise an analytic approximation for the response functions. It also allows for a clear definition of the speed of sound, and therefore for a rigorous study of its temperature dependence, in particular of its critical exponent near T_c.
C. Response functions
The response functions of the pair field in the GPF approximation are the coefficients of the propagator M̃⁻¹ evaluated on the real axis z = ω + i0⁺ [51] (hence without analytic continuation through the branch cut). In the low-q limit, the largest response is in the phase-phase propagator [M̃⁻¹]_{2,2}; we therefore define the phase-phase response χ(c) as a function of the velocity c = ω/q ∈ ℝ.
To account for density excitations, one should supplement the quadratic action (9) by an auxiliary action containing the exciting density fields, which we do in Appendix C. The result is an expression for the retarded density-density Green's function in agreement with Eq. (20) of Ref. [51] (taking the density-density element of the response-function matrix). The density-pairing-field and density-density elements of the fluctuation matrix, M_{±ρ} and M_{ρρ} (and their low-q expansions), are given explicitly in Appendix C. We then define the low-q density-density response function χ_ρ(c); it is related to the long-wavelength density-density response by lim_{q→0} S(q, cq) = χ_ρ(c)/(1 − e^{−βcq}). χ_ρ is composed of two terms with distinct physical origins. The first term, χ_ρ^{(1)}, does not disappear at T_c; above it, it describes the known density response of free fermions. The second contribution, χ_ρ^{(2)}, gathers the terms between curly brackets in (34), which have the determinant of M̃ (the pairing-field fluctuation matrix) in their denominator. It describes the contribution of the pairing field to the density response and is specific to the superfluid phase. Due to the det M̃ in its denominator, this term has the same poles, and thus the same collective modes, as the pair-field response function.
IV. LOW-TEMPERATURE BEHAVIOR
We briefly study the behavior of the speed of sound at zero and low temperature, where the results are overall well established. At T = 0, one has X(E_k) = 1 and X′(E_k) = 0, such that the coefficients (23)-(25) of the (q, z) expansion depend trivially on u (as expected, since the singular "quasiparticle-quasihole" terms vanish). The dispersion equation (28) in this case has one real root u_{s,0}(T = 0) = c_{s,0}(T = 0), which satisfies the hydrodynamic formula mc²_{s,0} = n dµ/dn [30,48] and can thus be unambiguously identified as the first sound of two-fluid hydrodynamics. At low but nonzero temperatures (T ≪ ∆, T ≪ T_c), the root acquires an imaginary part exponentially small in temperature, Im u_{s,0}(T) ∝ e^{−∆′/T}, with an activation energy ∆′ strictly larger than ∆ [28]. This is because the fermionic quasiparticles of energy ∆ have zero group velocity and thus cannot contribute to the damping. Our results for this imaginary part are in agreement with Refs. [17,28] and with Landau phonon-roton theory [52]. The collective mode also acquires a velocity shift δc_{s,0}(T) = Re u_{s,0}(T) − c_{s,0}(0). In the weak-coupling BCS limit, we agree with Kulik et al. [28], who predicted an exponentially small increase of the velocity. As shown in Fig. 1, we find that after this exponential increase the velocity passes through a shallow maximum and then decreases. This behavior is reminiscent of what Ref. [23] obtained with a low-energy effective theory. In the BEC regime, on the contrary, we find that the velocity shift is always negative.
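The hydrodynamic formula mc²_{s,0} = n dµ/dn can be made concrete at unitarity, where scale invariance gives µ = ξ ε_F with ε_F ∝ n^{2/3}, so that n dµ/dn = (2/3)µ and c_{s,0}/v_F = √(ξ/3). A one-line sketch (ξ ≈ 0.376 is the measured Bertsch parameter, an assumed input rather than a number from this paper):

```python
import math

def sound_speed_over_vf(xi):
    # m c^2 = n dmu/dn = (2/3) * xi * eps_F  =>  c / v_F = sqrt(xi / 3)
    return math.sqrt(xi / 3.0)
```

With ξ ≈ 0.376 this gives c_{s,0}/v_F ≈ 0.35, in line with measured sound speeds in the unitary Fermi gas.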
V. BEHAVIOR NEAR THE CRITICAL TEMPERATURE
In contrast with the low-temperature regime, the behavior of the collective branches near T_c remains a controversial problem. The available predictions neatly contradict each other: Popov and Andrianov [5] find a purely imaginary dispersion relation, which indicates that u_s(T) has a critical exponent of 1/2, that is,
u_s ∝ (T_c − T)^{1/2}.  (39)
In contradiction with this result, Ohashi and Takada [29] predict a real speed of sound with a critical exponent of 1/6, that is, u_s ∝ (T_c − T)^{1/6}. These two studies are limited to the weak-coupling regime 1/k_F a → −∞.
More recent studies dealing with the strong-coupling regime [50] confirmed the vanishing of the speed of sound at T_c (irrespective of the interaction regime) but did not predict its critical exponent. Using our dispersion equation (28), we are in a good position to resolve this controversy.
Using the mean-field equation of state (or the "scaled GPF" scheme described in Appendix A, which preserves this limiting behavior), the limit T → T_c implies that ∆, and hence the small parameter ε ∝ ∆/T_c, vanishes as (T_c − T)^{1/2}. Neglecting terms of order ε², we thus take the limit ε → 0 with µ/T fixed to m_c ≡ µ(T_c)/T_c. Note that m_c is related to k_F a by an equation of state at T_c, as explained in Sec. II C. This relation is of course different for, e.g., the mean-field and scaled GPF equations of state.
A. Regimes with µ > 0
When µ(T_c) > 0 (that is, for 1/k_F a < 0.68 with the mean-field equation of state), the m_{σσ′} coefficients take simple limiting forms as ε → 0. Since µ is the most convenient energy scale near T_c, we have redimensionalized the speed of sound, ǔ² = mu²/2µ, and the integrals accordingly, m_{σσ′} = ρ(µ)∆ m̃_{σσ′}/2, where ρ(µ) = √(2m³µ)/π² is the density of states at energy µ (setting the volume of the gas equal to unity). We also introduced the functions F and G, and we recall that arccsc(z) = −i ln(√(1 − 1/z²) + i/z). The functions f, g and h of m_c are defined in Appendix B, where the derivation is detailed. The dispersion equation (28) on ǔ should then be solved in the lower-half complex plane after analytic continuation of the functions F and G. With the analytic formulas, Eqs. (45) and (46), this is simply done by the replacements √(1 − 1/ǔ²) → −√(1 − 1/ǔ²) and arccsc(ǔ) → π − arccsc(ǔ). Remarkably, we find that the analytically continued equation has in fact two solutions. The first one (shown in Fig. 2 as a function of m_c or 1/k_F a) has a nonzero limit when ε → 0; it is given by a transcendental equation. The second solution, u_{s2}, behaves as ε ∝ (T_c − T)^{1/2}, which confirms the critical exponent of 1/2 predicted by Andrianov and Popov. Setting ǔ_{s2} = ε ū_{s2} and simplifying the dispersion equation, one obtains an equation for ū_{s2}.
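The continuation rule quoted above can be checked numerically on the square-root factor alone: with the principal branch, √(1 − 1/ǔ²) is discontinuous across the segment 0 < ǔ < 1 of the real axis, and flipping the sign of the root continues the upper-half-plane function smoothly into the lower half-plane (a minimal sketch, our illustration):

```python
import cmath

def f_upper(u):
    # principal branch of sqrt(1 - 1/u^2), analytic in the upper half-plane
    return cmath.sqrt(1.0 - 1.0 / (u * u))

def f_continued(u):
    # continuation through the cut on (0, 1): flip the sign of the square root
    # (the analogous rule for arccsc is arccsc(u) -> pi - arccsc(u))
    return -cmath.sqrt(1.0 - 1.0 / (u * u))
```

Just above the cut, f_upper(0.5 + i0⁺) ≈ i√3; the principal branch jumps to −i√3 just below, while f_continued matches i√3 there, as required of an analytic continuation.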
Thus ū_s2 still depends logarithmically on ε. This dependence can in turn be expanded at temperatures extremely close to T_c, that is, for |ln ε| ≫ 1. The first two terms of this expansion are purely imaginary, while the term of order O(ε/ln²ε) has a nonzero real part. The quality factor Re u_s2/(2 Im u_s2) thus vanishes near T_c as γ/(2 ln²ε), where the coefficient γ is expressed with the short-hand notation (47), such that the dispersion equation becomes m̌_{++} m̌_{−−} = 0; u_s1 and u_s2 solve m̌_{++} = 0 (while m̌_{−−} = 0 gives the pair-breaking Popov-Andrianov "Higgs" mode [4,5]). Using the limiting value f(+∞) = 7ζ(3)/12π², we get an expression of u_s2 that agrees with the Andrianov-Popov result, Eq. (39). Conversely, u_s1 has a finite nonzero limit. The existence of two solutions to the speed-of-sound equation, and thus of two phononic branches, is surprising, but it is not an artifact of our analytic continuation scheme. It is confirmed by looking at the response function χ(c), which is a physical observable. Expressions (42)-(44) can be used to express the response function near T_c, with the redimensionalization χ̌ = χ × ρ(µ)Δ/2. In Fig. 3, we show this response function in the far BCS regime 1/k_F a = −2 (corresponding to µ(T_c)/T_c ≃ 37.73). The second root, whose quality factor diverges when T → T_c, translates into a sharp resonance peak whose center tends to c = 0 and whose width vanishes at T_c. The first root, which conversely has a finite quality factor, does not lead to the appearance of a second peak at temperatures close to T_c (we shall see that it does at lower temperatures); it is nevertheless observable in the form of a broad upper shoulder that extends to higher c.

Fig. 3 caption: Far from T_c, the two roots ǔ_s1 and ǔ_s2 of the dispersion equation have comparable imaginary parts (and comparable residues), which results in a response function with a double-bump structure (blue curve). As the temperature is reduced, ǔ_s2, whose real and imaginary parts tend to 0 like ε, dominates, resulting in a large resonance peak near c = 0 (black curve); the contribution of ǔ_s1 still leads to a shoulder at larger c.
In the BEC regime (µ(T_c) < 0), we obtain expansions of the m_{σσ'} integrals where the redimensionalization is the same as in the BCS regime, with µ replaced by |µ|. The functions α₁, α₂, β and γ of m_c are defined in Appendix B, and the function B is given by an integral. We introduce the function C(ǔ, m_c) for the sake of completeness, but it is not needed to derive the speed of sound to leading order. The dispersion equation (28) in the BEC regime near the transition temperature then takes a closed form. The analytic continuation of this equation is only slightly more difficult than in the BCS regime; replacing in Eq. (58) arctanh(z) by iπ + arctanh(z) for Re z > 1, we obtain the analytic continuation B↓ of B, whose added term is a pure imaginary number. The analytically continued equation (59) admits a single complex root ǔ_{s,B}, which tends to 0 when ε → 0. Up to order ε², we can then neglect the terms controlled by α₂ and C in (59). Contrary to the BCS regime, there is here no remaining logarithmic dependence of ǔ_{s,B}/ε. Moreover, the quality factor Re ǔ_{s,B}/(2 Im ǔ_{s,B}), instead of vanishing logarithmically, now diverges like 1/ε. Finally, in the BEC limit (m_c → −∞ or 1/k_F a → +∞), we use the asymptotic equivalents of these functions to show that the quality factor of the branch diverges exponentially with |m_c|. The damping of the collective modes by the unpaired fermions becomes less efficient when the pairs form a weakly interacting condensate of dimers. Note that our results may be less meaningful in the BEC limit, where one expects purely bosonic effects not described by GPF, such as phonon-phonon couplings, to play a major role. It is known, for example, that important corrections to T_c arise when taking into account the condensate depletion due to the bosonic branch [38].
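The continuation rule for arctanh can likewise be checked numerically: across the branch cut of the principal arctanh on the real axis at Re z > 1, the function jumps by iπ, so iπ + arctanh(z) evaluated just below the cut matches arctanh evaluated just above it. A minimal check (the full function B and its integrand are not reproduced here):

```python
import cmath, math

def atanh_cont(z):
    """Analytic continuation of arctanh from the upper to the lower
    half-plane through its branch cut at Re z > 1, as used to build B↓."""
    return 1j * math.pi + cmath.atanh(z)

x, eps = 2.0, 1e-9
above = cmath.atanh(x + 1j * eps)   # principal branch, just above the cut
below = atanh_cont(x - 1j * eps)    # continued branch, just below the cut
assert abs(above - below) < 1e-6
```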
A. Numerical method for the analytic continuation
When the temperature is neither close to 0 nor to T_c, it is impossible to express the dispersion equation with simple analytic formulas such as (47) or (59), and thus to perform the analytic continuation based on the analytic properties of elementary functions. We therefore develop a numerical method, based on the procedure of Nozières [4,37], which performs the analytic continuation directly from the integral expressions Eqs. (23)-(25).
a. Spectral functions Quite generally, we consider a function F of the complex variable u having a branch cut on the real axis for u = c ∈ R, and introduce the associated spectral function ρ_F(c) = [F(c + i0⁺) − F(c − i0⁺)]/2iπ. The spectral function ρ_F(c) is in general analytic on the real axis except at most at a finite number of points. It can thus be analytically continued from any chosen interval between these points to the lower complex half-plane. The analytic continuation F^(I)(u) of F(u) from the upper to the lower complex half-plane, through the interval I ⊂ R where ρ_F is analytic, then reads F^(I)(u) = F(u) + 2iπ ρ_F^(I)(u), where u → ρ_F^(I)(u) is the analytic continuation of ρ_F(c) from the interval I to the lower complex half-plane. To perform the analytic continuation of the functions m_{σσ'}, we compute their spectral functions and study their singularities on the real axis. After the angular integration over θ in Eqs. (23)-(25), we get expressions with the short-hand notations X = X(E_k) and X′ = X′(E_k). In these expressions, the only contribution to the spectral functions comes from the logarithms, which have a discontinuity ln(x + i0⁺) − ln(x − i0⁺) = 2iπ for Re x < 0.
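The continuation recipe can be illustrated on a model function (the actual m_{σσ'} integrals are not reproduced; the example below and the sign convention ρ_F(c) = [F(c+i0⁺) − F(c−i0⁺)]/2iπ are ours). Take F(u) = ∫₀¹ dc/(u − c) = ln[u/(u − 1)], whose spectral function equals −1 on the cut (0, 1); the continuation through that interval is then F↓(u) = F(u) + 2iπ ρ_F↓(u):

```python
import cmath, math

def F(u):
    """Model function with a branch cut on (0, 1): F(u) = ∫_0^1 dc/(u - c)."""
    return cmath.log(u / (u - 1))

def rho_F(u):
    """Spectral function ρ_F(c) = [F(c+i0) - F(c-i0)]/(2iπ) = -1 on (0, 1),
    trivially continued (it is constant) to the complex plane."""
    return -1.0

def F_cont(u):
    """Continuation of F through (0, 1): F↓(u) = F(u) + 2iπ ρ_F(u)."""
    return F(u) + 2j * math.pi * rho_F(u)

# F↓ just below the cut matches F just above it:
c, eps = 0.3, 1e-9
assert abs(F(c + 1j * eps) - F_cont(c - 1j * eps)) < 1e-6
```

For the m_{σσ'} the same formula applies, with the nontrivial step being the continuation of ρ_{σσ'} itself.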
We then obtain generically an expression of the form (70), where p = 1 for ρ_{−−} and ρ_{+−}, and p = 3 for ρ_{++}. Note that the integrands f_{σσ'} (whose exact expressions follow immediately from Eqs. (67)-(69)) are independent of c, such that the only dependence on c (besides the trivial prefactor) is through the integration intervals I_±(c). The idea of our numerical method is to compute analytically the boundaries of those intervals, which we then analytically continue to the complex plane, yielding the continuations of the spectral functions ρ_{σσ'}(u), u ∈ C. b. Resonance intervals The intervals I_±(c) are defined as the set of wave numbers k where the argument of the logarithms in Eqs. (67)-(69) has a negative real part, which leads to the condition (71). Here, c_g(k) = ∂E_k/∂k = kξ_k/(mE_k) is the group velocity of the BCS fermionic excitations. This velocity is positive for k > √(2mµ) and negative for 0 < k < √(2mµ); it is represented in absolute value in Fig. 4. In Refs. [17,25], condition (71) was derived as the low-q version of the resonance condition ω_q = E_{k+q} − E_k after angular integration. In Ref. [17] it was further interpreted as a Landau criterion, considering an unpaired fermion as an impurity moving through the superfluid. Since c_g(k) → ∞ when k → ∞, the inequality c < c_g(k) is always fulfilled for large enough k > √(2mµ). The interval I_+(c) is then of the form [k₃(c), +∞[. As visible in Fig. 4, the inequality c < −c_g(k) can also be fulfilled at lower k (0 < k < √(2mµ)) provided that c is small enough, that is, smaller than the boundary velocity c_b = |min_k c_g(k)|, the absolute value of the minimum of the group velocity (in other words, the largest slope of the BCS branch k → E_k in its decreasing part). The boundary sound velocity c_b decreases when moving from the BCS to the BEC regime and vanishes when µ = 0, that is, when the decreasing part of the BCS branch disappears.
At fixed scattering length, c_b rises with increasing temperature because the chemical potential µ(T) rises. When the condition c < c_b is fulfilled, the interval I_−(c) exists and is of the form [k₁(c), k₂(c)]. Since c_g(√(2mµ)) = 0, when the two momentum ranges exist they are disjoint (k₃(c) > k₂(c)). The boundary functions k_j(c), j = 1, 2, 3, when they exist, are the real positive roots of the polynomial equation (73). When c → c_b from below, the integral over I_− in (70) tends to 0, but its derivative can remain finite, which results in an angular point of the spectral function ρ_{σσ'} at c = c_b. Physically, this angular point corresponds to the opening or closing of a decay channel in the decreasing part of the BCS branch, at k < √(2mµ). This angular point becomes a branch point in the analytic continuation.
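The group velocity and the boundary velocity c_b can be evaluated directly from the BCS dispersion. The sketch below (units ℏ = 1; the sample values m = 1, µ = 1, Δ = 0.5 are our choice) computes c_g(k) = kξ_k/(mE_k) with ξ_k = k²/2m − µ and E_k = √(ξ_k² + Δ²), checks that it vanishes at the minimum of the branch k = √(2mµ), and extracts c_b as the absolute value of the minimum of c_g over the decreasing part:

```python
import math

m, mu, Delta = 1.0, 1.0, 0.5   # sample parameters (hbar = 1)

def c_g(k):
    """Group velocity of the BCS branch E_k = sqrt(xi_k^2 + Delta^2)."""
    xi = k**2 / (2 * m) - mu
    E = math.hypot(xi, Delta)
    return k * xi / (m * E)

k0 = math.sqrt(2 * m * mu)          # minimum of the BCS branch: c_g(k0) = 0
assert abs(c_g(k0)) < 1e-12

# Boundary velocity: |min of c_g| on the decreasing part 0 < k < sqrt(2 m mu)
ks = [k0 * i / 10_000 for i in range(1, 10_000)]
c_b = abs(min(c_g(k) for k in ks))
assert c_b > 0
```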
c. Choices for the analytic continuation The spectral functions ρ_{σσ'}(c) are analytic separately in the intervals A = [0, c_b[ and B = ]c_b, +∞[. There are therefore two possible ways to continue them to Im(u) < 0, where the k_j(u), j = 1, 2, 3, are the analytic continuations of the real solutions of (73). Numerically, these continuations are obtained by an adiabatic follow-up of the roots of (73) in the complex plane. Note that k₁ and k₂ can be continued to the entire half-plane Im u < 0 even though they are real only on the interval [0, c_b] of the real axis.
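The adiabatic follow-up of the roots can be sketched generically. Since the actual polynomial (73) is not reproduced here, the toy cubic below (our choice) stands in for it: a root that is real for real c is tracked by Newton's method as c moves step by step into the lower half-plane, with each step seeded by the previous root, so that the continuation never jumps to another root.

```python
# Adiabatic continuation of a root of p(k; c) = k^3 - 3k + c as the
# parameter c moves from the real axis into the lower half-plane.

def p(k, c):
    return k**3 - 3 * k + c

def dp(k):
    return 3 * k**2 - 3

def newton(k, c, steps=50):
    for _ in range(steps):
        k -= p(k, c) / dp(k)
    return k

# Start from a real root at real c = 1, then follow it as c -> 1 - 0.5i.
c_path = [1.0 - 0.5j * t / 100 for t in range(101)]
k = newton(0.35, c_path[0])      # seed near the real root ~0.347
for c in c_path:
    k = newton(k, c)

assert abs(p(k, c_path[-1])) < 1e-10   # still a root at the final complex c
assert abs(k.imag) > 1e-3              # the root has moved off the real axis
```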
B. Results and discussion
Using our "complex boundaries" numerical method to perform the analytic continuation, we study the solutions of the dispersion equation in the whole range [0, T c ]. The existence of two roots near T c is confirmed by our numerical study, a finding that does not depend on the choice of "window" A or B for the analytic continuation. In order to make the results quantitatively relevant for comparison with experiments, the sound velocity and damping are calculated here using the "scaled GPF" equation of state described in Appendix A.
a. BCS regime and around unitarity In the deep BCS regime, as shown in Fig. 6, the Anderson-Bogoliubov first-sound velocity c_{s,0} found at zero temperature evolves into the first root u_{s,1} that we found near T_c. Both its real and imaginary parts, c_{s,1} and κ_{s,1}, are monotonically increasing functions of temperature. The second solution u_{s,2} appears only above a threshold temperature T_th, which tends to T_c in the BCS limit. Its real part c_{s,2} is zero at T_th and at T_c, while its imaginary part κ_{s,2} monotonically decreases with temperature. There is thus a regime in the range [T_th, T_c] where the two solutions are both well separated in frequency and comparable in damping. As visible in Fig. 7, the response function χ exhibits in this regime two distinguishable maxima (not just a peak with a shoulder as in Fig. 3) corresponding to the two roots of the analytic continuation. This unexpected finding is one of our key results; it validates the existence of two speeds of sound, and thus of two collective modes, in the GPF theory. At 1/k_F a = 1/k_F a_cross ≃ 0.155 (corresponding, with the GPF equation of state, to µ(T_c)/T_c ≃ 1.376, hence still in the non-BEC regime of Sec. V), an exact crossing of the two roots occurs at a given temperature: u_{s1}(T_cross) = u_{s2}(T_cross). Then, for 1/k_F a > 1/k_F a_cross, the situation changes: the zero-temperature solution c_{s,0} evolves into u_{s,2}, while u_{s,1} appears only above the threshold temperature T_th. As illustrated in Fig. 8, this behavior is reminiscent of that of two repulsive particles in 2D, with temperature playing the role of time. The repulsion ensures that the trajectories never cross: if the x-coordinates (here Re u) cross, then the y-coordinates (here Im u) anticross, and vice versa. In this analogy, the particular case a = a_cross corresponds to the infinite-energy case where the two particles exactly meet.

b. BEC regime In the BEC regime, represented in Fig. 9, the T = 0 solution c_{s,0} always evolves into the solution u_{s,B} that we found near T_c. Its real part c_{s,B} decreases monotonically with temperature, while its imaginary part κ_{s,B} vanishes at both 0 and T_c and goes through a maximum in between. The height of this maximum tends to zero in the BEC limit (1/k_F a → +∞), such that κ_{s,B}(T) tends uniformly to zero in this limit. This is consistent with what we found in the vicinity of T_c (Eq. (64)) and indicates that the damping mechanism we study (absorption-emission of collective excitations by fermionic quasiparticles) becomes less relevant in the BEC limit, where the condensed pairs interact weakly with the unpaired fermions. As visible in Fig. 9, a second solution still exists in the BEC regime, but it is always strongly damped, such that it does not contribute to the response function, which never displays the two-peak behavior we described in the BCS regime.
c. Visibility of the phase collective modes in the density response The two phase collective modes we have found are also visible in the density-density response function, as shown in Figs. 7(b) and 10. At low temperature (blue curve in Fig. 7(b)), the sole feature of χ_ρ(c) (which is uniformly dominated by the contribution χ_ρ^(2) of the pairing field) is the Anderson-Bogoliubov resonance. When the temperature rises, the Anderson-Bogoliubov resonance broadens and two other phenomena become visible. First, a broad incoherent peak due to the normal component χ_ρ^(1) appears at velocities c of order v_F. This peak is not due to a collective mode (it does not have a Lorentzian shape) but simply to the density response of a normal Fermi gas, which becomes increasingly dominant near T_c. At the same time, a resonance due to the u_{s,2} pole of the pairing-field propagator forms at low velocities and becomes increasingly sharp when T → T_c. The spectral weight of this resonance grows with increasing interaction strength (compare the inset of Fig. 7(b) in the BCS limit to Fig. 10(c) at strong coupling).
Note that in the density response we can clearly see the boundary velocity c b discussed above, which is related to the opening of a decay channel in the decreasing part of the BCS branch of excitations.
d. Influence of the choice of the analytic continuation So far we have not discussed the physical consequences of having two windows A (0 ≤ c ≤ c_b) and B (c ≥ c_b) for the analytic continuation. For this, we go back to the physical observables, which are the response functions χ and χ_ρ. Both of them have an angular point at c_b; this is an observable feature, not an artifact of the approximation we have used, nor of the collisionless regime. In fact, this angular point follows directly from energy conservation, and is caused by the non-monotonic nature of the quasiparticle spectrum, which ensures that the low- and high-k modes are separated by a point of zero group velocity: the minimum of the BCS branch. This angular point is thus, in a sense, a signature of the superfluid phase. The meaning of the two windows A and B is then physically clear: window A is appropriate to reproduce the low-velocity (c < c_b) part of the response functions, and window B the high-velocity part (c > c_b).
When c_b is far from the interesting features of the response function, that is, from the resonance peaks centered around c_{s,1} and c_{s,2}, then only one restriction of χ, and thus only one analytic continuation, is worth studying. This is the case, for example, in the BCS limit: c_b tends to v_F, which is well above both c_{s,1} and c_{s,2}. The "window" A (where decay to quasiparticles of wave number k < √(2mµ) is allowed) is then the only choice. This reflects the fact that the BCS branch has a large decreasing part in this limit. Similarly, in the BEC regime one has c_b = 0, so that only the "window" B is available for the analytic continuation.
On the contrary, when c_{s1} or c_{s2} cross c_b at a given temperature (which occurs with the scaled GPF equation of state for −0.594 ≲ 1/k_F a ≲ 0.679; in Fig. 11 we show the example of the unitary limit 1/|a| = 0), the angular point at c = c_b goes through the peak of χ as temperature varies, as illustrated in Fig. 12. Then, the roots found in window A of the analytic continuation describe the left part of this broken peak, and those of window B its right part. In practice, when they are close to c_b, the difference between the sound velocities c in the two windows is small with respect to their imaginary parts κ_s, as can be seen from Fig. 11. Physically, since the damping factor is a measure of the uncertainty on the sound velocity (following from the uncertainty relation between time and energy), this means that the difference in velocity is almost indistinguishable; in other words, the discontinuity in the slope of the resonance peak can only be resolved through a very precise measurement of the response function.
e. Analytic approximation for the response function From the poles u_{s,1} and u_{s,2} found in the analytic continuation, and their residues Z₁ and Z₂ in the phase-phase propagator Im m_{−−}/πW, one can construct an effective response function in the BCS regime, Eq. (76), which is the sum of the two resonance peaks caused by u_{s,1} and u_{s,2} in each window A and B. Note that since the residues Z₁ and Z₂ are complex, this is not simply the sum of two Lorentzian functions. Conversely, in the BEC regime, our effective response function has only one resonance. These functions can be compared with the exact response function χ to check the relevance of the analytic structure found in the analytic continuation. They allow one to interpret the shape of χ in terms of resonances caused by collective modes. They can also be used as fitting functions for experimentalists to extract the values of u_{s,1}, u_{s,2} or u_{s,B} and their residues from a measured response spectrum.
In the low-temperature case (see the example of T = 0.4T_c in inset (a) of Fig. 12), the residue of the only relevant complex root tends to a real number, such that we expect the response function χ to have an approximately Lorentzian shape. This is indeed what we observe in Fig. 12, with very good agreement between χ and χ_eff. When raising the temperature, away from the BCS regime one does not immediately observe the formation of a second peak (see the examples of T = 0.87T_c and T = 0.95T_c in Fig. 12), but rather a shift in the position of the original peak and an increase of its width and skewness. To describe the altered peak, we introduce an effective sound velocity u_{s,eff} = c_{s,eff} − iκ_{s,eff}, where c_{s,eff} is the value of c at which χ_eff reaches its maximum, and κ_{s,eff} is its half width at half maximum⁴. These quantities are useful only close to T = 0 and T_c, where one root is much less damped than the other. In the intermediate temperature regime where the two roots have comparable damping rates, the response function is not well fitted by a single Lorentzian, and one should revert to the superposition introduced in (76). This is particularly the case in the far BCS regime, where the response function exhibits two distinct peaks in a temperature range close to but excluding T_c (see the example of T = 0.95T_c in Fig. 7). As we said above, at and around unitarity the angular point at c_b goes through the resonance peak as temperature varies. This results in a visibly broken peak in χ, which is again well captured by our two-pole analytic approximation χ_eff, provided one switches the interval of analytic continuation when crossing c_b, as prescribed by Eq. (76). When the argument c = ω/q of the response function passes the boundary velocity c_b, χ(c) exhibits an angular point and its two-pole analytic approximation χ_eff(c) exhibits a discontinuity.

⁴ The effective sound velocity c_{s,eff} is given analytically by the condition that χ_eff is maximal. The effective damping factor is the half width of χ_eff at half height, where c_hw < c_{s,eff} and c_hw > c_{s,eff} are the two roots of the half-height equation. Naturally, these definitions are valid only when χ_eff shows a single maximum.
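The extraction of c_{s,eff} and κ_{s,eff} from a pole expansion can be sketched as follows. The pole position and residue below are illustrative numbers of our choosing, not fitted values, and we assume a simple pole sum χ_eff(c) ∝ −Im Σ_j Z_j/(c − u_j); for a single pole with real residue, the peak position and half width at half maximum reduce to Re u and |Im u|, which serves as a sanity check:

```python
import math

def chi_eff(c, poles):
    """Pole-sum effective response: -(1/pi) Im sum_j Z_j / (c - u_j)."""
    return -sum(Z / (c - u) for u, Z in poles).imag / math.pi

def peak_and_hwhm(poles, cmin=0.0, cmax=5.0, n=50_000):
    """c_s,eff = position of the maximum; kappa_s,eff = half width at half max."""
    cs = [cmin + (cmax - cmin) * i / n for i in range(1, n)]
    vals = [chi_eff(c, poles) for c in cs]
    imax = max(range(len(vals)), key=vals.__getitem__)
    half = vals[imax] / 2
    lo = next(cs[i] for i in range(imax, -1, -1) if vals[i] < half)
    hi = next(cs[i] for i in range(imax, len(vals)) if vals[i] < half)
    return cs[imax], (hi - lo) / 2

# Single pole at u = 1 - 0.1i with real residue Z = 1: a pure Lorentzian,
# so the peak sits at Re u = 1 and the half width equals |Im u| = 0.1.
c_peak, hwhm = peak_and_hwhm([(1 - 0.1j, 1.0)])
assert abs(c_peak - 1.0) < 1e-3
assert abs(hwhm - 0.1) < 1e-3
```

With two poles carrying complex residues, the same routine returns the shifted, skewed effective peak described in the text rather than the bare pole parameters.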
VII. LINKS TO OTHER THEORIES AND TO EXPERIMENTS
A. Comparison to low-temperature approaches

In Fig. 13, we plot the inverse quality factor 2κ_{s,eff}/c_{s,eff} of the phononic modes as a function of the temperature at unitarity, where we use the scaled GPF equation of state. In this regime, our result can be compared to several other approaches (which all assumed the existence of a unique phononic mode, hence our use of the effective velocity). (i) A prediction based on Landau phonon-roton theory (which is exact if the roton branch is known exactly, see Eqs. (15)-(16) of [52]; it is recalculated here using the BCS branch as the roton branch and the GPF equation of state) exactly agrees with our asymptotic results when T → 0. (ii) The superfluid local density approximation (SLDA) [27], an approach which exploits the universal behavior of the gas at unitarity, also predicts a quality factor due to the coupling to the fermionic quasiparticle-quasihole continuum in good quantitative agreement with ours, except in the low-temperature range, where the damping rate obtained in [27] appears unphysical as it does not tend to zero when T → 0.

Fig. 13 caption (partial): recalculated using the GPF parameters of Eqs. (78) and (79). Dashed curve: the quality factor is extracted directly from the long-wavelength response function χ(c). In both cases we use the equation of state obtained within the GPF approximation [44] instead of the mean-field one. Dotted curve: the RPA low-temperature asymptotic behavior according to [17], recalculated using the GPF equation of state. Dotted curve: the SLDA result of Ref. [27]. Inset: the same in a lower temperature range, on a logarithmic scale.
B. Comparison to measurements of the sound velocity
In Fig. 14, the nonzero-temperature effective sound velocity c_{s,eff} as a function of 1/k_F a calculated within the present approach is compared with the experimental data of Ref. [12] (squares), using different equations of state. Since the experimental values of the speed of sound were obtained using a single Gaussian fit of the response function, it is natural to compare them to our effective sound velocity (which combines the information about the two resonances into a single velocity).
The temperatures throughout the BCS-BEC crossover are determined by a quadratic interpolation of the experimental values reported in Ref. [12]: k_B T = 0.09 E_F at unitarity, k_B T = 0.02 E_F at 1/k_F a = −1.6, and k_B T = 0.1 E_F at 1/k_F a = 1 (such that T/T_c is about 1/2 in all three cases). The sound velocity has been calculated here using our results for c_s(Δ/µ, Δ/T) and the mean-field gap equation, with the chemical potential obtained by three methods: (1) from Table 3 of the Supplement to Ref. [12]; (2) from the number equation accounting for Gaussian pair fluctuations within the NSR scheme for the superfluid state below T_c [14]; and (3) from the GPF approach of Refs. [15,44,45] (almost equivalent to the scaled GPF equation of state of Appendix A, since the temperature is lower than T_c here). As can be seen from Fig. 14, an excellent agreement with the experiment is obtained when we use the equation-of-state parameters from Ref. [12].

Fig. 14 caption (partial): parameters from Ref. [12] (empty dots), calculated accounting for Gaussian fluctuations within the NSR scheme for the superfluid (broken-symmetry) state [14] (dashed curve), and within the GPF scheme of Refs. [15,44,45] (solid curve). The calculated sound velocities are compared with the experimental data of Ref. [12] (squares).
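The quadratic interpolation of the experimental temperatures can be reproduced directly as a Lagrange polynomial through the three quoted points (the function name is ours):

```python
def kT_over_EF(x):
    """Quadratic (Lagrange) interpolation of k_B T / E_F vs x = 1/(k_F a),
    through the three values quoted from Ref. [12]."""
    pts = [(-1.6, 0.02), (0.0, 0.09), (1.0, 0.1)]
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        term = yi
        for j, (xj, _) in enumerate(pts):
            if i != j:
                term *= (x - xj) / (xi - xj)  # Lagrange basis polynomial
        total += term
    return total

# Reproduces the quoted anchor values exactly:
assert abs(kT_over_EF(0.0) - 0.09) < 1e-12
assert abs(kT_over_EF(-1.6) - 0.02) < 1e-12
assert abs(kT_over_EF(1.0) - 0.1) < 1e-12
```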
C. Measuring the phase-phase response
So far, experiments have measured the collective mode spectrum through the density response of the gas. It would be interesting to access also the phase-phase response function, particularly near T_c, where, as we have seen, it has a very different shape from the density-density response. To this end, we explain how one can adapt the Carlson-Goldman experiment [33], which measured the pairing-field susceptibility of a superconductor, to a cold-atom setup. The scheme we propose is illustrated in Fig. 15, and it uses only existing experimental techniques. The excitation is obtained by coupling the system of interest (a superfluid Fermi gas at nonzero temperature, for example at T close to T_c) to an environment consisting of a large superfluid Fermi gas prepared at zero temperature and with a well-defined phase with respect to the system (which can be done by initially performing Josephson oscillations [53]). The two gases are coupled through a tunneling barrier, similar to the thin barrier realized in [53]; to extract information on the spectrum at momentum q, the barrier should be spatially modulated at a wavelength λ = 2π/q, which could be achieved, for instance, by interfering two laser fields. The fact that the reservoir gas is much larger than the studied system ensures that it remains at zero temperature throughout the excitation time, and that its quantum fluctuations can be neglected. It then behaves as a classical pairing field imposed on the system, which can be represented by a drive term in the Hamiltonian. Here, Δ_exc is the order parameter of the reservoir (whose phase has been fixed initially) and J(r, t) = J(t) × cos(qy) is the spatially dependent strength of the barrier; since the system is prepared close to T_c, its healing length is very large, and we can assume the effect of the barrier to be homogeneous in the x-direction [33].
The time-dependence of J can be either sinusoidal J(t) ∝ cos(ωt + φ) if one wishes to probe the response function at a given frequency ω, or it can be more abrupt if one wishes to study the quench-like dynamics of the system (which theoretically is described by the Laplace transform of the frequency-domain response function χ [54]). Finally, the phase of the system is measured by letting the cloud expand and interfere with the reservoir as in [53]. The interference pattern will appear shifted in the x direction by a length δx(y) which depends on the local phase of the system at position y.
VIII. CONCLUSIONS
We have studied the long-wavelength solutions of the RPA/GPF equation for the collective mode energy of a neutral fermionic condensate. To access the full range of temperatures between zero and T_c, we deal non-perturbatively with the damping caused by absorption/emission of BCS "broken-pair" quasiparticles. To do so, we set the energy proportional to the wave vector, z = uq, and analytically continue the equation for u through its branch cut associated with the quasiparticle absorption-emission continuum.
While our results at low temperature agree with previous perturbative approaches in predicting a single collective mode with an exponentially small damping rate and velocity shift, we find an unexpected second solution in the vicinity of the transition temperature T c . This two-mode nature is also visible in the order-parameter phase response function which displays two distinct resonance peaks, at temperatures relatively close to T c , and in the BCS regime. In the limit T → T c , we show analytically that the velocity of the first mode tends to a finite and non-zero complex number, while the damping rate of the second mode vanishes like ∆(T ) (or (T c − T ) 1/2 ), and its quality factor vanishes logarithmically. In the BEC regime, on the contrary, we find only one relevant solution, whose velocity vanishes like (T c − T ) 1/2 near T c with a diverging quality factor.
At arbitrary temperatures 0 < T < T c , we develop a numerical method to perform the analytic continuation of the GPF equation. This confirms the existence of two distinct phononic branches, one being dominant near T = 0, the other near T c . The transition between these two resonances is visible in both the phase-phase and density-density response functions. Last, our knowledge of the two poles in the analytic continuation, and of their residues, allows us to propose an analytic function to describe the phase response in terms of two collective resonances.
The present study not only resolves some problems but also raises new questions, particularly about the existence, outside the collisionless regime, of the transition we have seen between two distinct collective modes. This transition undoubtedly exists in the GPF approximation, but a more systematic treatment should account for the finite lifetime of the fermionic quasiparticles [60]. In any case, our work will be heuristically useful for further developments of the theory of collective excitations in superfluid Fermi gases.
with the saddle-point and fluctuation contributions, where the matrix elements of the inverse Gaussian pair fluctuation propagator M(q, iΩ_n) are described above.
Within the Nozières-Schmitt-Rink (NSR) scheme [38], extended to the superfluid state below T_c in Ref. [14] (see also [55,56]), the particle density is determined considering Δ as an independent parameter. The NSR scheme has been modified [15,44] to take into account the variation of the gap. This approximation, referred to as the GPF (Gaussian pair fluctuation) approximation, provides a temperature dependence of the chemical potential in good agreement with quantum Monte Carlo results [40,41].
with the Fermi energies E_F and E_F^(sp) calculated, respectively, with and without accounting for fluctuations. These scaling relations precisely reproduce the GPF or NSR schemes [depending on the choice for n, (A3) or (A4)]. Close to the transition temperature, both the NSR and GPF schemes reveal an artifact: a discontinuous change of the gap from a finite value to zero at T_c. In order to overcome this issue and study sound velocities in a superfluid Fermi gas for all T < T_c, several interpolation schemes were considered in Refs. [47,48]. In the present work, we use a slightly different interpolation scheme. The chemical potential calculated within the GPF approach [44,45] shows excellent agreement with the Monte Carlo results for T < T_c [41], where the transition temperature T_c is determined accounting for fluctuations and is the same within the GPF and NSR schemes [45]. Moreover, both the chemical potential and the gap calculated within GPF at T = 0 are in good agreement with these Monte Carlo calculations. Therefore we keep the relations (A6) and (A7) unchanged, so that the chemical potential remains the same as within GPF, and replace (A5) by the equation (A9), in which the temperature T_sp entering the dependence of Δ_sp is rescaled as T′_sp ≡ (T_c^(sp)/T_c) T. According to (A9), the gap takes the value Δ = Δ_GPF at T = 0 and tends to zero as Δ ∝ √(T_c − T) when approaching T_c. This behavior of Δ in the vicinity of T_c is an exact universal condition, independent of the coupling strength. Eq. (A9) is thus a renormalized saddle-point gap equation in which the aforesaid artifacts of the temperature dependence of Δ(T) are removed.
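The effect of the temperature rescaling in (A9) can be illustrated with a toy saddle-point gap (the square-root form of Δ_sp below is our simplification, not the actual solution of the gap equation): evaluating Δ_sp at T′ = (T_c^(sp)/T_c) T makes the rescaled gap close at the fluctuation-corrected T_c instead of the higher mean-field temperature, with the universal Δ ∝ √(T_c − T) behavior near T_c.

```python
import math

# Toy saddle-point gap closing at the (higher) mean-field temperature Tc_sp.
# Only the square-root shape near Tc_sp matters for this illustration.
Tc_sp, Tc, Delta0 = 1.3, 1.0, 0.8   # sample values (our choice)

def delta_sp(T):
    return Delta0 * math.sqrt(max(0.0, 1.0 - T / Tc_sp))

def delta_rescaled(T):
    """Renormalized gap in the spirit of Eq. (A9): evaluate Delta_sp at
    T' = (Tc_sp/Tc) T, so the gap closes at the corrected Tc, not at Tc_sp."""
    return delta_sp(Tc_sp / Tc * T)

assert abs(delta_rescaled(Tc)) < 1e-12       # gap closes exactly at Tc
assert delta_rescaled(0.0) == delta_sp(0.0)  # unchanged at T = 0
# Universal square-root behavior near Tc: Delta ∝ sqrt(Tc - T)
r1 = delta_rescaled(Tc - 1e-4) / math.sqrt(1e-4)
r2 = delta_rescaled(Tc - 4e-4) / math.sqrt(4e-4)
assert abs(r1 - r2) / r1 < 1e-3
```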
The spectral weight function (the dynamic structure factor) is proportional to the imaginary part of G_ρ^R. We use the known correspondence between the Green's function in the Matsubara representation G_ρ(q, iΩ_m) and the retarded two-point Green's function G_ρ^R(q, ω + i0⁺) [see, e.g., [59], Eq. (3.3.11)]. The Green's function in the Matsubara representation G_ρ(q, iΩ_m) is determined using the generating functional in the path-integral representation, with the auxiliary infinitesimal field variable υ(r, τ) corresponding to density fluctuations.
Exploring the impact of the COVID-19 pandemic on syringe services programs in rural Kentucky
Background The coronavirus pandemic (COVID-19) exacerbated risks for adverse health consequences among people who inject drugs by reducing access to sterile injection equipment, HIV testing, and syringe services programs (SSPs). Several decades of research demonstrate the public health benefits of SSP implementation; however, existing evidence primarily reflects studies conducted in metropolitan areas and before the COVID-19 pandemic. Objectives We aim to explore how the COVID-19 pandemic affected SSP operations in rural Kentucky counties. Methods In late 2020, we conducted eighteen in-depth, semi-structured interviews with persons (10 women, 8 men) involved in SSP implementation in rural Kentucky counties. The interview guide broadly explored the barriers and facilitators to SSP implementation in rural communities; participants were also asked to describe how COVID-19 affected SSP operations. Results Participants emphasized the need to continue providing SSP-related services throughout the pandemic. COVID-19 mitigation strategies (e.g., masking, social distancing, pre-packing sterile injection equipment) limited relationship building between staff and clients and, more broadly, the pandemic adversely affected overall program expansion, momentum building, and coalition building. However, participants offered multiple examples of innovative solutions to the myriad of obstacles the pandemic presented. Conclusion The COVID-19 pandemic impacted SSP operations throughout rural Kentucky. Despite challenges, participants reported that providing SSP services remained paramount. Diverse adaptative strategies were employed to ensure continuation of essential SSP services, demonstrating the commitment and ingenuity of program staff. Given that SSPs are essential for preventing adverse injection drug use-associated health consequences, further resources should be invested in SSP operations to ensure service delivery is not negatively affected by co-occurring crises.
access [5] and harm reduction services utilization [6,7]. Overdose fatalities have also increased during the pandemic; according to the Centers for Disease Control and Prevention (CDC), in December 2020, overdose fatalities increased 38.4 percent since the pandemic onset [8].
In response to escalations in overdose, the CDC issued guidance that emphasized the need for essential services to remain accessible for people most at risk of overdose, such as PWID [8].
In many jurisdictions in the USA, syringe services programs (SSPs) remained open during the COVID-19 pandemic to provide several life-sustaining and health protective services to PWID, including access to sterile injection equipment and overdose prevention resources. While SSPs have existed in the USA since the 1980s, the COVID-19 pandemic introduced new obstacles for program implementation given social distancing and other COVID-19 mitigation strategies, impacts on funding, and additional stresses on program operations and operators [6,9]. These emerging challenges were in addition to other pre-existing challenges to SSP implementation, such as inaccurate fears that SSPs may lead to increases in syringe litter, crime, or encourage drug use [10][11][12][13][14][15][16].
Several predominantly rural states launched SSPs following a 2015 HIV/HCV outbreak among PWID in Scott County, Indiana (USA). The state of Kentucky passed legislation in 2015 that allowed for community implementation of SSPs after approval was received from three entities: the Board of Health at a local health department, county fiscal courts (the body in each county that acts as that county's government), and city councils [17]. To date, more than 80 SSPs have been implemented across Kentucky. Notably, many of these SSPs operate in rural counties [17]. This analysis aims to better understand how the COVID-19 pandemic affected SSP implementation and expansion in rural Kentucky counties.
Data collection
This analysis was embedded in a larger study that aimed to explore overall barriers and facilitators to SSP implementation in rural counties in Kentucky through in-depth, semi-structured interviews. The interviews were conducted between August-October 2020 with people involved in SSP implementation (n = 18). Interviewees played a role in SSP implementation in at least one rural county; participants included health department directors who advocated for SSP implementation, program operators, and persons who engaged in HIV prevention services delivery tailored to the needs of PWID who reside in rural areas. Potential participants were primarily identified via searches of the publicly available literature (e.g., media reports, governmental reports) related to SSPs in Kentucky and were also identified during the data collection process via interviewees describing others who played a role in SSP implementation. Persons identified during interviews were vetted against public records to confirm their potential role in SSP implementation. Eligible participants were at least 18 years of age. Potential participants were contacted via e-mail, informed about the study, and invited to participate.
All interviews were conducted by the senior author (STA), who grew up in a rural county in southeastern Kentucky and has conducted several studies related to harm reduction and rural health disparities. Interviews lasted approximately 45 min, were conducted over Zoom or phone, and were audio recorded. All participants provided oral consent prior to participation and were offered a $25 gift card for their participation. Interviews continued until content saturation was reached on the primary study objectives (i.e., the Principal Investigator heard similar narratives and no new insights were gleaned from subsequent interviews). This study was approved by the Johns Hopkins Bloomberg School of Public Health Institutional Review Board.
Interview guide
This analysis used data gleaned from a larger study that aimed to broadly explore the barriers and facilitators to SSP implementation in rural counties in Kentucky [16]. Given that SSP implementation may be affected by a number of interrelated factors that operate at multiple levels (e.g., stigma, policy-level impediments to sterile syringe distribution), two frameworks were used to inform the interview guide: the Consolidated Framework for Implementation Research (CFIR) and Kingdon's multiple streams model of policy change [18]. The CFIR provides a systematic way to explore the factors underpinning the implementation of an intervention while Kingdon's multiple streams model suggests that policy changes occur when three streams align (a problem stream, a policy stream, and a politics stream). A semi-structured interview guide was developed based on these frameworks to address the larger study aims. We also included items intended to elicit narratives surrounding the impacts of COVID-19 on SSP operations, the results of which are reported here.
Analysis
Audio recordings were transcribed verbatim. Resulting transcripts were cleaned of any identifying information. An initial coding framework was developed from a list of a priori codes that reflected key concepts/areas of the CFIR and Kingdon's model and covered additional topics of interest, including the COVID-19 pandemic. The senior author and two qualitative coders worked collaboratively to refine the coding framework. The team read three transcripts and identified emergent themes to create a revised codebook of a priori and inductive codes. Team members then independently coded three transcripts, compared their coding results, refined code definitions, and discussed additional inductive codes. This process was repeated on three additional transcripts to create the final coding framework. Coders then independently applied the codes systematically to each of the transcripts in MAXQDA software such that each transcript was double coded. The team met weekly throughout the coding process to discuss findings; the senior author monitored comparability between coders and resolved discrepancies.
For the purposes of this analysis, the analytic team examined text segments tagged with the "COVID-19" code. We broadly defined the COVID-19 code to include any mentions of the pandemic and how it affected SSP implementation. All quotes pertaining to COVID-19 were subsequently reviewed and further categorized based on emergent themes. The results were summarized in the below analysis, and illustrative quotes were selected to underscore key points.
Confidentiality
While there are many SSPs across Kentucky, the rural nature of our study setting required that we undertake several actions to protect the anonymity of study participants. For example, we do not associate quotes with information about where a given participant lives or works or with detailed descriptions of their specific role(s) during SSP implementation processes as this information may potentially be identifiable. However, an overview of our participants and their backgrounds is provided in the results section. Of note, the results section includes quotes from 12 of the 18 participants and reflects the perspectives offered by all.
Participant characteristics
In-depth interviews were conducted with eighteen participants (10 women, 8 men), most of whom self-identified as White (89%). All interviews occurred during the latter half of 2020, a time characterized by high uncertainty about issues related to COVID-19. Interviews also predated the emergency use authorization of COVID-19 vaccines and the emergence of more transmissible coronavirus variants. Participants provided perspectives from a range of vantage points, including professional and volunteer involvement in responses to the opioid crisis. Professionally, participants held many job titles, including health department directors, healthcare providers, program directors, SSP operators, and HIV prevention service providers. They also reported having been involved in their communities via multiple agencies and coalitions, such as law enforcement, community coalitions, and advisory boards (e.g., at non-profit organizations and local health departments). Participants shared a range of ways in which the COVID-19 pandemic and evolving response activities impacted both SSP implementation as well as concomitant efforts to expand access to SSP services in rural communities.
Ongoing operation of SSPs during the COVID-19 pandemic
Participants emphasized the need to prioritize and continue providing SSP-related services throughout the COVID-19 pandemic; for instance, one participant stated, "We did not put our harm reduction services on hold. We recognized the importance and still allowed people to come…" The dedication of SSP operators to ensuring continuity of harm reduction services during the pandemic was apparent in our interviews. For example, one participant explained: "Even during the craziest moments, we've insisted that syringe exchange can't stop. The health department and the city council said that we were essential-our job was essential, so we haven't missed a beat."
While programs remained open, in some instances, the scale of service delivery was diminished, and participants reported that PWID struggled to access SSP services. A participant elaborated on these sentiments by stating, "So, COVID has definitely had an impact on services. Our numbers of needles exchanged has decreased some. I think it's been harder for some people to get into some of our services." With respect to the number of clients served at rural SSPs, participants discussed client volumes declining, increasing, and not changing. This heterogeneity in experiences may be partially explained by COVID-19 precautions evolving over time; for example, a participant shared: "Some of them [SSPs] saw a falloff in participation, but that has since been restored back to normal. But when COVID first hit-everybody didn't understand it. There were some big shutdowns that may have had a drop-off in services, but I'm told now that everybody's back to pretty much where they were."
Another participant discussed that while the pandemic limited secondary syringe exchange (i.e., PWID obtaining syringes to distribute to others), it may also have motivated people to attend the SSP on their own behalf.
"My numbers, the intake has been the same, but there have been new participants. I think maybe current participants have actually convinced their buddies… I think word has gotten around, especially where people are quarantined and trying to social distance. They can't really get around their buddies as much who done their exchanges before, so they are coming out to do it themselves, … and then word is getting out that the program is not so bad."
SSP operational adaptations due to COVID-19
Participants explained that the COVID-19 pandemic precipitated several changes in SSP operations and that staff were forced to adapt quickly as new evidence-based COVID-19 response strategies emerged. Many participants discussed a diverse range of procedural changes to mitigate COVID-19 risks and overwhelmingly emphasized the adaptability of SSP staff in order to ensure services remained as available and consistent as possible. Participants elaborated that a variety of adaptations were made to align with COVID-19 safety precautions, including promoting social distancing, changing the location and flow of service provision, and making COVID-19 risk reduction supplies (e.g., masks) available to clients. One participant explained the changes at their SSP as follows: "We rearranged the flow of needle exchange… they come to a window on the side of the lobby, instead of into a room, unless they identify a need to be seen by a nurse. … So, we really tried to modify some things so that it's safer for them."
Similarly, another participant stated, "We give out masks. We have hand sanitizer. We ask six foot in distance. I've been doing some Narcan trainings virtually." Participants also described preparing harm reduction supplies in advance of clients visiting SSPs to expedite client encounters and, by extension, reduce COVID-19 risks. One participant stated, for example, "We provide what we call a grab and go pack-we have needles prepackaged and they just come in and exchange it much quicker." Another participant described pre-packing bags with a variety of harm reduction supplies and adding syringes at the time of the client encounter: "We've had to prepare for it more-we fixed up our bags with a little bit of everything in it. Then, when they get there, we put the syringes in…"
In some instances, participants explained that SSP operations had been shifted outside to mitigate COVID-19 risks; however, SSP operators emphasized that these shifts were challenging due to weather-related constraints. One participant reflected on the heat of summer by stating: "[We are] trying to find places to do it that are out of the weather. Especially where we're still doing curbside, I'm still trying to find other avenues to make it not so unpleasant for everybody, which has proven to be a challenge."
Challenges to relationship building with clients due to the pandemic
While participants stressed the necessity of ensuring PWID have consistent access to sterile injection equipment during the pandemic, the interpersonal interactions between SSP clients and staff changed due to pandemic precautions. Participants discussed how the changes in program operations affected client experiences and resulted in reductions in their utilization of ancillary SSP services. Participants attributed decreased ancillary service utilization primarily to challenges building trust and relationships with clients in contexts of masking, social distancing, and shifts in service delivery modalities. One participant highlighted this sentiment by stating: "We've got masks and face shields and gloves and gowns and, before this, there was none of that. So, I think it's gotten stranger because they can't really see your face. They're in masks, of course, too, but everybody's in even more of a hurry now. It feels less personal and it's more difficult to build trust. So, it's been a lot harder. For a period of time, before we started bringing them [clients] back inside, we were doing curbside and had bags made up and then-they're not even really getting out of the car."
Another participant highlighted the impacts of COVID-19 mitigation strategies on enrolling SSP clients into drug treatment services by explaining: "I would say we've probably not had as many people that have gotten into treatment and so on because we're not [able to take as much time with clients] like we were before. We are requiring masks and educating and providing them to people. So, COVID has definitely had an impact on the services."
Glick et al. Harm Reduction Journal (2022) 19:47
Challenges to SSP-related service expansion due to the pandemic
Participants reported that the COVID-19 pandemic adversely affected overall program expansion, momentum building, and coalition building. The majority of participants described scenarios in which the pandemic decreased momentum for SSP expansion in rural communities and placed new initiatives on hold due to attention being redirected to pandemic response. For example, one participant explained, "But COVID happened, and so that's [initiatives to open SSPs in smaller rural counties] kind of put on the back burner right now. It's so much harder in smaller counties." Similarly, another participant noted that efforts to expand mobile SSP services were paused because of COVID-19, "We were planning to expand that [mobile van services], but then COVID. So, we still have hopes for that, but we're kind of having to wait a little while." In addition, participants discussed the ways that coalition building and relationship development with partners (e.g., policymakers, community groups, faith-based community, law enforcement) was limited by the pandemic. As stated by a participant:
"Well, I think in public health, relationship is key… spending the time to meet, understand and develop common ground with each elected official to understand what's important in each community and then try to develop those trusted local champions. It's been a major undertaking and then COVID-19 just ate our lunch. It's just really totally turned our world upside down."
Another participant echoed this sentiment and also emphasized that virtual meetings (e.g., via Zoom) were not conducive to engaging local partners in discussions about SSPs, "And so it's been more difficult to make those connections. People aren't necessarily willing to Zoom".
Discussion
While the COVID-19 pandemic impacted SSP operations throughout rural Kentucky, participants in our study reported that providing SSP services remained paramount. A range of adaptive strategies were employed to ensure continuity of SSP services while complying with recommended COVID-19 risk mitigation strategies. Participants also described scenarios in which the scale of service delivery was diminished due to these strategies (e.g., masking, social distancing) adversely affecting the ability of SSP staff to build rapport with clients. Further, participants reported that the pandemic served as an impediment to expanding SSP access in rural communities. These findings are in alignment with other studies that document the pandemic's impact on SSPs and harm reduction services [6,19,20], particularly in rural areas that have been disproportionately affected by the opioid crisis [21] and also build on the existing literature by describing the effects of COVID-19 on SSP operations specifically in rural Kentucky.
Participants in our study reported that staff at rural SSPs adapted to emerging COVID-19 safety guidance through a range of approaches, including moving services outside, preparing harm reduction kits in advance of client encounters, and offering clients personal protective equipment (e.g., masks). These adaptations highlight the importance of ensuring SSPs are able to tailor service delivery to both local contexts and emerging public health guidance. By remaining nimble and responsive to shifting COVID-19 guidance, SSPs in rural Kentucky were able to accommodate the needs of PWID without jeopardizing the health and safety of staff. While adaptations to SSP service delivery are commendable, they also underscore the need for additional research that explores how the pandemic affected the public health of PWID who do not access SSPs.
Eliminating injection drug use-associated morbidity and mortality requires that all PWID are afforded access to evidence-based and low-threshold health and human services. In addition to SSPs, communities may consider implementing other public health strategies to ensure access to sterile injection equipment and minimize COVID-19 risks. For example, implementing public health vending machines, sometimes referred to as syringe vending machines, may hold promise for increasing access to harm reduction resources for populations underserved by the more traditional SSP model [22,23]. Mail-based supply distribution may also be of public health utility, particularly for meeting the needs of PWID residing in isolated areas with limited SSP access [24].
The results of our study suggest that some COVID-19 mitigation strategies (e.g., masking, social distancing) adversely affected relationship building with PWID. This finding warrants additional study given that SSPs may be considered one of few venues that center the voices of PWID and are grounded in treating persons with dignity and respect. Throughout the world, research has shown that PWID are confronted with pervasive stigmatization that may deter help-seeking behaviors [25][26][27], and this is particularly true for PWID in rural areas, given limited availability of healthcare and social service resources [28,29]. The combination of stigma deterring help-seeking behaviors and diminished capacity for SSP staff to build rapport with clients may partially explain worsening trends in overdose fatalities during the pandemic. In essence, the erosion of relationships between SSP staff and clients may have created environments in which persons had needs (e.g., overdose prevention, substance use treatment, mental health services) that went unmet. Communities should invest in efforts to eliminate injection drug use-associated stigma among care providers through educational interventions [28] and in the broader community, perhaps through anti-stigma social media campaigns, while also bolstering the capacity of front-line programs to effectively establish rapport with PWID. Future work should be conducted to develop low-threshold strategies that support relationship building between SSP staff and clients while also providing protections against COVID-19 transmission.
This study revealed that the COVID-19 pandemic had impacts beyond SSP operations, affecting overall program scale-up, coalition development initiatives, and relationship building among people involved in SSP implementation. Overwhelmingly, participants discussed the ways the pandemic slowed down momentum and placed various new endeavors on hold, in large part because attention was redirected to pandemic response. Prior to COVID-19, there was clear evidence that rural communities were at increased risk for injection drug use-associated health consequences; for example, there were multiple HIV outbreaks linked to syringe sharing in non-urban areas and a large number of rural counties were identified as vulnerable to HIV outbreaks similar to that which occurred among PWID in Scott County, Indiana [33][34][35][36][37]. Evidence shows that the pandemic further exacerbated negative health outcomes among PWID [30][31][32]. The impediments to harm reduction initiative expansion in rural communities may further exacerbate underlying health inequities and disparities among PWID. Future lines of scientific inquiry should assess the degree to which COVID-19 interrupted broader trends in the implementation of overdose and infectious disease prevention services across the USA and strategies to overcome these interruptions to ensure PWID receive the services they need.
This research has several strengths and limitations. One strength is that we interviewed persons with diverse roles during SSP implementation, offering a multiplicity of perspectives on the impacts of COVID-19. A second strength was that the rural Kentucky setting of our study provides critical insight on an understudied region that has been disproportionately affected by the opioid crisis [38][39][40]. However, this study was not designed to make comparisons between urban and rural contexts. Future research should examine differences between urban and rural locations to better understand the impact of COVID-19 and other emergent issues on SSP operations and the overall health-related needs of PWID. A third strength lies in the fact that our data collection spanned multiple months (August-October 2020) within the first year of the pandemic. Given the rapid evolution of pandemic response guidance, this time period made it such that people were discussing different phases of the early response, while also allowing us to capture a range of ways in which the initial pandemic response was impacting SSP program implementation. Among the limitations of our study, attempting to ascertain the impact of an emerging and ongoing pandemic is difficult, given the rapidly changing context of the COVID-19 pandemic during data collection. This analysis offers a snapshot of a period of time during the early pandemic and cannot speak to the impacts of the entirety of the time period. Our study also preceded vaccine roll out and the identification of more transmissible coronavirus variants, and as such, policies and procedures may have shifted in ways outside of the scope of this paper. Finally, while our study reached saturation on the primary goals of the project, that may not be true for issues related to COVID-19.
It is possible that had we extended data collection activities and increased our sample size, we would have uncovered additional perspectives on the impact of the COVID-19 pandemic on SSPs. These limitations notwithstanding, this study offers insight into the ways the COVID-19 pandemic intersected with the opioid crisis in rural Kentucky and affected SSP operations.
Conclusion
In conclusion, this study shows that SSP operators in rural Kentucky counties employed a variety of adaptive strategies to ensure continuity of infectious disease and overdose prevention services delivery among PWID in the face of the COVID-19 pandemic. Participants reported that disease mitigation strategies (e.g., masking, social distancing) adversely affected relationship building between SSP staff and clients. The COVID-19 pandemic also served as a substantial impediment to expanding access to SSP services throughout rural communities. Given that SSPs are essential for preventing adverse injection drug use-associated health consequences, communities should invest additional resources in their operations to ensure service delivery is not negatively affected by co-occurring crises.
Participatory health research with migrants: Opportunities, challenges, and way forwards
Abstract Context Migration is one of the most politically pressing issues of the 21st century but migrant health remains an under‐researched area. The International Collaboration for Participatory Health Research (ICPHR) working group on migration developed this position statement to address opportunities and challenges in relation to migrant health. It aims to contribute to a shift from a deficit model that sees migrants as passively affected by policies to their reconceptualization as citizens who are engaged in the co‐creation of solutions. Methods This paper examines the opportunities and challenges posed by the use of PHR with migrants. It draws on a broad literature to provide examples of successful PHR with migrants and highlights critical issues for consideration. Findings Successful initiatives illustrate the value of engaging migrants in the definition of the research agenda, the design and implementation of health interventions, the identification of health‐protective factors and the operationalization and validation of indicators to monitor progress. Within increasingly super-diverse contexts, fragmented community landscapes that are not necessarily constructed along ethnicity traits, inadequate structures of representation, local tensions and operational barriers can hamper meaningful PHR with migrants. Conclusion For each research context, it is essential to gauge the 'optimal' level and type of participation that is more likely to leverage migrants' empowerment. The development of Monitoring and Evaluation tools and methodological strategies to manage inter‐stakeholder discrepancies and knowledge translation gaps are steps in this direction. Patient or public contribution This paper draws from contributions of migrant populations and other stakeholders to policymaking.
| INTRODUCTION
Migration has become one of the most politically pressing issues of the 21st century. It is a diverse experience, with potential for both positive and negative impacts for individuals and societies as a whole. 1 There is no standardized way to define 'who is a migrant'. 2 For the purpose of this paper, we consider as migrant 'any person who is moving or has moved across an international border or within a State away from his/her habitual place of residence, regardless of the person's legal status, whether the movement is voluntary or involuntary, what the causes for the movement are and what the length of the stay is'. 3 The vast majority of migrants in the world are migrant workers but the numbers of refugees and people displaced by conflict, natural disasters and climate change are at their highest levels, representing 10% of all migrants who move between countries. 4,5 This underscores the importance of addressing the health of migrants as a part of the global health-for-all agenda.
Acknowledging the essential relationship between good health and successful migration, the World Health Organization 6 In addition, the Colombo Statement, which was endorsed by 19 Ministers and government representatives in 2017, affirmed that migrants should be active stakeholders in programme planning and decision making. 7 Still, migrant health remains an under-researched area in global health and has received insufficient attention by health system planners.
Although migrants are sometimes healthier than the host population on arrival, 8,9 there is evidence of health disparities between some migrants and their host populations and a growing awareness that this is linked to the negative impacts of the broader social determinants of health (SDH). 8,12,13 This includes a pattern of exclusion whereby migrants are under-represented in health-care decision-making fora for citizens. 14,15 Appropriate methodological approaches are needed to respond to the challenges associated with contemporary migration, mobility and health. 16 Participatory Health Research (PHR) is a research paradigm that has potential to address opportunities and challenges in relation to migrant health. The goal of PHR is 'to maximize the participation of those whose life or work is the subject of the research in all stages of the research process, including the formulation of the research question and aim, the development of a research design, the selection of appropriate methods for data collection and analysis, the implementation of the research, the interpretation of the results, and the dissemination of the findings'. 17 It is guided by ethical principles to reflect its underpinning values including mutual respect, equality and inclusion. 18 In PHR, relationship building and the value of sustained partnerships throughout a project from question identification to result dissemination is of paramount importance. 19 Grounded in the work of Paulo Freire, the ultimate aim of PHR is to catalyse broad societal transformations for a fairer allocation of resources. 20,21 To this end, the entire process of PHR is conceived to leverage joint societal transformation and transcend the scope of the specific objectives of a particular project.
The underlying assumption is that engaging research participants as co-producers of new knowledge fosters their ownership over the research outcomes, which can then serve to articulate and legitimate political claims to address the social determinants of health.
| Define the research agenda
Most of the published academic research that has so far been conducted in the field of migrant health represents the perspectives of high-income destination countries and focuses on migrant-specific diseases with a particular emphasis on communicable diseases and the mental health of refugees. 4 This focus on differences between migrants versus the local populations has led researchers to overlook some of the most common health problems that affect migrants, which are often similar to those affecting the host population. 11,12,22 Concepts of civic responsibility and participation 21,23-25 emphasize migrants' right to shape the research agenda so research efforts address what migrants perceive as priority needs. 26 Decisive endorsement of the principle of participation is reflected in the increasing requirements by research funders and renewed international commitments to meaningfully involve the public and patients in health research, [27][28][29] including migrants. 7 Still, to date, the research priorities in migrant health have been primarily driven by the interests of academics, policymakers and clinicians 10 with infrequent inclusion of migrants in research prioritization processes. 12,15 Setting priorities for research is a complex process, and there is general consensus that there can be no best practice, because of the contextual differences between individual priority setting exercises. 30 36 In addition, certain health interventions 37 may violate individual rights or exacerbate discrimination, for example, when migrants are screened for infectious diseases without adequate referral to treatment when needed. 7,38,39 The provision of sensitive services is thus essential to respond adequately to the diverse needs of increasingly heterogeneous populations. 10,37,40 However, most interventions and policies are based on data derived from the general population and do not respond to the needs of migrants. 
41 Where evidence is lacking, PHR can be a good strategy to fill that gap and pave the way to develop more effective interventions and policies.
PHR acknowledges the importance of experiential, practical, emotional and intuitive sources of knowledge. It builds on the insider perspectives and direct knowledge acquired by the people living with the health problem under study, 42 who are considered experts by experience. 17 The multiple ways of knowing that are inherent to PHR can yield the holistic and nuanced understanding that is required to bridge different explanatory models of disease.

High-quality care for migrants cannot be addressed by health systems alone. Migrants from low- to high-income countries are often marginalized 22 and exposed to social, occupational and economic conditions that have detrimental effects on their health. 7,54,55 The death of migrants during their migration journey is a tragic illustration of the vulnerabilities that affect migrants at different stages of a migration process that often entails unsafe travel, poor nutrition, psychosocial stressors and harsh living and working conditions. 7 A comprehensive response to the needs of migrants requires health systems to engage with other key sectors such as welfare, housing, education and legal protection. 56,57 While the importance of the SDH is widely recognized, 7,11 the role of public policies beyond the health sector continues to be overlooked in migrant health policies. 58 In turn, the SDH agenda has been criticized for adopting a 'colour-blind' approach that presumes that an improvement of socioeconomic conditions will have a homogeneous impact on the health of different ethnic groups. 59 Enabling diverse stakeholders to learn from each other and plan together can yield fresh ideas about the conditions that are necessary to sustain optimum health at each level of the social ecology and the policy initiatives that can produce these conditions. Previous work with ethnic minorities suggests that PHR can effectively promote broader-level societal change.
In Kansas City, Missouri, for example, a participatory initiative with Black Americans leveraged positive change in schools, churches, the media and the private sector. 63 In London, the participation of migrant women in a breast screening promotion project was reported to be an empowering experience that challenged the view of migrant women as homogeneous and powerless victims. 44
| Identify health-protective factors
Despite the importance of addressing migrants' vulnerabilities using an SDH approach, it can be harmful to assume that the health of migrants is always poor when compared to the host population. 9 The focus on vulnerability can obscure evidence showing migration as a positive experience for many and the fact that many migrants are young, fit and healthy. 7 Still, migrants are often framed as carriers of disease, difficult health-care users, poorly compliant 64 and, ultimately, a burden to health systems and societies at large. 65 Worryingly, the argument that diseases travel in migrants' blood is recurrently used by anti-migrant political leaders to advance their political agenda.
| Power dynamics
Conducting PHR with migrants is not exempt from challenges, some of which are common to PHR in general. Frequently reported barriers that can affect PHR with migrants include conflicts amongst participants, often over issues related to sharing power and the distribution of resources amongst stakeholders. 17 The 'fall back into dichotomies of power' or 'tyranny of participation', whereby the nature of power dynamics within and amongst stakeholder groups is overlooked and only the narrow spectrum of interests of the most powerful or vocal is considered, is another frequently highlighted challenge of participatory research. 17,77 Other concerns are the modest impact of participatory research in terms of specific actions bringing about societal change, 78 mostly because of the limited control that PHR participants often have over key political decisions. 79 Further sources of tension include the assumption that participants will have the necessary time available for contributions, the criteria used to economically compensate some contributors but not others, the amount of the economic rewards provided, mismatched expectations, accountability issues, and different communication styles/ per-
| Definition of 'migrant communities'
Amidst the conceptual and practical difficulty of defining who is a 'migrant', it is also difficult to define 'migrant communities' and their 'representatives'. Social scientists have long contested idealized notions of 'communities'. 85 The assumption that these are constructed primarily around ethnicity is hotly critiqued by ethnicity scholars as inadequately linked to pre-conceived ideas of homogeneity and identity. 86 The over-culturalization of the concept, it is argued, leads to a 'collective image of communion premised on a shared culture' that fails to capture the actual context of real-world settings.
The loose use of the concept as 'black box' 87 is problematic because 'the community becomes too easily an explanation, as opposed to something to be explained'. 87
| Representativity
The absence of formal, physically bounded migrant communities often leads to research partnerships being established with organizations that provide services to migrants, as a proxy for migrants themselves. 89 While there are positive examples, it is prudent to be aware of potential limitations in terms of truly representing migrants' views. This is particularly worrying in contexts where assimilationist policies or cultures prevail and where the fundamental principles underlying PHR are not necessarily endorsed by migrant 'representatives'. A charity worker acting as 'community representative', for example, may not endorse ideas around migrant empowerment and may instead see migrants as passive recipients of charity who should 'adapt' to the host society, as opposed to active contributors who enrich a multi-cultural society.
Even where 'migrant communities' exist in the form of established migrant organizations, we cannot assume that these will always represent the interests of 'migrants' as a whole. Early calls from development scholars warned that non-participatory, 'top-down' assumptions made by international development programmes during the 20th century could be repeated in the health field. 23 In fact, an individual's role as 'community representative' may confer on them increased control over how resources are used and distributed, and may serve to reinforce the power of community-based elites. 84,90 As noted by Wright, PHR is not universally nor categorically 'better' than other forms of research. 17 Understanding the landscape of migrant associations, their roles and functions, and, importantly, their linkages with the broader communities and the State, is crucial to deciding the type and level of participation that suits each specific research setting. Key questions to ask from the outset include: What types of community organizations exist? What types of activities do they conduct? Who participates in them, and why? 23 Answering these should help to assess the extent to which particular groups of migrants (eg newcomers, irregular migrants, asylum seekers, trafficked persons) are represented, and what should be done to ensure their views are also taken into account.
| Local tensions
The assumed existence of 'migrant communities' willing to work together for a common goal is further challenged in increasingly super-diverse contexts 91,92 in which different migrant groups may not necessarily share the same interests, or may share some but compete over others. The high rate of Brexit voters amongst long-established migrant communities in the UK is an illustrative example that challenges the assumption that all migrants share a common goal. 93 Because the more recently arrived migrants often lack structures for effective representation, their views are less likely to be accounted for and may not fall under the umbrella of 'migrant' interests as voiced by the most organized groups. 85 The coexistence of shared and competing interests is also prevalent amongst migrants 'belonging' to the same ethnic group, because 'identity and interest are not insoluble', 94 and different sub-groups are likely to hold at least some diverging interests and views (eg youth, women).
In contrast with the ideal of cohesive communities, the everyday spaces of neighbourhoods are in fact often characterized by tensions, fragmentation, competition and conflict. Idealized notions of 'community' can thus serve to mask and even reinforce wider structural inequities, which is clearly at odds with the principles underlying PHR. It is thus essential to reconceptualize migrant communities in more fluid terms (eg not necessarily constructed along ethnic lines), and to acknowledge the existence of conflict as well as the potential inadequacy of whatever organized structures of representation may exist. Conflicts of interest are likely to arise and need to be anticipated, assessed, monitored and disclosed.
In this context, it becomes crucial to adopt a balanced approach that eschews both the 'idyll of community' critiqued by ethnicity scholars 95 and an exclusive focus on conflict and local tensions. This will help to demystify the role played by communities and their representatives, while also helping PHR investigators to identify potential niches of shared interests and aspirations around which common efforts can be articulated. 96 Formative research following the principles of PHR can be useful to assess whether and how heterogeneous populations and stakeholders may cooperate successfully, putting aside differences and working towards a common goal that may actually produce a shared 'sense of community'. Where this is unlikely to be the case, it will be crucial to acknowledge that less, or a different kind of, participation may in fact be the 'optimal' level or type of participation for a particular research context.
| Operational barriers
At the programmatic and implementation level, there are commonly reported challenges that need to be addressed. Language barriers frequently lead to the exclusion of migrants who do not speak the host society language(s) and who are already amongst the most socially excluded, with major implications in terms of equity. The use of visual and culturally adaptable Participatory Learning and Action research techniques 12,52 with the collaboration of trained interpreters and peer researchers can be an effective way to overcome these barriers. 50,97 The involvement of peer researchers, however, can lead to blurred personal and project boundaries and requires an ethical and reflective approach. 98 Other ethical issues related to PHR with migrants include negative consequences from taking part in research, as participation could put migrant populations at risk of even greater marginalization. Ensuring that informed consent procedures truly inform migrants of both the benefits and potential risks of participation becomes essential here. This may be hard to achieve when the invitation to participate comes from organizations that provide social services to prospective research participants. Careful decisions need to be taken over the most adequate compensation and other types of support to be provided to participants, taking into consideration the characteristics and risks of each particular context. A number of resources are available to guide such decisions in accordance with the ethical principles of PHR. 18 Another common challenge relates to other PHR stakeholders' priorities. For example, academics are often committed to traditional (non-PHR) methods and may feel pressured to quickly publish the evidence in high-impact scientific journals.
Policymakers or industry stakeholders may be resistant to research findings that challenge their assumptions, values, attitudes or practices, or may lack the commitment (or power) to respond to the specific concerns expressed by migrants. 81 Divergence and controversy arise in the process of reaching meaningful consensus, which implies negotiation between conflicting interests. Ideally, such a process should help actors to reorient and expand how they define the 'problem' under discussion, considering the multiple perspectives from which the project can be analysed and the different interpretations of its successes and failures. However, in practice, this is not always the case, and inequalities between negotiating actors may end up favouring those who are most powerful. 99,100 The above challenges illustrate the importance of maintaining a high standard of quality and building the empirical evidence about the value of PHR. In this process, it is important to avoid tokenistic approaches where participatory claims are used as a strategy to implement already designed policies rather than to provide spaces for populations to advocate for transformative initiatives. Participatory processes should be described in a transparent and self-critical manner with a comprehensive account of the achievements but also the challenges and limitations faced. 101 Several points should be considered to advance in this direction. First, regular monitoring and evaluation (M&E) exercises within PHR partnerships should gather stakeholders' perspectives of how things are progressing, and when and how adjustments are needed. Robust M&E frameworks are urgently needed to guide these processes, with particular attention to power dynamics that may hinder transformative participation. 100 In a decisive step in this direction, an M&E working group established within the ICPHR is already drawing from various conceptual frameworks and the views of global PHR practitioners to identify relevant domains, indicators and questions to be asked. 102 Second, guidance is needed on how to recruit, engage and create fruitful inter-stakeholder alliances in this particular field of research. A prerequisite to shared decision making is that partnerships and coalitions are established with inter-sectoral stakeholders. 17,103 The many different kinds of potential interactive spaces for participation should be considered, 104 including those established by the State, by academics or by migrant populations themselves. In addition, innovative methodological strategies are needed to identify and address conflicting priorities amongst different actors within the broader contexts in which research takes place. 101 The use of the arts is an interesting avenue to explore in this direction. 105 Finally, it is important to manage expectations and make it clear at the outset of projects that societal change may not be achieved because of external constraints. While the commitment is towards action rather than guaranteeing action, explicit and proactive steps should be taken to foster the involvement of migrant partners in collaborative knowledge translation activities to reduce the knowledge-to-practice gap.
Bidirectional mentoring between academic and under-represented groups, for example, is a promising approach that has already been successfully applied with ethnic minorities. 106 All of these actions should help to prevent tokenism and co-optation in this field of research.
| CONCLUSION
PHR presents an opportunity to contribute to generating new knowledge about migrants and their health, by bringing together stakeholders who do not usually meet each other in partnerships for research and policymaking. It can potentially contribute to a paradigm shift, from a pathogenic deficit model that sees migrants as passively affected by policies to their reconceptualization as creative, inspiring and actively engaged citizens in search of solutions. 8 This is important to counter the toxic discourse that migrants are a burden to local societies and can help to break down stereotypes by highlighting their positive contribution to social and economic prosperity. 1,5,11 This paper has emphasized the relevance of PHR in the field of migrant health research, providing an alternative approach to address the current challenges in health research and tackle health inequities. PHR is not, however, a panacea, and there are specific challenges in enacting meaningful and impactful projects in this field. The ultimate distinctiveness and added value of PHR rests in its potential to catalyse real-world action for greater social justice.
Supportive policy environments are essential for this potential to be realized. Genuine progress of PHR with migrants calls for meaningful engagement of inter-sectoral and whole-of-government policymakers. In this process, it becomes particularly crucial to grasp, for each particular research context, the 'optimal' level and type of participation that is most likely to leverage migrants' empowerment, so they can better advocate for their voices to be heard and their rights to be addressed.
At a time where the case for participatory research is gaining momentum, it becomes crucial to encourage and support critical scholarship and reflective, ethical practice, 18 not only in the application of PHR with migrants, but also in better understanding the nuances of the approach, so that it can truly live up to its potential. The development of M&E frameworks and methodological strategies to manage inter-stakeholder discrepancies and knowledge translation gaps are important steps in this direction.
ACKNOWLEDGEMENTS
We acknowledge the contributions of members of the International
CONFLICT OF INTEREST
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
Data sharing is not applicable to this article as no new data were created nor analysed in this study.
Noble Metal Nanoparticle-Based Photothermal Therapy: Development and Application in Effective Cancer Therapy
Photothermal therapy (PTT) is a promising cancer therapy modality with significant advantages such as precise targeting, convenient drug delivery, better efficacy, and minimal adverse effects. PTT relies on photothermal transducers that efficiently absorb light in the near-infrared (NIR) region, which induces the photothermal effect. Although PTT performs well in tumor therapy, it still suffers from low photothermal conversion efficiency, biosafety concerns, and incomplete tumor elimination. Therefore, using nanomaterials themselves as photosensitizers, modifying nanomaterials to improve targeting efficiency, or combining nanomaterials with other therapies can improve therapeutic effects and reduce side effects. Notably, noble metal nanomaterials have attracted much attention in PTT because they exhibit strong surface plasmon resonance and absorb light effectively at specific near-infrared wavelengths. They can therefore serve as excellent photosensitizers that mediate photothermal conversion and improve its efficiency. This paper provides a comprehensive review of the key role played by noble metal nanomaterials in tumor photothermal therapy. It also describes the major challenges encountered during the implementation of photothermal therapy.
Introduction
Cancer persists as a formidable global health challenge, acknowledged as one of the most lethal diseases [1]. The imperative for enhanced therapeutic interventions has become increasingly evident, as the quality of life for patients remains compromised by the adverse effects of existing therapies. It is estimated that in 2022 there were nearly 20 million new cancer cases and 9.7 million deaths worldwide (including NMSC) [2].
Cancer, with its multifaceted pathology, is mainly treated by traditional modalities such as chemotherapy, radiotherapy, and surgery, but these are often accompanied by debilitating side effects [3]. Therefore, novel therapies such as photodynamic therapy (PDT), sonodynamic therapy (SDT), and photothermal therapy (PTT) have been explored in the search for more effective and less harmful treatments [4,5]. Among them, PTT is a promising modality that utilizes a photothermal transforming agent (PTA) to convert light energy into heat energy, thereby inducing localized thermal therapy to ablate tumor cells while minimizing collateral damage to healthy tissues [6]. However, the efficacy of PTT critically depends on the photothermal-converting ability of the photothermal agents, especially the nanoscale variants, which are adept at generating sufficient thermal energy upon light irradiation [7,8]. Therefore, the selection of effective photothermal agents is key to promoting the efficacy of PTT. In recent years, noble metal nanomaterials have become the frontrunners in this field and won widespread attention due to their huge specific surface area and unique optical, electrical, and catalytic properties [9][10][11]. The application of noble metals in PTT is attributed to their pronounced surface plasmon resonance, which contributes to the effective absorption of specific wavelengths of light in the near-infrared (NIR) spectrum and their subsequent conversion into thermal energy, thereby generating localized thermal therapies conducive to tumor ablation [12,13]. In conclusion, the flourishing development of noble metal nanoparticles in the biomedical field not only circumvents the limitations of traditional tumor therapies, but also heralds the arrival of a new, safe, and minimally invasive mode of cancer therapy.
Unlike other reviews summarizing the use of noble metals in photothermal therapy for cancer therapy [14][15][16], in this review article, to better understand the use of PTT in tumor therapy, we focus on summarizing the mechanistic studies of noble metals in PTT applications and systematically describe the use of noble metal nanoparticles (including gold, silver, platinum, and palladium) in the field of cancer therapy, presenting combined strategies involving PTT and other therapies; for example, the application of gold and silver nanoparticles in PTT for cancer therapy is described in detail. This is followed by a comprehensive overview of recent advances in noble metal nanomaterials for cancer therapy, including their role in drug delivery, bioimaging, and combination therapy (Table 1). In addition, we discuss the importance and potential of noble metal nanomaterial-mediated PTT and further suggest future directions for PTT to achieve clinical anti-cancer effects (Figure 1).
Mechanistic Study of Noble Metal Nanoparticles for Photothermal Therapy in Cancer Therapy
With the continuous development of nanotechnology, metal nanoparticles with diverse functions and rich biological effects have received extensive attention [17]. Metal nanoparticles have many advantages, such as controllable size and morphology, excellent optical properties, and easy preparation. Most importantly, metal nanoparticles offer both enhanced diagnostic and therapeutic effects and can be used as combined diagnostic and therapeutic agents, giving them important applications in the biomedical field [18].
Photothermal therapy is a method that uses near-infrared (NIR) light to irradiate a photothermal agent and raise tissue temperature, causing local tissue necrosis through protein denaturation, cell membrane rupture, and DNA damage to achieve tumor killing [19]. The process utilizes the photothermal effect: after absorbing photons generated by laser irradiation, the photothermal agent gains energy and is promoted from the ground singlet state to an excited singlet state; this excited state is unstable and returns to the ground state through non-radiative vibrational relaxation (collisions of the photothermal molecules with surrounding molecules dissipate the energy), producing a local temperature rise [20]. Among the available agents, the extraordinary effects produced by noble metals as photothermal agents have attracted significant attention. Noble metal nanoparticles have powerful surface plasmon resonance (SPR) properties, which means they can absorb specific wavelengths of light efficiently, especially in the near-infrared region (NIR). NIR light is highly penetrating and can reach deep into tissues, reducing damage to surrounding normal tissue. When a noble metal nanoparticle absorbs light energy, it converts the light energy into heat through a non-radiative relaxation process, leading to a significant increase in the local temperature around the particle [21]. The local high temperatures generated by noble metal nanoparticles can directly disrupt the structure and function of tumor cells, causing protein denaturation, cell membrane damage, and even destruction of cellular organelles, such as mitochondrial dysfunction leading to an insufficient energy supply. The local thermal effect can also lead to cell death in a variety of ways, including necrosis (death of cells damaged to the point that they cannot sustain life activities) and apoptosis (programmed cell death, controlled by intracellular signaling pathways) [22]. Apart from these, high temperatures may also trigger cellular autophagy and inflammatory responses. In addition to directly killing tumor cells, local thermal effects may activate the immune system and promote tumor recognition and attack by immune cells. Released tumor antigens can stimulate the immune system to respond more strongly to tumors, even to untreated metastatic tumor cells. The local thermal effect can also damage tumor blood vessels and weaken their blood supply, further enhancing the killing effect on tumor cells [23].
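The thermal cell killing described above is dose dependent, and in the hyperthermia literature it is commonly quantified as cumulative equivalent minutes at 43 °C (CEM43, the Sapareto–Dewey isoeffect model). The sketch below is illustrative (the function name and sample temperatures are our own, not drawn from this review's cited studies):

```python
def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 C (Sapareto-Dewey model):
    CEM43 = sum(dt * R**(43 - T)), with R = 0.5 for T >= 43 C
    and R = 0.25 for T < 43 C. `temps_c` is a sampled temperature
    trace in deg C; `dt_min` is the sampling interval in minutes."""
    total = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        total += dt_min * r ** (43.0 - t)
    return total

# 10 minutes at a constant 45 C is equivalent to 40 minutes at 43 C
dose = cem43([45.0] * 10, 1.0)  # -> 40.0
```

The exponential weighting makes explicit why even a few degrees of extra local heating around a nanoparticle sharply increases the delivered thermal dose.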
As plasmonic metals, the LSPR of noble metal nanoparticles such as Au and Ag is sensitive to many factors, including their size, shape, composition, environment, and interactions with neighboring nanoparticles [24]. Photothermal conversion can therefore be improved not only by suppressing light scattering at the LSPR or increasing the light absorption of plasmonic nanoparticles (PNPs), but also by tuning the size, shape (morphology), and composition of the PNPs. Researchers have synthesized three common gold nanostructures, namely gold nanospheres (AuNSs), gold nanorods (AuNRs), and gold nanostars (AuNSTs), with the same mPEG-SH surface modification. The results show that all of these AuNPs can convert 808 nm near-infrared (NIR) laser light into heat through the localized surface plasmon resonance effect, with the AuNSTs exhibiting the highest photothermal conversion efficiency [25]. Building on this, whether the crystallinity of PNPs affects the photothermal conversion efficiency has also been investigated in recent years. In one such study, the researchers developed a defect-damped harmonic oscillator model. Model calculations show that defect-induced damping can effectively reduce the light scattering of PNPs and significantly improve their photothermal conversion efficiency (PCE), especially for PNPs of sufficiently large size (Au and Ag greater than ~100 nm), and defect-induced damping was found to significantly improve their light absorption and photothermal performance [26]. The shape of silver nanoparticles likewise has a significant effect on photothermal conversion efficiency, as seen across quasi-spherical silver nanoparticles, silver nanorods, silver nanocubes, Ag-Rh core-framework nanocubes, and silver nanoprisms [27]. Among all of these silver morphologies, silver nanoprisms hold great potential in photothermal therapy (PTT) due to their strong surface plasmon resonance bands in the near-infrared region [28].
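The photothermal conversion efficiencies compared across such nanostructures are typically estimated from a heating–cooling curve using the energy-balance method of Roper et al., in which the cooling time constant of the irradiated sample yields the heat-transfer term hS. A minimal sketch follows; the function names and sample numbers are illustrative, not taken from the studies cited above:

```python
import numpy as np

def cooling_time_constant(t_s, temp_c, t_surr_c):
    """Fit the system time constant tau_s (seconds) from the cooling branch,
    where theta = (T - T_surr)/(T_max - T_surr) decays as exp(-t/tau_s)."""
    t = np.asarray(t_s, dtype=float)
    temp = np.asarray(temp_c, dtype=float)
    theta = (temp - t_surr_c) / (temp.max() - t_surr_c)
    mask = theta > 0
    # t = -tau_s * ln(theta), so the slope of t versus ln(theta) is -tau_s
    slope = np.polyfit(np.log(theta[mask]), t[mask], 1)[0]
    return -slope

def photothermal_efficiency(dT_max, q_dis_w, power_w, absorbance, m_cp_j_per_k, tau_s):
    """Roper energy balance: eta = (h*S*dT_max - Q_dis) / (P * (1 - 10**-A)),
    estimating h*S as (sum of m_i * c_p,i) / tau_s."""
    h_s = m_cp_j_per_k / tau_s
    return (h_s * dT_max - q_dis_w) / (power_w * (1.0 - 10.0 ** (-absorbance)))
```

For roughly 1 g of aqueous dispersion (m·c_p ≈ 4.18 J/K), a 25 K maximum rise, a 180 s cooling time constant, 1 W of laser power and an absorbance of 1 at the laser wavelength, this estimate comes out at around 63%, comparable in magnitude to values reported for high-performing gold nanostructures.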
In summary, modern medicine utilizes noble metal nanoparticles of specific shapes and sizes to provide a relatively mild alternative for cancer diagnosis and therapy: by absorbing near-infrared (NIR) light and generating a plasmon resonance effect, they serve two main purposes, namely enhancing tumor detection and generating localized heat at the tumor site for thermal ablation. Noble metal nanoparticles, acting as light absorbers, can be injected into the tumor area and then produce a thermal effect on the tumor under light excitation. This thermal effect can raise the temperature of the tumor region to a sufficiently high level, for the required time, to achieve tumor destruction [29].
Photothermal Therapy
Photothermal therapy (PTT) holds significant promise in tumor therapy owing to its distinctive advantages, including high specificity and minimal invasiveness [30,31]. Gold nanomaterials, leveraging their surface plasmonic properties, serve as efficacious photothermal converters, thus enhancing the photothermal conversion efficiency of PTT [32]. Consequently, gold nanomaterials have emerged as a focal point of scientific inquiry in this domain.
Due to the inherent biosafety profile of starvation therapy for inducing tumor calcification, researchers have increasingly focused on this therapeutic modality in recent years. However, the efficacy of this approach is hindered by the limited availability of calcium ions in or around tumor tissues, leading to a slow and uncontrollable physiological calcification process. This challenge necessitates innovative strategies to enhance the effectiveness of starvation therapy. Recently, a group of researchers [33] developed a novel approach by synthesizing gold nanoparticles (designated as SFT-Au) functionalized with sialic acid (SA, a calcium chelator), folic acid (FA, serving as a tumor-targeting moiety), and triphenylphosphine (TPP, facilitating mitochondrial targeting). Leveraging the abundance of mitochondria within the tumor cells and capitalizing on the light collection and photothermal properties inherent to SFT-Au, this multifunctional nanoplatform aimed to achieve precise calcification of tumor mitochondria, thereby enhancing the efficacy of starvation therapy. Evaluation of this nanoplatform in photothermal therapy (PTT) revealed that calcium chelation induced nanoparticle aggregation, resulting in a significant enhancement of absorption in the long wavelength region (Figure 2). This phenomenon can be attributed to the size-dependent absorption characteristics of gold nanoparticles, thereby facilitating calcium-dependent photothermal conversion upon exposure to 808 nm radiation. Importantly, this calcium-dependent photothermal conversion exhibited sustained high efficiency even after multiple cycles, underscoring the remarkable stability of SFT-Au aggregates under near-infrared (NIR) radiation and elevated temperatures and thus holding promise for sustained antitumor therapy.
Figure 2. The nanoparticles were able to utilize and manipulate the over-expressed calcium in the mitochondria of tumor cells for the simultaneous inhibition of malignant tumors via calcium-dependent photothermal therapy and mitochondrial calcification-mediated starvation therapy [33]. Copyright 2023, Wiley-VCH GmbH.
Due to the involvement of HER2 and HER3 oncogenes in the pathogenesis and progression of specific invasive breast cancers, the overexpression of these genes presents a challenge in achieving therapeutic efficacy against such malignancies. Particularly, HER3 overexpression contributes to resistance mechanisms against conventional antitumor agents. To address these hurdles, Eva Villar-Alvarez et al. [34] devised a multifunctional, biocompatible nanoplatform integrating diagnostic and therapeutic modalities. This platform comprises branched gold nanoshells loaded with doxorubicin, conjugated with the near-infrared (NIR) fluorescent dye indocyanine green, and further functionalized with small interfering RNA (siRNA) targeting HER3, along with the HER2-specific antibody trastuzumab. This design enables a synergistic therapeutic approach, combining chemotherapy, photothermal therapy, RNA interference, and immunomodulation. In vivo experiments conducted in a hormonal mouse model demonstrated a notable reduction in tumor volume following administration of the hybrid nanocarriers, coupled with subsequent near-infrared light exposure. These findings underscore the promising therapeutic potential of such integrated nanoplatforms for combating resistant breast cancer.
Combined Photothermal Therapy and Immunotherapy
Nanomaterial-mediated photothermal therapy (PTT) holds promise for the therapy of localized tumors [35]; however, its efficacy in addressing tumor metastasis and recurrence is constrained. Combination therapy offers a strategy to enhance therapeutic outcomes, leveraging synergistic effects where the combined effect exceeds the sum of individual therapies [36].
In recent years, PTT has emerged as a prominent modality in cancer therapy, with nanomaterial-based photoimmunotherapy presenting distinct advantages.This approach facilitates the release of tumor-associated and tumor-specific antigens, thereby promoting synergistic immunotherapeutic responses.Despite the advancements in immunotherapy leading to improved survival rates among cancer patients, its clinical benefits are constrained in the context of 'cold tumors' characterized by a lack of infiltrating T cells.Xiao et al. [37] devised a tumor-targeting nanosystem named AuNC@SiO 2 @HA, aiming to modulate the immune microenvironment in murine melanoma exhibiting an immunologically 'cold' state, thereby eliciting synergistic effects with an immune checkpoint blockade (ICB).To evaluate the therapeutic potential of AuNC@SiO 2 @HA, a subcutaneous transplantation tumor model was established in immunocompetent SMM102 mice.Subsequently, different therapeutic regimens including saline, anti-PD-1 alone, AuNC@SiO 2 @HA combined with laser irradiation, and a combination of anti-PD-1 with AuNC@SiO 2 @HA plus laser irradiation were administered.Tumor growth progression was meticulously monitored throughout the experimental duration.After the completion of therapy on day 18, tumor volume and weight were measured post dissection.The outcomes revealed that anti-PD-1 monotherapy exhibited limited efficacy against tumors, whereas AuNC@SiO 2 @HA demonstrated remarkable therapeutic effectiveness against tumors when coupled with laser irradiation.Tao Liu et al. 
[38] developed pH-enzyme-NIR multi-responsive immunoadjuvant nanoparticles (RMmAGL) tailored for tumor-specific photothermal therapy and photothermal-assisted immune modulation (Figure 3B).Within tumor microenvironments, the acidic conditions triggered the dissociation of AuNPs-Glu/Lys from RMmAGL, facilitating the release of the TLR7 agonist R837.Upon internalization by tumor cells, liberated AuNPs-Glu/Lys aggregated, facilitated by the catalytic activity of TGase which is typically overexpressed in tumor cells.This aggregation enabled tumor-specific photothermal therapy upon NIR irradiation.Importantly, this process not only induces damage to the primary tumor but also prompts the generation of tumor-associated antigens in situ.The tumor-associated antigens facilitate the binding to R837, effectively stimulating the maturation of dendritic cells (DCs) through a mechanism akin to vaccination.This activation subsequently triggers antitumor T cells, thereby promoting immunotherapy.Moreover, the residual MSN mannose present in tumor tissues induces the polarization of tumor-associated macrophages from an M2-type to an M1-type phenotype.This polarization serves to remodel the immunosuppressive tumor microenvironment into an antitumor milieu, thereby further augmenting the efficacy of immunotherapy.HyeMi Kim et al. 
integrated adoptive cell therapy (ACT) with photothermal therapy (PTT) by incorporating AuNPs into tumor-reactive T cells, which were then administered intravenously [22].The AuNP-loaded T cells migrated to the tumor tissues and initiated the elimination of tumor cells.However, over time, these T cells gradually lost control over the tumor cells, resulting in tumor regrowth.Leveraging the remarkable tumor-homing capabilities of T cells, a portion of the AuNPs were transported to the tumor tissue.Subsequently, upon the loss of T cell efficacy against the tumor cells within the tumor microenvironment, photothermal therapy (PTT) was employed to further eradicate residual tumor cells.Compared to ACT or PTT monotherapy, the combination of immunophotothermal therapy significantly attenuated tumor growth and improved overall survival.
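Tumor growth in mouse models such as these is usually tracked from caliper measurements. The cited studies do not state their exact formula, but a widely used ellipsoid approximation is V = (length × width²)/2:

```python
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Common ellipsoid approximation for caliper-measured tumors:
    V = (length * width**2) / 2, with width the shorter axis."""
    return length_mm * width_mm ** 2 / 2.0

# A 10 mm x 6 mm tumor:
print(tumor_volume_mm3(10.0, 6.0))  # 180.0 mm^3
```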
Other Combined Therapies
Photothermal therapy (PTT) has garnered significant attention in cancer therapies. However, a notable challenge arises from the upregulation of heat shock proteins (HSPs) in tumor cells following heating, which can counteract the cellular damage induced by elevated temperatures. This phenomenon poses a substantial limitation to the efficacy of PTT as a standalone therapeutic modality. To enhance therapeutic outcomes, it is imperative to explore synergistic approaches wherein PTT is combined with other modalities such as immunotherapy, chemotherapy, radiotherapy, and other established cancer therapies. This integrated approach holds promise for achieving enhanced therapeutic efficacy and ultimately improving patient survival rates [39]. Gold nanorods (AuNRs) have attracted particular interest in tumor thermochemotherapy: their longitudinal absorption spectra are tunable within the near-infrared (NIR) region, giving them immense potential for cancer photothermal therapy [40]. Nevertheless, AuNRs have encountered challenges in achieving efficient in vivo photothermal therapy due to limited thermal availability. To overcome this obstacle, Zhao et al.
[41] devised AuNR-based nanocomplexes (NCs) with enhanced responsiveness, leveraging the synergistic effects of photothermal therapy and chemoembolization.The AuNR core and doxorubicin (DOX) were encapsulated within N-(2-hydroxypropyl) methacrylamide (HPMA)-co-N-(1-vinyl-2pyrrolidone) (NIPAM) copolymer nanoparticles (NPs) via electrostatic and hydrophobic interactions, respectively.Upon intravenous administration, NIR irradiation-induced temperature elevation prompted a phase transition of NIPAM, facilitating NC aggregation and the subsequent blockade of tumor vasculature.This process facilitated the release and transvascular transport of DOX, leading to its accumulation within the tumor.Consequently, the combined action of DOX and AuNRs resulted in localized antitumor efficacy while minimizing adverse effects on non-tumor tissues.
Certainly, photodynamic therapy (PDT) stands out as a highly effective localized therapy for tumors, relying on reactive oxygen species (ROS) to induce cell death via the utilization of the singlet oxygen generated by excited-state photosensitizers under suitable light sources.However, despite its efficacy, PDT encounters challenges stemming from factors such as low oxygen levels, the short half-life of ROS, and the limited availability of photosensitizers delivered via the intravenous (IV) route, as well as the constrained accumulation of photosensitizers at the tumor site.These limitations hinder PDT's broader application in tumor therapy.Addressing these issues, Xiaodong Ma et al. [42] successfully achieved the integration of photothermal therapy (PTT) and PDT using a singular nanocarrier strategy.They employed Au@MSN nanoparticles as carriers for the intracellular delivery of the photosensitizer tetra(4hydroxyphenyl)porphyrin (THPP).The resulting Au@MSN-Ter/THPP@CM nanoparticles exhibited remarkable photothermal conversion capabilities and demonstrated efficient uptake by ovarian cancer cells.Both the Au@MSN-Ter/THPP@CM nanoparticles and Au@MSN-Ter/THPP@CM@GelMA/CAT mimetic nano@microgels exhibited significant inhibition of cell proliferation.
Tumor microenvironment-mediated ratiometric near-infrared two-region (NIR-II) fluorescence imaging and photodynamic therapy play pivotal roles in enabling accurate diagnosis and effective therapy of deep-seated tumors. However, integrating these functionalities within a single nanoparticle remains a considerable challenge. Shengqiang Hu et al. [43] have addressed this issue by developing novel single-excitation triple-emission down/up-conversion nanoassemblies (Figure 3A). These assemblies enable simultaneous GSH-enhanced ratiometric NIR-II fluorescence imaging and chemo/photodynamic combination therapy for tumors.
Figure 3. (A) Single-excitation triple-emission down/up-conversion nanoassemblies enabling simultaneous GSH-enhanced ratiometric NIR-II fluorescence imaging and chemo/photodynamic combination therapy for tumors [43]. Copyright 2023, American Chemical Society; (B) Schematic representation of the therapeutic processes of pH-enzyme-NIR multi-responsive immune-adjuvant nanoparticles (R837@MSN-mannose-AuNPs-Glu/Lys, RMmAGL), which combine tumor-specific photothermal therapy and photothermal-assisted immunotherapy for malignant tumor therapy [38]. Copyright 2023, Wiley-VCH GmbH.
Biological Imaging
The detection of biomolecules holds paramount significance in fundamental molecular research, diagnostics [44,45], drug screening, and various biomedical applications.Raman spectroscopy, an analytical technique, relies on the scattering of photons by molecules within a sample to measure their vibrational and rotational modes [46,47].This method is not only facile to execute but also rapid and non-destructive, circumventing interference from aqueous solutions.Importantly, Raman spectroscopy provides highly accurate information regarding the molecular composition and structure of the target molecules [48].
However, the extensive clinical utility of Surface-Enhanced Raman Scattering (SERS) detection is hindered by the absence of a standardized method for concurrently detecting enhanced Raman signals emanating from diverse biomolecules.In response to this challenge, researchers have devised a universal SERS detection platform leveraging gold nanoparticles (AuNPs) to analyze a wide array of biomolecules.Utilizing a two-step enhancement strategy, distinct signatures of various biomolecules such as DNA, RNA, amino acids, peptides, proteins, viruses, bacteria, and lipid molecules can be directly discerned through the measurement of SERS signals, obviating the need for labeling [49].Gold nanostructures facilitate light concentration at the nanoscale by the resonance excitation of their free electrons, a phenomenon known as surface plasmonics.In Surface-Enhanced Raman Scattering (SERS), an intensified electromagnetic field amplifies Raman-scattered light emitted by proximal molecules.Gold nanostructures function as antennas, concentrating light onto molecules and enhancing the Raman-scattered signal to enable the recording of individual molecules' vibrational spectra [50].The strength of the field directly impacts the resolution and sensitivity of analytical techniques like Surface-Enhanced Raman Scattering (SERS).Consequently, "field focusing" has emerged as a crucial research focus, with strategies such as creating dense near-field spots or hot spots employed to enhance near-field focusing effectiveness.Building on this concept, researchers have successfully synthesized gold nanohalos of varying sizes using multiple stepwise synthesis pathways.They demonstrated effective near-field focusing for different gap distances between the inner and outer nanohalos through single-particle SERS measurements [51].
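The gain from such hot spots is conventionally reported as a SERS enhancement factor, EF = (I_SERS/N_SERS)/(I_ref/N_ref), comparing the per-molecule signal on the nanostructure with that of the bare analyte. A minimal sketch with purely illustrative counts (the cited works report no specific numbers here):

```python
def sers_enhancement_factor(i_sers: float, n_sers: float,
                            i_ref: float, n_ref: float) -> float:
    """Analyte-normalised SERS enhancement factor:
    EF = (I_SERS / N_SERS) / (I_ref / N_ref)."""
    return (i_sers / n_sers) / (i_ref / n_ref)

# Illustrative: a SERS signal twice the bulk reference intensity,
# produced by a million times fewer probed molecules.
ef = sers_enhancement_factor(i_sers=2.0e4, n_sers=1.0e6,
                             i_ref=1.0e4, n_ref=1.0e12)
print(f"{ef:.1e}")  # 2.0e+06
```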
Drug Delivery
Gold nanoparticles exhibit significant potential in biomedical applications, particularly in drug delivery and cancer therapy, owing to their distinctive physical and chemical characteristics. These nanoparticles can be tailored in various sizes and shapes, influencing their biodistribution and cellular uptake dynamics [25]. Consequently, gold nanoparticles can accumulate within tumor tissues through passive targeting mechanisms, leveraging the enhanced permeability and retention (EPR) effect, as well as through active targeting strategies, which involve surface modifications with molecules designed to recognize specific targets. Researchers have investigated the potential of nuclear-targeted gold nanoparticles for radiosensitization in pancreatic cancer. They achieved this by utilizing nuclear localization sequence (NLS) peptides to target gold nanospheres, thereby enhancing the accumulation of these particles within the cell nucleus. The experimental findings indicate that the targeted delivery of gold nanoparticles to the nucleus results in amplified radiosensitization through the augmentation of DNA double-strand break formation [52]. In a related context, researchers have explored the use of a VEGFA/VEGFB antagonist peptide (VGB3) coupled with gold nanoparticles to improve efficacy and extend the therapeutic duration. VGB3 is known for its ability to recognize and neutralize VEGFR1 and VEGFR2 on both endothelial and tumor cells [53]. VGB3 maintained this recognition capacity even after conjugation to gold nanoparticles: therapy with the resulting conjugate (GNP-VGB3) inhibited the VEGF-induced phosphorylation of both VEGFR1 and VEGFR2 in endothelial cells, indicating that GNP-VGB3 effectively impedes VEGF-induced receptor activation.
Moreover, the utilization of gold nanoparticles in photothermal therapy (PTT) offers notable targeting capabilities. Gold nanoparticles possess the ability to generate heat upon exposure to near-infrared (NIR) light, enabling their application in localized heating-based cancer therapy. This targeted heating effect can be directed toward tumor sites where gold nanoparticles accumulate [54]. In recent studies, researchers have explored the combined use of gold nanoshell (NS) technology with photothermal therapy (PTT) and liposomal doxorubicin to enhance the prognosis of mouse models with colorectal cancer. The results demonstrated that the combination of PTT with liposomal doxorubicin led to a deceleration in tumor growth rate and an improvement in the survival rate of the mice [55].
The precise targeting capabilities inherent in gold nanoparticles, achieved through meticulous design and customization, render them highly promising entities in the realm of cancer therapy.These nanoparticles exhibit the potential to enhance the efficiency and precision of drug delivery while mitigating the adverse effects on healthy tissues.However, it is imperative to note that further research and optimization are essential to address concerns about the safety, stability, and biodistribution of gold nanoparticles in clinical applications.
Photothermal Therapy
Silver nanoparticles are among the most extensively employed nanoparticles in both biomedical and industrial realms, exhibiting a diverse array of effects, including antibacterial [56], anti-inflammatory [57], and antitumor properties [58]. Research indicates that silver nanoparticles manifest low toxicity in their nanoparticulate form. Conversely, Ag+, generated under oxidizing conditions, demonstrates heightened cytotoxicity against various cancer cell lines through the induction of oxidative stress, mitochondrial damage, and autophagy. The remarkable physicochemical properties inherent in silver nanoparticles render them suitable for applications in surface-enhanced Raman spectroscopy (SERS) and metal-enhanced fluorescence [18,59]. Furthermore, silver nanoparticles demonstrate immunomodulatory and radiosensitizing effects [58,60].
Silver sulfide nanoparticles (Ag2S-NPs) hold considerable promise in optics-based biomedical applications, including near-infrared fluorescence (NIRF) imaging, photoacoustic (PA) imaging, and photothermal therapy (PTT). Addressing the limitations of conventional silver sulfide nanoparticles, characterized by low NIR light absorbance, stringent preparation conditions, and the use of toxic precursors, researchers successfully synthesized Ag2S-NPs with a size below 5 nm. These nanoparticles were then encapsulated in biodegradable polymer nanoparticles (AgPCPP) (Figure 4A). This innovative approach, employing non-toxic materials and mild preparation conditions, resulted in an increased number of silver sulfide encapsulations within the nanoparticles, thereby enhancing their NIR absorption and subsequently improving optical imaging and PTT effects [61] (Figure 4B-D). Similarly, Zhang et al. orchestrated the synthesis of a hollow Ag2S/Ag nanocomposite shell, comprising monolithic Ag and compound Ag2S. Following the incorporation of acoustic sensitizers and CT contrast agents, they achieved the pioneering development of multifunctional HASAIC nanoprobes through the envelopment of a thermally supported lipid bilayer (Figure 4E). The monolithic Ag within the probe demonstrated catalytic prowess, facilitating the conversion of H2O2 into O2. This catalytic activity serves to mitigate the anoxic conditions at the tumor site, thereby augmenting the effectiveness of the acoustically driven therapy. Notably, the hollow Ag2S/Ag nanocomposite shell layer serves a dual purpose: it prevents the dissolution of the pure Ag shell layer during the catalytic generation of O2 from H2O2, thereby avoiding undesirable consequences in the context of photothermal therapy (PTT) (Figure 4F,G) and photoacoustic imaging (PAI) [62].
Liu et al. engineered a distinctive black noble metal core-shell nanostructure featuring silver (Ag) nanocubes as the core and amino acid-encoded highly branched gold (Au) nanorods as the shell (L-CAg@Au and D-CAg@Au) (Figure 5A).Both L-CAg@Au and D-CAg@Au showcased superior photothermal conversion properties when compared to the amino-acid-free core-shell structure (Ag@Au) (Figure 5B,C).The antitumor therapeutic efficacy of the synthesized samples underwent a comprehensive evaluation both in vitro and in vivo.Apoptosis analysis, conducted through flow cytometry, revealed that D-CAg@Au serves as a potent photothermal therapeutic agent for antitumor applications by inducing apoptosis under laser irradiation, demonstrating commendable therapeutic effectiveness and biosafety (Figure 5D) [13].Zhang et al. utilized a cytosine-rich hairpin-like DNA structure as a growth template to fabricate a novel class of noble metal alloy nanoenzymes termed DNA template Ag@Pd alloy nanoclusters (DNA-Ag@PdNCs) (Figure 5E).These nanostructures, characterized by the integration of silver (Ag) and palladium (Pd) within the DNA scaffold, demonstrated remarkable properties.Specifically, under 1270 nm laser irradiation, the DNA-Ag@PdNCs exhibited an impressive photothermal conversion efficiency of 59.32%.Additionally, a synergistic enhancement of peroxide mimicry enzyme activity was observed due to the unique interplay between the Ag and Pd constituents.The presence of a hairpin DNA structure on the surface of the DNA-Ag@PdNCs conferred several advantageous attributes.Firstly, it imparted excellent stability and biocompatibility ex vivo, rendering these nanostructures suitable for biological applications.Moreover, the DNA scaffold contributed to an enhanced tumor site permeability and retention effect, facilitating targeted delivery and accumulation within tumor tissues.Upon intravenous administration, DNA-Ag@PdNCs demonstrated significant promise as a theranostic agent for gastric 
cancer.Utilizing high-contrast NIR-II photoacoustic imaging guidance, efficient photothermal enhancement was achieved, augmenting the efficacy of nanocatalytic therapy (NCT).Experimental findings corroborated the ability of DNA-Ag@PdNCs to effectively inhibit gastric cancer tumor growth and eradicate tumor cells through the synergistic effects of photothermal therapy (PTT) and NCT (Figure 5F).In summary, DNA-Ag@PdNCs represent a multifunctional platform for tumor diagnosis and therapy, holding great potential as a versatile tool in nano-diagnostic and therapeutic applications [63].Yoo et al. utilized oleic acid and oleylamine as the co-ligands for surface passivation to achieve the enhanced confinement of the CQD morphology, effectively prevented the CQD fusion, and prepared high monodispersity silver sulfide (Ag 2 S) colloidal quantum dots (CQDs) for tumor diagnosis and therapy (Figure 4H).Experimental results showed that the CQDs that were synthesized using dual ligands exhibited uniform size distribution, showed efficient photothermal effects under near-infrared laser irradiation (Figure 4I), and were able to effectively kill tumor cells [64].Bian et al. 
drew inspiration from biomineralization processes to fabricate silver-based peptide-directed mineralized silver nanocages (AgNCs).These AgNCs represent organic-inorganic hybrids synthesized utilizing octreotide (OCT) as a template, with their shells composed of AgNPs (Figure 5G).This hierarchical architecture ensures tight aggregation of the AgNPs, thereby facilitating exceptional plasmonic coupling.Consequently, there is a notable redshift in the resonant excitation wavelength from the visible spectrum (420 nm) to the near-infrared (NIR) region (810 nm).Moreover, the manipulation of the size and morphology of mineralized AgNCs through the modulation of the volume of added silver nitrate (AgNO 3 ) enables precise control over the surface plasmon resonance peak of AgNCs within the NIR spectrum.Experimental findings illustrate that AgNCs exhibit a photothermal conversion efficiency of 46.1%, selectively inducing cancer cell death upon NIR irradiation at 808 nm (Figure 5H).These AgNCs demonstrate remarkable antitumor properties and exhibit favorable biocompatibility in the context of photothermal therapy (Figure 5I) [65].
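Photothermal conversion efficiencies such as the 59.32% and 46.1% figures quoted above are typically extracted from a laser on/off cycle via an energy-balance (Roper-type) analysis: the cooling curve after the laser is switched off gives the heat-loss coefficient h·S, which is then combined with the steady-state temperature rise. The numbers below are illustrative, not the published measurements:

```python
def hS_from_cooling(mC: float, tau_s: float) -> float:
    """Heat-loss coefficient h*S (W/K) from the exponential cooling time
    constant tau measured after the laser is switched off."""
    return mC / tau_s

def conversion_efficiency(hS: float, T_max: float, T_amb: float,
                          Q_dis: float, power: float, absorbance: float) -> float:
    """Energy-balance estimate of photothermal conversion efficiency:
    eta = (h*S*(T_max - T_amb) - Q_dis) / (P * (1 - 10**(-A))),
    where Q_dis is the heat dissipated by the solvent/container alone."""
    return (hS * (T_max - T_amb) - Q_dis) / (power * (1.0 - 10.0 ** (-absorbance)))

# Illustrative: ~1 mL aqueous sample (mC ~ 4.2 J/K), 300 s cooling constant,
# steady-state rise from 25 to 52 deg C under a 1 W laser, A = 1.0.
hS = hS_from_cooling(mC=4.2, tau_s=300.0)
eta = conversion_efficiency(hS, T_max=52.0, T_amb=25.0,
                            Q_dis=0.05, power=1.0, absorbance=1.0)
print(f"{eta:.2f}")  # 0.36
```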
Combined Photothermal and Immune Therapy
Tumor immunotherapy represents a crucial therapeutic approach in cancer therapy, leveraging the body's immune system to combat tumors.It has emerged as a pivotal strategy alongside conventional modalities such as surgery, chemotherapy, radiotherapy, and targeted therapies, offering notable clinical efficacy and advantages [66].
Immunotherapeutic strategies encompass a diverse range of interventions, including therapeutic vaccines, immune checkpoint blockade, bispecific T-cell engagers (BiTEs), and adoptive cell therapy [67].Among these, immune checkpoints serve as pivotal regulators of T-cell activation [68].They play a crucial role in maintaining immune homeostasis by preventing excessive activation of the immune system or autoimmune responses through inhibition of T-cell activation.Tumor cells exploit this mechanism to evade immune surveillance.Conversely, immune checkpoint inhibitors, typically comprising small molecule drugs or antibodies, counteract checkpoint-mediated immunosuppression by competitively binding to immune checkpoints.This blockade unleashes T-cell activation, thereby enabling the immune system to mount a robust antitumor response and eliminate tumor cells.
In recent years, immune checkpoint blockade therapy has solidified its position as a pivotal strategy in tumor immunotherapy. Monoclonal antibodies targeting checkpoint molecules, including programmed death receptor 1 (PD-1)/programmed death ligand 1 (PD-L1) and cytotoxic T-lymphocyte antigen 4 (CTLA-4), have received FDA approval and demonstrated success in clinical applications [69,70]. The emergence of these therapies marks a significant milestone in the field. The integration of immune checkpoint blockade therapy with other therapeutic modalities, such as photothermal therapy, has garnered considerable attention. Notably, Wang et al. devised a novel approach involving AuPtAg-GOx nanoenzymes (Figure 6A,B). These nanoenzymes generate controlled heat at the tumor site upon exposure to 1064 nm laser irradiation, exhibiting mild photothermal properties (Figure 6C-H). Importantly, this system demonstrates the capability to alleviate the heat resistance of tumor cells, thereby enhancing the efficacy of the antitumor immune response (Figure 6I). The combination of mild photothermal therapy (PTT) and glucose oxidase (GOx)-mediated starvation therapy synergistically enhances the efficacy of PTT. This synergy is attributed to the improved recruitment of tumor-infiltrating lymphocytes (TILs) and the induction of immunogenic cell death (ICD), consequently transforming the tumor microenvironment from "cold" to "hot". This addresses the challenge of the limited efficacy of immune checkpoint blockade therapies in treating "cold" tumors. In vivo experiments have demonstrated that the addition of αPD-L1 to the therapy regimen comprising AuPtAg-GOx-mediated mild PTT, starvation therapy, and immunotherapy effectively suppresses both primary and distal tumors [71]. Jin et al. pioneered the development of corn-shaped Au/Ag nanorods (NRs) capable of inducing immunogenic cell death (ICD) in tumor cells upon irradiation with 1064 nm light. The corn-like Au/Ag NRs, when combined with NIR-II light irradiation, demonstrated a significant increase in tumor-cell ICD. Moreover, NIR-II light irradiation led to a notable enhancement in tumor infiltration by T cells, thereby initiating a systemic immune response aimed at reprogramming the immunosuppressed cold tumor microenvironment. These NRs exhibited synergy with immune checkpoint blockade (ICB) antibodies, effectively inhibiting distal tumor growth and inducing a robust immune memory effect to forestall tumor recurrence [72]. Bai et al. synthesized Ag@CuS-TPP@HA nanoparticles, leveraging the collective properties of reactive oxygen species (ROS), photothermal effects, and ICD antibodies to elicit a potent ICD effect, which in turn facilitated dendritic cell (DC) maturation and the activation and proliferation of T-lymphocytes. Consequently, this approach converted "cold" tumors characterized by low levels of tumor-infiltrating lymphocytes (TILs) into "hot" tumors, bolstering the systemic antitumor immune response. Ultimately, this strategy led to the eradication of primary tumors and suppression of distal tumor growth. The implications of this method extend to providing novel insights for the precise diagnosis of deep-seated tumors and facilitating efficient immune checkpoint blockade (ICB)-based antitumor immunotherapy [73].
Other Combination Therapies
While photothermal therapy (PTT) holds considerable promise for biomedical applications, it also presents certain drawbacks. Apart from the inherent complexity associated with the design of therapeutic apparatus utilizing laser-mediated therapy, several challenges need to be addressed. These include the need for enhanced targeting of light/thermal agents to tumor tissues, the potential accumulation of residual thermal sensitizers, and the risk of collateral damage stemming from stray light, all of which have hindered the widespread clinical adoption of PTT. Moreover, a critical limitation lies in the restricted penetration of light through biological tissues, resulting in diminished efficacy, particularly against deep-seated tumors [19].
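The depth limitation can be made concrete with a simple Beer–Lambert attenuation estimate. The effective attenuation coefficient below is an assumed order-of-magnitude value for soft tissue in the NIR window, not a quantity taken from the studies discussed here:

```python
import math

# Beer-Lambert-style estimate of NIR light attenuation in soft tissue:
#   I(d) = I0 * exp(-mu_eff * d)
# mu_eff = 1.0 /cm is an assumed order-of-magnitude value, not a
# measurement from the cited work.

MU_EFF = 1.0  # effective attenuation coefficient, 1/cm (assumption)

def fraction_remaining(depth_cm, mu=MU_EFF):
    """Fraction of incident light intensity remaining at a given depth."""
    return math.exp(-mu * depth_cm)

for d in (0.5, 1.0, 2.0, 5.0):
    print(f"{d:4.1f} cm: {100 * fraction_remaining(d):5.1f}% of incident light remains")
```

Under this assumption, only about 0.7% of the incident intensity survives at 5 cm, illustrating why deep-seated tumors are difficult to treat with PTT alone and why NIR-II wavelengths with lower attenuation are attractive.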
Based on the inherent limitations of PTT, relying solely on this approach for tumor therapy proves challenging. However, emerging research suggests that integrating PTT with complementary therapeutic modalities often yields synergistic effects, surpassing the efficacy of individual therapies [74,75]. Li et al. devised a strategy involving Ag/Pd bimetallic nanoenzymes with peroxidase-like activity, serving as nanocarriers for adriamycin (DOX). This approach capitalizes on the photothermal conversion capability and catalytic generation of hydroxyl radicals (HO•) to augment antitumor efficacy (Figure 7A). Experimental findings demonstrated that the Ag/Pd nanoenzymes exhibited notably high photothermal conversion efficiency (η = 40.97%) and markedly enhanced peroxidase-like activity upon laser irradiation. Additionally, these AgPd NPs efficiently catalyzed the production of HO• from H2O2 in an acidic milieu. Upon reaching the acidic tumor microenvironment, the nanomedicine AgPd@BSA/DOX, when subjected to NIR laser irradiation, facilitates DOX release while inducing hyperthermia. This orchestrated approach achieves a multifaceted therapeutic outcome encompassing ROS-mediated tumor ablation, photothermal therapy, and chemotherapy [76]. Gong et al. innovatively engineered metal-organic frameworks (MOFs) by leveraging the intrinsic biogenic enzyme glucose oxidase (GOx) to attain stable water monodispersity and create active surface sites for subsequent modifications (Figure 7B). Subsequently, silver nanoparticles were uniformly immobilized onto the GOx-functionalized MOFs, enhancing their photothermal conversion efficiency upon exposure to near-infrared light. This integration constituted an efficacious paradigm of combined starvation therapy and photothermal therapy, exhibiting superior efficacy in tumor management and metastasis inhibition compared to singular starvation therapy approaches. Notably, this study introduces a pioneering methodology for enhancing the stability and dispersion of MOFs utilizing bio-enzymes under simplified conditions. Furthermore, it underscores the utility of MOFs in potent tumor therapy strategies, obviating the necessity for conventional chemotherapeutic agents [77]. Wu et al. developed a multifunctional nanoplatform comprising MoO3−x nanosheets, Ag nanocubes, and MnO2 nanoparticles. This nanoplatform exhibits dual-mode functionality by generating reactive oxygen species (ROS) and thermotherapeutic effects upon irradiation with 808 nm near-infrared (NIR) light (Figure 7C). Specifically, when MoO3−x-Ag-PEG-MnO2 accumulates at the tumor site, MnO2 effectively depletes glutathione (GSH) with its antioxidant capacity and decomposes hydrogen peroxide (H2O2) to generate highly cytotoxic hydroxyl radicals (•OH) and oxygen (O2), thereby enhancing photodynamic therapy (PDT). The NIR-mediated photothermal therapy (PTT) offers superior tissue penetration compared to visible-light-mediated PDT. Moreover, PDT efficacy can be further enhanced by irradiating MoO3−x-Ag, leveraging silver's strong absorption of NIR light for efficient photothermal conversion. This single nanomaterial integrates NIR laser-induced synergistic PDT/PTT with multimodal imaging capabilities, holding significant promise for cancer diagnosis and therapy [78].
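Photothermal conversion efficiencies such as the η = 40.97% quoted above for the Ag/Pd nanoenzymes are typically extracted from heating/cooling curves with a Roper-style energy balance. A minimal sketch follows; all numerical inputs are illustrative placeholders, not values from the cited studies:

```python
import math

# Roper-style energy-balance estimate of photothermal conversion
# efficiency from a heating/cooling experiment:
#   eta = (hS * (T_max - T_surr) - Q_dis) / (I * (1 - 10**(-A)))
# where hS is obtained from the exponential cooling time constant tau_s:
#   hS = m * c_p / tau_s
# All numbers below are illustrative placeholders, not data from the
# cited Ag/Pd study.

def hS_from_time_constant(m_g, c_p, tau_s):
    """Heat-transfer coefficient x surface area (W/K) from the cooling fit."""
    return m_g * c_p / tau_s  # c_p in J/(g*K), m in g, tau_s in s

def conversion_efficiency(hS, T_max, T_surr, Q_dis, I, A_lambda):
    """Photothermal conversion efficiency (dimensionless)."""
    return (hS * (T_max - T_surr) - Q_dis) / (I * (1.0 - 10 ** (-A_lambda)))

hS = hS_from_time_constant(m_g=1.0, c_p=4.186, tau_s=180.0)  # ~0.023 W/K
eta = conversion_efficiency(hS, T_max=41.0, T_surr=25.0,
                            Q_dis=0.02, I=1.0, A_lambda=0.8)
print(f"eta ~ {eta:.1%}")
```

With these placeholder values the estimate comes out near 42%, of the same order as the efficiencies reported for the noble metal systems above; in practice every input (sample mass, baseline dissipation, laser power, absorbance at the laser wavelength) must be measured for the specific dispersion.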
Biological Imaging
Silver-based plasmonic nanoparticles find extensive applications in catalytic technology, nanomedicine, and analytical detection, owing to their exceptional optical properties. Therefore, a comprehensive exploration of the optical characteristics of individual silver-based nanoparticles is imperative. When subjected to an optical field, the electrons within these nanoparticles resonate, resulting in the generation of a surface plasmon resonance absorption peak. The position and intensity of this absorption peak are intricately linked to the nanoparticle's shape, size, dielectric constant, and the refractive index of the surrounding medium.
The optical behavior of metal nanoparticles is significantly influenced by their size. In particular, the character of the surface plasmon modes (the collectively oscillating surface electrons) is closely tied to the dimensions of the nanoparticles. For small nanoparticles, where the particle diameter is much smaller than the wavelength of the incident light, the surface plasmons are polarized uniformly along the incident electric field, indicating the presence of the dipole surface plasmon resonance (SPR) mode. Conversely, in the case of large nanoparticles, the surface plasmons exhibit uneven polarization across the nanoparticle with some phase delay. Daedu Lee et al. [79] conducted a comparative analysis of the extinction spectra of silver-containing films (SCFs) embedded within a thin polystyrene (PS) layer. The extinction spectra of SCFs composed of small silver nanoparticles (SCF1-SCF3, with average particle sizes ranging from 59 to 93 nm) predominantly display dipole SPR bands at wavelengths between 489 and 557 nm. However, with an increase in particle diameter to 219 nm (SCF8), the dipole SPR band experiences a consistent redshift, extending to 900 nm.
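The position of the dipole SPR band described above can be roughly estimated from the quasi-static Fröhlich condition, Re ε(ω) = −2ε_m, using a Drude model for silver. The sketch below uses approximate textbook parameter values, not fits to the silver-containing films of ref. [79]:

```python
import math

# Quasi-static (Frohlich) estimate of the dipole LSPR of a small silver
# sphere: resonance when Re[eps(omega)] = -2 * eps_m, with the real part
# of a Drude model  eps(omega) = eps_inf - omega_p**2 / (omega**2 + gamma**2).
# Parameter values are approximate textbook numbers (assumptions).

EPS_INF = 5.0   # background (interband) permittivity of silver
OMEGA_P = 9.0   # plasma frequency, eV
GAMMA = 0.02    # damping, eV (small; barely shifts the peak position)
EPS_M = 2.5     # polystyrene-like embedding medium (n ~ 1.58)

# Solve eps_inf - omega_p^2/(omega^2 + gamma^2) = -2*eps_m for omega:
omega_res = math.sqrt(OMEGA_P**2 / (EPS_INF + 2.0 * EPS_M) - GAMMA**2)
wavelength_nm = 1239.84 / omega_res  # hc = 1239.84 eV*nm

print(f"dipole LSPR estimate: {wavelength_nm:.0f} nm")
```

This estimate lands in the low-400 nm range for a polystyrene-like medium. It is valid only for particles much smaller than the wavelength; for larger particles, retardation effects redshift and broaden the band, consistent with the shift toward 900 nm reported for the 219 nm particles.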
Magnetic resonance imaging (MRI) [80] and photoacoustic imaging (PAI) [81] are currently the preferred medical imaging techniques. Regions of superparamagnetic iron oxide nanoparticle (SPIONP) aggregation produce strong negative contrast in T2/T2*-weighted MR images and appear as dark regions with low signal. Furthermore, SPIONPs aggregate easily, and their stability is enhanced by a factor of 30 after coating with noble metals, while showing strong negative contrast in MRI and strong NIR (680-850 nm) absorption [63]. Therefore, multifunctional nanoplatforms consisting of AgNPs and iron oxide nanoparticles (IONPs) can be developed for combined MRI and PA imaging modalities. Shehzahdi S. Moonshi et al. [82] designed a novel silver-iron oxide nanohybrid and succeeded in the NIR region with an effective targeted photothermal therapeutic strategy and dual imaging capability using MRI (in vitro and in vivo) and PAI (in vitro) for anticancer therapy. The excellent anticancer activity of this nanoparticle system is determined by the inherent anticancer properties of Ag and the elevated photothermal temperature under near-infrared light irradiation. FA-AgIONPs show excellent potential for simultaneous applications in safe and successful targeted photothermal therapy, dual-modal imaging with in vitro MRI, and in vivo imaging of cancer models.
Drug Delivery
The advancement of targeted drug delivery techniques has significantly enhanced the efficacy of nanoparticle-based anticancer therapeutics, particularly those employing metal nanoparticles. In the realm of drug delivery, surface engineering of silver nanoparticles with specific functional groups enables targeted modifications, thereby augmenting their affinity towards the intended target site. Functionalization can involve the attachment of various molecules such as ligands, antibodies, proteins, or oligonucleotides onto the nanoparticle surface. Victoria O. Shipunova et al. conducted pioneering work in synthesizing targeted formulations for cancer photothermal therapy (PTT), which entail combinations of silver nanoparticles (AgNPs) and the anti-HER2 affinity ligand ZHER2:342. The localized surface plasmon resonance (LSPR) properties of the AgNPs are further exploited by heating the targeted nanoparticles within HER2-positive cells, thereby enhancing the therapeutic effect [83]. However, it is imperative to address concerns regarding poor biocompatibility, as it may incite immune responses or toxic effects, thereby compromising the efficacy of targeted delivery and therapeutic outcomes. Thus, ensuring optimal biocompatibility of silver nanoparticles with biological systems emerges as a critical consideration in achieving effective targeting strategies. Renquan Lu et al. conducted an aqueous-phase synthesis of silver (Ag) nanoparticles utilizing silver nitrate (AgNO3) and freshly extracted egg whites. The proteins present in the egg whites possess diverse functional groups that play pivotal roles in the reduction of Ag+ ions and in maintaining the stability and dispersion of the resultant nanoparticles. This process is crucial for achieving the desired properties of the nanoparticles. In vitro cytotoxicity assessments demonstrated that the Ag-protein bioconjugates exhibited excellent biocompatibility with the mouse fibroblast cell line 3T3. Furthermore, X-ray irradiation experiments conducted on 231 tumor cells revealed that these biocompatible Ag-protein bioconjugates enhanced the efficacy of the irradiation therapy [84].
Through meticulous design and modulation of these variables, researchers can attain a heightened level of targeting specificity with silver nanoparticles, rendering them increasingly auspicious for a spectrum of medical applications. These applications encompass tumor therapy, molecular imaging, and drug delivery, where the unique properties of silver nanoparticles can be harnessed to advance therapeutic and diagnostic modalities.
Platinum Nanoparticles
Platinum nanoparticles exhibit not only commendable photothermal stability but also the potential for synergistic applications with chemotherapy or immunotherapy, thereby demonstrating promising outcomes in cancer therapy platforms. This potential is underscored by the ability of platinum nanoparticles to integrate seamlessly into multifaceted therapeutic approaches against cancer [85]. Zhang et al. [86] have developed a comprehensive therapy system proficient in mediating photothermal therapy (PTT), chemodynamic therapy (CDT), and immunotherapy. Utilizing ultrasonic fragmentation techniques, the particle size of the complex was deliberately reduced, enhancing its migratory capability toward lymph nodes and tumor sites. Notably, yeast microcapsules, owing to their dextran components, prove effective in activating immune responses. They stimulate dendritic cell (DC) maturation, induce macrophage polarization, release various cytokines, and activate T cells. The integration of PTT and CDT not only ensures the effective elimination of tumor cells but also elicits an antitumor immune response, thereby extending survival times. In a separate study, Sun et al. [87] synthesized silicon-platinum nanocomposites (Si-Pt NCs) through in situ reduction of Pt nanoparticles grown on silicon nanowires (SiNWs). Leveraging the catalytic activity of the Pt NPs and the mesoporous structure of the SiNWs, the Si-Pt NCs demonstrated robust sonodynamic therapy (SDT) and CDT activities, surpassing the efficacy of pure Pt NPs. The integration of these activities presents a promising avenue for cancer therapy. Furthermore, the mild photothermal effect substantially enhances the combined SDT and CDT therapy.
Lei Zhao et al. [88] described a novel dual mesoporous nanosystem that can be used for photothermal therapy (based on Pt) and in vivo magnetic resonance imaging (based on Gd3+ ions) with a simple and mild strategy. They first synthesized mesoporous platinum nanoparticles (mPtNPs) and coated them with mesoporous silica to form mPt@mSiO2. Next, they modified the mPt@mSiO2 nanomaterials with -NH2 groups to allow further binding with Gd-DTPA complexes, ultimately leading to the formation of the mPt@mSiO2-Gd-DTPA nanosystem. The core of this system is the mesoporous structure of the mPtNPs, which exhibits excellent photothermal effects under 808 nm near-infrared laser irradiation. In addition, the mesoporous silica shell layer functionalized with the Gd-DTPA complex shows potential as an MR imaging contrast agent, enabling combined photothermal therapy and in vivo magnetic resonance imaging.
Palladium Nanoparticles
Palladium nanoparticles, as noble metals, possess distinctive attributes including thermal and chemical stability, catalytic activity, and adjustable optical response [89]. Notably, these nanoparticles exhibit a stable photothermal effect characterized by a high photothermal conversion efficiency of 49% and demonstrate significant absorption in the near-infrared spectrum. This property facilitates plasmon-mediated photothermal conversion, leading to the generation of heat capable of ablating tumor cells. Prem Singh et al. [90] investigated the synthesis of innovative bimetallic palladium nanocapsules (Pd Ncaps), comprising gold bead cores encased within hollow porous palladium shells, and explored their application in photothermal therapy for cancer. The researchers employed bifunctional carboxy-PEG-thiols as linkers and functionalized the Pd Ncaps with the targeting molecule Herceptin to enhance targeting efficacy against SK-BR-3 cells. This coupling was verified through detailed X-ray photoelectron spectroscopy (XPS) analysis. The targeted in vitro plasmonic photothermal therapy (PPTT) efficacy of Herceptin-conjugated Pd Ncaps against SK-BR-3 breast cancer cells was assessed, revealing a remarkable cell kill rate of 98.6% at a concentration of 50 µg/mL, utilizing a 1064 nm near-infrared (NIR-II) laser at a low power density of 0.5 W/cm2. Xue Dong et al. [91] proposed a multifunctional bioactive gel system designed not only for drug delivery but also to establish a self-adjuvant immune microenvironment that synergistically augments photothermal therapy (PTT), eliciting a potent antitumor immune response. Initially, the M13 phage was engineered via phage display technology to express a glutamate sequence on the pVIII protein (designated M13E), enhancing biomineralization. Subsequently, a self-adjuvant phage gel (referred to as M13 Gel) was synthesized through chemical cross-linking of glutaraldehyde with the engineered phage coat protein via a Schiff base reaction. Palladium nanoparticles (PdNPs), possessing photothermal properties, were then mineralized in situ on the surface of the M13 Gel, resulting in the formation of M13@PdGel. Further incorporation of the IDO1 inhibitor NLG919 led to the development of the M13@Pd/NLG Gel. This multifunctional bioactive gel system not only facilitates cargo loading but also serves as a self-adjuvant and antigen reservoir, thereby promoting immune cell activation.
Through surface functionalization, palladium nanoparticles can bind to target molecules to enable targeted imaging of specific cells or tissues, leading to potential applications in areas such as cancer diagnosis and treatment monitoring [92]. It is an interesting observation in the literature that bimetallic nanomaterials may exhibit very different synergistic effects than monometallic nanomaterials. This phenomenon can be attributed to the fact that the geometry of the metal particles has an impact on the structure and ratio of the active centers of the catalytic material. Examples include palladium nanorods [93], palladium nanosheets [94], and palladium nanospheres [95]. In addition, due to the quantum size effect, the electronic energy levels of the metal nanoparticles change, which, in turn, affects the orbital hybridization and charge transfer between the catalytic material and the reactants. These structural changes may contribute to the enhancement of enzyme activity and photothermal conversion. Therefore, we can assume that bimetallic nanomaterials have unique properties beyond the scope of single-metal nanomaterials. Ruyu Li et al. [96] introduced palladium nanomaterials to improve the catalytic ability and photothermal conversion of silver nanoparticles, preparing elm pod polysaccharide (EPP)-stabilized silver-palladium bimetallic nanoparticles (EPP-AgPd1.5 NPs), which exhibit good photothermal conversion performance and antitumor ability. EPP-AgPd1.5 NPs have good potential for future biologically relevant detection and the therapy of malignant tumors.
The Advantages and Disadvantages of Noble Metal Nanoparticles
In recent years, noble metal nanoparticles have played a crucial role in the field of biomedical materials. These nanoparticles exhibit remarkable optical properties, particularly localized surface plasmon resonance (LSPR), making them highly desirable for sensor fabrication [97]. The aggregation of noble metal nanoparticles has been shown to significantly enhance their optical characteristics, augment the electromagnetic field, and increase their hydrodynamic diameter. Consequently, sensors based on the aggregation of noble metal nanoparticles demonstrate exceptional performance in detecting harmful substances [98]. Noble metal particles possess outstanding nanophotonic and catalytic properties, along with high surface-area-to-volume ratios. Various noble metal nanostructures with diverse sizes, shapes, compositions, and aggregation states have been developed and found widespread applications in catalysis [99], imaging [100], therapeutics [101], and light harvesting [102]. Additionally, noble metal nanoparticles serve as effective nanosensors for detecting and treating cancerous substances. Furthermore, they serve as carriers for drug delivery, facilitating controlled release and the enhanced, targeted delivery of therapeutic agents, thereby improving therapeutic efficiency for various diseases.
However, noble metal nanomaterials are not without limitations, which can hinder their application in tumor therapy and other scientific domains. The production process of noble metal nanomaterials tends to be more expensive and intricate compared to alternative materials. This inherent complexity can restrict their scalability and elevate production costs, posing challenges for widespread adoption. Additionally, while noble metals generally exhibit stability under standard room temperature and pressure conditions, they are susceptible to oxidation, dissolution, or chemical reactivity under specific environmental circumstances, such as elevated temperatures or exposure to strong acids or bases. These factors can compromise their stability and performance over time [103]. Moreover, certain forms of noble metals may exhibit biotoxicity [79]. For instance, silver nanoparticles (AgNPs) have been found to interact with the intestinal immune system, triggering immune-related signaling pathways that modulate pro-inflammatory and/or anti-inflammatory cytokines, ultimately leading to inflammatory effects [104]. Consequently, despite the numerous advantages offered by noble metal nanomaterials across various applications, these drawbacks impose undeniable limitations, particularly in medical fields.
Conclusions and Future Prospects
In summary, noble metal nanomaterials play a crucial role in biomedical applications and significantly contribute to the advancement of the pharmaceutical industry. Gold nanoparticles, silver nanoparticles, platinum nanoparticles, and palladium nanoparticles are extensively utilized in biomedical research and clinical practice, owing to their remarkable properties. Additionally, noble metal nanoparticles exhibit a potent surface plasmon resonance effect, enabling efficient absorption of specific light wavelengths, particularly in the near-infrared (NIR) region. This phenomenon facilitates the conversion of light energy into heat energy, enabling targeted tumor ablation while minimizing damage to surrounding healthy tissues.
However, the use of noble metal nanomaterials alone for drug delivery can suffer from insufficient targeting, which, in turn, reduces drug delivery efficiency. Therefore, targeted drug delivery can be achieved through careful design and customization of noble metal nanomaterials, along with enhancement of their photothermal properties and stability. Undoubtedly, through strategic modifications, the efficacy of delivering noble metal nanoparticles to specific sites is significantly amplified. Additionally, it is recognized that a singular photothermal therapy (PTT) approach may not always yield the optimal outcome in tumor therapy. Consequently, integrating noble metal nanoparticle-mediated photothermal therapy with immunotherapy, chemotherapy, or radiotherapy holds promise for synergistic effects, surpassing the additive efficacy principle ("1 + 1 > 2"). Moreover, the precise control over size, shape, and surface properties during synthesis confers versatility upon noble metal nanoparticles, enabling the design and fabrication of tailored nanomaterials for distinct cancer types. Furthermore, owing to their remarkable optical characteristics and enduring photothermal stability, noble metal nanoparticles emerge as compelling contenders for clinical translation, playing pivotal roles in bioimaging, drug screening, and various other biomedical applications. In recent years, although photothermal therapy has made great strides in clinical research, it still faces a number of challenges. The distribution and transport of most phototherapeutic molecules in organisms is not ideal, owing to the lack of effective PTAs in clinical practice, and the limited depth of light penetration restricts the therapeutic effect on large and deep tumors. Therefore, precious metals with high photothermal conversion efficiency are expected to be ideal photosensitizers, and mild PTT has gradually become a research trend. Fortunately, the relationship between molecular structure and phototherapeutic effect is simple and controllable. We believe this will facilitate the rapid development of next-generation phototherapy in clinical applications.
However, noble metal nanoparticles are currently in the early stages of research, and further exploration of their potential is warranted. Significant challenges persist, particularly concerning their potential toxicity, chemical stability, and synthesis methods. In conclusion, while noble metal nanomaterials hold significant promise for photothermal therapy (PTT) applications, ongoing research efforts are essential to fully understand and address their limitations. We must persist in our exploration of their functionalities and strive to overcome current challenges to pave the way for their clinical utilization in combating cancer.
Figure 1 .
Figure 1.Schematic representation of noble metal nanomaterials for use in cancer photothermal therapy.
Figure 2 .
Figure 2. Schematic illustration of synthesis (I) and therapeutic mechanism (II) of SFT-Au nanoparticles. The nanoparticles were able to utilize and manipulate the over-expressed calcium in the mitochondria of tumor cells for the simultaneous inhibition of malignant tumors via calcium-dependent photothermal therapy and mitochondrial calcification-mediated starving therapy [33]. Copyright 2023, Wiley-VCH GmbH.
Figure 4 .
Figure 4. (A) Schematic illustration of AgPCPP nanoparticles; (B) Heating and cooling curves of AgPCPP nanoparticles, free Ag2S-NP, and DI water under 10 min of irradiation (808 nm, 2 W cm−2); (C) Photostability of AgPCPP under four cycles of irradiation/cooling for a total duration of 20 min (808 nm, 2 W cm−2); (D) Infrared thermography of tumors treated with saline, Ag2S-NP, and AgPCPP nanoparticles during 30 min of laser irradiation [61]. Copyright 2023, Wiley-VCH GmbH; (E) Schematic diagram of the synthesis of the HASAIC probe; (F) Temperature change at the tumor site under laser irradiation after injection of PBS and probe, respectively; (G) Temperature-change curve of the tumor site under laser irradiation after injection of PBS and probe [62]. Copyright 2021, Elsevier B.V. All rights reserved; (H) Schematic illustration of ligand passivation on the Ag2S CQD surface; (I) On-off cycles of the photothermal effect with 500 nM CQDs under 1.5 W/cm2 [64]. Copyright 2024, American Chemical Society.
Figure 5 .
Figure 5. (A) Schematic illustration of L/D-CAg@Au nanoparticles. (B) Calculation of the photothermal conversion efficiency at 808 nm. The orange curve represents the photothermal effect of the Ag@Au aqueous solution. The black line represents the time constant (τs) for the heat transfer of the system. (C) Heating and cooling curves of water, Ag@Au, L-CAg@Au, and D-CAg@Au aqueous dispersions (100 µg mL−1) under 808 nm laser on/off irradiation (1.0 W cm−2). (D) Infrared thermography of tumor sites exposed to 808 nm laser irradiation at 1.0 W cm−2 for 10 min [13]. Copyright 2023, American Chemical Society. (E) Schematic illustration of the synthesis of AgNCs using OCT as the biotemplate and their application. (F) Photothermal heating curves of MKN-45 tumors under a 1270 nm laser after intravenous injection of DNA-Ag@Pd NCs for 6 h [63]. Copyright 2023, Wiley-VCH GmbH. (G) Schematic illustration of the synthesis of DNA-templated Ag@Pd alloy nanoclusters (DNA-Ag@Pd NCs). (H) Temperature elevation profiles of AgNP and AgNC solutions at the same concentration under laser irradiation, and photothermal images of the AgNC solution at different irradiation time intervals. (I) In vivo thermal images of mice injected with saline or the AgNCs under NIR laser irradiation, and tumor temperature profiles as a function of laser irradiation time [65]. Copyright 2018, American Chemical Society.
Table 1 .
Summary of the properties and applications of noble metals in tumor photothermal therapy. | 2024-05-25T15:18:58.389Z | 2024-05-22T00:00:00.000 | {
"year": 2024,
"sha1": "afbf4bb8593cdb1e73d7512c3a2628a0adf9e3ef",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/25/11/5632/pdf?version=1716371403",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c157f138309f1e3223ad78637039e3a48e467a7a",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": []
} |
219437466 | pes2o/s2orc | v3-fos-license | Air Pollution: Adverse Effects and Disease Burden
At the time of the Workshop in 2017, the scientific evidence was certain: ambient air pollution, that is, contamination of outdoor air consequent to man’s activities, is a major cause of morbidity (ill health) and premature mortality (early death). While the rise of ambient air pollution is relatively recent, air pollution has probably had adverse effects on human health throughout history. In fact, the respiratory tract, which includes the nose, throat and lungs, has a remarkable system of defense mechanisms to protect against inhaled particles and gases. The use of fire for heating and cooking came with exposure to smoke, an exposure that persists today for the billions who use biomass fuels for cooking and heating. The rise of cities concentrated the emissions of pollutants from dwellings and industry and led to air pollution that was likely affecting health centuries ago. Continued industrialization and also electric power generation brought new point sources of pollution into areas adjacent to where people lived and worked. During the twentieth century, cars, trucks, and other fossil fuel–powered vehicles became a ubiquitous pollution source in higher-income countries and created a new type of pollution—photochemical pollution, or “smog”—first recognized in the Los Angeles air basin in the 1940s. The unprecedented growth of some urban areas to form “megacities,” such as Mexico City, Sao Paulo, London, and Shanghai, has led to unrelenting air pollution from massive vehicle fleets and snarled traffic and from polluting industries and coal-burning power plants. With population growth and urbanization, ever more megacities are anticipated; the current total of cities with a population over 10 million has now reached 31.
ried out by the US Environmental Protection Agency in support of revisions to the National Ambient Air Quality Standards (NAAQS), which apply to the major outdoor pollutants. Emphasis is given to airborne particulate matter, a ubiquitous type of ambient pollution that derives from multiple sources; the most widely used indicators are PM 10 and PM 2.5 (particulate matter less than 10 and 2.5 μm in aerodynamic diameter, respectively) and to ozone. The chapter also addresses the burden of disease attributable to air pollution, documenting the target for air quality improvement and the global gains in public health that can be made.
While the emphasis here is on ambient air pollution that has adverse effects at local and regional levels, climate change also comes from air pollution with greenhouse gases. The connections are direct; the same combustion processes that release particles and toxic gases also generate carbon dioxide.
The Health Effects of Air Pollution
Overview
Ambient air pollution comprises a complex and dynamic mixture of gaseous and particulate air pollutants. The array of health effects linked to ambient air pollution is broad and includes increased risk for respiratory infections, exacerbation of asthma and chronic obstructive pulmonary disease (COPD, a disease involving destruction of the lung structure), cardiac (heart) events, contributions to development of major chronic diseases (coronary heart disease, COPD, and cancer), and impaired lung growth and respiratory symptoms during childhood. Additional adverse health outcomes are under investigation: autism and other neurodevelopmental disorders, adverse reproductive outcomes, and more rapid "brain aging," for example. There are several general mechanisms underlying these health effects, particularly oxidant stress, an excess of reactive molecules, and a heightened inflammatory state, given the oxidative nature of ambient air pollution. The increased risk for cancer causally linked to air pollution likely comes primarily from the presence of specific carcinogens (i.e., cancer-causing agents) in ambient air pollution, such as polycyclic aromatic hydrocarbons, and from inflammation; particles collected in outdoor air are mutagenic, which means that they can damage DNA (IARC, 2015). Table 6.1 provides a listing of major pollutants and associated health effects, as well as some of the current standards and guidelines for controlling their concentrations.
While the problem of air pollution was noted centuries ago, the contemporary era of research on air pollution and health and evidence-driven air quality regulation and management began in the mid-twentieth century following a series of episodes of very high pollution with disastrous health consequences (Brimblecombe, 1987). The most dramatic was the London Fog of 1952, which caused thousands of excess deaths and prompted some of the first epidemiological studies of the health effects of air pollution (Fig. 6.1) (Bell & Davis, 2001). In the United States, recognition of the public health dimensions of air pollution also began in the mid-twentieth century, driven by the rising problem of smog in southern California, episodes of visibly high pollution in major cities, and the 1948 air pollution episode in Donora, Pennsylvania, which caused 20 excess deaths and thousands of illnesses in one small town. Multiple investigational approaches have been used to characterize the health effects of ambient air pollution. Initially, the dramatic pollution episodes made clear that high levels of air pollution caused excess mortality, particularly in elderly people with chronic diseases and in infants and young children. As air pollution levels declined with regulation, increasing emphasis was placed on understanding the quantitative risks of air pollution so that air quality standards could be set that would be protective of public health. In other words, researchers did studies to understand by how much risk changes as air pollution increases or decreases. Epidemiological studies (i.e., research based in populations) were critical for that purpose. Cohort or longitudinal studies were most informative. Such studies involved following participants over time, estimating pollution exposures, and tracking health events; analyses focused on quantifying the risks associated with air pollution exposure during follow-up. 
For example, the Children's Health Study in Southern California tracked lung growth and respiratory health in school children from communities having a range of pollution concentrations (Gauderman et al., 2015). Now in progress for two decades, the study has shown that higher levels of air pollution slow lung growth and that reduction of pollution enhances it.
These epidemiological approaches are complemented by toxicological studies that provide insights into the mechanisms by which air pollution causes adverse health effects. Such evidence is critical in reaching causal conclusions on the adverse health effects of air pollution. In the past, toxicological studies often involved exposure of animals to a single pollutant, such as ozone, to isolate the pollutant's effect from those of other pollutants present in the air pollution mixture. For studying some pollutants, human volunteers have inhaled the pollutants in an exposure chamber over a short interval and their responses closely monitored. Additionally, pollutants are also studied in cell systems; these systems are likely to gain increasing prominence as new, sophisticated systems probe gene expression of different kinds of cells following exposure.
There are a number of national groups that periodically assess the evidence on adverse effects of air pollution and provide guidance to the setting of standards and guidelines. In the United States, the Environmental Protection Agency carries out reviews of the evidence as the basis for renewal of major air quality standards (the National Ambient Air Quality Standards or NAAQS) on a 5-year cycle. Reviews are conducted by the United Kingdom, the European Commission and other nations. The World Health Organization releases air quality guidelines, which are currently being updated. In reviewing the evidence, a judgment that the findings are strong enough to infer a causal relationship has great weight for regulation. The health risks associated with major air pollutants are reviewed below.
While air pollution research and regulation generally focuses on specific pollutants, air pollution outdoors is a complex mixture. Effects attributed to a single pollutant, particularly when studied in the "real world" context, may reflect the toxicity of the mixture as indexed by a particular pollutant. Ambient particulate matter (PM), for example, comes from myriad sources and is emitted as a primary pollutant from combustion and other sources; it is also formed through chemical transformations of gaseous pollutants, such as the formation of particulate nitrates from gaseous nitrogen oxides. The mixture of pollutants formed from vehicle emissions, generally referred to as traffic-related air pollution, may have specific toxicity beyond that of well-studied individual components.
Particulate Matter
The literature on the health effects of particles is enormous, comprising many epidemiological and toxicological studies (U.S. Environmental Protection Agency, 2009). With regard to ambient air pollution, the risks of particulate matter have assumed great prominence because particles are widely monitored and used as the principal indicator for estimating the burden of morbidity and premature mortality attributable to air pollution. Particles are a robust indicator of ambient air pollution because of their myriad sources and the contributions of sulfur and nitrogen oxides and organic compounds to secondary particle formation. Particles in outdoor air have numerous natural and man-made sources. The manmade sources are diverse and include power plants, industry, and motor vehicles, including diesel-powered vehicles that emit particles in the size range that penetrates into the lung. In areas where biomass fuels are used, the contributions of indoor combustion to outdoor air pollution may be substantial.
Particles in outdoor air span a wide range of sizes ( Fig. 6.2) and are highly diverse in composition and physical characteristics, including size as indicated by aerodynamic diameter. Thus, PM 2.5 includes those particles less than 2.5 μm in aerodynamic diameter, a size band that contains most man-made particles in outdoor air and also the particles of a size that can reach the smaller airways and air sacs of the lungs. The very small ultrafine particles, which include freshly generated combustion particles, are another set of particles of concern. Much research has been done on characteristics of particles that determine toxicity. Hypotheses on determinants of particle toxicity have focused on acidity, transition metals such as iron that can cause damaging injury, organic compounds, bioaerosols, and size; as a further complication, different characteristics could be relevant to different health outcomes. However, in spite of extensive toxicological and epidemiological research, the evidence is not yet sufficiently definitive to link particular characteristics to toxicity. Making such linkages would be helpful for targeting control approaches. There has been extensive epidemiological and toxicological investigation of the effects of particles on health since the air pollution disasters of mid-century. The toxicological studies have used approaches including exposing volunteers to generated particles or concentrated air particles, animal exposures, and diverse in vitro assays. This extensive body of evidence shows that particles are injurious and indicates mechanisms by which particles could cause adverse effects on the respiratory and cardiovascular systems. The epidemiological studies have grown in size and in the sophistication of their methodology. 
Recent studies involve large, national-level populations, such as all people in the United States who are 65 years and older, and estimation of exposures at all household addresses using models that incorporate available monitoring data, satellite information, and land-use data, such as on roadways and manufacturing. Table 6.2 lists the findings of the most recent comprehensive review of the evidence on airborne particles (the Integrated Science Assessment) by the US Environmental Protection Agency. Both short-term and long-term adverse effects were found to be caused by particulate matter. The most recent epidemiological studies continue to find associations of PM 2.5 with increased risk for mortality, even at contemporary concentrations in the United States. In a recent study utilizing the Medicare data for persons 65 years of age and older, over 60 million people, Di et al. showed increased mortality at annual averages below 12 μg/m 3 , the current annual standard in the United States (Di et al., 2017).
One analysis has suggested that reductions in particulate air pollution during the 1980s and 1990s have led to measurable improvements in life expectancy in the United States. The researchers estimated that for each 10 μg/m 3 reduction in air pollution over this period, the average gain in life expectancy was 0.61 years (about 7.3 months). The authors concluded that as much as 15% of the total life expectancy increase seen during this time in the United States was attributable to the air pollution reductions.
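The reported dose-response relationship can be illustrated with a small sketch. Note that the linear scaling below is an illustrative assumption, not the study's model, and the function name is ours: the study reported only the ~0.61-year gain per 10 μg/m 3 of PM 2.5 reduction.

```python
def life_expectancy_gain_years(pm25_reduction_ugm3, gain_per_10_ugm3=0.61):
    """Illustrative linear scaling of the reported gain in life expectancy.

    pm25_reduction_ugm3 -- assumed reduction in annual-average PM2.5 (ug/m3)
    gain_per_10_ugm3    -- reported gain (years) per 10 ug/m3 reduction
    """
    return gain_per_10_ugm3 * (pm25_reduction_ugm3 / 10.0)

# The 10 ug/m3 example from the text, expressed in months:
print(round(life_expectancy_gain_years(10.0) * 12, 1))  # -> 7.3
```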
This strong evidence on the adverse health effects of particulate matter has led to ever tighter ambient air quality standards (Table 6.1). Nonetheless, adverse health effects are still observed and much of the world's population is exposed to high concentrations at which adverse effects are certain.
Ozone
Ozone is a specific gas that has been studied for its toxicity using toxicological approaches. It is also used as an indicator of photochemical pollution, or "smog," which is the complex oxidant mixture produced by the action of sunlight on hydrocarbons and nitrogen oxides. Smog has become a worldwide problem as vehicle fleets have grown. The problem of tropospheric (ground-level) ozone pollution is distinct from the problem of depletion of the stratospheric (high-level) ozone layer.
Ozone, a highly reactive molecule, has been extensively investigated using toxicological approaches that have included exposures of human volunteers and short- and long-term exposures of animals (U.S. EPA, 2013). The human studies have involved exposures of volunteers, generally young and healthy, to concentrations of ozone found in urban areas in the United States and elsewhere. Collectively, the studies show that exposures of up to 6-8 h with intermittent exercise result in temporary drops in lung function and that some individuals have greater susceptibility to ozone. While the effects are transient, they are of sufficient magnitude in some people (loss of around 10% of function) to be considered adverse. In some of the studies, the lungs have been sampled and evidence of inflammation was found by measuring concentrations of molecules that reflect the tissue's response. In experimental animals, sustained low-level exposure damages the small airways and leads to early changes of COPD; thus, there is concern about permanent structural alteration in ozone-exposed populations. In human studies, asthmatics have not been shown to have increased susceptibility to ozone compared with non-asthmatics.
Epidemiological studies provide coherent evidence on the short-term effects of ozone on respiratory health. There is also evidence from daily time-series studies (studies examining day-to-day variations in death counts in relationship to variations in pollution levels) that ozone increases the risk for mortality. There is inconsistent evidence for cardiovascular effects and a just-completed exposure study of older persons with cardiovascular disease did not find adverse effects. Reflecting the evidence on short-term effects on lung function, standards for ozone concentrations are directed at brief time spans (Table 6.1). Given the range of susceptibility of the population, it is likely that feasibly achieved standards will not protect the full population from adverse respiratory effects.
Nitrogen Oxides
Gaseous nitrogen oxides are produced by combustion processes and also contribute to the formation of aerosols. Nitrogen dioxide (NO 2 ), an oxidant gas, is the indicator that is generally monitored. The principal source of NO 2 in outdoor air is motor vehicle emissions, and NO 2 is considered to be a useful indicator of traffic-related air pollution in urban environments. Power plants and industrial sources may also contribute. The health effects of NO 2 emitted into outdoor air probably come mainly from the formation of secondary pollutants, including ozone and particles. NO 2 , along with hydrocarbons, is an essential precursor of ozone and the nitrogen oxides also form acidic nitrate particles.
Nitrogen dioxide itself has been studied in animal models and in clinical studies. It can reach the small airways and air sacs of the lung because of its low solubility. The toxicological evidence at high exposures has raised concern that NO 2 exposure can impair lung defenses against infectious agents such as viruses and cause airway inflammation, thereby increasing the risk for respiratory infections. Supporting epidemiological research is lacking and population studies directed at NO 2 are complicated by its role in the formation of ozone and its presence in the complex mixture of traffic-related pollutants. Human exposure studies have been performed to investigate the immediate effects of NO 2 on persons with asthma. Nitrogen dioxide could plausibly increase airway responsiveness (the extent to which the airways constrict when irritated) by causing airway inflammation. The findings of the exposure studies have been inconsistent, but suggest that some people with asthma may be susceptible.
Carbon Monoxide
Carbon monoxide (CO) is an invisible gas formed by incomplete combustion. It is a prominent indoor pollutant with sources including biomass fuel combustion and space heating with fossil fuels. At high levels indoors, fatal CO poisoning may result. Outdoors, vehicle exhaust is the major source and concentrations are highly variable, reflecting vehicle density and traffic patterns. Urban locations with high traffic density ("hot spots") tend to have the highest concentrations. The toxicity of CO comes from its tight binding to hemoglobin, which carries oxygen in the blood, and the resulting reduction of oxygen delivery to tissues. Exposures to CO can be assessed by using the level of carboxyhemoglobin as a marker of exposure or by measuring the concentration of CO in the breath.
Because of the reduction of oxygen delivery, persons with cardiovascular disease are considered to be at greatest risk from CO exposure; research has focused on CO and adverse effects in this susceptible group. The research has used an approach referred to as a "clinical study"; volunteers with cardiovascular disease are exposed to levels of CO of interest and clinical measurements made, such as by taking an electrocardiogram. The evidence from such studies indicates that CO exposure leads to earlier evidence of myocardial ischemia (inadequate oxygenation of the heart) following exposure compared with unexposed controls. There are other potential susceptible groups: fetuses and persons with COPD may also be harmed by CO, and normal persons may have reduced oxygen uptake during exercise at low levels of CO exposure.
The exposure studies have provided robust evidence for standards for CO, which are based on brief time windows, reflective of the handling of CO in the body. In higher-income countries, outdoor levels of CO have fallen greatly over recent decades as controls have greatly reduced emissions (see Fig. 6.3). Nonetheless, CO may be a concern in some high-traffic locations. Less is known about CO exposure in middle-and low-income countries, where ambient CO may be added to indoor exposure from biomass fuel combustion.
Sulfur Oxides
Sulfur oxides are generated by combustion of fuels containing sulfur, such as coal, crude petroleum, and diesel, and by smelting operations. The water-soluble gas, sulfur dioxide or SO 2 , is the indicator that is generally monitored. However, other sulfur oxides are emitted and the sulfur oxides undergo transformation to form particulate sulfate compounds. Scientific research has been directed primarily at SO 2 , although epidemiological studies provide information on sulfur oxide exposure more generally. Sulfur dioxide is a reactive gas that is effectively scrubbed or cleaned from inhaled air in the upper airway. With exercise and a switch to oral breathing as ventilation increases, the inhaled dose of SO 2 increases and more reaches the lung.
Much of the evidence that has driven regulation comes from clinical studies that involve exposure of people with asthma and that show adverse effects without exposures to other pollutants. Asthmatics are particularly sensitive, with some asthmatics having more severe health responses at a particular concentration than others with asthma. With exercise and hyperventilation, some people with asthma respond with increased resistance of the lung to airflow with an associated drop in lung function and with respiratory symptoms. Such effects have been demonstrated at concentrations that might be reached in the United States in high-exposure situations and that may be common in some heavily industrialized countries. Epidemiological studies from Hong Kong examined the consequences of a major and rapid reduction in sulfur content in fuels. The investigators found an associated substantial reduction in health effects (childhood respiratory disease and all age mortality outcomes).
Lead
Although lead in gasoline is now phased out in almost all nations, exposure continues from industrial activities, such as smelting, and sometimes results in dangerous exposures for children. Exposure to lead may occur through inhalation and also ingestion in food and water, routes of exposure that have become the most important in high-income countries. A substantial body of epidemiological evidence links lead exposure of children to adverse neurodevelopmental effects, such as lowering the level of intelligence; as a result of that evidence, recommendations as to the acceptable level of lead for children have been lowered progressively. Lead has also been linked to higher blood pressure and cardiovascular disease and to low bone mineral density and osteoporosis.
The Disease Burden from Ambient Air Pollution
Given these adverse health effects and ubiquitous pollution of outdoor air, estimates of the burden of disease caused by air pollution are needed as a basis for priority setting and air quality management. Such estimates of disease burden can also prove useful for motivating action, as they have done recently in China. The conceptual basis for estimating disease burden draws on the conceptual framework of attributable risk, originally proposed in the early 1950s for smoking and lung cancer by Levin (Levin, 1953). He proposed a statistic, the population attributable risk (PAR), which incorporates the prevalence of the exposure of concern (P) and the relative risk (RR) for disease associated with that exposure, as follows: PAR = (P × [RR − 1]) / (1 + P × [RR − 1]). Thus, the PAR rises as either P or RR increases; in other words, the burden rises as more people are exposed or the risk is increased. For lung cancer, for example, in a population with 40% smokers and a lung cancer RR of 20, the PAR is 0.88, interpreted as 88% of the cases resulting from smoking. The PAR is interpreted relative to a comparison group with no or lower exposure; for smoking, for example, the comparison is a hypothetical world in which smoking never existed. For air pollution, estimates of the burden of attributable disease have been made by the World Health Organization and the Institute for Health Metrics and Evaluation (IHME) in the United States, which carries out the Global Burden of Diseases, Injuries, and Risk Factors Study (Cohen et al., 2017). A detailed analysis of the global burden of disease attributable to ambient air pollution was recently reported for the 25 years from 1990 to 2015. With regard to the exposure prevalence (P), PM 2.5 and ozone concentrations are estimated for smaller level spatial grids that capture variation in urban areas and then aggregated to provide national mean exposures.
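Levin's formula is simple enough to compute directly. A minimal sketch (the function name is ours, not Levin's) reproduces the smoking and lung cancer example from the text:

```python
def population_attributable_risk(p, rr):
    """Levin's PAR: p is exposure prevalence (0..1), rr is relative risk."""
    excess = p * (rr - 1.0)
    return excess / (1.0 + excess)

# 40% smokers with a lung cancer RR of 20 -> PAR of about 0.88,
# i.e., 88% of cases attributable to smoking
print(round(population_attributable_risk(0.40, 20.0), 2))  # -> 0.88
```

As the formula shows, PAR is 0 when either no one is exposed (p = 0) or the exposure carries no excess risk (rr = 1), and it approaches 1 as prevalence and relative risk grow.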
The IHME estimates for PM 2.5 include ischemic heart disease (IHD), cerebrovascular disease, lung cancer, chronic obstructive pulmonary disease (COPD), and lower respiratory infections (LRI), while only COPD is considered for ozone. The RRs for these diseases come from complex analyses of epidemiological data. The counterfactual values for burden estimation are derived from the lowest values considered in the epidemiological studies.
The latest estimates confirm that outdoor air pollution is a leading cause of premature mortality around the world, ranking at the fourth position in 1990 and the fifth in 2015. The total of deaths attributed to PM 2.5 globally in 2015 is 4.2 million distributed by cause as follows: IHD-1.52 million, cerebrovascular disease-898,000, lung cancer-283,000, COPD-864,000, and LRI-675,000. There is substantial geographic variation globally (Fig. 6.4) with China and India together accounting for more than half of the attributable burden of premature mortality. From 1990 to 2015, estimates have increased in some countries (e.g., India and China), as the population has grown and aged, and air pollution levels have risen. For ozone, the global mortality estimate is much smaller at 254,000.
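As a quick arithmetic check, the cause-specific figures quoted above (rounded to thousands, as in the text) do sum to the reported global total of about 4.2 million deaths attributed to PM 2.5 in 2015:

```python
# Deaths attributed to PM2.5 in 2015, in thousands (figures from the text)
deaths_by_cause = {
    "ischemic heart disease": 1520,
    "cerebrovascular disease": 898,
    "lung cancer": 283,
    "COPD": 864,
    "lower respiratory infections": 675,
}
total_millions = sum(deaths_by_cause.values()) / 1000.0
print(round(total_millions, 1))  # -> 4.2
```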
From a policy perspective, these estimates offer a strong rationale for air quality control and are cautionary in their implications. The estimates of exposure globally describe a stratified target (Fig. 6.4): the high-income countries that have lowered air pollution concentrations over the last half century and the numerous low- and middle-income countries where air pollution has worsened with industrialization and rapidly increasing vehicle numbers. Additionally, many of these countries still face the added challenge of household air pollution, also a leading cause of premature mortality.
Research Needs
There is a robust body of evidence on the health effects of ambient air pollution, which has both acute and chronic consequences. Given the underlying mechanisms of injury and commonalities among pollution sources, the collective evidence should have general applicability to people around the world. Nonetheless, many nations still lack basic monitoring and air quality management, even as pollution levels have risen in recent decades. In motivating action, locally generated data are likely to be more powerful than the external evidence, particularly if from high-income countries. At the least, monitoring data are needed for PM 2.5 and other pollutants if relevant to a particular place. Local research could use cross-sectional epidemiological designs, for example, of the respiratory health of school children, and time-series studies of morbidity (e.g., hospitalization counts) and mortality. The Health Effects Institute in the United States has supported time-series studies of air pollution in multiple Asian countries through its Public Health and Air Pollution in Asia (PAPA) Project (Public Health and Air pollution in Asia (PAPA), 2010). The Global Burden of Disease estimates provide a valuable starting point for priority setting around ambient air pollution as a public health issue. The Health Effects Institute has made this information available in a useful form through its State of Global Air/2019 (https://www.stateofglobalair.org/).
Summary and Conclusions
Ambient air pollution is a well-documented threat to global health. Through man's expanding use of fossil fuels for manufacturing, vehicles, and power generation, the numbers of people exposed to risky levels of air pollution have increased progressively; some are exposed at levels historically associated with evident excess mortality. Ambient air pollution now affects large swathes of the world, as contaminated air crosses national boundaries and moves globally. Of course, greenhouse gases represent another form of global pollution. The evidence is sufficiently certain and the burden of disease sufficiently large to motivate action at national and global levels. Evidence-driven regulations have had substantial impact on air quality in the United States and elsewhere, driving down levels of the most prominent pollutants (Fig. 6.3). Unfortunately, in many low- and middle-income countries, outdoor air pollution is a rising problem consequent to population growth, industrialization, and increasing numbers of vehicles-and, in too many countries, insufficient attention to air quality management and regulation. Control of greenhouse gases has the co-benefit of reducing ambient air pollution.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
THE SEMANTICS OF BIRD DENOMINATIONS IN THE MARI LANGUAGE
This paper presents an analysis of the semantics of bird denominations in Mari: an attempt is made to define the factors, or features, motivating bird denominations. Analysis is based on a set of words of inner origin that are part of the corpus of bird names compiled by the author. The results show that the ornithonymy of the Mari language, created over centuries, constitutes a well-shaped system. It reflects a variety of features associated with the appearance and way of life of the birds, the sounds they produce, etc. Many bird terms reflect features of appearance. It is interesting to note that the names of birds not seen for some reason may relate to the characteristics of the birds' voices. In some cases, terms are based on a combination of features. In dialects, different names for the same birds may occur, as observed in the sources.
Introduction
The vocabulary of a language undergoes changes over time, conditioned by the development of society. The semantic group uniting bird names is a relatively open structure; terms can disappear or become replaced by new ones. Research on bird terminology is of interest from both theoretical and practical points of view. The results of this analysis of bird names in Mari are relevant to research on the terminology of the language and can be useful for the description of bird denominations in other Uralic languages.
The theme of bird terminology has been treated by many authors; in some works, bird names used in Finno-Ugric languages have been analyzed (for example: Mäger 1967, Sokolov 1973, Jenő 1984, Sivula 1989, Bogár 2007). Research has been done on the semantic, etymological, linguistic-geographical and other aspects of the subject. The ornithonymy of Mari has been described by V. N. Vasil'jev (1984); nearly 300 terms have been analyzed by the researcher, mainly from the point of view of etymology.
The present article provides a semantic analysis of bird names in Mari, among which there are over 300 terms not treated before; the aim of the analysis is defining the features that underlie bird denominations.
Methodology of research
In current onomathology, thematic groups are distinguished in accordance with the principles of denomination, in other words, onomasiological models, or generalized aspects and features (like colour, action, origin, etc.) underlying the naming of homogeneous groups of objects (Gol'ev 1981).
Motivating factors in some cases are clear, as the semantics of the words is transparent. There are, however, bird names, the etymology of which is undefined; research is required to disclose it. The semantics and the structure of the lexical units are considered to be the fundamental aspects of denominations (Varina 1976: 242-243); for the purposes of description, sets of underlying features are established.
In other cases denomination features may be hidden, not distinct in the semantics of a word. For example, the feature "a bird's voice compared with an animal's voice" is not reflected in the semantics of the term jumyntaga (literally: «blessed + ram»), Russ. 'bekas', Lat. 'Gallinago gallinago', Engl. 'Snipe'.
It has been noted by some authors (see in Mäger 1967: 190) that relatively similar principles of bird denomination occurring in different languages eventually condition similarity in the semantics of the denominations themselves, cp. for example the terms for Lat. 'Botaurus stellaris', Engl. 'Bittern': Est. vee-, soo-, merehärg 'water, marsh, sea bull', Hung. nadibika 'reed bull', Germ. Wasserochs 'water bull', Russ. vodjanoj byk 'water bull', Mar. vüdüškyž 'water bull', Udm. vuoš 'water bull', etc. This circumstance has to be considered in the analysis of bird names that have been created through reproduction or analogy occurring due to cultural contacts between peoples.
There is a certain amount of similarity between the systems of denomination used in different languages; this is explained by the fact that ornithonyms convey information related to the natural biological features of birds. There is an opinion that the study of the principles and features of denomination cannot be reduced to a linguistic analysis, as they are directly conditioned by extra-linguistic factors. The motivating factors of denomination and the classifi cation of the features of denominations are directly connected with the characteristics of the realia (see: Gol'ev 1972).
It has been suggested that within each thematic group of bird names, distinction should be primarily made between the major groups of features (names relating to the appearance of the birds - group A, way of life - group B, bird sounds - group C). Within the major groups of features, sub-groups of specific features that are directly related to the characteristics of a bird are further distinguished (Gol'ev 1972).
Classification based on both direct and indirect ways of denomination has been used to describe ornithonyms in the Tatar language (Safina 2006: 62). A total of 25 groups have been identified in the vocabulary of bird terms in Tatar, which shows that bird denominations constitute a complex system. It can be mentioned that indirect denomination is not salient in Mari.
The classification suggested in a study dealing with the ornithonymy of Northern Russia (Lysova 2002) has been based on two principles of denomination, by which features possessed by objects and features related to objects are distinguished.
In the present work, methodology used in previous research on bird terminology in different languages is taken into account. The set of terms used in the analysis allows differentiating two groups of motivating features; within these, several sub-groups of features are specified.
Observations in the research were made of the terms used in Meadow Mari, Eastern Mari, and Hill Mari. Materials for analysis were selected from dictionaries and other sources, documents, as well as interviews with informants (mainly held in localities of the Republic of Bashkortostan).
Features possessed by an object as the basis of bird denominations
Among the features possessed by the objects, those of the voice, size, appearance (or a detail of it), conduct, and way of life of birds can be identified as motivating factors.
Bird denominations based on the feature "voice"
It has been noted in the literature that of all the features used for bird denominations, the bird's voice in its different manifestations is the most frequent. The author of research on bird terminology in Saami writes: "In all languages, one of the most important etymological types of bird names is onomatopoeia, as one of the most characteristic features of a bird is exactly its sound" (Bogár 2009: 9). An analysis of Finnish bird names based on onomatopoeia has shown that in the book "Linnut värikuvina" ("Coloured pictures of birds"), half of the 274 names signify bird sounds (Marttila 2010: 3).
An ornithonym can also convey a similarity between the sounds produced by two different birds or by a bird and an animal. For example, in saršüšpyk (literally: «golden oriole»), Russ. 'ivolga', Lat. 'Oreolus oreolus', Engl. 'Golden Oriole', the singing of the golden oriole is compared with the singing of the nightingale, which looks different; in jumyntaga (literally: "blessed lamb"), Russ. 'bekas', Lat. 'Gallinago gallinago', Engl. 'Snipe', the sounds produced by the bird when it flies are compared to those of a lamb. In another case, apšatkajyk (literally: "bird-blacksmith"), Russ. 'penočka-tenˊkovka', Lat. 'Phylloscopus collybita', Engl. 'Chiffchaff', the term is based on a similarity between the sounds produced by the bird and the beats produced by a blacksmith.
Fairly frequently, ornithonyms formed as compound words are onomatopoetic. It is interesting to note that the names of many species of birds with a prominent voice sound similar in Finno-Ugric (as well as other) languages. Such onomatopoetic names, being linguistic parallels, have a common source: the sounds produced by a given bird.
The name of the cuckoo can serve as an example: Lat. 'Cuculus canorus', Engl. 'Cuckoo' in several related languages: Finn. käki, Est. kägu, Hung. kakuk, Mar. kuku, Udm. kiky. Similar parallels can be observed in the case of the word crow: Lat. 'Corvus', Engl. 'Crow'; it can be noticed that the terms can be split into two subgroups, in accordance with the elements "kar-kor" and "var" occurring in the words: Finn. varis, Est. vares, Hung. varju, Md. varaka, Mar. korak, Udm. kuaka. In the Finno-Ugric languages, the name of the black grouse has a combination of consonants, tr, in its base form, which corresponds to the bird's voice (Mäger 1967: 63).
It can be assumed that bird names based on onomatopoeia go back to the time when hunting was the main resource for getting food. Precise reproduction of the birds' voices helped the hunters to get prey in abundance. In a talk, hunters referring to a bird may have used a word imitating its voice. With time, such indications of birds used in speech became transformed; however, the form has phonetically remained similar to the original one, due to the continuity of the relationship between the people and the birds; that is, the possibility of hearing the birds' voices ("ku-ku", "kar-kar") in everyday life has been lasting. (Annu Marttila states that in a study of the names of the cuckoo and of the words that describe the voices of the cuckoo, a sample of terms from 74% of the languages of "Eurasia" was found to contain 88% of onomatopoetic words; Marttila 2010: 3-4.)
Denominations based on factors related to the birds' way of life
Factors related to the birds' way of life used as the basis of denominations include food, distribution in nature, associations with natural and religious beliefs.
It should be mentioned that bird terms can also denote the capacity of a bird to consume large amounts of food. For example, in Mari the terms for the grey heron are kugylogar (kugu 'big' + logar 'throat') and podlogar (pod 'pot' + logar 'throat').
Conclusions
The semantic analysis of bird denominations shows that ornithonymy has been created over centuries. The range of motivating factors established in the analysis allows stating that the ornithonyms used in Mari constitute a well-defined system. Different groups of motivating factors have been used for bird denominations in the language; some of them prove to be more frequent than others. Appearance, displayed in many forms, has been identified as the factor most frequently occurring in the bird denominations of Mari. Colour, with reference to an object or parts of its body, details of the birds' appearance, and size constitute a set of features underlying bird denominations. Voice is often used as a motivating factor, especially in cases when the birds for some reason cannot be seen. For some terms, several features may be found to be reflected.
A name of a bird can be transferred to another bird; similarity between two birds displayed in their appearance, voice, or behaviour is in such cases an important factor.
The rich nomenclature of ornithonyms used in the language is a sign showing the people's subtle perception of nature and their creative mind expressed over time.
Age-dependent effects of homocysteine and dimethylarginines on cardiovascular mortality in claudicant patients with lower extremity arterial disease
The association among serum homocysteine (HCY), symmetric dimethylarginine (SDMA), and asymmetric dimethylarginine (ADMA) is of interest in endothelial dysfunction, although the underlying pathology is not fully elucidated. We investigated the relationship of HCY with SDMA and ADMA regarding their long-term outcome and the age dependency of HCY, SDMA, and ADMA values in claudicant patients with lower extremity arterial disease. 120 patients were included in a prospective observational study (observation time 7.96 ± 1.3 years) with cardiovascular mortality as the main outcome parameter. Patients with intermittent claudication prior to their first endovascular procedure were included. HCY, SDMA, and ADMA were measured by high-performance liquid chromatography. Cutoff values for HCY (≤/>15 µmol/l), SDMA (≤/>0.75 µmol/l), and ADMA (≤/>0.8 µmol/l) differed significantly regarding cardiovascular mortality (p < 0.001, p < 0.001, p = 0.017, respectively). Age correlated significantly with HCY (r = 0.393; p < 0.001), SDMA (r = 0.363; p < 0.001), and ADMA (r = 0.210; p = 0.021). HCY and SDMA (r = 0.295; p = 0.001) as well as SDMA and ADMA (r = 0.380; p < 0.001) correlated with each other, while HCY and ADMA did not correlate (r = 0.139; p = 0.130). Patients older than 65 years had higher values of HCY (p < 0.001) and SDMA (p = 0.01), but not of ADMA (p = 0.133). In multivariable linear regression, age was the only significant independent risk factor for cardiovascular death (beta coefficient 0.413; 95% CI 0.007–0.028; p = 0.001). Age correlated significantly with HCY, SDMA, and ADMA. However, only age was an independent predictor for cardiovascular death. Older patients have higher values of HCY and SDMA than younger subjects, suggesting age-adjusted cutoff values of HCY and SDMA due to strong age dependency.
Introduction
Lower extremity arterial disease (LEAD) refers to atherosclerotic stenosis or occlusions of the arteries of the lower extremities, which is a disease occurring preferentially in elderly persons. The presence of LEAD might be an indicator of generalized atherosclerosis and therefore poses a significantly increased risk for potentially fatal cardiovascular (CV) events [1]. Known susceptible risk factors are arterial hypertension, hypercholesterolemia, diabetes, and smoking, while the most non-susceptible risk factors are sex and age. Furthermore, there is a known association of these risk factors with endothelial dysfunction due to decreased release of endothelium-derived nitric oxide (NO). Endothelial dysfunction is an independent risk factor of CV morbidity and mortality [2].
The potential role of homocysteine (HCY) in the pathogenesis of CV diseases was postulated by McCully in 1969. He observed that patients suffering from rare gene defects which led to elevated HCY levels suffered from premature atherosclerosis as early as in their second or third decade of life [3]. At the beginning of the 1990s, the association of HCY and LEAD was more strongly emphasized than the one of HCY and coronary artery disease [4]. By now, the influence of HCY on endothelial function has been thoroughly researched, although the underlying pathology of the HCY-dependent endothelial dysfunction has not been completely explained yet.
In this context, the association of HCY with symmetric dimethylarginine (SDMA) and asymmetric dimethylarginine (ADMA) is particularly of interest. Contrary to ADMA, which directly inhibits the endothelial NO synthase (eNOS), SDMA competes with the intracellular absorption of the NO precursor arginine. This process results in an indirectly decreased NO production via an intracellular deficiency of arginine [5]. Both ADMA and SDMA might be directly associated with the occurrence of CV events [6]. HCY is seen as an independent CV biorisk factor as well, although it is unclear whether HCY itself leads to endothelial dysfunction. On one hand, a reduced endothelial-dependent NO liberation due to direct toxic effects on the endothelial cells as well as an inactivation of NO due to increased reactive oxygen species (ROS) production is assumed [7]. On the other hand, there is an assumption that SDMA and ADMA result in an inhibition of eNOS after their activation via HCY and cause an HCY-dependent endothelial dysfunction in this way [8,9].
The aim of the present study was to investigate the relationship between HCY, SDMA as well as ADMA in claudicant patients with LEAD regarding their long-term prognoses. As HCY metabolism clearly changes with aging, we investigated the age dependency of this relationship in claudicant patients [10][11][12].
Study design and patient collective
Between March 2002 and November 2004, a total of 120 consecutive patients were included in a prospective observational study with CV-related death as principal outcome parameter. The study included patients who presented at the outpatient clinic of the Division of Angiology in the Medical University of Graz because of intermittent claudication (Rutherford classification stage 2-3) and who had to undergo their first endovascular procedure of the pelvic and/ or femoropopliteal arteries due to a significant hemodynamic lesion in the respective arteries despite antiplatelet therapy. All patients were on antiplatelet therapy prior to endovascular intervention with either acetylsalicylic acid 100 mg or clopidogrel 75 mg per day. All patients underwent prior unsupervised exercise therapy, while none of the patients took folate or vitamin B12 supplements due to the influence of both vitamins on HCY metabolism. The patients' dietary intake was in accordance with an average Middle European diet. Antegrade or retrograde access via the common femoral artery was used in all interventions. Patients suffering from LEAD below the knee objectified by Doppler ultrasound of the arteries or magnetic resonance angiography were not included in the study, as intermittent claudication (Rutherford classification stage 2-3) is not an indication for endovascular recanalization in patients with LEAD below the knee [13]. The occurrences of a fatal stroke or fatal myocardial infarction were defined as CV death. Patients who were suffering from unstable angina pectoris or consequences of a stroke at the time of recruitment were excluded from our study. 
Other exclusion criteria were uncontrolled arterial hypertension (defined as a blood pressure above 180/120 mmHg at the time of study inclusion after 10 min in resting position), decompensated heart failure, life expectancy of less than a year, wound infections, vegetarians or vegans, and contraindications against anticoagulants and/or antiplatelet agents. All patients gave their written informed consent after being accurately informed about the clinical trial. The study was approved by the Institutional Review Board of the Medical University Graz, Austria (EK 23-038 ex 10/11).
Data collection
On the day of the endovascular intervention, the patients' baseline characteristics were determined. Subsequently, a total of four follow-up visits after 1, 3, 6 and 12 months were scheduled. At each study visit, the patients' concomitant medication and the occurrence of CV events were recorded. The final examination was conducted between October 2010 and May 2011. During the final examination, the occurrence of CV events was recorded. For this purpose, patients were invited to an outpatient examination/survey, in which they answered questions about their LEAD symptoms, medical history, and current medication. Using the same survey, patients who could not participate in the examination were interviewed on the telephone as an alternative means of data collection. If patients were deceased or not reachable by telephone, the primary care physician of the respective patients was contacted and informed about the study. This enabled the collection of the necessary data regarding mortality and the occurrence of CV events (CV death, stroke, myocardial infarction) as well as current medication. Finally, all medical files in all public Styrian hospitals including their emergency rooms and divisions of pathology were reviewed to complete data collection.
Biochemical analyses
At the baseline visit, fasting blood samples were obtained. The serum was centrifuged and stored at − 70 °C until further analysis of HCY, SDMA, and ADMA was performed in March 2011 by means of high-performance liquid chromatography with a solid phase extraction and precolumn derivatization technique which was first described by Teerlink with only minor modifications [14,15]. According to previous reports, the investigated biomarkers can be assumed as stable [16]. Within-day coefficients of variation for SDMA were 4.6% (0.60 µmol/L) and 1.9% (1.0 µmol/L), and between-day coefficients of variance were 9.8% (0.60 µmol/L) and 6.1% (1.0 µmol/L). Within-day coefficients of variation for ADMA were 3.1% (0.62 µmol/L) and 1.0% (2.0 µmol/L), and between-day coefficients of variance were 9.0% (0.62 µmol/L) and 2.2% (2.0 µmol/L).
Statistics
In case of continuous variables, patient characteristics were given as means (± standard deviation). Median and interquartile range were used to express skewed data. Categorical variables were represented by frequency and percentages. Normal distribution was examined via the Kolmogorov-Smirnov and Shapiro-Wilk tests. The two-sided t test was used for the comparison of groups in case of parametrical distribution. For non-parametrical data, a Mann-Whitney U test was utilized. Qualitative variables were compared using a χ² and Fisher's exact test. Optimal cutoff values for HCY, SDMA, and ADMA as potential predictors of subsequent cardiovascular death were evaluated by receiver operating characteristic (ROC) analyses. We applied log-rank statistics and assessed survival utilizing Kaplan-Meier curves. The Jonckheere-Terpstra test was used for trend statistics. Correlations between metrical variables were expressed by Pearson's correlation coefficients. Variables were also assessed as predictors of all-cause mortality in multivariate Cox proportional regression analyses. We assumed statistical significance when the P value was < 0.05. Statistical analyses were executed via SPSS version 20.0.
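The ROC-based cutoff search can be illustrated with Youden's J statistic (sensitivity + specificity - 1), a common criterion for picking an optimal threshold. This is a hypothetical pure-Python sketch on made-up data, not the SPSS procedure the authors actually ran:

```python
def youden_cutoff(values, events):
    """Return the biomarker cutoff maximising Youden's J = sens + spec - 1.

    values : biomarker concentrations (e.g. HCY in umol/l)
    events : 1 if cardiovascular death occurred during follow-up, else 0
    """
    pairs = sorted(zip(values, events))
    pos = sum(e for _, e in pairs)      # patients with the event
    neg = len(pairs) - pos              # survivors
    best_cut, best_j = None, -1.0
    for cut, _ in pairs:                # candidate thresholds: observed values
        tp = sum(1 for v, e in pairs if v > cut and e == 1)
        tn = sum(1 for v, e in pairs if v <= cut and e == 0)
        j = tp / pos + tn / neg - 1     # sensitivity + specificity - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Fabricated example: deceased patients cluster at higher biomarker values.
cut, j = youden_cutoff([10, 11, 12, 13, 20, 21, 22, 23],
                       [0,  0,  0,  0,  1,  1,  1,  1])
```

On real data the best J is well below 1 and the chosen cutoff should be reported with its sensitivity and specificity, as in Table 2.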
To assess the age dependency of concentrations of HCY, ADMA, and SDMA, the cohort was split at the age of 65 years, because 65 years is the conventional threshold at which people are commonly assumed to be "old". Between-group differences were assessed using analysis of variance.
Results
A total of 120 patients were included in the analysis. Cardiovascular deaths were recorded during a mean follow-up of 7.96 (± 1.3) years. Patients' baseline characteristics are shown in Table 1.
At the beginning of our analysis, age was divided into tertiles. These tertiles were compared with respect to CV death. As expected, a significant difference between the three groups was observed (Fig. 1). Subsequently, the optimal cutoff value for HCY, SDMA, and ADMA between surviving and deceased patients was determined utilizing ROC curves. The value for HCY was 15 µmol/l, for SDMA 0.75 µmol/l, and for ADMA 0.8 µmol/l (Table 2).
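The survival curves behind this tertile comparison are Kaplan-Meier product-limit estimates; a minimal sketch on invented follow-up data, not a reproduction of the study's patient records:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times  : follow-up time for each patient (e.g. years)
    events : 1 = cardiovascular death observed, 0 = censored
    Returns (time, survival probability) pairs at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, _ in data if tt == t)  # deaths + censored at t
        if deaths:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
        i += removed
    return curve

# Invented data: deaths at years 1, 2 and 4, one censoring at year 3.
curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

In the study's setting, each age tertile would get its own curve, and a log-rank test would compare them, as in Fig. 1.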
In the next step, the groups were compared with the established cutoff values. Age was significantly different for all three variables (HCY p < 0.001, SDMA p < 0.001, ADMA p = 0.017, respectively) (Fig. 2).
Due to this age correlation, we analyzed the age dependency of HCY, SDMA, and ADMA concentrations by splitting the cohort into two subgroups. In subjects aged below 65 years (n = 54, mean age 56.8 ± 6.79 years), the concentrations of HCY and SDMA were significantly lower than in those over 65 years (n = 66, mean age 74.44 ± 5.37 years) with median HCY 12. (Fig. 4).
Finally, in adjusted cox proportional regression analyses including the variables age, eGFR, HCY, SDMA, and ADMA, age was independently predictive of all-cause death, when comparing the highest with the lowest tertile (HR with 95% confidence interval: 5.140 (1.191-22.187), p = 0.028).
Discussion
HCY is a chemical intermediate for the metabolism of sulfurous amino acids. In case of a reaction between methionine and adenosine triphosphate, HCY is created after methylation. Methionine is supplied by the intake of food, in particular by animal products such as meat and cheese. Usually, HCY is remethylized into methionine with the help of enzymes, or metabolized into cysteine and glutathione [17]. A great variety of factors for the development of hyperhomocysteinemia have been discussed. Next to genetic enzyme defects in HCY metabolism, hyperhomocysteinemia is also in inverse correlation to the vitamin status of folic acid, vitamin B6, and vitamin B12. Chronic alcohol consumption and gastrointestinal pathologies with impaired absorption and malnutrition can lead to hyperhomocysteinemia as a result of functional deficiencies. High coffee intake was also associated with elevated HCY levels [18]. Conversely, inhibition of the proliferation of smooth muscle cells as well as a reduction of endothelial cell destruction as a result of lowering HCY levels, for instance via folic acid substitution, were reported [19,20]. Numerous studies have determined HCY as independent cardiovascular biorisk factor. The most significant role in this context seems to be played by primarily the HCY-dependent endothelial dysfunction [21]. It is, however, unclear whether HCY directly leads to endothelial dysfunction, or if the HCY-activated dimethylarginines are responsible for it. The present study could prove a significant correlation between HCY and SDMA, but not between HCY and ADMA. On the other hand, the dimethylarginines SDMA and ADMA were significantly correlated to each other. Therefore, an HCY-dependent endothelial dysfunction via dimethylarginines appears at least partially plausible. Other data support the result of our study because dimethylarginines have already been attributed with a certain quantification of the HCY-dependent endothelial dysfunction [8,9]. 
The HCY-dependent endothelial dysfunction seems to be a multifactorial process, nonetheless.
SDMA, as a structural isomer of ADMA, is eliminated by the kidneys and was considered to be biochemically inactive. SDMA does not only seem to correlate with renal function, but also appears to have a close relationship with the expression of atherosclerotic lesions. Therefore, SDMA is also considered an independent factor associated with the occurrence of cardiovascular end points such as stroke, myocardial infarction, and CV death [22,23]. Furthermore, data associating SDMA more strongly with CV events than ADMA have been published [24]. The most plausible underlying pathology in this context seems to be the influence of SDMA on the store-operated calcium channels located on the endothelial cells after activation by HCY, as an activation of these channels by SDMA results in an increase of oxidative stress [8]. The effects of oxidative stress consequently lead to an increase in the expression of redox-sensitive genes, which constitutes a deciding step of early atherogenesis [25,26]. We could neither observe a significant association between HCY and ADMA nor a significant difference in ADMA values between older and younger patients, although older claudicant patients have significantly higher values of HCY (p < 0.001) and SDMA (p = 0.01) than younger patients. A possible explanation for these findings could potentially be found in the enzyme dimethylarginine dimethylaminohydrolase, which does not influence SDMA at all [27]. Dimethylarginine dimethylaminohydrolase is primarily responsible for an HCY-induced inhibition of the ADMA/eNOS/NO pathway in endothelial cells, but does not exhibit any significant changes with increasing age, and in the presence of ischemia in particular [28,29]. As outlined above, the present study significantly associated HCY, SDMA, and ADMA with the occurrence of CV death.
After correction of the variables in a logistic regression model, the variable age was identified as the only significant risk factor, and age correlated significantly with HCY, SDMA, and ADMA (r = 0.393; r = 0.363; r = 0.210, respectively). Therefore, increasing age significantly mitigated the prognostic importance of HCY as well as of the dimethylarginines as prognostic markers for long-term observations in our work. A strong influence of the variable age on these parameters via multivariate analyses could already be shown in multiple prior studies [10,30]. Therefore, the present study only assumed an influence on the results, which finally remains to be proven. It is possible that the prognostic effect of HCY, SDMA, and ADMA is overestimated in long-term observations, especially as the strong association of these variables with CV events could also be explained by their significant age dependency. Therefore, we suggest that age-adjusted cutoff values for HCY and SDMA may estimate the risk for CV death more appropriately. A limitation of our study is its very selective patient cohort, which investigated mortality specifically for claudicant patients with Rutherford classification 2-3.
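The r values quoted throughout are ordinary Pearson product-moment coefficients; for reference, a self-contained computation (the sample data here are illustrative, not the study's measurements):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5  # sqrt of sum of squared deviations
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Perfectly linear made-up samples give r = 1 (or -1 for a falling trend).
r_pos = pearson_r([1, 2, 3], [2, 4, 6])
r_neg = pearson_r([1, 2, 3], [6, 4, 2])
```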
On the other hand, our overall patient cohort was rather homogeneous regarding dietary intake and prior exercise therapy. Therefore, these parameters should affect the HCY metabolism in a less distinctive manner in our study, and the HCY metabolism seems to depend more on age than on other conditions. Nevertheless, further evaluation with larger cohort studies including patients with other types of LEAD is necessary to clarify the age dependency.
Conclusions
In summary, the present study could prove a significant association between HCY and SDMA, but not between HCY and ADMA. Consequently, HCY-dependent endothelial dysfunction seems to be caused at least partially by dimethylarginines. Due to the distinct age dependency of HCY and SDMA in the present cohort, age-adjusted cutoff values for these parameters may be more appropriate as independent predictors of CV death.
Acknowledgements Open access funding provided by Medical University of Graz. We thank Anna Klambauer for proofreading and editorial assistance. All authors have read the journal's authorship agreement.
Funding This research received no specific grant from any funding agency.
Compliance with ethical standards
Conflict of interest All authors declare that no conflicts of interest exist.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Financing Decisions of Property Enterprises under Deflation—A Case Study of 7 Hong Kong Stock Companies
At present, the real estate industry, as a pillar industry that bears on the national economy and people's livelihood, has a major impact on national economic development. In recent years, economic development has been affected in many ways, and refinancing in the real estate industry has drawn particular attention, so focused research on the financing decisions of the real estate industry is necessary to safeguard its sustainable and healthy development. Scholars have conducted many studies on the financing decisions of real estate enterprises in different periods, but no targeted guiding theory has yet formed for financing decisions under the specific background of deflation; it is therefore of theoretical significance to predict later financing strategies effectively through data analysis and mining. This paper collects the financial data of seven property companies over the past ten years and applies the DCF valuation method and the perpetual growth model to forecast each enterprise's value over the next five years. It is found that the calculated future values differ greatly because the operating conditions of the enterprises differ. By comparing changes in future value under typical WACC levels, it is concluded that companies should focus on the impact of WACC on future value. Based on the valuation results, combined with national policy and the actual situation of different enterprises, this paper analyses the financing decisions of real estate enterprises in light of recent changes in the industry's financing structure and puts forward scientific suggestions for financing decisions.
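The valuation machinery referred to here, an explicit-horizon discounted cash flow plus a Gordon perpetual-growth terminal value, both discounted at the WACC, can be sketched in a few lines. This is a generic illustration under assumed inputs, not the authors' model or data for the seven companies:

```python
def enterprise_value(fcfs, wacc, g):
    """Explicit-horizon DCF plus a Gordon-growth terminal value.

    fcfs : forecast free cash flows for the explicit horizon (e.g. 5 years)
    wacc : weighted average cost of capital, as a decimal
    g    : perpetual growth rate; must stay below wacc for the model to converge
    """
    assert g < wacc, "perpetual growth must stay below the discount rate"
    pv_explicit = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcfs, start=1))
    terminal = fcfs[-1] * (1 + g) / (wacc - g)   # value at the end of the horizon
    return pv_explicit + terminal / (1 + wacc) ** len(fcfs)

# Illustrative only: with identical cash flows, a higher WACC yields a lower
# estimated value, which is the WACC sensitivity the paper emphasises.
ev_8  = enterprise_value([100, 105, 110, 116, 122], wacc=0.08, g=0.02)
ev_12 = enterprise_value([100, 105, 110, 116, 122], wacc=0.12, g=0.02)
```

Because the terminal value dominates the total, small shifts in WACC or g move the estimate substantially, which is why different operating conditions and capital structures produce the very different valuations reported.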
Introduction
In recent years China has been in a transitional period from high-speed to high-quality development, compounded by the impact of the epidemic; the economic growth rate has declined and residents' willingness to purchase homes has fallen, leading to an overall slump on the demand side of the real estate market. The country has successively introduced a series of policies and regulations governing the housing land-use system, the real estate financial system and industry development policy. Because of the industry's own characteristics, most real estate enterprises operate with debt. National tightening of financing policies has made financing more difficult for real estate enterprises; when market demand or financing conditions deteriorate, the capital chain can easily rupture, creating the risk of a debt crisis. Most real estate enterprises therefore need to seek new financing channels under the predicament of tight capital and debt pressure. In 2022, Qiu and Li took the debt crisis of Evergrande Group as an example to study the debt risk of real estate enterprises. Their study shows that, owing to national policy, highly indebted enterprises such as Evergrande Group find it difficult to obtain support from commercial loans, and when large long-term debts mature they face a crisis of being unable to repay [1]. In the current ever-changing policy environment, real estate enterprises need the ability to respond flexibly to market challenges and make scientific financing decisions. In 2015, Ouji introduced the financing methods of China's real estate enterprises in the new period, the problems that occur in corporate financing, and an innovative analysis of financing decisions under the new situation [2]. In 2017, Lin and Zeng analysed, from the perspective of financing constraints and resource allocation, the transmission effect on real estate enterprises, showing that real estate price fluctuations are transmitted to the financing and investment behaviour of China's real estate enterprises [3]. In 2019, Zhang et al. analysed the effect of credit crunches and equity-financing regulations on capital-structure adjustments by Chinese listed real estate companies between 2001 and 2012 [4]. In 2022, Zhao analysed the financing status quo of the real estate industry under the new situation and proposed corresponding strategies for innovative financing modes [5]. In 2023, Yang put forward response strategies for improving capital-turnover efficiency and enhancing enterprises' investment and financing management from three aspects: investment decision-making mechanisms, financing channels, and the integration of investment and financing management [6]. The financing of real estate has always been of great concern. These studies aim to enhance the financing efficiency and risk-management ability of real estate enterprises and provide useful ideas and suggestions for policy makers, enterprise managers and market participants. In recent years, with policy changes in the real estate industry, domestic and foreign scholars have conducted extensive research on real estate financing. However, there are still gaps in research on property financing in one specific situation: a deflationary social context. How real estate enterprises make reasonable and correct financing decisions under deflation has positive practical significance for their future development.
Using company valuation methodology, this paper takes listed real estate companies as an example. Based on the financial data of seven Hong Kong real estate companies, valuation models are used to calculate each company's future value over the next five years; combined with the requirements of national strategic development and factors such as diversified or specialised investment decisions, a scientific financing decision is then proposed.
Research Method
According to the data and variables required for this study, the main inputs are the financial data (net profit after tax, depreciation and amortisation) of seven real estate enterprises, including Poly Property Group, from 2013 to 2023. Based on the 2023 measurement of real estate developers' comprehensive strength, and excluding enterprises with long stock suspensions, abnormal or incomplete financial disclosure, or major restructuring during 2013-2023, seven enterprises ranked in the top 20 by comprehensive strength were finally selected: Poly Property Group, R&F Group, China Resources Group, Longfor Group, Shimao Group, Sino-Ocean Group and China Jinmao. Based on the financial data of the past ten years, each company is valued for the next five years. As leading companies in the real estate industry, these seven firms can be regarded as nationally representative samples reflecting the overall development trend of China's real estate industry, which makes them suitable for studying corporate financing decisions in the context of deflation. We choose the discounted cash flow model (DCF valuation method) and the perpetual growth model, which are frequently used in practice, to predict future cash flows and calculate the future value of each enterprise. Starting from the actual profitability of the enterprise and combining the concept of the time value of money, we assess the future value created by the enterprise and then derive the corresponding financing decisions.
Importance of the Future Value of the Enterprise
Future value is an important indicator for projecting future cash flows. Generally speaking, it represents the present value of future cash flows, with the growth rate taken as a general perpetuity rate. Calculating the future value of an enterprise converts future cash flows into present value and helps predict the direction of the market accurately. By estimating the economic benefits the property enterprise may generate over the next five years, managers can assess the enterprise's economic value scientifically and systematically, and can then carry out sound financial planning and risk management, which guides the enterprise's present financing decisions.
Calculation of Future Value
There are usually two methods for calculating the future value of an enterprise: the perpetuity method and the multiplier method. In enterprise valuation, the discounted value of perpetual cash flows accounts for about 50 per cent of the overall discounted value, so the perpetual growth model and the determination of its parameters have a significant impact on the valuation outcome. The core of the perpetual growth method is estimating the present value of future cash flows on the basis of a stable, perpetual growth rate, thereby determining the value of the enterprise or its assets. In this paper the perpetuity method is chosen to calculate the future value of each enterprise. In 2019, Li and Wang analysed the perpetual growth model, identified the main factors driving cash-flow growth, and offered suggestions on how an enterprise should choose a perpetual valuation model and determine the perpetual-period growth rate [7]. In 2020, Li supported, with extensive data, the reasonableness of applying a perpetual growth rate in the financial and value assessment of enterprises [8]. Under the perpetual growth model, three inputs are needed to calculate future value: perpetual cash flow, the weighted average cost of capital (WACC) and the perpetual growth rate (g). The WACC is a price measure of how expensive it is for a firm to raise capital. It is based on the firm's capital structure and is derived as the weighted average of the cost of debt and the cost of equity. The cost of debt reflects the interest the firm pays on borrowed funds, while the cost of equity reflects the return investors expect on their equity investments. WACC thus reflects both the opportunity cost of the risk investors bear in investing in a company and the return on investment expected by all investors, and it plays an important role in evaluating the day-to-day operations of a company [9]. This paper uses ten years of annual-report data for the seven companies, applies the framework of the H-model two-stage discounted cash flow theory, performs quantitative analysis of the various indicators with relevant software, and infers each company's operating status and intrinsic value over the next five years from the calculated results.
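The weighted-average definition just given can be written down in a few lines. The sketch below uses the standard after-tax form of WACC; all the numeric inputs are hypothetical, chosen only to illustrate the mechanics, since the paper does not disclose the per-company capital-structure figures.

```python
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate=0.0):
    """Weighted average cost of capital: the cost of each funding source
    weighted by its share of total capital.  The (1 - tax_rate) factor is
    the usual tax shield on interest; set tax_rate=0 for the pre-tax form."""
    total = equity + debt
    return (equity / total) * cost_of_equity \
         + (debt / total) * cost_of_debt * (1.0 - tax_rate)

# Hypothetical capital structure: 60% equity at a 10% cost of equity,
# 40% debt at a 5% cost of debt, 25% corporate tax rate.
w = wacc(equity=600, debt=400, cost_of_equity=0.10,
         cost_of_debt=0.05, tax_rate=0.25)
print(round(w, 4))  # 0.075
```

A firm with more (cheaper, tax-shielded) debt shows a lower WACC in this formula, which is one reason highly leveraged developers can look attractive until refinancing conditions tighten.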
Empirical Analysis
Based on ten years' financial data of real estate enterprises, this paper calculates the growth rate of free cash flow of enterprises, and averages the growth rate of seven enterprises, to get the average growth rate of 0.207869.And then we can calculate the free cash flow of enterprises at the end of the period, and get the future value of enterprises according to the model of perpetual growth.As shown in Figure 1, the future value is calculated for the seven companies for the next five years when the value of WACC is 5%.As the trend of the projected changes in the future values of China Resources Group, Sino-Ocean Group, Longfor Group, Poly Property Group and China Jinmao is increases year-on-year, which is also in line with the current actual operating conditions.These five companies have been operating well for the last ten years.While R&F Group and Shimao Group are expected to have a negative future value of the company in 2024-2028.According to the data released by the Paper, the current capital flow of R&F Group is not optimistic, and R&F Group has 30,727,400 yuan of overdue commercial paper, and the total amount of the executed amount is more than 8.2 billion [10].The research in China Real Estate in 2018 (Mid-Decade) shows that R&F Group has been adopting a more aggressive land bank strategy, accumulating a large number of third and fourth tier land bank, and is facing a certain amount of sales pressure, with a serious weakening of the ability to return funds [11].In 2020, Zhang analysed the debt situation of R&F Group, and found that in 2017 R&F Group acquired 77 hotel assets, and until 2019, hotel business of R&F Group has been in the red for seven consecutive years, dragging down the cash flow of the enterprise, The current forecast of the company's future value is negative [12].Shimao Group's debt ratio has continued to increase in recent years, and its capital situation is not optimistic.According to the data released by Eastmoney, as of the end of 
2022, Shimao Group's solvency continues to weaken, with total liabilities of 536.7 billion yuan, and at the end of June 2023, Shimao Group's net gearing ratio is about 372.5% [13].Therefore, when the company's future value is negative, it indicates that the enterprise's operating condition is poor and the risk of bankruptcy increases.Some scholars based on the theory of enterprise development life cycle, believe that the vast majority of enterprises in negative growth will go from prosperity to decline or even rapid demise in a few years, so they believe that the perpetual growth rate is not applicable to enterprises with negative growth and the use of perpetual growth model is not scientific.Li Xuehua through the study shows that the long-term development of enterprise value by the perpetual growth rate (g), return on investment (ROI) and the weighted average cost of capital (WACC), confirming the perpetual growth rate in theory and practice of the scientific nature, cannot be based on the special case of certain enterprises to deny the use of perpetual growth model value [8].Thus, when real estate firms get into trouble with debt, it can usually be traced to a range of problems, including difficulties in specific business segments, aggressive strategies and management.Real estate firms with negative future values may have encountered problems in certain business segments, putting greater pressure on their funds due to operating losses in some projects, poor investment decisions and declining market demand, etc.Real estate enterprise may have adopted aggressive strategies such as high leverage, over-expansion, etc.While these aggressive strategies can bring high returns in the short term, they are also accompanied by high risks.The quality of corporate management may also affect the future value.Inappropriate management decisions and resource allocation may lead to a firm falling into debt default.Therefore, Real estate enterprises should conduct effective risk 
management, formulate and implement scientific corporate strategies to enhance the future value of their businesses.
According to the formula of the perpetual growth model, future value = free cash flow at the end of the period * (1 + perpetual growth rate) / (WACC - perpetual growth rate); when the weighted average cost of capital (WACC) changes, the enterprise's future value also changes greatly, and the two move inversely. As shown in Figure 2, the future values of the seven firms for the next five years are calculated with the average growth rate fixed and WACC set to 15 per cent. WACC largely reflects a firm's overall cost of capital: according to the formula, as WACC increases, the future value of the firm decreases, which may increase the company's financial risk and reduce the compensation received by investors. With WACC at 15%, the projected future values of all seven companies are not optimistic, and the economic condition of the whole real estate industry is correspondingly negative. In 2000, Chen pointed out that the root cause of China's second type of deflation is structural contradiction; the real estate industry, as a pillar of the national economy, also adds deflationary pressure to the Chinese economy when GDP declines [14]. Listed companies should therefore find a reasonable mode of operation that keeps WACC as low as possible within the safe range of their debt risk, so as to maximise shareholder value.
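The perpetual growth formula above is easy to check numerically. The sketch below implements it directly; the cash-flow and rate values are hypothetical, and the model is only well defined when WACC exceeds the perpetual growth rate g (otherwise the denominator is zero or negative).

```python
def future_value(fcf_end, g, wacc):
    """Perpetual growth (Gordon) model:
    FV = FCF_end * (1 + g) / (WACC - g).  Requires wacc > g."""
    if wacc <= g:
        raise ValueError("perpetual growth model needs WACC > g")
    return fcf_end * (1.0 + g) / (wacc - g)

# Hypothetical end-of-period free cash flow of 100 with g = 2%:
fv_low  = future_value(100.0, 0.02, 0.05)   # WACC =  5%
fv_high = future_value(100.0, 0.02, 0.15)   # WACC = 15%
print(round(fv_low, 1), round(fv_high, 1))  # 3400.0 784.6
assert fv_high < fv_low   # higher WACC -> lower future value
```

Raising WACC from 5% to 15% cuts the computed future value by roughly a factor of four in this example, which is the sensitivity the comparison of Figures 1 and 2 illustrates.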
In 2023, Zhang analysed the current state of the real estate market and industry in four aspects (project uncertainty, irrational capital-cost structure, low capital turnover and policy risk) in the context of current macro-policy regulation [15]. In recent years China's urbanisation process has slowed and the population growth rate has been low, leading to oversupply in the real estate market.
As shown in Figure 3, the decline in the growth rate of real estate investment and development means that the real estate market has undergone certain adjustments, and it also signals greater pressure on corporate development and investment. In recent years the state has introduced a series of policies to rectify and regulate the real estate industry. Because real estate investment grew too fast, property vacancy rates are too high, property prices rose too quickly and fundraising depends overly on bank credit, financial risk in the industry is high, and in a deflationary cycle the industry's financial weaknesses are easily exposed [16]. How real estate enterprises make scientific financing decisions in the context of an era of deflation is therefore particularly important for each enterprise and for the industry as a whole. Given the value and characteristics of financing management in real estate enterprises, and the practical problems of limited financing channels, weak capital-chain management and impacts from the external environment, real estate enterprises should take scientific and reasonable measures to deal with financing problems.
Enhancement of the Cash-to-Short-Debt Ratio
Enterprise financing comprises debt financing and equity financing; in general, the channels are bank loans, stock financing, bond financing and so on. Xu pointed out that the scale of equity financing in China is relatively small, and that a financial structure dominated by indirect credit financing is an important root cause of enterprises' high debt ratios [17]. The state released the "three red lines" policy for real estate on 1 January 2021: a real estate enterprise's gearing ratio excluding advance receipts shall not exceed 70%; its net debt ratio shall not exceed 100%; and its cash-to-short-debt ratio shall not be less than 1. After the announcement of the "three red lines" indicators, real estate enterprises began to reduce their own debt ratios, but in recent years a number of enterprises have still crossed the lines and fallen into debt default. Research shows that a cash-to-short-debt ratio above 1 (i.e., above 100%) is considered safe; when the ratio is below 1, the company's solvency and cash-flow position are considered poor. In 2020, R&F Group's cash-to-short-debt ratio was 0.64, below 1, and Shimao Group's debt ratio has continued to rise; as of the end of June 2023, Shimao Group's cash-to-short-debt ratio was 0.03, and its capital position is not optimistic [18]. Property enterprises should therefore steadily improve their cash-to-short-debt ratios; in particular, enterprises with negative projected future value or debt defaults should raise this ratio to reach a safe state.
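The three thresholds can be encoded as a simple screen. The function below is a sketch: the threshold values come from the policy as stated in the text, Shimao's net gearing (about 372.5%) and cash-to-short-debt ratio (0.03) come from the figures quoted above, while the gearing-excluding-advance-receipts input is a placeholder, since the text does not report it.

```python
def red_lines_crossed(gearing_ex_advance, net_debt_ratio, cash_to_short_debt):
    """Return the list of 'three red lines' a developer crosses:
    (1) gearing ratio excluding advance receipts > 70%,
    (2) net debt ratio > 100%,
    (3) cash-to-short-debt ratio < 1."""
    crossed = []
    if gearing_ex_advance > 0.70:
        crossed.append("gearing > 70%")
    if net_debt_ratio > 1.00:
        crossed.append("net debt ratio > 100%")
    if cash_to_short_debt < 1.0:
        crossed.append("cash-to-short-debt < 1")
    return crossed

# Shimao Group, end of June 2023 (gearing input is a placeholder):
print(red_lines_crossed(0.75, 3.725, 0.03))
# ['gearing > 70%', 'net debt ratio > 100%', 'cash-to-short-debt < 1']
```

Under the actual policy, the number of lines crossed determines how fast a developer may grow its interest-bearing debt, which is why enterprises rushed to deleverage after the announcement.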
Adoption of Diversified Financing Methods
According to the 2022 annual results of R&F Group and Shimao Group, bank loans account for more than 40% of corporate liabilities in both companies, and an excessive proportion of debt financing leads to poor liquidity and reduced resilience to risk. The financing structure of enterprises is usually divided into endogenous financing and external financing, and property companies should choose the appropriate financing method according to their own situation. In 2023, Stokes and Cox calculated the maximum loan amount for commercial property financing from the lending cap rate, based on the property valuation [19]. Eng Teck Yong suggested management strategies for the real estate industry and real estate companies over the next decade by analysing the current state of the industry and of companies' funding [20]. Dr. G. Rajendran's paper discusses the profit and risk issues of financing portfolios, highlighting the importance of liquid assets in financing [21].
Taking the seven Hong Kong-listed companies selected in this paper as an example, most liabilities of real estate enterprises are long-term debts, accounting for up to 50%. Given the industry's high-debt mode of operation and its characteristically large cash flows, long-term debt increases the pressure on the company's future profitability and carries greater financial risk. Coupled with national tightening of financing policies, it can easily cause the industry's credit ratings to decline, triggering the risk of a debt crisis. Real estate, as a traditional cyclical industry, uses debt financing as its main channel, and the industry's over-reliance on bank loans weakens enterprises' debt-servicing ability, capital liquidity and profitability. The debt ratio of the real estate industry was 62.9% in 2007 and had risen to 79.2% by 2021; China's real estate industry operates in a high-debt mode supported by bank loans. When market policy changes, enterprises with high leverage, high debt and high turnover find it difficult to adjust their business strategy in time according to national policy, which brings risks to the overall operation of real estate enterprises. Real estate enterprises should therefore change the high-leverage development mode, adjust their financing structure and reduce the proportion of debt financing. They can take stakes in more secure companies, invest more dynamically in other fields, reduce WACC through new capital, and gradually resolve their high indebtedness. They should also innovate financing methods and actively expand diversified channels such as real estate trusts, bonds and funds, to enhance liquidity and reduce debt risk.
Keeping up with National Policies and Optimising Business Segments
Against the background of deflation, the purchasing power of the currency increases, aggravating the debt burden of enterprises. According to the future-value results, real estate enterprises should actively follow national policy changes, adjust their financing methods and financing scale in good time, broaden communication channels with the capital market, and diversify financial risks through innovative financial instruments. Enterprises should not pursue unrealistic expansion or introduce large amounts of external capital that reduce the stability of their funds. Real estate enterprises can reduce information asymmetry with their creditors by establishing mechanisms for external creditors to participate in corporate governance, thereby optimising governance and stimulating innovation. Currently, metropolitan areas and city clusters have become the mainstream trend of China's urban development and offer significant opportunities for the healthy development of the real estate industry. Real estate enterprises should therefore follow the layout of urban development, dynamically optimise their land-bank structure in line with the urbanisation process and land-bank development policies, adhere to city-specific, precisely targeted policies ("one city, one policy"), and pursue regional diversification and strategic development in search of new growth points in performance.
Conclusion
This paper uses the DCF valuation method and the perpetual growth model to value property companies in the context of deflation. The perpetual growth model shows that WACC is inversely related to the future value of the enterprise; based on the valuation results, and within the tolerance of their debt risk, property enterprises should make scientific financing decisions that reduce the weighted average cost of capital (WACC). Under deflation, the debt burden of enterprises increases and the liquidity and profitability of funds worsen, so real estate enterprises need dynamic financing methods all the more. This research to some extent bridges the gap in the valuation of real estate enterprises in the specific context of deflation; by calculating future value over the next five years, it predicts enterprise value and proposes innovative solutions for the relevant financing decisions. However, a review of the financing structure of the real estate industry in recent years and of the research on real estate financing shows that most of the relevant literature takes listed companies as samples, so its findings are of limited significance for guiding the whole industry. Future research could include small and medium-sized enterprises in the industry for a more holistic analysis. For listed companies, the stock price also largely represents company value, so assessing both the stock value and the future value could value enterprises more accurately. To address the shortcomings of this study, two future directions are proposed: (1) expand the number of property companies selected from the real estate industry and enlarge the sample for a more precise study; (2) since the stock market is a key component of the financial market, assess the value of real estate stocks and predict enterprises' future development trends and financial status through fundamental analysis. The selection of valuation methods can be further refined along these lines to facilitate in-depth research on this topic.
According to the data released by the National Bureau of Statistics (NBS), the growth rate of national real estate development investment showed a downward trend from 2013 to 2023, and by 2022 national real estate development investment had turned negative. The contraction of real estate development is influenced by a number of factors.
Figure 3: 2013-2023 changes in the growth rate of national property development investment.
Simple analysis of scattering data with Ornstein-Zernike equation
In this paper we propose and explore a method of analysing scattering experimental data for uniform liquid-like systems. In our pragmatic approach we do not try to introduce by hand an artificial small parameter in order to work out a perturbation theory with respect to known results, e.g., for hard spheres or sticky hard spheres (all the more so since, in agreement with the well-known Landau statement, there is no physical small parameter for liquids). Instead, guided by the experimental data, we solve the Ornstein-Zernike equation with a trial (variational) form of the inter-particle interaction potential. To find all needed correlation functions, this variational input is iterated numerically until it satisfies the Ornstein-Zernike equation supplemented by a closure relation. We illustrate how the approach works with a number of model and real experimental examples of X-ray and neutron scattering data.
I. INTRODUCTION
Liquids (not only so-called complex liquids or colloidal suspensions, but also quite ordinary simple organic and inorganic ones) are not structureless, uniform media at every scale. Knowledge of their structure is essential for understanding the underlying physics and chemistry, and it allows a directed, rather than blind, search for new "smart" materials with properties required for various applications. Nowadays the topic is a multidisciplinary area including many basic scientific problems in physics, chemistry and biology, as well as applications. Recently there has been a substantial evolution in our understanding of the structures and phase transitions in liquids, as illustrated by the continuously growing number of exciting new publications (some of which are cited below). The progress, as always, is driven not only by the development of new experimental techniques but also by theoretical advances, promising potential applications, and related fundamental scientific problems.
Experimentally, complete and detailed structural information is obtained by a number of scattering methods (X-rays, neutrons, light). Results are summarised in many reviews and monographs, e.g., in the multiple editions of the well-known book by J. P. Hansen and I. R. McDonald [1], which also contains numerous relevant references. However, liquids, like any object studied in physics, have, so to speak, two faces: first, the experimental data mentioned above, and second, their theoretical description. Modern powerful computers and software make it possible to perform large-scale simulations of molecular liquids or colloidal dispersions. Thus there are high-precision scattering data and high-accuracy simulations, and it is tempting to think that nothing else is needed in the field. Unfortunately, this is not entirely true. The point is that the full set of parameters determining the experimentally measured scattering intensity I (or the related static structure factor S(q)) and the parameters needed to perform numerical simulations are not exactly the same, and are, moreover, only barely known. Readers can find many original publications, reviews and monographs on the theory and simulation side (to mention a few, see [2], [3], [4]). One of the main difficulties in comparing the results of large-scale simulations with specific experimental measurements is establishing an accurate connection between the experimental control parameters and the theoretical variables needed for the simulations. The actual values of the parameters are determined by the microscopic interactions, which are not well known. This might seem unimportant, because in simulations the level of detail is much greater than can be obtained experimentally, but large-scale simulations have another disadvantage: they do not tell us which elements of the interactions are most essential for a given system's behaviour. It should be a cause for general embarrassment in the field that there are still no answers to even the most basic questions on the structural and thermodynamic properties of liquids.
To overcome this mutual uncertainty between the scattering data and the simulations, and to relate the data to physical system characteristics, we need a theoretical guideline. Here we face another difficulty. A theory in the rigorous meaning of the word (see, e.g., [5]) cannot be developed for molecular or colloidal liquids, since there is no small parameter. To find a way out of this impasse we propose a pragmatic approach. The rigorous theoretical view is certainly correct; however, a heuristic theoretical approach, combined with experimental input and common physical wisdom, provides a useful tool to describe experimental data with a few phenomenological parameters. Moreover, the theoretical results obtained can also be used for new predictions. We introduce effective (model) inter-particle potentials, which we regard as experimentally determined. To test such a combined (theory-experiment) approach, we need a standard reference system. For colloidal dispersions, this reference system is a dispersion of hard spheres. Indeed, the structural and even dynamic properties of liquids are dominated by the repulsive molecular cores. This deceptively simple van der Waals observation leads to the idea that hard spheres are a suitable basic model of the liquid state.
The liquid structure factor, which can be determined from the measured scattering intensity, is the Fourier transform of the pair density correlation function (see details in the next section). The density correlation function satisfies the formally exact Ornstein-Zernike (OZ) equation. Unfortunately, the equation is not in a closed form, because it contains two unknown functions, and to solve it one has to add a closure relation. It turns out that the structure factor of the hard-sphere liquid can be accurately calculated using the Percus-Yevick (PY) closure relation. Unfortunately, the remaining (with respect to hard spheres) interactions may not always be treated as a perturbation. Then one has to rely on different methods (see, e.g., [6], [7], [8], [9]). In this work we solve the OZ equation for a smooth combination of the correlation functions (see details in the next section) and introduce an effective inter-particle interaction potential. To find the other correlation functions, this input is iterated numerically until it satisfies the OZ equation supplied with a closure relation [1]. We test the approach by investigating a number of model and real experimental examples of X-ray and neutron scattering data.
Common experience in data fitting shows that for short-range interactions between the particles, the PY closure relation gives a very reasonable description of the data. For longer-ranged interactions a better description is obtained with the so-called hypernetted-chain (HNC) closure relation [1]. Variation of the external conditions and material parameters results in changes in the physical properties of the system. The static structure factor S(q) is the main quantity one needs to analyze experimental data and to confront the data with theory. The OZ equation with the PY closure can be solved analytically only for hard spheres [10], [11] or hard spheres with a very short-range attraction (the sticky hard sphere model) [12]; more recent experimental and theoretical advances and improvements can be found in [13], [14]. The first example (hard spheres) is very important but too oversimplified to describe real molecular liquids and colloidal dispersions. As for the second model (sticky hard spheres), its applicability is strongly limited [15]. In fact this is a general drawback of the standard approach: the interpretation of the scattering data and the obtained values of the parameters are model dependent and rely heavily on the assumptions used in the data analysis. Although the OZ integral equation can be solved by iterations [1], no method of solution that guarantees convergence and stability in the general case has been proposed (see the documentation of the SASFIT software [16]). Stability of the algorithm can be achieved if one can guess in advance the form of the solution with several adjustable parameters, which is not a trivial task.
Thus the desire to understand the physical characteristics of a system behind its structure factor, which is the main aim of this paper, is hardly surprising. The plan of our paper is as follows. In the next section II we describe the main steps of our approach. We analyze the scattering data in the framework of the OZ equation. Our method can be applied both to molecular liquids and to colloidal dispersions, although for the latter it works only if the polydispersity is not too high. Then in section III we illustrate how our approach works and present a number of model results (with a known form of the structure factor) and experimental results, which we analyze by our method. Finally, in section IV we summarize the main steps of our approach and the results of the work.
II. THEORY
Consider a suspension of monodisperse hard spheres of diameter σ. The volume fraction occupied by the spheres is φ = πσ³n/6, where n is their average concentration. The pair correlation function is defined as g(r) = 1 + h(r) = ⟨n(r)n(0)⟩/n² − δ(r)/n, and two other correlation functions are useful for describing the scattering data: the total correlation function h(r) and the direct correlation function c(r) = −δ²F/(T δn(r)δn(0)). The functions h(r) and c(r) enter the exact OZ equation.
The static structure factor S(q) is determined as S(q) = 1 + n h̃(q), where h̃(q) is the Fourier transform of h(r). The OZ equation can be rewritten in the Fourier representation as h̃(q) = c̃(q) + n c̃(q) h̃(q), and to solve this equation it is necessary to add a closure relation. The two most popular (and, as it turned out, physically justified) closure relations are the PY closure, c(r) = [1 − e^{βV(r)}] g(r), where V(r) is the interaction potential between particles and β = (k_B T)^{−1}, and the HNC closure, g(r) = exp[−βV(r) + γ(r)], where γ(r) ≡ h(r) − c(r). If the form of the inter-particle potential V(r) is known, the closure relation allows one to solve the OZ equation and then to compute S(q). Unfortunately this is almost never the case, and one has either to guess V(r) or to try to find some insight by fitting the scattering data. This looks like a vicious circle (because to fit the data we need to solve the OZ and closure equations, which is impossible without knowledge of the potential). Luckily the situation is not so hopeless, and both tasks (namely, to compute S(q) and to guess the form of V(r)) can be accomplished simultaneously by iterations. We first choose (guided by physical arguments) a model potential, then compute S(q), compare the results to the experimental data, and repeat the procedure until the agreement with the experimental data becomes satisfactory. What is important for our approach is that it is sufficient to know the experimental data in a relatively narrow range of scattering wave vectors (around the first peak in I(q)) to compute the static structure factor in a much broader range of q.
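As a concrete illustration (our own sketch, not the authors' code), the two closures can be written as one-line functions of γ(r) = h(r) − c(r) and the dimensionless potential βV(r); both reduce to the same Mayer-function expression in the dilute limit γ → 0:

```python
import numpy as np

def c_py(gamma, beta_v):
    """Percus-Yevick closure: c(r) = (1 + gamma(r)) * (exp(-beta*V(r)) - 1)."""
    return (1.0 + gamma) * (np.exp(-beta_v) - 1.0)

def c_hnc(gamma, beta_v):
    """Hypernetted-chain closure: c(r) = exp(-beta*V(r) + gamma(r)) - 1 - gamma(r)."""
    return np.exp(-beta_v + gamma) - 1.0 - gamma
```

For γ = 0 both closures give c = e^{−βV} − 1, the dilute-limit result; they differ only at finite density, through the treatment of γ.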
Usually it is supposed that the interaction potential is very large and repulsive for small inter-particle distances r < σ (where σ stands for the effective size of the particle hard core), V(r) = V ≫ k_B T, and vanishes outside the interaction region, V(r) = 0 for r > R_int. It is easy to see that the PY and HNC closure relations then imply g(r) ≈ 0 inside the hard core. However, what is directly measured in any scattering experiment is not the static structure factor. The measured quantity is the scattering intensity I(q, n) (where, as before, q is the scattering wave vector and n is the average particle concentration). For a very dilute dispersion, when n = n_dil is small, n_dil σ³ ≪ 1, I(q, n_dil) is the scattering intensity from a single particle, traditionally termed the particle form factor. For molecular liquids, or for colloidal dispersions with relatively small polydispersity (a narrow particle size distribution function), the static structure factor can be determined as S(q) = n_dil I(q, n)/(n I(q, n_dil)). Unfortunately it is impossible to calculate the correlation functions h(r), c(r) accurately in r-space by an inverse Fourier transformation of the static structure factor, because S(q) decreases too slowly, usually as q^{−1}. Luckily, for the function γ(r) the situation is much better: it can be obtained by a Fourier transformation. The reason is that γ(r) is a smooth function, unlike the total and direct correlation functions, and its Fourier transform decreases fast, e.g., for hard spheres like 1/q³. The required range of wave vectors in the experimentally measured scattering intensity depends on the interaction potential. For hard spheres at not too high particle volume fractions (say, φ ≤ 0.4), it is sufficient to know the scattering intensity for q < 10/σ. Then the function γ(r) can be calculated directly from the static structure factor and the exact OZ equation (without explicit use of any closure relation):

γ̃(q) = [S(q) − 1]² / [n S(q)].    (7)

Eq. (7) is the main message of our work. Of course, to compute c(r), g(r), and h(r) separately one has to supplement the OZ equation with one or another closure relation.
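Eq. (7) follows algebraically from the Fourier-space OZ relation and S(q) = 1 + n h̃(q); the identity can be verified numerically with any smooth model for c̃(q) (the Gaussian below is an arbitrary illustrative choice, not taken from the paper):

```python
import numpy as np

n = 0.5                                   # arbitrary model density
q = np.linspace(0.01, 10.0, 500)
c_hat = 0.8 * np.exp(-q**2)               # smooth model direct correlation function
h_hat = c_hat / (1.0 - n * c_hat)         # OZ equation in Fourier space
S = 1.0 + n * h_hat                       # static structure factor
gamma_hat = (S - 1.0)**2 / (n * S)        # Eq. (7): computable from S(q) alone
assert np.allclose(gamma_hat, h_hat - c_hat)   # equals the transform of h - c
```

Because γ̃(q) ∝ (S − 1)², it decays twice as fast as S(q) − 1 at large q, which is why the inverse transform of γ̃ is numerically well behaved while those of h̃ and c̃ are not.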
The equation for the direct correlation function for the PY closure reads c(r) = [1 + γ(r)][e^{−βV(r)} − 1], and for the HNC closure it is c(r) = exp[−βV(r) + γ(r)] − 1 − γ(r). Technically, to solve the OZ equation we designed a simple numerical method, similar to that proposed by Gillan [17]. The original method [17] is based on a discrete form of the equations and a Fourier transform of the correlation functions (instead of direct calculation of the integral in (1)). In our code we solve the discretized equations

c̃_j = (4π h_r / q_j) Σ_{i=1}^{N} c_i r_i sin(q_j r_i),
γ_i = [h_q / (2π² r_i)] Σ_{j=1}^{N} γ̃_j q_j sin(q_j r_i),

where r_i = i h_r, q_j = j h_q, h_q = π/(N h_r), and h_r and h_q are the steps in r and q space, respectively. Fitting the results obtained from equations (7)-(10) to the experimentally determined static structure factor, we are able also to determine the interaction potential, calculate the density correlation function g(r) and its main characteristic features, and compute as well some thermodynamic properties of the system, e.g., the pressure. It is worth citing here the work [18], where the inverse problem (finding the interaction potential from the scattering data) has also been discussed.
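A minimal self-contained sketch of such a scheme (our own illustration, not the authors' code) for hard spheres with the PY closure, using plain Picard mixing and discrete sine transforms on the conjugate grids r_i = i h_r, q_j = j h_q:

```python
import numpy as np

def solve_oz_py_hard_spheres(phi, N=1024, h_r=0.02, mix=0.3, tol=1e-8, max_iter=5000):
    """Picard iteration of the OZ equation with the PY closure for hard spheres
    of unit diameter (sigma = 1) at packing fraction phi. Illustrative sketch."""
    n = 6.0 * phi / np.pi                    # number density
    r = h_r * np.arange(1, N + 1)            # r_i = i * h_r
    h_q = np.pi / (N * h_r)                  # conjugate grid step
    q = h_q * np.arange(1, N + 1)            # q_j = j * h_q
    sin_qr = np.sin(np.outer(q, r))          # sin(q_j r_i); symmetric on this grid
    core = r < 1.0                           # inside the hard core exp(-beta V) = 0
    gamma = np.zeros(N)
    for _ in range(max_iter):
        # PY closure: c = (1 + gamma)(e^{-beta V} - 1) -> -(1 + gamma) in the core
        c = np.where(core, -(1.0 + gamma), 0.0)
        # forward transform: c~_j = (4 pi h_r / q_j) sum_i c_i r_i sin(q_j r_i)
        ct = 4.0 * np.pi * h_r / q * (sin_qr @ (c * r))
        # OZ in Fourier space: gamma~ = n c~^2 / (1 - n c~)
        gt = n * ct**2 / (1.0 - n * ct)
        # inverse transform: gamma_i = (h_q / 2 pi^2 r_i) sum_j gamma~_j q_j sin(q_j r_i)
        gamma_new = h_q / (2.0 * np.pi**2 * r) * (sin_qr @ (gt * q))
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = mix * gamma_new + (1.0 - mix) * gamma
    c = np.where(core, -(1.0 + gamma), 0.0)
    ct = 4.0 * np.pi * h_r / q * (sin_qr @ (c * r))
    S = 1.0 / (1.0 - n * ct)                 # structure factor
    g = np.where(core, 0.0, 1.0 + gamma)     # PY: g = e^{-beta V} (1 + gamma)
    return r, g, q, S
```

At moderate densities (φ ≈ 0.2) this converges in a few hundred iterations, and the low-q limit can be checked against the analytic PY compressibility result S(0) = (1 − φ)⁴/(1 + 2φ)².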
III. ANALYSIS OF EXPERIMENT
As the first test of our approach we treat hard-sphere model data obtained from the exact solution of the OZ and PY equations. This is our input "experimental" data. The test also allows us to estimate the accuracy of our computation due to the limited range of available wave vectors and the finite precision of the discretized Fourier transform. In the test (with lengths measured in units of σ) we take the hard-sphere volume fraction φ = 0.17 and 200 points from the data set of the exact solution for the structure factor in the dimensionless range qσ = 1-11. To analyze these "experimental" data by our method, we should first find the effective interaction potential. A suitable choice of the potential is a guarantee of the efficiency, accuracy, and fast convergence of the procedure. For the hard-sphere data, the natural choice is the hard-sphere potential supplemented by correction terms (Eq. (11); x here is r/σ). Fitting four adjustable parameters V_a, V_r, κ_a and κ_r, we estimate the corrections to the hard-sphere potential to be smaller than βV(r) < 0.05. The calculated structure factor deviates from its "experimental" value by less than 0.07%.
If we take the Lennard-Jones potential without the hard-core part, then fitting to the "experimental" data (the exact OZ and PY equation solution for hard spheres) gives V_a = 0.4, V_r = 1.26, and the deviation of the calculated structure factor from the "experimental" data is about 0.7%, i.e., 10 times worse than for the potential (11). Somewhat larger (but still not too bad) differences between the two potentials occur if we compare the computed and "experimental" pair correlation functions (see Fig. 1). In turn, with the pair correlation function in hand, we can find such a physically relevant quantity as the average coordination number N,

N = 4πn ∫ g(r) r² dr.

Here the integral is taken over the relatively narrow region of r around the main peak of the correlation function at r = σ. For the potential (11), N ≃ 3.97, whereas for the Lennard-Jones potential N ≃ 3.89. We conclude from these two purely methodical (but instructive) examples that both model effective potentials provide fairly good (although not ideal) descriptions of the data. The accuracy of the computed integral characteristics (like the average coordination number N) is less impressive, about 2%. If we were dealing with real experimental data (with finite systematic errors and noise), the accuracy of our method could be considered very satisfactory.
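The coordination-number integral is straightforward to evaluate from tabulated g(r); the helper below (our own illustration) integrates over a user-chosen window around the first peak:

```python
import numpy as np

def coordination_number(r, g, n, r_lo, r_hi):
    """N = 4*pi*n * integral_{r_lo}^{r_hi} g(r) r^2 dr over the first peak of g(r)."""
    mask = (r >= r_lo) & (r <= r_hi)
    return 4.0 * np.pi * n * np.trapz(g[mask] * r[mask]**2, r[mask])
```

For a uniform g(r) ≡ 1 the integral reduces to the number of particles in the shell, 4πn(r_hi³ − r_lo³)/3, which is a convenient sanity check.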
Let us now move to real experimental data. We take the scattering intensity data from the work [19] for polymethylmethacrylate (PMMA) spheres. Using our methodology and the depletion interaction potential induced by the polystyrene globules dispersed in the solution [20], we fitted all the experimental data presented in [19] with the new numerical procedure described above (which is simpler and faster than the one used in [19]). One more advantage of the new approach is that it is flexible and can be adapted to a rather wide range of interaction potentials. As an illustration, in Fig. 2 we plot the static structure factor calculated by the method of [19] (black squares) and by our new method (red circles). The presented data correspond to a volume fraction of PMMA particles φ = 0.2 and a polyethylene glycol concentration c_p = 23 mg/L. The radius of the particles was fixed at the experimental value, and the interaction potential is the depletion potential. It is also worth noting that the standard method [19] gives the value V_a1 = −3.3 kT for the potential amplitude, while the method of this work leads to V_a2 = −3.55 kT.
Our next example is related to neutron scattering data on liquid krypton [21]. We reproduce the scattering intensity data taken from [21] in Fig. 3. The data cover a large interval of scattering wave vectors, qσ ∼ 1-40 (where σ is estimated from the position of the main peak in the static structure factor). The results of fitting the Lennard-Jones potential to the experimental data are presented in Fig. 3 by solid lines. The values of the parameters determined from this fitting are presented in Table 1. Four parameters were used as adjustable: the scaling factor for the wave vector σ, the density n, and the amplitudes V_a and V_r of the Lennard-Jones potential. On physical grounds (or even common wisdom), one would expect the Lennard-Jones interaction potential to provide an adequate description of krypton. Surprisingly enough (see Fig. 3), this is not the case (as was mentioned in [21] and as we have confirmed by our own computation). Something is evidently wrong. In our opinion, the catch is in the small-q region. As is well known (see, e.g., [1]), there is the exact thermodynamic relation

S(q = 0) = n k_B T κ_T,    (14)

where κ_T is the isothermal compressibility. We take the values of the parameters entering (14) from [21] and the handbook [22] and calculate the structure factor at q = 0 for a few temperature points along the liquid-gas coexistence line: S(q = 0, T = 133 K) ≈ 0.076, S(q = 0, T = 153 K) ≈ 0.13, S(q = 0, T = 183 K) ≈ 0.459.
With these essentially exact values for S(q = 0) in hand, one has to perform the fitting of the scattering data over the broad range of wave vectors. We present the results of fitting our model with the Lennard-Jones inter-particle potential; the magnitudes of all the required and computed parameters are given in Table 1. To find all three correlation functions we utilize the PY closure relation, but fits of similar quality can also be obtained with the HNC closure relation. As already noted, a separate important task is to find (estimate) the interaction potential. We plot the result obtained by our method in Fig. 4. The curves in Fig. 4 present the potential divided by k_B T. All three curves can be rescaled and collapsed onto a single universal (master) curve; the result of this procedure is presented in Fig. 5.
IV. CONCLUSION AND PERSPECTIVES
In recent years there has been an upsurge of interest in structural investigations of various colloidal suspensions and molecular liquids (see, e.g., [23], [24] and references therein), although the problem itself is anything but new. The van der Waals theory is the cornerstone of the current understanding of fluid structure and phase behavior. Although modern experimental and numerical methods of structure investigation can provide very detailed information about many features and properties of the liquid state, there is no simple way to relate the two, because the data depend on a number of material and microscopic parameters that are only barely known. To gain such insight one has to rely on the traditional methods of statistical physics: namely, with the fluid-state structure in mind, it is necessary to solve the exact OZ equation. Unfortunately, here we face a problem, because the equation is not in a closed form: it contains two unknown functions. In the general case (molecular liquids, or not very dilute colloidal dispersions) it is not possible to derive the needed closure relations by a regular theoretical method. The most popular closure relations (PY and HNC) are basically a sort of self-consistent extrapolation from the very dilute dispersion limit. The PY closure allows one to find the exact analytical solution of the OZ equation for a dispersion of hard spheres. Moreover, the PY closure leads to a reasonably good description of the experimental data for dispersions reasonably close to hard spheres. It is then tempting to try to describe experimental data for a broader class of dispersions using a trial PY or HNC ansatz for the correlation functions and computing corrections to the ansatz perturbatively to fit the data. Unfortunately, such a perturbative approach does not always work and, besides, leads to rather inefficient and slow computations. There is no way around this difficulty by improving the theory, all the more so because that is impossible for a system without any small parameter. As one can expect, this intermediate range of parameters is the most difficult to treat theoretically, let alone analytically.
Instead, we propose a simple working instrument combining the theoretical solution of the OZ equation for the function γ(r) with the experimentally measured scattering intensity. It is based on the observation that the smooth function γ(r) ≡ h(r) − c(r) can be determined directly from the experimental data for the structure factor (scattering intensity data for the system under consideration and for its dilute state). Our method then enables one to compute the correlation functions and the inter-particle potential by using the OZ equation with a closure relation.
Fig. 3: Structure factor of liquid krypton at liquid-gas coexistence. The squares present the data for the temperature 133 K, circles for 153 K and triangles for 183 K. Solid lines present the result of the fitting.
Fig. 4: Lennard-Jones interaction potential for liquid krypton. The black line corresponds to 133 K, the red line to 153 K and the blue line to 183 K.
Fig. 5: Interaction potential for liquid krypton, rescaled to the temperature 153 K. Lines are the same as in Fig. 4.
Table 1
Fig. 2: Static structure factor for PMMA spheres with attraction caused by the depletion potential. Squares present the results obtained in [19] and circles represent the results of the current approach.
Alumina–MWCNT composites: microstructural characterization and mechanical properties
ABSTRACT In the present work, Al2O3–multiwalled carbon nanotube (MWCNT) composites have been developed by both conventional sintering and spark plasma sintering (SPS) and their microstructures, mechanical properties and wear behavior have been investigated. Further, the influence of various other parameters such as the sintering time, sintering temperature, MWCNT loading level and processing technique adopted for development of the composites has also been analyzed. The powder metallurgy route was selected for development of Al2O3–0.2, 0.5, 0.8, 3, 5 vol% MWCNT composites using both conventional sintering and SPS. For conventionally sintered Al2O3–MWCNT composites, it has been found that both the hardness and relative density of the composites decreased up to a loading level of 0.2 vol% of MWCNTs, followed by a continuous increase with the addition of MWCNTs to the Al2O3 matrix, attaining a maximum value in the case of Al2O3–3 vol% MWCNT composite. The wear behavior of conventionally sintered composites also exhibits significant improvement with increase in sintering time. The SPSed Al2O3–MWCNT composites show a much higher relative density and better mechanical and tribological properties as compared to conventionally sintered Al2O3–MWCNT composites.
Introduction
The substantial progress in ceramic-based nanocomposites (CMNCs) is playing a vital role in broadening the range of areas in which ceramics can be applied. Alumina (Al2O3)-based composites are potential engineering materials possessing superior mechanical as well as tribological properties. Monolithic ceramics suffer from inherent brittleness, poor creep resistance and low fracture toughness. A considerable number of attempts have been made over the last few decades to develop ceramic-matrix composites (CMCs) with better mechanical properties than monolithic ceramics [1,2]. CMCs have low weight, high hardness, and superior thermal and chemical resistance, and they have emerged in recent years as an attractive choice for a wide range of applications. Since Niihara introduced the concept of nanocomposites in 1991 [3], the addition of nanofillers as a reinforcement phase has become one of the most promising methods of improving the mechanical properties of CMNCs. Carbon nanotubes (CNTs) have emerged as potentially attractive nanofillers for CMNCs. In the present work, multiwalled carbon nanotubes (MWCNTs) have been used as reinforcement for the development of Al2O3-MWCNT composites. The density of MWCNTs is ~2.6 g/cc, and their specific surface area lies in the range of 200-400 m²/g. The tensile strength of MWCNTs ranges between 10 and 60 GPa, whereas their modulus lies in the range of 0.3-1 TPa. They have a high thermal conductivity of 3000 W/m K and exceptional electrical conductivity in the range of 10⁶-10⁷ S/m [4]. Al2O3 is one of the most commonly used ceramic materials due to its extremely high hardness (~15-22 GPa), high oxidation resistance and good chemical stability. Among the various engineering ceramics, Al2O3 is one of the most cost-effective and economically viable materials. Al2O3 possesses an extremely high melting point of ~2071°C and its density lies in the range of 3.75-3.95 g/cc.
Apart from these attributes, its bulk modulus is ~324 GPa, its Young's modulus is ~413 GPa and its compressive strength lies in the range of 2000-4000 MPa. The fracture toughness of Al2O3 is ~5 MPa√m and its coefficient of thermal expansion is 10.9 × 10⁻⁶/K [5]. Although Al2O3 has several excellent functional properties, its applications are limited by its low fracture toughness. Significant efforts have been made to improve the fracture toughness of Al2O3 by the addition of nanofillers or the use of new sintering processes such as spark plasma sintering (SPS) [6,7]. Despite the tremendous efforts exerted in the area of composites, however, uniform dispersion of the nanofillers in the ceramic matrix is still a key challenge. The difficulties associated with the homogeneous distribution of nanofillers and the reproducible development of materials possessing enhanced mechanical properties could be considered the major hindrances in the area of nanocomposites [8,9]. Renewed interest in CMCs was observed with the discovery and commercial availability of carbonaceous nanofillers such as CNTs and graphene [10,11]. The outstanding functional characteristics and exceptional mechanical properties of CNTs make them an attractive choice as nano-reinforcements to improve the fracture toughness of brittle ceramics.
To date, extensive research has been conducted to enhance the fracture toughness of ceramic-based materials. Many atomistic simulations have predicted that CNTs are capable of enduring significant tensile and compressive forces prior to failure due to their unique morphology and considerable flexibility. The addition of CNTs as nanofillers can not only enhance the hardness and strength of the composites but can also enhance their wear resistance. Due to their closed tubular structures, CNTs form a weak interaction at the matrix interface during the wear process [12]. However, CNT-reinforced CMNCs developed to date have exhibited much lower mechanical performance than expected. This might primarily be attributable to the agglomeration of CNTs and weak interfacial bonding between the nanotubes and the matrix. The effectiveness of CNTs in reducing the wear rate and providing a stable coefficient of friction has been validated experimentally under different loading conditions for carbon-reinforced composites [13,14]. Studies on Al2O3-CNT composites describing the effects of CNTs on their mechanical characteristics and electrical performance have been conducted earlier [15,16], and the effects of CNT addition on the tribological properties have also been reported [17]. Significant enhancement of wear resistance with the addition of CNTs has been observed. However, a detailed understanding of the tribological behavior of Al2O3-CNT composites will require further research. This paper reports the influence of MWCNT addition on properties such as the density, hardness, fracture toughness and wear behavior of both conventionally sintered and SPSed Al2O3-MWCNT composites.
Synthesis of MWCNTs
For the fabrication of the various Al2O3-MWCNT composites, a low-pressure chemical vapor deposition (LPCVD) technique was used to synthesize MWCNTs. Due to high van der Waals interactions and a high aspect ratio, achieving uniform dispersion of MWCNTs in ceramic matrices presents major difficulties that can lead to MWCNT agglomeration. To combat this issue, surface modification of the MWCNTs was achieved through acid functionalization. The synthesized MWCNTs were treated with strong oxidizing agents such as H2SO4 and HNO3 in order to reduce the high van der Waals forces and minimize MWCNT agglomeration, and thus to enhance their dispersion in the Al2O3 matrix. Figure 1 shows a schematic diagram of the LPCVD technique used to synthesize MWCNTs along with the acid functionalization procedure.
Fabrication of Al 2 O 3 -MWCNT composites
The powder processing route was adopted to fabricate the Al2O3-MWCNT composites. It is well known that MWCNTs tend to agglomerate in the host matrix due to their structural morphology, which leads to an ill-constructed interface between the matrix and the MWCNTs. Thus, homogeneous dispersion of MWCNTs within the ceramic matrix is extremely important in order to impart the desired mechanical properties to the composites. The detailed procedure followed for fabrication of Al2O3-MWCNT composites from their milled powder mixtures is illustrated in Figure 2.
Consolidation and sintering
For the consolidation of the Al2O3-MWCNT composites, both pressure-free and pressure-assisted sintering routes were selected. Pure Al2O3 and Al2O3-0.2, 0.5, 0.8, 3 and 5 vol% MWCNT composites were fabricated. In the case of conventional sintering, green compacts were prepared in a uniaxial cold compaction machine under a load of ~395 MPa and later sintered at 1650°C for three different holding times of 1, 2 and 3 h in an inert Ar atmosphere. For the pressure-assisted sintering, SPS was carried out using a Dr. Sinter 515S apparatus (SPS Syntex Inc., Kanagawa, Japan) with a pulse on-off ratio of 12:2. The sintering parameters adopted during the SPS of the Al2O3-MWCNT composites were as follows: temperature = 1450°C, time = 10 min, heating rate = 100°C/min, pressure = 50 MPa, diameter of graphite die = 15 mm, vacuum in chamber = 6 Pa, ambience = Ar (at a flow rate of 2 L/min), voltage = 20 V and current flow = ~1200 A.
After sintering, the pressure was removed and the samples were allowed to cool naturally in the furnace until they attained room temperature. Figure 3 shows the SPS profile used for the fabrication of the various Al2O3-MWCNT composites.
Archimedes' method was used to determine the bulk density of the various composites, and the rule of mixtures was followed to calculate their theoretical density. The density of Al2O3 was taken to be 3.95 g/cc and that of the MWCNTs to be 2.6 g/cc. The morphology of the fabricated composites was analyzed under an optical microscope and SEM.
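As a sketch (our own helper functions, not from the paper), the rule of mixtures and the Archimedes relation can be combined to obtain the relative density; the dry and suspended masses below are purely illustrative inputs:

```python
def theoretical_density(vf_cnt, rho_matrix=3.95, rho_cnt=2.6):
    """Rule of mixtures: density of an Al2O3-MWCNT composite in g/cc."""
    return (1.0 - vf_cnt) * rho_matrix + vf_cnt * rho_cnt

def relative_density(mass_dry, mass_suspended, vf_cnt, rho_liquid=1.0):
    """Archimedes bulk density (dry mass / displaced-liquid volume) divided by
    the rule-of-mixtures theoretical density; returns a fraction of 1."""
    rho_bulk = mass_dry / (mass_dry - mass_suspended) * rho_liquid
    return rho_bulk / theoretical_density(vf_cnt)
```

For example, a 3 vol% MWCNT composite has a theoretical density of 0.97 × 3.95 + 0.03 × 2.6 ≈ 3.91 g/cc.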
Characterization techniques
Several techniques were used to characterize the synthesized MWCNTs, the milled Al2O3-MWCNT powder mixtures and the sintered Al2O3-MWCNT composites. X-ray diffraction (XRD) of the MWCNTs, milled powder mixtures and fabricated composites was conducted using a Panalytical PW 3040 X'Pert MPD X-ray diffractometer with CuKα radiation (λ = 0.15415 nm). A JEOL JEM-2100 high-resolution transmission electron microscope (HRTEM) at an acceleration voltage of 200 keV was used to analyze the morphologies of the blended powder mixtures. The morphologies of the sintered composites were analyzed using a Zeiss Axio Scope.A1 optical microscope, a Nova NanoSEM 450/FEI field emission scanning electron microscope (FESEM) and a JEOL JSM-6480LV scanning electron microscope (SEM), both electron microscopes being equipped with energy-dispersive X-ray (EDX) analysis systems. In order to determine the thermal stability of the various powder mixtures, differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) were conducted using a Netzsch STA 409C Simultaneous Thermal Analyzer at a heating rate of 10°C/min in an Ar atmosphere. A Malvern Nano Zetasizer ZS system was used for the particle size analysis.
Mechanical testing
The mechanical properties of pure sintered Al 2 O 3 and various Al 2 O 3 -MWCNT composites were investigated.
For pure Al2O3 and the various Al2O3-based composites, hardness values were measured on a polished cross-section using a Vickers microhardness tester with a dwell time of 10 s. Loads of 100 and 500 gf were applied to the conventionally sintered and SPSed composites, respectively. An average of five indents was considered for each sample. The sizes of the conventionally sintered and SPSed samples were 5 mm × 10 mm and 5 mm × 15 mm, respectively. The single-edge notched beam method under ambient conditions was adopted to determine the fracture toughness of the composites. Due to sample size restrictions, the indentation fracture toughness testing technique was adopted and carried out at different loads to create a notched crack at the indented point. A notched crack with length l and diameter 2a was formed by the application of loads in the range of 500-2000 kgf, with the force applied until complete fracture of the sample was achieved. A crosshead speed of 0.05 mm/min and a maximum span length of 10 mm were set for the toughness test. For a particular indentation load (P), the corresponding hardness (H) values were recorded from the point of notch initiation until the complete fracture of the sample. The K_IC values were determined by the Shetty equation using the Palmqvist crack model [18,19]. In order to investigate the wear mechanism and determine the wear performance of pure Al2O3 and the various Al2O3-based composites, a dry sliding wear test was carried out using a DUCOM TR208-M1 ball-on-plate tribometer at a sliding speed of 20 rpm and a sliding time of 10 min. The wear test was done under a normal load of 1 kgf using a diamond indenter with a diameter of 2 mm, and wear tracks 6 mm in diameter were formed on the surfaces of the various samples. The variations in the wear rate and wear depth with respect to the sliding time were investigated. SEM was used to characterize the morphologies of the worn surfaces and the wear debris obtained from the wear tracks [20].
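The paper's Shetty equation is not reproduced in this excerpt; as a hedged sketch, one commonly quoted Palmqvist-regime form of the Shetty relation is K_IC = 0.0889·(H·P/4l)^{1/2}, where H is the Vickers hardness, P the indentation load and l the total Palmqvist crack length. The prefactor and the identification of the symbols are assumptions here, so the helper below should be checked against refs. [18,19] before use:

```python
import math

def shetty_kic(hardness_gpa, load_n, crack_len_m):
    """Palmqvist fracture toughness K_IC = 0.0889*sqrt(H*P/(4*l)), returned in
    MPa*sqrt(m). ASSUMED form of the Shetty relation (not confirmed by this
    excerpt): H in Pa, P in N, l = total Palmqvist crack length in m."""
    h_pa = hardness_gpa * 1.0e9
    kic_pa = 0.0889 * math.sqrt(h_pa * load_n / (4.0 * crack_len_m))
    return kic_pa / 1.0e6
```

With illustrative numbers H = 18 GPa, P = 98.1 N (10 kgf) and l = 400 µm, this gives K_IC ≈ 3 MPa√m, a typical magnitude for alumina-based ceramics.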
Figure 5 shows HRTEM images of the various Al2O3-MWCNT powder mixtures. As the loading level of MWCNTs was very low in the case of the Al2O3-0.2 vol% MWCNT powder mixture, no prominent MWCNTs are visible in Figure 5(a). However, in Figure 5(b-e), both MWCNTs and Al2O3 nanoparticles can be clearly observed in the milled powder mixtures. MWCNTs with hollow cores can easily be seen adhering to the Al2O3 particles. The Al2O3 and MWCNTs were blended by ball milling in the desired volume fractions for a period of 30 min in order to obtain uniformly dispersed powder mixtures. The HRTEM images reveal that the graphitic structure of the MWCNTs was well preserved during blending and that the short milling duration did not cause deterioration of the MWCNT structure. The presence of nanostructured Al2O3 particles in the blended powder mixture is confirmed by the images. Due to its high brittleness, the micron-sized Al2O3 was reduced to the nanometric domain within a short duration of milling.
Results and discussion
The hexagonal spot pattern in Figure 6(a) corresponds to the MWCNTs in the Al 2 O 3 -3 vol% MWCNT powder mixture. The SAD pattern of MWCNTs confirms the sixfold symmetry of the carbon atoms positioned in the graphitic lattice. On the other hand, the faint concentric rings seen in the SAD pattern of the Al 2 O 3 -5 vol% MWCNT powder mixture in Figure 6(b) correspond to the nanostructured Al 2 O 3 in the powder mixture [21,22]. The thermal stability of Al 2 O 3 -MWCNT powder mixtures was analyzed by DSC/TGA. The endothermic peak at ~92°C in the DSC plots of Al 2 O 3 -MWCNT powder mixtures in Figure 8(a) corresponds to the removal of the hydroxyl groups and the evaporation of the absorbed moisture. An exothermic peak corresponding to the combustion of MWCNTs can be seen at ~761°C. By comparing the DSC plots of Al 2 O 3 -MWCNT powder mixtures with different MWCNT loading levels, it is observed that the exothermic peak at ~761°C is strongest for the Al 2 O 3 -5 vol% MWCNT powder mixture due to its highest content of MWCNTs, resulting in the release of a larger amount of energy because of the higher degree of MWCNT oxidation. The DF HRTEM images in Figure 7 reveal that MWCNTs were well preserved in the form of tiny bundles between the Al 2 O 3 particles at a loading level of 3 vol%, which suggests that complete decomposition of the MWCNTs entrapped between the Al 2 O 3 particles was not achieved in the case of the Al 2 O 3 -3 vol% MWCNT powder mixture. Thus, no prominent exothermic peak is visible in the DSC plot of the Al 2 O 3 -3 vol% MWCNT powder mixture. It can be noted from the TGA plots of various Al 2 O 3 -MWCNT powder mixtures in Figure 8(b) that the mass loss of the powder mixtures occurs in steps. Initially, at ~137°C, the toluene used as a process controlling agent during milling and the residual moisture of the Al 2 O 3 -MWCNT powder mixture evaporate. Thereafter, a mass loss corresponding to the combustion of MWCNTs takes place at ~625°C.
In the case of the Al 2 O 3 -5 vol% MWCNT powder mixture, a large amount of MWCNTs remain entrapped between the Al 2 O 3 particles. As a result, a comparatively smaller mass loss is observed for the Al 2 O 3 -5 vol% MWCNT powder mixture than for the powder mixtures with lower MWCNT content. It should be noted that the Al 2 O 3 -5 vol% MWCNT composite powder shows more residual mass as compared to the Al 2 O 3 -0.5 vol% MWCNT powder mixture because complete decomposition of the entrapped MWCNTs could not be achieved. The TGA plots suggest that the removal of residual moisture and toluene (at ~137°C) has a relatively more dominant effect on the mass loss of the various powder mixtures than the decomposition of MWCNTs. Although the highest mass loss is observed for the Al 2 O 3 -0.5 vol% MWCNT powder mixture and the lowest for the Al 2 O 3 -5 vol% MWCNT powder mixture, it is noteworthy that the patterns of both the DSC and TGA plots are alike for all the powder mixtures, indicating that the thermal behavior of all the Al 2 O 3 -MWCNT powder mixtures is essentially identical [24,25].
One of the most important physical properties of particulate samples is the particle size distribution (PSD).
The PSDs of pure Al 2 O 3 and the Al 2 O 3 -MWCNT powder mixtures were determined by dynamic laser scattering in order to find the average particle size of the various powder mixtures, and the dispersion condition was evaluated based on the zeta potential. The PSD of pure Al 2 O 3 milled for 30 min in Figure 9(a) shows that the average size of the Al 2 O 3 particles is ~2.131 µm. It should be noted that the as-received pure Al 2 O 3 has a particle size in the range of ~80-140 µm (refer to Figure 4(a)). This confirms that a short period of milling can reduce the particle size of Al 2 O 3 to a very fine size. By comparing all the PSD plots in Figure 9(b-f), it is evident that the average particle size of the Al 2 O 3 -MWCNT powder mixture decreases with increasing volume fraction of MWCNTs in Al 2 O 3 . The reduction in the particle size of the powder mixture corresponds to the increasing volume fraction of the very fine MWCNTs. It should be noted that only a single sharp peak is seen in the PSDs of most of the powder mixtures, whereas two adjacent peaks are seen only in the case of the Al 2 O 3 -0.8 vol% MWCNT powder mixture. This confirms the highly uniform PSD in nearly all the powder mixtures [26].
The XRD plots of pure Al 2 O 3 and MWCNTs are shown in Figure 10(a). Sharp peaks located at 2θ values of 28.27°, 38.31° and 49.06° can be seen in the XRD plot of Al 2 O 3 . The XRD spectrum of MWCNTs shows the strongest peak at 2θ ~26.2°, corresponding to the (0 0 2) plane, and a low-intensity peak at 2θ ~43.9°, corresponding to the (1 0 0) plane of MWCNTs. Figure 10 [27].
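The (0 0 2) reflection at 2θ ≈ 26.2° can be converted to the graphitic interlayer spacing with first-order Bragg's law, d = λ/(2 sin θ). A quick check, assuming Cu Kα radiation (λ = 1.5406 Å; the anode is an assumption, since it is not stated in the text):

```python
import math

CU_KALPHA_ANGSTROM = 1.5406  # assumed Cu K-alpha wavelength; not stated in the text

def bragg_d_spacing(two_theta_deg: float,
                    wavelength_angstrom: float = CU_KALPHA_ANGSTROM) -> float:
    """First-order Bragg's law: d = lambda / (2 * sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))

d_002 = bragg_d_spacing(26.2)  # MWCNT (0 0 2) peak position from the XRD spectrum
print(f"d(002) = {d_002:.2f} A = {d_002 / 10:.2f} nm")  # -> 3.40 A = 0.34 nm
```

The result (~0.34 nm) matches the graphitic interlayer spacing of the MWCNTs quoted in the Conclusions, which is a useful internal-consistency check on the reported peak position.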
The emergence of new phases and grain growth in the developed composites was analyzed by XRD, since the formation of new peaks in the XRD spectra reveals any phase formation or transformation occurring during sintering. The grain pinning effect (Figure 11) can be clearly seen in the SPSed samples but is not prominent for the conventionally sintered samples. Also, no profound peak shift was observed for the SPSed samples as compared to the conventionally sintered samples. The shift in the Al 2 O 3 peaks seen in the inset images in Figure 11(a,b) corresponds to the diffusion of small C atoms into the Al 2 O 3 lattice. The short sintering duration in the case of the SPSed samples restricts diffusion, whereas the longer holding time during conventional sintering enables easy diffusion of C atoms [28]. Figure 12 shows optical micrographs of pure Al 2 O 3 and the various Al 2 O 3 -MWCNT composites developed by conventional sintering and SPS. A dense smooth surface is evident in the optical micrographs of pure Al 2 O 3 in Figure 12(a-d), conventionally sintered at 1650°C with holding times of 1, 2 and 3 h, and SPSed at 1450°C for 10 min. For the conventionally sintered pure Al 2 O 3 samples, a relative density in the range of ~86.9-89.44% was achieved, whereas a relative density of ~99.24% was observed for the SPSed pure Al 2 O 3 sample (refer to Figure 15). From the optical micrographs of the Al 2 O 3 -MWCNT composites in Figure 12(e-t), it is evident that with the addition of MWCNTs the nanotubes start to agglomerate, and the size of the MWCNT agglomerates was found to increase with increasing MWCNT content. Large agglomerates with sizes in the range of 1100-1700 µm can be observed in the optical micrographs of the Al 2 O 3 -5 vol% MWCNT composite in Figure 12(q-t) [29].
Figure 13 shows FESEM images of pure sintered Al 2 O 3 and Al 2 O 3 -3 and 5 vol% MWCNT composites developed by conventional sintering at 1650°C for durations of 1 and 3 h and by SPS at 1450°C for a dwell time of 10 min. From the micrographs in Figure 13(a,b), the effect of sintering parameters such as dwell time, temperature and processing technique, along with the influence of MWCNT addition on the grain size of Al 2 O 3 , can be analyzed. A significant grain growth in conventionally sintered pure Al 2 O 3 is clearly visible. SPS, however, restricted the grain growth of Al 2 O 3 due to the short sintering duration, as can be seen in Figure 13(c). Upon the addition of MWCNTs, a remarkable reduction in the Al 2 O 3 grain size can be observed in the micrographs in Figure 13(g,h). Figure 14(a) shows an SEM image of a conventionally sintered Al 2 O 3 -3 vol% MWCNT composite developed at 1650°C with a holding time of 3 h, along with elemental maps of Al, O and C. A white colored lump of Al 4 C 3 can be seen lying on the surface of the composite. In the SPSed Al 2 O 3 -3 vol% MWCNT composite, rod-like Al 4 C 3 structures can be seen on the surface. Al 4 C 3 is formed by a reaction of MWCNTs with Al 2 O 3 . The diameter of these rods is ~270 nm. The high pressure applied during SPS, along with the short sintering time, allowed the formation of these rod-like structures. On the other hand, in the case of the conventionally sintered Al 2 O 3 -3 vol% MWCNT composites, the Al 4 C 3 structures are irregular in shape and are not nanostructured due to the prolonged sintering time [30].
As ceramics require very high sintering temperatures, achieving near full density of the composites without damaging the structure and morphology of the MWCNTs is one of the most important challenges during the development of CMNCs. The highest level of densification in Al 2 O 3 -MWCNT composites can be achieved at an optimum loading level of the nanofiller, as a very high loading level of MWCNTs would lead to their agglomeration in the Al 2 O 3 matrix, resulting in poor densification of the CMNCs. Agglomerates of MWCNTs at the grain boundaries lead to abnormal grain growth and poor densification of the composites. On the other hand, a very low nanofiller loading level can leave many of the pores unfilled and result in a lower relative density of the composites. Therefore, an optimum nanofiller loading level results in the highest level of densification in CMNCs [31]. Figure 15 shows the relative densities of the various sintered samples. The relative density of the Al 2 O 3 -MWCNT composites was found to improve when MWCNTs were introduced into the Al 2 O 3 matrix in the range of 0.5-3 vol%. However, increasing the loading level of the MWCNTs beyond 3 vol% led to a decrease in the relative density of the composites [32]. Figure 16 shows the hardness values of pure Al 2 O 3 and the various Al 2 O 3 -MWCNT composites. It is noteworthy that in the case of conventionally sintered composites, the hardness of all the composites was higher as compared to the pure Al 2 O 3 samples except for the Al 2 O 3 -0.2 vol% MWCNT composites, whereas in the case of SPSed composites, this variation was not observed. The addition of MWCNTs up to 3 vol% results in an increase in the hardness of Al 2 O 3 -MWCNT composites. The highest hardness was observed in the case of the Al 2 O 3 -3 vol% MWCNT composites irrespective of the sintering technique adopted. The hardness of the Al 2 O 3 -3 vol% MWCNT composite developed by conventional sintering at 1650°C for 3 h was found to be ~4.109 GPa, whereas the SPSed Al 2 O 3 -3 vol% MWCNT composite showed a hardness value of ~8.38 GPa.
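Relative density figures like those discussed here are typically computed against a rule-of-mixtures theoretical density. A minimal sketch, assuming nominal densities of 3.95 g/cm³ for Al 2 O 3 and 2.1 g/cm³ for MWCNTs (both assumed values, not given in the text):

```python
def theoretical_density(vol_frac_cnt: float,
                        rho_al2o3: float = 3.95,   # assumed Al2O3 density, g/cm^3
                        rho_cnt: float = 2.1) -> float:  # assumed MWCNT density, g/cm^3
    """Rule-of-mixtures theoretical density for an Al2O3-MWCNT composite."""
    return (1.0 - vol_frac_cnt) * rho_al2o3 + vol_frac_cnt * rho_cnt

def relative_density(measured_g_cm3: float, vol_frac_cnt: float) -> float:
    """Measured bulk density (e.g. from Archimedes' method) as % of theoretical."""
    return 100.0 * measured_g_cm3 / theoretical_density(vol_frac_cnt)

# Example for a 3 vol% MWCNT composite with an assumed measured density:
rho_th = theoretical_density(0.03)
print(f"theoretical density: {rho_th:.3f} g/cm^3")
print(f"relative density for 3.80 g/cm^3 measured: {relative_density(3.80, 0.03):.1f}%")
```

Because MWCNTs are much lighter than alumina, the theoretical density drops slightly with nanotube loading, so an unchanged measured density corresponds to a somewhat higher relative density.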
A drop in the hardness of the composites was observed when the concentration of MWCNTs was increased to 5 vol%, which can be attributed to agglomeration of the nanofiller in the Al 2 O 3 matrix. MWCNTs tend to agglomerate due to their high aspect ratio and strong van der Waals interactions, leading to poor densification of the composites and ultimately decreasing the hardness value. The addition of an optimum loading of 3 vol% MWCNTs results in homogeneous distribution of MWCNTs in the Al 2 O 3 matrix, which effectively restricts grain growth by grain boundary pinning. MWCNTs present around the grain boundaries can effectively reduce the atomic diffusion coefficient and prevent grain growth during sintering. A finer Al 2 O 3 grain size results in higher hardness of the composites. The grain refinement effect of MWCNTs is also evident from the XRD plots of Al 2 O 3 -MWCNT composites in Figure 11 [33]. The tribological properties of Al 2 O 3 -MWCNT composites were investigated using a ball-on-plate tribometer. Figure 17 shows the wear rate of various Al 2 O 3 -MWCNT composites and clearly suggests that the variations in wear rate of the Al 2 O 3 -MWCNT composites follow a similar trend, irrespective of the sintering technique adopted. The wear rate of conventionally sintered Al 2 O 3 -MWCNT composites decreases continuously as the concentration of MWCNTs is increased up to 3 vol%. The improved wear behavior of Al 2 O 3 -MWCNT composites as compared to monolithic Al 2 O 3 is due to a protective tribofilm formed on the wear track by MWCNTs, which provides the composites with effective wear resistance [34]. With the addition of MWCNTs above 3 vol%, however, the wear rate shows a sudden increase due to heterogeneous agglomeration and clustering of MWCNTs in the sintered composite. In the case of conventionally sintered composites, moreover, the wear rate was found to be dependent on the sintering duration. 
Al 2 O 3 -MWCNT composites sintered at 1650°C for 1 h show a relatively higher wear rate, whereas composites sintered at the same temperature for 3 h show a relatively lower wear rate. This can be attributed to the higher level of densification of the composites sintered for 3 h. SPSed Al 2 O 3 -MWCNT composites show a much lower wear rate as compared to the conventionally sintered composites. However, the trend in the variation of wear rate with respect to the loading level of MWCNTs is similar to that of the conventionally sintered composites [35].
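With the test parameters given in the experimental section (20 rpm, 10 min, 6 mm track diameter, 1 kgf load), the total sliding distance behind these wear rates can be recovered; a specific wear rate is then commonly normalised by load × distance (that normalisation convention is an assumption here, as the text does not state which definition of wear rate it uses):

```python
import math

# Dry sliding test parameters from the experimental section:
RPM, MINUTES, TRACK_DIAMETER_MM = 20, 10, 6
LOAD_N = 9.81  # 1 kgf normal load expressed in newtons

# Total sliding distance = revolutions * track circumference.
sliding_distance_m = RPM * MINUTES * math.pi * (TRACK_DIAMETER_MM / 1000.0)

def specific_wear_rate(volume_loss_mm3: float) -> float:
    """Specific wear rate in mm^3 / (N * m) -- a common convention, assumed here."""
    return volume_loss_mm3 / (LOAD_N * sliding_distance_m)

print(f"sliding distance = {sliding_distance_m:.2f} m")  # -> 3.77 m
```

A 10 min test at this low speed therefore covers under 4 m of sliding, which is worth keeping in mind when comparing these wear rates with tests run over much longer distances.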
The plots in Figure 18 show the variations in wear depth with respect to time for the various Al 2 O 3 -MWCNT composites. For the conventionally sintered composites, it is evident from Figure 18 that the monolithic Al 2 O 3 sample shows the highest wear depth. The higher wear depth observed for the monolithic Al 2 O 3 sample corresponds to the removal of rough surface asperities due to its comparatively lower degree of densification and lubricity [36].
The SEM images in Figure 19 show the wear tracks of sintered pure Al 2 O 3 and Al 2 O 3 -MWCNT composites. It is evident from the images that the width of the wear tracks is significantly reduced by increasing the sintering duration from 1 to 3 h. Composites sintered for 3 h display relatively smoother wear tracks, moreover, with hardly any noticeable grain pull-out as compared to composites sintered for 1 h. However, a large area of grain pull-out and severe damage to the wear surfaces, with traces of wear grooves and large residual wear debris on the wear tracks, can be observed for Al 2 O 3 -MWCNT composites sintered for 1 h (Figure 19(a,d,g,j,m)). When the MWCNT content is increased from 0.5 to 5 vol% in the Al 2 O 3 matrix, abrasive sliding wear occurs, resulting in a greater amount of Al 2 O 3 grain pull-out. The width of the wear tracks decreased with increase in the sintering duration of the composites from 1 to 3 h, although the sintering temperature did not vary. For both the 1 and 3 h sintered composites, a minimum wear track width is observed in the case of the Al 2 O 3 -3 vol% MWCNT composites. The lower tangential frictional force between the composite surface of the wear track and the ball reduced the grain pull-out due to the formation of a protective tribofilm by the MWCNTs. MWCNTs embedded in the unpolished surfaces of the composites are dislodged and scattered on the wear track during the wear test to form a protective lubricating tribofilm. Moreover, the rolling effect of MWCNTs on the reduction of abrasion and wear rates cannot be ignored. Owing to their high aspect ratio, MWCNTs also bridge the grains and protect against crack propagation during micro-chipping and grain pull-out, thereby improving the wear resistance of the composites [37]. In the SPSed samples, the width of the wear tracks decreases with increase in the MWCNT content in the Al 2 O 3 matrix due to the shorter sintering duration of SPS.
The MWCNTs remain well preserved and assist in the effective lubrication of the Al 2 O 3 matrix, which results in an overall improvement of the wear resistance of the composites. Figure 20(a,b) shows SEM images of the wear debris collected from Al 2 O 3 -3 vol% MWCNT composites developed by conventional sintering at 1650°C for 2 and 3 h, respectively. It is clear from the SEM images that the sintering duration plays an important role in the conservation of MWCNTs in the Al 2 O 3 matrix, since the wear debris collected from the 3-h sintered composite shows mostly Al 2 O 3 particles with a negligible accumulation of MWCNTs. A longer sintering duration leads to better densification of the composites, making pull-out of MWCNTs difficult during the sliding wear test. This is also confirmed by the variations in wear rate in Figure 17 and the variations in wear depth in Figure 18. The wear debris of the SPSed Al 2 O 3 -3 vol% MWCNT composite is shown in Figure 20(c), and a large number of faceted Al 2 O 3 grains can be seen in the SEM image in Figure 20(d). However, MWCNTs are not found in the wear debris of the SPSed Al 2 O 3 -3 vol% MWCNT composite. This is due to the higher level of densification obtained for SPSed composites, making pull-out of MWCNTs difficult [38]. The variations in the fracture toughness values (K IC ), determined using the notch indentation fracture toughness method for the various sintered Al 2 O 3 -MWCNT composites, are shown in Figure 21. Loading MWCNTs beyond the optimum level results in the formation of an interlinked web-like structure of long nanotubes that weakens the nanofiller-matrix interface [39].
To investigate the various toughening mechanisms, the fractured surfaces of conventionally sintered and SPSed Al 2 O 3 -MWCNT composites were observed under SEM. The SEM images in Figure 22 clearly indicate that crack bridging by MWCNTs is the main toughening mechanism of the Al 2 O 3 -MWCNT composites. When MWCNTs embedded in Al 2 O 3 -MWCNT composites encounter a crack, they bridge the crack wake and effectively obstruct its propagation. The pull-out of MWCNTs also contributes to toughening of the composites. Both MWCNT pull-out and crack bridging by the MWCNTs can be seen in the SEM images in Figure 22. The high aspect ratio of MWCNTs leads to a longer crack-wake bridging zone and improves the toughness of the composites. During crack propagation, an initial uncoiling of MWCNTs occurs in the crack wake, and when the crack propagates further, the uncoiled MWCNTs stretch elastically, serving as stretched MWCNT bridges instead of conventional frictional pull-out bridges. The MWCNTs are responsible for the interfacial strengthening as they tend to bridge the grains due to their high aspect ratio and hence impede the crack propagation. During deformation, MWCNTs can absorb energy through their highly flexible elastic behavior and increase the fracture toughness [40]. Additionally, MWCNTs can act as pinning points to stop grain boundary movements occurring under stress. Thus, MWCNTs embedded in the grains pin the Al 2 O 3 grains together and strengthen the grain boundaries. As a result, these MWCNT-strengthened grain boundaries lead to a changed fracture mode, from intergranular in pure Al 2 O 3 to transgranular in Al 2 O 3 -MWCNT composites. The smaller diameter of MWCNTs allows them to become embedded in the grains during grain growth, and their elongated shape enables them to link various grains together in order to form bridges. The sliding of concentric tubes of MWCNTs allows them to extend to significantly longer than their original length without breaking. 
MWCNTs can be stretched to a great extent before disintegrating during crack propagation and hence contribute to the bridging effect and toughening mechanism [41].
Conclusions
The effects of various sintering parameters such as sintering temperature, dwell time, sintering pressure and variations in nanofiller concentration were analyzed for the various conventionally sintered and SPSed Al 2 O 3 -MWCNT composites. It was found that the addition of MWCNTs at significantly low loading levels of up to 3 vol% remarkably enhances the mechanical and tribological properties of Al 2 O 3 -based composites. However, any further addition of MWCNTs into the Al 2 O 3 matrix leads to the formation of complex clusters and their agglomeration in the host matrix, with resulting deterioration of the properties of the composites. The major conclusions drawn from the present research work are as follows: (1) Good-quality MWCNTs were synthesized at optimized conditions using the LPCVD technique. The synthesized MWCNTs comprised concentric cylindrical graphene layers with an interlayer spacing of 0.34 nm. The outer diameter of the MWCNTs was found to be ~12 nm and the inner diameter ~3.3 nm.
Characteristics and treatment effectiveness of the nummular headache: a systematic review and analysis of 110 cases
Background/objective Nummular headache (NH) is a primary headache disorder characterised by intermittent or continuous scalp pain, affecting a small circumscribed area of the scalp. As there are limited data in the literature on NH, we conducted this review to evaluate demographic characteristics, factors associated with complete resolution of the headache, and the effectiveness of treatment options. Methods We performed a systematic review of cases reported through the PubMed database, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses protocol and the keywords 'nummular headache', 'coin-shaped headache' and 'coin-shaped cephalalgia'. Analysis was performed using the χ2 test and the Wilcoxon rank-sum test. For individual interventions, the response rate (RR%) of the treatment was calculated. Results We analysed a total of 110 NH cases, with a median age of 47 years and a median age of pain onset of 42 years. The median time to correct diagnosis was 18 months after the first attack. The median intensity of each attack was 5/10 on a verbal rating scale over a 4 cm diameter, with an attack duration of <30 min. Patients with NH had a median of three attacks per day, with a frequency of 9.5 days per month. Forty (57.97%) patients had complete resolution of the headache after treatment. Patients with complete resolution were younger, more likely to be female, and more likely to have been diagnosed within a year. Patients with complete resolution were also more likely to have received treatment with onabotulinum toxin A (botulinum toxin type A (BoNT-A)) or gabapentin compared with patients without complete resolution. The most effective interventions were gabapentin (n=34; RR=67.7%), non-steroidal anti-inflammatory drugs (NSAIDs) (n=32; RR=65.6%), BoNT-A (n=12; RR=100%) and tricyclic antidepressants (n=9; RR=44.4%). Conclusion Younger age, female sex and early diagnosis were associated with complete resolution. NSAIDs, gabapentin and BoNT-A were the most commonly used medications, with significant RRs.
Introduction
As per the International Classification of Headache Disorders-3 (ICHD-3), nummular headache (NH) is a primary headache disorder characterised by intermittent or continuous scalp pain of highly variable duration, but often chronic, in a small circumscribed area of the scalp and in the absence of any underlying structural lesion. 1 2 Previously, it was known as 'coin-shaped headache'. 1 The estimated incidence of NH is 6.4 per 100 000. 3 The pathophysiology of NH is unknown, 4 5 and the majority of cases arise without any precipitating factor. The signs and symptoms of NH are confined to a small area, suggesting a peripheral local process with no evidence of a central mechanism as in migraine or tension-type headache. 4 The pain can be localised to any part of the scalp, but the parietal area is most commonly affected. 4 6 NH is typically unifocal, with the exception of a few cases of bilateral headache. 6 The pain is usually mild to moderate in intensity, rarely severe. The duration is highly variable, lasting from as short as a few seconds to daily and continuous pain. Pain is mostly described as pressure, stabbing or occasionally burning. 4 Sensory disturbances like allodynia, hypo/hyperaesthesia, paraesthesia, hyperalgesia and tenderness commonly occur in the affected area of pain. [4][5][6][7][8] Patients with mild pain do not require treatment, and reassurance is the only intervention needed. However, treatment is warranted in patients with severe pain. Antiepileptics and tricyclic antidepressants (TCA) have not been shown to be effective in patients with NH. Analgesics and non-steroidal anti-inflammatory drugs (NSAIDs) have been reported as an effective treatment in 60% of the published cases, especially in cases of acute exacerbation, mild continuous or intermittent pain, or as add-on treatment with other drugs. 4
In patients with an inadequate response to other treatments, botulinum toxin type A (BoNT-A) has been reported as a well-tolerated and effective treatment in a few case series, 5 9 but these lack appropriate sample sizes. It has been reported that gabapentin is only transiently effective for NH and that NH eventually becomes refractory to all standard prophylactic and analgesic therapies. [10][11][12] A small number of case series and prospective studies have reported NH characteristics and various treatment therapies, but these are limited by duplication of patient data or inadequate availability of individual patient data. 3 4 13 To our knowledge, there are no studies that have systematically analysed individual cases of NH to evaluate the effectiveness of each therapeutic intervention.
The primary aim of this systematic review was to evaluate the demographic characteristics, the variation in presentation of NH and the effectiveness of therapeutic interventions in individual NH cases published in the literature. Our secondary outcomes were to find the characteristics of patients with complete resolution of the headache and the effectiveness (response rate (RR)) of the treatment choices used for patients with NH.
Methods
We performed a systematic review of cases reported on NH. We followed the predesigned Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol and PRISMA checklist 14 (online supplementary file 1) and adhered to the standards for reporting systematic reviews to the extent possible.
Search strategy
A comprehensive search for case reports and case series on the PubMed database was conducted on 30 June 2019 by two independent investigators (AA and SS). The search included case reports, case illustrations, letters reporting human cases and case series from January 2002 to June 2019, using the keywords 'nummular headache', 'coin-shaped headache' and 'coin-shaped cephalalgia'.
Definition and classification
NH is a rare kind of primary headache disorder that is defined as 'pain of highly variable duration, often chronic, in a small circumscribed area of the scalp without any underlying cause'. ICHD-3 describes the following diagnostic criteria: (1) continuous or intermittent head pain; (2) pain exclusively on the scalp with four characteristics: sharply contoured, fixed in size and shape, round or elliptical, and 1-6 cm in diameter; and (3) not meeting the criteria of any other ICHD-3 headache diagnosis. 1
Eligibility criteria
We used the following inclusion criteria: all case reports and case series of NH (1) where the diagnosis was confirmed by a clinician as described by the authors' judgement, (2) where complete data including demographics and personal information were available, and (3) where mimicking differential diagnoses were ruled out to provide a clear picture. We excluded articles that were (1) observational studies, review articles or letters to the editor not presenting clinical NH case reports, (2) case reports in any language other than English, and (3) observational studies, review articles and case series with duplication of patients' data.
Selection of studies and data collection
Using this search strategy, a total of 87 articles were identified and screened. We excluded 21 articles that were non-human studies, lacked full text or were published outside the January 2002-June 2019 window. Both investigators (AA and SS) then independently read the remaining 66 articles, including abstracts and full manuscripts, and selected articles based on the inclusion and exclusion criteria. Any disagreement was reviewed by a third investigator (UKP) and resolved by consensus. Twenty-five articles were excluded because they were not full articles, had incomplete information on demographics or headache characteristics, were not well defined, were in a non-English language or were difficult to comprehend. This left us with 41 case reports and case series, of which 2 were missing the treatment given or its effectiveness. Thus, 41 case reports and case series were considered for the qualitative analysis and 39 for the quantitative analysis (figure 1).
All eligible studies were reviewed using a standardised web-based form to collect information. All data were summarised descriptively, including country of the patient, age, sex, age at diagnosis, latency, duration, timing and frequency of attack, characteristics of the headache (localisation, region, diameter, quality and intensity of pain, tenderness and exacerbating factors), concomitant symptoms, comorbidities and therapeutic interventions.
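The screening counts reported above can be checked arithmetically; a minimal sketch re-deriving the PRISMA flow numbers (87 identified, 21 excluded at screening, 25 excluded at full-text review, 2 lacking treatment/effectiveness data):

```python
# PRISMA flow counts as stated in the text.
identified = 87                      # records identified via the PubMed search
excluded_at_screening = 21           # non-human, no full text, or outside 2002-2019
read_in_full = identified - excluded_at_screening       # articles read in full
excluded_at_full_text = 25           # incomplete data, non-English, etc.
qualitative = read_in_full - excluded_at_full_text      # qualitative synthesis
quantitative = qualitative - 2       # 2 missing treatment or effectiveness data

print(read_in_full, qualitative, quantitative)  # -> 66 41 39
```

The derived totals (66 read in full, 41 qualitative, 39 quantitative) agree with the counts stated in the text and in figure 1.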
All eligible studies were reviewed using a standardised web-based form to collect information. All data were summarised descriptively, including country of the patient, age, sex, age at diagnosis, latency, duration, timing and frequency of attack, characteristic of headache (localisation, region, diameter, quality and intensity of pain, tenderness and exacerbating factors), concomitant symptoms, comorbidities and therapeutic interventions.
Outcomes
The primary outcome of our systematic review of cases was to evaluate the demographic characteristics, variation in presentation of NH and treatment interventions, as most cases of NH were initially misdiagnosed as other types of headache. Our secondary outcomes were to find the characteristics of complete resolution versus non-resolution of the headache and the effectiveness (RR) of the treatment choices. Complete resolution (no headache with ongoing treatment) versus non-resolution (infrequent headache episodes with ongoing treatment) was determined from each patient's response to the different medicines, as noted by the physicians within each case.
Statistical analysis
We used Microsoft Excel to collect the data of those 110 cases and SAS (V.9.4) software to evaluate the data (online supplementary file 2). Univariate analysis of differences between categorical variables was tested using the χ 2 test and analysis of differences of median between continuous variables was tested using Wilcoxon rank-sum test. We used proc means, proc freq, proc npar1way and proc univariate procedures to calculate these numbers. Frequency percentage, median and SE of the cohort were calculated from non-missing data. P value of <0.05 was considered statistically significant. No statistical power calculation was conducted prior to the study and the sample size was based on the available data. For individual interventions, the RR (%) of the treatment was calculated by dividing the number of patients with complete resolution after taking a particular drug to the number of patients who had taken that drug multiplied by 100.
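The per-drug response rate defined above (patients with complete resolution on a drug, divided by patients given that drug, times 100) can be reproduced as a one-line calculation; a minimal sketch using two of the reported counts (the responder numbers are back-calculated from the stated RRs, so treat them as illustrative):

```python
def response_rate(responders: int, treated: int) -> float:
    """RR (%) = patients with complete resolution on a drug / patients given it * 100."""
    if treated == 0:
        raise ValueError("no patients received this treatment")
    return 100.0 * responders / treated

# Counts consistent with the reported RRs (BoNT-A 12/12 = 100%, TCA 4/9 = 44.4%):
print(f"BoNT-A: {response_rate(12, 12):.1f}%")  # -> 100.0%
print(f"TCA:    {response_rate(4, 9):.1f}%")    # -> 44.4%
```

Note that with small denominators such as n=9 or n=12, a single patient changes the RR by 8-11 percentage points, which is worth bearing in mind when comparing interventions.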
results
We analysed a total of 110 published NH cases that fulfilled the inclusion criteria for this review. Table 1 presents the age-based and gender-based distribution of the cohort.
epidemiological and clinical characteristics
There were 108 adults (38% male and 62% female) and 2 children (1 boy and 1 girl) diagnosed with NH. The median age of the study cohort was 47±1.7 (SE) years, ranging from 4 to 80 years. The median age of onset of pain was 42±1.8 (SE) years (table 2).
The correct diagnosis of NH was made 18±12.8 (median±SE) months after the first episode of headache, with a median pain intensity of 5±0.2 (SE) on a verbal rating scale (VRS) of 1-10 (1=least severe to 10=most severe). The median diameter of the painful area was 4±0.2 (SE) cm. Of the 32 patients with a known duration of attack (pain), 17 (53.13%) experienced attacks of <30 min, 9 (28.13%) of 30-120 min and 6 (18.75%) of >120 min.
Patients with NH in the study had 3±2 (median±SE) headache attacks per day, on 9.5±3.6 (median±SE) days per month.
Note that all percentages are column percentages, used to compare characteristics between the complete-resolution and no-resolution groups; missing data were not considered in the calculation of the frequency percentages. *TCA includes amitriptyline and nortriptyline.
rr to treatment
NH most commonly presents in the middle-aged group 6 17 and the average age of patients in our study group was 47 years, with only two paediatric cases. Studies report a predominance of NH in females with a ratio of 1.5:1, consistent with our female-to-male proportion of 0.6/0.4. 6 18 NH is most commonly unilateral, 19 but 10% of our patients had bilateral pain, which is supported by the review by Dai et al, 6 in which 12% of patients had bilateral pain. Pain mostly occurred on the right side and localised to the parietal region, consistent with Rammohan et al. 17 Nearly half of our study patients had a pressure-like quality of pain, followed by stabbing and burning, with similar results reported in the review by Dai et al. 6 The average intensity of pain was mild to moderate, averaging 5/10 on the VRS. The average pain diameter was 4 cm, consistent with other studies reporting 2-6 cm. 6 17 20
There are no standard treatment guidelines for NH, but a few drugs have been used for its management, including NSAIDs, gabapentin, carbamazepine, BoNT-A, triptans, TCAs and nerve blocks. 4 A study of 21 patients with NH by Zhu et al, 21 in which 14/21 patients were treated with different therapeutic approaches, concluded that NH can be effectively treated with acupuncture or by combining amitriptyline with indomethacin, ibuprofen or carbamazepine. Chirchiglia et al 22 reported the successful management of one case of NH by adding palmitoylethanolamide to topiramate. Both of these drugs are used for neuropathic pain, suggesting that release of algogenic substances such as neurokinins, substance P and calcitonin gene-related peptide (CGRP) causes inflammation and pain. Because cutaneous branches are involved, the pain is superficial. Palmitoylethanolamide decreases degranulation of mast cells, prevents alteration of nerve fibres and reduces inflammation.
Nerve blocks are also used for the treatment of NH; a study by Dach et al 12 showed that blocking the greater occipital nerve can relieve the pain.
Our study found that gabapentin was the most frequently used therapeutic modality, with an RR of 67.7%. Martins and Abreu 5 and Trigo et al 23 likewise concluded that gabapentin is the most frequently used medication in NH treatment, with an RR >50%, the most common dose being 800 mg/day. An interesting finding of our study is the 100% treatment response with BoNT-A therapy. In support of our findings, studies by García-Azorín et al 13 and Cuadrado et al 4 concluded that BoNT-A significantly decreased the frequency of NH and may be a reasonable therapeutic approach for patients refractory to gabapentin. Another review by Dai et al demonstrated the effectiveness of BoNT-A treatment in 9/11 cases. 6 The major strength of our study is that the population analysed comprised only individual case reports, allowing a precise evaluation of the effectiveness of therapeutic interventions. However, our study has some limitations. First, the sample size is small because of under-reporting, strict exclusion criteria and the exclusion of prospective studies with no individual patient data available. Although this was done to avoid duplicate patients and maintain the quality of the article, it further reduced the number of patients in the analysis. Second, no randomised controlled studies were available to support our findings; hence, the evaluated treatment options represent the preferences of the treating physicians. Third, the treatment response of NH to the newer anti-CGRP agents is not known. Nevertheless, given the limited availability of accurate information on this disease, this study includes a relatively large number of patients.
Conclusion
This is the first systematic study to report the effectiveness of treatment options after analysing the individual cases published in the literature. The median age at diagnosis of NH was 47 years. Patients with NH had a median of three attacks per day, on 9.5 days per month. Sixty-nine per cent of patients had temporary relief and 60% of patients had complete resolution of the headache after treatment. Female sex and early diagnosis were associated with complete resolution of NH. NSAIDs, gabapentin and BoNT-A were the most commonly used medications, with RRs increasing in that order.
Magnetic manipulation of topological states in p-wave superconductors
Substantial experimental investigation has provided evidence for spin-triplet pairing in diverse classes of materials and in a variety of artificial heterostructures. A fundamental challenge in actual experiments is how to manipulate the topological behavior of $p$-wave superconductors (PSCs) that could open perspectives for applications. Such a control knob is naturally provided by the spin-triplet character of the PSC order parameter, described by the spin d-vector. Therefore, in this work we investigate the magnetic field response of one-dimensional (1d) PSCs and demonstrate that the structure of the Cooper pair spin-configuration is crucial to set topological phases with an enhanced number of Majorana fermions per edge, N, ranging from N=0 to 4. The topological phase diagram, consisting of phases with Majorana modes at the edge, becomes significantly modified when one tunes the strength of the applied field and allows for long range hopping amplitudes in the 1d PSC. We find transitions between phases with different number of Majorana fermions per edge that can be both induced by a variation of the hopping strength and a spin rotation of the d-vector. Hence, the interplay of the applied magnetic field and the internal spin degree of freedom of the PSC opens a new promising route for engineering topological phases with large number of Majorana modes.
I. INTRODUCTION
A Majorana fermion (MF) is an exotic quasiparticle that constitutes its own charge-conjugate partner (i.e. its creation and annihilation operators are identical) [1]. This property implies that every spinless fermion can be decomposed into two MFs, locally bound together. However, if the two MFs become sufficiently spatially separated, one can employ the two states defined by the corresponding spinless fermion as a topological qubit. The experimental achievement and control of the above properties are timely challenges in condensed matter, both from a fundamental point of view [2] and for the tantalizing prospect of decoherence-free quantum computation [3][4][5]. Recently, there has been a tremendous effort in designing and realizing materials platforms where topological superconductivity and MFs can be successfully obtained. This is demonstrated by the proposal of a large variety of systems based on heterostructures made of topological insulators or semiconductors interfaced with s-wave superconductors (SCs) [6][7][8][9][10], as well as by the subsequent experimental evidence of MFs in hybrid superconducting devices [11][12][13][14][15][16][17][18][19][20].
While the studies of PSCs conducted so far have led to remarkable progress and insight regarding topological systems at large, some fundamental aspects remain not fully established. For instance, one of the main issues when dealing with PSCs concerns the relation between the spin structure of the triplet order parameter (OP) and the resulting topological phases.
In this work, we move in this framework with a special focus on the possibility of achieving topological phases with a large number of MFs per edge. We show that the Cooper pair spin-configuration of a 1d PSC with an easy spin-plane, chiral symmetry [33,[35][36][37][38][39] and long-range hoppings can play a fundamental role in setting topological phases with an enhanced number of Majorana fermions per edge (e.g. ranging from N = 0 to 4). We determine the topological phase diagram, consisting of phases with MFs at the edge, in the presence of long-range hopping amplitudes in the 1d PSC and an applied magnetic field that preserves the chiral symmetry. We find transitions between phases with different numbers of Majorana fermions per edge that can be induced both by a variation of the hopping strength and by a modification of the d-vector spin structure. Hence, the magnetic field and the internal spin degree of freedom of the PSC define relevant tuning parameters to engineer topological phases with a large number of MFs.
The remainder of the paper is organized as follows. We present in Sect. II the model and the methodology for describing the 1d PSC. Sect. III is devoted to the presentation of the results as related to the topological phase diagrams. Finally, in the Sect. IV we provide the concluding remarks.
II. MODEL AND METHODOLOGY
We model the 1d-PSC by a Bogoliubov-de Gennes lattice Hamiltonian $\mathcal{H}_k$ in k-space, written in the Nambu spinor basis. The matrices $\sigma$ and $\tau$ indicate spin-1/2 Pauli matrices in the spin and particle-hole sectors, respectively. We assume the electron dispersion $\varepsilon_k = -2t\cos(ka) - 2t'\cos(2ka) - 2t''\cos(3ka) - \mu$ and set $t = 1$, with $t$, $t'$, $t''$ denoting the hopping to the first-, second- and third-nearest neighbor, lattice constant $a = 1$ and chemical potential $\mu$. In addition, we introduce the Zeeman field $\mathbf{h}$ and the odd-parity OP $d_k = 2d\sin k$, with $\mathbf{d}$ the complex vector defining the spin orientation of the OP. It is also convenient to introduce a matrix OP in spin space $\{\uparrow,\downarrow\}$, $\hat{\Delta} = (\mathbf{d}\cdot\boldsymbol{\sigma})\,\sigma_y$: $\Delta_{\uparrow\uparrow,\downarrow\downarrow} = d_y \pm i d_x$ and $\Delta_{\uparrow\downarrow} = -i d_z$. The d-vector components are then related to the pairing correlations for the spin-triplet configurations having zero spin projection along the corresponding symmetry axis. Moreover, once we know the orientation of the d-vector, we can immediately deduce that Cooper pairs having equal-spin configurations lie in the plane perpendicular to it.
For the present analysis we consider a magnetic field lying in the yz plane with amplitude $h$ and orientation $\theta$, i.e. $h_y = h\sin\theta$, $h_z = h\cos\theta$, while the PSC has an easy xy spin-plane for the d-vector; in particular, our previous self-consistent analysis [39] has shown that $\mathbf{d} = d(i\cos\alpha, \sin\alpha, 0)$. The spin structure of the pairing is due to an effective separable four-fermion interaction in the PSC channel with potentials $V_x = V_y \equiv V$ for the spin xy plane. For such a physical configuration, although in the presence of a source of the usual time-reversal symmetry breaking, at any given orientation of the field in the yz plane the Hamiltonian resides in the BDI symmetry class, exhibiting chiral, time-reversal and charge-conjugation symmetries [40][41][42] with corresponding operators $\Pi = \tau_x\sigma_x$, $\Theta = \tau_z\sigma_z K$ and $\Xi = \tau_y\sigma_y K$. Here, $K$ stands for complex conjugation. Note that the emerging time-reversal symmetry does not lead to a Kramers degeneracy, since it satisfies $\Theta^2 = I$, with $I$ the unit operator. The coexistence of the above set of symmetries is very important because it sets the topological class of the system and allows one to determine an integer Z invariant which counts the number of topologically protected MFs at the end of the 1d-PSC. Since the chiral symmetry operator anti-commutes with $\mathcal{H}_k$, by employing a unitary transformation $U$ rotating the basis into the eigenbasis of $\Pi$, the Hamiltonian can be put in the off-diagonal form $U\mathcal{H}_k U^\dagger = \begin{pmatrix} 0 & A_k \\ A_k^\dagger & 0 \end{pmatrix}$, with antidiagonal blocks given by the matrices $A_k$.
Hence, its determinant $\det A_k$ can be put in the complex polar form $z_k = |\det A_k|\, e^{i\theta_k}$ and, as long as the eigenvalues of $A_k$ are non-zero, it can be used to obtain the winding number $W$ by evaluating its trajectory in the complex plane as $W = \frac{1}{2\pi}\oint dk\, \partial_k \theta_k$. We observe that the number of windings of the phase of the determinant is a topological invariant [35], because it is not related to any symmetry breaking and it cannot change without the amplitude going to zero, which would imply a gap closing and a topological phase transition. The integer $W$ counts the number of MFs at the edges of the 1d-PSC. Hereafter, we compute $W$ in the parameter space and we combine this analysis with that on an open chain of 2000 sites [43,44] in order to investigate topological phases and topological transitions.
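The phase-trajectory evaluation of the winding number can be sketched numerically. The toy det A_k below (a single Kitaev-like chiral block built from the dispersion of Sect. II) and all parameter values are illustrative assumptions, not the paper's model or implementation.

```python
import cmath
import math

def winding_number(f, n=2000):
    """Winding of the phase of f(k) as k sweeps the Brillouin zone [-pi, pi].

    Accumulates the small phase increment between successive k-points;
    as long as f(k) never vanishes, the total phase change is 2*pi*W.
    """
    total = 0.0
    ks = [-math.pi + 2.0 * math.pi * i / n for i in range(n + 1)]
    for k0, k1 in zip(ks, ks[1:]):
        total += cmath.phase(f(k1) / f(k0))  # branch-safe increment
    return round(total / (2.0 * math.pi))

def det_A(k, t=1.0, t2=0.0, t3=0.6, mu=0.0, d=0.4):
    # Toy chiral block: dispersion (with primes written t2, t3) plus p-wave gap.
    eps = -2*t*math.cos(k) - 2*t2*math.cos(2*k) - 2*t3*math.cos(3*k) - mu
    return eps + 2j * d * math.sin(k)
```

With these illustrative parameters the trajectory of det A_k encircles the origin once, i.e. |W| = 1; longer-range hoppings can raise the winding.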
III. RESULTS
Since at zero applied field one can freely choose the orientation of the d-vector, the Hamiltonian can be decomposed into two spin blocks, each of which contributes 1 MF per edge if $|t + t''| > |t' + \mu/2|$. The application of an external field (or, effectively, the proximity to a ferromagnetic system) leads to different behaviors depending on its orientation with respect to the d-vector, including the possibility of a breakdown of the bulk-boundary correspondence due to a reconstruction of the bulk d-vector arising from boundary effects [39]. Moreover, while a field parallel to the d-vector makes the PSC topologically trivial, for a perpendicular orientation a topological regime can be obtained with 1 or 2 MFs per edge. As mentioned in Sect. I, the case with a magnetic field lying in a plane perpendicular to the d-vector places the system in the class BDI and, in principle, allows us to realize a topological phase with an arbitrary number of MFs per edge. Starting from these observations, one anticipates a substantial reconstruction of the topological phase diagram in the presence of longer-range tight-binding dispersions. In this context, our aim is to demonstrate that the spin structure of the d-vector in the chiral symmetric regime opens the path to obtaining new phases with an enhanced number of MFs per edge.
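As a sanity check on the zero-field topological criterion quoted above (read here with the primes restored, |t + t''| > |t' + mu/2|), the sketch below verifies numerically that it coincides with the band energies at k = 0 and k = pi having opposite signs, i.e. with an odd number of Fermi points between them. This is an illustrative check, not part of the paper.

```python
import math
import random

def band(k, t, t2, t3, mu):
    # Dispersion of Sect. II, with the primed hoppings written t2, t3 (a = 1).
    return -2*t*math.cos(k) - 2*t2*math.cos(2*k) - 2*t3*math.cos(3*k) - mu

def criterion(t, t2, t3, mu):
    # |t + t''| > |t' + mu/2|
    return abs(t + t3) > abs(t2 + mu / 2.0)

# The criterion is equivalent to band(0) * band(pi) < 0:
# band(0) = -[2(t+t3) + (2 t2 + mu)], band(pi) = 2(t+t3) - (2 t2 + mu).
random.seed(0)
for _ in range(1000):
    t, t2, t3, mu = (random.uniform(-2.0, 2.0) for _ in range(4))
    assert criterion(t, t2, t3, mu) == (
        band(0.0, t, t2, t3, mu) * band(math.pi, t, t2, t3, mu) < 0
    )
```

The product of the two band-edge energies equals (2t' + mu)^2 - 4(t + t'')^2, which is negative exactly under the stated inequality.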
In Fig. 1(a),(b) we report the topological phase diagrams for representative sets of parameters for both the magnetic field and the $\{\Delta_{\uparrow\uparrow}, \Delta_{\downarrow\downarrow}\}$ OPs, with the latter corresponding to a two-component d-vector with almost equal amplitudes in the xy spin-plane. The construction of the topological phase diagram has been performed by directly computing the winding number $W$ through the trajectories in the complex plane of $\det A_k$ as well as, for specific points in the phase space, by explicitly analyzing the multiple MFs in a 1d lattice with open boundaries (see Figs. 2(a)-(d)). The first observation arising from a direct comparison of the two phase diagrams at h = 0.8 and 2.0 is that topological phases with a number N of MFs larger than 3 require a magnetic field that overcomes a critical threshold. Indeed, a large portion of the phase diagram in Fig. 1(a) is topologically trivial (gray region), while configurations with N = 3 MFs are possible only by suitably tuning the $t'$ and $t''$ hopping amplitudes. The increase of the magnetic field not only leads to a dramatic growth of the topological regions with N = 1, 2, 3 MFs but also gives access to new configurations with N = 4 MFs. Transitions between phases with different numbers of MFs can then be induced by tuning both the strength of the applied field and that of the $t'$ and $t''$ hopping amplitudes. We point out that it is not necessary to have both $t'$ and $t''$ non-vanishing in order to get topological phases with N = 3, 4 MFs. Nevertheless, it turns out that they are only accessible when $t'' \neq 0$ (Fig. 1(b)), while, on the contrary, for $t'' = 0$ the winding number is limited to be less than 3.
Finally, we address the role of the d-vector by exploring a variation of the $\{\Delta_{\uparrow\uparrow}, \Delta_{\downarrow\downarrow}\}$ OPs. To highlight the impact of a d-vector rotation in the xy plane we selected two representative configurations where all the electronic parameters are fixed apart from the amplitude of $t''$ (Fig. 3). The resulting diagrams provide interesting and general indications of the way the d-vector acts in designing the topological phases. Indeed, we find that states with N = 3, 4 MFs can be obtained only when $\Delta_{\uparrow\uparrow} \ll \Delta_{\downarrow\downarrow}$ (or $\gg$), which corresponds to configurations with the d-vector having almost equal amplitudes for the x and y components. Remarkably, such regions with enhanced MFs require both components of the d-vector to be non-vanishing. Hence, we learn that in a field-temperature phase diagram they can potentially occur close to the boundary where one of the two spin order parameters goes to zero.
When considering configurations with N = 1, 2 MFs, as expected, those with an even number of MFs are sensitive to the relative sign of the OPs, exhibiting a non-trivial behavior when a $\pi$ phase shift occurs between $\Delta_{\uparrow\uparrow}$ and $\Delta_{\downarrow\downarrow}$. On the other hand, the phases with an odd number of MFs are not spin-phase dependent. An inequivalent behavior is also observed for the cases with N = 3 and 4 MFs, through the different types of accessible topological phase transitions when interchanging $\Delta_{\uparrow\uparrow}$ with $\Delta_{\downarrow\downarrow}$. Our results show that the phases with N = 2 and N = 4 have a fundamentally different topological behavior when considering their robustness and the nature of the allowed topological transitions upon a variation of the OP amplitudes.
IV. CONCLUSIONS
In conclusion, we have investigated the relation between the spin structure of a 1d PSC and the occurrence of multiple MFs in the presence of an applied magnetic field, considering an electron connectivity with long-range hopping amplitudes. We demonstrated that for specific orientations of the magnetic field with respect to the d-vector, chiral symmetry can be restored and the system can exhibit multiple chiral-symmetry-protected MFs. We singled out the main conditions needed to achieve such topological phases: the d-vector should have two components with almost equal amplitude in the xy spin-plane, and thus it is generally non-collinear with respect to the orientation of the applied field.
Double-winding Wilson loops in SU(N) lattice Yang-Mills gauge theory
We study double-winding Wilson loops in $SU(N)$ lattice Yang-Mills gauge theory by using both strong coupling expansions and numerical simulations. First, we examine how the area law falloff of a ``coplanar'' double-winding Wilson loop average depends on the number of color $N$. Indeed, we find that a coplanar double-winding Wilson loop average obeys a novel ``max-of-areas law'' for $N=3$ and the sum-of-areas law for $N\geq 4$, although we reconfirm the difference-of-areas law for $N=2$. Second, we examine a ``shifted'' double-winding Wilson loop, where the two constituent loops are displaced from one another in a transverse direction. We evaluate its average by changing the distance of a transverse direction and we find that the long distance behavior does not depend on the number of color $N$, while the short distance behavior depends strongly on $N$.
I. INTRODUCTION
The true mechanism of quark confinement has not yet been confirmed and is still under debate, although more than 50 years have passed since the quark model was proposed by Gell-Mann [1] at the beginning of the 1960s. In the 1970s, however, the dual superconductor picture was already proposed by Nambu, 't Hooft and Mandelstam [2] as a mechanism for quark confinement. In fact, the validity of the dual superconductor picture was confirmed for U(1) pure gauge theory [3], the Georgi-Glashow model [4] and N = 2 supersymmetric Yang-Mills theory [5], although it is not yet confirmed for the ordinary non-supersymmetric Yang-Mills theory [6] and quantum chromodynamics (QCD). Therefore, the dual superconductor picture is now regarded as one of the most promising scenarios for quark confinement, although this does not exclude the existence of other mechanisms for quark confinement. See e.g. [7][8][9] for reviews.
In order to establish the dual superconductor scenario, the most difficult issue to be resolved first of all is to guarantee the existence of magnetic monopoles in the pure non-Abelian Yang-Mills gauge theory, which is different from the 't Hooft-Polyakov magnetic monopole [10] in the gauge-scalar model. This issue was circumvented by using the method called the Abelian projection, proposed by 't Hooft [11]. The Abelian projection is a gauge fixing which explicitly breaks the original gauge group down to its maximal torus subgroup, whereby color symmetry is also broken. By the Abelian projection, magnetic monopoles of the Abelian type [12,13] are indeed realized, but the resulting theory is distinct from the original gauge theory with the non-Abelian gauge group. To avoid this gauge artifact, we must find a procedure which enables one to define magnetic monopoles in a gauge-invariant way. This issue was solved recently for the Yang-Mills theory with the gauge group SU(N) and any semi-simple compact gauge group [14], by using the non-Abelian Stokes theorem for the Wilson loop operator and the new reformulation of the Yang-Mills theory based on new field variables obtained by a change of variables through the gauge-covariant field decomposition of Cho-Duan-Ge-Faddeev-Niemi-Shabanov [15][16][17][18][19][20][21][22]. See [9] for a recent review. However, these achievements do not necessarily mean that dual superconductivity is the unique scenario for understanding quark confinement. Recently, Greensite and Höllwieser [23] introduced a "double-winding" Wilson loop operator in lattice gauge theory [24] to examine possible mechanisms for quark confinement.
The double-winding Wilson loop operator W(C = C1 × C2) is a path-ordered product of (gauge) link variables U ∈ SU(N) along a closed contour C which is composed of two loops C1 and C2; see Fig. 1. A more general "shifted" double-winding loop is introduced in such a way that the two loops C1 and C2 lie in planes parallel to the x−t plane, but are displaced from one another in a transverse direction, e.g. z, by a distance R, and are connected by lines running parallel to the z-axis. In the non-shifted case R = 0, the two loops C1 and C2 lie in the same plane, which we call coplanar. We denote by S1 and S2 the minimal areas bounded by the loops C1 and C2, respectively. Note that the double-winding Wilson loop operator is defined in a gauge-invariant manner, irrespective of whether it is shifted (R ≠ 0) or coplanar (R = 0).
In [23], they investigated the area (S1 and S2) dependence of the expectation value ⟨W(C = C1 × C2)⟩ of the double-winding Wilson loop operator W(C = C1 × C2) for the SU(2) gauge group. Consequently, it was shown numerically that both the original SU(2) lattice gauge theory and the center vortex model obey the difference-of-areas (S1 − S2) law, while the Abelian-projected model obeys the sum-of-areas (S1 + S2) law. In the coplanar case R = 0, a double-winding loop has been set up as given in Fig. 2. In order to discriminate between difference-of-areas and sum-of-areas laws, it is efficient to measure the L1-dependence of a coplanar double-winding Wilson loop average ⟨W(C = C1 × C2)⟩, with the other lengths L, L2 and δL being fixed. For simplicity, we set δL = 0. Then S1 (= L × L2) and S2 (= L1 × L2) are the minimal areas of the rectangular loops C1 and C2, respectively. We assume S1 ≥ S2 for definiteness hereafter. If ⟨W(C1 × C2)⟩ obeys the difference-of-areas law, $\langle W(C_1 \times C_2)\rangle \sim e^{-\sigma(S_1 - S_2)}$, then ln⟨W(C1 × C2)⟩ must increase linearly in L1 as L1 increases. On the other hand, if ⟨W(C1 × C2)⟩ obeys the sum-of-areas law, $\langle W(C_1 \times C_2)\rangle \sim e^{-\sigma(S_1 + S_2)}$, then ln⟨W(C1 × C2)⟩ must decrease linearly in L1 as L1 increases. The numerical evidence is given in Fig. 3, which summarizes their results for the L1 dependence of ln⟨W(C1 × C2)⟩ with the other lengths fixed, e.g. L = 10, L2 = 1, δL = 0, based on numerical simulations performed on a lattice of size 20^4 at β = 2.4. These results certainly show that both the original SU(2) gauge field and the center vortex lead to the difference-of-areas law, while Abelian-projected configurations lead to the sum-of-areas law.
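The slope-based discrimination between the two area laws can be made concrete: with S1 = L·L2 fixed and S2 = L1·L2 growing with L1, the two hypotheses give ln⟨W⟩ linear in L1 with opposite slope signs. A tiny sketch (the values of sigma, L and L2 are arbitrary illustrations, not simulation parameters):

```python
def lnW_difference(L1, L=10, L2=1, sigma=0.2):
    """ln<W> under the difference-of-areas law, -sigma*(S1 - S2)."""
    return -sigma * (L * L2 - L1 * L2)

def lnW_sum(L1, L=10, L2=1, sigma=0.2):
    """ln<W> under the sum-of-areas law, -sigma*(S1 + S2)."""
    return -sigma * (L * L2 + L1 * L2)

diffs = [lnW_difference(L1) for L1 in range(1, 10)]  # rises with L1
sums = [lnW_sum(L1) for L1 in range(1, 10)]          # falls with L1
```

A measured ln⟨W⟩ that rises with L1 thus signals the difference-of-areas law; one that falls signals the sum-of-areas law.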
From a physical point of view, a double-winding Wilson loop can be interpreted as a probe for studying interactions between two pairs of a particle and an antiparticle. The differences among the three cases are then understood as follows. In the Abelian model, a particle and an antiparticle in a pair are connected by electric flux tubes of length L and L1, respectively, as indicated in the top panel of Fig. 4. The total energy of the flux tubes shifted by R > 0 becomes σ(L + L1), where σ is the string tension, if flux-flux interactions are neglected. This argument explains why the Abelian model gives the sum-of-areas law. Moreover, they argue that even in the limit R → 0 the sum-of-areas law remains unchanged in the Abelian model, because electric flux tubes tend to repel each other and cannot coincide in a type II dual superconductor.
For the SU(2) gauge theory, they argue that the "W bosons" play the crucial role, since they are the off-diagonal components of the SU(2) gauge field which are not included in the Abelian model. The W bosons have charged components W−− and W++ with respect to the Abelian U(1) group. They explain that the charged off-diagonal components W−− and W++ of the SU(2) gauge field neutralize the positive and negative static charges, respectively. Consequently, flux tubes exist only for connecting the two positive charges and the two negative static charges, which leads to the difference-of-areas law. See the bottom panel of Fig. 4.
In the vortex picture, if a vortex pierces the minimal area of a loop, it will multiply the holonomy around the loop by a factor −1. Therefore, if a vortex pierces two loops C 1 and C 2 simultaneously, it gives a trivial effect. The non-trivial result is obtained only if a vortex pieces the non-overlapping region S 1 − S 2 . This leads to difference-of-areas law.
Quite recently, Matsudo and Kondo [25] have investigated a double-winding, a triple-winding, and general multiple-winding Wilson loops in the continuum SU (N ) Yang-Mills theory. They have found that a coplanar double-winding SU (3) Wilson loop average follows a novel area law which is neither difference-of-areas law nor sum-of-areas law, and that sum-of-areas law is allowed for SU (N ) (N ≥ 4), if the string tension is assumed to obey the Casimir scaling for quarks in the higher representations.
In this way, the study of double-winding Wilson loops itself is interesting because it can be used to test the confinement mechanism in QCD. Moreover, it is worth considering the interactions between two color flux tubes. In this paper, we investigate both "coplanar" and "shifted" double-winding Wilson loops in SU (N ) lattice Yang-Mills gauge theory by using both strong coupling expansion and numerical simulations.
In this paper, we show that the "coplanar" doublewinding Wilson loop average has the N dependent area law falloff: "max-of-areas law" for N = 3 and sum-ofareas law for N ≥ 4, which add a new result to the known difference-of-areas law for an N = 2 "coplanar" double-winding Wilson loop average. Moreover, we investigate the behavior of a "shifted" double-winding Wilson loop average as a function of the distance in a transverse direction and find that the long distance behavior does not depend on the number of color N , while the short distance behavior depends on N .
This article is organized as follows. In section II, we examine how the area law falloff of a "coplanar" double-winding Wilson loop average depends on the number of colors N. In section III, we examine a "shifted" double-winding Wilson loop, where the two constituent loops are displaced from one another in a transverse direction; in particular, we evaluate its average by changing the distance in the transverse direction. The final section IV is devoted to the conclusion and discussion. We also discuss the validity of the Abelian operator studied in [23]. Recently, there has been numerical evidence that the dual superconductor for SU(2) and SU(3) lattice Yang-Mills theory is of type I [26], although the sum-of-areas law has been explained on the basis of a type II superconductor. The interaction between two flux tubes in the limit R → 0 should therefore be studied for the case of a type I superconductor.
II. A "COPLANAR" DOUBLE-WINDING WILSON LOOP
First of all, we consider the coplanar case R = 0 of a double-winding Wilson loop in the SU (N ) lattice Yang-Mills gauge theory, as indicated in Fig.2. For simplicity, we set δL = 0. Let S 1 (= L × L 2 ) and S 2 (= L 1 × L 2 ) be the minimal areas of rectangular loops C 1 and C 2 , respectively. We assume S 1 ≥ S 2 for definiteness hereafter.
A. strong coupling expansion
Let $S_g$ be a plaquette action for the SU(N) lattice Yang-Mills theory:
$$S_g = \frac{1}{g^2}\sum_{n,\mu<\nu} \mathrm{tr}\left(U_{n,\mu\nu} + U^{\dagger}_{n,\mu\nu}\right),$$
where the link field $U_{n,\mu}$ satisfies $U_{n+\hat{\mu},-\mu} = U^{\dagger}_{n,\mu}$. This action reproduces the ordinary Yang-Mills action $-\int d^{D}x \sum_{\mu<\nu} \mathrm{tr}(F_{\mu\nu}^{2})$, up to a constant, in the naive continuum limit (lattice spacing → 0). The diagrammatic expressions of a plaquette variable $U_{n,\mu\nu}$ and the plaquette action are given in Fig. 5.
Note that the standard Wilson action $S_W$ is defined by
$$S_W = \beta \sum_{n,\mu<\nu} \left[1 - \frac{1}{N}\,\mathrm{Re}\,\mathrm{tr}\, U_{n,\mu\nu}\right],$$
see e.g. [29]. The difference in the constant term of the action is physically insignificant and we drop it in the strong coupling analysis. By comparing $S_g$ and $S_W$, we find $\beta = 2N/g^{2}$. We define the partition function $Z$ by
$$Z = \int \prod_{n,\mu} dU_{n,\mu}\; e^{S_g},$$
where $dU_{n,\mu}$ is the invariant integration measure of SU(N). Then the expectation value $\langle W(C)\rangle$ of an operator $W(C)$ is defined by
$$\langle W(C)\rangle = \frac{1}{Z}\int \prod_{n,\mu} dU_{n,\mu}\; W(C)\, e^{S_g}. \qquad (8)$$
In order to evaluate the expectation value in eq. (8), we perform the strong coupling expansion
$$e^{S_g} = \sum_{m=0}^{\infty} \frac{1}{m!}\,(S_g)^{m}. \qquad (9)$$
For large bare coupling constant $g$, we can expand the weight $e^{S_g}$ into a power series in $1/g^{2}$ and perform the group integration over each link variable $U_{n,\mu}$ according to the measure $dU_{n,\mu}$. In Appendix A, we summarize the formulas needed for the strong coupling expansion and for the SU(N) group integration.
SU (2)
First, we study the case of SU (2) gauge group. For a coplanar double-winding Wilson loop, there is a single link variable U for a link ∈ C 1 − C 2 and there is a double link variable U U for a link ∈ C 2 , as shown in the top diagram of Fig.6.
We list some of the explicit SU(2) group integration formulas; in particular,
$$\int dU\, U_{ij} = 0,$$
$$\int dU\, U_{ij}\, U^{\dagger}_{kl} = \frac{1}{2}\,\delta_{il}\,\delta_{jk}, \qquad (10c)$$
$$\int dU\, U_{ij}\, U_{kl} = \frac{1}{2}\,\epsilon_{ik}\,\epsilon_{jl}. \qquad (10d)$$
For a single link variable $U$ (resp. $U^{\dagger}$) on a link ∈ C1 − C2, we need at least one additional link variable with the opposite direction, $U^{\dagger}$ (resp. $U$), to obtain a non-vanishing result after the integration in eq. (8), according to the integration formula (10c) for the SU(2) group integration. Such link variables are supplied by the expansion (9) of $e^{S_g}$. Since the number of plaquettes brought down from $e^{S_g}$ must be equal to the power of $1/g^{2}$ in the expansion (9), the leading contribution to ⟨W(C1 × C2)⟩ comes from a set of plaquettes tiling the minimal area S1 − S2 with the least number of plaquettes. See the top diagram of Fig. 6. For the double link variable $UU$ on a link ∈ C2, on the other hand, we do not need additional link variables from the expansion of $e^{S_g}$ to obtain a non-vanishing result, due to the integration formula (10d), giving a g-independent contribution.
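The two SU(2) Haar-integration identities used here, the single-link pairing ∫dU U_ij U†_kl = (1/2) δ_il δ_jk and the g-independent double-link integral ∫dU U_ij U_kl = (1/2) ε_ik ε_jl, can be checked by Monte Carlo, sampling SU(2) uniformly as unit quaternions. This numerical sanity check is an illustration, not part of the paper.

```python
import random

def random_su2(rng):
    """Haar-random SU(2) matrix from a uniform unit quaternion on S^3."""
    while True:
        q = [rng.gauss(0.0, 1.0) for _ in range(4)]
        n = sum(x * x for x in q) ** 0.5
        if n > 1e-12:
            break
    a, b, c, d = (x / n for x in q)
    return [[complex(a, b), complex(c, d)],
            [complex(-c, d), complex(a, -b)]]

rng = random.Random(1)
samples = [random_su2(rng) for _ in range(20000)]
N = len(samples)
# <U_00 (U†)_00> should approach (1/2) * delta_00 * delta_00 = 1/2
m_pair = sum(U[0][0] * U[0][0].conjugate() for U in samples) / N
# <U_00 U_11> should approach (1/2) * eps_01 * eps_01 = 1/2 (no extra plaquettes needed)
m_double = sum(U[0][0] * U[1][1] for U in samples) / N
```

Both averages converge to 1/2 as the sample grows, confirming that a doubly-wound SU(2) link integrates to a non-zero, g-independent value on its own.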
For the SU (2) gauge group, therefore, the leading contribution to ⟨W (C 1 × C 2 )⟩ in the strong coupling expansion comes from the term in which a set of plaquettes tiles the surface with the area S 1 − S 2 , as shown in the top diagram of Fig.6. The group integrations then give the result, with σ = log(2g 2 ). This result was first obtained by Greensite and Höllwieser in [23]. We have thus reconfirmed the difference-of-areas law of coplanar double-winding Wilson loops for SU (2). The bottom diagram of Fig.6 shows one of the higher-order contributions in the strong coupling expansion for SU (2). This diagram gives a non-vanishing contribution due to the integration formula (10f).
SU (N ), (N ≥ 3)
Next, we study the case of SU (N ) (N ≥ 3) gauge groups. We list some of the explicit SU (N ) (N ≥ 3) group integration formulas below. Notice that the SU (N ) case is different from the SU (2) case. For a double link variable U U on a link ∈ C 2 , we need additional N − 2 link variables (U ) N −2 with the same direction to be brought down from the expansion of e Sg in eq.(8) to obtain a non-vanishing result after the integration, according to the integration formula (12e) for the SU (N ) group integrations. See the top diagram of Fig.7. For a single link variable U (resp. U † ) on a link ∈ C 1 − C 2 , on the other hand, we need at least one additional link variable with the opposite direction U † (resp. U ) to obtain a non-vanishing result after the integration in eq.(8), according to the integration formula (12c) for the SU (N ) group integrations. Therefore, the contribution from the top diagram of Fig.7 is given by an expression in which the coefficient p N is calculated by collecting the numerical factors coming from the link integrations and the power-series expansion of e Sg . We have another contribution from the bottom diagram of Fig.7. For a double link variable U U with the same direction on a link ∈ C 2 , we need an additional 2 link variables (U † )(U † ) with the opposite direction to be brought down from the expansion of e Sg in eq.(8) to obtain a non-vanishing result after the integration, according to the integration formula (12f) for the SU (N ) group integrations. For a single link variable U (resp. U † ) on a link ∈ C 1 − C 2 , on the other hand, we need at least one additional link variable with the opposite direction U † (resp. U ) to obtain a non-vanishing result after the integration in eq.(8), according to the integration formula (12c) for the SU (N ) group integrations. Therefore, the contribution from the bottom diagram of Fig.7 is given by an expression in which the coefficient q N is calculated in a similar way to p N .
FIG. 7: A set of plaquettes tiling the areas S1 and S2 which gives the leading contribution to a coplanar double-winding Wilson loop. Here S1(= L × L2) and S2(= L1 × L2) are respectively the minimal areas bounded by the rectangular loops C1 and C2 with S1 ≥ S2.
For SU (N ) (N ≥ 3), the leading contribution in the strong coupling expansion may come from one of the two diagrams shown in Fig.7. Since the number of plaquettes brought down from e Sg is equal to the power of 1/g 2 , these two contributions can be written with coefficients p N , q N determined by the expansion coefficients of the power series expansion of e Sg and the SU (N ) group integrations for the link variables. Which contribution becomes dominant is naively determined by comparing the power index of 1/(g 2 N ), which depends on the number of colors N .
For N ≥ 4, we find that the second term in eq.(15) gives the dominant contribution in the strong coupling expansion for ⟨W (C 1 × C 2 )⟩, since the corresponding inequality between the exponents holds. Thus we conclude that the sum-of-areas law of a coplanar double-winding Wilson loop is realized for N ≥ 4. This result is consistent with the result obtained by Matsudo and Kondo in [25].
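A plausible explicit form of the two contributions in eq.(15), reconstructed here as an assumption from the tilings of Fig.7 (the top diagram tiles S 1 − S 2 once and S 2 with an (N − 2)-fold covering; the bottom diagram tiles S 1 and S 2 once each) and consistent with the N-dependence described in the text, is:

```latex
\langle W(C_1 \times C_2) \rangle \;\simeq\;
p_N \left( \frac{1}{g^2 N} \right)^{(S_1 - S_2) + (N-2) S_2}
+\; q_N \left( \frac{1}{g^2 N} \right)^{S_1 + S_2},
\\[6pt]
(S_1 - S_2) + (N-2) S_2 \;=\; S_1 + (N-3) S_2 \;\ge\; S_1 + S_2
\quad \text{for } N \ge 4 .
```

The first exponent gives S 1 − S 2 at N = 2 (difference-of-areas) and S 1 at N = 3 (max-of-areas for S 1 ≥ S 2 ), while at N = 4 the two exponents coincide, matching the remark below eq.(23) that both terms then behave as a sum-of-areas law.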
From the top panel of Fig.7, we can easily see that the coefficient p N has to be calculated for each number of colors N , because the type of diagram differs with the number of colors N . On the other hand, we can obtain a general formula for the coefficient q N , since the diagram in the bottom panel of Fig.7 is common to all numbers of colors N . The result, given in eq.(16), holds for S 2 ≥ 1 in lattice units. See Appendix B for the details.
In the following, we show the results for SU (2), SU (3) and SU (4) in more detail.
SU (2): For the number of colors N = 2, eq. (15) reduces to eq.(17), where the factor 2 in front of p 2 and q 2 arises from the non-oriented nature of the plaquettes for SU (2), which is to be compared with (11).
SU (3): For the number of colors N = 3, eq. (15) reduces to eq.(20). From this result, we find that the first term in eq.(20) gives the dominant contribution to ⟨W (C 1 × C 2 )⟩ for sufficiently large areas S 1 and S 2 , which is neither a difference-of-areas law nor a sum-of-areas law for the area-law falloff of the coplanar double-winding Wilson loop average. We call this area-law falloff the "max-of-areas law" (or max(S 1 , S 2 ) law). This result is also consistent with the result obtained by Matsudo and Kondo in [25].
SU (4): For the number of colors N = 4, eq. (15) reduces to eq.(23). In this case, both terms in eq.(23) behave according to the sum-of-areas law.
3. L 1 dependence of ⟨W (C 1 × C 2 )⟩
From the above discussion, we can understand the L 1 dependence of the coplanar double-winding Wilson loop average ⟨W (C 1 ×C 2 )⟩ in SU (N ) lattice Yang-Mills gauge theory for fixed L, L 2 , and gauge coupling g.
For the SU (2) gauge group, we plot eq.(17) in Fig.8, which shows the difference-of-areas law behavior of a coplanar double-winding Wilson loop for N = 2. For the SU (3) gauge group, on the other hand, we plot eq.(20) in Fig.9. As the coplanar double-winding Wilson loop average follows the max-of-areas law, it is expected that there is no L 1 -dependence of ⟨W (C 1 ×C 2 )⟩ for sufficiently large areas S 1 and S 2 . In fact, we can see that the plots flatten at L 1 ∼ 4 (resp. L 1 ∼ 1) in the top (resp. bottom) panel of Fig.9.
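The qualitative L 1 -dependence can be sketched numerically from the leading area laws alone (the coefficients p N , q N and the full eqs.(17) and (20) are omitted; g = 1.5 is an arbitrary illustrative value, with sigma = ln(2 g^2) as in the text):

```python
import numpy as np

# Leading area-law behaviors of the coplanar double-winding Wilson loop
# average for fixed L, L2 and coupling g (prefactors omitted).
sigma = np.log(2 * 1.5**2)   # string tension sigma = ln(2 g^2), g = 1.5 assumed

def W_diff(L, L1, L2):       # N = 2: difference-of-areas law
    return np.exp(-sigma * (L * L2 - L1 * L2))

def W_max(L, L1, L2):        # N = 3: max-of-areas law
    return np.exp(-sigma * max(L * L2, L1 * L2))

def W_sum(L, L1, L2):        # N >= 4: sum-of-areas law
    return np.exp(-sigma * (L * L2 + L1 * L2))

L, L2 = 4, 8
# For L1 <= L (i.e. S2 <= S1, as assumed throughout) the max-of-areas
# curve is completely flat in L1: the plateau seen in Fig.9.
ws_max = [W_max(L, L1, L2) for L1 in range(1, L + 1)]
```

The difference-of-areas law instead grows with L 1 (the exponent S 1 − S 2 shrinks), while the sum-of-areas law decays with L 1 .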
B. Numerical simulation
We examine the L 1 -dependence of W (C 1 × C 2 ) that we discussed above.
SU (2): We generate the configurations of SU (2) link variables {U n,µ } using the (pseudo-)heat-bath method for the standard Wilson action. The numerical simulations are performed on the 24 4 lattice at β(= 2N/g 2 ) = 2.5. We thermalize with 3000 sweeps, and we have used 100 configurations for calculating the expectation value of coplanar double-winding Wilson loops ⟨W (C 1 × C 2 )⟩. The results, shown in Fig.8, confirm the difference-of-areas law for SU (2). Note that we can also confirm ⟨W (C 1 × C 2 )⟩ ≃ −1/2 for S 1 = S 2 from Fig.8. SU (3): For example, we can see that the plots flatten at L 1 ∼ 4 for L 2 = 8 (cf. Fig.9), which means that there is no L 1 -dependence of ⟨W (C 1 × C 2 )⟩. Thus, we numerically confirm the max-of-areas law for SU (3).
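As a small illustration of the measurement step, the trace of a multiply-wound rectangular loop can be computed from link variables. The sketch below uses random (unthermalized) SU(2) links on a toy 2D slice and only checks gauge invariance of the loop trace; it does not implement the heat-bath update, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    # Uniform SU(2) matrix from a normalized quaternion a0 + i a.sigma
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

Lx = 6
# Links on a 2D Lx x Lx slice: U[x, y, mu], mu = 0 (x-dir), 1 (y-dir)
U = np.array([[[random_su2() for mu in range(2)] for y in range(Lx)]
              for x in range(Lx)])

def loop_trace(x0, y0, lx, ly, winding=1):
    """Normalized trace of the path-ordered product around an lx x ly
    rectangle, traversed `winding` times (winding=2: double-winding)."""
    P = np.eye(2, dtype=complex)
    for _ in range(winding):
        x, y = x0, y0
        for _ in range(lx):
            P = P @ U[x % Lx, y % Lx, 0]; x += 1
        for _ in range(ly):
            P = P @ U[x % Lx, y % Lx, 1]; y += 1
        for _ in range(lx):
            x -= 1; P = P @ U[x % Lx, y % Lx, 0].conj().T
        for _ in range(ly):
            y -= 1; P = P @ U[x % Lx, y % Lx, 1].conj().T
    return np.trace(P) / 2

w_before = loop_trace(0, 0, 3, 2, winding=2)

# Gauge transform U_mu(x) -> g(x) U_mu(x) g(x+mu)^dagger:
# closed-loop traces must be exactly invariant.
g = np.array([[random_su2() for y in range(Lx)] for x in range(Lx)])
V = np.empty_like(U)
for x in range(Lx):
    for y in range(Lx):
        V[x, y, 0] = g[x, y] @ U[x, y, 0] @ g[(x + 1) % Lx, y].conj().T
        V[x, y, 1] = g[x, y] @ U[x, y, 1] @ g[x, (y + 1) % Lx].conj().T
U = V
w_after = loop_trace(0, 0, 3, 2, winding=2)
```

For SU(2) the trace of any closed-loop product is real, so the measured loop value is a real number, as used in the plots.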
III. A "SHIFTED" DOUBLE-WINDING WILSON LOOPS
Finally, we consider the shifted case R ≠ 0 of a double-winding Wilson loop in the SU (N ) lattice Yang-Mills gauge theory, as indicated in Fig.12. The contours C 1 and C 2 lie in planes parallel to the x-t plane, but are displaced from one another in the z direction by a distance R. Just as in the previous section, for simplicity, let C 1 (C 2 ) be a rectangular loop with side lengths L, L 2 (L 1 , L 2 ), and let S 1 (≡ L × L 2 ) and S 2 (≡ L 1 × L 2 ) be the minimal areas bounded by the contours C 1 and C 2 , respectively.
A. Strong coupling expansion
First, we study the shifted double-winding Wilson loop based on the strong coupling expansion.
One of the diagrams which gives a leading contribution in the strong coupling expansion is given by a set of plaquettes tiling the two minimal surfaces S 1 and S 2 , as shown in Fig.13. The result of the group integration for the links U on both surfaces becomes N (1/g 2 N ) S1+S2 for N ≥ 3, and 2N (1/g 2 N ) S1+S2 for N = 2, respectively. The extra factor of 2 in front of N for N = 2 arises from the non-oriented nature of the plaquettes. Another type of diagram which also gives a leading contribution in the strong coupling expansion is given by a set of plaquettes tiling the minimal surface S 1 − S 2 and the four sides, with the area 2R(L 1 + L 2 ), of a cuboid with height R, whose bottom is the rectangle of size L 1 × L 2 bounded by C 2 . To summarize the above discussion, the expectation value of the shifted double-winding loop ⟨W (C 1 × C 2 )⟩ R≠0 obtained from the diagrams shown in Fig.13 and Fig.14 is given, for N = 2, by eq.(28). Note that the R → 0 limit of eq.(28) does not agree with the coplanar result eq.(17), although the sum of the second and third terms in eq.(28), coming from the diagram of Fig.14, reproduces the coplanar result eq.(17) in the limit R → 0. This is because the first term in eq.(28), coming from the diagram of Fig.13, does not have a counterpart in the strong coupling expansion of the coplanar case in the limit R → 0 and hence contributes only to the shifted case with R ≠ 0. For the SU (2) gauge group, especially, we perform a detailed study of the R-dependence of a shifted double-winding Wilson loop. In what follows, we rewrite L 2 as T . Let the T direction be the time t-axis, the L and L 1 directions be along the spatial x-axis, and the R direction be along the spatial z-axis, as seen in the top part of Fig.15. As is explained in [23], the shifted double-winding Wilson loop at a fixed time can be interpreted as a tetra-quark system consisting of two static quarks and two static antiquarks. The pairs of quarks and antiquarks are connected by a pair of color flux tubes, as seen in the bottom part of Fig.15.
We study how the interactions between the two color flux tubes change when the distance R is varied. We find that the second term in eq.(28) dominates for R < R C := L 1 /(1 + L 1 /T ), and the first term in eq.(28) dominates for R > R C , as follows from comparing the two exponents of these terms for S 1 = LT and S 2 = L 1 T , where we have neglected the third (higher order) term in eq.(28) for the naive estimate of R C . This means that the left diagram of Fig.16 dominates for R < R C , and the right diagram of Fig.16 dominates for R > R C . Therefore, the dominant diagram switches from left to right at a certain value R C of R as R increases, just like the minimal surface spanned by a soap film. In Fig.17, we plot the R-dependence, eq.(28), of a shifted double-winding Wilson loop average ⟨W (C 1 × C 2 )⟩ for fixed L, L 1 , and L 2 in the SU (2) lattice gauge theory. The second and third terms in eq.(28) depend on R, but the first term in eq.(28) does not. Therefore, the plot flattens for R ≥ R C ∼ 1, which is consistent with Fig.16. This behavior does not depend on the number of colors N . In fact, the SU (3) and SU (4) cases are given as follows.
SU (3), SU (4): In Fig.18, we also plot the R-dependence, eq.(31), of a shifted double-winding Wilson loop average ⟨W (C 1 ×C 2 )⟩ for fixed L, L 1 , and L 2 in the SU (3) and SU (4) lattice gauge theories.
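The naive crossover estimate R C discussed above can be checked with a few lines of arithmetic. A minimal sketch (the exponents below, in units of the string tension, are assumptions read off from the tiling diagrams: S 1 + S 2 for the Fig.13 term and S 1 − S 2 + 2R(L 1 + T ) for the Fig.14 term, with the third term in eq.(28) neglected as in the text):

```python
# Compare the exponents of the two leading terms in eq.(28);
# the smaller exponent dominates in the strong coupling expansion.
L, L1, T = 5.0, 3.0, 2.0      # illustrative loop sizes
S1, S2 = L * T, L1 * T

def exponent_first(R):
    # Fig.13 tiling: both minimal surfaces, R-independent
    return S1 + S2

def exponent_second(R):
    # Fig.14 tiling: S1 - S2 plus the four sides of the cuboid of height R
    return S1 - S2 + 2 * R * (L1 + T)

# The two exponents cross exactly at R_C = L1*T/(L1+T) = L1/(1 + L1/T)
R_C = L1 / (1 + L1 / T)
```

Below R C the second (flux-tube-connecting) term has the smaller exponent and dominates; above R C the R-independent first term takes over, producing the flattening of the plots.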
B. Numerical simulation
Next, we examine the R-dependence of W (C 1 × C 2 ) based on numerical simulations on a lattice.
SU (2): In order to calculate the shifted double-winding Wilson loop average, we use the same gauge field configurations as those used in calculating the coplanar double-winding Wilson loop. However, we have used the APE smearing method (N = 5, α = 0.1) as a noise reduction technique. Fig.19 gives the plots obtained for ⟨W (C 1 ×C 2 )⟩ for various values of R, where we have fixed L = 5, T (= L 2 ) = 2, L 1 = 3. We see that the behavior of the data in Fig.19 is consistent with the analytical result given in Fig.17. We also see that the data in Fig.20 are consistent with the analytical result given in Fig.18 for sufficiently large areas S 1 and S 2 .
IV. CONCLUSION AND DISCUSSION
In this paper, we have studied the double-winding Wilson loops in SU (N ) lattice Yang-Mills gauge theory by using both strong coupling expansion and numerical simulation.
First of all, we have examined how the area-law falloff of a "coplanar" double-winding Wilson loop average depends on the number of colors N by changing the size of the minimal area S 2 of the loop C 2 . We have reconfirmed the difference-of-areas law for N = 2, and have found the new results of a "max-of-areas law" for N = 3 and a sum-of-areas law for N ≥ 4.
Moreover, we have considered a "shifted" double-winding Wilson loop, where the two contours are displaced from one another in a transverse direction. We have evaluated its average by changing the transverse distance, and have found that the long distance behavior does not depend on the number of colors N , while the short distance behavior does depend on N .
It should be remarked that this "shifted" double-winding Wilson loop may contain information about the interactions between two color flux tubes. For this purpose, we need to accumulate more data on finer lattices with larger sizes.
Originally, one of the reasons why Greensite and Höllwieser considered the double-winding Wilson loops seems to be that they wanted to examine the monopole confinement mechanism in lattice SU (2) gauge theory. They considered, as an "Abelian" double-winding Wilson loop, an operator which simply replaces the SU (2) link variable U n,µ with the Abelian variable u n,µ , and showed that the expectation value of such a naive operator obeys the sum-of-areas law. However, it is known that such a naive operator should work only for a single-winding Wilson loop in the fundamental representation. Recently, Matsudo and his collaborators [28] have given the explicit expression for the Abelian operator which reproduces the full Wilson loop average in higher representations, as suggested by the gauge-covariant field decomposition and the non-Abelian Stokes theorem (NAST) for the Wilson loop operator. Similarly, we hope that a correct form of the Abelian operator for a double-winding Wilson loop can be found in a similar way. When we change the line integral to the surface integral, our considerations of the diagrams which give the leading contribution to the strong coupling expansion seem to be useful for constructing the NAST for a double-winding Wilson loop. These results will be discussed in a forthcoming paper.
Appendix A: Formulas for the strong coupling expansion and the SU (N ) group integration
In order to perform the strong coupling expansion in the lattice gauge theory, we must calculate the following integrations of polynomials of the group matrix elements over each link: where U ij (i, j = 1, 2, · · · , N ) denotes a matrix element of a matrix U ∈ SU (N ) belonging to the SU (N ) group with the property U −1 = U † , and dU is an invariant measure (Haar measure) on the compact group which is left-invariant and right-invariant. We normalize the measure to unit total volume. By using the properties of the invariant measure, Creutz has shown that eq.(A.1) can be evaluated by the following formula [29,30]: where J is a source variable given by an arbitrary N × N matrix, |J| = det(J), ∂ ji ≡ ∂/∂J ji , and cof(∂) is a cofactor of ∂, respectively. We list some explicit results from the above formula; the last one, eq.(A.11), holds for N > 2. For N = 2, the following relation can be shown by using the properties of the invariant measure, and from this relation we also obtain further identities. The following more practical formulas are useful for calculating the expectation value of a double-winding Wilson loop in the strong coupling expansion. Let X, Y, A, B be elements of the SU (N ) group. From eq.(A.9) we find one such rule, from eq.(A.8) another, and from eq.(A.11), for N > 2, a third.
Appendix B: Explicit calculation of the coefficient q N
In this section, we show explicitly how eq.(16) is obtained. From eq.(8) and eq.(9), a contribution to a coplanar double-winding Wilson loop average ⟨W (C 1 × C 2 )⟩ from the bottom panel of Fig.7 is expressed as eq.(B.1).
FIG. B.1: Diagrammatic representation of the integration rule W̃ 2 , eq.(B.4), for the product of two double-plaquettes with the same clockwise orientation: integration is performed over the link variable U on the link which is common to the two double-plaquettes with the same clockwise orientation. By decomposing the path-ordered product of the link variables along the loop, the plaquette variables for the single plaquettes p1 and p2 to the left and right of U are respectively represented by tr(U † p 1 ) := tr(U † X) and tr(U † p 2 ) := tr(Y U ).
Here X and Y represent the products of the link variables along staple-shaped paths with the same orientations.
where U † pj and U † p k denote respectively the plaquette variables on the (S 1 − S 2 ) and S 2 areas. Note that U † p represents the plaquette variable for the plaquette p with the clockwise orientation.
First, the integration with respect to the link variables {U } on the (S 1 − S 2 ) area can be performed with the same technique of the strong coupling expansion as that for the fundamental Wilson loop, to obtain eq.(B.2), where we have defined the quantities in eq.(B.3). Next, we perform the integration in eq.(B.3) over the link variables {U } inside the S 2 area, which excludes the links on the loop C 2 = ∂S 2 (the boundary of S 2 ). As shown in Fig.B.1, performing the integration with respect to the link variable U on the link which is common to two double-plaquettes with the same clockwise orientation, using eq.(A.19), we obtain W̃ 2 .
From the above consideration, defining W̃ n as the result of connecting n adjacent double-plaquettes one after another by integrating over the link variables inside the S 2 area, we can conclude that W̃ n is written in the form of eq.(B.8). This statement is proved by mathematical induction. Indeed, by applying the same procedures as those given in eq.(B.4) and eq.(B.7) to eq.(B.8), we find a relationship between successive coefficients. Therefore, we have obtained the recurrence relation which holds for the coefficients α n and β n for n ≥ 1. Solving this recurrence relation with the initial condition eq.(B.5), we obtain the explicit form for the coefficients α n and β n , in which the expansion coefficient is applied to each of the n double-plaquettes.
Appendix C: Explicit calculation of the coefficient p 3
In this section, we show explicitly how eq.(22) is obtained. From eq.(8) and eq.(9), a contribution to a coplanar double-winding Wilson loop average ⟨W (C 1 × C 2 )⟩ from the top panel of Fig.7 is expressed as ∫ ∏ dU · W (C 1 × C 2 ) · ∏ pj ∈(S1−S2) (1/g 2 ) tr(U † pj ) · ∏ p k ∈S2 (1/g 2 ) tr(U p k ), (C.1) where U † pj and U p k stand respectively for the plaquette variables on the (S 1 − S 2 ) and S 2 areas. Note that U † p and U p respectively represent the plaquette variables for the plaquette p with clockwise and counterclockwise orientations. Throughout this appendix, we focus on the N = 3 case.
First, the integration with respect to the link variables {U } on the (S 1 − S 2 ) area can be performed with the same technique of the strong coupling expansion as that for the fundamental Wilson loop, to obtain eq.(C.2), where we have defined the quantities in eq.(C.3). Next, we perform the integration in eq.(C.3) over the link variables {U } inside the S 2 area, which excludes the links on the loop C 2 , the boundary of the S 2 area. As shown in Fig.C.1, performing the integration over the link variable U using eq.(A.18) for two plaquettes that have a common link U , we obtain ∫ dU tr(XU ) · tr(U † Y ) = (1/N ) tr(XY ). (C.4) From this observation, we conclude that one factor of 1/N appears whenever two plaquettes are connected after their common link is integrated. When the S 2 plaquettes are connected one after another by using eq.(C.4), a factor of (1/N ) S2−1 is applied, and after that only the path-ordered product of the link variables on the loop C 2 , the boundary of S 2 , is left unintegrated. Therefore, eq.(C.2) becomes an expression in which the integral is only over the link variables on the loop C 2 .
FIG. C.1: Diagrammatic representation of the integration rule eq.(C.4) for the product of two plaquettes with the same counterclockwise orientation: integration is performed over the link variable U on the link which is common to the two plaquettes with the same counterclockwise orientation. The plaquette variables for the plaquettes p1 and p2 to the left and right of U are respectively represented by tr(Up 1 ) := tr(XU ) and tr(Up 2 ) := tr(U † Y ). Here X and Y represent the products of the link variables along staple-shaped paths with the same orientations.
As shown in Fig.B.3, by using the decomposition W (C 2 ) := tr(AX) and W (C 2 × C 2 ) := tr(AXAX), and by repeatedly using eq.(A.9), we obtain the result, where we have used the cyclicity of the trace in the second equality. Note that this result is meaningful only when N = 3, because we have used eq.(A.9) in the above calculation, and eq.(A.10) holds only for M = 0 (mod 3). For N = 3, thus, we obtain ⟨W (C 1 × C 2 )⟩ p3 = −3 (1/(3g 2 )) S1 , (C.7) which indeed yields p 3 = −3. | 2020-08-11T01:00:33.553Z | 2020-08-09T00:00:00.000 | {
"year": 2020,
"sha1": "219f290f7dd1d2cd0a00f671c1fa1477b3207e9e",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.102.094521",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "44d0acff2b80bcfc070626c9b823871850f828a8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
200045586 | pes2o/s2orc | v3-fos-license | Energy loss spectrum and surface modes of two-dimensional black phosphorus
The structural features and the electron energy loss spectrum of black phosphorus (BP) have been experimentally analyzed and are discussed based on a theoretical calculation. The low-energy loss spectra of typical samples reveal that the emerging high-mobility two-dimensional material BP often exhibits both bulk and surface plasmon modes. The surface modes of BP are strongly thickness dependent. Electrodynamic analysis indicates that the Fuchs–Kliewer-like surface plasmon modes consist of two branches with different charge symmetry: for the lower branch, the upper and lower surfaces have the same charge polarity, while for the upper branch they have opposite charge polarity. This study provides fundamental insight into the characteristic nature of BP plasmonics.
Introduction
A surface plasmon is a type of surface electromagnetic mode propagating along the interface between a metal-like medium and a dielectric. The amplitude of the electric field decays exponentially away from the interface. Surface plasmonics is of particular interest because it offers an efficient method for light manipulation [1]. In the past decade, as an important subject of surface plasmonic investigations, graphene plasmonics has provided a variety of gate-controlled optoelectronic applications, such as light harvesting and optical sensing [2,3]. In addition to the unique electronic structure of graphene, graphene plasmonic devices can confine light to atomic thickness, resulting in dramatically enhanced local fields owing to graphene's two-dimensional (2D) nature. However, alternative 2D plasmonic materials are still lacking. Exploration of new 2D plasmonic building blocks in novel material systems is highly desirable to advance the field of plasmonics. As a result, a number of 2D materials in addition to graphene have been successfully prepared, including hexagonal boron nitride, transition metal dichalcogenides, and black phosphorus (BP) [4][5][6]. Among these emerging 2D systems, BP is one of the most promising candidates because of its remarkable electronic and photonic properties, such as its high carrier mobility (5200 cm 2 V −1 s −1 at room temperature), thickness-dependent direct bandgap, and gate-tunable optoelectronic properties [7][8][9][10]. A variety of experimental studies have been performed to investigate the structural and physical properties of BP by scanning tunneling microscopy [11], atomic force microscopy, Raman spectroscopy [12], and electron loss spectroscopy [13,14]. Although theoretical and experimental studies have demonstrated the potential of BP as a promising plasmonic platform [15][16][17][18][19][20][21], a systematic analysis of the thickness-dependent surface modes in the low energy loss spectrum is still lacking.
In the present work, we reveal that few-layer BP exhibits both bulk and surface plasmon modes in the visible and ultraviolet regions by electron energy loss spectroscopy (EELS) in transmission electron microscopy (TEM). The surface modes of BP are strongly thickness dependent. We performed an ab initio calculation of the dielectric function of BP. Based on the anisotropic dielectric function and the electrostatic approximation, we also calculated the contribution of the thickness-dependent anisotropic surface modes to the low energy loss spectrum. The theoretical results are in good qualitative agreement with the experimental results.
Experimental
Crystalline BP is typically sensitive when exposed to ambient O 2 and H 2 O, and it shows visible structural degradation [22]. Although the mechanism is still not completely understood, the presence of electrons/light has been shown to either initiate or accelerate this degradation [14]. Thus, the pristine BP samples in this study were handled with minimal ambient light exposure, and prior structural/compositional analyses were performed to ensure that the BP flakes selected for study were not significantly altered. In the experimental measurements, BP crystalline plates with thicknesses of 1-10 layers in distilled water solvent were dropped onto a 400 mesh copper grid on which carbon nanotubes were grown. After the distilled water vaporized, the BP nanoplates were attached to the nanotubes. The EELS measurements were performed with a JEOL-2100F electron microscope equipped with a Gatan spectrometer operating at a voltage of 200 kV. A collection semi-angle of more than 100 mrad was used. The TEM images were recorded with a JEM-ARM200 TEM (JEOL Inc.). High-resolution real space microscopy and EELS were performed for free-standing BP supported by carbon nanotubes at the edges. The advantage of using the carbon nanotubes as a scaffold for the flakes is that it avoids introducing the background from a substrate into the EELS. Also, the surface electromagnetic modes are sensitive to the dielectric functions of the media above and below the interface along which they propagate. Thus, the spectroscopic features and plasmonic behavior of BP supported on carbon nanotubes differ from those of BP deposited on a flat substrate.
A TEM image of the sample at low magnification is shown in figure 1(a). It clearly shows that the BP crystalline plates are supported by the carbon nanotubes. A high-resolution real space image and the EELS spectrum of the BP plates are shown in figures 1(c) and (d), respectively. The crystal structure and lattice symmetry were investigated by TEM observations and electron diffraction along the main axis directions. All of the obtained experimental data can be well indexed to the Cmca orthorhombic unit cell with lattice parameters of a=3.313 Å, b=4.376 Å, and c=10.478 Å. The structural image in figure 1(c) is for an exfoliated BP flake viewed along the [001] crystallographic direction, where P atomic columns can be directly observed. The schematic structural model is shown in figure 1(b), which is consistent with previously reported structures with AA stacking [13,23]. The core energy loss spectrum in figure 1(d) clearly shows that the phosphorus L 23 peak is located at 132 eV. Its shape is consistent with what has been observed for pristine BP [14].
The low-energy loss spectrum recorded at 200k magnification with an exposure time of 1 s in the image mode is shown in figure 2. The thickness of the crystalline samples was analyzed as follows. First, the zero loss peaks were isolated and the elastic scattering counts I 0 were recorded. The total counts I t were also recorded. The mean free path was estimated by the equation [24] λ = 106 F E 0 /[E m ln(2β E 0 /E m )], where F is the relativistic factor, E 0 is the incident electron energy, β is the semi-collection angle, and E m is the mean energy loss estimated by the empirical formula E m = 7.6 Z 0.36 .
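The thickness determination can be sketched in code. This assumes the quoted mean-free-path equation is the standard log-ratio parametrization (λ in nm for E 0 in keV, β in mrad, E m in eV), with the thickness then recovered as t = λ ln(I t /I 0 ); the parameter values below are illustrative:

```python
import math

def mean_free_path_nm(E0_keV=200.0, beta_mrad=100.0, Z=15):
    """Inelastic mean free path (nm) from the parametrization quoted in [24]:
    lambda = 106 F E0 / (E_m ln(2 beta E0 / E_m)), E_m = 7.6 Z**0.36 (eV).
    Z = 15 corresponds to phosphorus; the relativistic factor F is the
    standard expression for E0 in keV."""
    Em = 7.6 * Z**0.36
    F = (1 + E0_keV / 1022.0) / (1 + E0_keV / 511.0) ** 2
    return 106.0 * F * E0_keV / (Em * math.log(2 * beta_mrad * E0_keV / Em))

def thickness_nm(I_total, I_zero_loss, **kw):
    # Log-ratio method: t = lambda * ln(I_t / I_0)
    return mean_free_path_nm(**kw) * math.log(I_total / I_zero_loss)
```

For example, a spectrum with I_t/I_0 = e gives a thickness of exactly one mean free path.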
Data analysis
The thickness-dependent energy loss spectrum is dominated by the bulk loss of BP at around 19 eV, and the energy loss associated with the excitation of the surface modes on the upper and lower surfaces of the crystal plate appears as a broad plateau. With increasing thickness, both features in the EELS spectrum become stronger. The plasmon around 20 eV in BP has been reported recently by Nicotra et al [16] for both the zigzag and armchair directions. The weak shoulder observed in the EELS spectrum around 10 eV is identified as a single-particle transition from the 2 G + band, derived from p z orbitals, whose wave function has alternating signs between nearest neighbors. To extract the contributions of the surface modes, we removed the zero loss peaks using a reflected-tail fit function. The bulk contributions in the experimental EELS spectra were then calculated using the jellium model: the bulk plasmon frequencies E p and the full widths at half-maximum ΔE p of the plasmon peaks were fitted to the experimental data using the jellium-model loss function, where E is the energy. The fitted bulk losses were then removed, leaving the surface loss contributions, which are shown in figure 3.
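For concreteness, a standard Drude/jellium bulk loss function of the fit parameters E p and ΔE p can be sketched as follows (the exact fit function used here is not reproduced in the text; this form is an assumption consistent with the jellium-model description):

```python
import numpy as np

def drude_bulk_loss(E, Ep, dEp):
    """Jellium/Drude bulk energy-loss function Im(-1/eps) as a function of
    energy E (eV), plasmon energy Ep, and FWHM dEp -- an assumed standard
    form for fitting the bulk plasmon peak."""
    return E * dEp * Ep**2 / ((E**2 - Ep**2)**2 + (E * dEp)**2)

E = np.linspace(1, 40, 2000)
loss = drude_bulk_loss(E, Ep=19.0, dEp=6.0)   # BP bulk plasmon near 19 eV
E_peak = E[np.argmax(loss)]                   # peak lies close to Ep
```

Subtracting such a fitted bulk curve from the total spectrum leaves the surface-loss plateau, as done for figure 3.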
We performed an ab initio calculation of the electronic structure of BP with the Vienna ab initio simulation package. Two layers of phosphorus atoms stack in the AA type. A 4.37 Å×3.31 Å×10.47 Å unit cell with eight atoms was constructed. The projector augmented-wave approach and the generalized gradient approximation exchange-correlation functional were used in the calculation. We used 500 eV as the energy cutoff of the wave functions and a 21×21×7 k-point mesh. The obtained anisotropic dielectric functions of BP in the X, Y, and Z directions, shown in figure 4, were used as the input parameters in the following calculation of the EELS spectrum.
We will now discuss the theoretical calculation and analysis of the EELS spectrum of BP. Because of the difficulty of an accurate calculation of the EELS spectrum for a biaxial material, the analytical analysis in the present study is confined to the calculation of the EELS spectrum of a uniaxial-type thin film. In a uniaxial crystal, the in-plane dielectric functions along the x- and y-axis directions are the same, whereas the out-of-plane dielectric function (i.e. the dielectric function along the z-axis direction) is different. The biaxial nature of BP was therefore taken into account by performing the EELS calculation separately for the XZ and YZ planes. The analytical analysis was performed within the nonretarded approximation. A similar analysis of the electron energy loss probability of uniaxial thin films has been performed for graphite [25,26] and hexagonal boron nitride [27], in which the theoretical results are in good agreement with the experimental measurements. We started with separate calculations of the guided modes, the bulk loss, and the Begrenzungseffekt. The total electron energy loss probability is the sum of the three contributions.
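Before the full anisotropic calculation, the qualitative distinction between the bulk loss and the surface-mode contribution can already be seen in an isotropic toy model (the Drude parameters below are illustrative assumptions, not the ab initio dielectric functions of figure 4):

```python
import numpy as np

# Toy Drude dielectric function: bulk loss Im(-1/eps) peaks at the plasmon
# energy Ep, while the semi-infinite surface loss Im(-1/(eps+1)) peaks near
# Ep/sqrt(2) -- below the bulk peak, like the plateau below 19 eV.
E = np.linspace(1, 30, 3000)
Ep, gamma = 19.0, 1.0                      # illustrative values (eV)
eps = 1 - Ep**2 / (E**2 + 1j * gamma * E)

E_bulk = E[np.argmax((-1 / eps).imag)]        # bulk plasmon position
E_surf = E[np.argmax((-1 / (eps + 1)).imag)]  # surface plasmon position
```

In the actual thin-film geometry the single surface mode is replaced by the thickness-dependent guided modes, but the ordering E_surf < E_bulk survives.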
The typical results for a 5 nm thick hypothetical uniaxial thin film, with the in-plane dielectric function equal to the calculated dielectric functions for the x- and y-axis directions and the out-of-plane dielectric function equal to the calculated z-direction dielectric function, are shown in figure 5. A bulk plasmon peak at 19 eV is the prominent feature in both cases. The plateaus below 19 eV in the loss functions are contributed by the surface guided modes confined by the upper and lower boundaries of the thin flakes. The energy transfer from the bulk mode to the guided modes is described by the Begrenzungseffekt. The total loss functions are the sums of the loss functions obtained from the bulk modes, guided modes, and Begrenzungseffekt, and they have profiles similar to the experimental data.
To further understand the structural features of the EELS spectrum, we plot the dispersions of the guided modes and bulk modes on a log scale in figure 6. The dispersions of the relevant modes calculated with the X and Z direction dielectric functions are shown in figures 6(a) and (c), and the counterparts calculated with the Y and Z direction dielectric functions are shown in figures 6(b) and (d). Figures 6(a) and (b) show the bulk mode dispersion at 19 eV. Figures 6(c) and (d) show the guided mode dispersion below 19 eV. The guided modes split into two branches in both sets of plots. The lower branches below 14 eV in both sets of plots correspond to the lower branch of the surface plasmon pointed out by Kliewer and Fuchs [28,29], while the upper branches between 14 and 19 eV are the upper branch of the surface plasmon. Another important feature is that the surface loss function has a local maximum at about 0.7 eV. This is consistent with the fact that small but obvious peaks appear at frequencies below 2 eV in the experimental data. We will now discuss the fundamental features of the total loss functions and guided mode loss functions for samples with different thicknesses (figure 7). The bulk loss monotonically increases with increasing thickness, as expected. However, the guided mode behavior is more complicated. The main peak of the surface loss moves to higher energy with increasing thickness, which is consistent with the experimental data for the surface loss part of the spectrum (figure 3). The thickness dependence of the data shown in figure 3 mainly comes from two aspects. First, the change in the shape of the curves with thicknesses of 7.76, 10.58, and 15.79 nm mainly comes from the thickness dependence of the surface mode dispersion, as pointed out in [27]. Second, what we measured are actually stacks of BP flakes, especially in the thicker regions.
Thus the surface-mode contributions from the upper and lower sides of the slabs add together, which also introduces a thickness dependence into our data.
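The branch structure and its thickness dependence can be illustrated with the textbook Drude-slab result for coupled surface plasmons (a hedged stand-in for the anisotropic ab initio dielectric functions; ω_p = 19 eV mimics the BP bulk plasmon):

```python
import numpy as np

# Hedged sketch: coupled (Kliewer-Fuchs) surface plasmon branches of a
# free-standing slab, using the textbook Drude result
#   omega_pm(q)^2 = (omega_p^2 / 2) * (1 +/- exp(-q*d)).
# The Drude form is an illustrative stand-in for the anisotropic ab
# initio dielectric function of BP.
def kliewer_fuchs_branches(q_nm, d_nm, omega_p=19.0):
    """Return (lower, upper) branch energies in eV for wave vector q
    (in 1/nm) and slab thickness d (in nm)."""
    coupling = np.exp(-q_nm * d_nm)
    upper = omega_p * np.sqrt((1.0 + coupling) / 2.0)
    lower = omega_p * np.sqrt((1.0 - coupling) / 2.0)
    return lower, upper

q = np.linspace(0.01, 2.0, 200)          # wave vector, 1/nm
for d in (7.76, 10.58, 15.79):           # thicknesses from figure 3
    lo, hi = kliewer_fuchs_branches(q, d)
    # Both branches approach omega_p/sqrt(2) ~ 13.4 eV at large q*d,
    # close to the 14 eV split point seen in the dispersions.
    print(f"d = {d:5.2f} nm: lower branch up to {lo.max():.1f} eV, "
          f"upper branch down to {hi.min():.1f} eV")
```

In this picture the lower branch (in-phase surface charges) lies below ω_p/√2 and the upper branch (out-of-phase) between ω_p/√2 and ω_p, consistent with the charge polarities discussed in the conclusions.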
Conclusions
BP has been investigated by high-resolution electron microscopy and EELS. The orthorhombic crystal structure and crystal lattice constants were determined by real-space measurement as well as diffraction analysis. We also calculated the anisotropic dielectric function of BP by an ab initio method. Quasistatic analysis was performed and the EELS spectrum was analyzed within the framework of classical electrodynamics. The biaxial nature of BP was taken into account, and the EELS calculation was performed separately with the dielectric functions of BP in the X, Z directions and in the Y, Z directions. The contributions of the bulk and surface plasmons were separately measured and calculated. Fuchs-Kliewer-like surface plasmon modes within the BP slab are observed, and the data are well supported by the analytical analysis. The fast electron acts as a pulsed white-light source with evanescent components and efficiently stimulates surface plasmons along the surface of BP. The lower branch of the Fuchs-Kliewer modes, with energies below 14 eV, has the same charge polarity on the upper and lower surfaces of the slab, while the upper branch, with energies above 14 eV, has opposite charge polarity on the two surfaces. BP shows potential as a future 2D optoelectronic material and has possible applications in plasmonic devices.
Squaring the Magic
We construct and classify all possible Magic Squares (MS's) related to Euclidean or Lorentzian rank-3 simple Jordan algebras, both on normed division algebras and split composition algebras. Besides the known Freudenthal-Rozenfeld-Tits MS, the single-split Günaydin-Sierra-Townsend MS, and the double-split Barton-Sudbery MS, we obtain 7 other novel Euclidean and 10 novel Lorentzian MS's. We elucidate the role and the meaning of the various non-compact real forms of Lie algebras entering the MS's as symmetries of theories of Einstein-Maxwell gravity coupled to non-linear sigma models of scalar fields, possibly endowed with local supersymmetry, in D = 3, 4 and 5 space-time dimensions. In particular, such symmetries can be recognized as the U-dualities or the stabilizers of scalar manifolds within space-time with standard Lorentzian signature or with other, more exotic signatures, also relevant to suitable compactifications of the so-called M*- and M′-theories. Symmetries pertaining to some attractor U-orbits of magic supergravities in Lorentzian space-time also arise in this framework.
Introduction
Magic Squares (MS's), arrays of Lie algebras enjoying remarkable symmetry properties under reflection with respect to their main diagonal, were discovered a long time ago by Freudenthal, Rozenfeld and Tits [1,2,3], and their structure and fascinating properties have been studied extensively in mathematics and mathematical physics, especially in relation to exceptional Lie algebras (see e.g. [4,5,6,7,8,9,10,11,12]).
Following the seminal papers by Günaydin, Sierra and Townsend [13,14], MS's have been related to the generalized electric-magnetic (U -)duality 1 symmetries of particular classes of Maxwell-Einstein supergravity theories (MESGT's), called magic (see also [17,18,19,20,21]). In particular, non-compact, real forms of Lie algebras, corresponding to non-compact symmetries of (super)gravity theories, have become relevant as symmetries of the corresponding rank-3 simple Jordan algebras [22], defined over normed division (A = R, C, H, O) or split (A S = R, C S , H S , O S ) composition algebras [23].
Later on, some other MS's have been constructed in the literature through the exploitation of Tits' formula [2] (cfr. (2.1) below). On the other hand, the role of Lorentzian rank-3 simple Jordan algebras in constructing unified MESGT's in D = 5 and 4 Lorentzian space-time dimensions (through the determination of the cubic Chern-Simons F F A coupling in the Lagrangian density) has been investigated in [24,25,26].
In the present paper, we focus on Tits' formula (and its trialitarian reformulation, namely Vinberg's formula [4]; cfr. (2.17) below), and construct and classify all possible MS structures consistent with Euclidean or Lorentzian rank-3 simple Jordan algebras. We also elucidate the MS structure, in terms of maximal and symmetric embeddings on their rows and columns.
It should be remarked that most of the MS's which we determine (classified according to the sequences of algebras entering their rows and columns) are new and never appeared in the literature before. Indeed, as mentioned above, before the present survey only particular types of MS's, exclusively related to Euclidean Jordan algebras, were known, namely the original Freudenthal-Rozenfeld-Tits (FRT) MS L_3(A, B) [1,2,3], the single-split supergravity Günaydin-Sierra-Townsend (GST) MS L_3(A_S, B) [13], and the double-split Barton-Sudbery (BS) MS L_3(A_S, B_S) [8] (which also appeared in [27]). Besides these, only a particular "mixed" MS (denoted as L_3(Ã, B) in our classification; see below) recently appeared in [21], in the framework of an explicit construction of a manifestly maximally covariant symplectic frame for the special Kähler geometry of the scalar fields of D = 4 magic MESGT's. The entries of the last row/column of the magic squares have also been computed in [28], depending on the norm of the composition algebras involved.
Furthermore, we elucidate the role and the meaning of the various non-compact, real forms of Lie algebras as symmetries of Einstein-Maxwell gravity theories coupled to non-linear sigma models of scalar fields, possibly endowed with local supersymmetry. We consider U-dualities in D = 3, 4 and 5 space-time dimensions, with the standard Lorentzian signature or with other, more exotic signatures, such as the Euclidean one and others with two timelike dimensions. Interestingly, symmetries pertaining to particular compactifications of 11-dimensional theories alternative to M-theory, namely to the so-called M*-theory and M′-theory [29,30], appear in this framework.
Frequently, the Lie algebras entering the MS's also enjoy an interpretation as stabilizers of certain orbits of an irreducible representation of the U -duality itself, in which the (Abelian) field strengths of the theory sit (possibly, along with their duals). The stratification of the related representation spaces under U -duality has been extensively studied in the supergravity literature, starting from [31,32] (see e.g. [33] for a brief introduction), in relation to extremal black hole solutions and their attractor behaviour (see e.g. [34] for a comprehensive review).
A remarkable role is played by exceptional Lie algebras. It is worth observing that the particular non-compact real forms f_4(−20) and e_6(−14), occurring as particular symmetries of flux configurations supporting non-supersymmetric attractors in magic MESGT's, can be obtained in the framework of MS's only by considering Lorentzian rank-3 Jordan algebras on division or split algebras. Thus, the present investigation not only classifies all MS's based on rank-3 Euclidean or Lorentzian simple Jordan algebras, but also clarifies their role in generating non-compact symmetries of the corresponding (possibly, locally supersymmetric) theories of gravity in various dimensions and signatures of space-time.
The plan of the paper is as follows. In Sec. 2, we recall some basic facts and definitions on rank-3 (alias cubic) Jordan algebras and MS's, and present Tits' and Vinberg's formulae, which will be crucial for our classification.
Then, in Sec. 3 we compute and classify all 4 × 4 MS's based on rank-3 simple (generic) Jordan algebras of Euclidean type. We recover the known FRT, GST and BS MS's as well as 7 other independent MS arrays, and we analyze the role of the corresponding symmetries in (super)gravity theories.
Sec. 4 deals with rank-3 simple (generic) Jordan algebras of Lorentzian type, and with the corresponding MS structures, all previously unknown. In particular, the Lorentzian FRT MS (Table 11), which is symmetric and contains only non-compact Lie algebras, is relevant to certain (nonsupersymmetric) attractors in the corresponding theory.
A detailed analysis of the MS structure, and further group-theoretical and physical considerations, are given in the concluding Sec. 5.
Magic Squares and Jordan Algebras
We start by briefly recalling the definition of a magic square: a Magic Square (MS) is an array of Lie algebras L(A, B), where A and B are normed division or split composition algebras which label the rows and columns, respectively. The entries of L(A, B) are determined by Tits' formula [2]:

L(A, B) = Der(A) ⊕ Der(J_B) ∔ (A′ ⊗ J′_B).    (2.1)

The symbol ⊕ denotes a direct sum of algebras, whereas ∔ stands for a direct sum of vector spaces. Moreover, Der are the linear derivations, with J_B we indicate the Jordan algebra on B, and the prime amounts to considering only traceless elements.
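At the level of dimensions, Tits' formula can be checked with a short computation; the following sketch (using the standard dimensions of the composition algebras, their derivation algebras and the traceless rank-3 Jordan algebras, which coincide for the division and split forms) reproduces the well-known FRT dimension pattern:

```python
# Dimension check of Tits' formula:
#   dim L(A,B) = dim Der(A) + dim Der(J_B) + dim A' * dim J'_B,
# with dim A' = dim A - 1 and dim J'_B = 3*dim B + 2 (traceless 3x3
# Hermitian matrices over B). These dimensions are the same for the
# division and split forms of each composition algebra.
DIM = {"R": 1, "C": 2, "H": 4, "O": 8}      # dim of the composition algebra
DER_A = {"R": 0, "C": 0, "H": 3, "O": 14}   # dim Der(A)
DER_J = {"R": 3, "C": 8, "H": 21, "O": 52}  # dim Der(J_B) = dim Aut(J_B)

def tits_dim(A, B):
    return DER_A[A] + DER_J[B] + (DIM[A] - 1) * (3 * DIM[B] + 2)

algebras = ["R", "C", "H", "O"]
square = [[tits_dim(A, B) for B in algebras] for A in algebras]
for A, row in zip(algebras, square):
    print(A, row)
# Last row: 52, 78, 133, 248 -- the dimensions of f4, e6, e7, e8 --
# and the array is symmetric, consistent with L(A,B) = L(B,A).
```

The symmetry of the resulting array is the dimensional shadow of the trialitarian (Vinberg) rewriting of the formula discussed below.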
In order to understand all these ingredients of Tits' formula (2.1), it is necessary to introduce some notation first. The octonions are defined through the isomorphism O ≅ ⟨1, e_1, ..., e_7⟩_R, where ⟨·⟩_R denotes the real span. The multiplication rule of the octonions is described by the Fano plane, and O′ denotes the imaginary octonions. The split octonions O_S can be obtained e.g. by substituting the imaginary units e_i → ẽ_i, i = 4, 5, 6, 7, so that they satisfy ẽ_i² = 1 instead of e_i² = −1 (see e.g. [35]). If the quaternions H and the complex numbers C are represented e.g. by the isomorphisms H ≅ ⟨1, e_1, e_2, e_3⟩_R and C ≅ ⟨1, e_1⟩_R, the split quaternions H_S and the split complex numbers C_S can be represented by the isomorphisms H_S ≅ ⟨1, e_1, ẽ_5, ẽ_6⟩_R and C_S ≅ ⟨1, ẽ_4⟩_R. As for the octonions, the prime denotes the purely imaginary quaternions H′ and complex numbers C′, respectively. An inner product can be defined on any of the above division algebras A as ⟨x_1, x_2⟩ = x_1 x̄_2 + x_2 x̄_1, where the conjugation x̄ changes the sign of the imaginary part. The algebra of derivations Der(A) is given by:
Der(A) = { D ∈ End(A) : D(x_1 x_2) = D(x_1) x_2 + x_1 D(x_2) },    (2.4)
i.e. by the maps satisfying the Leibniz rule. Then, if L and R respectively denote the left and right translations in A, a derivation D_{x_1,x_2} ∈ Der(A) can be constructed from x_1, x_2 ∈ A as

D_{x_1,x_2} = [L_{x_1}, L_{x_2}] + [L_{x_1}, R_{x_2}] + [R_{x_1}, R_{x_2}],

which, when applied to an element x_3 ∈ A, becomes

D_{x_1,x_2}(x_3) = [[x_1, x_2], x_3] − 3 ((x_1 x_2) x_3 − x_1 (x_2 x_3)).

The main ingredient entering Tits' formula (2.1) is the Jordan algebra J [22,23], which is defined in the following way: a Jordan algebra J is a vector space defined over a ground field F, equipped with a bilinear product • satisfying

x • y = y • x,    (2.6)
x² • (x • y) = x • (x² • y).    (2.7)

The Jordan algebras relevant for the present investigation are rank-3 Jordan algebras J_3 over F = R, which also come equipped with a cubic norm N, satisfying N(λx) = λ³ N(x) for all λ ∈ R and x ∈ J_3. There is a general prescription for constructing rank-3 Jordan algebras, due to Freudenthal, Springer and Tits [36,37,38], for which all the properties of the Jordan algebra are essentially determined by the cubic norm N (for a sketch of the construction see also [39]).
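As an illustrative aside, the composition-algebra structure recalled above can be verified numerically; the following sketch builds O by the standard Cayley-Dickson doubling (the basis ordering and sign conventions are one common choice, not necessarily the Fano-plane labelling used in the text) and checks the composition property of the norm as well as non-associativity:

```python
import random

# Hedged sketch: octonion arithmetic via Cayley-Dickson doubling,
#   (a, b)(c, d) = (a c - conj(d) b, d a + b conj(c)),
# built up recursively from the reals (floats). Level 0, 1, 2, 3
# corresponds to R, C, H, O.
def conj(x):
    if isinstance(x, float): return x
    a, b = x
    return (conj(a), neg(b))

def neg(x):
    if isinstance(x, float): return -x
    return (neg(x[0]), neg(x[1]))

def add(x, y):
    if isinstance(x, float): return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    if isinstance(x, float): return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def norm2(x):
    """Squared norm |x|^2; multiplicative for R, C, H, O."""
    if isinstance(x, float): return x * x
    return norm2(x[0]) + norm2(x[1])

def rand(level):
    if level == 0: return random.uniform(-1.0, 1.0)
    return (rand(level - 1), rand(level - 1))

random.seed(0)
x, y, z = (rand(3) for _ in range(3))  # three random octonions
assert abs(norm2(mul(x, y)) - norm2(x) * norm2(y)) < 1e-9  # composition
assoc_defect = add(mul(mul(x, y), z), neg(mul(x, mul(y, z))))
print("octonions are non-associative:", norm2(assoc_defect) > 1e-9)
```

Generic octonions have a nonzero associator, which is exactly why Der(O) is as large as g_2.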
In the present investigation, we realize a rank-3 Jordan algebra J_B over the division or split algebra B as the set of all 3 × 3 matrices J with entries in B satisfying

η J† η = J,    η = diag(ε, 1, 1),

with ε = 1 for the Euclidean Jordan algebra J^B_3 and ε = −1 for the Lorentzian Jordan algebra J^B_{1,2} (see e.g. [24]); thus J has real diagonal entries a_i ∈ R and off-diagonal entries determined by x_i ∈ B, i = 1, 2, 3. Thus, out of all the Jordan algebras in the classification of [23], we restrict ourselves to the consideration of all the simple rank-3 Jordan algebras except for the non-generic case of J = R itself. The (commutative) Jordan product • satisfying (2.6)-(2.7) is realized as the symmetrized matrix multiplication

J_1 • J_2 = (1/2)(J_1 J_2 + J_2 J_1).

It is then possible to introduce an inner product on the Jordan algebra, ⟨J_1, J_2⟩ = Tr(J_1 • J_2). As an example, for both the rank-3 Jordan algebras J^O_3 and J^{O_S}_3, the relevant vector space is the representation space 27 pertaining to the fundamental irrep of E_6(−26) resp. E_6(6), and the cubic norm N is realized in terms of the completely symmetric invariant rank-3 tensor d_IJK in the 27 (I, J, K = 1, ..., 27):

N(J) = d_IJK J^I J^J J^K.    (2.14)

A detailed study of the rank-3 totally symmetric invariant d-tensor of Lorentzian rank-3 Jordan algebras can be found in [24]. The last important ingredient entering Eq. (2.1) is the Lie product [·,·], which extends the multiplication structure also to A′ ⊗ J′_B, thus endowing L(A, B) with the structure of a (Lie) algebra; its general explicit expression can be found e.g. in Eq. (2.5) of [12]. Tits' formula (2.1) can be rewritten in a way more symmetric in A and B by generalizing the concept of derivations to that of triality (see e.g. [4,35,11]). This leads to Vinberg's formula [4]:

L(A, B) = Tri(A) ⊕ Tri(B) ∔ 3(A ⊗ B),    (2.17)

which implies

L(A, B) ≅ L(B, A),    (2.18)

a relation which will be useful in the subsequent treatment. A remarkable property of Jordan algebras is that they have various symmetry groups, which are relevant to supergravity theories and appear as entries in the MS's.
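A similarly elementary check of the Jordan axioms (2.6)-(2.7) can be performed for the simplest instance of the above construction, J^R_3, realized by real symmetric 3 × 3 matrices (an illustrative sketch):

```python
import numpy as np

# Hedged numerical check of the Jordan axioms (2.6)-(2.7) for J^R_3:
# real symmetric 3x3 matrices with the symmetrized product
#   X o Y = (XY + YX)/2.
rng = np.random.default_rng(1)

def jordan(X, Y):
    return 0.5 * (X @ Y + Y @ X)

def rand_sym():
    A = rng.normal(size=(3, 3))
    return A + A.T  # symmetric, hence eta-Hermitian for eta = identity

X, Y = rand_sym(), rand_sym()
X2 = jordan(X, X)

# Commutativity (2.6): X o Y = Y o X
assert np.allclose(jordan(X, Y), jordan(Y, X))
# Jordan identity (2.7): X^2 o (X o Y) = X o (X^2 o Y)
assert np.allclose(jordan(X2, jordan(X, Y)), jordan(X, jordan(X2, Y)))
# For this realization the cubic norm is the determinant (up to
# normalization conventions):
print("N(X) =", np.linalg.det(X))
```

Note that the product is commutative but not associative, which is the defining feature separating Jordan algebras from associative matrix algebras.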
The derivations algebra Der(J_B) generates the automorphism group Aut(J_B) of the Jordan algebra. The structure algebra Str(J_B), which for a general algebra is defined to be the Lie algebra generated by the left and right multiplication maps, in the case of a Jordan algebra can be expressed as [8]

Str(J_B) = Der(J_B) ∔ L_{J_B},

and its Lie algebra structure follows from [D, L_x] = L_{D(x)}. The reduced structure algebra Str_0(J_B) is then defined as the quotient of Str(J_B) by the subspace of multiples of L_1, with 1 the identity of J_B; it can be verified that Str_0(J_B) = Der(J_B) ∔ L_{J′_B}. The conformal algebra Conf(J_B) is the vector space [41,42]

Conf(J_B) = Str(J_B) ∔ 2 J_B,    (2.20)

and its Lie algebra structure is defined by suitable brackets [41,42]. Since each of the rows and columns of a MS can be labeled by one of four possible sequences of division and split composition algebras, this gives rise a priori to sixteen possible structures of Euclidean MS L_3.
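The dimension bookkeeping of the chain Der ⊂ Str ⊂ Conf can be verified directly (a sketch using the standard dimensions dim J^B_3 = 3 dim B + 3 and dim Der(J^B_3) = 3, 8, 21, 52 for B = R, C, H, O):

```python
# Hedged dimension check of the chain Der(J) in Str(J) in Conf(J) for
# the rank-3 Jordan algebras J^B_3:
#   dim Str  = dim Der + dim J   (adding the left multiplications L_J),
#   dim Conf = dim Str + 2*dim J (adding translations and "special
#   conformal" generators a la Koecher).
DIM_B = {"R": 1, "C": 2, "H": 4, "O": 8}
DER_J = {"R": 3, "C": 8, "H": 21, "O": 52}

dims = {}
for B, d in DIM_B.items():
    dim_J = 3 * d + 3
    dim_str = DER_J[B] + dim_J
    dim_conf = dim_str + 2 * dim_J
    dims[B] = (dim_str, dim_conf)
    print(B, dims[B])
# e.g. B = O: Str has dimension 79 (e6(-26) plus dilatations) and Conf
# has dimension 133, matching e7 as in the third row of the FRT MS.
```

The resulting Conf dimensions 21, 35, 66, 133 are exactly those of sp(6,R), su(3,3), so*(12) and e_7(−25), the third row of the FRT MS.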
However, by virtue of (2.17) and (2.18), it is enough to explicitly list only the magic squares for which the number of split division algebras labeling the rows is greater than or equal to that of the columns. This yields only ten different structures of Euclidean MS L_3, which we list and analyze below.
The subscript in brackets denotes the character χ of the real form under consideration, namely the difference between the number of non-compact and compact generators [45]. Thus, in the case of compact real forms (as for all entries of the FRT MS), the character is nothing but the opposite of the dimension of the algebra/group itself. The fourth row displays the quasi-conformal symmetries of J^B_3 [41,42], which are the U-duality symmetries of N = 4 magic theories in D = (2, 1) (i.e. Lorentzian) space-time dimensions [13,46], based on the extended Freudenthal triple system (EFTS) T(J^B_3). The third row displays Conf(J^B_3), the conformal symmetries of J^B_3 (2.20) [41,42]: • They are the U-duality symmetries of N = 2, D = (3, 1) magic MESGT's [13,14] based on the Freudenthal triple system (FTS) M(J^B_3) [47].
• Up to a commuting Ehlers SL(2, R) factor, they are the stabilizers of the extended scalar manifold of the T J B 3 -based magic theories in D = (3, 0) (i.e. Euclidean) space-time dimensions [48,49].
• However, other (exotic) supergravity theories can be considered, obtained from suitable compactifications of theories in 11 dimensions alternative to the usual D = (10, 1) M-theory, but still consistent with the existence of a real 32-dimensional spinor, namely M*-theory in D = (9, 2) and M′-theory in D = (6, 5) [29]. By exploiting the analysis of [30], Conf(J^B_3) (up to the Ehlers SL(2, R) factor) can also be regarded as the stabilizer of the extended scalar manifold of the corresponding T(J^B_3)-based theories. In this case, the following embedding of symmetric cosets holds, where "H*" denotes the para-quaternionic structure of the corresponding spaces, which have vanishing character (χ = 0; see e.g. [50] for a recent study of such manifolds).
The first row displays Aut(J^B_3) = mcs(Str_0(J^B_3)), namely the automorphisms of J^B_3: • They are the stabilizers of the scalar manifolds of N = 2, D = (4, 1) magic MESGT's [13,14] based on J^B_3.
• Considering more exotic theories, Aut(J^B_3) can also be regarded as the stabilizer of the scalar manifolds of the same J^B_3-based theory in D = (0, 5) M′ dimensions.
The Barton-Sudbery
The BS MS L_3(A_S, B_S) [8], which also appeared more recently in [27], is given in Table 3. Note that for the J^{O_S}_3-based theory, the relevant scalar manifold can be regarded as a particular, non-compact pseudo-Riemannian version of the rank-2 real special symmetric manifold.
5.-10. All the other Euclidean MS's L_3 can be computed (to our knowledge, they never appeared in the literature before), and we report them in Tables 5-10. It can be noticed that the MS's given by Table 6 and by Table 9 are symmetric, while all the other ones are non-symmetric. By suitably generalizing the approach of [21] to non-compact spaces, these MS's may be used to explicitly construct pseudo-Riemannian scalar manifolds of theories of Maxwell-Einstein (super)gravity in non-Lorentzian space-times, also obtained from compactifications of M*-theory or M′-theory. For instance, the symmetric MS of Table 6 can be used to determine a (maximally) manifestly E_6(2) × U(1)-covariant construction of the rank-3 pseudo-Riemannian special Kähler manifold E_7(7)/(E_6(2) × U(1)), which is a non-compact version of the aforementioned Riemannian special Kähler symmetric coset.
4 Magic Squares L_{1,2} over rank-3 Lorentzian Jordan Algebras
The first "mixed" MS
We will now exploit Tits' formula (2.1) in order to construct all possible MS's L_{1,2} based on rank-3 Lorentzian Jordan algebras over the division algebras R, C, H, O and the split algebras C_S, H_S and O_S. As discussed at the start of Sec. 3, by virtue of (2.17) and (2.18), it is enough to explicitly list only the magic squares for which the number of split division algebras labeling the rows is greater than or equal to that of the columns.
We would like to point out that, to our knowledge, these MS's never appeared in the literature before. Interestingly, their study has been motivated also by the investigation of the stabilizers of the class of "large" non-BPS Z = 0 U-orbits in magic MESGT's in D = (3, 1) dimensions [51], which indeed provide the third row of L_{1,2}(A, B), the Lorentzian counterpart of the FRT MS L_3(A, B) [1,2,3] given in Table 1.
The second row displays: • the stabilizer of the "large" non-BPS U-orbit (with Z_H = 0) of the M(J^B_3)-based magic MESGT's in D = (3, 1) dimensions [31,51]. In this case, the following embedding of symmetric cosets holds, where "H" denotes the quaternionic structure of the corresponding spaces. Note that the entries in the fourth row of Table 2 maximally embed (by an SU(2) factor) the non-compact real forms in the third row. The second, third and fourth rows match the corresponding rows of its Euclidean counterpart, namely of the GST MS L_3(A_S, B) given in Table 2.
On the other hand, the first row coincides with the first row of the Lorentzian FRT MS L_{1,2}(A, B) given in Table 11. The Lorentzian BS MS coincides with Table 3, up to the first entry (from the left) in the first row. The Lorentzian "mixed" MS is non-symmetric (L_{1,2}(Ã, B) ≠ L_{1,2}(Ã, B)^T). Its third and fourth rows coincide with those of its Euclidean counterpart, namely of the first "mixed" MS L_3(Ã, B), given in Table 4. On the other hand, its first and second rows match those of the Lorentzian FRT MS L_{1,2}(A, B), given in Table 11.
The Lorentzian BS
5.-10. All the other Lorentzian MS's L_{1,2} can be computed, and we report them in Tables 15-20. It can be noticed that the MS's given by Table 16 and by Table 19 are symmetric, while all the other ones are non-symmetric. By suitably generalizing the approach of [21] to non-compact spaces, also these MS's may be used to explicitly construct pseudo-Riemannian scalar manifolds of theories of Maxwell-Einstein (super)gravity in non-Lorentzian space-times, also obtained from compactifications of M*-theory or M′-theory.
Analysis
Below we list some observations on common properties, as well as differences, between the two sets of 4 × 4 MS's over rank-3 (simple, generic) Jordan algebras of Euclidean (Tables 1-10) and Lorentzian (Tables 11-20) type.
1. For L_3(A, B) and L_{1,2}(A, B) (namely for the FRT MS - Table 1 - and its Lorentzian analogue - Table 11 -), the symmetries in the second row/column are embedded into the symmetries in the third one with a factor U(1) or SO(2), while the symmetries in the third row/column are embedded into the symmetries in the fourth one with a factor SU(2). For L_3(A_S, B) and L_{1,2}(A_S, B) (namely for the single-split GST MS - Table 2 - and its Lorentzian analogue - Table 12 -), the symmetries in the second column (row) are embedded into the symmetries in the third column (row) with a factor U(1) (SO(1,1)), whereas the symmetries in the third column (row) are embedded into the symmetries in the fourth column (row) with a factor SU(2) (SU(1,1)). And, similarly, for L_3(A_S, B_S) and L_{1,2}(A_S, B_S) (namely for the double-split BS MS - Table 3 - and its Lorentzian analogue - Table 13 -), the symmetries in the second row/column are embedded into the symmetries in the third one with a factor SO(1,1), while the symmetries in the third row/column are embedded into the symmetries in the fourth one with a factor SU(1,1). Analogous results hold for all other Euclidean (Tables 4-10) and Lorentzian (Tables 11-20) MS's. The rationale of all this is the following: when the embedding of H into G in the next row/column of the MS contains an extra factor T = U(1), SO(1,1), SU(2) or SU(1,1), this reflects the structure of the symmetric coset G/(H × T), which then carries a complex (special Kähler), (special) pseudo-Kähler, quaternionic or para-quaternionic structure, respectively.
2. When all the aforementioned commuting factors are taken into account, all the embeddings in the MS's are maximal and symmetric [45].
3. From Tits' formula (2.1), it can be seen that the factor SO(2) or SO(1,1), needed to maximally embed the symmetries in the second row into those in the third one, is in turn embedded respectively into Aut(H) = SO(3) or Aut(H_S) = SL(2, R); on the other hand, the factor SU(2) or SU(1,1), needed to maximally embed the symmetries in the third row into those in the fourth one, is in turn embedded respectively into Aut(O) = G_2(−14) or Aut(O_S) = G_2(2). The relevant embeddings are maximal and symmetric. Therefore, for each of the embeddings of a row/column into the next one, these generators always have the same origin.
4. The symmetries of Euclidean and Lorentzian rank-3 Jordan algebras over division algebras can be read from the rows of the corresponding single-split MS, namely from the GST MS L_3(A_S, B) (Table 2) and from its Lorentzian counterpart, i.e. the MS L_{1,2}(A_S, B) (Table 12). Since the second, third and fourth rows of L_3(A_S, B) and L_{1,2}(A_S, B) match, the reduced structure, conformal and quasi-conformal symmetries of Euclidean and Lorentzian rank-3 Jordan algebras over division algebras coincide. This is consistent with the analysis of [24,25].
5. Analogously, the symmetries of Euclidean and Lorentzian rank-3 Jordan algebras J^{B_S}_3 over split algebras can be read from the rows of the corresponding double-split MS, namely from the BS MS L_3(A_S, B_S) (Table 3) and from its Lorentzian counterpart, i.e. the MS L_{1,2}(A_S, B_S) (Table 13). For Euclidean rank-3 Jordan algebras, the corresponding symmetric coset can be identified also with the "large" non-BPS U-orbit (with Z_H = 0) of the J^A_3-based magic MESGT in D = (4, 1) dimensions [52,53]. On the other hand, Conf(J^A_{1,2})/K(J^A_{1,2}) (5.12), whose stabilizer is given (up to a U(1) factor) by the second row of the Lorentzian FRT MS L_{1,2}(A, B) (Table 11), is the Koecher upper half plane of J^A_{1,2} [25], which can be identified also with the "large" non-BPS U-orbit (with Z_H = 0) of the M(J^B_3)-based magic MESGT's in D = (3, 1) dimensions [31,51]. Moreover, by adding an additional U(1) factor in the stabilizer, Conf(J^A_{1,2})
A chemical probe unravels the reactive proteome of health-associated catechols
Catechol-containing natural products are common constituents of foods, drinks, and drugs. Natural products carrying this motif are often associated with beneficial biological effects such as anticancer activity and neuroprotection. However, the molecular mode of action behind these properties is poorly understood. Here, we apply a mass spectrometry-based competitive chemical proteomics approach to elucidate the target scope of catechol-containing bioactive molecules from diverse foods and drugs. Inspired by the protein reactivity of catecholamine neurotransmitters, we designed and synthesised a broadly reactive minimalist catechol chemical probe based on dopamine. Initial labelling experiments in live human cells demonstrated broad protein binding by the probe, which was largely outcompeted by its parent compound dopamine. Next, we investigated the competition profile of a selection of biologically relevant catechol-containing substances. With this approach, we characterised the protein reactivity and the target scope of dopamine and ten biologically relevant catechols. Strikingly, proteins associated with the endoplasmic reticulum (ER) were among the main targets. ER stress assays in the presence of reactive catechols revealed an activation of the unfolded protein response (UPR). The UPR is highly relevant in oncology and cellular resilience, which may provide an explanation of the health-promoting effects attributed to many catechol-containing natural products.
Covalent binding to proteins is facilitated by the tendency of catechols to oxidise to reactive ortho-quinones in aqueous conditions at physiological pH, 22 which are then subject to nucleophilic attack by amine or thiol side-chains in proteins. Specifically, this has been studied in depth for dopamine (DA), where aberrant protein modification has been implicated as a potential driver of neuron loss in the pathogenesis of Parkinson's disease (PD). 19,20 However, much remains unknown about the identity of the specific protein targets, the binding sites, and the reactive molecular species. 17 Chemical activity-based probes 23,24 have recently been published of dopamine (DA), 25 capsaicin (CP), 11 n-octyl caffeate, 14 3,4-dihydroxyphenyl acetic acid (DPA), 26 and 6-hydroxydopamine (6-OHDA), 27 the latter being a neurotoxic oxidation product of DA. In general, the analysis of protein modification by DA-based catechols using mass spectrometry (MS) methods is hampered by the precipitation of DA-protein adducts, 20 which interferes with their detection. 17 DA quinone (DAQ) is a reactive molecule that can undergo further chemical reactions or covalently modify cellular structures. For instance, it can cyclise via intramolecular nucleophilic attack by the amine side chain to form leukodopaminochrome, which may further polymerise to form the insoluble natural pigment eumelanin. DAQ is also subject to nucleophilic attack by protein side chains such as cysteine in a Michael-type addition, resulting in protein post-translational modifications (PTMs) which can deactivate enzyme activity and cause protein aggregation. 17 Protein-bound DA can be oxidised again and add to further DA molecules via their nucleophilic amine, forming an insoluble protein-melanin conjugate termed neuromelanin (Fig. 1a). 17
Bearing in mind the propensity of DA and DA-protein adducts for precipitation, we designed novel minimalist catechol probes for global target identification by functionalisation of the DA amino group with an acyl alkyne handle. Since these probes lack a nucleophilic residue, they are readily oxidised to an ortho-quinone but, unlike DA, cannot undergo side-reactions such as cyclisation or polymerisation to insoluble DA-protein aggregates. With a preserved native protein reactivity, this probe design facilitated the direct target identification of DA by chemical proteomics in live cells by applying the probe in competition with DA and with a suite of structurally diverse catechols (Fig. 1b). Chemical proteomics revealed ER-associated proteins among the top hits of several catechol natural products, which was confirmed by cellular unfolded protein response (UPR) assays. The modulation of ER stress pathways by some of these compounds provides an intriguing explanation for their anticancer activities.
Fig. 1 Probe design and workflow for the identification of catechol protein targets. (a) DA is oxidised to DAQ, which cyclises to aminochrome by intramolecular nucleophilic attack of the amine. Aminochrome polymerises to form insoluble eumelanin. DAQ may also react with a nucleophilic amino acid residue of a protein such as a cysteine. Following protein modification, DA can undergo multiple further reactions with other DA molecules or proteins, leading to the formation of insoluble protein-dopamine conjugates (neuromelanin). 4,5 An acylated probe such as DA-P3 lacks the nucleophilic amine, trapping it in the quinone state and impeding the formation of insoluble probe-protein aggregates. (b) Chemical proteomics workflow applied for target identification. Live cells are treated with the chemical probe (blue circle), DMSO, or an excess of the catechol of interest (purple pentagon) plus the chemical probe.
Following protein extraction, the labelled proteome is ligated by CuAAC to biotin azide (cyan shape), enriched on avidin beads (brown circle), digested, and peptides are analysed by LC-MS/MS.
DA probe design and synthesis
In order to prevent undesired side reactions, we devised a novel probe design based on alkynylated DA derivatives lacking the free amine. This strategy enables quinone formation, ensures the desired protein reactivity, and prevents polymerisation, which are important prerequisites for selective target identification in competitive studies with diverse catechols. Following this approach, we synthesised three DA-probes, DA-P1, DA-P2, and DA-P3, with a varying chain length of five to seven carbons by standard amide coupling using EDC·HCl and HOBt. For a negative control lacking the 3′-OH group, DA-P4, we acylated tyramine with 5-hexynoic acid (Fig. 2a and S1a†). Next, we tested general protein reactivity of the probes with purified DJ-1, which has three reactive cysteines 28 and has been reported to be modified by DAQ. 15,17,29 DJ-1 (5 μM) was incubated with a 100-fold excess of DA-P2, DA-P3, DA-P4, DA, or the equivalent amount of DMSO as control, and analysed by high resolution MS (Fig. S1b†). Indeed, DJ-1 was modified by two to three molecules of DA-P2 and DA-P3. DA-P4, lacking an intact catechol group, showed no protein binding. Furthermore, no modification was observed on DJ-1 treated with DA, which is in line with previous studies that failed to detect DA-modifications on proteins by MS methods. 17,18
Fig. 2 (c and d) MS data from three replicates were analysed by MaxLFQ and filtered for proteins identified in three replicates in at least one condition. Samples were compared using a two-sided two-sample t-test. Proteins that were enriched against DMSO are highlighted in blue, proteins that were outcompeted by DA are highlighted in cyan. Proteins were considered significant when they were enriched more than four-fold (log2(enrichment) ≥ 2) with a p-value of less than 0.01 (−log10(p-value) ≥ 2). See Table S3† for details on identified proteins.
As DA modifications have been proposed to be reversible 18 and may lead to protein precipitation, 17 labelling was performed with DA-P3 in competition with DA to reveal DA modification by probe displacement. For this, DJ-1 (1 μM) was treated with different concentrations of DA (12.5-200 μM) followed by incubation with DA-P3 (25 μM). After copper-catalysed azide-alkyne cycloaddition (CuAAC) 30,31 to rhodamine azide, labelled DJ-1 was separated by SDS-PAGE, and the fluorescence intensity of the corresponding protein band indeed decreased with increasing DA concentration. No protein precipitation was observed after probe treatment and competition with up to 50 μM DA (Fig. S1c†). However, lower Coomassie band intensities were visible at 100-200 μM DA, indicating the formation of insoluble DA-protein aggregates 25 at high concentrations.
To analyse overall protein binding in live cells, HEK293 cells were treated in situ with all three catechol probes (Fig. 2a). A fluorescence reporter was appended to the labelled proteins by CuAAC after lysis, and labelled proteins were visualised by fluorescent SDS-PAGE (Fig. 2b). With 1 h treatment, concentration-dependent labelling was visible for all probes starting from 15 µM. Overall, DA-P3 showed the strongest labelling and DA-P1 the weakest, with a comparable labelling pattern across the probes. Importantly, no significant protein precipitation was observed in the Coomassie staining even at 100 µM concentration. Next, to facilitate an MS-based comparison of probes DA-P1-3, the proteome was treated with 15 µM compound for 1 h and labelled proteins were subsequently ligated to a biotin handle after cell lysis. Following enrichment of the labelled proteome on avidin beads and tryptic digest, the resulting peptides were analysed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) with label-free quantification (Fig. 1b). 34 As already observed via gel-based analysis, the MS data revealed a large overlap of identified hits across all probes, and the length of the alkyl chain correlated with the number of significant hits whereas it largely did not influence the identity of enriched proteins (Fig. S2a-c, Table S1†). We next extended treatment to 3 h to account for potential differences in uptake or labelling kinetics (Fig. S3a-c, Table S2†). While the correlation between chain length and the number of identified hits remained, the overall number of labelled proteins diminished from 1 h to 3 h, indicating that the modifications introduced on the proteins are relatively short-lived. We chose treatment with DA-P3 for 1 h for all following experiments as these conditions resulted in the strongest labelling.
Competitive labelling experiments reveal DA binding proteins

HEK293 cells produce no endogenous DA (ref. 20 and 32) but have been reported to take up DA and other monoamine neurotransmitters. 33 We therefore labelled intact HEK293 cells with DA-P3 in competition with DA. Treatment of HEK293 cells with 15 µM DA-P3 for 1 h resulted in an at least fourfold enrichment of 236 proteins (−log10(p-value) ≥ 2) compared to a DMSO-treated control (Fig. 2c, Table S3†). Moreover, a 30-fold excess of DA added to the cells 1 h prior to probe addition displaced the binding of 205 proteins (Fig. 2d, Table S3†). Of all proteins enriched by DA-P3 compared to the DMSO control, 63% were significantly outcompeted by DA (Fig. 2e). These data support that DA-P3 does indeed mimic the reactivity of DA/DAQ well.
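The enrichment filter used throughout these experiments (two-sided two-sample t-test on triplicate LFQ intensities, cut-offs of at least four-fold enrichment and p < 0.01) can be sketched as follows; protein names and intensity values are illustrative, not from the dataset:

```python
# Volcano-plot style significance filter for probe-vs-DMSO LFQ intensities.
import numpy as np
from scipy import stats

def significant_hits(probe, dmso, lfc_cut=2.0, p_cut=0.01):
    """True for proteins with log2(enrichment) >= 2 and p < 0.01."""
    hits = {}
    for name in probe:
        a, b = np.log2(probe[name]), np.log2(dmso[name])
        lfc = a.mean() - b.mean()                 # log2 fold change
        t, p = stats.ttest_ind(a, b)              # two-sided two-sample t-test
        hits[name] = bool(lfc >= lfc_cut and p < p_cut)
    return hits

# Toy triplicates: one strongly enriched protein, one background protein.
probe = {"DJ1": np.array([8e6, 9e6, 8.5e6]), "BG": np.array([1.0e5, 1.2e5, 0.9e5])}
dmso  = {"DJ1": np.array([1.0e5, 1.1e5, 0.9e5]), "BG": np.array([1.0e5, 1.0e5, 1.1e5])}
print(significant_hits(probe, dmso))
```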
To link the identified targets of DA to cellular functions, we performed a gene ontology (GO) term analysis of significantly enriched proteins using the GOrilla tool. 35,36 Among the proteins identified solely by DA-P3, proteins involved in ER stress and UPR ("response to ER stress", "ER unfolded protein response") as well as PDIs ("peptide disulphide oxidoreductase activity") were enriched at least 3-fold (Fig. 2f). Similarly, GO terms related to the endoplasmic reticulum (ER) also stood out in the competition experiment with DA (Fig. 2g).
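Under the hood, GO-term over-representation tests of this kind typically reduce to a hypergeometric test against the background proteome. A hedged sketch with made-up annotation counts; only the 236 enriched proteins are taken from the text:

```python
# Hypergeometric over-representation test for a single GO term.
from scipy.stats import hypergeom

N = 5000   # background proteome size (assumed)
K = 100    # background proteins annotated with the GO term (assumed)
n = 236    # proteins enriched by DA-P3 (from the text)
k = 20     # enriched proteins carrying the term (illustrative)

fold = (k / n) / (K / N)              # fold enrichment over background
p = hypergeom.sf(k - 1, N, K, n)      # P(X >= k) under random sampling
print(round(fold, 2), p < 0.01)
```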
SH-SY5Y is a catecholaminergic neuroblastoma cell line frequently used in DA-associated neurodegeneration research. [37][38][39] Surprisingly, whilst DA-P3 showed strong labelling in SH-SY5Y, where it labelled 205 proteins compared to a DMSO control at 4 µM (Fig. S4a, Table S4†), we observed no competition by a ten-fold excess of DA (Fig. S4b, Table S4†). It is very plausible that the presence of endogenous catechols or corresponding metabolites 40 may interfere with competitive experiments. Of note, Hurben et al. have reported a similar experimental set-up using an alkynylated DA-probe (DAyne) where no competition by the parent compound DA could be observed. 25 We therefore chose HEK293 for all following experiments.
Analysis of proteome-wide DA-P3 modifications
To uncover the residues modified by DA-P3 (15 µM, in situ) as well as the mass of their modification, we clicked the labelled HEK293 proteome to isotopically labelled desthiobiotin azide (isoDTB) tags. 41 Proteins were subsequently digested, peptides were enriched on avidin beads and eluted, and modified peptides were detected by LC-MS/MS (Fig. 3a, Table S5†). 42,43 An unbiased analysis 43-45 revealed added masses of 754.4120 and 760.4206, corresponding to DA-P3 plus a heavy or a light isoDTB tag, respectively, and an additional methyl group (Fig. 3b, Table S5†). In human cells, catechol compounds such as catecholamine neurotransmitters, catechol oestrogens, or xenobiotics are methylated by catechol-O-methyltransferase (COMT) as part of a degradation pathway, explaining the mass adduct. 46 The modification was highly selective for cysteines, which constituted 98% of all detected modified residues (116 total; Fig. 3c, Table S5†). Recently, covalent binding of CP to certain proteins has been reported. 11 So far, it remains unknown if CP protein reactivity requires demethylation and ortho-quinone formation or if a direct nucleophilic attack is also possible. In fact, our data revealed that 3-O-methylated catechols directly bind to cysteine residues. This observation is in line with an oxidation to the quinone methide, followed by a nucleophilic cysteine attack (Fig. 3d). An analogous addition of glutathione to enzymatically oxidised CP has been reported, supporting this notion. [47][48][49] Although it is possible that unmethylated catechol modifications may have escaped our MS detection (despite their identification on DJ-1 with our method), the observation of 3-O-methyl catechols as protein adducts is an intriguing and unprecedented observation.
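The matching step of such an unbiased adduct search can be sketched as follows: observed peptide mass shifts are compared against candidate adduct masses within a ppm tolerance. The two candidate masses are those reported above; the generic isotopologue labels and the 10 ppm tolerance are assumptions:

```python
# Match an observed mass shift to candidate adducts within a ppm tolerance.
# Masses are the DA-P3 + isoDTB (+ CH2) adducts from the text; labels and
# tolerance are illustrative.
CANDIDATES = {
    "DA-P3 + isoDTB tag (isotopologue 1) + CH2": 754.4120,
    "DA-P3 + isoDTB tag (isotopologue 2) + CH2": 760.4206,
}

def match_adduct(observed_shift, tol_ppm=10.0):
    """Return the name of the candidate adduct within tolerance, else None."""
    for name, mass in CANDIDATES.items():
        if abs(observed_shift - mass) / mass * 1e6 <= tol_ppm:
            return name
    return None

print(match_adduct(760.4205))  # within 10 ppm of the 760.4206 adduct
print(match_adduct(755.0))     # no candidate within tolerance -> None
```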
Competitive labelling reveals targets of health-associated catechols
Catechol groups not only play crucial roles in neurotransmitters but are also widely found in plant-derived foods attributed with health-promoting properties. Previous findings have indicated that different catechols may share a common protein target space, as labelling of β-actin with a DPA-probe could be outcompeted by certain flavonoids. 26 Nevertheless, insights into the molecular targets are sparse. We thus took advantage of the broad DA-P3 reactivity and utilised it as a minimalist catechol probe in competition with a selection of health-associated catechol compounds that are often found in plant-derived foods/beverages (quercetin, QC; taxifolin, TF; epicatechin, EC; luteolin, LU; epigallocatechin gallate, EG; caffeic acid, KS) or drugs (dobutamine, DB; carbidopa, CD). CP with a methylated catechol group was included for comparison (Fig. 4a). Labelling was performed with DA-P3 (15 µM) and a 10-fold excess of the respective catechol in live HEK293 cells, followed by enrichment and quantitative LC-MS/MS analysis with label-free quantification. 34 Of the ten compounds tested, LU, CP, OL, QC, DB, and EG outcompeted probe binding at 180-286 proteins whereas CD, TF, EC, and KS showed no competition at all (Fig. 4b-e and S5, Table S6†). Interestingly, CP, carrying a 3-O-methylcatechol group, was among the reactive compounds, corroborating our results with methylated catechols (Fig. 4d-e, Table S6†).
Metabolic activity assays following treatment with the competitive catechols revealed only a moderate decrease in cell viability in the presence of EG and CP, substantiating that the observed protein enrichment is not the result of toxicity (Fig. S6†). We next tested competition by two exemplary unreactive catechols at higher concentrations. Indeed, we observed competition of DA-P3 labelling by a 100-fold excess of CD and KS with 113 and 259 proteins, respectively (Fig. S7a-b, Tables S7 and S8†). This indicates that these compounds are less reactive or cell permeable and facilitate competition only at high concentrations. Given that these higher concentrations could lead to unspecific effects, we decided to focus on competition at a 10-fold excess in the following experiments.
Across all reactive compounds, 17 proteins were consistently significantly targeted by DB, OL, QC, CP, LU, and EG, revealing an unanticipated broad overlap of target proteins susceptible to catechol modification despite the diversity of the chemical structures (Fig. 4f). Interestingly, four proteins were associated with the ER, namely ESYT1 (tethers the ER to the plasma membrane), 50,51 SPCS2 (contributes to cotranslational translocation of nascent proteins into the ER), 52 WFS1 (ER membrane glycoprotein involved in the regulation of cellular Ca2+ homeostasis), 53 and HMGCR. HMGCR is localised in the ER membrane, where it catalyses the rate-determining step in the biosynthesis of cholesterol and other isoprenoids. 54 A comparison of the frequency of GO terms 35,36 associated with significantly enriched proteins revealed 57 biological process, 21 molecular function, and 27 cellular component terms that were overrepresented in at least one competition condition (Fig. S8†). Across all categories, terms related to structural proteins (e.g., microtubule, cytoskeleton, cell adhesion) were consistently represented. Furthermore, as already observed in the previous analyses for DA-P3 and DA, ER-associated terms were enriched for selected catechols.
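A consensus target list of this kind can be derived as the intersection of the significant-hit sets from each reactive competitor. A toy sketch; apart from the four ER proteins named above, the identifiers are placeholders:

```python
# Consensus targets = proteins significantly outcompeted by every reactive
# catechol. Hit sets are illustrative, not the actual Table S6 data.
hits = {
    "DB": {"ESYT1", "SPCS2", "WFS1", "HMGCR", "X1"},
    "OL": {"ESYT1", "SPCS2", "WFS1", "HMGCR", "X2"},
    "QC": {"ESYT1", "SPCS2", "WFS1", "HMGCR", "X3"},
    "CP": {"ESYT1", "SPCS2", "WFS1", "HMGCR"},
    "LU": {"ESYT1", "SPCS2", "WFS1", "HMGCR", "X1"},
    "EG": {"ESYT1", "SPCS2", "WFS1", "HMGCR", "X2"},
}
consensus = set.intersection(*hits.values())
print(sorted(consensus))
```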
As the catechols appeared to target ER proteins, we exemplarily chose this crucial organelle for validation and hypothesised that these compounds could induce ER stress. To test this hypothesis, HEK293T cells were treated with DA-P3, DA, compounds classified as "reactive" (QC, DB, OL, CP) or "unreactive" (TF and KS), and tested for UPR induction at 100 µM. DA was additionally tested at 450 µM as this was the concentration applied in the MS experiments. The analysis was performed in three biological replicates which were in qualitative agreement (Fig. 5 and S9a-b†). ER stress is characterised by an accumulation of unfolded proteins, which triggers the activation of three signalling pathways mediated by the receptors ATF6, PERK, and IRE1α in mammalian cells. 55 The UPR is moreover typically accompanied by an increase in the expression of the immunoglobulin heavy chain-binding protein (BiP), a chaperone of the 70 kilodalton heat shock protein (Hsp70) family. 56,57 Immunoblotting of ATF6, which is cleaved upon activation, revealed no formation of the 50 kDa N-terminal cytosolic fragment in the presence of any of the tested catechols, indicating that this pathway is not activated under our experimental conditions.

[Fig. 4 caption: Labelling in HEK293 cells with DA-P3 in competition with a 10-fold excess of different catechol compounds. (a) Overview of the catechol compounds applied as competitors. (b-d) Example volcano plots of catechols showing differential protein reactivity, and CP. MS data from three replicates were analysed by MaxLFQ and filtered for proteins identified in three replicates in at least one condition. Samples were compared using a two-sided two-sample t-test. Proteins were considered significant when they were enriched more than four-fold (log2(enrichment) ≥ 2) with a p-value of less than 0.01 (−log10(p-value) ≥ 2).]
We next investigated the phosphorylation of eIF2α by immunoblotting, which monitors the activation of the PERK sensor. Treatment with the positive control tunicamycin, DA-P3, TF, QC, and OL visibly increased eIF2α phosphorylation. Moreover, the expression of BiP was upregulated in the presence of TF, OL, QC, DA, and DA-P3. Finally, we tested for the splicing of XBP1 mRNA, a process triggered by the UPR sensor IRE1α. RT-PCR of the XBP1 mRNA revealed the formation of a spliced 416 bp fragment 58 in the presence of tunicamycin, DA-P3, QC, OL, and, to a lesser extent, DB and TF. Overall, these data revealed a very strong activation of the UPR by DA-P3, and a weaker one by other compounds including QC, OL, DA, DB, and TF. No UPR activation was observed for CP and KS. The lack of UPR activation by CP despite its broad protein competition may be due to its structural differences, i.e.
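For reference, the band-size logic behind the XBP1 splicing read-out can be sketched as follows: IRE1α removes a 26-nt intron from XBP1 mRNA, so the spliced amplicon runs 26 bp shorter than the unspliced one. The 416 bp spliced fragment is from the text; the size tolerance is an assumption:

```python
# Classify an RT-PCR band as spliced or unspliced XBP1 by its size.
XBP1_INTRON = 26  # nt removed from human XBP1 mRNA by IRE1α

def classify_band(size_bp, spliced_bp=416, tol=5):
    """Assign a band to spliced/unspliced XBP1 within a bp tolerance."""
    if abs(size_bp - spliced_bp) <= tol:
        return "spliced"
    if abs(size_bp - (spliced_bp + XBP1_INTRON)) <= tol:
        return "unspliced"
    return "unknown"

print(classify_band(416), classify_band(442))  # -> spliced unspliced
```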
Conclusions
Here, we have applied a broadly reactive minimalist catechol probe to elucidate the protein reactivity of biologically relevant catechol compounds in live cells. The direct comparison of structurally diverse catechols revealed large variations in their protein reactivities. The origin of these differences remains to be resolved. Among the catechols with high protein reactivity were compounds for which, to our knowledge, broad protein reactivity has not been shown previously, for example the inotrope and clinically applied drug DB and the olive oil constituent OL. The tendency to covalently modify proteins is one of the reasons that catechols are known as "Pan Assay Interference Compounds" (PAINS), 60-62 a term describing compound classes that frequently recur as screening hits as a consequence of unspecific interference with biological assays. 60,61 Catechol-containing natural products have widely been reported as bioactive compounds in a variety of disease contexts 63 but the reports often fail to take into account the promiscuity in terms of protein binding. 60 Our results illustrate the scope of proteins modified by catechols as well as the low degree of selectivity.
A potential limitation of our and other reactivity-based labelling tools is that protein modifications could be substoichiometric; 25,59 therefore, certain proteins may have escaped our competitive labelling method. Moreover, the proteins addressed by our panel of catechols and by DA-P3 may differ and thus escape probe detection in the competition experiments. Yet, our data reveal that a significant number of proteins is susceptible to catechol modification regardless of their structure, and we have revealed an unexpectedly broad protein reactivity of certain catechols.
Another important finding was the direct modification of cysteines by methylated catechols. This protein modification has not been reported previously, but corroborates the recently reported cysteine-reactivity of the 3-O-methyl catechol CP. 11 To date, only selected catechol targets have been reported, whereas our work highlights the broad reactivity of catechols in live cells. Specifically, we were able to show that certain catechol compounds target ER proteins, including PDIs critical for ER protein folding and UPR regulation, [64][65][66] which results in increased ER stress. The UPR is a cellular response during ER stress caused by an accumulation of unfolded proteins. As a consequence, cells adjust protein synthesis, folding, and degradation to reduce the burden of unfolded proteins, or else undergo apoptosis if unsuccessful. Due to increased protein turnover, many cancer cells experience constant ER stress and are more sensitive to UPR and PDI inhibition compared to healthy cells. 67,68 Furthermore, recent studies report the inhibition of PDIs by DA, the flavonoid isoquercetin, and n-octyl caffeate. 10,14,25 Altogether, the modulation of ER-associated cellular processes highlights one intriguing facet of how catechols could promote health-beneficial effects.
Data availability
The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository 69 with the dataset identifier PXD043348.
Author contributions
A. W. M. planned and performed probe synthesis, mass spectrometry experiments, cytotoxicity assays, and data analysis;
Conflicts of interest
There are no conflicts to declare.
"year": 2023,
"sha1": "b05649a1d552033a45b350be2a68df7260c9982e",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2023/sc/d3sc00888f",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9a9d97a067cbe9f955a28081a8382ae5ef83dcfa",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.