id: 253584369 | source: pes2o/s2orc | version: v3-fos-license

The significance of right ear auditory processing to balance
Although the association between balance and hearing thresholds at different frequencies in the right/left ear is crucial, it has received scant empirical attention. Balance is widely ignored when evaluating hearing in adults. This study examined the relative contribution of left versus right ear hearing at different frequencies to balance, and the mediating role of suprathreshold speech perception on age-balance associations. Pure tone hearing thresholds (500–4000 Hz), suprathreshold speech perception, balance, and risk of falling were evaluated in 295 adults. The results indicate that the right ear contributes more to balance than the left ear. This might imply dominance of the left hemisphere in processing hearing cues for balance. Frequencies within the speech range (500/1000/2000 Hz) were correlated with balance and mediated the interaction between age and balance. These results should be considered when tailoring hearing and balance rehabilitation programs.
Beginning with the discovery of the left-hemispheric dominance of language 1-3 there has been a consensus that practically all higher functions, including memory, learning, perception, spatial cognition, attention, complex motor skills, and emotion processing show some degree of lateralization [4][5][6] . Specifically, right ear processing is significantly more efficient for speech stimuli 7 . In recent years, a growing body of evidence has suggested that hearing cues contribute to balance 8,9 . Studies show that auditory information can be integrated with vestibular, somatosensory, and visual signals to improve balance, orientation, and gait [10][11][12][13] . Despite its importance, to the best of our knowledge, the relative contribution of the right/left ear to balance has never been explored.
Shayman et al. 12 reported that external auditory input contributes meaningful information to vestibular self-motion cues in a frequency-dependent manner. They showed that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. However, the ways in which hearing thresholds at different frequencies potentially influence balance control remain unclear.
To improve the ecological validity of the human hearing-balance relationship, Criter & Gustavson 14 and Carpenter & Campos 15 recommended that future research should use real-life environments and functional indices rather than relying solely on a laboratory-based approach consisting of pure-tone hearing thresholds. One of the first signs of hearing deterioration is difficulty in understanding speech in challenging everyday listening situations [15][16][17][18][19] . However, very little is known about the interaction between the deterioration of speech perception and balance.
Falling and its consequences have a significant impact on individuals (loss of quality of life, nursing home admissions) and society (healthcare costs) 9,13 . Early detection of balance disorders and possible interventions can potentially reduce falling and prevent its consequences 13 . Recent studies have shown that auditory information can be integrated with vestibular, somatosensory, and visual signals to improve balance, orientation, and gait [10][11][12][13] . However, hearing status is rarely taken into account when evaluating gait and balance 8,9 . To respond to these needs, the current study examined the interaction between hearing and balance in a group of adults, using functional indices of hearing and balance. It then explored the relative contribution of the left versus right ear at different frequencies to balance. The findings should lead to a better understanding of the age-balance association.
Results
The descriptive statistics and inter-correlations for hearing and balance measures are presented in Tables 1 and 2, respectively. Table 3
Mediation effects.
Since there was a significant correlation between age and balance, age and hearing, and balance and hearing, the mechanism underlying the observed relationship between age and balance (Fig. 1) was explored further. To test for a mediation effect, we used the PROCESS add-on in SPSS. This macro calculates two regression analyses. The first estimates the effect of age on hearing measures (path a). The second regression estimates the effect of hearing measures on balance (path b) controlling for age. The cross-product a*b is considered an estimation of the indirect effect of age on balance via hearing measures. The significance of the indirect effect was calculated with a 95% confidence interval bootstrapping approach because the sampling distribution of the indirect effect is known to be skewed. Cases where the 95% CI does not include zero are equivalent to a significant effect at alpha < 0.05. Significant associations were observed between age and hearing measures (Path a: supplementary data are available online, Tables S1-S7). For the right ear, WIN 50% SNR, PTA1, and hearing thresholds at 500 Hz, 1000 Hz, and 2000 Hz were associated with balance after controlling for individuals' age (Path b). PTA2 and the 4000 Hz hearing threshold were not associated with balance after controlling for individuals' age (supplementary data are available online, Tables S1-S7). By contrast, for the left ear, only PTA1, 500 Hz, and 1000 Hz were associated with balance after controlling for age. The WIN 50% SNR, PTA2, 2000 Hz, and 4000 Hz were not associated with balance (Path b: supplementary data are available online, Tables S1-S7).
The results for the indirect analyses (Path a*b) are presented in Table 4. As shown in Table 4, five mediation effects were observed for the right ear but only three for the left ear. For the right ear, WIN 50% SNR, PTA1 and hearing thresholds (500 Hz, 1000 Hz, 2000 Hz) fully mediated the association between age and balance. Bootstrap results showed that the bootstrapped 95% CI around the indirect effect did not include zero. On the other hand, hearing thresholds of 4000 Hz and PTA2 did not mediate the association between age and balance (Fig. 2).
For the left ear, PTA1, 500 Hz and 1000 Hz mediated the association between age and balance. Bootstrap results for these measures showed that the bootstrapped 95% CI around the indirect effect did not include zero (Path a*b). On the other hand, WIN 50% SNR, PTA2 and hearing thresholds of 2000 Hz, and 4000 Hz did not mediate the association between age and balance. Bootstrap results for these measures showed that the bootstrapped 95% CI around the indirect effect included zero.
To determine whether the right or left ear was more likely to mediate the association between age and balance, we conducted a parallel mediation model in which both ears competed with each other as an explanatory mechanism. As presented in Table 5, for WIN 50% SNR and PTA1, the right ear emerged as a significant mediator whereas the left ear was not significant. For PTA2, neither ear was more dominant.
Discussion
The results of the current study indicate a stronger contribution of the right ear to balance than the left ear. Consistently, the correlations between the right ear and balance were higher than those for the left ear. In the right ear, almost all the hearing measures mediated the relationship between age and balance (WIN 50% SNR, PTA 1, hearing thresholds 500 Hz/1000 Hz/2000 Hz). By contrast, in the left ear, only PTA 1 and hearing thresholds of 500 Hz/1000 Hz mediated this interaction. Hearing measures for the right ear evidenced a stronger mediation effect than the left ear with respect to the interaction between age and balance (Tables 4, 5). These results may point to the dominance of the left hemisphere in processing hearing cues for balance.
To the best of our knowledge, this is the first study to suggest hemispheric lateralization and left hemisphere dominance to account for the hearing-balance relationship. This should come as no surprise since all the major cognitive functions, including language, spatial processing, and emotional processing, are lateralized 1-6 . The right ear advantage is well-known for the processing of verbal stimuli, reflecting left hemispheric dominance for language 4-6 . Studies have argued for the enhanced role of the left hemisphere in the control of motor actions 20 . Although hemispheric function for postural control and balance is not fully understood, most studies indicate that the right cerebral hemisphere plays a more prominent role in the efferent processes responsible for balance control [21][22][23][24] . For example, Golomer et al. 21 found that right hemispheric visual dominance is particularly useful for postural control in complex equilibrium conditions. On the other hand, Cioncoloni et al. 25 suggested that the left hemisphere plays a critical role in the selection of the appropriate postural control strategy. These findings emphasize the fact that the cerebral role in postural control and the cortical mechanisms of spatial hearing are complex processes, and more research is needed to elucidate them 25,26 .
Very little is known about the contribution of hearing at different frequencies to balance 12 . The current findings suggest that frequencies within the speech range (500/1000/2000 Hz) are correlated with balance. Both PTA1 (the average of the hearing thresholds at 500 Hz, 1000 Hz, and 2000 Hz), which is the clinical predictor of the speech reception threshold (SRT), and WIN (speech perception in noise) in the right ear mediated the interaction between age and balance. These results raise the possibility that deterioration of speech perception in the presence of noise might indicate balance deterioration. However, pure tone thresholds at 4000 Hz, in both ears, were not correlated with balance and did not mediate the relationship between age and balance. Since age-related hearing loss is characterized by bilateral hearing loss above 2000 Hz, this strengthens the claim that the relationship between hearing and balance is affected by factors other than age-related hearing loss.
The current study shows that hearing interacts significantly with balance in adults (Table 2). This is consistent with data reported in Agmon et al. 8 29 and Doettl et al. 30 . Specifically, Lin and Ferrucci 10 found that for every 10 dB increase in hearing loss, the odds of an individual reporting a fall increased 1.4-fold. The interaction between hearing and balance has also been reported in patients with hearing loss 14,31 . Impaired balance was also found to exist in younger populations with hearing impairments 32,33 . This association between hearing loss and falls may be accounted for by several mechanisms: (a) physiological mechanisms that may influence auditory and postural systems; these could involve a concomitant dysfunction of both cochlear and vestibular sensory organs given their shared location within the labyrinth in the inner ear, or age-related changes in the corpus callosum that could affect both hearing and walking 8,34 ; (b) cognitive mechanisms, whereby paying attention to postural control taps cognitive resources; fewer cognitive resources and less attention due to hearing loss may impair postural balance in real-life situations and increase the risk of falling 8,35,36 ; and (c) behavioral mechanisms, whereby hearing loss might influence spatial orientation, social parameters, and the interaction between the effects of reduced mobility and reduced auditory inputs. Hearing deterioration may thus restrict a person's ability to monitor and perceive auditory environmental cues that provide spatial orientation 8 . Consistent with previous studies 37,38 , the current study found a decline in hearing and balance with advancing age. Furthermore, the findings indicated that balance was correlated with hearing, even when controlling for age. As demonstrated in the current study, hearing mediated the interaction between age and balance. This implies that one of the reasons for the deterioration of balance with advancing age may be hearing deterioration. This finding is supported by previous studies indicating that balance deterioration is positively correlated with the extent of hearing deterioration in hearing-impaired populations 33,39,40 .

Table 4. Regression results for the simple mediation of the right and left ears on the association between age and balance through hearing measures (Path a*b). Unstandardized regression coefficients are reported. Bootstrap sample size = 5000. LL, lower limit; CI, confidence interval; UL, upper limit. Significant mediation effects are in bold. As shown, five mediation effects were observed for the right ear but only three for the left ear. The significance of the indirect effect was calculated with a 95% confidence interval bootstrapping approach because the sampling distribution of the indirect effect is known to be skewed. Cases where the 95% CI does not include zero are equivalent to a significant effect at alpha < .05.
The current study used hearing tests that simulate everyday hearing situations (WIN), in addition to the commonly used index of hearing thresholds (pure tone thresholds in the range of 0.5-4.0 kHz). These tests were selected based on recommendations in previous studies 14,15,26,41 , in an attempt to better preserve the ecological validity of the human hearing-balance relationship. It is also important to note that, since balance is a very complex function, the results of the balance test used in the current study (TUG) might have been affected by other factors such as peripheral hearing, vestibular, and visual factors. However, the TUG is considered to be a good diagnostic tool for balance and risk of falling 42 , and is often used in research evaluating balance in adult populations 14,43 . Further research should explore these topics in a variety of populations, in different age groups, and using a variety of hearing/balance measures and pathologies.
Overall, the correlations between hearing and balance and the mediating effect of speech-range frequencies on the age-balance relationship suggest that difficulties in understanding speech in adults over the age of 45 years may indicate reduced balance and might imply the need for a balance evaluation. At the same time, balance difficulties may indicate the need for a hearing evaluation. Thus, the current study supports previous research recommending the evaluation of balance in individuals with hearing deterioration 9,44 , in order to potentially reduce falling and prevent its consequences. The relatively greater contribution of the right ear to balance, compared to the left ear, should be considered during hearing evaluation and rehabilitation. Hearing may thus contribute to balance in addition to visual, vestibular, and proprioceptive input.
Materials and methods
Participants. A sample of 295 community-dwelling adults (181 female and 114 male) aged 46-75 years (mean ± SD: 58.5 ± 6.1 years) participated in this study. Written informed consent was obtained from all subjects. All participants underwent two hearing tests (Standard Pure-Tone Audiometry and Words-in-Noise [WIN]), one balance test (Timed Up and Go [TUG]), and the Montreal Cognitive Assessment (MoCA). All methods were performed in accordance with the relevant guidelines and regulations.
Exclusion criteria included poor physical health 38 , mobility using walking aids, and suspected presence of mild cognitive impairment as defined by the MoCA < 26/30 45 . After signing the informed consent form and completing the MoCA questionnaire, the participants were administered the hearing and the balance tests.
Hearing and balance evaluation. Hearing in the right and left ears was evaluated using Standard Pure-Tone Audiometry and the Hebrew version of the Words-in-Noise (HWIN) test 46,47 . To assess hearing thresholds, Standard Pure-Tone Audiometry 48 was administered at octave frequencies from 500 to 4000 Hz using a HARP mobile audiometer with TDH-50 earphones (Grason-Stadler Inc, Eden Prairie, MN; Guymark UK Limited, West Midlands, UK). The pure tone average 1 (PTA1) was calculated as the average hearing threshold at 500 Hz, 1000 Hz, and 2000 Hz. PTA1 is regarded as a predictor of the speech reception threshold. The pure tone average 2 (PTA2) was calculated as the average hearing threshold at 1000 Hz, 2000 Hz, and 4000 Hz. PTA2 gives greater weight to the high frequencies.
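As a concrete illustration of how these two averages weight the audiogram differently, the short sketch below computes PTA1 and PTA2 from a hypothetical set of right-ear thresholds; the threshold values are invented for illustration only.

```python
# Hypothetical right-ear audiogram: frequency (Hz) -> hearing threshold (dB HL)
audiogram = {500: 15, 1000: 20, 2000: 25, 4000: 40}

def pure_tone_average(thresholds, freqs):
    """Mean hearing threshold (dB HL) across the requested frequencies."""
    return sum(thresholds[f] for f in freqs) / len(freqs)

pta1 = pure_tone_average(audiogram, (500, 1000, 2000))   # speech-range average
pta2 = pure_tone_average(audiogram, (1000, 2000, 4000))  # weights the high frequencies

print(f"PTA1 = {pta1:.1f} dB HL, PTA2 = {pta2:.1f} dB HL")
# PTA1 = 20.0 dB HL, PTA2 = 28.3 dB HL
```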
The WIN is a word-recognition test used to assess speech perception in noise 46 . The Hebrew version of the WIN consists of two lists of 35 common consonant-vowel-consonant (CVC) words mixed with six-talker babble at 7 signal-to-noise ratios (SNRs) from 24 to 0 dB SNR in 4 dB steps. The two lists were presented to each subject, one for each ear, for open-set identification 47 . The total number of correctly identified words and the 50% point in dB SNR (WIN 50% SNR) for each ear were calculated using the Spearman-Karber equation 49 .
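The 50% point is obtained from the per-SNR word counts with the Spearman-Karber estimator. The sketch below shows the general form of that estimator for a descending 24-to-0 dB SNR run with 5 words per level; it is an illustration of the calculation, and the exact constants used in the published WIN scoring should be taken from the cited source.

```python
def spearman_karber_snr50(correct_per_snr, top_snr=24, step=4, words_per_level=5):
    """Estimate the 50% correct point (dB SNR) from a descending words-in-noise run.

    correct_per_snr: words correct at each SNR, ordered from the highest (easiest)
    to the lowest (hardest) presentation level.
    General form: SNR50 = top_snr + step/2 - step * total_correct / words_per_level
    """
    total_correct = sum(correct_per_snr)
    return top_snr + step / 2 - step * total_correct / words_per_level

# Hypothetical listener: 5, 5, 4, 3, 2, 1, 0 words correct at 24, 20, ..., 0 dB SNR
print(spearman_karber_snr50([5, 5, 4, 3, 2, 1, 0]))  # -> 10.0 dB SNR
```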
Performance-based balance was measured using the timed up and go test (TUG). The TUG is a widely used instrument that examines balance, functional mobility, and risk of falling across multiple adult populations 14,[50][51][52][53] . The test requires the subject to stand up, walk 3 m, turn, walk back, and sit down. Time taken to complete the test is strongly correlated with level of balance and functional mobility. Cognition was assessed by the Hebrew version 54 of the MoCA 45 .
Statistical analysis. The statistical analysis was performed using IBM SPSS Statistics for Windows v.24.
The data were expressed as the mean ± standard deviation (SD). Pearson's correlation analysis was used to determine the correlations between age, hearing tests, and balance tests. A value of p < 0.05 was considered statistically significant. To examine the mediational role of the hearing measures, the PROCESS macro 41 Model 4 was used to calculate four sets of regressions (Fig. 1). The first set of regressions examined the associations between the predictor (age) and the mediating variables (hearing measures), Path a. The second set of regressions examined the links from the mediators (hearing measures) to the outcome (balance) controlling for age, Path b. The third set of regressions examined the direct associations between the predictor (age) and the outcome (balance), Path c. The fourth set of regressions examined the direct associations between the predictor (age) and the outcome (balance) controlling for the mediators (hearing measures), Path c'. To test the significance of the indirect effects of age on balance through hearing deterioration, the bootstrapping approach was used and the 95% CI for the indirect effects over 5,000 resamples was calculated 55 .
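The PROCESS macro runs inside SPSS, but the core of the mediation test described above — estimating path a and path b by ordinary least squares and bootstrapping the indirect effect a*b — can be sketched independently. The snippet below is a minimal illustration with simulated data; the variable names and effect sizes are invented and it is not the authors' analysis code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 295
age = rng.uniform(46, 75, n)
hearing = 0.5 * age + rng.normal(0, 5, n)                    # mediator, e.g., PTA1
balance = 0.05 * hearing + 0.01 * age + rng.normal(0, 1, n)  # outcome, e.g., TUG time

def indirect_effect(age, hearing, balance):
    a = sm.OLS(hearing, sm.add_constant(age)).fit().params[1]        # path a
    xb = sm.add_constant(np.column_stack([hearing, age]))
    b = sm.OLS(balance, xb).fit().params[1]                          # path b (age-adjusted)
    return a * b

idx = np.arange(n)
boot = [indirect_effect(age[s], hearing[s], balance[s])
        for s in (rng.choice(idx, n, replace=True) for _ in range(5000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b, 95% bootstrap CI: [{lo:.4f}, {hi:.4f}]")
# A CI excluding zero corresponds to a significant mediated (indirect) effect.
```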
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

added: 2022-11-18T14:41:55.817Z | created: 2022-11-17T00:00:00.000 | metadata: {
"year": 2022,
"sha1": "d3676bbbf8118299004f41a8f42eb37d35d0a640",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "d3676bbbf8118299004f41a8f42eb37d35d0a640",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 40566961 | source: pes2o/s2orc | version: v3-fos-license

Streptococcus pneumoniae serotype 19A in Latin America and the Caribbean: a systematic review and meta-analysis, 1990–2010
Background Pneumococcal conjugate vaccines (PCVs) are in the process of implementation in Latin America. Experience in developed countries has shown that they reduce the incidence of invasive and non-invasive disease. However, there is evidence that the introduction of PCVs in universal mass vaccination programs, combined with inappropriate and extensive use of antibiotics, could be associated with changes in non-PCV serotypes, including serotype 19A. We conducted a systematic review to determine the distribution of serotype 19A, the burden of pneumococcal disease and antibiotic resistance in the region. Methods We performed a systematic review of serotype 19A data from observational and randomized clinical studies in the region, conducted between 1990 and 2010, for children under 6 years. Pooled prevalence estimates from surveillance activities with confidence intervals were calculated. Results We included 100 studies in 22 countries and extracted data from 63. These data reported 19,733 serotyped invasive pneumococcal isolates, 3.8% of which were serotype 19A. Serotype 19A isolates were responsible for 2.4% of acute otitis media episodes, and accounted for 4.1% and 4.4% of 4,380 nasopharyngeal isolates from healthy children and from hospital-based/sick children, respectively. This serotype was stable over the twenty years of surveillance in the region. A total of 53.7% of Spn19A isolates from meningitis cases and only 14% from non-meningitis cases were resistant to penicillin. Conclusions Before widespread PCV implementation in this region, serotype 19A was responsible for a relatively small number of pneumococcal disease cases. With increased use of PCVs and a greater number of serotypes included, monitoring S. pneumoniae serotype distribution will be essential for understanding the epidemiology of pneumococcal disease.
Background
Streptococcus pneumoniae causes invasive pneumococcal disease (IPD), which is often life-threatening in children less than 2 years old, adults older than 65 years old and immunocompromised individuals [1]. Pneumococcus is also the most common cause of bacterial acute otitis media (AOM) and sinusitis in children [1][2][3].
Since the introduction of pneumococcal conjugate vaccines (PCVs), numerous studies have been published on their safety, immunogenicity and efficacy, in particular for the heptavalent vaccine (PCV7), introduced in the USA in 2000 [8,9]. Studies conducted after PCV7 introduction have shown dramatic and sustained decreases in vaccine type (VT) IPD rates, carriage, and herd effects [10][11][12][13]. These positive findings were followed by reports of IPD caused by non-vaccine type (NVT) S. pneumoniae, including penicillin non-susceptible (PNSP) and multidrug-resistant (MDR) strains [14][15][16][17][18]. NVT have also been described as agents of non-invasive disease [15] and nasopharyngeal carriage [12]. Data from both North America and Europe have shown S. pneumoniae serotype 19A (Spn19A) to be the most prevalent serotype, associated with increasing rates of MDR [19]. Consequently, attention has focused on Spn19A, its prevalence, the numerous factors leading to this increase, and how best to control its impact [20][21][22].
This review summarizes the available published and unpublished Latin American and Caribbean (LAC) data from 1990 through 2010 describing the prevalence and burden of Spn19A in children less than 6 years. For comparison, we also analyzed the data for the most prevalent serotypes in this region [23].
Methods
We searched for data collected between January 1990 and July 2010 following PRISMA guidelines. Using both refined search strategies and broad-spectrum, low-specificity searches (i.e., "Streptococcus pneumoniae" OR "pneumococcus" anywhere in the text), we reviewed all references on S. pneumoniae that were geographically linked to LAC countries, with no language restrictions; the targeted age group was children 6 years old or younger. We searched the following databases: Medline (PubMed), Embase, Latin American and Caribbean Health Sciences Information (LILACS), Scientific Electronic Library Online (SciELO) and SCOPUS. Search terms used are shown in Additional file 1a. Abstracts of recent meetings on infectious diseases were also included.
Serotype distribution data were extracted by five reviewers for IPD, non-IPD and nasopharyngeal carriage. In addition, data on Spn19A penicillin susceptibility, pneumococcal disease prevalence and/or incidence, mortality rate, and pneumococcal vaccine potential impact were collected when available. For calculation of impact using SIREVA data we assumed serotype 6A/6B protection for PCV7 and PCV10 [4].
In order to avoid duplicate data, numbers were only added from the databases of the SIREVA Project (only for invasive isolates) available via the PAHO website [23]. Data included in the analysis (SIREVA reports for 2000-2005, 2006, 2007, 2008, and 2009) corresponded to the information available in the original sources, published according to the methodology used for this systematic review. We limited selection bias by reducing the heterogeneity of samples; most data were from the SIREVA network for invasive isolates, with standardized laboratory surveillance techniques and expanded availability of protocols. For the inclusion and exclusion criteria used to select publications reporting non-IPD studies, we reviewed international criteria and internationally defined and accepted sample collection and laboratory techniques. For the purposes of this systematic review, we adopted the definitions presented in Additional file 1b.
We analyzed and presented our results following standard guidelines. Prevalence estimates were computed using the number of Spn19A isolates as the numerator and the total number of S. pneumoniae reported as denominator, for each study. Two techniques were used to calculate the pooled prevalence estimates: Mantel-Haenszel (fixed-effects model) and DerSimonian-Laird (random-effects model) [24,25].
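As a rough illustration of the two pooling approaches, the sketch below computes a fixed-effect and a DerSimonian-Laird random-effects pooled prevalence from invented study counts; inverse-variance weights are used here as a simple stand-in for the Mantel-Haenszel fixed-effect weighting, and the actual analysis may have worked on transformed (e.g., logit) proportions.

```python
import numpy as np

# Invented example data: (Spn19A isolates, total serotyped isolates) per study
studies = [(12, 400), (5, 180), (30, 900), (8, 150)]

p = np.array([x / n for x, n in studies])                              # per-study prevalence
var = np.array([pi * (1 - pi) / n for pi, (_, n) in zip(p, studies)])  # binomial variance

# Fixed-effect (inverse-variance) pooled prevalence
w = 1 / var
p_fixed = np.sum(w * p) / np.sum(w)

# DerSimonian-Laird between-study variance tau^2
q = np.sum(w * (p - p_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pooled prevalence with its 95% CI
w_re = 1 / (var + tau2)
p_re = np.sum(w_re * p) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"fixed-effect: {p_fixed:.3f}; random-effects: {p_re:.3f} "
      f"(95% CI {p_re - 1.96 * se_re:.3f} to {p_re + 1.96 * se_re:.3f})")
```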
For the invasive disease meta-analysis, we only included publications reporting non-SIREVA data, considering that SIREVA data represent 96.7% of the samples analyzed and would bias the pooled estimate.
Lastly, we estimated Spn19A specific IPD incidence by multiplying the serotype distribution by the reported incidences identified in this review.
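That last step is simple proportional scaling; with invented numbers:

```python
overall_ipd_incidence = 50.0   # hypothetical IPD incidence per 100,000 children
spn19a_share = 0.038           # Spn19A as a fraction of serotyped invasive isolates

spn19a_incidence = overall_ipd_incidence * spn19a_share
print(f"{spn19a_incidence:.1f} Spn19A IPD cases per 100,000")  # -> 1.9
```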
Study selection
Our searches retrieved a total of 1704 references. After reviewing the titles and abstracts, a total of 322 full texts were reviewed, 222 of which were excluded. The final number of publications included was 100 and data were extracted from 63; the remaining 37 were referenced ( Figure 1). The characteristics of the 63 studies reviewed and of those that were referenced are described in Additional file 2.
The 63 references were divided into three categories: studies with information on invasive serotypes (n = 14), non-invasive isolates with individual information (n = 27) and burden of disease or disease incidence studies (n = 26). Four references provided information on more than one of these categories.
When considering countries with more than 500 isolates collected over a period of 17 years (1993-2009) or 10 years (2000-2009), the total number of invasive isolates was 17,831; 677 (3.9%) of these were Spn19A, ranging from 1.5% in Colombia to 7.0% in Venezuela (Table 1).
The prevalence of Spn19A in the region, by country and by time period, is shown in Table 3 and Figure 2.
SIREVA data from the period 2006-2007 [23] showed that pneumonia accounted for 59.4% of non-meningitis cases (1069/1801) and 56% of all Spn19A isolates (42/75). Overall, Spn19A accounted for 3.9% of the pneumonia isolates in LAC (see Additional file 6b). Data from 2000-2009 showed that Spn19A was the 10th and 6th most frequently reported serotype causing meningitis and non-meningitis, respectively (see Additional files 6c and d).
Spn19A in AOM
Data showing the frequency of serotype 19A amongst isolates from cases of acute otitis media are presented in Table 4. Despite representing more than 70% of the whole sample, only 0.6% of Costa Rican isolates were Spn19A. Data grouped by VT (13-, 10- and 7-valent vaccines) and NVT are also shown (see Additional file 7a). Overall, Spn19A accounted for 2.4% (11/460) of isolates, ranking 9th amongst the most frequent serotypes for AOM.
Spn19A in nasopharyngeal carriage
Spn19A data from isolates in healthy children are shown in Table 4; 20 serotypes were identified for 74.7% of nasopharyngeal isolates, of which serotype 19F was the most frequently reported (13.6%) and Spn19A the 6th (4.1%) (see Additional file 7b).
The distribution of Spn19A in healthy children (carriage) was similar to that seen for the smaller sample of isolates collected from sick children (Table 5) [61]. An analysis of other serotypes showed that, among 453 resistant meningeal isolates, Spn19A was the 5th most frequently reported serotype (4.9%) and that, for 248 resistant non-meningitis isolates, it was 3rd (8.3%) (see Additional file 8a). These data are presented by country in Additional file 8b.
Spn19A burden of disease
Our literature search for publications on the burden of disease caused by serotype 19A identified 26 papers; Table 6 summarizes incidence rates reported. Incidences by country are presented in Additional file 9.
Lagos et al. [46] monitored IPD related hospitalizations in Chile between 1994 and 2007. Among the serotypes identified, "other" or "non-vaccine serotypes within vaccine serogroups" (which included Spn19A) were reported for 72 patients with invasive clinical syndromes. For these patients, the case fatality rate was 0%. The annual incidence of IPD among children 0-59 months of age caused by Spn19A was 1 per 100,000.
Pneumococcal vaccine potential impact for invasive disease
With the SIREVA data for IPD, the estimated percentage of isolates covered was 63% for PCV7, 79.3% for PCV10, and 85.5% for PCV13. We carried out an analysis to establish the potential benefit of adding serotypes 19A, 1 and 5. This showed that the addition of serotypes 1 and 5 increases impact by 13.2%, whilst the addition of 19A increases vaccine impact by 4% (see Additional file 10).
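The potential-impact percentages are essentially the cumulative share of invasive isolates whose serotypes are contained in (or cross-protected by) each formulation. A minimal sketch of that coverage calculation is shown below; the serotype distribution used here is invented for illustration rather than taken from the SIREVA tables.

```python
# Invented serotype distribution (% of invasive isolates)
distribution = {"14": 28.7, "6B": 8.0, "1": 7.5, "5": 5.7, "19F": 5.0, "23F": 4.5,
                "18C": 2.5, "9V": 2.0, "4": 1.5, "7F": 2.0, "19A": 3.8, "3": 2.0, "6A": 2.5}

PCV7 = {"4", "6B", "9V", "14", "18C", "19F", "23F"}
PCV10 = PCV7 | {"1", "5", "7F"}
PCV13 = PCV10 | {"3", "6A", "19A"}

def coverage(vaccine_serotypes, cross_protected=frozenset()):
    """Percentage of isolates belonging to covered (or cross-protected) serotypes."""
    covered = set(vaccine_serotypes) | set(cross_protected)
    return sum(pct for serotype, pct in distribution.items() if serotype in covered)

# Assuming 6A/6B cross-protection for PCV7 and PCV10, as in this review
print(coverage(PCV7, {"6A"}), coverage(PCV10, {"6A"}), coverage(PCV13))
```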
Discussion
This systematic review of Spn19A data in children under 6 years old, from studies conducted in LAC over a period of 20 years, shows that Spn19A remains a less common agent of IPD than other serotypes (3.8%), ranking 9th among the twenty most prevalent serotypes [23]. The percentage of isolates accounted for by Spn19A differed between countries, being the 10th most frequently reported from Colombia, the 6th from Mexico and 4th from Venezuela (Additional file 3a and Additional file 11). This information provides a complete overview of the role of Spn19A in pneumococcal disease, facilitating the decision process for those countries considering PCV introduction, and will also allow evaluation of potential variations in the prevalence of Spn19A and other serotypes, as reported previously in studies following introduction of PCV7 [14][15][16][17][18][19]23]. Our analysis of the literature identified the serotypes accounting for 85.4% of IPD in the region, serotype 14 being the most common (28.7%). However, the percentage of isolates accounted for by each of these serotypes varied from country to country, in agreement with Johnson's observation in her recent global serotype paper [4].
The scope of our search strategy allowed us to retrieve comprehensive lists of peer-reviewed publications. As two of our authors are members of the SIREVA team, we were able to identify the vast majority of relevant publications in non-indexed journals and obtain personal communications from SIREVA coordinators [23]. Additionally, the information retrieved over a 20-year period allowed evaluation of secular trends and the periodicity of serotypes described in the literature [1][2][3][4].
A strength of our analysis is that the percentage of IPD Spn19A isolates reported in the non SIREVA data that we reviewed (7 reports, 1990-2008) was not significantly different from that for the SIREVA data (3.8%).
Regarding time trends in Spn19A prevalence, a significant increase, from 3.3% to 4.6%, was noted only in Argentina and Colombia between 1994-1999 and 2006-2009, before any universal vaccine intervention could have had an impact. However, Spn19A stability was observed in Brazil, Chile, the Dominican Republic and Mexico. Similar increases in the percentage of isolates accounted for by Spn19A, even prior to the introduction of PCV7, have been reported in Europe [63], South Korea [64], Southern Israel [65] and Taiwan [66], likely reflecting selection pressure from antibiotic use.
On the other hand, in the USA the observed increasing prevalence of PRSP and MDR Spn19A has been suggested to be due to a rapid expansion of the Spn19A clonal complex CC320, to more than one new clone introduced or to successful clones associated with other serotypes that have undergone a recombinational switch to Spn19A [20,21].
In the LAC region, only one study, describing PFGE patterns of Spn19A isolates and conducted in Colombia [67], reported Spn19A MDR isolates in IPD; two were related to the clone Colombia 23F-ST338, one to the clone Spain 23F-ST81, and six were not related to the clones studied. A possible explanation of these findings may be that a successful clone, such as Spain 23F, underwent a recombinational switch to Spn19A.
No differences could be established between age groups for the prevalence of Spn19A as an IPD agent. In contrast, serotypes 1 and 5 were more frequent in children 2-5 years old and serotypes 6B and 14 were more frequent in <2 year olds than in the other age groups in the LAC region (See Additional file 5).
Our analysis suggests that Spn19A causing IPD in LAC is more frequently an agent of non-meningitis disease (4.5% of cases), in particular pneumonia than of meningitis (2.9%) (See Additional file 6a).
PNSP in invasive Spn19A isolates has been reported in LAC since 1993 [26]. A study conducted in 2010, using the new CLSI breakpoints for penicillin [61], showed that resistant Spn19A isolates are circulating in the region, more frequently as agents of meningitis (MIC ≥ 0.125 μg/ml) than of non-meningitis (MIC ≥ 4.0 μg/ml). However, the finding of a prevalence of 3.2% for Spn19A with MIC ≥ 8.0 μg/ml among non-meningitis cases, recovered in Mexico, Colombia and Venezuela, is of great concern, as it follows reports of 7.7% of cases being attributed to serotype 19F. Molecular surveillance data will reveal their role as agents of pneumococcal disease [20,21].
Despite the fact that S. pneumoniae causes 30-60% of AOM cases worldwide [68], only three papers and one abstract were found and analyzed; overall, 2.4% of these isolates were attributed to Spn19A. As AOM continues to be an important childhood infection, and given that the etiology might change from VT to non-PCV7 strains once pneumococcal vaccines are widely implemented [69], it is important to conduct AOM etiology studies in the region. S. pneumoniae may be subject to serotype replacement phenomena, and attention to antibiotic-resistant NVT otopathogens as well as non-typeable Haemophilus influenzae is required [70].
Nasopharyngeal carriage has been confirmed, with higher rates reported for children less than 5 years old. From the papers analyzed, Spn19A ranked as the 6th most frequently reported serotype in healthy children (4.1%), jointly with non-typeable isolates. A high number of serotypes showed the ability to colonize the nasopharynx, with serotype 19F the most frequently identified (see Additional file 7b).
Nasopharyngeal serotypes described in Latin America from 1994 to 2008 are very similar to those described by Huang in 2001 (pre vaccine data) for generally healthy children in 16 Massachusetts communities. Spn19A represented 4.2% of 143 isolates; PNSP was described for 77% of the NVT, in particular for serotypes 6A, 19A and 9A [12].
Studies conducted since the introduction of PCV7 vaccination have shown decreases in colonization with pneumococcal VT shortly after immunization as well as longer-term changes in colonization patterns. Huang [12] reported a decrease in the carriage of VT from 36% to 3% seven years after mass introduction of PCV7, whereas NVT carriage increased from 15% to 29%. The common colonizing serotypes in 2007 included 19A (16%) (Baseline data 6.0%), 6A (12%), 15B/C (11%), 35B (9%) and 11A (8%), a clear reflection of the replacement phenomenon. Additionally, the more frequent colonizing serotypes have greater resistance to penicillin. Nasopharyngeal surveillance appears to be a reliable system for measuring vaccination impact in terms of a decrease in VT types and will help to elucidate the emergence of NVT following PCV introduction.
Incidence rates reported by Lagos [46] for IPD caused by Spn19A ranged from 0.4 to 2.2 cases per 100,000 between 1994 and 2007, suggesting a seasonal pattern for this serotype. Similar variations have been shown for other serotypes in the LAC region, such as 1 and 5 [34], and may explain changes across time periods in the SIREVA data presented in this review. This should be considered when interpreting data after the introduction of pneumococcal vaccines in this region. In contrast, the incidence of other serotypes such as 14 has shown small variations [26,34].
PCVs have been introduced recently in several countries in LAC, but currently there are no published data about their impact in reducing IPD. Consequently, little is known about the replacement phenomenon with Spn19A, which has been well described previously [22]. Available data provide only an estimation of hypothetical impact (Additional file 10). The same calculation for the recent SIREVA data [4] showed a major impact of PCV10 and PCV13 vaccination, in particular related to the inclusion of serotypes 1 and 5. In fact, after 2009, countries in the region have incorporated different PCVs into their expanded programs of immunization following individual assessment of their epidemiology (PCV7/13: Costa Rica, Uruguay, Mexico; PCV10: Brazil, Colombia, Ecuador and Chile). Given the low prevalence of Spn19A in most of the countries indicated by this review, it will be necessary to report any subsequent change in the distribution of this serotype in countries that have introduced one of the available PCVs, and in particular to explain any increase or decrease in Spn19A prevalence relative to the statistics prior to universal vaccination and the possible contributing factors, such as vaccine coverage, antibiotic use and immune response based on the vaccine formulation.
The results of our systematic review have a number of limitations. The source of primary data, either from SIREVA or from independent research teams, could introduce selection bias, potentially promoting the selection of more severe forms of the disease. However, it is important to highlight that more severe disease will have the largest impact from a burden of disease or a public health perspective. Information on disease severity caused by Spn19A in this region was limited; this is also the case for data collected for other serotypes, given that similar surveillance activities are employed in the different countries. As this limitation is not restricted to a specific serotype, it should not bias our conclusions. It was not possible to analyze temporal changes in serotype frequency, except from a very broad perspective. The small amount of data available on the burden of disease and on the possible effects of mass vaccination highlights the need for more research in this area.
The incidence of IPD in this region ranges from 3.0 to 206.8 cases per 100,000. Overall, 9 serotypes are responsible for 80% of IPD and 30% of cases are due to serotype 14; Spn19A remains relatively uncommon as an agent of IPD in the region.
Conclusions
As several countries in the region implemented PCV in their routine schedules starting in 2006, regional data on vaccination impact on IPD, non invasive and nasopharyngeal carriage by VT and herd effect should soon be available. In the near future we expect that data on VT and NVT, supported by a solid surveillance system, will be available, which will support public health decisions on the introduction of PCV.
Additional files
Authors' contributions
EC contributed to systematic review conception and design, data analysis, interpretation of data, elaboration, review and comments on all drafts of this paper and gave final approval to submit for publication. CIA contributed to systematic review conception and design, data analysis, interpretation of data, elaboration, review and comments on all drafts of this paper and gave final approval to submit for publication. RDA contributed to systematic review conception and design, data analysis, interpretation of data, review and comments on all drafts of this paper and gave final approval to submit for publication. DR contributed to systematic review conception and design, data analysis, interpretation of data, review and comments on all drafts of this paper and gave final approval to submit for publication. CC contributed to systematic review conception and design, data analysis, interpretation of data, review and comments on all drafts of this paper and gave final approval to submit for publication. EO-B contributed to systematic review conception and design, interpretation of data, review and comments on all drafts of this paper and gave final approval to submit for publication. REC contributed to systematic review conception and design, interpretation of data, review and comments on all drafts of this paper and gave final approval to submit for publication. All authors read and approved the final manuscript.

added: 2018-04-03T04:49:30.094Z | created: 2012-05-28T00:00:00.000 | metadata: {
"year": 2012,
"sha1": "2c07d961291f5c6408c9687e0c84850f6345ad92",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/1471-2334-12-124",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e52774fd99bcbe13e22be510bc44439d24fe58c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 240153476 | source: pes2o/s2orc | version: v3-fos-license

Effects of dose de-escalation following testosterone treatment and evoked resistance exercise on body composition, metabolic profile, and neuromuscular parameters in persons with spinal cord injury
Abstract The dose de-escalation (DD) effects of testosterone and evoked resistance training (RT) on body composition, cardiometabolic, and neuromuscular variables were investigated. Thirteen men with chronic complete spinal cord injury (SCI) were followed for an additional 16 weeks after receiving either testosterone treatment only (TT) or TT+RT. During the 16-week DD period, the TT+RT group underwent a program of once weekly electrical stimulation with gradually decreasing ankle weights and testosterone patches of 2 mg day−1 (TT+RT group). The TT only group did not receive any intervention throughout the detraining period (no-TT group). Body composition was tested using anthropometrics, dual energy X-ray absorptiometry, and magnetic resonance imaging. After an overnight fast, basal metabolic rate (BMR), lipid panel, serum testosterone, inflammatory biomarkers, glucose effectiveness, and insulin sensitivity were measured. Finally, peak isometric and isokinetic torques were measured only in the TT+RT group. All measurements were conducted at the beginning and at the end of DD. Absolute thigh muscle cross-sectional areas (CSAs) demonstrated interaction effects (p < 0.05) between the TT+RT (−8.15%, −6.5%) and no-TT (2.3%, 4.4%) groups. Similarly, absolute knee extensor muscle CSA demonstrated interaction effects (p < 0.05) between the TT+RT (−11%, −7.0%) and no-TT (2.6%, 3.8%) groups. There was a trend (p = 0.07) of increasing visceral adipose tissue (VAT) CSAs in the TT+RT (18%) and no-TT (16%) groups. There was an interaction (p = 0.005) in BMR between the TT+RT (decreased by 3.7%) and no-TT (increased by 9.0%) groups. No interactions were evident between groups over time for biomarkers related to carbohydrate or lipid metabolism, or inflammation. Finally, there were no changes (p > 0.05) in peak isometric or isokinetic torques or rise time following the 16-week DD period in the TT+RT group. TT+RT during 16 weeks of DD was minimally effective at preventing detraining relative to no-TT with respect to muscle size, BMR, and VAT. However, neuromuscular gains were successfully maintained.
| INTRODUCTION
Surface neuromuscular electrical stimulation (NMES) evokes skeletal muscle hypertrophy, restores lean mass, decreases percentage fat mass, and enhances cardiometabolic profile in persons with spinal cord injury (SCI; Dudley et al., 1999;Gorgey, Khalil, et al., 2019;Mahoney et al., 2005). NMES has been shown to elicit significant improvements in whole-body cardiometabolic profiles (Gorgey, Khalil, et al., 2019). Several studies demonstrated that training the large paralyzed lower extremity muscles is likely to be associated with improvements in peak oxygen uptake, lipid, and carbohydrate profiles as well as bone mineral density in persons with SCI (Dolbow et al., 2011). To maximize the benefits on cardiometabolic risk factors, NMES-resistance training (NMES-RT) was successfully combined with testosterone in eugonadal men with complete SCI (Gorgey, Khalil, et al., 2019). Combining NMES-RT with testosterone resulted in increasing whole thigh and knee extensor muscle cross-sectional area (CSA) by 29.5% and 43%, respectively, without changes in the testosterone only (testosterone treatment [TT]) group (Gorgey, Khalil, et al., 2019). Both groups showed modest decreases in visceral adipose tissue (VAT) and interleukin 6 (IL-6) with a decrease in intramuscular fat (IMF) in the TT+RT group. Basal metabolic rate (BMR) increased by 211-250 kcal/day only in the TT+RT group (Gorgey, Khalil, et al., 2019). These findings supported earlier work which demonstrated that 5-10 mg/day of TT for 12 months increased serum testosterone level from 251 to 504 ng/dl, increased lean mass and resting metabolic rate in hypogonadal men with SCI (Bauman et al., 2011).
Limited evidence in persons with SCI exists regarding how training cessation affects cardiometabolic risk factors and neuromuscular parameters such as muscle peak torque, rise time, and fatigue (Bauman et al., 2015; Gorgey, Martin, et al., 2016; Holman & Gorgey, 2019). In able-bodied controls, the effects of 4-week detraining on neuromuscular parameters were tested following 8 weeks of NMES training (Gondin et al., 2006). The authors noted decreases in knee extensor maximum voluntary contraction, vastii muscle electromyography activity, muscle activation, and quadriceps CSA by 9%, 20%, 5%, and 3%, respectively (Gondin et al., 2006). Bickel et al. reported that exercise dose de-escalation (DD) was capable of maintaining certain RT adaptations during a 32-week detraining period (Bickel et al., 2011). The DD RT program was accomplished by reducing the training volume to 1/3 (three sets of 10 once a week) and 1/9 (one set of 10 once a week) of the original three sets thrice weekly (Bickel et al., 2011). Interestingly, older individuals (60-75 years) required a higher maintenance dose to retain the gains in myofiber CSA and myofiber type transformation (Bickel et al., 2011).
In persons with chronic SCI, 2.5 years after cessation of either arm cycling exercise or functional electrical stimulation cycling, leg lean mass and whole-body lean mass decreased by 16% and 5.4%, respectively, with a concomitant 15.5% decrease in BMR (Gorgey, Martin, et al., 2016). Similarly, Gurney et al. showed that 8 weeks of detraining following 12 weeks of functional electrical stimulation cycling resulted in a 22.5% decrease in peak oxygen uptake and a 52.5% decrease in peak workload (Gurney et al., 1998). In contrast, Bauman et al. showed that lean tissue mass and energy expenditure were retained for an additional 6 months in hypogonadal men with SCI following discontinuation of 12 months of transdermal testosterone application (Bauman et al., 2015). The authors suggested persistent beneficial effects of anabolic hormone therapy on lean mass and resting metabolic rate in men with SCI (Bauman et al., 2015). Therefore, the addition of testosterone to NMES-RT may result in retention of cardiometabolic benefits or neuromuscular parameters after a cessation or DD period in persons with SCI.
Previous studies primarily relied on administering a period of passive detraining (i.e., no intervention following an active exercise program). A limited number of studies have focused on investigating the effects of DD (i.e., reducing the frequency or volume of training, or the dose of administered medication). Reducing the frequency of training has previously been recommended as an effective strategy to enhance adherence and compliance to longitudinal exercise programs. In our previous attempts, we have utilized a frequency of twice weekly NMES-RT to restore muscle mass and enhance the cardiometabolic profile (Gorgey, Khalil, et al., 2019) and neuromuscular parameters (Holman & Gorgey, 2019). Even a frequency as low as once weekly of NMES-RT may result in increased leg lean mass and reduced muscle fatigue in a person with T6 SCI (Gorgey, Caudill, et al., 2016). Therefore, a DD period of once weekly NMES-RT combined with testosterone might retain the cardiometabolic benefits and neuromuscular parameters after 16 weeks of training.

KEYWORDS: basal metabolic rate, body composition, dose de-escalation NMES, glucose effectiveness, inflammatory and anabolic biomarkers, resistance training, spinal cord injury, testosterone treatment, visceral adipose tissue
The primary purpose of the current study was to investigate the effects of 16 weeks of DD with low-dose testosterone and NMES-RT (TT+RT) on parameters of body composition, cardiometabolic profiles, and neuromuscular parameters compared to no-TT (i.e., 16 weeks without any testosterone or NMES-RT) in men with chronic complete SCI. In the current study, the TT+RT group underwent a decrease in the dose of testosterone and the volume of NMES-RT during the DD period. For the TT+RT group, the DD program was designed by reducing the training volume (four sets of 10 to three sets of 10) and the training frequency (twice weekly to once weekly) from the original 16-week training (Gorgey, Khalil, et al., 2019), similar to earlier recommendations intended to maintain training adaptations (Bickel et al., 2011). Additionally, the DD program introduced the lowest dose of testosterone (2 mg day −1 ) in the TT+RT group and ceased administration of testosterone in the TT group (no-TT), similar to earlier work (Bauman et al., 2015). Our primary hypothesis was that decreasing the dose of testosterone combined with NMES-RT in the TT+RT group would mitigate the effects of detraining on muscle size, VAT, neuromuscular parameters (peak torque and rise time), and cardiometabolic profiles compared to cessation of testosterone in the no-TT group after chronic SCI.
| METHODS
Fifteen men with complete SCI, who were originally randomized in an open-label manner into a 16-week trial to investigate the effects of TT+RT compared to TT only on body composition and metabolic profile (Gorgey, Khalil, et al., 2019) and neuromuscular parameters (Holman & Gorgey, 2019), were invited to participate in an additional 16-week DD period. Originally, both groups received 16 weeks of transdermal testosterone patches (2-6 mg day −1 ) that were alternated between the left and right shoulders at bedtime. Additionally, the TT+RT group received 16 weeks of supervised progressive RT, twice weekly, using surface NMES and ankle weights (Gorgey, Khalil, et al., 2019). After maintaining each group's original assignment, only 13 men were followed for 16 weeks to investigate the effects of DD of TT+RT (n = 7) and no-TT (n = 6) on cardiometabolic risk factors and neuromuscular parameters. Two participants withdrew from the TT+RT group (see Section 3). There was no time gap between the initial phase of the trial (Gorgey, Khalil, et al., 2019) and the second phase of DD. This additional 16-week period started (DD0) and concluded (DD16) with a 2-day visit consisting of an overnight stay for measurements of body composition, metabolic profile, and peak isometric and isokinetic torques. DD0 corresponds to the post-intervention time point in the original trial (Gorgey, Khalil, et al., 2019). The design of the training and DD program is illustrated in Figure 1.
| Consenting and physical examination
The study was approved by the local institutional research board. After signing an informed consent, each participant underwent a detailed physical examination by a trained physician. Detailed inclusion and exclusion criteria as well as the process of recruitment and randomization were previously described (Gorgey, Khalil, et al., 2019). Briefly, participants underwent physical examination that included a neurological assessment, electrocardiogram, and International Standards for Neurological Classification of Spinal Cord Injury.

FIGURE 1 Timeline of phase I (effects of TT+RT vs. TT only) and phase II (DD of TT+RT vs. no-TT) on cardiometabolic risk factors and neuromuscular parameters in persons with chronic SCI. Dark blue reflects intervention in phase I (Gorgey, Khalil, et al., 2019) and light blue reflects the DD phase. DD, dose de-escalation; RT, resistance training; TT, testosterone treatment.
| Resistance training using evoked NMES
The RT protocol using surface NMES was recently described in detail and shown in a video publication demonstrating step-by-step strategies to effectively implement surface NMES in persons with SCI (Mahoney et al., 2005; Ryan et al., 2013). Briefly, one surface adhesive electrode was placed on the knee extensor muscles 2-3 cm above the superior aspect of the patella over the vastus medialis muscle, and the other adhesive electrode was placed lateral to and 30 cm above the patella over the vastus lateralis muscle. A Theratouch 4.7 stimulator unit (Rich-Mar) was set to deliver biphasic rectangular pulses (30 Hz, 450 µs) at a current amplitude (mA) sufficient to evoke full leg extension against gravity. The current was manually increased to evoke full leg extension (three sets of 10 repetitions) with a 2-3-min rest between sets, as previously described (Gorgey, Khalil, et al., 2019). The sets were alternated between the right and left leg, starting with the right leg, and the current amplitude (mA) was recorded for every repetition. Training was conducted once weekly for 16 weeks with participants sitting in their own wheelchairs. Training started with the maximum ankle weights that were achieved during the original 16-week trial; the load was gradually decreased by 2 lbs (0.91 kg) per week until ankle weights reached 2 lbs and was then maintained for the remaining weeks. Training ensured that full knee extension was achieved for 30 reps without fatigue.
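The ankle-weight taper described above — begin at the heaviest load achieved in the original trial, reduce it by 2 lbs each week, and hold at 2 lbs once reached — can be laid out as a simple weekly schedule. A sketch, with an illustrative starting weight of 20 lbs:

```python
def ankle_weight_schedule(start_lbs, weeks=16, decrement=2, floor=2):
    """Weekly ankle-weight load (lbs) during the dose de-escalation period."""
    schedule = []
    load = start_lbs
    for _ in range(weeks):
        schedule.append(load)
        load = max(floor, load - decrement)
    return schedule

print(ankle_weight_schedule(20))
# [20, 18, 16, 14, 12, 10, 8, 6, 4, 2, 2, 2, 2, 2, 2, 2]
```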
| Testosterone treatment
The TT+RT group applied 2 mg day −1 testosterone shoulder patches (Androderm, Watson Pharma. Inc.; Gorgey, Khalil, et al., 2019), while the TT only group did not apply testosterone patches during the DD period (no-TT group). The 2 mg day −1 dose was chosen because it is the minimal dose according to the manufacturer. Patches were supplied every 30 days and returned on a monthly basis to ensure adherence to the intervention protocol. Participants were instructed to place the patches before bedtime and keep them on for 24 h. Patches were removed in the morning before bathing and re-attached to the same spot for the rest of the day. Participants discontinued testosterone patches on the last day of the study or 4 days prior to DD16 measurements.
Two-day assessment period
The 2-day assessment period included anthropometric measurements, body composition assessment by dual energy X-ray absorptiometry (DXA), and measurements of peak isometric and isokinetic torques. Additionally, magnetic resonance imaging (MRI) scans were obtained for whole thigh, individual skeletal muscle and IMF CSAs, and trunk VAT and subcutaneous adipose tissue (SAT). On the day of body composition assessment, participants were reminded to consume adequate fluids to stay hydrated and to eat a light meal 2-3 h prior to testing (Dixon et al., 2013). Participants were then escorted to the clinical research unit for dinner and remained in the unit overnight for metabolic measurements the following morning.
Anthropometrics and body composition assessments. The height of each participant was determined while lying in a supine position, measured along the left side. Two smooth wooden boards were placed at the participant's head and heels and the distance between them was measured to the nearest cm. Measurements of abdominal girth (widest region of the trunk), waist (narrowest region of the trunk), hip (encompassing both greater trochanters), and thigh (mid-point between the anterior superior iliac spine and the superior border of the patella) circumferences were taken in triplicate in the supine position. For the first three circumferences, participants were asked to take a deep breath and then exhale, and measurements were captured at the end of expiration. Measurements were repeated if there was a difference greater than 0.5 cm between repeated readings (Gorgey, Martin, et al., 2016).
Dual energy X-ray absorptiometry. Body composition was measured by whole-body scans using a GE Lunar Prodigy Advance scanner (GE Lunar Inc.). Fat-free mass (FFM), fat mass (FM), %FM, and LM for total body, trunk, legs, arms, android, and gynoid regions were measured by DXA (Spungen et al., 2003). The DXA scanner was calibrated using a daily quality control phantom according to the manufacturer's guidelines. Participants were transferred to the DXA table using either a ceiling lift or self-transfer with or without a sliding board. Participants were allowed 20 min in a flat supine position to account for possible fluid shifts before starting the scan. Knees were strapped together using a large Velcro strap above the knee joints and every effort was made to ensure that each leg was placed in a neutral position with the big toe facing upward. The lead research investigator checked that the whole-body posture was aligned straight with no rotation in the pelvis or shifting of the trunk. The arms were placed close to the body in a mid-prone position to ensure the total body was within the scanning field. All scans were performed and analyzed by a trained DXA operator using Lunar software version 10.5. Total regional borders were placed by the computer auto analysis program delineating anatomical regions of interest, and final adjustments were made to ensure optimum inter-participant reproducibility. We have previously reported the short- and long-term precision of regional and whole-body composition measures using DXA in persons with SCI.
Magnetic resonance imaging. Thigh muscle CSA (primary outcome variables): Magnetic resonance imaging was performed at the VA Medical Center using a General Electric Signa 1.5-T magnet as previously described (Elder et al., 2004; Gorgey, Khalil, et al., 2019). Transaxial images (12-15 slices; fast spin echo; repetition time, 850-1000 ms; echo time, 6.7 ms; imaging frequency, 63.8 MHz; echo number, 1; echo train length, 3; flip angle, 90°; field of view, 20 cm; matrix size, 256 × 256), 8 mm thick and 16 mm apart, were taken from the hip joint to the knee joint using a General Electric body array flex coil to measure thigh CSA. Using a localized coil, the signal-to-noise ratio was improved, resulting in high-resolution images for analysis. The acquisition time per leg was 3.5 min. The participant's legs were strapped together to mitigate involuntary muscle spasms, and participants were provided earplugs to minimize the noise. Images were analyzed using Win-Vessel software (Ronald Meyer, Michigan State University). To distinguish muscle from fat, the outer perimeter of the thigh muscle group was manually traced, and pixel signal intensity was automatically determined by the software. A bimodal pixel-intensity histogram was plotted that contained two distinct peaks, with the first peak representing muscle and the second peak representing fat. The mid-point value between the two peaks was used as the threshold to separate muscle pixels from IMF pixels, as previously described (Gorgey, Khalil, et al., 2019).
Regions of interest were manually traced, including whole thigh CSA (thigh CSA = muscle CSA + SAT CSA; Figure 2a). Whole skeletal muscle CSA is the entire thigh muscle CSA including IMF and excluding bone CSA (Figure 2b). Absolute skeletal muscle CSA was determined via signal intensity after excluding IMF and femoral bone CSA. SAT CSA was defined as the area between the outer border of the muscle CSA and the inner border of the thigh CSA (Figure 2c).
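The thresholding and CSA bookkeeping described above can be sketched as follows; this is a simplified stand-in for the Win-Vessel workflow, with a naive two-peak search and synthetic pixel data, and it assumes bone pixels have already been excluded:

```python
import numpy as np

def bimodal_midpoint_threshold(pixels, bins=256):
    """Return the midpoint between the two histogram peaks (muscle vs. fat)."""
    hist, edges = np.histogram(pixels, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    first = np.argmax(hist)                              # strongest peak
    mask = np.abs(np.arange(bins) - first) > bins // 8   # enforce peak separation
    second = np.argmax(np.where(mask, hist, 0))          # strongest peak far from it
    return (centers[first] + centers[second]) / 2

def csa_summary(pixel_area_cm2, n_thigh, n_muscle_region, n_absolute_muscle):
    """CSA bookkeeping: thigh = whole muscle + SAT; whole muscle = absolute muscle + IMF."""
    thigh = n_thigh * pixel_area_cm2
    whole_muscle = n_muscle_region * pixel_area_cm2
    absolute_muscle = n_absolute_muscle * pixel_area_cm2
    return {"thigh": thigh,
            "SAT": thigh - whole_muscle,
            "whole_muscle": whole_muscle,
            "IMF": whole_muscle - absolute_muscle,
            "absolute_muscle": absolute_muscle}
```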
Visceral and subcutaneous adiposity: Magnetic resonance imaging images were obtained using an Echelon RAPID Torso/Body Coil (Hitachi Medical Systems America) to capture multiaxial slices of the trunk region (Gill et al., 2020; Sumrell et al., 2018). Transverse axial images (axial in-phase/out-phase with a repetition time of 140 ms and echo times of 4.2 and 2 ms for the in-phase and the out-phase, respectively; a 42-cm field of view; matrix size of 256 × 256; one excitation; acquisition time of 40 s; slice thickness of 0.8 cm and interslice space of 0.4 cm) were obtained from the xiphoid process to L4-L5 and from L4-L5 to the femoral heads. In the supine position, participants had their lower extremities strapped to avoid unpredictable movement due to spasms during the scan and subsequent image artifacts. Participants were instructed to maintain their position during the scan and were asked to take a deep breath and hold it for 20 s to prevent any respiratory artifacts that could alter the quality of the images.

FIGURE 2 Representative MRI images of the mid-thigh showing the step-by-step capture and analysis procedure: (a) raw image; (b) whole thigh CSA after segmentation and tracing along the outer subdermal border, excluding the bone CSA (the whole thigh CSA includes thigh subcutaneous adipose tissue (SAT) and whole thigh muscle CSA); (c) whole thigh muscle CSA, measured after tracing along the deep subfascial border and excluding SAT (i.e., the white adipose tissue surrounded by the two large green circles) and bone CSAs (the whole thigh muscle CSA includes absolute muscle CSA and intramuscular fat, i.e., the white adipose tissue infiltrated within the anatomical boundaries of the different muscle groups, inside the inner green circle). CSA, cross-sectional area; MRI, magnetic resonance imaging.
Visceral adipose tissue was measured across different anatomical regions of the trunk: between the liver and kidneys (VAT L-K), between the kidneys and umbilicus (VAT K-U), between the iliac crests and femoral heads (VAT IC-FH), and in total (VAT total; Gorgey, Khalil, et al., 2019). TT patches were removed 48-72 h prior to MRI scans to avoid possible skin burn.
Day 2-Metabolic testing (secondary outcome variables)
After completing the body composition assessment, participants were then escorted to the clinical research unit for dinner and remained there overnight for metabolic studies the following day.
Basal metabolic rate. After an overnight fast of 10-12 h, participants were kept in a dark room for 20-30 min to attain a resting state during which BMR was measured as previously described. Briefly, while participants were in the supine position, a canopy was placed over the head. Each participant was allowed 2-3 min before the test started to ensure they were calm and comfortable prior to initiating measurements. All participants were instructed to stay awake during the entire test and to breathe normally. The canopy was then attached to a vacuum to draw expired gases to the flowmeter of the metabolic unit (COSMED KB42; COSMED). Prior to the test, the metabolic unit was calibrated using standard procedures as recommended by the manufacturer. Carbon dioxide production and oxygen consumption were used to calculate the respiratory exchange ratio, and BMR (kcal/day) was calculated using the average of the last 15 min of the test.
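The text does not state which formula the COSMED software applies to the gas-exchange data; a commonly used option is the abbreviated Weir equation, sketched here for illustration only:

```python
def bmr_weir(vo2_l_min, vco2_l_min):
    """Abbreviated Weir equation: kcal/day from average VO2 and VCO2 (L/min)."""
    kcal_per_min = 3.941 * vo2_l_min + 1.106 * vco2_l_min
    return kcal_per_min * 1440                    # minutes per day

def respiratory_exchange_ratio(vo2_l_min, vco2_l_min):
    return vco2_l_min / vo2_l_min

# Illustrative values: VO2 = 0.20 L/min, VCO2 = 0.17 L/min
print(round(bmr_weir(0.20, 0.17)))                         # ~1406 kcal/day
print(round(respiratory_exchange_ratio(0.20, 0.17), 2))    # 0.85
```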
Serum T, anabolic growth factors, lipid panel, adiponectin, and inflammatory biomarkers. After BMR, fasting blood samples were collected at approximately 6.30 a.m. Total testosterone was measured by liquid chromatography with isotope dilution mass spectrometry detection after supported liquid extraction (ESOTERIX INC.). The amount of testosterone in each sample was calculated from a linear plot generated by purified testosterone standards ranging from 2.5 to 5000 ng/dl.
Intravenous glucose tolerance test. After fasting blood samples, an intravenous line was placed to facilitate infusion of glucose and blood sampling (Gorgey, Khalil, et al., 2019). Blood samples were taken before and every 2-3 min after glucose injection (0.3 g/kg IV over 30-60 s) for 30 min, followed by blood collection every 5-10 min ending at 180 min after glucose injection. Twenty minutes after the glucose injection, a bolus of insulin (0.02 U/kg regular short-acting insulin, Humulin; Lilly) was injected to determine insulin sensitivity. Plasma glucose was measured by the autoanalyzer glucose oxidase method and plasma insulin concentrations were determined by commercial radioimmunoassay (ALPCO). The glucose disposal rate per unit of secreted insulin per unit time and the glucose-mediated glucose disposal rate were calculated from a least squares fitting of the temporal pattern of glucose and insulin throughout the intravenous glucose tolerance test (IVGTT) using the MINMOD program. The insulin sensitivity index (Si) describes the effect of insulin to promote glucose disposal and to inhibit hepatic glucose production. Glucose effectiveness (Sg) indicates the ability of glucose to cause its own uptake into the cell at basal insulin levels.
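MINMOD fits Bergman's minimal model to the IVGTT glucose and insulin time courses to obtain Si and Sg. The sketch below shows the model equations as a forward simulation with illustrative parameter values; it is not the fitting procedure used by MINMOD:

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_minimal_model(t_eval, insulin, G0, Gb, Ib, Sg, Si, p2):
    """Bergman minimal model: G(t) plasma glucose, X(t) remote insulin action.
    Sg = glucose effectiveness, Si = insulin sensitivity (Si = p3/p2)."""
    def rhs(t, y):
        G, X = y
        dG = -(Sg + X) * G + Sg * Gb
        dX = -p2 * (X - Si * (insulin(t) - Ib))
        return [dG, dX]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [G0, 0.0], t_eval=t_eval)
    return sol.y[0]                      # glucose trajectory (mg/dl)

# Illustrative run: insulin decaying back to basal after the IVGTT bolus
t = np.linspace(0, 180, 181)                              # minutes
insulin = lambda tt: 10 + 60 * np.exp(-tt / 20)           # µU/ml (synthetic)
glucose = simulate_minimal_model(t, insulin, G0=300, Gb=90, Ib=10,
                                 Sg=0.02, Si=3e-4, p2=0.03)
```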
Peak isometric and isokinetic torques (TT+RT only)
Briefly, peak isometric and isokinetic torques were measured using a Biodex isokinetic dynamometer after transfer from the wheelchair to the Biodex system using an Arjo barrier-free lift (Holman & Gorgey, 2019), only for the TT+RT group. We could not test participants enrolled in the no-TT group because of budgetary constraints. Participants were seated with the trunk-thigh angle at 90° and the knee flexed at 90° (where 0° corresponds to full knee extension). Participants were securely strapped to the chair by two crossover shoulder harnesses and a belt across the hip joint. The axis of rotation of the dynamometer was aligned to the anatomical knee axis. The lever arm was attached 2-3 inches above the lateral malleolus. For isometric peak torque, surface NMES was applied to both knee extensor muscle groups with the stimulation parameters set at 30 Hz and 450 µs. The current was set at 50 mA (two trials) and 100 mA (two trials) to test different muscle recruitment levels. Each isometric trial was separated by 10-15 s of rest to avoid muscle fatigue with recurrent activation. Each participant was allowed 30-60 s of rest between any two consecutive isometric or isokinetic trials. After completion of isometric testing, participants were allowed approximately 5 min of rest before proceeding with isokinetic testing on the same leg. For isokinetic peak torques, the knee extensor muscle group was tested at 60, 90, and 180 deg s −1 after setting the NMES parameters to 30 Hz, 450 µs, and 100-150 mA. Two trials were conducted at each speed, and measurements were performed at DD0 and DD16 after closely matching the arc of knee flexion-extension range of motion.
| Statistical analyses
All data were tested for normality using Shapiro-Wilk tests, and if necessary (p < 0.05), data were log-transformed prior to any statistical analysis. Outliers were detected using normal Q-Q plots at different time points (DD0 and DD16) for each group. Mixed-model ANOVA tests were performed to examine main time effects (DD0 and DD16), between-group (TT+RT vs. no-TT) differences, and time × group interactions on the primary (muscle CSAs) and secondary outcome variables over the course of the 16-week DD period. If there was an interaction effect, post hoc analyses were conducted using independent t-tests. To further dissociate a main time effect (DD0 and DD16), paired t-tests were conducted within each group separately. An intent-to-treat approach was adopted for participants who withdrew during the trial by maintaining their group assignments; this approach was used to ensure the current study was not underpowered. Statistical analyses were performed using IBM-SPSS version 26.0 (SPSS) and all values are presented as mean ± SD.
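Analyses were run in SPSS; an approximately equivalent workflow in Python (with a hypothetical long-format data file containing `subject`, `group`, `time` and an outcome column such as `muscle_csa`) might look like the following sketch:

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import shapiro

# Hypothetical file: one row per subject x time point
df = pd.read_csv("dd_outcomes_long.csv")

# Normality check; log-transform if violated (p < 0.05)
if shapiro(df["muscle_csa"]).pvalue < 0.05:
    df["muscle_csa"] = np.log(df["muscle_csa"])

# Mixed-model ANOVA: within factor = time (DD0, DD16), between factor = group
aov = pg.mixed_anova(data=df, dv="muscle_csa", within="time",
                     subject="subject", between="group")
print(aov)

# Follow-up paired t-tests within each group (e.g., if an interaction is present)
for g, sub in df.groupby("group"):
    wide = sub.pivot(index="subject", columns="time", values="muscle_csa")
    print(g, pg.ttest(wide["DD0"], wide["DD16"], paired=True)["p-val"].values)
```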
| RESULTS
Physical and SCI characteristics were not different between the two groups during the 16-week DD period (Table 1). Three participants from the TT+RT group (n = 9) withdrew from the training because of conflicts with personal commitments or work. However, one of the three participants agreed to participate in the DD0 and DD16 measurements without receiving testosterone patches or participating in the NMES-RT (his data were included). The second and third participants withdrew after completion of week 1 and week 7, respectively (their data were not included in the analysis). Seven participants were considered for analysis in the TT+RT group. In the TT+RT group (n = 6), adherence to the progressive RT protocol was 96 ± 5% over 16 weeks.
Across the 16 weeks, participants in the TT+RT group successfully completed the target number of sets (three sets) and repetitions (30 repetitions per week) on both legs. Progression of the training for both lower extremities in the TT+RT group is listed in Table 2. Over the 16-week period, weights were significantly decreased to 2 lbs. (Table 2). Q-Q plots detected outliers in transverse supine diameter, fasting insulin, and Si data, and these outliers were excluded from further statistical analyses.
Fasting insulin, IL-6, and CRP data at DD0 and DD16 did not meet the assumption of normality (i.e., Shapiro-Wilk test of p < 0.05) and were log-transformed before conducting further statistical analyses.
Physical characteristics and anthropometrics
Physical characteristics (Table 1) and anthropometrics (Table 3) did not change in either the TT+RT or the no-TT group. There was a trend toward increasing sagittal diameter (p = 0.065) in both groups (Table 3).
Body composition variables
Arm lean mass and FFM showed a significant time effect (p = 0.003), indicating a decline in both groups (Table 4). A between-group effect was noted in leg lean mass (p = 0.018); follow-up independent t-tests revealed differences in leg lean mass at DD0 (p = 0.005) and a trend at DD16 (p = 0.058).
There was a trend toward an interaction in total body FM between the TT+RT and no-TT groups (p = 0.09).
Skeletal muscle and IMF CSAs
Thigh muscle CSA. Table 5 presents the changes in thigh skeletal muscle CSAs in the TT+RT and no-TT groups following the 16-week DD phase.
Knee flexors and hip adductor muscle CSA
There was a trend toward an interaction between the TT+RT and no-TT groups in left adductor muscle CSA (9.9% at DD16, p = 0.053; Table 5), without changes in the right adductor muscle CSA. Knee flexor muscle CSAs did not show any changes within or between groups (p > 0.05).

Visceral and subcutaneous adipose tissue

Figure 3a,b presents VAT CSA and the VAT:SAT ratio across different anatomical trunk landmarks in persons with SCI. There were no interactions for VAT CSA or the VAT:SAT ratio (p > 0.05), and there were no differences between the TT+RT and no-TT groups. SAT L-K, K-U, and IC-FH remained unchanged (p > 0.05) in both the TT+RT (184 ± 121 to 191 ± 107 cm2) and no-TT (155 ± 67 to 149 ± 66.5 cm2) groups.
| Metabolic profile
Blood pressure and heart rate

There was no main effect of time and no between-group differences in resting systolic blood pressure (TT+RT: 114 ± 18 to 115 ± 22 mmHg and no-TT: 100 ± 8 to 101 ± 14 mmHg; ps = 0.845 and 0.109, respectively), resting diastolic blood pressure, or resting heart rate.
TABLE 5 Whole thigh CSA, whole and absolute muscle CSA, and intramuscular fat for participants in the TT+RT and no-TT groups following 16 weeks of dose de-escalation. Note: *, statistical difference from DD0, whole thigh muscle CSA (p < 0.0001); x, statistical interaction between TT+RT and no-TT groups for whole thigh muscle CSA (p < 0.0001); x′, trend toward an interaction; #, between-group differences (p < 0.05); #′, trend between both groups (p = 0.07-0.09). A1: area 1 (#1-4), the average of the proximal four CSA slices of the thigh immediately following the inferior border of the gluteus maximus muscle; A2: area 2 (#5-8), the average CSA of the mid four MRI slices of the thigh; A3: area 3 (#9-12), the average CSA of the distal four MRI slices of the thigh toward the knee joint; Average: the average CSA of the 12 MRI slices of the entire thigh.
| Peak isometric and isokinetic torques
In the TT+RT group (n = 7), there were no changes (p > 0.05) in peak isometric torques at 50 mA or 100 mA following 16 weeks of the DD period. The slowness in the rise time was also maintained at both 50 mA (0.12 ± 0.016 to 0.11 ± 0.02 ms, p = 0.1) and 100 mA (0.13 ± 0.007 to 0.15 ± 0.013 ms, p = 0.2). Isokinetic peak torques were also maintained (p > 0.05) at 60, 90, and 180 deg s −1 in the TT+RT group.
| DISCUSSION
The major findings of the current study indicated that a 16-week DD program of adding low-dose TT to NMES-RT was minimally effective in maintaining the effects of the initial training on muscle size and BMR compared to the no-TT group. Participants in both groups experienced increases in VAT without noticeable changes in the cardiovascular, carbohydrate, lipid, or inflammatory biomarker profiles. Cessation of testosterone administration resulted in a continued decrease in muscle CSA and an increase in VAT CSA in the no-TT group. Furthermore, the addition of 2 mg day −1 TT to NMES-RT did not maintain the increase in muscle size or the decrease in
VAT that was previously observed following 16 weeks of TT+RT compared to TT only (Gorgey, Khalil, et al., 2019). Neither group showed additional changes in body composition or metabolic profile over the 16 weeks of detraining. Finally, the de-escalation training was successful in retaining the neuromuscular adaptations previously demonstrated following TT+RT, namely increased peak isometric and isokinetic torques as well as slowing of the rise time. The current study adopted a novel detraining strategy of administering low-dose TT (2 mg day −1) while reciprocally regressing the weightlifting program by decreasing the frequency of NMES-RT from twice to once weekly and decreasing ankle weights by 2 lbs. per week. By the end of week 12, participants in the TT+RT group performed 30 repetitions per leg with weights decreased to 2 lbs. and continued training for an additional 4 weeks with the same weights. Similar to previous work (Bickel et al., 2011), we aimed to decrease the training volume by reducing the number of sets and the frequency of the training. In the first phase of the study, participants were able to lift on average 20 lbs. per leg over a 16-week period (Gorgey, Khalil, et al., 2019). Considering that persons with SCI are considered a model of aging (O'Brien, Wade, et al., 2017), it is interesting to note the similarity between the current findings and previous work in able-bodied subjects (Bickel et al., 2011). Previous work noted that a DD of 1/3 or 1/9 of the initial RT volume did not successfully maintain the gains in myofiber size or type in older adults; however, muscle strength was preserved (Bickel et al., 2011). Furthermore, both groups experienced rebound effects in circulating endogenous testosterone after the dose of transdermal patches was reduced from the 2-6 mg day −1 used during the initial training phase (Gorgey, Khalil, et al., 2019). During the DD phase, the low dose of 2 mg day −1 was not sufficient to inhibit endogenous testosterone in the TT+RT group. This rebound in endogenous testosterone may explain the modest, non-significant increases in muscle CSA and BMR in the no-TT group.
In the current work, TT+RT was minimally effective in preventing the loss in muscle mass. In the first phase, participants gained approximately 30%-40% in absolute muscle CSAs, and they lost close to 10% during the second, detraining phase. Regressing the frequency, repetitions, and ankle weights over the 16-week period of DD eventually reduced the training volume below the level necessary to evoke protein accretion and muscle hypertrophy. Others have shown that the gains in muscle mass and trabecular bone parameters were partially preserved following cessation of high-volume functional electrical stimulation cycling for 1 year in persons with SCI (Frotzler et al., 2009). Another recent study showed that 3 weeks of detraining resulted in muscle thickness returning to initial baseline following 8 weeks of combined functional electrical stimulation exercise with blood flow restriction in persons with complete SCI (Skiba et al., 2021). We have previously shown in the same group the activation of phosphorylated AKT as a molecular signal of muscle hypertrophy. Decreasing the volume of NMES-RT may abruptly deactivate the signaling pathways necessary to maintain the previously gained muscle hypertrophy. On the other hand, the no-TT group did not receive any TT over the 16-week DD period, similar to previous work (Bauman et al., 2015). Bauman et al. (2015) showed that despite cessation of TT following 12 months, hypogonadal participants with SCI maintained the gain in lean mass and resting metabolic rate for an additional 6 months. In the initial 16 weeks of this trial, participants enrolled in the TT only group showed a modest decrease in VAT (Gorgey, Khalil, et al., 2019). Unlike the findings of Bauman et al., cessation of TT resulted in a reciprocal increase in VAT in both groups.
Magnetic resonance imaging revealed robust changes in muscle size and VAT during the 16-week detraining period. Furthermore, MRI facilitated the measurement of whole muscle CSA and absolute muscle CSA after subtracting IMF. We intentionally did not average data across the right and left thigh muscles, in order to check whether the effect of detraining was even between legs and because each leg was trained separately (Mahoney et al., 2005), especially given that the magnitude of ankle weights differed slightly between legs at the end of the initial phase of the trial (Gorgey, Khalil, et al., 2019). There were no differences between the right and left legs as a result of the detraining. DXA did not demonstrate major changes in body composition over the 16-week DD period. However, DXA has previously been used to study the changes in legs and total body lean mass after more than 2 years of detraining in persons with complete SCI (Gorgey, Martin, et al., 2016). This may explain previous research findings that showed retention of gains in lean mass following 6 months of cessation of testosterone after using variable doses for 12 months in hypogonadal men with SCI (Bauman et al., 2015). The discrepancy between the studies could simply be explained by the use of different imaging techniques. Another important consideration is that 73% of the TT+RT group had low serum testosterone levels (215-313 ng/dl) and 64% of the no-TT group had low testosterone levels (140-402 ng/dl; Gorgey, Khalil, et al., 2019). Differences in gonadal status between the studies may have required a higher dose of testosterone (5-10 mg day −1) with longer carryover effects in men with SCI (Bauman et al., 2015).
The current findings suggest that a DD program of TT+RT and cessation of TT were not successful in retaining the decrease in VAT in persons with SCI. The increase in VAT could simply be explained by the decreased level of physical activity after SCI. VAT imposes significant cardiometabolic risks in persons with SCI (Farkas et al., 2017; Gill et al., 2020). Several studies suggested that VAT is associated with a spectrum of inflammatory biomarkers and likely impacts hepatic metabolism and causes mitochondrial dysfunction (Farkas et al., 2017; O'Brien, Wade, et al., 2017; Sumrell et al., 2018). Increasing the level of physical activity and NMES-RT have been shown to successfully reduce VAT in persons with SCI (Gorgey, Khalil, et al., 2019; Pelletier et al., 2018). During the initial 16-week training period in these participants, both TT+RT and TT only modestly decreased VAT CSA. In a previous case report, the authors demonstrated that dietary manipulation via caloric restriction for 8 weeks combined with TT remarkably decreased both trunk VAT and SAT.
The effects of detraining on the metabolic profile were reflected in a 3.9% decline in BMR in the TT+RT group. The initial phase showed a 17% increase in BMR following 16 weeks of TT+RT (Gorgey, Khalil, et al., 2019). It is important to note that BMR was non-significantly higher at DD16 in the no-TT group; this could simply be attributed to the increase in endogenous testosterone following cessation of TT administration in this group (Welle et al., 1992). In the TT+RT group, the loss in muscle mass was accompanied by a non-significant decrease in BMR. Previously, it was noted that adiponectin, an insulin-sensitizing hormone, may drive the increase in BMR via an increase in mitochondrial citrate synthase (O'Brien et al., 2018). Persons with SCI have greater levels of circulating adiponectin compared to able-bodied controls (Maruyama et al., 2008; Wang et al., 2005). We have further demonstrated that the addition of TT to NMES-RT decreased circulating adiponectin (Gorgey, Khalil, et al., 2019). This higher level of adiponectin may compensate for the loss of sympathetic nervous system activity and drive an increase in BMR in persons with SCI (O'Brien et al., 2018). NMES-RT induces skeletal muscle hypertrophy and increases BMR. The increase in BMR may result in a feedforward mechanism that suppresses adiponectin levels in persons with SCI (Gorgey, Khalil, et al., 2019).
The gain in muscle size following 16 weeks of training using TT+RT was mirrored by increases in peak torque and specific tension of the trained knee extensor muscles, and by slowing of the rise time (Holman & Gorgey, 2019). The increase in specific tension reflected increases in both neural drive and muscle hypertrophy of the knee extensors. Previous reports indicated that detraining is associated with reductions in maximal voluntary contraction (Colliander & Tesch, 1992; Narici et al., 1989), muscle CSA (Narici et al., 1989), and neural drive to the muscle (Narici et al., 1989). In the current study, NMES-RT combined with low-dose TT was successful in maintaining the gains in peak isometric and isokinetic torques as well as the slowing of the rise time that were previously noted in the initial phase of the trial. The findings are in line with previous work suggesting that neuromuscular adaptations were preserved during months of detraining following heavy RT (Andersen et al., 2005).
| Limitations and future directions
In addition to the limitations of our previous work (Gorgey, Khalil, et al., 2019), the current study had a small sample size. Due to budgetary constraints, it was extremely difficult to retain participants across the 16-week training and 16-week detraining periods of the study. We managed to retain only just over 50% of the sample that was enrolled in the initial 16-week training phase (Gorgey, Khalil, et al., 2019). This also limited our ability to measure neural adaptations in the no-TT group. Additionally, three of the participants in the TT+RT group withdrew from participating in the 16-week period of DD. However, one of these participants agreed to complete the DD0 and DD16 measurements. We included his data with those of the other six participants because the primary research question was aimed toward understanding the effects of detraining on cardiometabolic profiles and neuromuscular parameters in persons with SCI. In the current work, we defined detraining as either a decrease in the volume or dose of the intervention (TT+RT group) or complete cessation of the intervention (no-TT group). Additionally, careful screening of his data did not reveal values that would be considered outliers relative to the mean of the other six participants in the TT+RT group.
Another concern is the failure to account for participants' dietary habits during the training period. It is possible that participants adopted poor dietary habits that offset the carryover effects of TT+RT or TT in men with SCI.
The current findings highlighted the need to develop rehabilitation or pharmaceutical approaches to attenuate the loss in lean mass during a detraining program. Alternating leg exercise with an arm cycling exercise or circuit RT programs may encourage long-term compliance and allow participants to increase their daily level of physical activity. Leisure time physical activity has been shown to attenuate the development of chronic comorbidities and the gain in VAT in persons with SCI (Buchholz et al., 2009;Pelletier et al., 2018).
| CONCLUSIONS
A follow-up DD training program (TT+RT) or cessation of TT (no-TT) was minimally effective in preventing the loss in muscle size, the decrease in BMR, and the increase in VAT. These findings expand our understanding of how muscle size, BMR, and VAT change during a DD of testosterone and NMES-RT doses in persons with SCI. The findings further suggest that neuromuscular adaptations were retained in men with SCI, highlighting a possible discordance between spinal cord circuitry and muscle adaptations following detraining in men with SCI. Considering the prevalence of cardiometabolic disorders in this population, the current findings are helpful for designing future studies to explore the effects of longitudinal trials via applications of telehealth technology to ensure adherence and long-term compliance. | 2021-10-30T06:17:30.233Z | 2021-10-29T00:00:00.000 | {
"year": 2021,
"sha1": "042eff6b76d383e2bc54d284e605ba7fd1206926",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.14814/phy2.15089",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca70bde10b493d72b045765a1bfbdf7164eb8544",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235899310 | pes2o/s2orc | v3-fos-license | Panicle Counting in UAV Images For Estimating Flowering Time in Sorghum
Flowering time (time to flower after planting) is important for estimating plant development and grain yield for many crops including sorghum. Flowering time of sorghum can be approximated by counting the number of panicles (clusters of grains on a branch) across multiple dates. Traditional manual methods for panicle counting are time-consuming and tedious. In this paper, we propose a method for estimating flowering time and rapidly counting panicles using RGB images acquired by an Unmanned Aerial Vehicle (UAV). We evaluate three different deep neural network structures for panicle counting and location. Experimental results demonstrate that our method is able to accurately detect panicles and estimate sorghum flowering time.
INTRODUCTION
Sorghum (Sorghum bicolor (L.) Moench) is used in biofuels, forage, grain, and food due to its ability to resist water-limited conditions [1]. Plant breeders evaluate various properties of a crop during the growing season. Measurement of physiological properties of plants is known as phenotyping [2]. Flowering time (time to flower after planting) is an important phenotypic trait related to plant development and grain yield in sorghum [3]. A sorghum plant is considered "flowering" when a panicle (clusters of grains on a branch) is flowering (or blooming), and a plot (a section of the crop field) is flowering when 50% of the sorghum plants have reached this stage [4]. We can evaluate flowering in a sorghum plant by observing its panicles as shown in Figure 1. While we are unable to determine the state of flowering of individual panicles due to the resolution of most imagery, we can consider counting across temporal data as a potential surrogate measure, as the capability to detect panicles increases when the flowers emerge from the tight panicle.
Traditional phenotyping methods for panicle counting use manual counting, which is time-consuming in large fields with multiple genotypes of plants. In recent years, the use of Unmanned Aerial Vehicles (UAVs) has been demonstrated for high-throughput phenotyping of many traits [5]. Compared to traditional phenotyping, UAVs equipped with multiple sensors can collect field data in a non-destructive way and in less time. For this study, high resolution orthorectified images [6] acquired by an RGB camera on a UAV platform were analyzed. Additional details are included in the description of the datasets below. Deep neural networks provide promising results for detecting and counting panicles. In [7], Ghosal et al. developed a weakly supervised deep learning framework with RetinaNet [8] to detect and count sorghum panicles. Chandra et al. proposed an active learning method with Faster-RCNN [9] for panicle detection in cereal crops [10]. Segmentation-based networks can be used for panicle detection and counting as well, as shown by Lin et al. [11]. In this paper, we investigate the panicle detection performance of multiple networks and use the counts of the best network for flowering time estimation.
OUR APPROACH
Our method consists of multi-temporal panicle detection and flowering time series estimation, as shown in Figure 2. For panicle detection training and testing, we use an RGB orthomosaic [6] photo of a sorghum field in West Lafayette, Indiana, USA acquired by a Sony ILCE-7RM3 camera mounted on a DJI Matrice 600 Pro platform on July 22, 2020 at 20m altitude. The orthomosaic photo is cropped into individual images of two row segments of plants. Each cropped image is horizontally divided into two sub-images. The images are further separated for training, validation, and testing. We manually ground truth the images by labeling each panicle with a bounding box. In total, we have 500 images for training, validation, and testing. The images have dimensions of 800 × 600 pixels which are resized to 512 × 512 pixels during training. Flowering time was estimated for a field of sorghum test plots (∼200,000 plants/hectare), comprised of two replicates of 80 varieties in a randomized block design (plot size: 7.6m × 3.8m), 10 rows per plot. In practice, the flowering time varies for different genotypes of sorghum, so this needs to be accounted for. For this specific genotype, with a planting date of May 13, 2020, we select the multi-temporal RGB images from 65, 68, 70, 76, 79, and 83 days after planting. Each image is cropped from the associated orthomosaic photo with size of 3000 × 1200 pixels. The cropped image has 8 row segments of plants because 2 rows in the middle were destructively sampled for biomass. The ground truth data is obtained by manually counting panicles in these cropped images.
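A minimal sketch of the cropping step described above; the file name, plot coordinates and the top/bottom split are hypothetical:

```python
from PIL import Image

def crop_two_row_segment(orthomosaic_path, box):
    """Crop one two-row plot segment from the orthomosaic (box in pixel
    coordinates: left, upper, right, lower) and split it into two sub-images."""
    segment = Image.open(orthomosaic_path).crop(box)
    w, h = segment.size
    return segment.crop((0, 0, w, h // 2)), segment.crop((0, h // 2, w, h))

# Hypothetical plot location in the 22 July 2020 orthomosaic
top, bottom = crop_two_row_segment("sorghum_2020_07_22_ortho.tif", (1000, 2000, 2600, 3200))
top.save("plot_012_top.png")
```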
We chose the deep networks based on their performance on a general object detection dataset such as COCO [12]. We selected three detection-based deep networks for panicle detection.
RetinaNet. RetinaNet [8] is a one-stage detection-based network with focal loss as the loss function as shown in Figure 3. It uses ResNet [13] and feature pyramid network (FPN) [14] as backbone networks. Each level of the FPN is connected with a sub-network for bounding box regression and object classification. The focal loss is used in the classification sub-network. In our experiments, we choose ResNet-101 with FPN as the backbone for RetinaNet.
YOLOv5. YOLOv5 [15] is a one-stage detection-based network. The general structure of YOLOv5 consists of backbone, neck and prediction as shown in Figure 3. YOLOv5 uses CSPNet [16] as its backbone architecture. FPN [14] and Path Aggregation Network (PANet) [17] are used for the neck of YOLOv5. There are four different versions of YOLOv5. The main differences between the versions are their depth and width. We chose the YOLOv5x model for our experiments since it has the best accuracy across the different versions.
Faster-RCNN. Faster-RCNN [9] is a two-stage detection-based network consisting of a feature map extractor, a region proposal network (RPN), and a Region of Interest (ROI) pooling and classification network as shown in Figure 3. The main idea of Faster-RCNN is to use the RPN to generate bounding box proposals. We use ResNet-101 with FPN as the feature map extractor in the Faster-RCNN model.
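Once any of these detectors has been fine-tuned on the panicle bounding-box annotations, counting reduces to thresholding detection scores. The sketch below uses torchvision's off-the-shelf Faster R-CNN (ResNet-50 FPN backbone, COCO weights) only as a stand-in; it is not the trained model or the exact backbone used in the paper:

```python
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# COCO-pretrained weights are only a starting point; in practice the detection
# head would be replaced and the model fine-tuned on the panicle annotations.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def count_panicles(image_path, score_threshold=0.5):
    """Count detections above a confidence threshold in one plot image."""
    img = F.to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]           # dict with 'boxes', 'labels', 'scores'
    keep = pred["scores"] >= score_threshold
    return int(keep.sum()), pred["boxes"][keep]

n_panicles, boxes = count_panicles("plot_012_top.png")
```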
EXPERIMENTAL RESULTS
We split the 500 images into training (80%), validation (10%), and testing (10%). For all three networks, we start with models pretrained on the COCO dataset, as this reduces training time. The learning rate is set to 0.00001 for all three networks. The training time for each network is around 30 minutes using 4 NVIDIA GTX 1080 Ti graphics cards. Validation is performed every 10 epochs.
In these equations, true positives, false positives and false negatives are represented by TP, FP and FN, respectively. In Equation 3, k refers to the k-th threshold for precision and recall. In Equations 4-6, C_i is the ground truth count in the i-th image and N is the number of image samples.
We evaluate the performance of the three networks with the validation and testing datasets. The results are shown in Table 1. We use a hybrid genotype sorghum with multi-temporal panicle counting ground truth data for flowering time estimation (see Section 2). The shape and color of panicles varied for each individual variety of sorghum. We selected the variety based on its similarity to our training data. We use our panicle counting deep network to estimate the counts for each test image without resizing. For early dates in the time sequence, some panicles that have not yet bloomed can still be detected by the network; we set a threshold on the bounding box size to remove them. We then fit a third degree polynomial to the estimated counting data to obtain the panicle count time series as shown in Figure 4, with the counts in Table 3. The estimated flowering time is the intersection between the line associated with half of the ultimate number of panicles counted and the flowering curve. Our estimated flowering time is 68 days after planting, which is nearly identical to the result from the manual counts.
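A sketch of the flowering-time estimate described above: fit a third-degree polynomial to the per-date counts and locate the day at which the fitted curve reaches half of the final count. The counts below are illustrative, not the data from Table 3:

```python
import numpy as np
from scipy.optimize import brentq

days   = np.array([65, 68, 70, 76, 79, 83])            # days after planting
counts = np.array([40, 95, 130, 180, 195, 200.0])      # illustrative panicle counts

poly = np.polynomial.Polynomial.fit(days, counts, deg=3)
half_final = counts[-1] / 2

# Flowering time = day where the fitted curve crosses half the final count
flowering_day = brentq(lambda d: poly(d) - half_final, days[0], days[-1])
print(round(flowering_day, 1))
```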
CONCLUSION AND DISCUSSION
In this paper, we propose a method for flowering time estimation by counting panicles in UAV images. We evaluate the performance of three popular detection-based network architectures and show that YOLOv5 has the best performance. We also describe the use of multi-temporal panicle counting for flowering time estimation. Our result shows the estimated flowering times are nearly identical to the results of manual counting. Future work will include training with panicle images with different shape and color to generalize the method for more varieties of sorghum plants.
ACKNOWLEDGMENTS
We thank Professor Ayman Habib and the Digital Photogrammetry Research Group (DPRG) from the School of Civil Engineering at Purdue University for providing the images used in this paper. The work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0001135. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. Address all correspondence to Edward J. Delp, ace@ecn.purdue.edu | 2021-07-16T01:15:41.762Z | 2021-07-11T00:00:00.000 | {
"year": 2021,
"sha1": "5196dbc0ca25a955cc6ee39fa19006f49cf5dd59",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2107.07308",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5196dbc0ca25a955cc6ee39fa19006f49cf5dd59",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
252778431 | pes2o/s2orc | v3-fos-license | How compliance with behavioural measures during the initial phase of a pandemic develops over time: A longitudinal COVID‐19 study
Abstract In this longitudinal research, we adopt a complexity approach to examine the temporal dynamics of variables related to compliance with behavioural measures during the COVID‐19 pandemic. Dutch participants (N = 2399) completed surveys with COVID‐19‐related variables five times over a period of 10 weeks (23 April–30 June 2020). With these data, we estimated within‐person COVID‐19 attitude networks containing a broad set of psychological variables and their relations. These networks display variables' predictive effects over time between measurements and contemporaneous effects during measurements. Results show (1) bidirectional effects between multiple variables relevant for compliance, forming potential feedback loops, and (2) a positive reinforcing structure between compliance, support for behavioural measures, involvement in the pandemic and vaccination intention. These results can explain why levels of these variables decreased throughout the course of the study. The reinforcing structure points towards potentially amplifying effects of interventions on these variables and might inform processes of polarization. We conclude that adopting a complexity approach might contribute to understanding protective behaviour in the initial phase of pandemics by combining different theoretical models and modelling bidirectional effects between variables. Future research could build upon this research by studying causality with interventions and including additional variables in the networks.
BACKGROUND
During the initial phase of a pandemic, a vaccine is often absent and compliance with behavioural measures such as social distancing and isolation are often considered of pivotal importance to curtail the spread of the virus. Research during the COVID-19 pandemic shows that the majority of people reported compliance with behavioural measures but also indicates ample room for improvement (Hensel et al., 2022). Thus, greater insight into determinants of compliance is crucially important for improving our ability to understand, predict and improve compliance during epidemic outbreaks (Betsch et al., 2020).
The literature on compliance with behavioural guidelines during the COVID-19 pandemic is already immense, with numerous studies pointing towards factors contributing to compliance or the lack thereof, from a range of theoretical perspectives, such as the Theory of Planned Behaviour (Ajzen, 1991;Gibson et al., 2021), the Health Belief Model (Clark et al., 2020;Rosenstock, 1974) and Moral Foundations Theory (Chan, 2021; Graham et al., 2013). This body of work has generated important insight into the psychological underpinnings of compliance in the context of the COVID-19 pandemic, but theoretical frameworks like the ones just mentioned are inherently restrictive because of the limited number of variables they focus on. An influential perspective on compliance during pandemics that is less restrictive is that by Bish and Michie (2010), who provide an extensive review of the literature on this topic and discuss a wide range of determinants that play a role in predicting health behaviour during a pandemic. As such, most empirical work on this topic takes the perspective of a particular model and the specific elements in it, while the influential perspective by Bish and Michie takes a broader perspective but is theoretical in nature.
Our current aim is to deepen our understanding of compliance with behavioural measures during the initial phase of a pandemic by taking a complexity approach. We do so in accordance with the conceptual frameworks provided by Bish and Michie (2010). We add to this existing literature by conducting empirical research that includes not only the constructs of these conceptual frameworks but also additional variables that more recent research showed to be relevant in the context of pandemics. Moreover, we employ a longitudinal design that provides insight into how relations between variables develop over time and thus indications of causal effects. Such longitudinal designs seem to be most lacking in the literature on pandemics thus far (Bish & Michie, 2010). Bish and Michie (2010) propose frameworks on health behaviour during a pandemic with different attitudinal and demographic determinants depending on the type of protective behaviour. In our current research, we focussed on preventive behaviours (e.g. hygiene behaviours and vaccination) and avoidant behaviours (e.g. social distancing and working from home) since these resonate with the behavioural measures as advised by Dutch authorities during the initial phase of the pandemic.
KEYWORDS: broad attitude networks, compliance with behavioural measures, COVID-19 pandemic, longitudinal, within-person temporal effects

According to Bish and Michie (2010), attitudinal variables that are associated with preventive and avoidant behaviours include perceived severity, susceptibility, costs, efficacy and self-efficacy but also social norms, cues to action, knowledge, trust in authorities and state anxiety. Moreover, they argue that most of these variables are covered by three well-known models within psychology that have been used to explain relevant behaviour, namely the Theory of Planned Behaviour (TPB; Ajzen, 1991), the Health Belief Model (HBM; Rosenstock, 1974) and the Protection Motivation Theory (PMT; Rogers, 1975). The TPB predicts behaviour through intention, with the intention being influenced by attitudes, behavioural control and behavioural norms. The HBM explains health-related behaviour through the perception of one's susceptibility and severity of the disease, benefits and barriers of the behaviour, self-efficacy and cues to action. PMT seeks to understand behaviour through threat appraisal (i.e. severity of and vulnerability to the disease) and coping appraisal (i.e. response and self-efficacy). Research during the COVID-19 pandemic also associates components of these models with compliance with behavioural measures, namely attitudes (Bogg & Milad, 2020), perceived control and severity (Li et al., 2020), risk perception (Schneider et al., 2021), social norms (Tunçgenç et al., 2021) and self-efficacy (Shahnazi et al., 2020). Bish and Michie (2010) argue that various psychological factors not included in these models are also relevant in the context of pandemics, such as knowledge about the disease, trust in authorities and state anxiety. The latter suggests that adverse effects of the pandemic on people's mental health should also be considered when aiming to understand behaviour in the context of pandemics. Recent research during the COVID-19 pandemic indeed associates compliance with behavioural measures with mental health, namely fear of the virus (Harper et al., 2021) and stress (Lieberoth et al., 2021). Additionally, potential consequences of behavioural measures such as changes in lifestyle and loneliness are associated with adverse mental and physical health effects (Blom et al., 2021; Leigh-Hunt et al., 2017). These findings suggest it is important to take changes in (mental) health into account when investigating compliance during a pandemic.
Thus, research points towards a wide range of interrelated variables relevant to health behaviour during a pandemic. Chambon, Dalege, Elberse, et al. (2022) corroborated this in the context of the COVID-19 pandemic by adopting a complex psychological systems approach to provide an overview of variables related to preventive behaviours in a study conducted in both the United Kingdom and the Netherlands. This resulted in a descriptive account of how these variables are interrelated and insight into differences between the countries. The study could not provide insight into the directions of relations between variables and their temporal effects due to its cross-sectional design (Chambon, Dalege, Elberse, et al., 2022).
Prior research suggests that temporal effects are an important subject of investigation because psychological factors associated with compliance with behavioural measures during pandemics can show substantive changes over time (Ibuka et al., 2010; van der Weerd et al., 2011). Likewise, research during the COVID-19 pandemic showed fluctuations in risk perception (Schneider et al., 2021), psychological distress (Daly & Robinson, 2021) and well-being (Wang et al., 2021). Such temporal changes in psychological factors associated with compliance highlight the importance of longitudinal research into compliance during pandemics. Accordingly, the current research contributes to the existing literature by examining the temporal effects of variables and their interrelations. We focus on variables that can be expected to fluctuate during the initial phase of a pandemic and not relatively stable variables such as demographics, personality traits and political preferences. We recognize that such variables can be important for compliance as well and may moderate the relations between variables included in this study, but their effects fall outside of the scope of the present research.
To summarize, prior research showed that a broad set of variables can affect compliance with behavioural measures during pandemics. Also, there is a gap in the literature regarding longitudinal research that provides insight into the temporal effects of variables relevant to protective behaviours during a pandemic. Consequently, parsimonious theoretical models might not provide sufficient insight into the complexity of compliance during pandemics. Nevertheless, most research into compliance focussed on a rather limited set of variables examined in a cross-sectional research design. The current research therefore adopts a complexity approach in which a more diverse set of variables was included and employs a longitudinal research design. Importantly, the aim of this research is not to present a comprehensive overview or an exhaustive set of variables relevant for compliance but to demonstrate that compliance during pandemics can be explained by a broad set of variables that extends beyond the traditional models (i.e. complexity approach) and to provide insight into directions of effects between these variables (i.e. temporal effects).
Complexity approach
We utilize a complexity approach to gain insight into how psychological variables organize and interact over time. A promising means to this goal is psychological network analysis, which (1) enables exploration of data with many variables of different types, (2) provides insight into patterns of unique relations that remain after controlling for effects of other variables (i.e. pairwise conditional dependencies) and (3) presents these statistical associations in powerful visualizations . A theoretically underpinned model of attitudes that adopts a complexity approach through network analysis is the Causal Attitude Network (CAN) model (Dalege et al., 2016). This model conceptualizes attitudes as complex systems of interacting cognitive, affective and behavioural factors. This systemic approach models properties of attitude dynamics, including their relation to behaviour. Because this approach naturally integrates many different factors relevant to behaviour, we adopt an extension of the CAN model to improve our understanding of compliance behaviour.
Attitude networks in the CAN model consist of nodes that represent cognitive, affective and behavioural components of attitudes. Links between nodes represent interactions between these components. These links, also known as edges, represent linear relations between nodes and can be positive, indicating an excitatory relation, or negative, indicating an inhibitory relation. The strength of interactions between nodes can vary, resulting in a weighted network. In the CAN model, the overall state of the system (i.e. the pattern of feelings, thoughts and behaviours that characterize a person) arises out of these interactions between attitude components (Dalege et al., 2016). Figure 1 shows a simplified and hypothetical network of the attitude towards physical distancing. Through patterns of interaction, some node pairs will be more strongly aligned than others. These patterns can be identified using partial correlations, where the relation between two given nodes is conditional on all other nodes in the network (see Epskamp, 2020a;van Borkulo et al., 2014). Thus, via statistical analyses of data, empirical estimates of attitude network models can be obtained (Dalege et al., 2017). Representations of these models can yield important information to advance our understanding of behavioural variables. For example, the centrality of specific nodes may provide information on the relative importance of different nodes in the network organization, that is, nodes central to the network have more and stronger relations with other nodes. Ideally, network models are estimated on longitudinal data, as such data can harbour important clues about the causal organization of the network. This results in network models that include directions of effects between variables (see Figure 1).
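To make the estimation idea concrete, the sketch below computes a regularized partial-correlation (contemporaneous) network from a hypothetical data frame of node scores using scikit-learn; the study itself used dedicated network-psychometrics tooling and also estimated temporal (lagged) effects, which this sketch does not cover:

```python
import numpy as np
import pandas as pd
from sklearn.covariance import GraphicalLassoCV

df = pd.read_csv("wave1_node_scores.csv")       # hypothetical: one column per node
X = (df - df.mean()) / df.std()                  # standardize node scores

# Sparse inverse covariance (graphical lasso); zeros imply conditional independence
precision = GraphicalLassoCV().fit(X.values).precision_

# Convert the precision matrix to partial correlations (edge weights)
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)

edges = pd.DataFrame(partial_corr, index=df.columns, columns=df.columns)
print(edges.round(2))
```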
To summarize, the present research addresses gaps in the literature regarding complexity and temporal effects by shedding light on temporal dynamics of the system of psychological factors (i.e. broad attitudes networks) related to compliance with behavioural measures during the COVID-19 pandemic. Our means in doing so is conducting network analysis with longitudinal data gathered during the pandemic: participants completed a survey with COVID-19-related variables five times over a period of approximately 10 weeks (23 April-30 June 2020).
Participants and design
The Ethics Review Board of the University of Amsterdam approved this study (2020-SP-12194). Participants were recruited via a research panel (Ipsos). The sample of the first wave was representative of the Dutch population in terms of age, gender and residential area. Table 1 provides sample and descriptive information for the longitudinal sample in the column on Wave 5 (valid N = 2399), consisting of respondents that participated throughout the study and finished all surveys. This sample size far exceeds the advised sample size of approximately 500 participants for accurate network estimation of a moderately sized network, that is, an accurate representation of the true underlying network (Epskamp, 2017;van Borkulo et al., 2014).
Measures
Data collection was conducted with a Dutch online survey. We first identified the most important constructs in the literature (see introduction) and then developed a survey with items based on these constructs (see Supporting Information S1.2 for a complete overview). Analysis for node construction was conducted with the largest and most diverse sample available, namely wave 1, as this wave included all participants that completed the first survey (i.e. featured no dropout). Psychological variables were constructed as nodes through either predetermined combinations of items or based on results of a reduction technique aimed at identifying components in data (i.e. principal axis factoring [PAF]). Detailed descriptions of how items were combined to form nodes, including PAF results, are provided in Supporting Information S1.3. Resulting nodes are presented in Table 2.

FIGURE 1 Simplified and hypothetical attitude network of the attitude towards physical distancing. Variables are displayed as nodes (circles) and linear relations between variables as edges (lines between circles). The arrows indicate predictive effects between nodes from one measurement to the next. The network consists of a behavioural element ('Keep distance from others'), a cognitive evaluation ('Physical distance is effective to prevent spread') and three affective evaluations ('Fear of disease', 'Feeling lonely' and 'Missing physical contact'). The edge width indicates the strength of the relations. In this example, feeling lonely is more strongly affected by keeping distance from others than by perceiving physical distance as effective to prevent the spread, as indicated by the different width of these edges in the network. Furthermore, relations can be unidirectional (e.g. between keeping distance from others and feeling lonely) or bidirectional (e.g. between feeling lonely and missing physical contact). Bidirectional effects can consist of edges with different strengths.

Notes to Table 1: (a) Two attention checks were included in each wave to ensure data quality. Failing both attention checks led to exclusion from further data collection and analysis in all subsequent waves. (b) Comparing networks of the longitudinal sample and dropouts showed very few significant differences in relations between variables. More information is provided in the Supporting Information (S1.1). (c) The answer option 'I prefer not to answer' was included in items on education, illness and smoking, and was treated as a missing value. Participants with missing values for one or more nodes were deleted from network analysis, which is reasonable given the small number of missing values.
TABLE 2 Nodes (psychological variables) based on items in the survey, including item examples and answer scales. Item examples separated by '/' were separate items in the survey.

Compliance (5 items). Example items: 'I comply with the corona measures.' / 'Keep 1.5 metres away from others.' / 'Wash your hands regularly with water and soap'. Scale: 1 (I do not display this behaviour more) to 7 (I display this behaviour much more now).
Risk Perception (2 items). Example item: 'How likely (/severe) do you believe it is you will get infected with the coronavirus within the next year?'. Scale: 1 (Extremely unlikely) to 7 (Extremely likely).
Health Risk (2 items). Example item: 'For me personally (/my family and friends), I consider the health risk of an infection with the coronavirus…'. Scale: 1 (Extremely small) to 7 (Extremely severe).
Economic Consequences (2 items). Example item: 'For me personally (/my family and friends), I consider the economic consequences of the corona pandemic…'. Scale: 1 (Extremely small) to 7 (Extremely severe).
Self-exempting Beliefs (2 items). Example items: 'I think I am already immune (protected) against the coronavirus.' / 'I will not get infected with the coronavirus because I never get the seasonal flu either'. Scale: 1 (Strongly disagree) to 7 (Strongly agree).
Negative Affect (8 items). Example items: 'The corona pandemic is making me (feel)…' (e.g. sad/frustrated/overwhelmed). Scale: 1 (Strongly disagree) to 7 (Strongly agree).
Compassion (1 item). Example item: 'The corona pandemic is making me feel compassion'. Scale: 1 (Strongly disagree) to 7 (Strongly agree).
Compliance
Protective behaviours as recommended by the Dutch national government (Government of the Netherlands, 2020) provided the operationalization of compliance with behavioural measures in this study. We measured to what degree participants adopted protective behaviours as advised to the general public (i.e. physical distancing and hygiene behaviours), providing a self-reported measure for compliance. Items forming this node changed during the study because of changes in recommended protective behaviours. [1]

Attitudes

Attitudes were measured in line with the multi-component model, namely consisting of cognitive, affective and behavioural attitude elements (Eagly & Chaiken, 1993). Items forming the cognitive attitudinal nodes Risk Perception, Health Risk (r_sb = .59) [2] and Economic Consequences (r_sb = .75) were based on prior research into psychological COVID-19 networks (Chambon, Dalege, Elberse, et al., 2022).

[1] The node Compliance consisted of five items in the first wave (α = .75), and one of these items was adapted for the second wave. A sixth item was added in the third wave, leading to Compliance consisting of six items from the third wave onwards. Thus, this node contains the behavioural items that applied at the time each survey was administered.
[2] The scale reliability, although for some nodes lower than one would normally prefer, was interpreted as sufficient since our aim was to measure evaluations and not to design reliable measuring scales (Dalege et al., 2016).
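For reference, the reliability indices quoted for the nodes (Cronbach's alpha for multi-item nodes and the Spearman-Brown coefficient r_sb for two-item nodes) can be computed as in the hedged sketch below; the simulated item responses and the use of the psych package are assumptions for illustration only.

library(psych)

set.seed(1)
latent <- rnorm(200)
compliance_items <- data.frame(sapply(1:5, function(i) latent + rnorm(200)))  # 5-item node stand-in
risk_items       <- data.frame(item1 = latent + rnorm(200),                   # 2-item node stand-in
                               item2 = latent + rnorm(200))

alpha_out <- psych::alpha(compliance_items)        # Cronbach's alpha for a multi-item node
alpha_out$total$raw_alpha

r    <- cor(risk_items$item1, risk_items$item2)    # inter-item correlation of a two-item node
r_sb <- 2 * r / (1 + r)                            # Spearman-Brown coefficient
r_sb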
T A B L E 2 (Continued)

Node (items per node) | Examples of items per node ("/" separates items in the survey) | Scale
Healthy Lifestyle (3) | Items on changes in exercise, diet and sleep (see text) | (see text)
Anxiety, Depressive and Somatic Complaints (BSI-18 items) | To what extent did you experience… (e.g. feeling suddenly scared/fearful/tense; feeling lonely/hopeless/a loss of interest; difficulty breathing/numbness/feeling weak) during the past two weeks? | 1 (Not at all) to 5 (Very much)

Note: Mean scores of items were calculated for nodes that were based on multiple items (except for risk perception, as explained in the text). The sections of the survey that referred to 'the corona measures' contained the following explanatory text: 'By this, we mean the recommendations to prevent the spread of the coronavirus and thus prevent overloading the healthcare system, for example stay at home as much as possible, keep 1.5 metres of distance from others and wash your hands regularly with soap and water'.
a WHO Regional Office for Europe (2020); multiple items (e.g. worries, vaccination intention) were adopted from the WHO protocol for COVID-19 monitoring.
b Tennant et al. (2007); raw scores were converted to metric scores as required for the (S)WEMWBS.
c de Jong Gierveld and van Tilburg (2008); the answer scale formally ranges from 1 (No!) to 5 (Yes!). Please note that answer scales were adjusted, and a 2-week time frame was specified for the current study.
d Derogatis (2001); the answer scale formally ranges from 0 (Not at all) to 4 (Extremely). Please note that we changed the BSI-18 answer scales and specified a 2-week time frame, possibly invalidating clinical interpretation of scores. Therefore, we use the term complaints instead of symptoms.
Economic Consequences contains items measuring expected consequences of the pandemic for the economy. Self-exempting Beliefs (r_sb = .57) tapped into one's conviction of not being susceptible to an infection with the coronavirus.

Items measuring affect surrounding the pandemic resulted in the nodes Negative Affect (e.g. anxiety, anger and confusion; α = .89) and Compassion (single item). Items on worries also resulted in two nodes: Worries Virus (α = .73) and Worries Measures (α = .67), reflecting worries about events during the pandemic resulting directly from the coronavirus (e.g. losing someone they love) and events resulting from measures taken because of the virus (e.g. a recession), respectively.

Behavioural attitudinal nodes consisted of Vaccination Intention (single item), Measures Support (α = .90) and Measures Ease (r_sb = .68). Vaccination Intention represents one's intention to get a COVID-19 vaccine once available. The items covering attitudes towards the behavioural measures, consisting of both general items and semantic differential scale items, resulted in two nodes: Measures Support reflects support for the behavioural measures advised to prevent the spread of the coronavirus, and Measures Ease represents participants' perceived ease of complying with these measures.
Additional psychological factors
Social Norm (r_sb = .76) reflects prescriptive (what other people think one should do) and descriptive (what other people do) social norms regarding compliance with behavioural measures. Items on perceived control formed the nodes Control Infection (r_sb = .61) and Self-efficacy (single item). The former node represents to what degree participants felt able to avoid getting infected with the coronavirus. The latter node represents whether participants felt they knew how to protect themselves from the coronavirus. Involvement (α = .84) comprised items on how actively involved participants perceived themselves to be in the COVID-19 pandemic (e.g. watching the news). This node also represents a dimension of attitude strength. A single item measuring perceived knowledge about the pandemic formed the node Perceived Knowledge. The node Trust (α = .86) reflects general trust in the four actors relevant to the corona pandemic in the Netherlands: the authorities, the Dutch National Institute for Public Health and the Environment (RIVM), health care professionals and science.
Physical and mental health nodes

General Health reflects an overall self-reported evaluation of participants' health. Single items measuring participants' evaluation of their physical and mental health at the time of the survey, compared to before the pandemic, formed the nodes Health change Physical and Health change Mental, respectively. Lifestyle items measured an improvement or deterioration in exercise, diet and sleep, resulting in Healthy Lifestyle (α = .49). The sum of items from the Short Warwick Edinburgh Mental Wellbeing Scale ([S]WEMWBS; Tennant et al., 2007) formed Mental Well-being (α = .84). The shortened version of the loneliness scale (de Jong Gierveld & van Tilburg, 2008) resulted in the node Loneliness (α = .76). The Brief Symptom Inventory 18 (BSI-18; Derogatis, 2001) was adopted to measure psychological complaints. This scale contains 18 items measuring three areas of psychological distress (i.e. anxiety, depressive and somatic symptoms). Three nodes were formed based on the BSI-18 items: Anxiety Complaints (α = .92), Depressive Complaints (α = .90) and Somatic Complaints (α = .84).
Individual differences
The first survey also included questions on individual differences, namely demographics (i.e. age/gender/education) and health (i.e. illness/smoking). These variables provided descriptive information about the sample and were excluded from temporal analyses given that they were measured just once, in the first wave. The first survey additionally included items on relatively stable personality aspects (i.e. consideration of future consequences, resilience and coping) that provided input for a different paper on interventions. Readers are referred to Chambon, Dalege, Waldorp, et al. (2022) for more information on (results of) these measures.
Procedure
Participants who subscribed to Ipsos' research panel received an invitation to participate via e-mail. Participants were informed about and asked to commit to the longitudinal research design beforehand. Only participants who finished the survey received invitations for subsequent waves. They received compensation in the form of points that could be spent at web shops.
Interventions were also included at the beginning of the third wave and the last wave, to which participants were randomly assigned (see Supporting Information S1.4). These interventions are not the focus of the current paper and will be presented in a separate paper (Chambon, Dalege, Waldorp, et al., 2022).
Network analysis
Networks were estimated with the dlvm1 function (Lag-1 dynamic latent variable model for panel data) in psychonetrics (Epskamp, 2020a, 2020b). The R script is made available in the Supporting Information (S1.5). We estimated two networks that provide information on average within-person effects on a population level: temporal and contemporaneous (Epskamp, 2020a) psychological networks. Temporal networks contain edges representing whether one node predicts other nodes in the next measurement while controlling for all other nodes. These temporal effects are calculated by regressing each variable on all variables (including itself) at the previous measurement (i.e. lag-1) and therefore require repeated measures (i.e. waves). The resulting partial correlations are indicative of directed predictive effects, depicted by edges with arrows in the network (from wave t-1 to wave t). The weight of an edge indicates its effect size and can be interpreted as one would interpret a regular partial correlation. Edges between two nodes represent either a causal effect or the result of a third (unknown) underlying cause. Contemporaneous networks are based on residuals of the temporal network (i.e. variance and covariance that cannot be explained by the modelled temporal effects). These edges can be interpreted as partial correlations between nodes in the same measurement after controlling for all other nodes in the same measurement and the previous measurement (i.e. temporal effects). These partial correlations thus represent undirected associations, depicted by edges without arrows in the network. Again, edge weights indicate effect size and can be interpreted as regular partial correlations. Readers are referred to Epskamp et al. (2018) for additional information on the type of networks presented here.
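A hedged sketch of this estimation step with psychonetrics is given below. The study fits the dlvm1 model; here the package's panelgvar() convenience wrapper (a lag-1 panel model without latent variables) is used purely for illustration. The node names, simulated data and the matrix labels ("PDC", "omega_zeta_within") are assumptions based on the package documentation, not the study's actual script (which is in Supporting Information S1.5).

library(psychonetrics)
library(dplyr)

set.seed(1)
design <- rbind(                                   # rows = nodes, columns = waves
  Compliance  = paste0("Compliance_w",  1:5),
  Support     = paste0("Support_w",     1:5),
  Involvement = paste0("Involvement_w", 1:5)
)

n <- 500
panel <- as.data.frame(matrix(rnorm(n * length(design)), nrow = n))
names(panel) <- as.vector(design)                  # wide format: one column per node per wave

mod <- panelgvar(panel, vars = design) %>% runmodel()

temporal        <- getmatrix(mod, "PDC")               # standardized lag-1 (temporal) effects
contemporaneous <- getmatrix(mod, "omega_zeta_within") # within-wave partial correlations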
Centrality measures facilitate the interpretation of visualized networks. The centrality measure 'strength' is among the most commonly used centrality measures for psychological networks and provides information on the conditional association between a node and the other nodes in the network. It is calculated as the sum of the absolute edge weights of the relations a specific node has with connected nodes. Because edges in temporal networks are directed, two types of strength can be distinguished there: InStrength for edges directed towards a specific node and OutStrength for edges directed from that node to other nodes. In other words, we distinguish between effects to a node and from a node.
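A minimal sketch of these strength measures, computed from a small hypothetical temporal weight matrix (rows = predicting node, columns = predicted node at the next wave). A few entries echo edge weights reported in the Results below; the rest are placeholders, and excluding autoregressions and z-standardizing across nodes are assumptions here.

nodes <- c("Compliance", "MeasuresSupport", "Involvement")
B <- matrix(c(0.37, 0.11, 0.08,
              0.13, 0.25, 0.05,
              0.10, 0.07, 0.20),
            nrow = 3, byrow = TRUE, dimnames = list(from = nodes, to = nodes))

B_off <- B
diag(B_off) <- 0                          # set aside autoregressive (self) effects
OutStrength <- rowSums(abs(B_off))        # summed outgoing edge weights: effects from a node
InStrength  <- colSums(abs(B_off))        # summed incoming edge weights: effects to a node
cbind(OutStrength, InStrength,
      z_Out = scale(OutStrength)[, 1],    # standardized strength, as plotted for each network
      z_In  = scale(InStrength)[, 1])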
R E SU LTS
This section presents COVID-19 broad attitude networks containing temporal dynamics based on the longitudinal survey. In the current study, temporal effects indicate which nodes predict other nodes over a time frame of 2-3 weeks (i.e. from one survey of a wave to the next), whereas contemporaneous effects indicate which nodes predict other nodes within the same survey of a wave.

F I G U R E 2 Timeline of important events in the Netherlands during data collection. PM = Prime Minister. Contains relevant statistics (blue), safety measures (orange) and news items (green) related to the COVID-19 pandemic, including the moments of data collection for each wave (grey). Please note that a downward trend of the number of COVID-19 infections started before the data collection started.

F I G U R E 3 (a) Temporal COVID-19 network; see Table 3 for edge weights. Nodes represent measured psychological factors. Edges represent relations between two nodes after controlling for other nodes in the network, with their weight indicating the strength of relations. Blue edges represent positive (excitatory) relations and red edges represent negative (inhibitory) relations. Edges with weights below .05 are omitted to facilitate readability. The network has a cut-off value of .10, meaning edges with weights below that value are depicted with similar width and colour density; (b) standardized strength measure. This measure represents direct effects of a specific node on the network and is calculated as the sum of the absolute edge weights, with InStrength for edges affecting that node and OutStrength for edges from that node affecting other nodes.
Preliminary analysis
Descriptive statistics of nodes are provided in Supporting Information S2.1, together with results of repeated-measures ANOVAs examining the effect of time on each variable. Results indicated a significant change in all nodes over time. Furthermore, Figure 2 provides the context at the time of data collection by showing a general timeline of the pandemic in the Netherlands (see Supporting Information S3 for underlying data).
Evaluation of the overall fit of the dlvm1 model, in which we included all edges in the COVID-19 broad attitude networks, showed excellent fit (see Supporting Information S2.2). Generally speaking, confidence intervals of edge weights were not wide, indicating stable (reliable) edge estimates. The weights of edges discussed below are reported in parentheses (see Supporting Information S2.3-S2.5 for a complete overview of edges in the COVID-19 networks).

Temporal COVID-19 network

Figure 3a shows temporal effects, standardized to partial directed correlations, in the COVID-19 broad attitude network. Edges indicate nodes' predictive value for the next measurement after controlling for all other nodes. Table 3 provides the weights of all edges in the temporal network. The width of node borders represents the degree to which each node was influenced by the same node in the previous wave (i.e. autoregression); thicker node borders indicate more stable nodes. The most stable nodes in the temporal COVID-19 network were Vaccination Intention (.41), Compliance (.37), Measures Support (.25) and Healthy Lifestyle (.24).
T A B L E 3 Edge weights of partial directed correlations from the temporal COVID-19 network. Note: Read rows (first column) as the node from which the edge originates.

Regarding nodes related to compliance, results showed bidirectional effects between Compliance and multiple other variables in the network after controlling for the effects of every other node in the network. Compliance was predicted by Measures Support (.13), Involvement (.10), Social Norm (.07), Vaccination Intention (.06) and Worries Virus (.06). Also, Compliance predicted Social Norm (.12), Measures Support (.11), Involvement (.08), Worries Virus (.07) and Vaccination Intention (.06). The former indicates that change in compliance with behavioural measures was predicted by the degree to which one supported the behavioural measures, was mentally involved in the COVID-19 pandemic, perceived social norms regarding compliance, intended to get vaccinated against COVID-19 and worried because of the virus. The latter indicates that change in compliance with behavioural measures was predictive of one's perceived social norms regarding compliance, support for the behavioural measures, mental involvement in the pandemic, worries because of the virus and intention to get vaccinated against COVID-19. Such positive feedback loops indicate that compliance decreases over time if one of the aforementioned five variables decreases, and, vice versa, that these five variables decrease if compliance decreases. Figure 3a shows that the temporal network contains more of these patterns.
Moreover, results suggest bidirectional effects between three of the variables that showed bidirectional effects with compliance. That is, the nodes Measures Support, Involvement and Vaccination Intention had bidirectional relations among each other. Such a pattern of relations between Compliance, Measures Support, Involvement and Vaccination Intention suggests a reinforcing structure between these nodes.
Another interesting pattern in the temporal network was found between Depressive Complaints, Anxiety Complaints and Loneliness. Depressive Complaints predicted Anxiety Complaints (.07) and Loneliness (.09), and Anxiety Complaints and Loneliness predicted Depressive Complaints (.10 and .06, respectively). However, edges between Anxiety Complaints and Loneliness were relatively small (i.e. from Anxiety Complaints to Loneliness .02; from Loneliness to Anxiety Complaints .03). This pattern suggests a relatively central role for Depressive Complaints in this triangle with Anxiety Complaints and Loneliness.
Node centrality

Figure 3b shows the standardized centrality measure 'strength' for the temporal COVID-19 network (values are provided in the text in parentheses). Results showed that Involvement, Compliance, Worries Virus and Social Norm had a central role in the COVID-19 network by both affecting and being affected by other nodes (OutStrength 2.48, 1.79, 1.05, 1.05; InStrength 1.87, 1.75, 1.72, 1.21, respectively). Interestingly, Measures Support predominantly affected other nodes (OutStrength 2.26) and was affected to a lesser extent (InStrength 0.97). Salient discrepancies between InStrength and OutStrength were found for Negative Affect, which was predominantly affected by other nodes in the COVID-19 network (InStrength 1.69; OutStrength 0.08), and Worries Measures, which was also mostly affected by other nodes (InStrength 0.97; OutStrength −0.22). Finally, nodes relevant to mental health were of relatively low strength, suggesting that these nodes had a limited impact on the network. A table with strength values for each node is provided in Supporting Information S2.4.

Contemporaneous COVID-19 network

Figure 4 shows (a) contemporaneous effects in the COVID-19 broad attitude network (left) and (b) the standardized centrality measure 'strength' for the contemporaneous COVID-19 network (right). Results concerning edges related to Compliance are comparable to the temporal level: the strongest edges with Compliance are Measures Support (.22), Social Norm (.13) and Involvement (.09). These edges indicate that when someone reports compliance, they are also likely to report support for the measures, perceived social norms and involvement in the pandemic. This, together with the effects in the temporal network (see Figure 3a), implies that dynamics concerning compliance with behavioural measures are comparable between measurements over time (i.e. temporal) and within measurements (i.e. contemporaneous). Strength measures for the node Compliance differed between the COVID-19 broad attitude networks: this node played a more central role in the temporal network (InStrength 1.75; OutStrength 1.79) than in the contemporaneous network (0.32). This indicates that, relative to other nodes, compliance with behavioural measures was more strongly predictive of and predicted by other nodes in the next wave than it was connected with other nodes in the same survey.
Furthermore, strength measures of the contemporaneous COVID-19 network showed that several variables with higher strength in the temporal network also played a central role here, namely Measures Support (2.16), Negative Affect (1.96), Worries Virus (1.56) and Involvement (1.36). This implies that these nodes are important not only for the COVID-19 broad attitude network over time between measurements (i.e. the nodes are predictive of and/or predicted by many other nodes in the network) but also within a measurement wave (i.e. these nodes are related to many other nodes in the same survey). Interestingly, the node Social Norm had relatively high strength on the temporal level (InStrength 1.21; OutStrength 1.05) but less on the contemporaneous level (−0.35). This suggests that social norms were more predictive of other nodes in the COVID-19 broad attitude network over time than they were connected with other nodes in the same survey.
Finally, we conducted correlational analyses to examine coherence between the temporal and contemporaneous networks. Results showed a moderate to large correlation between the temporal and contemporaneous COVID-19 networks (r = .47, z = 0.51; see Supporting Information S2.6 for plot). This indicates a relation between edges representing effects over time between measurements (i.e. waves) and edges representing effects during measurement (i.e. within a survey).
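One plausible way to compute such a coherence statistic is sketched below: correlate corresponding off-diagonal edge weights of the two networks and apply Fisher's z transform (atanh). Exactly which edges were paired and how z was obtained are assumptions here; the toy matrices merely stand in for the estimated networks.

off_diag <- function(m) m[row(m) != col(m)]

temporal        <- matrix(c(0,    0.11, 0.08,
                            0.13, 0,    0.05,
                            0.10, 0.07, 0   ), nrow = 3, byrow = TRUE)
contemporaneous <- matrix(c(0,    0.22, 0.09,
                            0.22, 0,    0.13,
                            0.09, 0.13, 0   ), nrow = 3, byrow = TRUE)

r <- cor(off_diag(temporal), off_diag(contemporaneous))
z <- atanh(r)                             # Fisher's z; atanh(0.47) is approximately 0.51
c(r = r, z = z)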
DISCUS SION
The current research explored the temporal dynamics of psychological factors related to compliance with behavioural measures during the initial phase of the COVID-19 pandemic. Results of this high-powered longitudinal study showed that nodes in the broad attitude networks were highly interconnected. This indicates that all included variables provided substantive information on the dynamics of adopting protective behaviours during a pandemic, justifying a complexity approach. Several insights can be drawn from this study.

Three interesting patterns were observed in the temporal network. First, results show a series of bidirectional effects between variables in the temporal network relevant for compliance with behavioural measures during the COVID-19 pandemic. We found that the degree to which people supported behavioural measures and were mentally involved in the pandemic were most important for predicting compliance over time after controlling for every other variable in the network. This is in line with previous research showing relations between support for measures and compliance (Chambon, Dalege, Elberse, et al., 2022; van Rooij et al., 2020). Interestingly, the current study also found effects the other way around, namely that compliance predicted support for the behavioural measures and involvement in the pandemic over time. Similar results were obtained for vaccination intention and social norms: not only do these variables predict compliance with behavioural measures over time, but compliance also predicts these variables over time. These potential feedback loops between variables consist of positive relations between two nodes, indicating that effects strengthen as time progresses. More specifically, an increase in one variable leads to an increase in the other variable in the next measurement, which in turn leads to an increase in the first variable in the measurement thereafter, and so on. Insights regarding such potential feedback loops add to existing literature such as the conceptual frameworks of health behaviour during a pandemic proposed by Bish and Michie (2010) and underlying theoretical frameworks such as the TPB, HBM and PMT by moving from unidirectional effects to more complex interactions between variables. Moreover, such feedback loops provide possible explanations for observed effects. For instance, we observed a decrease in compliance with behavioural measures, support for these measures and involvement in the pandemic throughout the course of this study. According to the temporal network, a decrease in compliance could be explained by a decrease in support for the measures or involvement. The feedback loops suggest that this applies vice versa as well: a decrease in support for the measures or involvement could be explained by a decrease in compliance. These findings resonate with different theoretical models in which behaviour can both be determined by other variables (e.g. TPB; Ajzen, 1991) and determine other variables (e.g. dissonance theory; Festinger, 1957). Moreover, the findings suggest that (elements of) these models can apply simultaneously. Thus, combining different models with bidirectional effects between variables might improve our understanding of health behaviour in a pandemic beyond unidirectional effects between limited sets of variables based on singular theoretical models.
The second pattern observed in the temporal network is that several of these bidirectional effects combined formed a positive reinforcing structure. That is, three variables that showed positive feedback loops with compliance (i.e. support for behavioural measures, involvement in the pandemic and vaccination intention) also showed positive feedback loops among each other. Such structures can potentially amplify change in these variables. That is, an increase (or decrease) in one variable from the reinforcing structure is likely to be accompanied by an increase (or decrease) in the remaining variables over time, further increasing (decreasing) the initially increased (decreased) variable and other variables over time, and so on.
Such positive reinforcing structures are important to acknowledge for different scenarios. First, considering interventions: if one aims to increase compliance, doing so via a variable that is not only directly related to compliance but also part of a positive reinforcing structure could amplify the effects of interventions over time. For instance, results suggest that an intervention aimed at positively influencing support for behavioural measures could increase not only compliance but also involvement in the next time frame. This could increase compliance via multiple variables: in addition to the intervention initiating a feedback loop between compliance and support for the measures, the intervention's effect on involvement could also initiate the feedback loop between compliance and involvement. Thus, positive reinforcing structures can amplify the effects of interventions beyond bidirectional effects.
The second reason why reinforcement structures are important is that they can provide insight into processes of polarization during a pandemic. Reinforcement structures can enhance initial leanings towards one end of the evaluative spectrum and lead to a movement towards the extremes of that spectrum. This effect resembles the phenomenon of polarization, in which people strengthen their attitude in the initial direction, a process that has been reported in relation to the initial phase of the COVID-19 pandemic (Graso et al., 2021; Kerr et al., 2021). The observation that involvement is part of a reinforcing structure with compliance, support for behavioural measures and vaccination intention corresponds with the Attitudinal Entropy (AE) framework grounded in network theory (Dalege et al., 2018). This framework explains polarization through attitudinal entropy, a state reflecting inconsistent and unstable attitudes. Dalege et al. (2018) propose that attitudinal entropy is reduced by thinking about and turning attention towards attitude objects, and involvement as included in the current study can be interpreted as such. This reduction in entropy results in strengthening attitudes in the initial direction, i.e. polarization. This is also in line with the mere thought effect on attitude polarization (Tesser & Conlee, 1975), proposing that thought can result in attitude polarization.
The third pattern observed in the temporal network is also relevant in the context of interventions. That is, the network contained a triangle of three nodes in which one node showed a bidirectional relation with both other nodes, while those other nodes showed a weak or no relation with each other. Such a pattern was, for instance, observed between depressive complaints, anxiety complaints and loneliness, in which depressive complaints showed a bidirectional relation with both other nodes, but anxiety and loneliness showed a relatively weak relation. This suggests that intervening on depressive complaints could be an effective way to improve mental health.
Another insight from this study is that our results suggest only a weak predictive effect between (nodes related to) mental health and compliance. One could have expected a predictive relation, for example compliance in the form of adhering to social distancing and isolation guidelines having an impact on mental health, but there were only weak predictive effects between these constructs when controlling for other variables. This could imply that relations between variables related to mental health (e.g. stress) and compliance observed in prior research (Harper et al., 2021; Lieberoth et al., 2021) might be explained by constructs other than those included in those studies, for instance support for the behavioural measures or involvement in the pandemic. Another possible explanation could be that prior research focussed mainly on between-person effects and that relations between mental health and compliance are more pronounced on that level. Comparing psychological within-person and between-person effects in the context of compliance and mental health during pandemics would be the next step in this line of research to examine whether meaningful differences can be observed.
A final insight of this study is that the most important temporal effects concerning compliance with behavioural measures between measurements were comparable with effects found within the same measurement (i.e. contemporaneous effects). For instance, the positive reinforcing structure found between compliance, support for the behavioural measures, involvement and vaccination intention was found not only over time between measurements but also within the same measurement: higher scores on compliance, support for the measures, involvement in the pandemic and vaccination intention tend to co-occur during a measurement. However, there were also important differences between these time frames, as reflected in the only moderate to large correlation between the (edges in the) networks. These differences are also important to consider when designing interventions, as interventions can be tailored based on their anticipated effects. For instance, social norms were found to be relatively important for the COVID-19 network over time but not within the same measurement. This implies that the effects of interventions aimed at influencing social norms should ideally be studied over time, that is, at a subsequent measurement rather than within the same survey.
The current research enables us to formulate several recommendations. The first recommendation concerns future research into causality. Although interesting indications of causality can be drawn based on the approach adopted in this study, no firm statements on causality can be made. In order to do so, interventions should be studied (Kossakowski et al., 2021). Second, the ecological validity of the presented COVID-19 broad attitude networks is unknown. Although the COVID-19 broad attitude networks contain a diverse set of variables, more psychological variables can be relevant for compliance with behavioural measures during pandemics. Changes over time observed in this study could also be caused by predictors not included in the model. This also applies to the contemporaneous effects, which can likewise indicate that other variables have not (yet) been taken into account. Future research could include additional variables that are deemed relevant based on the scientific literature to better capture the real-life complex psychological systems at play during pandemics. Third, although comparing networks of the longitudinal sample and other respondents showed very few significant differences in relations between variables, we cannot predict how respondent dropout affects the results presented here. Given that the aim was to demonstrate how this approach provides an overview of predictive relations between variables, and not to present a generalizable theoretical model or representative overview of node scores, we assume that dropout does not substantially affect the conclusions of the current study. Fourth, future research could examine temporal dynamics of compliance during pandemics in different phases of a pandemic. The current study was conducted during the initial phase of the COVID-19 pandemic and we cannot predict to what degree results are generalizable beyond the first wave. Results therefore represent a particular period in time, and future research could focus on other phases such as the perseverance period, in which adopting protective behaviours needs to be maintained. Fifth and finally, it should be noted that the time spacing of measurements is likely to affect results. For example, it is more likely that change in relatively stable nodes such as compliance and intention to get vaccinated is detected when administering surveys weeks apart than hours apart. We think that the time frame adopted in this study (i.e. 2 to 3 weeks between surveys) is adequate to detect changes in this context; nevertheless, future research could adopt different time frames to examine whether this generates different patterns of effects.
In conclusion, the COVID-19 broad attitude networks obtained in this study show the added value of adopting a complexity approach to compliance in the context of pandemics. Moreover, the adopted method provides insight into unique relations between a broad set of variables, and how relations between these variables develop over time. Finally, the results suggest that the network structure can provide important insights for explaining observed effects and designing effective interventions, providing an informed strategy grounded in network theory to influence compliance during pandemics.
AC K NOW L E DGE M E N T S
We would like to thank Sacha Epskamp for contributing to the analyses. This research was funded by The Dutch Research Council (NWO grant 440.20.019). Jonas Dalege's work was supported by an EU Horizon 2020 Marie Curie Global Fellowship (no. 889682).
C ON F L IC T OF I N T ER E S T
The author(s) declared no potential conflicts of interest.
OPE N R E SE A RCH BA DGE S
This article has earned Open Data and Open Materials badges. Data and materials are available at https://osf.io/qu7p2/.
DATA AVA I L A BI L I T Y S TAT E M E N T
The data that support the findings of this study is made openly available in OSF at https://osf.io/ qu7p2/. R code is also made available on OSF. Supplemental materials are available in the online version of the article. | 2022-10-11T06:16:39.847Z | 2022-10-10T00:00:00.000 | {
"year": 2022,
"sha1": "37cb71f6ee8b892596f5c065d239a64f56d00752",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/bjso.12572",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e7940eed5e88067c68804d575b5387c98678d260",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
85579510 | pes2o/s2orc | v3-fos-license | Natural Selection Shapes the Mosaic Ancestry of the Drosophila Genetic Reference Panel and the D. melanogaster Reference Genome
North American populations of Drosophila melanogaster are thought to derive from both European and African source populations, but despite their importance for genetic research, patterns of admixture along their genomes are essentially undocumented. Here, I infer geographic ancestry along genomes of the Drosophila Genetic Reference Panel (DGRP) and the D. melanogaster reference genome. Overall, the proportion of African ancestry was estimated to be 20% for the DGRP and 9% for the reference genome. Based on the size of admixture tracts and the approximate timing of admixture, I estimate that the DGRP population underwent roughly 13.9 generations per year. Notably, ancestry levels varied strikingly among genomic regions, with significantly less African introgression on the X chromosome, in regions of high recombination, and at genes involved in specific processes such as circadian rhythm. An important role for natural selection during the admixture process was further supported by a genome-wide signal of ancestry disequilibrium, in that many between-chromosome pairs of loci showed a deficiency of Africa-Europe allele combinations. These results support the hypothesis that admixture between partially genetically isolated Drosophila populations led to natural selection against incompatible genetic variants, and that this process is ongoing. The ancestry blocks inferred here may be relevant for the performance of reference alignment in this species, and may bolster the design and interpretation of many population genetic and association mapping studies.
estimate that this median tract size would be expected after 1,513 generations of admixture. The accuracy of this estimate may be affected by demographic details, natural selection, and imprecision in recombination estimates.
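As a rough illustration of how a median tract length maps onto an admixture time, the sketch below uses the common simplification that admixture-tract lengths are approximately exponentially distributed with mean 1/((1 - m) * t) Morgans after t generations at admixture proportion m. The median tract length plugged in is a hypothetical value chosen only to show the arithmetic; the actual model and inputs used in the study may differ.

m               <- 0.20          # African ancestry proportion reported for the DGRP
median_tract_cM <- 0.057         # hypothetical median tract length in cM (assumption)

median_tract_M <- median_tract_cM / 100        # centimorgans to Morgans
mean_tract_M   <- median_tract_M / log(2)      # exponential distribution: median = ln(2) * mean
t_generations  <- 1 / ((1 - m) * mean_tract_M)
t_generations                                   # on the order of 1,500 generations with these inputs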
DGRP ancestry proportions are highly variable along the genome
Examining collective DGRP ancestry for each window revealed striking genome-wide variability (Figure 1). Surprisingly, the X chromosome is nearly fixed for European ancestry, with localized exceptions. Whereas the autosomes carry 21.8% African ancestry, for the X chromosome this average is reduced to 5%; 75.1% of X-linked windows have <5% African ancestry and 37.8% are completely fixed for European ancestry. In contrast, chromosome arm 2L is an outlier for higher African ancestry (Figure 1). This arm effect is largely explained by the prevalence of inversion In(2L)t (Figure S1), the most common African-origin inversion in the DGRP [2,20]. In(2L)t and other inversions can have strong effects on genetic variation across whole chromosome arms (Figure 2) [4,20].
More perplexing than the between-arm ancestry differences are the strong fluctuations observed within chromosome arms, often on the scale of tens or hundreds of kilobases (Figure 1). For each chromosome arm, there is a significant negative correlation (P < 0.0001) between African ancestry and recombination rate, with Pearson r^2 of 0.080 for standard autosomal arms analyzed jointly and 0.101 for the X chromosome. The mean sub-Saharan ancestry proportion is 30.2% for autosomal windows below 0.5 cM/Mb, but only 13.0% when the recombination rate is above 4 cM/Mb (Figure 3). This relationship is not expected under a neutral introgression scenario, but might result either from inefficient selection in low-recombination regions against African alleles that are disadvantageous in the predominantly European gene pool and North American environment of the DGRP population, or from favored African alleles carrying longer linkage blocks in regions of low recombination.
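A minimal sketch of this per-arm ancestry-recombination analysis is given below; the data frame is simulated stand-in data, and the column names, thresholds and effect sizes are illustrative only.

set.seed(1)
win <- data.frame(
  arm            = rep(c("2L", "2R", "3L", "3R", "X"), each = 400),
  rec_rate_cM_Mb = runif(2000, 0, 5)
)
win$afr_ancestry <- pmin(1, pmax(0, 0.3 - 0.04 * win$rec_rate_cM_Mb + rnorm(2000, 0, 0.1)))

by_arm <- split(win, win$arm)
res <- t(sapply(by_arm, function(d) {
  ct <- cor.test(d$afr_ancestry, d$rec_rate_cM_Mb)   # Pearson correlation per arm
  c(r = unname(ct$estimate), r2 = unname(ct$estimate)^2, p = ct$p.value)
}))
res

# mean African ancestry in low- versus high-recombination windows (thresholds from the text)
mean(win$afr_ancestry[win$rec_rate_cM_Mb < 0.5])
mean(win$afr_ancestry[win$rec_rate_cM_Mb > 4])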
Using simulations based on previously inferred demographic parameters, Pool et al. [4] found that admixture on the order of 1,000 generations ago between African and European populations of D. melanogaster was detected reliably by the method implemented here.
The few errors tended to involve missing unusually short admixture tracts, rather than inferring false tracts. Those simulations focused on the autosomes because X-linked admixture should be easier to detect (given the much larger diversity difference between African and European populations for the X chromosome). Hence, the lower X-linked admixture levels described above contradict the predictions of methodological bias. Nor is the recombination result easily explained by such bias: windows in low-recombination regions were scaled to contain similar numbers of polymorphisms as those in high-recombination regions, and the Europe/Africa diversity ratio is typically similar across high- and low-recombination regions of chromosome arms [4], so there is no obvious prediction of a difference in the power to detect admixture between these categories.
Impact of natural selection on ancestry inferences
There are reasons to be skeptical of some extreme DGRP ancestry deviations. The two intervals of maximal African ancestry are near Cyp6g1 and overlapping Ace, loci with strong selective sweeps related to 20th century insecticide usage [21,22]. At these loci, sweeps that occurred after the divergence of the Raleigh population from its European and African source populations (perhaps less than 150 years ago; Keller 2007) could result in biased ancestry inference.
Although it would be desirable to annotate each case in which very recent selective sweeps may have influenced ancestry calling, this goal may require significant methodological advances. The HMM used here should be more sensitive to cases involving very recent selection affecting the European reference panel, but such sweeps could either be global (with either the same or different haplotypes fixing in each population), or shared by the European and African reference panels but not the DGRP, or specific to the European sample. These scenarios each lead to distinct predictions for variation among populations, whereas analyses focused only on the European reference panel will mainly pick up sweeps that happened prior to American colonization (which are of less concern here).
This issue reflects a general challenge for ancestry inference. Other reference-panel approaches should be subject to similar effects of recent selection. Methods that do not use reference panels may return non-geographic divisions in the data, such as clustering inverted versus standard chromosome arms [20], and even if inverted arms were removed, their output could prove similarly uninformative or biased in cases of recent sweeps.
Hence, the ancestry inferences presented here (Table S3) should be regarded as provisional, and should be revisited in light of future methodological developments.
For either hard sweeps or moderately soft sweeps affecting the European sample, such recent selection should increase that population's haplotype homozygosity, since there has been very little time for mutation and recombination since the adaptive event. While such a pattern is observed at Cyp6g1, in general the inferred peaks of African ancestry in the DGRP show no such pattern (Figure S2). Thus, while recent selection may drive some apparent ancestry deviations, most of the genomic variance in DGRP ancestry suggested in Figure 1 may be genuine.
Admixture in the reference genome
Using the same methods as described for the DGRP genomes, the D. melanogaster reference genome was estimated to have 9.4% African ancestry (Figure 2, Table S4). The reference genome's segments of African ancestry are correlated with those found in the DGRP (Table 1). Like the DGRP, the reference genome is more likely to carry African ancestry in low-recombination regions (Table 1). Hence, many of the demographic and selective events that molded complex patterns of ancestry in the DGRP may have affected other North American populations as well.
Functional and population genetic correlates of ancestry deviations
Although precise neutral expectations for interlocus variance in ancestry proportion depend on unknown details of the North American colonization scenario, the dramatic and non-random variance observed here suggests the possibility that African and European alleles at some loci may have had unequal fitness in North American environments. To investigate which types of genes would be the most likely targets of any such selection, gene ontology (GO) enrichment analysis was performed for intervals of elevated African or European ancestry. The GO categories most enriched for European ancestry included "circadian behavior", while those for African ancestry included "flight behavior" and vision-related categories (Table 2; Table S2).
Evidence for widespread epistatic fitness interactions in the DGRP
The hypothesis that selection may disfavor certain African alleles in the primarily European gene pool of the DGRP population is consistent with the above-described relationship between recombination rate and ancestry (Figure 3). To test for ancestry disequilibrium (AD) more directly, strains' ancestry states were compared between pairs of windows on different chromosomes using Fisher's exact tests (FET). Only homozygous intervals were analyzed, so each genome has just one allele per locus, and inverted chromosome arms were excluded. Results from the true data were then compared against randomly permuted data sets, in which individual labels for the second window were shifted (thus maintaining the true data's population ancestry frequencies at each window, as well as patterns of linkage between neighboring windows). Across the genome, a notable excess of interchromosomal window pairs with low FET P values was observed (Figure 4), indicating a genome-wide signal of ancestry disequilibrium. At very low P values, the enrichment was more pronounced for X-autosome window pairs than for pairs split between the two major autosomes (Figure 4). To avoid treating neighboring window pairs as independent, nearby outlier P values were merged into two-dimensional "clusters" of ancestry disequilibrium, and these clusters were extended from each focal window until pairs with P < 0.05 were no longer observed with appreciable frequency. Although the binning criteria were necessarily somewhat arbitrary (see Methods), they were designed to extend clusters generously, in an attempt to fully account for their effect on the genomic distribution of FET P values. Examining the chromosomal distribution of these pairwise clusters, there is little evidence that adjacent clusters failed to be appropriately merged (Figure S3). This procedure resulted in 676 AD clusters with no pairwise overlap, many of which are likely to represent false positives.
However, subtracting the entire span of all 676 clusters accounted for only 33% of the genome-wide excess of X-autosome FET P values below 0.05, and 58% of the autosome-autosome excess. Hence, although further study is needed to accurately estimate the number of pairwise IFIs between African and European alleles in the DGRP genomes, based on the present analysis I cannot rule out a scenario in which a surprisingly large number of pairwise incompatibilities are present.
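The core of the ancestry disequilibrium test and its permutation null can be sketched as follows for a single pair of windows on different chromosomes. The simulated ancestry calls (0 = European, 1 = African, homozygous calls only), the handling of heterozygous or missing intervals, and the number of permutations are assumptions for illustration.

set.seed(1)
n_strains <- 150
winA <- rbinom(n_strains, 1, 0.2)     # simulated ancestry at a window on one chromosome
winB <- rbinom(n_strains, 1, 0.2)     # simulated ancestry at a window on another chromosome

ad_fet <- function(a, b) fisher.test(table(a, b))$p.value
obs_p  <- ad_fet(winA, winB)

# permutation null: shuffle strain labels at the second window, preserving its ancestry frequency
perm_p <- replicate(1000, ad_fet(winA, sample(winB)))

obs_p
mean(perm_p < 0.05)    # under the null, roughly 5% (or fewer) of pairs fall below P = 0.05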
Potential genetic targets of interlocus fitness interactions
The vast number of pairwise comparisons involved in genome-wide disequilibrium testing entails a multiple testing problem, with the consequence that no pairwise P value from a single hybrid or admixed population is likely to be statistically significant in a genome-wide context [27]. Hence, in order to draw any specific conclusions about genes causing AD in a single population, additional evidence is needed. With the goal of identifying a more confident set of AD clusters, I hypothesized that some true positive loci might participate in a greater number of pairwise interactions than expected by chance.
While a plurality of all genomic windows overlapped zero AD clusters and most others overlapped three or fewer, a smaller subset of windows overlapped several, up to a maximum of 13 pairwise between-chromosome clusters. Comparing the total "cluster counts" of windows in the real data against those from permuted data sets, I confirmed that windows overlapping multiple pairwise clusters were observed much more frequently than expected randomly (Figure S4). For example, windows overlapping 7 or more AD clusters were 3.7X more common in the real data (implying a posterior probability of 79% that at least some of a window's pairwise clusters are genuine), and "cluster counts" of at least 7 were observed in 59 distinct genomic regions. Windows overlapping 11 or more AD clusters were enriched by a factor of 5.2X, indicating a posterior probability of 84%. Hence, a subset of windows constituting "AD hubs" can be regarded with fairly strong confidence as holding genuine pairwise interactions (Figure S5). Importantly, FST peaks are typically narrower than AD hubs, so their co-occurrence may help to localize the genetic targets of IFIs.
A thorough analysis of genes likely to underlie IFIs in North American D. melanogaster could encompass one or more follow-up studies. Still, a preliminary examination of the genes and pairwise combinations involved in AD hubs may motivate hypotheses for further genomic and functional testing, regarding the biological nature of putative incompatibilities between African and European D. melanogaster. I therefore highlight a few of the most notable genes and categories indicated by these AD hubs below. Figure 5 illustrates the pairwise components of AD hubs with at least 7 pairwise interchromosomal interactions. The most extreme AD hub, overlapping 13 clusters, was centered on the gene Argonaute 2 (Figure 6). An RNA interference gene, AGO2 is involved in the loading of siRNA onto the RISC complex, and its known functions include antiviral response, chromatin silencing, and autophagy. Along with a second AD hub including … (Table S5).
Enriched GO categories for windows in AD hubs with elevated FST (see Methods) echoed many of these same themes (Table S6). These categories included "detection of chemical stimulus involved in sensory perception" (which had the lowest P value among GO categories represented by at least 5 AD hubs), "cellular response to stimulus", "signal transducer activity", "cell surface receptor signaling pathway", "intrinsic to membrane", aspects of transmembrane transport, and GABA and allatostatin receptor activities.
Windows from 35 AD hubs met the FST criteria for this analysis, encompassing a median span of just 10 kb per hub. For 27 of these hubs, the window(s) with elevated FST included at least one gene from the GO categories mentioned above. This exploratory analysis can not conclusively point to the genes and processes underlying putative incompatibilities in the DGRP, but it does suggest hypotheses for downstream molecular and genomic studies.
Less than a third of pairwise clusters involving AD hubs linked one hub to another
Genome-wide evidence for natural selection shaping patterns of admixture
Three primary patterns in the DGRP ancestry inferences suggest that natural selection has powerfully influenced patterns of population ancestry along these genomes. First, levels of European and African ancestry vary strikingly within and between chromosome arms (Figure 1). Second, the degree of African introgression is greatly reduced in regions with higher recombination rates (Figure 3). Third, there is a genome-wide abundance of interchromosomal AD locus pairs in which strain ancestries are correlated (Figure 4).
With regard to the first point, one striking feature of the genomic ancestry landscape is the X chromosome's strongly reduced African introgression relative to the autosomes. This result mirrors the situation in sub-Saharan Africa, where admixture from outside Africa is lowest on the X [4,35]. X chromosomes may thus be inhibited from introgressing between African and non-African populations in either direction. Qualitatively similar patterns have been reported from cases of hybridization involving mice, Neanderthals, and other taxa [36-38]. Although the present results concern the admixture of two populations of the same species, they are compatible with Haldane's Rule [39]. The brief evolutionary time scale of these populations' separation (perhaps only 0.06 Ne generations [6]) leaves little time for mutation and drift alone to produce such differences.
Ancestry disequilibrium and its possible causes
Consistent with the hypothesis of epistatic incompatibilities or other fitness interactions between African and European alleles, I found that ancestry disequilibrium is widespread in the DGRP genomes and may involve a large number of locus pairs. The most obvious explanation for AD is an incompatibility between an African allele at one locus and a European allele at another, producing an epistatic fitness interaction due to consequences for survival and/or reproductive success. Positive assortative mating (if flies with African alleles at certain loci mate preferentially) might also contribute to ancestry disequilibrium among wild-caught individuals. Thus, AD could stem from interactions between individuals in addition to epistasis within individuals. It is worth mentioning, however, that the present study does not directly examine wild-caught flies, but instead the genomes of strains that were inbred for 20 generations, and had originated from >200 independent isofemale lines. Recessive BDMIs will be unmasked by the inbreeding process. Although opportunities for natural selection are limited during inbreeding, the success of a full sibling cross might be influenced by the combinations of African and European alleles that these individuals possess. Thus, inbreeding and the opportunity to study mostly homozygous genomes may amplify the signal of IFIs and aid the search for causative loci.
Another recent study that used an inbred Drosophila collection to test for genetic incompatibilities within D. melanogaster was by Corbett-Detig et al. [41]. These authors used genotyping data from the Drosophila Synthetic Population Resource (DSPR) [42], which consists of more than 1,700 recombinant inbred lines from panels that derive from 8 geographically diverse founder strains after 50 generations of interbreeding. Corbett-Detig et al. [41] found evidence of interchromosomal allelic associations, concluding that they stemmed from incompatibilities segregating within populations. In light of the current study, and given the mix of cosmopolitan and sub-Saharan strains in the DSPR, it is also possible that some of these incompatibilities had accumulated between populations. Previously, it was shown that one solution to the multiple testing problem inherent in genome-wide disequilibrium testing is to add data from a second independent hybrid / admixed population and require that both populations show a disequilibrium signal for a given pair of loci [27]. Appropriate genomic data is not yet available for such an analysis in D. melanogaster, since the admixture tracts found in sub-Saharan populations are still impractically long for locus-specific analysis [4]. However, the two-population approach could become feasible if a number of strain-specific genomes were sequenced from a region such as Saharan Africa, Madagascar [5], northern Australia, or possibly South America. The suitability of a population will depend on the timing and amount of admixture, and analysis supporting an independent history of admixture relative to North America.
Here, I proposed that without data from a second population, statistical power can be gained by focusing on "AD hubs". Indeed, I found that loci participating in multiple pairwise interactions were far more common in the real data than expected for random false positives. This step allows the identification of a set of loci with fairly strong confidence of contributing to IFIs (e.g. 79% to 84% posterior probabilities), including those discussed above. Many of these AD hubs include genes with roles in neurotransmission and sensation. It is not possible to infer from the present data what caused fitness interactions involving this group of genes, whether it be ecological or reproductive aspects of behavior, the maintenance of function in novel thermal environments, or other selective pressures. Such hypotheses will ultimately require experimental analysis.
It will also be of interest to compare the genomic admixture patterns identified in the DGRP to broader latitude clines in eastern North America and elsewhere [43-45], with the expectation that many loci subject to ancestry deviations or ancestry disequilibrium in this North Carolina sample may show atypical clinal patterns as well. However, such analyses should ideally be conducted based on ancestry proportions along the cline, as opposed to FST between northern and southern populations. If a latitude gradient in ancestry is present, then heterogeneity in north-south FST may simply reflect a ragged genomic landscape of genetic differentiation between the African and European source populations.
Since genomic data from D. melanogaster latitude clines mainly comes from pooled sequencing, a method to estimate ancestry proportions from pooled data would allow for more robust clinal analysis.
I have not estimated the precise number of loci contributing to ongoing fitness interactions in the DGRP population, and further methodological advances toward this goal would be desirable. However, the above analyses hint that this number may be substantial.
Excluding several hundred of the most extreme pairwise interactions did not erase the genomic signal of AD. The identification of 59 AD hubs at a 79% confidence level is relevant as well, as is the observation that these hubs appear to interact with a larger number of partner loci (Figure 5). These findings, together with the pronounced genomic variance in ancestry and its correlation with recombination rate, suggest that natural selection has profoundly altered the genomic consequences of admixture between temperate and tropical populations of D. melanogaster. This work provides an intriguing example of admixture between genetically differentiated populations, in a species in which large populations may facilitate an important role for natural selection in the genome [46,47]. Importantly, this may also be a system in which putative incompatibilities are particularly amenable to functional characterization.
Significance of mosaic ancestry for Drosophila research
Being the first and most completely sequenced D. melanogaster genome, the genome of the y; cn, bw, sp laboratory strain is typically the standard against which newly sequenced genomes from this species are compared. In an evolutionary context, however, this genome is not an obvious "reference", being the result of a complex history involving founder events and admixture. The reference genome's mosaic ancestry may impact reference alignments and downstream analyses. Non-African D. melanogaster have essentially a subset of the genetic diversity present in sub-Saharan Africa. Thus, a pair of non-African genomes will have fewer sequence differences than a pair of sub-Saharan genomes or a comparison between these groups. During reference alignment, too many SNP or indel differences from the reference genome may cause reads not to map. Thus, when the reference carries a European allele, reads from other non-African alleles may have a higher probability of mapping than reads from sub-Saharan alleles. This effect may depend on the method and parameters used, but could bias population genomic studies of individual genomes or pooled samples in ways that are heterogeneous across the genome.
This problem might be minimized by accounting for known variation during reference alignment, or by using a reference genome with similar genetic distances to all strains of D. melanogaster.
The mosaic ancestry of DGRP and laboratory strains may also be relevant to a range of phenotypic and genetic studies. The European and African source populations probably differed in various phenotypes [7]. Some of the phenotypic diversity resulting from their admixture may persist today and contribute to the trait variation of populations such as the DGRP. As a potential example, variants at many of the AD hub genes mentioned above

Chromosome arms with inversions were excluded from reference panels, based on evidence that inversions have recently moved between populations [4,20]. Based on the relatively older admixture of North American populations (compared with the apparently very recent introgression studied in Africa), a somewhat smaller window size was used in the present analysis. Windows were scaled by genetic diversity, as defined by 100 non-singleton SNPs in the Rwanda sample. In moderate to high recombination regions, these windows typically corresponded to 3-5 kb. Otherwise, ancestry was assessed exactly as previously described [4,49]. Regions of genomes previously inferred to contain residual heterozygosity or identity by descent with another analyzed genome were excluded from all analyses.
Ancestry deviations and gene ontology enrichment
Population ancestry proportions among DGRP genomes were found to vary on both local and broader genomic scales. To analyze genes that could be responsible for local peaks of African or European ancestry, a simple "ancestry deviation" statistic was implemented. This statistic was defined as the difference between the proportion of African ancestry in the focal window and the median of that quantity in the 51st to 250th windows on each side. This procedure helped to account for the regional ancestry background while excluding windows that may deviate along with the focal window due to the same instance of natural selection. Outlier windows for ancestry deviation were defined based on the 2.5% tails for each chromosomal arm. To avoid double-counting the same putative instance of selection, "outlier regions" grouped outlier windows with up to two non-outlier windows between them.
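To make the windowed calculation concrete, the following Python sketch implements the ancestry-deviation statistic and the outlier-region grouping as described above. It is illustrative only: the window offsets (51st to 250th on each side), the 2.5% tails, and the two-window gap tolerance come from the text, while the array layout, function names, and the toy data are assumptions of this sketch.

```python
import numpy as np

def ancestry_deviation(afr_prop, near=50, far=250):
    """Focal-window African ancestry minus the median of the 51st-250th windows
    on each side (the nearest 50 windows are excluded so that windows deviating
    due to the same selective event do not contaminate the background)."""
    n = len(afr_prop)
    dev = np.full(n, np.nan)
    for i in range(n):
        left = afr_prop[max(0, i - far): max(0, i - near)]
        right = afr_prop[i + near + 1: i + far + 1]
        background = np.concatenate([left, right])
        if background.size:
            dev[i] = afr_prop[i] - np.median(background)
    return dev

def outlier_regions(dev, lower, upper, max_gap=2):
    """Group outlier windows (beyond the 2.5% tails) into regions, allowing up
    to `max_gap` non-outlier windows between consecutive outliers."""
    is_out = (dev <= lower) | (dev >= upper)
    regions, start, last = [], None, None
    for i, flag in enumerate(is_out):
        if flag:
            if start is None:
                start = i
            elif i - last > max_gap + 1:
                regions.append((start, last))
                start = i
            last = i
    if start is not None:
        regions.append((start, last))
    return regions

# Toy example: 1,000 windows on one chromosome arm
rng = np.random.default_rng(0)
afr = rng.beta(2, 3, size=1000)                 # per-window African ancestry proportions
dev = ancestry_deviation(afr)
lo, hi = np.nanpercentile(dev, [2.5, 97.5])      # 2.5% tails, computed per arm
print(outlier_regions(dev, lo, hi)[:5])
```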
The set of all genes overlapping outlier regions (including the next exon on each side of the region) was subjected to gene ontology (GO) enrichment analysis. GO categories corresponding to the overlapping genes were counted only once per region. The locations of all outlier regions (in terms of the windows that each spanned) were randomly permuted within their original chromosome arms 50,000 times, a practice that accounts for the effects of varying gene lengths. For each GO category, the proportion of random permutations generating at least as many outliers as observed in the real data constituted an empirical P value.
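A minimal sketch of this permutation scheme for a single GO category might look as follows. The 50,000 permutations, within-arm placement, and once-per-region counting follow the description above; the data structures, the restriction to one chromosome arm, and all names are simplifications assumed for illustration.

```python
import numpy as np

def go_permutation_pvalue(region_spans, n_windows, observed_regions,
                          windows_with_go, n_perm=50000, seed=1):
    """Empirical P value for one GO category: the fraction of random placements
    of the outlier regions (within the same arm) that hit at least as many
    category-annotated regions as observed in the real data.

    region_spans      lengths of the outlier regions, in windows
    n_windows         number of windows on the chromosome arm
    observed_regions  observed number of regions overlapping the category
    windows_with_go   boolean array; True where a window overlaps a gene
                      annotated with the category (counted once per region)
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_perm):
        count = 0
        for span in region_spans:
            start = rng.integers(0, n_windows - span + 1)
            if windows_with_go[start:start + span].any():
                count += 1                      # category counted at most once per region
        if count >= observed_regions:
            hits += 1
    return hits / n_perm

# Toy example
rng = np.random.default_rng(0)
windows_with_go = rng.random(5000) < 0.05        # 5% of windows carry the category
print(go_permutation_pvalue(region_spans=[3, 1, 5, 2], n_windows=5000,
                            observed_regions=3,
                            windows_with_go=windows_with_go, n_perm=2000))
```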
Ancestry disequilibrium testing and analysis
Analogous to linkage disequilibrium, I tested for "ancestry disequilibrium" (AD) using the ancestry inferred for each genome in each window, asking whether having an African allele in one window boosted the chance of having an African allele in an unlinked window.
Fisher Exact Tests (FETs) were applied to each interchromosomal pair of windows.
Genomic distributions of FET P values were compared between the real data and permuted data sets in which individual labels were consistently shifted for the second window in a pair (thus maintaining linkage patterns among windows). Because this analysis is computationally intensive, just 10 permuted data sets were assessed, but each one contains roughly 1 × 10^8 P values, and consistent results were observed from one replicate to the next.
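The per-pair test and the label-shifting null can be illustrated with a short Python sketch. The 2×2 contingency table (African vs. European ancestry calls in two unlinked windows) and the consistent shift of genome labels for the second window follow the description above; the array layout, shift offset, and names are assumptions made only for this example.

```python
import numpy as np
from scipy.stats import fisher_exact

def ad_fet_pvalue(anc_w1, anc_w2):
    """Fisher Exact Test for ancestry disequilibrium between two windows.
    anc_w1, anc_w2: arrays of 0 (European) / 1 (African) ancestry calls,
    one entry per inbred genome (missing calls omitted for simplicity)."""
    table = np.array([
        [np.sum((anc_w1 == 1) & (anc_w2 == 1)), np.sum((anc_w1 == 1) & (anc_w2 == 0))],
        [np.sum((anc_w1 == 0) & (anc_w2 == 1)), np.sum((anc_w1 == 0) & (anc_w2 == 0))],
    ])
    return fisher_exact(table)[1]

def shifted_null_pvalues(ancestry, pairs, shift):
    """Null distribution: genome labels for the second window of each pair are
    rotated by a fixed offset, preserving linkage among windows within a genome."""
    order = np.roll(np.arange(ancestry.shape[0]), shift)
    return [ad_fet_pvalue(ancestry[:, i], ancestry[order, j]) for i, j in pairs]

# Toy example: 200 genomes x 4 windows of 0/1 ancestry
rng = np.random.default_rng(0)
ancestry = (rng.random((200, 4)) < 0.2).astype(int)
pairs = [(0, 2), (1, 3)]                         # interchromosomal window pairs
real = [ad_fet_pvalue(ancestry[:, i], ancestry[:, j]) for i, j in pairs]
null = shifted_null_pvalues(ancestry, pairs, shift=7)
print(real, null)
```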
To bin multiple neighboring window pairs that could result from the same pair of interacting loci, a set of the most extreme AD window pairs was extended to form "AD clusters". Specific criteria for selecting and extending these clusters were as follows. (1) Identify each interchromosomal window pair with a raw FET P value below 0.0001 as a starting point for an AD cluster.

[Figure caption fragment: Fold-enrichment in the real data (relative to permuted data sets) is plotted for Fisher Exact Test P values. All comparisons between X-linked and autosomal windows, and between chromosomes 2 and 3, are plotted as separate series. Above, enrichment is plotted for each 0.01-wide P value bin; below, the cumulative enrichment for all P values below a given threshold is indicated. Although there are isolated cases where two or more clusters might share the same basis, the overall pattern suggests that most AD clusters - whether they represent true or false positives - appear to represent distinct signals in the data. Future statistical methodological development should target the refinement of two-dimensional AD regions.]

[Figure caption fragment: ... compared to that for all other windows, with the autosomes and X chromosome analyzed separately. Here, FST measures genetic differentiation between European and western African populations, as proxies for the source populations that gave rise to North American D. melanogaster. Elevated FST for these hubs is consistent with the hypothesis that adaptive functional differences had arisen between the source populations, which may then have been subject to interlocus fitness interactions upon secondary contact in the New World.]

[Figure caption fragment: ... per genomic window is shown for the five major euchromatic chromosome arms (color-coded and labeled above). Genomes lacking at least 500 bp of called sequence within a given window were excluded, and windows with fewer than 50 genomes meeting that criterion were omitted from this plot.]

[Figure caption fragment: ... shown; these genes were indicated by patterns of cluster overlap and FST, but further research is needed to assess their potential involvement in interlocus fitness interactions.]
AD
Ancestry disequilibrium - the correlation of population ancestry between loci. Analogous to linkage disequilibrium, but calculated using inferred ancestries rather than specific genotypes.
AD cluster
A pair of genomic regions that contain one or more window pairs with strong ancestry disequilibrium.
AD hub
A set of neighboring windows that overlaps an unusually large number of AD clusters. These genomic regions are hotspots for ancestry disequilibrium, and may experience interlocus fitness interactions with a number of unlinked loci.
BDMI
Bateson-Dobzhansky-Muller incompatibilities. Fitness may be compromised when variants from previously isolated populations are brought into contact by admixture.
IFI
Interlocus fitness interaction. AD may indicate an IFI, and a BDMI is a potential explanation for an IFI.
DSPR
Drosophila Synthetic Population Resource
[Box 1 to be placed near the first mention of "ancestry disequilibrium"] | 2016-11-01T19:18:48.349Z | 2015-02-04T00:00:00.000 | {
"year": 2015,
"sha1": "e70f5251903fb25516130f4592df33f50f811d29",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/mbe/article-pdf/32/12/3236/17472651/msv194.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "e7db6473064da34a29c85d576f71cc725c4313c5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
132815919 | pes2o/s2orc | v3-fos-license | Impact of tropical Atlantic sea-surface temperature biases on the simulated atmospheric circulation and precipitation over the Atlantic region: An ECHAM6 model study
Like many coupled atmosphere-ocean general circulation models, the coupled Earth System Model developed at the Max Planck Institute for Meteorology suffers from severe sea-surface temperature (SST) biases in the tropical Atlantic. We performed a set of SST sensitivity experiments with its atmospheric model component ECHAM6 to understand the impact of tropical Atlantic SST biases on atmospheric circulation and precipitation. The model was forced by a climatology of observed global SSTs to focus on the simulated seasonal and annual mean state climate. Through the superposition of varying tropical Atlantic bias patterns extracted from the MPI-ESM on top of the control field, this study investigates the relevance of the seasonal variation and spatial structure of tropical Atlantic biases for the simulated response. Results show that the position and structure of the Intertropical Convergence Zone (ITCZ) across the Atlantic is significantly affected, exhibiting a dynamically forced shift of the annual mean precipitation maximum to the east of the Atlantic basin as well as a southward shift of the oceanic rain belt. The SST-induced changes in the ITCZ in turn affect seasonal rainfall over the adjacent continents. However, not only the ITCZ position but also other effects arising from biases in tropical Atlantic SSTs, e.g. variations in the wind field, change the simulation of precipitation over land. The seasonal variation and spatial pattern of tropical Atlantic SST biases turn out to be crucial for the simulated atmospheric response and are essential for analyzing the contribution of SST biases to coupled model mean state biases. Our experiments show that MPI-ESM mean-state biases in the Atlantic sector are mainly driven by SST biases in the tropical Atlantic, while teleconnections from other basins seem to play a minor role.
Introduction
The majority of current coupled atmosphere-ocean general circulation models (AOGCMs) suffers from substantial biases in simulating sea-surface temperatures (SSTs) in the tropical Atlantic (TA) in terms of climatological seasonal cycle and climate mean state. The most common shortcoming of these models is a warm SST bias in the south-eastern tropical Atlantic (SETA) (e.g. Fig. 1). The bias maximum often exceeds 5 K and is centered at the eastern boundary of the basin in the upwelling region of the Namibian-Angolan coast (Richter et al. 2012a;Toniazzo and Woolnough 2014;Voldoire et al. 2014). The extension of this warm bias spreads towards the equator and covers a large fraction of the basin. Moreover, several AOGCMs simulate too cold SSTs along the coast of Venezuela and Brazil. In many cases the combination of the aforementioned erroneous SSTs leads to a reversal of the annual mean equatorial SST gradient (Davey et al. 2002;DeWitt 2005;Richter and Xie 2008).
The large-scale positive bias in the SETA as well as local biases in tropical Atlantic coastal regions constitute a long-standing problem, with only little improvement over the last years of model development (Richter and Xie 2008; Toniazzo and Woolnough 2014). Plenty of recent studies provide a detailed investigation of these SST biases and their interaction with other mean-state model biases, hypothesizing on their origin and development but to a lesser extent dealing with their impacts on climate simulations. Approaches to solving SST-bias related problems prove to be dependent on the considered model and its parameterizations. Yet there is high agreement that near-equatorial westerly wind biases, which cause an anomalously weak surface current, are a key factor for the simulated near-equatorial SST biases (Richter et al. 2012b). The consequential decrease in upwelling of cooler water causes positive temperature anomalies in the eastern basin. Here, the positive Bjerknes feedback between the SST gradient and near-surface winds enhances SST biases along the equator. Since equatorial westerly wind biases are present even in atmospheric general circulation models (AGCMs) forced by observed SSTs, atmospheric biases are suggested to play an important role in generating near-equatorial warm SST biases (Voldoire et al. 2014).
The poor representation of coastal upwelling is mainly caused by weaker-than-observed alongshore winds yielding reduced Ekman transport (Xu et al. 2014). A remote effect due to oceanic Kelvin waves may be relevant as well. Furthermore, due to their coarse horizontal resolution, global models fail to represent small-scale orographic features as well as oceanic mesoscale eddies, both of which appear to be important mechanisms for coastal upwelling (Richter 2015).
Another important contributor to warm SST biases in the tropical Atlantic is the shortcoming of most general circulation models (GCMs) in realistically simulating stratocumulus clouds. The well-documented underestimation of cloud cover in GCMs is an element of a positive atmosphere-ocean feedback mechanism that tends to maintain and even intensify warm SST biases. Simulated reduced cloudiness leads to excessive downward shortwave radiation, warming the surface and consequently decreasing lower-tropospheric stability and degrading favorable conditions for stratocumulus formation (Voldoire et al. 2014). To what extent the model resolution contributes to the simulated biases is still an open debate. A positive effect of increased resolution on horizontal winds can be found near coasts. It is linked to a better resolved land orography. Also, the relative importance of individual error sources and their coupling has not been fully addressed yet (Richter 2015).
Fig. 1 (caption): The seasonal mean pattern of SST biases in the tropical Atlantic as simulated in the MPI-ESM appears as an intense positive bias in the SETA and a cool bias along the coast of Brazil. The bias pattern features a strong seasonal cycle. The bias climatology of the MPI-ESM historical simulation (1979-2005 period) is defined relative to AMIP-II reanalysis. Seasonal mean values are computed on the basis of the monthly bias climatology used for the model setup of sensitivity experiment BIAS_mm.
While possible causes of tropical Atlantic SST biases have already been the subject of numerous studies, very little effort has been spent on the examination of influences arising from these biases (Wahl et al. 2011; Murakami et al. 2014; Sasaki et al. 2014). That is why we address this issue in our study. How SST biases impact the simulation of climate is an important question for the interpretation and reliability of model output. Ignoring the consequences of such SST biases increases the uncertainty of projected global climate change. SST biases may drastically distort model results and thereby possibly modify the future climate projections on which society relies.
For instance, studies focusing on the role of SST anomalies in the tropical Atlantic sector (due to natural variability) document important correlations between Atlantic SSTs and regional rainfall (Fontaine and Janicot 1996; Yoon and Zeng 2010) as well as connections to climate in other basins (Kucharski et al. 2008). Consequently, since SST biases are always present, these artificial anomalies may cause substantial changes in climate simulations. Especially for the Amazon and Sahel regions, where future climate change is still very uncertain and inconsistently simulated in different GCMs (Borges et al. 2014; Saini et al. 2015), a quantification of changes depending on SST biases may be useful information. Ashfaq et al. (2010) introduced a possible way of evaluating the influence of tropical SST biases on climate predictions by conducting a quantile-based bias correction. They found a substantial effect on the precipitation distribution over many regions due to changes in atmospheric moisture content and circulation, driven by TA SST biases.
Early studies identified a positive feedback between warm SST errors in the south-eastern Atlantic, equatorial errors, and precipitation (Toniazzo and Woolnough 2014). Xu et al. (2014) analyze multi-model ensembles from the Coupled Model Intercomparison Project Phase 5 and 3 archives (CMIP5 and CMIP3) and show that the SETA bias is responsible for a southward shift of the Atlantic ITCZ and a cooling of the tropical western Atlantic.
In this study, we want to approach the investigation of bias impacts in a more idealized way. Our analysis is based on atmosphere-only simulations using the AGCM ECHAM6. We force the model with a climatology of SST boundary conditions to neglect interannual variability and focus on annual mean state effects as well as implications on seasonal timescales. In different configurations, anomaly patterns limited to the tropical Atlantic region are added to the observed SST field that remains unchanged elsewhere.
With this new approach of addressing the mean-state SST bias problem by performing uncoupled atmospheric sensitivity experiments, we test the sensitivity of atmospheric climate simulations in ECHAM6 to TA SST biases. A further aim of this study is the comparison of the atmospheric response to TA SST biases with coupled model mean-state biases, to derive the contribution of SST biases to the prevalence of other prominent biases in the MPI-ESM. Ignoring interannual variability in the boundary conditions not only reduces the complexity of our experiments but also allows for a general statement on whether interannual variability is important for the contribution to coupled model mean-state biases at all. Disregarding atmosphere-ocean interactions, our experiments do not allow for a complete analysis of mechanisms driven by TA SST biases. However, due to the elimination of ocean feedbacks, the direct atmospheric response can be understood more easily, and simulated changes are attributable to TA SST biases only, because SSTs in all other basins are unchanged.
Besides examining the direct influence of tropical Atlantic SST biases on atmospheric circulation and precipitation patterns, the relevance of the temporal and spatial structure of the TA bias pattern will be elucidated. We will show that TA mean-state biases in the MPI-ESM have a strong seasonal cycle (Fig. 1) that must be considered for the analysis of the intra-annual atmospheric response. Furthermore, our sensitivity study demonstrates that cold biases in the western part of the tropical Atlantic are no less important than positive SST biases in the SETA for the modulation of the basin-wide climate.
Model and experimental design
To investigate the impact of tropical Atlantic SST biases we perform a set of SST sensitivity experiments. Numerical model simulations for this study have been performed using the atmospheric general circulation model (AGCM) ECHAM6 (version 3). ECHAM6 is used as the atmospheric component in the fully comprehensive MPI Earth System Model (MPI-ESM). Here, we perform uncoupled AGCM simulations by prescribing global sea-surface temperatures and sea-ice concentrations (SICs). In our model configuration we use a Gaussian T63 grid providing 1.8° × 1.8° horizontal resolution at the equator. 47 levels in the vertical resolve the atmosphere up to 0.01 hPa (∼80 km, high-top model). To reduce complexity and focus on the influence of tropical Atlantic SST biases on the climate mean state, we neglect interannual variability of the background state by forcing the model with climatological boundary conditions. Seasonal climatologies have been derived from historical AMIP-II boundary conditions (Taylor et al. 2000).
All experiments are integrated for 50 years to create a sufficiently large ensemble to obtain statistically robust results.
Our control simulation (CTL) is forced by monthly mean SST and SIC boundary conditions, averaged over the historical period. Despite forcing the model with a climatology of observed SSTs, it simulates precipitation and wind biases (Siongco et al. 2014). Throughout the year the ITCZ in CTL is simulated as broader than observed and slightly shifted to the south (Fig. 2). In the annual mean, CTL displaces the ITCZ maximum to the west, while it is observed centered over the Atlantic. The precipitation bias is accompanied by a zonal wind bias predominating in the northeast of the Atlantic. These biases confirm common shortcomings among uncoupled AGCMs (Siongco et al. 2014; Voldoire et al. 2014).
Sensitivity experiments for this study are characterized by superimposing an SST bias pattern on the tropical Atlantic SST field in varying configurations. We define the tropical Atlantic as the maritime region from 30°S to 30°N with lateral boundaries of the North and South American continents in the west and Africa in the east (see also Fig. 1). Global SIC and SSTs outside the tropical Atlantic region remain unchanged compared to CTL. At the northern and southern boundaries of the tropical Atlantic region, SSTs have been smoothed towards the undisturbed SST field by computing weighted averages of each boundary grid point (weight 1.0) with its 8 surrounding points (weighted 0.5 above and aside, 0.3 in the corners), respectively.
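For illustration, the boundary smoothing can be written as a small weighted-average stencil. The weights (1.0 for the boundary point, 0.5 for the points above and beside it, 0.3 for the corners) are as stated above; the grid layout, periodicity in longitude, and all names are placeholder assumptions of this sketch.

```python
import numpy as np

def smooth_boundary_row(sst, row):
    """Blend one boundary latitude row of the perturbed SST field towards the
    surrounding field using weights 1.0 (centre), 0.5 (edge neighbours) and
    0.3 (corner neighbours)."""
    weights = np.array([[0.3, 0.5, 0.3],
                        [0.5, 1.0, 0.5],
                        [0.3, 0.5, 0.3]])
    nlat, nlon = sst.shape
    smoothed = sst[row].copy()
    for j in range(nlon):
        acc, wsum = 0.0, 0.0
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                i2 = row + di
                j2 = (j + dj) % nlon            # periodic in longitude
                if 0 <= i2 < nlat:
                    w = weights[di + 1, dj + 1]
                    acc += w * sst[i2, j2]
                    wsum += w
        smoothed[j] = acc / wsum
    return smoothed

# Toy 2-D SST field (latitude x longitude); smooth one boundary row in place
sst = 300.0 + np.random.default_rng(0).normal(0, 1, size=(10, 20))
sst[4] = smooth_boundary_row(sst, 4)
```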
The main bias pattern for the sensitivity experiments has been taken from a historical simulation of the MPI-ESM coupled model (version 1.1). The referred MPI-ESM simulation has been performed following the CMIP5 setup for post-1850 climate simulations incorporating close-to-observed forcing. To derive a climatological SST bias pattern for the tropical Atlantic, we have considered the years 1979-2005 only and compared simulated SSTs with AMIP-II reanalysis (Fiorino 2000), using monthly data of the respective period. As a first step, the simulated SST field has been corrected by removing the field-mean, monthly-mean bias in surface temperature within the global tropical belt (30°S-30°N) for the considered period, which varies between −0.5 K (August) and −1.3 K (March). This approach accounts for general MPI-ESM model biases in the tropics that are not limited to the Atlantic Ocean and therefore are not the object of our study. Then, the SST bias pattern of the tropical Atlantic has been computed as the departure of corrected MPI-ESM SSTs from the observed state. Finally, a climatology has been calculated by averaging the resulting multi-year monthly anomalies over the whole period 1979-2005. Figure 1 shows the seasonal cycle of the resulting climatological SST bias pattern.
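The three processing steps (removing the tropical-belt field-mean monthly bias, differencing against the reanalysis, and averaging to a 12-month climatology) can be sketched in Python as below. The array shapes, variable names, and the area weighting are assumptions made only to illustrate the procedure described above.

```python
import numpy as np

def monthly_bias_climatology(sst_model, sst_obs, lat):
    """sst_model, sst_obs: arrays of shape (n_months, nlat, nlon) for 1979-2005.
    Returns a (12, nlat, nlon) climatological SST bias pattern."""
    tropics = (lat >= -30.0) & (lat <= 30.0)             # 30S-30N belt
    w = np.cos(np.deg2rad(lat))[:, None]                  # latitude area weights

    # 1) Remove the field-mean, monthly-mean bias within the global tropical belt
    belt_bias = sst_model[:, tropics, :] - sst_obs[:, tropics, :]
    belt_weights = np.broadcast_to(w[tropics], belt_bias.shape[1:]).ravel()
    belt_mean = np.average(belt_bias.reshape(belt_bias.shape[0], -1),
                           axis=1, weights=belt_weights)
    corrected = sst_model - belt_mean[:, None, None]

    # 2) Departure of corrected model SSTs from the observed state
    bias = corrected - sst_obs

    # 3) Multi-year monthly climatology (12 calendar months)
    return np.stack([bias[m::12].mean(axis=0) for m in range(12)])

# Toy example with random fields: 27 years x 12 months on a 10 x 20 grid
rng = np.random.default_rng(0)
lat = np.linspace(-85, 85, 10)
model = 300 + rng.normal(0, 1, (27 * 12, 10, 20))
obs = 300 + rng.normal(0, 1, (27 * 12, 10, 20))
print(monthly_bias_climatology(model, obs, lat).shape)    # (12, 10, 20)
```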
Based on the derived bias pattern, three different SST boundary conditions have been constructed. Table 1 describes the main characteristics of the individual simulations. In experiment BIAS_mm the TA bias pattern is superimposed onto the SST field of CTL in its unchanged and complete shape. Consequently, BIAS_mm captures the climatological-mean seasonal variability in the TA as simulated by the MPI-ESM, including its biases. For a better understanding of the simulated response, two additional experiments are performed. In BIAS_mp, solely the positive part of the SST bias pattern, mainly located in the SETA, is considered. This experiment helps to isolate the part of the atmospheric response that can be attributed to positive biases in the TA only. It conforms to a differential warming of the southern Atlantic. Experiment BIAS_am is characterized by superposing the annual-mean bias pattern onto the control field for each month, neglecting the seasonal evolution of the coupled model biases. Both BIAS_mp and BIAS_am serve as potential simplifications of the spatial and temporal structure of the primitive bias pattern. Additionally, since one fundamental scope of application of GCMs is the projection of future climate, we examine the atmospheric response to tropical Atlantic SST biases in a simplified global warming scenario (Table 2). Idealizing global warming by a homogeneous increase of global SSTs by 2 K, and assuming that the MPI-ESM SST bias pattern remains unchanged under warmer conditions, we rerun CTL and the sensitivity experiment BIAS_mm with this changed background state.
[Fig. 2 caption fragment: ... show the CTL zonal wind anomaly at 925 hPa compared to NCEP-II. All values are zonally averaged from 60°W to 25°W (a) and from ...]
Analysis methods
Impacts of TA SST biases on the atmospheric circulation and precipitation distribution are examined by focusing on differences of monthly mean values between the SST sensitivity experiments and the AGCM control simulation (BIAS_xx−CTL). Using a two-tailed Student's t test, we focus on simulated changes that are statistically significant at the 95% confidence level.
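As a rough illustration, the significance screening could be expressed as below: a two-tailed Student's t test applied per grid point to the samples from experiment and control, flagging the 95% level. The sample shapes and names are illustrative only.

```python
import numpy as np
from scipy.stats import ttest_ind

def significant_difference(exp, ctl, alpha=0.05):
    """exp, ctl: arrays of shape (n_samples, nlat, nlon), e.g. monthly means.
    Returns (difference of means, boolean mask of 95%-significant points)."""
    diff = exp.mean(axis=0) - ctl.mean(axis=0)
    _, p = ttest_ind(exp, ctl, axis=0)          # two-tailed by default
    return diff, p < alpha

# Toy example with 50 "years" of data on a 10 x 20 grid
rng = np.random.default_rng(0)
ctl = rng.normal(5.0, 1.0, (50, 10, 20))
exp = rng.normal(5.3, 1.0, (50, 10, 20))
diff, sig = significant_difference(exp, ctl)
print(sig.mean())                                # fraction of significant grid points
```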
To assess model performance and the relative importance of biases in both the coupled model and the uncoupled sensitivity experiments, we compare our results to observations and reanalysis data. As reference for global precipitation we use the Global Precipitation Climatology Project (GPCP) Version-2 monthly precipitation analysis (Adler et al. 2003). For global surface temperature and horizontal winds we use the NCEP-DOE AMIP-II reanalysis (R-2), since it is consistent with the AMIP-II boundary conditions of SST and SIC that are used in our sensitivity experiments (Kanamitsu et al. 2002). All data have been bilinearly interpolated to T63 horizontal resolution to match the ECHAM6 resolution used in this study.
To quantify simulated changes of a parameter X compared to its reference state O (e.g. CTL, observations, reanalysis) and relate them to mean-state biases in the coupled model, we define its bias index BI_X as

$$BI_X = \frac{\langle \overline{X} \rangle - \langle \overline{O} \rangle}{\langle \overline{O} \rangle} \qquad (1)$$

Overbars indicate the time mean, angle brackets the area average over a certain domain.
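A direct translation of Eq. 1 into Python might read as follows, using a cosine-of-latitude weighted area average for the domain mean; the weighting choice, the box definition, and all names are assumptions of this sketch rather than the paper's own code.

```python
import numpy as np

def bias_index(x, o, lat, lon, domain):
    """Eq. 1: BI_X = (<X> - <O>) / <O> over a lat/lon box.
    x, o: time-mean fields (nlat, nlon) of the experiment and the reference;
    domain: (lat_min, lat_max, lon_min, lon_max) of the averaging region."""
    lat_min, lat_max, lon_min, lon_max = domain
    mask = ((lat[:, None] >= lat_min) & (lat[:, None] <= lat_max) &
            (lon[None, :] >= lon_min) & (lon[None, :] <= lon_max))
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(lon)[None, :]
    x_mean = np.average(x[mask], weights=w[mask])
    o_mean = np.average(o[mask], weights=w[mask])
    return (x_mean - o_mean) / o_mean

# Toy example: precipitation bias over a Sahel-like box (10N-20N, 20W-10E)
rng = np.random.default_rng(0)
lat = np.linspace(-89, 89, 90)
lon = np.linspace(-179, 179, 180)
ctl = rng.gamma(2.0, 1.0, (90, 180))
exp = ctl * 0.8                                  # e.g. a uniform 20% dry bias
print(bias_index(exp, ctl, lat, lon, (10, 20, -20, 10)))   # about -0.2
```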
Changes in the moisture budget
For a detailed analysis of the SST bias impact on the hydrological cycle we divide its response into two components: a dynamical component due to mean horizontal circulation changes δu, and a thermodynamic one, dependent only on changes in mean specific humidity δq. According to the moisture budget equation (Eq. 2), the variation of precipitable water W in the atmosphere is determined by the sum of precipitation P, evaporation E, and the convergence of the vertically integrated moisture flux qu:

$$\frac{\partial W}{\partial t} = E - P - \nabla \cdot \frac{1}{g}\int_{0}^{p_s} q\,\mathbf{u}\; dp \qquad (2)$$

For long-term averages the local derivative of W is negligibly small. Then, the net flux of water substance at the surface (P − E) is balanced by the convergence of the vertically integrated moisture flux qu only (Trenberth and Guillemot 1995). Linearization of this equation and taking the monthly mean yields Eq. 4, which is broken down below.
The subscript s in the fourth term on the right-hand side indicates surface quantities, overbars monthly-mean values, and primes departures from the monthly mean. The term including surface quantities results from the generic relation and will be neglected because it is mainly dependent on orographic features and is small compared to the other terms (Pomposi et al. 2014). Considering differences in the net flux of surface water substance δ(P − E) (e.g. between sensitivity simulations and the control run), we can break down Eq. 4 into forcing driven by mean circulation changes (dynamic component) and forcing due to variations in mean specific humidity (thermodynamic component), as expressed in Eq. 5. Vertical integration of model quantities is approximated by a summation over all model levels k. Since we base our analysis on monthly-mean quantities only, in the following study we do not investigate the role of transient eddies in changing the net surface water budget. Generally, two main theories about changes in tropical rainfall exist (Huang et al. 2013): the first one claims that tropical rainfall changes follow a "wet-get-wetter" pattern, stating that rainfall increases in already wet regions. The other theory links the increase in rainfall to those regions where the surface warming exceeds the area-mean tropical warming ("warmer-get-wetter"). As Huang et al. (2013) show in their study, the "wet-get-wetter" mechanism corresponds to the thermodynamic component (Eq. 5, terms 3 + 4 on the right-hand side), while the "warmer-get-wetter" effect is directly coupled to the dynamic component (Eq. 5, terms 1 + 2 on the right-hand side).
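To make the dynamic/thermodynamic split more tangible, here is a rough Python sketch of one common form of this decomposition (in the spirit of the approach of Huang et al. 2013). Because the displayed equations were lost in extraction, the exact term grouping of the paper's Eq. 5 is not recoverable here; the pressure-level summation, the neglect of the advective parts, and all names are therefore assumptions of this sketch, not the authors' formulation.

```python
import numpy as np

G = 9.81  # m s^-2

def vert_sum(field, dp):
    """Mass-weighted vertical integral approximated by a sum over model levels k:
    (1/g) * sum_k field_k * dp_k."""
    return (field * dp[:, None, None]).sum(axis=0) / G

def divergence(fx, fy, dx, dy):
    """Centred-difference horizontal divergence on a regular grid."""
    return np.gradient(fx, dx, axis=1) + np.gradient(fy, dy, axis=0)

def dynamic_thermodynamic(q_ctl, u_ctl, v_ctl, q_exp, u_exp, v_exp, dp, dx, dy):
    """Split the change in moisture-flux convergence (approximately delta(P - E))
    into a part due to circulation changes (dynamic) and a part due to humidity
    changes (thermodynamic). Fields have shape (nlev, nlat, nlon)."""
    dq, du, dv = q_exp - q_ctl, u_exp - u_ctl, v_exp - v_ctl
    dyn = -divergence(vert_sum(q_ctl * du, dp), vert_sum(q_ctl * dv, dp), dx, dy)
    thermo = -divergence(vert_sum(dq * u_ctl, dp), vert_sum(dq * v_ctl, dp), dx, dy)
    return dyn, thermo

# Toy example on a 47-level, 10 x 20 grid
rng = np.random.default_rng(0)
shape = (47, 10, 20)
dp = np.full(47, 2000.0)                          # Pa per layer (placeholder)
q0, u0, v0 = rng.random(shape) * 1e-2, rng.normal(0, 5, shape), rng.normal(0, 5, shape)
q1, u1, v1 = q0 * 1.05, u0 + rng.normal(0, 1, shape), v0 + rng.normal(0, 1, shape)
dyn, thermo = dynamic_thermodynamic(q0, u0, v0, q1, u1, v1, dp, dx=2e5, dy=2e5)
print(dyn.mean(), thermo.mean())
```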
Annual mean Atlantic ITCZ response
As a result of TA SST biases, annual mean precipitation changes across the tropical Atlantic are mainly characterized by an eastward shift of the rainfall maximum, and expansion and southward movement of the zonal-mean ITCZ. Tropical precipitation mainly follows maximum SSTs in our model. Bias-induced near-equatorial rainfall changes are driven by circulation changes obeying the "warmer-get-wetter" theory while changes in the subtropical eastern Atlantic are thermodynamically forced, in accordance with the "wet-get-wetter" mechanism (Sect. 3.1).
The TA SST bias pattern leads to a substantial decrease in precipitation over the northwestern tropical Atlantic (NWTA), and an increase across the SETA (Fig. 3). The drying is most intense along the northwestern coast of South America, located above cold SST anomalies. However, a comparison shows that precipitation changes are not one-to-one associated with the SST anomalies. Increasing rainfall is located over the Gulf of Guinea and peaks at the coast of Gabon, while the SST bias pattern has its positive peak anomaly further to the south at the Namibian coast. The mean response in tropical Atlantic precipitation depends on changes in absolute SSTs induced by the bias pattern (Fig. 4). Following high SSTs, the precipitation maximum shifts zonally eastwards. When forcing ECHAM6 with observed SSTs in the tropical Atlantic region (CTL), maximum rainfall is simulated in the west of the basin. This conforms with results from Siongco et al. (2014), who analyzed the position of the Atlantic ITCZ in AGCMs compared to observations. In their study they show that in AMIP-type simulations the model ECHAM6 misplaces oceanic precipitation clusters westwards of the observed location centered over the Atlantic. Introducing the tropical Atlantic SST bias pattern (BIAS_mm) generates the highest SSTs in the Gulf of Guinea and thereby causes the Atlantic precipitation maximum to shift from its western location to the eastern Atlantic. The west-to-east shift of maximum precipitation is accompanied by a weakening of the near-equatorial easterly trades (Fig. 4), driven by the reversal of the zonal SST gradient. Over the central Atlantic the zonal wind component is reduced to more than half of its value in CTL. Over the Gulf of Guinea zonal winds change sign, yielding a westerly flow. These changes in near-equatorial low-level wind are directly coupled to anomalous rising motion at 0°E and sinking motion around 45°W (Fig. 5). Vertical and horizontal circulation changes result in a substantial weakening of the Atlantic Walker circulation cell, which is known to be thermally driven (James 1994).
Fig. 3 (caption): In the annual mean, BIAS_mm shows a dry bias along the Brazilian coast and a wet bias over the Gulf of Guinea compared to CTL. a Precipitation difference in mm/day. b Difference in surface temperature in K. Dotted areas indicate differences statistically significant at the 95% confidence level.
Besides the zonal displacement of maximum rainfall, under the influence of SST biases the annual mean ITCZ broadens towards the south (Fig. 4). Imposed SST biases cause a meridional widening of the rain belt of about 6°. The zonal-mean Atlantic rainfall maximum is meridionally shifted to the south. This result demonstrates that SST biases in the TA worsen the already too far southward position of the ITCZ in CTL (Fig. 2). The change in meridional ITCZ structure conforms with a modification of the annual-mean Hadley circulation over the Atlantic basin (Fig. 6). While in CTL the branch of ascending motion is located north of the equator, BIAS_mm simulates it slightly south of the equator. Furthermore, the region of rising motion broadens, in agreement with the meridional broadening of the ITCZ. The meridional modulation of the Hadley Cell branch of rising motion is associated with the large-scale interhemispheric north-south SST gradient (Table 3). Due to the SST bias pattern we differentially warm the southern tropical Atlantic. This warming forces a southward movement of the ITCZ towards the warmed hemisphere (Schneider et al. 2014). In our experiment the bias pattern generates not only a weakening of the SST gradient, which is positive in CTL, but a reversal. Due to the intense warming south of the equator and the simultaneous cooling north of the equator in BIAS_mm, the south tropical Atlantic is warmer than the north tropical Atlantic. For this reason the branch of rising motion of the Hadley Cell is displaced to the southern hemisphere.
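The interhemispheric and equatorial SST gradients referred to here (and summarized in Table 3) can be computed in a few lines. The box limits below (north vs. south tropical Atlantic, and western vs. eastern equatorial Atlantic) are plausible but assumed, since the paper's exact averaging regions are not reproduced in this excerpt.

```python
import numpy as np

def box_mean(field, lat, lon, box):
    """Cosine-weighted mean of a (nlat, nlon) field over box = (s, n, w, e)."""
    s, n, west, east = box
    mask = ((lat[:, None] >= s) & (lat[:, None] <= n) &
            (lon[None, :] >= west) & (lon[None, :] <= east))
    weights = (np.cos(np.deg2rad(lat))[:, None] * np.ones(len(lon))[None, :])[mask]
    return np.average(field[mask], weights=weights)

def sst_gradients(sst, lat, lon):
    """Interhemispheric (north-minus-south TA) and equatorial zonal
    (west-minus-east) SST gradients; box limits are illustrative only."""
    ns = (box_mean(sst, lat, lon, (0, 25, -60, 10))
          - box_mean(sst, lat, lon, (-25, 0, -40, 15)))
    we = (box_mean(sst, lat, lon, (-3, 3, -40, -20))
          - box_mean(sst, lat, lon, (-3, 3, -10, 10)))
    return ns, we

# Toy example
rng = np.random.default_rng(0)
lat = np.linspace(-89, 89, 90)
lon = np.linspace(-179, 179, 180)
sst = 300 + rng.normal(0, 0.5, (90, 180))
print(sst_gradients(sst, lat, lon))
```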
All in all, our experiments show a close connection between SST biases in the TA and the forcing of precipitation and wind changes in ECHAM6. The response of precipitation and circulation in BIAS_mm bears high resemblance with coupled model mean state biases (Richter et al. 2012a), as for example the weaker than observed equatorial easterlies across the Atlantic. We have also shown that precipitation changes in BIAS_mm are linked to anomalies in the tropical vertical overturning circulations.
A breakdown of the moisture budget equation allows for a more detailed analysis of simulated changes in the global hydrological cycle. In agreement with the southward shift of the Atlantic ITCZ, the annual mean P − E difference, as an indicator for wet (P > E) and dry (P < E) regions, displays a migration of wet regions towards the south (Fig. 7a). In the deep tropics (15°S-15°N) this hydrology change is determined by the third (divergent dynamic) term on the right-hand side of the moisture budget equation (Eq. 5, Fig. 7b). This reveals the dominant role of mean horizontal circulation changes δu in changing the distribution of precipitation across the Atlantic, compared to changes in specific humidity. This in turn satisfies the "warmer-get-wetter" theory for precipitation changes in the tropics (Huang et al. 2013). Furthermore, Eq. 7 emphasizes that the relevant contribution of circulation changes can be reduced to the divergent component. This means that arising anomalies in horizontal divergence and convergence, which couple to anomalous vertical atmospheric motion, control the pattern of δ(P − E).
In higher latitudes (outside 15°S-15°N), in the eastern basin outside the catchment area of the ITCZ, changes in the net surface water budget δ(P − E) are dominated by evaporative changes. These changes are thermodynamically driven, thus driven by changes in specific humidity δq, and obey the "wet-get-wetter" or rather "dry-get-drier" theory (Held and Soden 2006). Using the moisture budget equation allows for a reduction of the forcing term to the advective thermodynamic component (Fig. 7c). This shows that in the considered subtropical region the response in δ(P − E) is controlled by anomalous moisture advection due to a changed gradient in specific humidity. Specific humidity is changing because of increased evaporation over the SETA due to the positive SST biases located there.
Experiments BIAS_am and BIAS_mp reproduce the main characteristics of the annual-mean precipitation change, with similar patterns but varying amplitudes compared to the response in BIAS_mm. Superimposing the tropical Atlantic annual-mean bias (BIAS_am) instead of the monthly bias climatology (BIAS_mm) leads to a slight intensification of the response observed in BIAS_mm. In case of reducing the SST bias pattern to its positive component (BIAS_mp), the annual mean precipitation response is less pronounced and mainly manifests as a precipitation increase along the Guinea Coast, while the drying over Brazil is very weak. The intercomparison of all sensitivity experiments shows high agreement between the annual mean precipitation change and the variation of the large-scale interhemispheric SST gradients in the tropical Atlantic (Table 3). The southward shift of the Atlantic rain belt is most pronounced in BIAS_mm and BIAS_am, as the North-South (NS) SST gradient shows the largest weakening. In other words, the bias pattern considering both positive and negative anomalies imposes a larger change of the interhemispheric energy balance, by warming the South TA and cooling the North TA, than solely the component of warm SST biases. This significantly affects the ITCZ position as it controls the cross-equatorial energy flux (Schneider et al. 2014). Considering both positive and negative anomalies in the bias pattern in BIAS_mm and BIAS_am also contributes to a larger anomalous zonal SST gradient than in BIAS_mp. This yields a less pronounced zonal wind bias in BIAS_mp and a less intense weakening of the Walker circulation. Forcing the model with the annual mean bias pattern in each month (BIAS_am) causes an overestimation of the zonal SST gradient anomaly on intraseasonal time-scales, leading to an increased annual-mean zonal wind change evidencing a reversal of near-equatorial zonal winds in the eastern basin.
Influence on the seasonal cycle of the Atlantic ITCZ
The seasonal oceanic ITCZ response in the tropical Atlantic sector differs substantially between the individual SST sensitivity experiments. This is in contrast to the annual mean response, which appears to be qualitatively similar in BIAS_mm, BIAS_mp, and BIAS_am (Sect. 4.1). Results show a high dependence of the ITCZ latitudinal position on the seasonal-mean meridional SST gradient between the tropical North and South Atlantic. ITCZ shifts are most pronounced in DJF in the western basin and in JJAS in the eastern basin. Zonal wind anomalies drive the strength of western Atlantic drying and eastern Atlantic wetting and are controlled by the monthly near-equatorial zonal SST gradient. Because the general structure of the Atlantic ITCZ is not zonally symmetric, we will analyze its eastern and western parts separately. Under the influence of SST biases in the tropical Atlantic (BIAS_mm), western Atlantic rainfall decreases to half of the simulated precipitation in CTL from April to October, due to anomalous sinking motion forced by the cold SST biases along the South American coast (Fig. 8). As has already been detected on annual mean time-scales, the seasonal rainfall response across the Atlantic is dynamically driven, too. Tropical Atlantic mean overturning circulations exhibit high sensitivity to the underlying SST pattern, which is substantially modified by the inclusion of SST biases. The strong linkage between SSTs and mean circulation causes a displacement of the zones of maximum moisture convergence, which provokes precipitation changes.
Because of the warm SST biases in the south TA, in DJF the western ITCZ shifts southward from its seasonal mean position of 5°S-10°N to 20°S-5°S (Fig. 8), following warm SSTs. This simulated southward shift of the ITCZ in DJF depends mainly on the positive SST biases in the SETA, generating a substantial interhemispheric SST gradient anomaly. This is confirmed by sensitivity experiment BIAS_mp, which equivalently simulates the shift of the rain belt (Fig. 1). In turn, BIAS_am does not capture the western ITCZ shift in DJF because it does not cover the seasonal spread of positive biases across the southern Atlantic that prevails in BIAS_mm and BIAS_mp and causes the substantial southern tropical Atlantic warming.
From the analysis in Sect. 4.1 we know that the substantial drying in BIAS_mm across the western Atlantic is due to a zonal shift of the rainfall maximum to the east, driven by a slow-down of the Walker circulation, which in turn is forced by the anomalous equatorial SST gradient. Because disregarding negative biases in the pattern does not capture the strong zonal SST gradient anomaly, BIAS_mp cannot reproduce the drying taking place from April to October in the western basin. In agreement with this interpretation, experiment BIAS_am, which includes equatorial cold biases in the west and warm biases in the east, similarly simulates the drying in the western Atlantic during boreal summer as seen in experiment BIAS_mm (Fig. 8).
In the eastern tropical Atlantic the ITCZ is broadened towards the south, accompanied by strong anomalous westerlies throughout the year, when considering the full SST bias pattern (Fig. 9). The arising near-equatorial westerly wind anomalies are due to the anomalous zonal SST gradient that generates a pressure gradient anomaly. Farther away from the equator the westerly wind anomaly is caused by the decrease of easterly momentum advection through meridional winds within the rain belt. The widening of the rain belt is most intense during winter and spring, when the ITCZ reaches almost double its width in CTL. During June-September, in addition to the zonal wind bias there is an anomalous meridional wind component preventing the northern flank from expanding northward, causing a dry bias north of 5°N. BIAS_mp covers the general structure of the seasonal cycle of the eastern Atlantic ITCZ response very well. This simplified sensitivity experiment, however, fails to produce the shift of maximum rainfall during June-September, emphasizing the contribution of cold biases near the West African coast in the tropical North Atlantic (Fig. 1). In experiment BIAS_am, during winter and spring the precipitation increase and westerly wind bias are more intense than in BIAS_mm due to the larger zonal SST gradient. During JJAS the annual mean SST bias shows a less intense equatorial SST gradient anomaly than the monthly varying pattern in BIAS_mm. As a consequence, the simulated anomalies of both precipitation and zonal wind turn out to be weaker.
Seasonal rainfall changes over adjacent continents
Our sensitivity experiments reveal that both precipitation biases over Brazil and West Africa prevailing in the coupled MPI-ESM are largely attributable to SST biases in the tropical Atlantic. In this section we will focus on seasonal rainfall changes over West Africa and Brazil during July-September, which is known to be the season most sensitive to rainfall changes forced by tropical Atlantic SSTs (Fontaine and Janicot 1996;Yoon and Zeng 2010). For both regions we consider regional mean precipitation biases to quantify the impact of TA SST biases following our definition of the bias index (Eq. 1).
Comparing MPI-ESM precipitation over Brazil with observations shows that the coupled model produces a large dry bias both over land and over sea (Fig. 10). This dry bias over tropical South America is prevalent in many AOGCMs (Yin et al. 2013;Ryu and Hayhoe 2014). It has been established that dry events over Brazil and the Amazon in general are driven by SST anomalies in the Pacific and Atlantic oceans through their influence on moisture patterns and atmospheric circulation (Yoon and Zeng 2010). Over land, surface feedbacks and soil moisture play a role, too (Wang and Fu 2002).
Through our experimental setup we can show that a large part of the MPI-ESM dry bias in JAS over Brazil is attributable to TA SST biases. Superposing the complete bias pattern onto the observed state reproduces large parts of this precipitation anomaly over tropical South America (Fig. 10). In agreement with Hagemann et al. (2013), TA SST biases amplify the dry bias over land already present in CTL when forcing the model with observed SSTs. From the analysis of experiments BIAS_mp and BIAS_am it can be concluded that cold biases in the northwestern TA play a major role in causing this rainfall anomaly, while prescribing only positive biases has almost no effect on Brazilian rainfall. For this reason BIAS_mp does not capture the drying. Over land, the decrease in precipitation depends not only on the off-coastal cold bias but also on the large-scale equatorial SST gradient modified by the bias pattern, forcing anomalous westerlies over tropical South America (Fig. 8). That is why the precipitation anomaly turns out to be weaker in BIAS_am than in BIAS_mm (Fig. 10): in BIAS_am the anomalous zonal gradient during JAS is weaker.
In the climatological mean summer (JAS) the MPI-ESM simulates too much rainfall along the Coast of Guinea and too weak rainfall over the Sahel region (Fig. 11). This anomalous dipole-like precipitation pattern is reproduced by our sensitivity experiments. To quantify precipitation changes during JAS over West Africa, we refer to the commonly used Guinea Coast and Sahel precipitation indices. In comparison with the simulated precipitation in CTL, all sensitivity experiments show a decrease in Sahel rainfall and an increase in rainfall across the Coast of Guinea (Fig. 11). Thus all experiments are capable of reproducing the general rainfall bias signal of the coupled model. The precipitation change is driven by the decrease in land-sea temperature contrast set up by positive SST biases in the SETA. The decrease in temperature gradient results in a reduction of the pressure gradient between ocean and land, which in turn weakens the onshore winds transporting moisture from the SETA to West Africa. This result fits the change in ITCZ position (Fig. 9): in boreal summer all sensitivity experiments display a shift of the rain belt towards the south, accompanied by a widening of the rainband towards the south. The correlation between the dipole-like rainfall anomaly across West Africa and a southward shift of the Atlantic rain belt is consistent with previous studies (Saini et al. 2015; Druyan 2011).
Discussion and conclusions
As our study shows, awareness of the influence of tropical Atlantic SST biases is important for a reliable interpretation of global climate simulations, because these biases cause substantial changes in simulated regional precipitation and atmospheric circulation. The climate in the tropics is mainly determined by the properties of regional rainfall. Tropical rainfall in turn is controlled by the ITCZ position and structure, which is directly coupled to atmospheric overturning circulation systems (Schneider et al. 2014). The tropical rain belt represents the ascending branches of both the Hadley cell and the Walker cell. Our sensitivity experiments show that TA SST biases modify the surface energy balance in such a way that the ITCZ migrates towards the southern hemisphere, which is erroneously simulated as too warm. This finding matches common mean-state precipitation biases in coupled GCMs, which also tend to place the ITCZ too far south (Richter et al. 2012a). Furthermore, the reversal of the equatorial Atlantic SST gradient forced by the bias pattern causes a weakening of the equatorial easterlies of similar amplitude as simulated in coupled model integrations. The reduction of zonal winds along the equator drives the weakening of the Atlantic Walker circulation. The slow-down of the zonal overturning circulation comes along with a shift of the annual-mean precipitation maximum from the west to the east. However, our experimental setup does not allow for an analysis of what comes first. Do SST biases in the TA first lead to the westerly wind bias, with the erroneous winds then causing the precipitation changes, or do precipitation changes lead the circulation changes? Answering this question would guide us to a better understanding of ITCZ biases in GCMs.
With our analysis of the moisture budget equation we show that changes in the tropical hydrological cycle are mainly dynamically driven and, by that, follow a "warmer-get-wetter" pattern (Huang et al. 2013) on annual and seasonal mean time-scales. In contrast to the study of Huang et al. (2013), we cannot identify a "wet-get-wetter" mechanism on intraseasonal time-scales. However, in their study they investigate precipitation changes under global warming, which leads to positive SST anomalies that are zonally more homogeneous and have small variance in time, while our bias pattern shows high zonal asymmetry as well as substantial seasonal variability.
Intercomparing all sensitivity experiments emphasizes the importance of the spatial and temporal variability of TA biases for the simulated atmospheric response. With regard to a detailed analysis of the quantitative impact of SST biases on processes within the atmosphere, neither BIAS_am nor BIAS_mp serves as an appropriate simplification of the fully comprehensive bias pattern. Instead, determining the contribution of SST biases to mean-state biases in AOGCMs depends on the monthly-varying pattern used in BIAS_mm. This is especially important for the analysis of seasonal time-scale climate, as our analysis of seasonal rainfall shows (Figs. 10 and 11).
We find that considering the seasonal cycle of positive TA SST biases (BIAS_mp) is more important for reproducing the anomalous seasonal migration of the oceanic ITCZ that is simulated in BIAS_mm, than considering the presence of cold biases in the tropical Atlantic without a seasonal cycle (BIAS_am) (Figs. 8 and 9). In the annual mean, the forcing in experiment BIAS_am results in an overestimation of precipitation change, while it is slightly underestimated in BIAS_mp.
Even though sensitivity experiments BIAS_mp and BIAS_am cause quantitatively different precipitation and circulation changes, both of them help to develop a better understanding of the atmospheric response to the fully comprehensive SST bias pattern. In the eastern basin, BIAS_mp already produces changes in tropical rainfall very similar to BIAS_mm. However, the intensity of the precipitation change is not captured when the negative biases in the pattern are not considered. Substantial drying over the South American continent is only simulated when cold biases are included in the forcing pattern. This highlights the role of negative biases in the western and northern TA. Cold biases play a primary role in the decrease of Brazilian rainfall. Furthermore, these negative SST biases act as an essential amplifier of the anomalous large-scale meridional (interhemispheric) and zonal (near-equatorial) SST gradients introduced by the positive biases. These amplifications increase the slow-down of the Atlantic Walker circulation as well as the southward shift of the oceanic ITCZ, which causes an amplification of the atmospheric response. Overall, negative biases are no less important than positive biases, because they contribute significantly to the simulated atmospheric response by amplifying anomalous SST gradients in the tropical Atlantic region and controlling the dry bias over the western Atlantic and Brazil. This finding is supported by sensitivity experiment BIAS_am, which imposes too intense TA negative biases on seasonal scales and thereby leads to an overestimation of regional Atlantic circulation and precipitation changes.
Our approach to quantifying the contribution of SST biases to other mean-state biases (e.g. wind, precipitation) lacks the ocean-atmosphere interaction, which certainly plays an important role in maintaining and reinforcing or damping coupled GCM biases. This limitation might explain in part why coupled model mean-state biases cannot be fully reproduced by our sensitivity study. For example, our sensitivity experiments do not fully reproduce the substantial positive precipitation bias along the Coast of Guinea. However, the main structure of MPI-ESM mean-state precipitation and wind biases is fairly well met. This shows that mean-state biases across the Atlantic basin and adjacent continents are mainly regionally forced by TA SST biases. Furthermore, because we can reproduce coupled model mean-state biases with our idealized experiments using climatological boundary conditions, interannual variability does not play a key role in their maintenance.
The main deficit of our experiments is that the model ECHAM6 already simulates considerable wind and precipitation biases when forced with observed SSTs and SICs (Hagemann et al. 2013; Siongco et al. 2014). In our study we have not addressed these shortcomings but simply used the control experiment as a reference state for our analyses. The lack of understanding of the mechanisms leading to initial biases in the atmospheric circulation model, however, does not allow for drawing a direct causal relationship between the simulated precipitation and circulation changes and the TA SST biases in the coupled MPI-ESM.
Because one important application of GCMs is the projection of future climate, we rerun experiments CTL and BIAS_mm after globally increasing the SST boundary conditions by 2 K. This very idealized scenario of climate change shows that the precipitation response to TA SST biases remains unchanged under global warming, assuming that the mean-state SST biases stay the same (Fig. 12). This is an important result because it allows the transfer of our main findings to climate simulations of the future, leading to a better interpretation of future climate change. However, this result should be regarded with caution. The experiments of this study reveal a high sensitivity of the climate simulation to the seasonal cycle of the TA bias pattern. Already small changes in the seasonality of the bias pattern under global warming, not considered in our idealized scenario, might cause a substantially different result.
While we focused our study on the impact of tropical Atlantic SST biases on the Atlantic sector and adjacent landmasses it is important to mention that statistically significant changes in atmospheric circulation and precipitation are not confined to this region. Substantial influences can also be found on remote regions, such as the extratropics and the Indian basin. This will be addressed in another study.
Fig. 12 BIAS+2K shows a very similar structure of the annual-mean precipitation response as BIAS_mm (Fig. 3). The precipitation response under global warming differs only over the western Atlantic and South America, by a maximum of 1 mm/day. a Annual-mean precipitation difference BIAS+2K−CTL, b difference between the annual-mean precipitation changes in the global warming scenario (BIAS+2K) and the historical experiment (BIAS_mm). All values are in mm/day. Dotted areas in the right panel indicate differences statistically significant at the 95% confidence level | 2019-04-26T14:24:31.102Z | 2017-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "f174c37ada6d5a318454d26c00ea398271ba5f45",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00382-016-3415-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0f83135d9d63b0a2b0f4043cb7e9b4379bb34928",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
235364208 | pes2o/s2orc | v3-fos-license | An experimental study of fog and cloud computing in CEP-based Real-Time IoT applications
Internet of Things (IoT) has posed new requirements on the underlying processing architecture, especially for real-time applications such as event-detection services. Complex Event Processing (CEP) engines provide a powerful tool to implement these services. Fog computing has arisen as a solution to support IoT real-time applications, in contrast to the Cloud-based approach. This work is aimed at analysing a CEP-based Fog architecture for real-time IoT applications that uses a publish-subscribe protocol. A testbed has been developed with low-cost and local resources to verify the suitability of CEP engines for low-cost computing resources. To assess performance we have analysed the effectiveness and cost of the proposal in terms of latency and resource usage, respectively. Results show that the fog computing architecture reduces event-detection latencies by up to 35%, while the available computing resources are used more efficiently, when compared to a Cloud deployment. The performance evaluation also identifies the communication between the CEP engine and the final users as the most time-consuming component of latency. Moreover, the latency analysis concludes that the time required by the CEP engine is related to the available compute resources, but depends nonlinearly on the number of connected things.
Introduction
Currently, Internet of Things (IoT) applications are part of people's daily lives and their growth in recent years keeps increasing (according to Gartner [1], the total number of connected things will reach 25 billion by 2021, producing an immense volume of data). Thus, cloud computing, the model that has provided interconnectivity and execution for IoT, faces new challenges and limits in its expansion. These limits have appeared in recent years due to the development of wireless networks, mobile devices and computing paradigms that have resulted in the introduction of a large number of information and communication-assisted services [2]. For example, in Smart Cities the use of IoT systems involves the deployment of a large number of interconnected wireless devices, which generate a large flow of information between them and require scalable access to the Cloud for processing [3]. In addition, many applications for Smart City environments (e.g., traffic management or public safety) carry real-time requirements in the sense of non-batch processing [4]. In this context, the data processing architecture for IoT systems has moved from a centralized paradigm such as cloud computing to a distributed paradigm known as fog computing, since critical problems must be addressed, such as obtaining a scalable, robust, secure and experience-centric data processing architecture with Quality of Service (QoS) for end users [5]. Fog computing thus emerges as a complementary model to cloud computing. It can be regarded as a natural extension, which seeks to decentralize work from the Cloud server by creating a hierarchy of layers between the hardware components of the architecture [6,7]. In other words, the fog computing architecture derives from the cloud computing architecture as an extension in which certain applications and data processing are performed at the edge of the network (edge level) before data are sent to the Cloud server (core level) [8,9]. The devices that implement this functionality can consist of the end devices themselves (e.g., smartphones), local micro datacenters [10], low-cost hardware platforms that act as gateways between the sensors and the Cloud [11], or even the same devices that make up the infrastructure of the interconnection network [12], among others.
In this way, analysis, computation and data processing services are brought closer to their data sources and end users, thereby reducing both the use of the access network to the Cloud server and the send-and-reply latency between the edge devices (sensors and actuators) and the final users [13,14]. However, fog devices usually have constrained resources, and this may be one of the main drawbacks of the system.
The objective of this work is to evaluate the performance of a fog computing architecture capable of detecting in real time a pattern of system behaviour based on the information collected by the final devices. More precisely, the architecture is endowed with the intelligence necessary for data processing by means of a Complex Event Processing (CEP) engine [15]. It is important to note that, in this paper, the concept "real time" does not refer to the traditional definition of real time computing (i.e., hard real time), related mostly to control systems which need response times in the order of milliseconds (or even lower). Here, the term "real time" has the meaning of expecting a short time response from the system in human terms, with higher orders of magnitude, even up to a few seconds (i.e., soft real time).
Moreover, one key goal of this research study is to compare the features of traditional cloud computing and fog computing architectures. To assess performance, the study is based on an analytical model and a testbed evaluation in which both end-user performance and resource usage are considered [16]. A graphical overview of the approach towards the comparative evaluation of cloud and fog architectures is presented in Fig. 1.
Thus, the structure of this paper is as follows. First of all, some preliminary information and concepts are introduced in "Background" section, in order to ease the understanding of this work. Next, the related work is presented in "Related work" section. Then, the description of the architecture and ecosystem considered in this work are described in "Architecture and ecosystem" section. Later, "Fog & cloud computing: analysis modelling" section details the analysis modeling considered for cloud and fog computing. Subsequently, in "Fog & cloud computing: performance evaluation" section an objective study is carried out on the optimisation of computational resources and improvement of the latency of fog computing with respect to cloud computing for IoT applications. Finally, "Conclusions and future plans" section analyses the conclusions and future work to be carried out in subsequent investigations.
Background
In this section, the key technologies that support the proposal of this paper are briefly introduced, in order to ease its understanding. More specifically, these are fog computing (and related terms), the telemetry protocols and CEP.
Fog computing architecture
The fog computing paradigm can be simply defined as a natural extension of the cloud computing paradigm. In the literature, there exist related terms, such as edge computing or mist computing. There is not a standard criteria about the layered architecture of fog computing and there are different approaches [17]. While mist computing is more commonly agreed to refer to the processing capability that lies within the extreme edge of the network (i.e., the IoT devices themselves) [18], the terms edge and fog computing are not strictly separated layers. Some authors consider them as different tiers but others use both terms in a different way. For example, Bonomi et al. [13] literally state that "fog computing extends the cloud computing paradigm to the edge of the network", thus including edge computing as part of the fog computing paradigm. Reciprocally, in Dolui et al. [19] fog computing is considered a particular implementation of edge computing. Also, the reference architecture outlined by Buyya et al. [20] depicts a continuum of resources available from the cloud to the sensors (the things).
In any case, the fog computing architecture can be deemed as conceptually integrated by two main levels: the core level, which encompasses the cloud-based datacenters, and the edge level, which includes different devices and their interconnections, such as sensors, smart mobiles or single-board computers deployed at several places between the final IoT devices and the cloud.
The edge level usually includes a Wireless Sensor Network (WSN), because it is the most flexible interconnection approach for many use cases [21]. This leads to the fact that different network technologies operate in fog computing architectures, namely:
• Personal Area Networks (PANs), which interconnect all the information extraction devices (i.e., the sensors).
• Local Area Networks (LANs), which implement the interconnection of the WSN gateway with its nearest fog node.
• Wide Area Networks (WANs), which connect the fog nodes to the cloud.
Fog computing architectures accelerate data processing and response to events by eliminating a round trip to the cloud for analysis. In addition, they avoid the need for costly bandwidth extensions caused by uploading/downloading large amounts of traffic to/from the core network. They also protect sensitive data by analysing them within the local network. Ultimately, organisations that adopt fog computing obtain deeper and faster insights, which increases business agility, raises service levels and improves security [22]. Nevertheless, the design of a profitable fog architecture has to consider Quality of Service (QoS) factors such as throughput, response time, energy consumption, scalability or resource utilization [23].
Telemetry protocols
Telemetry is an aspect of great importance when it comes to developing an efficient IoT network with QoS for a fog computing architecture. Several messaging protocols exist that can play this role: Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), Advanced Message Queuing Protocol (AMQP) or Hypertext Transfer Protocol (HTTP). In [24] a detailed comparison among them is carried out, concluding that there is no clearly optimal choice that fits all use cases.
Nevertheless, according to [24], the most widely used telemetry protocol is MQTT [25], a Machine-to-Machine (M2M) communication protocol between the different components of the fog architecture, because it consumes very little bandwidth, easily adapts to different levels of latency and can be used in most embedded devices with few resources [20]. The MQTT architecture follows a star topology based on the publish-subscribe messaging paradigm, in which a central node acts as a server or broker, which is responsible for managing the network, receiving messages from the publishers and transmitting the messages to the subscribers.
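As an illustration of this publish-subscribe scheme, the following minimal Python sketch shows one client publishing a sensor reading and another client subscribing to it through a broker. The paho-mqtt client library (1.x-style API), the broker address and the topic layout are assumptions of this sketch and are not prescribed by the architecture itself.

    import json
    import time
    import paho.mqtt.client as mqtt

    BROKER_HOST = "fog-node.local"    # hypothetical address of the MQTT broker
    TOPIC = "wsn1/endpoint01/value"   # hypothetical topic naming scheme

    # Subscriber side (e.g., the client that feeds the CEP engine)
    def on_message(client, userdata, msg):
        event = json.loads(msg.payload)          # decode the simple event
        print("received on", msg.topic, "->", event)

    subscriber = mqtt.Client()
    subscriber.on_message = on_message
    subscriber.connect(BROKER_HOST)
    subscriber.subscribe(TOPIC)
    subscriber.loop_start()

    # Publisher side (e.g., a gateway forwarding one sensor reading)
    publisher = mqtt.Client()
    publisher.connect(BROKER_HOST)
    publisher.publish(TOPIC, json.dumps({"endpoint": "01", "value": 42}))
    time.sleep(1)   # give the subscriber time to receive the message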
Complex event processing (CEP)
CEP [26] is a technology that allows one to ingest, analyse and correlate a large amount of heterogeneous data (simple events) with the aim of detecting relevant situations in a particular domain (complex events). In the context of this paper, CEP performs tasks related to the fusion and processing of the data collected by the sensor nodes to generate complex events or alarms. The main result of the process is to notify interested parties of patterns derived from the analysis of lower-level events [15]. One drawback of CEP is that it can potentially exhibit heavy storage requirements related to the number of simple events that need to be stored for analysis. However, it should be noted that in the context of IoT, even though devices generate data streams continuously, these data need to be analysed within a short period of time to be meaningful and to harness the potential of fog computing. Thus, storage requirements are considerably reduced. Data analysis over large periods of time (for example, in order to identify trends over data) should be deployed at resources placed in the cloud level.
CEP offers a wide variety of data analysis patterns for event generation [15]. In general, the procedure of analysis and generation of events in CEP can be summarised in the three main steps shown below (in that order):
1. Input: data flows from sources (i.e., sensors) arrive at the CEP engine.
2. Analysis: all the incoming data flows are processed, applying the pattern logic to the incoming events.
3. Action: once the specified pattern has been fulfilled, an alarm is notified.
Moreover, there are several alternate open-source frameworks for distributed stream processing, which exhibit different performance and are best suited to different use cases. A comparative evaluation can be found in Nasiri et al. [27], focusing on the most popular ones (namely, Apache Storm, Apache Spark Streaming, and Apache Flink). According to this study, Apache Flink (an implementation of a CEP engine) is able to provide capability to run real time data processing pipelines in a fault-tolerant way at a scale of millions of tuples per second.
Related work
In this section, some implementations based on distributed fog computing architectures are reviewed, as well as work related to the performance evaluation of these architectures.
Fog computing
Many architectures that are developed initially as a centralised architecture type (i.e., cloud computing) are currently adapting to a decentralised type (i.e., fog computing), as is the case of FIWARE for Smart Cities [2]. This work exposes the use cases in which it is of great importance, and necessity, to decentralize resources with a fog computing architecture. In addition, it shows that the reasons for implementing this type of architecture focus primarily on operational requirements rather than performance issues related to the Cloud.
Following this trend of implementing distributed architectures, different adaptations arise today, such as mobile computing, which is still a fog computing architecture in which the Edge Node is a smartphone. In Dhillon et al. [28], the authors show an interesting development with the adaptation of a CEP engine for remote patient monitoring. That is, the system performs the analysis and detection of complex events on the smartphone, sending the results to a hospital back-end server for further processing. By taking advantage of the large computing capacity of today's smartphones, the authors demonstrate the viability of their entire system and mobile application by reducing the workload on hospital servers, in addition to reducing latency for a test pattern. Moreover, CEP has been used to analyse events generated at both the edge and core levels to facilitate decision-making before storing data in a database, which removes repeated queries and web service calls, as exposed by Garcia-de-Prado et al. [15].
On the other hand, the emerging Industry 4.0 takes advantage of technology to offer improvements in production areas thanks to real-time indicators that serve to create better administrative and logistic plans. An example is the work done by Fernández-Caramés et al. [29], which uses a two-layer fog computing architecture. The first layer (Node Layer) is where certain sensors and actuators with radio frequency emitters are located. The second layer (Fog Layer) is the intermediate layer, with microcomputers, in which sub-modules are distinguished according to their functionality; for example, event detection and sending notifications regarding Business Intelligence. The implementation of fog computing offers faster answers on average due to the reduced latency of event detection, in addition to the ability to analyse more data, which in this case would increase production. However, they mention that their results hold only under the conditions of the place where the tests were carried out; therefore, the results cannot be generalised.
Finally, an interesting aspect in this type of architectures is also taking place in the field of online games with an improvement in the user experience thanks to the reduction in response time. This is the example of the Pokemon Go game and its iPokeMon version, which works on fog computing [30]. Specifically, the Data Center is an Amazon virtual machine located in Dublin and the edge node is an Odroid XU+E microcomputer. The partition of tasks mentioned is given so that the server in the cloud maintains a global view of the Pokemons, while the edge node has a local view of the users that were connected to it. The edge node periodically updates the global view of the cloud server. As a result, a 20% decrease in the average response time and a 90% reduction in the size of data sent to the server is obtained. In this research, we can observe that when implementing a decentralised architecture like fog computing, both functionality and resource usage are optimized.
Evaluation of fog computing
As it has been observed, one of the main fundamentals to deploy a fog computing architecture is to reduce the latency in the final applications. Likewise, we can observe that the enhancement of this metric entails improvements in different ones, such as, for example, the reduction of energy consumption [31], improving the QoS [32], maximising the Quality of Experience (QoE) [33], among others. In this sense, for the analysis of the distribution of computational resources it is necessary to be able to evaluate this type of architectures.
Thus, Jalali et al. [34] carry out a comparative study between Data Centers with cloud computing architecture and Nano Data Center with fog computing, the latter being implemented with Raspberry Pis. The performance of the two architectures is evaluated considering different aspects but always focused on energy consumption. For this, several tests are carried out such as static web page loads, applications with dynamic content and video surveillance, and static multimedia loading for videos on demand. Some of the conditions that were worked on were variants in the type of the access network, the idleactive time of the nodes, number of downloads per user, etc. Moreover, the authors determine that under most conditions the fog computing platform shows favourable indicators in energy reduction. However, in a few cases the opposite is seen. Hence, the authors conclude that in order to take advantage of the benefits of fog computing, the applications whose execution on this platform have an efficient consumption of energy throughout the system must be identified.
Regarding Raspberry Pi microcomputers, the tests of different authors, such as Morabito et al. [35], show that they are efficient when handling low volumes of network traffic. Their results support how useful they are in the execution of lightweight IoT-oriented applications, based on specific protocols such as CoAP and MQTT.
On the other hand, Shi et al. [36] propose a mechanism for redistribution and retransmission of tasks to reduce the average latency of the Cloud-Fog integrated network architecture service in the Industrial Internet of Things (IIoT). This mechanism consists in optimizing the flow of information from the moment the data is collected at the end devices until it reaches the Cloud. The results show a reduction in latency from 10 s when cloud computing is used down to 1.5 s with fog computing. However, in addition to latency being a serious concern, that system does not incorporate architecture components for data analysis, such as CEP, which would add an additional load both to latency and to the consumption of computational resources.
Finally, a spine-leaf fog computing network to reduce network latency and congestion problems in a multilayer and distributed virtualized IoT data center environment is presented in Okafor et al. [32]. This approach is cost effective as it maximizes bandwidth while maintaining redundancy and resistance to failures in mission critical applications. These results, in latency and QoS metrics, are obtained for datacenters by comparing these two methods for a typical fog computing architecture with respect to cloud computing.
As can be seen, most evaluations show the benefits of using fog computing together with conventional data centers. Taking into account the evaluations set out in the literature, the actual load of this architecture has been evaluated in our work, but specifically for real-time IoT applications. For these types of IoT applications, two important and critical architecture components emerge, to be integrated into both the edge nodes and the cloud: the CEP technology and the MQTT protocol.
Finally, note that identifying the main bottlenecks of CEP-based fog architectures is an open area for future improvements. This work evaluates the performance of the key elements that take part in the communication process for applications with real-time requirements. To the authors' knowledge, no previous research work focused on analysing the cost of communication of CEP-based fog and cloud architectures.
Architecture and ecosystem
In this section we will describe in detail the layers that compose the fog computing architecture where our experiments focus, their components and the key functional aspects of the proposal.
Fog computing architecture
The fog computing architecture considered in this work integrates the core level and the edge level (see Fig. 2, which depicts the ecosystem of the developed architecture). It should be noted at this point that the main idea of the described architecture is that fog applications are not involved in performing batch processing, but have to interact with the devices (sensors, smart watches, etc.) to provide real-time streaming. Hence, the edge level has the capacity to perform a first information processing step.
In the edge level, the critical and main component of the considered fog computing architecture is the Fog Node, that is located within the LAN layer (see Fig. 2). The Fog Node is the point of link between the edge level and core level of the platform, besides being able to analyse and make decisions [31]. Therefore, the Fog Node in an IoT network has the main role of acquiring data sensed by the end-points and collected by the gateways, analysing them and taking actions, that is, sending them to the Cloud or notifying the end users. More specifically, each Fog Node analyses the WSN information collected within its LAN zone.
The Fog Node is formed by a CEP engine for data processing tasks and a Broker for communication tasks, from now on called as Local CEP and Local Broker, respectively. More precisely, the Local Broker receives the information collected by the WSN endpoints (i.e., the gateways) and makes it available to the Local CEP engine for processing. Also, the Local Broker communicates with the core level, so that persistent system data is stored.
The core level has two main areas of work: (i) storage of information from the edge level to provide data persistence in the system; and, (ii) global information processing on data from the different WSNs. The Global CEP and the Global Broker are in charge of this processing. Therefore, the CEP events generated in this layer will be those created by analysing the data from different WSNs, since the events generated from a particular WSN will be tasks associated with the Fog Node deployed in that WSN. Likewise, the notifications generated when analysing the information in the core level will be sent to the subscribed users through the Internet.
Fog computing ecosystem
The design of a centralized or distributed computational architecture for IoT applications entails the use and integration of different services such as identification, communication, data analysis or actuation, to mention some. Nevertheless, making a thorough enumeration of all the technologies that can be used at each one of the layers of the considered architecture is out of the scope of this paper. Rather than that, focus will be put on those elements that are key in our proposed architecture. Figure 2 outlines a set of architecture components located in the core level and the edge level to build and deploy distributed IoT applications. The feasibility of using devices with limited storage and computational resources as Fog Nodes is hugely related to the cost of the data analysis and the communication service. So, the most important components of a Fog Node in our architecture are the CEP engine and the MQTT Broker. More specifically, the CEP engine performs data analysis and processes complex events, while the MQTT Broker is used to feed data into the CEP Engine and to distribute complex events (alarms, from now on) to the actuators, final devices or subscribed users (more details in "Data flow analysis" section).
Telemetry: MQTT protocol
The MQTT Broker is used to feed data into the CEP Engine and to distribute complex events (also named as alarms in this context) to the subscribed end devices (more details in "Data flow analysis" section).
The location of the MQTT Brokers is one key design decision regarding telemetry. Thus, in our architecture there are two types of brokers belonging to the application level, as shown in Fig. 3. On the one hand, at the edge level there will be a Local Broker for each WSN, which will subscribe to the events generated by that particular WSN, known as Local Events. On the other hand, a Global Broker in the core level will subscribe to the events generated by the different WSNs, known as Global Events.
It is important to note that the implementation of the Local Broker in Fog Nodes does not involve removing the Global Broker. Thus, each Fog Node will work with the flow of information from the sensor network assigned to its coverage area (Local Events). In contrast, the Global Broker will work with the flow of information from the different Fog Nodes (Global Events).
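The double role of the Local Broker, i.e., feeding readings into the Local CEP while forwarding them to the core level for storage, can be sketched in Python as follows. Broker addresses and topic names are illustrative assumptions, and the paho-mqtt client library (1.x-style API) is assumed.

    import paho.mqtt.client as mqtt

    LOCAL_BROKER = "localhost"            # broker running on the Fog Node (assumed)
    GLOBAL_BROKER = "cloud.example.org"   # core-level broker (placeholder address)

    cloud = mqtt.Client()
    cloud.connect(GLOBAL_BROKER)

    def on_local_message(client, userdata, msg):
        # 1) make the reading available to the Local CEP engine
        client.publish("cep/input", msg.payload)
        # 2) forward the same reading to the core level for persistent storage
        cloud.publish("storage/" + msg.topic, msg.payload)

    local = mqtt.Client()
    local.on_message = on_local_message
    local.connect(LOCAL_BROKER)
    local.subscribe("wsn1/+/value")   # all end-points of this WSN (assumed topic layout)
    local.loop_forever()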
Complex event processing
The CEP engine is also implemented at both levels of the proposed architecture. As with the MQTT brokers, Local Events (generated in the WSN at the edge level) will be processed in the corresponding Fog Node through the Local CEP, while Global Events (the ones that take data from different WSNs) must be analysed in the CEP located in the core level, i.e., the Global CEP.
In any case, events are fed into the CEP engine by means of MQTT clients. Whenever a complex event is detected, a new publication to its corresponding topic is made into the MQTT broker, notifying the alarm. Figure 4 depicts the data analysis procedure with CEP, from the data that arrive from the sensors at a given time to finally detect and obtain the complex event.
This work uses the Closer-context events methodology. In this case, the aim is to determine whether an event could be generated by analysing the current data together with the recent past, i.e., the data from a sensor at time t, V_y(t), is analysed together with the data obtained from another sensor at time t − n, V_z(t − n), where n ≥ 1. The CEP pattern used in this work is described in detail in "CEP pattern" section.
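A minimal Python sketch of this closer-context pairing is given below: each new reading is combined with the reading observed n steps earlier on every other sensor. The buffer depth n and the sensor identifiers are illustrative assumptions, and the pattern logic itself (thresholds, alarms) is deliberately left out here.

    from collections import deque

    N = 1          # how far back to look on the other sensors (n >= 1)
    history = {}   # per-sensor buffer holding the last N readings

    def closer_context_pairs(sensor, value):
        """Return (V_y(t), V_z(t-n)) pairs combining the new reading with the
        reading observed N steps ago on every other sensor."""
        pairs = [(value, buf[-N]) for other, buf in history.items()
                 if other != sensor and len(buf) >= N]
        history.setdefault(sensor, deque(maxlen=N)).append(value)
        return pairs

    # Example: readings arriving alternately from two end-points
    for sensor, value in [("s1", 40), ("s2", 41), ("s1", 42)]:
        print(sensor, value, closer_context_pairs(sensor, value))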
Fog & cloud computing: analysis modelling
In this section, the data flow for both cloud and fog architectures will be described and the process of the latency analysed, after briefly introducing the application considered as a case study.
Case study application
With the purpose of evaluating the proposed architecture, a case study application must be deployed. In order to assess the latencies experienced in the different elements of the overall system, a simple application has been considered which adds little overhead to the basic and minimum components of the ecosystem. More precisely, the end-points are configured to send a sequence of numerical values, while the CEP and Broker have been configured to generate a closer-context event. The pattern detected by CEP generates an alarm if the consecutive values received from two different end-points are bigger than a preconfigured threshold.
Real applications can deploy more sophisticated event detection procedures, thus adding more overhead to the CEP engine. But with this simple application we can measure a performance baseline for the system.
Data flow analysis
In order to carry out an exhaustive study of the use of computational resources in the fog computing architecture, we will analyse the communication and functionality of its components. In addition, we will compare it with a centralised cloud computing architecture to provide a comparative analysis. Thus, Fig. 5 details the data flow of the fog computing and cloud computing architectures. As can be seen, in both architectures two levels to be analysed are distinguished: the edge level and the core level.
On the one hand, in the case of fog computing (see Fig. 5a), we can see that the edge level will perform all the data processing while the core level will only be used for the storage of the information. More specifically, in every Fog Node of the edge level a CEP engine and a Broker are deployed for the generation of Local Events.
On the other hand, in the case of cloud computing (see Fig. 5b), the edge level will be a passive element, that is, it will only send the information to the core level, which will be the entity that deploys the Broker and CEP for the generation of Global Events. Keep in mind that the study focuses on assessing the impact of offloading computing resources to the Fog Nodes, and that the Broker and CEP located in the Fog Nodes (edge level) are named Local CEP and Broker, while those in the Cloud (core level) are named Global CEP and Broker. Therefore, a first difference between both flows lies in the location of the CEP module for event detection and of the Broker for subscription. In the fog computing model these modules are found both at the edge level and at the core level. However, for the load tests that will be carried out, in which only the data from one WSN is simulated, the Global CEP and Broker will be active, although with no load to analyse, since this task will be carried out entirely in the Fog Nodes. Regarding the cloud computing model, the Fog Nodes will not have the Local CEP and Broker activated, since these will be deployed globally in the Cloud.
The second difference, which affects the functionality of the Local Broker, is the type of publications made. In the case of fog computing, the Fog Node makes a double publication: one for the analysis of the data by the CEP and another publication to the Cloud for storage. In summary, the flow of information is as follows: in fog computing, the event is generated and distributed through the Local CEP and Broker, respectively, which are located in the Fog Node. Optionally, in the case of multiple WSNs and depending on the application, the Global CEP and Broker could also be used. In contrast, in cloud computing, the event is generated and distributed exclusively in the Cloud, that is, in the Global CEP and Broker. It should be noted that, for the evaluation tests performed, all the underlying architecture is exactly the same.
Latency analysis
In this section we focus our attention on the latency of both the fog and cloud architectures. The data flow previously depicted for the fog and cloud architectures helps us to provide a simple and high-level model to analyse the latency. Figure 6 shows the main characteristics of the abstraction model considered, where we can observe three main entities:
• Source: the entity that sends the data, simulating the operation of the end-points associated with a WSN. For the tests, and in order to keep the number of generated events under control, Source is a script written in Python that determines the flow of information to be sent to the Fog Node over the Internet.
• CEP-Broker: the entity that analyses the information and generates the events turned into alarms. This entity will be the Local CEP and Broker when located in the Fog Node (for the fog computing analysis) or the Global CEP and Broker when located in the Cloud (for the cloud computing analysis).
• Final User: the entity that receives the alarms. For our study we have used a smartphone that receives the messages through the Internet, specifically through a 4G connection.
Hence, t_x refers to the topic to which the end-point is subscribed, and m_x is the message sent between the different entities, such that x = 0 corresponds to the flow from Source to CEP-Broker, whereas x = 1 corresponds to the flow from CEP-Broker to Final User. This message also includes its departure time. Note that we instrument the CEP-Broker to send back a message to the Source (and, respectively, the Final User to the CEP-Broker) to calculate an estimation of the one-way latency of the messages. We assume here that the upward and backward latencies are the same.
Therefore, in this context the total time or latency (in seconds), L_total, from Source to Final User will be defined as the sum of the times of several sectors, as shown in Equation 1:

L_total = L_1 + L_CEP + L_2    (1)

Figure 6 details the procedure to calculate the times in each sector:
• L_1 is the time spent sending a message from Source to the CEP-Broker (Fog Node or Cloud), whose latency is denoted as l_1. It should be noted that, for the calculation of this value, and because the Broker has its own messaging manager that makes it unfeasible to know exactly the time at which the alarm is distributed, a confirmation message, whose latency is represented as l_1', is sent back. Notice that the shipment from Source will be made by subscription to the Broker. Therefore, under the assumption of equal upward and backward latencies, the time spent sending the message t_0 m_0 is defined according to Equation 2:

L_1 = (l_1 + l_1') / 2    (2)

• L_CEP is the time spent in the CEP, that is, the time between the instant at which the data reaches the CEP engine, T_1, and the instant at which the complex event in the form of an alarm, T_2, is obtained as output. Therefore, the analysis and generation time of the event is defined according to Equation 3:

L_CEP = T_2 − T_1    (3)

• L_2 is the time from the moment the alarm leaves the CEP engine, is published through the Broker and reaches the Final User, with a latency l_2. Additionally, the Final User sends a confirmation message, whose latency is l_2'. Thus, the latency in this last sector, when sending the t_1 m_1 message, is defined as shown in Equation 4:

L_2 = (l_2 + l_2') / 2    (4)
To conclude this section, it should be noted that in the tests carried out on this model, whose results are shown in "Cloud vs. fog: latency evaluation with stress workload" section, the three entities (Source, CEP-Broker (Local and Global) and Final User) are located at different geographical points from the same city, and they have associated different public IP addresses. In addition, as mentioned, both for the data flow model in cloud computing and in fog computing represented in Fig. 5, the latency has been calculated with the same equations and following the same procedure.
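As a simple illustration of the model above, the following Python sketch computes the per-sector latencies from timestamps such as those exchanged in the testbed; the numerical values are placeholders, and the halving of the measured round trips reflects the stated assumption that upward and backward latencies are equal.

    # Timestamps (in seconds) recorded for one alarm; all values are placeholders
    t_send_m0 = 0.000   # Source publishes t_0 m_0 (departure time carried in the message)
    t_ack_m0  = 0.120   # Source receives the confirmation from the CEP-Broker
    T1        = 0.062   # the data reaches the CEP engine
    T2        = 0.065   # the complex event (alarm) leaves the CEP engine
    t_send_m1 = 0.066   # Broker publishes t_1 m_1 towards the Final User
    t_ack_m1  = 0.210   # Broker receives the confirmation from the Final User

    L1    = (t_ack_m0 - t_send_m0) / 2.0   # Eq. 2: one-way latency Source -> CEP-Broker
    L_CEP = T2 - T1                        # Eq. 3: analysis and event-generation time
    L2    = (t_ack_m1 - t_send_m1) / 2.0   # Eq. 4: one-way latency CEP-Broker -> Final User

    L_total = L1 + L_CEP + L2              # Eq. 1: total Source -> Final User latency
    print(f"L1={L1:.3f}s  L_CEP={L_CEP:.3f}s  L2={L2:.3f}s  L_total={L_total:.3f}s")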
Therefore, once the case study has been defined and the data flow analysis and the latency study have been carried out, we can perform the performance evaluation of both architectures.
Fog & cloud computing: performance evaluation
This section begins with the description of the testbed where the evaluation tests have been carried out. Next, the CEP pattern that has been used in the tests, as well as the details of load generation, will be specified. Entering the evaluation itself, a first analysis is presented on the impact of the network technologies used on the latency experienced by the end users of the system when receiving the generated events, depending on whether a fog or cloud computing architecture is used. Subsequently, a stress test is performed on both architectures, taking into account the latency as a function of the number of alerts generated per minute, followed by an analysis of the use of computational resources.
Testbed description
We now describe the main hardware and software components of the testbed developed for carrying out the experiments. The edge level of the testbed is deployed as a Python script that emulates 20 end-points and 2 gateways (10 end-points each), namely, the Source entity of "Latency analysis" section. For the Fog Node, a Raspberry Pi 3 model B+ microcomputer has been used, which has a 4-core 64-bit 1.4GHz processor, 1GB of LPDDR2 SDRAM and the Raspbian operating system (without Graphical User Interface). In order to keep control of the environment (i.e., network latencies), the core level has been implemented on-premise by using local resources. More precisely, the core level was implemented on an Intel Core i7 computer at 2.90GHz×8 with 8GB of RAM and 1TB of hard disk. The Final User is a Huawei P20 Lite smartphone with Android version 8. A basic Android application has been developed in order to receive the alarms from the CEP-Broker. As noted above, all the components have been deployed at different locations in Lima (Peru) and are interconnected through the public Internet.
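A reduced sketch of the structure of the Source script is shown below; the topic naming, value range and broker address are illustrative assumptions, and the paho-mqtt client library is assumed. The rate of 80 readings per end-point and minute reproduces the 1600 data items per minute used in the stress tests described later.

    import json
    import random
    import time
    import paho.mqtt.client as mqtt

    FOG_NODE = "fog-node.local"       # placeholder address of the Fog Node broker
    N_ENDPOINTS, N_GATEWAYS = 20, 2   # 10 emulated end-points per gateway

    client = mqtt.Client()
    client.connect(FOG_NODE)

    def emit_one_round():
        """Publish one reading per emulated end-point, tagged with its gateway."""
        for ep in range(N_ENDPOINTS):
            gw = ep // (N_ENDPOINTS // N_GATEWAYS)
            payload = {"endpoint": ep, "value": random.randint(30, 50),
                       "sent_at": time.time()}
            client.publish(f"gateway{gw}/endpoint{ep}/value", json.dumps(payload))

    # One minute of load: 80 rounds x 20 end-points = 1600 readings per minute
    for _ in range(80):
        emit_one_round()
        time.sleep(60 / 80)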
The CEP engine used in this work is Apache Flink (version 1.8.0). Apache Flink is an open-source framework for stateful computations over unbounded and bounded data streams. Two types of processes are created in the Apache Flink runtime environment. On the one hand, the Jobmanager, which implements 50 and 175 threads in the Local and Global CEP, respectively, is responsible for coordinating distributed execution, assignment of tasks, fault management, etc. On the other hand, the Taskmanager, configured with 512MB, is responsible for executing the tasks assigned by the Jobmanager on the data flow. The configuration of these two types of processes was optimised to minimize latency in the generation of alarms for our case study.
CEP pattern
The following is the implemented CEP pattern that will be used to analyse the incoming data, generate the events and notify with an alarm, in addition to the simulation of events that will be used to study the latency and performance set out in the next section.
In the tests performed, a CEP engine has been deployed for processing Closer-context events with a simple pattern. Thus, an alarm will be generated provided that, at moments of time t_1 and t_2, the values V_x(t_1) and V_y(t_2), received from different end-points x and y, exceed a set threshold Th; that is, it must hold that V_x(t_1) > Th and V_y(t_2) > Th. Thus, Fig. 7 shows an example of a simulation up to second 120 to clarify the process of generating alarms. For our simulation a threshold Th = 40 has been established. For the generation of events, it must be fulfilled that at consecutive moments the arriving values exceed this threshold. The process is as follows (in that strict order):
1. At the beginning, when it reaches the CEP, the data V_1(0) = 40 is discarded for not fulfilling the condition.
2. Upon arrival of the second data item, V_2(40) = 41, this is stored, fulfilling the first part of the pattern.
3. In the next 80 seconds another data item arrives, V_3(80) = 42, so the complete pattern has been fulfilled; therefore, the event is generated and this first case of alarm generation is closed. Likewise, the first part of the pattern is met again with this data item, so a second case of event generation is opened.
4. At second 120, we see that V_4(120) = 40 arrives, so the pattern is not met and the second case is discarded for not complying with the established rule.
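The detection logic and the walk-through above can be reproduced with a short Python sketch; note that this is only an illustrative stand-in for the pattern actually deployed in Apache Flink, and the end-point identifiers in the trace are assumptions.

    TH = 40
    pending = None   # a value above Th waiting for a second one from another end-point

    def feed(endpoint, value):
        """Raise an alarm when two consecutive readings from different
        end-points both exceed the threshold Th."""
        global pending
        if value <= TH:
            pending = None                    # any open case is discarded
            return False
        if pending is not None and pending[0] != endpoint:
            pending = (endpoint, value)       # this reading also opens a new case
            return True                       # alarm generated
        pending = (endpoint, value)           # first part of the pattern fulfilled
        return False

    # Trace of Fig. 7: V1(0)=40, V2(40)=41, V3(80)=42, V4(120)=40
    for ep, t, v in [(1, 0, 40), (2, 40, 41), (3, 80, 42), (4, 120, 40)]:
        print(f"t={t:3d}s  V={v}  alarm={feed(ep, v)}")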
It is important to note that the number of alarms can be increased by sending more topics in shorter timeframes, so we can set the maximum number of alarms per minute. Therefore, for all the tests, 10-minute simulations were run with a controlled number of alerts generated every minute in an equidistant manner, that is, 10 tests were carried out generating the same number of alerts every minute. For example, in a first round a consumption test was performed with the generation of 200 alarms per minute for 10 minutes; once the services were restarted, a load of 400 alarms per minute was generated for 10 minutes, after which the services were restarted again.
For this work a maximum limit of 800 alarms/min has been established, since when generating more alarms a bottleneck was created in the Fog Node and events began to be lost. To this end, 20 end-points are emulated and a total of 1600 data items per minute are sent, that is, 80 data items per end-point. Note that the load applied to the system is the same for all tests, varying only the number of alarms; therefore, the use of network bandwidth from Source is always the same.
Finally, it should be noted that for the following results, 30 tests were performed to ensure its accuracy. Mean values have been represented.
Influence of network technology on the latency
A key aspect of the proposed architecture is the network technology used by the Final Users (see Fig. 6). These elements can be connected to the Fog Node thanks to WAN networks or LAN networks depending on the location of the Final User. Performance depends on the technology used. Thus, in this section we are going to evaluate the impact of some of the most widely used technologies. More precisely, in these experiments we are going to evaluate the influence of 3 different technologies: 3G, 4G and WiFi on latency. So, the testbed described in "Testbed description" section has been deployed considering 3 different Final Users, all of them subscribed to the Local Broker: (i) one is subscribed by WiFi (it is in its wireless LAN coverage area); and (ii) the other two are subscribed through 3G and 4G telephone networks respectively (WAN connection).
Hence, Fig. 8 shows the results of this comparison between the different connections to the Broker for a load with the pattern described in the previous subsection and a total of 800 alarms/min. As expected, a user who is on the same LAN as the Fog Node (WiFi connection) will receive the alert in less time than one connected by 3G or 4G, although 4G is very close to WiFi. One of the strengths of 4G is the speed and stability of the signal with respect to 3G which, as can be seen, has a more pronounced variance than 4G [36].
Thus, it can be seen from this study that the fog computing approach allows recipients in the area of coverage of the Fog Node to receive the alarm with a significantly lower latency than those recipients connected by telephony network. It should be noted that with a cloud computing approach, recipients can only receive the alert from the core level. The additional latencies incurred may be harmful for a wide range of applications.
Cloud vs. fog: latency evaluation with stress workload
Since the 4G telephony network has stable results and good latency performance, this will be the network used to send alarms to the Final User in the remaining experiments. In addition, and as we will see in this section, this latency study should be extended so that we can determine whether latency is reduced with the generation of Local Events (fog computing) rather than Global Events (cloud computing). Thus, in this particular case, which is also the setting for the subsequent performance study, we will compare the latency in both architectures for a controlled number of generated alarms, specifically 200, 400, 600 and 800 alarms/min. Equation 1 has been used to calculate the total latency (see "Latency analysis" section). In all cases, averaged values for the latencies are shown.
In this context, we can see in Fig. 9 how using a fog computing architecture reduces latency considerably, that is, the notification of an event arrives earlier to the Final Users than in a cloud computing architecture. In this case we can see how the latency exceeds one second in the case of cloud computing. Moreover, it has a growing linear trend with a steep slope. On the other hand, fog computing also presents a linear trend, although with a much gentler slope, that is, it almost maintains a constant value. Therefore, we can consider that the latency in fog computing, in addition to being lower than in the cloud computing architecture, has a more stable value, independently of the assigned load. In this context, the following test tries to determine which element of the architecture has the greatest impact on latency. For these results, the description of the latency and how to obtain it in each sector must be taken into account (see "Latency analysis" section). In particular, latencies in the three sectors are shown: (i) L_1, the time between the Source entity and the start of the analysis in CEP (see Equation 2); (ii) T_CEP, the time from when a data item is analysed until the event is generated (see Equation 3); and (iii) L_2, the time from when the event becomes an alarm until it reaches the Final User (see Equation 4).
Therefore, Fig. 10 shows the average latency data, broken down by each sector indicated above. In it, it can be seen that in both architectures, the element that contributes most to latency is the MQTT Broker in the two phases of communication.
Taking into account the times obtained in the study of latency in Fig. 10, we can draw the following conclusions by sector:
• L_1 (see Fig. 10a): In this sector, the cloud computing architecture records a growing trend: the more alarms per minute there are, the higher the latency L_1. In the case of fog computing we can observe that the latency is constant and independent of the load, once a considerable set of events is analysed. Note that this parameter includes both the transmission time of the network and the work done by the MQTT Broker. In both architectures the communication latency has a low variance (see Fig. 8, 4G connection), so the variation observed in the latency values of the figure is due to the initialisation behaviour of the MQTT Broker as the first data arrive from Source.
• T_CEP (see Fig. 10b): The first behaviour to observe between both architectures is that the time observed in fog computing is slightly longer than the time in cloud computing, because the resources in the Fog Node are scarcer than in the cloud. In any case, the time in both architectures is very similar and practically constant in this sector and, therefore, not very significant.
• L_2 (see Fig. 10c): For this sector we can see how, unlike L_1, in both architectures there is a trend that is constant and independent of the number of alarms, because the Broker service is already initialised and it only distributes the alarms to the Final User. On the other hand, it is observed that in this case the latency for the cloud computing architecture is more than twice the one obtained by the fog computing architecture.
In summary, we can see that the growing trend in cloud computing (see Fig. 9) is due to the time spent in the L_1 sector. In addition, an important observation at this point is that the MQTT Broker is a critical point of latency, while the CEP engine performs the analysis of the data with minimal latency.
Finally, not only latency is important to evaluate in both architectures. The distribution of computational resources in the different architectures must also be assessed.
Cost analysis: use of resources
In this section we will continue with the stress test developed for latency, but analysing the computational consumption.
Case 1: core level
In the following results the measurements have been made at the core level, that is, in the central server (Cloud). The idea of this test is to know the computational consumption at the core level when using either of the architectures under evaluation. To this end, the Perf tool [37,38] has been used to measure the energy consumed, in Joules/millisecond (J/msec), and the Linux top tool to obtain the percentage of CPU and RAM consumed (see Fig. 11); a sketch of this measurement procedure is given after the following list. At first sight, we can see that cloud computing has a higher computational consumption in the measured values, so, by using a fog computing architecture, we have considerably reduced the consumption of resources in the Cloud. The metrics evaluated are detailed below:
• As for average CPU consumption (in %), see Fig. 11a, we can see that it has not been excessive in either architecture, since the events sent do not involve complex mathematical operations that stress the CPU, but are simple comparison events. It can be seen that when cloud computing is used, CPU consumption is at most 1% higher than in fog computing, which is a very insignificant increase.
• Regarding the consumption of RAM (in %), see Fig. 11b, we see more interesting results. It is possible to appreciate that the mere activation of the CEP engine and the Broker represents a 35% increase in memory consumption. This is due to the fact that CEP performs the analysis of events by storing data in its buffer and the Broker distributes the alarms from RAM. In contrast, in the case of fog computing, we see a very low value since the Broker and CEP services are not activated.
• Regarding energy, see Fig. 11c, we see an average reduction of 69% in favour of fog computing with respect to cloud computing, without reaching high absolute values. This is a consequence of the lower use of CPU and RAM.
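As referenced above, the following Python sketch illustrates how such measurements can be collected during a 10-minute run; psutil is assumed here as a portable stand-in for top, and the perf RAPL energy event is only available on the Intel-based core-level server, not on the ARM-based Fog Node.

    import subprocess
    import psutil   # assumed helper library; the evaluation itself used the Linux top tool

    def sample_cpu_ram(duration_s=600, period_s=5):
        """Average CPU and RAM usage (in %) over one 10-minute test."""
        cpu, ram = [], []
        for _ in range(int(duration_s / period_s)):
            cpu.append(psutil.cpu_percent(interval=period_s))   # blocks period_s seconds
            ram.append(psutil.virtual_memory().percent)
        return sum(cpu) / len(cpu), sum(ram) / len(ram)

    def measure_energy(runtime_s=600):
        """System-wide package energy (Joules) via perf and Intel RAPL."""
        subprocess.run(["perf", "stat", "-a", "-e", "power/energy-pkg/",
                        "-o", "perf.out", "sleep", str(runtime_s)])

    avg_cpu, avg_ram = sample_cpu_ram()
    print(f"average CPU {avg_cpu:.1f}%  average RAM {avg_ram:.1f}%")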
It has been possible to verify how the use of fog computing offloads work from the core level. This would be an additional benefit of fog computing architectures (distributing resources across the different distributed devices) that will be more noticeable the more sophisticated the processing to be performed on the data.
Case 2: edge level
To obtain the computational consumption at the edge level, when using a Raspberry Pi as Fog Node, only the consumption (in %) of CPU and RAM could be obtained, because the Perf tool is not available for ARM processors. The measurements made are shown in Fig. 12. Note that the scales used in the graphs are different from those in Case 1 (see Fig. 11). However, it is reported that the maximum energy consumption of a Raspberry Pi board at maximum load (i.e., the worst case) is 5.1W [39], leading to 0.0051 J/msec, which is negligible compared to the energy consumed at the core level. At first glance we can see that, when using fog computing, there is a greater consumption of resources in the Fog Node. This point is interesting and corroborates that the decrease in the consumption of computational resources at the core level implies a redistribution of resource usage towards the edge level. The metrics evaluated are detailed below:
• Regarding the CPU consumption (in %), see Fig. 12a, we can observe for both architectures a linear behaviour with the number of alarms processed per minute, although the slope obtained in the fog computing architecture is much steeper, reaching a consumption of 20% compared to 6% of cloud computing for 800 alarms/min. This is due to two facts: i) the limited performance of low-cost devices, such as the Raspberry Pi of our testbed; and ii) the workload, since it comes not only from the CEP engine but also from the Broker, which must make a double publication. However, the result obtained at this point is key: low-cost devices (less than US$40 per device) can be used to analyse data from IoT applications with real-time requirements using a CEP engine without overloading the system.
• Regarding the consumption of RAM (in %), see Fig. 12b, it can be seen that the CEP engine and the Broker consume up to 80% of the RAM in the Fog Node, that is, almost 70% more than in the cloud computing model, since in the latter case the Fog Node is a passive element. As in the core-level analysis, CEP performs the event analysis and the Broker distributes the alarms from RAM. A key aspect that certifies the feasibility of using low-cost devices is that the percentage of memory in use is constant and independent of the number of alarms generated.
In summary, we can observe that the assignment of tasks to the edge level with CEP and the Broker brings with it a distribution of work to the Fog Nodes, while the core level bears a much lower load. This is an interesting fact since, in addition to harnessing the computing power at the edge level, it also means that the response times to the end user are much shorter, which in turn enables the deployment of a large number of applications with real-time requirements in IoT.
Conclusions and future plans
This paper shows the development of a distributed fog computing architecture for the deployment of IoT applications. Our study shows how these architectures optimise the distribution of resources throughout the entire deployed platform, in addition to considerably reducing latency.
On the one hand, regarding resource distribution, we have observed that by deploying the critical data analysis and decision-making applications (CEP and the MQTT Broker, in our case) at the edge level, the values of the evaluated metrics (CPU consumption, RAM memory and power consumption) are reduced considerably on the cloud server, with the consequent savings for the cloud provider. Specifically, the fog computing approach enables a reduction in RAM consumption of up to 35% and in energy of up to 69% at the core level, since it fully exploits the computational resources of the fog nodes. In addition, it has been verified that low-cost devices, such as a Raspberry Pi costing less than US$40, have enough computing resources to offer the quality of service required by IoT applications with real-time needs.
On the other hand, regarding latency, the work highlights how a fog computing architecture considerably reduces latency with respect to cloud computing, up to 35% better. Breaking down the latency results, we can also see how the Broker is the critical element of the increase in latency.
Regarding future work, the authors of this work consider it appropriate to evaluate Software Defined Networks (SDN) techniques in the Fog Nodes. As observed in the document, the limit of 800 alarms/min can be mitigated by developing a spine-leaf layer between the core and edge level, which allows the analysis to be redirected in case of Fog Node overload.
Likewise, a study on the creation of micro services in the Fog Node for the Broker and CEP through containers would be very interesting to provide a certain degree of isolation between different applications deployed on the edge level. To do this, using microclouds techniques in the Fog Node can be an interesting aspect for reducing consumption and latency.
Finally, it is proposed to use more sophisticated microcomputers that have built-in accelerators (graphics cards, Tensor Processing Units (TPUs), etc.) to analyse the impact of offloading machine and deep learning techniques from the Cloud to the Fog Node. | 2021-06-08T13:40:27.263Z | 2021-06-07T00:00:00.000 | {
"year": 2021,
"sha1": "96112cd84fd39b0ad4a308283c7c024cdb75da6d",
"oa_license": "CCBY",
"oa_url": "https://journalofcloudcomputing.springeropen.com/track/pdf/10.1186/s13677-021-00245-7",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0cacbbcde5be014b2b1e698d22fe1e2ab354e181",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
15915607 | pes2o/s2orc | v3-fos-license | Association of Glycation Gap With Mortality and Vascular Complications in Diabetes
OBJECTIVE The “glycation gap” (G-gap), an essentially unproven concept, is an empiric measure of disagreement between HbA1c and fructosamine, the two indirect estimates of glycemic control. Its association with demographic features and key clinical outcomes in individuals with diabetes is uncertain. RESEARCH DESIGN AND METHODS The G-gap was calculated as the difference between measured HbA1c and a fructosamine-derived standardized predicted HbA1c in 3,182 individuals with diabetes. The G-gap’s associations with demographics and clinical outcomes (retinopathy, nephropathy, macrovascular disease, and mortality) were determined. RESULTS Demographics varied significantly with G-gap for age, sex, ethnic status, smoking status, type and duration of diabetes, insulin use, and obesity. A positive G-gap was associated with retinopathy (odds ratio 1.24 [95% CI 1.01–1.52], P = 0.039), nephropathy (1.55 [1.23–1.95], P < 0.001), and, in a subset, macrovascular disease (1.91 [1.18–3.09], P = 0.008). In Cox regression analysis, the G-gap had a “U”-shaped quadratic relationship with mortality, with both negative G-gap (1.96 [1.50–2.55], P < 0.001) and positive G-gap (2.02 [1.57–2.60], P < 0.001) being associated with a significantly higher mortality. CONCLUSIONS We confirm published associations of G-gap with retinopathy and nephropathy. We newly demonstrate a relationship with macrovascular and mortality outcomes and potential links to distinct subpopulations of diabetes.
The glycation gap (G-gap) refers to the potential deviation of glycated HbA1c away from the other indirect estimate of blood glucose attainment, such that it might read substantially lower or higher than expected (1-3). Glycated HbA1c represents the net effect of several mechanisms, which may shift its direct glycation relationship with overall levels of glycemia (4)(5)(6). Many factors are known to influence HbA1c, including various erythrocytic processes (6)(7)(8)(9). Protein glycation is a nonenzymatic reaction dependent on glucose concentrations, but intracellular enzymatic deglycation of proteins has also been identified (10). The key deglycating enzyme, fructosamine-3-kinase, has isoforms and a genetic polymorphism suggested to influence HbA1c variability, but any impact on HbA1c glycation is unknown; it seems unlikely that glycated HbA1c is a substrate for this enzyme, since there is no evidence that it plays any role in HbA1c deglycation at the relevant glycation site (11,12). To add to the potential for a spurious generation of a G-gap, many factors, including variability in protein turnover and obesity, may affect fructosamine estimation (1,13,14). The evidence concerning the effects of urinary protein loss is mixed (1,13). Even then, fructosamine reflects blood glucose attainment over a much shorter time frame than HbA1c and may more readily be influenced by very short-term changes in blood glucose levels. It may simply be that the G-gap is no more than an empiric and potentially spurious measure of disagreement between the two indirect estimates of glycemic control, with each having a number of confounders to the direct relationship with blood glucose.
Although we have demonstrated that the G-gap is a consistent phenomenon within individuals over time (1), there remains doubt as to whether the G-gap is a real phenomenon or if it has any significant sequelae (15). Hypothesizing that the G-gap is an inconsequential nonsystematic event, irrelevant to diabetes outcomes, it would not then be expected to be associated with distinct subpopulations of human diabetes or to have any sequelae in clinical outcomes. This article explores the association of the G-gap with diabetic population demographic factors and with crucial clinical outcomes to determine if such associations exist.
Patient selection
We reviewed all HbA1c and fructosamine estimations undertaken at New Cross Hospital over 4 years (2006-2009), identifying and selecting all adults with diabetes (≥18 years of age) who had paired estimations of HbA1c and fructosamine performed on the same day from the same sample set. Thereafter, clinical information was taken from our diabetes registry and linked to this dataset. The diabetes register is validated to be >99% accurate for the identification of known diabetes and for mortality status in linkage with the National Health Service Strategic Tracing Service. Pregnant women, those with a creatinine >200 μmol/L, those with a known hemoglobinopathy, and those with an abnormal electrophoretic pattern on HbA1c testing were excluded.
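A minimal sketch of how the stated inclusion/exclusion criteria could be applied to a tabular dataset is given below; the column names are hypothetical and not taken from the study database.

```python
# Illustrative sketch only (not the authors' code): applying the stated inclusion/exclusion
# criteria to a hypothetical dataframe of paired same-day HbA1c/fructosamine results.
import pandas as pd

def select_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Keep adults (>=18 y) with diabetes and apply the exclusions listed in the text."""
    keep = (
        (df["age"] >= 18)
        & (df["creatinine_umol_l"] <= 200)
        & ~df["pregnant"]
        & ~df["hemoglobinopathy"]
        & ~df["abnormal_electrophoresis"]
    )
    return df.loc[keep]
```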
Retinopathy grading, microalbuminuria, and macrovascular risk
Digital retinal screening was in accordance with the English National Screening Program for Diabetic Retinopathy (ENSPDR) (16). Retinopathy was categorized into a dichotomized variable (with or without any retinopathy). Urine albumin-creatinine ratio (UACR) was assessed as a dichotomous variable, dividing individuals into lower or higher risk for progressive microalbuminuria (<10 or >10 mg/mmol) (17). Individuals were categorized as having established macrovascular disease depending on the presence or absence of any previous cardiac, cerebral, or peripheral macrovascular event.
Analytical methods
HbA1c International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) values were available only from 1 June 2009. Hence we have used the Diabetes Control and Complications Trial (DCCT)-aligned HbA1c in our analysis. HbA1c was measured using high-performance liquid chromatography on a Tosoh G7 analyzer (Tosoh Bioscience Ltd., Worcestershire, U.K.). The performance scores in the UK National External Quality Assurance Scheme (UK NEQAS) were as follows: A (accuracy) score <100 and B (bias) score <2%, which were within the acceptable limits of the UK NEQAS for glycated hemoglobins (maximum limits: A score <200 and B score less than ±7.5%). The between-batch coefficient of variation was 1.8 and 1.4% for an HbA1c of 5.7% (39 mmol/mol) and 9.5% (80 mmol/mol), respectively. Fructosamine was measured by nitrotetrazolium-blue reduction on a Roche Modular P analyzer (Roche Diagnostics Ltd., West Sussex, U.K.) using a Cobas kit, with a between-batch coefficient of variation of 3.1% at a level of 263 μmol/L and 2.2% at 518 μmol/L (18).
Calculation of the fructosamine-predicted HbA1c and the G-gap
As published (1), a predicted HbA1c (FHbA1c) was calculated from the simultaneously measured fructosamine, standardized to the HbA1c distribution according to the following equation: FHbA1c = [(fructosamine − mean fructosamine)/SD fructosamine] × SD HbA1c + mean HbA1c. The G-gap was the difference between the true HbA1c and the fructosamine-derived standardized predicted FHbA1c (G-gap = HbA1c − FHbA1c). Importantly, the FHbA1c was not derived from HbA1c by correlation/regression methods. The normalized standard deviate reallocation of fructosamine levels yields fructosamine-based HbA1c-equivalent results with the same distribution, mean, and SD as HbA1c, without altering the rank position of the fructosamine-derived value. A negative G-gap denotes the true HbA1c appearing to read lower than the FHbA1c, and a positive G-gap denotes the true HbA1c appearing to read higher than that predicted by fructosamine. Among those with a second paired HbA1c-fructosamine estimation, in order to identify those with a consistent G-gap direction, the product of the two G-gaps was calculated. If consistent, the G-gap product would be positive (positive × positive = positive; negative × negative = positive), but any discordance in direction of the G-gap over time in two paired readings would yield a negative G-gap product (negative × positive = negative).
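The standardization and the G-gap product described above translate directly into a few lines of code. The following is an illustrative sketch (not the authors' implementation), assuming paired same-day measurements stored as NumPy arrays.

```python
# Minimal sketch of the G-gap computation as described above (not the authors' code).
import numpy as np

def glycation_gap(hba1c: np.ndarray, fructosamine: np.ndarray) -> np.ndarray:
    """G-gap = measured HbA1c minus the fructosamine-predicted HbA1c (FHbA1c)."""
    # Standardize fructosamine to the HbA1c distribution (same mean and SD),
    # preserving each subject's rank position in the fructosamine distribution.
    fhba1c = (fructosamine - fructosamine.mean()) / fructosamine.std() * hba1c.std() + hba1c.mean()
    return hba1c - fhba1c

def gap_product(gap1: np.ndarray, gap2: np.ndarray) -> np.ndarray:
    """Positive product = concordant G-gap direction across two paired estimations."""
    return gap1 * gap2
```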
G-gap categorization
The G-gap (unit = HbA1c %) was categorized as negative, neutral, or positive when less than or equal to −1, greater than −1 but less than +1, or greater than or equal to +1, respectively. This categorization was taken from our previously published clinical error grid analysis of the impact of the G-gap on assessment of glycemic control (19).
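For clarity, the categorization rule can be expressed as a small helper; this sketch simply encodes the thresholds stated above.

```python
def categorize_g_gap(g_gap: float) -> str:
    """Categorize a G-gap value (in HbA1c %) as negative, neutral, or positive."""
    if g_gap <= -1.0:
        return "negative"
    if g_gap >= 1.0:
        return "positive"
    return "neutral"
```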
Statistical analysis
Data were analyzed with SPSS version 19. Comparisons between multiple group means were by one-way ANOVA, and differences between frequencies/proportions by χ² test. Binary logistic regression was used to determine the association of various independent factors with dichotomized variables. Survival analysis was undertaken using Cox regression. In each case, a stepwise backward extraction method excluded all nonsignificant variables (P > 0.05) to determine the simplest, most parsimonious model. Data are presented as mean ± SD. All statistical tests were considered significant at P < 0.05.
Ethical committee approval
The use of the clinical database for this study was approved by the relevant local U.K. National Health Service Research Ethical Committee.
RESULTS
Of 4,757 patients identified, 3,182 had complete demographic data and were included. Their follow-up from the first paired HbA1c-fructosamine estimation to the time of death or study end point was 38 ± 16 months. Table 1 shows the glycation estimates. The correlation between HbA1c and fructosamine in the first HbA1c-fructosamine pair was r = 0.75, P < 0.001 (n = 3,182). Nevertheless, the G-gap range demonstrates the substantial magnitude of variation between HbA1c and FHbA1c, which in some individuals indicated completely differing assessments of attainment of glycemic control. The distribution of G-gap status for the whole group varied significantly by HbA1c quintile (χ² = 505.8, P < 0.001), with a striking increase in negative G-gap status in the lowest HbA1c quintile, whereas the prevalence of a positive G-gap was graded across ascending quintiles (Fig. 1). Repeat HbA1c-fructosamine estimations were undertaken 11 ± 10 months after the first in 1,609 patients. There was a quadratic relationship (r² = 0.67, P < 0.001) between the first and second G-gap (as described in RESEARCH DESIGN AND METHODS), with only 47 (3%) and 17 (1%) of the 1,609 patients discordant at a G-gap product more negative than −0.5 and −1.0, respectively.
There were significant differences between G-gap categories in a number of relevant demographic characteristics (Table 1). The key clinical outcomes of retinopathy (borderline significance), nephropathy (UACR), established macrovascular disease, and mortality also varied significantly with G-gap status (Table 1).
Binary logistic regression analyses were undertaken to determine the relationship between the absence or presence of these diabetes outcomes and the G-gap categories (negative, neutral, and positive, as defined), taking into account other identified relevant significant factors (age, sex, ethnic status, smoking status, diabetes type, duration of diabetes, insulin use, and BMI). The overall models were all significant (P < 0.001) for each outcome (Table 2). Within that, independent of the other significant factors, the G-gap effect was significant for retinopathy and UACR, and the outcomes were worse with a positive G-gap category (Table 2). The G-gap status did not retain significance with macrovascular disease prevalence (P = 0.28) after regression model adjustment for other factors.
The mortality pattern with G-gap differed and was clearly not linear but rather "U" shaped. In Cox regression analysis, the G-gap association was significant only as a quadratic nonlinear U-shaped relationship (overall χ² = 307.3, P < 0.001). The significant factors were age (P < 0.001), smoking (P < 0.001), ethnicity (P = 0.003), and G-gap (squared term) (P < 0.001) but not sex, BMI, type or duration of diabetes, and insulin use. Introducing the prevailing HbA1c (latest value) into the model had no significant effect (P = 0.082).
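An illustrative sketch of such a Cox model with a squared G-gap term is given below; it assumes the lifelines Python package and hypothetical column names, and is not the SPSS analysis actually used.

```python
# Illustrative sketch (assumed package and column names), mirroring the reported Cox model
# with a quadratic G-gap term to capture the U-shaped mortality relationship,
# adjusted for age, smoking and ethnicity.
import pandas as pd
from lifelines import CoxPHFitter

def fit_mortality_model(df: pd.DataFrame) -> CoxPHFitter:
    df = df.copy()
    df["g_gap_sq"] = df["g_gap"] ** 2  # squared term for the U-shaped association
    cph = CoxPHFitter()
    cph.fit(
        df[["followup_months", "died", "age", "smoker", "ethnicity_code", "g_gap", "g_gap_sq"]],
        duration_col="followup_months",
        event_col="died",
    )
    return cph
```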
Similarly, the G-gap retained its significant association with mortality independent of proteinuria. Indeed, proteinuria lost its significant association with mortality when the most heavily proteinuric subjects were excluded (UACR >200 mg/mmol), whereas the G-gap retained significance. Furthermore, the G-gap continued to be significantly associated with mortality even when a completely normoalbuminuric population subset was analyzed (n = 2,077; 1,873 alive and 204 dead; χ² = 179.02, P < 0.001). Thus, for mortality, in contrast to retinopathy, nephropathy, and macrovascular disease, where the neutral and negative G-gap groups did not significantly differ from each other, both the negative and positive G-gap groups had a significantly worse outcome than the neutral group.
CONCLUSIONS
The entire notion of a G-gap must be treated with skepticism and caution. There are extensive confounding factors that are real caveats to its meaning. Whether they relate to erythrocytic function, biochemical pathways, or pure statistical error, the mechanisms for the G-gap are, as yet, not at all understood. It should be clearly stated that no known genetic or biochemical mechanism provides anything remotely close to a definitive explanation.
Yet, we now show that the G-gap varies significantly with demographic characteristics and is associated with the key diabetes outcomes: retinopathy, nephropathy, macrovascular disease, and mortality. Variation in demographics with G-gap status has not been previously reported. The G-gap is consistent over time (1-3), and twin studies suggest it has significant heritability (20). G-gap consistency, potential heritability, and the now-reported demographic linkages tantalizingly point toward human diabetic subpopulations with biological variation in any underlying pathophysiological mechanisms. Hyperglycemia is central to the development of diabetes complications (21-23). Hyperglycemia-induced protein glycation is an unequivocally important pathophysiological mechanism (21,24). Any factor significantly altering glycation may theoretically alter the relationship between glucose and the development of diabetes complications. Glycated HbA1c has been shown to correlate with the risk of developing microvascular complications in diabetes (22,23). The G-gap is proposed as a measure of the deviation of glycated HbA1c away from its expected value, such that a negative G-gap is taken as meaning a lesser level of glycation than expected and a positive G-gap a greater one. Our observations, that the micro- and macrovascular complications of diabetes are directly associated with a positive G-gap, are logically consistent with the glycation mechanism for complications. Others have reported a relationship between the G-gap and retinopathy and nephropathy (2,25,26). Cohen et al. (2) suggested that the G-gap increased the risk of more advanced nephropathy 2.9-fold. Rodríguez-Segade et al. (25) studied 2,314 patients with type 2 diabetes for a mean of 6.5 years, dividing the cohort into tertiles based on the average of all individual G-gaps, and showed that the mean G-gap predicts the progression of nephropathy. In an alternative, nonfructosamine-based approach, the hemoglobin glycation index (HGI), the G-gap was calculated as the measured HbA1c minus an HbA1c predicted from date-matched mean blood glucose estimations (3). In a study by McCarter et al. (26) analyzing data from the DCCT, the HGI was shown to be a significant predictor of retinopathy and nephropathy. To our knowledge, ours is the first published study confirming some potential association between the G-gap and macrovascular disease.
With a relationship between the G-gap and diabetes vascular complications, mortality would be expected to follow a similar pattern. This was not so: adjusted all-cause mortality was higher in both the negative and positive G-gap groups. The limitations of our study are manifest: it is a cross-sectional, retrospective study that was neither designed nor powered to address mortality, and we have no data on cause of death, noting that diabetes is associated with increased mortality from both vascular and a variety of nonvascular causes (27). It would be tempting to conjecture on reasons why a positive G-gap might be associated with mortality, given the macrovascular association, but we can offer no true explanation in light of the overall effect. There are no previous published reports of any relationship between the G-gap and mortality.
The long-term follow-up of the UK Prospective Diabetes Study (UKPDS) cohort suggested some benefit for macrovascular outcome and mortality with lower HbA1c levels (28), but other studies and meta-analyses have demonstrated little or no impact of HbA1c on either macrovascular events or mortality (29-34). The ACCORD trial of intensification of therapy to a target HbA1c <6.0% (42 mmol/mol) stands out as having led to increased mortality with tighter glycemic targets for uncertain reasons (35). In a retrospective cohort study using the U.K. General Practice Research Database, Currie et al. (36) showed an increased risk of all-cause mortality with both lower and higher HbA1c levels, proposing a U-shaped association with the lowest risk at an HbA1c level of 7.5% (58 mmol/mol). Given the general failure to link HbA1c levels with mortality outcomes, our observation of an increased prevalence of a negative G-gap at lower HbA1c levels and of a positive G-gap at higher HbA1c levels, both associated with adverse mortality outcomes in a U-shaped pattern that mirrors the observations of Currie et al. (36), clearly offers an avenue for further exploration.
It has been argued that any association of the G-gap with outcomes, whether calculated from fructosamine or the HGI, is a statistically spurious outcome of regression analysis with the anchor HbA1c value (15). In all other published methodologies for ascertainment of the G-gap (2,3,25,26,37), an HbA1c equivalent has been derived from fructosamine or blood glucose data by regression analysis. Our methodology specifically avoids this. With our methodology, we have previously shown that over time and repeated measures the G-gap remains consistent within subjects despite significant within-subject variations in HbA1c and fructosamine, and that the variation away from HbA1c is larger than statistically expected by Bland-Altman analysis (1,19).
There is a great need to be cautious about the G-gap, and many caveats must be attached to this concept. As well as the concern that the G-gap may well be a spurious statistical phenomenon, there is concern about the use of fructosamine. Fructosamine represents the glycation of a number of proteins, although predominantly albumin; the time frame of glycemic attainment it represents may be shorter than that of HbA1c (remembering that HbA1c glycation itself is most influenced by glucose levels over the preceding 30 days); the glycation product assessed is not as specifically defined as for HbA1c; its glycation may mirror protein turnover rates and protein loss as proteinuria; and it is influenced by changes over shorter time frames. Thus, subjects who tightened up on diet, lifestyle, and other treatment prior to their blood testing, or those who had intercurrent illness with short-term deterioration, could well have introduced a gap between HbA1c and fructosamine, which would have translated into a G-gap.
To counter this anxiety, it should be pointed out that in general, as confirmed in this study, many have shown a good relationship between HbA1c and fructosamine (1,2,25). Fructosamine is known to be well associated with preceding blood glucose levels (38). A possible association of fructosamine levels with proteinuria has been reported as significant (13). In our own regression analysis of fructosamine with multiple relevant factors, we show that, all in all, they account for no more than 20% of the variance in fructosamine; that is, 80% of the variance in fructosamine is not associated with any known influencing factor. Among that medley of associated factors as presented, UACR was the last entered variable, being the statistically weakest independent association, with an r² progression of 0.002, thus representing only 0.2% of the accountable variance of fructosamine. In direct bivariate terms, the relationship of fructosamine to (log)UACR was nonsignificant (P > 0.4). Although HbA1c is well known to be unreliable in end-stage renal failure (indeed, fructosamine may be the better estimate there), it does not seem appropriate to extend these concerns to the G-gap outcomes in the cohort we tested (39). Finally, we can clearly state that the relationship of the G-gap to mortality was independent of proteinuria. That is to say, although proteinuria and protein turnover themselves may be associated with mortality and may influence fructosamine, the associations of the G-gap with mortality are statistically independent of that as far as we can determine.
Thus, it seems that fructosamine is an acceptable measure of glycemic attainment, but it cannot be considered a gold standard measure. In this regard, validation of the G-gap against blood glucose would best reflect the deflection of HbA1c glycation. A study of the G-gap and the HGI has confirmed that the two indices are highly correlated and consistent (40).
It is important to stress that the G-gap is not truly independent of HbA1c, since the G-gap is computed as the difference between a measured and a predicted HbA1c, and so independence is impossible. In that regard, any association of the G-gap with outcomes such as mortality will always be difficult to dissect away from an association with glycemia. However, it would not be expected that a single-point HbA1c would have any causal bearing on mortality. Furthermore, in studies that have linked HbA1c to mortality, if any link actually exists, the relationship is complex and the factors of linkage ill understood (41,42). In any case, the G-gap itself is not fully associated with glycemic control, as indicated by the weak correlation with HbA1c (r = 0.38, P = 0.001, variance explained [r²] = 14%) and fructosamine (r = −0.33, P = 0.001, variance explained = 11%). Finally, introducing HbA1c into the model in the Cox regression analysis showed no association with all-cause mortality (P = 0.082) and did not alter the significant association of the G-gap with mortality.
In conclusion, evidence is mounting around the G-gap but significant caveats remain, and it may yet turn out to be a spurious phenomenon, especially since no defining mechanism is as yet proposed. We do now know that it appears to vary with certain demographic characteristics, it is consistent in direction over time, and, when positive, it is associated with the key diabetes vascular complications in a manner coherent with one key underlying pathophysiological mechanism, protein glycation. We now further show an unexpected association of completely uncertain etiology with mortality with both a negative and a positive G-gap in a U-shaped relationship with no associated effect of the prevailing HbA 1c .
Our article confirms the reported G-gap association with retinopathy and nephropathy, and the findings for demographics, macrovascular disease, and mortality are previously unreported. | 2016-05-12T22:15:10.714Z | 2013-09-14T00:00:00.000 | {
"year": 2013,
"sha1": "764a806a7710449aff6355a99780325c43e08ea1",
"oa_license": "CCBYNCND",
"oa_url": "https://care.diabetesjournals.org/content/diacare/36/10/3247.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "eb130a9a1685cb28538213b6ca2089a07dcee203",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59463740 | pes2o/s2orc | v3-fos-license | Structural coloration of chitosan-cationized cotton fabric using photonic crystals
In this work, poly(styrene-methyl methacrylate-acrylic acid) P(St-MMA-AA) composite nanospheres were deposited onto chitosan-cationized woven cotton fabrics, followed by a second layer of chitosan. The photonic crystals (PCs) deposited on the fabrics were evaluated for coating efficiency and resistance, chemical composition and color variation by optical and SEM microscopy, ATR-FTIR, diffuse reflectance spectroscopy and washing fastness. Chitosan deposition on cotton fabric provided cationic groups on the fiber surface, promoting electrostatic interaction with the photonic crystals. SEM images of the washed samples indicate that the PCs are firmly coated on the cotton surface only in the chitosan-treated sample. The photonic nanospheres show an average diameter of 280 nm and display a face-centered cubic close-packing structure with an average layer thickness of 10 μm. A further chitosan post-treatment enhances the color yield of the samples because the transparent chitosan covering layer induces bright reflections where the angles of incidence and reflection are the same. After washing, no photonic crystal can be detected on the control fabric surface. However, the sample that received a chitosan post-treatment showed good washing fastness, maintaining a reasonable degree of iridescence. Chitosan fills the spaces between the polymer spheres in the matrix, stabilizing the photonic structure. Sizeable variations in lattice spacing would allow color variations using more flexible non-close-packed photonic crystal arrays in chitosan hydrogel matrices.
Introduction
Textile coloration is traditionally obtained using chemical colorants such as dyes and pigments. In nature, however, brilliant, vivid and iridescent colors that arise from the physical interaction of light with biological nanostructures can often be observed [1]. These structural colors are not produced by chemical pigments but originate from light interference, diffraction or scattering phenomena at the submicron range, and they can generate color effects considerably brighter than those of pigments as well as completely transparent materials [2]. Structural colors are not subject to photobleaching and are very efficient in using light. They can be found in creatures living in low-light environments and display color effects not achievable by pigmentation [3]. Structural colors can be generated from basic optical processes such as thin-film interference, the diffraction grating effect, multilayer film interference, light scattering or photonic crystals. Photonic crystals (PCs) can be sources of exceptionally bright and brilliant reflected colors arising from coherent Bragg optical diffraction [4]. PCs are periodic optical materials or structures designed to affect the motion of photons in a similar way that the periodicity of semiconductor crystals affects the motion of electrons [5]. It is known that monodisperse, highly charged colloidal micro- or nanoparticles spontaneously self-assemble into face-centered cubic or body-centered cubic crystalline colloidal arrays in low-ionic-strength aqueous solutions [6]. These arrays are the simplest form of PCs and are of particular interest because of their photonic band gaps and strong interaction with light [7]. Colloidal particles have long been used as major components of industrial products such as foods, inks, paints, coatings, papers, cosmetics, photographic films, and rheological fluids [8]. Colloidal photonic structures can be fabricated using different methods such as colloidal self-assembly [9], block copolymer self-assembly [10], the auto-cloning process and holographic lithography [11]. Little literature can be found on the application of electrostatic self-assembly on textile fibers and structures [12]. Textiles display irregular, rough surfaces and different woven or knitted structures, hindering the self-assembly of PCs on the fabrics [13]. Most studies have focused on silk and polyester fabrics, disregarding the application to cotton and other cellulosic substrates [14][15][16]. Effective adhesion of the PC array coating is essential to allow the fabric to function as expected [17]. Since cotton fibers are negatively charged due to the presence of carboxyl and hydroxyl groups, it is often necessary to functionalize their surface with cationic charges [18]. Chitosan (CH) is the deacetylated derivative of chitin and represents an interesting alternative to commonly used compounds for functionalizing cotton because of its protonated amino groups [19]. Moreover, CH films can act as reflectors and mimic structures found in the exoskeletons of insects [20]. The main objective of this study is to develop structural color on cotton fabrics. Colloidal PCs based on P(St-MMA-AA) composite nanospheres were deposited onto a CH-cationized woven cotton fabric, followed by a second CH layer on top of the PCs as a protective and color-enhancing coating. Coated fabrics were evaluated for coating efficiency and resistance, chemical properties and color variation by optical and SEM microscopy, ATR-FTIR, and washing fastness.
Materials
Commercial black dyed cotton fabric with a warp density of 34 threads cm -1 , a weft density of 30 threads cm -1 and weight per unit area of 140 g m -2 was pre-washed with a 1 g L −1 of non-ionic detergent solution at 30 ºC for 30 min and rinsed. Chitosan (DD 85%, ChitoClear hq95-43000, Mw = 350 kDa) was purchased from Primex (Iceland). Styrene (St), methyl methacrylate (MMA), and acrylic acid (AA) were distilled before use. All the other reagents were analytical grade purchased from Sigma-Aldrich, St. Louis, MO, USA.
Preparation of Monodispersed P(St-MMA-AA) Composite Nanospheres
Monodispersed composite latex spheres of poly (styrene-methyl methacrylate-acrylic acid) (P(St-MMA-AA)) were synthesized by a modified soap-free emulsion polymerization method as described by Cong and Cao 2003 [21]. Briefly, 120 mL of aqueous solution (A), containing 0.4 g of Na 2 S 2 O 8 and 0.8 g of NaHCO 3 in a funnel, and 25 mL of monomer mixture (B), consisting of St/MMA/AA (90:5:5 v/v/v) in another funnel, were added at the same time into a 250 mL three-necked flask. The mixture was stirred at 70 °C in N 2 atmosphere for 5 h.
Cationization Process
Cationic cotton fabric (7x7 cm) was prepared using 1 wt% CH dissolved in 1% acetic acid aqueous solution. The CH was dissolved under stirring at 300 rpm for 30 min at 70 ºC. The samples were padded in a mini-foulard through the CH solution at 1.5 bar of pressure and 4 rpm. The excess coating was then removed by rinsing with distilled water, and the samples were dried for 12 hours at 50 °C.
Coating of Cotton Fabrics with Photonic Crystals
Three types of samples were tested: (1) a cotton sample dipped in an 8% photonic colloid solution for 5 minutes and then dried at 60 ºC; (2) a cationized cotton sample obtained by impregnating the fabric with a 1% CH solution using a padding machine (1.5 bar, 4 rpm, 80% pick-up) and then treated as sample 1; (3) sample 2 followed by a second impregnation in 1% CH solution and dried again at 60 ºC.
Fourier transform infrared spectroscopy (FTIR)
A Nicolet Shimadzu FTIR spectrophotometer (Madison, USA) with an attenuated total reflectance accessory (ATR) was used to record the FTIR spectra of the fabric samples. Spectra were collected in the region of 4000-400 cm −1 and at a resolution of 4 cm −1 with 45 scans at room temperature. A background scan with no samples and no pressure was acquired before the spectra collection.
Scanning electron microscopy (SEM)
SEM analyses of the samples were carried out with an ultra-high resolution Field Emission Gun Scanning Electron Microscope (FEG-SEM), NOVA 2000 Nano, SEM, FEI Company. Secondary electron images were performed with an acceleration voltage between 5 and 10 kV. Backscattering Electron Images were made with an acceleration voltage of 15 kV. Samples were covered with a film of Au-Pd (80-20 wt%).
Photographs
Optical photos of the fabrics coated with PCs were taken with a Nikon CoolPix4300 digital camera. The pictures were acquired under natural light, at the same time, environmental conditions, perpendicularly to the fabrics and at the distance of 15 cm.
Washing fastness
The washing fastness was evaluated according to the standard ISO 105 C06, A1S method at a temperature of 40 ˚C.
Spectrophotometric measurements
The color of the fabrics was evaluated using a Spectraflash 600 (Datacolor) diffuse reflectance spectrophotometer at standard illuminant D65 (LAV/Spec. Incl., d/8, D65/10°). The responses analyzed were the following color characteristics: K/S is the color strength calculated using the Kubelka-Munk equation (K/S = (1 − R)²/2R, where R is the reflectance); L*, a*, and b* are the coordinates of the color in the color space defined by black-white (L*, lightness), red-green (a*), and yellow-blue (b*) sensations. The results were also summarized by the overall color difference (∆E*) value.
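The color metrics described above reduce to simple formulas; a minimal sketch follows, using the Kubelka-Munk relation as stated and the basic (CIE76) form of the color difference.

```python
# Sketch of the colour-strength and colour-difference calculations described above.
import math

def kubelka_munk(R: float) -> float:
    """K/S = (1 - R)^2 / (2R), with R the reflectance factor (0-1)."""
    return (1.0 - R) ** 2 / (2.0 * R)

def delta_e(lab1, lab2) -> float:
    """CIE76 overall colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))
```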
Results and Discussion
Monodispersed composite latex spheres of P(St-MMA-AA) were successfully synthesized by soap-free emulsion polymerization. Figure 1-a shows the SEM micrographs of the deposited PCs. The P(St-MMA-AA) nanospheres were uniform in shape with a mean size of 280 ± 20 nm. The coating displays good adhesion to the cotton substrate, with most of the layered region exhibiting a face-centered cubic close-packing structure of P(St-MMA-AA) nanospheres. However, some gaps, cracks and random configurations are also present in the coated surfaces, especially near the edges of the inter-fiber spaces (Figure 1-b). The nanospheres seem to be densely loaded on the cotton fiber surfaces without large gaps in the cross section. The thickness of the P(St-MMA-AA) layer ranges between 5 and 15 μm, with an average thickness of 10 ± 5 μm. Figures 1-c, 1-e and 1-g show the SEM images of the deposited PCs on the untreated, CH pre-treated and CH bi-layer cotton fabrics, respectively. As expected, the morphology of the fabric surfaces coated with CH is smoother than that of the untreated sample [22]. The PC coatings completely cover the fiber, and only a few micro-wrinkled stripes and fibril structures protrude from the fabric surface. It is interesting to note that CH is well dispersed in the PC matrix without aggregation. After washing, the untreated and the CH pre-treated samples show the typical longitudinal fibril structures of the cotton fibers. The washed untreated control does not exhibit any PCs on the cotton fibers (Figure 1-d), and in the washed CH pre-treated sample they are only faintly visible as a few isolated crystals (Figure 1-f), whereas the washed CH bi-layer sample retains a PC distribution with few cracks on the surface (Figure 1-h). The sandwich-like coating of the PCs between CH layers is thus able to ensure washing durability. As previously observed, PCs coated on untreated cotton (Figure 2-a) show a chalky white appearance with weak structural color expression [23]. However, after the addition of CH onto the cotton surface the visual appearance changes significantly, from almost monochromatic to iridescent colors (Figure 2-c). It seems that CH addition allows a more homogeneous distribution of PCs on the cotton fibers and gives rise to a different mechanism in the absorbance of scattered light, probably due to differences in refractive index between CH and P(St-MMA-AA) as well as variations of the periodic scale [24]. After washing, both of these samples (Figure 2-b and 2-d) did not retain any PCs, exposing the black dyed cotton surface and losing iridescence. Figure 2-e and 2-f show the PCs deposited on the pre-treated CH cotton with a second CH impregnation on top of the nanospheres, before and after washing, respectively. The introduction of the CH top layer into the lattice structure drastically changes the color appearance of the fabric due to the differences in light absorption between P(St-MMA-AA) and CH films [25]. CH is able to absorb scattered light and increase the color saturation, producing brighter structural colors [26]. After washing, the second CH treatment is able to retain some structural color, but large gaps of black cotton are visible.
Figure 2: samples 1 (a, b), 2 (c, d) and 3 (e, f) before and after washing, respectively.
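As a rough, back-of-the-envelope check (not reported in the paper), the expected Bragg reflection wavelength for close-packed 280 nm spheres can be estimated from the (111) interplanar spacing and an effective refractive index; the refractive index values and filling fraction below are assumptions.

```python
# Illustrative estimate of the normal-incidence Bragg reflection wavelength for
# close-packed spheres: lambda = 2 * d111 * n_eff (combined Bragg/Snell relation).
import math

def bragg_wavelength(d_sphere_nm: float, n_sphere: float = 1.59, n_medium: float = 1.0,
                     fill: float = 0.74) -> float:
    d111 = d_sphere_nm * math.sqrt(2.0 / 3.0)            # fcc (111) interplanar spacing
    n_eff = math.sqrt(fill * n_sphere**2 + (1 - fill) * n_medium**2)
    return 2.0 * d111 * n_eff

print(round(bragg_wavelength(280.0)))  # roughly in the red part of the visible range
```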
The ATR-FTIR spectra (Figure 3) of the control cotton fabric (CO) display very intense bands at 1160, 1100 and 1020 cm−1, assigned to the vibrations of the C-O-C bond of the glycosidic bridges of the cellulose structure [27]. The peaks at 2900, 2850, 3334 and 3282 cm−1 may be attributed to the aliphatic CH2 and OH groups of the cellulose structure [28,29]. The absorption band at 1640 cm−1, often referred to as the amide I band, may be assigned to the amide carbonyl C=O stretching vibrations of the azo dye present in the fibers [30]. After CH deposition (CO+CH) no significant differences can be observed compared to the control. The PC layer deposited on the untreated sample (CO+PH) efficiently covers the cotton and black dye infrared bands. The bands at 3027 cm−1, 2922 cm−1 and 1541 cm−1 are assigned to CH stretching and to the bending of CH2 groups of the nanospheres. Moreover, the band observed at 750 cm−1 is also attributed to the nanospheres [31]. The addition of PCs on the CH pre-treated sample (CO+CH+PH) shows a broad band at 3334 cm−1 attributed to the O-H stretching vibration of water absorbed on CH [32]. The typical amide I and amide II bands of the CH structure are observed at 1648 cm−1 and 1451 cm−1, respectively. The peak at 750 cm−1, previously solely attributed to the nanospheres, becomes more intense due to the additional presence of the in-plane NH deformation vibration of CH [33]. The presence of the second layer of CH (CO+CH+PH+CH) does not significantly change the infrared spectrum compared to the CH pre-treated sample (CO+CH+PH). After washing, all the control samples show the characteristic spectrum of the untreated black cotton. The washed CO+CH+PH also shows an increase of the bands assigned to the cellulose structure and a decrease of the nanosphere peak. On the contrary, the sample CO+CH+PH+CH did not show significant changes before and after washing; only a small reduction in the peaks attributed to CH was observed, due to erosion of the CH top layer. The peak attributed to P(St-MMA-AA) remains intense, confirming that a high number of nanospheres is still present on the fabric surface. The K/S value of the black cotton control sample is significantly higher than that of the PC-coated fabrics (Table 1). All the unwashed samples show a noteworthy increase in the overall color difference (∆E*) and in lightness (L*). However, only the samples containing PCs display a decrease in K/S values in the range of one order of magnitude. The red-green (a*) and yellow-blue (b*) color coordinates show differences as a function of the CH content. The samples pre-treated with CH show a high b* component and a slight increase in the a* component, while the sample with the CH top layer shows a lower a* and a b* value that is half that of the pre-treated-only samples. This confirms the different color appearance of the fabric due to the scattered-light absorption of CH previously observed. After washing, the CO+CH, CO+PH and CO+CH+PH samples display the same lightness and K/S as the control dyed cotton. In terms of color coordinates, the samples with CH maintain some b* color. However, the sample with the CH top layer (CO+CH+PH+CH) is able to maintain a high degree of lightness and a low value of K/S, but shows different color coordinates with a prevalence of the a* component.
Table 1. Overall color difference (∆E*), color strength (K/S) and color coordinates: lightness (L*), red-green (a*), and yellow-blue (b*) of the fabrics before and after washing. CO control K/S = 590.
Conclusion
In this study, PCs fabricated by soap-free emulsion polymerization were applied with and without CH treatment to a cotton fabric to produce structural color. CH addition provides a more uniform coating of the PCs onto the cotton fabrics and the additional CH transparent covering layer enhances color | 2018-12-25T04:36:24.016Z | 2017-10-01T00:00:00.000 | {
"year": 2017,
"sha1": "f9ec828c6dcc512b5cf93194b0e7222e4b71799b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/254/10/102012",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c14a1c7a99599bcc62c1f76741124d5cbcfdf283",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
211096543 | pes2o/s2orc | v3-fos-license | Dualization and Automatic Distributed Parameter Selection of Total Generalized Variation via Bilevel Optimization
Total Generalized Variation (TGV) regularization in image reconstruction relies on an infimal convolution type combination of generalized first- and second-order derivatives. This helps to avoid the staircasing effect of Total Variation (TV) regularization, while still preserving sharp contrasts in images. The associated regularization effect crucially hinges on two parameters whose proper adjustment represents a challenging task. In this work, a bilevel optimization framework with a suitable statistics-based upper level objective is proposed in order to automatically select these parameters. The framework allows for spatially varying parameters, thus enabling better recovery in high-detail image areas. A rigorous dualization framework is established, and for the numerical solution, two Newton type methods for the solution of the lower level problem, i.e. the image reconstruction problem, and two bilevel TGV algorithms are introduced, respectively. Denoising tests confirm that automatically selected distributed regularization parameters lead in general to improved reconstructions when compared to results for scalar parameters.
Introduction
In this work we analyze and implement a bilevel optimization framework for automatically selecting spatially varying regularization parameters α := (α0, α1) ∈ C(Ω)², α > 0, in the following image reconstruction problem:

(1.1)  minimize (1/2)∫_Ω (Tu − f)² dx + TGV²_α(u) over u ∈ BV(Ω),

where the second-order Total Generalized Variation (TGV) regularization is given by

(1.2)  TGV²_α(u) := sup{ ∫_Ω u div²φ dx : φ ∈ C∞_c(Ω, S^{d×d}), |φ(x)|_r ≤ α0(x), |divφ(x)|_r ≤ α1(x), for all x ∈ Ω }.

Here, Ω ⊆ R^d is a bounded, open image domain with Lipschitz boundary, S^{d×d} denotes the space of d × d symmetric matrices, T : L^{d/(d−1)}(Ω) → L²(Ω) is a bounded linear (output) operator, and f denotes given data which satisfies

(1.3)  f = T u_true + η.

In this context, η models a highly oscillatory (random) component with zero mean and known quadratic deviation (variance) σ² from the mean. Further, L²(Ω) and L^{d/(d−1)}(Ω) denote standard Lebesgue spaces [1], and |·|_r, 1 ≤ r ≤ +∞, represents the ℓ_r vector norm or its associated matrix norm. The space of infinitely differentiable functions with compact support in Ω and values in S^{d×d} is denoted by C∞_c(Ω, S^{d×d}). Further, we refer to Section 2 for the definition of the first- and second-order divergences div and div², respectively. Originally, the TGV functional was introduced for scalar parameters α0, α1 > 0 only; see [14]. It serves as a higher-order extension of the well-known Total Variation (TV) regularizer [23,53], preserves edges (i.e., sharp contrast) [49,57], and promotes piecewise affine reconstructions while avoiding the often adverse staircasing effect (i.e., piecewise constant structures) of TV [22,45,52]; see Figure 1 for an illustration. These properties of TGV have made it a successful regularizer in variational image restoration for a variety of applications [8,9,11,12,14,16,46,58]. Extensions to manifold-valued data, multimodal and dynamic problems [5,13,42,43,47,54] have been proposed as well. In all of these works, the choice of the scalar parameters α0, α1 is made "manually" via a direct grid search. Alternatively, selection schemes relying on a known ground truth u_true have been studied; see [18,24,25]. The latter approach, however, is primarily of interest when investigating the mere capabilities of TGV regularization.

While there exist automated parameter choice rules for TV regularization, see for instance [37] and the references therein, analogous techniques and results for the TGV parameters are very scarce. One of the very few contributions is [7] where, however, a spatially varying fidelity weight rather than a regularization parameter is computed. Compared to the choice of the regularization weight in TV-based models, the infimal convolution type regularization incorporated into the TGV functional significantly complicates the selection; compare the equivalent definition (2.1) below. Further difficulties arise when these parameters are spatially varying as in (1.2). In that case, by appropriately choosing α = (α0, α1), one wishes to smoothen homogeneous areas in the image while preserving fine scale details. The overall target is then not only to select the parameters in order to reduce noise while avoiding oversmoothing, as in the TV case, but also to ensure that the interplay of α0 and α1 will not produce any staircasing. For this delicate selection task, and inspired by [37,39] for TV, in this work we propose a bilevel minimization framework for an automated selection of α in the TGV case. Formally, the setting can be characterized as follows:

(1.4)  minimize a statistics-based (upper level) objective over (u, α) subject to u solving (1.1) for a regularization weight α = (α0, α1).
While there exist automated parameter choice rules for TV regularization, see for instance [37] and the references therein, analogous techniques and results for the TGV parameters are very scarce. One of the very few contributions is [7] where, however, a spatially varying fidelity weight rather then regularization parameter is computed. Compared to the choice of the regularization weight in TV-based models, the infimal convolution type regularization incorporated into the TGV functional significantly complicates the selection; compare the equivalent definition (2.1) below. Further difficulties arise when these parameters are spatially varying as in (1.2). In that case, by appropriately choosing α = (α 0 , α 1 ) , one wishes to smoothen homogeneous areas in the image while preserving fine scale details. The overall target is then to not only select the parameters in order to reduce noise while avoiding oversmoothing, as in the TV case, but also to ensure that the interplay of α 0 and α 1 will not produce any staircasing. For this delicate selection task and inspired by [37,39] for TV, in this work we propose a bilevel minimization framework for an automated selection of α in the TGV case. Formally, the setting can be characterized as follows: (1.4) minimize a statistics-based (upper level) objective over (u, α) subject to u solving (1.1) for a regularization weight α = (α 0 , α 1 ).
Note here that the optimization variable α enters the lower level minimization problem (1.1) as a parameter, thus giving rise to u = u(α). We also mention that this optimization format falls into the general framework which is discussed in our review paper [33] where the general opportunities and mathematical as well as algorithmic aspects of bilevel optimization in generating structured non-smooth regularization functionals are discussed in detail. As our statisical set-up parallels the one in [37,39], here we resort to the upper level objective proposed in that work. It is based on localized residuals R : L d/d−1 (Ω) → L ∞ (Ω) with (1.5) Ru where w ∈ L ∞ (Ω × Ω) with Ω Ω w(x, y)dxdy = 1. Note that Ru(x) can be interpreted as a local variance keeping in mind that, assuming Gaussian noise of variance σ 2 , we have that Ω (T u true − f ) 2 dx = Ω η 2 dx = σ 2 |Ω|. Consequently, if a reconstructed image u is close to u true then it is expected that for every x ∈ Ω the value of Ru(x) will be close to σ 2 . Hence it is natural to consider an upper level objective which aims to approximately keep Ru within a corridor σ 2 ≤ σ 2 ≤ σ 2 with positive bounds σ 2 , σ 2 . This can be achieved by minimizing F : L 2 (Ω) → R with The function F (R·) is indeed suitable as an upper level objective. This is demonstrated in Figure 2, where we show (in the middle and right plots) the objective values for a series of scalar TGV denoising results and for a variety of parameters (α 0 , α 1 ) for the image depicted on the left. Regarding the choices of σ, σ, w we refer to Section 6. Upon inspection of Figure 2 we find that the functional F (R·) is minimized for a pair of scalar parameters (α 0 , α 1 ) that is close to the one maximizing the peak-signal-to-noise-ratio (PSNR). Note, however, that in order to truly optimize the PSNR, one would need the ground truth image u true , which is course typically not available. In contrast to this, we emphasize that F (R·) does not involve any ground truth information. Rather, it only relies on statistical properties of the noise. For analytical and numerical reasons, rather than having (1.1) as the lower level problem for the bilevel minimization framework (1.4), we use its Fenchel predual. This yields a bilevel problem which is expressed in terms of dual variables and is equivalent to the one stated in terms of the primal variable u. A similar approach was taken in [37,39] for TV models. In this way, one has to treat a more amenable variational inequality of the first kind rather than one of second kind in the primal setting in the constraint system of the resulting bilevel optimization problem. Numerically, one may then utilize very efficient and resolution independent, function space based solution algorithms, like (inexact) semismooth Newton methods [48]. The other option that will also consider here, is to minimize the upper level objective subject to the primal-dual optimality conditions, for which Newton methods can also be applied for their solution, see for instance [40] for an inexact semismooth Newton solver which operates on the primal-dual optimality conditions for TV regularization.
Summarizing, this work not only provides a user-friendly and novel hierarchical variational framework for automatic selection of the TGV regularization parameters, but, by making these parameters spatially dependent, it also leads to an overall performance improvement; compare, e.g., the results in Section 6.
The structure of the paper. Basic facts on the TGV functional with spatially varying parameters, along with the functional analytic foundations needed for (pre)dualization, are the subject of Section 2. Section 3 is concerned with the derivation of the predual problem of (1.1) and the corresponding primal-dual optimality conditions. Regularized versions of the primal problem (1.1) and its predual are the focus of Section 4. Besides the respective primal-dual optimality conditions, we study the asymptotic behavior of these problems and their associated solutions under vanishing regularization. It is also argued that every regularized instance can be solved efficiently by employing an (inexact) semismooth Newton method. Section 5 introduces two bilevel TGV problems for which the first-order optimality conditions of the predual problem and the first-order primal-dual optimality conditions serve as constraints, respectively. For these problems, associated first-order optimality conditions are derived based on Karush-Kuhn-Tucker theory in Banach spaces. The numerical solution of the proposed bilevel problems is the subject of Section 6. Finally, the paper ends with a report on extensive numerical tests along with conclusions drawn from these computational results.
2. The dual form of the weighted TGV functional
2.1. Total Generalized Variation. We recall here some basic facts about the TGV functional (1.2) with constant parameters α0, α1 and assume throughout that the reader is familiar with the basic concepts of functions of bounded variation (BV); see [2] for a detailed account. For a function φ ∈ C∞_c(Ω, S^{d×d}) the first- and second-order divergences are respectively given by

(divφ)_i = Σ_j ∂_j φ_ij, i = 1, ..., d,   and   div²φ = Σ_{i,j} ∂_i ∂_j φ_ij.

When r = 2 in (1.2) we obtain the isotropic version of the TGV functional; otherwise the functional is anisotropic. Among all anisotropic versions, r = +∞ is of particular interest to us, primarily for computational reasons.
In [16] it was shown that a function u ∈ L¹(Ω) has finite TGV value if and only if it belongs to BV(Ω). Here BV(Ω) denotes the Banach space of functions of bounded variation over Ω with associated norm ||·||_BV(Ω). Moreover, the bounded generalized variation norm ||·||_BGV := ||·||_L¹(Ω) + TGV²_α(·) is equivalent to ||·||_BV(Ω). Similarly to TV, TGV is a convex functional which is lower semicontinuous with respect to strong L¹ convergence. In [10,16] it is demonstrated that the TGV functional can be equivalently written as

(2.1)  TGV²_α(u) = min_{w ∈ BD(Ω)} ∫_Ω α1 d|Du − w| + ∫_Ω α0 d|Ew|,

where BD(Ω) is the space of functions of bounded deformation, with E denoting the distributional symmetrized gradient [56]. The asymptotic behavior of the TGV model in image restoration with respect to the scalars α0, α1 was studied in [50]; see also [57]. For instance, when T = Id and either α0 or α1 converges to zero, then the corresponding solutions of (1.1) converge (weakly* in BV(Ω)) to f. When both parameters are sent to infinity, the solutions converge weakly* to the L²-linear regression solution for f. We further note that the set of affine functions constitutes the kernel of the TGV functional. There exist combinations of α0, α1 such that TGV_α(u) = α1 TV(u). This happens for specific functions u, and in general one can show that there exists a constant C > 0 such that if α0/α1 > C, then the TGV value does not depend on α0 and, up to an affine correction, it is equivalent to TV. In that case the reconstructed images still suffer from a kind of (affine) staircasing effect [50].
The fine structure of TGV reconstructions has been studied analytically mainly in dimension one in [4,15,49,51]. Under some additional regularity assumptions (compare [57]) it can be shown that for TGV denoising the jump set of the solution is essentially contained in the jump set of the data; see [21] for the TV case.
2.2. The space W^q_0(div²;Ω). Next we introduce several function spaces which will be useful in our subsequent development. For this purpose, let 1 ≤ q ≤ ∞ and p ∈ L^q(Ω, S^{d×d}), and denote by divp its (row-wise, distributional) first-order divergence. Based on this first-order divergence, we define the Banach space W^q(div;Ω) of all p ∈ L^q(Ω, S^{d×d}) with divp ∈ L^q(Ω, R^d), normed by ||p||^q_{W^q(div;Ω)} := ||p||^q_{L^q(Ω)} + ||divp||^q_{L^q(Ω,R^d)}. Similarly, one obtains the Banach space W^q(div²;Ω) as the space of all functions p ∈ L^q(Ω, S^{d×d}) whose first- and second-order divergences, divp and div²p, belong to L^q(Ω, R^d) and L^q(Ω), respectively; note that div²p = div(divp). This space is equipped with the norm ||p||^q_{W^q(div²;Ω)} := ||p||^q_{L^q(Ω)} + ||divp||^q_{L^q(Ω,R^d)} + ||div²p||^q_{L^q(Ω)}. We refer to [11] for a more general definition of these spaces. Note that when q = 2 these spaces are Hilbertian, and then the standard notation is H(div;Ω) and H(div²;Ω); see [28]. The Banach spaces W^q_0(div;Ω) and W^q_0(div²;Ω) are defined as the closures of C∞_c(Ω, S^{d×d}) with respect to the corresponding norms. Using the definitions above, integration by parts formulae of the type ∫_Ω p · Eφ dx = −∫_Ω divp · φ dx and ∫_Ω divp · ∇φ dx = −∫_Ω div²p φ dx hold true, with Eφ denoting the symmetrized gradient of φ.
We will show that the space C ∞ c (Ω, S d×d ) in (1.2) can be substituted by W d 0 (div 2 ; Ω). This fact will be instrumental when deriving the predual of the TGV minimization problem. For this we need the following result, which involves the Banach space of functions of bounded deformation here denoted by BD(Ω); see, e.g., [55] for more details.
Proposition 2.1. The weighted TGV²_α functional (1.2) admits the equivalent expression

(2.5)  TGV²_α(u) = min_{w ∈ BD(Ω)} ∫_Ω α1 d|Du − w| + ∫_Ω α0 d|Ew|.

Proof. The proof is analogous to the one for the scalar TGV functional; see for instance [11, Proposition 2.8] or [10, Theorem 3.5]. Here, we highlight only the significant steps. Indeed, given u ∈ L¹(Ω), the idea is to define an auxiliary problem (2.6); the proof then proceeds by showing that the dual problem of (2.6) is equivalent to (2.5) and then applying the Fenchel duality result [27]. The only subtle point is a density result which is required in order to show that (2.6) is indeed equal to (1.2). In fact, it suffices to show the set relation (2.7) between the two classes of test functions. Indeed, let ψ belong to the second set in (2.7), and let ε > 0. Choose 0 < λ < 1 such that

(2.8)  ||ψ − λψ||_{C²_0} < ε/2.

Since α0 and α1 are continuous and bounded away from zero, there exists α > 0, smaller than the minimum of α0, α1, such that (2.9) holds. From standard density properties there exists a function φ ∈ C∞_c(Ω, S^{d×d}) such that the conditions (2.10) hold for all x ∈ Ω. Then, from (2.10) it follows that φ belongs to the first set in (2.7), and from (2.8) and (2.9) we get that ||ψ − φ||_{C²_0} < ε.

Now we are ready to establish the density result needed for dualization. For the sake of the flow of presentation we defer the proof, which parallels the one of [11, Proposition 3.3], to the appendix; see Appendix A. Below, "a.e." stands for "almost every" with respect to the Lebesgue measure.

Proposition 2.2. Let u ∈ L^{d/(d−1)}(Ω) and α = (α0, α1) with α0, α1 ∈ C(Ω) and α0, α1 > α > 0. Then the weighted TGV functional (1.2) can be equivalently written as

(2.11)  TGV²_α(u) = sup{ ∫_Ω u div²p dx : p ∈ K_α },  with  K_α := { p ∈ W^d_0(div²;Ω) : |p(x)|_r ≤ α0(x), |divp(x)|_r ≤ α1(x), for a.e. x ∈ Ω }.

Remark: By slightly amending the proof of Proposition 2.2, one can also show that the representation (2.11) remains valid when K_α is defined over H_0(div²;Ω) rather than W^d_0(div²;Ω).
The predual weighted TGV problem
Now we study the predual problem for the weighted TGV model with continuous weights, i.e., we use the regularization functional (1.2) or equivalently (2.11). For T ∈ L(L^{d/(d−1)}(Ω), L²(Ω)) we assume for simplicity that B := T*T is invertible and define ||v||²_B := ∫_Ω (B⁻¹v) v dx.

Proposition 3.1. Under the above assumptions, there exists a solution to the primal problem

(3.1)  minimize (1/2)∫_Ω (Tu − f)² dx + TGV²_α(u) over u ∈ BV(Ω),

as well as to its predual problem

(3.2)  minimize (1/2)||div²p − T*f||²_B − (1/2)||f||²_{L²(Ω)} over p ∈ W^d_0(div²;Ω), subject to |p(x)|_r ≤ α0(x) and |divp(x)|_r ≤ α1(x), for a.e. x ∈ Ω,

and there is no duality gap, i.e., the primal and predual optimal objective values are equal. Moreover, the solutions u and p of these problems satisfy

(3.3)  T*Tu = T*f − div²p.

Proof. We set U = W^d_0(div²;Ω), V = L^d(Ω), Λ : U → V with Λp = div²p, and also F1 : U → R and F2 : V → R with F1(p) := I_{K_α}(p) and F2(v) := (1/2)||v − T*f||²_B − (1/2)||f||²_{L²(Ω)}. Here, I_S(·) denotes the indicator function of a set S. Immediately one gets that the predual problem (3.2) can be written as

(3.6)  min_{p ∈ U} F1(p) + F2(Λp).

The problem in (3.6) admits a solution. Indeed, first observe that the objective is bounded from below. Then note that since (1/2)||T·−f||²_{L²(Ω)} is continuous at 0 ∈ L^{d/(d−1)}(Ω), its convex conjugate (see [27] for a general definition), which is equal to (1/2)||T*f + ·||²_B − (1/2)||f||²_{L²(Ω)}, is coercive in L^d(Ω); see [6, Theorem 4.4.10]. Hence, any infimizing sequence (p_n)_{n∈N} is bounded in W^d_0(div²;Ω), and thus there exist an (unrelabeled) subsequence and p ∈ W^d(div²;Ω) such that p_n ⇀ p, divp_n ⇀ divp and div²p_n ⇀ div²p weakly in L^d. We also have that p is a feasible point, since the constraint set K_α is weakly closed. Then p is a minimizer of (3.6), as (1/2)||T*f − ·||²_B is weakly lower semicontinuous in L^d(Ω).
We now calculate the expression F*1(Λ*u) + F*2(−u) for u ∈ V* = L^{d/(d−1)}(Ω). As before, one verifies by direct computation that F*2(−u) = (1/2)||Tu − f||²_{L²(Ω)}. Moreover, since F1 = I_{K_α}, we have by (2.11) that F*1(Λ*u) = sup_{p ∈ K_α} ∫_Ω u div²p dx = TGV²_α(u). In order to prove that there is no duality gap, it suffices to show that the set ∪_{λ≥0} λ(dom(F2) − Λ(dom(F1))) is a closed subspace of V. Then the so-called Attouch-Brezis condition is satisfied; see [3]. It is immediate to see that dom(F2) = L^d(Ω), and hence the condition holds true. Thus, we also get existence of a solution for the primal problem (3.1). Finally, (3.3) follows from the optimality condition (Euler-Lagrange system) that corresponds to Λp ∈ ∂F*2(−u).
The assumptions on T in the above proposition are invoked throughout the rest of this work. In the special case when T = Id (corresponding to image denoising), we can only get existence of a solution to the predual problem in the Hilbert space H_0(div²;Ω). The proof of this fact is similar to the one above.
The primal-dual optimality conditions for the problems (3.1) and (3.2) read

(3.7)  Λ*u ∈ ∂F1(p),
(3.8)  Λp ∈ ∂F*2(−u),

and we note once again that (3.3) corresponds to (3.8) with F2 and Λ as in the proof of Proposition 3.1. Instead of making the optimality condition that corresponds to (3.7) explicit, we are interested in the analogous optimality conditions written in the variables u and w of the equivalent primal weighted TGV problem

(3.9)  min_{u ∈ BV(Ω), w ∈ BD(Ω)} (1/2)∫_Ω (Tu − f)² dx + ∫_Ω α1 d|Du − w| + ∫_Ω α0 d|Ew|.

For this purpose, note first that the predual problem (3.2) can be equivalently written as (3.10). Then the solutions of the above two problems can be characterized as follows.
Proposition 3.2. A pair of dual variables is a solution to (3.10), and (w, u) ∈ BD(Ω) × BV(Ω) is a solution to (3.9), if and only if the optimality conditions (3.11)-(3.14) are satisfied.

Proof. Note that the suprema above are always greater than or equal to the corresponding suprema over smooth test functions. Moreover, as we focus on a minimization problem, we are interested in those (w, u) ∈ Y* that render the suprema finite. This implies in particular that w has a distributional derivative Ew with bounded Radon norm, and hence it is a Radon measure. It follows that w ∈ L¹(Ω, R^d), yielding w ∈ BD(Ω); see [10]. Using now density results analogous to (A.3), and the fact that the distribution Du − w has a finite Radon norm and can therefore be represented by an R^d-valued finite Radon measure, we obtain in particular that u ∈ BV(Ω). Furthermore, as in the proof of Proposition 3.1, we have F*2(w, u) = (1/2)||Tu − f||²_{L²(Ω)}. The fact that there is no duality gap is ensured by Propositions 2.1, 2.2 and 3.1. We now turn our attention to the optimality conditions. It can be checked again that (3.18) gives (3.11) and (3.12). Expanding on (3.17), we then obtain (3.13) and (3.14).
Note that in the proof above we made use of the following density results:
A series of regularized problems
4.1. Regularization of the primal problem. With the aim of lifting the regularity of u and w to avoid measure-valued derivatives, we next consider a regularized version of the primal weighted TGV problem (3.9), denoted by (4.1), which augments (3.9) by additional smoothing terms involving constants 0 < µ, α ≪ 1. Existence of solutions for (4.1) follows from standard arguments.
Proposition 4.1. A primal pair (w, u) and the corresponding dual variables are solutions to (4.1) and its predual problem, respectively, if and only if the optimality conditions (4.2)-(4.5) are satisfied.

Proof. The proof follows again easily by calculating the corresponding primal-dual optimality conditions.
Next we study the relationship between the solutions of (3.9) and (4.1) as the parameters µ, α tend to zero.

Proposition 4.2. In addition to the standing assumptions on T, let T also be injective on the set of affine functions. Further, let µ_n, α_n → 0 and let (w_n, u_n)_{n∈N} be a sequence of solution pairs of the problem (4.1). Then u_n ⇀* u* and w_n ⇀* w* weakly* in BV(Ω) and BD(Ω), respectively, where (w*, u*) is a solution pair for (3.9). The convergence is up to subsequences.
Proof. For convenience of notation, define the energies We have Thus, the sequences (u n ) n∈N and (w n ) n∈N are bounded in BV(Ω) and BD(Ω), respectively. In order to see this, note that by setting α i := min x∈Ω α i (x), i = 0, 1, we get Hence, (u n ) n∈N is bounded in the sense of second-order TGV. Using the fact that T is injective on the set of affine functions, one can further derive a uniform L 1 bound on (u n ) n∈N ; see for instance [16,Theorem 4.2]. This implies further that this sequence is bounded on BV(Ω). The bound on (w n ) n∈N in BD(Ω) then follows from (4.6).
From compactness theorems in those spaces (for BD(Ω) see for instance [56]) we have that there exist u * ∈ BV(Ω) and w * ∈ BD(Ω) such that u n k * u * and w n k * w * in BV(Ω) and BD(Ω) respectively along suitable subsequences. Due to the lower semicontinuity of the functional E with respect to these convergences, we have for any pair (w,ũ) [55]. From this, in combination with the fact that . Hence, since (4.7) holds we have that Finally, by following similar steps as in the proof of [57, Thm. 3], we can show that for every This yields that (w * , u * ) is a solution pair for (3.9).
Note that if the solution u^* of (3.9) is unique, then we have u_n → u^* weakly* along the entire sequence.
We now proceed to the second level of regularization of the problem (4.1), which, in addition to lifting the regularity of u and w, respectively, also smoothes the non-differentiable constituents. For this purpose, we define the following primal problem, which will also be treated numerically below. Here ϕ_{γ,r*} denotes the Huber-regularized version of the | · |_{r*} norm. In what follows, for notational convenience we will focus on ϕ_γ := ϕ_{γ,2}, i.e., for a vector v ∈ S, with S = R^d or R^{d×d}, and γ > 0 we use the corresponding Huberized norm, with | · | denoting either the Euclidean norm in R^d or the Frobenius norm in R^{d×d}. We mention that this type of Huber regularization of TV-type terms in the primal problem corresponds to an L² regularization of the dual variables in the predual [17,40]. In order to illustrate this, consider the following denoising problem (P γ) without any H¹ regularization, together with its corresponding predual problem. The proof is similar to the one of Proposition 3.2, and in the dualization process we use a corresponding identity for S-valued measures µ; see for instance [26].
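To make the Huber smoothing concrete, the following minimal sketch (not taken from the paper) implements one common convention for the Huber-regularized Euclidean norm ϕ_γ applied pointwise to a discrete vector field; the exact constants and scaling used by the authors may differ.

```python
import numpy as np

def huber_phi(v, gamma):
    """Huber-regularized Euclidean norm applied pointwise.

    v is an array whose last axis holds the vector components
    (e.g. shape (n, m, 2) for a discrete gradient field).
    Returns an array of shape v.shape[:-1].
    """
    norm = np.linalg.norm(v, axis=-1)
    quadratic = norm**2 / (2.0 * gamma)   # smooth branch near the origin
    linear = norm - gamma / 2.0           # plain norm away from the origin
    return np.where(norm <= gamma, quadratic, linear)

# Example: Huberized TV-type energy of a random gradient field
rng = np.random.default_rng(0)
grad_u = rng.standard_normal((64, 64, 2))
print(huber_phi(grad_u, gamma=1e-3).sum())
```

As γ tends to zero the smoothed norm approaches the plain norm, which is the regime studied in the approximation results below.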
Returning to the (doubly) regularized primal problem (P γ ), we are primarily interested in its associated first-order optimality conditions.
are solutions to (P γ) and its predual problem, respectively, if and only if the following optimality conditions are satisfied. The proof of Proposition 4.3 follows from calculating the corresponding primal-dual optimality conditions as in Proposition 4.1. The analogous approximation result follows, where we have set γ_0 = γ_1 = γ and T = Id for simplicity. Proposition 4.4. Let (w, u, q, p) and (w_γ, u_γ, p_γ, q_γ) satisfy the optimality conditions (4.2)-(4.5) and (Opt 1)-(Opt 4), respectively. Then, as γ → 0, the corresponding convergences hold. Proof. By subtracting the first two equations of the optimality systems of Propositions 4.1 and 4.3, respectively, we obtain the corresponding relations for all admissible test functions. When using v = u − u_γ and ω = w − w_γ in the equations above and adding them up we get (4.14). We now estimate R_1 and R_2. Consider the partitions of Ω into disjoint sets (up to sets of measure zero). We estimate R_1 separately on the disjoint sets. Starting from A_γ ∩ A, it follows that a pointwise estimate holds on A_γ ∩ A (with the argument x left off for ease of notation). Turning now to the set A_γ ∩ I and recalling ∇u − w = 0, we obtain a further estimate. For the set I_γ ∩ A an analogous bound holds, and thus we can estimate this contribution as well. Similarly, for the set I_γ ∩ I we get a corresponding bound. Combining the above estimates we obtain bounds for R_1 and for R_2. Hence, from (4.14) we obtain the desired convergences for u_γ and w_γ. From this result, and using (4.12) and (4.13), we get that for every v ∈ H¹(Ω) and for every ω ∈ H¹(Ω, R^d) the corresponding convergence holds as γ → 0. This completes the proof.
Finally, the following approximation result holds true, when α, µ and γ tend to zero.
4.2. Regularization of the predual problem. We now consider the following regularization of the predual problem (3.2) for ε > 0, given in (4.17), where H²_0(Ω, S^{d×d}) denotes the usual Sobolev space with homogeneous first-order trace on the boundary [1], and the map M : H_0(div²; Ω) → R^+_0 is convex and continuous, with M(p) = 0 if and only if |p(x)|_r ≤ α_0(x) and |div p(x)|_r ≤ α_1(x) for almost every x ∈ Ω. We also assume that M is coercive in the sense that M(p_n) → ∞ if max{‖p_n‖_{L²(Ω)}, ‖div p_n‖_{L²(Ω)}} → ∞ for some sequence (p_n)_{n∈N}. Further, ∆ denotes the vector Laplacian operator, which is the standard Laplacian applied component-wise. For the sake of discussion, we mention that more sophisticated regularizations securing div p ∈ L^r(Ω) with r > 2, for the subsequent application of (function space versions of) generalized Newton methods for solving this problem, are possible as well. Proof. By J(·) we denote the optimal objective of (3.2), where we ignore the term (1/2)‖f‖²_{L²(Ω)}, and let K_α be the corresponding constraint set. Let ε_n → 0. Note that ‖·‖_{L²} + ‖∆·‖_{L²} is a norm on H²(Ω, S^{d×d}) [29]. Thus, the minimizing functional in (4.17), denoted by J_{ε_n}(·), is coercive over H²_0(Ω, S^{d×d}) for every n ∈ N. Hence, any infimizing sequence of (4.17) has a weakly convergent subsequence in H²_0(Ω, S^{d×d}). Further, J_{ε_n} is weakly lower semicontinuous and, thus, (4.17) has a solution p_{ε_n}, which is unique due to strict convexity.
For this problem we take r = ∞, leading to the anisotropic version of TGV, and use M built from a smooth penalty function G_δ : R → R acting component-wise, for δ > 0. Summarizing, and allowing for different regularization weights β > 0, γ > 0 (rather than β = γ = ε > 0), (4.17) takes the form (4.23), where, for greater flexibility, we also use weights 1/ε_0 > 0 and 1/ε_1 > 0, respectively, in front of Q_δ and P_δ. Note that for sufficiently small ε_0, ε_1, the quantities Q_δ(p, α_0) and P_δ(div p, α_1) get small as well, and p and div p are expected to "approximately" satisfy the box constraints in (3.2).
Two bilevel optimization schemes
In this section we will adapt the bilevel optimization framework developed in [37,39] in order to automatically select the regularization functions α 0 and α 1 . The main idea is to minimize a suitable upper level objective over both the image u and the regularization parameters α 0 , α 1 subject to u being a solution to a (regularized) TGV-based reconstruction problem with these regularization weights.
It is useful to recall the definitions of the localized residual R and the function F as stated in the introduction, where w ∈ L^∞(Ω × Ω) with ∫_Ω ∫_Ω w(x, y) dx dy = 1, and for some appropriately chosen bounds on the local variance. We next describe two bilevel schemes, each one based on one of the two regularized TGV problems studied in the previous sections.
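The sketch below only illustrates one possible concrete form of these quantities under explicit assumptions: the localized residual is taken as a windowed average of the squared data residual (with T = Id), and the upper level penalty is assumed to vanish whenever this local residual stays inside a corridor between the lower and upper variance bounds; the authors' exact weight w and penalty F may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def localized_residual(u, f, window=7):
    """Windowed average of the squared residual (T = Id assumed here)."""
    return uniform_filter((u - f) ** 2, size=window, mode="nearest")

def variance_corridor_penalty(r, sigma2_lo, sigma2_hi):
    """Penalize local residuals leaving the corridor [sigma2_lo, sigma2_hi]."""
    below = np.maximum(sigma2_lo - r, 0.0) ** 2
    above = np.maximum(r - sigma2_hi, 0.0) ** 2
    return (below + above).sum()

rng = np.random.default_rng(1)
f = rng.standard_normal((128, 128))
u = 0.9 * f
print(variance_corridor_penalty(localized_residual(u, f), 0.8, 1.2))
```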
Bilevel dual.
Note that the localized residual Ru can also be written in terms of the dual variable p. The duality-based bilevel TGV problem is defined as follows. Here, box constraints on α_i are encoded in the admissible sets A^i_ad, requiring lower and upper bounds on α_i in Ω for some strictly positive constants, i = 0, 1. Note that the H¹ regularity on the parameter functions α_0, α_1 facilitates the existence and differential sensitivity analysis as established in [37,39] for the TV case. Note, however, that this setting does not guarantee a priori that these functions belong to C(Ω), the regularity required for applying the dualization results of the previous sections. Nevertheless, under mild data assumptions, one can make use of a regularity result of the H¹-projection onto the sets A^0_ad and A^1_ad; see [39, Corollary 2.3]. In particular, if the lower and upper bounds of α_0 and α_1 as well as the initializations for α_1 and α_0 are constant functions, then along the projected gradient iterations, compare Algorithms 3 and 4, the weights are guaranteed to belong to H²(Ω) which (for dimension d ≤ 2) embeds into C(Ω).
We briefly note that in the TV case it can be shown [30,33] that W 1,1 regularity for the regularization parameter α suffices to establish a dualization framework. A corresponding result is not yet known for TGV, even though one expects that it could be shown by similar arguments. Hence, here we will also make use of the H 1 -projection regularity result as described above.
Regarding the box constraints (5.4): in [24] it was shown that, for a PSNR-optimizing upper level objective J(u, α) = ‖u(α) − f‖²_{L²(Ω)} subject to H¹- and Huber-regularized TV and TGV denoising problems, under some mild conditions on the data f the optimal scalar solutions α and (α_0, α_1) are strictly positive. As depicted in Figure 2, the upper level objective discussed here appears close to optimizing the PSNR; keeping the parameters strictly positive via (5.4) seems, however, necessary for the time being.
We now briefly discuss how to treat the bilevel problem (P TGV -d). Let (α_0, α_1) → p(α_0, α_1) denote the solution map for the lower level problem, equivalently of the optimality condition (4.24). Then the problem (P TGV -d) admits the corresponding reduced version. Similarly to the TV case [37], one can show that the reduced functional Ĵ_TGV : H¹(Ω) × H¹(Ω) → R is differentiable. We can then apply the KKT framework in Banach spaces [59]: minimize T(x) subject to x ∈ C and g(x) = 0, where V, A, Z are Banach spaces, X = V × A, T : X → R and g : X → Z are Fréchet differentiable and continuously differentiable functions, respectively, and C ⊂ X is a non-empty, closed convex set.
We have Ĵ′_d(α_0, α_1) ∈ (H¹(Ω) × H¹(Ω))^*. In order to obtain the gradient of this functional we apply the inverse Riesz map, with P_1, P_2 denoting the first and the second component of the derivative of the reduced objective. Equipped with this gradient, a gradient-related descent scheme as in [39, Algorithm 1] can be set up for our bilevel TGV problem. This will be discussed further in Section 6.1 below.
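A common discrete realization of such a Riesz-map step, assumed here purely for illustration, is to solve (I − Δ_N) g = P_i for each component, with a homogeneous Neumann Laplacian; the paper's actual H¹ inner product and boundary treatment may differ.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def neumann_laplacian_1d(n):
    """1D finite-difference Laplacian with homogeneous Neumann BCs."""
    main = -2.0 * np.ones(n)
    main[0] = main[-1] = -1.0      # ghost point mirrors the boundary value
    off = np.ones(n - 1)
    return sp.diags([off, main, off], [-1, 0, 1], format="csc")

def h1_riesz_gradient(P, h=1.0):
    """Map a derivative P given on an n-by-m grid to its H^1 gradient
    by solving (I - Laplacian_N) g = P."""
    n, m = P.shape
    L = sp.kronsum(neumann_laplacian_1d(m), neumann_laplacian_1d(n),
                   format="csc") / h**2
    A = sp.identity(n * m, format="csc") - L
    return spla.spsolve(A, P.ravel()).reshape(n, m)

g = h1_riesz_gradient(np.random.default_rng(2).standard_normal((32, 32)))
print(g.shape)
```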
We will skip here the proofs for the differentiability of the functions g and the reduced objective J as well as the existence proofs for (P TGV -d) and (P TGV -p.d.). These results can be shown similarly to the corresponding assertions for TV; see [37,39].
Newton solvers for the lower level problems.
5.3.1. Dual TGV Newton. Before we proceed to devising a projected gradient algorithm for the solution of both aforementioned bilevel problems, we discuss here two Newton algorithms for the solution of the corresponding lower level problems.
We first state the corresponding function space Newton method for the solution of (4.24); see Algorithm 1.
Here G′′_δ denotes the second derivative of G_δ in (4.22). Due to the regularization of p in (4.24), the algorithm admits local superlinear convergence; see [31,32]. Moreover, similarly to [41], it can be shown that the solver is mesh (i.e., image resolution) independent.
Algorithm 1 Function space Newton algorithm for the solution of the regularized TGV dual problem (4.23)
while some stopping criterion is not satisfied do
Find δp^k ∈ H²_0(Ω, S^{d×d}) such that the corresponding Newton equation is satisfied in [H²_0(Ω, S^{d×d})]^*
Update p^{k+1}
end while
A few words on the discrete version of Algorithm 1 are in order. Images (d = 2) are considered as elements of U_h := {u | u : Ω_h → R}, where Ω_h = {1, 2, . . . , n} × {1, 2, . . . , m} is a discrete Cartesian grid that corresponds to the image pixels. The mesh size, defined as the distance between the grid points, is set to h = 1/√nm. We define the associated discrete function spaces, identifying matrix-valued grid functions with their components (p_11, p_12, p_22). For the discrete gradient and divergence we have ∇ : W_h → V_h and div : V_h → W_h, satisfying the adjoint relation ∇^* = −div. We refer the reader to Appendix B for precise definitions of these operators as well as for a detailed description of the other discrete second-order differential operators. We note here that these operators must be defined with the correct boundary conditions in order to reflect the boundary conditions imposed on p ∈ H²_0(Ω, S^{2×2}).
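For illustration only, the sketch below shows the scalar-to-vector case of such an adjoint pair on a pixel grid (forward differences for the gradient, with the divergence defined as its negative adjoint), together with a numerical check of the adjoint relation; the paper's operators act on matrix-valued fields with the boundary conditions of H²_0 and are therefore set up differently.

```python
import numpy as np

def grad(u):
    """Forward differences with the last row/column set to zero."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack((gx, gy), axis=-1)

def div(p):
    """Negative adjoint of grad, so that <grad u, p> = -<u, div p>."""
    px, py = p[..., 0], p[..., 1]
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

rng = np.random.default_rng(3)
u = rng.standard_normal((16, 16)); p = rng.standard_normal((16, 16, 2))
print(np.allclose((grad(u) * p).sum(), -(u * div(p)).sum()))  # adjointness check
```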
5.3.2. Primal-Dual TGV Newton. Next we briefly describe the primal-dual TGV Newton method for the solution of the first-order optimality conditions in Proposition 4.3, written here for the denoising case for the sake of readability only. For the discretized versions of the above differential operators, we use the standard five-point stencils with zero Neumann boundary conditions. Note that these act on the primal variables u and w, which satisfy natural boundary conditions, in contrast to the dual variable. The discretized symmetrized gradient Ew is defined as (1/2)(∇w + (∇w)^T). The system of equations (5.12)-(5.15) can be written in short as g_pd(x) = 0, where x = (u, w, q, p). We compute the derivative of g_pd at a point x = (u, w, q, p) as a block matrix. Given x^k, the Newton iteration for solving the system of equations (5.12)-(5.15), or g_pd(x) = 0 for short, reads as a linear system, which can also be written in the form (5.18). Here it is convenient to introduce the notation A, B, C_k, D_k for its blocks, since only the submatrices C and D depend on k. Note that the right-hand side Dg_pd(x^k)x^k − g_pd(x^k) of the linear system (5.18) can be written out explicitly; notation-wise, the components that appear in b^k_2 should be regarded as the diagonals of the corresponding diagonal matrices that we mentioned before, multiplied component-wise. By introducing the notation x^k_1 = (u^k, w^k)^T, x^k_2 = (q^k, p^k)^T, the Newton system (5.18) can be written in two-by-two block form. The above system can be simplified utilizing the Schur complement: first solve for the primal variables x^{k+1}_1 = (u^{k+1}, w^{k+1})^T and then recover the dual ones x^{k+1}_2 = (q^{k+1}, p^{k+1})^T; this yields the reduced (Schur complement) system. The following result then holds.
Lemma 5.1. If (q^k, p^k) belong to the feasible sets, i.e., |q^k| ≤ α_1 and |p^k| ≤ α_0 component-wise, then the matrix S_k := A − B D_k^{-1} C_k is positive definite, and for the minimum eigenvalues we have λ_min(S_k) ≥ λ_min(A) > 0. Furthermore, S_k^{-1} is bounded independently of k. The proof of Lemma 5.1 follows the steps of the analogous proof in [40] and is hence omitted. Summarizing, the Newton method for the solution of (5.12)-(5.15) is outlined in Algorithm 2.
Here we have followed [40] and project in every iteration the variables q, p onto the feasible sets such that the result of Lemma 5.1 holds.
The projections onto the respective feasible sets are defined component-wise.
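A minimal sketch of such a component-wise projection, assuming the standard pointwise radial projection onto {|q(x)| ≤ α(x)}, is given below; it is not claimed to be the authors' exact construction.

```python
import numpy as np

def project_ball(q, alpha):
    """Pointwise radial projection onto {q : |q(x)| <= alpha(x)}.

    q has the vector/matrix components in its last axis; alpha is a
    scalar or an array matching q.shape[:-1].
    """
    norm = np.linalg.norm(q, axis=-1)
    scale = np.where(norm > alpha, alpha / np.maximum(norm, 1e-15), 1.0)
    return q * scale[..., None]

q = np.random.default_rng(4).standard_normal((8, 8, 2))
print(np.linalg.norm(project_ball(q, alpha=0.5), axis=-1).max())  # <= 0.5
```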
Algorithm 2
Newton algorithm for the solution of the regularized TGV primal problem (P γ)
while some stopping criterion is not satisfied do
Solve the reduced linear system for x^{k+1}_1 = (u^{k+1}, w^{k+1}) and compute the intermediate dual updates q̃^{k+1}, p̃^{k+1}
Set q^{k+1}, p^{k+1} as projections of q̃^{k+1}, p̃^{k+1} onto the feasible sets {q : |q| ≤ α_1}, {p : |p| ≤ α_0}
end while
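The block elimination behind Algorithm 2 can be sketched generically as follows; dense random matrices stand in for the actual discretized operators A, B, C_k, D_k, so this only illustrates the Schur-complement step rather than the paper's implementation.

```python
import numpy as np

def newton_step_schur(A, B, C, D, b1, b2):
    """One block elimination step for
        [A  B] [x1]   [b1]
        [C  D] [x2] = [b2]:
    solve S x1 = b1 - B D^{-1} b2 with S = A - B D^{-1} C,
    then recover x2 = D^{-1} (b2 - C x1)."""
    Dinv_b2 = np.linalg.solve(D, b2)
    Dinv_C = np.linalg.solve(D, C)
    S = A - B @ Dinv_C
    x1 = np.linalg.solve(S, b1 - B @ Dinv_b2)
    x2 = Dinv_b2 - Dinv_C @ x1
    return x1, x2

rng = np.random.default_rng(5)
n = 20
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, n)), rng.standard_normal((n, n))
D = 5.0 * np.eye(n)
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)
x1, x2 = newton_step_schur(A, B, C, D, b1, b2)
print(np.allclose(np.block([[A, B], [C, D]]) @ np.concatenate([x1, x2]),
                  np.concatenate([b1, b2])))
```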
Numerical implementation
In this section we will describe two projected gradient algorithms for the solution of the discretized versions of the two bilevel problems (P TGV -d) and (P TGV -p.d.). Note that for most of the experiments we will keep α 0 a scalar -this is justified by the numerical results; see the relevant discussion later on.
6.1. The numerical algorithm for (P TGV -d). We now describe our strategy for solving the discretized version of the bilevel TGV problem (P TGV -d). For this purpose, we introduce the discrete versions of the differential operators and norms that appear in the upper level objective of (P TGV -d). We will make use of the discrete Laplacian with zero Neumann boundary conditions, ∆_N : U_h → U_h, which acts on the weight function α_1. These are the desired boundary conditions for α_1 as dictated by the regularity result for the H¹-projection in [39, Corollary 2.3]. For that we use the standard Laplacian stencil, setting the function values of ghost grid points to be the same as the function value of the nearest grid point in Ω_h. For a function u ∈ U_h we define the discrete ℓ² norm, the discrete H¹ norm applied to the weight function α_1, the corresponding dual norm, and a further version of the discrete dual norm accordingly. For the discrete version of the averaging filter in the definition of the localized residuals (5.1) we use a filter of size n_w × n_w, with entries of equal value whose sum is equal to one. With these definitions the discrete version of the bilevel TGV problem (P TGV -d) is obtained. Here, (·)_+ is applied in a component-wise way, and (A^i_ad)_h denotes the discrete admissible set of weight functions satisfying the box constraints at every grid point of Ω_h. Note that the discrete penalty functions P_δ : W_h → W_h and Q_δ : V_h → V_h are defined straightforwardly by component-wise application of the function G_δ.
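The ghost-point construction of ∆_N described above can be realized, for instance, as in the following sketch (mesh-size scaling omitted); this is an illustrative implementation rather than the authors' code.

```python
import numpy as np

def laplacian_neumann(a):
    """Five-point Laplacian with zero Neumann BCs, implemented by
    replicating the nearest interior value at ghost points ('edge' padding)."""
    padded = np.pad(a, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * a)

alpha1 = np.random.default_rng(6).standard_normal((32, 32))
print(laplacian_neumann(alpha1).sum())  # approximately 0: no flux through the boundary
```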
Regarding the choice of the lower and upper bounds for the local variance, we follow here the following rules, where σ² is the variance of the "Gaussian" noise contaminating the data. The formulae (6.1) are based on the statistics of the extremes; see [39, Section 4.2.1]. We now proceed by describing the algorithm for the numerical solution of (P h TGV -d). In essence, we employ a discretized projected gradient method with Armijo line search. The discrete gradient of the reduced objective functional is computed with the help of the adjoint equation, which is the discrete version of (5.7). We summarize this in Algorithm 3. For the sake of notation, here 1 denotes a matrix either of the form [Id; Id] or [Id; Id; Id], of size nm × 2nm or nm × 3nm, respectively, depending on whether it is applied to α_1 or α_0; it also denotes a matrix of size 1 × nm with all entries equal to one. The projection P(A^1_ad)_h is computed as described in [39, Algorithm 4], that is, via the semismooth Newton method developed in [32]. We only mention that the original discretized H¹-projection problem P(A_ad)_h(α) is approximated by a penalty version with a small penalty parameter. For the projection regarding α_0, we simply set P(A^0_ad)_h(α_0) = max(min(α_0, ᾱ_0), α̲_0). Furthermore, a path-following scheme is employed for solving g_d(p, α_0, α_1) = 0. This is done by using a decaying sequence ε_0^ℓ, ε_1^ℓ, solving up to a tolerance ‖g_d(p^{ℓ+1}, α_0, α_1)‖ ≤ tol^{(ℓ)}, and then setting ε_0^{ℓ+1} := max(θ ε_0^ℓ, ε̲_0), ε_1^{ℓ+1} := max(θ ε_1^ℓ, ε̲_1) for some 0 < θ < 1, until a desired level of penalization is reached.
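A generic projected gradient loop with Armijo backtracking, of the kind referred to above, might look as in the following sketch; the box projection used here merely stands in for the H¹-projection P(A^1_ad)_h, and the toy objective is for demonstration only.

```python
import numpy as np

def projected_gradient_armijo(J, gradJ, project, x0, tau0=1.0, c=1e-4,
                              shrink=0.5, max_iter=30):
    """Generic projected gradient method with Armijo backtracking.

    J and gradJ act on the (flattened) parameter vector; `project` maps a
    trial point back onto the admissible set (here a simple box)."""
    x = project(x0)
    for _ in range(max_iter):
        g = gradJ(x)
        tau = tau0
        while True:
            x_trial = project(x - tau * g)
            # Armijo condition along the projected arc
            if J(x_trial) <= J(x) - c / tau * np.sum((x_trial - x) ** 2) or tau < 1e-12:
                break
            tau *= shrink
        x = x_trial
    return x

# Toy usage: minimize a quadratic over the box [0.1, 2.0]
J = lambda a: 0.5 * np.sum((a - 1.3) ** 2)
gradJ = lambda a: a - 1.3
project = lambda a: np.clip(a, 0.1, 2.0)
print(projected_gradient_armijo(J, gradJ, project, np.full(5, 2.0)))
```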
In summary, the projected gradient algorithm for the solution of (P h TGV -p.d.) is described in Algorithm 4. The projections P(A^0_ad)_h and P(A^1_ad)_h are computed as before, using [39, Algorithm 4].
Algorithm 4
Discretized projected gradient method for the bilevel TGV problem (P h TGV -p.d.)
repeat
Use Algorithm 2 to compute the solution x^k = (u^k, w^k, q^k, p^k) of the lower level problem
Solve the adjoint equation (6.4) for (u^*, w^*, q^*, p^*)
Compute the derivative of the reduced objective with respect to α_0 and α_1 as in (6.6) and (6.7)
Compute the reduced gradients, update the weights by a projected step with step size τ_k, and set k := k + 1
until some stopping condition is satisfied
6.3. Numerical examples in denoising. We now discuss some weighted TGV numerical examples, with regularization weights produced automatically by Algorithms 3 and 4. We are particularly interested in the degree of improvement over the scalar TGV examples. We are also interested in whether the statistics-based upper level objective enforces an automatic choice of regularization parameters that ultimately leads to a reduction of the staircasing effect. Our TGV results are also compared with the bilevel weighted TV method of [37,39]. The associated test images are depicted in Figure 3 with resolution n = m = 256. The first one is the well-known "Cameraman" image, which essentially consists of a combination of piecewise constant parts and texture. The next two images, "Parrot" and "Turtle", contain large piecewise affine type areas, thus they are more suitable for the TGV prior. The final image, "Hatchling", is characterized by highly oscillatory patterns of various kinds, depicting sand in various degrees of focus.
We note that the initialization of the algorithms needs some attention. As was done in [39] for the TV case, α_0^0 and α_1^0 must be large enough in order to produce cartoon-like images, providing the local variance estimator with useful information. However, if α_0 is initially too large, then there is a danger of falling into the regime in which the TGV functional, and hence the solution map of (at least the non-regularized) lower level problem, does not depend on α_0. In that case the derivative of the reduced functional with respect to α_0 will be close to zero, thus making no or little progress with respect to its optimal choice. Indeed this was confirmed after some numerical experimentation. Note that an analogous phenomenon can also occur in the case where α_0 is much smaller than α_1. In that case it is the effect of α_1 which vanishes. This has been shown theoretically in [50, Proposition 2] for dimension one, but numerical experiments indicate that this phenomenon persists also in higher dimensions. In our examples we used α_1^0 = 9 × 10^{-4} and α_0^0 = 3.125 × 10^{-6} for (P h TGV -d), and α_1^0 = 0.25 and α_0^0 = 0.2 for (P h TGV -p.d.). Regarding the termination of the projected gradient algorithm, we used a fixed number of iterations, n = 30 for (P h TGV -d) and n = 40 for (P h TGV -p.d.). Neither the upper level objective nor the argument changed significantly after running the algorithm for more iterations; see for instance Figure 4. The same holds true for the corresponding PSNR and SSIM values. We also note that a termination criterion as in [39], based on suitable proximity measures for i = 0, 1, is also possible here. We note that, due to the line search, the number of times that the lower level problem has to be solved is larger than the number of projected gradient iterations. For instance, for the four examples of (P h TGV -p.d.) of Figure 5, the lower level problem had to be solved 57, 57, 57, and 59 times, respectively (40 projected gradient iterations). Typically 8-12 Newton iterations were needed per lower level problem.
Table 1. PSNR and SSIM comparisons for the images of Figure 5. Every cell contains the corresponding PSNR and SSIM value.
For the first series of examples we keep the parameter α_0 scalar, whose value nevertheless is determined by the bilevel algorithms. We depict the examples in Figure 5. The first row shows the noisy images, while the second contains the bilevel TV results [37]. The third row depicts the best scalar TGV results with respect to SSIM, either using the dual or the primal-dual approach -whichever had the largest value- where we have computed the optimal scalars α_0, α_1 with a manual grid search. The fourth and the fifth rows show the results of (P h TGV -d) and (P h TGV -p.d.), respectively. Detailed sections of all the images of Figure 5 are highlighted in Figure 6. The weight functions α_1 for the bilevel TV and the bilevel TGV algorithms are shown in Figure 7. In Table 1 we report, for the images of Figure 5, the best scalar TGV results (dual or primal-dual) with respect to both quality measures, as well as the corresponding values of the three bilevel algorithms. We next comment on the results for each image.
Cameraman: Here both the best PSNR and SSIM are obtained by the bilevel TV algorithm. This is probably not surprising due to the piecewise constant nature of this image. However, both bilevel TGV algorithms improve upon their scalar versions with respect to both measures. It is interesting to observe the two different spatial weights α 1 produced by the two bilevel TGV algorithms, see the last two functions at the first column of Figure 7. The dual TGV algorithm, solving the anisotropic version of TGV, has the tendency to blur thin objects that have a 45 degree orientation with respect to the pixel grid, like for instance the middle part of the cameraman's tripod. We see that the weight α 1 drops significantly at this area aiming to reduce this effect. Otherwise both bilevel algorithms preserve better the detailed area of the camera with the weights having small values there.
Parrot: Here the best results with respect to both PSNR and SSIM are achieved by the two bilevel TGV algorithms, (P h TGV -p.d.) and (P h TGV -d), respectively. There is significant improvement over all TV methods, which is due to the parameters being chosen in a way such that the staircasing effect diminishes. Furthermore, we observe improvement over the scalar TGV results especially around the parrot's eye, where the weights α 1 drop significantly; see the second column of Figure 7.
Turtle: We get analogous results here as well, with the bilevel TGV (P h TGV -p.d.) producing the best results with respect to both PSNR and SSIM. There is a significant reduction of the staircasing effect, while the weight α_1 drops in the detailed areas of the image (head and flipper of the turtle).
Hatchling: In this image, the best PSNR is achieved by (P h TGV -p.d.), but only marginally. In fact, the best SSIM is achieved by the scalar version of the dual TGV algorithm, also with a comparable PSNR. Similarly, at least with respect to PSNR, scalar TV is marginally better than bilevel TV. We attribute this to the fact that the natural oscillatory features of the image are interpreted as noise by the upper level objective. Nevertheless, all the bilevel methods are able to locate and better preserve the eye area, i.e., the sand in focus, with the weight α_1 dropping there significantly.
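The quality comparisons above rely on PSNR and SSIM; for reference, a minimal PSNR computation is sketched below (SSIM is typically taken from a library implementation), with the image data range treated as an assumption.

```python
import numpy as np

def psnr(u, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((u - ref) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(7)
ref = rng.random((256, 256))
noisy = np.clip(ref + 0.1 * rng.standard_normal(ref.shape), 0.0, 1.0)
print(f"{psnr(noisy, ref):.2f} dB")
```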
Finally, we show an example where also the weight α 0 varies spatially. For simplicity we use here only the primal-dual version (P h TGV -p.d.). We note that by spatially varying both TGV parameters, the reduced problem becomes highly non-convex with many combinations of these parameters leading to similar values for the upper level objective. In order to deal with this, we use the following initialization strategy, which according to our numerical experiments, produces satisfactory results. We keep the spatial weight α 1 fixed, as it has been computed from the previous experiments, see the last row of Figure 7, and we optimize only with respect to a spatially varying α 0 . As initialization for α 0 , we set it constant, equal to 5.
In Figure 8 we depict the computed spatially varying parameters α_0 as well as the corresponding PSNR and SSIM values. Observe that the shape of α_0 is different from that of α_1; compare the last row of Figure 7 to the second row of Figure 8. This implies that a non-constant ratio α_0/α_1 is preferred throughout the image domain. Secondly, by spatially varying α_0 we only get a slight improvement with respect to PSNR and SSIM in all images, apart from the last one. However, it is interesting to observe the spatial adaptation of α_0 with respect to piecewise constant versus piecewise smooth areas. The values of α_0 are high in large piecewise constant areas, like the background of the cameraman, the left area of the parrot image, as well as the top-right corner of the turtle image. This is not so surprising, as large values of α_0 imply a large ratio α_0/α_1 and a promotion of TV-like behaviour in those areas. We can observe this in more detail in the parrot image; see the last row of Figure 8. On the contrary, the values of α_0 are kept small in piecewise smooth areas like the right part of the parrot image and the sun rays around the turtle's body. This results in a low ratio α_0/α_1 and thus in a more TGV-like behaviour, reducing the staircasing effect. This is another indication of the fact that, by minimizing the statistics-based upper level objective, one is able not only to better preserve detailed areas but also to finely adjust the TGV parameters such that the staircasing is reduced.
Conclusion
In this work we have adapted the bilevel optimization framework of [37,39] for automatically computing spatially dependent regularization parameters for the TGV regularizer. For that we first examined two variants of the TGV regularization problem establishing rigorous dualization frameworks that form the basis for their algorithmic treatment via Newton methods. We showed that the bilevel optimization framework with the statistics/localized residual based upper level objective is able to automatically produce spatially varying parameters that not only adapt to the level of detail in the image but also reduce the staircasing effect.
Future continuation of this work includes adaptation of the bilevel TGV framework to advanced inverse problem tasks, e.g., Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) reconstruction, as well as to multimodal medical imaging problems where structural TV-based (edge-aligning) regularizers have been suggested. Adaptation of the framework to different noise distributions, e.g., Poisson, salt-and-pepper, as well as combinations of those [19,20], should also be investigated. A fine structural analysis of the weighted TGV regularized solutions in the spirit of [34,44] would also be of interest.
Figure 8. Experiments with optimizing over a spatially varying α_0 (panels: spatial α_1 with scalar α_0; spatial α_1 with spatial α_0; and corresponding detail views). Top row: the automatically computed scalar parameters α_0 that correspond to the images of the last row of Figure 5. Middle row: the automatically computed spatially varying parameters α_0, where α_1 has been kept fixed (last row of Figure 7). The weight α_0 adapts to piecewise constant parts, having large values there and hence promoting TV-like behaviour; see for instance the parrot image in the last row. On the contrary, α_0 has low values in piecewise smooth parts, promoting a TGV-like behaviour and reducing the staircasing.
Note that the use of symmetric differences for the mixed derivative results in a symmetric matrix representing D_xy. All the resulting operators D_xx, D_xy, D_yy are then symmetric. For the discrete second divergence div² : V_h → U_h, we have div² p = D_xx p_11 + 2D_xy p_12 + D_yy p_22. The vector bi-Laplacian is an operator V_h → V_h where p → (∆² p_11, ∆² p_12, ∆² p_22) with ∆² = D_xxxx + D_yyyy + D_xxyy + D_yyxx. The resulting stencil for ∆² is as shown below. In order to reflect the boundary conditions of H²_0(Ω, S^{2×2}), the bi-Laplacian must be endowed with both zero Neumann and zero Dirichlet boundary conditions. Again this is enforced by considering any ghost points (up to two of them at the boundary) to have zero value. Finally we discuss the discretization of the operator ∇²div² : V_h → V_h, which is equal to (∇²div² p)_11 = D_xxxx p_11 + 2D_xxxy p_12 + D_xxyy p_22, (∇²div² p)_12 = D_xyxx p_11 + 2D_xyxy p_12 + D_xyyy p_22, (∇²div² p)_22 = D_yyxx p_11 + 2D_yyxy p_12 + D_yyyy p_22, where in fact it holds D_xxxy = D_xyxx, D_xxyy = D_xyxy = D_yyxx and D_xyyy = D_yyxy. For these fourth order discretized differential operators we use the stencils | 2020-02-14T02:01:21.141Z | 2020-02-13T00:00:00.000 | {
"year": 2020,
"sha1": "8b6f5695f6d09da9dbefd3ad3daacc250674a5a5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8b6f5695f6d09da9dbefd3ad3daacc250674a5a5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
229400700 | pes2o/s2orc | v3-fos-license | ANALYSIS OF USING OF SOCIAL MEDIA AS COMMUNICATION FOR LECTURES AT PTKIN: Case Study of Da’wa Management of IAIN
Social media is a tool that cannot be separated from modern human life, regardless of age, location, profession, and status. Every message can be sent quickly and accurately to other people over both short and long distances. The Covid-19 outbreak forces everyone to practice social distancing to minimize the spread of the virus, and this includes teaching and learning activities at schools and campuses. This research discusses the social media that have been used by students and lecturers in online learning on various platforms, as directed by lecturers or the campus, among students of the Da’wa Management department at IAIN Sultan Amai Gorontalo and IAIN Pare-Pare. It aims to analyse the use of social media during the online learning period that took place in the last semester, so that researchers and scholars can explore and learn effective ways of organizing online learning in the future. This study used a descriptive qualitative method carried out in two different places (IAIN Sultan Amai Gorontalo and IAIN Pare-Pare). The researchers collected data by observation, documentation, and interviews with several structured and non-structured questions. The collected data were analyzed by the researchers to draw conclusions about the problems that were found. The results of this study indicate that the use of social media as communication for online lectures in the Da’wa Management Department of IAIN Sultan Amai Gorontalo and IAIN Pare-Pare is effective, with several caveats related to the small proportions reported for students' understanding of the material, personal enjoyment, influence on attitudes, social relations between students and lecturers and among students, and personal actions during the Covid-19 pandemic.
A. Introduction
The development of technology and information is so fast and rapid, sometimes even difficult to control for those who are reluctant to keep up with the times. The most basic thing is the existence of the internet with various advantages and speeds that can be used in various activities of human life. Communication without the internet is like eating rice without side dishes, which means that the internet has become an important part of communication in society, both interpersonal communication, intercultural communication, to communication between communities, which does not impose boundaries and always provides wide space for its users. Communication via the internet can be accessed using cell phones or cellphones, computers and laptops, with a connection device or connector that becomes a bridge to get a signal to facilitate internet access wherever and whenever.
In this modern era, we can be sure that almost all people in this world own and utilize the internet, especially through cellular phones or handphones. The use of the internet in various activities has become a common practice in communicating, it's just how a person can use it properly and optimally so as to avoid the abyss of humiliation. In ancient times, everyone was always limited by time and space to communicate with each other, then with the development of this technology space and time were not the main problems. The internet is the focus that is used in a device, in which there is a medium that can connect people to strengthen social relationships. This media allows users to represent themselves in interacting, cooperating, sharing, communicating with other users to form virtual social bonds (Nasrullah: 2015).
Social media is a means that cannot be separated from modern human life today, regardless of age, location, profession and even status. Each message can be conveyed quickly and precisely to other people both short and short distances. One of the benefits of social media as a source of information through various existing platforms and applications such as Facebook, Twitter, Instagram, YouTube, Telegram, WhatsApp, Line and others is by creating an account as a personal identity to facilitate access to it without being restricted by anything. Social media that is supported by good internet services and optimal networks will further strengthen the relationship between the owner and the device, internet users will be more comfortable doing activities on their social media. The convenience obtained is a personal alternative in finding answers related to social problems.
A study conducted by We Are Social in 2019 (www.databoks.katadata.co.id) states that there are 355.5 million telephone connections against Indonesia's population of 268.2 million, which means that telephone connections outnumber the total population. It can also be said that some people own two or more telephones to make it easier to communicate with other parties. Of the 355.5 million telephone connections, only 150 million people use the internet and are active on social media; this figure consists of 20 million active users of social media on computers or laptops and 130 million active users of social media via cellular or mobile telephones. Social media is used as a learning technology, an entertainment medium, a communication medium, and a forum for discussion in finding solutions and making decisions when dealing with problems in society. The study also provides a diagram of telephone, internet and social media users in Indonesia. A social problem that is currently spreading is the Covid-19 outbreak, which has forced everyone to carry out social distancing to minimize the spread of the virus, including in teaching and learning activities at schools and campuses. The Covid-19 outbreak has been occurring since the end of 2019, with the largest number of cases in China at 81,620 cases (www.worldmeters.info). Meanwhile, in Indonesia, as of April 2, 2020, 170 people had died, 1,790 were positive and 112 had recovered (www.covid19.go.id). This makes Indonesia and even the world take a preventive attitude with social distancing in various places, such as not holding gatherings in public places, using masks and always washing hands. Social media plays an important role in socializing social distancing and provides important education to the entire community in the effort to stop the spread of this virus.
Schools and campuses as media for public gatherings must also be temporarily shifted to social media as a follow-up to a more effective and efficient teaching and learning process in the Covid-19 era. Although various effects arise from the online or online teaching and learning process, this does not discourage continuing education and sharing knowledge among teacherstudents and lecturer-students. Various social media are used to facilitate both parties, so as to minimize conflicts that will arise. Social media can reach all elements of society both in villages and in cities, but it can affect personal emotional which then can hinder communication, with the existence of an unsupportive environment and habits such as minimal network and no cell phone due to low economy. The teaching and learning process must inevitably be carried out with the coercion of existing conditions with appropriate and optimal actions. Social media has become a new tool and even a new weapon in carrying out various missions in order to create an optimal succession, including in the field of education at the primary, secondary to tertiary levels. The involvement of various elements in communicating through social media has become a special concern in building good relationships between individuals. According to Nasrullah (2015) The presence of social media and the growing number of users from time to time have provided an interesting fact about the enormous power and impact of the internet on lives. Various facts that emerge both in the form of information and news in various aspects through social media, have made it a phenomenon in this modern era. Phenomena in the field of education also appear continuously, with the response of the community through social media that has made it a viral thing and even trending topics on various platforms. The world of education has gradually adapted to the use of social media, especially supported by the age of education which is classified as a teenager. Nur Ainiyah (2018) said that social media has brought and shaped a new world in the mindset of millennial adolescents in interacting and communicating in new ways, especially in the world of education, which of course as students expect a media that facilitates the educational process, and without realizing it, social media has become the answer. Social media provides educational messages such as Wikipedia and so on, and social media as a medium of liaison between teenagers who are building networks and acquaintances so that they can be used in the future both in business, politics and in society.
B. Theoretical Review
According to Hamzah (2015), education supported by social media is a development of online learning technology, which is a complementary method to traditional classroom learning. The social media used between students and their lecturers strengthen collaboration between individuals, which then supports the smoothness of the academic learning process. Social media is a solution for the academic world given the limited physical documents or paper used in every academic activity, so that students and lecturers can interact through digital documents that can be accessed by both parties without any restrictions. Apart from digital documents, other media such as video, audio, and e-mail can be accessed to deliver messages, and lecturers and students can now even video call each other using several platforms that have upgraded their systems. According to Suryadi et al. (2018), the use of social media, namely WhatsApp, has a strong effect on students' learning discipline, especially in the subject of Islamic Religious Education; this is due to the large number of students using WhatsApp during class hours, so that their learning discipline becomes weak and they do not focus on following the lesson. In Hanoum's (2014) research, the use of social media in learning is very useful in increasing the active participation of students in the learning process, which ultimately has an impact on learning outcomes. This can happen because social media users, namely students, use it as a place to share information, hold discussions and carry out other activities, which are often constrained by limited time in class.
Based on some of the research above, social media is an interesting element of interpersonal connectivity, from students to lecturers, lecturers to students and among fellow students. Therefore, this research discusses the use of social media by students with their lecturers during online lectures on various platforms such as Facebook, Twitter, Instagram, YouTube, Telegram, WhatsApp, Line and others, as directed by lecturers and the campus, in particular at State Islamic Religious Colleges (PTAIN) during the Covid-19 period. Social media is a form of communication used by many people, and this communication will run well and be considered effective if it gives rise to these five things: understanding, pleasure, influence on attitudes, good social relations and action (Tubbs et al: 2002).
Based on the descriptions above, this study analyses the use of social media as a means of communication during online lectures at two PTAINs, namely the Sultan Amai State Islamic Institute (IAIN) and the State Islamic Institute (IAIN) Pare-Pare, focusing on one study program, Da'wa Management, on both campuses. The research questions are: what social media are used during online lectures at the Sultan Amai Gorontalo State Islamic Institute (IAIN) and the Pare-Pare State Islamic Institute (IAIN); and how effective is the use of social media as communication between the two parties (lecturers and students) during online lectures at the Sultan Amai State Islamic Institute (IAIN) in Gorontalo and the Pare-Pare State Islamic Institute (IAIN).
C. Methods
This research uses a descriptive qualitative method with a case study approach carried out in two locations. The data collection techniques used were observation, interviews and documentation, with a focus on interview techniques and on structured and non-structured questions in the questionnaire as a research tool. Interviews were conducted to extract data from sources by filling out a questionnaire that had been prepared by the researchers. According to Moelong (1993), research using interview techniques for data collection and questionnaires filled out by respondents aims to ensure that the results obtained can be used to achieve the research objectives and targets, based on questionnaires that have been prepared in advance.
The researchers conducted direct interviews with respondents using questions and answers that had been prepared in the questionnaire as an interview guide. The data obtained from respondents through the completed questionnaires were then analyzed so that they could be presented optimally by the researchers, providing conclusions about the problems that were found. In this study the researchers conducted interviews with 10% of the total students in the Da'wa Management Study Program from two PTAIN campuses, namely the Sultan Amai Gorontalo State Islamic Institute (IAIN) and the Pare-Pare State Islamic Institute (IAIN), regarding the analysis of social media as online lecture communication throughout the Covid-19 outbreak, namely within two months from 1 July 2020 to 31 August 2020.
D. Results and Discussion
Da'wa management is a social scientific cluster developed in state and private universities in Indonesia. The management of Da'wa at the Sultan Amai State Islamic Institute (IAIN) in Gorontalo is under the auspices of the Faculty of Ushuluddin and Da'wa and was established in 2013, with a vision of "Making the Department of Da'wa Management a Center for the Development of Excellent Da'wa Management Science" and the following mission: 1. Carry out a process of scientific transformation in the field of da'wa management in an effective and updated manner, and integrated with science and culture. 2. Conduct research and development of knowledge in the field of da'wa management based on local culture and information technology. 3. Carry out community service in the form of da'wa practice and the application of research results and establish collaborative relationships between institutions. Meanwhile, the Management of Da'wa at the State Islamic Institute (IAIN) Pare-Pare is under the auspices of the Faculty of Ushuluddin, Adab and Da'wa and was established in 2014, with a vision of "Carrying Out Management of Da'wa Management Based on Acculturation and Information Technology in Eastern Indonesia in 2025" and the following mission: 1. Organizing the tri dharma of higher education that is competitive and has character towards the stability of faith, moral maturity and professional stability based on information technology in the field of da'wa management. 2. Organizing Islamic acculturation studies with the cultural treasures of the archipelago in the field of da'wa management.
3. Realizing professional human resources, with an entrepreneurial spirit, through integrated Islamic studies in the field of da'wa management.
The student population of the Da'wa Management Study Program of the State Islamic Institute (IAIN) Sultan Amai Gorontalo in the 2019/2020 even academic year was 106 students, while the Da'wa Management Study Program of the State Islamic Institute (IAIN) Pare-Pare in the same academic year had 323 students. In this study, a sample of more than 10% of the total population of each campus was obtained. Based on these data, 105 resource persons were obtained, composed of 66 students of IAIN Sultan Amai Gorontalo Da'wa Management (62.2% of the 106 students) and 39 students of IAIN Parepare Da'wa Management (12% of the 323 students). The 105 resource persons can also be classified by age group, and all of them are active users of social media; classified according to the platforms most often used in cyberspace, Facebook was found to be the most used social media, at 80%. In the teaching and learning process during the Covid-19 period, various applications have been used as learning support media and utilized as fully as possible to achieve optimal goals, so that the knowledge conveyed by lecturers can be understood by students properly. From the data of the 105 informants, it was found that most students undergoing online lectures used the WhatsApp application (43.8%), while another application used in the learning process was Edlink (13.4%). Lecturers' teaching methods vary widely, so students also receive material and discussion in varying proportions, and students who frequently experienced boredom were found to have been given minimal material beforehand. In the data obtained, the majority of lecturers gave a balanced portion of assignments and materials; 54.3% of the 105 respondents stated that they received this balanced portion. Social media can be said to be effective as a communication medium during online lectures, which also serves as a learning medium during the Covid-19 period, based on five things, the first of which is understanding. In the process of communicating during online lectures, students understand the material presented by the lecturer, and the lecturer also provides a good understanding of the material presented to students. From the 105 sources, it was found that 61% of students agreed that during online lectures the lecturer had provided a good understanding of the material presented, while 44.8% of students were doubtful about their understanding of the material presented by the lecturer. Happiness or pleasure is the second thing that makes social media effective as a medium of communication during online lectures in the Covid-19 period: during online lectures, lecturers always provide a happy atmosphere in every meeting session, and students feel happy at every online lecture meeting session. Of the 105 resource persons, 56.2% of students agreed that during online lectures the lecturer had provided happiness at every meeting, and 41.9% of students were doubtful or could be said to be only quite happy during online lectures.
Researchers obtained the following data: The third aspect that makes social media effective as a medium of communication during online lectures during the Covid-19 period is the influence on the attitudes of messages delivered by lecturers to students. There are 2 things to pay attention to in this aspect, namely the material presented by the lecturer which influences the good attitude of students in every online lecture session, and students feel there is a change in attitude (to positive) during online lectures. It has been obtained from 105 students that as many as 59% of students agree that the material presented by the lecturer has an effect on good attitudes on students in each online lecture session and 49.5% of students agree that there is a change in attitude (to positive) during online lectures.
Changes in attitudes felt by students during online lectures include more polite speech, time discipline, more attention to health, more diligence in doing assignments and in the teaching and learning process, direct application of knowledge in the community, being motivated and enthusiastic in carrying out social activities, being more obedient to both parents, getting closer to family, and utilizing social media or cell phones to learn optimally. The next thing that makes social media effective as a medium of communication during online lectures in the Covid-19 period is that good social relations are built between students and between students and lecturers. The things to consider in this aspect are the lecturer building good social relations with students in every online lecture session and the absence of conflicts arising during online lectures, both between lecturers and students and among fellow students. It was found that 66.7% of the 105 students agreed that lecturers build good social relationships with students in every online lecture session, 59% of students agreed that there was no conflict arising during online lectures between lecturers and students, and 59% of students agreed that there was no conflict arising during online lectures among fellow students. The last point of focus for social media to be considered effective as a medium of communication during online lectures in the Covid-19 period is the presence of patterns of action and character related to the lecture material. There are several things to consider in this pattern of action, namely whether the lecturer teaches patterns of action and character through each material presented to students in each online lecture session, and whether the patterns of students' daily actions are influenced by the material presented by the lecturer during online lectures. Of the 105 students, 59% agreed that the lecturer taught patterns of action and character through each material presented in each online lecture session, and 43.8% of students agreed that their daily action patterns were influenced by the material presented by the lecturer during online lectures. The five things obtained by students during online lectures, namely understanding, pleasure or happiness, influence on attitudes, good social relations and actions, constitute the findings of the analysis of using social media as online lecture communication on the two PTAIN campuses, namely the State Islamic Institute (IAIN) Sultan Amai Gorontalo and the State Islamic Institute (IAIN) Pare-Pare. In fact, as many as 78.1%, or 82 people, stated that the applications used by lecturers during online lectures through social media, such as WhatsApp, Zoom, Google Meet and Edlink, facilitated them, for several reasons: increasing individual independence, applying a disciplined and healthy lifestyle, helping to stop the spread of the Covid-19 virus, and teaching learning habits wherever and whenever.
Meanwhile, only 21.9%, or 23 people, stated that social media does not facilitate the teaching and learning process, for several reasons: the lack of provider networks, especially in rural areas; lectures that are not face-to-face with lecturers, which further reduce the personal bond in the delivery of knowledge; frequent rolling electricity blackouts in some areas, especially rural areas; and the lack of adequate hardware devices to support lectures, such as laptops and capable Android phones.
E. Conclusion
Social media as an effective means of communication, especially in the online lecture process, provides several benefits even though it is not optimal, namely the delivery of understanding of the material from lecturers to students, while students learn more independently to understand the material presented within limited space and time. The learning process still has a happy and enjoyable impact, especially on students, which reduces the feeling of boredom arising from the adaptation to new learning methods. Communication through social media used for online lectures has an effect on changing students' attitudes both within the family and in community life. Social relations are maintained both between lecturers and students and among fellow students, so that harmony in learning is still well preserved, and the delivery of messages through virtual actions for the development of student character can still take place. | 2020-12-27T10:09:17.483Z | 2020-11-26T00:00:00.000 | {
"year": 2020,
"sha1": "0087d3e02d7312e874f79e2bee1fb0f642f8707e",
"oa_license": "CCBY",
"oa_url": "http://proceedings.uinsby.ac.id/index.php/ICONDAC/article/download/392/419",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3ba0956a9a9ee24d33729bb73a1a546fd69975ad",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
251850994 | pes2o/s2orc | v3-fos-license | FLOWERING LOCUS T indel variants confer vernalization-independent and photoperiod-insensitive flowering of yellow lupin (Lupinus luteus L.)
Abstract Ongoing climate change has considerably reduced the seasonal window for crop vernalization, concurrently expanding cultivation area into northern latitudes with long-day photoperiod. To address these changes, cool season legume breeders need to understand molecular control of vernalization and photoperiod. A key floral transition gene integrating signals from these pathways is the Flowering locus T (FT). Here, a recently domesticated grain legume, yellow lupin (Lupinus luteus L.), was explored for potential involvement of FT homologues in abolition of vernalization and photoperiod requirements. Two FTa (LlutFTa1a and LlutFTa1b) and FTc (LlutFTc1 and LlutFTc2) homologues were identified and sequenced for two contrasting parents of a reference recombinant inbred line (RIL) population, an early-flowering cultivar Wodjil and a late-flowering wild-type P28213. Large deletions were detected in the 5′ promoter regions of three FT homologues. Quantitative trait loci were identified for flowering time and vernalization response in the RIL population and in a diverse panel of wild and domesticated accessions. A 2227 bp deletion found in the LlutFTc1 promoter was linked with early phenology and vernalization independence, whereas LlutFTa1a and LlutFTc2 indels with photoperiod responsiveness. Comparative mapping highlighted convergence of FTc1 indel evolution in two Old World lupin species, addressing both artificial selection during domestication and natural adaptation to short season environmental conditions. We concluded that rapid flowering in yellow lupin is associated with the de-repression of the LlutFTc1 homologue from the juvenile phase, putatively due to the elimination of all binding sites in the promoter region for the AGAMOUS-like 15 transcription factor.
Introduction
Synchronization of flowering time with a particular season is essential for the reproductive success of plants growing in climates with significant annual cycles. Knowledge of the regulatory network underlying flowering control would facilitate the breeding of new plant varieties that are better adapted to target agroecosystems. This issue is becoming more urgent in the era of climate change, which is progressively narrowing the time window for spring sowing of vernalization-responsive crops [1,2]. This is due to the shortening of winter and spring in the Northern Hemisphere mid-latitudes, where temperate climate crops are cultivated [3]. As legume species generally have low tolerance to freezing temperatures [4], in colder regions of the temperate climate, which include major European cultivation areas, vernalization-responsive species are sown in early spring rather than in autumn to fulfil vernalization requirements without an excessive risk of frost damage. Nevertheless, higher winter temperatures resulting from climate change may also lead to incomplete fulfilment of the vernalization requirement in some winter plant species, resulting in delayed flowering or a failure of floral induction [5]. Indeed, a long-term study based on a 47-year record (1954-2000) of phenological changes revealed progressively delayed flowering in natural populations with high vernalization requirements [1], whereas analysis of the inter-annual sensitivity of winter wheat yields to vernalization degree days during 1975-2009 revealed their potential vulnerability to warming-mediated vernalization variation in temperate climates [6]. Moreover, spring heat waves, which may occur more frequently in warmer climates, can erase epigenetic marks of vernalization, resulting in de-vernalization and delayed flowering [7,8].
Genetic and molecular regulation of flowering induction is best understood in the model plant Arabidopsis thaliana (L.) Heynh. Flowering induction pathways integrate environmental factors such as vernalization, high temperature, photoperiod and light quality as well as endogenous signalling, such as the gibberellin pathway, ageing and carbohydrates [9]. These pathways converge in the transcriptional regulation of floral integrator genes. The key floral pathway integrator responding to environmental signals (low and high temperature, photoperiod and light quality) is the FLOWERING LOCUS T (FT) gene [10,11]. Arabidopsis has only two FT-like genes (FT and one close homologue, TWIN SISTER OF FT), whereas legume genomes usually encode a higher number of FT-like genes, assigned to three subclades, FTa, FTb and FTc [12,13]. While FT retains this basic role in all flowering plants studied to date, it is unclear how many homologues perform the role of floral integrator in legumes. Moreover, the involvement of particular FT homologues in photoperiod or vernalization response varies considerably between legume species [12,[14][15][16][17][18]. In the present study, we selected yellow lupin (L. luteus L.), an annual legume with wild winter annual and domesticated spring annual forms, as a model to explore the sequence and functional divergence of FT homologues.
Yellow lupin (L. luteus L.) is a grain legume crop natively distributed primarily across the coastal regions of the Iberian Peninsula [19]. Yellow lupin evolved numerous adaptations to deal with the dry-summer Mediterranean climate (such as drought avoidance by early phenology) and with the biotic pressures occurring in this environment (i.e. anthracnose and aphid resistance) [20][21][22][23]. Yellow lupin has a very short history of domestication compared with other legumes, because all the key milestones in converting wild types to domesticated forms were achieved in the 20th century [24]. Modern yellow lupin cultivars have low alkaloid and high protein content in seeds, and as such are currently considered a highly nutritional alternative to soybean meal in animal diets [25]. Yellow lupin phenology studies revealed high variability of the vegetative period, resulting mainly from differences in vernalization requirements and long-photoperiod preferences [26,27]. Depending on the application (green manure, biomass, or grain production) as well as climatic constraints (spring or winter sowing, the need for drought escape), breeding pressure on vernalization requirements and early phenology differs [28]. As the juvenile, photoperiod-nonresponsive phase is relatively short in lupins [29], matching periods with conditions that ensure fulfilment of vernalization and photoperiod requirements becomes challenging under climate change. Determining the molecular components underlying the existing variability in vernalization response and phenology would help to address this issue, enabling molecular-assisted breeding. Moreover, such knowledge could facilitate studies in other vernalization-responsive legumes. Indeed, vernalization control in this large plant family differs from the Arabidopsis model because the majority of legumes do not have a key gene from the vernalization pathway, FLOWERING LOCUS C (FLC), whereas soybean, which has retained one such homologue, is vernalization independent because it evolved in a warm sub-tropical environment [30]. Early flowering based on thermoneutrality (vernalization independence) is the key agronomic trait of lupins, enabling their successful cultivation in temperate climates (spring-sown in Northern Europe or autumn-sown in the mild Mediterranean-like climates of Australia) [29]. Quantitative trait locus (QTL) mapping revealed that flowering time in yellow lupin is controlled by several QTLs, with one major locus for vernalization response [31]. In the closely related species, narrow-leafed lupin, vernalization insensitivity is also based on a single locus, conferred by natural mutations (Ku and Jul) that occurred during the domestication period [32,33]. These mutations constitute two overlapping deletion variants in the promoter region of one of the FT homologues, the LanFTc1 gene [18,34]. Recent yellow lupin mapping studies revealed collinearity between the linkage group carrying the major QTL for vernalization response and the narrow-leafed lupin genome scaffold carrying the LanFTc1 sequence [35,36]. Flowering time in another related species, white lupin, was found to be under quantitative control, with two QTLs associated with FTa and FTc gene-based markers [37][38][39][40]. All these findings improved knowledge of the key role of FT in flowering time control in legumes and drew our attention to the FT clade present in the yellow lupin genome.
In this study, the involvement of FT genes in flowering time control in yellow lupin was analyzed by several complementary approaches, including linkage and QTL mapping, gene sequencing and quantitative expression profiling under two contrasting photoperiod and vernalization conditions. Moreover, a yellow lupin germplasm diversity panel carrying wild and domesticated accessions was phenotyped for selected phenology traits and vernalization responsiveness in controlled conditions as well as genotyped for the presence of indel polymorphisms using PCR-based markers spanning the whole FT gene sequences and promoter regions. The study provided several independent lines of evidence to support the involvement of three FT homologs in flowering time control in yellow lupin, with the LlutFTc1 homologue playing the key role in domestication driven by reduction of juvenile phase and elimination of vernalization requirements to induce flowering.
Yellow lupin phenology is strongly determined by genotype
Phenotypic data of phenology traits analyzed with and without pre-sowing vernalization under ambient long day photoperiod were obtained for 109 yellow lupin accessions in the 2016 trial and for 111 accessions in the 2017 and 2019 trials (Tables S1 and S2). High variability between particular accessions in phenology in the absence of vernalization was observed, ranging from 42.5 ± 2.2 (PRH444/14) to 82.4 ± 5.5 (Biscainhos-4) days to the first floral bud emergence, from 50.7 ± 1.4 (PRH444/14) to 91.8 ± 5.3 (Biscainhos-4) days to the onset of flowering and from 68.3 ± 2.7 (Idol) to 101.3 ± 4.7 (Biscainhos-4) days to the end of flowering on the main stem. Accessions in the diversity panel differed also in vernalization requirements, ranging from full thermoneutrality to very high responsiveness, manifested by acceleration of transition from vegetative to generative growth phases by 3 weeks. Estimated marginal mean pairwise comparisons revealed that 22 lines were fully thermoneutral for bud emergence, 29 for start of flowering, 18 for end of flowering, and 9 lines for all of these traits (Table S3). The broad sense heritability coefficients of phenology traits in the yellow lupin diversity panel were high in the absence of vernalization, ranging from 74.9% to 81.4% and moderately high with vernalization treatment, ranging from 55.6% to 58.9% (Table 1).
Large indel series are present in the promoter regions of all yellow lupin FT homologues
BLAST analysis of the yellow lupin draft genome assembly using L. angustifolius FT genes identified four FT homologues. Aligning these with other legume FT homologues allowed their assignment (by Bayesian inference) to the FTa clade (LlutFTa1a and LlutFTa1b) and the FTc clade (LlutFTc1 and LlutFTc2) (Fig. 1). This analysis also confirmed the reported lineage-specific duplications of FTa and FTc homologues in lupins [41]. [Table 1 footnote: phenotypic trait abbreviations are as follows: the number of days to floral bud emergence without vernalization (BE) and with vernalization (BE+v), the number of days to start of flowering without vernalization (SF) and with vernalization (SF+v), and the number of days to end of flowering without vernalization (EF) and with vernalization (EF+v).] [Figure 1 legend: L. luteus genes were identified in this study, L. albus sequences were extracted from the reference genome [42], and the remaining sequences were derived from recent phylogenetic studies [12,14,41]; NCBI accession numbers or L. albus gene names [42] are given in parentheses.]
Four yellow lupin accessions differing in domestication status and phenology were selected for further studies: PRH444/14 (Polish breeding line, very early flowering and thermoneutral), Wodjil (Australian cultivar, early flowering and near-thermoneutral), Parys (Polish cultivar, intermediate flowering and responsive to vernalization), and P28213 (Azorean wild population, late flowering and highly responsive to vernalization). Phenotyping in climatic chambers under two contrasting photoperiods confirmed the variability in earliness and vernalization responsiveness observed between these accessions in the greenhouse (Table 2).
Full FT sequences, including ∼8 kbp promoter regions, were retrieved by sequencing and assembly of overlapping PCR products (Table S4). This revealed the presence of indels in the promoter and other regions. PCR-based screening of the diversity panel was then performed. Of the 76 primer pairs tested, 21 revealed indel polymorphism detectable at the resolution of 2% agarose gel electrophoresis, with minor allele frequencies ranging from 0.009 to 0.234 (Table S5). Eight different long (≥6 bp) indel variants were identified for the LlutFTa1a gene, seven for LlutFTa1b, four for LlutFTc1, and eleven for LlutFTc2 (Table 3, Fig. 2).
Sequencing performed for the PRH444/14, Wodjil, Parys and P28213 accessions revealed, besides long indels, numerous SNPs and/or short (≤5 bp) indels, namely 48 in LlutFTa1a, 258 in LlutFTa1b (including one in the fourth exon), 32 in LlutFTc1 and 81 in LlutFTc2 (including one in the second exon and two in the fourth exon). In LlutFTa1a, LlutFTa1b and LlutFTc1, the majority of SNPs were localized in the promoters, accounting for 41, 213, and 24 SNPs, respectively. In LlutFTc2, the majority of SNPs (56) were found in the third intron. FGENESH+ provided identical protein sequence predictions for all nucleotide variants. The list of polymorphic loci is presented in Table S6, and FASTA alignments are provided in Supplementary File 1.
LlutFTa1a, LlutFTc1 and LlutFTc2 indels are strongly associated with flowering time and vernalization responsiveness
Three indel markers from LlutFTa1a, one from LlutFTc1 and six from LlutFTc2 revealed statistically significant correlation with all phenology traits observed in the three-year series of greenhouse experiments, as well as with vernalization responsiveness (understood as the shift in the BLUP for the number of days in the vernalized versus non-vernalized variant) (Table S7, Fig. 2). The large indel from the LlutFTc1 promoter (2227 bp, indel 1) showed the highest association with phenology traits (Spearman's rank correlation coefficient, rho [ρ], from 0.70 to 0.72) and with vernalization responsiveness (ρ from −0.66 to 0.53) among all analyzed markers. Interestingly, this LlutFTc1 indel allele was found only in domesticated germplasm, except for one landrace originating from Palestine (Palestyna-5). It should be noted that two other LlutFTc1 promoter indel markers (indels 3 and 4) did not reveal significant correlation with any of the phenology traits. Five LlutFTc2 indels, namely indels 1 and 2 from the promoter, indels 3 and 4 from the second intron and indel 9 from the third intron (carrying a large Copia-like retrotransposon insertion), also revealed significant correlation with phenology traits, but considerably lower than the large LlutFTc1 promoter indel 1 (ρ from 0.43 to 0.48), as well as moderate correlation with vernalization response (ρ from −0.33 to −0.28). The most 3′ LlutFTc2 indel 11, localized close to the fourth exon, showed significant but moderate correlation with phenology (ρ from 0.35 to 0.39) and vernalization responsiveness (ρ from −0.19 to −0.23). Three LlutFTa1 indels (indels 1 and 3 from the promoter and indel 7 from the third intron) had similar correlation with vernalization response as four of the LlutFTc2 indels (ρ from −0.33 to −0.25) and lower correlation with phenology (ρ from 0.35 to 0.39). Another two LlutFTa1a promoter indels (2 and 4) revealed significant correlation with one or three traits (ρ from 0.20 to 0.22), and with vernalization response for the end of flowering (ρ −0.22 and −0.23). Two rare LlutFTa1 alleles (FTa1a_F6_R6 presence/absence and indel 6) and all studied LlutFTa1b indels (2-7) did not reveal significant correlation with any trait. Marker grouping based on the distribution of FT indel polymorphism in the analyzed yellow lupin lines revealed the presence of two major clusters, one carrying markers that showed statistically significant correlation with all observed phenotypic traits and the other composed of the remaining markers (Fig. 3). [Figure 2 caption: Sequence polymorphism revealed in the Lupinus luteus FLOWERING LOCUS T (LlutFTa1a, LlutFTa1b, LlutFTc1 and LlutFTc2) genes. Black tags mark SNP and short (≤5 bp) indel loci, whereas red rectangles and blue triangles show exons and large (≥6 bp) indels, respectively. P-values of Spearman's rank correlation coefficients, calculated for the phenology traits (BE, time to floral bud emergence; SF, time to start of flowering; EF, time to end of flowering) and for the vernalization responsiveness of these traits (vBE, vSF and vEF, respectively), are shown in the scheme.]
LlutFTc1 and LlutFTc2 genes co-localize with two major QTLs for yellow lupin flowering time
Based on the identified polymorphisms, 3 indel and 13 CAPS markers were developed for LlutFTa1a, two CAPS markers for LlutFTa1b, and single indel markers for LlutFTc1 and LlutFTc2 (Table S8). Expected indel and restriction enzyme cleavage products were obtained in the parental lines for all markers. However, screening of the RIL mapping population revealed that two LlutFTa1a markers (FTa1a_F5_R5 and FTa1a_F13_R13) and one LlutFTa1b marker (FTa1b_M1_CAPS) were monomorphic. Moreover, segregation was significantly distorted (χ2 p-value 1E-11) from the expected 1:1 ratio for the remaining LlutFTa1 markers. The LlutFTc1 indel marker was localized in linkage group YL-21 (2.9 cM, LOD values to surrounding markers 26.5 and 23.1), the LlutFTc2 marker in linkage group YL-01 (40.8 cM, LOD values 25.9 and 23.1), and the LlutFTa1b marker (FTa1b_F19_R20) at the end of linkage group YL-60 (5.1 cM, LOD value 16.9). Markers with distorted segregation remained unmapped. QTL mapping was performed using the linkage map updated with these markers and published data on flowering time in the yellow lupin RIL population [31,35]. Six statistically significant QTLs (1000-permutation test p-value <0.05) were identified (Table S9). The LlutFTc1 and LlutFTc2 markers were localized directly in the major QTL peaks (Fig. 4), explaining approximately 11% (LlutFTc1) and 25% (LlutFTc2) of the observed phenotypic variance (flowering time of non-vernalized plants). Moreover, the LlutFTc1 marker matched the key locus for vernalization responsiveness in the yellow lupin RIL population.
As a finished genome assembly is not available for yellow lupin, synteny with the better-characterized close relative, narrow-leafed lupin, was explored. All yellow lupin flowering time QTLs revealed patterns of shared collinearity (Table S10). The majority of these blocks carried known regulators from the vernalization and photoperiod pathways. Thus, the QTL on linkage group YL-01 was in a collinear region of NLL-17 carrying the LanFTc2 gene, and the QTL on YL-21 was in a collinear region of NLL-10 encoding the LanFTc1 gene. Moreover, the QTL on YL-03 matched the NLL-02 region containing the LanCOL-9 gene, and the QTL on YL-06 was syntenic to two NLL-20 regions separated by a break of collinearity; one of these regions carries the LanFTa1 gene (Table S10).
FT genes and alleles differ in responsiveness to vernalization and photoperiod
Four yellow lupin lines differing in time to flowering and vernalization responsiveness (PRH444/14, Wodjil, Parys and P28213) were subjected to LlutFTa1a, LlutFTa1b, LlutFTc1 and LlutFTc2 gene expression profiling under two contrasting photoperiods (Tables S11 and S12, Fig. 5). Comparing mean values from all data points, the LlutFTa1a, LlutFTc1 and LlutFTc2 genes revealed approximately 40-70 times higher expression levels than the LlutFTa1b gene (6.2 ± 15.4, 9.0 ± 16.8 and 4.9 ± 5.8 vs 0.13 ± 0.17, respectively). LlutFTa1a, LlutFTc1 and LlutFTc2 were considered good candidate genes due to the revealed association of their sequence polymorphism with phenology traits and vernalization responsiveness; therefore, their expression profiles are described herein in this context. Comparative analysis of expression levels between genotypes, photoperiods, vernalization variants, and growth phases is provided in Table S13. Moreover, the LlutFTa1a, LlutFTc1 and LlutFTc2 genes revealed a strong association between gene expression and the timing of transition from the vegetative to the generative phase (Fig. 5). [Figure 5 caption (sampling dates in Table S12): Three biological and three technical replicates were analyzed. Error bars show standard deviation. Two reference genes, a DEAD box RNA helicase 1 (LlutDRH1) and a beta tubulin 7 (LlutTUB7), were used for Cq normalization. LlutFTa1a and LlutFTc1 graphs are shown in log scale, whereas LlutFTa1b and LlutFTc2 in linear scale. Significance of the vernalization influence on gene expression is shown above the data points. Significance of the photoperiod influence is presented below the x axis (on the left panels for non-vernalized plants, n, and on the right panels for vernalized plants, v). *, significant (p ≤ 0.05); no symbol, not significant; −, not calculated due to very different variance between groups; ×, not calculated due to the lack of a corresponding data point for pairwise comparison.]
Differences in expression profiles between genotypes were significant. In the absence of vernalization (Fig. 5), LlutFTa1a gene expression in Wodjil was higher than in P28213 up to 79-fold
FT indels carry hypothetical binding sites of transcription factors from vernalization and photoperiod pathways
Promoter regions of the LlutFTa1a, LlutFTc1 and LlutFTc2 genes were annotated for the presence of hypothetical transcription factor binding sites. As the vast majority of binding sites for particular transcription factors were present in both the polymorphic and monomorphic regions, the number of motifs found only in the polymorphic loci was much lower, ranging from 0 to 137 hits (Table 4). As one polymorphic locus typically provided redundant hits, the real number of candidate unique transcription factors was even lower. Thus, just a few unique transcription factors that could participate in the regulation of flowering time in response to photoperiod and vernalization were identified. Taking into consideration reports from other studies providing evidence for FT promoter binding and/or control of flowering time (see Discussion), a narrow list of candidate transcription factors was selected, including TARGET OF EAT2 (TOE2) for the LlutFTa1a gene, AGAMOUS-like 15 (AGL15) for the LlutFTc1 gene and MYB62 for the LlutFTc2 gene (Table 4).
As lengths and positions of major LanFTc1 and LlutFTc1 promoter indels are similar [34] it would be interesting to know if the sets of indel-specific transcription factor binding sites are also similar. Therefore, we analyzed LanFTc1 indels in the same way as the LlutFTc1 indels (Table S15). This analysis highlighted AGL15 as a candidate diversifying transcription factor for LanFTc1 gene (Fig. 6), revealing 3 binding sites in Pal indel, 4 sites in Ku indel, 5 sites in Jul indel, and one site in the monomorphic region with much lower similarity score (0.81-0.82) than the sites in the indels (0.96-1.00). Moreover, in the LanFTc1 indels, binding sites were found for AGL71 (two in the Ku, Pal and Jul indels, another two in monomorphic regions) and SUF4 (1 in Ku, Pal and Jul, 0 in monomorphic regions). No candidate binding site for VRN1 was found in the whole LanFTc1 promoter.
Yellow lupin duplicates of FTa and FTc homologues as remnants of a lineage-specific ploidy event
The number of FTa and FTc homologues revealed in the yellow lupin genome in the present study is the same as in the narrow-leafed and white lupin genomes [18,39,40]. Bayesian inference provided evidence for a split into the FTa, FTb and FTc clades before the divergence of major Papilionoideae lineages [41]. This observation supports the concept that a simultaneous divergence of all major legume subfamilies was associated with the mass extinction at the Cretaceous-Paleogene boundary (66 million years ago) and a whole-genome duplication event [54][55][56]. Additional polyploidy events occurred later in downstream lineages, including lupin (hypothesized triplication) and soybean (duplication) [42,57,58]. Remnants of these processes can still be found in plant genomes in the form of additional gene copies arranged in collinear blocks. The mechanisms conferring the retention of duplicated genes are not well understood; nevertheless, it has been shown that genes with high retention rates include, among others, those from flowering and cold-responsive pathways [59,60]. The presence of particular lupin FTa and FTc paralogs in monophyletic clades (Fig. 1) suggests their origin by duplication in the ancestral Lupinus lineage. Interestingly, lupin species lost the whole FTb clade, which is very abundant in other legumes, whereas they retained duplicates of the FTc clade, which is usually single-copy [41].
Sub-functionalization of FTc1 into vernalization and FTa1 into photoperiod in lupins
The present study revealed sub-functionalization of LlutFTc1 into the vernalization pathway (Figs 2-4) and of LlutFTa1a into photoperiod response (Fig. 5). A similar observation was made for the LanFTc1 (wild allele) and LanFTa1 (Palestinian allele) genes in narrow-leafed lupin [18,61]. Unfortunately, providing direct evidence for these functions by targeted reverse genetics is currently impractical due to the constraints of the lupin transformation system. In other legume species, FT duplicates also revealed functional divergence between photoperiod and vernalization pathways, such as the genes MtFTb1 and MtFTb2 vs MtFTa2 in Medicago truncatula or PsFTb2 vs PsFTa1 in Pisum sativum, respectively [12,14,62]. Incorporation of different FT homologues into the vernalization pathway in legumes is not surprising when the timeline of ancient climate changes is placed in an evolutionary context. Vernalization as a trait likely evolved in response to the major global cooling that peaked at the Eocene-Oligocene boundary 34 million years ago, as evidenced for temperate Pooideae grasses [63,64]. Therefore, a general mechanism of vernalization response based on the FTc clade may have been established several million years before the ploidy event in the Lupinus lineage [58,65]. Following duplication, FTc1 orthologs retained basic functions whereas FTc2 differentiated in downstream lineages, resulting in loss of function in narrow-leafed lupin [18,61] and partial sub-functionalization in yellow lupin (Figs 2-5). The other evidence supporting the relatively recent evolution of the vernalization trait is the observed lack of conservation of the Arabidopsis FRIGIDA-FLC model in many species, including the above-mentioned Pooideae grasses [64,66]. The same phenomenon can be expected in legumes, as many of them, including lupins, did not retain any copy of the major integratory gene from the vernalization pathway, FLC, found in the Brassicaceae [65,67].
AGL15 as a candidate FTc1 transcription factor controlling vernalization requirement and vegetative phase duration
The present study evidenced the association between indel polymorphism in the regulatory region of LlutFTc1 and vernalization-independent flowering (Fig. 2). A similar observation was reported for narrow-leafed lupin and the series of LanFTc1 indels [34]. In Arabidopsis, the FT promoter region is relatively long (∼5 kbp) and carries numerous binding sites for regulatory agents from the photoperiod, light quality, vernalization and ageing pathways [43,68]. Our study (Table 4) revealed that four transcription factors have specific candidate binding sites in the polymorphic regions of the LlutFTc1 promoter: AGAMOUS-like 15 (AGL15), AGL71, SUPPRESSOR OF FRI 4 (SUF4) and VERNALIZATION 1 (VRN1). Comparative analysis of candidate binding sites in the LlutFTc1 and LanFTc1 promoters designated AGL15 as a candidate transcription factor diversifying between particular structural variants (Fig. 6).
AGL15 is a MADS-box transcription factor that acts as a floral repressor during the vegetative phase by binding the FT promoter sequence at sites that partially overlap those bound by the FLC and SHORT VEGETATIVE PHASE (SVP) proteins [43]. The LlutFTc1 indel carries all candidate AGL15 binding sites found in the whole promoter (Table 4). Therefore, hypothetical repression of the LlutFTc1 gene by AGL15 may occur in the late-flowering Parys and P28213 lines, whilst it should not occur in the early-flowering lines lacking the appropriate binding sites. The high expression of LlutFTc1 observed in the PRH444/14 and Wodjil lines beginning with the juvenile phase highlights AGL15 as a major candidate for the LlutFTc1 indel-related early flowering. A similar conclusion can be made for the LanFTc1 (narrow-leafed lupin) indels.
The second candidate supported by in silico indel analysis in both lupin species, SUF4, controls vernalization dependence by binding FLC chromatin as a component of the FRIGIDA transcription activator complex [44]. Nevertheless, to our knowledge, there is no evidence that SUF4 binds the FT promoter. Moreover, in Arabidopsis, SUF4 activates transcription of its target, whereas in our study the candidate binding site was found in the wild allele, which would imply repressive activity. The third candidate, supported by indel analysis only in yellow lupin, VRN1, is a B3 domain-carrying transcription factor associated with Arabidopsis flowering in response to vernalization [45]. VRN1 constitutes a hypothetically eudicot-specific component of the PRC1-like complex, which is one of the two Polycomb complexes involved in epigenetic silencing of the FLC gene [46,47]. PRC1-like activity is also linked with epigenetic control of FT, enabling temperature-responsive flowering time regulation [70]. Because VRN1 binding sites are present only in the wild allele, such vernalization-driven silencing of LlutFTc1 should lead to expression and flowering time profiles opposite to those observed; therefore, this mechanism is unlikely. The last candidate, AGL71, is a MADS-box transcription factor acting downstream of SOC1 and promoting flowering in the shoot apical and axillary meristems under the gibberellin-dependent pathway [48]. However, we disregarded this transcription factor due to the presence of additional candidate binding sites also in the monomorphic region of the LanFTc1 promoter.
Disruption of enhancer chromatin loop formation by FTc1 promoter indels is unlikely
The other possible mechanism that could explain the observed difference in phenotypes associated with LlutFTc1 promoter variants is related to protein-mediated interaction between structural components of the FT promoter [71]. Two such motifs, CCAAT and RE-alpha, were also found in narrow-leafed lupin FT promoters at conserved positions [41]. In Arabidopsis, CCAAT sequences serve as binding sites for the NUCLEAR FACTOR Y (NF-Y)-CONSTANS (CO) complex, facilitating the formation of a long-distance chromatin loop that brings distal enhancer elements into close association with the proximal CO-responsive elements (CORE1 and CORE2) [72][73][74]. The functional consequence of this interaction is a reduction of PcG protein levels at the FT promoter, relieving this region from Polycomb silencing under inductive photoperiod [75]. The large LlutFTc1 indel reported in this study carries six CCAAT elements (Table S14), whereas the LanFTc1 indel variants carry from one to several such motifs [41]. Nevertheless, in both species additional CCAAT elements were identified flanking these indels, which may still participate in chromatin loop formation. Moreover, any disruption of chromatin looping at the LlutFTc1 or LanFTc1 promoters by indels should result in phenotypic effects opposite to those observed.
TOE2 as a candidate LlutFTa1a transcription factor controlling photoperiod response
The present study identified three candidate transcription factors (Table 4) for the LlutFTa1a gene: BASIC LEUCINE ZIPPER 52 (bZIP52), GROWTH REGULATING FACTOR 6 (GRF6) and TARGET OF EAT 2 (TOE2). In Arabidopsis, the bZIP52 protein is involved in the heat stress response [49]. GRF6, known as a 14-3-3 protein, induces rice flowering by interacting in the shoot apical meristem with FT and FLOWERING LOCUS D (FD) proteins to activate the floral promoter SUPPRESSOR OF OVEREXPRESSION OF CO 1 (SOC1) and downstream floral meristem identity genes [50]. However, no evidence that GRF6 binds the FT promoter was found in the literature. The latter transcription factor, TOE2, is a component of the photoperiodic pathway and represses FT transcription by binding to its chromatin, preventing flowering under short days [51,52]. The presence of a TOE2 candidate binding site only in the wild (P28213) LlutFTa1a promoter allele supports the hypothesis that TOE2 is involved in photoperiod-related flowering control in yellow lupin.
Copia-like retrotransposon insertion at the LlutFTc2 gene may delay flowering under non-inductive photoperiod
A large (5269 bp) insertion of a Copia-like retrotransposon in the third intron of LlutFTc2 was associated with delayed flowering (Figs 2 and 4). Moreover, LlutFTc2 gene expression in the P28213 line carrying this insertion was significantly reduced under short days (Fig. 5). A similar phenomenon was observed in soybean, where insertion of a Copia-like element (6224 bp) in the first intron of the soybean FT homologue (GmFT2a) was associated with decreased expression of this gene and delayed flowering [17]. A Tgm-like transposon insertion in the third intron of the GmFT2c gene, which occurred at an early stage of soybean domestication, also caused later flowering than the wild allele [76]. Similarly, insertion of the Tnt1 retrotransposon within the first intron of the M. truncatula FT homologue (MtFTa1) resulted in a late-flowering phenotype [12]. In Arabidopsis, insertion of a Mutator-like transposable element in the first intron of the FLC gene conferred an early-flowering phenotype based on epigenetic silencing of FLC mediated by short interfering RNAs [77]. Indeed, the first intron of the FLC gene is relatively long and is frequently targeted by transposons and retrotransposons in Brassicaceae species, providing significant transcriptional and phenotypic variation [78]. In A. thaliana, most long introns are enriched with heterochromatic transposable element sequences [79]. Interestingly, the third introns of FT genes in the three lupin species with sequenced genomes (L. angustifolius, L. albus and L. luteus) are also relatively long (from about 1.5 kbp to 6.5 kbp), so a mechanism similar to that of Arabidopsis FLC may be expected. Apart from the commonly observed deleterious effects, insertion of transposable elements can provide adaptive variation and facilitate evolutionary responses to rapid environmental changes [80].
FT indel polymorphism provides high flexibility in modification of yellow lupin phenology by traditional breeding
In this study, the LlutFTc1 indel allele conferring vernalization independence was found only in domesticated yellow lupin germplasm, with the exception of one landrace from Palestine. A similar scenario was revealed for LanFTc1 indels in narrow-leafed lupin [34,61]. To our knowledge, this is the first example of such a convergence of FT indel evolution between two related species, reflecting both artificial selection during the domestication process and adaptation to environmental conditions that favor a short season (the Palestinian allele). Both species (yellow lupin and narrow-leafed lupin) lost the vernalization requirement through large deletions in the promoter regions of the same FTc1 homologue, a central integrator of flowering time. This provides a unique opportunity to explore the molecular regulation of flowering time in these related species. It also provides motivation to prospect for additional examples of FTc1 deletions in other lupin species. This study provides for the first time a molecular marker that is perfectly predictive of vernalization responsiveness in yellow lupin. This can be used to facilitate introgression of new genetic diversity into domesticated germplasm (without vernalization requirement) from wild types (primarily with vernalization requirement). Moreover, the yellow lupin diversity panel offers very high flexibility for selection for vernalization and photoperiod responsiveness/independence due to the presence of lines carrying different allelic combinations of LlutFTa1a, LlutFTc1 and LlutFTc2 indels. High variability in allelic composition resulted in large phenotypic variance of flowering time and vernalization responsiveness, including numerous intermediate phenotypes with upgraded domestication status awaiting further exploitation by classic breeding. The yellow lupin diversity panel (Table S1) comprised 111 accessions (3 wild types, 5 landraces, 4 mutants, 33 cross-derivatives/breeding lines and 66 cultivars). A mapping population of 97 recombinant inbred lines (RILs) along with parental controls (P28213 and Wodjil) was provided by the Department for Primary Industries and Regional Development (South Perth, Australia).
Phenotyping of yellow lupin phenology and vernalization responsiveness
Vernalization was performed by placing seeds for 21 days at 5 °C on moist filter paper in Petri dishes in darkness. Non-vernalized control plants were sown four days before the end of the vernalization treatment and grown at 24 °C to maintain similar thermal time. Plants were cultivated in a greenhouse located at the Institute of Plant Genetics, Polish Academy of Sciences, Poznań, Poland (52°N) under ambient photoperiod (∼12-17 h). Phenology observations included bud emergence (counted as days from sowing to the first bud appearance), start of flowering (recorded when the first fully colored petal was observed) and end of flowering (recorded when most of the petals on the main stem had faded). The number of observed replicates varied between 3 and 10 (mean value of 6.2), depending on germination rate and plant survival during the experiments.
Calculation of heritability and interactions
A linear mixed-effect model was used to estimate variance components and predict the genetic values via single-trait BLUP (best linear unbiased prediction). The lmer function [81] from the lme4 (version 1.1-29) package in R 4.1.0 [82] was used to fit the model. Using the variance components, the phenotypic variance, the broad-sense heritability, the heritability on the mean basis, the selective accuracy (the correlation between the predicted and true genotypic values), the genotype-environment correlation, the genotypic coefficient of variation and the residual coefficient of variation were calculated [83]. The vernalization effect on flowering time in the investigated genotypes was tested using the estimated marginal means method [84]. Using the emmeans function from the emmeans (version 1.5.4) R package, the combined effects of genotype and vernalization from the linear mixed-effect model were compared pairwise, with "tukey" applied as the multiplicity adjustment method.
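As an illustration, a minimal R sketch of this type of analysis is given below. The data frame and column names (phenology, days_to_flowering, genotype, vernalization, trial) are hypothetical, and the simplified model formulas are assumptions rather than the exact models fitted by the authors.

```r
library(lme4)
library(emmeans)

# Variance components from a simplified mixed model with genotype as a random
# effect; broad-sense heritability on a per-plot basis.
fit_vc <- lmer(days_to_flowering ~ vernalization + (1 | genotype) + (1 | trial),
               data = phenology)
vc  <- as.data.frame(VarCorr(fit_vc))
v_g <- vc$vcov[vc$grp == "genotype"]   # genotypic variance
v_e <- sigma(fit_vc)^2                 # residual variance
H2  <- v_g / (v_g + v_e)               # broad-sense heritability

# Estimated marginal means: pairwise comparison of vernalized vs non-vernalized
# plants within each genotype, with Tukey adjustment for multiplicity.
fit_em <- lmer(days_to_flowering ~ genotype * vernalization + (1 | trial),
               data = phenology)
emm <- emmeans(fit_em, ~ vernalization | genotype)
pairs(emm, adjust = "tukey")
```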
Sequencing of the yellow lupin FT homologues
Coding sequences of the L. angustifolius FT homologues LanFTc1, LanFTc2, LanFTa1a and LanFTa1b [41] were aligned to the yellow lupin genome scaffolds (N = 2458, N50 = 1.5 Mbp, unpublished) using the progressive Mauve algorithm with the gapped aligner MUSCLE 3.6 [85,86] implemented in Geneious v8.1 [87]. Gene features in selected scaffolds were annotated in FGENESH+ [88] using the Glycine max model and L. angustifolius FT protein sequences as references. The nucleotide sequences of the FT homologues were analyzed in four yellow lupin accessions differing in flowering time and vernalization responsiveness (PRH444/14, Wodjil, Parys and P28213). Young leaves were collected from 5-week-old plants cultivated in a greenhouse. DNA was isolated using the DNeasy Plant Mini Kit (Qiagen). Based on the FT sequence annotations, a series of overlapping PCR primer pairs covering the entire gene sequences from the ∼8 kbp promoters to the 3′ untranslated regions was designed (Table S4). The lengths of the targeted genomic regions were 12 197 bp for the LlutFTa1a gene, 11 103 bp for LlutFTa1b, 15 260 bp for LlutFTc1 and 17 236 bp for LlutFTc2. Standard-sized (up to 2 kbp) PCR products were amplified using GoTaq G2 Flexi DNA Polymerase (Promega, Mannheim, Germany), whereas longer products were amplified using GoTaq® Long PCR Master Mix (Promega). Amplicons were directly Sanger-sequenced using the BigDye® Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems) and a 96-capillary 3730xl DNA Analyzer (Applied Biosystems) by Genomed (Warsaw, Poland). Final FT sequences were assembled using the de novo assembler in Geneious and aligned to each other using the progressive Mauve algorithm. Bayesian inference of FT coding sequences [12,14,41,42] was performed as previously described [41].
Linkage mapping of FT genes and flowering time QTL loci
Molecular markers anchored in FT polymorphisms were designed to localize FT homologues on the yellow lupin linkage map [31] (Table S8). Standard agarose gel electrophoresis was used for visualization of indel markers, whereas the Cleaved Amplified Polymorphic Sequence (CAPS) approach [89] was used for single nucleotide polymorphisms (SNPs). Restriction sites and corresponding enzymes were identified using dCAPS Finder 2.0 [90]. Restriction enzymes were supplied by Thermo Fisher Scientific (Warsaw, Poland) and New England Biolabs (Ipswich, USA). Chi-square (χ2) values for Mendelian segregation in the F8 RILs were estimated using the expected 1:1 segregation ratio (disregarding heterozygotes). Published marker segregation data [31] and those developed in this study were imported into Map Manager QTXb20 [91] and distributed, at a p-value threshold of 0.001, to the positions at which their insertion caused the greatest increase in the sum of LOD linkage scores for adjacent loci. The Kosambi function was used to calculate map distances. Data on flowering time [31] and the updated linkage map from this study were exploited for composite interval mapping using Windows QTL Cartographer V2.5 (window size 10 cM and walk speed 0.5 cM). To test the stability of the identified QTLs, calculations were performed within the range of 1 to 10 background control markers. Using the same parameters, permutation tests (×1000) were performed to establish LOD thresholds. Linkage groups were drawn using MapChart [92].
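For illustration, a minimal R sketch of the 1:1 segregation test is shown below; the allele counts are hypothetical and heterozygotes are assumed to have been removed beforehand.

```r
# Hypothetical counts of the two parental alleles for one marker across the F8 RILs
obs <- c(Wodjil = 42, P28213 = 55)

# Chi-square goodness-of-fit test against the expected 1:1 Mendelian ratio
chisq.test(obs, p = c(0.5, 0.5))
```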
Correlation between genotype (FT indel polymorphism) and phenotype (phenological traits)
To survey the distribution of FT indel polymorphism and find any novel variants in lines with contrasting phenology, the yellow lupin diversity panel was screened by PCR and agarose gel electrophoresis with the same primers as those used for FT sequencing (Table S4). Putatively novel indel alleles were confirmed by Sanger sequencing. Reference alleles (Wodjil) were coded as 1, alternative alleles as 2, additional alternative alleles (the least frequent) as 3, and heterozygotes as 1.5 (presence of alleles 1 and 2) or 2.5 (presence of alleles 2 and 3). Association between genotype and phenotype (flowering time and vernalization response) was calculated as Spearman's rank correlation between alleles and BLUP values. To check whether the revealed associations could be considered statistically significant by normal standards, p-values were calculated using the cor.test base R function. Correlation values were visualized using the heatmap function from the ComplexHeatmap (version 1.10.2) R package [93]. Promoter regions of FT genes were annotated for hypothetical transcription factor binding sites using the Plant Promoter Analysis Navigator 3.0 [69].
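A minimal R sketch of this genotype-phenotype test is shown below; the data frame and column names (panel, FTc1_indel1_code, blup_start_of_flowering) are hypothetical placeholders for one indel marker and one BLUP-adjusted trait.

```r
# Numerically coded alleles (1 = Wodjil reference, 2/3 = alternative alleles,
# 1.5/2.5 = heterozygotes) and BLUP values of a phenology trait per accession.
geno  <- panel$FTc1_indel1_code
pheno <- panel$blup_start_of_flowering

# Spearman's rank correlation; exact = FALSE avoids warnings caused by ties
# in the coded allele values.
ct <- cor.test(geno, pheno, method = "spearman", exact = FALSE)
ct$estimate   # Spearman's rho
ct$p.value    # significance of the association
```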
Expression profiling of yellow lupin FT genes in response to photoperiod and vernalization
Vernalization and sowing procedures were as described above. Plants were cultivated in climatic chambers with controlled humidity (40-50% day, 70-80% night) and temperature (22 °C day, 18 °C night). Two levels of photoperiod were applied, short day (SD, 8 h) and long day (LD, 16 h). Young leaves were sampled every week, one hour before the end of the light phase, covering the period from about 2-3 weeks before floral bud emergence until flowering (Table S12). Plant material was immediately frozen in liquid nitrogen and stored at −80 °C. The SV Total RNA Isolation System (Promega) was used for RNA isolation. Concentration and quality were measured using a NanoDrop 2000 (ThermoFisher Scientific). Additional quality control was performed for 60 isolates using the Experion™ Automated Electrophoresis System and the Experion RNA StdSens Analysis Kit (Bio-Rad, Hercules, CA, USA). First-strand cDNA synthesis was performed using the iScript cDNA Synthesis Kit (Bio-Rad) and 1 μg of total RNA per sample. The set of analyzed genes (Table S16) included the LlutFTa1a, LlutFTa1b, LlutFTc1 and LlutFTc2 genes and two references: a homologue of DEAD box RNA helicase 1 (LlutDRH1) and a beta tubulin gene (LlutTUB7). Gene expression profiling was performed using a CFX Connect Real-Time PCR Detection System (Bio-Rad). Standard curves were developed following a previously reported protocol [61]. R² and PCR efficiency values (Table S16) were calculated using Bio-Rad CFX Manager 3.1. Three biological replicates (each with three technical replicates), including inter-run calibration samples (LlutTUB7) and no-template controls, were analyzed. High-resolution DNA melting was performed after PCR to control the specificity of amplification. Calculations of Cq included both reference genes. Effects of growth phase (expression at the analyzed date divided by expression at the first date), vernalization (fold change of expression after vernalization), photoperiod (fold change of expression of SD versus LD or vice versa) and genotype were analyzed. Statistical significance was tested using the t-test for mean ratios [94,95]. Calculations were made in R with a custom script using the "t.test.ratio" function from the mratios (version 1.4.2) package. First, equality of variances was tested; if this condition was satisfied, the classical t-test was used; otherwise, the Welch's t-test formula was used [96]. If variances were significantly different (p-value <0.001), it was assumed that the results came from different populations and the calculation was not performed [97]. To evaluate the stability of the reference genes during vernalization, mean efficiency-corrected Cq values obtained for the reference genes were compared between vernalized and non-vernalized accessions, revealing non-significant differences for all studied lines (Table S17). | 2022-08-27T15:11:57.917Z | 2022-08-24T00:00:00.000 | {
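Below is a minimal R sketch of such a ratio test on hypothetical expression values; the variance check and the t.test.ratio call reflect the workflow described above under the assumption of three biological replicates per group, with argument names as given in the mratios package.

```r
library(mratios)

# Hypothetical normalized expression values for one gene at one sampling date,
# from vernalized (v) and non-vernalized (n) plants (three biological replicates).
expr_v <- c(8.1, 9.4, 7.6)
expr_n <- c(2.2, 2.9, 2.5)

# Test whether the mean ratio (fold change) differs from 1. The classical
# formula is used when variances are equal; otherwise the Welch-type formula.
equal_var <- var.test(expr_v, expr_n)$p.value > 0.05
t.test.ratio(expr_v, expr_n, rho = 1, var.equal = equal_var)
```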
"year": 2022,
"sha1": "5b33794b23592562e1acb5ebad989017ce28a83e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1093/hr/uhac180",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8e5f64bc29b2eff2e792b6ada84a4c1b33a8504",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
108839834 | pes2o/s2orc | v3-fos-license | IS PREDICTION OF RENAL FAILURE WITH ITS INDICES FEASIBLE WITH PRESENCE OF HISTOPATHOLOGIC EVIDENCE FOR GASTRIC INTESTINAL METAPLASIA ?
Objectives: Gastric intestinal metaplasia has traditionally been associated with gastric adenocarcinoma. Gastric intestinal metaplasia is usually related to Helicobacter pylori infection, older age, smoking history, consumption of strong spicy foods, socioeconomic status, and presence of IL10-592 C/A. The purpose of the present research study was to evaluate simple laboratory parameters in subjects with gastric intestinal metaplasia. Findings: Between May 2018 and October 2018, a total of 541 (281 male and 260 female) consecutive cases with gastric intestinal metaplasia, with a mean age of 58.5 ± 15 years, were enrolled retrospectively, excluding cases with severe underlying disease, including gastric cancer and gastric resection. Gastroscopy with antral biopsy was performed for all cases, and the biopsy samples were evaluated for the presence of gastric intestinal metaplasia by Hematoxylin and Eosin staining and for Helicobacter pylori status by Giemsa staining. The chi-squared test and independent t-test were used for comparisons. The mean serum urea level was 34.2 ± 16.1 mg/dL in the gastric intestinal metaplasia group and 31.2 ± 13.1 mg/dL in the control group (95% CI from 32.3 to 34.6; p = 0.013), while the mean serum creatinine level was 0.84 ± 0.28 mg/dL in the gastric intestinal metaplasia group and 0.80 ± 0.26 mg/dL in the control group (95% CI from 0.80 to 0.85; p = 0.042). Gastric intestinal metaplasia was detected mostly in elderly and male subjects according to multiple logistic regression (p < 0.001). Conclusion: Serum urea and creatinine levels may serve as a simple clinical tool to identify patients at risk for gastric intestinal metaplasia.
INTRODUCTION
Gastric intestinal metaplasia (GIM), characterised by replacement of the gastric mucosa by enteric- or colonic-type mucosa, is prevalent in subjects living in Asia and can lead to gastric carcinoma at a rate of approximately 1% annually (1). Both atrophic gastritis and GIM have been implicated in gastric carcinogenesis and should be tracked by endoscopic screening programmes (2). The risk factors have been reported as the presence of Helicobacter pylori infection, older age, smoking history, strong spicy food consumption, occupational status and presence of IL10-592 C/A (3). However, the role of facilitative laboratory tools to detect GIM remains largely unknown.
AIM
The present study aimed to explore the possible impact of established GIM on basic laboratory parameters, as well as its association with sociodemographic factors.
Criteria for incorporation into the study
A total of 541 (281 male and 260 female) consecutive cases with GIM, with a mean age of 58.5 ± 15 years, were enrolled retrospectively during the period between May 2018 and October 2018. The related documents and data were collected and evaluated. Gastroscopy with antral biopsy was performed for all cases at enrollment in the present study. The control group (90 male and 90 female), with a mean age of 54.6 ± 13.5 years, was selected from dyspeptic cases without GIM. Cases with severe underlying disease, including gastric cancer and prior gastric resection, were excluded.
Endoscopic and Histopathologic evaluation
All endoscopic examinations were performed under propofol anesthesia with a Fujinon videoscope (Tokyo, Japan). The biopsy samples were evaluated for the presence of GIM and for Helicobacter pylori status. The gastric biopsy specimens were fixed in formalin and assessed for Helicobacter pylori by Giemsa staining and for intestinal metaplasia by Hematoxylin and Eosin staining; intestinal metaplasia was classified into two grades: absent or present.
Statistical analysis
All statistical analyses were performed with SAS software (SAS Institute, Cary, N.C.). The demographic, clinical and radiologic characteristics of the cases were compared using the Student's t-test, and the chi-squared test was used to assess differences in proportions. All p values were two-sided, and significance was indicated by a p value of less than 0.05.
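The authors performed these analyses in SAS; the minimal R sketch below merely illustrates equivalent comparisons on a hypothetical data frame with one row per subject (the names subjects, group, urea, creatinine and sex are assumptions, not the authors' variables).

```r
# Independent (Student's) t-tests for the continuous laboratory parameters
t.test(urea ~ group, data = subjects, var.equal = TRUE)
t.test(creatinine ~ group, data = subjects, var.equal = TRUE)

# Chi-squared test for a difference in proportions (e.g., sex distribution)
chisq.test(table(subjects$group, subjects$sex))
```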
RESULTS
The characteristics of the cases at baseline were well balanced between the studied cases and the control subjects with respect to age and gender (all p > 0.05). The baseline characteristics of the study subjects are depicted in Table 1. The mean serum urea level was 34.2 ± 16.1 mg/dL in the GIM group and 31.2 ± 13.1 mg/dL in the control group (95% CI from 32.3 to 34.6; p = 0.013). The mean serum creatinine level was 0.84 ± 0.28 mg/dL in the GIM group and 0.80 ± 0.26 mg/dL in the control group (95% CI from 0.80 to 0.85; p = 0.042). Further statistical analyses of these parameters are presented for the serum urea levels (Figure 1a, b) and the serum creatinine levels (Figure 2a, b).
Figure 1b: The detrended normal Q-Q plot of serum urea. GIM: gastric intestinal metaplasia.
DISCUSSION
In the present study, the mean rate of H. pylori infection was 56% and did not differ between the groups (54% versus 58%). We recently (December 2018) reported an original research study on the frequency of Helicobacter pylori and its association with anatomic location, six age groups, and a 50-year age threshold, in relation to the degree of Helicobacter pylori colonization. In that study, Helicobacter pylori positivity was 55.2% overall and was observed mostly in the antrum and in the 45-64 age group; however, no difference was detected between location, age groups, subgroups over and under 50 years of age, and the degree of Helicobacter pylori colonization (4). Some other published Turkish studies (5) have revealed similar results, whereas others have reported higher (6) or lower (7) ratios than our two research studies on Helicobacter pylori.
A recent study from the United States involving 4,146 individuals with gastric intestinal metaplasia showed that the incidence rate of gastric adenocarcinoma was 0.72 per 1,000 person-years in patients with intestinal metaplasia, with a relative risk of 2.56 compared with the control group (8). Gastric cancer screening with upper gastrointestinal tract endoscopy should be considered in persons who were born in high-risk areas for gastric cancer (East Asia, Russia, and South America) or who have a family history of gastric cancer. Gastric screening by endoscopy should be performed every 1 to 2 years in patients with findings of atrophic gastritis or intestinal metaplasia on histopathologic assessment (9). Emerging evidence also suggests that preexisting GIM detected by histopathologic examination of the gastric mucosa confers a long-term risk of gastric cancer even after the Helicobacter pylori infection has been successfully eliminated (10). A recent retrospective cohort study involving 923 patients with GIM showed that only family history (hazard ratio 3.8; 95% confidence interval 1.5-9.7; p = 0.012) and the extent of GIM (odds ratio 9.4; 95% confidence interval 1.8-50.4) increased the risk for gastric cancer (11). Such data could not be obtained in the present study due to its retrospective nature.
It is a well-known fact that tobacco smoking and many foods, including processed, salted or smoked meats, are positively associated with noncardia gastric cancer in a dose-dependent manner (12). To our knowledge, only a few studies in the English literature address intestinal metaplasia in patients with chronic kidney disease. The first study, conducted a quarter century ago and involving 80 patients with chronic renal failure, revealed that 50 patients (62.5%) had intestinal metaplasia (13). In a study by Netto et al (14), 96 patients with chronic kidney disease underwent endoscopy in preparation for kidney transplantation. The most frequently found gastric disorder was pangastritis (57.3%), and erosive pangastritis was found in 30.2%. Gastric metaplasia was found in 8.33%, which is much less than in the 1989 study. Another study with 50 chronic renal failure patients and 50 control patients revealed intestinal metaplasia in 29.4% of the cases in the renal failure group. In conclusion, a higher urea concentration in the gastric juice and the ensuing metabolic disorders were regarded as causes of the higher frequency of gastrointestinal alterations compared with patients with normal renal function (15). The data above suggest that renal dysfunction may alter the gastric mucosal tissue through the formation of toxic products, which may play a potential pathogenic role in GIM.
There are several important limitations of this study. First, the present study was retrospective in design. Second, serum bicarbonate levels were not obtained for the study population. Third, renal function was not assessed by sonography, and lastly, the dietary behaviours of the subjects with GIM that may lead to the disease were not collected. On the other hand, the current study is expected to be large enough to assess the impact of GIM on the renal parameters.
CONCLUSION
Assessing serum urea and creatinine levels could serve as a simple clinical tool to identify patients at risk for GIM as well as subsequent gastric cancer. Given the previously reported GIM prevalence, full biochemical screening may reveal substantial numbers of cases with previously unknown GIM.
Figure 2a: The normal Q-Q plot of serum creatinine.
Figure 2b: The detrended normal Q-Q plot of serum creatinine. | 2019-04-12T13:29:43.582Z | 2019-03-20T00:00:00.000 | {
"year": 2019,
"sha1": "a2eeb22c32ca57efdf90ff2c0df82b1f458866fb",
"oa_license": "CCBY",
"oa_url": "http://sanamed.rs/OJS/index.php/Sanamed/article/download/322/159",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a2eeb22c32ca57efdf90ff2c0df82b1f458866fb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270517943 | pes2o/s2orc | v3-fos-license | Understanding the Multiple Influences on Black Parents’ School Involvement: A Longitudinal Perspective
This study explores longitudinal influences of various factors on Black parents’ involvement in their children’s education. Guided by Hoover-Dempsey & Sandler’s Model of Parent Involvement, this research examines whether parents’ school climate perceptions, attitudes about involvement, self-efficacy, and children’s academic performance predict parent involvement over time. Utilizing data from the Maryland Adolescence in Context Study with a sample of 560 Black parents, we found that positive school climate perceptions and favorable attitudes towards involvement significantly predict increased parent involvement in later years. The results underscore the importance of supportive school environments and parent attitudes in fostering their involvement.
Introduction
Parents' involvement in their children's schooling (e.g., joining school activities and helping with homework) significantly increases their children's chances for academic success [1][2][3]. For example, children whose parents are more involved tend to have higher grades, better test scores, fewer absences, and higher high school graduation rates [4,5]. Yet, there is still much to learn about specific factors that lead parents to be involved in their children's schooling. Previous research has found that sociodemographic factors such as family income and education, parents' marital status, and number of children are all antecedents of parent involvement [6][7][8]. Among researchers and policymakers, there is concern that Black parents are less involved in their children's schooling than parents of other races [9,10]. Given the many benefits of parent involvement in their children's outcomes, understanding factors influencing involvement may increase Black parents' involvement. Therefore, this paper examines several factors that influence parent involvement in homework. These factors are parents' perceptions of the climate at their children's school, their attitudes towards involvement, their feelings of efficacy and prior involvement, and students' grades.
Theoretical Framework
Our conceptualization of the antecedents of parent involvement is guided by Hoover-Dempsey and Sandler's Model of Parent Involvement [11][12][13].According to this model, various ecological factors (individual and environmental) influence parent involvement in their children's education.Researchers have found that antecedents of parent involvement fit into three related categories.First, parents may become more involved if they perceive being an involved parent as an important role for parents to play [14].Second, parents may become more involved if they believe they have the skills to effectively help their children succeed in school [15].Finally, parents may become more involved when they view their children's school environment as welcoming [16,17].Parents are involved in their children's schools (e.g., attending parent-teacher meetings; [18]) or at home (e.g., helping with homework; [19,20]).The current study employs Hoover-Dempsey and Sandler's Parent Involvement Model because it is relevant to parents in the 21st century who continue to make involvement decisions.
Parent involvement decisions may be particularly crucial for Black parents [16].Prior studies have found that Black parents may harbor negative feelings toward schools because overt racism or discrimination may cause Black parents to take a hostile stance when interacting with their children's schools [21].Further, even when they receive an invitation to a school event, Black parents may be hesitant to become involved if they believe that school officials have biases and discriminatory attitudes toward them [22,23].Therefore, learning about the factors that impact Black parents' involvement is important.We test Hoover-Dempsey and Sandler's Parent Involvement Model by examining specific antecedents of parent involvement among Black parents.
Perceptions of School Climate
Parents' perceptions of the school climate play a role in parents' behavior and beliefs [16]. School climate refers to the school norms as they are perceived and interpreted by individuals [24,25]. Thus, parents' school climate perceptions refer to parents' perceptions about the happenings in their children's schools. Previous studies have shown that parents' school climate perceptions influence their beliefs and behaviors [16]. When parents perceive their children's schools as welcoming places where their children are given the best chance to succeed, they are more likely to be involved. For instance, recent research by [26] found that school invitations, a correlate of parents' school climate perceptions, were related to higher parent involvement. Notably, this association was stronger for the Latino sample than for the Black or non-Latino sample. Similarly, [17] found that parents with more positive perceptions of the climate in their children's schools (i.e., parents felt welcomed and helped by school staff) were more likely to be involved in their children's school (e.g., volunteer in school activities and attend parent-teacher meetings). Concerning Black parents' involvement, the sample for the [17] study only had 17.4% Black parents, who reported less involvement than the rest.
The literature on school climate perceptions suggests significant associations with parent involvement.However, little research has examined the link between school climate and involvement among Black parents.Because Black parents are likely to feel less welcome in their children's schools than parents of other races [27], it is still unclear whether school climate perceptions are associated with parent involvement among these parents.Though aged, the sample for the current study represents the largest sample of only Black parents to examine the link between parents' school climate perceptions and parent involvement.
Parents' Attitudes about Involvement
Parents' attitudes about involvement are also related to parents' actual involvement.Parents' attitudes about involvement refer to their beliefs about the extent to which they should be involved in their children's education [28][29][30].The literature on parent attitudes suggests that parents who view themselves as essential agents in their children's education are more likely to be involved in educational activities in general.Parents who believed that their children's education was solely the job of the schools were less likely to be involved [18,31].Parents' attitudes about involvement may also be connected to their cultural and ethnic backgrounds [32].For instance, ref. [32] investigated parent involvement attitudes in a sample of White and Asian American parents.They found that Asian American parents reported stronger attitudes about involvement in learning academic skills than European Americans.However, European American parents had stronger attitudes about involvement in school events than Asian American parents.Despite these findings about cultural influences on parents' involvement attitudes, little is known about the link between Black parents' attitudes about involvement and their involvement with their children's homework.
Parents' Efficacy in Helping Children in School
Parents' efficacy in helping children succeed in school refers to the extent to which parents believe they have the skills and knowledge necessary to help with homework [12,33].According to [12], self-efficacy comes from direct experience, vicarious experience, verbal persuasion, and emotional arousal.Research findings on parents' self-efficacy in parent involvement have been somewhat mixed.That is, while some prior research has shown that parents' self-efficacy is related to more parent involvement [12,34], other research has shown that higher levels of self-efficacy are related to lower parent involvement [35].[36] found that African American fathers' self-efficacy ratings were associated with greater involvement in education at home.On the other hand, [35] found that parents with higher self-efficacy were less likely to be involved among a sample of Latino parents.More research is needed to clarify the nature of the link between parents' self-efficacy for helping children and parent involvement.
Present Study
The current study extends Hoover-Dempsey and Sandler's 1997 parent involvement model by examining parents' homework involvement antecedents in a sample of Black parents.In particular, this study examined longitudinal associations between student grades, parents' school perceptions, and parents' attitudes and feelings of efficacy about their ability to help their children with homework on parent involvement with their children's homework.Previous research has demonstrated that parents' beliefs about being welcome in their children's school impacted their school involvement beliefs [17,26].Thus, we hypothesize that more positive school climate perceptions will be associated with more involvement from parents.
We also investigate whether children's grades were associated with their parents' school involvement.Prior research has shown a positive association between children's grades and parent involvement [37].We hypothesize that children with higher grade point averages will have more involved parents.Lastly, we examine whether parents' attitudes and feelings of efficacy about their ability to help their children with homework are associated with parent involvement.While past studies have generally found that parents' attitudes and efficacy regarding involvement are antecedents to parent involvement [12,32,34], given the unique school experiences of Black parents, it is unclear whether attitudes and efficacy are related to parent involvement among Black parents.We hypothesize that parents with more positive attitudes about parent involvement and parents who feel the efficacy of helping their children with homework will be more involved.
Participants
This paper utilized two waves of survey data from the Maryland Adolescence in Context Study (MADICS), a community-based longitudinal study of junior high students and their families based in Prince George's (PG) County, a suburb of Maryland.The first wave of data was collected in 1991, and the second wave was collected in 1995.The MADICS dataset has unique advantages for studying these research questions.PG County was selected because of its predominantly Black school population and the heterogeneity in the household socioeconomic status of Black families in this community [16].In particular, the demographic makeup of the MADICS allowed us to examine parent involvement among a diverse sample of Black parents and children.The MADICS aimed to investigate the influence of social contexts on adolescent development.
The MADICS sample included 879 Black parents who first completed the survey at wave 1 when their children were in seventh grade, and 580 completed the survey four years later at wave 4 when their children were in eleventh grade. The significant loss of participants (36%) was primarily related to dropout at wave 4. Little's chi-square test for missing data was not significant (χ²(44) = 55.887, p = 0.108), meaning we can assume that data were missing completely at random. Consequently, listwise deletion of cases with missing values will likely not bias results. After accounting for and removing participants through listwise deletion, the final sample of Black parents used for analyses was 560. The final sample did not differ significantly from the excluded cases on school climate perceptions, parent attitudes regarding parent involvement, family income, or gender. However, among those excluded from analyses, parents had lower levels of education and efficacy regarding parent involvement, while children had lower GPAs in seventh grade. The mean age of the 560 participants included in the analyses in seventh grade was 12.25 years (range of 11-14 years), and in eleventh grade, the mean age of participants was 16.28 (range of 15-18 years). The male-to-female ratio in the adolescent sample was approximately equal (51% male to 49% female).
Measures
Parent involvement (Eleventh grade). Youth reported on their parents' level of involvement in their schooling using a three-item scale. Youth responded to the question: How often do the following things happen? A sample item for this scale is the following: "Your parent(s) helps you with your homework after it's completed; for example, checking that it's done correctly or proof-reading reports during the school year." Youth responded using a 6-point Likert scale from one (almost never) to six (almost every day). Higher scores on this scale indicated greater parent involvement (α = 0.71).
Parents' school climate perceptions (Seventh grade).Parents' perceptions of school climate were measured using a five-item scale, which asked about parent perceptions of the availability and receptiveness of school staff.Parents responded to five prompts about the climate of their child's school.Parents answered using a 5-point Likert from (one (strongly disagree) to five (strongly agree)).A sample item for this scale is "Children generally feel that they belong".Higher scores on this scale indicated more positive perceptions of the school climate (α = 0.84).
Seventh-grade academic achievement.Academic achievement was measured using the target youth's seventh-grade GPA from school records.GPAs were measured on a scale from 1 (F) to 5 (A).
Parent involvement attitudes (Seventh grade).The MADICS research team developed the parent involvement attitudes scale.This 8-item scale asked about parents' attitudes about becoming involved in their children's schools.The response scale ranged from (1 = not at all to 4 = a lot).One sample question included the following: "I am not interested in doing things at school."Higher scores on this scale indicated more positive attitudes about being involved in their children's schools (α = 0.68).
Parent self-efficacy for homework help (Seventh grade).The MADICS research team developed the parent efficacy scale.This 3-item scale asked about parents' beliefs that they can help their children with their homework.The response scale ranged from (1 = not at all to 4 = a lot)."How much can you do to get your 7th grader to do (his/her) homework?"Higher scores on this scale indicated increased parent efficacy in helping their children with homework (α = 0.79).
Demographic measures and statistical controls. The sociodemographic characteristics of the target adolescents and their families were used as covariates. These measures included parent-reported total pre-tax family income in 1990 on a scale from 1 (less than $5000) to 16 (more than $75,000), with each number representing a range of USD 5000, and parent-reported highest level of education, which was recoded into three categories: did not graduate high school, finished high school/received a GED, and graduated from college/had at least a college degree. Finally, children self-reported their gender.
Data Analysis Plan
The assumptions for the analyses were tested using SPSS. Multivariate normality was evident in quantitative analysis as both skewness and kurtosis did not differ significantly from zero. The current study examined data from the same sample at two time points four years apart. We utilized a predictive model over time, with seventh-grade predictor variables predicting parents' involvement in the eleventh grade. One ordinary least squares regression model was used to answer the main study questions. Parent involvement in eleventh grade was the dependent variable in this regression, and perceptions of school climate, GPA, parents' involvement attitudes, and parents' self-efficacy were the independent variables. Additionally, previous studies have shown that income, parents' educational level, and children's gender are associated with parent involvement [8,37]. Thus, these variables were used as statistical controls in the regression.
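For readers who wish to reproduce this model outside SPSS, a minimal sketch in Python is given below. The data file and the column names (involvement_g11, climate_g7, gpa_g7, attitudes_g7, efficacy_g7, income, parent_educ, child_gender) are hypothetical placeholders rather than the actual MADICS variable labels, and the coefficients returned are unstandardized unless the continuous variables are z-scored beforehand.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file with one row per parent-child dyad
df = pd.read_csv("madics_analysis_sample.csv")

# OLS model: 11th-grade involvement regressed on 7th-grade predictors
# plus the statistical controls; categorical controls wrapped in C()
model = smf.ols(
    "involvement_g11 ~ climate_g7 + gpa_g7 + attitudes_g7 + efficacy_g7"
    " + income + C(parent_educ) + C(child_gender)",
    data=df,
).fit()

print(model.summary())   # omnibus F test, adjusted R-squared, coefficients
```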
Descriptives and Correlations
Means, standard deviations, and intercorrelations are presented in Table 1. Pearson correlations revealed small, positive correlations between parent involvement and school climate perceptions, involvement attitudes, and self-efficacy.
The Association between Independent Variables and Parent Involvement
The omnibus test for the analytic model (Table 2) was significant (F(7, 472) = 5.957, p < 0.001; adjusted R² = 0.08). Concerning the covariates, parents' education level was associated with parents' involvement in eleventh grade (β = 0.129, p < 0.05). On the other hand, neither family income (β = 0.047, p = 0.354) nor children's gender (β = 0.036, p = 0.431) was associated with parents' involvement while their children were in eleventh grade. With respect to the independent variables of interest, parents' school climate perceptions in seventh grade (β = 0.142, p < 0.01) were positively associated with parent involvement in eleventh grade. However, there was no significant association between grade point average in seventh grade and parent involvement in eleventh grade (β = −0.018, p = 0.735). Parent involvement attitudes were positively associated with parent involvement in eleventh grade (β = 0.12, p < 0.05). Lastly, parents' involvement self-efficacy was not associated with parent involvement in eleventh grade (β = 0.040, p = 0.424).
Discussion
Guided by Hoover-Dempsey and Sandler's Model of Parent Involvement [11], this paper conducted a longitudinal analysis to examine antecedents to parents' involvement in their children's homework. These data were collected from a sample of Black parents over two waves (when their children were in seventh and again in eleventh grade). In line with much of the prior scholarship on antecedents to parent involvement, parents' school climate perceptions and parents' attitudes about involvement in helping with homework both predicted parent involvement in eleventh grade. These findings highlight the multiple influences on Black parents' involvement in their children's education. Our examination of these factors as antecedents to involvement in a sample of Black parents expands the literature on parent involvement.
Parents' School Climate Perceptions and Parent Involvement
In keeping with prior research [26], parents' perceptions about the climate at their children's school were associated with parent involvement.Notably, this finding remained statistically significant even after controlling for the effects of children's gender, family income, and parents' education.This finding has implications for parents as well as for schools.Specifically, these findings underscore the critical role of parents' relationships with schools in their parenting behaviors.Furthermore, our results indicate that parents' school climate perceptions impacted their involvement in homework.Thus, parents' feelings about their children's schools appear to influence involvement outside school (i.e., homework).Given the link between parents' school climate perceptions and their involvement, we believe that efforts should be made to cultivate positive relationships between Black parents and their children's schools.
Parent Involvement and Students' Achievement
Contrary to our hypotheses, children's seventh-grade GPA was not associated with their parent's involvement in eleventh grade.This contradicts previous research findings showing a link between academic performance and parent involvement [38].Specifically, ref. [38] found that children with higher academic performance in fifth grade reported more homework help from their parents in seventh grade.The current findings suggest that parents in our sample did not consider their children's seventh-grade GPAs when making involvement decisions in eleventh grade.It could be that more recent grades (within one or two years) could be more influential on parent involvement.One reason for the current findings may be that, as shown in previous research, parent involvement declines as children advance through school [37].Thus, the four years that separated observations may have diluted the connection between GPA and parents' involvement.Future studies should examine the circumstances under which children's grades and other academic performance metrics may influence parent involvement.
Parent Involvement and Parent Involvement Attitudes
To our knowledge, this study is among the first to explicitly assess the association between parent involvement attitudes and parents' actual involvement. Consistent with our hypotheses, parent involvement attitudes when their children were in seventh grade were positively associated with parent involvement in homework during their children's eleventh-grade year. This finding provides further evidence that parents' beliefs or attitudes influence parenting behaviors (i.e., involvement). Among Black parents, our results suggest that having a positive attitude towards involvement is a precursor to their greater involvement in their children's homework. While our finding regarding parent involvement attitudes helps increase knowledge about beliefs that may lead Black parents to become involved, there is still a dearth of literature regarding how Black parents' involvement attitudes are formed. More research should investigate the factors that impact parent involvement attitudes (e.g., cultural beliefs; [32]) as this may provide more insight into influences on parent involvement.
Parent Involvement and Parents' Self-Efficacy in Helping Children
The current study is among the first to examine the association between parents' self-efficacy and involvement over time (four years). It is also among the first to investigate this association in a sample of Black parents. Interestingly, parents' self-efficacy for helping their seventh-grade children with homework was not associated with parent involvement in eleventh grade. This finding aligned with past research demonstrating that parents' self-efficacy did not predict parent involvement [35]. It could be that parents' self-efficacy may not be sufficient to inspire their involvement. For instance, ref. [35] found that parents' self-efficacy predicted parents' involvement only in combination with perceived teacher invitations. More research is needed to investigate the circumstances under which self-efficacy leads to more involvement.
Limitations and Future Directions
While the current findings contribute to scholarship on parent involvement among Black parents, it is important to acknowledge certain limitations and suggest areas for further research.First, our measure of parent involvement was only two questions about homework.A more robust measure of parents' homework help may better detect differences.Further, only children reported their parents' involvement in homework help; it is unclear whether this is an accurate estimate of parents' involvement.Therefore, future research should examine antecedents to parents' reports of their involvement in their children's schooling.
Another limitation of this study is the age of the dataset, which includes data from 1991 (seventh grade) and 1995 (eleventh grade).While we argue that parents' participation in education has not fundamentally changed, it is crucial to acknowledge that much has evolved in the past three decades, especially regarding technology and internet use in schooling.Advances in technology have introduced new ways for parents to engage in their children's education.Yet, the messaging around Black parents' involvement in their children's education remains negative [9].Current perceptions of school climate may now be influenced by parents' online interactions with teachers and schools.
Furthermore, generational differences between parents from 1991-1995 and those in 2024 could impact the results.For example, parents in the early '90s might have emphasized traditional forms of involvement, such as attending parent-teacher conferences and helping with homework.In contrast, parents today may prioritize digital communication with teachers and support their children's learning through online resources and educational apps [39].These shifts in values and habits could influence parents' involvement differently.Therefore, it would be worthwhile to reference generational characteristics in the literature and research descriptions, as they greatly affect the interpretability and usefulness of the results.
Despite these considerations, this study's mean levels of parent involvement align with more recent empirical studies (e.g., [10]).Additionally, we maintain that there is no reason to believe that Parent Development Theory would operate differently for parents in 1995 compared to those in 2024.Future research should continue examining the antecedents of parent involvement among Black parents, considering technological advancements and generational shifts.
Implications
There is a consensus that parents play a central role in children's educational experiences, academic achievement, and success. However, prior scholarship has identified that parent participation may be lower among Black Americans than among parents from other ethnic and racial populations [40]. Subsequently, and in light of evident structural barriers and social inequities that may contribute to lower levels of parent involvement among Black parents [41], our paper makes significant contributions toward identifying potentially sustainable implications for practice that may bolster Black parents' involvement in schools.
First, our finding that Black parents' school climate perceptions (how welcoming they perceive their child's school to be towards them and their child) were related to their involvement supports the mezzo-level scholarship on the importance of institutions in improving individuals' prosocial behaviors [42]. Practically, this translates into the importance of schools and related administrative units prioritizing evaluations of parents' perceptions of district and individual school climate. Disaggregating these data by race and ethnicity may inform equitable directions to improve such perceptions for all students and their families. Moreover, the school climate finding adds important nuance to the potential weight of micro-level individual factors, such as parents' perceptions, relative to micro-level home factors (e.g., household income, parents' marital status), a nuance that is sorely needed in research on Black Americans [43]. Indeed, our significant school climate-parent involvement finding contrasts with our finding that another micro-level factor (i.e., seventh-grade GPA) did not contribute to Black parent involvement. Schools should be conscious not to let students' achievement bias their perceptions of parents' involvement and should instead prioritize ways to improve the relationship between the school and parents to increase participation.
Another significant implication of our findings is that enhancing parents' belief in their contributions to their children's education may be crucial for increasing their involvement in homework [44]. Prior scholarship supports that, while Black familial households have heterogeneous experiences, some Black families, particularly those in urban communities, face a cluster of barriers that may strain their capacity to feel that their contributions substantially impact their child's success in school. Subsequently, schools can play a crucial role by providing parent literacy content that demonstrates parents' importance and by offering community-based workshops or support mechanisms to strengthen Black parents' self-efficacy for their part in their child's education.
Conclusions
National efforts to increase parent involvement are one of the main thrusts of many efforts by teachers and parents [13].These efforts to increase parent involvement should consider Black parents' perceptions about their children's school, their attitudes about involvement, and their involvement efficacy as possible avenues to increase their involvement.Consistent with Hoover-Dempsey and Sandler's Model of Parent Involvement [11], our findings highlight that Black parents' beliefs about their children's schools, involvement attitudes, and self-efficacy for involvement are important.Schools should seek to ensure that parents feel that schools are welcoming to Black parents.By providing a welcoming environment for parents and supporting attitudes and feelings of self-efficacy, schools may increase parents' involvement and, by extension, improve student achievement.
Table 1. Correlations and descriptives for key study variables.
Table 2. Summary of regression analyses for antecedents of parent involvement (11th grade). | 2024-06-16T15:04:14.601Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "2a6d1fa4504ab0da3db8f65f8f3901c4b49b70b9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9067/11/6/722/pdf?version=1718279246",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77a9d9ba85ac47332d3d1506d24abbaf612469ab",
"s2fieldsofstudy": [
"Sociology",
"Education"
],
"extfieldsofstudy": []
} |
19176146 | pes2o/s2orc | v3-fos-license | Pathways of Forest Above-Ground Biomass Estimation Based on SAR Backscatter and Interferometric SAR Observations
Estimation of forest biomass with synthetic aperture radar (SAR) and interferometric SAR (InSAR) observables has been surveyed in 186 peer-reviewed papers to identify major research pathways in terms of data used and retrieval models. Research evaluated primarily (i) L-band observations of SAR backscatter; and, (ii) single-image or multi-polarized retrieval schemes. The use of multi-temporal or multi-frequency data improved the biomass estimates when compared to single-image retrieval. Low frequency SAR backscatter contributed the most to the biomass estimates. Single-pass InSAR height was reported to be a more reliable predictor of biomass, overcoming the loss of sensitivity of SAR backscatter and coherence in high biomass forest. A variety of empirical and semi-empirical regression models relating biomass to the SAR observables were proposed. Semi-empirical models were mostly used for large-scale mapping because of the simple formulation and the robustness of the model parameters estimates to forest structure and environmental conditions. Non-parametric models were appraised for their capability to ingest multiple observations and perform accurate retrievals having a large number of training samples available. Some studies argued that estimating compartment biomass (in stems, branches, foliage) with different types of SAR observations would lead to an improved estimate of total biomass. Although promising, scientific evidence for such an assumption is still weak. The increased availability of free and open SAR observations from currently orbiting and forthcoming spaceborne SAR missions will foster studies on forest biomass retrieval. Approaches attempting to maximize the information content on biomass of individual data streams shall be pursued.
Introduction
Above-ground biomass (AGB) refers to amount of organic matter that is stored in vegetation above the ground level.Forests store by far the largest amount of biomass when compared to all vegetation types.Knowledge of the forest above-ground biomass is crucial because of its importance from an ecological, climatic, and economic point of view [1].Estimates of AGB are typically obtained from sets of measurements of forest variables (i.e., diameter at breast height, tree height, forest composition, and tree density) that are taken on the ground.A detailed forest inventory takes a substantial amount of time, entails significant costs, and does not permit a synoptic view of the distribution of biomass across a forest landscape.In this sense, survey and biomass estimation techniques based on remote sensing data are more suitable for mapping and monitoring large areas.Above-ground biomass is indeed listed as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS), for which accurate and up-to-date knowledge can be achieved with systemic observations.However, remote sensing estimates of biomass are not free from errors given that remote sensing does not provide a direct measurement of the organic matter stored in vegetation.The amount of biomass can only be inferred through e.g.,
• the vertical distribution of organic matter as seen by LiDAR or interferometric radar; and,
• direct observations of reflectance (optical sensor) or backscattered signal (active microwave sensor) with empirical models and functions; the retrieval can be aided by vegetation height as derived from laser measurements.
The potential of radar observations to estimate forest above-ground biomass has been investigated since the 1980s and reached a first peak during the 1990s with data from spaceborne (ERS-1/2, JERS-1, SIR-C/X-SAR) and airborne platforms (AIRSAR, CARABAS).Initial evidence that only long wavelengths sufficiently penetrate the forest canopy and are sensitive to biomass paired with the lack of long wavelength datasets, did not promote the use of radar observations for retrieving biomass in the following years.Towards the end of the last decade, the topic of forest biomass retrieval was revived thanks to concepts that were particularly favorable to support biomass estimation (single-pass interferometry) and new concepts concerning the use of multiple radar channels in the retrieval (polarimetric interferometric, tomography, hyper-temporal combinations).The development of spaceborne missions exploiting long wavelength radar systems that are targeting in particular observations of forests (ALOS PALSAR series, SAOCOM, TanDEM-L, NiSAR, and BIOMASS) is an additional setting for studies that are dealing with biomass retrieval approaches.
In this paper, we present a review of investigations that were published until 2017 in peer-reviewed literature that were concerned with retrieval approaches of forest aboveground biomass from the backscattered intensity or interferometric synthetic aperture radar (SAR) observations.Ultimately, the objective of this paper is to summarize knowledge on forest biomass retrieval with "standard" radar remote sensing observations so to identify salient aspects that are worth being addressed further in future studies.In particular, we are interested in understanding the prospects of using multi-frequency SAR observations in an epoch of increasing availability of SAR data from multiple platforms orbiting in space.SAR backscatter and interferometric observations are and will be available from all spaceborne SAR missions in orbit, and, therefore, are of interest to researches that are dealing with the development of forest retrieval algorithms worldwide.
Studies dealing with the retrieval of other forest variables from which above-ground biomass can be inferred with allometric functions (e.g., age, canopy cover, height) are not addressed in this study because involving functional relationships that are beyond those existing between remote sensing data and biomass.Retrieval schemes that are based upon advanced processing techniques, such as polarimetric interferometry or tomography, are also not addressed in this survey.There is still limited experimental evidence on the pathways followed by these techniques as researches have been performed on a relatively small number of datasets.
Section 2 provides the background to the analysis that is presented.Section 3 reports statistics that were obtained by grouping research papers according to a set of discriminants.The statistics represent the backbone of this summary; for completeness and transparency, the list of articles that were surveyed is included in a separate document being part of the Supplementary Information to this paper.Articles found to support our interpretation of research pathways are cited in this paper.Section 4 reports on strengths and limitations of retrieval approaches identified in literature.Results of our survey are summarized in Section 5, providing indications on possible frameworks for retrieving biomass.In Section 6, we finally review approaches where the total biomass is obtained by summing up estimates of biomass for the different tree compartments (stem, branches, and foliage).The paper ends with a set of conclusions on research pathways and suggested fields of future investigations (Section 7); also, the use of SAR observables is put in a broader context while considering approaches that are mentioned above but are not addressed in this survey.
Background
Forest biomass in this context refers from here onwards interchangeably to either forest above-ground live biomass (AGB, i.e., amount of organic matter) or forest growing stock volume (GSV, i.e., amount of woody volume). AGB is defined as the amount of organic matter above ground per unit area (t/ha, Mg/ha or kg/m²). GSV is defined as the wood volume above ground per unit area (m³/ha). From AGB, carbon stock densities can be estimated by means of a scaling factor of approximately 0.5 [2]. The focus of this review is on approaches that are developed and applied to retrieve biomass. The pre-processing techniques (e.g., filtering, window size for coherence estimation, etc.) are not discussed. Similarly, we do not go into the details of the definition of AGB or GSV used by the authors. Typically, AGB or GSV are based upon measurements of trees larger than a certain threshold of the diameter at breast height, tree height, etc. Although ultimately important when quantifying the spatial distribution of biomass, the exact definition of biomass is considered to be of minor relevance for the scope of this paper. When considering all of the factors that could eventually impact the retrieval performance, we believe that the biomass that was retrieved with SAR data has more uncertainty and errors associated with the sensitivity, or the lack thereof, of the data to biomass and the type of algorithmic framework used for the retrieval. It also needs to be acknowledged that while inventories report live above-ground biomass, the radar signal can be affected by both live and dead components of a forest. There is, however, limited experimental evidence on the relationship between biomass of coarse woody debris and the radar backscattered signal [3]; therefore, this aspect is not further discussed in this paper.
In total, 186 peer-reviewed papers were surveyed.The majority dealt with development and/or application of a biomass retrieval algorithm.Some basic studies on signature analysis have also been added when the authors acknowledged the potential of the dataset under investigation to retrieve biomass.The majority of studies that are presenting a retrieval scheme addressed the retrieval of total forest biomass; retrieval of biomass components (branch, stem, needles, etc.) was seldom addressed, all having in common a multi-frequency dataset and detailed inventory data to support the investigations.
Survey Statistics
For each paper, the survey identified the following set of parameters and a set of statistics was derived to identify pathways of research. In the survey, we decided to cover all the aspects of biomass retrieval with SAR data, thus including studies that explicitly addressed the retrieval of forest biomass from a set of SAR observations, as well as studies that dealt with the signatures of SAR observables as a function of biomass and studies that developed models relating observations to biomass, but not inverting them. Results reported in the 186 studies that are listed in the Supplement were found to depend on forest type and structure, viewing geometry of the radar, environmental conditions at the time of image acquisition, spatial and temporal resolution of the SAR observations, and repeat-pass interval. The outcome of each study was reported in form of a set of statistical measures or quality indicators (e.g., retrieval root mean square error, estimation bias, correlation coefficient between SAR observable and biomass, backscatter dynamic range, perpendicular component of the interferometric baseline, etc.). Given the differing premises of each study in terms of data, research focus, and modeling framework, we omit comparing and interpreting numbers in this paper. Instead, we tried to identify the major trends and behaviors by grouping studies according to a number of discriminants. Figure 1 shows a bar chart detailing the year of publication of the 186 research papers. The trend in Figure 1 is in our opinion closely related to the amount of data available for research and potentially suitable for biomass retrieval. The first studies (1987-1992) were based upon a small number of airborne observations at one or multiple frequencies and suggested that lower frequencies (L- and P-band) could be more suitable for biomass estimation when compared to high frequencies (X- and C-band). The mid-1990s were dominated by studies based on AIRSAR and SIR-C/X-SAR observations and developed the concepts that were proposed earlier with multi-polarized and multi-frequency SAR data. In addition, in the mid-1990s, pioneering studies on the use of C-band ERS-1 backscatter observations for biomass retrieval were published. The first breakthrough of spaceborne satellite SAR data to estimate biomass occurred around the year 2000, with much attention being paid to ERS-1/2 and JERS-1 observations of the backscatter and the coherence. After this high, the paucity of SAR data available and suitable for biomass retrieval at the beginning of the 2000s is revealed by a minimum in peer-reviewed publications around the year 2005. With the start of operations of the L-band ALOS PALSAR sensor towards the end of 2006 and the TanDEM-X constellation in 2009, the topic of biomass retrieval was revived, reaching a first maximum in terms of peer-reviewed publications in 2013 and then in 2015. The high throughput of research since then had a break in 2016, as a consequence of the end of the ALOS and the Envisat missions in 2011 and 2012, respectively. The start of operations of ALOS-2 PALSAR-2 and Sentinel-1 towards the end of 2014, coupled with increased knowledge on the potential of several existing datasets (primarily, ALOS PALSAR backscatter and TanDEM-X interferometry), is likely to explain the second highest number of publications per year in 2017. Grouping studies in terms of sensor/platform allowed for understanding the major study objectives (Table 1). Studies using SAR data acquired by more than one sensor were associated with each sensor.
ALOS PALSAR was the sensor most used (23% of all datasets) thanks to the suitability of L-band to retrieve biomass and the observation strategy that was tailored to forest mapping applications [4]. Data acquired by ERS-1/2, JERS-1, AIRSAR, and SIR-C/X-SAR during the 1990s accounted for 44% of the datasets that are reviewed here. During the 1990s, biomass retrieval approaches were developed following signature analyses, which demonstrated the sensitivity of SAR observables to biomass. TerraSAR-X and TanDEM-X data fostered primarily studies on the exploitation of three-dimensional information that was obtained with interferometric and radargrammetric approaches. Data acquired with the ground-based scatterometer HUTSCAT are also considered since they were actively used in the definition of retrieval algorithms. The survey identified a large variety of airborne observations (Airborne SAR-R99B, AeS-1, AIRSAR, CARABAS, CCRS radar, E-SAR, EMISAR, OrbiSAR, PiSAR, PiSAR-2, PLIS, RAMSES, SETHI, and UAVSAR), having the major objective to assess the sensitivity of SAR observations to biomass for specific configurations in terms of frequency band, viewing geometry, and polarization. When grouping studies according to the set of frequencies at which the radar data were acquired, it was evident that longer wavelengths were preferred in studies that were exploiting the radar backscattered intensities, given the stronger sensitivity of the SAR backscattered intensity to biomass [5][6][7]. Studies involving the use of interferometric SAR (InSAR) observables focused on single-pass datasets or short repeat-pass intervals because of the direct and accurate measurement of vertical structural properties by interferometry, and, thus, the strong sensitivity to biomass as well [8][9][10]. L-band data were used in 71% of the papers, followed by C-band (36%), P-band (21%), X-band (19%), VHF (3%), and S-band (1%). 67% of the research papers dealt with a single band (Table 2), a consequence of the uncoordinated acquisition of SAR data by different platforms and missions and the unavailability of multi-band sensors on a single platform in space. Studies that were based on SAR data acquired at two or three frequencies were reported in 16% and 15% of the research papers surveyed, respectively. SAR data from four frequencies were used in four cases, corresponding to 2% of all studies. Grouping studies according to the SAR observable used as explanatory variable for the biomass revealed that research focused primarily on the backscattered intensity (Table 3). 71% of the papers that were surveyed used observations of the backscattered intensity either as normalized radar cross section or as normalized radar cross section compensated for local topography. Local topography was expressed in terms of terrain slope angle, sensor look angle, local incidence angle, area of pixel, etc. Here, we did not further investigate the impact of the specific processing that was applied to generate the backscatter observations. For simplicity, we use the term "SAR backscatter" when referring to the backscattered intensity. From the SAR backscatter data, metrics such as average backscatter in time, textural parameters, and n-th intensity moment of the histogram were seldom investigated (4%). InSAR observables (i.e., InSAR height, coherence, or complex coherence) were used in 23% of the studies. Retrieval of biomass was primarily investigated with observations being characterized by short repeat-pass intervals that were acquired during the ERS missions and single-pass data acquired by the TerraSAR-X/TanDEM-X constellation.
Table 3 reports slightly fewer studies involving the interferometric height, i.e., the elevation of the effective scattering phase center, than the coherence, i.e., a measure of the cross-correlation between the two images forming the interferogram [11], because the information content of the InSAR height is strongly affected by the temporal decorrelation occurring in repeat-pass scenarios. It needs to be remarked that the vertical profiling of vegetation with InSAR has been undertaken in far more instances, but these studies are not considered here because of our choice to focus on biomass retrieval approaches. The radargrammetric observable in Table 3 refers to the elevation estimated from the parallaxes between two images that were acquired at different look angles [12]. The usefulness of radargrammetric height in the context of forest biomass estimation could be assessed so far only thanks to TerraSAR-X data acquired with intersection angles of 8° or more [13]. Retrieval of biomass was undertaken in 144 studies; in 42 research papers, instead, the focus was on signature analysis or modeling. In Table 4, the studies were grouped according to whether single or multiple observations were used to retrieve biomass. Biomass was retrieved from a single image in 56% of the cases. Single images were used primarily during the 1990s, an epoch that was characterized by few satellite observations. Multi-polarized data from a single acquisition was the most common dataset when using more than one observation in a retrieval scheme. Interestingly, single-date and multi-polarized datasets have been used in prototyping studies of the 1990s as well as in more recent investigations. Multi-temporal observations were used in 12% of the studies, with a clear focus on improving the biomass estimates when compared to single-image retrieval. In 7% of the studies, data from multiple bands have been combined. Such studies were mostly based on multi-frequency airborne acquisitions or sparse C- and L-band acquisitions during the 1990s. Only six studies ingested both multi-frequency and multi-temporal SAR data in a retrieval scheme, highlighting an almost unexplored field of investigation for biomass retrieval.
Table 4. Investigations grouped according to which type of images were selected to undertake biomass retrieval (S = single image, M = multi, T = temporal, F = frequency, P = polarization).
Biomass retrieval studies were undertaken in all of the forest ecosystems (Table 5). Boreal forests were mostly targeted (41% of the studies), primarily because the sensitivity of the spaceborne SAR backscatter observations to biomass was considered to be sufficient for developing retrieval algorithms to cover the range of biomass. InSAR retrieval models were also initially developed in boreal forests. Temperate forests (25%) were targeted as part of several airborne campaigns. The retrieval of biomass in tropical forests was assessed in 34% of the studies, primarily with longer wavelengths because of the supposedly stronger penetration into the forest canopy and the increased sensing of the major structural forest components, which primarily explain the biomass. However, given that most of the studies in tropical forests have been published in the last 10 years thanks to the availability of single-pass and repeat-pass InSAR data and more in situ data for model training and retrieval validation, it is believed that the retrieval of biomass will be the target of a large number of studies in such regions in the near future.
Table 5. Number of studies per forest ecoregion: Boreal (82), Temperate (49), Tropical and sub-tropical, including savannas, cerrado and miombo woodlands (68).
Grouping the studies in terms of biomass variable of interest revealed an interesting divide between ecoregions (Table 6). AGB was the variable of interest in 71% of the studies, being retrieved almost exclusively in temperate and tropical forests. The sampling units here were primarily forest field plots with a size <1 ha. GSV was investigated in 26% of the studies. Retrieval of GSV was mostly undertaken in boreal forest, and, primarily, at the level of forest management units, i.e., forest stands, which were typically larger than 1 ha. Three studies looked at the direct retrieval of carbon stocks or carbon stock densities (i.e., tC/ha).
In case of a retrieval using backscatter or coherence data, it may be more rigorous to estimate the volume of a forest, i.e., a forest structural variable, and then convert the estimate to dry mass by accounting for wood density.If GSV is estimated, then the stem biomass needs to be expanded for the stem-to-total biomass proportion [2].Nonetheless, the conversion from volume to dry mass may be highly uncertain especially in regions of high biodiversity and composition where their relationship may be poorly characterized because of the spatial heterogeneity of species and vegetation structure.It is not the scope of this paper, however, to evaluate the prospects of retrieving volume or dry mass, as it is well understood that the observables here considered are only indirectly related to forest above-ground biomass.
Survey of Biomass Retrieval Approaches
The survey evidenced that the majority of the retrieval approaches that are presented in the literature used a small variety of observables, and, in particular, exploited observations that were taken at a single frequency (Table 4).The paucity of retrieval studies targeting SAR multi-frequency datasets and the increased availability of SAR data that were acquired at multiple frequencies in recent years (e.g., spaceborne X-, C-and L-band) suggested looking in more detail at retrieval approaches separately for single-frequency data and multi-frequency data.It is believed that trends in single frequency retrieval approaches are established and can inform multi-frequency retrieval strategies, which, on the other hand, are still in their infancy.
Retrieval of Biomass Using Backscatter Observations
Backscatter-based approaches can be grouped into three main categories:
• parametric empirical regression models;
• parametric semi-empirical and physically-based models; and,
• non-parametric models.
Empirical regression models use a simple function with a limited number of coefficients to relate biomass and forest backscatter observations at one or multiple polarizations (and possibly across multiple observations in time).However, there is not a consensus on a single empirical model that performs better than the others.Several studies at first undertook an analysis of the SAR observations and the corresponding biomass values.Then, the mathematical function was identified that best represents the relationship between the observations and the biomass variable.Keeping the empirical model simple implied that such models could be inverted to allow for a retrieval of biomass from a set of observations of the SAR backscatter.Several authors, instead, directly set up a function from which the biomass could be predicted from the observed SAR backscatter.Notwithstanding whether a forward model is inverted or whether a direct retrieval model is presented, four typologies of empirical models to retrieve biomass were identified from the survey: linear models, multiple linear models, rise-to-max-exponential models, and logarithmic models.The latter, however, will not be discussed further as used only in very few cases.
Linear models relating the biomass (its natural logarithm or a power value) to SAR backscatter (either in linear scale or in the decibel scale) are the simplest possible type of regression models, requiring the estimation of the coefficient of slope and intercept from a set of training stands or plots.Such models have been used mostly for low frequency data (L-band, P-band, VHF), but occasionally also to explain the relationship between biomass and SAR backscatter at X-and C-band.In case of multi-polarized data, multiple linear regression models have been proposed; using multiple polarizations was reported to improve the retrieval accuracy when compared to single-image retrieval, in particular, when cross-polarized data were used.The advantage of linear models is that they are straightforward to apply since they attribute one biomass to any input value(s) of the backscatter.Nonetheless, deviations from the presumed linear relationship between backscatter and biomass, or transformations thereof, likely introduce systematic under-and over-estimation in certain biomass ranges.
The rise-to-max-exponential model was often used when assuming that a non-linear functional dependence existed between the SAR backscatter and the biomass, Equation (1).It expects the SAR backscatter to increase from the lowest value that was observed for a bare surface to the maximum value of the densest possible forest.This trend is typical for forests that were observed at the X-, C-, and L-band.The SAR backscatter increases rapidly for increasing biomass B from the level represented by the coefficient a, corresponding to a virtually unvegetated planar surface, to a backscatter level at which the model loses sensitivity to biomass.The coefficient of the exponential function, c, determines the slope of the modeled backscatter.The coefficient b corresponds to the backscatter value for a vegetation layer having a theoretically infinite biomass.
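A form of Equation (1) that is consistent with the coefficient definitions above (a for the bare-surface backscatter, b for the asymptotic backscatter at theoretically infinite biomass, and c controlling the slope) is, as a hedged reconstruction rather than the exact notation of the cited studies:

\[
\sigma^0_{\mathrm{for}}(B) \;=\; b + (a - b)\, e^{-c B} \tag{1}
\]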
After model training, the inversion of Equation (1) is straightforward. The major drawback is that the estimation is affected by substantial errors when the backscatter is close to the largest of the modeled values, which necessitates a definition of a maximum retrievable biomass, for instance, with the biomass level for which the modeled sensitivity of backscatter to biomass falls below a certain threshold or the model inversion entails an error exceeding a desired level of accuracy.
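As an illustration of model training and inversion with a maximum retrievable biomass, the following sketch fits the rise-to-max-exponential form above to a set of training plots and inverts it analytically. The biomass and backscatter values, the initial guess, and the 300 t/ha saturation threshold are placeholders chosen for the example, not values taken from the surveyed studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def rise_to_max(B, a, b, c):
    # Backscatter in dB: a = bare-surface level, b = asymptote for dense forest
    return b + (a - b) * np.exp(-c * B)

# Hypothetical training plots: biomass (t/ha) and backscatter (dB)
B_train = np.array([10, 30, 60, 90, 120, 160, 200, 250], dtype=float)
s0_train = np.array([-22.0, -19.5, -17.8, -16.9, -16.3, -15.9, -15.7, -15.6])

(a, b, c), _ = curve_fit(rise_to_max, B_train, s0_train, p0=(-23.0, -15.0, 0.02))

def invert(s0, max_biomass=300.0):
    # Invert Equation (1); clip near the asymptote where sensitivity is lost
    ratio = (s0 - b) / (a - b)
    if ratio <= np.exp(-c * max_biomass):
        return max_biomass
    return max(0.0, -np.log(ratio) / c)

print(invert(-17.0))   # biomass estimate for a -17 dB observation
```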
Semi-empirical and physically based models describe the backscattered intensity from a forest in terms of the main scattering mechanisms occurring in a forest. In their simplest formulation, the models consist of a small number of components, each describing one type of scattering mechanism, with a limited number of parameters related to the structural properties of the forest and the way the microwaves interact with the forest structure (e.g., attenuation, density of trees, or density of scatterers, etc.). With the aid of some simplifying assumptions, the models are set to express the total forest backscatter as a function of a single forest variable and consist of mathematical functions that can be easily inverted. One formulation that has been widely used to model the X-, C-, and L-band backscatter as a function of GSV or AGB is the Water Cloud Model (see e.g., [14][15][16]). The model stems from radiative transfer theory and expresses the total forest backscatter as an incoherent sum of the backscattered intensities from the canopy and the forest floor. In Equation (2), each contribution is expressed in terms of the biomass parameter of interest, V. The contribution of each component is expressed by the respective backscattering coefficients (σ⁰_veg and σ⁰_gr), which are weighted by the relative contribution of the component to the total backscatter, expressed by the forest transmissivity. The transmissivity is typically modeled as an exponential, including the biomass variable of interest and an empirical coefficient that is assumed to be related to the forest attenuation, β.
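Following this description, the Water Cloud Model of Equation (2) can be written, as a standard reconstruction with the transmissivity modeled as exp(−βV), as:

\[
\sigma^0_{\mathrm{for}} \;=\; \sigma^0_{\mathrm{gr}}\, e^{-\beta V} \;+\; \sigma^0_{\mathrm{veg}}\left(1 - e^{-\beta V}\right) \tag{2}
\]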
Given that the model parameters have a physical meaning or can be modeled in terms of some additional parameters, they can be applied potentially wherever the model describes the physics of the scattering occurring in a forest.Such models tend to idealize the interaction of microwaves with the forest, which makes them potentially too general to be able to capture the complexity of a forest structure in their small number of parameters.The use of more advanced models could serve to resolve such an issue; however, such models require a significant amount of external information to be correctly calibrated and the inversion of such models often require numerical recipes.In this respect, a lookup table that links a set of parameters of a given structure to a specific value of the backscatter appears to be a promising approach.By matching an observation of the SAR backscatter with the value in the lookup table that is closest to the observation, it is possible to derive an estimate of the biomass without the need of having to invert the backscatter model [17].Model training is, however, necessary to set up the lookup table, which poses the question to which extent this approach can be generalized.
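A minimal sketch of the lookup-table idea described above is given below; the biomass levels and the modeled backscatter values are hypothetical and would, in practice, be generated by a trained forward model for the structure and frequency of interest.

```python
import numpy as np

# Hypothetical lookup table: biomass levels (t/ha) and modeled backscatter (dB)
biomass_levels = np.arange(0, 310, 10, dtype=float)
modeled_s0 = -15.5 + (-22.8 + 15.5) * np.exp(-0.02 * biomass_levels)

def retrieve(observed_s0):
    # Match the observation with the closest modeled backscatter value
    return biomass_levels[np.argmin(np.abs(modeled_s0 - observed_s0))]

print(retrieve(-17.0))   # nearest-match biomass estimate, no model inversion needed
```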
Non-parametric models include a range of computational algorithms that allow learning from a set of observations. Learning means that multiple models are built and then refined until convergence is reached. Such models require the intervention of the operator to tune the parameters of the algorithms, while the learning process and the construction of the models are left to the architecture of the system itself. They are quite advanced and have been proposed for the retrieval of biomass to deal with aspects that parametric models either do not consider or fail to represent correctly in their oversimplification of scattering mechanisms. The advantage of non-parametric models over parametric models is greatest when multiple input datasets and additional auxiliary datasets are available. Nonetheless, such models require a fair amount of training data to perform optimally, which is often beyond what is available. Especially when aiming at mapping large areas, the feasibility of using non-parametric models therefore remains unclear.
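As an illustration of a non-parametric retrieval, the sketch below trains a random forest regressor on backscatter features against reference biomass; the choice of algorithm, the feature set and the random placeholder data are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))          # e.g. HH/HV backscatter at two dates
y_train = rng.uniform(0.0, 300.0, size=200)  # reference biomass, t/ha (placeholder)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                  # "learning" from the observations
print(model.predict(rng.normal(size=(3, 4))))  # biomass for three new samples
```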
Compared to a retrieval based on SAR backscatter data only, the few studies exploring texture-based models reported smaller retrieval errors when using texture either as a stand-alone predictor or as a complement to the SAR backscatter [18][19][20]. Texture is a measure of the spatial homogeneity of the scattering and, in this sense, should contain information about forest structure. It is reasonable to assume that the predictive power of texture is strictly related to the spatial resolution of the SAR data and to its radiometric accuracy. However, none of these assumptions has been proven, and it is unclear whether the better performance of a texture-based retrieval was solely due to the properties of the texture or could be explained by the type of (empirical) modeling used for the retrieval.
With multiple observations of the SAR backscatter, strategies that exploit the temporal aspect of the radar signal were developed with the aim of decreasing the error of each single estimate due, e.g., to noise. Weighted combinations of biomass estimates from individual SAR observations were proposed in [15,16,21], and multiple regression models trained with in situ observations were proposed in [22]. The retrieval was found to improve substantially for observations that were weakly correlated in time and had an overall weak sensitivity to biomass, e.g., at C-band [15]. On the contrary, the improvement with respect to the best retrieval from a single observation was marginal in the case of strong temporal correlation and strong sensitivity to biomass, e.g., at L-band [23]. In the latter case, the requirement on the number of observations necessary to improve the retrieval error is less stringent. Yet, it is also unclear whether criteria exist according to which the retrieval performance can be assessed on the basis of a minimum number of observations.
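A hedged sketch of such a multi-temporal combination is given below: per-acquisition biomass estimates are merged with weights that are simply passed in by the user. The cited studies define their own weighting schemes (e.g., based on the modeled sensitivity of each acquisition), which are not reproduced here, and the numbers are invented.

```python
import numpy as np

def combine(estimates, weights):
    """Weighted combination of per-acquisition biomass estimates (t/ha)."""
    return np.average(np.asarray(estimates, dtype=float),
                      weights=np.asarray(weights, dtype=float))

# hypothetical per-date estimates for one pixel and their relative weights
per_date = [180.0, 140.0, 210.0, 165.0]
weights = [0.8, 0.3, 1.0, 0.6]
print(combine(per_date, weights))
```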
Retrieval of Biomass Using InSAR Observations
Coherence-based retrieval models are mostly parametric, with an equal share of studies investigating empirical regression models [24][25][26][27] and semi-empirical models [8,9,28]. Their formulation is rather simple, and the shape of the model predicted for a given set of observations differs only in the case of rather long baselines, since empirical models do not include a term related to volume decorrelation [29]. Similarly to the Water Cloud Model for the SAR backscatter, the Interferometric Water Cloud Model (IWCM) in Equation (3) describes the complex coherence of a forest as a sum of two contributions from the forest floor and the canopy. Each contribution has its own temporal decorrelation term (γ_gr and γ_veg). The volumetric decorrelation induced by the spatial baseline is accounted for in the canopy term of the IWCM, with α being the two-way tree transmissivity (in dB/m) and ω being expressed in Equation (4) by B_n, the perpendicular component of the spatial baseline, λ, the wavelength, R, the slant range distance, and θ, the local incidence angle. As for the WCM in Equation (2), in the IWCM we use the symbol V to refer to biomass.
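The sketch below only mirrors the structure described for the IWCM — an intensity-weighted sum of ground and canopy coherence contributions, with volume decorrelation attached to the canopy term — and does not reproduce Equations (3) and (4); the transmissivity is taken as exp(−β·V), the volume decorrelation factor is passed in as a given number, and all parameter values are illustrative.

```python
import numpy as np

def iwcm_like_coherence(V, gamma_gr, gamma_veg, gamma_vol, s0_gr, s0_veg, beta):
    """Magnitude of the modelled forest coherence for biomass V: an
    intensity-weighted mix of ground and canopy contributions, with the
    volume decorrelation factor gamma_vol attached to the canopy term."""
    T = np.exp(-beta * V)                          # assumed transmissivity
    num = gamma_gr * s0_gr * T + gamma_veg * gamma_vol * s0_veg * (1.0 - T)
    den = s0_gr * T + s0_veg * (1.0 - T)
    return np.abs(num / den)

V = np.linspace(0.0, 300.0, 7)                     # t/ha
print(iwcm_like_coherence(V, gamma_gr=0.8, gamma_veg=0.5, gamma_vol=0.9,
                          s0_gr=0.02, s0_veg=0.06, beta=0.008))
```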
The performance of coherence-based retrieval models depended strongly on whether temporal decorrelation enhanced the sensitivity of the coherence to forest structural parameters (e.g., under windy conditions) or cancelled out such sensitivity (e.g., after rainfall). The C-band ERS-1/2 one-day coherence was found to outperform the backscatter as long as at least one image pair acquired under dry and stable environmental conditions was available [8]. In the case of ERS-1 3- to 12-day repeat-pass intervals, the sensitivity of the coherence to biomass decreased; accordingly, the retrieval error increased [30]. A certain potential for the use of L-band repeat-pass coherence from the JERS-1 (44-day repeat pass) and ALOS PALSAR (46-day repeat pass) sensors was demonstrated in Siberian boreal forest [26,27,31]. Long winter-frozen conditions guaranteed the maximum contrast of coherence between low and high biomass. On the other hand, single-pass coherence, such as that from the TanDEM-X mission, is affected only by volume decorrelation, so that the sensitivity to biomass depends on the length of the spatial baseline [32].
As in the case of the backscatter, a multi-temporal dataset of coherence observations can be combined to improve the accuracy of the biomass estimates with respect to the values obtained from individual coherence observations [8]. Experiments undertaken with ERS-1/2 tandem data revealed that a small number of coherence observations acquired under environmental conditions that preserve the coherence were sufficient to obtain the best possible retrieval accuracy [33,34]. The use of multi-temporal coherence has not been investigated for other interferometric datasets, either because a large number of coherence observations have never been obtained (e.g., at L-band from ALOS PALSAR) or because there has not been sufficient interest in evaluating repeat-pass datasets that are potentially suitable for retrieving biomass (e.g., X-band TerraSAR-X 11-day or COSMO-SkyMed 1-16-day). Retrieval of biomass from Sentinel-1 6- and 12-day repeat-pass coherence has not been reported yet; however, it is assumed that reliable estimates will be obtained only in areas with long periods of winter/frozen conditions or dry conditions preserving the coherence.
An InSAR height-based retrieval has enormous potential because of the direct relationship between the interferometric phase, ∆Φ, and elevation, Equation (5).
In the case of vegetation, it is worth noting that h_int refers to the elevation of the scattering center, which is a function of radar frequency, canopy closure, and the vertical distribution of the scatterers. These factors, as well as information on the elevation of the ground, need to be considered when estimating biomass from the InSAR height. Several research papers demonstrated that simple linear relationships could predict biomass from estimates of InSAR height of single-pass datasets [35][36][37][38][39][40][41][42]. Nevertheless, it is unclear whether such linear models, validated at a number of test sites in boreal and savannah forest, apply in other forest ecosystems as well [42]. An advanced solution with more potential for generalization is given by physically-based models, such as Equation (3), which take into account the sensitivity of the InSAR elevation to the baseline, and by merging single-image estimates of biomass in a multi-temporal combination [40]. Regardless of the retrieval approach, the quality of the retrieved biomass was found to be more affected by the uncertainty of the height estimate, i.e., the coherence, and by the availability of accurate information on the elevation of the terrain beneath the forest than by the degree of penetration of the microwaves into the forest canopy, i.e., the location of the effective phase scattering center.
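A hedged sketch of the two steps implied by Equation (5) and the discussion above: convert the interferometric phase to a scattering-centre height, then feed that height into a simple linear biomass model. The repeat-pass phase-to-height factor of 4π is assumed (for a single-pass bistatic system such as TanDEM-X the factor would be 2π), the terrain elevation is assumed to have been removed already, and the linear-model coefficients are invented for illustration.

```python
import numpy as np

def phase_to_height(dphi, wavelength, slant_range, inc_angle, b_perp):
    """Scattering-centre height (m) above the reference surface from the
    flattened interferometric phase dphi (rad), repeat-pass convention."""
    return dphi * wavelength * slant_range * np.sin(inc_angle) / (4.0 * np.pi * b_perp)

def linear_biomass(h_insar, slope, intercept):
    """Illustrative linear model AGB = slope * h_insar + intercept (t/ha)."""
    return slope * h_insar + intercept

dphi = np.array([0.4, 0.9, 1.6])                    # rad, hypothetical values
h = phase_to_height(dphi, wavelength=0.031, slant_range=600e3,
                    inc_angle=np.deg2rad(40.0), b_perp=150.0)
print(linear_biomass(h, slope=10.0, intercept=5.0))
```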
Estimates of terrain height can be obtained with laser scanning techniques or with a low-frequency interferometric system. The necessity of having (i) a single-pass interferometer, possibly at high frequency, such as TanDEM-X, and (ii) an auxiliary dataset on terrain elevation implies that such an approach is not feasible for mapping large areas in the near future, except in regions with an advanced forest mapping and monitoring system [41]. In [38], the use of the smallest InSAR phase was proposed as a means to avoid the need for an independent dataset of terrain elevation, with interesting results that deserve to be pursued further.
Although an interferogram contains two observables that are both potentially suitable to support the retrieval of biomass, most retrieval studies favored the use of only one observable, neglecting the other. In repeat-pass scenarios, the retrieval could not profit from the InSAR phase because of its large uncertainty [43]. With single-pass interferometry and long baselines, however, both InSAR height and coherence are sufficiently sensitive to forest structural properties and allow their synergy to be exploited. The two-level model (TLM) inversion proposed in [44,45] goes in this direction by estimating two parameters (area-fill factor and InSAR height) from the complex coherence, which resemble canopy closure and forest height, respectively, i.e., two variables that are closely related to biomass. In [38], it was argued, however, that coherence from TanDEM-X interferograms would only contribute 7% to a combined estimate of biomass. Further along the line of exploiting primarily InSAR phase information, the use of individual Fourier Transform frequency components of the vertical profile was suggested to estimate biomass more accurately than the mean InSAR height for the reference unit (as used in most of the studies cited here) [10]. This appears to be a promising approach to be further evaluated. It is worth noting that the predictors and approaches proposed in [38,42] are favored by the high spatial resolution of the SAR data used in the experiments, pointing to the importance of scale in the context of biomass retrieval with interferometric data.
Multi-Frequency Retrieval Approaches
The literature survey identified two types of multi-frequency retrieval approaches based on SAR backscatter data. One type was developed with AIRSAR (C-, L-, and P-band) and/or SIR-C/X-SAR (X-, C-, and L-band) images of the SAR backscatter acquired mostly over northern boreal and temperate forest during the 1990s [46][47][48][49]. Both empirical (multivariate regression) and physically based models were systematically assessed. The second type can be considered more "opportunistic", since the models were developed with spaceborne SAR images acquired by multiple sensors independently from each other over a certain area, e.g., ERS and JERS [21,25] or ALOS PALSAR and RADARSAT-2 [50,51], with TerraSAR-X in addition [52]. Except for [25,50], where coherence was used as one of the predictors, in all other research papers dealing with multi-frequency retrieval the predictor consisted of the SAR backscatter only.
The multi-frequency retrieval approaches can be grouped in terms of their models relating observations to biomass as follows:
• least-squares regression models applied to SAR backscatter and SAR backscatter ratios of several bands and polarizations;
• neural networks inverting a physically-based model;
• non-parametric models; and
• multi-temporal combinations of biomass estimates obtained from multi-frequency datasets.
The contribution of the different bands to the retrieval accuracy differed from study to study. P-band and L-band were judged to be the most predictive, but there was no consensus on whether one specific polarization would be more effective than others to predict biomass. C-band data were reported to have less potential than longer wavelengths, but could provide improved estimates when combined with L-band data only (i.e., when P-band data were unavailable). The contribution of X-band backscatter data was considered negligible when data from at least two other frequencies were available; the retrieval instead improved when X-band was combined with L-band data [53,54]. Non-parametric models performed better than a multi-linear regression in [54], whereas in [55] the retrieval performed better using a physically based model than using an empirical multivariate model. Finally, adding InSAR height at C-band to a linear model expressing biomass as a function of multi-polarized AIRSAR data improved the retrieval [56]; furthermore, the performance was better for increasing wavelength.
With the launch of the P-band BIOMASS satellite ahead, one question is how much data acquired by sensors operating at higher frequencies (C-, L-, and X-band, possibly S-band) can aid biomass retrieval from data acquired by BIOMASS. With AIRSAR data, one major benefit of adding data from shorter wavelengths to a retrieval based only on P-band was discussed by [57], who showed that the range of biomasses that could be predicted increased from 160 tDM/ha to 240 tDM/ha (tDM = tons of dry matter) in a mangrove forest. This aspect deserves further investigation, as P-band campaigns by the European Space Agency are delivering datasets that can be combined with other airborne and spaceborne datasets.
It is interesting to observe that only one paper coordinated the use of multi-frequency backscatter and interferometric observations [58]. Biomass was estimated from the difference of interferograms obtained at X- and P-band, representing the elevation of the forest surface and of the terrain underneath, respectively. The (X-P)-band vertical information was combined with P-band backscatter, which was considered to be a measure of vegetation density, to estimate biomass. It was argued that such an approach mimics the most "natural" way of calculating volume, on the basis of the modeling framework reported in [59].
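A sketch of the idea summarised above for [58]: the difference between X- and P-band interferometric elevations approximates the forest "thickness", which is combined with P-band backscatter as a density proxy. The simple bilinear predictor and its coefficients below are assumptions for illustration, not the model of the cited study.

```python
import numpy as np

def canopy_height(h_xband, h_pband):
    """Forest 'thickness' (m) as the difference of the two phase-centre elevations."""
    return np.clip(h_xband - h_pband, 0.0, None)

def biomass_from_height_and_density(height, sigma0_p, a, b, c):
    """Illustrative predictor AGB = a*height + b*sigma0_p + c (t/ha)."""
    return a * height + b * sigma0_p + c

h = canopy_height(np.array([22.0, 31.0, 14.0]),     # X-band elevations (canopy)
                  np.array([3.0, 5.0, 2.0]))        # P-band elevations (ground)
print(biomass_from_height_and_density(h, sigma0_p=np.array([0.05, 0.08, 0.03]),
                                      a=8.0, b=500.0, c=0.0))
```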
Pathways of Biomass Estimation Approaches Based on SAR and InSAR Data
The literature survey showed that both parametric and non-parametric models are feasible for retrieving forest biomass with mono- and multi-frequency SAR data. Non-parametric models perform well and could be a natural candidate if the aim is to deliver AGB estimates to be used as one of several layers in environmental studies, policy making, predictions, forecasting, etc. If, instead, the focus is on having an algorithm that is robust to environmental conditions and forest structure, attention should be given to parametric models. Strengths and weaknesses of the biomass retrieval approaches surveyed in this study and their prospects are summarized in Table 7. The synoptic view of biomass retrieval approaches based on SAR data should not be taken as conclusive with respect to which is best suited for a given investigation. The large range of retrieval statistics reported in the surveyed papers was not conclusive with respect to which specific model or equation performs best. The performance of a biomass retrieval scheme has to be judged in its entirety, meaning that a very powerful retrieval scheme will not perform if the input data are sub-optimal for the purpose of retrieving biomass. The same applies if one has collected data that are potentially suitable for retrieving biomass but the algorithmic aspects are poorly characterized, i.e., the algorithms do not extract the information on biomass contained in the input data. It is important to remark here that the choice of a retrieval approach is often constrained by data availability. In [60], biomass for the northern hemisphere was estimated with the parametric model in Equation (2). For training the model, a solution that does not rely on in situ measurements of biomass had to be developed to account for their unavailability in several regions of the mapped area. The paucity of in situ observations implied that several assumptions had to be made in order to train the model and achieve biomass estimates that are at the same time reliable and spatially consistent. Having to relax the modeling framework had the consequence that the retrieval performance was judged to be inferior to what could be expected if the model had been trained locally with in situ data, so as to adapt to the local environmental conditions at the time of image acquisition and the local structural features of the vegetation. While the rules underpinning model training and biomass retrieval are being established for large-scale biomass estimation using SAR backscatter or coherence data [15,25,60,61], there seems to be a need to further investigate the spatial variability of the intriguing relationship between InSAR height and biomass [37,38,40-42]. The integration of multiple observations from InSAR and SAR backscatter at different frequencies has not been attempted yet and should begin to receive more attention, despite the lack of data that are truly suitable for such a combination, as in the study proposed by [58].
The uncertainty of biomass estimation has seldom been addressed [60,62,63]. In those cases, error models for each of the terms involved in the retrieval procedure have been presented and discussed. The uncertainty of the retrieved biomass based on one or a few observations of the SAR backscatter was reported to be too large to consider the estimate meaningful (at L-band) [62]. Averaging the SAR backscatter over adjacent pixels reduced the uncertainty of the biomass estimates, following a strong reduction of the uncertainty of the SAR observation [63], which is considered to be the largest contribution to the overall retrieval uncertainty [62]. Alternatively, large stacks of weakly correlated observations, such as those obtained at C-band, can reduce the uncertainty with respect to a retrieval based on a single observation [60].
Table 7. Summary of retrieval approaches surveyed in this study with an outlook on their performance when using multi-frequency synthetic aperture radar (SAR) data as input.
Approaches for Estimating the Biomass in Trunks, Branches and Foliage
The backscatter received by a radar from forested terrain, σ0_for, may be described as the sum of four scattering mechanisms, each contributing with more or less power to the total backscatter that the radar receives: σ0_for = σ0_g + σ0_c + σ0_tg + σ0_cg, with σ0_g representing the backscatter from the forest floor, σ0_c the backscatter from the canopy, σ0_tg trunk-ground interactions, and σ0_cg crown-ground interactions. It therefore makes sense to postulate that forest above-ground biomass can be estimated starting from estimates of its components in stem, branches, and foliage. Theoretical forest scattering models have been developed based on such a description, e.g., [64][65][66][67], to predict radar backscatter at different wavelengths, polarizations, and incidence angles as a function of the size, orientation, and dielectric properties of the major tree constituents, and hence provide a framework for analyzing the expected effect on backscatter of tree architectural differences and varying biomasses in trunks, branches, leaves, and needles. Overall, modeling results agree in that, with increasing radar wavelength, the scattering from larger tree constituents gains importance; nonetheless, the modeled backscatter signal was hardly dominated by a single scatterer type/backscatter contribution consistently in all of the studies surveyed [64,68-71]. As a result, scattering theory suggested that:
1. specific radar configurations should be best suited for the retrieval of a particular biomass compartment (i.e., biomass in foliage, large and small branches, and trunk), dependent on how exclusively scattering from forest at a certain wavelength and polarization is associated with a single scatterer type and scattering mechanism;
2. the performance of the retrieval of total above-ground biomass with any single wavelength and polarization is constrained by the inherent correlations between the biomass compartments the radar senses and the total above-ground biomass; and
3. the performance of the retrieval of above-ground biomass should benefit from the use of multiple wavelengths and polarizations, since each maximizes the sensitivity to the biomass in different compartments.
So far, only a few studies have attempted to verify the hypotheses formulated above with actual SAR data, and even fewer have developed retrieval algorithms that seek to optimize the retrieval of total above-ground biomass through a combined use of multi-frequency and multi-polarization SAR data for estimating the biomass in different compartments of trees [46,47,49,55,72-76]. All studies were conducted at temperate and boreal forest sites across North America, almost exclusively with data acquired by AIRSAR and SIR-C/X-SAR during the 1990s. Given the small number of studies addressing the retrieval of compartment biomass, we provide a brief review of each, focusing on the highlights from an algorithmic point of view.
Under the assumption that L-HV backscatter is more closely related to basal area and height, AGB was estimated by first estimating these two attributes from L-band radar data and then applying allometric equations to convert the height and basal area to biomass [72]. The advantage of estimating biomass indirectly via basal area and height was, however, not demonstrated. Multiple linear regression models relating multi-frequency and multi-polarization backscatter intensities to the logarithm of compartment biomass were developed in [75]. A stepwise regression analysis showed that most of the variability of biomass in the different compartments was explained by P-band backscatter in all polarizations and L-band backscatter in HV polarization. This study suggested that, when estimating branch biomass from SAR, the biomass in other compartments as well as the total above-ground biomass could be estimated via allometric relationships. It was found that the approach of estimating biomass compartments indirectly via allometric relationships and SAR-derived branch biomass performed better than a retrieval based on models relating the SAR backscatter to the biomass compartment of interest directly. A similar approach for estimating compartment and total biomass was presented in [73]. Total biomass was estimated by summing up the canopy and trunk biomass estimates, the latter being estimated from the SAR-derived estimates of basal area and height with the aid of ancillary information on the tree species' wood density and taper functions describing the trunks' shape. The rationale of the approach also followed the idea that backscatter at different wavelengths and polarizations is most correlated with specific biomass compartments. A comparison of the performance of different approaches for the retrieval of total above-ground biomass and the biomass in different tree components with multi-frequency radar was presented in [55]. The two methods in which the total above-ground biomass was estimated indirectly via branch biomass [75] or via basal area, height, and crown biomass [73] performed slightly better than the direct retrieval of total above-ground biomass.
Knowledge of the biomass in different tree components is important for understanding the dynamics and impacts of forest fires, in particular in fire-prone ecosystems, such as boreal forests or savannahs [49]. The study in [49] therefore investigated the use of airborne multi-polarization C-, L-, and P-band imagery for estimating key forest biophysical parameters with respect to forest fire, such as the biomass in trunks and crowns (including the biomass in foliage and branches, as well as the biomass of non-forest vegetation such as sagebrush), canopy bulk density (i.e., the crown biomass per unit crown volume), and the foliage moisture content.
While all of the studies discussed so far related backscatter to total and compartment biomass using empirical models, a semi-empirical modeling approach was followed in [76] for estimating crown and trunk biomass via the crown and trunk water content. It was argued that radar backscatter is as much a function of the tree architecture and biomass distribution across trunks, branches, and foliage as it is a function of the water content, which primarily drives the dielectric properties of vegetation. An inversion targeting the vegetation's moisture content may therefore be more adaptive to the time-variant moisture influence on backscatter. A semi-empirical model, considering direct crown as well as crown-ground and trunk-ground scattering, was formulated to express backscatter as a function of crown and trunk moisture content.
Model calibration and the estimation of compartment and total above-ground biomass from multi-frequency SAR data generally rely on the availability of in situ measurements. An alternative approach, first proposed by [77] and tested by [78], is to link forest succession and scattering models to predict the backscatter response to changing tree architecture and biomass across an entire chrono-sequence of forest growth and for a wide range of site conditions, forest management practices, disturbance regimes, etc. In [78], a GAP model was deployed, which predicts tree- and population-level forest dynamics by simulating individual tree birth, growth, and mortality given specific site conditions (e.g., in terms of temperature, light availability, soil moisture, and soil fertility) and the competitive behavior of species. Such forest models enable stand-level predictions of the forest structural aspects that are relevant for modeling radar backscatter. The forest model output, together with ancillary information on branch and foliage geometry and the dielectric constants of tree components, was then used to model the SAR backscatter at co-polarized C-, L-, and P-band based on a scattering model developed by [65]. Linear models relating co-polarized backscatter at P-, L-, and C-band to total above-ground biomass were calibrated based on the forest model predictions of above-ground biomass and the associated scattering model predictions of backscatter. A direct application of the model for estimating above-ground biomass from airborne radar imagery was not feasible due to systematic offsets between modeled and observed backscatter, in particular in the case of P-band. However, cross-calibrating the model with the aid of forest plots for which the observed forest structure agreed with the forest model predictions of structure allowed the above-ground biomass to be estimated from the radar imagery with reasonable accuracy, comparable to the performance of a model calibrated directly with in situ data.
The majority of the studies discussed above suggested that the use of multi-frequency SAR data for estimating the biomass in different tree compartments as well as the total above-ground biomass allows for improved estimation accuracies. Even though one might expect that the independent modeling of multi-frequency backscatter as a function of compartment biomass should allow complex, non-linear relationships between compartment and total above-ground biomass throughout forest succession to be captured better, the results presented so far were overall inconclusive with respect to the assumption that total above-ground biomass may best be estimated via independent estimates of the biomass in different tree compartments. Only the results presented by [75] for loblolly pine forests demonstrated that estimating total above-ground biomass via radar-based estimates of compartment biomass (i.e., branch biomass) performed better than the direct estimation of total biomass. In [55], instead, rather minor improvements were reported when estimating total biomass as the sum of radar-derived estimates of trunk and crown biomass, or via an allometric relationship between branch and total above-ground biomass, when compared to the direct approach. In addition, the radar configurations identified as being ideal for estimating the biomass in different tree compartments did not always comply with expectations from scattering theory.
The inconclusiveness of the results may be associated with the following three factors.
• The high inherent correlation between biomass compartments and the total biomass complicates the identification of causative relationships between the biomass in different compartments and multi-frequency/-polarization backscatter.
• Environmental conditions (soil moisture, canopy moisture, freeze/thaw) at the time of image acquisition could introduce backscatter variations that have a magnitude similar to the backscatter changes associated with changing biomass; in addition, they may alter the relative contribution of different scattering mechanisms and obscure the underlying correlations between backscatter and compartment biomass. Only a few of the existing studies concerned with the retrieval of compartment biomass interpreted their results in light of the prevalent imaging conditions [55].
• While the modeling results suggested that the backscatter is often dominated by a single scattering mechanism, the correlation analyses between backscatter and compartment biomass in the studies discussed above did not present clear evidence for this. Differences between modeled and actually observed backscatter were in many cases significant [71,79].
Each of the factors above deserves further investigation to clarify whether the retrieval of biomass from SAR data is better addressed by estimating biomass compartments rather than by predicting total biomass directly. At this stage, the choice of the specific modeling framework to estimate biomass is considered to be of minor importance.
Conclusions
This paper summarized the results of a literature survey on forest biomass retrieval with SAR backscatter and interferometric SAR data, with the aim of identifying pathways of research and suggesting future advances. Pathways clearly depend on two factors: data and models.
Single-sensor observations are useful for understanding to what extent biomass can be predicted with the configuration of the sensor (frequency, polarization, look direction, spatial resolution, thermal noise, etc.). Nonetheless, the key to biomass retrieval appears to be the combination of data from multiple sources. A multi-frequency SAR perspective is an advance with respect to a mono-spectral retrieval, which is still dominant in the remote sensing community. Assuming that a given SAR frequency and polarization sense a particular component of a forest, the predictors of a multi-frequency retrieval approach bring a more complete description of the forest biomass to the retrieval model when compared to a single-sensor approach. As spaceborne SAR observations span from X- to L-band, and will potentially extend to P-band in the future, there are substantial reasons for considering this pathway as realistically improving the accuracy of biomass retrieval and reducing uncertainties. A multi-frequency retrieval solution, however, should be seen in a wider perspective where polarization, look direction, and spatial resolution are best combined to maximize the extraction of biomass-related information from the SAR data. Combinations of InSAR observations and SAR backscatter observations are envisaged here; a limited number of observables from current and future spaceborne SAR missions may be able to characterize biomass with a level of error and uncertainty (e.g., 20% and 50%, based on the best results in the surveyed papers) of great appeal to the science communities requesting spatially explicit estimates of carbon pools in vegetation. In addition, we encourage repeated acquisition of multi-frequency SAR datasets by the different missions. Since the training phase adapts the retrieval model to the SAR observable, the environmental conditions at the time of image acquisition are somewhat transferred to the biomass estimate. Exploiting the temporal features of the observations can reduce such conditioning and allow the extraction of biomass-related information from the set of input observations to be maximized.
The reasoning on complementing datasets to improve the retrieval can be expanded by bringing in data acquired at other, not strictly microwave, frequencies. Optical data can, for example, allow for the stratification of species or complement SAR observations to retrieve biomass; we foresee substantial advances in the rather novel radar-optical synergy with the increasing amount of repeated observations by currently orbiting sensors, primarily the Sentinels. Even more, complementing backscatter observations, which reflect the horizontal structure of a forest, i.e., the density, with observations of vertical structure, such as those provided by PolInSAR, TomoSAR, and LiDAR, would provide a three-dimensional representation of vegetation, thus potentially leading to improved estimates when compared to using a single observable. Again, the multi-frequency and multi-temporal perspective on the integration of datasets is seen as an asset to ensure the best possible performance of the retrieval. In our opinion, a detailed discussion of the perspectives for integrating such datasets is beyond the scope of this paper.
Entering an era characterized by a multitude of SAR observations demands that future research on biomass retrieval: (1) identify which datasets carry which information on biomass; (2) best extract such information from each dataset; and (3) combine such information to provide as accurate an estimate of biomass as possible. It is also believed that investigating the retrieval of biomass components from individual frequencies targets the three points listed above and should be considered a main line of research. Such approaches are seen primarily as a step forward towards a full characterization of biomass in retrieval models; this can be considered achieved once the entire range of observations describing forest structure and forest functioning is encompassed. The increasing amount of publicly available SAR data and the forthcoming launch of several spaceborne missions, all having the mapping of forest biomass as an explicit goal, are expected to boost developments in biomass retrieval approaches and substantially increase the number of peer-reviewed publications on forest biomass retrieval in the next 10-15 years.
Figure 1. Detailing the temporal distribution of research papers in terms of year of publication.
Table 1. Number of studies per sensor/platform. Sensors are listed alphabetically.
Table 2. Number of studies per frequency band or group of frequency bands.
Table 3. Number of studies grouped in terms of SAR observable.
Table 5. Number of retrieval investigations per forest ecosystem or zone.
Table 6. Number of retrieval investigations per type of biomass being retrieved. The unit of measurement is included for completeness. | 2018-05-09T00:43:45.813Z | 2018-04-14T00:00:00.000 | {
"year": 2018,
"sha1": "ef3e0b31bce8a5eae99846d080e71ef25f7cad54",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/10/4/608/pdf?version=1525348489",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ef3e0b31bce8a5eae99846d080e71ef25f7cad54",
"s2fieldsofstudy": [
"Environmental Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Geology"
]
} |
1176853 | pes2o/s2orc | v3-fos-license | An analysis of marketing authorisation applications via the mutual recognition and decentralised procedures in Europe
Purpose The aim of this study is to provide a comprehensive overview of the outcomes of marketing authorisation applications via the mutual recognition and decentralised procedures (MRP/DCP) and assess determinants of licensing failure during CMDh referral procedures. Methods All MRP/DCP procedures to the Co-ordination group for Mutual recognition and Decentralised procedures–human (CMDh) during the period from January 2006 to December 2013 were analysed. Reasons for starting referral procedures were scored. In addition, a survey under pharmaceutical companies was performed to estimate the frequency of licensing failure prior to CMDh referrals. Results During the study period, 10392 MRP/DCP procedures were finalized. Three hundred seventy-seven (3.6 %) resulted in a referral procedure, of which 70 (19 %) resulted in licensing failure, defined as refusal or withdrawal of the application. The frequency of CMDh referrals decreased from 14.5 % in 2006 to 1.6 % in 2013. Of all referrals, 272 (72 %) were resolved through consensus within the CMDh, the remaining 105 (28 %) were resolved at the level of the CHMP. Most referrals were started because of objections raised about the clinical development program. Study design issues and objections about the demonstration of equivalence were most likely to result in licensing failure. An estimated 11 % of all MRP/DCP procedures resulted in licensing failure prior to CMDh referral. Conclusion Whereas the absolute number of MRP/DCP procedures resulting in a referral has reduced substantially over the past years, no specific time trend could be observed regarding the frequency of referrals resulting in licensing failure. Increased knowledge at the level of companies and regulators has reduced the frequency of late-stage failure of marketing applications via the MRP/DCP. Electronic supplementary material The online version of this article (doi:10.1007/s00228-015-1904-1) contains supplementary material, which is available to authorized users.
Introduction
Several regulatory pathways exist to authorise medicines in the European Union (EU). The centralised procedure was introduced in European legislation in 1993 and came into operation in 1995 [1,2]. It results in a single marketing authorisation (MA) that is valid throughout the EU. The centralised procedure is mandatory for marketing authorisation applications (MAAs) of new active substances for the treatment of HIV/AIDS, cancer, diabetes, neurodegenerative diseases, auto-immune and other immune dysfunctions, and viral diseases, all biologicals, advanced therapies, and orphan products. Applications for multiple Member States for products that do not fall within the mandatory scope of the centralised procedure must follow the mutual recognition procedure (MRP) or the decentralised procedure (DCP). In terms of volume, MRP and DCP procedures outnumber the centralised procedure and considerable resources are spent by both MA holders and national competent authorities on MAAs via the MRP/DCP procedures. When MAAs result in licensing failure (defined as procedures that did not result in a MA), this leads to wasted resources, especially if this concerns preventable, late-stage failures. Whereas reasons for licensing failure for products authorised via the centralised procedure have received considerable attention, little is known about MAAs via the MRP/DCP procedure [3,4].
Since January 1, 1998, the MRP has been mandatory for any product that is to be marketed in multiple Member States, when a MA exists anywhere in the EU [5]. During the MRP, an applicant informs the Reference Member State (RMS) that it aims to market a product in multiple countries and requests these other countries, the so-called concerned member states (CMSs), to recognise the MA granted by the RMS. The RMS circulates the assessment report, including the approved summary of product characteristics (SmPC), labelling and package leaflet. If the CMSs agree with the assessment of the RMS, they should recognise the decision within 90 days after receipt of these documents by granting a national MA (Fig. S1) [6].
The DCP was introduced into European legislation in 2004 and should be followed when a MA is applied for in multiple Member States at once [7]. Like the MRP, the DCP is also based on recognition of a first assessment performed by a RMS, but there is no preexisting MA. For both MRP and DCP procedures, a positive outcome will result in harmonised national MAs, granted by the respective national competent authorities. After a positive outcome of the MRP/DCP procedure (i.e. all CMSs agree to grant the MA), the procedure is closed and a national MA should be granted within 30 days, provided that properly translated documents are submitted within 5 days after closing the procedure.
Member States can refuse to recognise the assessment of the RMS, but only on grounds of a 'potential serious risk to public health' (PSRPH). A PSRPH is defined as 'a situation where there is a significant probability that a serious hazard resulting from a human medicinal product in the context of its proposed use will affect public health' [8]. Despite the development of guidance, uncertainty remains about what qualifies as a PSRPH [9]. If disagreement on the PSRPH cannot be resolved by the RMS and the CMSs, the issue is referred to the Co-ordination group for mutual recognition and decentralised procedures-human (CMDh), through a so-called Article 29(1) procedure. The CMDh works by achieving consensus between the Member States. If it does not achieve consensus to approve or refuse the MAA within 60 days, the case is referred to the Committee for Medicinal Products for Human Use (CHMP) through an Article 29(4) procedure; the CHMP will then adopt an opinion that results in a binding decision from the European Commission [10].
Limited data are currently available on the outcomes of MAAs via the MRP/DCP procedure. Furthermore, data on licensing failure prior to MRP/DCP procedures are not available from publicly accessible sources. Therefore, the current study aims to assess the efficiency of the MRP/DCP procedure by providing a comprehensive overview of the outcomes with these regulatory pathways. To do so, we have investigated frequencies and determinants for CMDh referral procedures, as well as reasons for licensing failure during the MRP/DCP. Three objectives were formulated. The first objective was to determine the frequency of CMDh referrals. The second objective was to assess the association of objections raised as PSRPH and other determinants with licensing failure during CMDh referrals. The third objective of this study was to determine the frequency of licensing failure of MAAs via the MRP/DCP prior to the initiation of a CMDh referral procedure.
Methods
Data were obtained from different sources. The total number of MRP/DCP procedures finalised between January 2006 and December 2013 and all data relating to Article 29(1) procedures, including procedure type (i.e. DCP or MRP), legal basis (see Table S1) and prescription status, were obtained from statistics and reports available from the CMDh website [11]. Additional data on individual products, including pharmaceutical form and legal status, were retrieved from public assessment reports that were obtained via the Mutual Recognition Product Index [12]. Article 29(4) commission decision reports were obtained from the European Commission pharmaceuticals community register [13]. Our analysis was limited to initial MAAs; renewal procedures and type II variations were excluded.
A scoring system was developed to categorise objections raised during the CMDh procedure (see Table S2 of the Supplementary information). Two researchers (HE and JL) independently scored the objections; disagreement was resolved by consensus. Multiple objections were scored as 'Multiple objections from different categories', unless the issues concerned the same category. Licensing failure was defined as a MAA procedure that did not result in a MA and included negative results at the level of the CMDh, a negative European Commission decision, or withdrawal by the applicant.
MAAs via the MRP/DCP may also result in licensing failure prior to the start of a CMDh referral. When an MAA is withdrawn before day 90 of the MRP (including the preexisting MAs) or day 120 of the DCP procedure, the information will not be reported on the CMDh website and was thus not available for our study. Therefore, a survey was conducted among 58 member companies of the European Federation of Pharmaceutical Industries and Associations (EFPIA) and the Association of the European Self-Medication Industry (AESGP) to estimate the frequency of licensing failure during the early phase of the MRP/DCP procedure. The European Generic Association (EGA) declined the invitation to participate in the survey. The survey also included questions on the consequences of PSRPHs raised during the MRP/DCP. All data were entered into a database, and descriptive statistics were obtained using IBM SPSS statistics version 20.0.0 (IBM Corporation, 2011). Significance for numerical variables was tested using the Mann-Whitney U test (two-sided, α < 5 %).
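As an illustration of the statistical comparison described here (applied in the Results to, e.g., the number of CMSs per procedure by outcome), a minimal sketch with invented counts is shown below; the actual analysis was performed in SPSS.

```python
from scipy.stats import mannwhitneyu

# invented CMS counts per procedure, grouped by outcome of the referral
cms_positive_outcome = [8, 4, 12, 9, 7, 10, 6, 11]
cms_licensing_failure = [5, 1, 9, 6, 3, 7]

stat, p_value = mannwhitneyu(cms_positive_outcome, cms_licensing_failure,
                             alternative="two-sided")
print(stat, p_value)
```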
Frequency of referral procedures
A total of 10,392 MRP/DCP procedures were finalised during the study period, 2822 MRP and 7570 DCP procedures (Table 1). Generic applications accounted for 78 % of the procedures and hybrid procedures for 10 %. Full dossiers were provided for 6 % of the applications, bibliographic applications accounted for 4 % and the remaining 2 % concerned other applications (see Table S1). Most MAAs concerned products that were authorised as prescription-only in the RMS.
While MRP procedures predominated in 2006 and 2007, from 2008, DCP procedures accounted for the majority of the MAAs. During the study period, 377 (3.6 %) CMDh referral procedures were started. During the first years after the introduction of the DCP, more procedures resulted in a referral, compared to more recent years (Fig. 1). For the combined MRP/DCP procedures, the frequency of CMDh referrals declined from 14.5 % in 2006 to 1.6 % in 2013. MRP procedures were nearly five times more likely to result in a referral than DCP procedures (Table 1). MAAs based on a full dossier and on bibliographic data were more likely to result in a referral compared to generic applications. No difference in the frequency of CMDh referrals was observed for prescription versus nonprescription medicines.
Assessment of determinants of licensing failure during the CMDh referrals
Of the 377 CMDh referrals, consensus was found within the CMDh for 272 (72 %) referrals, leading to a positive opinion for 239 (63 %) MAAs and licensing failure for 33 (9 %) MAAs. Article 29(4) procedures (CHMP arbitrations) were started for 105 (28 %) MAAs. Of these, 37 (10 %) ended in a refusal and 68 (18 %) resulted in a positive recommendation from the CHMP. So, overall, 70 (19 %) MAAs resulted in a licensing failure. Two illustrative cases that were referred to the CMDh are presented in supplementary Box 1. The majority of PSRPHs leading to a CMDh referral procedure were related to the clinical phase (Table 2). PSRPHs in the main category of benefit-risk concerns accounted for most CMDh referrals. PSRPHs related to the design of the clinical studies and the demonstration of therapeutic equivalence and bioequivalence were more likely to result in a licensing failure during the referral procedure than referrals started because of benefit/risk concerns, quality or regulatory/procedural objections. For 88 referrals, multiple objections from different categories were raised (see Table S4 for more detailed information on the combinations). The number of CMDh referrals was small, especially in the second half of the study period, and no specific time trends could be discerned (Table S3).
No association was observed between licensing failure and active substance type, administration route, prescription status or MRP vs. DCP application during the referral procedure (Table 3). Referrals of MAAs based on a full dossier (Article 8.3) were less likely to result in licensing failure. Cardiovascular products and nervous system products were the two product classes most frequently included in CMDh referrals. Antineoplastic and immunomodulating agents and genitourinary system and sex hormones were less likely to result in licensing failure when compared to cardiovascular agents. The Netherlands, Germany, Denmark, the UK and Sweden together acted as RMS for 78 % of all referrals. Procedures in which the Netherlands or Sweden were RMS were less likely to result in licensing failure, whereas procedures where Denmark was the RMS more often resulted in licensing failure, when compared to all other Member States. Per procedure, a median of 8 (IQR 4-12) CMSs were involved. Procedures that resulted in licensing failure involved fewer CMSs (5.5; IQR 1-9) than procedures with a positive outcome (8; IQR 4-23; p<0.001). This difference remained when we limited our analysis to only MRP or only DCP procedures. No specific time trends were observed for the frequency of licensing failure.
Licensing failure prior to initiating a CMDh referral
In total, 16 of the 58 (28 %) invited companies returned the survey. Of these, four companies provided two surveys from different departments within the same company, e.g., consumer health care and innovative medicines, or consumer health care and generics. This resulted in 20 completed individual surveys, reporting a total of 208 MRP/DCP procedures (Table 4). Out of all MRP/DCP procedures, 174 (84 %) ended in a MA, whereas 11 % resulted in licensing failure at the level of the RMS (i.e., were refused or withdrawn) prior to CMDh referral, and 10 (5 %) procedures were referred to the CMDh. For 20 (10 %) of the procedures, the applicant withdrew the application in one or more Member States. The majority of the withdrawals were reported to occur for reasons other than safety concerns. Five respondents (25 %) indicated that their company had withdrawn MAAs (and MAs) in response to safety concerns at least once. Of all the respondents, 21 % reported that their company had, at least once, decided not to market a product in one or more Member States because of restrictions on the use of the product introduced during the MRP/DCP procedure.
Discussion
We have provided a comprehensive overview of MAAs via the MRP/DCP. We found that only a limited number of applications are referred to CMDh, and the majority of these referrals resulted in a MA. PSRPH objections that related to the design of the clinical studies and the demonstration of therapeutic equivalence and bioequivalence were most likely to result in a licensing failure, whereas discussion on quality or regulatory concerns rarely resulted in a licensing failure during the procedure. Some factors, including procedure type, legal basis and timing of the procedure were associated with the frequency of triggering a CMDh referral, but not with a higher rate of negative outcomes once the referral was initiated. Overall, these data show that the frequency of late-stage licensing failure of MRP/DCP procedures, i.e., licensing failure after referral, has decreased substantially. Care must be taken when interpreting outcomes of regulatory procedures. We defined licensing failure as a withdrawal or refusal, but this does not mean that the procedure failed. On the contrary, it may imply that the DCP/MRP functions as expected and prevented (potential) untoward outcomes resulting from subpar products reaching patients. Moreover, our study focused on overall licensing failure, meaning that we did not take into account that for some products, the authorised indications and/or patient populations may have been restricted at the end of the MRP/DCP procedure. Respondents to the survey reported that this had on occasion resulted in decisions not to market a product. However, we did not systematically investigate the underlying reasons for those restrictions. This may be a topic for further study.
The frequency of MAAs that resulted in a CMDh referral decreased substantially over the years, indicating that regulatory learning takes place. Increased experience in the use of this pathway may have resulted in improved MAAs filed by companies, but also in earlier withdrawal of applications that are likely to result in a referral. Companies may also adapt their filing strategies to anticipate regulatory concerns and file in selected Member States. For regulators, regulatory learning means that they may have become better at finding consensus about MAAs in earlier phases of the application, but also the development of guidance on what are considered PSRPHs may reduce disagreements between different Member States [9]. Furthermore, an ever-increasing body of information about outcomes of referral and arbitration procedures will provide more clarity on the interpretation of PSRPHs and prevent referrals. Work within the CMDh is ongoing to improve the harmonised interpretation of existing guidance [14]. Moreover, ongoing harmonisation efforts of SmPCs of products for which Member States have adopted different decisions over the years (resulting in different authorised indications, contraindications or posology) will continue to reduce sources of disagreement [15]. Our data clearly show that MRP procedures result in CMDh referrals more frequently than DCP procedures. A possible explanation for this finding is that the RMS is more reluctant to accept changes to the existing SmPC than in the situation of a DCP, where there is no preexisting MA. Moreover, given the fact that DCPs do not have preexisting MAs, companies may withdraw an MAA more easily in response to objections raised during the assessment procedure, in order to resubmit with different claims, or in different member states.
Objections raised on the design and outcome of clinical studies were most likely to lead to licensing failure. Often, these objections related to bioequivalence parameters that were outside predefined borders, even when the studies were adequately designed. These cases may be the result of unforeseen differences in the product characteristics or due to chance findings, which may be challenging to prevent. On the other hand, a considerable number of referrals were due to causes that may have been prevented by the applicant through early communication with the competent authorities, such as the choice of reference product or dosage strength. Consequently, careful planning of clinical studies and consideration of existing guidelines could further reduce the frequency of referrals.
We found that procedures resulting in licensing failure involved fewer CMSs than those that resulted in a MA. This seems counterintuitive, as more CMSs would give rise to more opportunity for disagreement. A possible explanation may be that applicants anticipate objections and file in strategically selected Member States. For example, it has been recognised that the MRP/DCP is underutilised by the nonprescription sector, because of different approaches towards self-medication in the member states [16]. While we did not observe a higher frequency of licensing failure for nonprescription medicines compared to prescription medicines, companies may anticipate concerns during the procedure and run multiple procedures for the same product, leading to fewer referrals.
We found that five RMSs accounted for 78 % of all referrals. However, these five countries also acted as RMSs for 69 % of all existing MAs included in the Mutual Recognition Product Index (Table S4) [12]. ATC classes of authorised products were also distributed unevenly over the RMSs (data not shown), which may also account for some of the observed variation in the licensing failure frequency seen in our study. It may be of interest to further investigate the underlying reasons for the observed differences in frequency of licensing failures between RMSs.
Data from our survey suggest that 16 % of all MAAs via the MRP/DCP procedures were withdrawn in one or all Member States at some point. This suggests that companies anticipate that objections will be raised and take mitigating measures.
Strengths and limitations
Our study was the first to provide a comprehensive overview of MAAs via the mutual recognition and decentralised procedures. An important limitation of our study is that for the MAAs which did not result in a referral various attributes were only available on an aggregated level, such as legal basis, prescription status and procedure. While these did not show major differences over the years, we were unable to perform multivariate analyses to identify explanatory variables for changes in the frequency of referrals over time. Other variables, including RMS, ATC class, and route of administration, were unavailable altogether.
Multiple data sources were required to obtain a full picture on the outcomes of MRP/DCP procedures. While it may be preferable to use a single data source, the use of multiple data sources allowed us to validate our findings. For example, it may not be possible to extrapolate our survey results to all users of the MRP/DCP procedures, as our sample included only a few generic companies. Nevertheless, in our survey, 10 out of 208 procedures (4.8 %) resulted in a CMDh referral. This is comparable to the number of referrals included in the CMDh database (377/10,392=3.6 %), providing some reassurance with respect to the representativeness of the survey sample. The data of the current study are also in accordance with data from another study that investigated licensing failure of DCP applications filed in the Netherlands and found that 9.8 % resulted in licensing failure (Langedijk et al., manuscript in preparation). This is in the same range as the 7.9 %
Conclusion
A limited number of MRP/DCP procedures in our study ended in a CMDh referral, and the frequency of referrals has decreased substantially in recent years, indicating that companies and regulators have learnt to prevent late-stage failures of MAAs via the MRP/DCP. Ever-increasing experience in using the MRP/DCP results in a growing body of information about past referral outcomes that may facilitate the development of strategies to prevent licensing failure late in the procedure. Ongoing harmonisation activities on the side of regulatory authorities will likely lead to a further reduction of licensing failure during the MRP/DCP procedure. | 2017-08-02T18:32:16.084Z | 2015-07-25T00:00:00.000 | {
"year": 2015,
"sha1": "315f13fc1125600ada5100c051843e63b80de036",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00228-015-1904-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "40a691097cf055d818202efb0869a9aa6a84fb55",
"s2fieldsofstudy": [
"Business",
"Medicine"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
} |
237006823 | pes2o/s2orc | v3-fos-license | Dielectric dispersion of Ag/PAN nanocomposites
The dependence of the electric modulus of silver/polyacrylonitrile nanocomposites on the ac field frequency has been studied at different temperatures and different AgNO3 contents in the base mixture. The observed relaxation maxima in the frequency dependences of the imaginary part of the electric modulus are attributed to interfacial polarization. The experimental frequency dependences of the electric modulus are well described by the Cole-Davidson model. The relaxation times and activation energies of these structures have been estimated from this model.
Introduction
Despite the fact that nanoscale impurities are sometimes destructive to certain optical materials, for example, chalcogenide glass [1][2][3][4], inorganic nanoparticles can be useful for various applications, such as biosensors, catalysis, optoelectronics, data storage, etc. Nanocomposites consisting of conductive nanoparticles dispersed in a polymer matrix draw considerable interest from researchers due to their possible electrical and electromagnetic applications [5]. Typical examples include the screening of electromagnetic or radio interference and electrostatic charge dissipation. Recently, there has been a steadily growing interest in composites based on a polymer matrix and metal nanoparticles [6][7][8][9].
By now a range of methods has been developed for obtaining nanoparticles in a polymer matrix [10][11][12]. It has been found that the dielectric permittivity of polymers containing metal nanoparticles is sufficiently high to enable the use of such materials in electronics and microwave equipment. Furthermore, such materials can be used as electrically conductive adhesives and circuit elements in microelectronics. They also possess anti-corrosion properties and may be used as coatings for metal contacts.
In order to study the electrical properties of such systems, they are treated as heterogeneous and described within the framework of effective medium theories. Different relations, based on the dielectric permittivity and specific electrical conductivity of the constituent phases, are used for such media [13]. The electrical characteristics of metal-polymer nanocomposites depend on the inclusion volume fraction and on the size and shape of the metal nanoparticles.
Dielectric relaxations in metal-polymer composites can be studied by dielectric permittivity spectroscopy. However, in materials whose dielectric permittivity can exceed 1000 at low frequencies of the electric field, the detection and identification of relaxations is problematic: the relaxations are concealed by the presence of electroconductive inclusions in the dielectric matrix. Therefore, to detect the dielectric relaxations, the reciprocal of the complex dielectric permittivity, the electric modulus, is used [14].
In this study we investigated the frequency dependence of the electric modulus of silver/polyacrylonitrile (Ag/PAN) nanocomposites synthesized through the simultaneous processes of acrylonitrile polymerization and silver ion reduction.
Experimental part
Ag/PAN nanocomposite films were obtained by photopolymerization of a silver nitrate (AgNO3) solution in acrylonitrile (AN) in the presence of 2,2-dimethoxy-1,2-diphenylethan-1-one (IN) as a photoinitiator. After mixing the monomer, IN and AgNO3, the mixture was placed between two glass plates with a conductive ITO layer and polymerized by UV radiation with λ = 365 nm for 90 min. The concentration of the precursors varied between experiments. A more detailed description of the nanocomposite synthesis is given in the papers [10,15]. The AC conductivity and capacitance were measured with a Hewlett-Packard 4284A LCR meter, with the sample represented as a resistor and a capacitor connected in parallel, over the frequency range of 20 Hz to 10^6 Hz. The preset sample temperature was maintained in the range of 285-333 K using a LOIP LT-100 circulation thermostat with external cooling. The real and imaginary parts of the dielectric permittivity were then calculated from the capacitance and AC conductivity measurements, respectively.
Results and discussion
Dielectric relaxations generally exist in polymer composites; this is well presented in the work [14]. In metal-polymer nanocomposites the interfaces between the metal nanoparticles and the matrix cause interfacial polarization (the Maxwell-Wagner effect). Under the applied electric field, charge carriers migrate and accumulate at the interfaces between phases with significantly different dielectric permittivity and conductivity. As a result, large dipoles are formed on the surface of the metal nanoparticles, which leads to interfacial polarization. This relaxation is determined by the dielectric permittivity and the specific electrical conductivity of the components of the heterogeneous material. The dipoles formed in polymer composites are very inert, which is why this relaxation, being the slowest of all the dielectric processes involved, is observed in the low-frequency range. The high values of the dielectric permittivity observed in the nanocomposite Ag/PAN films, which decrease rapidly with increasing field frequency, are precisely the result of the Maxwell-Wagner effect.
When the electric modulus is used for the interpretation of the interfacial polarization, the sharp jump of the dielectric permittivity is minimized. In this respect, the usual difficulties, related to the influence of the electrode nature, the contact ohmicity and the space charge injection which conceal the relaxation on the dielectric permittivity dispersion, can be resolved or even ignored [16].
The electric modulus (the reciprocal of the complex dielectric permittivity) M* is calculated as M* = 1/ε* = M' + iM'', with M' = ε'/(ε'^2 + ε''^2) and M'' = ε''/(ε'^2 + ε''^2), where M' and M'' are the real and imaginary parts of the electric modulus, and ε' and ε'' are the real and imaginary parts of the dielectric permittivity, respectively. Figures 1 and 2 show the frequency dependences of the electric modulus at different silver nitrate contents in the base mixture and at different measurement temperatures, respectively. As the real part of the dielectric permittivity increases, the real part of the electric modulus (M') decreases unevenly with increasing silver nitrate content (the concentration of the metal salt in the base mixture) and with increasing temperature. A similar frequency dependence has been observed in other studies of composite materials based on polymers with conductive inclusions [17][18][19]. The sharp transition from low to high values of M' indicates a relaxation process, which is evident in the dispersion of the imaginary part of the electric modulus (M'') as a loss maximum (see Figures 1 and 2). The relaxation peak shifts to higher frequencies with increasing temperature. At the same time, the intensity of the maximum tends to decrease with increasing inclusion volume fraction of the nanoparticles in the polyacrylonitrile matrix. Such a frequency dependence of the electric modulus is characteristic of interfacial polarization; this conforms both to the theory published in [20,21] and to other experimental works on the dielectric properties of similar materials [18,22].
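As a numerical illustration of this conversion, the following minimal Python sketch computes M' and M'' from a hypothetical permittivity spectrum; the parameter values are assumptions chosen only for illustration, not the measured data of this work. It shows how the conductivity contribution that conceals relaxations in ε'' is suppressed in the modulus representation.

```python
import numpy as np

# Hypothetical permittivity spectrum for illustration only (not measured data):
# a Debye-like relaxation plus a dc-conductivity tail, typical of a filled polymer.
f = np.logspace(1, 6, 200)                 # frequency, Hz (roughly the LCR-meter range)
omega = 2 * np.pi * f
eps_inf, d_eps, tau, sigma = 5.0, 200.0, 1e-3, 1e-9   # assumed parameters
eps0 = 8.854e-12                           # vacuum permittivity, F/m

eps_complex = eps_inf + d_eps / (1 + 1j * omega * tau) - 1j * sigma / (omega * eps0)
eps1, eps2 = eps_complex.real, -eps_complex.imag        # eps' and eps''

# Electric modulus M* = 1/eps* = M' + iM''
denom = eps1**2 + eps2**2
M1 = eps1 / denom                          # real part M'
M2 = eps2 / denom                          # imaginary part M''

# The conductivity term dominates eps'' at low frequency, but the loss peak
# becomes clearly visible in M''.
print("max M'' = %.4f at f = %.1f Hz" % (M2.max(), f[np.argmax(M2)]))
```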
The shift of the peak towards higher frequencies with increasing silver inclusion volume fraction in polyacrylonitrile (i.e. with the content of the silver precursor in the base mixture) can be attributed to a probable growth of the intrinsic conductivity of the metal nanoparticles [20,21]. Studies of the morphology of the Ag/PAN nanocomposite films [10,15] showed that increasing the amount of metal salt in the reaction solution yields silver nanoparticles of larger size. In turn, the electrical conductivity of nanoparticles may differ from that of bulk silver and can be size-dependent. Thus, the growth of the metallic nanoinclusions may lead to an increase in their conductivity.
The dispersions of the electric modulus are not described by the fundamental Debye model [23]. The observed loss maxima are broader than those expected for a Debye relaxation process. The Maxwell-Wagner-Sillars equations [24], which are based on a simple Debye relaxation, also yield narrower and more intense peaks [25]. Both the Debye model and the Maxwell-Wagner-Sillars equations describe the process with only one relaxation time; this appears to be inadequate for the nanocomposite Ag/PAN films. In analyzing the dielectric properties of polymeric materials, the models of Cole-Cole [26], Cole-Davidson [27], Havriliak-Negami [28] and Kohlrausch-Williams-Watts [29] are often used. All these approaches treat processes with some distribution of relaxation times. Based on the results of the works [17,18], we used the Cole-Davidson model to describe the dispersion of the electric modulus. In this model, the imaginary part (M'') and the real part (M') of the electric modulus are expressed through Ms and M∞, the values of M' as ω → 0 and ω → ∞, respectively; ωmax, the frequency of maximum loss on the ε''(f) dependence (ωmax = 2π fε,max); and τ, the relaxation time connected with the electrostatic field (often designated τε). The relaxation time connected with the constant displacement vector is calculated as τM = (Ms/M∞)τε, and the position of the relaxation peak on the M'' curve as fM,max = (M∞/Ms) fε,max [25]. The index γ determines the width of the distribution of relaxation times; at γ = 1 only one relaxation time is observed (a purely Debye relaxation process). Figure 3 shows the dependences of the imaginary part of the electric modulus on the real part (Cole-Cole plots) at different concentrations of silver nitrate in the initial mixture (1 - 10, 2 - 20, 3 - 30 wt.%, Figure 3a) and at different measurement temperatures (1 - 293, 2 - 313, 3 - 333 K, Figure 3b). The slightly flattened semicircles correspond to processes with narrow distributions of relaxation times. For almost all the samples the experimental points originate from the origin; this indicates the absence of any other relaxation process at lower frequencies in the nanocomposite films. The changes in the radius of the semicircles demonstrate the influence of the inclusion volume fraction of silver nanoparticles in the PAN matrix.
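One way to evaluate the Cole-Davidson dependences of M' and M'' without the explicit analytic expressions is to insert the Cole-Davidson permittivity into M* = 1/ε* numerically, as in the sketch below; all parameter values are illustrative assumptions, not the fitted values of this work.

```python
import numpy as np

def cole_davidson_modulus(f, eps_inf, d_eps, tau, gamma):
    """M'(f), M''(f) obtained numerically from the Cole-Davidson permittivity
    eps* = eps_inf + d_eps / (1 + i*2*pi*f*tau)**gamma."""
    omega = 2 * np.pi * np.asarray(f)
    eps_c = eps_inf + d_eps / (1 + 1j * omega * tau) ** gamma
    M = 1.0 / eps_c
    return M.real, M.imag

# Illustrative parameters (not the fitted values of the paper)
f = np.logspace(1, 6, 400)
M1, M2 = cole_davidson_modulus(f, eps_inf=5.0, d_eps=300.0, tau=2e-3, gamma=0.7)

# gamma = 1 reduces to a single relaxation time (pure Debye process);
# gamma < 1 broadens the M'' peak towards high frequencies.
print("peak of M'' at %.1f Hz" % f[np.argmax(M2)])
```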
The Cole-Davidson model describes the experimental curves well (the solid lines in Figures 1, 2 and 3). By fitting the experimental points we have estimated the indices γ and the relaxation times τM. All the values of γ are higher than 0.59, which indicates a very narrow distribution of relaxation times. With increasing silver inclusion volume fraction in the polymer matrix, γ tends to grow, indicating that the relaxation process approaches a purely Debye one. An increase in the measurement temperature causes the relaxation time to decrease for all the samples, since thermal energy facilitates the movement of the dipoles formed on the surface of the silver nanoparticles in the ac field. With increasing inclusion volume fraction of silver nanoparticles in PAN we also observe a decrease in the relaxation time, as the position of the loss maximum shifts to higher frequencies. Figure 4 shows the dependences of the relaxation time on the reciprocal temperature; in Arrhenius coordinates they are well approximated by straight lines. Following the work [18], the relaxation time can be represented as τ = τ0 exp(ΔE/kT), where ΔE is the activation energy of the relaxation process, k is the Boltzmann constant, and T is the temperature. The values of ΔE obtained from the linear approximation of this equation were 1.41 and 1.28 eV for the films obtained at 20 and 30 wt.% of silver nitrate, respectively.
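A minimal sketch of this Arrhenius estimation of the activation energy is given below; the relaxation times used are hypothetical values for illustration, not the data of Figure 4.

```python
import numpy as np

k_B = 8.617e-5                      # Boltzmann constant, eV/K

# Hypothetical relaxation times at the measurement temperatures (illustration only)
T   = np.array([293.0, 313.0, 333.0])      # K
tau = np.array([3.0e-3, 1.1e-3, 4.5e-4])   # s

# Linear fit of ln(tau) versus 1/T gives the slope dE/k_B (Arrhenius behaviour)
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
dE   = slope * k_B                  # activation energy, eV
tau0 = np.exp(intercept)            # pre-exponential factor, s

print("activation energy = %.2f eV, tau0 = %.2e s" % (dE, tau0))
```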
In the high-frequency region the experimental points deviate from the theoretical curves obtained with the Cole-Davidson model; this can be explained by the probable emergence of another relaxation process. Such behaviour was observed at all measurement temperatures and in all the nanocomposite films. It should be mentioned that the shape of the frequency dependence of the imaginary part of the electric modulus of the nanocomposites at high frequencies is similar to the M'' dependence of the polymer without silver nanoparticles. Polyacrylonitrile contains polar CN functional groups which have their own dipole moments, so the polymer may exhibit dipole polarization [20]. The sharp increase of the imaginary part of the electric modulus (Figures 1 and 2) in the high-frequency range for polyacrylonitrile with and without silver nanoparticles is probably connected with the low-frequency edge of the loss maximum of this polarization. | 2021-08-14T20:03:10.168Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "306032973bc0e4e3ec54f23033f100430146d9fe",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1967/1/012046",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "306032973bc0e4e3ec54f23033f100430146d9fe",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
128017928 | pes2o/s2orc | v3-fos-license | UTILIZATION OF BEEF CATTLE WASTE IN SUPPORTING CORN AGROINDUSTRY DEVELOPMENT.
Muji Paramuji 1, Suprihatin 2, Titi Candra Sunarti 2 and Sukardi 2. 1. Doctoral Student Study Program in Agricultural Industry Technology, Graduate School – IPB. 2. Agricultural Industrial Technology Study Program, IPB Dramaga Campus, IPB 16682 Bogor. Manuscript History: Received: 16 October 2018; Final Accepted: 18 November 2018; Published: December 2018
Corn agroindustry still faces problems, especially the availability of fertilizer. One solution is to integrate corn cultivation with beef cattle: farmers use livestock manure as organic fertilizer for their crops and then utilize corn waste as animal feed. The purpose of this study is to engineer the processing of traditional/commercial beef cattle waste into compost. Composting was studied using an anaerobic system with treatments T1, T2, C1 and C2 (4 kg of traditional or commercial beef cattle waste mixed with an EM4 solution at a concentration of 5 ml or 10 ml per litre of water). The compost material was stirred evenly, placed in perforated polybags inside a sealed plastic bucket, fermented for 40 days and turned every 3 days. Mature compost was dried in the sun while being turned until dry (for no more than three days). The dry compost was finely ground (40-60 mesh), packaged and tested for yield, water content (oven method), pH (potentiometry), organic C (gravimetry), total N (volumetry), total P2O5 (spectrophotometry), K2O (AAS) and C/N ratio. The data obtained are presented descriptively. The results showed that EM4 at 10 ml/l of water performed better than at 5 ml/l of water because it accelerated the composting process. The C2 treatment produced compost with a yield of 21.91%, water content of 10.07%, pH 8.98, P2O5 1.79%, K2O 1.54%, organic C 37.50%, total N 1.80% and a C/N ratio of 20.83, which is most in accordance with the SNI quality standards.
Introduction:-
In integrated agroindustry models, farmers overcome the problem of fertilizer availability by utilizing beef cattle waste. Farmers use livestock manure as organic fertilizer for their crops and then use agricultural waste as animal feed (Ismail and Djajanegara 2004). Wahyuni (2010) reported that a cow produces approximately 25 kg of solid dung and urine per head per day, and that livestock urine contains about 10 grams of N per litre, mostly in the form of urea.
Compost is an environmentally friendly organic fertilizer that is important for improving the physical and chemical structure of the soil, enriching it with nutrients that spur plant growth (Handayani, 2009; Priadi and Ermayanti, 2014). The purpose of this study is to engineer the processing of traditional/commercial beef cattle waste into compost in support of the development of an integrated corn agroindustry.
Research Methods:-
Time and Place:-
This research was conducted in the greenhouse of the Faculty of Agriculture, Islamic University of North Sumatra, for approximately 3 months from June 1 to September 30, 2014, covering preparation, data collection and report preparation.
Materials and Tools:-
The materials used in this study include solid and liquid commercial/traditional beef cattle waste taken from beef cattle farmers around Deli Serdang Regency, water, and EM4.
Figure 1:-Compost raw material
The equipment used in this study included 18 kg plastic buckets, polybag, tissue, label paper, plastic packaging, compost cutting machines, greenhouses, scales, ovens, and other analytical tools.
Method
Research on composting used the anaerobic composting system, with the following treatments: T1 = 4 kg of traditional beef cattle waste mixed with an EM4 solution at a concentration of 5 ml/l of water; T2 = 4 kg of traditional beef cattle waste mixed with an EM4 solution at 10 ml/l of water; C1 = 4 kg of commercial beef cattle waste mixed with an EM4 solution at 5 ml/l of water; C2 = 4 kg of commercial beef cattle waste mixed with an EM4 solution at 10 ml/l of water. The stirred compost material was put into a perforated polybag in a sealed plastic bucket, and checked and turned every 3 days. After an estimated 40 days the compost is mature; during the decomposition process it does not give off a foul odor, and the aroma emitted is the typical aroma of fermentation. Next, the compost was dried in the sun while being turned until dry (for no more than three days). The dry compost was finely ground (40-60 mesh), packaged and tested for the parameters.
Results And Discussion:-
The results showed that the performance of the EM4 activator at a concentration of 10 ml/l of water was better than at 5 ml/l of water, because it accelerates the composting process and produces compost that meets the requirements of the SNI quality standards for organic fertilizers. From Table 1 it can be seen that, in general, the compost treatments differ from the SNI quality standards, although some treatments show values higher than the SNI standards, such as pH and organic C.
During the composting process the brownish color changes to dark brown; at the end of the composting process the color turns brownish black due to the formation of humic acid. In addition to the discoloration, the compost also emits a smell that resembles an acidic/fermentation odor. The highest compost yield was obtained in C1. The decomposition process in treatment C1 depends on the ingredients used, wherein cellulose is easier to decompose than lignin. The shrinkage of the compost material during composting is due to the decomposition of organic matter by microorganisms, which convert it into carbon dioxide, water, humus and energy. This is in accordance with the explanation of Wahyono et al. (2011) that the final shrinkage of mature compost is around 50-75% of the initial weight of the compost material.
The water content of compost from all treatments has met the SNI 19-7030-2004 standard with <50% moisture content. The water content of compost is obtained from the decomposition of organic matter into carbon dioxide, water vapor and compost (Arumsari et al., 2012).
The compost pH of all treatments did not meet the pH requirement of the SNI 19-7030-2004 standard. This is due to the use of beef cattle waste containing highly alkaline ingredients such as concentrates and vitamins, especially the commercial beef cattle waste. The P2O5 content of compost is determined by the amount of phosphorus contained in the raw materials used and by the number of microorganisms involved in the composting process. All treatments produced compost that is mature and stable and meets SNI 19-7030-2004, namely P2O5 levels of more than 0.1%. As stated by Miftahul (2003), the phosphorus content of compost reflects the amount of phosphorus in the raw materials used and the number of microbes involved in composting. The K2O levels of all treatments are quite high, exceeding what is required by SNI 19-7030-2004, which is more than 0.2%. Based on the results of the analysis, the addition of EM4 as an activator affects the K2O levels. Potassium is used by microorganisms in the substrate material as a catalyst, so the presence and activity of bacteria greatly affect the decrease in potassium content. This is in accordance with Agustina (2004), who states that potassium is a compound produced by microbial metabolism, where microbes use the free K+ ions present in the fertilizer raw materials for metabolic purposes.
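As a simple illustration of these quality checks, the sketch below compares the C2 compost values quoted in the abstract against the SNI 19-7030-2004 thresholds stated in this section; only the limits quoted here are assumed.

```python
# Sketch of the quality check described above: the C2 compost values from the
# abstract are compared against the SNI 19-7030-2004 limits quoted in the text.
c2 = {"water content (%)": 10.07, "P2O5 (%)": 1.79, "K2O (%)": 1.54}

# (criterion, passes) pairs using only the thresholds stated in this paper
checks = {
    "water content (%)": ("< 50", c2["water content (%)"] < 50),
    "P2O5 (%)":          ("> 0.1", c2["P2O5 (%)"] > 0.1),
    "K2O (%)":           ("> 0.2", c2["K2O (%)"] > 0.2),
}

for name, (criterion, ok) in checks.items():
    print(f"{name}: {c2[name]} (SNI {criterion}) -> {'meets' if ok else 'fails'}")
```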
The C-organic content of all compost treatments exceeds that required by SNI 19-7030-2004, which is more than 30%. The high organic C of the compost is because beef cattle waste contains high levels of carbon from feed ingredients in the form of corn | 2019-04-23T13:21:53.722Z | 2018-11-30T00:00:00.000 | {
"year": 2018,
"sha1": "909a75ae90603a270863080f56cad50b85794f8b",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/2540420/files/102.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "125d2585769eb4758a8a1e4cab8879947f391db9",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
109796499 | pes2o/s2orc | v3-fos-license | On the magnetic field and temperature monitoring of a solenoid coil for a novel magnetorheological elastomer base isolator
Following a successful experimental validation of a magnetorheological elastomer (MRE) base isolator, this study addresses one of the major concerns in the design and development of the adaptive MRE base isolator: the heating of the magnetic coil. In this design, the MRE materials, with a total thickness of nearly 150 mm, are placed as the magnetic core of the device to best utilize the magnetic energy provided by the coil. A series of tests was undertaken to investigate the magnetic field inside the coil with and without the MRE materials. Thermocouples were used to monitor the surface temperature of the coil while various currents were applied for 10 min. It is shown that the measured field inside the solenoid, when no MRE is placed inside, agrees with the theoretical analysis. It is also shown that the temperature of the coil increases dramatically when a current is applied. Cooling of the coil may take even longer, about 4 h, until it returns to room temperature. A drop in the magnetic field is observed when the temperature becomes high.
Introduction
Finding engineering applications for a new class of smart material, the magnetorheological elastomer (MRE), has been a major task for researchers in this field. Novel MRE devices, such as vibration absorbers [1][2] and vibration isolators [3][4], have been proposed and fabricated to pioneer its engineering applications. In civil engineering, Li [5] proposed a novel MRE-based isolator to be used in base isolation systems for mitigating the devastating effects of earthquakes on civil structures.
For any MRE-based device, an electromagnetic coil is inevitably involved to provide the magnetic field for the MRE materials. Compared with the magnetic circuit design of MRF devices, e.g. MR dampers, MRE devices normally need a larger coil to energise the MRE materials, particularly for a large-scale MRE device. Therefore, investigation of the solenoid's magnetic field distribution is of great importance. In particular, the provision of a sufficient and uniform magnetic field is essential to success in designing MRF/MRE devices. Understanding the mechanism of magnetic field generation in a solenoid is key for device design and optimisation.
A large electromagnetic coil has a high electrical resistance, so heating becomes a great concern in the engineering design. Breese and Gordaninejad [6] presented theoretical and experimental studies on heat generation and dissipation in controllable MR fluid shock absorbers. Dogruoz et al. [7] performed theoretical and experimental analyses of enhancing heat transfer from MR fluid dampers using fins. Kavlicoglu et al. [8] studied the heating of a high-torque, multi-plate magnetorheological (MR) fluid limited slip differential (LSD) clutch. These studies focused on the influence of the temperature rise on the performance of the ER/MR devices; little attention has been paid to the electromagnetic coil itself.
The objective of this research is to investigate the magnetic field distribution and heating performance of a solenoid. Finite element analysis is used to find the magnetic field distribution in the solenoid. To validate the analytical results, an experimental study is conducted and compared with the numerical findings. Finally, temperature monitoring of the solenoid under various applied current inputs is presented.
Electromagnetic coil for the magnetorheological elastomer based isolator
The solenoid is designed to provide the magnetic field for the MRE base isolator. A schematic of the solenoid is shown in Figure 1. The solenoid consists of an electromagnetic coil and a thin non-magnetic support, as illustrated in the figure. The electromagnetic coil has a cylindrical configuration and is wound with 1.2 mm copper wire. The cylindrical non-magnetic support is made of epoxy and was formed into an I-shape to protect the coil. Detailed structural parameters of the solenoid are given in Table 1. The coil is firmly attached to the epoxy support. The total winding number of the coil is 3100 turns. The diameter of the coil wire is 1.2 mm and the total length of the wire is 2.1 km. The wire is made of copper and the resistance of the coil is 32.3 Ω.
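A quick sanity check of the stated coil parameters can be made from the wire geometry and a textbook value for the resistivity of copper; the sketch below also tabulates the ohmic power dissipated at the applied currents, which drives the heating discussed later.

```python
import math

# Stated coil parameters: 3100 turns of 1.2 mm copper wire, total length ~2.1 km,
# measured resistance 32.3 ohm.  rho_cu is a textbook room-temperature value.
rho_cu   = 1.68e-8           # resistivity of copper, ohm*m
wire_len = 2100.0            # m
wire_d   = 1.2e-3            # m
R_meas   = 32.3              # ohm

area = math.pi * (wire_d / 2) ** 2
R_calc = rho_cu * wire_len / area
print("calculated resistance: %.1f ohm (measured: %.1f ohm)" % (R_calc, R_meas))

# Ohmic heating of the coil at the applied currents
for I in (1, 2, 3, 4, 5):
    print("I = %d A -> dissipated power ~ %.0f W" % (I, R_meas * I ** 2))
```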
Finite element analysis of the solenoid
To analyse the magnetic field inside the solenoid, a finite element model is developed in Ansoft Maxwell. The FE model is shown in Figure 2. The material properties of the coil and the epoxy support are selected according to the design. Figure 3 shows the magnetic field distribution inside the solenoid when the applied current is 5 A. Figures 4 and 5 quantify the magnitude of the magnetic field inside the solenoid, both axially and longitudinally. It can be seen that the magnetic field intensity in the longitudinal middle of the solenoid is higher than at the top and bottom of the solenoid. The magnetic field in the longitudinal middle is also more uniform than at other locations.
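For comparison with the finite element trend, the classical on-axis field of a finite air-cored solenoid can be evaluated directly; in the sketch below the coil length and radius are assumed values chosen only for illustration, since Table 1 is not reproduced here.

```python
import math

MU0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A

def solenoid_axial_field(z, N, I, length, radius):
    """On-axis flux density of an air-cored solenoid of N turns, length `length`
    and mean radius `radius`, at axial position z measured from its centre."""
    n = N / length                           # turns per unit length
    a = (length / 2 - z) / math.hypot(length / 2 - z, radius)
    b = (length / 2 + z) / math.hypot(length / 2 + z, radius)
    return 0.5 * MU0 * n * I * (a + b)

# Assumed dimensions for illustration only (Table 1 of the paper is not reproduced here)
N, I, L, R = 3100, 5.0, 0.16, 0.05           # turns, A, m, m

for frac in (0.0, 0.25, 0.5):                # centre, halfway, end of the coil
    z = frac * L
    print("z = %.0f%% of half-length -> B = %.1f mT"
          % (100 * 2 * frac, 1e3 * solenoid_axial_field(z, N, I, L, R)))
```

Consistent with the FE result above, the field computed this way is largest and most uniform at the centre and falls off towards the coil ends.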
Experimental setup
To experimentally verify the findings from finite element analysis, an experimental testing is set up as shown in Figure 6. It consists of a solenoid, a DC power supply, a host computer, a magnetic field sensor and a temperature monitor. Details of the solenoid can be found at section 2. The DC power supply, with capacity of 250 V and 10 A, provides the solenoid with required currents from 0 A to 5 A. The magnetic field sensor, IDR-325 Gaussmeter, from Integrity Design and Research Corp, USA, is used to measure the magnetic field inside the MRE layer. The temperature monitor, model YC747UD-k type, is a four-channel thermometer with resolution of 0.1°C and capacity of -100°C to 1300°C. All the data from the magnetic field sensor and the thermometer is recorded by the host computer. Figure 8 shows the magnetic field distribution inside the solenoid when the currents applied to the coil are 1 A, 2 A, 3 A and 4 A. The meanings of the terms used in the figures, i.e. "50%", "100%", "Top", "Centre" and "Bottom" are shown in Figure 7. For example, 50% top means the location in the top of vertical centre line inside the solenoid. Comparisons between the finite element analysis and measurements are also listed in Figure 8. It is quite clear to see that the measurements of the magnetic fields follow the simulation results very well. With the increasing of the applied currents, the magnetic field appears a linear increasing trend. Figure 9 shows the magnetic field distribution along vertical directions inside the solenoid. It can be seen that the maximum magnetic field is in the centre of the solenoid. As indicated from the finite element analysis and the test results, the field intensity here is high and uniform. Figure 10 is the temperature monitoring results when a certain current pass through for 10 min. When the applied current I = 4 A, the temperature of the coil rises from 25°C to 50°C within 10 min. For the cooling phase, after removal of the current, it will last a long period till back to the room temperature, as shown in Figure11. For the case when a 3 A current is applied to the coil for a certain period of time and the coil temperature is above 40°C, it will last nearly 5 h till fully cooling down. As we know, the MR elastomer will have a weaken MR effect at high temperature, therefore, optimal design of the coil should be considered to find the compromise between the achieved magnetic field and the heating caused by the input current.
Conclusions
This study examined the magnetic field distribution and heating of a solenoid for an MR elastomer based isolator. Both finite element analysis and experimental testing were used to analyse the magnetic field in the solenoid. Experimental temperature monitoring of the solenoid was also conducted. | 2018-05-31T20:07:33.874Z | 2013-02-15T00:00:00.000 | {
"year": 2013,
"sha1": "2784118240630c1e4997809b622a27c2662a37dc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/412/1/012033",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "430cf556f50970f36b575ed3ad97d446623f5160",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Engineering",
"Physics"
]
} |
59462262 | pes2o/s2orc | v3-fos-license | FROM ODD ENCOUNTERS TO A PROSPECTIVE CONFLUENCE : DANCE-PHILOSOPHY
This text inquires into the relationship between Western philosophy and Western theatre dance from their odd encounters in modernity to the current affiliations between contemporary choreographic poetics, critical theory and contemporary philosophical thought. The point of departure for the inquiry is a discussion of the three problems that have structured the historically vexed relationship between dance and philosophy: dance’s belated acquisition of the status of an art discipline, the special ontological status of the work of dance, and the limits of dance’s meaning-production set by the theme of bodily movement’s “ephemerality” and “disappearance.” After critically examining the approaches of Alain Badiou and Jacques Ranciere in whose philosophies dance is relegated to a metaphor or, even worse, to an ahistorical conduit for a general ontology, the author makes a case for another movement of thought that arises in dance practice and is at the same time philosophical, rooted in Spinoza’s (and Deleuze’s) principle of expression. Demonstrating how choreographers, like Xavier Le Roy and Jonathan Burrows, create by “posing problems,” Cvejic presents a theory of “expressive concepts,” whereby choreography contributes to a philosophical rethinking of the relationship between the body, movement and time. This points to the new prospects of a kind of “dance-philosophy,” in which the epistemic hierarchy is reversed: the stake is no longer in what philosophy could do for dance, but how an experimental, radically pragmatic orientation in dance offers a practical framework for theorizing perception, concept-formation and other philosophical issues.
between Western theatre-dance and philosophy repeat the same refrain: that (Western) philosophy 'neglects dance' (Sparshott 1983) and has had very little to say about dancing (Levin 1983).Although baroque ballet has developed equally through both dancing practice and the discourse of the eighteenth century genre of the treatise (Cahusac [1754] 2004, Noverre [1760] 2004), dance as such has been excluded from hierarchical classifications of the beaux arts, most notably from Diderot's and D'Alembert's Enyclopédie .Moreover, François Pouillaude has recently argued that the birth of modern aesthetics means, for dance, the installment of its literal, 'inaugural absence' (Pouillaude 2009, 15) from philosophical interest.While Kant's Critique of Judgment (1790) makes only two brief remarks in passing about dance, perhaps because the combination of 'the play of sensations in music with the play of figures in the dance' ( § 52) shows a confusion of temporal and spatial (plastic) categories, Hegel's Aesthetics ([1835/1842] 1975) and Schelling's Philosophy of Art ([1802-03] 1989) make no mention of it.With the exception of the poetic privilege that Nietzsche confers upon it in Zarathustra's dancing songs (Nietzsche, Thus Spoke Zarathustra [1891] 1974) -a conspicuous case of metaphorical abduction which, as we will discuss later, carries on into contemporary philosophy -we will have to wait until the second half of the twentieth century for dance to make its theoretical debut in a small number of serious attempts to investigate it philosophically (e.g., Langer 1953;Sheets-Johnstone 1966).PERFORMANCE PHILOSOPHY VOL 1 (2015) My interest here is not in rehearsing the arguments of a rationale for this significant omission of dance from the Western canons of philosophy, aesthetics and art theory.The list of reasons involves outdated, overly general, disputed or even humorously coarse speculations (that dance, for example, has always been a 'female art' [Sparshott 1983, 95]).Instead, I will draw out the distinct registers of encounters between dance and philosophy in a minor key that is 'aside from' or that critically transforms major concerns of Western modernity.The range of these encounters begins with a presentation of three characteristic themes or recurrent problems across both continental and analytic philosophical inquiries into the nature and status of dance as a specific art, inquiries which will expound the difficulty in the rapport between the practice of dance and the abstract reflection of thought.Exposing a variety of efforts in twentieth-century philosophy to provide essentialist definitions of dance will consequently lead us to a remarkable episode of contemporary French thought in which dance is wrested as an instrument to reinstate Alain Badiou's and Jacques Rancière's particular philosophical concerns-included here because of their considerable impact on contemporary dance practitioners.In a third step, I will observe an inverse movement: how late twentieth-century French theory prompted a reconceptualisation of choreography and performance which began in the mid 1990s in European dance (in the works of Jérôme Bel, Vera Mantero, Xavier Le Roy, Eszter Salamon, Mette Ingvartsen, and others).The implications of this paradigm shift from modern, formal abstract movement to what is inappropriately referred to as 'conceptual dance' provide the ground for another kind of thought that both stems from and gives rise to a distinctive set of current practices of making, performing and attending 
dance, and that could best be accounted for by the principle of expression in the philosophy of Gilles Deleuze. Lastly, I will conclude this preliminary outline of 'dance-philosophy' with the most recent philosophical encounters with dance, as well as with a few terms, specific to contemporary dance, which have yet to receive philosophical attention.
Three problems for philosophies of dance:
Dance as an art paradigm
Dance's belated acquisition of the status of an art discipline constitutes the first obstacle to philosophy's consideration of dance as worthy of its theoretical interest.Jean-Georges Noverre's plea for a reform of mid-eighteenth-century ballet as 'ballet d'action' after the Aristotelian dramaturgical model of mimesis during the Enlightenment proved to be a symptom of dance's historical subservience to and theoretical entanglements with other arts; Noverre's attempt to dignify dance on a par with tragedy was in vain, as his reform was ignored among theatre-dance practitioners during his lifetime and most of his experiments took place outside of France, the main center and object of his critique (Noverre 2004, xi).The subordination of dance to theatre drama and its inferior position vis-à-vis the other arts, due to its function of ornamental virtuosity, continued.It is only in the period of early modernism, and the second Industrial Revolution, from the latter half of the nineteenth century until the 1930s, that a rupture with ballet foregrounds bodily movements as both means and ends of modern dance as a new independent art.Dance history has favored the modernist ontology of dance in the vein of Clement Greenberg's PERFORMANCE PHILOSOPHY VOL 1 (2015) theorisation of modernism associated with abstraction, where modern dance was hailed as a new 'beginning', or as a 'discovery of the actual substance of the dance, which it found to be movement' (Martin [1933(Martin [ ] 1989, 6), 6), or as 'absolute dance' rooted in the pure bodily expression of subjective human experience, as Mary Wigman contends in her 'philosophy of dance' (Wigman in Cohen 1992, 149-153).For dance to become 'a paradigm of art', in Rancière's sense of 'becoming a paradigm of the relation between […] the movements of a body on a stage and the gestures of a body in a workshop or in the street' (Rancière 2014, n.p.), it also had to be recognized through 'other eyes', most notably in the writings of Stéphane Mallarmé, which will remain, as we shall see, the crucial reference for contemporary philosophy's relation to dance (Badiou 1993;Rancière 2014;Pouillaude 2009).For Rancière, this 'moment of dance', in both the historical and conceptual sense -wherein one could also speak of a momentum by which a new balance between the body and movement is struck -also involves the emergence of a new subject position in which the first authors of modern dance appear.Or as Rancière summarises: … the conventional art of dramatic action and the 'mechanical' art of the ballet could be dismissed and substituted by a unique art of the performing body 'speaking' to the audience in the universal language of movement.(Rancière 2014, n.p.) 
Critical of the political consequences of claims of autonomy by virtue of the so-called universality of bodily movement, the post-Marxist literary theorist Andrew Hewitt regards modern dance as a source of an aesthetic ideology which proclaims emancipation through the body's experience of its own truth as its nature.The purity of movement is staked out through its origin or source: the body of the dancer.Movement becomes ontologically bound to the body, ontologised as a minimal resting place of 'noncompromisable subjectivity' (Hewitt 2005, 18).Binding movement to the body as a mechanism of subjectivation will pose two problems: the core of dance's holistic resistance to discursive thought coupled with dance practitioners' mistrust of theorisation, and the difficulty in establishing the work of dance, which will be tackled next.
The work of dance
Thanks to the coincidence of the source, instrument and site of danced movement in the body, the work of dance is conferred special ontological status.In aesthetic theories from the 1980s and beyond, largely informed by phenomenology and analytic philosophy, the status presupposes a duality between the 'work' and its multiple instances, that is, 'performances' (see Davies 1991, Ingarden 1989). 1 Graham McFee has formulated the most prominent view of analytic aesthetics on this issue: With dance, as with music, there are at least two 'objects of appreciation': the work itself and tonight's performance of it.These might be treated differently for critical purposes: thus, the dance seen last night might have been a wonderful performance of a mediocre work or (more likely) the opposite.To provide a conceptual structure for discussion of such multiples, some writers (Wollheim 1980, sections 35-6;McFee 1992, 90-4) have employed a type/token framework, such that dance performances are tokens of an (abstract) type.(McFee 2001, 546) The uncertainty among analytic philosophers regarding what is constitutive and what is contingent for a work of dance viewed through a particular performance thereof cannot be the ground for attributing to it a purportedly weak, dubious, special or problematic condition.Such an ontological claim can be disputed, firstly, on the basis of its lack of specialist knowledge about dance practice, and secondly, its error of applying the standard of musical notation to dance.Moreover, the incapacity of Western philosophy and aesthetics to think dance might have to do with the tradition of applying to it the common regime of 'the work of art' (oeuvre), while there is 'no library of movement' and 'no stable objects' shareable by a broader community outside that of dance specialists (Pouillaude 2009, 9).
Contesting the strict conditioning of Nelson Goodman's division of allographic and autographic arts (Goodman 1976), Pouillaude has suggested that dance be considered an allographic art without notation.The allographic regime of dance grounds the possibility of iteration, extraction and retrieval of singular, constitutive characteristics -such as a repertory of movements, for instance -or contingent characteristics of an individual interpretation and their inscription in an oral-mimetic practice rather than in the writing and reading modes of music or literature.Most dance notation systems have proven insufficient or inadequate, lacking the prominence held by standardised Western notation in the world of music. 2 Therefore, Pouillaude has reformulated the type-token duality in a framework more suited to dance: the work of dance exists at once as a 'public object', shared and offered for judgment, and as a 'resistant object', capable of surviving the death of its initial protagonists, or in other words, existing beyond the experience or memory of its creation and performing processes (Pouillaude 2009, 77).He has chosen to redescribe the problem of dance's mode of existence in a conceptual imagery more passionate than the one yielded by the terms of positivist logic.Dance exhibits, for lack of an appropriate English translation, désoeuvrement: the regime of an 'unworking' (idle, inoperative) work.It is characterised, on the one hand, by physical expenditure (la dépense) or indifference to the trace or residue of action (Pouillaude 2009, 76), and on the other hand, by auto-affection, where motion in performance produces an infinite cycle of the renewal of energy in lieu of objects or things (Pouillaude 2009, 81).
In concert with many recent projects which reinvent the tools for documenting and transmitting works of dance, 3 we may conclude that the ontological status of each work must be resolved individually.This entails paying attention to the idiosyncratic relationship between the shareable (exterior or public) and the reticent, self-absorbed or shattered aspects of a dance work, case by case.
Meaning and sense
The third problem concerns the production of meaning in dance, how dance signifies or 'makes sense', which has been labeled as the 'standard sotto voce accompaniment' to much of twentiethcentury and contemporary dance, attesting to the puzzlement of novice dance audiences (Sheets-Johnstone 1979, 33).From the viewpoint of analytic philosophy, this problem is addressed as a matter of underdetermination: PERFORMANCE PHILOSOPHY VOL 1 (2015) … the dance work is always underdetermined, relative to any particular performance of it, since each performance makes concrete in particular ways features of the dance which might have been concretized in other ways, indeed, which might be made concrete in those other ways in another performance of that dance, even one by the same company.(McFee 2001, 548) One way in which recent dance studies have grappled with bewilderment in the face of what appears abstract and elusive about dancing movement was to attribute it to an ontology of disappearance.Peggy Phelan's thesis according to which performance is considered an event of elusive presence condemned to loss and repetitions of memory (Phelan 1993, 148-152) has had a significant impact on a segment of dance scholarship aligned with Lacanian and Derridean discourses on presence, writing, subjectivity, the gaze and history (Kruschkova ed. 2005;Siegmund 2006;Lepecki 2006;Foellmer 2009).The ephemerality of movement in dance is described as the body's self-erasure in the 'fading forms' of movement, and moreover is featured as a paradigm of the fundamental condition of performance.Disappearance, loss, lack and absence have been the notions through which dance scholars in the past decade have examined movement with bodily presence, regarding it as that which disappears and marks the passing of time.Comparison with music, as the time-based art with which dance shares some phenomenal characteristics, shows how inept the notion of dance existing 'at a perpetual vanishing point' is (Siegel 1972, 1).Music is no less immaterial than dance, yet thanks to its early alliance with science and philosophy, it has developed a notational system that secures against its disappearance.Had dance not been disregarded for its too fleshy (and therefore, ironically material) appearance in the past, it wouldn't have been so easily condemned to an ontology disappearance.Thus, the theme of disappearance obscures the problem of dance's significance, which the divergent philosophical theories had tried to solve earlier.It might be worthwhile to revisit the most noted attempts in so far as they disclose philosophy's method of making dance its object.We will map them out briefly here.Susanne K. 
Langer was among the first authors who sought to explain how dance signifies, specifically on the basis of her symbolisation theory, devised under the influence of Ernst Cassirer (Langer 1953).Countering the prevalent notion of self-expression in modern dance, which she, unlike many other philosophers, was sufficiently familiar with so as to invoke it in concrete examples (Mary Wigman features as a prominent case), Langer introduced a distinction between the virtual and actual aspects of gesture as a symbolic form of imagined feelings in lieu of felt or intended-to-be-expressed emotions.The founding assumption of her conception of the virtual is indebted to Cassirer's concept of 'mythical consciousness', in which the symbol and its meaning are inseparable, and which reveals a quest for a deep-seated meaning of dance in tune with the German idealist tradition of Ausdruckstanz, as demonstrated by the following excerpt: The dance creates an image of nameless and even bodiless Powers filling a complete, autonomous realm, a 'world.'It is the first presentation of the world as a realm of mystic forces….The substance of such dance creation is the same Power that enchanted ancient caves and forests, but today we invoke it with full knowledge of its illusory status, and therefore with wholly artistic intent.(Langer 1983, 38;45 A similar phenomenological ground with mystical undertones is to be found in Maxine Sheets-Johnstone's theory of objects-in-motion.Whereas Langer's theory accounts for the expressionist view of abstract motion, Sheets-Johnstone's phenomenological analysis of the perception of motion gives the basis for a formalist perspective on abstract bodily movement and presence.Or in her words, the dancer is not moving through a form; a form is moving through him.The dancer is not doing movement; movement is doing him.To be an object-in-motion is to fulfill a kinetic destiny, and to fulfill a kinetic destiny is to bring a qualitative world to life ….The dancer is not making the quality manifest, the quality is manifesting itself.… It is only insofar as the dancer is permeated by quality, that he or she allows it full play by surrendering to it, that quality appears, and that the dancer can be described as 'having' a certain quality.It is on the basis of being had and thus having, or being possessed and thus possessing, that we can speak of a qualitative presence.In effect, quality is everywhere present because it is an absolute possession, and it is an absolute possession because it is an absolute surrender.(Sheets-Johnstone 1979, 40) My aim in citing these two phenomenological interpretations of dance at length is to indicate the genealogy of the prevalent vitalist idea which motivates dance practitioners throughout the twentieth century and today, be it a mystical power that expresses itself in motion 4 or a formal quality that, despite its being objectified, possesses the body.This idea is the metaphysical horizon by which philosophy ennobled dance and elevated its status to a high art in the period from the 1950s to the 1970s.And it is also the episteme which Susan Foster breaks with in her quest for a dance theory that will specifically read dances and their subjects from a structuralist perspective of literary rhetorics and semiotics.
In retrospect, Foster's Reading Dancing: Bodies and Subjects in Contemporary American Dance (1986) is not only emblematic of the structuralist and, specifically semiotic, encroachment upon dance scholarship and its operation against received phenomenological ideas about dance.It also marks the beginning of a wholesale translation of methods of culturalist analysis and poststructuralist criticism, as well as of a set of particular concerns and topics mirroring the agendas of feminism, gender and queer theory, postcolonial theory and the politics of racial, ethnic and other kinds of identitarian difference, which shaped Dance Studies in the 1990s (see Goellner and Shea 1994, Koritz 1996, Dils andCooper 2001).Finally, dance theory was no longer short of meaningproduction, but of thought, or the problems and questions which would provoke philosophical thinking that would be particular to dance.In too many academic papers the works of dance started to model, like mannequins, for a particular theoretical interpretation, which reduced their meaning and thought-provoking capacity to readymade terms and concepts (see Desmond 1997).
Therefore, another turn was needed, this time coming from a number of choreographers in Europe who sought a new poetics, one which would upset the sensibility and knowledge about dance and exceed both the formalist-abstract paradigm of dance with its phenomenological heritage and the poststructuralist readings of dance qua text.It was the choreographers themselves-Jérôme Bel, Xavier Le Roy, Vera Mantero, Juan Dominguez, Mårten Spångberg, Eszter Salamon, Mette PERFORMANCE PHILOSOPHY VOL 1 (2015) Ingvartsen, BADco and others across Europe-who shifted their focus from the formal-expressive categories of style, language and thematic "aboutness" of an aesthetic object to a critical and experimental inquiry into the conditions of theatrical representation, such as the act and the subject of performance (Mantero's Perhaps she could dance first and think afterwards, 1991; Bel's The Last Performance, 1998;Salamon's What A Body You Have Honey, 2001), spectatorship (Bel's The Show Must Go On, 2001), the creation process, rehearsal and presentation (Le Roy's E.X.T.E.N.S.I.O.N.S., 1998-2003, Dominguez' All Good Spies Are My Age, 2002), material conditions of work (BADco/Nikolina Pristaš' Changes/Promjene), and so forth.Their preoccupations began to centre on what dance or performance is, how choreography could be expanded beyond the movement of the body and how the way dance is made necessarily determines performance.As the aesthetic values of kinetic forms or expressions became secondary, although not entirely absent, their work was labeled as 'conceptual dance'; however, this term is arguably a misnomer, since all the work had in common with conceptual art was the conceptualisation of its working methods and medium -namely, the dancing body. 5But the most important outcome of what critics also referred to as 'new choreography' or 'new choreographic performance' (Lepecki 1999(Lepecki , 2006;;Ploebst 2001) was that 'theory', or rather the reading of texts by Derrida, Deleuze, Deleuze and Guattari and so on, became a resource for choreographic texts, aligning dance with philosophy in the very poetics of dance.The effect that such a theoretical or conceptual turn had on contemporary dance is that it made it more widely visible, beyond what used to be the narrow and marginalized segment of the performing arts-that is to say, dance. 6
Philosophers' metaphors of dance
The conceptual turn in contemporary dance and the discussions that included the voices of 2014) were written on the occasion of gatherings organised by the protagonists of so-called conceptual dance. 7Prior to these, Alain Badiou's essay 'Dance as a Metaphor for Thought' ([1993] 2005) elicited attention outside philosophy among dance and performance theorists by the force of his contentious assertions about dance. 8 Although Badiou's and Rancière's views on dance differ to the extent that their philosophical projects are politically and epistemologically different, they share a familiar methodological habit: their approach bypasses works of dance by mainly focusing on literary or cinematic sources that mediate dance or bodily movement.In both cases, Mallarmé's writings on dance figure is a significant reference (Mallarmé 1956).Whereas Rancière occasionally invokes concrete works (Lucinda Childs' Dance from 1979, for example) because his thesis on the aesthetic regime of art must be situated historically with a hint of analytical examples, for Badiou dance doesn't exist empirically, in the history of its practice, works, techniques, names and bodies (the only dancerelated names being Mallarmé and Nietzsche).In fact, Badiou explicitly discloses his 'mission' to PERFORMANCE PHILOSOPHY VOL 1 (2015) speak of 'dance not thought on its own terms, on the basis of its history and technique, but of dance such as it is given welcome and shelter by philosophy' (Badiou 2005, 63; my emphasis).Dance appears as nothing more than an instrument of a philosophical exercise-a new 'metaphor' for probing Badiou's familiar subtractive ontology of event and thought.Therefore, we are compelled to make a binary decision, just like Badiou's event requires of its subjects: to either read this essay figuratively, as a specimen of the philosopher's conception of art and aesthetics, divorced from any historical and practical concerns of the art of dance, or to take Badiou's metaphor 'seriously' and envisage the dance that would ensue from his axioms.In a recent critique of Badiou, Jonathan Owen Clark has demonstrated how measuring the latter with the former register, namely, Badiou's theory from the viewpoint of the history of dance with his claims of 'inaesthetics', reveals difficulties in his philosophical arguments (Clark 2011).Let's briefly examine a few striking points in Badiou's encounter with dance.
With the aim of ostensibly furthering Nietzsche's praise of thought against the spirit of gravity, epitomized by the obedience of long German legs (i.e. military parade), Badiou rouses a series of Nietzsche's metaphors that depict dancing as flight ('bird'), explosive leap ('fountain'), the innocence of a new beginning ('child'), and as illusive lightness ('intangible air').The body of his description is likened to the silent ballet dancer 'on points' that 'pricks the floor just as one would puncture a cloud' (Badiou 2005, 59).To make the metaphor 'work', the philosopher adjusts his image of dance to the requirements of his well-known subtractive ontology of event, which I will briefly outline here.The dancing body must be unrestrained, its movement not caused externally.Dancing isn't about self-expression either, since it appears as a muted intensity, or in Badiou's words, 'interiority' itself.Thus, for Badiou dance solely extends in space-an indeterminate, 'pure', virgin site; it doesn't have a name, as its body is anonymous too.It determines the stage before the event acquires a name that would cut the past from the future, and is therefore a suspension of time within space.Badiou's vision of dance subtracts all particularity from it, not only a historical context in which a particular subject acts and all possible registers of relations resulting from composing the motion of bodies in time and space, but also the form and concept in the act of dancing, the choreographic knowledge which supports it as well as the gaze which interprets it.The closest image of such dance would be the 'spontaneous' free improvisation in a solo performance, but seen in and for itself, in a romantic guise of an incorporeal event in which dancing marks the limit between being and disappearing.The translation of this image into contemporary dance practice resonates with the problematic ubiquity of solo dance, which promotes the individual autonomy of the dancer and the fetishist exclusivity of a 'here-and-now' expression withdrawing from this world.Reveling in Mallarmé's paradoxical statement that 'the dancer doesn't dance', Badiou dispossesses dance of the right to be an art, bestowing on it, conversely, a loftier status: the vanishing 'sign' of the possibility of art as such, inscribed in a 'thought-body' (Badiou 2005, 64).A condescending gift evocative of the Hegelian evaluative hierarchy of the arts with philosophy's eminence above them: dance isn't art, because it is much more than art.It is the condition of possibility for art as the body's capacity for thought.That Badiou's philosophical abduction of dance implies a classical conception about modernist autonomy can be best inferred from a comparison with Rancière's 'Moment of Dance' (Rancière PERFORMANCE PHILOSOPHY VOL 1 (2015) 2014).Like Badiou, Rancière reasserts his established theory on the several regimes of art, the aesthetic regime in particular, upon the new terrain of dance (Rancière 2004).However, his reading of the same claims that Badiou elaborated with Mallarmé, about the anonymity of the dancing woman who is not a woman and who does not dance, yields, in Rancière, a different concept of autonomy.Aptly, the centerpiece of his analysis is movement in Dziga Vertov's film Man with a Movie Camera (1929), the notable documentary which experiments with cinematic techniques in representing Soviet urban life.When Rancière designates this movement as 'free', which peculiarly echoes the French term for early modern dance after Isadora Duncan ('danse 
he painstakingly distinguishes it from the spontaneous, free expression based on will. In an implicit commentary on Badiou's essentialist view on dance, Rancière explains that free movement is not a matter of a purified essence specific to dance, but of an indistinction between means and ends, an aesthetic revolution in the Kantian sense of beauty and a "human revolution" as in Marx's sense of ending the alienation of workers. Thus he wrests the autonomy of movement from the crosscutting of the images of people at work and people at play in Vertov's film as a case of heteronomy, a heterogeneous equality:

The movements of the dancers are carried along in the rhythm of the montage…. But, conversely, dance is the art that epitomizes the work of montage. [It] is not so much the model of an original spring of the body as it is a model of translation, in the two senses of the word: it is a movement that presents itself as the translation of another movement. This is what was meant by Mallarmé's formula: the dancer does not dance. Instead she writes. However what she writes is not a composition of the movements and figures belonging to the vocabulary of the ballet. It is, he says, a 'metaphor of our form'. But this metaphor has no translation in any dictionary of tropes. It is the task of the spectator to translate it in turn, to compose for himself the poem that the ballerina writes with her feet. (Rancière 2014, n.p.)

In contrast with Badiou's ontological grip on dance as a new beginning, Rancière adamantly vies for an iterative differentiation that renders translation political. In a word, what operates in translation is the principle of equality which emancipates both dance and its spectators from the hierarchy of prescribed roles, activities and places, or of the police as a general law of distributing the sensible.
Apart from undoing the simple antinomy of the subtractive essentialist claim of modernist autonomy in Badiou - by showing that the more one stresses the specificity of an art, the more one is compelled to identify that specificity with the experience of radical heterogeneity (Clark 2011, n.p.) - what does Rancière's aesthetic regime do for dance? It provides the philosophical ground for a broader transdisciplinary consideration of dance as an instrument for studying the social practices of movement and bodies outside/beyond the narrow bounds of a specific art discipline. By consequence, the concept of 'social choreography' attests to such an approach, endorsing the critical analysis of how ideology operates aesthetically and how dance and everyday movement rehearse rather than only reflect social order (Hewitt 2005; Cvejić and Vujanović 2012). In Rancière's notion of dance as 'montage' as well as in the study of 'social choreography', we can observe how the instrumentalisation of dance exceeds its usurpation by philosophy by conversely bracing dance and choreography as the instrument (and not as a metaphor) of expanded thought.
Choreographic performance practice and the expression of thought
There is yet one more register of the encounter between dance and philosophy, one which perhaps comes the closest to 'performance philosophy' as its particular 'dance-variant'. It concerns a kind of thought that arises from the recent practice of European contemporary choreography (since the mid-1990s) and at the same time gives rise to, that is, distinguishes, the specific modes of making, performing and spectatorship in dance. What qualifies such movement is immanence, as Laura Cull describes it (Cull 2013, 12-13): a vertigo that ceaselessly produces processes that interfere in one another, processes of thought, sensibility, imagination, physical movement, attention and so on, as opposed to the hierarchy of philosophical thought transcending dance. Thus, the choreographic practice in question develops a distinctive method of creation which could be accounted for as choreographing problems, rooted in the philosophy of Gilles Deleuze and his reading of Spinoza and Henri Bergson. 9 The choreographic creation of problems as an expressive logic of thought will be featured here as a specimen of the theorisation of choreography in the nexus of philosophy and experimental dance practice. A comparable approach is found in Petra Sabisch's study titled Choreographing Relations: Practical Philosophy and Contemporary Choreography (2011), whose explicit aim is to situate philosophy and choreography on a par with, and on the same plane as, an immanent practice of thought. While Sabisch emphasises thought and sensibility as concepts of relation - as singular 'assemblages of relations to objects, to music, to bodies, relations between bodies, relations of visibility, relations between forces, relations of movement and rest, etc.' (Sabisch 2011, 7) - here problem will be the term in my account of the relation between ideas and experimentation. Whereas relationality stresses the proximity between philosophical and choreographic articulation, 'problem' focalises the driving force of critical and experimental modes of creation.
The choreographers whose work is comprehended by the method of problems belong to that grouping of artists from various disciplines who have developed an affinity with Deleuze's thought over the last two decades. 10 However, the references to Deleuze in their work are occasional and inconsistent, often mixed with a whole array of other philosophers and theories. Thus, the fact that these choreographers have been reading contemporary philosophy, Deleuze among other authors, does not legitimize per se or determine the ways that Deleuzian thought might matter for contemporary choreography. It informs us, though, as Efrosini Protopapa has remarked about Jonathan Burrows and Xavier Le Roy, that 'these artists consider writing, reading and discussing a method of practice within choreography' (Protopapa 2004, n.p.), which compels us to read them with a particular focus on the questions that guided them in experimenting.
In a brief definition, the method of problems consists in the posing of questions that differentiate terms and conditions under which the creation of a material object - the composition of a bodily movement - unfolds. In Deleuze, problems are objects of 'Ideas', as they characterise the relationship between forms of thought and forms of sensibility as one of difference rather than identity. Ideas here are choreographic, which entails inventions of the body and/or movement in performance as well as of time that is coextensive with the body and movement in performance.
In European contemporary dance, the choreographic idea that constituted modern dance during the first decades of the twentieth century is still pivotal: the synthesis between the body and movement under two operations, the subjectivation of the dancer through (emotive) self-expression and the objectivation of movement through the physical expression of the dancing body. Subjectivation secures the necessity of the movement in the body's urge to move and express its inner (emotional) experience. Objectivation presupposes another relationship between movement, the body and the subject in the expressive act: dancing is foregrounded, or even reduced to a physical articulation of the movement, whose meaning lies, tautologically, in itself.
Movement is created as an object in itself that engages bones, muscles, ligaments, nerves and other body parts of the dancer in strictly physical activity. Objectivation of the movement by self-referentiality renounces the expression of the self in the movement - the 'outwarding' of an inner experience - but it still relies on the body-movement bind.
Both types of synthesis connect the body and movement in one organic whole, which in the experimental practice of European contemporary dance is rendered problematic. The rupture of the organic regime consists of dispensing either with the body as the source of authentic movement or with the object of movement to which the body is physically tied. Choreographing problems involves composing these ruptures between movement, the body and time in performance such that they engender a shock upon sensibility, one that renders many aspects of choreographic performances hard to identify, recognize or accommodate within the horizon of expectations of contemporary dance. These problems 'force' thought as an exercise of the limits of sensibility that can be accounted for not by representation, but by the principle of expression that Deleuze develops from Spinoza's philosophy in his key books on ontology, Expressionism in Philosophy: Spinoza ([1968] 1992) and Difference and Repetition ([1968] 1994). Expression is a logic opposed to representation; it is a certain way of thinking and forming ideas outside of analogy and emanation as the dual aspects that govern (transcendental) relations of agreement between the idea and the object understood to be a thing. It is the thought that forces a practical path in which ideas in the form of problems and compositions arise in parallel, non-causal correspondence. The probing of this path could be referred to as experimentation, whereby time is inserted into the construction of the problem, doubled by a sensorial and affective experience of the experiment parallel to the thought. This time could be regarded as a time of learning, which involves unlearning or undoing, ungrounding the knowledge of possibilities that reproduce rather than create new movements, bodies and their relations. Such learning implies 'violent' training without a general method, but with a dedication to the problem that, as Deleuze describes, 'demand[s] the very transformation of our body and our language' (Deleuze 1994, 192). Le Roy explicitly refers to learning as the process of a removal of habit under the construction of constraints:

I always worked with constructing constraints in order to produce 'new' movement or to transform the perception of the body in a situation. What can you do when you cannot do this or that; you have to look for another way, and you have to go around habits. In a way, it's making things difficult in order to explore ways outside the power of habits. (Le Roy in Cvejić 2009, n.p.)

Problems, also understood as the disruptions of habits, as Le Roy reports above, offer us an insight into a coextensive parallelism between thinking and the practices of making, performing and attending choreographic performance. Thus, the parallelism accounts for their dual status: the problems stem from the very process of creation, as they express the thought that guides the choreographers in their decisions; and the problems are also given by the performances, as they further provoke us, who observe the work post hoc, to account for them conceptually by a philosophical method. In this way, choreography contributes to a philosophical rethinking of the relationship between the body, movement and time and, consequently, gives rise to distinctive concepts of its own. 11
The theory of expression of thought we have outlined here shares a common ground with a few other philosophical accounts of contemporary dance that separate these Deleuzian approaches from the previously discussed philosophies of dance. 12 Its main assumption is that dance, like any other performance, should be regarded as a time-based art, in contrast to the linguistic assumptions of performativity. The approach of the expression-based thought in choreography favors the notion of attending, according to which performance is approached from the aspect of time conceived as Bergsonian duration: a 'succession of qualitative changes, which melt into and permeate one another' (Bergson 2002, 61), an indivisible continuous multiplicity. The experimental choreographic practice of the last two decades has countered the perception of movement's ephemerality or bodily presence/absence by sustaining motion and stillness, by persisting in the transformation of movement and bodies into the future, by exploring sensations and affects in processes of becoming, by implicating the spectators in processes beyond the actual performance, by manipulating performers' memory of past movements in the present. These strategies all point to the importance of duration, or time in which change is created and perceived, and becoming, through which the bodies and movements transform. Therefore, dance is better approached as a transformation process rather than as a fleeting act - contrary to the prevalent thesis about dance's disappearance, which we discussed earlier in Badiou's and other philosophical accounts of dance. The genesis of dance is located in process and duration rather than in an act whose meaning transcends or lies outside of duration.
Prospects of a dance-philosophy
We are coming to the end of a winding course marked by historical, inherited difficulties, and then by the sporadic ventures of philosophy into dance in the twentieth century. Having arrived at the beginnings of a 'dance-philosophy' today as a kind of thought which arises within the material practice of dancing, only provisory conclusions can be drawn. First of all, thanks to the expansion of choreographic poetics during the last two decades, and to its encounter with Deleuze, dance has ceased to figure as a metaphor in universal abstract singular form, an ahistorical conduit for a general ontology, as it was for Nietzsche or Badiou. What contemporary dance and its theories have 'learnt' from Deleuze is the immanence of the practice, whereby philosophy no longer claims the exclusive right to thinking nor does it seek dance to flesh out its abstract and general ideas.
Secondly, after centuries of musing on 'what philosophy could do for dance', the question is now reversed. Recent writings by Brian Massumi and Erin Manning, as well as Alva Noë, contribute with advanced solutions to the problem of 'what dance can do for philosophy' instead. In view of opening philosophy to its outside, Massumi and Manning explore 'what writing can do to make thought-felt what art can do, with philosophy' (Massumi and Manning 2012, vii) after Deleuze, expanded in relation to A. N. Whitehead's process-oriented philosophy and William James's radical empiricism (see also Manning 2013). Like Massumi and Manning, Noë has also incorporated his research of William Forsythe's movement language into his study of embodied, action-based cognition (Noë 2012). Dance in particular has enabled him to demonstrate how perception and concept-formation can no longer be accounted for by the traditional representational theory of mind, how they instead depend on skills acquired through doing and training, similarly to how dance is learnt and made. The effect of dance entering these philosophical considerations is an upgrading of a speculative philosophy of process, for instance, or of a phenomenology combined with cognitive science that takes an experimental, radically pragmatic stance. Given the newly gained experimental ground of philosophies which consider dance as a movement of thought, we might hope that some problems that contemporary dance has been grappling with - kinaesthesia and proprioception as sensations specifically related to bodily movement, and gesture - will become the object of a fruitful, reciprocal encounter between contemporary dance and contemporary philosophy, or, in a (portmanteau experimental composition of a) word, a dance-philosophy.

Notes

1 The duality in the ontological status of the work which involves performance, such as music and dance, was first posited by phenomenological aesthetics, most notably in the work of the Polish philosopher Roman Ingarden (1989).

2 Myriam Imschoot (200) writes: "When looking for an overview on the notational endeavors of choreographers and dance makers in the last centuries, what one sees is more a sort of 'babelisation' of idiosyncratic instructions than a commonly and widely applied overarching language. To some, the dream of making dance visible and thus indelible has therefore proven to be an illusion. Unable to furnish the bones, dance would linger outside, on the threshold of the archive."

3 William Forsythe, Improvisation Technologies, CD-ROM 1999; Emio Greco PC, Inside Movement Knowledge, 2008; Forsythe's Motion Bank with online scores of the works by Jonathan Burrows and Matteo Fargion, Deborah Hay, Bebe Miller and Thomas Hauert available online (http://motionbank.org).

4 For a contemporary dance poetics rooted in formalism and combined with Far-Eastern philosophical influences that imbue the form with mystical value, see the work of Anne Teresa De Keersmaeker (De Keersmaeker and Cvejić 2013).

5 The debate about 'conceptual dance' went on for a few years in European journals and magazines specialised in the performing arts (Frakcija, Maska, TkH Journal for Performing Arts Theory, Ballet-Tanz International, Mouvement, Etcetera and others) and came to the conclusion that 'conceptual dance' does not designate any movement, poetics, style or genre. Instead, it symptomatically evidences a problem of qualifying as choreographies those performances that contest the foundational characteristics of dance as a historical art discipline. 'Conceptual dance' is still used derogatorily, connoting the negative sense of a betrayal of dance.

contemporary philosophers precipitated an interest in dance among important figures in contemporary European philosophy, such as Jean-Luc Nancy, whose conversations with the choreographer Mathilde Monnier spawned a book and a performance (Nancy and Monnier 2005, performance Allitérations 2001), or Jacques Rancière, whose essays 'The Emancipated Spectator' (2009) and 'The Moment of Dance' (2014).
"year": 2015,
"sha1": "7008f15467e64caa3de6947fa8144edce44107b7",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.performancephilosophy.org/journal/article/download/29/86",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7008f15467e64caa3de6947fa8144edce44107b7",
"s2fieldsofstudy": [
"Art",
"Philosophy"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Benefit of an action camera in endoscopy education for medical students under COVID-19
Background Endoscopy is an important form of clinical gastroenterology education because it gives students the opportunity to learn about diagnosis procedures and even treatment. During the COVID-19 pandemic, medical students were observed from outside the endoscopy room due to the risk of airborne infection. In this study, we investigated the efficacy of combining endoscopy education with doctor’s-eye-view videos of the procedure obtained using live-action cameras (GoPro®). Methods From February to May 2021, endoscopists wore GoPro Hero8 cameras on their heads to display a doctor’s-eye view video outside the room. The efficacy of the GoPro videos in combination with endoscopic monitoring was evaluated by 15 participating medical students. The participants rated the efficacy on a 5-point scale and commented on the positive and negative points. Results A total of 78.6% of participants evaluated the GoPro as good; 57.2% answered that it increased their understanding, with 71.4% stating that it increased their understanding of procedures in particular. A total of 85.7% of the students answered that their interest in endoscopy had increased, and 85.7% evaluated the benefit of the GoPro videos as good. In addition, 64.3% answered that the method was effective in preventing COVID-19 infection. Education using GoPro videos enabled students to feel as if they were conducting the endoscopy themselves and enabled them to concentrate on learning. Conclusions Practical endoscopic education using a GoPro is an effective educational tool that not only increases understanding of endoscopic practice but also stimulates students’ interest and awareness of their future as doctors.
Background
In clinical medical education, hospital training is a valuable opportunity for students to learn not only the medical practices of clinicians but also how to interact with patients and cooperate with healthcare professionals [1,2].Medical students are trained in clinical skills, including medical examination techniques, and in specific clinical skills through patient care under the guidance of a supervisor [3].However, medical student can only perform a limited number of medical interventions themselves, as many medical interventions can only be performed after practitioners have passed the national medical examination and have a specialty.Although some clinical procedures can be learned from simulated patients and practice equipment, the variety is limited, and it is difficult to study many medical procedures performed in the hospital setting.While training in the department of gastroenterology, medical students study abdominal examinations and patient management in hospital units.They also study abdominal ultrasonography and esophagogastroduodenoscopy (EGD).Non-invasive examinations, such as ultrasonography, can be performed by medical students, and students can practice on one another as well as on simulation devices.However, EGD is an invasive procedure that cannot be performed by students.Because medical students can only observe the procedure, they cannot get a real impression of it.
The novel coronavirus disease 2019 (COVID-19) became a rapidly spreading global pandemic in 2020 that attracted worldwide attention.On 30 January 2020 the International Health Regulation 2005 Emergency Committee declared the COVID-19 outbreak a public health emergency of international concern (World Health Organization [WHO], 2020a).The Japanese government declared a state of emergency in April of 2020, which was lifted in late May [4].During this time, many clinical clerkships were cancelled because young people could be asymptomatically infected with the virus and could infect patients with whom they came into contact during their clinical clerkships [5][6][7].All lectures, except for clinical practice, were provided online.Students were kept out of hospitals, and non-essential outings were restricted to prevent COVID-19 infection [8].Endoscopy is an essential part of gastroenterological clinical training.In this training, students learn not only endoscopic techniques and findings but also how doctors care for their patients.However, EGD observation during clinical training was also cancelled to prevent infection [9,10], so students had to learn through virtual devices and videos [11].The COVID-19 outbreak led to the development of various virtual devices and online educational systems [11], but this educational environment without face-to-face patient contact is thought to have taken away the opportunity for medical students to become aware of themselves as doctors.As measures to prevent COVID-19 infection became clearer, clinical practice in hospitals has now reopened.However, to prevent spread, social distancing is still required, and contact time between students and patients remains restricted [12].Before COVID-19, medical students entered the endoscope room and observed alongside the endoscopist; after COVID-19, the practice changed to viewing the EGD monitor from outside the examination room.This prevents infection, but using the monitor makes it difficult to learn about patient care during the examination.Moreover, students cannot observe the cooperation between the endoscopist and the EGD assistants.A headmounted ultra-high-definition video (GoPro®) has been reported to be useful as an educational device for observing surgical techniques and narrow-field-of-view medical procedures from the doctor's point of view [13,14].
Therefore, the aim of this study was to evaluate the efficacy of using both EGD monitor screens and doctors'-point-of-view images from GoPro cameras in EGD training.
Study Design and materials
The study was conducted with students on clinical training in the Gastroenterology unit of Juntendo University Hospital between February and May 2021.Participating students were all students who rotated during that period.
The endoscopist used a GoPro® Hero8 and HEAD-STRAP (San Mateo, CA, USA) to send a doctors'-pointof-view video to a 12-inch tablet device (Apple iPad Pro®, Apple Inc., Cupertino, USA), using a wireless connection (Fig. 1A).The endoscopists in this study were specialists and supervisors with at least 15 years of experience in endoscopic procedures.
First, medical students studied routine and therapeutic endoscopy from outside the endoscope room by observing an endoscopic video (Fig. 1B) narrated by the physician.The endoscopist's voice is heard outside the endoscopy room via speakers and directly in this study.Especially during biopsies and treatment of varicose veins, the endoscopist explained why the biopsy was being performed and during endoscopic variceal ligation (EVL) procedures, the endoscopist showed images via a GoPro screen on the EVL device (Fig. 1D).Next, a GoPro live video was added for endoscopic education (Fig. 1 C and 1D), after which medical students were asked to fill out questionnaires on the experience.
Evaluation of GoPro-combined education
To evaluate the efficacy of the GoPro in teaching about endoscopy practice, students were asked to complete questionnaires after the procedure. The questionnaire covered six topics: (1) comprehension of EGD, (2) doctor's point of view, (3) comprehension of EGD techniques, (4) interest in EGD, (5) protection against infection, and (6) practical learning with GoPro. These were evaluated on a 5-point scale (unacceptable, poor, fair, very good, and excellent). In addition, we collected students' opinions on the good and bad points of GoPro dual-use education.
Evaluation of endoscopy education with GoPro in medical education
This study evaluated the efficacy of GoPro videos of EGD practice for 15 medical students. When asked about the co-use of the GoPro video from the doctor's point of view, 50% of the students answered 'Excellent' and 28.6% answered 'Very Good'. Regarding their comprehension of EGD techniques, 72.4% of the students answered 'Excellent', and none of the students evaluated it poorly. Regarding their comprehension of EGD, 42.9% of the students answered 'Excellent' and 14.3% answered 'Very Good'. On the question regarding students' interest in EGD, 78.6% answered 'Excellent' and 7.1% answered 'Very Good', while 14.3% answered 'Fair' (Fig. 2A). Regarding prevention of infection through the GoPro combination practice, 28.6% of the students answered 'Excellent', 35.7% answered 'Very Good', 21.4% answered 'Fair', and 14.3% answered 'Poor'. In terms of overall evaluation of the hands-on training with the GoPro, 42.6% rated it as 'Excellent' and 42.6% stated 'Very Good' (Fig. 2B).
Comments on GoPro combination education
In addition to the questionnaire, the students noted the good and bad points of the GoPro procedure. On the positive side, students answered that it was better to use the GoPro video in addition to the endoscopy monitor-only training, so that they could learn not only about endoscopic findings but also experience the overall examination and activity around the physician. Students also stated that the GoPro video made it feel as if they were performing the procedure themselves, and it made it possible for them to concentrate on the learning process. They were also able to learn how to work with medical assistants during EGD.
As a bad point, they pointed out the narrow field of view of the GoPro, and that the endoscopist had to move his or her face to widen the field of view. On the other hand, some said that if the GoPro is moved too much to widen the field of view, the image on the iPad moves so rapidly that it causes screen sickness. The narrow field of view of the GoPro made it impossible to observe the handling of the endoscope, especially the air delivery and suction buttons and the angle control section. Depending on the Wi-Fi environment, GoPro images were sometimes difficult to view on the iPad (Table 1).
Fig. 2 Evaluation of GoPro-combined education
Discussion
COVID-19 is highly contagious, and the pandemic required facilities to develop infection control strategies [15].In education, classroom group lectures were cancelled and switched to online training and assignments [16].As the infection control strategy against COVID-19 was gradually established [12], all medical students in our hospital resumed clinical clerkship after their temperatures and physical conditions were checked, following the infection control strategy.However, younger people, including medical students, could be asymptomatically infected with COVID-19, and in clinical clerkship, they had to be careful to avoid contact with immunocompromised patients, patients with respiratory diseases, and intensive care patients [7].
EGD can cause patients to cough, and there is a risk of airborne infection in the endoscopy room.Infection can be caused not only by droplets and contact but also by aerosolised COVID-19 [17].Of the 623 asymptomatic patients scheduled for endoscopy, six tested positive for COVID-19 [18].Patients remove their masks for the EGD, so patients, physicians, and medical assistants in the endoscopy room are increasingly at risk of infection.We have reported the effectiveness of endoscopic shields in preventing aerosol droplets during EGD examinations [19].However, shielding alone does not completely prevent infection.In brief, EGD is a high-risk examination in terms of infection, and the room must be completely infection controlled [20].
Before COVID-19, endoscopy was a highly specialised technique and a popular lecture for students.After COVID-19, medical students were only able to observe the procedure through endoscopic monitors from outside the endoscopy room, making it difficult to provide adequate education.The purpose of this hands-on training with GoPro was to provide students with a better understanding of the endoscopic procedure while also controlling infection (Fig. 2A and B).In EGD education from outside the endoscopy room, only the endoscopic video and the doctor's voice during the examination can be observed.During insertion, only the mouth is visible.However, with a GoPro video, students can observe the patient's facial expression and the timing of the endoscope insertion (Fig. 1C).In addition to checking the endoscopic monitor, endoscopists check the electrocardiogram and oxygen monitors as needed before and during the examination, and they collaborate with the medical assistants in the room (Fig. 1D).GoPro makes it possible to observe this as well, and it is considered to be a great educational tool for medical students.Of the participating students, 78.6% evaluated the GoPro video with high scores.In addition, 71.4% of the medical students answered that it was possible to observe the operations at hand on the GoPro screen, and 71.4% of medical students evaluated it highly in terms of their education on endoscopic procedures (Table 1; Fig. 2A).In fact, it is difficult to observe the patient's facial expression and the atmosphere during endoscopy from the physician's standing position using only an endoscope monitor.
In this endoscopic education with GoPro, the students observed the scene where the biopsy forceps are being given by the caregiver during the biopsy and the physician's view after the biopsy was performed. In particular, during the treatment of varicose veins (endoscopic variceal ligation: EVL), in addition to the usual endoscopic observation, the students were able to observe the physician preparing for EVL, attaching the EVL device, inserting the overtube into the patient, and removing the overtube after EVL. The use of realistic video images had the effect of making it seem as if the students were performing the examination (Table 1). Among the medical students, 56.9% answered that their comprehension of endoscopy increased, and 85.7% of students answered that they were interested in endoscopy (Fig. 2A). Furthermore, students could use both the endoscopic video and GoPro video while monitoring the endoscopist's explanations, which was an effective result. Regarding the endoscopy time, it was no different from the usual endoscopy procedures, and the use of the GoPro did not affect the examination time.
Table 1 Comments from students on endoscopy observation with GoPro
Good points
It was possible to observe the examination from the beginning to the end.
It was possible to observe preparations other than during the examination through the GoPro video.
It was possible to observe the endoscopic technique and the realistic feeling of inserting the endoscope into the patient at the start of the endoscopy.
It was possible to concentrate on the examination with a realistic atmosphere.
It was great to see the patient's facial expressions during the examination.
It was good to feel as if I were performing the examination myself.
Understanding of the cooperation with medical assistants.
In addition to the endoscopy views, it was good to be able to observe the preparation for examinations and procedures from the doctor's point of view.
Observation of the doctor's view during biopsy forceps delivery, EVL preparation, and over-tube insertion.
It was possible to observe the details, such as the care for patients.
Wish the GoPro would be used to observe other examinations as well.
Bad points
The GoPro video does not reflect the endoscopist's view unless he or she moves the face.
The GoPro screen moves so quickly that it causes screen sickness.
Sometimes the hands cannot be seen unless the endoscopist moves his or her face.
The narrow field of view visible in the GoPro image.
The left-hand handling of the endoscope is outside the GoPro's field of view and therefore cannot be observed.
Internet and Wi-Fi environment for the GoPro connection.
GoPro is an effective tool for endoscopy education.As for infection control, 64.3% of medical students evaluated it highly, but some students answered that it was inferior to being fully online because they had to enter the endoscopy centre, although they did not enter the endoscopy room (Fig. 2B).For this study, the GoPro video was sent to iPad screens wirelessly for observation.Some students commented on problems with the wireless connection and screen sickness.(Table 1) There were many opinions about the narrow field of view visible in the GoPro image and the Wi-Fi environment as education on the use of GoPro together.In particular, if the GoPro image from the forehead had a wide angle image, it would have been possible to see from the patient's face to the operation at hand.Some students did not appreciate the left-hand controls of the endoscope, such as the air delivery and suction buttons and angles, because they were outside of the GoPro image and could not be seen.It is necessary for endoscopists to understand the range of viewable areas on the GoPro, and to improve the GoPro in the future.By improving these problems, multiple students can observe during an examination at the same time.
There have been several reports on medical education using GoPro.COVID-19 infection prevention prevented many face-to-face education opportunities, and the usefulness of remote education using wearable cameras, including GoPro, has also been reported [21].In particular, the use of GoPro in surgical procedures has been reported to improve understanding and technique [13,14].Endoscopic education such as hands-on seminars and other endoscopic education and mainly explain lesions on an endoscopic monitor.However, multiple assistants are present during endoscopic procedures, and endoscopists do not only observe the endoscopic monitor.This is the first report on the usefulness of the GoPro combination in endoscopic education.Education from the physician's perspective is considered to be an important process, especially for medical students.
In addition, although there are privacy protection issues, students who must stay home from clinical practice because of illness can also share their observation practice online.
At our hospital, the endoscopy centre performs a variety of endoscopic examinations, including EGD, colonoscopy, device assisted enterosopy, endoscopic retrograde cholangiopancreatography related procedures, diagnostic and therapeutic endoscopic ultrasonography, and endoscopic submucosal dissection for malignant tumours.In this study, we used GoPro combined with education on EGD, but students requested that various other endoscopic examinations also be performed with GoPro (Table 1).Furthermore, by recording endoscopic videos and GoPro videos of emergency cases, such as gastrointestinal bleeding, which are not usually encountered during educational programs, students will be able to learn endoscopic operations from the doctor's point of view and observe cooperation with medical assistants in the endoscopy room.Endoscopic education with GoPro was performed while sufficient education was not possible due to the outbreak of COVID-19.Considering the responses from students, even after the end of COVID-19, the GoPro combination is an excellent educational style and will continue to be introduced.Furthermore, this can be useful not only for medical students but also for the education of young physicians and endoscopists.
Conclusions
Practical endoscopic education using a GoPro not only helps prevent infection, but it also helps students learn from the doctor's point of view. This is considered an effective educational tool that not only increases students' understanding of endoscopic practice but also stimulates interest among students and helps them visualize themselves as future doctors.
Fig. 1 GoPro images as the doctor's point of view
"year": 2023,
"sha1": "ddc1bc7d4e28d4bc1c1ce09ad844855629997ccf",
"oa_license": "CCBY",
"oa_url": "https://bmcmededuc.biomedcentral.com/counter/pdf/10.1186/s12909-023-04702-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "74c50c1f22c6a10c8fd685e0d2bf044bd030b599",
"s2fieldsofstudy": [
"Medicine",
"Education",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Causal effects of gut microbiome on hypertension: a Mendelian randomization study
Background Previous observational studies have shown that there is an important relationship between gut microbiota and hypertension, we performed a two-sample Mendelian randomization analysis to examine whether the gut microbiota is causally related to hypertension in order to find a basis for potential diagnostic or intervention approaches for hypertension. Methods We obtained significant single nucleotide polymorphisms related to gut microbiota and hypertension from publicly available genome-wide association studies for a two-sample Mendelian randomization study. A total of 18,340 individual genome-wide genotype data were included from 24 population-based cohorts. The inverse-variance weighted meta-analysis is the main analytical method for evaluating causal relationships, and the Mendelian randomization research results have been validated through a series of sensitivity analyses. Results The inverse-variance weighted analysis results indicated that phylum Verrucomicrobia (OR:0.831, 95%CI: 0.710–0.972; p = 0.021), family BacteroidalesS24.7group (OR:0.672, 95%CI: 0.496–0.911; p = 0.01), family Bifidobacteriaceae (OR:0.709, 95%CI:0.569–0.884, p = 0.002), genus Adlercreutzia (OR: 0.991, 95%CI: 0.982–0.999, p = 0.035), genus Phascolarctacterium (OR:0.819, 95%CI:0.685–0.981; p = 0.03), genus LachnospiraceaeNK4A136group (OR:0.990, 95%CI:0.981–0.999; p = 0.025), and genus Ruminococcus2 (OR:0.988, 95%CI: 0.979–0.997; p = 0.008) had protective causal effects on hypertension. The Family Alcaliginaceae (OR:1.011, 95%CI:1.000–1.021, p = 0.04), Genus Anaerostipes (OR:1.375, 95%CI:1.096–1.653; p = 0.025), Genus Collinsella (OR:1.899, 95%CI:1.361–2.348; p = 0.02), and Genus Lachnospiraceae_UCG_010 (OR:1.536, 95%CI:1.072–2.202; p = 0.019) were associated with a higher risk of HTN. The reverse Mendelian randomization analysis results showed no reverse causal relationship between HTN and these bacterial taxa. Conclusion Our Mendelian randomization analysis results indicate a potential causal relationship between these bacterial taxa and hypertension, providing a new perspective for the treatment and prevention of hypertension.
Introduction
As a global public health concern, hypertension (HTN) is related to a significant global burden of cardiovascular disease and premature death (Mills et al., 2020). Hypertension is also a leading hazard factor for cerebrovascular, cardiovascular, and chronic kidney diseases (Franklin and Wong, 2013; Mills et al., 2016; Kjeldsen, 2018). Globally, compared to high-income countries, the increasing burden of hypertension in low- and middle-income countries will, without effective intervention, exacerbate the global epidemic of cardiovascular and kidney diseases. By 2010, more than 30% of the adult population (1.39 billion) suffered from hypertension, and hypertension is also recognized as the primary cause of global mortality (Zhou et al., 2021). The prevalence of hypertension is steadily rising worldwide due to factors such as an aging population and increased exposure to lifestyle risk factors, including unhealthy diets (such as high alcohol consumption, excessive sodium intake, and insufficient potassium intake) and lack of physical activity (Whelton, 2002; Louca et al., 2020). To effectively prevent and treat hypertension, it is crucial to gain a better understanding of the underlying mechanisms that contribute to its development. However, the exact cause of the increasing incidence of hypertension remains to be fully elucidated.
It is believed that the development process of hypertension is multifactorial, and one's predisposition to hypertension is influenced by both genetic and environmental factors, and the interaction of the two. A multitude of environmental factors increase the risk for HTN, including unbalanced diet, lack of physical activity, overweight and obesity, smoking, and psychological stress (Louca et al., 2020; Tsao et al., 2023). Recently, accumulating evidence has indicated that gut microbiota (GM) composition is closely related to human health and cardiovascular disease, including hypertension, which was strongly supported by at least three systematic reviews (Tang et al., 2017; Muralitharan et al., 2020; Louca et al., 2021). Considerable attention has been paid to the potential role of the gut microbiome in altering the development of hypertension, obesity, type-2 diabetes, and atherosclerosis (Tilg and Kaser, 2011; Howitt and Garrett, 2012; Karlsson et al., 2012; Qin et al., 2012; Tang et al., 2013; Yan et al., 2017). Studies have consistently shown that patients with hypertension exhibit dysbiosis in their gut microbiota, including reduced microbial richness, evenness, and diversity, and an increase in the Firmicutes/Bacteroidetes ratio (Yang et al., 2015).
Compared with germ-free mice that received an FMT (fecal microbiota transplantation) from 2 normotensive donors, germ-free mice that received FMT from a hypertensive human donor developed a significant increase in diastolic and systolic blood pressure after 8 weeks (Li et al., 2017;Muralitharan et al., 2020).In addition, daily consumption of probiotics for more than 8 weeks can significantly reduce diastolic and systolic blood pressure in hypertensive patients (Khalesi et al., 2014).Oral medication (minocycline) can also regulate blood pressure and normalize the ratio of Firmicutes to Bacteroidetes in spontaneously hypertensive rats and angiotensin II-induced hypertensive rats (Yang et al., 2015).The observational study showed that the composition and abundance of intestinal microbiota in HTN patients had significant changes compared with the healthy control group.However, while all this research evidence emphasizes the correlation between gut microbiota and HTN, it is still unclear which specific bacterial taxa lead to population differences (Yang et al., 2015;Yan et al., 2017).Confirming whether the correlation between gut microbiota and hypertension is causal and which microbiota taxa are the most important for hypertension is of great significance for the clinical practice of HTN management.Further research on the causal relationship between hypertension and gut microbiota will provide new prospects and perspectives for the treatment and prevention of hypertension and related diseases.
Traditional observational studies are vulnerable to the influence of many potential factors, such as lifestyle and socioeconomic status, during their implementation, and are therefore prone to bias. Large randomized controlled trials (RCTs) or cohort studies targeting specific gut microbiome taxa are expensive, however, so a new strategy is needed to study the causal effect of gut microbiome taxa on hypertension.
Mendelian randomization (MR) studies use genetic variations associated with modifiable exposure, typically single nucleotide polymorphisms (SNPs), to statistically evaluate the causal relationship between exposure and outcomes, in order to reduce confounding factors (lifestyle, socio-economic factors) and potential biases in reverse causality (Skrivankova et al., 2021).At the same time, MR research can overcome the shortcomings of extrapolation differences and data acquisition difficulties of traditional observational epidemiological research results.The purpose of this study is to explore the causal effects of gut microbiota on hypertension, systolic blood pressure (SBP), and diastolic blood pressure (DBP) using the Genome-Wide Association Study (GWAS) dataset through MR studies.
Materials and methods
The summary-level data used in this study was obtained from publicly available GWAS studies.Each cohort involved in the GWAS study received ethical approval and participation consent from their respective institutions, and aggregated data was published for analysis.In short, the gut microbiota is exposure, while hypertension is the outcome.This study employed stringent inclusion and exclusion criteria to select single nucleotide polymorphisms (SNPs) that are strongly associated with specific gut microbiota taxa as instrumental variables (IVs).Sensitivity analyses were performed to assess the robustness of the observed correlations.Furthermore, a reverse Mendelian randomization (MR) analysis was conducted to address potential confounding effects of hypertension on the causal relationship between gut microbiota and health outcomes.
In addition, the MR analysis relies on three key assumptions: (1) the instrumental variables used should exhibit a significant correlation with the exposure of interest.The strength of this correlation is typically evaluated using F-statistics, with a value of F ≥ 10 indicating no significant evidence of instrumental variable bias.If the F-statistic is less than 10, indicating a weak correlation, the corresponding instrumental variable is excluded.The formula for the F-statistic is F = (beta/se)^2.(2) The instrumental variables should be independent of confounding factors that may influence both the exposure and the outcomes.(3) There should be no horizontal pleiotropy, meaning that the instrumental variables only affect the outcomes through their impact on the exposure.Overall, the study employed rigorous methods to select instrumental variables and ensure the validity of the MR analysis.
Gut microbiota
The summary data of gut microbiota was obtained from a largescale multi-ethnic GWAS coordinated by the MiBioGen consortium.As the largest human microbiome genetics study to date, a total of 18,340 individual genome-wide genotype data were included from 24 population-based cohorts (11 countries in Asia, Europe, North America, etc.) (Kurilshikov et al., 2021), and 22 cohorts are composed of adults or adolescents (n = 16,632), and two cohorts are composed of children (n = 1708).Among the 211 microbiome taxa, it includes five biological classifications: phylum, class, order, family, and genus.Five levels of IV of gut microbiome taxa were extracted from this large-scale GWAS to be applied in this study.The summary statistical data of the gut microbiota association research can be publicly available on the website www.mibiogen.org.
Hypertension
We obtained the outcome data (blood pressure) from the MR basic database, which is a well-planned database designed to ensure the effective implementation of the Mendelian randomization method.The MR-base database includes 1,674 GWAS datasets1 (Hemani et al., 2018).To identify relevant studies, we searched for keywords such as "hypertension, " "high blood pressure, " "systolic blood pressure, " and "diastolic blood pressure" in the MR-base database.We focused on studies conducted on the European population up to 2023.
Among the identified studies, we selected the one with the largest sample size as our outcome dataset.The selected dataset, with the ID "ukb-b-14177, " is from the MRC Integrative Epidemiology Unit (MRC-IEU) consortium based on the UK Biobank.The UK Biobank is a large and detailed prospective research institute that recruited over 500,000 participants aged 40 to 69 globally between 2006 and 2010 (Sudlow et al., 2015).The "ukb-b-14177" dataset includes 46,188 participants, with 2,076 cases and 460,857 controls.This dataset provides information on the diagnosis of hypertension by doctors.For the outcomes of systolic and diastolic blood pressure, we selected the datasets "ieu-b-38" and "ieu-b-39, " respectively.These datasets are based on the International Consortium for Blood Pressure (ICBP), which is a multi-stage design GWAS study on systolic and diastolic blood pressure for 200,000 Europeans (The International Consortium for Blood Pressure Genome-Wide Association Studies, 2011). 2 The "ieu-b-38" and "ieu-b-39" datasets include summary-level data from the ICBP study (Evangelou et al., 2018) (Supplementary Tables).Please note that Supplementary Tables are available for further details.
Statistical analysis
All statistical analyses in this study were conducted using R software (version 4.1.2).We utilized the R software package "TwoSampleMR" to perform MR analysis investigating the causal relationship between the GM classification group and hypertension.The evaluation indicators for assessing the magnitude of each specific microbiota effect in MR studies were odds ratio (OR) and 95% confidence interval (95% CI).A statistical significance level of p < 0.05 was considered as evidence of potential causal effects (Waters and Ley, 2019;Xiang et al., 2021).
To ensure the authenticity and accuracy of the causal relationship between gut microbiome and hypertension, we implemented quality control measures to eliminate interference from strong linkage disequilibrium among SNPs. This was accomplished through a series of screening settings: (1) SNPs were identified at a p-value threshold of 1 × 10-5, based on the genotype data of 18,000 European individuals; (2) the clumping distance between two SNPs was set to 10,000 kb; (3) the correlation coefficient r2 threshold for linkage disequilibrium (LD) between SNPs was set to 0.001; (4) palindromic SNPs were removed to prevent ambiguous allele alignment from distorting the causal relationship between gut microbiome taxa and hypertension; (5) in cases where an exposure-associated SNP was absent from the outcome GWAS, a proxy SNP significantly associated with the variant of interest (r2 > 0.8) was selected.
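A minimal sketch of how this instrument-selection step could be scripted with the TwoSampleMR package in R is given below. The GWAS identifier for a given bacterial taxon is a placeholder to be replaced with the actual MiBioGen study ID; the thresholds mirror those stated above (p < 1 × 10-5, 10,000 kb clumping window, r2 < 0.001, F ≥ 10). This is an illustration under those assumptions, not the exact script used in this study.

```r
# Illustrative sketch (not the authors' exact script): instrument selection
# for one gut-microbiota taxon using the TwoSampleMR package.
library(TwoSampleMR)

# Placeholder: substitute the MiBioGen GWAS id of the taxon of interest.
exposure_id <- "<MiBioGen-taxon-GWAS-id>"

# SNPs associated with the taxon at p < 1e-5, clumped with a 10,000 kb
# window and an LD threshold of r2 < 0.001.
exposure_dat <- extract_instruments(
  outcomes = exposure_id,
  p1       = 1e-05,
  clump    = TRUE,
  r2       = 0.001,
  kb       = 10000
)

# F-statistic for each instrument, F = (beta / se)^2; drop weak instruments.
exposure_dat$F <- (exposure_dat$beta.exposure / exposure_dat$se.exposure)^2
exposure_dat   <- exposure_dat[exposure_dat$F >= 10, ]
```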
The primary analysis method used in this Mendelian randomization (MR) study was inverse variance weighted (IVW) (Burgess et al., 2013;Wang et al., 2021).IVW is a meta-analysis technique that combines ratio estimates with inverse variance weighting, ensuring the validity of each instrumental variable (IV) and accounting for SNP heterogeneity (Burgess et al., 2013;Bowden et al., 2017;Liu et al., 2022).The MR-Egger method, on the other hand, includes an intercept term in the weighted regression to assess horizontal pleiotropy among IVs (Burgess et al., 2017).The presence of a non-zero intercept suggests the presence of horizontal pleiotropy.While MR-Egger provides an estimate of the causal effect, it is less statistically efficient (Bowden et al., 2015).In contrast, the weighted median approach is able to provide consistent estimates of causal effects even when more than 50% of IVs are invalid (Hartwig et al., 2017).The weighted median method has advantages over MR-Egger in terms of result accuracy and maintaining a more precise causal effect estimate (Bowden et al., 2016;Xiang et al., 2021).Additionally, weighted mode and simple mode were used as additional methods for MR analysis (Hartwig et al., 2017;Wu et al., 2020).
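Continuing the sketch above, the hypertension outcome data (the "ukb-b-14177" dataset mentioned earlier) can be extracted, harmonised with the exposure instruments, and analysed with the five estimators named in this paragraph. Again, this is a hedged illustration of the workflow rather than the authors' own code.

```r
# Illustrative continuation (uses exposure_dat from the sketch above).
outcome_dat <- extract_outcome_data(
  snps     = exposure_dat$SNP,
  outcomes = "ukb-b-14177"   # hypertension GWAS used as the outcome
)

# Align effect alleles between exposure and outcome; ambiguous palindromic
# SNPs are removed during harmonisation by default.
dat <- harmonise_data(exposure_dat, outcome_dat)

# IVW as the primary estimator, with MR-Egger, weighted median, weighted
# mode and simple mode as complementary methods.
res <- mr(dat, method_list = c("mr_ivw",
                               "mr_egger_regression",
                               "mr_weighted_median",
                               "mr_weighted_mode",
                               "mr_simple_mode"))

# Express effects as odds ratios with 95% confidence intervals.
generate_odds_ratios(res)
```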
To ensure the reliability and robustness of the causality assessment results, sensitivity analyses were performed.Cochrane's Q-test was used to assess heterogeneity among the selected SNPs associated with each bacterial taxa.A value of p < 0.05 indicated significant heterogeneity among the IVs.MR-Egger regression was used to test for horizontal pleiotropy among the included SNPs.Furthermore, a weighted median analysis was conducted, which is more robust to individual genetic variants with strong outlier causality estimates.To investigate the causal effect of hypertension (HTN) on the identified significant bacterial genus, a reverse MR analysis was performed (i.e., HTN as exposure and the identified causal bacterial genus as outcome) using SNPs associated with HTN as IVs.
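The sensitivity checks described here, and the reverse-direction analysis, could be sketched as follows; the reverse analysis simply swaps the roles of exposure and outcome, and the MR-PRESSO global test (referred to in the Discussion) is available through the run_mr_presso wrapper. This is again an assumption-laden illustration rather than the study's exact code, and it reuses the objects defined in the sketches above.

```r
# Illustrative sensitivity analyses for the harmonised data set `dat`.
mr_heterogeneity(dat)         # Cochran's Q test for heterogeneity across SNPs
mr_pleiotropy_test(dat)       # MR-Egger intercept test for horizontal pleiotropy
presso <- run_mr_presso(dat)  # MR-PRESSO global test and outlier correction

# Reverse-direction MR: hypertension as exposure, the taxon as outcome.
rev_exposure <- extract_instruments(outcomes = "ukb-b-14177")
rev_outcome  <- extract_outcome_data(snps = rev_exposure$SNP,
                                     outcomes = exposure_id)
rev_dat      <- harmonise_data(rev_exposure, rev_outcome)
mr(rev_dat)
```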
Results
Table 1 shows the results of pleiotropy and heterogeneity tests for all bacterial taxa (phylum, order, family, genus) included in the study.In sensitivity analysis, we confirmed the impact of accurate MR results from one phylum, one order, three families, and seven genera on HTN.
Discussion
This two-sample MR study is the first to analyze the causal relationship between gut microbiome taxa and hypertension through multiple datasets. After sensitivity analysis and reverse causality analysis, and the deletion of gut microbiota taxa lacking validity and reliability, the results indicate that the levels of phylum Verrucomicrobia, family BacteroidalesS24.7group, family Bifidobacteriaceae, genus Adlercreutzia, genus Phascolarctacterium, genus Lachnospiraceae NK4A136 group, and genus Ruminococcus2 are negatively correlated with the risk of hypertension and have a protective causal effect on the pathogenesis of HTN. Family Alcaliginaceae, genus Anaerostipes, genus Collinsella, and genus Lachnospiraceae_UCG_010 may be risk factors for the onset of hypertension. The results were examined through several sensitivity analyses - MR-Egger analysis, IVW analysis, and the MR-PRESSO global test (Verbanck et al., 2018) - which were consistent with our findings and may promote the study of novel biomarkers in future HTN experiments. In the meantime, our results provide novel insights for future HTN prevention and therapeutic treatments: targeted regulation of dysbiosis of specific gut microbiome taxa to prevent and treat HTN.
The gut microbiota has the characteristic of diversity, it is mainly made up of 4 phyla: (1) Firmicutes, (2) Bacteroidetes, (3) Actinobacteria, and (4) Proteobacteria.The relative balance of gut microbiota composition plays a key role in maintaining intestinal immunity and systemic homeostasis; the imbalance of gut microbiota is often referred to as microecological imbalance, which is marked by the ratio of Firmicutes (F) to Bacteroides (B), compared to changes in the microbiota of healthy individuals (Guarner and Malagelada, 2003).Furthermore, some bacteria from the phylum Firmicutes are important producers of metabolic products that lower blood pressure, such as short-chain fatty acids (Petersen and Round, 2014).A multitude of studies indicated that there is an association between gut microbiota and hypertension (Yang et al., 2015;Li et al., 2017;Sun et al., 2019).The effect of gut microbiota on blood pressure regulation may be partially explained by the production of short-chain fatty acids (SCFAs) by gut bacteria, including beneficial SCFAs (acetate, butyrate, and propionate) and non-beneficial lactates.Meanwhile, Beli et al. suggested that gut microbiota interventions would be a new method for the prevention and treatment of HTN (Jose and Raj, 2015).
Considering the GM classification group at the phylum level, we found that phylum Verrucomicrobia is a protective factor for diastolic blood pressure. Verrucomicrobia exist in the inner layer of the intestinal mucosa and are abundant in healthy individuals. They can decompose polysaccharides such as mucopolysaccharides and cellulose, providing energy and nutrients. Verrucomicrobia can also produce short-chain fatty acids, such as propionic acid and butyric acid, which play an important role in regulating intestinal health and the immune system (Schlesner et al., 2006). At the class level, we did not find a causal relationship between the GM taxa and HTN. This may be because refining the analysis to finer taxonomic groups (such as the family and genus levels) can affect the observed results.
Furthermore, at the order level, we found that order Bifidobacteriales has a protective causal effect on diastolic blood pressure. Studies have shown that the abundance of bifidobacteria is higher in healthy controls than in HTN patients (Peng et al., 2018). Short-chain fatty acids are produced during the fermentation of dietary fiber that is otherwise difficult to digest, and they are among the most characteristic microbially derived metabolites; acetate, propionate, and butyrate are the three most abundant SCFAs (Verhaar et al., 2020). The abundance of bifidobacteria is lower in hypertensive patients, and Bifidobacterium, Enterococcus, and Lactobacillus are considered probiotics. These three SCFA-producing microbes have multiple health benefits, such as anti-inflammatory and beneficial metabolic effects (Hiippala et al., 2018; Parada Venegas et al., 2019). In addition, oral treatment with gut microbiota (specific bifidobacteria, lactobacilli, and the SCFA-producing species Anaerobutyricum soehngenii) has a moderate antihypertensive effect in humans (Khalesi et al., 2014; Gilijamse et al., 2020).
At the family level, family BacteroidalesS24.7group is a protective factor for systolic and diastolic blood pressure; previous research has shown a positive correlation between Bacteroides and blood pressure (Palmu et al., 2020). Family Bifidobacteriaceae belongs to order Bifidobacteriales, and its analysis is as described above. Family Alcaligenaceae is a risk factor for hypertension: in animal experiments, after fecal microbiota transplantation from spontaneously hypertensive rats into rats with normal blood pressure, the abundance of family Alcaligenaceae in the gut decreased (Adnan et al., 2017).
Unlike other GM and hypertension studies, we further identified three taxa at the genus level that increased the systolic blood pressure risk and five taxa at the genus level that increased the diastolic blood pressure risk (Karlsson et al., 2012; Jie et al., 2017; Liu et al., 2019). In animal models of hypertension complications (acute myocardial infarction), the gut microbiome, especially the family Lachnospiraceae, the family Syntrophomonadaceae, and the genus Tissierella soehngenia, shows a higher trend (Wu et al., 2017). Cross-sectional studies of gut microbiota composition in human hypertension showed a lower abundance of genus Anaerostipes in HTN. Dietary salt intake affects both the incidence of hypertension and the composition of the intestinal microbiota; in animal trials, higher salt intake is associated with changes in microbial community composition, including an increase in Ruminococcus and Lachnospiraceae and a decrease in Lactobacillus and Oscillibacter (Wilck et al., 2017; Bier et al., 2018). Butyrate is an SCFA, and the butyrate-producing microbial community includes bacteria from the families Ruminococcaceae and Lachnospiraceae, as well as Anaerobutyricum hallii and Anaerostipes spp. Our results also indicated a negative correlation between the genus Lactobacillus and some Lachnospiraceae genera and blood pressure. The reduction of butyrate-producing bacteria is related to inflammatory diseases (including diabetes, obesity, hypertension, and inflammatory bowel disease), because butyrate has an anti-inflammatory effect (Bach Knudsen et al., 2018; Li et al., 2018). Previous studies have shown that butyrate, as the main energy source of colon cells, can regulate tight-junction proteins and maintain intestinal barrier integrity (Wu et al., 2019). Onyszkiewicz et al. (2019) found that butyrate enters the bloodstream after passing through the intestinal vascular barrier and has a vasodilatory effect on the mesenteric artery; this occurs after it acts on a G-protein-coupled receptor (GPR) (Onyszkiewicz et al., 2019). Wang et al. (2017) found that sodium butyrate can also inhibit ANGII-induced hypertension by inhibiting the renin-angiotensin system mediated by the (pro)renin receptor. In short, butyrate may play an important role as a differentially beneficial metabolite in the regulation of hypertension.
When analyzing microbiota composition, a negative correlation was observed between the SCFA-producing taxa Clostridiaceae, Ruminococcus, and Coprococcus and systolic BP in women (Durgan, 2017). Bacteria within the family Ruminococcaceae are key SCFA-producing bacteria that also play a crucial role in maintaining homeostasis and gut development (Biddle et al., 2013).
The main advantage of our study is that it is the first to use MR analysis to examine the relationship between the gut microbiota and hypertension. This approach reduces confounding and provides more reliable results than observational studies. Additionally, our findings highlight the potential role of Verrucomicrobia in the development of hypertension, which has not been previously reported; this suggests that Verrucomicrobia may serve as a new biomarker for hypertension. However, our study does have some limitations. First, in terms of sample size, the gut microbiota GWAS contains a relatively small number of samples. Second, the MR study cannot determine whether there is participant overlap among the included GWAS summary datasets; we minimized the bias from participant overlap by requiring strong instruments (F-statistic > 10). Third, in this MR analysis, we did not find a causal relationship with HTN at the class level; future research with larger samples could explore the relationship between gut microbiome taxa and HTN at the class level.
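As an illustration of the instrument-strength screen mentioned above, the following minimal sketch computes an approximate per-SNP F-statistic from the variance explained (R²) and keeps only instruments with F > 10; the sample size and R² values are hypothetical.

```python
import numpy as np

def approx_f_stat(r2, n, k=1):
    """Approximate F-statistic for instruments explaining r2 of the exposure
    variance in a GWAS with n samples and k instruments (k=1 for per-SNP F)."""
    return (r2 * (n - 1 - k)) / ((1 - r2) * k)

# Hypothetical per-SNP variance explained and GWAS sample size
r2_per_snp = np.array([0.0008, 0.0015, 0.0003])
n_samples = 18340
f_stats = approx_f_stat(r2_per_snp, n_samples)
strong = f_stats > 10                 # keep only strong instruments
print(f_stats.round(1), "->", int(strong.sum()), "instruments retained")
```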
In summary, our study provides a comprehensive assessment of the causal relationship between the gut microbiota and hypertension. We identified eight gut bacterial taxa (phylum Verrucomicrobia, order Bifidobacteriales, family BacteroidalesS24.7group, family Bifidobacteriaceae, genus Adlercreutzia, genus Phascolarctobacterium, genus Lachnospiraceae NK4A136 group, and genus Ruminococcus2) that have a negative causal relationship with hypertension, making them potential protective factors. Additionally, we found four gut bacterial taxa (family Alcaligenaceae, genus Anaerostipes, genus Collinsella, and genus Lachnospiraceae_UCG_010) that have a positive causal relationship with hypertension, indicating that they are risk factors. These taxa may serve as new biomarkers for the treatment and prevention of hypertension, providing new insights into the mechanisms underlying gut microbiota-mediated hypertension.
TABLE 1
The results of pleiotropy and heterogeneity tests for all bacterial taxa.
The research results also showed that the genus Adlercreutzia is negatively associated with BP indices (Dan et al., 2019). Palmu et al. (2020) found that genus Collinsella is positively associated with BP indices. Meanwhile, cross-sectional studies in humans showed that the gut microbiota of patients with symptomatic atherosclerosis had a higher abundance of the genus Collinsella, Enterobacteriaceae, Streptococcaceae, and Klebsiella spp., and a lower abundance of SCFA-producing bacteria such as Eubacterium, Roseburia, and Ruminococcaceae spp., compared with the healthy control group | 2023-11-25T16:08:53.104Z | 2023-11-23T00:00:00.000 | {
"year": 2023,
"sha1": "fa45148b354f59e572059d6e76fcbd54c6fe2dd0",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1276050/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7670030275edbe59c6293db33ba4e8a0875ba3e8",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": []
} |
204332169 | pes2o/s2orc | v3-fos-license | Rapid Detection of Staphylococcal Enterotoxin-B by Lateral Flow Assay
A cohort of monoclonal antibodies (mAbs) was generated against Staphylococcal enterotoxin-B (SEB) and selected by double sandwich enzyme-linked immunosorbent assay (ELISA) for solution capture of the toxin. Clonal hybridoma cell lines were established and a pair of anti-SEB mAbs selected for the development of a sandwich ELISA. Immobilized 3D6 mAb (IgG1, kappa), when paired with 4C9 mAb (IgG1, kappa) conjugated to horseradish peroxidase, generates a typical dose–response curve with an EC50 of 24.8 ng/mL for purified SEB using chemiluminescent detection. These mAbs bind SEB by Western blot, and ELISA binding to classical enterotoxin serotypes shows that the 3D6 mAb binds both the SEB and SEC1 serotypes, whereas 4C9 binds only SEB. These mAbs port effectively onto lateral flow test strips, with a visual detection sensitivity for SEB of 5 ng/mL in <10 minutes using 4C9 conjugated to a 40 nm gold reporter.
Introduction
Staphylococcus aureus is a pathogenic gram-positive bacterium that can produce an impressive collection of protein toxins. (1)(2)(3) These secreted toxins represent virulence factors and staphylococcal foodborne poisoning (SFP) is a leading cause of foodborne illness in the United States. (4)(5)(6) The gastrointestinal (GI) illness associated with SFP is rarely life threatening and the disease is usually self-resolving without hospitalization. (7) However, the economic cost and lost productivity associated with SFP warrants effective control strategies. (8) The staphylococcal enterotoxins (SE) represent a large group of structurally similar and serologically distinct proteins (22-29 kDa) encoded in prophages, plasmids, and chromosomal pathogenicity islands. (5,9) There are five classical antigenic types (A-E) and these superantigens elicit an immune response that results in the massive production of inflammatory cytokines. (10)(11)(12) SEB is considered the most dangerous as it is produced by most Staphylococcus aureus strains. (7,13,14) SEB is a primary cause of SFP after ingestion (15,16) and is considered a military incapacitating agent as it is highly toxic, thermally stable, and can cause intoxication by inhalation if aerosolized. (17,18) SEB intoxication is difficult to distinguish from other GI illnesses, there is no vaccine, and treatment options are limited. (13) There are many immunoanalytical technologies available for SEB detection, but a need remains for portable, rapid, and inexpensive methodologies to address foodborne contamination. (19,20) Commercially produced lateral flow test strips in general report 5-10 ng/mL detection sensitivities using optical readers (21,22) and their applicability is primarily directed toward emergency first responders. In this article we report the generation of a novel cohort of anti-SEB monoclonal antibodies (mAbs) and identify a suitable pair for the development of a sandwich enzyme-linked immunosorbent assay (ELISA) with application in a lateral flow assay format.
SEB mAbs
Female Balb/cByJ mice (Jackson Laboratory, ME) were immunized by intramuscular injection of an SEB toxoid derived from purified SEB toxin (Sigma, MO) mixed 1:1 with TiterMax gold adjuvant (Sigma). Hybridomas were generated by chemical fusion with P3X myeloma cells and screened by double sandwich ELISA against purified native SEB (Toxin Technology, FL) using a biotinylated rabbit anti-SEB pAb (Toxin Technology) with an avidin-horseradish peroxidase (HRP) reporter and chemiluminescent detection. Hybridoma cell cloning was performed by limiting dilution, and a total of 24 hybridoma cell lines producing anti-SEB mAbs were isolated. All animal experiments were performed with institutional approval and followed national guidelines for the care and use of laboratory animals.
Sandwich ELISA
Anti-SEB mAbs were purified on protein-G and a functional pair of anti-SEB mAbs was identified for the development of a sandwich ELISA. In brief, the capture mAb (3D6; IgG1, kappa) was immobilized at 2 mg/mL on black 96-well high-binding polystyrene plates at 5 mg/mL in 0.1 M carbonate buffer (pH 9.4), washed repeatedly in Tris-buffered saline with 0.1% Tween-20 (TBST; pH 7.2), and blocked in 10% nonfat dry milk (NFDM). The SEB antigen was diluted in TBST containing 0.1% BSA and added to wells for 1 hour. The detection mAb (4C9; IgG1, kappa) conjugated to HRP was added at 1 mg/mL for 1 hour. Chemiluminescent substrate (PicoECL; Pierce) was added and the luminescent signal was recorded as counts per second using a Victor X3 luminometer (PerkinElmer). All reactions were performed at room temperature with a minimum of three replicates. Analysis was performed using four-parameter logistic (4PL) dynamic curve fitting (EC50 = 24.8 ng/mL; Hillslope = 0.85).
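For readers who want to reproduce this type of dose-response analysis, the sketch below fits a four-parameter logistic (4PL) curve to a hypothetical chemiluminescent dilution series with SciPy; the data points are invented for illustration and are not the assay's actual measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Hypothetical SEB dilution series (ng/mL) and luminescent counts per second
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
cps = np.array([1200, 2500, 8000, 30000, 90000, 160000, 190000], dtype=float)

p0 = [cps.min(), cps.max(), 25.0, 1.0]          # initial parameter guesses
params, _ = curve_fit(four_pl, conc, cps, p0=p0, maxfev=10000)
bottom, top, ec50, hill = params
print(f"EC50 = {ec50:.1f} ng/mL, Hill slope = {hill:.2f}")
```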
Western blotting
The SEB antigen was diluted in sample buffer, heat denatured, and 0.5 mg was separated on a 4-12% Bis-Tris gel, and the protein was transferred to a nitrocellulose membrane. Membranes were washed in TBST, blocked with 10% NFDM, and incubated with 1 mg/mL of primary antibody and then with a secondary anti-mouse IgG conjugated to HRP. Antibody binding was resolved by chemiluminescence and TIFF images were captured using a FluorChem HD2 (Alpha Innotech, CA). Molecular weight was estimated using prestained dual-color protein standards (BioRad, CA).
Lateral test strips
In brief, RP membrane (Millipore) was striped using a noncontact BioJet HR valve with a high-resolution syringe pump attached to an XYZ3050 platform (BioDot, CA), with the 3D6 capture mAb as the test line (T) and a donkey anti-mouse IgG used for the control line (C). The RP membranes were washed with water, then blocked in polyvinylpyrrolidone (PVP40; Sigma) and dried. The 4C9 mAb was conjugated to 40 nm gold (InnovaCoat Gold; Innova Biosciences) and 10 OD was sprayed onto a 10 mm glass fiber conjugate pad (Millipore) using a noncontact AirJet HR aerosol dispenser (BioDot) attached to the XYZ platform. Dried membranes were adhered to a 60 mm plastic backing card with 25 mm Fusion-5 membrane (GE Healthcare) as a sample pad and 22 mm CF6 membrane (Millipore) as an absorbent sink. The test strips were cut (60 × 4.5 mm) and housed in a two-part plastic cassette with a pressure point at the material overlap. Dilutions of SEB were added to the sample pad (100 mL), resolved for 10 minutes, and then photographed.
Results and Discussion
We have isolated and cloned 24 anti-SEB producing hybridoma cell lines by double sandwich ELISA. Our screening and selection assay utilized purified SEB in its native conformation, emphasizing the solution capture capability of the mAbs. Most of these mAbs show a high degree of SEB binding selectivity and perform in a variety of immunoassay formats that include sandwich ELISA, direct ELISA, Western blotting, and lateral flow. Some of these mAbs evaluated by ELISA against the classic SEs (A-E) show binding to the SEC1 serotype, which shares the most amino acid sequence identity (68%) with the SEB protein. (23) A pair of anti-SEB mAbs (3D6 and 4C9), with IgG1 heavy chains and kappa light chains, was identified for assay development (Table 1). The 3D6 mAb binds both SEB and SEC1, whereas the 4C9 mAb binds only SEB in ELISA (data not shown). These mAbs bind purified heat-denatured SEB protein by Western blot (Fig. 1A). To develop the sandwich ELISA, the 3D6 mAb was immobilized and used for SEB capture, with the 4C9 mAb used for detection. A typical dose-response curve was observed using purified SEB dilutions, with an EC50 of 24.8 ng/mL of SEB and a hillslope of 0.85 using 4PL dynamic curve fitting (Fig. 1B). These two mAbs both function in the sandwich ELISA format as either a SEB capture or detection reagent (data not shown).
To develop a rapid SEB detection assay, these mAbs were ported onto standard 60 × 4.5 mm lateral flow test strips with the 3D6 mAb immobilized at a test line (T) and 40 nm gold-conjugated 4C9 as the SEB reporter. A donkey anti-mouse IgG was immobilized at the control line (C) and functions to validate the proper performance of the test strip. A dilution series of purified SEB was prepared, 100 mL was applied to the test strip sample pad, and the test was allowed to resolve for 10 minutes and then photographed. Visually observable test lines indicating detection of SEB were observed down to 5 ng/mL (Fig. 1C). No test line was observed in the absence of the SEB analyte.
Although the 3D6 mAb will bind the SEC1 serotype, when paired with the selective 4C9 mAb the assay will only detect the SEB serotype. This serotype specificity would address concerns regarding SE cross-reactivity reported with some commercial assays. (24) Many commercially available SEB lateral flow assays fail to report the sensitivity of their tests, whereas others require optical readers to achieve 5-10 ng/mL SEB detection. In this article we report a lateral flow assay that achieves 5 ng/mL SEB detection sensitivity by visual observation. Further optimization of these SEB-specific reagents in a lateral flow assay format along with the integration of an optical reader will likely result in an increase in detection sensitivity and assay performance suitable for commercialization.
Author Disclosure Statement
No competing financial interests exist.
Funding Information
This research was supported by USDA-ARS National Program in Food Safety (#2030-42000-050). The USDA is an equal opportunity provider and employer. | 2019-10-13T13:01:43.010Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "10bbdb7bf700e9f218f614ac1a7c6ab108d7753c",
"oa_license": "CCBY",
"oa_url": "https://www.liebertpub.com/doi/pdf/10.1089/mab.2019.0028",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ff52692726876cf238952ec3b44a0b79a89202c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
234155981 | pes2o/s2orc | v3-fos-license | Design of Fish Sales Information System PT XYZ Using Laravel Framework
* Corresponding author E-mail address: yukistasos4@gmail.com DOI: http://dx.doi.org/10.25105/itm.v1i1.7808 Received September 9 2020; Accepted November 13 2020 Abstract— The development of a fish sales information system produces a system comprising various procedures that record, calculate, and create documents in the form of reports for management requirements. The reports are used for decision making, from supplier orders and inventory recording to the sale of goods to consumers. Sales records are currently kept manually, so entry errors may occur and branch companies may be late in submitting sales reports or may not submit them at all. Given these problems in the company, this final project builds a sales system. Based on these problems, a sales information system was developed for PT XYZ. The design of PT XYZ's sales information system uses the waterfall method with the Laravel PHP framework. The resulting system records and updates purchase order data, goods data, and sales order data, and can print reports according to the desired date, month, and year.
I. INTRODUCTION
In the current era of globalization, competition between companies in improving the quality of services and goods is becoming increasingly intense. This drives companies to build information systems applied in various areas of an organization for strategic decision making, one of which is sales. A sales information system is an information system that includes various procedures that record, calculate, and produce documents in the form of reports for management's decision making, ranging from ordering goods to selling to consumers [1]. PT XYZ is a company engaged in the fisheries sector. The main business of this company is fishing and selling fresh fish directly to consumers. However, the company does not yet have an information system that can record every operational activity, so recording is still manual and errors may occur when entering goods ordering data, goods data, and fish sales data. Given these problems, this final project builds a fish sales system using the Laravel framework. The system records every order, inventory, and sales transaction; from these data, the company can also print reports that can later help with decision making. Systems can be developed and combined into resources that process data into information to meet the needs and achieve the goals of an organization. Not only does the company benefit, but consumers also obtain the information they need about the services and products offered by the company more easily, so they can order and buy more easily.
A. System
A system is a collection of elements or components combined to achieve a goal. Its basic model consists of inputs, processes, and outputs, and a system can be extended with storage media. Systems can be divided into two types, open systems and closed systems: an open system can receive input from its outside environment, while a closed system cannot [2].
B. Information
Information is data processed into a form that has meaning and value for the user. Data depict real events and entities; in the business world, these events can take the form of sales [2].
C. Information System
From the definitions of system and information above, it can be concluded that an information system is a system built to support the various operational and managerial needs of an organization. Information systems are an important part of an organization, supporting management and decision making. Implementing an effective and efficient information system requires planning, implementation, regulation, and evaluation in accordance with the goals of the organization [2].
D. Sales
A sale is the transfer of ownership of an item or the provision of a service in which the seller benefits from selling to the buyer at a price or value agreed between the two parties [3].
E. Framework
A framework is a solution approach that provides a basic conceptual structure for dealing with complex problems. A framework already contains a collection of architectures and concepts that make it easier to solve a problem. A framework usually offers various features for building a system, including coding standards, best practices, design patterns, and common functions. By using the features already available in a framework, application development can be done quickly. Frameworks usually use the Model View Controller (MVC) method, which separates data (Model), interface design (View), and functions (Controller) [4].
F. Laravel
Laravel is a PHP framework used to build web applications with the MVC (Model-View-Controller) concept, using a command line tool called "Artisan". Laravel uses bundles for packaging and installation through the command prompt [5].
A. Methods for Developing Linear Sequential Systems (Waterfall Model)
The methodology used to build the sales information system is the waterfall development method, which has the advantage of identifying and analyzing system requirements long before programming begins and of limiting changes during the project [6].
B. System Planning
In this study, the system to be built is a sales application for PT XYZ using the Laravel PHP framework and MySQL. This application is built to provide useful information in the form of inventory data, purchase order data, and fish sales orders, which can later be processed according to user needs [7].
C. Needs Analysis
The needs analysis provides a detailed explanation of why this system needs to be built and for whom it is intended. The system is built to meet the needs of four types of users: admin, manager, warehouse, and marketing. The admin can add or change users, user levels, and access control, and can set access rights (roles) for each account to prevent arbitrary data changes by other accounts. The manager can add or change supplier data and purchase order data, so that the manager controls incoming goods data and the ordering of goods. Warehouse users can add or change goods data and inventory data and also control the amount of goods in inventory. Marketing users can add and manage each sales transaction.
D. System Design
Data Flow Diagrams (DFD) describe all sales activities so that the data processing is easily understood.
Fig 3. Context Diagram
The database design for this sales system covers the database tables. The design uses an Entity Relationship Diagram (ERD) as the basis for creating the tables; the system includes the users table, the purchase order table, and the order table. The implementation stage realizes the design made previously and requires preparing several software packages to build the sales application, including: • XAMPP version 7.1.30
A. Initial Appearance
The login page is the initial display of PT XYZ's sales application. Users and admins must log in to enter the application. Admins and users log in through the same page, but application users have different access rights. The results of this study aim to assist PT XYZ in recording and displaying information, from purchasing data and stock items to sales. The application is expected to help the company make decisions and run its operations well. | 2021-05-11T00:03:19.194Z | 2021-01-21T00:00:00.000 | {
"year": 2021,
"sha1": "d8f3f47bb51e52e8a190680c34420a315885fa3e",
"oa_license": "CCBYNCSA",
"oa_url": "https://trijurnal.lemlit.trisakti.ac.id/intelmatics/article/download/7808/6413",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "13da9c96ce236c0019c224a6b6c70927550cd667",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Business"
]
} |
225171420 | pes2o/s2orc | v3-fos-license | An Intelligent Automatic Human Detection and Tracking System Based on Weighted Resampling Particle Filtering
: At present, traditional visual-based surveillance systems are becoming impractical, inefficient, and time-consuming. Automation-based surveillance systems appeared to overcome these limitations. However, automatic systems face challenges such as occlusion and keeping track of the target smoothly and continuously. This research proposes a weighted resampling particle filter approach for human tracking to handle these challenges. The primary functions of the proposed system are human detection, human monitoring, and camera control. We used the codebook matching algorithm to define the human region as a target and track it, and we used the particle filter algorithm to follow the target and extract its information. The obtained information was then used to configure the camera control. Experiments were conducted in various environments to demonstrate the stability and performance of the proposed system based on an active camera.
Introduction
Recently, security surveillance has applied visual-based tracking and detection techniques for improving convenience and safety for humans. Human tracking and detection are essential topics in a surveillance system. Human recognition and moving object extraction are the two parts of any typical human detection system. Human recognition identifies an object as nonhuman or human, and objects are extracted from the background by means of moving object extraction, which determines the related size and position of the object in an image. The tracking system is essentially able to predict the location during and after occlusion, as the tracked object or human is possibly occluded by other objects while tracked.
Surveillance systems typically use two kinds of cameras: fixed cameras and active cameras. The fixed camera has the benefit of being low cost but comes with a limited field of view (FOV), whereas an active camera maintains a proper FOV because it can pan and tilt to retain the target object within the camera scene. In addition, the latter achieves better resolution since it can zoom in/out.
Generally, a tracking system on an active camera considers the temporal difference for extracting a moving object. In this procedure, it is necessary to wait for the camera to be stable enough to process the image; in other words, the moving camera produces blurred images and extracts background pixels along with the moving object, so the active camera operates non-smoothly and discontinuously. Hence, a particle filter tracking algorithm is applied to resolve this problem. The codebook technique is employed initially to spot the human as the target model, after which the particle filter tracks the human by computing the Bhattacharyya distance between the color histogram of the target model and the color histogram of the sampled particle positions in the next frame. Using a color histogram has various advantages, such as efficient computation, the ability to track nonrigid objects, robustness to partial occlusion, and invariance to scale and rotation.
In this paper, a real-time human tracking system is constructed with an active camera and has the following characteristics:
• Rapidly detects a human
• Tracks an object without relying on background information
• Handles occlusion conditions
• Operates an active camera continuously and smoothly
• Zooms in/out appropriately
Related Work
There are four key parts in our entire system: image source, human detection, human tracking, and camera control, as described in Figure 1. As a quick review of our procedure, we set the initial FOV as the scene we wanted to capture. Then, we detect and extract an object recognized as a human. We track the human object and use its moving information to pan-tilt-zoom (PTZ) the camera via a proportional-integral-derivative (PID) controller so that the target stays in the center of the FOV. A human detection system finds the position and size of the human in an image. Optical flow [1,2] is considered in order to estimate a moving object independently at the cost of complex computations. Zhao and Thorpe [3] proposed a stereo-based segmentation technique for extracting objects from the background and then recognize the objects using neural network. While techniques based on stereo vision are more robust, it needs a minimum of two cameras, and it fails to perform well in long-distance detection. Viola et al. [4] proposed a cascade architecture detector, where adaptive boosting (AdaBoost) iteratively builds a robust classifier guided by performance criteria that are specified by user. The cascade method swiftly rejects non-pedestrian samples in the early cascade layer; thus, processing speed of this approach is high. The templates in a template-based approach [5] have short sequences of 2D silhouettes gained from motion capture data. This method detects human silhouettes having a particular walking pose. To rapidly spot humans, a shape-based human model is chosen, and codebook matching is used to classify a human. This reduces the time taken in detecting humans from the other objects. Montabone and Soto [6] proposed a novel computer vision technique that can operate moving cameras and spot a human in various poses in the case of a complete or partial appearance of the human. Pang et al. [7] presented an efficient histogram of a gradient-based human detection technique. A human tracking system follows a human target through the sequence of images regarding changes in scale and position. Between the several tracking methods, we analyzed three to synthesize our research.
First, feature-based tracking, a very common method, tracks features by motion, edge, or color using edge detecting methods such as the Sobel approach, Laplacian approach, and Marr-Hildreth approach [8,9]. These techniques use masks to perform convolution over an image for edge detection. Li et al. [10] proposed a 3D human motion tracking system with a coordinated mixture of factor analyzers. Lopes et al. [11] designed a hierarchical fuzzy logic-based approach for object tracking.
It uses a complicated and large set of rules, has a long computation time, and the pixels at the edges are not always continuously detected. The abovementioned approach uses gray scale images for edge detection, and we chose not to use this for color images because of information loss on the color space vector. Moreover, edge detection in a gray scale image cannot be robust and sufficient.
Figure 1. Overview of the system.
First, feature-based tracking, a very common method, tracks features by motion, edge, or color using edge detecting methods such as the Sobel approach, Laplacian approach, and Marr-Hildreth approach [8,9]. These techniques use masks to perform convolution over an image for edge detection. Li et al. [10] proposed a 3D human motion tracking system with a coordinated mixture of factor analyzers. Lopes et al. [11] designed a hierarchical fuzzy logic-based approach for object tracking. It uses a complicated and large set of rules, has a long computation time, and the pixels at the edges are Secondly, pattern recognition methods learn the objects at the target and find it in sequential images. Williams et al. [12] extended the method to a relevance vector machine (RVM) that learns a nonlinear translation predictor. Collins et al. [13] proposed a mechanism for an online feature selection mechanism that can be used for multiple features evaluation. The presented approach tracks and adjusts the features set for improving tracking performance. The feature evaluation mechanism is embedded in a mean-shift tracking system. It can adaptively select tracking features. Zhang et al. [14] proposed a robust 3D human pose tracking approach from silhouettes using a likelihood function. Zhao et al. [15] used a principal component analysis to extract features from color and use them in a random walker segmentation algorithm to assist human tracking.
Thirdly, there are gradient recognition methods with a focus on pattern recognition, such as the mean-shift algorithm. Fukunaga and Hostetler [16] initially proposed the mean-shift algorithm for clustering data. Comaniciu et al. [17] proposed a kernel-based object tracking method, where object region tracking is denoted using a spatially weighted intensity histogram, and its similarity rate is computed using Bhattacharyya distance following an iterative mean-shift technique. Many applications [18][19][20][21] later proposed various mean-shift algorithm variants. Even though the mean-shift object tracking technique is well-performed over sequences with comparatively slight object displacement, its performance cannot be guaranteed in the case where objects suffers full or partial occlusions. Kalman filter [22,23] and particle filter [24,25] algorithms are considered along mean-shift algorithms for improving the tracking performance under partial occlusion. The approach by Bhat et al. [24] uses a fusion of color and KAZE features [26] in the particle filter framework to give an effective result in different environments for tracking the target. Still, this approach requires a strategy for fast failure occlusion recovery for the post-occlusion target recovery. To track multiple targets by deploying the same color description with cancelation functionality and internal initialization, Nummiaro et al. [25] proposed a color particle filer embedded along a detection algorithm. Our major contribution in this work is a novel multitarget tracking algorithm that incorporates particle filters with a Gaussian mixture model to improve tracking accuracy and computational efficiency. In order to detect humans fast, we chose the shape-based human model to classify humans by codebook matching, which decreases the time of human detection compared to the other objects.
Many tracking systems work on PTZ cameras because, to keep the object in the FOV, an active camera can pan and tilt and can zoom in/out to adjust resolution, thus keeping the tracked object at a well-proportioned resolution with respect to the FOV. Morphological filtering of motion images was used by Murray et al. [27] to perform background compensation. Using an active camera mounted on a pan/tilt platform, Murray's technique can successfully track a moving object from dynamic images. A kernel-based tracking method was used in the proposed system to overcome the apparent background motion of a moving camera. Karamiani and Farajzadeh [28] considered the direction and magnitude of feature points to detect camera motion accurately; the method detects multiple moving objects accurately in both active and fixed camera models. Lisanti et al. [29] proposed a method that enables real-time target tracking in world coordinates and offers continuous adaptive calibration of a PTZ camera. Mathivanan and Palaniswamy [30] used optimal feature points and fuzzy feature matching to accomplish human tracking. In the context of human tracking applications using deep learning, Fan et al. [31] proposed human tracking and detection using a convolutional neural network robust to partial occlusion and to view, scale, and illumination changes. Tyan and Kim [32] proposed a compact convolutional neural network (CNN) based visual tracker in conjunction with a particle filter architecture. A face tracking framework based on convolutional neural networks and a Kalman filter was proposed for the real-time detection and tracking of the human face [33,34]. Luo et al. [35] proposed a matching Siamese network and CNN-based method to track pedestrians; the method used a Faster R-CNN to distinguish pedestrians in surveillance videos. However, the method still requires target occlusion to be resolved in order to become a more robust real-time pedestrian tracking tool. The method proposed by Xia et al. [36] tracks single and multiple objects in long-term, real-time tracking: a CNN is first trained to determine and identify the target bounding box in a traffic scene, and then a particle filter (PF) is used as the tracker to implement the preliminary multi-object tracking. A particle filter with neural network learning, evaluated in a person re-identification scenario, was proposed in [37], while a hybrid Kalman particle filter (KPF) for human tracking was proposed in [38]. The KPF is more time-consuming, especially in the case of non-occlusion, and its real-time performance is not good in terms of speed.
Deep learning models are time-inefficient and costly in terms of memory, as they tend to expand a large number of nodes, which results in heavy computation. Such models mostly fail in real-time applications, and their implementation requires high-end processors. Therefore, the complexity of the network needs to be reduced to decrease the computation time and limit the number of computations [37]. The advantage of the proposed method is its simplicity and ease of implementation; the proposed models can be executed on a simple CPU for real-time video, so it is an efficient approach as well.
In this research, we used a wide-angle camera to find the target, and then camera calibration methods gave the active camera pan-tilt commands to keep the target in the center of the FOV and for specific object position tracking. In the case where the size of the target was larger or smaller than a maximum or minimum predefined size, then the zoom in/out command was used accordingly.
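To make the camera-control step concrete, the sketch below shows a minimal proportional-integral-derivative (PID) loop that converts the tracked target's pixel offset from the image center into pan/tilt commands; the gains, frame size, and command interface are hypothetical and not taken from the paper.

```python
class PID:
    """Simple discrete PID controller for one axis."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical 640x480 frame; the goal is to keep the target at the image center
pan_pid, tilt_pid = PID(0.4, 0.01, 0.05), PID(0.4, 0.01, 0.05)
frame_w, frame_h, dt = 640, 480, 1.0 / 30.0

def camera_command(target_x, target_y):
    """Return (pan, tilt) speeds from the target's pixel position."""
    pan = pan_pid.step(target_x - frame_w / 2, dt)
    tilt = tilt_pid.step(target_y - frame_h / 2, dt)
    return pan, tilt

print(camera_command(400, 200))   # e.g., target right of and above the center
```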
Proposed System
This section describes each algorithm and method used in this paper. Figure 2 shows the three categories of the tracking system. To detect a human, we first extracted moving objects from the image source and then used codebook matching for each one of them to be categorized as human and non-human.
Human Detection
In the majority of surveillance systems, the position of the camera is fixed, whether it is a static camera or an active camera. The fixed position of the camera allows a moving object to be extracted using background subtraction. To make the method computationally efficient, background subtraction uses only gray-level images, which also makes the system more efficient in real-time situations. The first image frame can be adjusted over time using Equation (1), which is used to construct the background, where I_B^(n-1) and I_B^(n) represent the previous and current background images, respectively.
A scaling factor α ∈ (0, 1) was used to update the background image. Active pixels between frames n and n−1 are represented by I_M(x, y).
To determine the moving object, the current image I_C is subtracted from the background image I_B, as described in Equation (2). To obtain the binary moving object M_obj, a threshold ths is applied to the result of Equation (2) using Equation (3).
M_obj(x, y) = 1 if I_BS ≥ ths, and 0 if I_BS < ths (3)
The details of the moving object extraction and codebook matching are indicated in Figure 3. The binary threshold image M_obj undergoes a dilation process to fill holes in the moving objects and to enlarge the boundaries. The step-by-step process is shown in Figure 4.
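A minimal NumPy sketch of this background-subtraction pipeline (a running-average background update in the spirit of Equation (1), followed by subtraction and thresholding as in Equations (2) and (3)) is given below; the scaling factor, threshold, and frame contents are hypothetical placeholders rather than the paper's actual settings.

```python
import numpy as np

def update_background(bg_prev, frame, alpha=0.05):
    """Running-average background update (in the spirit of Equation (1))."""
    return (1.0 - alpha) * bg_prev + alpha * frame

def moving_object_mask(frame, bg, ths=30):
    """Background subtraction (Equation (2)) and thresholding (Equation (3))."""
    i_bs = np.abs(frame.astype(np.float32) - bg.astype(np.float32))
    return (i_bs >= ths).astype(np.uint8)          # binary mask M_obj

# Hypothetical gray-level frames (480 x 640)
rng = np.random.default_rng(0)
bg = rng.integers(0, 256, (480, 640)).astype(np.float32)
frame = bg.copy()
frame[100:200, 300:350] += 80                       # simulated moving region
mask = moving_object_mask(frame, bg)
bg = update_background(bg, frame)
print(mask.sum(), "foreground pixels detected")
```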
Human-shape information was used to build our codebook matching algorithm. The extracted moving object was normalized into a 20 × 40 pixel image, and the positions of the shape pixels in the image were extracted by the shape feature extraction. These features are indicated by red dots in Figure 5; 10 Y-axis coordinates are chosen from the object's rightmost and leftmost boundaries, and the 20 corresponding X-axis coordinates are arranged as a feature vector. The vectors are shown as blue blocks in Figure 5. As shown in Figure 5, there are a total of 10 bins in the histogram, represented by green blocks. As a result, a human object is represented by 30 feature values.
We can conclude by observation that the top and bottom shape pixels along the Y-axis cannot be chosen as feature points, as these pixels are changeable. The method used to select the Y-axis coordinates is to first calculate the standard deviation of each Y-axis value in the training samples, and then select the 10 positions with the lowest standard deviation from each side.
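As shown in the sketch below, a minimal version of this 30-value shape descriptor can be assembled from a binary silhouette: 20 boundary X-coordinates at 10 selected rows plus a 10-bin histogram. The row-selection rule and the histogram source (row-wise widths) are assumptions for illustration and not the paper's exact procedure.

```python
import numpy as np

def shape_descriptor(mask, hist_bins=10):
    """Build a 30-value descriptor from a normalized binary silhouette:
    left and right boundary X-coordinates at 10 selected rows (20 values)
    plus a 10-bin histogram of row-wise silhouette widths (assumed stand-in
    for the paper's histogram feature)."""
    h, w = mask.shape                                   # expected (40, 20)
    rows = np.linspace(5, h - 6, 10).astype(int)        # assumed row positions
    lefts, rights, widths = [], [], []
    for y in range(h):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            widths.append(xs[-1] - xs[0] + 1)
        if y in rows:
            lefts.append(int(xs[0]) if xs.size else 0)
            rights.append(int(xs[-1]) if xs.size else 0)
    hist, _ = np.histogram(widths, bins=hist_bins, range=(0, w))
    return np.concatenate([lefts, rights, hist]).astype(float)

# Hypothetical normalized silhouette, 20 pixels wide and 40 pixels tall
mask = np.zeros((40, 20), dtype=np.uint8)
mask[5:35, 6:14] = 1
print(shape_descriptor(mask).shape)                     # (30,)
```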
A list of feature vectors is represented by the codebook. The feature vector is matched against the codebook vectors to find the code vector with the minimum distortion relative to the object feature vector. Let X denote a feature vector of M-dimensional data, designated x_0 ... x_(M-1). The code words V_j are the N entries of codebook C and, like the feature vector, each code word contains M-dimensional data v_(j,0) ... v_(j,M-1). The distortion between a code word and a feature vector is defined by Equation (4).
If the value of Dis_min in Equation (5) is less than the threshold, the feature vector X and the moving object it represents are assumed to belong to a human; if Dis_min is greater than the threshold, the object is assumed to be nonhuman. The comparison of X with V_j is illustrated in Figure 6.
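A minimal sketch of this nearest-code-word matching is given below, using a squared Euclidean distortion as a stand-in for Equation (4), whose exact form is not restated here; the codebook contents and the threshold are hypothetical.

```python
import numpy as np

def classify_human(feature, codebook, threshold):
    """Return (is_human, dis_min): the minimum distortion to any code word
    and whether it falls below the decision threshold."""
    # Squared Euclidean distortion between the feature vector and each code word
    distortions = np.sum((codebook - feature) ** 2, axis=1)
    dis_min = distortions.min()
    return dis_min < threshold, dis_min

# Hypothetical codebook of N=50 human-shape code words, each 30-dimensional
rng = np.random.default_rng(1)
codebook = rng.normal(10.0, 2.0, size=(50, 30))
feature = codebook[7] + rng.normal(0.0, 0.5, size=30)   # a human-like feature
is_human, dis_min = classify_human(feature, codebook, threshold=60.0)
print(is_human, round(float(dis_min), 2))
```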
Human Tracking
A particle filter algorithm was proposed in the study, which is based on a weighted resampling particles method. In this algorithm, high weighted samples were selected for the human tracking system. The basic idea of our particle filter is to approximate the probability distribution by weighted sample sets. One hypothetical state of the object with corresponding discrete sampling probability is represented by each sample [25].
Colored information is more accurate compared to grayscale information if we use color as the feature for the purpose of object tracking. For our experimentation we chose HSV (Hue, Saturation, and Value) color space for better performance of tracking compared to RGB (Red, Green, Blue) color space because of its ability to reduce lightness and illumination sensitivity. Every color channel was represented by 8 bits, which in turn produces 256 × 256 × 256 bins of the color histogram. Color data are quantized into 6 × 6 × 6 without generality loss, thus making the entire bin of color histogram as 216 bins. To represent the target object, kernel function was used. The Epanechnikov kernel function was selected to represent the target object to introduce a spatially-smooth function to reduce the search on small neighborhood region. The convex and monotonically decreasing Epanechnikov kernel was selected to mask the target's density estimate spatially. The rationale of using the kernel as a weighted mask is to assign smaller weights to the pixels farther away from the center of the target, since those pixels are often affected by occlusion or interference from the background. Figure 7b shows the Epanechnikov kernel. This kernel function has the highest value at the center of distribution. If we look at the Region of Interest (ROI) of the target model in Figure 7a, the pixels that are closer to the center of the ROI contain more important information, and the background pixels are mostly near the ROI's boundary. The Epanechnikov kernel function was selected to represent the target object as it is computationally simple and can disregard the boundary information. This kernel performs well in terms of improved stability, accuracy, and robustness on camera motion and partial occlusions. Epanechnikov kernel is defined by Equation (6), where x represents normalized pixels in the region defined as the target model. When the proposed kernel function is applied to the target model, more critical information is contained by pixels closer to the ROI center, as shown in Figure 7.
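The sketch below builds the kind of kernel-weighted 6 × 6 × 6 (216-bin) HSV histogram described above, applying an Epanechnikov-shaped weight over the ROI in the spirit of Equation (6); the HSV scaling and the ROI contents are assumptions for illustration.

```python
import numpy as np

def epanechnikov_weights(h, w):
    """Epanechnikov kernel weights over an h x w ROI (1 - r^2 inside the unit disc)."""
    ys, xs = np.mgrid[0:h, 0:w]
    ny = (ys - (h - 1) / 2.0) / (h / 2.0)
    nx = (xs - (w - 1) / 2.0) / (w / 2.0)
    r2 = nx ** 2 + ny ** 2
    return np.where(r2 < 1.0, 1.0 - r2, 0.0)

def hsv_histogram(roi_hsv):
    """Kernel-weighted 6x6x6 = 216-bin HSV histogram, normalized to sum to 1.
    roi_hsv: (h, w, 3) array with H, S, V channels scaled to [0, 1)."""
    h, w, _ = roi_hsv.shape
    weights = epanechnikov_weights(h, w).ravel()
    bins = np.minimum((roi_hsv * 6).astype(int), 5).reshape(-1, 3)
    idx = bins[:, 0] * 36 + bins[:, 1] * 6 + bins[:, 2]     # flatten 6x6x6 index
    hist = np.bincount(idx, weights=weights, minlength=216)
    return hist / max(hist.sum(), 1e-12)

# Hypothetical ROI of 40 x 20 pixels with HSV values in [0, 1)
roi = np.random.default_rng(2).random((40, 20, 3))
q = hsv_histogram(roi)
print(q.shape, round(float(q.sum()), 3))                    # (216,) 1.0
```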
A robust tracking framework is provided by the particle filter algorithm, as it represents uncertainty. The algorithm is capable of keeping its options open and, at the same time, of considering multiple state hypotheses. Temporary occlusions can be dealt with by the particle filter, as less likely object states remain part of the tracking process temporarily [25]. Occlusion handling and weighted resampling are the two basic differences between the original tracking method and our tracking method. Our proposed tracking method is shown in Figure 8.
Figure 8. Step-by-step process of the weighted resampling particle filter.
The first step in the process of the weighted resampling particle filter is to define the target model.
It can be defined by Equation (7) at location y as an m-bin histogram q_y = {q_y^(u)}_{u=1...m}. The normalization factor f can be represented by Equation (8), where δ is the Kronecker delta function, I is the number of pixels in the ROI region, and a = √(w² + h²) is used as the normalization factor for the size of the object region.
The sample model p_y = {p_y^(u)}_{u=1...m} is represented in the same way as the target model.
The Bhattacharyya distance d is used to measure the distance between the sample and the target model; the associated similarity value is termed ρ. If ρ is large, the two models are considered similar, and if ρ is equal to 1, the histograms of the sample and the target model are identical.
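To make these definitions concrete, the sketch below builds a kernel-weighted 6 × 6 × 6 HSV histogram for a rectangular ROI and compares two such histograms with the Bhattacharyya coefficient. It is an illustrative reimplementation, not the original Borland C++ code; the bin count (216), the Epanechnikov weighting and the use of the Bhattacharyya coefficient as the similarity value follow the description above, while the OpenCV usage and normalization details are assumptions.

```python
import numpy as np
import cv2  # assumed available; any BGR->HSV conversion would do


def epanechnikov_weights(h, w):
    """Kernel mask: largest at the ROI centre, zero at and beyond the boundary."""
    ys, xs = np.mgrid[0:h, 0:w]
    ny = (ys - (h - 1) / 2.0) / (h / 2.0)   # normalise pixel coordinates to [-1, 1]
    nx = (xs - (w - 1) / 2.0) / (w / 2.0)
    r2 = nx ** 2 + ny ** 2
    return np.where(r2 < 1.0, 1.0 - r2, 0.0)


def hsv_histogram(bgr_roi, bins=6):
    """Kernel-weighted colour histogram with bins^3 = 216 entries."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    weights = epanechnikov_weights(h, w).ravel()
    # quantise each channel (OpenCV hue is 0..179, S and V are 0..255) into `bins` levels
    q = np.empty((h * w, 3), dtype=np.int32)
    q[:, 0] = hsv[..., 0].ravel() * bins // 180
    q[:, 1] = hsv[..., 1].ravel() * bins // 256
    q[:, 2] = hsv[..., 2].ravel() * bins // 256
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    hist = np.bincount(idx, weights=weights, minlength=bins ** 3)
    return hist / (hist.sum() + 1e-12)  # normalise so the bins sum to 1


def bhattacharyya_similarity(p, q):
    """Similarity value rho; rho = 1 when the two histograms are identical."""
    return float(np.sum(np.sqrt(p * q)))
```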
In the particle filter algorithm, the target model can also be represented by a state vector s_target. It is defined in Equation (12), where w and h represent the width and height of the ROI, (x, y) represents the center of the ROI, and (v_x, v_y) represents the motion of the object. Equation (13) is used to compute the initial sample set S_initial = {s^(n)}_{n=1...N}, where I is an identity matrix, r.v. is a multivariate Gaussian random variable, and N represents the number of samples. A dynamic model is represented by Equation (14), which propagates the samples; the deterministic component of the model is represented by A. The target human's size and position can be determined from the estimated state vector using the weight of every sample and its state vector, as shown in Equation (15). To update the weight of each sample, the Bhattacharyya distance is used, as shown in Equation (16).
The resampling step in the weighted resampling particle filter is used to avoid degeneracy of the algorithm, that is, to prevent the situation where most of the sample weights are close to zero. To determine whether and when the resampling step is needed, Equations (17) to (19) can be used, where rate ∈ (0, 1), and N_ths and N_eff represent the given sample threshold and the effective number of samples, respectively:

N_eff < N_ths.   (17)

In the resampling process, sample selection depends on the weights; high-weight samples may be selected a number of times, leading to several copies of those samples, while relatively low-weight samples may not be selected at all. Given a sample set S_{t−1} and the target model q (for the first iteration, S_{t−1} is set to S_initial), the particle filter algorithm for each iteration is described as follows:
1. Propagate each sample from the set S_{t−1} by a linear stochastic differential equation (the dynamic model of Equation (14)).
2. Observe the color distributions: (a) calculate the color distribution p_t^(n) for each sample; (b) calculate the Bhattacharyya coefficient for each sample of the set; (c) update the weight ω_t^(n) of each sample accordingly.
3. Estimate the mean state of the set S_t.
4. If N_eff < N_ths, select N samples from the set S_t with probability ω_t^(n): (a) calculate the normalized cumulative probabilities c_t; (b) generate a uniformly distributed random number r ∈ [0, 1]; (c) use binary search to find the smallest j for which c_t^(j) ≥ r; (d) take s_t^(j) as the new sample. Finally, replace S_t by the resampled set.
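The following Python sketch illustrates one iteration of this loop under simplifying assumptions: a constant-velocity dynamic model for A, a Gaussian weighting of the Bhattacharyya distance with an assumed parameter sigma_d, and the state covariance values used later for the video-file tests. The histogram and similarity functions are passed in (for example, those sketched above); this is not the authors' implementation.

```python
import numpy as np

# State layout: [x, y, vx, vy, w, h]; constant-velocity dynamics are an assumption.
A = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 1]], dtype=float)
SIGMA = np.array([2.0, 2.0, 0.5, 0.5, 0.4, 0.8])  # noise per state component


def propagate(samples):
    """Step 1: propagate every sample with the dynamic model plus Gaussian noise."""
    noise = np.random.randn(*samples.shape) * SIGMA
    return samples @ A.T + noise


def reweight(samples, frame, target_hist, hist_fn, similarity_fn, sigma_d=0.1):
    """Step 2: weight each sample by its colour similarity to the target model."""
    weights = np.empty(len(samples))
    for i, s in enumerate(samples):
        x, y, w, h = int(s[0]), int(s[1]), max(int(s[4]), 2), max(int(s[5]), 2)
        roi = frame[y - h // 2:y + h // 2, x - w // 2:x + w // 2]
        rho = similarity_fn(hist_fn(roi), target_hist) if roi.size else 0.0
        d = np.sqrt(max(1.0 - rho, 0.0))               # Bhattacharyya distance
        weights[i] = np.exp(-d ** 2 / (2 * sigma_d ** 2))
    return weights / (weights.sum() + 1e-12)


def estimate_state(samples, weights):
    """Step 3: mean state (target position and size) of the weighted sample set."""
    return weights @ samples


def resample_if_degenerate(samples, weights, ths_ratio=0.5):
    """Step 4: standard resampling when the effective sample size drops too low."""
    n = len(samples)
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff >= ths_ratio * n:
        return samples, weights
    cumulative = np.cumsum(weights)
    r = np.random.uniform(0, 1, size=n)
    idx = np.minimum(np.searchsorted(cumulative, r), n - 1)  # binary search per sample
    return samples[idx].copy(), np.full(n, 1.0 / n)
```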
In the original resampling step of the particle filter, samples are selected randomly, so a sample with a relatively low weight may be selected; the process can then end up tracking a different object as the target, which decreases tracking accuracy, as shown in Figure 9. Figure 9 shows that the sample points with high weights are in the ROI (green block), while samples with relatively low weights are in the red block. Although the two blocks have nearly the same similarity value, the actual target object is in the green block; consequently, the tracker may follow a different object as the target, decreasing the tracking accuracy. We therefore propose a weighted resampling algorithm to prevent this problem. First, the top samples are selected and set to S_t^top with their N_top weights from the set S_t, as shown in Equations (20) to (21). The parameter top represents the top rate; in our experiments it is set to 0.2, i.e. only the samples with the top 20% of weights are selected from the set S_t.
N samples are then reproduced in S_t according to the weights of the s^top(n). This step reproduces each s^top(n) a number of times roughly proportional to its weight: relatively high-weight samples are reproduced many times, and relatively low-weight samples are reproduced at least once. Figure 10 shows that the sample points with high weights are in the ROI (green block) and the samples with relatively low weights are in the red block. Figure 11 shows the weighted resampling result: most of the sample points now lie in the green block, i.e. in the target object region.
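A minimal sketch of this weighted resampling step is given below. It follows the textual description only (top rate 0.2, at-least-one copy of every kept sample, extra copies drawn in proportion to weight); the exact reproduction rule of Equations (20)-(21) is not reproduced, and the multinomial draw is an assumption.

```python
import numpy as np


def weighted_resample(samples, weights, top_rate=0.2):
    """Keep only the top-weighted samples (top 20% here) and reproduce them:
    every kept sample appears at least once, high-weight samples appear often."""
    n = len(samples)
    n_top = max(int(np.ceil(top_rate * n)), 1)
    top_idx = np.argsort(weights)[-n_top:]            # indices of the top-weight samples
    top_w = weights[top_idx] / weights[top_idx].sum()
    counts = np.ones(n_top, dtype=int)                # at least one copy each
    counts += np.random.multinomial(n - n_top, top_w) # remaining copies by weight
    new_idx = np.repeat(top_idx, counts)
    return samples[new_idx].copy(), np.full(n, 1.0 / n)
```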
A Gaussian mixture model (GMM) was applied to update the target model over time. Any continuous probability distribution can be approximated by K Gaussian distributions. The GMM [39] is a robust method for dynamic backgrounds and is widely used because of its robustness to background variations such as multi-modal, quasi-periodic and gradual illumination changes. It is a semiparametric multimodal density model consisting of a number of components that compactly represent the pixels of an image block in color space under illumination changes; the image can be represented as a set of homogeneous regions modeled by a mixture of Gaussian distributions in color feature space, whereas non-Gaussian mixture models [40] represent an image without taking the spatial factor into account. A Gaussian distribution N(x | µ_k, σ_k) with mean µ_k and standard deviation σ_k is considered here; the weight of each Gaussian distribution is represented by π_k, and the sum of all weights is equal to 1. Equation (22) describes the resulting mixture model.
The GMM update algorithm is applied to update the color histogram of the target model, with K = 3 Gaussian distributions used to model each bin q^(u):
1. The mean µ_k, standard deviation σ_k, and weight π_k were initialized respectively as µ_k = q^(u), σ_k = 1, and π_k = 1/K, where k = 1 ∼ K.
2. The bin's value was updated using Equation (23), where A = 0.6, B = 0.25, C = 0.15, and a, b, c follow the descending order.
3. If the difference between the previous and current frames' q^(u) was smaller than the threshold, Equation (24) was used to find the first matching Gaussian distribution, with k following the descending order.
4. If a Gaussian distribution was found by Equation (24), µ_k, σ_k and π_k were updated by Equations (25) to (27), where α = 0.05 and β = 0.01, and the other weights were updated by π_j = (1 − β)·π_j for j = 1 ∼ K and j ≠ k.
These steps produced the updated target model q = {q^(u)}_{u=1...m}. The proposed occlusion handler was color-based: the algorithm evaluated the similarity between the target model and the candidate model. Figure 12 shows the flowchart of the occlusion handler. The step-by-step process of the proposed occlusion handler is as follows:
1. A candidate model c = {c^(u)}_{u=1...m} was created from the ROI in the current frame.
2. The similarity value between the target model q and the candidate model c was computed.
3. If the similarity was less than ths_sim, resampling was not performed, and it was assumed that the candidate model was occluded by another object.
4. The count was increased: Count = Count + 1.
5. Steps 1-4 were repeated during the tracking process until the similarity value became larger than ths_sim (i.e. the tracked human reappeared) or Count ≥ 10. This termination condition avoids the spreading of the samples out of the image. Figure 13 shows the images for frames T, T+4, T+9 and T+14 using the proposed occlusion handler.
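A sketch of one tracking-loop call of this occlusion handler is shown below. The similarity threshold value (ths_sim = 0.6) is an assumption, since the paper's value is not given in the text; the histogram and similarity functions are the ones sketched earlier.

```python
def occlusion_check(target_hist, frame, roi_state, hist_fn, similarity_fn,
                    ths_sim=0.6, max_count=10, count=0):
    """Steps 1-5 above for one frame.  Returns (occluded, count): while occluded,
    the caller skips resampling so the sample set stays spread out until the
    target reappears; after max_count misses the handler gives up."""
    x, y, w, h = roi_state
    candidate = frame[y - h // 2:y + h // 2, x - w // 2:x + w // 2]
    rho = similarity_fn(hist_fn(candidate), target_hist) if candidate.size else 0.0
    if rho < ths_sim:
        count += 1
        occluded = count < max_count
    else:
        count = 0
        occluded = False
    return occluded, count
```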
Camera Control
The Pelco P-protocol [31] was used to control the active camera through an RS-232 to RS-485 converter. The protocol gives control over the pan (horizontal direction) and tilt (vertical direction) angles and the zoom step, which is what is needed for tracking. A Pelco P-protocol message consists of 8 bytes, with the format shown in Figure 14a. Byte 1 and Byte 7 are the start and stop bytes, respectively, and are always set to 0xA0 and 0xAF. Byte 2 is the receiver (camera) address; since only one camera is used in this work, Byte 2 is always set to 0x00. Bytes 3 to 6 are used to control the pan-tilt-zoom (PTZ), as shown in Table 1. The last byte is an XOR checksum byte.
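The byte layout described above can be assembled as follows. The command and speed values in the example are illustrative only, since the actual bit assignments come from Table 1, which is not reproduced here.

```python
def pelco_p_frame(address, data1, data2, data3, data4):
    """Build one 8-byte Pelco P-protocol message: start byte 0xA0, camera address,
    four PTZ data bytes, stop byte 0xAF, and a final XOR checksum byte."""
    frame = [0xA0, address & 0xFF, data1 & 0xFF, data2 & 0xFF,
             data3 & 0xFF, data4 & 0xFF, 0xAF]
    checksum = 0
    for b in frame:
        checksum ^= b
    frame.append(checksum)
    return bytes(frame)


# Illustrative "pan at some speed" command for camera address 0x00.
msg = pelco_p_frame(0x00, 0x00, 0x02, 0x20, 0x00)
# The frame would then be written to the serial port feeding the RS-485 converter,
# e.g. with pyserial: serial.Serial("COM3", 9600).write(msg)
```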
Figure 14b demonstrates the scheme used to keep the tracked object in the center of the FOV. The FOV is divided into 9 regions corresponding to the pan-tilt directions, and zoom-out and zoom-in are used to make the target object appear smaller or larger. Every region has a specific direction, as shown in Figure 14b. If the target is located in the stop region, the camera is set to stop; otherwise, the camera speed for the region is determined by the PID controller. Zoom-in and zoom-out are activated when the target's size becomes smaller or larger than the user-defined size. The details of the camera control are shown in Figure 15. To control the vertical and horizontal position differences, two independent PID controllers were used. Equations (28) and (29) are used to estimate the pan and tilt speeds, where we defined the offset_pan and offset_tilt values:

Speed_pan = C_out · 0.1 + offset_pan.   (28)

The pan and tilt speeds of the camera are provided by the manufacturer of the camera (0 to 64). Equations (28) and (29) of the PID controller give the speed within this limited range. If the speed is too low, the target object could leave the camera frame by the time the camera moves; on the other hand, if the speed is too high, the camera could lose track of the target object and overshoot it.
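A minimal sketch of the two PID controllers and the speed mapping of Equations (28)-(29) is given below. The PID gains are illustrative, not the paper's values; only the 0.1 scaling, the offsets and the 0-64 manufacturer speed range follow the description above, and the direction itself is assumed to be handled by the 9-region scheme of Figure 14b.

```python
class PID:
    """Simple PID controller; the gains are placeholder values."""
    def __init__(self, kp=0.5, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


pan_pid, tilt_pid = PID(), PID()


def pan_tilt_speeds(target_cx, target_cy, frame_w, frame_h,
                    offset_pan=0.0, offset_tilt=0.0):
    """Speed = C_out * 0.1 + offset, clamped to the camera's 0..64 range."""
    err_x = target_cx - frame_w / 2.0      # horizontal position difference
    err_y = target_cy - frame_h / 2.0      # vertical position difference
    speed_pan = pan_pid.step(err_x) * 0.1 + offset_pan
    speed_tilt = tilt_pid.step(err_y) * 0.1 + offset_tilt
    clamp = lambda v: int(max(0, min(64, abs(v))))
    return clamp(speed_pan), clamp(speed_tilt)
```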
Depending on the size of the ROI, we decided on whether to zoom in or out. We applied Equations (32) and (33), where we set rate big = 1.1 and rate small = 0.9, and w initial and h initial were, respectively, the width and height of our human target object.
upper_w = w_initial · rate_big,  upper_h = h_initial · rate_big,   (32)
lower_w = w_initial · rate_small,  lower_h = h_initial · rate_small.   (33)
Upon zooming in or out, we updated the size of the target model using the aspect ratio w/h, which Equation (34) defines.
We updated the target model size with Equations (35) and (36) in the case of a zoom-in operation or Equations (37) and (38) in the case of a zoom-out operation. Later, we used these renewed states to update the variables from Equations (32) and (33).
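The zoom decision and the size update can be sketched as follows. Only the threshold factors (1.1 and 0.9) and the preservation of the aspect ratio come from the text above; the rescaling step factor after a zoom command is an assumption standing in for Equations (35)-(38).

```python
def zoom_decision(w, h, w_initial, h_initial, rate_big=1.1, rate_small=0.9):
    """Zoom out when the tracked ROI grows past the upper bounds of Equation (32),
    zoom in when it shrinks below the lower bounds of Equation (33)."""
    upper_w, upper_h = w_initial * rate_big, h_initial * rate_big
    lower_w, lower_h = w_initial * rate_small, h_initial * rate_small
    if w > upper_w and h > upper_h:
        return "zoom_out"
    if w < lower_w and h < lower_h:
        return "zoom_in"
    return "none"


def update_target_size(w, h, zoom, step=1.2):
    """Rescale the stored target-model size after a zoom command while keeping
    the aspect ratio w/h of Equation (34); `step` is a placeholder factor."""
    aspect = w / h
    if zoom == "zoom_in":
        h = h * step
    elif zoom == "zoom_out":
        h = h / step
    return aspect * h, h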
Experimental Results
The proposed method was implemented on a PC platform with Intel ® Core™ i5 CPU 650 at 3.20GHz, 4GB RAM, and developed in Borland C++ Builder 6.0 on Windows 7. To verify the performance and stability of the system, it was tested under several environments. We tested both image sequences and video files (AVI uncompressed format) from the active camera, with a resolution of 720 × 480 pixels.
Results of Tracking on Video File
To verify the tracking algorithm with the proposed particle filter, we used three video files, with the following parameters:
• Number of bins in the histogram: m = 6 × 6 × 6 = 216
• Number of samples: N = 30
• State covariance: (σ_x, σ_vx, σ_y, σ_vy, σ_w, σ_h) = (2, 0.5, 2, 0.5, 0.4, 0.8)
1. Video 1 shows our system's occlusion handler in operation. Figure 16 shows the tracking system without the occlusion handler, while Figure 17 shows the same track with our occlusion handler. The full occlusion condition happens in frame 3 of Figures 16 and 17. If the particle filter resamples during the full occlusion condition, it may resample at incorrect positions, as shown in frame 4, and tracking will be lost, as in frames 5 and 6. When the full occlusion happens in the particle filter with the occlusion handler, the resampling step is not performed immediately, so the sample set keeps a widespread range and can recover the target after the full occlusion.
2. Video 2 is used to verify the tracking feature. Figure 18 shows a human wearing a black jacket while walking near a black chair, which serves as an object with color features similar to the human's. Although the target human has color features similar to the black chair, the proposed system can still track the target human.
3. Video 3 is used to verify the tracking performance in a complex situation. Figure 19 shows the target human partially occluded by a chair. The target human performs sitting-down and standing-up activities, and later another human partially occludes our original target, which continues to be tracked, showing that the system does not lose the target.
Results of Tracking on Active Camera Output
We used an active camera set up in our lab, with an environment complex enough to verify the system operation. We set the particle filter and PTZ parameters as follows:
• Number of bins in the histogram: m = 6 × 6 × 6 = 216
• Number of samples: N = 30
• State covariance: (σ_x, σ_vx, σ_y, σ_vy, σ_w, σ_h) = (10, 1, 10, 1, 1, 2)
Figure 20 shows the tracking system controlling the pan/tilt of the camera; the targeted human is mostly kept in the camera's FOV. Figure 21 shows the results of zooming in/out while tracking. Figure 22 shows the tracking system controlling the pan/tilt/zoom of the camera, with the targeted human walking freely in the environment. Figures 23 and 24 show our system tracking a target human with more than one person walking in the same environment: in the test of Figure 23 the target only walks around, while in the test of Figure 24 the human target also performs further actions, such as crouching and intentionally occluding himself. Figure 21a shows that the target human has been detected and the zoom layer is initialized to 0. The targeted human was walking away from or approaching the camera; each zoom-in adds 1 to the zoom layer, and each zoom-out subtracts 1. The details of the zoom layer are shown in Tables 2 and 3 for Figures 20a-l and 21a-i, respectively. Table 2. Zoom layer varies in Figure 20. Table 3. Zoom layer varies in Figure 21. The experimental results show that the proposed system can track a moving human target with the particle filter algorithm on an active camera.
In addition, the tracking system is able to track the target human when more than one person is walking in the same environment. Moreover, the zoom-in/out adjusts the image resolution while tracking the human. There are several contributions in this research:
1. Our system can accurately distinguish human and nonhuman objects.
2. The weighted resampling helps the particle filter to preserve the samples with high weights.
3. The occlusion handler can solve the temporary full occlusion condition.
4. The system can track the human target smoothly by using the PID controller to determine the motion of the camera.
Conclusions
In this paper, we proposed a new system that smoothly tracks a human target by controlling the camera motion with a PID controller. The experimental results demonstrated that the proposed system was capable of tracking a moving human target using a particle filter on an active camera. It was also able to precisely differentiate human and nonhuman objects. When multiple people were walking in the same environment, the tracking system accurately tracked the targeted human. The image resolution of the tracked human can be adjusted using zoom in/out. The weighted resampling used in this paper helps the particle filter to preserve high-weight samples. In addition, the temporary full occlusion condition was solved using the occlusion handler. | 2020-10-28T19:09:43.954Z | 2020-10-09T00:00:00.000 | {
"year": 2020,
"sha1": "f456612abe82496ccf78ce7a59320317e54f3ef0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-2289/4/4/27/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "08212e45180f10ff6214b7cf7692f7a023ac23d7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
3636643 | pes2o/s2orc | v3-fos-license | Echinococcus multilocularis in Kyrgyzstan: similarity in the Asian EmsB genotypic profiles from village populations of Eastern mole voles (Ellobius tancrei) and dogs in the Alay valley
Echinococcus multilocularis is a cestode that causes human alveolar echinococcosis, a lethal zoonosis of public health concern in central Asia and western China. In the present study, one of 42 Eastern mole voles (Ellobius tancrei) caught in Sary Mogol (Alay valley, southern Kyrgyzstan) presented liver lesions with E. multilocularis from which the EmsB target was amplified. The Asian profile obtained was almost identical to one amplified from domestic dog faeces collected in a nearby village. This observation adds additional information to the potential role of E. tancrei in the transmission of E. multilocularis, and to the known distribution range of E. multilocularis (Asian strain) in central Asia.
Introduction
The taeniid cestode Echinococcus multilocularis is the causative agent of human alveolar echinococcosis (AE), a potentially lethal helminthic zoonosis (Eckert & Deplazes, 2004). Although AE is a rare disease within the distribution range of the parasite, several endemic areas have been reported in North America, Europe and Asia (Vuitton et al., 2003). Echinococcus multilocularis has a complex life cycle that involves carnivores (principally foxes) as definitive hosts, and cricetid rodents (e.g. Microtus spp.) or lagomorphs (e.g. Ochotona spp.) as intermediate hosts. Dogs are also good definitive hosts. The assemblage of wildlife host communities varies according to ecological features on multiple spatial scales. From a genetic point of view, E. multilocularis appears as an organism with low polymorphism (Haag et al., 1997; Eckert et al., 2001). However, distinct European, Asian and North American genotypes have been described (Bretagne et al., 1996; Bart et al., 2006), and the geographical location of the transitional zone between Asian and European genotypes, somewhere between eastern Europe and western China, is currently unknown. Furthermore, a tandemly repeated microsatellite, EmsB, has been used to describe the relative diversity of parasite genetic profiles on both regional and local scales (Knapp et al., 2007, 2008, 2009). Kyrgyzstan is one of the five republics of central Asia that, with northern Iran, eastern Turkey and Caucasia, provides the geographical link between the transmission foci of Asia and continental Europe. However, nothing is known about the genotypes of E. multilocularis circulating in the area, which theoretically may belong either to the Asian or the European clades, or both. In Kyrgyzstan, cystic echinococcosis caused by E. granulosus is a national public health concern across the whole country (Torgerson et al., 2006). The highest incidences of human alveolar echinococcosis, however, are currently recorded in the sub-national administrative regions of Issyk-kul, Naryn and Osh, the latter including the Alay valley (Usubalieva et al., 2013). In the Alay valley (altitude 2900-3500 m) land cover is mostly Alpine grassland. Echinococcus multilocularis definitive hosts are the red fox (Vulpes vulpes) and domestic dogs (Ziadinov et al., 2008, 2010). In terms of potential prey biomass, the three dominant species in local small mammal assemblages are: Microtus gregalis (the narrow-headed vole), Cricetulus migratorius (the grey dwarf hamster) and Ellobius tancrei (the Eastern mole vole) (Giraudoux et al., 2013 and unpublished). Although, historically, M. gregalis and E. tancrei have been found to be infected naturally in Kyrgyzstan (Gagarin et al., 1957; Tokobaev, 1959), their relative contribution to E. multilocularis transmission is still unknown. Ellobius tancrei has a wide distribution range, stretching from north-eastern Turkmenistan and eastern Uzbekistan through China and Mongolia (Batsaikhan & Tinnin, 2008). More than 50 years ago this species was already recorded as being infected naturally with E. multilocularis in Kyrgyzstan (Tokobaev, 1959), but in the original paper it was likely confused with E. talpinus, the Northern mole vole, which actually is not present in Kyrgyzstan. No other mention since then of E. tancrei voles infected by E. multilocularis could be found in the literature.
However, population surges of this species have been observed regularly, for instance in the Alay valley, the Tien Shan (Narati area, Xinjiang, China) and the Altai Mountains (Giraudoux et al., 2008, 2013 and unpublished).
Here we report infection of E. tancrei in Sary Mogol village (39°40′33.06″N, 72°53′02.06″E) (fig. 1). Furthermore, dog faeces were sampled and tested for E. multilocularis in the same area, and one of them was used to compare genetic profiles. Those genotypic profiles were then compared to other E. multilocularis isolates from Eurasia and North America.
Materials and methods
In May 2012, a total of 42 Ellobius specimens were trapped within the periphery of Sary Mogol village using tong traps, in an area of about 0.53 ha (72°53′27.78″E, 39°40′50.952″N) at an altitude of 3000 m. As in every other household of this area, the hamlet was surrounded by Alpine grassland and farmland (fig. 2a). Eastern mole voles were identified to the specific level using conspicuous and typical morphometric criteria (short and soft fur, small eyes, long and straight incisors extending far forward of the nasal cavities; fig. 2c). All animals were weighed, measured and sexed in a field laboratory. Rodent eyeballs were collected to assess their relative age by using their dry crystalline weight, and were preserved in 5% formalin (Kozakiewicz, 1976). At necropsy, the liver and lungs were examined macroscopically for any lesions. When lesions were found, samples were collected and stored in a 90% alcohol solution. The presence of protoscoleces was assessed under microscopy after a puncture into the lesion with a syringe. Rodent carcases were preserved in 10% formalin for reference collection.
Dog faeces were sampled in Sary Mogol and other villages over the same period. Echinococcus multilocularis DNA was amplified from dog faeces found in Taldy Suu.
Total genomic DNA from the rodent liver lesion was extracted using the High Pure PCR Template Preparation kit (Roche Diagnostics, Mannheim, Germany), as recommended by the manufacturer. The Echinococcus species determination was done by DNA amplification by polymerase chain reaction (PCR) and sequencing of the mitochondrial DNA (mtDNA) fragment of the nd1 gene (primers ND1_Fwd: 5′-AGATTCGTAAGGGGCCTAATA-3′ and ND1_Rev: 5′-ACCACTAACTAATTCACTTTC-3′; Bowles & McManus, 1993), and the sequences were compared to the GenBank database. Sequencing using the Sanger method was performed from the two ND1 primers, in order to obtain a consensus sequence. For the dog faecal sample, DNA was extracted using a Qiagen stool mini kit (Qiagen, Hilden, Germany) following the manufacturer's instructions but using 1 g of faeces. The positive dog faecal sample from Taldy Suu was also amplified for the nd1 gene.
Genotyping of parasite samples was performed by amplification of the tandemly repeated microsatellite EmsB as described previously (Knapp et al., 2007) and modified (Umhang et al., 2014). Briefly, the reaction was performed in a 25 µl reaction mixture containing 200 µM of each deoxynucleoside triphosphate (dNTP), 0.4 µM fluorescent forward primer EmsB A (5′-FAM-GTGTGGATGAGTGTGCCATC-3′), 0.7 µM classical reverse primer EmsB C (5′-CCACCTTCCCTACTGCAATC-3′) and 0.5 U of Platinum Taq DNA polymerase enzyme (Life Technologies, Foster City, California, USA), with the addition of Platinum 1× PCR buffer (Life Technologies). The amplification reaction was performed in a Veriti thermocycler (Life Technologies) under the following conditions: a pre-amplification step at 94°C for 2 min; followed by 45 cycles with a denaturing step at 94°C for 30 s, annealing at 60°C for 30 s and extension at 72°C for 1 min; with a final elongation at 72°C for 45 min. The PCR products were analysed by fragment size analysis using an ABI Prism 310 apparatus and the GeneMapper 4.1 software (Life Technologies, Carlsbad, California, USA). The Kyrgyz sample isolated from E. tancrei was compared to a database composed of 1084 genotyped samples from Europe (France, n = 537; Germany, n = 88; Switzerland, n = 109; Austria, n = 99; Slovakia, n = 63; Czech Republic, n = 66; and Poland, n = 94), from Asia (Tibetan plateau in China, n = 5; Hokkaido in Japan, n = 6) and from North America (Canada, n = 1; Alaska, n = 13). The Kyrgyz positive dog faecal sample contaminated by E. multilocularis (n = 1) was included, and a sample of E. granulosus sensu stricto as an outgroup (n = 2). The genetic distance among samples was assessed by the Euclidean distance between EmsB profiles. As described previously, two samples were considered identical when the genetic distance was below 0.08 (Knapp et al., 2007).
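The distance-based comparison of EmsB profiles can be sketched as follows. The profile values are made-up placeholders and the "average" linkage is an assumption (the linkage method is not stated in the text); only the Euclidean distance and the 0.08 identity threshold follow the description above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

# Each row is one EmsB profile (normalised peak heights over the analysed
# fragment sizes); the numbers below are placeholders, not real data.
names = ["Kyrgyz_E_tancrei", "Kyrgyz_dog", "Tibet_fox"]
X = np.array([
    [0.10, 0.35, 0.30, 0.15, 0.10],
    [0.12, 0.33, 0.31, 0.14, 0.10],
    [0.20, 0.25, 0.25, 0.20, 0.10],
])

dist = pdist(X, metric="euclidean")      # pairwise Euclidean distances between profiles
print(squareform(dist).round(3))

# Hierarchical clustering; profiles within 0.08 of each other are treated
# as identical genotypes, as in the analysis described above.
labels = fcluster(linkage(dist, method="average"), t=0.08, criterion="distance")
for name, lab in zip(names, labels):
    print(name, "-> cluster", lab)
```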
Results
Among the 42 individuals, 15 were females and 27 were males. The body weight ranged from 47 to 77 g and the crystalline dry mass from 0.45 to 3.8 mg. One Ellobius specimen, an adult male, was caught by hand and brought by children from the hamlet. Its body weight was 62 g and its crystalline dry mass 1.1 mg. This specimen was the only individual that presented larval cysts of E. multilocularis. It showed two liver lesions (12-18 mm in diameter; fig. 2d). Protoscoleces were found after examining cyst vesicle fluid under a light microscope (fig. 2e). For both the Ellobius specimen and the dog faecal sample, the amplification of the mtDNA fragment of the nd1 gene allowed us to generate a 400-bp consensus sequence. The two isolates had 100% identity with each other and presented 99% identity with the nd1 sequence from the complete mitochondrial genome (AB018440.2). One mutation (a G/A substitution at position 8012) was observed in both the forward and reverse sequences, in comparison to the other sequences referenced in the GenBank database, for the E. tancrei sample and the dog faeces extract (see sequences in fig. 3). This mutation distinguishes the Kyrgyz samples from, for example, a Polish sample (GenBank reference: AJ132908.1) and Chinese samples (Xinjiang: EU704124.1 and Sichuan: EU704123.1); these reference samples have the nucleotide A at position 8012 in the nd1 gene, whereas the Kyrgyz samples have a nucleotide G. The presence of the mutation was confirmed by performing the sequencing twice. In comparison to the EmsB database (n = 1084 samples), no identical samples (< 0.08 of genetic distance) were clustered with the Kyrgyz sequences (from E. tancrei and the dog faecal samples), but the two Kyrgyz sequences were clustered together with a genetic distance of 0.12. They can subsequently be considered as similar strains but not identical, perhaps due to poor DNA quality (fig. 4). Moreover, the two samples were linked with Tibetan (China) and Hokkaido (Japan) samples, and one Alaskan sample, with a genetic distance ranging from 0.17 to 0.24 (fig. 4), but with neither the European nor the American isolates.
Discussion
The current results add further information about the natural infection of the Eastern mole vole, E. tancrei, with E. multilocularis, first discovered more than 50 years ago. These findings, based on EmsB genotyping, indicate, first, that the two isolates (vole and dog) found in our study belong to the Asian strain of E. multilocularis, hence extending the western limit of the known distribution range of this genotype in central Asia. The Pamir Mountain range is situated in altitudinal continuity with the Tibetan plateau but, due to its complex high-altitude ranges, might have been considered a biogeographical barrier to the spread of the eastern Asian strain of E. multilocularis to the central Asian republics - a hypothesis that is refuted here. Second, very similar strains were found in dog faeces and the E. tancrei specimen in the study area, and the common mutation first described in the present study emphasized, as a fingerprint, the involvement of E. tancrei and dogs in the local parasite cycle. The occurrence of this mutation amongst Asian E. multilocularis isolates needs further studies to be understood. Associated with the fact that E. tancrei could be trapped at less than 10 m from house walls, and all of them at less than 100 m, this indicates that a synanthropic cycle involving dogs and the Eastern mole vole may exist, not excluding the contribution of other small mammal potential host species (e.g. M. gregalis, C. migratorius) that were also observed not only in habitats remote from villages but also in the close vicinity of houses, where Mus musculus was also captured. Large population densities of both dogs and E. tancrei were observed in the Alay valley. Ellobius tancrei abundance has been shown to increase with grassland vegetation biomass (Giraudoux et al., 2013). This leads to the maintenance of larger vole populations in farmland that surrounds villages, where barley is grown, and in hay fields close to villages, with
vole population spillover into villages. Moreover, 38-74% of households have at least one dog in the villages studied in the Alay Valley (van Kesteren et al., 2013), which leads to a high concentration of potentially infective dog faeces. Added to this is a large red fox population in the area, with tens of fox dens found at less than 1-2 km from villages (Giraudoux and Rieffel, pers. obs.), which may also sustain the transmission of E. multilocularis (however, see Liccioli et al., 2015). Third, the only specimen of E. tancrei found to be infected by E. multilocularis was also the only specimen caught by hand by children. This might indicate that the infected animal found in the present study was caught not by chance but as the result of an increased vulnerability to capture induced by the parasite. This possibly altered host-behavioural aspect of the transmission ecology of E. multilocularis appears not to have been mentioned previously in the literature, and should be investigated carefully, using appropriate methods. | 2016-05-04T20:20:58.661Z | 2015-07-03T00:00:00.000 | {
"year": 2015,
"sha1": "125a11c786081c6c81b0ef42418eccc5b38cf4f3",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/D70123745B1BCF346661BAAC9A63C57E/S0022149X15000474a.pdf/div-class-title-span-class-italic-echinococcus-multilocularis-span-in-kyrgyzstan-similarity-in-the-asian-emsb-genotypic-profiles-from-village-populations-of-eastern-mole-voles-span-class-italic-ellobius-tancrei-span-and-dogs-in-the-alay-valley-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1eb297f65d6274d6e448898cde8220e6ed70d7e2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
247958001 | pes2o/s2orc | v3-fos-license | Combinatoric topological string theories and group theory algorithms
A number of finite algorithms for constructing representation theoretic data from group multiplications in a finite group G have recently been shown to be related to amplitudes for combinatoric topological strings (G-CTST) based on Dijkgraaf-Witten theory of flat G-bundles on surfaces. We extend this result to projective representations of G using twisted Dijkgraaf-Witten theory. New algorithms for characters are described, based on handle creation operators and minimal multiplicative generating subspaces for the centers of group algebras and twisted group algebras. Such minimal generating subspaces are of interest in connection with information theoretic aspects of the AdS/CFT correspondence. For the untwisted case, we describe the integrality properties of certain character sums and character power sums which follow from these constructive G-CTST algorithms. These integer sums appear as residues of singularities in G-CTST generating functions. S-duality of the combinatoric topological strings motivates the definition of an inverse handle creation operator in the centers of group algebras and twisted group algebras.
Two-dimensional Dijkgraaf-Witten theories are simple examples of topological field theories associated to finite groups [1][2][3][4]. At a basic level, these theories describe orbifolds of points, [point/G], possibly with discrete torsion (described in this context as a twisting).
In the case when the group is a symmetric group S n , these theories admit defects, which have applications in describing counting and correlators in U(N) gauge theories [5] of interest in AdS/CFT [6][7][8]. Recent work on wormhole physics and baby universes [9][10][11][12][13][14][15], in the context of topology change in quantum gravity, considers sums over Riemann surfaces weighted by a string coupling g st , where each surface supports a Dijkgraaf-Witten theory.
We will refer to these theories, summing over worldsheets, as combinatoric topological string theories or G-CTST. Motivations and insights on the mathematical properties of these strings thus arise both from AdS/CFT and from models of topology change in quantum gravity. Another place Dijkgraaf-Witten theories arise is in couplings to physical theories. For example, consider an orbifold [X/Γ], where a subgroup K ⊂ Γ acts trivially on X, as studied in e.g. [16][17][18][19][20][21][22][23][24]. This can be interpreted as a coupling of the orbifold [X/G] (for G = Γ/K) to Dijkgraaf-Witten theory for the group K, as will be discussed in greater detail in [25]. The orbifold [X/Γ] is in any event equivalent to a disjoint union of orbifolds, a result known as decomposition [21], which when viewed as a coupling of a topological field theory, reflects the fact that as a topological field theory, Dijkgraaf-Witten theory itself is a disjoint union of invertible field theories [26][27][28][29]. Applied to G-CTST, decomposition implies that the 'string field theory' of Dijkgraaf-Witten theory (in the same sense as [30]) is a theory on a disjoint union of points, which could be interpreted as a noninteracting statistical mechanical theory.
In the recent paper [11] it was observed that well-known formulae for amplitudes in G-CTST can be used to give a finite algorithm which starts from group multiplications in G and arrives at the integer ratios |G|/(dim R) (relating the order of a finite group G and the dimension of an irreducible representation R). The integrality of these ratios is an interesting old result at the intersection of finite group theory and number theory (see for example [31,32]) and plays an important role in the algorithm. The form of the group multiplications in the input is understood geometrically in terms of the fundamental groups of two dimensional surfaces, which are interpreted in G-CTST as string worldsheets. The algorithm proceeds by finding the zeroes of a polynomial equation which has integer coefficients (which are G-CTST amplitudes) and has roots which are also known to be integers (i.e. the |G|/(dim R) ). The construction of representation theoretic quantities using combinatoric methods is an interesting general theme in representation theory [33], with implications for computational complexity theory [34,35]. G-CTST provides an interesting topological perspective on this theme. A quantum mechanics of bipartite ribbon graphs which constructs Kronecker coefficients as eigenvalue degeneracies of Hamiltonians [36] is another angle on the theme of exploiting stringy geometric/algebraic structures to address questions in combinatorial representation theory.
It is natural to consider twisted G-CTST (involving Dijkgraaf-Witten theories of orbifolds with discrete torsion) and its relation to the combinatorics of projective representations of G. In this paper we will show how the amplitudes in the vacuum sector of G-CTST can be used to obtain the integer ratios |G|/(dim R), where dim R is the dimension of a projective representation R. The algorithm takes as input group multiplications weighted by cocycle factors defining the twist, and proceeds by solving a polynomial equation as in [11]. (The fact that these ratios are always integers in the projective case is proven in [32], [37, theorem 3.5].) Standard algorithms for the construction of characters were also shown in [11] to be related to amplitudes in G-CTST, for two dimensional surfaces with boundary circles. In this paper we show that the geometrical picture based on G-CTST, along with the study of generating subspaces of centers of group algebras [38], can be used to give new algorithms for characters. The handle creation operator of G-CTST plays a role in one class of such algorithms. An interesting corollary of this discussion is that string amplitudes with one boundary in G-CTST determine a distinguished subspace of the center of the twisted group algebra, Z(C ω (G)), of dimension equal to the number of distinct integers dim R. This discussion will be presented for both the untwisted and the twisted case.
The study of generating subspaces of centers of symmetric group algebras in [38] was motivated by the consideration of a toy model for black hole information loss arising from the AdS/CFT correspondence [39]. A family of supergravity solutions [40] with AdS 5 ×S 5 asymptotics are dual to half-BPS states in the dual CFT labelled by Young diagrams [41]. As explained in [39] the asymptotic gravitational charges of the SUGRA solutions correspond to Casimirs of the U(N) gauge symmetry in the CFT. The information loss model considers the information content in a finite number of Casimirs. For quantum states having energy n in the natural units, the Casimirs are related by Schur-Weyl duality to central elements in the group algebra of C(S n ). The information content in low order Casimirs translates into a question about how effectively low order cycle operators in the center of C(S n ) distinguish Young diagrams. This is in turn related to the dimensions of subspaces of the center generated by a finite set of central elements. In this paper we will be considering the generating subspaces for general finite groups G in connection with Dijkgraaf-Witten topological field theories. The embedding of this discussion into gauge-string dualities is an interesting problem for the future.
The paper is organised as follows. Section 2 explains the use of amplitudes in the vacuum sector of G-CTST to give finite algorithms starting from group multiplications in G weighted by appropriate cocycle factors and deriving the integer ratios |G|/ dim R for projective representations R of finite groups G. The handle creation operator (C. 39) for twisted group algebras plays an important role in this discussion. By considering one-point functions of twist field operators on higher genus surfaces, expressible combinatorially using the handle creation operator, we give a combinatoric construction for the number of distinct dimensions dim R for irreducible representations of G, or irreducible projective representations of G. Section 3 extends the discussion to amplitudes in G-CTST for surfaces having boundaries to obtain algorithms for calculating characters. The constructions in sections 3.1,3.2, 3.3 are used to obtain some integrality properties of certain sums of characters and sums of powers of characters in section 3.4, which in turn have implications for factorisation properties of certain polynomials which are used in character algorithms [45][46][47]. The integer sums and power sums of characters appear as residues for singularities in appropriate G-CTST partition functions. For simplicity this section focuses on the untwisted case. Section 4 collects a few remarks on G-CTST: we elaborate on the connection between determinants appearing in the algorithms of sections [2,3] and plethystic exponentials of stringy amplitudes at low genus. We also comment on S-duality in G-CTST, which leads to the definition of an inverse handle creation operator. This is given as an expansion in terms of the projector basis of Z(C ω (G)), while its expansion in terms of the conjugacy class basis is an interesting question for the future.
Fourier transform and vacuum sector for G-CTST
The previous paper [11] studied computations of characters of ordinary representations of finite groups, as relevant to e.g. the AdS/CFT correspondence. In this section we generalize those computations to include discrete torsion, which twists the representations to projective representations. In broad brushstrokes, much of the analysis is formally similar to [11], so we will combine a review of the results of [11] while simultaneously describing novel features present in cases with discrete torsion.
To improve readability, we have banished a number of technical definitions and computations in cases with discrete torsion to appendix C, to which we refer as needed.
2.1 The twisted group algebra of a finite group C_ω(G)
Let G be a finite group and [ω] ∈ H²(G, U(1)). In this section we will review properties of the twisted group algebra C_ω(G) and its center H = Z(C_ω(G)), which will play an important role in our computations. Physically, the center H is the state space of a two-dimensional (twisted) Dijkgraaf-Witten theory, which we will call G-CTST for short.
Setting ω = 1 in the formulae that follow recovers formulae for centers of ordinary group algebras Z(C(G)).
The twisted group algebra C_ω(G) is a vector space with basis elements τ_g, labelled by the elements g of the group G, equipped with the product

τ_g τ_h = ω(g, h) τ_{gh},

which is generically non-commutative; here ω is a 2-cocycle representing the cohomology class [ω]. Generic elements take the form Σ_{g∈G} a_g τ_g, where a_g ∈ C. C_ω(G) has an inner product in which the basis elements τ_g are orthonormal; this determines the inner product of general elements (equation (2.4)). Now, we are interested in the center of C_ω(G), denoted H earlier, which is the subspace of C_ω(G) which commutes with τ_g for any g ∈ G. It inherits an inner product from C_ω(G) by restriction of (2.4). One basis for the center is given by twist fields, which are associated with ω-regular conjugacy classes. An element g ∈ G is said to be ω-regular if for all h commuting with g,

ω(g, h) = ω(h, g),   (2.5)

and an ω-regular conjugacy class is defined [42, section 3.6] to be a conjugacy class in which every element is ω-regular.
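As a concrete illustration of these definitions (not taken from the paper), the following sketch works with G = Z2 × Z2 and an assumed representative nontrivial 2-cocycle. It multiplies τ's, lists the ω-regular elements, and checks that the Pauli matrices furnish a two-dimensional projective representation with this cocycle.

```python
import itertools
import numpy as np

# Group G = Z2 x Z2, elements (a, b) with componentwise addition mod 2.
G = [(a, b) for a in (0, 1) for b in (0, 1)]
mul = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

# A representative nontrivial 2-cocycle (an assumption for this example).
omega = lambda g, h: (-1) ** (g[1] * h[0])

def tau_product(g, h):
    """Twisted product tau_g tau_h = omega(g, h) tau_{gh}."""
    return omega(g, h), mul(g, h)

# omega-regular elements: omega(g, h) = omega(h, g) for every h commuting with g
# (here G is abelian, so every h commutes with every g).
regular = [g for g in G if all(omega(g, h) == omega(h, g) for h in G)]
print("omega-regular elements:", regular)   # only the identity survives

# Consistency check: tau_{(1,0)} -> X, tau_{(0,1)} -> Z, tau_{(1,1)} -> XZ
# gives a 2-dimensional projective representation with this cocycle,
# so there is a single irreducible projective representation, of dimension 2.
X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]]); I = np.eye(2, dtype=int)
rep = {(0, 0): I, (1, 0): X, (0, 1): Z, (1, 1): X @ Z}
for g, h in itertools.product(G, repeat=2):
    phase, gh = tau_product(g, h)
    assert np.array_equal(rep[g] @ rep[h], phase * rep[gh])
print("Pauli assignment satisfies rho(g) rho(h) = omega(g, h) rho(gh)")
```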
Given an ω-regular conjugacy class [g] represented by g ∈ G, we define a twist field T_{[g]} [42] as a cocycle-weighted sum of the basis elements τ over the conjugacy class of g. It can be shown (see for example [16, section 2.2.1]) that the twist fields commute with all elements of the twisted group algebra C_ω(G), i.e. with τ_h for all h ∈ G, and also that the {T_{[g]}} form a basis for the center.
Note that these operators T_{[g]} depend upon the representative g of the conjugacy class: as shown in e.g. [42, section 3], a change of representative multiplies T_{[g]} by a cocycle-dependent phase. There is a second basis for the center, given by projectors associated to irreducible projective representations, which are in (noncanonical) one-to-one correspondence with ω-regular conjugacy classes. (Thus, there are as many projectors as twist fields.) Let us review some pertinent results on projective representations before defining those projectors.
Projectors will be constructed using characters of projective representations. Unlike characters of ordinary representations, characters of projective representations are not class functions, as they are not invariant under conjugation. If R is a projective representation of G, associated to some cocycle ω, and χ R denotes the character, then [ As a consistency check, it may be useful to note that (2.13) using the fact that (2.14) In fact, using this identity, one can show so we can write (2.10) as As a result, although characters of projective representations are not invariant under conjugating group elements, they are invariant under conjugating τ 's. Another important property of characters of projective representations is that they vanish on non-ω-regular group elements, see e.g. [42, section 7.2, prop. 2.2]. Now, we can define projectors, following [42, section 7.3], which are associated to irreducible projective representations, and which form another basis for the center of the twisted group algebra. These are given by where R is an irreducible projective representation. (Instead of summing over all group elements, one can equivalently sum only over ω-regular elements, as the character χ R will vanish on non-ω-regular elements.) These form a complete, mutually orthogonal, basis for the center of the twisted group algebra, meaning that they obey They also obey the relation (B.18) These two bases (of twist fields, and of projectors) are related as follows: (which formally matches the result of taking the definition (2.17) and replacing τ g ∈ C ω (G) with T [g] , an element of the center), and These Fourier transforms are known, but for completeness, as they are perhaps somewhat obscure, next we will perform a consistency check and provide derivations. As a consistency check, recall both T [g] and χ R (g) transform under conjugation. However, using the identity we see that both T [g] and χ R (g) transform in the same way under g → hgh −1 , and so the identity (2.21) is consistent. As a consequence, if C is any element of the center of the twisted group algebra, it can be expressed similarly. Write for C i ∈ C, so that We can establish (2.20) by direct computation, as follows.
dim R |G| g∈G where we have used the fact that P R is central in the group algebra.
We can establish (2.21) by direct computation, as follows.
using the index formula (B.4) and the fact that
Vacuum string amplitudes and H 0 ֒→ H
We observe that the vacuum amplitudes of G-CTST are constructed by applying the delta-function on the twisted group algebra C ω (G) to powers of a handle creation operator Π. We show in Section 2.2.1 that these powers generate a subspace of Z(C ω (G)) with dimension equal to the number of distinct dimensions (dim R) of irreducible representations of C ω (G). In section 2.2.2 we show that one point functions of twist fields on higher genus surfaces can be used to determine sums of irreducible characters over irreducible representations having the same dimension.
The handle creation operator and twist fields
One convenient way of expressing the partition function of (twisted) Dijkgraaf-Witten theory on a genus h Riemann surface is as where Π is the handle creation operator (a map C ω (G) → C ω (G) which descends to H → H) which, for twisted theories, is defined in section C.2. We can express the partition function more explicitly as follows. Using the identity (C.39), namely we have that the partition function is which matches the expression (C.28) obtained independently. We conclude that (2.45) Using the formula (C.38) for Π, the calculation of the delta function on the left-hand side can be done from the combinatorics of multiplying the elements τ g and picking up the coefficient of the identity. The formula, in the untwisted case, is well known in the mathematical literature [43,44]. The combinatoric input from the left-hand side serves to give the power sums of |G| dim R . As explained in [11], we can go from the powers sums to the integers in a finite number of steps by solving for the zeroes of a polynomial with integer coefficients. We further elaborate in section 4.1 on the stringy interpretation of the polynomial in the context of G-CTST.
Products of Z h , with appropriate symmetry factors, give us the vacuum sector of G-CTST. The vacuum sector of G-CTST defines two distinguished subspaces of H = Z(C ω (G)). Complex multiples of Π form a one-dimensional subspace of H. Powers of Π span a (generically) higher-dimensional vector subspace of H. Proposition The powers of the handle creation operator Π span a vector subspace H 0 ֒→ H which has dimension D 0 equal to the number of distinct integers dim R as R runs over the set of irreducible projective representations. Lemma then the powers of P generate a space of dimension equal to L. Proof of proposition: We can write where R runs over all the distinct irreducible projective representations, and R ′ runs over a maximal list of irreducible projective representations having distinct dimensions, whilẽ P R ′ is a sum of the projectors for irreducible projective representations with the same dimension as R ′ . The list of projectorsP R ′ spans a subspace H 0 ֒→ H of dimension D 0 . In this subspace H 0 , we can use the Lemma to show that the powers of Π span H 0 . The proposition has a physical interpretation in terms of the rank of a matrix of onepoint functions in G-CTST. Consider the one-point functions M l,[g] ≡ δ(Π l T [g] ), with l ranging from 1 to K and g ranging over representatives of all the ω-regular conjugacy classes. (In the untwisted case this reduces to the set of all the conjugacy classes.) This matrix has rank D 0 . In the case where M l,[g] is a matrix with rational entries (this is the case for all untwisted cases and when the twists ω(g, h) can all be chosen to be rational), an integer basis for the null space can be found using discrete integer matrix algorithms One approach is to use algorithms for Hermite normal forms (such as algorithm 2.4.4 of [49]) and extract the null vectors as explained for example in [36, section 4.1]. Such discrete algorithms for null vectors are available in computational group theory software GAP [50]. This gives a combinatoric algorithm, starting from group multiplication combinatorics, which produces an interesting representation theoretic integer: the number of distinct (dim R) among the irreducible (projective) representations of a (twisted) group algebra.
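The rank and integer null-space computation just described can be set up with exact rational linear algebra, for instance as in the following sketch. The matrix M below is a hypothetical stand-in for the one-point-function data M l,[g] (it is not computed from any particular group); the sketch extracts the rank and rescales the exact null vectors to integer vectors. For large integer matrices one would instead use Hermite normal form routines in GAP, as mentioned above.

import sympy as sp
from functools import reduce

# Hypothetical rational matrix of one-point functions M_{l,[g]} (rows: powers of Pi,
# columns: representatives of omega-regular classes); replace with actual G-CTST data.
M = sp.Matrix([[3, 6, 9],
               [5, 10, 15],
               [2, 4, 6]])

print(M.rank())      # for actual data this rank would equal D_0, the number of distinct dim R

# exact null space, rescaled to integer null vectors by clearing denominators
for v in M.nullspace():
    scale = reduce(sp.lcm, [sp.fraction(c)[1] for c in v])
    print((v * scale).T)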
Character algorithm from higher genus one-point functions
By considering the one-point functions δ(Π l T [g] ) on general genus, for fixed [g], we can extract information about characters of χ R (T µ )/dim R. Consider for the range l ∈ {1, 2, · · · , D 0 }, where we have used the identity (C.67). The primed sum runs over a maximal set {R ′ } of irreducible representations R ′ having distinct dimensions. The sum over {R : R ′ } is a sum over the distinct irreducible representations R with the same dimension as R ′ . Let us defineR ′ to be the direct sum of irreducible projective representations R with the same dimension as R ′ . Then we can write (2.50) As h runs over the set {1, · · · , D 0 }, we have a linear system of equations of size D 0 × D 0 for the normalized characters χR ′ (g)/ dim R ′ . As R ′ and l range over the D 0 possibilities, we have a matrix and we recognize V as a Vandermonde matrix. Since the R ′ have been chosen to run over a set of irreducible (projective) representations with distinct dimensions, the integers are distinct. This ensures that V is invertible. The inverse matrix can thus be used to construct the normalized characters X R ′ from the combinatoric G-CTST data Y h . As explained earlier, the construction of the ratios from G-CTST data follows using the formulae in section 2 in the twisted case, using the same algorithm described for the untwisted case in [11].
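Concretely, once the distinct eigenvalues are known, the Vandermonde system can be inverted exactly, for example as in the sketch below. Both the eigenvalue list and the data vector Y are placeholders (hypothetical numbers, not derived from a specific group); in practice the eigenvalues would be the distinct values found in section 2.2.1 and Y would be the higher-genus one-point functions.

import sympy as sp

lams = [36, 9]                            # hypothetical distinct eigenvalues, D_0 = 2
Y = sp.Matrix([sp.Rational(5, 2), 81])    # hypothetical one-point-function data Y_1, Y_2

# V_{h, R'} = lams[R']^h for h = 1, ..., D_0; invertible since the entries of lams are distinct
V = sp.Matrix(len(lams), len(lams), lambda i, j: sp.Integer(lams[j]) ** (i + 1))
X = V.solve(Y)                            # the normalized character sums, one per eigenvalue
print(X)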
Character algorithms and string amplitudes
In the AdS/CFT correspondence, one is led in connection with toy models of black hole information loss [39] to consider questions of when sequences of central elements suffice to distinguish representations and multiplicatively generate the center of the group algebra [38]. In the context of TQFTs such as Dijkgraaf-Witten theory, it is natural to supplement such lists by the handle creation operator. To this end, in this section we present some general statements about subsets that multiplicatively generate the center of a (twisted) group algebra. We also use these generating subspaces to give algorithms for the construction of characters from string amplitudes in G-CTST. In the last subsection, we use these constructions to derive some integrality properties of characters and factorisation properties of character polynomials.
Minimal generating subspaces of (twisted) group algebras
We will say that a set of elements 1 {C 1 , C 2 , · · · , C k }, with C i ∈ Z(C ω (G)), multiplicatively generate Z(C ω (G)) if every element T ∈ Z(C ω (G)) can be written as a linear combination of products of elements C i : T = n 1 ,n 2 ,··· ,n k ≥0 t n 1 ,n 2 ,··· ,n k C n 1 1 C n 2 2 · · · C n k k . (3.1) The coefficients t n 1 ,n 2 ,··· ,n k are in C, and C 0 is defined as 1, the identity element of the group algebra. Proposition The following two statements are equivalent: The proof uses the fact that each element C has an expansion in projectors P R given by (2.27), which we repeat here: where the P R form a complete set of orthogonal projectors, as in equation (2.18). Consider first the case where k = 1, and a single element C 1 ∈ Z(C ω (G)) has the property that { χ R (C 1 ) dim R } distinguishes the irreducible representations R. The following fact is useful.
We know that Z(C ω (G)) is spanned by the projectors P R . Since (in the case k = 1) C 1 can be written as a linear combination of P R with distinct coefficients, the lemma above implies each P R can be written as a linear combination of powers of where R ′ runs over a set of irreducible representations with distinct normalized characters χ R ′ (C 1 )/ dim R ′ and P R ′ is the sum of projectors P R for all R such that Let us define [C 1 ] R ′ to be this set of irreducible representations R with the same normalized characters as R ′ . Then we may write Let us denote the number of distinct R ′ in the sum for C 1 in (3.4) by K 1 , where by assumption K 1 ≤ K −1. Using the Lemma, we can write each P R ′ as a linear combination of powers of C 1 . The largest power in these expressions is (K 1 − 1). Consider now, for each R ′ , By assumption, dim R are distinct as R ranges over the set [C 1 ] R ′ . This means that we can apply the Lemma to express P R as a linear combination of powers of the form The powers l range up to K a 1 ;R ′ − 1. Since the P R ′ have already been expressed in terms of powers of C 1 , we conclude that each P R can be expressed as a linear combination of powers of {C 1 , C 2 }. We can express this more symmetrically by writing where the sums run over representations with distinct normalized characters, and the projectorsP are defined with respect to the various sets [C i ].
It is easy to see that this argument can be iterated for the cases of multiplicative generating subsets with more elements (k > 2).
We now describe another way to see that any projector P R is a linear combination of products of central elements {C 1 , C 2 . · · · , C k } with the property given in (2) of the proposition. For each C i , we can write where R ′ i runs over a maximal set S i of irreducible representations with distinct normalized characters We have introduced the notation [R ′ i : C i ] for the set of irreducible representations R i with the property that (3.13) The set S i is not unique because the sets [R ′ i : C i ] generically have more than one element, but we will make a choice of S i . Using the Lemma, the projectors P R ′ i can be written as a linear combination of powers of C i . Now we know, by assumption, that any irreducible representation R is uniquely characterised by its normalised characters This means that there is a unique list This list is defined by the property that It follows that In the next several subsections we will apply these ideas to examples of sets of twist operators motivated by AdS/CFT, sometimes combined with handle creation operators as also motivated by Dijkgraaf-Witten theory.
Untwisted example: Z n
The group Z n has n irreducible representations, which we label ρ r for r ∈ {0, · · · , n − 1}. If g denotes the generator of Z n , and ξ = exp(2πi/n) the generator of nth roots of unity, then ρ r (g) = ξ r = χ r (g). (3.21) From (2.7), the twist fields are and from the definition (2.17), we have that the projectors are In particular, in this case the center of the group algebra C(Z n ) coincides with the group algebra, and has dimension n. From (C.39) we have that the handle creation operator is Thus, we see that in this example the handle creation operator and its powers can only ever generate a one-dimensional subspace of the center of the group algebra. This is expected from section 2 since all the irreducible representations are one-dimensional, so the number of distinct values of dim R (D 0 in the discussion of section 2) is 1. Now, let us turn to the question of constructing multiplicative generators. Consider for example the case of Z 3 . Let g denote the generator of the group, and R 1 , R 2 the two nontrivial representations, then the character table is given in table 1, where ξ generates cube roots of unity. In this case, we see that the irreducible representations are uniquely determined by the (normalized) characters of g, and it is also easy to check that T [g] generates all the twist fields multiplicatively: (3.28)
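These statements are straightforward to verify numerically on the group algebra itself. The sketch below represents elements of the untwisted C(Z 3 ) by coefficient vectors in the τ basis (so that multiplication is cyclic convolution), checks that the projectors are orthogonal idempotents, and confirms that the powers of T [g] span the full three-dimensional center.

import numpy as np

n = 3
xi = np.exp(2j * np.pi / n)

def mult(a, b):
    # multiplication in C(Z_n): cyclic convolution of coefficient vectors
    c = np.zeros(n, dtype=complex)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

# projectors P_r = (1/n) sum_g xi^(-r g) tau_g, written as coefficient vectors
P = [np.array([xi ** (-r * g) for g in range(n)]) / n for r in range(n)]

for r in range(n):
    for s in range(n):
        expected = P[r] if r == s else np.zeros(n)
        assert np.allclose(mult(P[r], P[s]), expected)

# powers of T_[g] = tau_g (g the generator) span the whole center C^3
Tg = np.zeros(n, dtype=complex)
Tg[1] = 1
powers = []
cur = np.zeros(n, dtype=complex)
cur[0] = 1                                      # tau_e, the zeroth power of T_[g]
for _ in range(n):
    powers.append(cur.copy())
    cur = mult(cur, Tg)
print(np.linalg.matrix_rank(np.array(powers)))  # 3: the powers span the whole center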
Untwisted example: D 4
List the elements of the dihedral group D 4 as {1, z, a, b, az, bz, ab, ba = abz}, (3.29) where z generates the Z 2 center. D 4 has five irreducible representations: four one-dimensional representations, and one two-dimensional representation. The character table of D 4 is given in table 2.
Since there are five conjugacy classes (also five irreducible representations), the center Z(C(D 4 )) has dimension five. Note, however, that knowing the normalized characters of just two conjugacy classes suffices to distinguish characters. For example, from table 2, the characters of T [a] , T [b] suffice to distinguish all the irreducible representations. (By contrast, for example, the normalized characters of T [1] and T [z] can only be used to distinguish the two-dimensional representation from the one-dimensional representation, but cannot distinguish between the one-dimensional representations.) This tells us that although the center Z(C(D 4 )) is a five-dimensional vector space, it is generated multiplicatively by T [a] and T [b] , for example. Indeed, from (2.7) one finds and it is straightforward to check that (3.32) Thus, the products of nonzero powers of T [a] and T [b] generate themselves, T [ab] , and the combination 1 + T z , and when we include the zeroth power (the identity element), we get all of the elements of the center.
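Checking which pairs of classes distinguish the irreducible representations amounts to comparing columns of normalized characters, as in the sketch below. The table is the standard character table of D 4 ; the class ordering and the identification of the columns labelled a and b with the two reflection-type classes are assumptions made for illustration (they match table 2 only up to a possible relabeling).

# rows: irreps of D_4; columns: conjugacy classes [1], [z], [a], [b], [ab]
# (standard D_4 character table, with the two reflection classes in the a, b columns)
table = [
    [1,  1,  1,  1,  1],
    [1,  1,  1, -1, -1],
    [1,  1, -1,  1, -1],
    [1,  1, -1, -1,  1],
    [2, -2,  0,  0,  0],
]
dims = [row[0] for row in table]
classes = ['1', 'z', 'a', 'b', 'ab']

def distinguishes(cols):
    # do the normalized characters of the chosen classes separate all irreps?
    keys = [tuple(row[c] / d for c in cols) for row, d in zip(table, dims)]
    return len(set(keys)) == len(table)

print(distinguishes([classes.index('a'), classes.index('b')]))   # True
print(distinguishes([classes.index('1'), classes.index('z')]))   # False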
Untwisted example: S n
This question for the case of S n is motivated by AdS/CFT and was recently studied [38] in untwisted cases. In that paper, central elements T k correspond to conjugacy classes defined by permutations with a single non-trivial cycle of length k, and remaining cycles of length 1. For any Z(C(S n )), the set {T 2 , T 3 , · · · , T n } generates the center [38]. (Since there is no discrete torsion in this example, twist fields depend only upon conjugacy classes, not upon representatives, and so we only list the former.) Typically a much smaller set {T 2 , T 3 , · · · , T k * (n) } generates the center [38], where k * (n) is much smaller than n, which is equivalent to the statement that the normalized characters distinguish the irreducible representations R. For example, the single normalized character χ R (T 2 )/ dim R distinguishes R for n up to 5 and for n = 7. The normalized characters of T 2 , T 3 distinguish the Young diagrams up to n = 14. Using the formulae for normalized characters given in [51,52], these lists were constructed for all the R at fixed n, and verified (in Mathematica) to be distinct for n up to 14. For tests at higher n (up to 80) it was convenient to convert the question (using formulae in [51,52]) of comparing lists of normalized characters to a question of comparing lists of power sums of contents of Young diagrams (for the precise procedure see [38]).
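As a small self-contained check of the first of these statements, the sketch below hard-codes the standard character table of S 4 (classes ordered (1 4 ), (2, 1 2 ), (2, 2), (3, 1), (4) with sizes 1, 6, 3, 8, 6; these values are assumed rather than computed) and confirms that the normalized character of T 2 alone already separates the five irreducible representations.

from fractions import Fraction

sizes = [1, 6, 3, 8, 6]       # class sizes of S_4 in the order (1^4), (2,1^2), (2,2), (3,1), (4)
table = [
    [1,  1,  1,  1,  1],      # trivial
    [1, -1,  1,  1, -1],      # sign
    [2,  0,  2, -1,  0],      # 2-dimensional
    [3,  1, -1,  0, -1],      # standard
    [3, -1, -1,  0,  1],      # standard tensor sign
]

col_T2 = 1                                # the class of transpositions
norm = [Fraction(sizes[col_T2] * row[col_T2], row[0]) for row in table]
print(norm)                               # the values 6, -6, 0, 2, -2, printed as Fractions
print(len(set(norm)) == len(table))       # True: chi_R(T_2)/dim R separates the irreps of S_4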
These numbers are not unique: −3 and 3 each appear twice. This means that T 2 does not generate the center of the group algebra of S 6 (as in [38]) but Π and T 2 together do. Computations in GAP also show the lists {dim R, χ R (T 2 ), χ R (T 3 ), χ R (T 4 )} distinguish all the irreducible representations for S n at n up to at least 30. This means that the center is generated by {Π, T 2 , T 3 , T 4 } for C(S n ) with n up to at least 30.
For later comparisons, we give the character table of S 4 in table 3, where (1 4 ) is the conjugacy class of the identity.
Untwisted example: S̃ n
The group S̃ n is a central extension of the symmetric group S n by Z 2 : It is described in [53, chapter 2] by generators z, t 1 , t 2 , · · · , t n−1 and relations The character table of S̃ 4 is given in table 4 (from [53, table 4.7]). From table 4, we see for example that the normalized characters of the conjugacy classes (31) ′ and (4) ′ uniquely distinguish all the representations, hence, using the proposition in section 3.1, we expect that the center Z(C(S̃ 4 )) is multiplicatively generated by twist fields corresponding to those two elements.
Twisted example: Z 2 × Z 2
Let us now turn to a simple twisted example, namely where G = a, b , and with ω(g, h) = +1 for other g, h. The only ω-regular conjugacy class in this case is {1}. From the definition (2.7), the twist fields are (Although there is only one ω-regular conjugacy class, we can certainly compute twist fields for other conjugacy classes, though as we see we do not get any further twist fields.) There is only one irreducible projective representation [42, section 3.7], which we label ρ. It is two-dimensional, and for the 2-cocycle above can be represented by in the sense that ρ(g)ρ(h) = ω(g, h) ρ(gh). (3.43) From this and the definition (2.17), one quickly computes that the single projector is given by essentially because only ρ(1) has a nonzero trace. Then, using the identity (C.39), the handle creation operator is easily computed to be In this case, the center of the twisted group algebra is also one-dimensional, corresponding to complex multiples of the identity, and so Π generates the center, essentially trivially.
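The defining relation ρ(g)ρ(h) = ω(g, h) ρ(gh) is easy to verify numerically. The sketch below uses the Pauli-matrix realization ρ(a 1 , a 2 ) = σ x a 1 σ z a 2 together with the representative cocycle ω((a 1 , a 2 ), (b 1 , b 2 )) = (−1) a 2 b 1 ; this choice of representative is an assumption made for the check (any representative of the nontrivial class would do, and it need not coincide literally with the one used in the text).

import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

def rho(g):                       # g = (a1, a2) in Z_2 x Z_2, realized by Pauli matrices
    return np.linalg.matrix_power(sx, g[0]) @ np.linalg.matrix_power(sz, g[1])

def omega(g, h):                  # a representative 2-cocycle in the nontrivial class
    return (-1) ** (g[1] * h[0])

elems = [(a, b) for a in range(2) for b in range(2)]
for g in elems:
    for h in elems:
        gh = ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)
        assert np.allclose(rho(g) @ rho(h), omega(g, h) * rho(gh))

# the character is nonzero only on the identity, consistent with the single projector
# being proportional to tau_1 as stated in the text
print([np.trace(rho(g)) for g in elems])     # [2, 0, 0, 0]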
In passing, let us also compare to the character table of D 4 , table 2. Since D 4 is an extension of Z 2 ×Z 2 , it includes information about the irreducible projective representation of Z 2 × Z 2 , which in this case is an honest representation of D 4 . Looking at table 2, we see the first four D 4 representations descend to representations of Z 2 × Z 2 , because they take the same value on z as on the identity. The fifth representation, the two-dimensional one, takes a different value on z than on 1, and so does not arise from an ordinary representation of Z 2 × Z 2 . This representation corresponds to the irreducible projective representation of Z 2 × Z 2 .
Twisted example: D 4
Now, consider the 2n-element dihedral group G = D n . This can be generated by a, b, such that For simplicity, we assume n is even. This has a nontrivial element of H 2 (D n , U(1)), given by where ǫ generates the nth roots of unity. For n even, b n/2 is central, and the dihedral group D n has n/2 irreducible projective representations, each two-dimensional, described as follows 2 [42, section 3.7]. For r ∈ {1, · · · , n/2}, define and then the rth representation is given by for i ∈ {0, · · · , n − 1} and j ∈ {0, 1}.
To make this more concrete, we specialize to D 4 , which has center Z 2 , generated by b 2 . Here, H 2 (D 4 , U(1)) = Z 2 , with a representative of the nontrivial cocycle given above. The conjugacy classes in D 4 are of which only two are ω-regular, namely {1} and {b, b 3 }. From (2.7), twist fields are where ǫ generates fourth roots of unity, hence we can take ǫ = i. (Only for the ω-regular conjugacy classes are the twist fields produced by (2.7) nonzero. Also, although b, bz are in the same equivalence class, T [g] is not invariant under conjugation, but instead are related by (2.9), as is easily checked to relate Since there are two ω-regular conjugacy classes, there are two (two-dimensional) irreducible projective representations, which are given by Since there are two irreducible projective representations, the twisted group algebra of D 4 has a two-dimensional center. We give the character using the fact that ǫ 2r + ǫ 2−2r = 0. (As a consistency check, it is straightforward to show that P 2 r = P r , P 1 P 2 = 0, and P 1 + using the fact that P 1 + P 2 = 1. We see immediately that Π 2 ∝ Π, and so the handle creation operator generates a one-dimensional subspace of the two-dimensional center of the twisted group algebra of D 4 . On the other hand, note that hence the center can be multiplicatively generated by T [b] alone, which is consistent with table 5.
Character algorithms and generating subspaces
In [11, section 3.1] the first author and his collaborators interpreted the Burnside construction [45]( see [46,47] for subsequent improvements) in terms of (untwisted) combinatoric amplitudes on genus one surfaces. The key formula, which takes the same form in the twisted case, is (C.67), which implies (3.59) Using the power sums, we solve a polynomial equation to get the normalized characters for the twist fields T [g] . The polynomial equation is actually the eigenvalue equation for the matrix of structure constants (C After the normalized characters have been found, the dimensions can be found using the orthogonality relation (B.4), which implies (3.60) It is interesting to consider the implications for the character algorithms of knowing a subset of (ω-regular) conjugacy classes whose normalized characters determine the irreducible representations. Suppose a set of central elements {C 1 , C 2 , · · · , C k } (possibly including Π) are known to multiplicatively generate the center Z(C ω (G)) of a (possibly twisted) group algebra. In the case of the untwisted group algebra of S n (for n < 80) it has been shown [38] that there are interesting small (compared to n) subsets which have this property. In section 3.3 we explain how to find such minimal generating subsets.
Let us first consider the case where a single operator C 1 ∈ Z(C ω (G)) multiplicatively generates the center, as we have seen occurs in examples in sections 3.1.1, 3.1.3. In this case, following a construction similar to the use of the Vandermonde matrices in section 2, we can compute the characters of any (represented, ω-regular) conjugacy class C µ from the genus one amplitudes associated with C 1 , T µ and the normalized characters of C 1 . Specifically we start with the string amplitudes (C.67) (It suffices to only consider k ∈ {0, 1, · · · , K − 1}, where K is the number of conjugacy classes.) In terms of the Vandermonde matrix the expression (3.61) is an invertible linear system of equations relating string amplitudes to the normalized characters of T µ . By using the inverse of the Vandermonde matrix, we can solve for the normalized characters χ R (T µ )/ dim R in terms of the string amplitudes in (3.61) and the normalized characters of the generator C 1 , both assumed known. Suppose now that {C 1 , C 2 } are a minimal set that multiplicatively generate Z(C ω (G)), as we have seen in examples in sections 3.1.2, 3.1.3, 3.1.6. In such cases the lists {χ R (C 1 )/ dim R, χ R (C 2 )/ dim R} uniquely determine the irreducible representations R. Now, we can again consider the problem of determining the normalized characters for a general conjugacy class (with specified representative) T µ , from the string amplitudes. Start with the amplitudes (3.63) (As before, it suffices to restrict to k ∈ {0, 1, · · · , K ′ − 1}, where K ′ is the number of distinct normalized characters χ R (C 1 )/ dim R.) Let R ′ run over a set of irreducible representations (of size K ′ ) with distinct normalized characters χ R ′ (C 1 )/ dim R ′ , and [R : R ′ ] over the irreducible representations with the same normalized characters as R ′ . We write we now determine the sums ranging over the distinct irreducible representations R having the same normalized character χ R (C 1 )/ dim R as R ′ . We denote the number of such R (the number of elements of [R : R ′ ]) by D 1;R ′ . Using the fact that distinguish all irreducible representations, we know that for any R ′ , as R ranges over the set [R : R ′ ], the list {χ R (C 2 )/ dim R} has no repeated elements. Now for each R ′ , and each l ∈ {0, 1, · · · , D 1,R ′ − 1} we can consider As k ranges over {0, 1, · · · , K ′ − 1}, we have a linear system of equations for given by the invertible K ′ × K ′ Vandermonde matrix (3.65). By using the inverse of the Vandermonde matrix, we obtain Collecting the results for all the l ∈ {0, 1, · · · , D 1,R ′ − 1}, we now have a linear system for χ R (T µ )/ dim R for all the R in the set [R; R ′ ], given by the invertible D 1,R ′ × D 1,R ′ Vandermonde matrix with matrix elements By inverting the Vandermonde matrix, we obtain χ R (T µ )/ dim R for all R with the property that It is clear that the above procedure can be iterated to give a procedure for constructing normalized characters T µ in cases where a longer list distinguish the irreducible representations (equivalently {C 1 , C 2 , · · · , C k } generate the center). Note that the generating set of central elements can all be obtained by averaging over fixed conjugacy classes, and may also include central operators such as the handle operator Π as discussed in section 3.1.
Untwisted example: Z n
In this section we will illustrate the method in a case with well-known results, specifically, the case G = Z 3 . As discussed in section 3.1.1, if g generates the group Z n , then T g generates the center multiplicatively. Following the prescription given above, the Dijkgraaf-Witten amplitudes determine the normalized characters of any other conjugacy class. Specifically, write (3.75) Using for ξ a generator of cube roots of unity, hence Let us also take as given the string amplitudes From these string amplitudes we then compute matching the known result for each representation R. Similarly, matching the result Finally, matching the result Again, we emphasize that the point of this section is merely to illustrate the method in a simple well-known example.
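The whole chain, from combinatorially computed amplitudes to the Vandermonde inversion, can be run end to end for Z 3 in a few lines. The sketch below assumes the untwisted normalization δ(P R ) = (dim R) 2 /|G| quoted later in the text, so that δ(T g k T µ ) = (1/|G|) Σ r χ r (g) k χ r (g m ) for the one-dimensional irreps of Z 3 , and recovers χ r (g m ) = ξ rm .

import numpy as np

n, m = 3, 2                       # work in C(Z_3); take T_mu = tau_{g^m}
xi = np.exp(2j * np.pi / n)

def mult(a, b):                   # cyclic convolution = multiplication in C(Z_n)
    c = np.zeros(n, dtype=complex)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

def delta(a):                     # coefficient of the identity element
    return a[0]

# "string amplitude" data Y_k = delta(T_g^k T_mu), computed purely combinatorially
Tg = np.zeros(n, dtype=complex)
Tg[1] = 1
Y = []
cur = np.zeros(n, dtype=complex)
cur[m] = 1                        # T_mu
for k in range(n):
    Y.append(delta(cur))
    cur = mult(cur, Tg)

# delta(T_g^k T_mu) = (1/|G|) sum_r (xi^r)^k chi_r(g^m), so invert the Vandermonde system
V = np.array([[xi ** (r * k) for r in range(n)] for k in range(n)])
chars = np.linalg.solve(V, n * np.array(Y))
print(np.round(chars, 6))                             # recovered chi_r(g^m)
print(np.round([xi ** (r * m) for r in range(n)], 6))  # known answer xi^(r m)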
Twisted example: Z 2 × Z 2
Let us apply the algorithm above to the case of Z 2 × Z 2 with a twist, as discussed in section 3.1.5. As discussed there, the center is one-dimensional, generated by Π. Now, suppose we are given the string amplitudes and we want to compute the normalized characters of T [1] . (Clearly, this will be trivial, but [1] is the only ω-regular conjugacy class, so for purposes of illustrating the method, we will walk through this example.) From (C.67), we know that which is a linear system of equations relating the normalized characters χ R (1)/ dim R to the Y k and the Vandermonde matrix and can be written in the form where χ is the vector of normalized characters In the present case, Z 2 × Z 2 with a twist, there is only one irreducible projective representation, of dimension 2, hence 93) so our system of equations is simply (To be clear, this is many equations for one unknown, which is why in general we restrict to a finite number of values of k.) In principle this allows one to compute the normalized characters in terms of the Y k . In this particular case, it is a fact that Y k = 2 2k , so we see that a result which will not surprise the reader, but which will hopefully help to illuminate the idea of the method.
Twisted example: D 4
Now, let us apply these ideas to the case of D 4 with a twist, using the computations in section 3.1.6. Here, let us take the (two-dimensional) center of the twisted group algebra to be generated by {T [b] }, and use the string amplitudes (Dijkgraaf-Witten correlation functions) to compute the normalized characters and reproduce the character table 5.
As before, suppose we are given the string amplitudes which are related to the normalized characters of T µ by As there are only two irreducible projective representations, it suffices to take k ∈ {0, 1} and write V k,R as the entries of a matrix one can then compute normalized characters from string amplitudes, formally as which implies correctly matching table 5.
Twisted example: S n
In this section we discuss the symmetric group S n with discrete torsion. First, let us describe the discrete torsion. We can do this implicitly using the extension S̃ n presented in section 3.1.4, and comparing to the presentation of S n itself in section 3.1.3. Specifically, the extension is determined by an element of H 2 (S n , Z 2 ), which maps into H 2 (S n , U(1)) and so determines an element of discrete torsion.
We can compute the cocycle as follows, following [53, pp. 9-10]. Let θ :S n → S n be the projection, with kernel {1, z}, and let r be a section, meaning a map r : S n →S n , such that θ(r(a)) = a and r(1) = 1. A cocycle is given explicitly by where r(a)r(b) = z nr(a,b) r(ab).
(3.106)
We can pick The section can also be used to construct projective representations of S n from the ordinary representations ofS n . Given representation matrices R(g) forg ∈S n , one gets projective representation matrices P (g) as P (g) = R(r(g)) (3.108) as in [53, Theorem 1.4].
As an example to illustrate the use of the above equations, consider the symmetric group S 4 . It has three generators {x 1 , x 2 , x 3 }, which are the adjacent transpositions x 1 = (1, 2), x 2 = (2, 3), x 3 = (3, 4). The section r is defined by mapping words in the x i to words in t i . As an example of cocycle factors deduced from the above equations, note that Using the projection θ and the section r, the above equations specify a map from C(S 4 ) to C ω (S 4 ). Using the character table forS 4 (Table 4), we note that the characters for elements inS 4 , in the last three rows associated with non-trivial twist, and corresponding to cycle structures (2, 1 2 ), (3, 1) are zero. This means that the only non-zero ω-regular classes in C ω (S 4 ) correspond to cycle structures (1 4 ), (3, 1), (4). The equality of the number of ω-regular conjugacy classes and irreducible projective reps illustrates our discussion of the center Z(C ω (G)): we observed that there is a basis for the center in terms of twist operators labelled by ω-regular conjugacy classes and another basis in terms of projectors, labelled by irreducible projective representations. The splitting of (3, 1) and (4) into two columns illustrates the fact that characters are not class functions in the case of projective representations. Focusing on the column (4) ′ , and taking into account the dimensions of irreducible projective reps (given in the last three entries in the first column labelled by (1 4 )), we find that the normalized characters {1/ √ 2, −1/ √ 2, 0} distinguish the three irreducible projective reps. Following our discussion in section 3.1, this means that a central element labelled by conjugacy class (4) can be used to multiplicatively generate the center of Z(C ω (S 4 )).
Algorithm for minimal generating subsets
In the above, we have assumed we are given central elements which distinguish irreducible representations, or equivalently, multiplicatively generate the center Z(C ω (G)). In this section, we outline an algorithm finding a minimal generating subset of the center of a twisted group algebra, using the topological field theory amplitudes. We start with a central element C a . We can determine its normalized characters using the Burnside algorithm [45][46][47], equivalently as explained earlier, by considering genus one amplitudes with insertions of boundaries labelled by C a . If the number of distinct eigenvalues, i.e. the number of distinct normalized characters χ R (Ca) dim R is equal to the dimension of Z(C ω (G)), then we know that C a generates the center. But suppose the number of distinct eigenvalues is smaller. Let us ask how to determine whether adding another central element C b indeed generates the center. This can be done by considering the structure constants of the multiplication operator for C a , C b in the basis of central elements labelled by conjugacy class operators T µ These structure constants can be obtained from G-CTST amplitudes on the sphere: We know from (2.27) that the projectors P R obey If C a , C b generate the center, then the simultaneous eigenspaces of the matrices (C a ), (C b ) are one-dimensional with eigenvalues Motivated by AdS/CFT applications of minimal generating subspaces, we can start with the twist field associated to (a representative of) the smallest conjugacy class T a 1 (excluding the conjugacy class of the identity) and the associated structure constant matrix C a 1 obtained from G-CTST amplitudes involving T a 1 , then alongside consider C a 2 for the next smallest conjugacy class. If the simultaneous eigenspaces are one-dimensional, we have a generating subspace spanned by (T a 1 , T a 2 ). If the simultaneous eigenspaces are more than one-dimensional, we add another central element T a 3 and simultaneously diagonalize C a 1 , C a 2 , C a 3 . If the eigenspaces are one-dimensional, then the ordered lists of eigenvalues of {C a 1 , C a 2 , C a 3 } which give can be used to label the irreducible representations.
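A practical way to test whether the joint eigenspaces of a set of commuting structure-constant matrices are all one-dimensional is to ask whether a generic linear combination of them has simple spectrum. The sketch below implements this test for an arbitrary list of commuting matrices; the matrices in the example are placeholders, not structure constants computed from a specific group.

import numpy as np

def joint_eigenspaces_are_simple(C_list, trials=5, tol=1e-8):
    # Heuristic check: a generic real linear combination of commuting diagonalizable
    # matrices has K distinct eigenvalues exactly when all simultaneous eigenspaces
    # are one-dimensional.
    K = C_list[0].shape[0]
    rng = np.random.default_rng(0)
    for _ in range(trials):
        coeffs = rng.standard_normal(len(C_list))
        M = sum(c * C for c, C in zip(coeffs, C_list))
        evals = np.sort_complex(np.linalg.eigvals(M))
        if all(abs(evals[i + 1] - evals[i]) > tol for i in range(K - 1)):
            return True
    return False

# placeholder example: two commuting 3x3 matrices with a two-dimensional joint eigenspace
A = np.diag([1.0, 1.0, 2.0])
B = np.diag([3.0, 3.0, 5.0])
print(joint_eigenspaces_are_simple([A, B]))                          # False
print(joint_eigenspaces_are_simple([A, np.diag([3.0, 4.0, 5.0])]))   # True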
To find the eigenvalues for these basis elements in a minimal generating subspace, we have to solve the eigenvalue equations for the K × K matrices, where K is the dimension of the center. For the characters of the remaining conjugacy classes, we use the inversion of Vandermonde matrices of smaller size as explained above.
G-CTST and properties of characters of finite groups
In this section we will use the properties of handle-creation operators in G-CTST from section 2, and the AdS/CFT-inspired construction of characters using minimal generating subspaces from the previous subsections 3.1, 3.2, 3.3, to derive certain integrality properties of residues of poles of partition functions appearing in G-CTST.
Along the road to those physics results, we will derive some mathematical properties of characters of finite groups. We expect that these properties are already known in the mathematical literature; we are not claiming any fundamental mathematical novelty. We include them because they follow from the framework of G-CTST and are related to the properties of singularities in generating functions arising therein. The methods in the proof are based on the combinatorics of group multiplications along with linear algebra. Similar methods have been used to obtain integrality properties of characters in, for example, [54]. A comprehensive textbook discussion of these properties is in Chapter 3 of [55].
In section 3.4.1 we begin by deriving integrality properties for sums of characters of a given conjugacy class C µ , where the characters are being summed over certain restricted classes of irreducible representations. The restrictions depend on the dimension of the irreducible representation or the character of certain additional specified conjugacy classes, where these conjugacy classes have the property that all their characters are integers. In section 3.4.2 we extend the discussion to obtain integrality properties of sums of powers of characters, where the sums are constrained by similar restrictions as in 3.4.1. We show that the integrality of these power sums is equivalent to factorisation properties of polynomials arising in the Burnside algorithm [45][46][47] for the computation of characters, which we will refer to as Burnside character polynomials. In section 3.4.3 we show that the integer sums of normalized characters considered in 3.4.1 and 3.4.2 arise as residues of singularities in generating functions of G-CTST.
For simplicity we will restrict to Dijkgraaf-Witten theories without discrete torsion (twisting) in this section.
Integrality properties of some character sums
In this subsection we will derive some properties of characters that we will use in the analysis of poles of G-CTST generating functions.
First, it is useful to rewrite (3.59) with an adjusted normalization (3.119) The ratios |[g]|χ R (g)/ dim R in the right-hand side are known to be algebraic integers. This follows from the fact that eigenvalues of integer matrices (in this case, the matrix of structure constants of multiplication by the central elements |[g]|T [g] in Z(C(G))) are algebraic integers (see e.g. [31, chapter 3]). It is also known that algebraic integers form a ring. Hence a sum of algebraic integers is an algebraic integer. Thus, the sum (where the sum is over all the irreducible representations R with a fixed dim R ′ = dim R as in section (2.2.2)) is an algebraic integer. It is useful to rewrite (2.50) with the normalization (3.121) For the untwisted case C(G) the left-hand side gives a sequence of rational numbers for different values of h. In section (2.2.2) we inverted the Vandermonde matrix of integers, applied it to a finite vector with the rational numbers on the left-hand side above, to give the characters χR ′ (g)/ dim R ′ . Applying the same procedure here, we see that the normalized characters |[g]|χR ′ (g)/ dim R ′ are rational numbers. Now, any algebraic integer which is rational is also integer (see e.g. [31,chapter III]). This means that the sums of normalized characters in (3.120) are always integers, for any C(G) (even though the individual terms in the sum may not be integers).
To summarize, these arguments suggest the following Proposition 3.4.1-I: The sum of normalized characters over all the irreducible representations R of a fixed dimension dim R ′ is an integer for any finite group G. This is easy to verify in examples by inspection of finite group character tables. In addition, we expect that the statement above, as well as the other propositions in this section, likely already exist in the literature, though we are not able to give precise references. We include them here because we will use these results in the analysis of poles of G-CTST generating functions. We are not claiming any fundamental mathematical novelty.
It is also known that the characters χ R (g) are algebraic integers (e.g. [31, chapter III]), hence the sum is an algebraic integer. The rationality of |[g]|χR ′ (g)/ dim R ′ explained above also implies that χR ′ (g) is rational. Using again the fact that rational algebraic integers are integers, we conclude that χR ′ (g) are integers. We state this as Proposition 3.4.1-II: The sum of the characters χ R (g), over all irreducible representations R with a fixed dimension dim R ′ , is an integer, for any finite group G.
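Both integrality statements are easy to spot-check on explicit character tables. The sketch below does so for S 4 , using the same hard-coded standard table and class sizes as before, and verifies that the sums of |C µ |χ R (g)/ dim R and of χ R (g), taken over the irreps of each fixed dimension, are integers for every class.

from fractions import Fraction

sizes = [1, 6, 3, 8, 6]                    # class sizes of S_4: (1^4), (2,1^2), (2,2), (3,1), (4)
table = [
    [1,  1,  1,  1,  1],
    [1, -1,  1,  1, -1],
    [2,  0,  2, -1,  0],
    [3,  1, -1,  0, -1],
    [3, -1, -1,  0,  1],
]
dims = [row[0] for row in table]

for d in set(dims):
    rows = [row for row in table if row[0] == d]
    for mu, size in enumerate(sizes):
        s_norm = sum(Fraction(size * row[mu], d) for row in rows)   # Proposition 3.4.1-I
        s_char = sum(row[mu] for row in rows)                        # Proposition 3.4.1-II
        assert s_norm.denominator == 1 and isinstance(s_char, int)
print("integrality verified for all classes and dimensions of S_4")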
A corollary of the discussion on integrality of character sums above is that, for every irreducible representation R of a finite group G which has a unique value of the dimension, i.e. a value not shared by any other irreducible representation, the characters |[g]|χ R (g)/ dim R and χ R (g) are integers for g in any conjugacy class.
Following the discussion in section 3.2 where we consider linear systems for a given dim R using a pair of central elements, we can generalize the above argument. We start again with the untwisted case C(G). Consider central elements {C 1 , C 2 }, chosen to have the property that χ R (C 1 ) dim R and χ R (C 2 ) dim R are both integers for all R. We do not require here that C 1 , C 2 generate the center Z(C(G)) in the present discussion. The key equation is (3.68), part of which we repeat for convenience, is It is worth noting that the product in the sum above, namely is a product of algebraic integers and hence itself an algebraic integer. Using the discussion in section 3.2, we can construct the character sums where R is being summed over all the irreducible representations having a fixed pair of eigenvalues for [C 1 , C 2 ], using inverses of integer Vandermonde matrices multiplying the combinatoric data on the left-hand side of (3.125) consisting of rational numbers. Thus we conclude that these sums, which are known to be algebraic integers, are in fact integers. This also means that the character of a χ R (g) of a group element g ∈ C µ is rational, and since it is known to be an algebraic integer, also in fact integer. By taking C 1 to be the handle creation operator with eigenvalues |G| 2 (dim R) 2 and C 2 the sum of elements in a conjugacy class C with the property that χ R (g) for g ∈ C is an integer for all irreducible representations R, we conclude Proposition 3.4.1-III: The character sums χ R (g) for g ∈ C µ (3.129) for any conjugacy class C µ , over irreducible representations with a fixed specified dimension denoted dim R ′ and a fixed value of the character for the conjugacy class C, are integers. If we take [C 1 , C 2 ] to be two conjugacy classes having integer characters, then we have Proposition 3.4.1-IV: The character sums for any conjugacy class C µ are integers, where the sum is over all irreducible representations which have fixed characters [χ R 1 (C 1 ), χ R 2 (C 2 )] for two conjugacy classes C 1 , C 2 , and where these latter are conjugacy classes known to have integer characters for all irreducible representations R. This property for C µ generalizes to the case where we fix the characters for any number of conjugacy classes {C 1 , C 2 , · · · , C m } having the property that all their irreducible characters are integers. We also have this integrality property for C µ when we fix {dim R, C 1 , C 2 , · · · , C m }. Integrality properties of fusion matrices and quantum dimensions have recently been studied using Galois theory methods [56] in the context of 3D topological quantum field theories. The combination of Galois theory methods with the constructive methods used here in general classes of topological field theories would be an interesting area for future investigation.
Integrality of power sums and factorisation properties of character polynomials
In the previous subsection, as part of our physical analysis of G-CTST, we derived some intermediate mathematical integrality properties involving single characters. In this subsection we similarly derive integrality properties for power sums of characters which have implications for the factorization properties of the Burnside character polynomials. In the next subsection we will apply these properties to the analysis of generating functions in G-CTST. For a conjugacy class C µ consider a diagonal matrix X µ of size K, with entries |Cµ|χ R (g) dim R for g ∈ C µ where K is the number of conjugacy classes in G.
where e i (X) are elementary symmetric polynomials. They can be expressed in terms of traces of X and in terms of the eigenvalues of x i of X as Here p is a partition of k, with p i parts of length i, so that i ip i = k. As reviewed in [11] the quantity det(x − X µ ), viewed as a polynomial in x, is also the characteristic polynomial for the integer matrix |Cµ||Cν | |C λ | (C µ ) λ ν of structure constants of Z(C(G)). Solving for the eigenvalues of the matrix of structure constants for conjugacy classes C µ is a step in determining the character table in the Burnside algorithm [45]. A useful piece of terminology is that det(x − X µ ) is an integer monic polynomial: a monic polynomial has the coefficient of the highest power of x to be equal to 1 while all the other coefficients are also integers.
The above arguments for integrality of sums of characters apply equally well for the power sums. In this case we consider, for fixed k and for h ∈ {0, 1, · · · , K − 1} (3.134) The last line includes a sum over irreps R having a fixed dimension dim R ′ . This allows us to write, in terms of an inverse Vandermonde matrix, the power sums over irreducible These are known, on general grounds, to be algebraic integers. Applying the reasoning in section 3.4.1 above to these power sums, they can be expressed as a matrix product of a rational matrix (inverse of a Vandermonde matrix) times a vector of rational numbers (obtained from the evidently rational numbers on the LHS of (3.134)). This means that these sums of powers, restricted to all irreducible representations R having the same dimension as R ′ , are actually integers. It is now useful to consider a diagonal matrix X (R ′ ) µ of size equal to the number K (R ′ ) of distinct irreducible representations with the same dimension as R ′ , and with entries equal to |[g]|χ R (g) dim R as R ranges over the distinct R with the specified dimension. We can construct a polynomial det(x − X (R ′ ) µ ) of degree K (R ′ ) . The coefficients of the powers of x are elementary symmetric polynomials e i (X (R ′ ) ), expressible as polynomials in these normalized characters |[g]|χ R (g) dim R for R having fixed dimension dim R ′ . Since these normalized characters are known to be algebraic integers, the elementary symmetric polynomial functions of these (which are sums of products of these according to the second line in (3.133)) are algebraic integers. These elementary symmetric polynomials are also expressible in terms of linear combinations with rational coefficients of power sums (first line of (3.133)). These power sums are integers as explained above. Combining these facts, and since numbers which are rational and algebraic integer are also integers, we conclude that these coefficients of powers of x in det(x−X is an integer monic polynomial in the variable x. Since the diagonal entries of the diagonal matrix X (R ′ ) µ form a subset of the entries of the diagonal matrix X µ defined above, we ) is an integer monic polynomial which is a factor of the Burnside character polynomial det(x − X µ ). We summarise this conclusion as Proposition 3.4.2-I: The Burnside character polynomial for any conjugacy class C µ , which is an integer monic polynomial, factorises into lower degree integer monic polynomials parametrised by the list of distinct dimensions dim R ′ (3.136) Following the discussion in section 3.3.1, we can also consider further integrality properties for powers of normalised characters summed over sets of irreps restricted by dimension as well as characters of conjugacy classes. By following the argument above, this integrality of power sums leads to more refined factorisation properties of the Burnside character polynomials. Suppose C 1 is a conjugacy class with integer characters. i.e. for all irreducible representations R of G, the characters χ R (g) for g ∈ C 1 are integers. Let χ C 1 ;R ′ 1 be the list of the distinct values of these characters, and K C 1 ;R ′ 1 be the multiplicity of the eigenvalue. We have R ′ The polynomial det(x − X is an integer monic polynomial.
Proposition 3.4.2-II: The Burnside character polynomial for any conjugacy class C µ , which is an integer monic polynomial, factorises into lower degree integer monic polynomials parametrised by the list of distinct characters Let the pair [R ′ , R ′ 1 ] be labels for pairs of irreducible representations which run over the distinct possible values of [dim R, χ R (g)] for g ∈ C 1 . Let K Π,C 1 ;R ′ ,R ′ 1 be the multiplicity of the pair of values associated with [R ′ , R ′ 1 ]. For any other conjugacy class C µ = C 1 , we can construct the integer monic polynomial det(x − X We have the factorisation property Proposition 3.4.2-III: The Burnside character polynomial for any conjugacy class C µ , which is an integer monic polynomial, factorises into lower degree integer monic polynomials parametrised by the list of distinct ordered pairs [dim R ′ , χ R (g)] for g ∈ C 1 (3.139) These factorisation properties can further be generalised to run over lists [dim R ′ , χ R (g 1 ), · · · , χ R (g m )] for g 1 ∈ C 1 , g 2 ∈ C 2 , · · · , g m ∈ C m where C 1 , C 2 , · · · , C m have integer characters. We can also drop dim R ′ from the lists to have factorisation over distinct lists [χ R (g 1 ), · · · , χ R (g m )].
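Proposition 3.4.2-I can be checked symbolically in the same example. The sketch below builds the Burnside character polynomial det(x − X µ ) for S 4 with C µ the class of transpositions (again using the hard-coded standard character table), and verifies that the factors obtained by restricting to irreps of a fixed dimension are integer monic polynomials whose product is the full polynomial.

import sympy as sp

x = sp.symbols('x')
sizes = [1, 6, 3, 8, 6]                     # S_4 classes (1^4), (2,1^2), (2,2), (3,1), (4)
table = [
    [1,  1,  1,  1,  1],
    [1, -1,  1,  1, -1],
    [2,  0,  2, -1,  0],
    [3,  1, -1,  0, -1],
    [3, -1, -1,  0,  1],
]
dims = [row[0] for row in table]
mu = 1                                      # the class of transpositions

def polyprod(factors):
    out = sp.Integer(1)
    for f in factors:
        out *= f
    return out

entries = [sp.Rational(sizes[mu] * row[mu], d) for row, d in zip(table, dims)]
full_poly = sp.expand(polyprod([x - e for e in entries]))

pieces = []
for d in sorted(set(dims)):
    block = [e for e, dd in zip(entries, dims) if dd == d]
    f = sp.expand(polyprod([x - e for e in block]))
    assert all(c.is_integer for c in sp.Poly(f, x).all_coeffs())   # integer monic factor
    pieces.append(f)

print(pieces)                                    # [x**2 - 36, x, x**2 - 4]
print(sp.expand(polyprod(pieces) - full_poly))   # 0: the factors multiply to det(x - X_mu)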
Integral power sums as residues of singularities in G-CTST generating functions
In this subsection we now apply the properties we have derived to the analysis of G-CTST generating functions. We observe that the integer sums of normalized characters and sums of powers of normalized characters derived in sections 3.4.1 and 3.4.2 arise as residues at singularities of G-CTST generating functions. The argument is an extension of the one in section 5 of [11]. Let us define a sum over arbitrary numbers of handles of the string amplitude with one boundary labelled by conjugacy class C µ (3.121) weighted by the appropriate power of the string coupling. Taking g ∈ C µ , i.e. [g] = C µ we write (3.141) The poles of this generating function are at and the residues are which we showed to be integers (proposition 3.4.1-I). Similarly we can define a stringy generating function for the k'th power sums (3.147) The singularities are at while the respective residues are As shown in proposition 3.4.1-II these residues of the G-CTST generating function so defined are integers. The connection between integer character sums and residues of G-CTST partition functions extends to the more refined sums considered in sections 3.4.1 and 3.4.2. As an example consider (3.125) involving powers of two conjugacy class sums C 1 , C 2 and a single power of C µ and let us introduce a partition function depending on two chemical potentials µ 1 , µ 2 In the last line, we have introduced sums over a complete set of pairs of irreducible representations R 1 , R 2 which have distinct character values [χ R 1 (C 1 ), χ R 2 (C 2 )]. For each pair of values, we have a sum over R running over the distinct irreducible representations having these characters. It follows that the singularities of these generating functions are at The residues at these singularities are These residues are integers as explained in Proposition 3.4.1-IV.
Further remarks on G-CTST and future directions
We collect a few comments here on the stringy interpretation of the determinants that have played a central role in the algorithms earlier in the paper. We find a link to plethystic exponentials of low genus amplitudes. The plethystic exponential function has well known applications in AdS/CFT relating the counting of single trace gauge invariants in CFT to multi-trace counting [58]. It also has a related application in tensor model holography, relating the counting of connected and disconnected surfaces which are related to tensor model invariants [59]. Careful quantum gravitational discussions of the normalizations of partition functions relevant to combinatoric topological strings are in [9,10,13]. The second point we develop is S-duality for G-CTST. While S-duality was discussed in [11] in terms of entangled disconnected surfaces, we observe that there is also an interpretation of the S-dual amplitudes in terms of the inversion of the handle-creation operator in the group algebra of G. We observe that for both the untwisted and twisted cases this inverse operator is well-defined. We give an expression for the inverse handle creation operator as an expansion in the projector basis of Z(C ω (G)). A combinatoric description of the expansion in terms of the conjugacy class basis for Z(C ω (G)) is an interesting question. The third point concerns the implications of finiteness of G for relations between G-CTST amplitudes.
Background
The construction of the integer ratios |G|/ dim R from group multiplications in [11] used the determinant det(x − X), and its expansion in terms of products of traces where the e i denote the elementary symmetric functions, given by e 0 (X) = 1, The elementary symmetric functions can be expressed in terms of traces of X as in (3.133).
As was argued in [11], the algorithm presented there was a stringy construction more than a field theoretic construction, since it involved combining amplitudes of different genera, but there was not a crisp simple connection between the algorithm and a stringy observable.
As a first step in this direction, note that e 1 (X) is trX = Z h=2 . In a stringy partition function this is naturally weighted with g 2 st . The next elementary symmetric polynomial, e 2 (X), is a linear combination of trX 2 = Z h=3 and (trX) 2 = Z 2 h=2 . Both of these are weighted with g 4 st . The next elementary symmetric polynomial, e 3 (X), is a linear combination of Z h=4 , Z h=3 Z h=2 , and Z 3 h=2 , all of which are naturally weighted by g 6 st . In general, e k (X) is associated with g 2k st . The determinant above can be written as By substituting x → g −2 st , we can write This looks like a stringy observable. We develop a link with disconnected string diagrams below.
Determinant from generating function of disconnected worldsheets
Start with the observation that a generating function of disconnected diagrams of genus 2 or higher can be obtained by expanding the exponential of a sum The argument of the exponential is motivated by the plethystic exponential function as studied in [58,59]. Now observe that the first line is a determinant: , (4.8) where in the last equality, we used det(A) = exp tr log(A). (4.9) We conclude . (4.10) So the determinant used in the algorithm for |G| 2 /(dim R) 2 is nothing but the inverse of the generating function for the disconnected diagrams. The zeroes of this inverse generating function are at g 2 st = (dim R) 2 /|G| 2 , or g −2 st = |G| 2 /(dim R) 2 . A remarkable fact is that this inverse generating function truncates at a finite power of g 2 st . This is due to the finiteness properties of the theory. Another way to express the remarkable fact is that the generating function of disconnected string diagrams is a rational function.
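Equation (4.10) can be confirmed order by order in an example. The sketch below takes the untwisted S 3 data, where the eigenvalues of X are (|G|/ dim R) 2 = 36, 36, 9, identifies trX h with the genus h + 1 partition function as above, and checks that the exponential of the connected sum reproduces 1/ det(1 − g st 2 X) as a formal power series.

import sympy as sp

g2 = sp.symbols('g2')                 # stands for g_st^2
xs = [36, 36, 9]                      # eigenvalues (|G|/dim R)^2 for S_3
order = 8

det_poly = sp.Integer(1)
for v in xs:
    det_poly *= (1 - g2 * v)          # the finite polynomial det(1 - g2 X)

exp_arg = sum(sum(v ** h for v in xs) * g2 ** h / h for h in range(1, order))
lhs = sp.series(1 / det_poly, g2, 0, order).removeO()
rhs = sp.series(sp.exp(exp_arg), g2, 0, order).removeO()
print(sp.expand(lhs - rhs))           # 0: the two generating functions agree order by order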
In [11], it was observed that finding the zeroes of det(x − X) in (4.1), viewed as a function of x, gives a finite algorithm (which uses as input the products of traces of X available from G-CTST partition functions) to arrive at the integer ratios |G|/ dim R. The identification of the formal variable x with g −2 st above and the equation (4.10) shows that the integer ratios have the physical interpretation of being the locations in the g −2 st plane of the poles of the generating function of disconnected amplitudes. It was also observed in [11] that the poles of the connected generating function as a function of g st are given in terms of the integer ratios |G|/ dim R. Connected and disconnected generating functions are related through the plethystic exponential function (see [58] for applications of the plethystic exponential in the combinatorics of moduli spaces of supersymmetric gauge theories).
S-duality in G-CTST and the inverse handle-creation operator
Following the discussion in [11], the generating function of connected closed string amplitudes is . (4.12) An S-dual generating function is defined as It is calculated to bẽ In [11] a geometrical interpretation for the positive power sums of dimensions was given in terms of disconnected entangled surfaces. Here we develop an alternative interpretation of this S-dual expansion.
Recall the handle creation operator with the property δ(P R ) = (dim R) 2 |G| so that the genus h partition function is obtained by taking the trace of h powers of Π.
We observe that the handle creation operator has an inverse element in the center of the group algebra, which is given by We have We propose to interpret the inverse handle creation operator Π −1 as the handle creation operator of the S-dual theory and denote it as Π −1 =Π. Note that the leading order term in the S-dual generating function (4.14) is Since there is a single power ofΠ, it is natural to interpret this as the partition function at genus one of the S-dual theory. The higher powers are which can therefore be interpreted as genus k partition function of the dual theory. Remark: It would be interesting to understand if there is a string field theory that generates the S-dual perturbation expansion above. One may be able to get some hints by examining the coefficients of the expansion of Π −1 in a basis of twist fields. Such an expression could be obtained using the character expansion of P R to obtain a formula for Π −1 as an expansion in terms of the twist operator basis of Z(C ω (G)). The expansion coefficients involve the calculation of the sums R (dim R) 3 χ R (g). (4.25) These sums are some functions of g. It would be interesting to find out how these depend on the conjugacy class of g.
For example in C(S 3 ) it is easy to calculate Π = 18 + 9((1, 2, 3) + (1, 3, 2)), Π −1 = 1/12 − (1/36)((1, 3, 2) + (1, 2, 3)). (4.26) It would be interesting to explore this for general C(S n ) and other group algebras. Dualities in the context of discrete gauge theories have been discussed in [56,57]. It will be interesting to investigate potential relations between these dualities and the S-duality considered here.
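These expressions, and the relation Π Π −1 = 1, can be reproduced directly in the group algebra. The sketch below assumes the normalization Π = Σ R (|G|/ dim R) 2 P R with P R = (dim R/|G|) Σ g χ R (g) g, which reproduces the coefficients 18 and 9 in (4.26) for C(S 3 ), and then verifies the inverse.

from itertools import permutations
from fractions import Fraction

elems = list(permutations(range(3)))              # S_3 as permutations of {0, 1, 2}
identity = (0, 1, 2)
G = len(elems)

def compose(a, b):                                # (a*b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

def sgn(p):                                       # sign character
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

def chi2(p):                                      # character of the 2-dimensional irrep
    return sum(1 for i in range(3) if p[i] == i) - 1

def projector(dim, chi):                          # P_R as an {element: coefficient} dictionary
    return {g: Fraction(dim, G) * chi(g) for g in elems}

reps = [(1, lambda g: 1), (1, sgn), (2, chi2)]
P = [projector(d, chi) for d, chi in reps]

def add_scaled(a, b, s):
    out = dict(a)
    for g, c in b.items():
        out[g] = out.get(g, 0) + s * c
    return out

def mult(a, b):
    out = {}
    for g, cg in a.items():
        for h, ch in b.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + cg * ch
    return out

Pi, Pi_inv = {}, {}
for (d, _), PR in zip(reps, P):
    Pi = add_scaled(Pi, PR, Fraction(G, d) ** 2)
    Pi_inv = add_scaled(Pi_inv, PR, Fraction(d, G) ** 2)

print({g: c for g, c in Pi.items() if c})       # 18 on the identity, 9 on each 3-cycle
print({g: c for g, c in Pi_inv.items() if c})   # 1/12 on the identity, -1/36 on each 3-cycle
prod = mult(Pi, Pi_inv)
assert all(c == (1 if g == identity else 0) for g, c in prod.items())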
Finiteness relations
Systematic studies of the consequences of finiteness of G on the string amplitudes of G-CTST, both in the untwisted and twisted cases, are interesting future directions. For any group G, with K conjugacy classes, there are universal K-dependent finiteness relations which were described explicitly in [11]. Requiring that these finite K relations appear as null states of an inner product led to a discussion of the factorization puzzle in 2D/3D holography [60]. The inner product discussed in [11] was not uniquely determined. It would be interesting to investigate if there is a natural inner product, determined by the finiteness relations, possibly with additional data naturally related to G-CTST. As we have seen in this paper, the degeneracies of representation theoretic data (e.g. of values of dimensions of irreps) have important implications for integrality. They can be expected to play a role in G-dependent refinements of the finite K relations.

modulo equivalences One can always pick cocycles so that, for example for 2-cocycles, for any group element g. We work with such normalized cocycles in this paper.
B Characters of projective representations
In this appendix we review some basic facts and results on characters of projective representations of finite groups that are used elsewhere in this paper. Perhaps the first result to recall is that, unlike characters of ordinary group representations, characters of projective representations are not class functions (not invariant under conjugation), but instead obey [ as was previously mentioned in (2.10). Second, these characters vanish on non-ω-regular group elements, see e.g. [ where R, S are irreducible projective representations (with respect to ω), D R (g) is a matrix representing g ∈ G in R, meaning the sum in the second identity is over irreducible projective representations, and |[g]| denotes the number of elements in a conjugacy class containing g.
For use in other sections, from the expressions above one can show (see e.g. [16, (Alternatively, by writing in terms of characters of products of τ 's, one can produce equivalent expressions without factors of ω.) Furthermore, from (B.4), it is straightforward to show that Let us check that this identity is well-defined under conjugation. Using (B.1), If hgh −1 ≠ 1, then both sides vanish, so there is no ambiguity. Similarly, if hgh −1 = 1, then g = 1, and so again the identity is unambiguous.
In passing, for the projector P R given in equation (2.17), note that this implies but from (B.6), one has Another identity that will be useful involves the handle creation operator Π given in (C.39). Using (B.6), first note that Then, One of the consequences of the fact that characters of projective representations are not invariant under conjugation is that, unlike characters of ordinary representations for which For example, in the notation of that reference, In fact, we can derive a general relation between χ R (gh) and χ R (hg) as follows. In principle, χ R (g) = Tr ρ R (g), (B.26) where ρ R (g) is a matrix representing g. Now, As a consistency check, we claim that δ(gh) = δ(hg). Now, from (B.10), Now, if gh ≠ 1, then hg ≠ 1, so both sides of the relation above vanish, and in particular, δ(gh) = δ(hg) = 0. Suppose instead that gh = 1, so that δ(gh) = 1. In this case, h = g −1 , and from (dω)(g, g −1 , g) = 1, (B.35) we have ω(g, g −1 ) = ω(g −1 , g).
C Two-dimensional Dijkgraaf-Witten theory
In this appendix we collect some technical results on two-dimensional twisted Dijkgraaf-Witten theory that are used in the main text. Although we have not located a complete set of prior references, we believe these results were known previously; we include them and their derivations here for completeness and to make the detailed arguments of the main text convincing.
C.1 Partition functions
In this section we will compute genus g partition functions of two-dimensional Dijkgraaf-Witten theory with discrete torsion, in the same style as the analysis of [11, section 2] to include discrete torsion. Now, to be clear, these partition functions have been computed previously in the literature, see for example [10] in the physics literature for a recent computation in two-dimensional Dijkgraaf-Witten theory specifically, [27, appendix C.1] for a recent review of results on partition functions of 2d TQFTs, and in the math literature, see for example [62][63][64][65][66] for partition functions and one-point functions in cases without 5 discrete torsion, where these are given as the orbifold Euler characteristics of the moduli space of flat G bundles, .
(C.5) (Compare [11, equ'n (2.4)].) From this one immediately derives the genus g partition function.

Footnote 5: Partition functions including discrete torsion have certainly been computed previously in the physics literature, see e.g. [1]. We include such computations here for completeness. Our expectation is that partition functions including discrete torsion were also computed, albeit in different language, in the mathematics literature in the same era as [64,65], though we have not been able to find a specific mathematics reference.
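For orientation, the genus g partition functions being discussed here are expected to take the familiar form (our normalisation below is schematic and may differ from the conventions of the main text by factors of |G|):

Z(\Sigma_g) \;=\; \sum_{R} \left( \frac{|G|}{\dim R} \right)^{2g-2} ,

with the sum running over ordinary irreducible representations of G in the untwisted case, and over irreducible ω-projective representations in the case with discrete torsion.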
C.2 Handle creation operator
In this section, we will describe the handle creation operator in the presence of discrete torsion, and its basic properties. Without discrete torsion, the handle creation operator is [11, equ'n (6.23)]

\Pi \;=\; \sum_{g_1, g_2 \in G} \tau_{g_1} \tau_{g_2} \tau_{g_1}^{-1} \tau_{g_2}^{-1} , \qquad (C.29)

and it is claimed that Π acts as a representation-dependent constant on each irreducible sector, for P_R the projection operator; this is the identity (C.30). As a consistency test, note this implies a trace identity (C.31). Let us check that this implication is correct for every irreducible representation S. First, from (C.6), in the absence of discrete torsion, we obtain the required trace, confirming (C.31). Since this holds for any irreducible representation S, we take this as a confirmation of the handle creation operator identity (C.30).
"year": 2022,
"sha1": "652008fcf1121bb3209f1761931b1bbb29f02e32",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7277e7f4f79f0c080810070e6e6e8dc6c7fb58f5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
De Broglie-Bohm Prediction of Quantum Violations for Cosmological Super-Hubble Modes
The hypothesis of quantum nonequilibrium at the big bang is shown to have observable consequences. For a scalar field on expanding space, we show that relaxation to quantum equilibrium (in de Broglie-Bohm theory) is suppressed for field modes whose quantum time evolution satisfies a certain inequality, resulting in a 'freezing' of early quantum nonequilibrium for these particular modes. For an early radiation-dominated expansion, the inequality implies a corresponding physical wavelength that is larger than the (instantaneous) Hubble radius. These results make it possible, for the first time, to make quantitative predictions for nonequilibrium deviations from quantum theory, in the context of specific cosmological models. We discuss some possible consequences: corrections to inflationary predictions for the cosmic microwave background, non-inflationary super-Hubble field correlations, and relic nonequilibrium particles.
Introduction
Hidden-variables theories, such as the pilot-wave theory of de Broglie [1,2] and Bohm [3], reproduce quantum theory for a particular 'equilibrium' distribution of hidden parameters. But allowing arbitrary distributions (analogous to non-thermal distributions in classical physics) opens up the possibility of new, 'nonequilibrium' physics that lies outside the domain of quantum physics [4,5,6,7,8,9,10,11,12,13,14,15]. Such new physics may have existed in the very early universe, with relaxation to quantum equilibrium having taken place during the violence of the big bang [4,5,6,7,8,15]. In this paper, the hypothesis of early quantum nonequilibrium is shown to have observable consequences today.
The concept of quantum nonequilibrium has been discussed for general (deterministic) hidden-variables theories [9,10,12,14]. For the specific case of de Broglie-Bohm theory, it amounts to having configurations with a distribution P that differs from the usual Born-rule distribution |Ψ| 2 (for a pure subensemble with wave function Ψ) [4,5,6,14]. There were several motivations for proposing that the early universe began in a state of quantum nonequilibrium. Let us briefly summarise them.
There seems to be a peculiar 'conspiracy' at the heart of modern physics, whereby quantum nonlocality cannot be used to send practical instantaneous signals. In hidden-variables theories, this conspiracy is explained as a contingency of the quantum equilibrium state. Nonlocal signalling is generally possible out of equilibrium (suggesting the existence of an underlying preferred foliation of spacetime [16]); whereas in equilibrium, nonlocal effects cancel out at the statistical level [5,6,9,10]. Our inability to convert entanglement into practical nonlocal signals is then not a law of physics, but a contingency of the equilibrium state. Similarly, standard uncertainty-principle limitations on measurements are also contingencies of equilibrium [5,6,11]. There is a parallel here with the classical thermodynamic heat death: in the complete absence of temperature differences, it would be impossible to convert heat into work, and yet such a limitation would be a mere contingency of the state, and not a law of physics. Furthermore, it has been shown that relaxation towards quantum equilibrium occurs, in pilot-wave dynamics, in similar fashion to thermal relaxation in classical dynamics (under analogous conditions and with similar caveats) [4,6,8,17,18]. Given that all physical systems to which we have access have undergone a long and violent astrophysical history, it is then possible to understand the ubiquitous quantum noise we see around us as, in effect, a remnant of the big bang.
On this view, the effectively local and indeterministic quantum physics we experience today emerged via relaxation processes (presumably occurring close to the big bang) out of a fundamentally nonlocal and deterministic physics -a physics whose details are currently screened off from view, by the all-pervading statistical noise. For as equilibrium is approached, the possibility of instantaneous signalling disappears, and statistical uncertainty emerges. In effect, a hidden-variables analogue of the classical heat death has actually occurred in our universe, explaining the above 'conspiracy'.
The assumption of early quantum nonequilibrium was also proposed as a possible alternative resolution of the cosmological horizon problem (which persists even in some inflationary models [19]): the resulting early nonlocality might explain the otherwise puzzling homogeneity of the universe at early times [5,6,7,10].
The search for early quantum nonequilibrium may also be motivated simply on the grounds that de Broglie-Bohm theory (and indeed any deterministic hidden-variables theory) certainly allows nonequilibrium to occur. We have an alternative formulation of quantum physics, which yields standard quantum theory in the equilibrium limit, and which yields departures from standard quantum theory outside that limit. It seems natural to explore this possible new physics, and in particular to test for it experimentally, as far as one can. If nothing else, setting experimental bounds on the existence of quantum nonequilibrium can provide new bounds on possible deviations from quantum theory [15].
Finally, if hidden-variables theories are taken seriously, one is obliged to take the possibility of nonequilibrium seriously as well: for it is only in nonequilibrium that the underlying details become visible. If the world were always and everywhere in quantum equilibrium, the details of de Broglie-Bohm trajectories (for example) would be forever shielded from experimental test. De Broglie-Bohm theory as a whole would then be unacceptable as a scientific theory. And much the same could be said for hidden-variables theories in general.
Given the above motivations, the idea that the universe relaxed to quantum equilibrium from an earlier nonequilibrium state is plausible enough. However, to be a scientific theory it is essential to make new, quantitative predictions. The new physics of systems in quantum nonequilibrium has been explored in some detail [5,6,8,9,11,13,14,15], and a specific signature of nonequilibrium has been developed [12,14]. It has also been shown that if nonequilibrium were present at the beginning of an inflationary phase, then there would be observable consequences for the statistics of the temperature anisotropies imprinted on the cosmic microwave background (CMB) [15,20]. Further, heuristic arguments have been given, suggesting that relaxation might be suppressed for long-wavelength field modes on expanding space [15] (a suggestion that forms the starting point for the present work); and that, if relic cosmological particles decoupled sufficiently early, they might still be in nonequilibrium today [8,15]. However, so far, no definite quantitative predictions have been made. The aim of this paper is to fill this gap.
For the first time, given a specific cosmological model, we are able to point to precisely where quantum nonequilibrium could be found. We accomplish this by studying the evolution of nonequilibrium distributions for a scalar field on expanding space. We show that relaxation is suppressed for field modes whose quantum time evolution satisfies a certain inequality. For these particular modes, early quantum nonequilibrium is 'frozen'. For a radiation-dominated expansion, the inequality implies a physical wavelength larger than the (instantaneous) Hubble radius. On the basis of these results, it is possible to make quantitative predictions for nonequilibrium deviations from quantum theory, in the context of a given cosmological model. As we shall see, there are a number of possible consequences: in particular, infra-red corrections to inflationary predictions for the CMB, and relic nonequilibrium particles at low energies.
De Broglie-Bohm Scalar Field on Expanding Space
For simplicity we consider a flat metric

ds^2 = dt^2 - a^2(t)\, d\mathbf{x}^2 ,

where a(t) is the scale factor, H ≡ ȧ/a is the Hubble parameter, and H^{-1} is the Hubble radius. As is customary, we take a_0 = 1 today (at time t_0), so that |dx| is a comoving distance (or proper distance today). At time t, field modes have physical wavelengths λ_phys = a(t)λ, where λ = 2π/k is a comoving wavelength (or proper wavelength today) and k = |k| is the comoving wave number. We consider a free (minimally-coupled) massless scalar field φ with a Lagrangian density L = (1/2)√(−g) ∂^α φ ∂_α φ or

L = \tfrac{1}{2} a^3 \dot{\phi}^2 - \tfrac{1}{2} a\, (\nabla\phi)^2 .

The action is ∫dt ∫d^3x L. We then have a canonical momentum density π = ∂L/∂φ̇ = a^3 φ̇ and a Hamiltonian density

\mathcal{H} = \frac{\pi^2}{2a^3} + \tfrac{1}{2} a\, (\nabla\phi)^2 .

Here, it is convenient to write the dynamics in Fourier space. Expressing φ(x) in terms of its Fourier components φ_k and writing φ_k ∝ q_{k1} + i q_{k2} for real q_{kr} (r = 1, 2), where V is a box normalisation volume, the Lagrangian becomes a sum over independent modes,

L = \sum_{\mathbf{k}r} \left( \tfrac{1}{2} a^3 \dot{q}_{\mathbf{k}r}^2 - \tfrac{1}{2} a k^2 q_{\mathbf{k}r}^2 \right)

[3,6,7,21,22,23,24,25,26]. Here, the Schrödinger equation for Ψ = Ψ[q_{kr}, t] is

i \frac{\partial \Psi}{\partial t} = \sum_{\mathbf{k}r} \left( -\frac{1}{2a^3} \frac{\partial^2}{\partial q_{\mathbf{k}r}^2} + \tfrac{1}{2} a k^2 q_{\mathbf{k}r}^2 \right) \Psi ,

which implies the continuity equation

\frac{\partial |\Psi|^2}{\partial t} + \sum_{\mathbf{k}r} \frac{\partial}{\partial q_{\mathbf{k}r}} \left( |\Psi|^2 \dot{q}_{\mathbf{k}r} \right) = 0

and the de Broglie velocities

\dot{q}_{\mathbf{k}r} = \frac{1}{a^3} \frac{\partial S}{\partial q_{\mathbf{k}r}}

(where Ψ = |Ψ| e^{iS}). The 'pilot wave' Ψ is interpreted as a physical field in configuration space, guiding the time evolution of an individual field φ(x, t) in 3-space. (Note that a similar construction may be given in any globally-hyperbolic spacetime, by choosing a preferred foliation [13], so there is no need for spatial homogeneity.) Over an ensemble of field configurations guided by the same pilot wave Ψ, there will be some (in principle arbitrary) initial distribution P[q_{kr}, t_i], whose time evolution P[q_{kr}, t] will be determined by

\frac{\partial P}{\partial t} + \sum_{\mathbf{k}r} \frac{\partial}{\partial q_{\mathbf{k}r}} \left( P \dot{q}_{\mathbf{k}r} \right) = 0 .

If P[q_{kr}, t_i] = |Ψ[q_{kr}, t_i]|^2, then P[q_{kr}, t] = |Ψ[q_{kr}, t]|^2 for all t, and one obtains empirical agreement with standard quantum field theory [3,22,23,24,25,26]. On the other hand, for an initial nonequilibrium distribution P[q_{kr}, t_i] ≠ |Ψ[q_{kr}, t_i]|^2, for as long as P remains in nonequilibrium the predicted statistics will generally differ from those of quantum field theory. In any case, whatever the distribution P may be (equilibrium or nonequilibrium), its time evolution will be given by (7).
Preliminary Discussion for a Decoupled Mode
A proper treatment of nonequilibrium freezing is given in sections 4 and 5. As we shall see, our treatment is applicable to arbitrary (entangled, mixed, and interacting) quantum states. As a preliminary exercise, in this section we shall discuss some elementary features for the simple case of a single decoupled mode k of a free field in a pure quantum state.
From equations (4), (6), and writing Ψ = ψ_k(q_{k1}, q_{k2}, t) κ, where κ depends only on degrees of freedom for modes k′ ≠ k, we find that the wave function ψ_k of a decoupled mode k satisfies

i \frac{\partial \psi_{\mathbf{k}}}{\partial t} = \left[ -\frac{1}{2a^3} \left( \frac{\partial^2}{\partial q_{\mathbf{k}1}^2} + \frac{\partial^2}{\partial q_{\mathbf{k}2}^2} \right) + \tfrac{1}{2} a k^2 \left( q_{\mathbf{k}1}^2 + q_{\mathbf{k}2}^2 \right) \right] \psi_{\mathbf{k}} , \qquad (8)

while the de Broglie velocities for the mode amplitudes (q_{k1}, q_{k2}) are

\dot{q}_{\mathbf{k}r} = \frac{1}{a^3} \frac{\partial s_{\mathbf{k}}}{\partial q_{\mathbf{k}r}} \qquad (9)

(with ψ_k = |ψ_k| e^{i s_k}). The time evolution of the marginal distribution ρ_k(q_{k1}, q_{k2}, t) will then be given by

\frac{\partial \rho_{\mathbf{k}}}{\partial t} + \sum_{r=1,2} \frac{\partial}{\partial q_{\mathbf{k}r}} \left( \rho_{\mathbf{k}} \dot{q}_{\mathbf{k}r} \right) = 0 . \qquad (10)

Equations (8)-(10) are identical to those of pilot-wave dynamics for an ensemble of nonrelativistic particles of time-dependent 'mass' m = a^3 moving in the q_{k1}-q_{k2} plane in a harmonic oscillator potential with time-dependent angular frequency ω = k/a. We may then discuss relaxation (and relaxation suppression) for a decoupled field mode in terms of relaxation (and relaxation suppression) for a nonrelativistic two-dimensional harmonic oscillator.
Before doing so, let us recall what is already known about relaxation in pilot-wave dynamics.
For a system with configuration q and wave function ψ, the H-function H = ∫dq ρ ln(ρ/|ψ|^2) (the relative negentropy of an arbitrary distribution ρ with respect to |ψ|^2) obeys a coarse-graining H-theorem similar to the classical one [4,6,8]. Introducing a coarse-graining in configuration space, and assuming appropriate initial conditions for ρ and ψ, the coarse-grained function H̄(t) will begin to decrease with time, corresponding to an evolution of the coarse-grained density ρ̄ towards |ψ|^2. This 'subquantum H-theorem' formalises a simple intuitive idea: because ρ and |ψ|^2 obey the same continuity equation, they behave like two classical fluids that are 'stirred' by the same velocity field, thereby tending to become indistinguishable on a coarse-grained level. Such relaxation has been studied numerically, on a static spacetime background, for simple one- and two-dimensional systems [6,8,17,18]. For an ensemble of nonrelativistic particles in a two-dimensional box, with a wave function consisting of a superposition of the first 16 modes, it was found that relaxation occurs very efficiently, with an approximately exponential decay H̄(t) ≈ H̄_0 e^{−t/t_c} of the coarse-grained H-function (over a timescale t_c) [17]. Similar results have been obtained for an ensemble of nonrelativistic particles in a two-dimensional harmonic oscillator potential [18]. As discussed in ref. [17], the numerical timescale t_c was found to be in approximate agreement with a theoretical relaxation timescale τ defined by 1/τ^2 ≡ −(1/H̄) d^2H̄/dt^2 [6]. For a particle of mass m, and using a sufficiently small coarse-graining length ε, a rough order-of-magnitude estimate yields τ ∼ 1/(ε m^{1/2} (ΔE)^{3/2}), where ΔE is the quantum energy spread associated with ψ [8,17]. (The quantity τ is analogous to the scattering time of classical kinetic theory: one expects a significant approach to equilibrium over timescales of order τ.) If we choose a 'natural' value ε ∼ 1/Δp, where Δp is the quantum momentum spread, then taking ΔE ∼ (Δp)^2/2m one has the simple (and rough) result τ ∼ 1/ΔE ∼ Δt (in units with ℏ = 1), where Δt is the quantum timescale over which the wave function ψ evolves.
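The last estimate follows by combining the quoted scalings, dropping factors of order unity and working in units with ℏ = 1:

\tau \;\sim\; \frac{1}{\varepsilon\, m^{1/2} (\Delta E)^{3/2}} \;\sim\; \frac{\Delta p}{m^{1/2} (\Delta E)^{3/2}} \;\sim\; \frac{(m\,\Delta E)^{1/2}}{m^{1/2} (\Delta E)^{3/2}} \;=\; \frac{1}{\Delta E} \;\sim\; \Delta t .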
Relaxation for Sub-Hubble Modes in the Minkowski Limit
One expects that in the short-wavelength limit, λ phys << H −1 , the above equations (8)-(10) will reduce to those for a decoupled mode k on Minkowski spacetime, because (roughly speaking) the timescale ∆t ∝ λ phys over which ψ k = ψ k (q k1 , q k2 , t) evolves will be much smaller than the expansion timescale H −1 ≡ a/ȧ [15].
To obtain a more precise and rigorous statement, note first that at any time t the HamiltonianĤ(t) appearing in the Schrödinger equation (8) has the same eigenfunctions and eigenvalues as are usually obtained for a two-dimensional harmonic oscillator of (instantaneous) mass m = a 3 and angular frequency ω = k/a. Thus, for quantum numbers n 1 , n 2 = 0, 1, 2, ... , we have energy eigenfunctions φ n1 (q k1 , t)φ n2 (q k2 , t) and eigenvalues E k (t) = (1 + n 1 + n 2 )ω(t). (The time dependence in φ n1 (q k1 , t) and φ n2 (q k2 , t) comes, of course, from the time dependence of m = a 3 and ω = k/a.) The wave function at any time t may then be expanded in terms of these energy eigenstates, where n k ≡ n 1 + n 2 . If we consider a subsequent evolution over a time δt << H −1 , where H −1 is the timescale over which the HamiltonianĤ(t) changes, then the Hamiltonian (together with its eigenfunctions and eigenvalues) will be almost constant during (t, t + δt), and in this interval the wave function ψ k will evolve like that of a conventional two-dimensional oscillator, with an evolution timescale ∆t (where we have = 1). Significant evolution of ψ k over the interval (t, t + δt) can occur only if ∆t << H −1 or We may then take (13) to be a good characterisation of the short-wavelength or Minkowski limit. In this limit, over timescales ∆t ≡ 1/∆E k << H −1 , the wave function ψ k evolves just as it would on Minkowski spacetime. On such timescales, the scale factor a is approximately constant, and the equations (8)-(10) reduce to those of pilot-wave dynamics for an ensemble of nonrelativistic particles of constant mass m = a 3 moving in a two-dimensional harmonic oscillator potential of constant angular frequency ω = k/a. From the numerical results for the latter case [18] we may deduce that, in the Minkowski limit, for a decoupled mode k in a superposition |ψ k ∼ |1 k + |2 k + |3 k + ... of many different states of definite occupation number, the distribution ρ k (q k1 , q k2 , t) of the mode amplitudes will relax to equilibrium, ρ k → |ψ k | 2 (on a coarse-grained level, again assuming appropriate initial conditions), on a timescale τ given roughly by (12) or
Freezing of the Wave Function for Super-Hubble Modes
In contrast, in the long-wavelength limit, we have Δt ≡ 1/ΔE_k >> H^{-1} and the change in the Hamiltonian Ĥ(t) over timescales H^{-1} may be treated as a sudden perturbation, leading to the conclusion that the wave function ψ_k is approximately static - or 'frozen' - over timescales H^{-1}. More precisely, let us again consider an evolution over an interval (t, t + δt) - but now with δt of order H^{-1}, so that the Hamiltonian Ĥ(t) changes significantly. We may write Ĥ(t + δt) = Ĥ(t) + δĤ, where δĤ is comparable to Ĥ(t). In the limit λ_phys >> Δn_k · H^{-1}, the timescale Δt ≡ 1/ΔE_k associated with the 'unperturbed' Hamiltonian Ĥ(t) will be large compared to the timescale H^{-1} over which the Hamiltonian changes. We may then treat the change δĤ as a sudden perturbation, applied over a timescale that is short compared to the natural timescale of the system. By standard reasoning (for example, ref. [27]), we deduce that ψ_k hardly changes over the interval (t, t + δt), that is, that ψ_k is essentially static over timescales H^{-1}.
Note that the above freezing of the wave function on timescales H −1 need not occur for all super-Hubble modes, since for any λ phys > H −1 the longwavelength condition (14) will be violated if ∆n k is sufficiently large. On the other hand, of course, for any given value of ∆n k , the condition (14) will be satisfied for sufficiently large λ phys and the wave function will indeed be frozen.
If ψ_k is frozen over timescales H^{-1}, then the equilibrium density |ψ_k|^2 is also frozen over timescales H^{-1}. Because the evolution of |ψ_k(q_{k1}, q_{k2}, t)|^2 is driven by the de Broglie velocity field (q̇_{k1}, q̇_{k2}), in accordance with the continuity equation

\frac{\partial |\psi_{\mathbf{k}}|^2}{\partial t} + \sum_{r=1,2} \frac{\partial}{\partial q_{\mathbf{k}r}} \left( |\psi_{\mathbf{k}}|^2 \dot{q}_{\mathbf{k}r} \right) = 0 , \qquad (15)

we then expect that the trajectories (q_{k1}(t), q_{k2}(t)) will also be frozen over timescales H^{-1}. (In principle, of course, (15) can have solutions with an essentially static density |ψ_k|^2 and a non-negligible velocity field (q̇_{k1}, q̇_{k2}), but we expect these to occur only in exceptional circumstances. And in any case, because the phase gradient ∂s_k/∂q_{kr} is also frozen over timescales H^{-1}, from (9) we see that the velocities q̇_{kr} become smaller as the scale factor a increases over expansion timescales H^{-1}.) Assuming this to be the case, it then follows that an arbitrary nonequilibrium distribution ρ_k ≠ |ψ_k|^2, evolving in time according to the same continuity equation (15), will also be frozen over timescales H^{-1}. In other words, at least in this simple case of a decoupled field mode, initial quantum nonequilibrium will be frozen on timescales of order the expansion timescale H^{-1}. (This is reminiscent of the well-known 'freezing' of super-Hubble modes in the theory of cosmological perturbations [28,29].) The above reasoning then suggests a mechanism whereby the rapid expansion of space at early times can suppress the normal process of relaxation to quantum equilibrium, raising the possibility that remnants of early nonequilibrium could have survived to the present day [8,15]. However, our treatment so far is rather limited. We have considered only a free, decoupled mode in a pure quantum state. It is only expected, and not generally proven, that a frozen |ψ_k|^2 will be associated with a family of frozen trajectories. And, perhaps most seriously, while it seems significant to demonstrate nonequilibrium freezing over the (time-dependent) expansion timescale H^{-1}, in a standard - say radiation-dominated - expansion we have H^{-1} → 0 as t → 0, so by itself nonequilibrium freezing over the timescale H^{-1} does not tell us very much about the possible survival of initial nonequilibrium. These limitations will be overcome in the following two sections. We shall first derive a rigorous condition for nonequilibrium freezing, applicable to an arbitrary time interval and to any (generally entangled) pure quantum state of a free field. Then, we shall generalise this condition to mixed states and to interacting fields.
Inequality for the Freezing of Quantum Nonequilibrium
To study nonequilibrium freezing over arbitrary time intervals and for arbitrary quantum states, we shall examine the behaviour of the trajectories themselves (instead of the behaviour of their guiding wave functions), thereby obtaining a direct constraint on the evolution of nonequilibrium distributions.
Mathematically, as we saw in section 2, the field system is equivalent to a collection of non-interacting one-dimensional harmonic oscillators with positions q kr (and with time-dependent masses m = a 3 and time-dependent angular frequencies ω = k/a). The Hamiltonian operator isĤ = krĤ kr , witĥ EachĤ kr has (time-dependent) energy eigenvalues E kr = (n kr + 1 2 )ω, where n kr = 0, 1, 2, .... . (Because of the explicit time dependence in the Hamiltonian, the mean energy is of course not conserved: d Ĥ /dt = ∂Ĥ/∂t = 0.) For an arbitrary wave functional Ψ[q kr , t], the de Broglie velocity field is given by (6), and the evolution of an arbitrary ensemble distribution P [q kr , t] will be driven by this velocity field via the continuity equation (7).
Note that the use of a classical spacetime background must break down in the limit t → 0. The equations defining our model can be trusted only down to some minimum initial time t i . For example, very optimistically, one might take the 'initial time' to be of order the Planck time, t i ∼ t P ∼ 10 −43 s. Now, an initial nonequilibrium distribution P [q kr , t i ] = |Ψ[q kr , t i ]| 2 can in general relax to equilibrium (on a coarse-grained level) only if the trajectories wander sufficiently far over the region of configuration space where |Ψ| 2 is concentrated; otherwise, for example, if P were initially small in regions where |Ψ| 2 is large, P could remain so, and equilibrium would never be reached. We may then write a simple condition for initial nonequilibrium to be 'frozen', by considering the displacements of the trajectories, and requiring that the (equilibrium) mean magnitude of the displacements be smaller than the width of the wave packet.
Let us write the total configuration of the system as q(t). Note that Ψ[q, t] is in general an entangled function of all the q kr 's. Even so, given the initial distributions P [q, t i ] and |Ψ[q, t i ]| 2 , one may calculate the corresponding marginals for just one q kr (for some given kr). If the resulting two marginals are equal or unequal, we may say that we have equilibrium or nonequilibrium respectively, for the given degree of freedom q kr . In this sense, it is clearly possible for some of the q kr 's to be in nonequilibrium while the others are in equilibrium.
Let us now consider the motion q kr (t) of one degree of freedom, for some given kr, over a time interval [t i , t f ]. An initial point q kr (t i ) undergoes a final displacement δq kr (t f ) = Let ∆q kr (t) be the width -with respect to q kr -of the quantum distribution |Ψ[q, t]| 2 at time t. If the whole family of trajectories q kr (t) (with fixed kr and arbitrary initial total configurations q(t i )) were such that the magnitude |δq kr (t f )| of the final displacement were small compared to ∆q kr (t f ), then relaxation (with respect to q kr ) during the interval [t i , t f ] would in general be impossible, as the configurations would not move far enough for the two 'fluids' P and |Ψ| 2 to be significantly 'stirred' or mixed (with respect to q kr ). This is clear because the time evolutions of P and |Ψ| 2 are determined by the same continuity equation and the same family of trajectories. For example, if |Ψ| 2 is initially spread over an interval [a, b] of q kr -space of length ∼ ∆q kr (t i ), and if the displacements of all the trajectories during ; while if P is, say, initially confined to the left half of the interval [a, b], it will essentially remain so during [t i , t f ], and there will be no significant evolution towards equilibrium (for the coordinate q kr ).
Thus we might take our condition to be |δq where 'most' could be defined for example with respect to the Lebesgue measure or with respect to the |Ψ| 2 -measure), then relaxation would still be impossible in general. Hence we may take the weaker condition where |δq kr (t f )| eq is the average of |δq kr (t f )| over an equilibrium ensemble. The condition (16) implies that 'most' of the ensemble cannot move by 'much' more than a small fraction of ∆q kr (t f ), in the following precise sense.
is a function of the initial total configuration q i ≡ q(t i )). From (16), we can write δ eq < ε∆q kr (t f ) for some ε << 1. We can then show that 'most' values of δ cannot be 'much' bigger than ε∆q kr (t f ) -where we define 'most' with respect to the equilibrium measure |Ψ[q i , t i ]| 2 dq i over the ensemble of initial configurations q i , and where we define δ to be 'much' bigger than ε∆q kr (t f ) if δ > 2ε∆q kr (t f ). Let R be the set of initial points q i such that δ > ε∆q kr (t f )+d, for some fixed d > 0. Such points make up a certain fraction F of the ensemble, ponents of higher-dimensional trajectories q(t) (unlike in a strictly one-dimensional system, where the single-valuedness of the velocity field prevents trajectories from crossing).
Since δ ≥ 0 for all q i , we have Given (16), or δ eq < ε∆q kr (t f ), we then have ε∆q We may then indeed conclude that 'most' of the initial ensemble cannot move by 'much' more than ε∆q kr (t f ). In this case, even an approximate relaxation cannot (in general) occur.
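Spelling out the estimate, with R and F defined as above and writing Δq ≡ Δq_kr(t_f):

\langle\delta\rangle_{\rm eq} \;\geq\; \int_{R} |\Psi[q_i, t_i]|^2\, \delta\, dq_i \;\geq\; F \left( \varepsilon\,\Delta q + d \right) ,

so that ⟨δ⟩_eq < εΔq gives F < εΔq/(εΔq + d); choosing d = εΔq (the threshold δ > 2εΔq used above to define 'much' bigger) yields F < 1/2.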
If (16) is satisfied, then, relaxation will in general be suppressed. Of course, while (16) is a sufficient condition for relaxation suppression, it is not necessary: in principle, the trajectories could even wander over distances larger than ∆q kr (t f ) but without a sufficiently complex flow to drive the ensemble towards equilibrium. (As discussed in section 7, it is reasonable to assume that this is unlikely.) While (16) provides a condition for the freezing of quantum nonequilibrium, in practice it is likely to be more stringent than is necessary. Without attempting to give a rigorous justification, we expect that there will be cases where the weaker condition suffices to prevent relaxation, at least partially (that is, some significant relaxation towards equilibrium will occur but significant deviations from equilibrium will remain). Generally speaking, we expect that the transition from essentially complete relaxation suppression to essentially full relaxation will take place when the ratio r ≡ |δq kr (t f )| eq /∆q kr (t f ) increases from r << 1 to r 1, with the critical demarcation line being somewhere in the neigbourhood of r ∼ 1. We therefore expect that the weaker condition (18) will define (approximately) essentially the whole of the suppression regime, including those cases where significant relaxation towards equilibrium does occur but where significant deviations from equilibrium still remain. (Note that (18) implies that 'most' of the ensemble cannot move by 'much' more than ∆q kr (t f ), in the sense given above.) Pending a more precise treatment, then, here we shall take (18) as our condition for the freezing -or at least partial freezing -of quantum nonequilibrium.
Let us now proceed to draw inferences from (18). Note first that the final displacement δq kr (t f ) has modulus |δq kr (t f )| ≤ where the equilibrium mean speed |q kr (t)| eq at time t is (the velocityq kr (q, t) being given by (6) as a time-dependent function of the total configuration q).
For the sake of clarity, let us explicitly demonstrate the last equality in (19). The initial equilibrium distribution |Ψ[q i , t i ]| 2 represents an ensemble of initial (total) configurations q i . From each q i , the de Broglie velocity field generates a trajectory q(t) (for the whole system), and each such trajectory implies a subsystem trajectory q kr (t). Thus, at any time t, the subsystem velocityq kr may be regarded as a function of q i and of t (assuming the wave functional is given). We may then writeq kr =q kr (q i , t) -where of courseq kr (q i , t) anḋ q kr (q, t) here denote two different functions of the first argument. (This notation is strictly speaking ambiguous, but clear from the context.) We then have (with the mean taken over the distribution as used above. (We have shifted notation back and forth, withq kr (t) anḋ q kr (q i , t) denoting the same thing.) Using x ≤ x 2 for any x, we then have . Now note that, at any time t, (where Ω denotes the usual quantum expectation value for an operatorΩ). The last equality follows from Thus, since (∂|Ψ|/∂q kr ) 2 ≥ 0, we have and so (where it is understood that quantities under the integral sign are evaluated at time t).
Since q 2 kr > 0, we also have and so Introducing the number operatorn kr , where n kr ≥ 0, the mean energy in the mode kr is Ĥ kr = ( n kr + 1 2 ) k a .
We then have The mean |δq kr (t f )| eq at time t f is to be compared with the width ∆q kr (t f ) (with respect to q kr ) of the quantum distribution |Ψ[q, t f ]| 2 at time t f . Using the uncertainty relation ∆q kr ∆π kr ≥ 1 2 and ∆π kr ≤ π 2 kr , we have 1/∆q kr ≤ 2 π 2 kr . Again using (25) we then have Combining the results (26) and (27), we obtain an upper bound for the ratio (where a f ≡ a(t f ), and so on). Note that n kr is in general a function of time t, and that the inequality (28) holds for any arbitrary (in general entangled) state Ψ.
We may now consider the following inequality, that the right-hand side of (28) is less than one, that is When this 'freezing inequality' is satisfied, |δq kr (t f )| eq /∆q kr (t f ) < 1 and initial quantum nonequilibrium will be (at least partially) 'frozen'. We may also write (29) directly in terms of Ĥ kr , yielding The dependence on the wave number k is of course still present in Ĥ kr . Roughly speaking, the freezing inequality (30) requires that the mean energy Ĥ kr in the mode kr be not too large over the time interval [t i , t f ] (see below).
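Collecting the inequalities used above, the bound (28) and the resulting freezing condition can be written schematically as follows; the derivation fixes the structure, but the overall order-one numerical prefactor depends on the precise definitions of the spreads and should be regarded as indicative only:

\frac{\langle |\delta q_{\mathbf{k}r}(t_f)| \rangle_{\rm eq}}{\Delta q_{\mathbf{k}r}(t_f)} \;\lesssim\; 4 \sqrt{ a_f^3 \langle \hat{H}_{\mathbf{k}r} \rangle_{t_f} } \int_{t_i}^{t_f} dt\, \sqrt{ \frac{\langle \hat{H}_{\mathbf{k}r} \rangle_t}{a^3} } \;=\; 4 k a_f \sqrt{ \langle \hat{n}_{\mathbf{k}r} \rangle_{t_f} + \tfrac{1}{2} } \int_{t_i}^{t_f} \frac{dt}{a^2}\, \sqrt{ \langle \hat{n}_{\mathbf{k}r} \rangle_t + \tfrac{1}{2} } ,

and the freezing inequalities (29) and (30) amount to the statement that this right-hand side is smaller than one.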
Generalisations
Before discussing the consequences of the above results, let us first generalise them to more realistic situations. The above derivation of the freezing inequality (29) (or (30)) assumed that the quantum state was pure and that the field was free. The derivation is easily generalised to mixed states and to (finite models of) interacting fields. With these generalisations in hand, one can then discuss nonequilibrium freezing for a mixed (for example thermal) ensemble of interacting particles, and one can apply the results to realistic models of the early universe.
Mixed States
In quantum theory, a mixed state is represented by a density operatorρ, which may be written as a decomposition with appropriate probability weights p α and pure states |Ψ α . For a scalar field φ, the quantum-theoretical distribution for φ will be The decomposition ofρ is generally non-unique, and different decompositions of the sameρ are physically equivalent in all respects. The situation is different in pilot-wave theory. A mixed quantum state is interpreted as a statistical mixture of physically-real pilot waves Ψ α , with probability weights p α , corresponding to a preferred decomposition ofρ [30]. For a given element of the ensemble, the de Broglian velocity of the actual configuration is determined by the actual pilot wave Ψ α . A different decomposition of ρ would generally yield different velocities, and so be physically distinct at the fundamental level. (Note that, in quantum nonequilibrium, the velocities and trajectories for single systems can be measured without necessarily disturbing the wave functions [11,14], enabling the preferred decomposition to be detected. The operational equivalence of different decompositions ofρ is a peculiarity of the quantum equilibrium state; see ref. [13].) Now, given such a preferred decomposition, for each pure subensemble with wave functional Ψ α [φ, t] -taking the system to consist of a scalar field φwe may define a distribution P α [φ, t] (generally = |Ψ α [φ, t]| 2 ) and an associated H-function H α = Dφ P α ln(P α /|Ψ α | 2 ) (for some appropriate measure Dφ). The whole ensemble has a distribution and the mean H-function obeys a coarse-graining H-theorem (for a closed system with constant p α ) [13]. The equilibrium minimum H = 0 (which may be approached in a coarse-grained sense) corresponds to H α = 0 and P α = |Ψ α | 2 for every α, so that (33) reduces to (32). Thus, we may discuss relaxation for a mixed state in terms of relaxation for its component pure subensembles. We may then consider the freezing inequality (29) (or (30)) for each pure subensemble separately. Clearly, the inequality might hold for some subensembles and not for others (or for all of them, or none).
If the (quantum) mean occupation number for state |Ψ α is n kr α ≡ Ψ α |n kr |Ψ α , then for a mixed state (31) the overall mean occupation number will be n kr = α p α n kr α .
For example, for a thermal ensemble with temperature T , we will have the Planck distribution n kr P = 1 e ω/kBT − 1 .
In general, n kr α for a pure subensemble will differ from n kr , and the total ensemble will contain a range of different values for n kr α . Initial quantum nonequilibrium will be frozen for the pure subensemble with wave functional Ψ α , if the corresponding quantity n kr α satisfies the freezing inequality (29) (with n kr replaced by n kr α ).
To investigate which (if any) pure subensembles will satisfy the freezing inequality (29), we need to know the quantities n kr α as functions of time, that is, we need to know which pure states |Ψ α are present in the total ensemble. Despite the operational equivalence of different decompositions ofρ in quantum theory, it has been argued that, in the case of thermal (canonical) ensembles, there is a natural probability measure on the space of normalised wave functions, the 'Gaussian adjusted projected measure', which is unique for eachρ, and which may be used to define a preferred decomposition [31]. This proposal has been applied to the case of an ideal gas (though described in terms of particle theory rather than field theory) [32]. For our purposes, we would need to apply the preferred measure to a thermal ensemble of wave functionals in field theory on expanding space, and use the results to deduce which (if any) subensembles of finite measure satisfy the freezing inequality (29) (or (30)). We do not attempt such a calculation here, but it should be clear that the problem is well-defined.
If certain pure subensembles -with labels α in some set S -are predicted to be frozen, then (assuming initial nonequilibrium) the total ensemble distribution of φ will take the form where P α = |Ψ α | 2 (for α ∈ S), and P [φ, t] will generally differ from the equilibrium result (32). The physics of nonequilibrium mixed states needs further development. In particular, one should explore how measurements could probe the nonequilibrium physics particular to a specific pure subensemble (noting again that, unlike in quantum theory, in nonequilibrium pilot-wave theory it is operationally meaningful to speak of the physics of component pure subensembles). However, the above suffices for the purposes of this paper.
Interacting Fields
Our derivation in section 4 of the freezing inequality (29) assumed that the field φ was free. The derivation is easily generalised to interacting fields, at least if one considers finite models with an appropriate high-frequency cutoff (so that divergences may be ignored).
Let the scalar field φ interact with other fields, denoted collectively by Φ. (These other fields need not be scalars.) We have a total Hamiltonian H total =Ĥ +Ĥ Φ +Ĥ I , whereĤ andĤ Φ are respectively the free Hamiltonians for φ and Φ, whileĤ I is the interaction Hamiltonian.
We may still of course write φ in terms of its Fourier components φ kr , and the free HamiltonianĤ still decomposes into a sumĤ = krĤ kr , withĤ kr = (n kr + 1 2 ) k a , exactly as before. Equation (22) still holds (for a pure subensemble with wave functional Ψ, and where the total configuration q now includes Φ as well as φ). So we still have the inequality (23). The other inequalities -such as (25) and ∆q kr ∆π kr ≥ 1 2 -are also valid as in the case of a free field. We therefore arrive again at the upper bound (28) and the freezing inequalities (29) and (30).
The only difference from the free case is in the time evolution of n kr (or of Ĥ kr ), which now involves contributions fromĤ I : The calculation of n kr t as a function of time t will then be more complicated than in the free case, where only the first term appears on the right hand side.
(The evolution of n kr t in the free case is studied in section 8.)
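The relation referred to above is the standard Ehrenfest-type equation (using that Ĥ_kr commutes with the free Hamiltonians Ĥ and Ĥ_Φ):

\frac{d\langle \hat{H}_{\mathbf{k}r} \rangle}{dt} \;=\; \left\langle \frac{\partial \hat{H}_{\mathbf{k}r}}{\partial t} \right\rangle \;-\; i \left\langle [\hat{H}_{\mathbf{k}r}, \hat{H}_{\rm total}] \right\rangle \;=\; \left\langle \frac{\partial \hat{H}_{\mathbf{k}r}}{\partial t} \right\rangle \;-\; i \left\langle [\hat{H}_{\mathbf{k}r}, \hat{H}_I] \right\rangle ,

where the first term reproduces the free-field evolution studied in section 8 and the second term is the interaction contribution.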
General Implications of the Freezing Inequality
Quite generally, then, even for an interacting field in a mixed state, we may conclude that relaxation will be suppressed -that is, nonequilibrium will be frozen -for modes whose (time-dependent) mean occupation number n kr satisfies the inequality (29). For a given time evolution, defined by a(t) and n kr t (for all kr) on [t i , t f ], it is of course possible that (29) will not be satisfied for any value of k, and that all modes relax (at least approximately) towards equilibrium during the interval [t i , t f ]. On the other hand if, for a given time evolution, (29) is satisfied only for certain values of k, then we can predict that significant deviations from quantum equilibrium are to be expected only for those particular values of k.
We emphasise that, for each mode, whether or not the inequality (29) is satisfied depends on the history of the expansion and on the time evolution of the quantum state of the field.
For a radiation-dominated expansion on [t i , t f ], with a(t) = a f (t/t f ) 1/2 , we may make a general statement about the kind of modes that can satisfy (29): the physical wavelength λ phys (t f ) = a f (2π/k) at time t f must be larger than the Hubble radius H −1 f at time t f (assuming that t f (1.17)t i ). This is easily shown for any quantum state. Since n kr ≥ 0, the inequality (29) (assuming it to hold) implies that where H −1 f = 2t f and where the right-hand side is indeed larger than H −1 (This is of course not to suggest that the freezing inequality is satisfied for all super-Hubble modes: rather, if the inequality is satisfied, then the corresponding modes must be super-Hubble.) Note that, in any reasonable application of this result, the factor ln(t f /t i ) will not be large. For example, taking t i ∼ t P ∼ 10 −43 s, for t f ∼ 10 −35 s (the time at which inflation begins in some models [28]) we have ln(t f /t i ) ∼ ln 10 8 ∼ 20, while even for t f ∼ 1 s (the time of neutrino decoupling) we have ln(t f /t i ) ∼ ln 10 43 ∼ 10 2 . The factor 2π ln(t f /t i ) is then likely to be at most of order 10 2 − 10 3 , in which case the minimal value of λ phys (t f ) for nonequilibrium field modes will be at most two or three orders of magnitude larger than the Hubble radius H −1 f . On the other hand, again for a radiation-dominated expansion, the true lower bound on λ phys (t f ) (set by (29)) will be much larger than 2πH −1 f ln(t f /t i ) if n kr t >> 1 during the period [t i , t f ], as is clear from (29).
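As a worked check of the numbers quoted here, take a(t) = a_f (t/t_f)^{1/2} and use ⟨n_kr⟩ ≥ 0 in the schematic form of (29) given above (the result is consistent with the factors quoted in the text):

\int_{t_i}^{t_f} \frac{dt}{a^2} \;=\; \frac{t_f}{a_f^2} \ln\frac{t_f}{t_i} ,

so that (29) requires, at the very least, (2 k t_f / a_f) \ln(t_f/t_i) < 1, i.e.

\lambda_{\rm phys}(t_f) \;=\; \frac{2\pi a_f}{k} \;>\; 4\pi t_f \ln\frac{t_f}{t_i} \;=\; 2\pi H_f^{-1} \ln\frac{t_f}{t_i} , \qquad H_f^{-1} = 2 t_f ,

which exceeds the Hubble radius H_f^{-1} precisely when ln(t_f/t_i) > 1/2π, that is when t_f > e^{1/2π} t_i ≈ 1.17 t_i.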
Thus, de Broglie-Bohm theory (with the assumption of early quantum nonequilibrium at some initial time t_i) predicts that residual or 'frozen' nonequilibrium will exist at later times t_f > t_i for modes satisfying the inequality (29), where for a radiation-dominated expansion the physical wavelength λ_phys(t_f) of nonequilibrium modes at time t_f must be bigger than 2πH_f^{-1} ln(t_f/t_i). For a radiation-dominated expansion, and assuming t_f/t_i >> 1, we then have (inserting factors of ℏ and c) a corresponding upper bound on the mean mode energy ⟨Ĥ_kr⟩ (where, dimensionally speaking, H_f^{-1} = 2t_f is the Hubble time and cH_f^{-1} is the Hubble radius).
Finally, we note that violation of the freezing inequality (29) in the infra-red limit k → 0 requires that n kr be divergent as k → 0. Alternatively, for (30) to be violated as k → 0, the mean energy per mode Ĥ kr must remain finite as k → 0.
Relaxation for Modes Violating the Freezing Inequality
We have shown that, for modes satisfying (29), relaxation will be suppressed over the time interval [t i , t f ]. For a radiation-dominated expansion we know from (35) that such modes, if they exist, must have super-Hubble wavelengths.
Further, as discussed in section 3, we know from previous studies that relaxation is likely to occur in the short-wavelength (Minkowski) limit. What can we say about modes that violate the freezing inequality (29) without approaching the Minkowski limit? Our derivation of the upper bound (28) made use of several general inequalities (such as ⟨π²_kr⟩ ≤ 2a³⟨Ĥ_kr⟩). For a large class of quantum states, these general inequalities could be replaced by approximate equalities, to be used as rough, order-of-magnitude estimates (for example, ⟨π²_kr⟩ ∼ 2a³⟨Ĥ_kr⟩). For such states, then, we have an estimated ratio rather than a mere upper bound. It then follows that if the reverse of (29) holds (instead of (29)), or equally if the reverse of (30) holds (instead of (30)), then |δq_kr(t_f)|_eq/Δq_kr(t_f) ≳ 1.
From this we may reasonably deduce that relaxation, or at least significant relaxation, is likely to occur (except of course for special states with very simple velocity fields). Unlike our proof of relaxation suppression for modes satisfying (29), this is not a rigorous result. (It is roughly analogous to saying, in classical kinetic theory, that significant relaxation to thermal equilibrium is likely to occur, over timescales of order the mean free time, if the mean magnitude of momentum transferred in molecular collisions is comparable to the width of the equilibrium momentum distribution.) To delineate the precise behaviour in this region requires further study, perhaps through numerical simulations.
To avoid potential misunderstandings, we should emphasise that relaxation might of course be suppressed for special quantum states violating the freezing inequality (29) (in particular, states with an especially simple de Broglie velocity field). However, one should bear in mind that we are concerned with the evolution of quantum nonequilibrium in our actual universe, which is known to have had a complex and violent past history. Thus, for example, in a standard radiation-dominated phase, special states with no entanglement at any time are of no interest: we are concerned with states that are likely to have actually occurred. In seeking a general criterion for the freezing of early nonequilibrium, it is then of no use to point to special quantum states exhibiting particularly simple velocity fields. 3 In contrast, the freezing inequality (29) is a natural constraint on quantum states in general, providing a realistic pointer to where nonequilibrium might be found in our actual universe. And violation of (29) is, as we have argued in this section, likely to imply relaxation or at least significant relaxation. To make precise predictions, then, we require a specific cosmological model, and an explicit expression for n kr t as a function of time t. We leave such detailed studies for future work. Here, we give a method for calculating n kr t (and Ĥ kr t ) for an arbitrary pure quantum state. This method might prove useful.
The mean energy
W_kr ≡ ⟨Ĥ_kr⟩ = (⟨n_kr⟩ + 1/2)(k/a) in the mode kr evolves in time according to dW_kr/dt = ⟨∂Ĥ_kr/∂t⟩, which implies (using ȧ = Ha)

\frac{dW_{\mathbf{k}r}}{dt} = -3H K_{\mathbf{k}r} + H U_{\mathbf{k}r} ,

where U_kr ≡ ½ a³ω² ⟨q²_kr⟩ is the mean potential energy. (For an interacting field, as discussed in section 5.2, dW_kr/dt would contain additional terms from −i⟨[Ĥ_kr, Ĥ_I]⟩.) The rate of change of ⟨n_kr⟩ = (a/k)W_kr − 1/2 is then given by

\frac{d\langle n_{\mathbf{k}r} \rangle}{dt} = \frac{2aH}{k} \left( U_{\mathbf{k}r} - K_{\mathbf{k}r} \right) ,

where K_kr ≡ ⟨π²_kr⟩/2a³ is the mean kinetic energy. To solve for W_kr(t) = K_kr(t) + U_kr(t), and hence the required function ⟨n_kr⟩_t, one may write first-order (linear) differential equations for K_kr, U_kr and for the quantity χ_kr ≡ ½⟨q̂_kr π̂_kr + π̂_kr q̂_kr⟩. Using d⟨Ω̂⟩/dt = −i⟨[Ω̂, Ĥ]⟩ + ⟨∂Ω̂/∂t⟩, it is readily shown that

\frac{dK_{\mathbf{k}r}}{dt} = -3H K_{\mathbf{k}r} - \omega^2 \chi_{\mathbf{k}r} , \qquad \frac{dU_{\mathbf{k}r}}{dt} = H U_{\mathbf{k}r} + \omega^2 \chi_{\mathbf{k}r} , \qquad \frac{d\chi_{\mathbf{k}r}}{dt} = 2 (K_{\mathbf{k}r} - U_{\mathbf{k}r}) .
If H = ȧ/a and ω = k/a are known functions of time, then given values of K_kr, U_kr, χ_kr at any one time (say t_i or t_f) - where these values are determined by the wave functional Ψ at that time (see footnote 4) - the equations (43) determine K_kr, U_kr, χ_kr at all times, yielding W_kr(t) = K_kr(t) + U_kr(t) as well as the required function ⟨n_kr⟩_t = a(t)W_kr(t)/k − 1/2. Introducing the vector X = (K_kr, U_kr, χ_kr)^T, the equations (43) take the form dX/dt = AX, where A is the time-dependent matrix

A = \begin{pmatrix} -3H & 0 & -\omega^2 \\ 0 & H & \omega^2 \\ 2 & -2 & 0 \end{pmatrix} .

For interesting forms of a, such as a ∝ t^{1/2}, it seems likely that these equations will have to be solved numerically.
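As an illustration of such a numerical treatment, the following minimal Python sketch integrates (43) for a radiation-dominated background. The parameter values and initial data are illustrative only, and the 'freezing ratio' is evaluated in the schematic form 4√(a_f³W_f) ∫dt √(W/a³), whose order-one prefactor is an assumption to be checked against (28):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not values from the text): radiation-dominated
# background a(t) = a_f (t/t_f)^(1/2), H = 1/(2t), omega = k/a.
t_i, t_f, a_f, k = 1.0, 100.0, 1.0, 0.01

def a(t):     return a_f * np.sqrt(t / t_f)
def H(t):     return 0.5 / t
def omega(t): return k / a(t)

def rhs(t, X):
    # Equations (43): X = (K, U, chi) = (mean kinetic, mean potential, cross term).
    K, U, chi = X
    w2 = omega(t) ** 2
    return [-3.0 * H(t) * K - w2 * chi,
            H(t) * U + w2 * chi,
            2.0 * (K - U)]

X0 = [1e-4, 1.0, 0.0]   # illustrative initial data with K_i << U_i (cf. section 9)
sol = solve_ivp(rhs, (t_i, t_f), X0, dense_output=True, rtol=1e-8, atol=1e-12)

ts = np.linspace(t_i, t_f, 2000)
K, U, chi = sol.sol(ts)
W = K + U                        # mean mode energy <H_kr>(t)
n = a(ts) * W / k - 0.5          # mean occupation number <n_kr>(t)

# Schematic freezing ratio: 4 sqrt(a_f^3 W_f) * integral of sqrt(W / a^3) dt.
integral = np.trapz(np.sqrt(W / a(ts) ** 3), ts)
ratio_bound = 4.0 * np.sqrt(a(t_f) ** 3 * W[-1]) * integral
print(f"<n_kr>(t_f) = {n[-1]:.3f}, schematic freezing ratio = {ratio_bound:.3e}")
```

A ratio well below one then signals suppression of relaxation for the chosen mode, while a ratio of order one or larger signals that significant relaxation is to be expected.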
It would be interesting to study this system of equations, and to establish the conditions under which solutions for n kr t (or W kr (t) = Ĥ kr t ) satisfy the freezing inequality (29) (or (30)). We leave this for future work.
Approximate Solutions for n kr t Satisfying the Freezing Inequality
However, it is important to show first of all that solutions for n kr t satisfying (29) can exist for some values of k. Here, we construct approximate solutions of (43) valid in the long-wavelength limit k → 0, that satisfy (29) for appropriate initial conditions and time intervals. The conditions of validity are probably too restrictive for useful application to realistic cosmological scenarios, and we give these solutions here only to show that solutions satisfying (29) are indeed possible.
We consider a radiation-dominated expansion, for which a ∝ t 1/2 and H = 1/2t. Dropping the indices kr, we find approximate solutions to (43) satisfying (for appropriate values of k) ω 2 |χ| << HK, HU (where K, U are non-negative), or . We then have the simple solutions (that is, K ∝ 1/a 3 and U ∝ a), and Note that, for these solutions, the quantities q 2 kr = 2U kr /(k 2 a) and π 2 kr = 2a 3 K kr are time independent.
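Explicitly, under the approximation (46) the first two equations of (43) decouple, so that (a sketch of what the solutions (47)-(48) amount to, with a ∝ t^{1/2}):

\frac{dK}{dt} \simeq -3HK \;\Rightarrow\; K(t) = K_i \left( \frac{a_i}{a} \right)^3 \propto t^{-3/2} , \qquad \frac{dU}{dt} \simeq HU \;\Rightarrow\; U(t) = U_i \left( \frac{a}{a_i} \right) \propto t^{1/2} ,

and hence ⟨q²_kr⟩ = 2U/(k²a) and ⟨π²_kr⟩ = 2a³K are indeed constant in time, consistent with an essentially frozen distribution in this regime.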
We need to show the consistency of the solutions (47) and (48) with the assumed approximation (46). This may be done if k is appropriately small. Specifically, writing we have (since Kt and U t respectively decrease and increase with time) If we assume that (since K and U respectively decrease and increase), and so the approximation condition (46) is indeed satisfied. For k satisfying (49), we then have the approximate solutions (47) for K and U . We wish to show explicitly that, for these solutions, there are values of k that satisfy the freezing inequality (29) (or (30)).
To show this, for simplicity we first choose initial conditions with K i << U i . Since K decreases with time, we then have min {K f , U i } = K f and (from (49)) the solutions (47) are valid if (where we continue to suppress the indices kr), or (using (47)) Inserting this into the freezing inequality (30), and using a = a f (t/t f ) 1/2 and H −1 f = 2t f , and taking t i /t f << 1, we obtain Since Ĥ i ≈ U i we have (restoring indices kr) the freezing inequality (Note that, since for the above solution Ĥ kr increases with time, the general result (37) also applies, with Ĥ kr i = Ĥ kr min < 1/8H −1 f . This is consistent with (53), since we have assumed t i /t f << 1 which implies a i /a f << 1.) Thus, for a given mode kr satisfying (50), if Ĥ kr i is sufficiently small (satisfying (53)), then relaxation will be suppressed and initial nonequilibrium (if it exists) will be frozen. And it is indeed always possible to choose Ĥ kr i so as to satisfy (53), provided k is sufficiently small. For the only general constraint If instead we choose initial conditions with K i >> U i , we will have K >> U only for as long as t i /t f is not much smaller than 1. Over this limited time, we have Ĥ Inserting (54) into the freezing inequality (30), and assuming that t i /t f is small compared to 1 (but not so small as to invalidate the approximation K >> U ), we obtain the freezing inequality Again using Ĥ kr i ≥ (1/2)(k/a i ), we now find that it is possible to satisfy By assumption, a i /a f is not much smaller than 1, so we still have λ phys H −1 f . (In any case, it follows from (55) that the solution is valid only if a f λ > H −1 f . For we have so that (55) gives
Possible Consequences of Early Nonequilibrium Freezing
The freezing inequality (29) (or (30)) makes it possible, for the first time, to make quantitative predictions for nonequilibrium deviations from quantum theory, if we are given a specific cosmological model. The potential consequences are many, and much remains to be done to develop them. Here, we restrict ourselves to a preliminary sketch of some possible nonequilibrium effects, in particular: corrections to inflationary predictions for the CMB, non-inflationary super-Hubble field correlations, and relic nonequilibrium particles. We hope to develop further details elsewhere, in the context of specific (and realistic) cosmological models. As we saw in section 6, for a radiation-dominated expansion (29) implies the general lower bound (35) on the physical wavelength λ phys (t f ) -the wavelength of what might be termed 'relic nonequilibrium field modes' -at the final time t f . In terms of the ambient temperature T , where T ∝ 1/a ∝ t −1/2 , the lower bound may be written as As we have discussed, this lower bound will in practice be not more than two or three orders of magnitude larger than the Hubble radius H −1 f at time t f . Note that, to satisfy the freezing inequality, the bound (57) is a necessary but not sufficient condition. A detailed understanding of where nonequilibrium freezing can occur requires, as discussed in section 5.1, a calculation of the time evolution of the mean occupation numbers n kr α for the pure subensembles (with wave functionals Ψ α ) contained in the early mixed state, to find out which -if any -of these subensembles satisfy (29). This is a matter for future work. Here, we consider only the necessary condition (57), which provides a pointer to where residual nonequilibrium could be found (pending the said more complete analysis). In particular, (57) suggests that one should look for nonequilibrium above a specific critical wavelength.
Corrections to Inflationary Predictions for the CMB
In inflationary cosmology, the universe undergoes a period of exponential expansion, a(t) ∝ e Ht , driven by the energy density of an approximately homogeneous scalar or inflaton field φ, where quantum fluctuations in φ seed the primordial curvature perturbations that are later imprinted as temperature anisotropies in the CMB [29].
To a first approximation, inflation predicts that modes of the inflaton field will have a quantum variance and a scale-invariant power spectrum where |φ k | 2 QT is obtained from the Bunch-Davies vacuum in de Sitter space, for λ phys >> H −1 . In the slow-roll limit (Ḣ → 0), this results in a scaleinvariant spectrum, P QT R (k) = const., for the primordial curvature perturbation R k , in approximate agreement with what is observed in the CMB [33]. Now, quantum nonequilibrium in the early Bunch-Davies vacuum generally implies deviations from (58). It has been shown [15,20] that if (microscopic) quantum nonequilibrium exists at the onset of inflation, then instead of relaxing it will be preserved during the inflationary phase, and furthermore it will be transferred to macroscopic lengthscales by the expansion of physical wavelengths λ phys ∝ a(t) ∝ e Ht . Specifically, for each mode k, explicit calculation shows that the width of the evolving nonequilibrium distribution remains in a constant ratio with the width of the equilibrium distribution. (This is essentially because the vacuum state has the special property of being non-entangled across modes, so that the de Broglie-Bohm trajectories decompose into independent one-dimensional motions. See ref. [20].) If we write the nonequilibrium variance as (where equilibrium corresponds of course to ξ(k) = 1 for all k), the power spectrum for R k is then just the quantum result multiplied by the 'nonequilibrium factor' ξ(k): that is, P R (k) = P QT R (k)ξ(k).
Thus, quantum nonequilibrium at the beginning of inflation will generally break the scale invariance of P R (k). As discussed in detail elsewhere [20], measurements of the angular power spectrum for the CMB may be used (in the context of inflation) to set bounds on ξ(k).
Given these results, the next step is to try to predict some features of the function ξ(k). This requires a constraint on the form of nonequilibrium at the onset of inflation.
One possible strategy is to consider a pre-inflationary era, and to derive constraints on residual nonequilibrium from that era. If we take the pre-inflationary era to be radiation-dominated (a ∝ t 1/2 ), the lower bound (57) shows that nonequilibrium (for whatever fields may be present in that era) can survive only for sufficiently large, super-Hubble wavelengths. Since λ phys ∝ t 1/2 and H −1 ∝ t, at sufficiently early times all physical wavelengths will in fact be super-Hubble (λ phys > H −1 ), raising the possibility of nonequilibrium freezing for the corresponding modes (if the freezing inequality (29) is satisfied). During the subsequent inflationary phase, H −1 is (approximately) constant, and relevant cosmological fluctuations originate from inside H −1 . Some of these fluctuating modes could be out of equilibrium only if they evolved from modes that were outside the Hubble radius in the pre-inflationary phase.
Thus, in order to obtain nonequilibrium corrections to inflationary predictions for the CMB, arising from an earlier pre-inflationary era, some of the preinflationary nonequilibrium modes must enter the Hubble radius, and they must avoid complete relaxation by the time inflation begins. Because pre-inflationary modes with larger values of λ enter the Hubble radius later, they are presumably less likely to relax before inflation begins, in which case residual nonequilibrium will be possible only for λ larger than some infra-red cutoff λ c . (For further discussion, see ref. [20]. ) We hope that future work, based on a specific pre-inflationary model, will provide a prediction for λ c , as well as some indication of the form of the nonequilibrium spectrum for λ λ c . Note that ξ(k) < 1 at wave number k implies that the nonequilibrium width of the corresponding inflaton mode is less than the equilibrium width. One might reasonably expect this, in view of the hypothesis that quantum noise arose from statistical relaxation processes in the very early universe: it seems natural to assume that early nonequilibrium would have a less-than-quantum dispersion, ξ(k) < 1, as opposed to a larger-than-quantum dispersion, ξ(k) > 1 (though the latter is of course possible in principle). Thus, a dip ξ(k) < 1 in the power spectrum below some critical wave number k c = 2π/λ c might be naturally explained in terms of quantum nonequilibrium surviving from a very early pre-inflationary era.
It has in fact been found that an infra-red cutoff in the primordial power spectrum provides a slightly better fit to the 3-year WMAP data; however, the improvement is not sufficient to justify introducing the additional cutoff parameter in the model [34].
Super-Hubble Correlations without Inflation?
As noted in the introduction, one motivation for assuming quantum nonequilibrium at the big bang was that the resulting nonlocality at early times could eliminate the cosmological horizon problem (which persists, as we have mentioned, even in some inflationary models [19]). One might also ask if early quantum nonequilibrium could provide an alternative, non-inflationary means of laying down primordial curvature perturbations at super-Hubble lengthscales, in a standard Friedmann cosmology. Since we have shown that nonequilibrium can remain frozen at super-Hubble scales, one may ask if such nonequilibrium could generate appropriate super-Hubble correlations without the need for an inflationary era.
The Bunch-Davies vacuum for a scalar field φ, with variance given by (58) (at long wavelengths), has the remarkable property that the two-point correlation function is independent of distance |x 1 − x 2 |, as is readily verified for |φ k | 2 = |φ k | 2 QT ∝ 1/k 3 . As a first step, one may ask how this inflationary quantum behaviour could be mimicked by a non-inflationary vacuum in quantum nonequilibrium.
Consider a vacuum state whose quantum variance is |φ_k|^2_QT ∝ k^(m_QT) for some fixed index m_QT, and assume that the quantum two-point function ⟨φ(x_1)φ(x_2)⟩ decreases with distance, which requires m_QT > −3. Then consider the same vacuum state in quantum nonequilibrium, with |φ_k|^2 = |φ_k|^2_QT ξ(k), assuming that ξ(k) ∝ k^µ for some fixed index µ. To obtain a nonequilibrium two-point function that is independent of distance, we require m_QT + µ = −3, and hence µ < 0, so that (in this simple example) the nonequilibrium function ξ(k) must increase as k → 0.
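The distance-independence condition quoted here follows from the standard form of the equal-time two-point function; the following is a sketch of that reasoning, with numerical prefactors omitted.

```latex
% Sketch of the equal-time two-point function (prefactors omitted):
\langle \phi(\mathbf{x}_1)\,\phi(\mathbf{x}_2)\rangle
  \;\propto\; \int d^{3}k \; |\phi_{\mathbf{k}}|^{2}\,
              e^{\,i\mathbf{k}\cdot(\mathbf{x}_1-\mathbf{x}_2)}
  \;\propto\; \int_{0}^{\infty} dk \; k^{2}\, |\phi_{\mathbf{k}}|^{2}\,
              \frac{\sin kr}{kr}\,,
  \qquad r \equiv |\mathbf{x}_1-\mathbf{x}_2| .
% With |\phi_k|^2 \propto k^{m_{QT}+\mu}, the small-k integrand scales as
% k^{2+m_{QT}+\mu}; for m_{QT}+\mu = -3 it behaves as dk/k (since
% sin(kr)/(kr) -> 1 as kr -> 0), so the long-wavelength contribution is
% independent of r, reproducing the behaviour of the 1/k^3 case.
```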
As things stand, we are unable to say if such behaviour for ξ(k) is likely to emerge from any reasonable model. However, given the upper bound (28) on the ratio |δq_kr(t_f)|_eq/Δq_kr(t_f), one could study how the 'degree of freezing' varies with k (for example for k → 0), where a high or low degree of freezing could be defined respectively as a low or high value of the upper bound on the right-hand side of (28). For a specific cosmological model, with some assumptions about initial conditions, this could provide constraints on the behaviour of the function ξ(k). The results will obviously depend on how n_kr varies with k.
Finally, we note that a nonlocal model, based not on hidden variables or quantum nonequilibrium but on the holographic principle, has been shown to generate the required (approximately) scale-invariant perturbation spectrum at super-Hubble scales [35]. Whether or not early quantum nonequilibrium could reproduce such effects in a natural way remains to be seen.
Relic Nonequilibrium Particles
We saw in section 10.1 that relic nonequilibrium field modes could, in the case of the inflaton, change the power spectrum for primordial curvature perturbations, resulting in observable effects in the CMB. Thus, inflationary cosmology provides a simple and definite means whereby early nonequilibrium could yield observable consequences today. In this section, in contrast, we shall attempt to outline some much more complicated and uncertain scenarios, according to which relic nonequilibrium field modes (for some appropriate field) might in some circumstances manifest as relic nonequilibrium particles that could be detected today. Unfortunately, for these scenarios to be at all plausible, some questionable assumptions have to be made, and at the time of writing it is not clear if these scenarios can really work in practice.
There is of course no preferred definition of particle states in quantum field theory on expanding space, except in the short-wavelength limit (where one recovers the usual Minkowski definition) [36]. The very notion of 'particles' is in fact highly ambiguous for modes of frequency lower than the typical inverse timescale over which the spacetime metric changes. In a cosmological setting, this means that there is no generally useful definition of quantum particle states at wavelengths larger than the Hubble radius. Thus, if we consider relic nonequilibrium field modes from a radiation-dominated era -where the bound (57) implies that such modes must have super-Hubble wavelengths -we must be careful not to interpret such modes too naively in terms of (quantum) particle states. However, pending a more precise treatment, one might reasonably assume that if such modes enter the Hubble radius at later times, then they will manifest as (approximately-defined) particle states in the usual sense.
One should also bear in mind that, generally speaking, excitations of super-Hubble modes will not be produced by the local processes of particle scattering and decay (which are not expected to be effective over lengthscales larger than the instantaneous Hubble radius H −1 ). However, such excitations will of course be produced by the global effects of spatial expansion.
In order to maximise the chance of obtaining relic nonequilibrium particles that could be detected in practice (in particular, with energies that are not so low as to be completely out of range), we ought to try to minimise the lower bound on the mode wavelength defined by (35) or (57). This can be done by choosing the final time t_f to be as small as possible, subject to the constraint that further relaxation may be neglected for times later than the chosen value of t_f. Thus, one might take t_f to be the time t_dec at which the relevant particle species decouples. For one might reasonably assume that relaxation may be neglected (at all wavelengths) for t > t_dec, if the quantum states, defined post-decoherence, are such that the associated de Broglie velocity field is sufficiently simple (as occurs, for example, for energy eigenstates). For a super-Hubble mode at t_dec that becomes sub-Hubble at later times, it is then conceivable that any nonequilibrium present at the time t_dec could persist until much later. (A proper discussion of this scenario would require an analysis of decoherence before and after decoupling.) If we make the above assumptions, the key question is then whether residual nonequilibrium field modes can exist at the time t_f = t_dec. From (57), this is possible if the modes have a physical wavelength satisfying the lower bound (61) (obtained on inserting the Boltzmann constant), where H^-1_dec and T_dec are respectively the Hubble radius and temperature at time t_dec. We have λ_phys(t_dec) = a_dec λ, where a_dec = T_0/T_dec (with T_0 ≃ 2.7 K the temperature today). Assuming that decoupling occurs before the end of the radiation-dominated phase, we also have H^-1_dec = 2t_dec, where t_dec may be expressed in terms of T_dec using the standard radiation-era temperature-time relation (62). Inserting the speed of light c, the lower bound (61) then becomes the bound (64), which provides a lower bound on the wavelength λ today at which nonequilibrium could be found. The freezing inequality (29), and the resulting lower bound (64), have been derived in this paper for massless scalar fields only. One certainly expects to find comparable results for more general massless boson fields, such as the electromagnetic field. For fermions, however, a separate analysis is required. There are different approaches to the pilot-wave theory of fermions, and the details of nonequilibrium freezing may depend on which model is adopted. One might try to derive a fermionic analogue of the freezing inequality using, for example, the Dirac sea pilot-wave model [37]. Pending such extensions of our analysis, here we assume that the lower bound (64) applies (at least approximately) to fermions as well, provided they are effectively massless at the temperature T_dec (that is, of mass m ≪ k_B T_dec/c^2).
With this understanding, let us now apply the approximate result (64) to various particle species, both bosonic and fermionic. For definiteness, we first consider a standard Friedmann cosmology with no inflationary period, taking our initial conditions at the Planck era, k_B T_i ∼ k_B T_P ∼ 10^19 GeV. (An alternative possibility, of nonequilibrium relic particles arising from the decay of the inflaton, is considered below.) Photons decouple from matter at k_B (T_dec)_γ ∼ 0.3 eV. From (64) we then have a lower bound λ_γ ≳ 0.7 × 10^30 cm, which exceeds the Hubble radius today, H^-1_0 ≃ 10^28 cm. If instead we consider neutrinos, which decouple at k_B (T_dec)_ν ∼ 1 MeV, we have λ_ν ≳ 1.7 × 10^23 cm ≃ 5.5 × 10^4 pc (66) (or ∼ 10^5 light years). Residual nonequilibrium for relic neutrinos could plausibly exist today only at such enormous wavelengths, that is, at correspondingly tiny energies. Unfortunately, this is of course far outside any realistic range of detection. (Note, again, the implicit assumption being made, that if nonequilibrium super-Hubble modes at t_dec enter the Hubble radius at t > t_dec, they will manifest as nonequilibrium particle states.) The situation improves drastically, however, if one considers particles that decouple soon after the Planck era. Gravitons, for example, are expected to decouple at a temperature (T_dec)_g close to T_P. Writing the corresponding lower bound (64) for this case yields a wavelength scale that might be compared with the range of wavelengths expected for a (thermal) relic graviton background, whose temperature today is estimated to be (T_0)_g ∼ 1 K [38]. At this temperature, the spectral energy density of a Planck distribution peaks at the wavelength λ_max(1 K) ≃ 0.3 cm. There may also exist other particles that decouple not too long after the Planck era, and that (unlike the graviton) are unstable, eventually producing decay products that could be more easily detected today. A natural candidate, arising out of current supersymmetric theories of high-energy physics, is the unstable gravitino G̃, which has been estimated to decouple at a temperature [39] k_B (T_dec)_G̃ ≡ x_G̃ (k_B T_P) ≈ (g_*/230)^(1/2) (1 TeV/m_gl)^2 (m_G̃/10^3 GeV)^2 (k_B T_P), where g_* is the number of spin degrees of freedom (for the effectively massless particles) at the temperature (T_dec)_G̃, m_gl is the gluino mass, and m_G̃ is the gravitino mass. This provides us with an estimate for the lower bound (68) in the case of gravitinos. For the purposes of illustration, if we take (g_*/230)^(1/2) ∼ 1 and (1 TeV/m_gl)^2 ∼ 1, then x_G̃ ≈ (m_G̃/10^3 GeV)^2.
If the gravitino is not the lightest supersymmetric particle, then it will indeed be unstable. For large m_G̃, the total decay rate Γ_G̃ has been estimated [40] in terms of m_G̃ and the Planck mass m_P ∼ 10^19 GeV. The time (t_decay)_G̃ at which the gravitino decays is of order the lifetime 1/Γ_G̃. Using (62), the corresponding decay temperature (T_decay)_G̃ may then be obtained. For example, again for the case m_G̃ ≈ 100 GeV, the relic gravitinos decay when k_B (T_decay)_G̃ ∼ 1 keV. This is prior to photon decoupling, so that any (potentially nonequilibrium) photons produced by the decaying gravitinos would interact strongly with matter and quickly relax to quantum equilibrium. To obtain gravitino decay after photon decoupling, we would need k_B (T_decay)_G̃ ≲ k_B (T_dec)_γ ∼ 0.3 eV, or m_G̃ ≲ 0.5 GeV. For such small gravitino masses, however, decoupling occurs at (roughly) (T_dec)_G̃ = x_G̃ T_P ≈ (m_G̃/10^3 GeV)^2 T_P ≲ 10^-7 T_P, and (68) (with x_G̃ ≲ 10^-7) yields the much larger lower bound λ_G̃ ≳ 10^7 cm. Thus, it may prove more promising to consider decay products other than photons, species that have already decoupled by the time the gravitino decays, while allowing larger gravitino masses. These could in turn decay into photons at later times, or they might be detected directly.
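For orientation, the redshift factor that converts a physical length at decoupling into its size today (the factor a_dec = T_0/T_dec used in the estimates above) can be sketched numerically; this illustrative snippet reproduces only that stretching factor, not the full bound (64), whose remaining factors are not reproduced here.

```python
# Sketch of the redshift factor a_dec = T_0 / T_dec used in the estimates above:
# a comoving wavelength lambda has physical size lambda_phys(t_dec) = a_dec * lambda
# at decoupling, so a physical length at decoupling is stretched by T_dec / T_0
# by today. Only this stretching factor is reproduced; the remaining factors in
# the lower bound (64) are not.
K_B_EV_PER_K = 8.617e-5    # Boltzmann constant in eV per kelvin
T0_KELVIN = 2.7            # temperature today

def length_today_cm(length_at_dec_cm, kT_dec_eV):
    T_dec_kelvin = kT_dec_eV / K_B_EV_PER_K
    return length_at_dec_cm * (T_dec_kelvin / T0_KELVIN)

# Example: a 1 cm physical length at neutrino decoupling (k_B T_dec ~ 1 MeV)
# is stretched to roughly 4 x 10^9 cm today.
print(length_today_cm(1.0, 1.0e6))
```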
There are of course strong constraints on the presence of gravitinos in cosmological models, in particular from the abundance of light elements emerging from big-bang nucleosynthesis and from limits on dark matter abundance. These constraints have been extensively studied -see, for example, ref. [41] -and the subject is an active area of current research. Our hope is that an acceptable and compelling scenario will eventually be found, satisfying the standard cosmological constraints and at the same time allowing the possibility of relic nonequilibrium surviving in particles that could be detected today. To develop such a scenario in detail is a topic for future work.
So far in this section, we have assumed a standard (non-inflationary) Friedmann expansion, with initial nonequilibrium at around the Planck era. An alternative scenario is obtained if we consider relic nonequilibrium particles in the context of inflationary cosmology. If inflation did occur, the density of any relic particles (nonequilibrium or otherwise) from a pre-inflationary era will of course be so diluted as to be completely undetectable today. However, one may consider relic particles that were created at the end of inflation, by the decay of the inflaton field itself.
As discussed in section 10.1, during inflation the inflaton field does not relax to quantum equilibrium, and in fact the exponential expansion of space transfers any initial nonequilibrium from microscopic to macroscopic lengthscales. The inflaton field, then, is a prime candidate for a carrier of primordial quantum nonequilibrium. As well as manifesting as statistical anomalies in the CMB, such nonequilibrium in the inflaton field could manifest as nonequilibrium in its decay products, where in standard inflationary scenarios inflaton decay is in fact the source of the matter and radiation present in our universe today.
The process of 'preheating' is driven by the homogeneous and essentially classical part of the inflaton field (that is, by the k = 0 mode) [42]. Here, the inflaton is treated as a classical external field, acting on other (quantum) fields which become excited by parametric resonance. Because of the classicality of the relevant part of the inflaton field, this process is unlikely to result in a transference of nonequilibrium from the inflaton to the created particles.
During 'reheating', however, perturbative decay of the inflaton can occur, and one may reasonably expect nonequilibrium in the inflaton field to be transferred to its decay products. This possibility opens up a large field of investigation. Here, again, we restrict ourselves to making some preliminary remarks.
The perturbative decay of the inflaton occurs through local field-theoretical interactions, so one expects the decay products to have physical wavelengths no greater than the instantaneous Hubble radius. Taking the lower bound (57) as a guide (even though it was derived for a radiation-dominated phase), we then expect that the decay products will come into existence already violating the freezing inequality. Subsequent relaxation might then be avoided (possibly) only if the particles are created at a temperature below their decoupling temperature. Once again, the gravitino suggests itself as a possible candidate. Gravitinos can in fact be copiously produced by inflaton decay [43] (and could even make up a significant component of dark matter [44]). If the gravitinos are unstable, again, one could try to detect (say) photons produced by their decay at later times.
The possible realisation of this scenario depends of course on uncertain features of high-energy particle physics and of inflationary models. As before, one may hope that a scenario will eventually be found, satisfying the constraints of particle physics and cosmology, and at the same time allowing the possibility of relic nonequilibrium surviving in particles that could be detected today.
We close this section with some general remarks. First, we note that particle decay (for example for the gravitino) is likely to result in some relaxation and erasure of any quantum nonequilibrium that may have existed in the parent particles. However, one hopes that the erasure will not be complete and that some nonequilibrium will still be present in the decay products. It would be useful to study this, in pilot-wave models of specific decay processes.
Second, once suitable candidates for nonequilibrium relic particles have been identified, one must consider how best to test them for violations of the Born rule. For photons, a particularly simple test involves searching for anomalous polarisation probabilities, or deviations from Malus' law (where such deviations reflect the nonequilibrium breakdown of expectation additivity for noncommuting quantum observables in a two-state system) [12,14].
Third, for a given species of relic particle in the universe today, even if there exist pure subensembles with significant residual nonequilibrium, in practice it might be difficult for us to locate those subensembles and perform experiments with them. In particular, if a given detector registers particles belonging to different subensembles, without distinguishing between them, it is possible that even if nonequilibrium is present in the individual subensembles it will not be visible in the data.
Conclusion
The hypothesis of quantum nonequilibrium at the big bang has been shown to have a number of observable consequences. Our main result is the freezing inequality (29). For cosmological field modes satisfying (29), initial nonequilibrium will be 'frozen' at later times. This result may be applied to specific cosmological models, yielding predictions whose verification could constitute evidence for quantum nonequilibrium in our universe. For a radiation-dominated expansion, (29) implies the general lower bound (35) on the wavelength of relic nonequilibrium field modes.
The detailed study of quantum nonequilibrium freezing, for realistic cosmological models, is left for future work. A useful first step might be to study the system of equations (43), and to delineate the general conditions under which the time evolution of a (mean) mode occupation number n kr t can satisfy the freezing inequality (29). Crucially, future work will need to study the statistical distribution of wave functionals for a realistic mixed state on expanding space, the goal being to identify subensembles satisfying (29). For these subensembles, quantum nonequilibrium is expected to be frozen over the relevant time period, resulting in definite predictions that might be tested today. | 2008-04-29T17:05:39.000Z | 2008-04-29T00:00:00.000 | {
"year": 2008,
"sha1": "98fa52fd5e67eee04b13b590c054c4845e420842",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "98fa52fd5e67eee04b13b590c054c4845e420842",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244231962 | pes2o/s2orc | v3-fos-license | Combination of MSAT and Korean Medicine for Managing Foot Drop Due to Lumbar Disc Herniation: Case Report
Introduction
Foot drop is characterized by weakness of the ankle dorsiflexors, resulting in an inability or difficulty in lifting the forefoot and contributing to considerably impaired mobility and general health 1) . Among the various disorders that lead to foot drop, the most common cause is lumbar degenerative disease. Lower back pain, radiating pain, sensory deficits, and weakness in the distribution of nerve roots are the primary symptoms of Lumbar Disc Herniation (LDH) 2) . However, foot drop due to LDH is a rare event, and there is insufficient research clarifying its clinical features, risk factors, natural history, and prognosis 3) .
At present, patients with motor deficit resulting from LDH are mostly treated by surgery, but there is inadequate evidence to establish the superiority of surgery over conservative treatment 4) . In this case report, the patient had also been recommended surgery because his persistent pain was refractory to pain medication and he developed a motor deficit 5) . We describe the management of this case with a combination of motion style acupuncture treatment (MSAT) and Korean medicine.
Physical Examination at Admission
On the first day of his visit to our outpatient department, a physical examination was performed. (2) MSAT: the patient is instructed to lie in a supine position with the lower legs exposed. Before inserting the acupuncture needles, the strength of the left and right ankle dorsiflexors is examined simultaneously against the examiner's resistance, and acupuncture is performed on the side with muscle weakness after sterilizing the surrounding skin with ethanol. During admission, he reported noticeable recovery from the hypoesthesia in his lower left extremity and further improvement in pain intensity (3/10) (Fig. 2). Evaluation of left ankle dorsiflexion was performed during MSAT treatment every other day.
Progress note
On December 12, 2020, the muscle strength of the left ankle improved to 4/5, and it was fully restored by December 19, 2020 (Fig. 3). On December 31, 2020, he was discharged from the hospital since he had no difficulty with daily activities such as walking, standing, and sitting. At the end of treatment, the EQ-5D (0.899) and ODI (6.67) scores were reevaluated (Fig. 4).
Discussion
Lumbar disc herniation is the most common degenerative abnormality of the lumbar spine resulting in weakness presenting in a myotomal distribution. However, foot drop due to LDH is relatively infrequent and presents as a severe motor deficit. Surgical treatment is useful, but it carries a higher risk of complications 9) and does not show superiority over conservative treatment in the mid and long term 10) . Conservative management provides pain control for almost 90% of patients after 3 months 11) . Acupuncture is also effective in the conservative treatment of LDH and has been shown to be more advantageous than lumbar traction and ibuprofen 12) .
MSAT is a relatively novel method reported to be a valuable means of achieving satisfactory treatment results in terms of immediate pain reduction and functional improvement 13) . MSAT differs from traditional acupuncture in that needle insertion is followed by either passive or active movement of the patient's body. The effectiveness of MSAT has been recognized in clinical practice in South Korea and China, where its use has been continuously increasing 14) . We administered TA MSAT in addition to conventional treatment to shorten recovery. Weakness in ankle dorsiflexion is mainly caused by disc prolapse at the L4-L5 level 15,19) .
This study is the first in the literature to report TA MSAT and suggests that TA MSAT can elicit rapid pain reduction and health improvement in a patient with foot drop. We therefore suggest that this study sheds light on the effectiveness of MSAT in treating foot drop due to LDH.
Some limitations of this case report should be addressed.
First, this is a single case report, so the results cannot be generalized.
Second, the practice of MSAT is still primarily based on personal experience, and very few studies have investigated its efficacy and safety 13,20) . Third, the therapeutic effect of TA MSAT alone could not be separated from that of the other interventions in the current case. Thus, further randomized controlled trials are required to provide additional evidence confirming the effectiveness of MSAT.
Funding
None.
Data availability
The data supporting this case report are available from the authors upon reasonable request. | 2021-10-19T15:21:46.294Z | 2021-09-27T00:00:00.000 | {
"year": 2021,
"sha1": "d1998aaa867eb0e366c97319705636887533f92c",
"oa_license": "CCBYNC",
"oa_url": "http://www.kjacupuncture.org/journal/download_pdf.php?doi=10.14406/acu.2021.012",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "38070b419c8ddc41c58e7247b5a9782b8228c0df",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16209696 | pes2o/s2orc | v3-fos-license | Systems Biology Approaches to Disease Marker Discovery
Our understanding of human disease and potential therapeutics is improving rapidly. In order to take advantage of these developments it is important to be able to identify disease markers. Many new high-throughput genomics and proteomics technologies are being implemented to identify candidate disease markers. These technologies include protein microarrays, next-generation DNA sequencing and mass spectrometry platforms. Such methods are particularly important for elucidating the repertoire of molecular markers in the genome, transcriptome, proteome and metabolome of patients with diseases such as cancer, autoimmune diseases, and viral infections, resulting from the disruption of many biological pathways. These new technologies have identified many potential disease markers. These markers are expected to be valuable to achieve the promise of truly personalized medicine.
Introduction
Disease markers are of vital importance to clinicians and their patients, as early detection, accurate prognosis/diagnosis and monitoring of therapy can lead to increased overall survival and cure rates. As our knowledge of diseases quickly expands, the field of disease marker discovery will play increasingly important roles in the delivery of improved diagnosis and treatment. These markers, such as protein markers (including autoantibodies, which are antibodies specific to self-antigens [43]), hormonal markers (such as lack of insulin in Type I diabetic patients [89]), and genetic/genomic markers (such as BRCA1 mutation in breast cancer patients [52]), enable clinicians to diagnose the disease while it is still at an early stage, to ensure appropriate surgical intervention, efficient drug treatment and monitoring, and to predict an individual's risk of developing specific diseases before they experience symptoms. Traditionally, discovery and detection of these disease markers relied on low-throughput technologies such as the Enzyme-Linked Immunosorbent Assay (ELISA) or 2D-gel electrophoresis plus Edman degradation for protein markers, Reverse Transcription-Polymerase Chain Reaction (RT-PCR) for mRNA markers, and restriction enzyme digestion, cloning and Sanger sequencing for DNA markers. Before the dawn of high-throughput technologies these methods played important roles in marker identification and yielded significant discoveries in various diseases such as systemic lupus erythematosus, rheumatoid arthritis and breast cancer [32,52,107], which greatly enhanced diagnostic efficiency in these diseases.
During the past two decades, high-throughput technologies emerged and have displayed great potential in large-scale studies for marker discovery. These technologies include protein microarrays [42,119], mass spectrometry for large-scale shotgun studies [116], and, more recently, high-throughput parallel sequencing (including RNA-Sequencing) [9,63,67]. This article will review disease marker discoveries using these systems biology approaches, with a focus on high-density protein microarray technologies. We will also briefly review current progress in disease marker identification using parallel sequencing and mass spectrometry technologies.
Marker discovery using high-density protein microarrays
Currently, high-density protein microarrays contain hundreds to thousands of proteins that are arrayed on coated glass microscope slides (e.g. nitrocellulose-coated slides) in an addressable format [23]. These arrays are usually probed with fluorescently labeled molecules, and the signals are then acquired with a confocal laser scanner. A number of surface chemistries have been employed for their ability to bind proteins efficiently, although there is often a trade-off between retention and decreased protein function or improper folding. There are several broad categories of protein microarrays: arrays composed of cell or tissue lysates or protein fractions isolated from crude lysates [33,74], antibody or analytical arrays that contain antibodies directed against specific analytes [13], as well as so-called functional protein arrays [86,118]. Functional protein arrays contain full-length proteins with intact catalytic function and proper epitope folding, which are often generated by arraying purified proteins produced individually prior to printing [118] or proteins produced in situ by in vitro transcription and translation of DNA that is printed directly on the surface of the array [86]. Our group has been developing the last type of protein microarray by arraying purified proteins on nitrocellulose-coated glass slides. This type of functional protein microarray has clear advantages over the alternatives: the ability to specifically identify individual proteins compared with cell lysate arrays, and the ability to ensure the quality of each arrayed protein compared to the in situ transcription-translation arrays. After development of the first protein microarray, which contained 5800 full-length proteins of the budding yeast Saccharomyces cerevisiae [118], our group produced a number of protein arrays including the yeast N-terminal and C-terminal arrays, a 500-protein Arabidopsis array, and a coronavirus array [119]. In conjunction with Protometrix Corporation (now part of Invitrogen Corporation) we also collaborated in the development of a human protein array that currently holds more than 9,000 proteins expressed individually using a baculovirus/Sf9 expression system. These various arrays were used for a variety of applications, including assaying for protein-protein, protein-lipid and protein-nucleic acid interactions as well as probing for substrates of protein kinases [41,45,118]. We also developed algorithms for positive signal calling and large dataset processing [50]. Recently, we have applied this technology in a novel proteomics-based approach to screen for human antibodies that react with foreign and self-antigens [64,65,79]. Particularly notable in this review is our use of these protein microarrays to analyze the immune response to coronaviruses [119], as well as our screening projects to analyze ovarian cancer [43], myeloma, multiple sclerosis and asthma. In this section we will review disease marker identification in several fields using high-density protein microarrays.
Antibodies as markers of viral infections
Currently, tests for the detection of microbial infections are the only clinical tests that rely on measuring antibody responses. ELISA-based detection methods are often used to detect a patient's antibody titer to epitopes of the microorganisms for diagnosis of the infection.
In 2003, an outbreak of a novel coronavirus (CoV), the Severe Acute Respiratory Syndrome (SARS) virus, resulted in over 900 deaths. Novel diagnostic tests were required to identify and monitor this disease, and ELISA, immunofluorescence and nucleic acid tests were employed. Protein array-based methods proved to be more accurate than any of the existing antibody-based methods [119]. Our lab developed a coronavirus protein microarray that contained 82 coronavirus proteins, including all SARS-CoV proteins and proteins from five additional coronaviruses. The microarray was used to probe sera obtained from 399 Canadians and 203 Chinese during the SARS outbreak, including samples from confirmed SARS-CoV cases, other respiratory disease patients, and healthcare professionals. After detection with Cy3-labeled anti-human IgG antibodies, the reactive serum antibodies bound to coronavirus-encoded proteins were visualized and quantified. The reactivity results for the different proteins were analyzed using a variety of computational methods, and we developed computer algorithms based on the reactivity results to predict which patients were infected with SARS [119].
The protein microarray platform displayed very high sensitivity, reliably detecting SARS-CoV-reactive antibodies even when serum was diluted 16,000-fold. The assay showed good reproducibility, with less than 10% variance in signal intensity between duplicate slides. Importantly, the method requires less than one microliter of serum for detection, which is desirable when serum samples are limited. Moreover, probing of the coronavirus protein microarrays with SARS-infected serum samples also showed high specificity for SARS-CoV-specific proteins, as very little cross-reactivity with proteins of other coronaviruses was observed. To determine the best classifiers for distinguishing SARS-positive from SARS-negative sera, we used one unsupervised clustering method and two supervised methods: k-nearest neighbor (k-NN) and logistic regression (LR). Both supervised models showed high sensitivity (90% for k-NN and 89% for LR) and specificity (93% for k-NN and 94% for LR) with a panel of 5 (k-NN) or 4 (LR) best classifiers, and these numbers were greater than 97% when the assay was performed in triplicate. The prediction methods were then tested on 56 sera from Chinese fever patients for SARS-infection prediction and were determined to have 100% sensitivity and 95% specificity, which is superior to the two ELISA-based detection methods that were used during the SARS outbreak.
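For readers unfamiliar with these classifiers, the following is a minimal sketch (in Python, using scikit-learn) of how k-NN and logistic regression can be trained on array reactivity values and evaluated for sensitivity and specificity; the simulated feature matrix, panel size and train/test split are hypothetical, and this is not the pipeline used in the study above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Hypothetical data: rows = sera, columns = reactivity of a small panel of
# antigens selected as classifiers (e.g. a handful of SARS-CoV proteins).
rng = np.random.default_rng(0)
n_pos, n_neg, n_markers = 60, 140, 5
X = np.vstack([rng.normal(2.0, 1.0, (n_pos, n_markers)),    # SARS-positive sera
               rng.normal(0.0, 1.0, (n_neg, n_markers))])   # SARS-negative sera
y = np.array([1] * n_pos + [0] * n_neg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("logistic regression", LogisticRegression())]:
    clf.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"{name}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```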
Our study of SARS-CoV infection diagnosis via detection of SARS-CoV-specific antibodies by protein microarrays demonstrated that this approach is sensitive (50-fold more sensitive than ELISAs), specific (little cross-reactivity with other coronavirus proteins), and rapid (performed in a few hours). Nevertheless, it should be noted that tests based on immune responses to foreign antigens are more likely to achieve higher accuracy than those based on autoantibody-autoantigen responses, since there is no self-tolerance to the foreign antigens and the presence of pathogen-associated molecular patterns (PAMPs) significantly increases the immune response. Overall, this study demonstrated for the first time that protein microarrays could be used to diagnose and monitor human antibodies as protein markers that are generated during the course of a disease. Moreover, it demonstrated the power of using a panel of multiple classifiers for diagnostics.
Marker discovery for early detection and prognosis of cancer
While protein microarray technology can efficiently and accurately detect antibodies generated against foreign antigens from infectious organisms, perhaps the most intriguing application of this technology is in the discovery of novel protein markers for the early detection of various cancers. The identification of disease markers holds the promise of increasing the effectiveness of clinical therapies and marker-based routine screening programs, and can potentially enable diagnosis at the earliest stages of disease, before the development of clinically recognizable cancers that are usually at advanced stages. For instance, in heavy smokers, autoantibodies recognizing mutant forms of the tumor suppressor p53 have been detected prior to the diagnosis of lung cancer [103]. Early detection and treatment would result in markedly improved survival rates, especially for patients with cancers that do not present symptoms at early stages, such as pancreatic and ovarian cancer [25,29,91].
Oncoproteomics is a rapidly expanding field aimed at applying high-throughput proteomics approaches to understanding the mechanisms involved in cancer. Proteomic approaches to discover cancer markers have been an area of strong interest in recent years. In the past, these projects often involved serum screening with phage expression libraries prepared from cancer tissues (SEREX, serological analysis of cDNA expression libraries), or immunoblotting of cancer cell lysates after two-dimensional polyacrylamide gel electrophoresis (2DE-PAGE). These approaches have yielded some promising candidate markers but suffer from particular issues, such as the fact that phage expression libraries often contain out-of-frame and truncated protein targets, and protein candidates discovered by 2DE-PAGE are difficult to identify since the proteins are unknown. Mass spectrometry is often required in order to identify the candidate autoantigens [20,46,53,90]. An additional problem with these approaches is that the samples are usually limited in amount and are difficult to reproduce. Protein microarrays overcome those difficulties, as all of the spotted proteins are derived from known, well-characterized clones. Additionally, even a small amount of purified protein is sufficient to print hundreds of arrays for patient screening [4,7,16,47,81].
As most traditional disease markers are proteins that have become over- or under-expressed during the course of disease, there is much interest in the potential use of autoantibodies as a novel class of disease markers. Recently, detection in serum of circulating autoantibodies targeting Tumor-Associated Antigens (TAAs) has emerged as an effective approach for identifying cancer early-detection markers (e.g. in breast, lung and ductal pancreatic cancer [5,80,102]). This approach is based on the fact that the immune system produces antibodies against abnormal/mutated proteins generated from apoptotic/necrotic cancer cells. These autoantibodies can then be detected with immunosorbent assays such as ELISA. Because the levels/stability of autoantibodies are potentially much greater than those of the original autoantigens, they would be more easily detected. By comparing autoantibody profiles between different groups (cancer patients versus controls), it is possible to identify markers that are significantly differentially expressed. This method is expected to be superior to DNA array-based methods, since changes in RNA expression levels do not necessarily correlate with protein expression.
The area of research in autoantibody marker discovery using protein microarrays has rapidly expanded over the last several years as the protein array platform continues to mature. The recent availability of high-content protein microarrays allows for global profiling of autoantibodies to cancer antigens in a manner that is both high-throughput (thousands of protein candidates) and highly sensitive (on the order of 10 fg of protein) [43,118]. Improvements in printing techniques and increases in protein spot quantity have made these arrays promising vehicles for exploring the repertoire of autoantibodies in human disease. This approach has been applied, by various groups, for the discovery of autoantibody markers in breast cancer [5], lung cancer [80] and ovarian cancer [43], as well as in a smaller study in pancreatic cancer [74]. Here we will review past and ongoing research in immune response profiling using protein microarrays relating to a number of disease states.
While self-tolerance usually abrogates the antibody response to self-proteins, it is possible to elicit an autoimmune response under certain conditions present in cases of disease. The antigenicity of self-proteins may result from overexpression of normal proteins, as in the case of Her-2 in breast cancer subtypes and prostate-specific antigen (PSA) in prostate cancer, from aberrant post-translational modification, such as different Mucin-1 glycoforms in breast cancer, or from mutations in the proteins, as has been found to be the case with the tumor suppressor p53 in multiple cancer types [2,14,17,55]. Additionally, proteins that are usually restricted to expression in germ line cells or are expressed only in the early stages of development may be aberrantly expressed in cancer. This is the case with the testis antigen NY-ESO-1 and carcinoembryonic antigen (CEA), respectively. Because many of the proteins mentioned above are detected only at very low levels in serum, even in late stages of disease, they would be of little utility for screening purposes. However, even slight increases in the expression of those antigens can lead to detectable increases in the corresponding autoantibody. Generally, we find the existence of a basal autoantibody level to many self-antigens; however, this response has been shown to be markedly increased in cases of diseases such as those mentioned above [2,26,87].
CA-125 is currently the only clinically approved marker for ovarian cancer screening. Unfortunately, although CA-125 serum levels are significantly elevated in advanced stages of the disease, its positive predictive value for the detection of early-stage ovarian cancer is less than 10% [66]. For this reason the identification of new markers for this disease is of critical importance. Research groups, such as the group led by Gil Mor at Yale University, have employed proteomics-based approaches using antibody-based protein microarrays to identify new serum biomarkers, which, in combination with CA-125, may enhance the early detection of ovarian cancer [48,66,110]. Our group also launched a pilot study to profile ovarian cancer-associated autoantibodies with protein microarrays containing 5,005 full-length human proteins [43]. We compared the autoantibody profiles of 30 epithelial ovarian cancer patients and 30 healthy controls and, after statistical analysis, identified 90 proteins with significantly different immune reactivity in the patient group versus the control group. The results were validated by immunohistochemistry (IHC) and demonstrated high sensitivity (95%) and specificity (97.5%) when the top two markers (Lamin A/C and SSRP1) were combined. However, further validation is required before the candidate markers can be adopted in a clinical setting. Therefore we carried the top-ranking candidates through to the validation phase, in which a much larger set of samples will be tested to evaluate the performance of the potential markers. We have generated focused protein microarrays containing these candidates as well as control proteins such as CA-125 (Fig. 1). These arrays are printed in twelve blocks per slide, allowing as many as twelve samples to be screened per array. This approach will allow hundreds to thousands of samples to be screened in order to determine which markers or combinations of markers demonstrate the best receiver operating characteristic (ROC) performance [34,43].
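As an illustration of the kind of ROC comparison planned for the validation phase, the sketch below combines two simulated marker signals with logistic regression and reports the area under the ROC curve; the marker names follow the text, but all values are simulated and the combination rule is only one possible choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical reactivity values for two top-ranked markers (e.g. Lamin A/C
# and SSRP1); the numbers are simulated, not data from the study.
rng = np.random.default_rng(42)
n_cases, n_controls = 30, 30
cases = rng.normal([1.5, 1.2], 0.8, (n_cases, 2))
controls = rng.normal([0.0, 0.0], 0.8, (n_controls, 2))
X = np.vstack([cases, controls])
y = np.array([1] * n_cases + [0] * n_controls)

# AUC of each marker alone versus a simple two-marker combination
# (in-sample only; a real evaluation would use held-out samples).
for i, name in enumerate(["marker 1", "marker 2"]):
    print(name, "AUC:", round(roc_auc_score(y, X[:, i]), 3))
combined_score = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print("combined AUC (in-sample):", round(roc_auc_score(y, combined_score), 3))
```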
Autoantigens in breast cancer subtypes such as Her-2/neu-positive tumors have been shown to correlate with increased autoantibody responses in patients. Her-2/neu autoantibodies in those patients demonstrate approximately 18% sensitivity and 94% specificity. Other groups have adopted similar approaches to multiplexing immune responses using high-density protein microarrays in breast cancer [15,117]. Among these studies, Anderson et al. used nucleic acid programmable protein arrays (NAPPA) for sera screening in breast cancer [5]. These arrays were generated by printing individual genes as plasmid DNA along with capture antibodies to GST tags on the fusion proteins. The arrays were then incubated with coupled in vitro transcription and translation cell-free lysates to produce the proteins and anchor them to the array surface. Each array consisted of 1700 cancer-associated candidate proteins, including p53, as well as the Epstein-Barr nuclear antigen (EBNA) as a control. Sera from four breast cancer patients and four healthy controls were used to probe the arrays, and anti-p53 autoantibodies were found in the cancer patients. Because the proteins are not produced until the arrays are ready to be probed, they do not suffer from degradation during periods of storage. However, the protein quantity is more variable between spots compared to directly printed arrays. We have found that spot-to-spot variation in protein amount may be overcome by evaluating the autoantibody response relative to the protein amount (i.e. the ratio of autoantibody signal to the signal from an epitope tag on the autoantigens). In our protein array immune response profiling studies we have found that this approach results in decreased signal variance between replicate spots (unpublished data). To date, no studies that attempt to identify novel breast cancer markers have been performed using high-density protein microarrays. Previous high-throughput serum screens in breast cancer have relied on SEREX and 2DE-PAGE and involved relatively small sample sets [53,90].
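A minimal sketch of the ratio-based normalization described above is shown below; the background model and the numbers are hypothetical.

```python
import numpy as np

def tag_normalized_signal(antibody_signal, tag_signal, background=0.0, eps=1e-9):
    # Express each spot's autoantibody signal relative to the amount of protein
    # actually deposited, estimated from the anti-tag (e.g. anti-GST) channel.
    # The flat background subtraction is a simplification for illustration.
    antibody = np.maximum(np.asarray(antibody_signal, dtype=float) - background, 0.0)
    tag = np.maximum(np.asarray(tag_signal, dtype=float) - background, eps)
    return antibody / tag

# Duplicate spots of the same antigen: raw signals vary with protein amount,
# but the ratio to the tag signal is more comparable between replicates.
raw = np.array([5200.0, 3100.0])
tag = np.array([2600.0, 1500.0])
print(tag_normalized_signal(raw, tag, background=100.0))
```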
A particularly interesting setting for immune response profiling is the autoantibody response to self-antigens in cancers of the blood and lymph, such as multiple myeloma. Our lab has been involved in a protein microarray-based screening project aimed at elucidating the autoantigen repertoire in multiple myeloma. Multiple myeloma is a cancer of the bone marrow resulting from the uncontrolled proliferation of monoclonal plasma cells (the antibody-producing cells derived from B cells) [60,83]. Monoclonal gammopathy of undetermined significance (MGUS) is a precursor condition to multiple myeloma characterized by bone marrow plasmacytosis and increased M-protein levels in the blood [84]. We have probed high-density protein arrays using plasma from dozens of cases of MGUS, multiple myeloma, and healthy controls. By comparing IgG responses to individual antigens on the arrays between the healthy and diseased groups, we have identified multiple autoantigens that are significantly differentially targeted by IgG autoantibodies in early-stage disease. This study was unique in that it employed the highest-density protein arrays used for multiple myeloma immune response profiling to date, and the patient samples were from a prospective collection. Because samples were drawn prior to disease onset, there are no artifacts resulting from medical treatment of the patients. By querying such early-stage samples we have a better chance of identifying markers that will be effective for diagnosing patients at early stages, when they are more treatable and will experience better outcomes. The markers identified in this study may yield insight into the biological processes and mutational events that contribute to the development of aggressive forms of multiple myeloma.
In addition to early detection, cancer markers may also enable clinicians to offer personalized treatment. Recent advances using the anti-cancer drugs Herceptin and Iressa illustrate this point. These drugs target specific patient populations: Herceptin is effective against those tumors expressing the Her2 receptor and Iressa is effective for patients with specific mutations in the epidermal growth factor receptor [58,109]. These drugs offer limited benefits to patients with the same cancer types when these markers are not present. Thus, determining the marker profile of an individual's disease can enable the identification of distinct patient populations, allowing tailored and more effective treatment.
One issue that we would like to note is that no single autoantibody response to an autoantigen has been confirmed to have sufficient sensitivity and specificity for screening purposes in early-stage disease. However, by evaluating the antibody responses to a panel of autoantigens, accuracy has been markedly improved [15,43,66]. Thus, future protein microarray immune response screening tests will likely combine multiple autoantigens. Numerous approaches have been applied to combining disease markers. Some of the common methods for combining multiple autoantibody responses are linear regression, split-point analysis, and k-nearest neighbor (k-NN) [24,119]. Still, there are currently no clinical screening or diagnostic tests that rely on a panel of protein markers, although multi-parameter DNA microarray tests are becoming commonplace in diseases such as breast cancer [106]. Clearly, more work is needed before autoantigen-based microarray tests can be implemented in a clinical setting.
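To make the idea of a marker panel concrete, the following sketch shows one simple way a fixed panel could be combined, by counting markers that exceed per-marker cutoffs; published split-point and regression approaches differ in detail, and the cutoffs and rule used here are hypothetical.

```python
import numpy as np

def panel_call(reactivities, cutoffs, min_positive=2):
    # Classify a serum as positive if at least `min_positive` markers in the
    # panel exceed their per-marker cutoffs. This is only one simple way of
    # combining a panel; real split-point methods choose cutoffs and rules to
    # maximize accuracy on training data.
    reactivities = np.asarray(reactivities)
    cutoffs = np.asarray(cutoffs)
    return int(np.sum(reactivities > cutoffs) >= min_positive)

cutoffs = [1.8, 2.1, 1.5, 2.4]                      # hypothetical per-marker cutoffs
print(panel_call([2.0, 2.5, 0.9, 1.0], cutoffs))    # 2 markers above cutoff -> 1
print(panel_call([1.0, 0.8, 1.2, 2.6], cutoffs))    # 1 marker above cutoff  -> 0
```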
Marker discovery in autoimmune diseases
The use of protein microarray technology for biomarker discovery in autoimmune diseases seems a natural extension of the technique as autoantibody responses have already been shown to contribute to disease progression in diseases such as lupus [59]. While antinuclear antibody tests are sometimes used to confirm diagnosis of certain autoimmune diseases, they are not disease specific [59,78].
Multiple sclerosis (MS) is a debilitating disease of the central nervous system characterized by rounds of axonal demyelination and repair. It affects mostly younger people and is more common in women than in men. The underlying cause remains unknown, though there is mounting evidence that antibody responses to self-proteins play a role in both demyelination and repair [28]. Previous efforts have identified a number of myelin-specific proteins that demonstrate increased autoantibody responses in MS. These autoantibodies have been detected in both serum and cerebrospinal fluid (CSF). Recently, protein microarray screening has been applied to evaluate autoantibody responses to subsets of myelin proteins that are known to be associated with MS and other neurodegenerative diseases [78,82,97]. The antigens that were shown to elicit autoantibody responses include classical MS antigens such as myelin basic protein (MBP), myelin-associated glycoprotein (MAG) and myelin oligodendrocyte glycoprotein (MOG), as well as proteins that had not previously been shown to play a significant role in MS. These studies have suffered from the fact that the arrays were focused on a relatively small number of previously known candidate autoantigens. Our group is currently conducting larger-scale screenings based on high-density protein microarrays. With this platform, we have tentatively identified a number of novel candidate markers in multiple sclerosis. We are currently working to validate these markers in a larger sample set. This less biased approach may result in the identification of novel autoantigens, leading to a better understanding of the underlying mechanisms in the etiology of MS and to improved screening and diagnostic tests.
Asthma, a common disease with a prevalence of 11% across all ages [75], and 13% in children under 18 years of age in the United States [11], is another disease involving an autoimmune mechanism [88,121]. This heterogeneous inflammatory disease of the airways is marked by recurrent episodes of airway obstruction and wheezing [95,114], and is anatomically characterized by bronchoconstriction, inflammation and thickening of the airway walls [37]. Considering asthma as an aberrant chronic wound-healing process [36], it would not be surprising if some of the cellular contents released or leaked from the airway epithelium as a result of damage and remodeling, similar to those from necrotic cancer cells, were to elicit autoimmunity. In fact, aberrant autoantibodies have been detected in asthmatic sera by autologous serum skin tests as compared to normal controls [44]. Autoreactive antibodies have also been detected in asthmatic sera against the high-affinity IgE receptor FcεRI [100,101]. A few specific autoantigens have been identified in the serum of asthmatic patients, including the auto-IgG-reactive β-adrenergic receptor [35,108], cytokeratin 18 [68], DFS70 [105], and α-enolase [54]. Moreover, studies on atopic dermatitis, which often occurs with asthma, revealed autoreactive IgE antibodies against Hom s 1-5 (Hom s 1 = SART1; Hom s 2 = α-NAC; Hom s 3 = BCL7B; Hom s 4 = a protein with a calcium-binding motif; Hom s 5 = a type II cytokeratin) [70,104,105] and DFS70 [105]. However, these studies focused on small patient groups without cross-validation and investigated a limited number of potential targets. Since both autoreactive IgG and IgE may be involved in the pathogenesis of asthma, we conducted a large-scale screening for asthma-associated IgG- and IgE-reactive autoantigens using protein microarrays with more than 8,000 protein candidates (unpublished data). This is the first large-scale study to profile asthma-associated autoantigens, and the results will greatly improve our understanding of the role of autoimmunity in the etiology of asthma. One unique feature of this study is that, in order to maintain uniform probing conditions, we multiplexed the detection of both autoreactive IgG and autoreactive IgE in the serum samples simultaneously on the same array, using a mixture of anti-human IgG and anti-human IgE secondary detection antibodies labeled with distinct fluorescent dyes. Our results suggest that the protein array is capable of detecting both IgG- and IgE-reactive signals in distinct emission channels with high specificity and no detectable signal bleeding across the channels (Fig. 2). Similar applications of protein arrays have been reported in studies of other autoimmune diseases. Song et al. discovered three novel autoantigens, namely RPS20, Alba-like and dUTPase, for autoimmune hepatitis (AIH) using a protein microarray containing 5011 nonredundant proteins [98]. Horn et al. profiled the repertoire of IgG autoantibodies in plasma samples from dilated cardiomyopathy (DCM) patients with a redundant protein microarray containing 37,200 total proteins and identified 26 IgG-reactive autoantigens (with 6 of them reactive specifically to the IgG3 subclass) [39]. Autoantigens have also been identified for the chronic disease alopecia areata using protein microarray technology [61]. These examples demonstrate the great potential of protein microarray technology for autoantigen marker identification in autoimmune diseases.
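One simple way to quantify cross-channel bleed-through of the kind checked above is sketched below; the control-spot intensities and the use of a median ratio are illustrative assumptions rather than the procedure actually used.

```python
import numpy as np

def bleed_through_fraction(primary_channel, other_channel):
    # For control spots expected to give signal only in the primary channel
    # (e.g. spotted human IgG detected with the anti-IgG dye), estimate the
    # median fraction of that signal leaking into the other channel.
    p = np.asarray(primary_channel, dtype=float)
    o = np.asarray(other_channel, dtype=float)
    return float(np.median(o / np.maximum(p, 1e-9)))

# Hypothetical intensities for four anti-IgG control spots in the two channels.
igg_channel = np.array([9500.0, 10200.0, 8800.0, 9900.0])
ige_channel = np.array([40.0, 55.0, 38.0, 60.0])
print("IgG -> IgE bleed-through fraction:", bleed_through_fraction(igg_channel, ige_channel))
```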
Limitations of protein microarray technology for disease marker screening
Although protein microarray technology provides a high-throughput method with high sensitivity and specificity for protein marker discovery, there are particular limitations that investigators need to be aware of before applying the technology to their research.
First of all, since probing protein microarrays for autoantibodies is an in vitro assay with all the protein targets arrayed on a 2-D platform, one has to take off-target binding into consideration. Therefore, findings from protein microarray screenings should be validated in larger sample sets, and the autoantigens have to be confirmed by direct detection methods such as Western blotting, ELISA, or IHC in patient samples before an autoantigen can be confidently associated with the disease. Secondly, microarrays that contain full-length, folded proteins may not be recognized by autoantibodies that are directed against misfolded or degraded proteins expressed in disease cases, contributing to false-negative detections. Patwa et al. developed a method to chemically digest the proteins with CNBr before printing them on the arrays, which may help to overcome this problem [74]; however, as the extent of digestion is hard to control, the resulting complex mixture of proteins digested to different degrees may complicate normalization efforts and experimental control. Normalization of the array data is another important consideration. As we have already discussed, normalization for protein spot morphology and quantity can be achieved by probing with a labeled antibody directed against an epitope tag appearing on all of the arrayed proteins (e.g. GST). Various software-based methods have been adopted to adjust for the regional defects and background that have traditionally been an issue for protein and DNA microarrays [120]. Nevertheless, analysis and interpretation of the acquired large-scale data are still a challenge to both biologists and statisticians; therefore, the development of improved algorithms is an ongoing effort. Under real probing conditions, uncontrollable events such as scratches on the slides, deposition of salt and non-homogeneous local concentrations can further complicate the analysis of the array data, although internal controls are often included to help overcome these defects, and improved array surface chemistries have significantly decreased local and regional background defects.
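As one example of the software-based adjustments mentioned above, the sketch below applies a simple per-block median scaling to reduce regional intensity differences; this is an illustrative approach, not the specific algorithm of ref. [120].

```python
import numpy as np

def block_median_normalize(signal, block_id):
    # Scale each print block so its median spot intensity matches the overall
    # median, a simple way to reduce regional or print-tip effects.
    # `signal` and `block_id` are 1-D arrays over spots.
    signal = np.asarray(signal, dtype=float)
    block_id = np.asarray(block_id)
    normalized = signal.copy()
    overall_median = np.median(signal)
    for b in np.unique(block_id):
        in_block = block_id == b
        block_median = np.median(signal[in_block])
        if block_median > 0:
            normalized[in_block] *= overall_median / block_median
    return normalized

signal = np.array([100., 120., 90., 300., 310., 280.])   # block 2 systematically brighter
blocks = np.array([1, 1, 1, 2, 2, 2])
print(block_median_normalize(signal, blocks))
```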
Marker discovery using parallel sequencing technologies
In recent years it has become feasible to sequence entire genomes and transcriptomes using massively parallel sequencing platforms such as the 454 and the Solexa Genome Analyzer. These platforms use highly sensitive light sensors (such as CMOS sensors or CCD cameras) to capture the fluorescent signals emitted as each deoxynucleotide is added to the growing DNA chain, simultaneously in up to millions of parallel reactions in a flowcell [38], thus performing sequencing in a high-throughput manner to obtain short sequences (varying from 30 to 450 bp depending on the platform) from one or both ends. Currently, related platforms and products are available through Illumina IG, Applied Biosystems SOLiD, Roche 454 Life Science, and the Helicos Biosciences tSMS [112]. Another company, Pacific Biosciences, will manufacture a new sequencer that will perform single-molecule sequencing by the end of 2010 [22].
A typical parallel sequencing procedure consists of the following steps: DNA/RNA isolation, fragmentation and DNA/cDNA library construction, high-throughput sequencing, and read assembly and mapping. This method has many advantages compared to traditional tiling microarray hybridization-based methods, or to the more traditional RT-PCR and Sanger sequencing approaches. These new platforms achieve single-base resolution in a high-throughput manner, have low background, no cross-hybridization noise, low dependence on the availability of existing genomic sequence, high reproducibility and low cost per base, and there is no upper limit for quantification [112]. In this section we will briefly review recent efforts in genetic marker discovery with next-generation parallel sequencing.
Whole genome sequencing
This new generation of sequencing technology is shaping a new paradigm in disease marker research, in which massive amounts of sequence information from genomic DNA and expression libraries are screened for linkages and associations of genetic and genomic markers with specific diseases by comparing disease patients and healthy individuals [1,21,31,62]. Genomic DNA sequencing provides rich information on genetic variations (such as single nucleotide polymorphisms, insertions and deletions) and structural variations (such as copy number variations, transpositions and translocations) of the investigated genomes and is a powerful tool to reveal novel disease-associated markers. Genome sequencing can also detect integrated viral sequences, which may help studies of virus-associated diseases. Whole genome sequencing has already been applied to organisms with small genomes, such as Acinetobacter baumannii [96], Toxoplasma gondii [12], and Drosophila melanogaster [77]; however, due to the large size of the human genome and the high cost of parallel sequencing, human whole genome sequencing is still in its infancy. Ley et al. were the first to sequence the entire genome of a cancerous tissue, acute myeloid leukemia (AML) cells (32.7X haploid coverage), as well as the corresponding normal tissue, the patient's skin (13.9X haploid coverage) [57]. Due to the unbiased nature of the sequencing methods, they were able to use read frequency to establish how rates of mutations vary within the cancer tissue. This concept is important for future work as we seek to understand the progression of mutational events that lead to the development of diseases like cancer. The researchers found that 59,209 single nucleotide variations were unique to the cancer tissue sample. These mutations resulted in changes to the coding regions of ten genes, two of which had previously been implicated in cancer. Nonetheless, as sequencing costs continue to decrease with the maturation of the platforms, whole genome sequencing of larger sample sets is shedding light on new avenues of genetic and genomic marker identification in various diseases. Both biologists and clinicians are preparing for this coming revolution, and projects have already been conceived such as ClinSeq, a pilot project led by Green et al., which currently enrolls about 1000 participants for whole genome sequencing [10].
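The use of read frequency to gauge how prevalent a mutation is within a tumor can be illustrated with a short sketch computing variant allele frequencies from read counts; the counts below are hypothetical, and all alignment and filtering steps are assumed to have been done elsewhere.

```python
def variant_allele_frequency(ref_reads, alt_reads):
    """Fraction of reads supporting the variant allele at a site."""
    total = ref_reads + alt_reads
    return alt_reads / total if total else 0.0

# Hypothetical (reference, variant) read counts at two somatic sites,
# in tumor reads versus matched normal reads.
sites = {"siteA": {"tumor": (18, 15), "normal": (30, 0)},
         "siteB": {"tumor": (28, 4),  "normal": (33, 0)}}
for name, counts in sites.items():
    t_vaf = variant_allele_frequency(*counts["tumor"])
    n_vaf = variant_allele_frequency(*counts["normal"])
    print(f"{name}: tumor VAF={t_vaf:.2f}, normal VAF={n_vaf:.2f}")
```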
Transcriptome sequencing
Whole transcriptome sequencing by RNA Sequencing (RNA-Seq) is another promising application of parallel sequencing technology and has already been applied to marker discovery in various cancers. Gene expression profiling has been shown to predict the outcome of breast and other types of cancer [40,76]. Shah et al. used paired-end RNA-Seq to canvas the transcriptomes of four granulosa-cell tumors (GCT) from ovarian cancer patients [93]. They found a common missense mutation in the FOXL2 gene (C402G) in those tumors that was not present in 11 other ovarian cancer transcriptomes that they also sequenced. Additionally, this mutation was confirmed to be present in 97% of other GCT cancers that they tested. Leven et al. performed Illumina sequencing on tiling-array-hybridization enriched transcripts representing 467 cancer-associated genes from the K-562 chronic myeloid leukemia cell line and detected a wide range of DNA and RNA sequence alterations in the targeted transcripts [56]. These alterations included fusion transcripts such as BCR-ABL1 and NUP214-XKR3, as well as SNPs within, and splice isoforms of, these transcripts. While whole genome sequencing is still a relatively expensive proposition, transcriptome sequencing can be performed at much lower cost, facilitating the discovery of any mutations in the transcriptome that may contribute to the development of disease. In addition, RNA sequencing also provides information that would not be obtained through whole genome sequencing, such as information on the expression level of each transcript, alternative splicing and RNA editing, as well as trans-splicing events. We should expect to see genetic and genomic biomarkers identified as causative or contributing factors in human disease in the near future thanks to the utility of RNA sequencing.
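One common way to express the expression level of a transcript from RNA-Seq read counts is RPKM (reads per kilobase of transcript per million mapped reads). The short sketch below applies that formula to hypothetical counts; the gene names are borrowed from the examples above, but the numbers are made up.

```python
# Sketch: RPKM (reads per kilobase of transcript per million mapped reads),
# one common normalization of RNA-Seq read counts into expression levels.
# Read counts and transcript lengths below are hypothetical.

def rpkm(reads_on_transcript, transcript_length_bp, total_mapped_reads):
    return (reads_on_transcript * 1e9) / (transcript_length_bp * total_mapped_reads)

total_mapped = 20_000_000  # assumed total mapped reads in the library

transcripts = {
    # name: (reads mapped to transcript, transcript length in bp)
    "FOXL2": (2000, 1500),
    "BCR-ABL1_fusion": (500, 6000),
}

for name, (count, length) in transcripts.items():
    print(name, round(rpkm(count, length, total_mapped), 2))
```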
One other promising application for transcriptome sequencing is to identify viruses in the host sample. In a study carried out by Sorber et al., the authors successfully detected Hepatitis B Virus (HBV) sequences in the serum sample from a patient with HBV infection [99]. Similarly, Palacios et al. used RNA-Seq and identified a novel Old World arenavirus in RNA samples extracted from the liver and kidney of three deceased patients who had transplantation-related infection [72]. Nakamura et al. obtained 20-460 reads of influenza virus sequence in nasopharyngeal aspirates and 484-15,260 reads of norovirus sequence in fecal specimens from patients suffering from influenza or norovirus infections [69]. Meanwhile, Rwahnih et al. sequenced the RNA from a grapevine with the Roche 454 system and revealed infection with 32 plant viruses as well as one novel virus in the diseased grapevine [3]. Our group has also carried out an RNA-Seq experiment to detect West Nile virus sequences in infected macrophages (unpublished data). We obtained 4700 reads (0.06% of the total mapped reads) mapped to the West Nile virus genome from the infected cells and very few reads (30, ∼0.0003% of the total mapped reads) mapped to viral sequences in RNA isolated from mock control cells. After rechecking the few reads that did map to the virus genome in the control cells, we found that they were redundant with sequences in the human genome. These studies demonstrate that RNA-Seq can achieve high sensitivity and specificity to identify and profile infecting viruses, or the "virome", using RNA samples isolated from the host. Moreover, RNA-Seq will also provide information on virus-host interactions by monitoring expression changes of the host's genes.
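Expressed as code, virus detection of this kind comes down to the fraction of mapped reads that align to a viral reference. In the sketch below, the viral read count is taken from the West Nile virus example above, while the total mapped read count is an assumed value chosen only to roughly reproduce the reported 0.06%.

```python
# Sketch: fraction of mapped reads aligning to a viral reference genome.
# The total mapped read count is an assumption for illustration; only the
# viral read count (4700) comes from the example in the text.

def percent_viral(viral_reads, total_mapped_reads):
    return 100.0 * viral_reads / total_mapped_reads

infected = percent_viral(4700, 7_800_000)   # total is assumed, not reported
control = percent_viral(30, 9_500_000)      # both values illustrative
print(f"infected cells: {infected:.3f}% viral reads; control: {control:.4f}%")
```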
ChIP-Seq and Sono-Seq
While mutations in protein-coding sequence are well known to contribute to multiple diseases, RNA-Seq is limited to only those actively transcribed sequences of the genome. This bias results in overlooking variations in non-transcribed regions of the genome that can be important contributors to disease, as mutations in these regions can result in aberrant gene regulation. Besides whole genome sequencing, ChIP-Seq, or Chromatin ImmunoPrecipitation-Sequencing, is another approach to address mutations in functional non-transcribed regions of the genome. This technology sequences the genomic regions bound by transcription factors or other DNA-binding proteins (such as histones), and provides information on the position of these binding sites as well as possible mutations in these sites. The binding site profiles of multiple transcription factors, as well as the identified sequence variations within, may act as new markers for diseases such as leukemia [8,73]. Sono-Seq is a related technology developed in our lab, which sequences sonicated, formaldehyde cross-linked chromatin DNA in parallel via Illumina sequencing, and identifies the chromatin regions that are open and accessible (nucleosome-free and therefore susceptible to sonication) [6]. With this technology we identified multiple highly accessible chromatin regions, including actively transcribed promoter regions as well as CTCF insulator protein binding sites. This technology is similar to another open chromatin finding technology, termed FAIRE (formaldehyde-assisted isolation of regulatory elements), which selects open chromatin regions for DNA microarray hybridization by phenol-chloroform extraction of sonicated cross-linked samples [27]. When interrogated in the background of a disease compared with healthy controls, the identified profiles of these nucleosome-free regions may provide a new type of disease marker for future studies.
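A naive sketch of how enriched or accessible regions might be flagged from a per-base read-coverage track is given below. Real ChIP-Seq peak callers model background signal and local biases; this toy version only applies a fixed coverage threshold to an invented coverage profile.

```python
# Naive sketch: flag candidate enriched/accessible regions from a per-base
# coverage track by thresholding. The coverage values are invented; real
# peak callers use statistical background models rather than a fixed cutoff.

def find_enriched_regions(coverage, threshold):
    """Return (start, end) intervals where coverage stays at or above threshold."""
    regions, start = [], None
    for pos, depth in enumerate(coverage):
        if depth >= threshold and start is None:
            start = pos
        elif depth < threshold and start is not None:
            regions.append((start, pos))
            start = None
    if start is not None:
        regions.append((start, len(coverage)))
    return regions

coverage = [2, 3, 2, 15, 22, 30, 18, 4, 2, 1, 12, 14, 3]  # invented track
print(find_enriched_regions(coverage, threshold=10))        # [(3, 7), (10, 12)]
```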
Limitations
While next-generation sequencing holds great promise for the discovery of novel disease markers, there are issues with the current technology, such as artifacts due to sample preparation (both reverse transcription and polymerase chain reaction can generate biases) and data processing (assembly of short reads can result in errors, especially in regions of repetitive sequence). Newer technologies from companies like Helicos Biosciences and Pacific Biosciences are on the horizon that could overcome these issues by direct RNA sequencing and long single molecule sequencing, fulfilling the promise of personalized medicine in the post genome era [22,71].
Marker discovery using mass spectrometry
Mass spectrometry technology (MS) has been growing rapidly in the past several decades. Since John Bennet Fenn and Koichi Tanaka developed new soft desorption methods that made mass spectrometric analyses of biological macromolecules possible, this technology has been widely used in proteomic studies [19,49]. The ability to identify and quantify target molecules (e.g. peptides) makes mass spectrometry methods a popular tool for disease marker discovery.
Disease marker discovery with mass spectrometry is usually combined with various sample separation methods such as 2DE (2-Dimensional Electrophoresis) and 2D-DIGE (2-Dimensional Differential In-Gel Electrophoresis) [18]. In a typical procedure, mixed proteins from pooled disease samples and pooled controls are separated with 1D or 2D electrophoresis, individual protein bands or spots are visualized, and differential bands or spots are then excised, followed by enzyme digestion (e.g. with trypsin). The digested peptides are then subjected to mass spectrometry analysis for protein identification. With this method, Shen et al. identified 40 potential markers for pancreatic adenocarcinoma, and the spectrum of these markers covered antioxidant proteins, chaperones, calcium-binding proteins, catalytic enzymes, signal transduction proteins and extracellular matrix proteins [94]. Similarly, Wang et al. identified 52 differentially expressed proteins (including 8 novel markers) associated with oral squamous cell carcinoma, and validated one of the eight markers, named RACK1, using immunostaining and gene silencing studies [113]. Even though 2D electrophoresis can improve protein separation and assist in further identification by mass spectrometry, the discovery of low-abundance disease markers has been greatly limited by the poor resolution and sensitivity of 2DE or 2D-DIGE methods. Furthermore, pooled samples will not only lose important information such as person-to-person variation in the disease markers among different individuals, but will also miss proteins that are only present in a subset of the sample population.
Recent improvements in mass spectrometry techniques, as well as in data analysis algorithms, have enabled the analysis of complex protein samples [116]. Currently, liquid chromatography (LC)-MS/MS using electrospray ionization (ESI) is one of the commonly used methods for large-scale shotgun proteomic studies. Gel-free methods such as MudPIT (multidimensional protein identification technology) have become increasingly popular and have greatly enhanced detection limits [115], and the improved resolution and accuracy of mass spectrometers make this technology more useful in disease marker discovery [111]. With 2D LC-MS/MS, Ralhan et al. identified 811 nonredundant proteins in head-and-neck squamous cell carcinoma from 15 individual cancer samples compared with one pooled normal control, and the panel of the three best performing markers achieved a sensitivity of 92% and a specificity of 91% in cancer classification [85]. A more recent study to identify ovarian cancer biomarkers from patient ascites samples with 2D LC-MS/MS also yielded a panel of 25 known and 52 novel protein markers [51]. These studies demonstrated that the improved MS techniques not only enabled researchers to search for biomarkers in unpooled samples, but also could lead to the identification of a larger number of potential markers due to improved sensitivity.
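Reported panel performance of this kind reduces to simple confusion-matrix arithmetic. The sketch below computes sensitivity and specificity from hypothetical counts chosen only to give the 92%/91% figures quoted above; it is not a reconstruction of the original classification analysis.

```python
# Sketch: sensitivity and specificity of a marker panel from a confusion
# matrix. The counts below are invented to illustrate 92% / 91%.

def sensitivity(tp, fn):
    """Fraction of true cases correctly classified as cases."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true controls correctly classified as controls."""
    return tn / (tn + fp)

tp, fn, tn, fp = 46, 4, 91, 9  # hypothetical cancer vs control classifications
print(f"sensitivity = {sensitivity(tp, fn):.0%}, specificity = {specificity(tn, fp):.0%}")
```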
In addition to protein marker identification, mass spectrometry is also capable of identifying metabolomic markers [92]. Metabolomics is a novel field that studies the global profiles of all metabolites in a given sample. Since diseases such as cancer usually have unique metabolomes [30], the over- or under-represented metabolites could serve as potential markers of the disease. Moreover, certain metabolites in the cells will also influence the activity of larger biomolecules such as kinases (unpublished data); therefore, identification of the cancer-specific metabolomes would be of great value. Currently, multiple metabolites have been associated with various tumors, such as alanine, saturated lipids, CCMs, glycine, lactate, myo-inositol, nucleotides, PUFAs and taurine [30], and it will not be surprising if this list grows dramatically in the coming years.
Although mass spectrometry is a powerful tool for molecular marker identification, the clinical application of this technology for diagnostic purposes is still limited. Mass spectrometry may one day become the platform of choice for detection of disease-associated markers; however, the high cost of mass spectrometers, as well as the lack of standardized methods, is preventing it from being adopted by clinicians as a diagnostic tool. Moreover, the high level of molecular complexity of biological samples is still a large obstacle in both marker identification and application. There remains considerable room for improvement before mass spectrometry realizes its ultimate potential in the clinic.
Closing remarks
Disease markers are important for the efficient diagnosis, prognosis and treatment of a disease; therefore, identifying these markers is crucial, especially for diseases with high mortality rates such as cancer. Each technology reviewed in this article has a unique niche in disease marker discovery. Protein microarray technology excels in finding protein markers, especially antibody markers; next-generation parallel sequencing is designed for RNA and genetic/genomic marker discovery; and mass spectrometry specializes in the identification of protein markers as well as metabolomic markers. Each technology has its unique advantages and limitations, and the "-omics" information obtained with the help of these technologies may complement each other, leading to a comprehensive view of disease (Fig. 3). This information will greatly expand our understanding of the etiology and course of human diseases, resulting in more efficient diagnosis and treatment of disease.
Moreover, by adopting a comprehensive "-omics" view of human diseases, it will be interesting to discover the extent to which these systems interact with each other. Will we find that a disease associated with certain genetic markers also develops specific autoantibodies? Or that a certain autoantigen response is actually due to the dysregulation of a specific metabolite or trans-spliced mRNA? Is it possible that diseases such as multiple sclerosis and asthma are actually caused by the coordinating effect of genetic susceptibility and, say, viral infection? While systems biology approaches to disease marker discovery are still in their infancy, they have already led to the discovery of many promising genetic and protein markers in various diseases. Additionally, new dimensions in the development and application of these approaches may further revolutionize this field by interrogating additional "-omes", such as the methyl-genome, the "kinome" and the "virome". These systems may be as important as the systems reviewed above to establish a complete understanding of, and efficient treatments for, many diseases.
"year": 2010,
"sha1": "463446fb992dd841bbb3f87f72131e072d47a4e4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3399d0fe01c7cb8bff0615a506b7beacc813a05e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Comparative evaluation of prophylactic single-dose intravenous antibiotic with postoperative antibiotics in elective urologic surgery
Background Unrestricted antibiotic use is very common in Iran. As a result, emergence of resistant organisms is commonplace. Antibiotic prophylaxis in surgery consists of a short antibiotic course given immediately before the procedure in order to prevent development of a surgical site infection. The basic principle of prophylaxis is to maintain effective concentrations of an antibiotic active against the commonest pathogens during the entire surgery. Materials and methods We prospectively investigated 427 urologic surgery cases in our department between August 2008 and September 2009 (Group 1). As reference cases, we retrospectively reviewed 966 patients who underwent urologic surgery between May 2004 and May 2008 (Group 2) who were administered antibiotics without any restriction. Prophylactic antibiotics such as cefazolin were administered intravenously according to our protocol. Postoperative body temperature, peripheral white blood cell counts, urinalysis, and urine culture were checked. Results To judge perioperative infections, wound condition and general condition were evaluated in terms of surgical site infection, as well as remote infection and urinary tract infection, up to postoperative day 30. Surgical site infection was defined as the presence of swelling, tenderness, redness, or drainage of pus from the wound, superficially or deeply. Remote infection was defined as occurrence of pneumonia, sepsis, or urinary tract infection. Perioperative infection rates (for surgical site and remote infection) in Group 1 and Group 2 were nine of 427 (2.6%) and 24 of 966 (2.5%), respectively. Surgical site infection rates of categories A and B in Group 1 were 0 and 2 (0.86%), respectively, while those in Group 2 were 0 and 5 (0.92%), respectively. There was no significant difference in infection rates in terms of remote infection and surgical site infection between Group 1 and Group 2 (P = 0.670). The amount of intravenously administered antibiotics, as well as its cost, decreased to approximately one quarter. Conclusion Our protocol effectively decreased the amount of antibiotics used without increasing perioperative infection rates. Thus, our protocol of prophylactic antibiotic therapy can be recommended as an appropriate method for preventing perioperative infection in urologic surgery.
Introduction
For more than two decades it has been claimed that prophylactic antibiotics are often inappropriately used in a variety of surgical procedures. 1 Inappropriate antibiotic use increases environmental selection pressure, favoring the emergence of antimicrobial-resistant bacteria that can cause surgical site infections, resulting in administration of more antibiotics, an increase in the cost of care, and a prolonged hospital stay. 2
Surgical site infection is defined by the Centers for Disease Control and Prevention as an infection occurring at or near the surgical incision within 30 days of a procedure. 3 Rates of surgical site infection are emerging as the leading indicator of quality in surgery. Attention to surgical site infection as a surrogate of quality, combined with the growing problem of antibiotic-induced resistance, has brought the issue of prophylaxis to center stage. Antimicrobial prophylaxis is the periprocedural systemic administration of an antimicrobial agent intended to reduce the risk of local and systemic postprocedural infections. The potential benefit of antimicrobial prophylaxis is determined by patient-related factors (ability of the host to respond to bacterial invasion), procedural factors (likelihood of bacterial invasion at the operative site), and the potential morbidity of infection. Antimicrobial prophylaxis is recommended only when the potential benefit outweighs the risks and anticipated costs (including expense of agent and administration, risk of allergic reactions or other adverse effects, and induction of bacterial resistance). The prophylactic agent should be effective against organisms characteristic of the operative site. Cost, safety, and convenience of the agent should also be considered. The duration of antimicrobial prophylaxis should extend throughout the period when bacterial invasion is facilitated and/or likely to establish an infection. 4 There have been many reports and a comprehensive review on the prevention of surgical site infection and the use of antimicrobial prophylaxis in general surgery. 5 However, in urologic surgery, only a few papers have been published, except for those on transurethral prostatectomy. 6 The aim of the present study was to assess whether our antimicrobial prophylaxis protocol, which was designed to decrease the use of antibiotics as well as perioperative infection rates, was appropriate in urologic surgery in Iran.
Materials and methods
We prospectively investigated 427 patients who underwent urologic procedures at our center from August 2008 to September 2009 (Group 1). A total of 1393 cases were analyzed, consisting of 427 cases in Group 1 and 966 cases in Group 2. The mean ages of the two groups were 47 years (range 1-89) for Group 1 and 51.5 years (range 4-111) for Group 2. The numbers of patients in categories A and B were 196 and 231 in Group 1, and 428 and 538 in Group 2, respectively (Table 3). In each category, there were no statistically significant differences between Group 1 and Group 2 in terms of clinical background, including age, gender, body mass index, hemoglobin, smoking, operation time, and bleeding (Table 1). We classified our surgical operations into two categories according to invasiveness and contamination level, ie, category A (clean and less invasive surgery, eg, endoscopic surgery) and category B (clean invasive surgery or clean contaminated surgery, Table 5). Patients with systemic or local signs and symptoms of infection were excluded from the study. Urinalysis and urine culture were performed for all of our eligible cases. Cases of positive or suspicious urine culture before operation were also excluded. Antibiotics were administered intravenously according to our protocol, ie, cefazolin during the induction of anesthesia for both categories. All of the endourologic cases, except for cystoscopy, had insertion of an indwelling Foley catheter for at least 24 hours postoperatively. In addition, we inserted an indwelling Foley catheter postoperatively for cases of open simple or radical prostatectomy, open pyelolithotomy or nephrolithotomy, radical nephrectomy or partial nephrectomy, and ureterolithotomy. Postoperative oral antibiotics were not initially administered. The occurrence of surgical site infection and remote infection in Group 1 was compared with the retrospectively reviewed reference group of 966 cases who underwent urologic surgery with uncontrolled administration of antibiotics from March 2006 to April 2008 (Group 2). We also analyzed risk factors for surgical site infection or remote infection in Group 1, including preoperative patient factors (age, gender, body mass index, smoking status, diabetes mellitus, hypertension, and hemoglobin concentration) and intraoperative conditions (duration of surgery and amount of bleeding). In Group 2, intravenous cefazolin was given for 24-48 hours after surgery, and in Group 1 only a single intravenous dose of cefazolin was given at the time of operation. In all cases, including both groups, the preoperative hospital stay was less than 24 hours. The new one-dose protocol required that cefazolin 1 g be given at induction of anesthesia. No doses would be given after the end of surgery. All patients were visited on postoperative days 2 and 4. If local or systemic signs or symptoms of infection, including fever, tenderness, and/or swelling at the incision site, were detected, an appropriate oral or intravenous antibiotic was commenced. In addition, complete blood count, urinalysis, and urine culture were performed 24 hours after operation and 48 hours after Foley catheter removal. The approximate cost of a cefazolin 1 g vial was 1 USD. When a patient showed signs of systemic infection, ie, body temperature ≥38°C or a white blood cell count >12,000/mm³, or localized signs or symptoms including pain, swelling, redness, wound drainage, and tenderness, treatment using another appropriate antibiotic was started, and the case was judged as a failure to prevent perioperative infection.
Statistical analysis
SPSS software (version 16; SPSS Inc., Chicago, IL) was used for the statistical analysis. A P value of less than 0.05 was considered significant.
Results
Perioperative infections, including surgical site infection and remote infection, were observed up to 30 days postoperation. We primarily judged perioperative infections from the wound condition and general condition at the second or fourth day after operation. Perioperative infection, including both surgical site infection and remote infection, occurred in nine of 427 patients (2.6%) in Group 1 and in 24 of 966 (2.5%) in Group 2 (Table 4). There was no statistically significant difference in perioperative infection rates between Group 1 and Group 2 (P = 0.670). Rates of surgical site infection in Group 1 were 0 and 2 (0.86%) in categories A and B, respectively, while those in Group 2 were 0 and 5 (0.92%), respectively. Again, there was no statistically significant difference in the rate of surgical site infection in each category between Group 1 and Group 2 (P = 0.670 and P = 0.667, respectively; Table 2). In categories A and B, the amount of intravenously administered antibiotics per patient in Group 1 was significantly smaller than that in Group 2. Thus, the average price for intravenously administered antibiotics decreased to approximately one quarter (1 USD for Group 1 versus 4 USD for Group 2) and the average price for oral antibiotics decreased to approximately one-fifth (0.5 USD for Group 1 and 2 USD for Group 2). No significant differences were found between the single-dose group and the two-day group in terms of total surgical site infection, superficial incisional surgical site infection, deep incisional surgical site infection, febrile urinary tract infections, or pneumonia. In both groups, underlying conditions, such as diabetes, did not have an influence on the incidence of postoperative complications.
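For readers who want to retrace the group comparison, the sketch below runs standard 2x2 contingency tests on the infection counts reported above. It is only an illustration: the original analysis was carried out in SPSS and the exact test used is not specified, so the p values produced here need not match the reported P = 0.670.

```python
# Sketch: comparing perioperative infection rates between Group 1 (9/427)
# and Group 2 (24/966) with standard 2x2 contingency tests.
# This is an illustration, not the original SPSS analysis.
from scipy.stats import chi2_contingency, fisher_exact

table = [[9, 427 - 9],     # Group 1: infected, not infected
         [24, 966 - 24]]   # Group 2: infected, not infected

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```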
Discussion
Surgical site infection and urinary tract infection are common causes of patient morbidity. Surgical site infections complicate up to 5% of clean extra-abdominal operations and up to 20% of intra-abdominal procedures. 7 There are many potential factors to consider in choosing an appropriate perioperative antibiotic regimen. These considerations include the infection rate at both the surgical site and at remote sites, the potential development of antimicrobial resistance, cost, and the potential for adverse reactions to the antibiotic. Surgical site infections increase morbidity and mortality and can incur considerable costs to an already overwhelmed health care system. Surgical antimicrobial prophylaxis has been shown to reduce the incidence of postoperative wound infections in many randomized clinical trials. The drug chosen should be active against the pathogens most commonly associated with wound infections following the specific procedure and against the pathogens endogenous to the region of the body being operated on, 8 but need not be active against every potential pathogen. 9 The prophylactic dose should never be smaller than the standard therapeutic dose of the drug. It is reasonable to use a dose in the upper therapeutic range (eg, 1-2 g of cefazolin or cefotetan for adults and 30-40 mg/kg for children). Infection can be prevented when effective concentrations are present in the blood and tissues during and shortly after the surgical procedure. Therefore, antimicrobial prophylaxis should begin just before the operation. Starting earlier is unnecessary and potentially dangerous, and starting later is less effective. 9 Current information indicates that additional intraoperative doses of an antimicrobial agent should be given at intervals of one- or two-fold the half-life of the drug so that adequate levels are maintained throughout the operation. 10 Because the half-life of almost all antibiotics is 0.7-1.5 hours, it is necessary to administer antibiotics again when the operation time is more than three hours. 11 Supplementary doses are indicated in cases where blood loss is greater than 1500 cc. Misuse of antibiotics is not harmless. Increasing adverse effects, bacterial resistance, and costs are commonly associated with antibiotic use. To our knowledge, no one has demonstrated an increase in adverse effects using surgical antibiotic prophylaxis. Many risk factors have been reported, such as age, nutritional status, diabetes mellitus, smoking, and obesity, 12 as well as coexistent infections at a remote body site, colonization with microorganisms, altered immune response, length of preoperative stay, transfusion, preoperative hair removal, antimicrobial prophylaxis, operating room, surgical attire and drapes, and surgical technique.
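As a rough arithmetic illustration of the redosing rule just described (an additional intraoperative dose after roughly two half-lives of the agent), the following sketch computes how many supplementary doses an operation of a given length would call for. The half-life value and the two-half-life interval are assumptions chosen for illustration; actual redosing should follow the relevant guidelines and local protocols.

```python
# Sketch of the intraoperative redosing rule: an extra dose roughly every
# two half-lives of the drug. Values are illustrative, not clinical guidance.

def redosing_interval_hours(half_life_hours, multiples=2):
    return multiples * half_life_hours

def supplementary_doses(operation_hours, half_life_hours):
    """Number of intraoperative supplementary doses after the initial dose."""
    interval = redosing_interval_hours(half_life_hours)
    return max(0, int(operation_hours // interval))

# With an assumed half-life of 1.5 h, a 4-hour operation calls for 1 extra dose.
print(supplementary_doses(operation_hours=4.0, half_life_hours=1.5))
```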
Taken together, we classified surgical procedures according to invasiveness and contamination level (Table 5), and designed the antimicrobial prophylaxis schedule, including the timing, duration, and selection of antimicrobial agents, according to each category. Because the targets are not only Gram-positive but also Gram-negative bacteria in category B, first- or second-generation cephems for skin incisions are recommended. 8 We believe that our protocol was very simple for medical staff to implement. Most importantly, there were no significant differences in the rates of surgical site infection as well as remote infection in each category between the two groups, in spite of a decrease in the amount of antimicrobial prophylaxis. In a study by Briffaux et al there was no significant difference between two antibiotic prophylaxis regimens (single-dose or three-day) for patients undergoing transrectal ultrasound-guided biopsies. 13 In a study by Zomorrodi and Buhluli 14 , there was no difference between 1-day and 7-day antibiotic prophylaxis in donor nephrectomy cases. In a study by Trinchieri et al antimicrobial prophylaxis according to European Association of Urology guidelines together with active surveillance seemed to be adequate to prevent symptomatic/febrile genitourinary infections, as well as serious wound infections, in the majority of patients. 15 An appealing argument for decreasing antibiotic usage may involve cost. Our study showed that adjusting 24-hour prophylaxis to one-dose prophylaxis reduces costs without increasing infection rates, and results in monthly cost savings. Importantly, our savings are not restricted to decreasing two to three doses per surgery, considering that overuse of antibiotics may be much more expensive than the cost of the drug itself. Resistant organisms, potential allergic reactions, and other adverse events related to antibiotic use will certainly cost more than the 3 USD saved per patient. [Table 5, listing the individual surgical procedures and their counts in categories A and B, appears here in the original.]
Conclusion
A single-dose antimicrobial prophylaxis regimen was effective for prevention of perioperative infections, including surgical site infection, urinary tract infection, and remote infection in endoscopic-instrumental, clean, and cleancontaminated surgical procedures in urologic patients. We have demonstrated that single-dose prophylaxis is feasible.
To the best of our knowledge, this is the first reported study from Iran to evaluate the role of antimicrobial prophylaxis in urologic surgery. In the current era of restricted hospital budgets, one-dose prophylaxis may provide a way to improve performance by lowering costs.
Do Higher Educated People Feel Better in Everyday Life? Insights From a Day Reconstruction Method Study
Past research has shown a positive association between education and well-being. Much of this research has focused on the cognitive component of well-being (i.e., life satisfaction) as outcome. On the other hand, the affective component, that is, how often and intensively people experience positive affect (PA) and negative affect (NA) in their everyday lives, has received far less attention. Therefore, we examined the association between education and PA and NA in everyday life, with a particular focus on affective experiences at the sub-facet level (based on a structure of NA with multiple factors). We used data from a nationally representative sample (N = 1647) of the German Socioeconomic Panel Innovation Sample (SOEP-IS), employing the Day Reconstruction Method (DRM) to capture affective experiences of everyday activities. Multilevel structural equation models revealed that (1) education was not related to PA, but (2) was negatively associated with two sub-facets of NA (mourning/worries and loneliness/boredom); (3) income might in part explain the association between education and NA; (4) education does not particularly seem to serve as a resource in times of unemployment or retirement (i.e., there were no interactions between education and unemployment/retirement regarding well-being). In essence, higher educated people reported fewer negative emotions in everyday life than their lower educated counterparts, but not more positive emotions. The findings underline that different facets of NA, in addition to life satisfaction, are relevant variables related to education and should receive more attention in order to gain a more comprehensive understanding of non-monetary correlates of education.
Introduction
The field of research on subjective well-being (SWB) is one of the fastest growing areas in social and life sciences (Diener and Scollon 2014), with many publications on antecedents and consequences of feeling well. SWB is associated with several positive outcomes such as health, social relationships, and resilience (Diener et al. 1999;Kansky and Diener 2017). Furthermore, there is growing interest in how to promote SWB at the individual and national levels (Kahneman et al. 2004a). For example, economists are interested in how socio-economic variables such as education can foster higher levels of SWB (Dolan et al. 2008). In this context, it has been postulated that the returns of education have a positive effect on the possibility to satisfy human needs through better material and non-material living conditions (Vila 2001). In this regard, the outcomes of education can be viewed as human capital that likely impacts individual lives way beyond economic productivity (Becker 1993). Thus, SWB can be considered as a non-monetary outcome of education.
The majority of studies on the relationship between education and SWB have found a positive association between education and SWB (e.g., Diener et al. 1999;Witter et al. 1984). Better educated people seem to be happier and more satisfied with their own life than their less educated counterparts. However, there are three issues that have not been considered sufficiently in the literature so far.
First, studies often use life satisfaction as the only indicator of SWB. According to Diener (1984), however, SWB consists of two different components: the cognitive and the affective component. The cognitive component is defined by global judgements of one's life and satisfaction with different life domains (e.g., work satisfaction). The affective component is characterized by the experience of positive affect (PA) and negative affect (NA). To date, there has been only a limited number of studies examining the association between the affective component of well-being and education. That is, there is a lack of information about whether more highly educated individuals are not only more satisfied with their life but also feel better in everyday life than less highly educated individuals. This is of particular interest since specifically affective well-being is linked to a number of health-related outcomes such as a lower risk of mortality (Moskowitz et al. 2008) or fewer symptoms of illness (Pettit et al. 2001).
Second, researchers commonly use composite scores based on retrospective self-reports of PA and NA as indicators of well-being. However, such composite scores of retrospective self-reports pose two problems. On the one hand, retrospective reports are commonly biased by memory effects and beliefs about the self (Robinson and Clore 2002). Scholars thus suggest aggregated scores across repeated assessments (e.g., mean levels across the occasions of a diary study) per person as an alternative to retrospective reports. Such scores were shown to have distinct predictive validity on numerous outcome variables (Conner and Barrett 2012), indicating that they are more relevant than retrospective reports. On the other hand, composite scores are commonly computed across different sub-facets of affect, neglecting the multidimensional structure of affect. However, there are alternatives to such composite scores, mainly because there are instances in which a multidimensional conceptualization of affect seems superior to differentiating between only two global dimensions (e.g., Möwisch et al. 2019).
Third, little is known about variables that possibly affect the relationship between education and SWB (Chen 2012;Desjardins 2008). The OECD (2007) concluded that there is a lack of coherent information about possible mechanisms linking education and SWB. However, some potentially relevant variables have been studied, such as income (e.g., Dolan et al. 2008) or unemployment (McKee-Ryan et al. 2005). In these studies, researchers focused predominantly on mechanisms between education and life satisfaction. Therefore, there is little evidence on variables that possibly affect the relationship between education and PA/NA. 1 The purpose of the present study was to address the following three issues. First, we intended to investigate the relationship between education and the affective component of well-being (Aim 1). We then wanted to conceptualize affective well-being in a more nuanced way than other studies in this field of research, and to examine the association between education and phenomenologically distinguishable facets of affect (Aim 2; cf. Möwisch et al. 2019). Third, we set out to examine the extent to which income, employment status, and retirement influence the relationship between education and affective wellbeing (Aim 3).
Education and Subjective Well-being
Research across various disciplines, including economics, sociology, and psychology, has shown a positive association between education and the cognitive component of SWB (e.g., Blanchflower and Oswald 2004, 2005; Powdthavee et al. 2015). In their meta-analysis, Witter et al. (1984) concluded that there is a weak positive association between formal education and SWB; the latter was mostly measured as life satisfaction or domain-specific satisfaction (e.g., work satisfaction). More recently, a study with four different national surveys also revealed a positive association between education and happiness with one's own life (Easterbrook et al. 2016). Non-significant or negative associations between life satisfaction and education have been reported comparatively rarely (e.g., Anand et al. 2005; Clark and Oswald 1996; Headey 2008).
In comparison, fewer studies have examined the relationship between education and the affective component of SWB (PA and NA). Based on an analysis of two nationally representative data sets of US households, Ross and Van Willigen (1997) pointed out that higher educated persons are less depressed, anxious, and angry than lower educated persons. In a study investigating the validity of the PANAS scales, Crawford and Henry (2004) found a significant positive correlation between education and PA and a non-significant correlation with NA. A positive direct effect of education on happiness (as an aspect of PA) was also found in a Spanish sample, even after controlling for socio-economic variables such as income (Cuñado and de Gracia 2012). In a recent study by Nikolaev (2018), education was associated with more positive and fewer negative emotions. Similarly, another recent study using data from the European Social Survey revealed a positive effect of vocational and tertiary education on emotional well-being, including on PA (Jongbloed 2018). 2 However, there are also studies that have found no significant or even contrary associations. For example, Collins, Sarkisian, and Winner (2009) found a positive association between education and NA. In their study, higher educated persons were more nervous, afraid, and aroused than lower educated persons. Moreover, the World Happiness Report did not show any associations between education and PA and NA (Helliwell et al. 2012). Together, the majority of these studies indicate a positive association between education and affective well-being, but the empirical basis is still inconclusive.
Footnote 1: The results remain unchanged after controlling for the death of a family member in the last year. As a reviewer noted, such an event could affect the association between education and especially NA3 (mourning and worries). The results also remain robust after controlling for self-reported health.
Footnote 2: To compare and interpret the regression coefficients between education and PA/NA, the measurement models of PA and NA in both groups have to be tested for measurement invariance (Brown 2014). Our results indicated partial metric measurement invariance across both groups for the measurement models of PA and NA (i.e., only the factor loadings for NA1 were allowed to vary in both groups). Therefore, the regression coefficients between education and PA/NA were comparable in both groups.
Importantly, these studies used either global composite scores or single items (e.g., happiness) to measure NA and PA, which is a common approach for capturing affective well-being. However, these composite scores can lead to a loss of information (Schimmack 1999). For example, Watson and Tellegen (1985) distinguished between two broad global dimensions of affect: positive and negative. According to their conceptualization, these dimensions represent higher-order factors that embrace several discrete emotions at lower levels, based on the assumption that affect has a hierarchical structure (Tellegen et al. 1999). Moreover, appraisal theorists (e.g., Lazarus 1991; Scherer 2001) have postulated that affective states are characterized by phenomenologically distinguishable facets as a result of appraisal processes, supporting the idea of a more nuanced structure of affect. Thus, we differentiated between multiple sub-facets of NA to address the problem of global composite scores and to gain information about the relationship between education and affect at the level of sub-facets. Specifically, instead of investigating a global composite score, we distinguished between the factors NA1 (anger/frustration/stress), NA2 (mourning/worries), and NA3 (loneliness/boredom). NA1 (anger, frustration, stress) is characterized by a high level of arousal. The other sub-facets (NA2 (mourning, worries) and NA3 (boredom, loneliness)) represent moderate to low levels of arousal and reflect specific feelings that are high in valence. Moreover, NA3 (boredom, loneliness) can be described as a lack of stimulation. Boredom generally reflects a lack of internal and external stimulation (Struk et al. 2016), while loneliness represents a lack of social stimulation. Based on the literature, we expected a positive association between educational attainment and PA and a negative association with all three sub-facets of NA.
Furthermore, previous studies used retrospective self-reports of PA and NA referring to a longer period of time, such as "in the last year," to examine the association between education and affective well-being. Alternatively, individual differences in PA and NA can be obtained by measuring affect repeatedly and in close proximity to when the affect is experienced, which is the approach of the Day Reconstruction Method (DRM; Kahneman et al. 2004b). The DRM captures information about activities in everyday life and related affective experiences by reconstructing the previous day. This method allows the study of variation in PA and NA within persons across situations. Importantly, it also allows the investigation of variation between persons when information across situations is aggregated per person. This approach provides a different type of information on between-person differences in affective experiences than the more common retrospective self-reports. The latter are based on episodic and semantic knowledge, while online ratings measure actually felt affective experiences (Robinson and Clore 2002). Importantly, the different sources of information used for current versus non-current emotion reports can lead to discrepancies between the two types of report. The validity of the distinction has more recently been supported empirically by a study which found retrospective and mean levels of current emotion reports to be distinguishable predictors of outcomes (Conner and Barrett 2012). Therefore, we shifted our measurement approach to the use of repeated momentary assessments of affect during daily activities using the DRM.
The Role of Income, Unemployment, and Retirement in the Education-Well-being Association
Several factors can influence the level of SWB (Diener et al. 1999). For example, the following mechanisms are supposed to mediate the association between education and affective well-being: better health behavior (Ross and Wu 1995), higher optimism, higher levels of self-esteem, more control over one's own life (Cummins 2000) as well as better functioning social networks of higher educated persons (Chen 2012). In this study, we focus on income, unemployment, and retirement (e.g., Dolan et al. 2008;Luhmann et al. 2012) because we expect monetary effects of education on wellbeing, which operate via income, and non-monetary effects of education on well-being which should be revealed by the interaction of education and unemployment / retirement in the prediction of well-being. In the following, we will elaborate on potential moderating and mediating relationships between these variables.
Income
Income has been identified as an important mechanism linking education and SWB (see Clark et al. 2008, for an overview) -it may in fact be one of the main channels through which education could have a positive impact on SWB (Dolan et al. 2008). Bivariate studies provide evidence of positive associations between education and income (Aryee et al. 1999;Vila, 2005) as well as between income and SWB (Gardner and Oswald 2007;Howell and Howell 2008). Over and above this, one study showed that education is positively associated to SWB through income (Powdthavee et al. 2015). However, these studies examined the relationship between education, income, and life satisfaction, but not between education, PA and NA. Moreover, correlations between income and the different components of SWB (life satisfaction vs. PA/NA) can vary (Luhmann et al. 2011). Therefore, we considered income as a potentially linking variable between education and PA and NA. Specifically, higher income generated by higher education should be associated with fewer worries in everyday life (cf. Payne and Hartley 1987). Furthermore, a higher income is likely to enable more opportunities to visit friends or pursue specific leisure time activities which can lead to less loneliness and boredom in everyday life.
Additionally, income might moderate the association between education and SWB. Income could serve as a resource and mitigate the potentially negative effect of low levels of education on well-being. For example, Rief et al. (2012) showed that less educated individuals were more worried about their health than higher educated people in a German sample. A higher income could buffer this negative association between education and health because more financial resources allow for better health care and possibly reduce health-related worries. Therefore, we investigated the interaction of education and income regarding PA and NA.
Unemployment and Retirement
In addition to the monetary effect of education and well-being via income, we expected non-monetary effects of education to impact well-being. These effects can be revealed by considering other aspects of one's current economic living conditions. We focused on unemployment and retirement for two reasons. Both variables are related to well-being (Luhmann et al. 2012), and education could be viewed as human capital that may be particularly relevant for well-being during unemployment and retirement. Thus, we assume an interaction between education and unemployment/retirement in the prediction of wellbeing in the following.
Unemployment. The possible effect of education on well-being might depend on how strongly it is needed as a resource. Education is considered to be one of the most important aspects of human capital (Becker 1993); the latter can be understood as the accumulation of knowledge, skills, and abilities. In the life-facet model of coping, McKee-Ryan and Kinicki (2002) elaborated on the role of human capital more generally and in particular in situations when people face unemployment. Aspects of human capital can be a beneficial resource during unemployment and thus have a buffering effect on a potential decline in well-being during unemployment. For example, McKee-Ryan et al. (2005) reported in their meta-analysis that higher educated people have better mental health and higher life satisfaction during unemployment. Thus, more highly educated persons seem to succeed better in maintaining their well-being during episodes of unemployment than their less educated counterparts. For example, existential worries in everyday life might be buffered by higher education, since higher educated individuals may be more optimistic and have better chances of reemployment. Given that this meta-analysis focused on life satisfaction, we will examine the potential buffering effect of education on PA and NA during unemployment.
Retirement. Similar to unemployment, retirement can have a negative effect on wellbeing, for example, through the loss of social contacts to co-workers (Kim and Moen 2002). Education could also be an important resource during this phase of life. Higher educated people have stronger social networks (Nieminen et al. 2008;Pichler and Wallace 2009) and tend to be more socially active after retirement (Wetzel and Huxhold 2016). Social contacts are particularly important after retirement, because social structure often changes in the absence of work, which has potential effects on well-being. For example, Crisp, Windsor, Butterworth, and Anstey (2015) showed that larger social networks lead to lower levels of loneliness after retirement. Thus, we investigated education as a potential resource during retirement in terms of affective well-being.
Participants and Procedure
This study used data from the SOEP-IS (Richter and Schupp 2015), which constitutes a nationally representative sample of the German population. To collect the data, German households were visited. Interviews were conducted with all household members who had reached at least the age of 16. The data were collected in Computer-Assisted Personal Interviews (CAPI) conducted by trained interviewers. For the data analysis, we used two consecutive waves of the SOEP-IS (2012 and 2013). As part of the DRM, participants were first asked when they had woken up on the previous day. Subsequently, they were asked to describe what they had done next by referring to a list of 24 possible activities such as "work." This procedure was repeated until the entire day had been reconstructed for one day per wave. Additional information collected on the episodes (e.g., starting and ending times of the episodes, interaction partners) will not be considered in this study. On average, the participants reported 11.1 episodes per day (SD = 4.1). All participants provided their informed consent; the Institutional Review Board of the German Institute for Economic Research approved the study.
The mean age of the participants was 56.02 years (SD = 15.09, range: 28-96). All participants under the age of 28 were excluded from the initial sample because they had not (yet) reached the opportunity to achieve the highest level of education. We chose 28 as a cut-off value because the mean age at first university graduation was 28 years in our sample. Fifty-seven percent of all participants were female (57.5%), and 37% were retirees (37.3%). Seventy-five percent of the employed individuals were in full-time employment or regular part-time employment (75.6%). The total sample included 1647 participants.
Affective Well-being
The study participants provided information on how they felt during three episodes that they had reported in the DRM. The three episodes were selected randomly by the CAPI program. We used the following items for the analyses: happiness, enthusiasm, satisfaction, anger, frustration, stress, mourning, worries, boredom, and loneliness. The answering scale ranged from 1 (not at all) to 7 (very much). The selection of items was based on previous research (Anusic et al. 2017;Möwisch et al. 2019).
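To make the idea of person-level aggregation of these ratings concrete, here is a small sketch that averages episode-level DRM ratings (1-7 scale) into person means for a toy long-format data set. The variable names and values are invented; the analyses reported below model latent affect factors at the between-person level rather than these simple means.

```python
# Sketch: aggregating episode-level DRM affect ratings into person-level means.
# The data frame is a made-up long-format example (one row per sampled episode).
import pandas as pd

episodes = pd.DataFrame({
    "person_id":  [1, 1, 1, 2, 2, 2],
    "happiness":  [5, 6, 4, 3, 4, 3],
    "worries":    [2, 1, 3, 5, 4, 6],
    "loneliness": [1, 1, 2, 4, 5, 4],
})

# Between-person information: mean rating per person across reported episodes.
person_means = episodes.groupby("person_id").mean()
print(person_means)
```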
Educational Attainment
Individual educational attainment was measured using the international standard classification of education developed by UNESCO to internationally compare educational systems and educational attainment levels (ISCED-97; UNESCO 1997). The ISCED comprises six different levels (including the percentage distributions in the total sample): Level 1 = primary education (1.2%), Level 2 = lower secondary education (11.2%), Level 3 = upper secondary education (53.8%), Level 4 = post-secondary non-tertiary education (6.0%), Level 5 = first stage of tertiary education (5.8%), Level 6 = second stage of tertiary education (22%). In order to improve the distribution of the education variable, we merged Levels 1 and 2 into "low education level," Levels 3 and 4 into "moderate education level," and Levels 5 and 6 into "high education level." We computed three dummy-coded variables to compare the three levels with each other ("low vs. moderate," "moderate vs. high," and "low vs. high").
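The recoding and dummy coding described here can be illustrated with a short sketch (the ISCED values are invented). Cases outside a given pairwise comparison are left missing, mirroring the fact that each dummy variable enters a separate model.

```python
# Sketch: merging ISCED-97 levels into low/moderate/high and creating the
# three pairwise dummy variables. The example data are invented.
import pandas as pd

isced_to_level = {1: "low", 2: "low", 3: "moderate", 4: "moderate",
                  5: "high", 6: "high"}

df = pd.DataFrame({"isced": [2, 3, 6, 5, 3, 1]})
df["education"] = df["isced"].map(isced_to_level)

# One dummy per pairwise comparison (1 = higher category); cases not in the
# comparison remain missing (NaN) and drop out of that model.
df["low_vs_moderate"] = df["education"].map({"low": 0, "moderate": 1})
df["moderate_vs_high"] = df["education"].map({"moderate": 0, "high": 1})
df["low_vs_high"] = df["education"].map({"low": 0, "high": 1})
print(df)
```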
Income, Unemployment, and Retirement
The current gross labor income and current gross secondary income (additional sources of income) of each participant served as indicators for income. The employment status was covered by a dummy-coded variable comparing "unemployment" and "employment." Retirement was assessed with a binary variable and could be answered with "yes" or "no."
Statistical Analyses
To test Aim 1 and Aim 2 (to examine the relationship between educational attainment and PA/NA), we conducted regression analyses via multilevel structural equation modeling using Mplus 7.4 (Muthén and Muthén 1998-2015). This multilevel approach (with repeated affect ratings nested within persons) allows the examination of models and associations of variables between persons as well as within persons across situations. One important aspect of the model was the representation of the factor structure of PA and NA on two levels, at the within-person and at the between-person level. This was accomplished by separating the within- and between-person variance of PA and NA and by modeling factor structures at both levels. The factor structure at the between-person level reflected the structure of items' mean levels across different situations. The emerging latent variables at the between-person level were used to examine the association between education and affective well-being.
In accordance with the theory and statistical analyses reported in Möwisch et al. (2019), we modeled one PA factor and three NA factors, all of which were supposed to be related. More precisely, a comparison of models using the same data as in this study reported in Möwisch et al. (2019) revealed that this model fitted better than other measurements models (e.g., a model with only one factor for NA).
In the multilevel structural equation model, we regressed the latent affect factors (one PA factor and three NA factors) at the between-person level on education (see Fig. 1) to examine the relationship between education and affective well-being. We used the three dummy-coded variables "low vs. moderate," "moderate versus high," and "low vs. high" to compare the influence of different levels of education on PA/NA.
To address Aim 3 (to examine how income, unemployment, and retirement affect the relationship between education and PA/NA), we first included income in the regression analyses. We then used path analyses to investigate possible indirect effects of education via income on PA and NA. To investigate the interaction between education and employment status and retirement, respectively, in the prediction of well-being, we conducted multigroup analyses (employed vs. unemployed, retired vs. not retired). Here, we investigated whether the regression coefficients between education and PA/NA differed in the two groups of employed and unemployed people as well as in the two groups of retired and non-retired people. To test whether differences in regression coefficients were significant, we performed the Satorra-Bentler scaled χ 2 -difference test (Satorra and Bentler 2001).
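For illustration, the Satorra-Bentler scaled chi-square difference test can be computed from the robust chi-square values, degrees of freedom, and scaling correction factors of the nested and comparison models, following the formula described in the Mplus documentation. The numbers plugged in below are hypothetical, not values from this study.

```python
# Sketch: Satorra-Bentler scaled chi-square difference test (Satorra & Bentler
# 2001), as described in the Mplus documentation. Input values are hypothetical.
from scipy.stats import chi2

def sb_scaled_difference(t0, df0, c0, t1, df1, c1):
    """t0/t1: robust chi-square of the nested/comparison model,
    df0/df1: degrees of freedom, c0/c1: scaling correction factors."""
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)   # difference-test scaling correction
    trd = (t0 * c0 - t1 * c1) / cd             # scaled difference statistic
    ddf = df0 - df1
    return trd, ddf, chi2.sf(trd, ddf)

trd, ddf, p = sb_scaled_difference(t0=412.3, df0=120, c0=1.21,
                                   t1=398.7, df1=116, c1=1.19)
print(f"scaled difference = {trd:.2f}, df = {ddf}, p = {p:.4f}")
```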
The models were estimated according to the principle of robust maximum-likelihood (MLR), which uses parameter estimates with standard errors and chi-square statistics which are robust to non-normality and non-independence of observations. The missing data were managed with a full-information maximum likelihood approach (FIML). We evaluated the model fit using the comparative fit index (CFI), root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). We applied the conventional cut-off criteria proposed by Hu and Bentler (1999), that is, 0.95 or higher for CFI, 0.08 or lower for SRMR, and 0.06 or lower for RMSEA.
Educational Attainment and Affective Well-being
To address Aims 1 and 2 (to examine the relationship between educational attainment and PA and NA), PA and NA were regressed on education in three separate models (Model 1: comparison of low vs. high education; Model 2: comparison of low vs. moderate education; Model 3: comparison of moderate vs. high education). Age and gender were included as control variables in the models. The model fit indices indicated an acceptable model fit for all three models. Table 1 presents the regression coefficients and model fit indices for Models 1-3. In Model 1 (low vs. high education), education was significantly negatively related to NA2 (mourning, worries) and NA3 (boredom, loneliness). Accordingly, people with a higher level of education felt less sad and worried as well as less lonely and bored in everyday life than people with a low level of education. In Model 2 (low vs. moderate education), education was also negatively associated with NA2 (mourning, worries) and NA3 (boredom, loneliness). Moderately educated people reported less mourning and worries (NA2) and less loneliness and boredom (NA3) in everyday life than people with low levels of education. In contrast, Model 3 (moderate vs. high education) revealed no significant associations between education and the facets of PA and NA. To sum up, education did not predict PA in Models 1 to 3. Yet, higher education was associated with less NA except for in the model comparing moderate vs. high education, where there were no significant effects of education. 1
The Role of Income, Unemployment, and Retirement
Bivariate correlations between the variables income, unemployment, and retirement as well as education and well-being are reported in the Supplement. The directions of the correlations generally correspond to our expectations: Income was positively related to education and negatively related to NA2 (mourning, worries) and NA3 (loneliness, boredom); employment status (unemployed vs. employed) was negatively related to NA2 (mourning, worries) and NA3 (boredom, loneliness); retirement (non-retired vs. retired) was positively related to NA2 (mourning, worries) and NA3 (boredom, loneliness).
Income
To examine the effect of income on the relationship between education and PA/NA (Aim 3), we first included income in a further set of three regression models. The results of these analyses are presented in Table 2. The model fit indices were acceptable for these models. In Model 1 (low vs. high education), there were no significant associations between income and the latent affect factors. More importantly, the regression coefficients between education and PA/NA did not change after we included income. The significant negative coefficients between education and NA (NA2 and NA3) remained significant after income was included. Path analyses revealed no significant indirect effects of education on NA via income for Model 1 and Model 2 (Model 1: NA2: Β = −0.001; SE = 0.01; p > 0.01; NA3: Β = 0.002; SE = 0.01; p > 0.01; Model 2: NA2: Β = −0.010; SE = 0.01; p > 0.01; NA3: Β = −0.012; SE = 0.01; p > 0.01). In Model 3 (moderate vs. high education), there were no significant effects of income and education on NA2 (mourning, worries) and NA3 (boredom, loneliness).
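As background to the path analyses reported here, an indirect effect is conventionally estimated as the product of the two constituent path coefficients. The sketch below shows this generic computation with a Sobel-type standard error; the numbers are hypothetical and are not taken from the models above.

```python
# Illustrative sketch of an indirect ("education -> income -> NA") effect as
# the product of two path coefficients, with a Sobel (1982) standard error.
# Generic textbook computation, not the authors' Mplus output.
import math

def indirect_effect(a, se_a, b, se_b):
    ab = a * b                                           # indirect effect
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)   # Sobel-type SE
    z = ab / se_ab
    return ab, se_ab, z

print(indirect_effect(a=0.30, se_a=0.05, b=-0.20, se_b=0.04))
```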
To examine whether income moderates the association between education and SWB, we investigated the interaction of education and personal income in the prediction of PA and NA (see Table 3). The results revealed significant interaction effects for NA2 (mourning, worries). The interaction effects are shown in Figs. 2 and 3. These indicate that individual educational background only seems to be a relevant covariate for NA2 (mourning, worries) at lower levels of personal income. For higher levels of personal income, the differences in NA2 (mourning, worries) between lower educated and highly educated persons decrease. However, this interaction effect did not appear for the other sub-facets of NA. In addition to the interaction effects, we also sought to identify indirect effects between education and NA via income in this model. In Model 1 (low vs. high education) and Model 2 (low vs. moderate education), there were also indirect effects of education on NA2 (mourning, worries) via income (Model 1: Β = −0.151; SE = 0.04; p < 0.01; Model 2: Β = −0.046; SE = 0.01; p < 0.01). Thus, the significant association between education and NA2 (mourning, worries) can be partly explained by the higher incomes of higher educated persons. Nevertheless, the direct effect of education on NA2 (mourning, worries) remained significant. In sum, there are indirect and moderating effects of income on the association between education and NA2 (mourning, worries).
Employment Status
To investigate a potential interaction between education and employment status regarding affective well-being, we used multigroup modeling and excluded all retirees from the analysis. In particular, we examined whether the regression coefficients between education and PA/NA differed significantly between the groups of employed and unemployed people. A stronger association between education and PA/NA in unemployed than in employed people would suggest that education serves as human capital or a resource during unemployment. 2 Table 4 shows the results of the regression analyses in the group of unemployed people, and Table 5 the results for employed persons. As in the previous analyses, the model fit indices showed acceptable values for the multigroup analyses. Contrary to our expectations, Model 1 (low vs. high education) revealed no significant associations between education and PA/NA in either group. Likewise, Model 2 (low vs. moderate education) and Model 3 (moderate vs. high education) did not show any significant associations between education and the facets of PA and NA in either group. All in all, we did not find evidence for the notion that education might function as human capital during unemployment.
Retirement
To identify a possible interaction between education and retirement with regard to PA and the different NA factors, we used the same procedure as for employment status. That is, we analyzed whether the regression coefficients between education and PA/NA differed significantly between the groups of retired and non-retired people. Again, we demonstrated partial measurement invariance across both groups for PA and NA. Table 6 shows the results for non-retired persons, and Table 7 the results for retirees. The model fit indices exhibited acceptable values for this multigroup analysis. In Model 1 (low vs. high education), education was negatively related to NA2 (mourning, worries) in the group of retired but not in the group of non-retired people. The χ²-test statistics showed, however, that the regression coefficients between education and NA2 (mourning, worries) did not differ significantly between the groups (Δχ²(1) = 2.88, p > 0.05). The relationship between education and NA3 (boredom, loneliness) was also negative in the group of retirees but not significant in the group of non-retired people. The χ²-test statistics, however, revealed that the regression coefficients between education and NA3 (boredom, loneliness) did not differ significantly between groups (Δχ²(1) = 0.81, p > 0.05). Model 2 (low vs. moderate education) showed the same pattern of findings. The associations between education and NA2 (mourning, worries) as well as NA3 (boredom, loneliness) were only negative in the group of retirees. The χ²-test statistics showed that the regression coefficients for NA2 (mourning, worries) and NA3 (boredom, loneliness) did not differ significantly between retired and non-retired persons (for NA2: Δχ²(1) = 1.59, p > 0.05; for NA3: Δχ²(1) = 0.69, p > 0.05). In Model 3 (moderate vs. high education), there were no significant associations between education and PA/NA in either group.
Although these results showed numerically different associations between education and the facets of affect in the two groups (NA2 (mourning, worries) and NA3 (boredom, loneliness)), the associations did not differ significantly. This indicates that there was no reliable interaction between retirement status and education regarding affective well-being.
Discussion
This study examined the relationship between education and the affective component of subjective well-being (i.e., PA and NA) with a particular focus on specific sub-facets of NA in everyday life. Furthermore, we investigated how income, unemployment, and retirement affect the relationship between education and affective well-being. The main finding was that higher educated persons reported less boredom/loneliness and mourning/worries in everyday life than their less educated counterparts. Moreover, we found effects of income linking education and affective well-being for NA2 (mourning, worries). Additionally, we found an interaction effect between education and income for mourning/worries, according to which lower educated persons reported more mourning and worries than higher educated persons only when they had comparatively low incomes. Finally, this study did not provide evidence that education functions as human capital during unemployment and retirement.
Education and Affective Well-being
One of the important findings of this study is that higher educated persons experience less NA than lower educated persons in everyday life, specifically regarding the sub-facets mourning/worries and loneliness/boredom. There are several potential reasons for this finding. First, higher education is related to better health behaviors, such as less smoking or more physical exercise (Ross and Wu 1995). Better health, in turn, is associated with a higher level of SWB (Diener 1984). A second possible explanation is given by Cummins (2000) whose model postulates that three psychological variables mediate the link between education and well-being: control, self-esteem, and optimism. According to this model, education fosters control of one's own environment, a higher level of self-esteem, and a positive perspective on the future. These three factors, in turn, affect well-being positively and could be potential mechanisms linking education and well-being. A third reason could be that higher educated people have better functioning social networks, which directly reduces or makes it easier to deal with negative emotions, such as mourning or loneliness, in everyday life due to better social support. For example, Chen (2012) reported that social networks were important mediators between education and SWB in an East Asian sample. Thus, better interpersonal relationships and better social support seem to have buffering effects on negative emotions. In contrast to NA, education was not related to PA; this finding is possibly influenced by the way PA was measured in this study. Unfortunately, the SOEP-IS contains only three items to measure PA. Therefore, we could only model one latent factor for PA. This structure may not capture all sub-facets of PA (e.g., low-arousal sub-facets such as the state of being calm or relaxed), some of which might have a relation to education. However, a nonsignificant (Kahneman and Deaton 2010) or even a negative association between education and PA (Miret et al. 2014) was reported by a previous study. Therefore, further studies should also measure multiple facets of PA to gain more information about the association between education and PA.
While the generally negative relationship between education and NA is consistent with previous literature, our study provided some insights into this relationship. First, our approach of modeling three latent NA factors instead of using aggregated composite scores is in line with appraisal theories which postulate phenomenologically more nuanced affective states (e.g., Lazarus 1991;Scherer 2001). Our factors represent different sub-facets of NA. NA1 (anger, frustration, stress) is characterized by a high level of arousal. In contrast, NA2 (mourning, worries) and NA3 (boredom, loneliness) represent moderate to low levels of arousal and reflect specific feelings that are high in valence. The results showed that there are significant negative associations between education and NA2 (mourning, worries) as well as NA3 (boredom, loneliness). Education thus particularly seems to influence facets of negative affect that are characterized by a high negative valence, such as mourning or loneliness (NA2 and NA3). In contrast, facets of affect that are characterized by high arousal, such as anger, seem to be less related to education. A possible explanation for this finding could be that more specific negative emotions such as mourning or loneliness can be better handled by higher educated persons because their social networks function better, but that the higher educated cannot handle NA facets such as anger better. Another explanation for the non-significant association between education and NA1 (anger, frustration, stress) could be confounding effects of age. Interestingly, the correlation between education and NA1 was positive for Model 1 ("low vs. high education") and Model 3 ("moderate vs. high education"), but the standardized regression coefficients were not significant. Note that education correlated negatively with age, and age correlated negatively with NA1. Therefore, it is possible that older people tend to have a lower education but they also experience less anger, frustration, and stress in their everyday lives than younger people. Thus, the association between education and NA1 might be confounded by age. Second, our results imply that the relationship between education and NA depends on the levels of education which are compared. We used the ISCED as a measure of education (Unesco 1997) and computed three dummy-coded variables to compare different levels of education (low vs. high education, low vs. moderate education, moderate vs. high education). This corresponds to a broad assessment of education and goes beyond school-leaving qualifications and also considers vocational training or a university degree. We found significant negative associations between education and NA only when comparing a low level of education with a moderate or high level of education. This apparent nonlinear relation and the question of whether it is particularly important to achieve a moderate level of education (upper secondary education/post-secondary non-tertiary education) in order to ensure an adequate level of affective well-being are important topics for further research. Fewer financial resources, precarious employment, and small social networks are possible reasons why particularly lower educated persons struggle with a high amount of negative emotions in everyday life.
Income
Income was examined as one monetary aspect that might affect the association between education and well-being. Our findings show that income might indeed be a potential pathway by which education affects affective well-being. The indirect effect of education via income on NA2 (mourning, worries) could be interpreted such that a higher education leads to a higher income which, in turn, may counteract financial worries. Importantly, this indirect effect was only found for NA2 (mourning and worries) and not for the other facets of affect, highlighting the importance of modeling the sub-facets of NA. In general, it must be noted that no conclusions can be drawn about causal relationships between education, income, and affective well-being due to the correlative study design. Thus, it is possible that more affluent people can achieve a higher level of education, which may lead to better affective well-being in everyday life.
In addition to the indirect effects of education via personal income, income also moderated the effect of education on NA2 (mourning, worries). While the relationship between education and NA2 was negative for lower incomes (lower educated persons reported more mourning and worries), the differences in NA2 between lower educated and higher educated persons were smaller for higher incomes. In other words, income can be interpreted as having a buffering effect for lower educated people, as higher personal income is associated with less mourning and worries. In contrast, income has no effect on mourning and worries in everyday life for higher educated people.
To sum up, the role of income in the education-well-being relationship appears quite specific in this study; income seems to make a difference particularly for those who are less educated. Education remains an important covariate for negative emotions in our study, regardless of personal income. For this reason, the notion that income is the main mechanism between education and SWB seems unlikely (Clark et al., 2008), and thus, the nonmonetary effects of education play an important role.
Employment Status
Contrary to our expectations, we did not find an interaction between education and employment status regarding PA and NA. One explanation for not finding an effect might be that we did not consider the duration of unemployment. Studies have shown that the longer unemployment lasts, the more well-being decreases (McKee-Ryan et al. 2005). Moreover, long-term unemployment has been shown to be related with poorer health (Böckerman and Ilmakunnas 2009) and permanent psychological distress (Daly and Delaney 2013). Since our study has a cross-sectional design, we could not investigate the dynamic processes of well-being during unemployment.
Another potential explanation could be that higher educated people have higher aspirations, counteracting the potential buffering effect of education in periods of unemployment. They may experience a greater drop in income and prestige when entering unemployment than their less educated counterparts, which, in turn, could lead them to experience more negative emotions than less educated persons. For example, Clark and Oswald (1996) found that higher educated people showed a lower level of well-being in Britain in the 1990s and suggested that this was due to a particularly sharp drop in income following the economic recession.
Finally, based on our findings, we cannot draw conclusions on whether there are differences in affective well-being between employed and unemployed individuals. For example, a German DRM study showed that, while there were differences in life satisfaction between employed and unemployed individuals, there were no differences in affective well-being (Knabe et al. 2010). In that study, the authors argued that unemployed individuals feel generally sadder than people in employment, but can compensate for this by pursuing more enjoyable activities during times when employed individuals need to work.
Retirement
Finally, we investigated the interaction between education and retirement status. Surprisingly, we could not find a significant interaction between education and retirement. Similar to unemployment, higher and lower educated persons might adapt their aspirations differently upon retirement and this may play a role. Higher educated persons may be more likely to have had jobs that were important and meaningful for them than less educated persons. Therefore, the absence of such meaningful work-related activities may counteract other, potentially positive, effects of education on retirement (e.g., larger social networks that facilitate retirement).
Another reason could be that retirement can also have a positive effect on affective well-being depending on the situation before retirement. For example, Hetschko et al. (2014) showed that retirement has a positive effect on life satisfaction when people were previously unemployed.
Limitations and Conclusion
This study also has some noteworthy limitations. First, we could only investigate correlative associations and could not make any causal assertions about the relationship between education and affective well-being. For this purpose, different methodological approaches, including quasi-experimental and longitudinal research designs, are necessary. Second, it would be of interest to consider a broader scope of PA with more items and latent sub-facets to ensure a more differentiated measurement of PA than we did here. In particular, it would be relevant to examine the extent to which education is related to facets of PA that are characterized by low arousal, such as the states of being relaxed or calm, which could not be considered in this study. Such facets of PA are likely to occur more frequently in everyday life than "enthusiasm" and may thus have greater relevance for education as a covariate. Third, only three randomly selected episodes per day were rated regarding PA and NA. However, various scholars have demonstrated the validity and reliability of this random-sampling approach (Anusic et al. 2017;Hudson et al. 2017;Möwisch et al. 2019). Fourth, although the DRM is less timeintensive than other repeated-measurement designs such as the Experience Sampling Method (ESM), DRM measurements may be more biased by expectations related to how people normally feel in specific situations (Lucas et al. 2019). Nevertheless, the DRM provides comparable results to the ESM when examining between-person differences (Dockray et al. 2010;Lucas et al. 2019). Fifth, the significant effects of education on NA2 and NA3 were potentially mainly driven by retired persons because the regression coefficients were only significant in the retired subsample. Note, however, that the subsamples with non-significant effects of education on NA2 and NA3 were smaller than the retired subsample, and the lack of significant effects might simply be due to a lack of statistical power.
To sum up, this study provides new insights into the relationship between education and well-being by investigating specifically the affective component of SWB (PA and NA) with a focus on several sub-facets (e.g., stress-related states vs. sadness-related states). One implication of this study is that the comprehensive measurement and modeling of affective experiences can reveal differential outcome patterns that would be overlooked when working with a global composite score of NA. Other implications emerge from the nonlinear relationship between education and affective well-being: According to this study, especially people with low educational backgrounds have to deal with negative emotions such as worries, boredom, or loneliness in everyday life. Finally, we showed an association between education and affective well-being beyond income. Further research should therefore also explore other non-monetary mechanisms linking education and affective well-being.
Funding Open Access funding provided by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2020-09-03T09:04:49.870Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "57469c315162ac9eafb0f908bca9c85be983f156",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11205-020-02472-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "f291ba48d654fad68ebf962b8b88069dc1bc2c80",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
25732607 | pes2o/s2orc | v3-fos-license | Biomarker Response to Galactic Cosmic Ray-Induced NOx and the Methane Greenhouse Effect in the Atmosphere of an Earthlike Planet Orbiting an M-Dwarf Star
Planets orbiting in the habitable zone (HZ) of M-dwarf stars are subject to high levels of galactic cosmic rays (GCRs), which produce nitrogen oxides in earthlike atmospheres. We investigate to what extent this NOx may modify biomarker compounds such as ozone (O3) and nitrous oxide (N2O), as well as related compounds such as water (H2O) (essential for life) and methane (CH4) (which has both abiotic and biotic sources). Our model results suggest that such signals are robust, changing in the M-star world atmospheric column by up to 20% due to the GCR NOx effects compared with an M-star run without GCR effects; they can therefore survive at least the effects of galactic cosmic rays. We have not, however, investigated stellar cosmic rays here. CH4 levels are about 10 times higher than on the Earth, related to a lowering in hydroxyl (OH) in response to changes in UV. The increase is less than reported in previous studies. This difference arose partly because we used different biogenic input; for example, we employed 23% lower CH4 fluxes compared to those studies. Unlike on the Earth, relatively modest changes in these fluxes can lead to larger changes in the concentrations of biomarker and related species on the M-star world. We calculate a CH4 greenhouse heating effect of up to 4 K. O3 photochemistry, in terms of the smog mechanism and the catalytic loss cycles, differs considerably on the M-star world compared with the Earth.
Introduction
We investigate the effect of Galactic Cosmic Rays (GCRs) upon atmospheric biomarkers (O3 and N2O) as well as H2O (essential to life) and CH4 (which has biogenic and non-biogenic sources) (Des Marais et al., 2002) of an earthlike planet orbiting an active M-dwarf star in the Habitable Zone (HZ). We will henceforth refer to these compounds collectively as "biomarker and associated molecules". Such M stars are important observational targets because (1) they are abundant in the solar neighbourhood (Tarter et al., 2006; Scalo et al., 2006, this issue) and (2) they have close-in HZs, favourable for transit observations of potentially habitable planets. The magnetosphere of a planet in the HZ of an M star is probably much smaller than on Earth because: (a) the stellar wind flux is much higher because of the small orbital distance, and (b) the planetary magnetic dipole moment is reduced because the planetary rotation is limited by tidal locking (Grießmeier et al., 2005).
Both these effects contribute to enhancing the flux of high-energy cosmic ray particles into the planetary atmosphere. Scalo et al. (this issue, their section 4.2) provide an overview of the atmospheric chemistry of CRs, including the production of NOx and its influence upon biomarker and associated molecules.
In this contribution, we used GCR fluxes calculated from a magnetosphere model (Grießmeier et al., 2005) and we then adopted an air shower approach to calculate the corresponding atmospheric NO x source. Finally, we implemented this source into a photochemical column model to calculate the effect on biomarker and associated molecules.
Section 2 describes the method/models and the runs made. Section 3 presents results, section 4 is the discussion and summary.
Aim of this work
Our main aim is to estimate the effect of high GCR fluxes upon biomarkers and associated molecules by implementing NO x sources from GCRs into our coupled atmospheric column model. Note that in this work we have investigated interplanetary GCR fluxes and interstellar GCR fluxes appropriate for our own solar system and just outside our own solar system, respectively. The work by Segura et al. (2005) was an important previous study, which used a different version of the model used in our work. The code differences between our work and Segura et al. (2005) are as follows: (a) CH 4 and temperature coupling between the chemistry and climate routines has been improved. N 2 O coupling has been introduced.
(b) The weak lightning CO source in the chemistry routine has been removed. year (Houghton et al. 1994). Differences arose due to missing processes (e.g. clouds) and/or missing chemistry (e.g. higher hydrocarbons which affects e.g. OH hence methane) in the model.
(f) The surface albedo was updated from 0.237 to 0.239 so that the mean surface temperature reproduced the Earth i.e. 288K.
Note that the Segura work considered both "active" and "quiet" M stars whereas we consider only the former. The electromagnetic flux of a "quiet" M star drops sharply below 320 nm compared with "active" M stars. For M stars with no chromospheric activity, higher amounts of N2O and methane are expected due to the low UV emitted by these stars. In a different approach to our work, the Segura work chose to fix surface methane to an earthlike concentration in their radiative module for computational reasons. So, their chemistry module was allowed to calculate high methane values, but these values were not fed back into their radiation scheme.
Model Description and GCR parameterisation
2.1 Cosmic Ray Proton Flux (CRPF) Model

The flux of GCRs through the magnetospheres of different terrestrial exoplanets is calculated by Grießmeier et al. (2005, 2006). The number of protons of galactic origin reaching the top of a planetary atmosphere is calculated for the energy range 100 MeV < E < 8 GeV. For each particle energy, 7 million particle trajectories are calculated. The planetary magnetosphere is assumed to be closed, and is modelled as a cylinder topped by a hemisphere of identical radius. This radius is determined by the pressure balance between the stellar wind ram pressure and the magnetic pressure of the planetary magnetic field. The CRPF model calculates the particle fluxes (protons) at the top of the planetary atmosphere. Figure 1 shows some typical GCR flux spectra. The dashed line is for the top of the Earth's atmosphere output by the CRPF model. The thin, continuous line is for the top of our M-world atmosphere, again output by the CRPF model (Grießmeier et al. 2005, 2006). The medium-thick line is for interplanetary space (Seo et al., 1994). This represents the upper limit for a planet without magnetospheric protection. The thickest line is for local interstellar space (Beer et al., 1991), i.e. this is the GCR flux without heliospheric/astrospheric protection by the stellar magnetic field.
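For orientation only, the pressure-balance argument mentioned above can be written down in its simplest textbook (dipole) form. The CRPF model of Grießmeier et al. is considerably more detailed, so the sketch below is illustrative rather than a reproduction of that calculation; all input values are generic Earth/solar-wind numbers.

```python
# Minimal order-of-magnitude sketch of the pressure-balance idea described
# above: the magnetopause standoff distance where the planetary dipole's
# magnetic pressure equals the stellar wind ram pressure. Not the CRPF model.
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability [T m / A]
M_P = 1.6726e-27         # proton mass [kg]

def standoff_radius(B_eq, n_sw, v_sw):
    """Standoff distance in planetary radii for an equatorial dipole field
    B_eq [T], stellar wind number density n_sw [m^-3] and speed v_sw [m/s]."""
    ram_pressure = n_sw * M_P * v_sw**2
    return (B_eq**2 / (2.0 * MU0 * ram_pressure)) ** (1.0 / 6.0)

# Earth-like example: roughly 8 planetary radii (the observed value is ~10 R_E
# once magnetopause currents are accounted for).
print(standoff_radius(B_eq=3.1e-5, n_sw=5.0e6, v_sw=4.0e5))
```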
Cosmic Ray-NOx parameterisation
GCR-induced NOx is formed via secondary electrons dissociating nitrogen molecules (N2), followed by reaction of the resulting nitrogen atoms with atomic or molecular oxygen. We assumed the height-dependent NOx production rate to be $P_{\mathrm{NO_x}}(X) = k\,D_{\mathrm{N_2}}(X)$, where $D_{\mathrm{N_2}}$ is the destruction rate of molecular nitrogen, $k$ the number of NOx molecules produced per nitrogen molecule destroyed, which can be considered as a "quantum yield", and $X$ the overlying atmospheric mass. Since not only nitrogen atoms but also ions are produced by the dissociation of the molecule, $k$ must be less than 2. Various works have assumed different values for $k$ (see, for example, Nicolet 1975: $k$ = 0.96; Jackman et al. 1980). Note there may be an altitude dependence for $k$, but it is not well defined. Here we take $k$ to be unity. $D_{\mathrm{N_2}}(X)$ is calculated as $D_{\mathrm{N_2}}(X) = n_{\mathrm{N_2}} \int_{E_1}^{E_2} F_{el}(X,E)\,\sigma_{\mathrm{total}}(E)\,dE$, where $n_{\mathrm{N_2}}$ is the number density of molecular nitrogen, $F_{el}$ the spectral electron flux, $\sigma_{\mathrm{total}}$ the energy-dependent total destruction cross section, and $E_1$, $E_2$ the limits of the energy range of electrons capable of dissociating molecular nitrogen. According to Nicolet (1975), the average cross section is $\sigma_{\mathrm{total}} = 1.75 \times 10^{-16}$ cm² for electron energies between $E_1$ = 30 eV and $E_2$ = 300 eV.
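As a rough numerical illustration of this bookkeeping (not the code actually used in the model), the following sketch evaluates the N2 destruction integral and the resulting NOx production for a prescribed electron spectrum; the spectrum shape and the N2 density are placeholder values.

```python
# Hedged sketch of the NOx source parameterisation described above:
# P_NOx(X) = k * D_N2(X), with D_N2(X) = n_N2 * integral of F_el(X,E)*sigma dE
# between E1 = 30 eV and E2 = 300 eV. The electron spectrum is a placeholder.
import numpy as np

K_YIELD = 1.0          # NOx produced per N2 destroyed (taken as unity above)
SIGMA = 1.75e-16       # average destruction cross section [cm^2] (Nicolet 1975)
E1, E2 = 30.0, 300.0   # electron energy limits [eV]

def nox_production_rate(n_n2, electron_flux):
    """n_n2: N2 number density [cm^-3];
    electron_flux: callable giving the spectral electron flux
    [electrons cm^-2 s^-1 eV^-1] at energy E [eV]."""
    energies = np.linspace(E1, E2, 200)
    integrand = np.array([electron_flux(e) for e in energies]) * SIGMA
    # simple trapezoidal integration over electron energy
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(energies))
    d_n2 = n_n2 * integral          # N2 destruction rate [cm^-3 s^-1]
    return K_YIELD * d_n2           # NOx production rate [cm^-3 s^-1]

# placeholder spectrum falling off with energy (purely illustrative)
print(nox_production_rate(n_n2=1.9e19, electron_flux=lambda e: 1.0e3 * e**-3))
```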
To calculate the flux of secondary electrons, we adopted an air shower approach, which assumes that the incident protons produce electromagnetic cascades while travelling through the atmosphere. We assume a straight flight path in the atmosphere, thus neglecting any scattering. We can separate the spectral electron flux $F_{el}$ into the total electron flux $R_{el}(X)$ and the spectral distribution of electrons, $S_{el}(E_e)$, so that $F_{el}(X, E_e) = R_{el}(X)\,S_{el}(E_e)$. The total electron flux is obtained by integrating the electron production of individual proton-induced showers over the solid angle $\Omega$ and over proton energies, where $E_{low}$ to $E_{high}$ represent the critical energy range for shower production. If the energy loss exceeds the proton energy from the CRPF model, no shower is generated. Above an energy of about 8 GeV the intensity of GCRs drops (Figure 1), so we set $E_{high}$ = 8.19 GeV. $N_{\mathrm{total}}$ is the electron flux created by a proton of energy $E_p$ coming from a direction $(\theta, \varphi)$, and $f$ is the fraction of electrons produced which can destroy N2. We assume a third-power law for the normalized electron energy spectrum (Bichsel et al. 2005). The proton energy spectrum $I_p$ used is provided by the CRPF model (Grießmeier et al., 2005). $N(X', \theta, \varphi)$ describes the progress of the primary proton as it penetrates further into the atmosphere, following the shower profile of Gaisser & Hillas (1977), where $X'$ is the total atmospheric column mass density crossed by the primary particle, $N_{max}$ the number of particles at the shower maximum $X_{max}$, $X_0$ the height of the first interaction and $\lambda$ the attenuation length of the produced particles. $X'$ is given by $X' = \mu X$, with the atmospheric height measured by the mass column density $X$ and $\mu = \cos\theta$ ($\theta$ being the incidence angle of the proton) as the usual definitions. Integrating over the solid angle in equation (9), assuming the atmosphere to be plane-parallel and the proton spectrum to be isotropic, gives the height-dependent electron flux. This integral was evaluated numerically using a 5th-order quadrature with 100 grid points.
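The following sketch illustrates, under simplifying assumptions, two of the building blocks just described: the Gaisser & Hillas (1977) longitudinal shower profile and a fixed-order Gauss-Legendre integration over µ = cos θ. It is a schematic stand-in for the 5th-order, 100-grid-point quadrature used here; all parameter values are placeholders rather than values from this work.

```python
# Hedged sketch: Gaisser-Hillas (1977) longitudinal shower profile N(X'), and
# an angle average over mu = cos(theta) in a plane-parallel atmosphere using
# fixed-order Gauss-Legendre nodes. Parameter values are illustrative only.
import numpy as np

def gaisser_hillas(x_slant, n_max, x_max, x0, lam):
    """Number of shower particles at slant depth x_slant [g cm^-2]."""
    if x_slant <= x0:
        return 0.0
    return (n_max * ((x_slant - x0) / (x_max - x0)) ** ((x_max - x0) / lam)
            * np.exp((x_max - x_slant) / lam))

def angle_averaged_shower(x_vertical, n_max=1e5, x_max=200.0, x0=0.0, lam=70.0):
    """Shower size at vertical column depth x_vertical [g cm^-2], averaged
    over mu = cos(theta) with 5-point Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(5)
    mu = 0.5 * (nodes + 1.0)     # map [-1, 1] onto (0, 1)
    w = 0.5 * weights
    sizes = [gaisser_hillas(x_vertical / m, n_max, x_max, x0, lam) for m in mu]
    return float(np.dot(w, sizes))

print(angle_averaged_shower(100.0))
```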
The calculated rates of NOx production from GCRs are shown in Figure 2. It is not the intention of this paper to calculate accurately the propagation of cosmic rays through the atmosphere, but to examine their eventual impact on atmospheric chemistry. The thin continuous line (GCR fluxes for the M-star world) has about three times more GCR-NOx production than for the Earth (dashed line). The thickest continuous line (GCR fluxes for local interstellar space) has NOx production about eight times higher than for the Earth. The resulting NOx production rates were implemented into the photochemical routine in the model.
Column Model

The original code has been described in detail by Kasting et al. (1984, 1985) and developed further by Segura et al. (2003). The climate module used is the Rapid Radiative Transfer Model (RRTM), as employed by Segura et al. (2005). It uses the correlated-k distribution method of Mlawer et al. (1997), where 'k' is the absorption coefficient, and employs 16 k-coefficients per wavelength interval with 16 intervals between 3.1 µm and 10³ µm. This method groups together identical values of k in a spectral interval and calculates a mean absorption coefficient subsequently used in the transmittance calculation. Scattering is based on Toon et al. (1989). Note that although CH4 features pressure- (Karkoschka, 1994) and temperature-dependent absorption in the near IR, this is currently not included in the model. Regarding the temperature dependence, Sromovsky et al. (2006) suggested up to a 40% increase in the near-IR absorption coefficient from 288 K up to 330 K (a typical surface temperature of the M-star runs). The incoming shortwave (SW) radiation routine in our model may require more k-values and an improved pressure parameterisation. These points will be explored in future work.
The chemistry includes 55 species and 220 reactions from the surface up to 64 km at 1 km intervals. The troposphere includes methane oxidation as well as wet and dry deposition. The chemistry features HOx, NOx, Ox and ClOx families as well as their major reservoirs. Photolysis was diurnally averaged for a cloud-free sky. For the runs described here we use the stellar spectrum of an M4.5V star, AD Leo, as described in Segura et al. (2005). Notice that AD Leo is a chromospherically active star; as a result, it produces more UV radiation at wavelengths < 300 nm than the UV that will be emitted by an M star without an active chromosphere (Fig. 1B in Segura et al. 2005). For a discussion of how this may affect the abundance of biomarker and associated molecules on a planet orbiting such a star, see the 'Discussion and Extensions' section in Segura et al. (2005). We chose the orbital distance of the planet (0.16 AU) so that the surface temperature, T_s, yields 288 K. CO2 was fixed to a modern-day Earth value of 3.55×10⁻⁴ vmr. Note that our radiation scheme operates up to CO2 levels of 3.5% volume mixing ratio. The chemistry levels extend up to 64 km; the climate grid height is variable depending upon temperature and typically extends up to 70 km for modern-day Earth conditions. Possible effects whereby GCRs deposit energy in the upper atmosphere are not included. We assumed terrestrial biota, i.e. source emissions of CH4, N2O, CO and CH3Cl on the surface of the M-star planet were based on the Earth. Our work employed updated chemical kinetics based on the Jet Propulsion Laboratory (JPL) Report 2003. Appendix 1, Table A1, shows the main differences between the chemical kinetics used in this work compared with the Segura et al. (2003) work. The updated reactions differ by about 5-10% in their rates and the overall effect is not large. The photochemical model was integrated until the concentrations converged. Simulating a tidally-locked planet with a night and day face using a column model of the type employed here with averaged conditions is clearly a first approximation, which depends on whether the atmospheric density is sufficient to distribute quantities such as heat and momentum. We have performed some sensitivity studies for the Earth (not shown) which suggest that results are valid up to about 2 bar, but at higher densities e.g. pressure-broadening effects and interpolation of the k-coefficients used in the climate code to derive the absorption coefficients make the results uncertain.
About the Runs
We performed five runs in total. Run (1): M-star run without methane coupling between chemistry and radiation. This run was performed to compare with Segura et al. (2005), who performed a similar calculation.
Differences between the basic code used for this run in our work compared with that of the Segura work were discussed in section 1.2. Run 1 employed an M-dwarf (AD Leo) flux spectrum. "Without methane coupling" means that CH4 values used in the radiative transfer calculation were fixed at present-day values corresponding to the Earth, whereas in the chemistry methane could build up to large values because OH concentrations were low, as already mentioned. High methane values in the chemistry were not passed to the climate subroutine.
Run (2): M-star run with methane coupling - 'methane coupling' means that changes in the CH4 concentration calculated in the chemistry module are fed into the radiation module (in uncoupled mode, CH4 in the radiation module was constant).
Run (3): with top-of-atmosphere GCR-induced NOx source - as for run (2). Run (4): with interplanetary GCR-induced NOx source - as for run (2). Run (5): with interstellar GCR-induced NOx source - as for run (2) but with NOx sources (Figure 2, thickest line) derived from GCR fluxes for local interstellar space. This represents a star with a very weak magnetic field, so that there is no heliosphere/astrosphere shielding the planetary system against GCRs.
Ozone comparison for this study with previous works
In Table 2, the ozone column more than doubles (e.g. 681 DU, run 1) compared with the Earth. Ultimately, the discrepancy with the Segura work arose from relatively modest changes in the biogenic fluxes employed (see section 1.2 (e) above). We could reproduce the Segura ozone value reasonably well when we adopted their biogenic fluxes, but changing these by quite modest values led to quite large changes in ozone.
Our high ozone values, e.g. 681 DU for Run 1, were mainly favoured by a slowing in the HOx and NOx cycles, e.g. by factors of nine and eighteen compared with our Earth control run. Quantifying why the Segura work differed from this work would require a careful comparison of these catalytic cycles as part of a full source-sink analysis of ozone, which is beyond the scope of this work. Our results suggest, however, that there may exist regimes in the M-world photochemistry which depend sensitively on biospheric input and to which the ozone column may be particularly sensitive. The highly nonlinear response between changing the biogenic flux by a modest amount and the ozone column response may indicate a strong positive feedback. The effect is not discernible in the Earth control - it appears to be a facet of the M-star world photochemistry. As an example, we employ lower methane fluxes compared with the Segura work, hence we calculate lower methane concentrations (discussed below), which influences OH and hence has a wide-reaching influence on the chemistry.
Subsequent changes in the ozone column for Runs 2 to 5 in Table 2 are more modest.
Evident is a gradual decrease with increasing NOx from GCRs, which we interpret as an increasing role of ozone destruction via NOx-based catalytic cycles. Table 2 nevertheless suggests that the ozone column can mostly survive the effects of GCRs. We now investigate the ozone photochemistry in more detail.
Ozone response to GCRs

On the Earth, 10% of the ozone column is produced by the so-called smog mechanism (Haagen-Smit et al. 1952) in the troposphere. This requires volatile organic compounds (e.g. methane), a NOx catalyst and sunlight. In the Earth's stratosphere, where 90% of the ozone column resides, oxygen photolysis produces ozone (Chapman chemistry) and catalytic cycles destroy ozone. So, GCRs can lead to ozone formation via the smog mechanism or to ozone removal via catalytic NOx cycles, depending on the altitude at which the NOx is deposited. To estimate the relative importance of the two mechanisms (smog or Chapman), one can derive steady-state expressions which predict the concentration of ozone from smog alone or from Chapman chemistry alone.
For smog, the usual approach is to assume NO2 is in steady state, i.e. loss of NO2 = production of NO2, to assume the main loss is via photolysis and the main production is via oxidation of NO, and then to solve for ozone; an analogous steady-state expression can be written for the Chapman chemistry. Comparing results from (1) and (2) indicates which of the two mechanisms is dominating the ozone concentration in the model. Also, by comparing results from (1) and (2) with the actual ozone concentration calculated in the model stratosphere, we see how important the catalytic cycles are in lowering ozone compared with the Chapman value. We have calculated expressions (1) and (2) as diagnostics in our model; the smog expression alone suggests a value of 9.6×10⁻⁶ vmr ozone. The smog mechanism is important in the M-star runs because (1) methane levels are very high, as already discussed, and (2) there is an extra NOx source from the GCRs. For runs 3 to 5, both (1) and (2) favour smog production. Comparing the columns marked 'model O3' and 'smog O3' in Table 3, we see that the smog mechanism alone can account for about half of the ozone present in the model. The column marked 'Chapman O3' in Table 3 features higher values for run 1 compared with the Earth, probably due to the differing UV environment of the M-star, as also discussed in Segura et al. (2005). In run 2, which undergoes some warming in the stratosphere due to the methane greenhouse effect, the Chapman chemistry slows - this is the expected negative temperature-dependence signal. For subsequent runs (runs three to five) the smog mechanism does not change greatly and Chapman production increases.
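The steady-state expressions (1) and (2) themselves do not appear in the text above. For orientation, the generic textbook forms of the two limits read approximately as follows; the exact expressions evaluated in the model may differ in detail:

$$[\mathrm{O_3}]_{\mathrm{smog}} \approx \frac{J_{\mathrm{NO_2}}\,[\mathrm{NO_2}]}{k_{\mathrm{NO+O_3}}\,[\mathrm{NO}]},\qquad
[\mathrm{O_3}]_{\mathrm{Chapman}} \approx [\mathrm{O_2}]\,\sqrt{\frac{J_{\mathrm{O_2}}\,k_{\mathrm{O+O_2+M}}\,[\mathrm{M}]}{J_{\mathrm{O_3}}\,k_{\mathrm{O+O_3}}}}.$$

Here $J$ denotes a photolysis rate, $k$ a rate coefficient and $[\mathrm{M}]$ the total number density; the first (Leighton-type) relation follows from the NO2 steady state, the second from the Chapman odd-oxygen balance.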
Despite this, model ozone decreases - the last column of Table 3 features increasing values, indicating an increasing role of the catalytic cycles. This is consistent with enhanced NOx from the GCRs with increasing run number. This implies that these cycles strengthen from runs three to five and contribute to the model ozone decrease. Randall et al. (2005) noted that observed increases in CR NOx production of about a factor of 4 led to about a 60% ozone decrease in the Earth's atmosphere. However, our results differed from the Randall result, since our ozone was stable over quite a wide range of NOx production levels. A possible explanation for this discrepancy is that, in the Earth's stratosphere, enhanced CR NOx leads mainly to catalytic loss (as observed by Randall's measurements), whereas on our M-star world the smog mechanism (which is actually stimulated by enhanced NOx) plays a more important role than on the Earth and ozone is not greatly depleted. Another reason for the difference between this work and Randall is that, for the Earth, CR NOx changes are large in the upper to middle atmosphere where ozone is abundant. In our results, most NOx is deposited lower down, i.e. in the upper troposphere, due to the differing energy spectrum of the terrestrial compared with the M-star CRs. Note that stellar particle sources, not considered in this work, may also play a potentially significant role in affecting biomarkers and associated molecules. Solar Proton Events (SPEs), for example, could contribute a similar magnitude to stratospheric NO production as GCRs (Jackman et al., 1980, their Table 1), so our results should be viewed in this sense as a lower limit. Including the GCRs (runs 3 to 5 in Figure 3) does not influence the temperature profile to a great extent. However, as we discussed, the methane level may be sensitive to small changes in the biogenic flux input. Values of up to 1×10⁻³ vmr methane may be possible (Segura et al. 2005). Pavlov et al. (2000) suggested that hazes may form for (CH4/CO2) > 1, which could lead to considerable cooling and hence offset the methane greenhouse effect - such haze formation does not feature in our model.

Column Amounts

Figure 4 shows column values for the Earth (white) and for runs 1 to 5, shown in gradually darker shading. All column values increase for the M-star world compared with the Earth. This was especially the case for methane and chloromethane, as already noted in previous works. The increase was mainly due to a sharp decline in OH, the main sink for these compounds, which decreased by around a factor of 10⁵ in the troposphere compared with the Earth control. Columns for a particular compound were all rather similar for runs one to five, suggesting that the signal can survive the effects of the GCRs, even for the extreme, interstellar scenario (run 5).
Although the biomarker and associated molecules survived in the runs presented here, note that NOx from the GCRs was mainly created around 20 km. This is below the region where ozone peaks; it was therefore difficult for the GCRs to impact ozone to a great extent. For more dense atmospheres, where the GCRs deposit their NOx at higher levels, if this region occurs close to the ozone maximum then the GCRs will affect ozone to a larger extent. On Earth, we note that NOx cycles are important for regulating ozone in the lower to mid stratosphere. On our M-star world, however, this may not be the case. For example, strong volcanic activity may increase the relative importance of ClOx cycles in the atmosphere. On the other hand, a somewhat warmer surface with a generous ocean coverage would likely stimulate the atmospheric HOx cycle. Clearly, the NOx family must play an important role in affecting ozone if CRs are to play a major role.

Profile Quantities

The sharp decrease in water in the upper levels is consistent with the strong stellar flux in UVB and EUV compared with the Earth. Figure 6 shows the percentage changes in profiles due to GCRs for the same species as in Figure 5. For ozone (Figure 6a) we see a decrease of up to 25% from the surface up to 40 km.

It is initially puzzling that increasing NOx from GCRs (which, by itself, should stimulate the smog mechanism) should lead to a lowering in ozone in a region where the smog mechanism operates. Further investigation implied that the reason was a drying effect (see Figure 6d), which led to a lowering in OH and hence a slowing in the smog mechanism, since the first step of the mechanism involves reaction with OH. Methane and nitrous oxide changes were rather small in Figure 6. The water change (Figure 6d) is shown in absolute vmr units and not as a percentage, due to very large changes in the percentage value in the upper layers, where absolute levels tended to zero. Evident in Figure 6d is the sensitivity of water to changes in the cold-trap temperature at 40 km, where a small warming (cooling) effect leads to a modest increase (decrease) in water vapour. Note that, for the upper levels in Figure 6d, zero values are not plotted.
Summary
• Can biomarkers and associated molecules survive GCRs?
Our results imply that GCR-induced NOx sources affect ozone and water concentrations only modestly and appear to be negligible for the other molecules considered. This suggests that biomarkers and associated compounds on these worlds are not destroyed by GCR-induced NOx chemistry, increasing the chances that they can be measured by forthcoming missions.
Appendix 1
Table A1: Differences in the chemical kinetic data used in the Segura et al. (2003) work compared with the present work, which adopted data from DeMore et al. (2003). The table lists, for each reaction, k (Segura et al. 2003) and k (this work). Units are k (molecule⁻¹ cm³ s⁻¹) and T (Kelvin).

Figure 6: Results are shown relative to run 2, which is without GCR sources. Plotted is the value ((run x − run 2)/run 2) × 100%, where x = run 3 (long-dashed line), run 4 (short-dashed line) and run 5 (dotted line), except for water, which shows the difference (run with GCRs − run 2 without GCRs) in 10⁻⁶ vmr. The water results (Figure 6d) are not plotted above 50 km to avoid very large values, which arose because the denominator value approached zero.
"year": 2007,
"sha1": "158c9f1a488479a7025db0d6259392a3e562b90a",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "158c9f1a488479a7025db0d6259392a3e562b90a",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
234804042 | pes2o/s2orc | v3-fos-license | Perioperative high dose rate brachytherapy in head and neck cancers: case report and review of clinical application
Perioperative high dose rate brachytherapy is a radiotherapy treatment technique which involves intraoperative insertions of brachytherapy catheters into the tumor bed during the surgical resection followed by treatment in the post-operative period. We report here two cases to highlight its use in the primary treatment and reirradiation of head and neck cancers.
INTRODUCTION
The first reported application of using a radioisotope to treat a malignancy intraoperatively is credited to Robert Abbe in the year 1910. 1 This technique in time evolved into the discipline of Brachytherapy and it has established itself as an important treatment modality over the last century. Brachytherapy using iridium interstitial implants has been practised in head and neck cancers. It has also been combined with surgical resection of tumors with treatments delivered either intraoperatively or perioperatively. Perioperative high dose rate brachytherapy (PHDRB) is a technique which involves intraoperative insertions of brachytherapy catheters in the tumor bed at time of surgical resection followed by a fractionated brachytherapy treatment in the perioperative period. 2 It combines the advantages of a highly conformal radiation dose delivery with a clear visualization and demarcation of the tumor bed at the time of surgical dissection. This coupled with incorporation of CT-based treatment planning gives it a high level of treatment delivery precision. Perioperative brachytherapy is an established technique of delivering radiotherapy treatments in sarcomas and its application has been reported in pancreatic cancers. 3 Its use in head and neck cancers has been infrequent due to the complex regional anatomy, surrounding vascular and nervous tissues, multitude of organs at risk and the invasive nature of the procedure. PHDRB can be used in head and neck cancers to primarily irradiate the tumor bed to a very high dose of radiation as a single modality or combined with external beam radiotherapy in both the primary treatment and for reirradiation of recurrent cancers. 4 We report here two cases highlighting the applications of perioperative brachytherapy in these clinical situations.
CLINICAL PRESENTATION
The first case was a 45-year-old male who presented with a non-healing ulcer in the posterior aspect of the right oral cavity of 2 months' duration. The patient was evaluated in the multidisciplinary head and neck cancer tumor board of our institute. Examination revealed a right level Ib lymph node of 2 × 2 cm which was firm, mobile and non-tender. Local examination of the oral cavity showed an ulceroinfiltrative growth 6 × 4 cm in size involving the right retromolar trigone region. Biopsy was suggestive of squamous cell carcinoma. The patient was diagnosed with a carcinoma of the right retromolar trigone, cT3N1M0 (Stage III). The patient was planned for surgery with post-operative radiotherapy using a combination of perioperative and external beam radiotherapy in view of anticipated close margins. He underwent a wide local excision and right inferior maxillectomy along with a right modified neck dissection Type III and right segmental mandibulectomy (Figure 1a). Frozen section done during surgery after maximal possible resection showed positive surgical margins in the retromolar area. Four interstitial brachytherapy catheters were inserted 1.5 cm apart into the tumor bed (Figure 1b). The catheters were secured to the tumor bed with absorbable sutures. Surgical reconstruction was done using a deltopectoral and pectoralis musculocutaneous flap. The patient underwent a treatment planning CT scan on the third post-operative day with a CT slice thickness of 2.5 mm. Brachytherapy planning (Figure 2) was done on the Oncentra brachytherapy planning system 4.3 (Elekta, Stockholm, Sweden). A dose homogeneity index ([V100 − V150]/V100) of 0.60 was achieved on planning (V100 and V150 represent the tissue volumes encompassed by the 100% and 150% prescription isodoses). Brachytherapy treatment was started from the fourth post-operative day on a microSelectron HDR unit (Elekta, Stockholm, Sweden) and treatments were delivered twice a day at 6-hourly intervals for a total of 7 fractions of 3 Gy each (Figure 1c). Brachytherapy catheters were removed on the last day of treatment (Figure 1d). The patient was reviewed for proper scar healing after surgical sutures were removed and was then started on external beam radiotherapy to a dose of 50 Gy delivered over 5 weeks. The patient was disease free at the last follow-up at 1 year. This case highlights the use of PHDRB for radiotherapy dose escalation and for increasing the surgical margins in the primary treatment of these cancers.
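For readers who wish to reproduce the simple planning arithmetic quoted in these cases, the sketch below computes the dose homogeneity index defined above and, as an add-on not taken from this report, the EQD2 of a fractionation schedule under an assumed α/β value. It is illustrative only and is not part of the clinical workflow described here.

```python
# Hedged helper sketch: dose homogeneity index DHI = (V100 - V150)/V100 as
# defined in the case above, and the equivalent dose in 2 Gy fractions via the
# standard linear-quadratic relation EQD2 = n*d*(d + a/b)/(2 + a/b).
# The alpha/beta value is an assumption for illustration, not from this report.
def dhi(v100, v150):
    """Dose homogeneity index from the volumes covered by the 100% and 150%
    prescription isodoses (any consistent volume unit)."""
    return (v100 - v150) / v100

def eqd2(n_fractions, dose_per_fraction, alpha_beta=10.0):
    """Equivalent total dose in 2 Gy fractions [Gy]."""
    d = dose_per_fraction
    return n_fractions * d * (d + alpha_beta) / (2.0 + alpha_beta)

# e.g. case 1: hypothetical volumes giving the reported index, and the
# 7 x 3 Gy perioperative schedule (illustrative alpha/beta = 10 Gy)
print(dhi(v100=100.0, v150=40.0))   # 0.60
print(eqd2(7, 3.0))                 # about 22.75 Gy EQD2
```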
The second patient was a 51-year-old male diagnosed with carcinoma of the base of tongue, cT3N2bM0 (Stage IVA). He had received radical radiotherapy to a dose of 70 Gy over 35 fractions delivered in 7 weeks, along with concurrent Inj. Cisplatin 100 mg/m² on days 1 and 22, in the year 2014. The radiation was delivered using conventional radiotherapy with two parallel opposed head and neck treatment portals. He presented with a mass in the right lower neck of 2 months' duration in 2017. On examination, there was a left level Vb 5 × 5 cm lymph node mass. It was firm and fixed to underlying structures. Contrast-enhanced CT scan was suggestive of infiltration into adjoining soft tissues and biopsy revealed a squamous cell carcinoma. The patient was staged as a recurrent carcinoma of the base of tongue, rT0N2bM0 (rStage IVA). The patient was planned for surgical excision with modified neck dissection and reirradiation using perioperative brachytherapy. The indications for perioperative brachytherapy were extracapsular spread of disease with infiltration into the sternocleidomastoid muscle and an anticipated doubtful R0 resection. The patient underwent modified neck dissection with excision of the nodal mass. Five brachytherapy catheters were inserted over the tumor bed with an intercatheter spacing of 1.5 cm (Figure 3). The patient was planned for a brachytherapy dose of 30 Gy delivered in 12 fractions at 2.5 Gy per fraction. 5 A dose homogeneity index of 0.67 was achieved for the brachytherapy treatment plan.
The treatments were delivered twice a day at 6-hourly intervals. The patient was disease free at 9 months' follow-up. This case demonstrates the use of PHDRB in the reirradiation of recurrent head and neck cancers.
DISCUSSION
PHDRB exemplifies the true interdisciplinary management of cancers by integrating brachytherapy with surgery. The 'tumor bed effect' theory explains the rationale of delivering a high dose of radiation in the immediate perioperative period. It alters the interaction of the microscopic residual disease cells within the tumor bed with the normal host tissues and prevents their reimplantation and subsequent local recurrences. Improved local control in turn leads to decreased local and distant failures and improved overall survival. 6 Reducing treatment-related toxicity is another argument for the practice of brachytherapy, as it confines the radiation dose to a small area with a rapid dose fall-off in surrounding structures. Concurrent chemoradiation is considered the standard of care in the management of locally advanced head and neck cancers. Significant Grade 3 toxicity ranging from 21 to 38% has been reported with the use of concurrent chemoradiotherapy protocols in head and neck cancers.
Grade 3 toxicity of up to 39% has been reported with reirradiation using external beam techniques. 7 One of the ways to reduce this treatment-associated toxicity is to reduce the volume of irradiation by integrating brachytherapy into the treatment protocols. The isodose distribution of PHDRB implants (Figure 4) shows the high degree of conformity achieved within the treatment area, with minimal dose to the surrounding critical organs. Gaztanaga et al. have shown perioperative high dose rate brachytherapy to have treatment outcomes equivalent to wide-field radiotherapy, with 5-year locoregional control rates from 60.9 to 79.4%. 4 Non-nasopharyngeal head and neck cancers are a locoregional disease with reported recurrence rates as high as 50% after curative treatment. 8 Reirradiation has been used effectively to manage recurrent head and neck cancers, 6 and PHDRB is an excellent modality for this indication. Single-plane PHDRB implants can deliver a high targeted dose to positive margins and can also increase the surgical margins by 1-1.5 cm, thereby improving local control. 9 An added advantage of brachytherapy implants is that they are not affected by organ motion or respiratory movements. Being an invasive treatment modality, perioperative brachytherapy requires careful patient selection when implemented in clinical practice. Patient selection for PHDRB can be aided by categorizing patients into those requiring brachytherapy for reirradiation, those with inadequate surgical margins at difficult resection sites, and those with adequate surgical margins in whom brachytherapy serves as a means of dose escalation. 10 To help patient selection, the University of Navarre predictive model can also be used, which divides patients into low-risk, intermediate-risk, high-risk, and very-high-risk categories. 11 Preplanning with the surgical team is useful and should also cover the reconstruction procedure and the type of surgical flap to be used. Identifying the feeding vasculature for the surgical flap can prevent unintended irradiation of the flap vasculature, which has implications for flap viability. Tumor location may be the limiting factor when selecting patients with head and neck cancers, and PHDRB should be performed at a site that allows easy catheter entry and exit without undue bending or kinking of catheters. The vast majority of implants in the head and neck will be single plane, with the aim of irradiating only the tumor bed as the target area. Table 1 gives the dose schedules of PHDRB reported in head and neck cancers. The GEC-ESTRO guidelines recommend restricting the individual dose fraction to between 3 and 4 Gy per fraction for primary brachytherapy treatment. 9 Perioperative high dose rate brachytherapy alone after R0 resection has been associated with a 9-year cancer-specific survival rate of 47.9%. 4 Martinez et al. reported 4-year local control and overall survival rates of 85.6 and 46.4%, respectively, in head and neck cancers treated with reirradiation using PHDRB. 13,14 Treatment-related acute side effects include bleeding, fistula, graft failure, and delayed wound healing. Late morbidity can occur in the form of fibrosis, soft tissue necrosis, and osteoradionecrosis. Overall, high-grade toxicity has been reported to be between 15 and 69% in single modality procedures and 2.8-30.5% in combined modality procedures. 15
This case report demonstrates the clinical application of PHDRB in head and neck cancers, which can be considered one of the treatment options in patients with high-risk margins or where reirradiation is planned after surgical resection.
LEARNING POINTS
1. PHDRB allows brachytherapy catheter placement in anatomical regions not easily accessible to conventional interstitial brachytherapy, since catheters are placed intraoperatively under direct tumor bed visualization.
2. Overall treatment time can be reduced by integration of PHDRB with external beam radiotherapy.
3. It is a versatile treatment to be used for increasing the surgical margins or for dose escalation in primary and recurrent head and neck cancers. | 2021-05-21T16:57:12.816Z | 2021-04-12T00:00:00.000 | {
"year": 2021,
"sha1": "a1001adcc396547e74f06a7574d0ccb9e19add10",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1259/bjrcr.20200158",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1c7336946eac56827a444a89ea22dca5f144e1cb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54540361 | pes2o/s2orc | v3-fos-license | Environmental Method to Determine Dopamine and Ascorbic Acid Simultaneously via Derivative Spectrophotometry
Various methods have been applied to determine dopamine and ascorbic acid simultaneously, often using hazardous materials and complex procedures. Derivative absorption spectra safely provide five sensitive derivative equations for the simultaneous determination of dopamine and ascorbic acid in the UV region, using first and second derivative spectroscopy with high precision at a pH value of 9.2. Dopamine and ascorbic acid can be detected in the ranges of 0.375-9.45 mg L−1 and 0.352-5.28 mg L−1, respectively. The proposed methods can be used to determine both analytes in real and synthetic samples.
Introduction
Dopamine (DA) is a neurotransmitter which plays a key role in research on the pathology of Parkinson's disease (PD) [1]. The development of dopamine determination methods has attracted scientists' attention for years, and extensive work has been carried out on it. For the proper treatment of PD patients, biomedical analysis requires reliable and efficient tools for analytical implementation. Many research groups have focused on the problems of recognition and selective determination of catecholamines, especially dopamine, with optical detection methods. As the physiological level of dopamine is as low as 0.2-0.3 µmol L−1, analytical methods should be very sensitive, whereas in pharmaceutical preparations the dopamine concentration is several orders of magnitude higher (ca. 40 g L−1), so less sensitive spectrophotometric methods can be applied.
AA is oxidized at a potential similar to that of DA at conventional electrodes and gives much larger signals in the brain than DA [38]. As DA and AA are electroactive substances, electrochemical methods are among the most favourable techniques for the determination of both compounds [39,40]. However, one of the major problems encountered in the electrochemical determination of DA is the interference of AA, which has a similar structure and an oxidation potential close to that of DA at most solid electrodes, resulting in great difficulty in their simultaneous determination due to overlapped signals. Moreover, bare solid electrodes often suffer from a fouling effect due to the accumulation of oxidized products on the electrode surface, leading to rather poor selectivity and sensitivity [40].
Derivative spectrophotometry is an analytical technique of great utility for extracting both qualitative and quantitative information from spectra composed of unresolved bands [41-43]. The derivative method has found applications not only in ultraviolet-visible spectrophotometry, but also in infrared, atomic absorption, and flame emission spectrophotometry, as well as in fluorimetry [44-47]. Derivative spectrophotometry has been shown to be more versatile than classical spectrophotometry for solving analytical problems. It leads not only to an increase in selectivity but also, in many cases, to an increase in sensitivity [48-50]. The scale of this increase depends on the shape of the normal absorption spectra of the analyte and the interfering substances, as well as on the instrumental parameters and the measurement technique (e.g., peak-to-trough or zero crossing) chosen by the analyst in a given analytical procedure [51-53].
For a single-peak spectrum, the first derivative is a plot of the gradient dA/dλ of the absorption envelope versus wavelength; it features a maximum and a minimum, and the vertical distance between them is the amplitude, which is theoretically proportional to the analyte concentration, while dA/dλ is zero at λmax of the band in the normal spectrum. The second derivative spectrum, d²A/dλ² versus λ, has two maxima with a minimum between them at the λmax location of the normal absorption band. In this work, we could overcome all of the above disadvantages by using first and second derivative equations, not previously reported in the literature, to detect both DA and AA by the ultraviolet-visible absorption technique, which is one of the cheapest techniques and offers high sensitivity and easy operation. Moreover, with this method we have not used expensive chemicals, columns, or hazardous solvents that could be harmful to the environment. The obtained data are highly precise, with high-speed acquisition.
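To make the derivative treatment concrete, the following sketch (not from the original paper) computes first and second derivative spectra of a synthetic two-band absorption envelope with a Savitzky-Golay filter and locates zero-crossing wavelengths of the first derivative; the band positions, widths, and smoothing window are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic zero-order spectrum: two overlapping Gaussian bands (arbitrary units)
# standing in for the overlapped DA/AA absorption envelopes.
wl = np.arange(200.0, 320.0, 0.5)  # wavelength grid, nm
spectrum = (0.8 * np.exp(-((wl - 265.0) / 12.0) ** 2)
            + 0.6 * np.exp(-((wl - 258.0) / 10.0) ** 2))

# First and second derivatives (dA/dlambda, d2A/dlambda2) with light smoothing.
d1 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=1, delta=0.5)
d2 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=2, delta=0.5)

# Zero crossings of the first derivative suggest candidate measurement wavelengths.
zero_crossings = wl[np.where(np.diff(np.sign(d1)) != 0)[0]]
print(zero_crossings)
```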
Experimental
2.1. Instruments. Absorption measurements were made on a Thermo Evolution 300 recording spectrophotometer using 10 mm matched quartz cells and a slit width of 2 nm. A pH meter (HANNA HI 223) equipped with a radiometer combined glass electrode was used for pH measurements. The pH values in the water-ethanol medium were corrected as described elsewhere [41].
Chemicals.
All chemicals were of analytical reagent grade, and distilled water was used for the preparation of solutions. Stock solutions of 1 × 10−4 mol L−1 DA and AA were freshly prepared by dissolving an accurately weighed amount of each reagent in distilled water (Figure 1). The ionic strength of the solutions was maintained at a constant value by using universal buffer. All measurements were made at 25 °C.
Standard Procedure.
Aliquots of standard solutions of DA or AA were transferred into a 10-mL calibrated flask; 2.5 mL of universal buffer was added to reach pH 9.2, and the mixture was allowed to stand for 5 min at room temperature. The contents were then diluted to the mark and mixed well.
The first (D1) and second (D2) derivative absorbances at their λmax were measured against a water blank.
Results and Discussion
3.1. Effect of pH on Dopamine and Ascorbic Acid. Using the universal buffer (pH 5.5-11.5), one could see an increase in the absorbance of DA, reaching its maximum at pH 9.2. In contrast, there was a decrease in the absorbance of AA with increasing pH of the solution, so pH 9.2 was selected as the working condition, as our main target was DA (see Figures 2 and 3). Although DA shows some absorbance in the visible region with increasing pH, this absorbance is very weak in comparison with the absorbance in the UV range, which is ten times higher in intensity; this provides higher sensitivity for the detection of DA in the UV region rather than the visible region.
Preliminary Studies.
Figure 4 shows the absorption spectra of DA and AA in distilled water; the spectra of these two compounds overlap completely, and each compound interferes with the spectrophotometric determination of the other.
Optimum Instrumental Conditions.
The main instrumental parameters that affect the shape of the derivative spectra are the wavelength scanning speed and the wavelength increment over which the derivative is obtained (Δλ). These parameters need to be optimized to give a well-resolved large peak, that is, to give good selectivity and large sensitivity in the determination process. Generally, the noise level decreases with an increase in Δλ, thus decreasing the fluctuations in the derivative spectrum. However, if the value of Δλ is too large, the spectral resolution is very poor. Therefore, the optimum value of Δλ was determined by taking into account the noise level, the resolution of the spectrum, and the sample concentration. Several values of Δλ were tested and 2.0 was selected as the optimum. After careful study of lower and higher speeds, a scanning speed of 1200 nm/min was selected.
3.4. First Derivative Spectrophotometry. Figure 5 shows the first derivative absorption spectra of 1 × 10−5 mol L−1 DA and AA at pH 9.2 after 5 min.
Figure 4 shows a large overlap of the spectral bands of the two drugs between 200.0 and 320.0 nm, which prevents their simultaneous determination in a mixture from the zero-order spectra. However, the first derivative spectra allow their simultaneous determination. DA, for example, has zero-crossing points at 258.8 and 286 nm, where AA can be determined, while DA can be determined at 265 nm. Figure 5 shows the first derivative spectra of DA over a wide concentration range from 2 to 50 µM (see Figure 6), while this range is 2 to 30 µM for AA, as shown in Figure 7. Figures 8 and 9 show the second derivative absorption spectra of DA and AA at pH 9.2 after 5 min over concentration ranges of 2-50 µM and 2-2.4 µM, respectively.
Calibration Graphs and Statistical Analysis.
Calibration graphs were constructed at different wavelengths for DA and AA. Table 1 shows the statistical analysis of the experimental data for both analytes. The regression equation was calculated from the calibration graph, along with the standard deviations of the slope and the intercept; the high value of the correlation coefficient indicates the good linearity of the calibration graph, and the better derivative equation was checked by analysis of a known mixture of DA and AA. In Table 1, the first-derivative (D1) amplitudes of DA and AA are measured at 265 nm and at 259 and 287 nm, respectively, while the second-derivative (D2) amplitudes of DA and AA are measured at 251 and 281 nm and at 275 and 297 nm, respectively. The precision was ascertained by carrying out four replicate determinations of synthetic mixtures of DA and AA. The relative standard deviations for four replicate determinations of a mixture containing 2 × 10−5 mol L−1 of DA and AA indicate reasonable repeatability of the proposed method; these results are given in Table 1. The accuracy was tested by the determination of mixtures containing different concentrations of DA and AA. Recovery values were calculated for DA and AA, and based on these values the best equations were selected for measuring synthetic mixtures of DA and AA; these results are given in Table 2.
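A hedged sketch of how such a calibration might be computed follows; the amplitudes, wavelengths, and unknown-sample value are hypothetical and serve only to illustrate fitting a first-derivative calibration line at a zero-crossing wavelength and inverting it for an unknown.

```python
import numpy as np

# Hypothetical calibration: D1 amplitude of DA read at 265 nm (an AA zero-crossing)
# for a series of DA standards (concentrations in micromolar).
conc = np.array([2.0, 5.0, 10.0, 20.0, 30.0, 50.0])
d1_amp = np.array([0.011, 0.027, 0.055, 0.109, 0.166, 0.272])

slope, intercept = np.polyfit(conc, d1_amp, 1)   # regression equation
r = np.corrcoef(conc, d1_amp)[0, 1]              # correlation coefficient
print(f"D1 = {slope:.4f} * C + {intercept:.4f}, r = {r:.4f}")

# Quantify an unknown sample by inverting the calibration line.
unknown_amp = 0.080
print("Estimated DA concentration (uM):", (unknown_amp - intercept) / slope)
```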
3.7. Interferences. Common potential interferents such as lactose (700-fold), starch (250-fold), glucose (950-fold), sucrose (700-fold), and fructose (950-fold), which are usually present in tablet and capsule preparations, showed no influence on the determination of DA and AA by the proposed method at low levels.
Application.
Although most previous works determined both analytes without specifying a particular drug containing them, here, as reported previously in [30-38], we applied our method to drugs spiked with the other analyte and, in addition, determined these analytes in human urine. These methods successfully determine both analytes with good standard deviations in real samples (human urine), which were treated as in [54], and in synthetic mixtures using the usual procedure described above. The proposed method has been successfully applied to the determination of DA and AA in synthetic mixtures containing different ratios of both drugs and in some pharmaceutical samples (DA injection and AA tablet) together with the main interfering substances for AA and DA. The results are given in Table 2, using the best wavelengths for determining both analytes from Table 1. From Table 2, one can observe that there is no significant difference between the results obtained by the proposed method and the reported values.
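For clarity, recovery in such spiked-sample experiments is simply the ratio of found to added analyte; the snippet below is an illustrative calculation with hypothetical numbers, not data from Table 2.

```python
def percent_recovery(found_mg_per_l: float, added_mg_per_l: float) -> float:
    """Recovery (%) = 100 * found / added for a spiked sample."""
    return 100.0 * found_mg_per_l / added_mg_per_l

# Hypothetical spiked-urine result: 1.95 mg/L DA added, 1.90 mg/L found.
print(round(percent_recovery(found_mg_per_l=1.90, added_mg_per_l=1.95), 1))
```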
Conclusions
Although derivative methods are quite old, to the best of our knowledge no one has used them before to determine these two analytes. The proposed methods are simple (no need for solvent extraction or chemical reaction), rapid (requiring only measurement of D1 and D2 values at different wavelengths), direct (estimating each drug independently of the other), and environmentally friendly (no need for synthesized materials or nanoparticles, which lack specific environmental precautions). This paper demonstrates the potential of first and second derivative spectrophotometry as analytical techniques and their usefulness for accurate, rapid, simple, and simultaneous quantitation of DA and AA in pharmaceutical preparations and urine samples. In comparison with previous techniques, this method may be considered a green tool for the determination of DA and AA in aqueous medium. It can also be seen from Table 2 that the second derivative is in general more favorable than the first derivative for the simultaneous determination of dopamine and ascorbic acid.
Figure 1 :
Figure 1: Structure of AA (a) and structure of DA (b).
Sample 1: 1.89 mg L−1 of DA and 3.52 mg L−1 of AA; Sample 2: 3.78 mg L−1 of DA and 1.76 mg L−1 of AA; Sample 3: 1.89 mg L−1 of DA and 1.76 mg L−1 of AA; Sample 4: injection content 200 mg/5 mL; Sample 5: tablet content 100 mg. Average of three determinations based on the drug label. Urine 1: 2.0 mg L−1 of DA and 2.16 mg L−1 of AA added; Urine 2: 1.95 mg L−1 of DA and 2.06 mg L−1 of AA added.
Table 1 :
Statistical analysis of the determination of dopamine and ascorbic acid in mixtures by first and second derivative spectrophotometry. Sample: 3.78 mg L−1 of DA and 3.52 mg L−1 of AA. a
Table 2 :
Determination of dopamine and ascorbic acid in synthetic mixtures and real samples by first and second derivative spectrophotometry. | 2018-12-03T14:22:07.695Z | 2013-12-25T00:00:00.000 | {
"year": 2013,
"sha1": "915fac47fdd0b65a446bde26b09cd0dfd8d1ef3e",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jspec/2013/260376.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "915fac47fdd0b65a446bde26b09cd0dfd8d1ef3e",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
246473478 | pes2o/s2orc | v3-fos-license | Real-time pandemic surveillance using hospital admissions and mobility data
Significance Forecasting COVID-19 healthcare demand has been hindered by poor data throughout the pandemic. We introduce a robust model for predicting COVID-19 transmission and hospitalizations based on COVID-19 hospital admissions and cell phone mobility data. This approach was developed by a municipal COVID-19 task force in Austin, TX, which includes civic leaders, public health officials, healthcare executives, and scientists. The model was incorporated into a dashboard providing daily healthcare forecasts that have raised public awareness, guided the city’s staged alert system to prevent unmanageable ICU surges, and triggered the launch of an alternative care site to accommodate hospital overflow.
have shifted. Accounting for biases in our observational processes is critical to providing reliable situational awareness, investigating pandemic drivers and risks, and accurate forecasting. Case counts and test positivity can indicate changing risks, but are often biased by geographic and temporal variation in testing effort and priorities (27)(28)(29)(30). For example, when COVID-19 antigen tests were initially distributed for proactive screening in schools and long-term care facilities, some states reported the combined antigen and PCR test results, while others did not (31). While COVID-19 mortality counts are likely underreported (32), they are a high priority outcome of interest for national forecasting efforts (33) and may provide the most accurate but substantially delayed signal of past transmission (34,35). Often, case and mortality counts are analyzed jointly to reduce both delays and biases (4,34,36). COVID-19 healthcare data including hospital admissions, census, and ICU usage offer the fidelity of mortality data with a shorter lag, while also providing an immediate indication of healthcare resource needs. For example, COVID-19 hospitalizations have been used to estimate the impact of nonpharmaceutical interventions (37), provide healthcare demand forecasts (38)(39)(40)(41), and guide mitigation policies (42,43). However, such data can be biased by shifting demographics of COVID-19 patients, changes in admission criteria during surges, and the availability of post-acute care facilities (44,45). The municipal COVID-19 task force in the City of Austin, TX, developed a COVID-19 healthcare forecasting model that has guided regional pandemic responses since April 2020. The model is designed to provide robust, accessible, and holistic information about the changing pandemic situation. Using comprehensive COVID-19 hospital admissions and discharge data as well as cell phone GPS traces, the model estimates the impact of past policies and community behavior, real-time prevalence and transmission risks, and future COVID-19 hospitalizations and ICU needs. Here, we motivate our use of hospital admissions data by comparing the timeliness and fidelity of alternative indicators and then apply the model to characterize the first year of the COVID-19 pandemic in terms of the daily SARS-CoV-2 prevalence, transmission rate, case detection rate, and correlation between mobility and transmission. We then examine the impact of key policy and behavior shifts on these trends and retrospectively assess the performance of our 3-wk-ahead COVID-19 healthcare forecasts. These analyses led to two public dashboards, one tracking daily COVID-19 admissions from all area hospitals (46) and another providing COVID-19 healthcare forecasts (47). Both have been maintained since the spring of 2020 and continue to guide risk awareness, mitigation policies, and healthcare resource allocations in the fastest-growing large city in the United States, with a metropolitan area population approaching 2.3 million.
Results
A visual comparison of COVID-19 case counts, hospital admissions, hospital census, ICU census, and death counts in the Austin-Round Rock metropolitan statistical area (MSA) from March 13, 2020 through February 28, 2021 reveals persistent lags and different degrees of variability (Fig. 1A). Deaths tend to lag the other variables by several weeks; the three healthcare variables--hospital admissions, hospital census (which includes general and ICU patients), and ICU census--are smoother than case counts, with multiweek hospital stays causing the hospital census and ICU census to decline more slowly following peaks. Assuming that the goal of surveillance is to anticipate COVID-19 healthcare demand, we evaluate all variables in terms of the timing and strength of their correlation with COVID-19 healthcare usage indicators such as hospital and ICU census (Fig. 1B). Given the advantages of COVID-19 hospital admission data over the alternative indicators, we propose a forecasting model that uses admissions counts in combination with cell phone GPS data to estimate local transmission rates and project imminent healthcare surges (SI Appendix, Fig. S1). Specifically, we use particle filtering to fit an age- and risk-structured susceptible-exposed-infected-recovered (SEIR) model to daily reported COVID-19 hospital admissions. To capture changes in exposure rates stemming from changes in policy and behavior, we assume that transmission rates depend on population mobility and simultaneously estimate time-dependent regression coefficients governing that relationship (Fig. 2). The model yields COVID-19 hospital admissions estimates that mirror the observed data in the Austin MSA from March 13, 2020 through February 28, 2021 (Fig. 2A). We observe similar fidelity with respect to COVID-19 hospital census, ICU usage, discharge, and in-hospital mortality during the same time period (SI Appendix, Figs. S3 and S4). Following the citywide closure of schools on March 13 and Stay Home-Work Safe order on March 24, 2020 (52, 53), the estimated reproduction number dropped to a temporary low of 0.91 (95% CrI: 0.65 to 1.3) on April 6 (Fig. 2C). Although the reproduction number remained relatively flat through late April, the upper bound of the 95% CrI never fell below one. Following the White House's Opening Up America Again guidelines, Texas reopened in phases starting May 1, 2020 (54)(55)(56). Within weeks, the estimated SARS-CoV-2 transmission began to increase, reaching a peak of 1.7 (95% CrI: 1.3 to 2.0) on June 6. To curb rising hospitalizations, the City of Austin enacted a mask order and limited gathering sizes on June 15 (57). Statewide, Texas closed bars on June 26 and enacted mask orders and gathering limits on July 3 (58,59). The pandemic then slowed rapidly to the minimum detected Rt of 0.65 (95% CrI: 0.52 to 0.77) on July 19. Between mid-August and mid-October, the University of Texas opened, with an estimated 30,000 students in Austin participating in hybrid instruction (60); Austin Independent School District, with an enrollment of over 80,000 students, returned to optional in-person instruction (61); and bars were reopened statewide (62). During this period, the reproduction number steadily increased to a high of 1.3 (95% CrI: 1.0 to 1.5) on October 31 and likely remained at or above 1.0 until January 18, 2021, producing an alarming winter surge.
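As an informal illustration of the lag-correlation comparison described above (not the authors' code), the sketch below scans candidate lags between a leading indicator and a healthcare-demand series and reports the best one; the series here are synthetic stand-ins for admissions and ICU census.

```python
import numpy as np
import pandas as pd

def best_lag(leading: pd.Series, target: pd.Series, max_lag: int = 40):
    """Return (lag, Pearson r) maximizing correlation between the leading
    indicator shifted forward by `lag` days and the target series."""
    results = []
    for lag in range(max_lag + 1):
        shifted = leading.shift(lag)
        valid = shifted.notna() & target.notna()
        if valid.sum() > 2:
            results.append((lag, np.corrcoef(shifted[valid], target[valid])[0, 1]))
    return max(results, key=lambda t: t[1])

# Synthetic daily series: 7-day average admissions and an ICU census that lags them.
idx = pd.date_range("2020-03-13", periods=300, freq="D")
admissions = pd.Series(np.random.gamma(5.0, 4.0, size=300), index=idx).rolling(7).mean()
icu_census = admissions.shift(12) * 2.5  # toy census trailing admissions by ~12 days
print(best_lag(admissions, icu_census))
```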
Since May 2020, the city has maintained a public-facing dashboard (46) that tracks the 7-d moving average of COVID-19 hospital admissions and provides clear threshold values for activating different alert levels, ranging from stage 1 (open) to stage 5 (lockdown) (63). According to these triggers, the city enacted stage 5 between June 26, 2020 and July 26, 2020 to mitigate the summer surge, and between December 23, 2020 and February 9, 2021 to mitigate the winter surge, with the COVID-19 ICU census peaking on January 12, 2021 at 190, just short of the estimated local capacity of 200 patients. Austin opened an alternative care site in a large convention center on January 9 and triggered the state's GA-32 order which restricted restaurant capacity and elective surgeries on January 10, after COVID-19 patients exceeded 15% of all hospitalized patients in the region for seven consecutive days (64)(65)(66). The estimated reproduction number declined throughout the stage 5 period, reaching a minimum of 0.65 (95% CrI: 0.5 to 0.9) on February 2.
Population mobility, as measured by the proportion of the day spent at home and numbers of visits to public points of interest, declined sharply during the spring 2020 shelter-in-place order, and then exhibited fluctuations that tracked local COVID-19 policies and epidemiological trends (Fig. 2B). After reducing the dimensionality of eight mobility variables via a principal components analysis, we find that the first principal component clearly reflects known holidays and other anomalous periods, including Thanksgiving, Christmas, and the catastrophic Texas winter storm of February 2021 which forced many residents to shelter in place (67). The academic calendars of the local K-12 school districts and the University of Texas at Austin are reflected in the changing frequency of visits to campuses but have little impact on the overall mobility trends reflected in the principal components analysis (SI Appendix, Fig. S6). Fluctuations in bar and restaurant visits likewise mirror changing COVID-19 restrictions.
When a community adopts precautionary measures that reduce transmission risks in public venues--like face masking, keeping physical distance, and proactive testing--the relationship between mobility and transmission may weaken; the same level of mobility may correspond to a lower level of transmission. When communities loosen such measures, the reverse may occur. We indirectly estimate changes in such precautionary behavior by simulating a counterfactual scenario in which the relationship between mobility and transmission is fixed at the level estimated from the 4 wk beginning on March 13, 2020, the day of the first reported hospital admission. By comparing the resulting hypothetical transmission rates to those originally observed, we estimate the changing relationship between mobility and transmission (Fig. 2D). We estimate that, on February 14, 2021, mobility-associated transmission was reduced by 62% (95% CrI: 52 to 68%) relative to early 2020.
We estimate that 15.9% (95% CrI: 15.6 to 16.4%) of the population had been infected by the end of February 2021, and validate these results with CDC seroprevalence estimates (Fig. 2E) (48). The estimated prevalence of SARS-CoV-2, including asymptomatic infections, peaked at 0.8% (0.7 to 0.9%) in early January 2021 (SI Appendix, Fig. S9). We estimate the time-varying case detection rate by comparing predicted infections to observed case counts. The rate ranged from just under 25% in March 2020 to a peak of 70% in December 2020 (8) (Fig. 2F). On February 1, 2021, the city reported almost 6,000 previously unreported cases dating back several months; 2 wk later, reporting was largely suspended as a historic freeze brought the city to a halt (50,51).
Since May 29, 2020, we have used this model on a daily basis to provide 3-wk-ahead projections of COVID-19 healthcare demand on a dashboard that is widely used by local policy makers, healthcare systems, press, and the public (47). In retrospective validation, we find that 92.9%, 89.5%, and 87.9% of reported daily COVID-19 hospital census values fall within the 95% prediction intervals of our 1-wk-, 2-wk-, and 3-wk-out projections, respectively ( Fig. 3). For COVID-19 ICU data, the corresponding performance metrics are 89.7%, 88.1%, and 87.0%. Our models tend to overproject COVID-19 healthcare demand, particularly at pandemic peaks (Fig. 3, black tick marks). During the summer and winter peaks, the forecasts indicated that the city might exhaust local ICU capacity but not hospital general bed capacity.
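The retrospective validation statistic quoted here is just the share of observations falling inside the forecast band; a minimal way to compute it, with made-up numbers, is sketched below.

```python
import numpy as np

def interval_coverage(observed, lower, upper) -> float:
    """Fraction of observed values lying within the [lower, upper] prediction band."""
    observed, lower, upper = map(np.asarray, (observed, lower, upper))
    return float(np.mean((observed >= lower) & (observed <= upper)))

# Hypothetical 3-week-ahead COVID-19 hospital census forecasts vs. reported values.
obs = np.array([310, 325, 340, 360, 355, 348, 330])
lo = np.array([280, 290, 300, 310, 305, 300, 290])
hi = np.array([360, 380, 400, 420, 410, 400, 380])
print(interval_coverage(obs, lo, hi))
```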
We compare the forecasting performance of our model to three alternative models-a simple random walk (68), an automated autoregressive integrated moving average (ARIMA) model (68), and a simple version of our model which omits the mobility covariate (Fig. 4). The proportion of observed data points that fall within the 95% prediction intervals is highest for the nonmobility version of our model, across the 1-wk, 2-wk, and 3-wk forecasting horizons (SI Appendix, Fig. S10). Our full model performs on par with the ensemble model from the CDC's national COVID-19 healthcare forecasting hub (69) and outperforms the simpler random walk and ARIMA models (SI Appendix, Fig. S10A). The four models achieve comparable levels of error in their (median) point estimates (SI Appendix, Fig. S10B). However, these summary statistics do not reflect time-dependent performance differences among the models. Our full model offers the highest precision and accuracy during pandemic surges ( Fig. 4 and SI Appendix, Figs. S11 and S12). Although the two simple statistical models offer highly accurate (and precise) forecasts during periods of relative stability, they fail to predict exponential growth and rapid decline. Our model outperforms the nonmobility version in reducing uncertainty-providing narrower prediction intervals-particularly at critical epidemic change points.
Discussion
Through a unique collaboration between policy makers, public health officials, healthcare systems, and scientists in the Austin-Round Rock metropolitan area, we developed a flexible model for pandemic surveillance and healthcare forecasting that has guided local COVID-19 responses for over a year. Daily projections have contributed to key pandemic decisions, including enacting the initial Stay Home-Work Safe order (52), face mask mandates (57), and the launch of an alternate care facility to accommodate healthcare overflow (66). Throughout the pandemic, city leadership and local news organizations have regularly cited our model outputs to communicate risks and explain policy changes to the public (70)(71)(72).
Although early COVID-19 risk assessments and forecasts relied almost exclusively on COVID-19 case and mortality data, we find that COVID-19 hospital admissions provide a more accurate and timely indication of recent transmission and imminent healthcare usage. Given the average 5.2 d between infection and symptom onset and average 5.9 d from symptom onset to hospital admission, we expect hospital admissions data to lag infection by roughly 11 d to 12 d, although there is significant individual variation in the time course of infection (73,74). Case counts could provide a more immediate signal of incidence, if cases seek testing and receive rapid results immediately after or even before symptom onset. However, testing in the United States has been plagued by biases and delays throughout the pandemic, including restricted access (75,76), public health guidance to wait until after symptom onset (77), and chronic lags in laboratory processing and reporting (77,78). We expect case data to exhibit 11-to 12-d lags similar to hospital admissions data, given the sequence of delays from infection to symptom onset to test seeking to receipt of test results. A national survey in September 2020 suggested that cases seek tests an average of 2.5 d after first symptoms and wait an average of 3.7 d to receive results (78). Moreover, case count data have persistently exhibited racial, ethnic, and geographic biases due to differential testing access and availability (27). Thus, hospital admissions provide an equally lagged but potentially less biased signal of recent transmission than case data. Despite the utility of COVID-19 hospital admission counts, such data were not widely available in the United States until 9 mo into the pandemic (26). Part of the challenge is that COVID-19 status is not always known at the time of admission, particularly early in the pandemic, when diagnostic resources were limited (75). In Austin, hospitals occasionally updated admissions counts retroactively when SARS-CoV-2 confirmations were delayed.
We estimate that, early in the pandemic, the SARS-CoV-2 reproduction number (Rt ) reached 5.8 (95% CrI: 3.6 to 7.9). Although high, it is consistent with previously published estimates (79). Similar estimates in other cities have been attributed to superspreading events, which we do not explicitly model (80). We note that our estimate is sensitive to the timing of COVID-19 emergence in Austin. If we assume that the initial case arrived on January 20, 2020 rather than February 19, 2020 (which is based on the timing of the first COVID-19 hospital admission), then we estimate a maximum R t of 4.5 (95% CrI: 3.0 to 6.4). However, the estimates quickly converge after March 13, 2020, when COVID-19 healthcare data become available (SI Appendix, Fig. S5).
We estimate that the case detection rate has been highly variable, ranging from less than 20% of cases reported at the outset to well over half reported since early 2021. This variation likely reflects evolving testing priorities, technologies, and access, as well as changes in test seeking behavior driven by fear and effective public health communications (81). However, these citywide averages do not capture demographic and geographic heterogeneity in testing behavior (27,81). For example, children are much less likely to develop symptoms and seek testing than adults, although some private schools have mandated weekly or more frequent testing of all students and staff. The University of Texas at Austin population is similarly overrepresented in the citywide testing data, with their proactive testing program screening an average of 340 students and faculty per day during the 2020-2021 academic year (82).
Our retrospective estimates of COVID-19 infections in Austin are consistent with seroprevalence data (48). Just prior to the summer 2021 emergence of the Delta variant in Austin, we estimated that just under 20% of the Austin-area population had been infected and 58% of adults over age 16 y had received at least one dose of a SARS-CoV-2 vaccine (83,84). As vaccine uptake counterbalances increased transmissibility of COVID-19 variants, our model can be used to continually monitor local transmission dynamics. Going forward, forecasting models like ours must integrate the dynamics of infection-acquired and vaccine-acquired immunity against wild-type and variant SARS-CoV-2 viruses.
Our forecasting model performs well in comparison to simpler mechanistic and nonmechanistic statistical models. Although the four models considered achieve comparable coarse-grained performance statistics, our mobility-driven mechanistic model provides the best combination of accuracy and precision surrounding pandemic surges, when reliable forecasts are particularly important for effective healthcare provisioning, public health responses, and general risk awareness. Removing the mobility covariate from our model significantly increases forecasting uncertainty. Although this increases coverage (the proportion of observed values falling within prediction intervals), it significantly reduces the informativeness and public health utility of the forecasts. Since May 2020, our model projections have informed numerous time-sensitive policy decisions and response actions, including resource planning by local hospitals, urgent requests to state and federal agencies for additional surge resources, the launch and dismantling of alternative care sites to provide additional healthcare capacity, and numerous changes in the Austin-area COVID-19 alert stage to communicate and manage rising and declining risks (43).
In March 2020, we faced an unexpected technical challenge. Prior to the COVID-19 pandemic, most models of respiratory virus transmission assumed that daily contact patterns would be fairly stable. The simplest models assumed that populations are entirely homogeneous and well mixed, others incorporated age-specific contact patterns from diary-based surveys (85) or inferred from epidemiological data (86), and still others assumed complex networks of interactions based on sociological data sources (87,88). The nationwide shelter-in-place orders broke these assumptions. The cell phone mobility data provided by SafeGraph and other technology companies provided an immediate and valuable window into changing behavioral patterns (19). Early in the pandemic, cell phone GPS data reflected COVID-19 policies and correlated with transmission rates (18,89). Our model comparison, with and without mobility data, further suggests that mobility data can provide an immediate and reliable indication of changing risk behavior. However, the relationship between mobility and transmission can evolve as communities adopt and relax precautionary behavior. To capture this, we estimated a coefficient that relates daily mobility to daily transmission rates in Austin. The data suggest that mobility-associated risks of transmission initially declined in the spring of 2020, then spiked following the White House's Opening Up America Again campaign, and slowly increased between August and the end of 2020. As novel sources of behavioral information become available, such as more granular mobility trends (90), Bluetooth-enabled contact tracing records (91), or self-reported face covering usage (22), we should carefully consider and (if possible) explicitly model the observational processes used to collect the data and the behavior dynamics that shape them.
Our retrospective analysis of the Austin experience provides anecdotes regarding the impact of COVID-19 policies on risks. Notably, the statewide reopening in May 2020 appeared to fuel the major summer wave. The constellation of policy relaxation, behavioral fatigue, return to school, and winter holidays preceded the winter surge. Recent studies have quantified the impact of restaurant and bar restrictions, school closures, and mask mandates on local SARS-CoV-2 transmission (92-94). Our study of the COVID-19 pandemic in Austin does not disentangle the relative impacts of such measures but provides an intuitive case study for the dynamic interplay between public policy, human behavior, and viral transmission.
Throughout the pandemic, we have applied this model to provide estimates of key COVID-19 indicators and month-ahead hospitalization forecasts. In April 2020, we started by providing model-based projections at the city task force meetings multiple times per week. By June 2020, we had automated the data processing and statistical fitting procedures and launched a public-facing dashboard (47). The choice of indicators and plotting formats were honed through months of engagements with city leadership and local media. The Austin-Round Rock MSA dashboard provides the daily reproduction number with 95% CrIs, the probability that the pandemic is in a growth phase (that is, the probability that the reproduction number is above one), and the 14-d change in incidence as a percent (SI Appendix, Figs. S14 and S15). It also includes time-series graphs for COVID-19 hospital admissions, hospital census, and ICU census, each of which displays data from the beginning of the pandemic and spaghetti plot forecasts, which convey uncertainty by depicting 100 distinct stochastic projections. This visually communicates that qualitatively different futures may be equally likely, and emphasizes the considerable uncertainty we have faced throughout the pandemic stemming from data quality issues and our inability to anticipate changes in behavior and government policies. Our retrospective performance evaluation revealed that the 95% prediction intervals do not capture 95% of the future data. Specifically, the model failed to predict the rapid deceleration of transmission leading to the peaks observed in July and January. One possible explanation is unmodeled feedback from the system (Austin) to the model, as suggested in prior COVID-19 forecasting studies (95). As COVID-19 hospitalizations climbed, city leadership enacted stricter policies and aggressively communicated the pessimistic forecasts to the public to encourage precautionary behavior and curb transmission. Indeed, our largest prediction errors are clustered around the two pandemic peaks, shortly after Austin transitioned to its most restrictive COVID-19 alert stage. The model does not directly or immediately capture such policy and behavioral changes but rather estimates their effects, with delay, from mobility and hospitalization data. Our COVID-19 forecasting successes and failures will likely inspire a new generation of epidemiological models that include mechanistic behavioral dynamics, organizational decision-making, and feedback between sociological and epidemiological dynamics. Through discussions with the city's COVID-19 task force, media outlets in central Texas, local school districts and universities, major hospital systems, and community organizations, we believe that the dashboard has served as a trusted, daily touchstone for the leadership and residents of Austin, TX. For example, the modeling informed decisions to enact the city's Stay Home-Work Safe order in March 2020, the design of the staged alert system that has guided policy since May 2020 (43), the provisioning of hotel rooms as isolation facilities for populations experiencing homelessness and university students living in congregant settings (96), and the launch of an alternative care site at the convention center to accommodate healthcare overflow, as well as reopening policies by universities and schools throughout the city (97). 
Arguably, the primary value of this effort has been providing a common, predictive understanding of the changing risks, even when the forecasts have been imperfect.
We note three key limitations of our model. First, we do not consider superspreading events, which could lead our model to underestimate future risks, particularly if a superspreading event occurs in a long-term care facility (98). Our model likely captures the potential for sudden transmission rate changes from superspreading events; however, mechanistically incorporating such dynamics could increase the precision of our projections. Second, we assume that Austin is a well-mixed population, and thus ignore important heterogeneities such as long-term care facilities (99) and the extreme east-west segregation of the city, with majority-Latino communities experiencing much higher rates of infection and severe outcomes than the majority-White communities (100-104). Incorporating such heterogeneity for Austin and carefully adapting such assumptions to other cities could substantially improve projections and inform more strategically targeted mitigation efforts. Finally, our estimates for SARS-CoV-2 incidence are sensitive to the assumed infection hospitalization rates, which vary across age and health subgroups and remain uncertain (37,105). Incorporating uncertainty in these parameters would yield wider and, arguably, more reasonable credibility intervals around our estimates for SARS-CoV-2 incidence and case reporting rates. As better data become available, through serological surveys and prospective studies, these parameters can be readily updated.
Immediate, reliable, and comprehensive access to SARS-CoV-2 hospitalization, vaccination, and molecular surveillance data--all of which are collected in electronic databases throughout the United States--is critical for real-time risk assessments, reliable forecasting, and, most important, effective decision-making by individuals, organizations, and government agencies. Translating such data into interpretable indicators and accessible graphs can improve coordination among stakeholders and encourage public buy-in. Our model is designed to provide such retrospective insight and actionable guidance for the public and policy makers in communities throughout the United States.
Materials and Methods
Epidemiological Model. We use an age- and risk-structured SEIR model that incorporates asymptomatic and symptomatic transmission, hospitalization, and mortality. The demographic and risk structure are based on estimates for the Austin-Round Rock MSA (SI Appendix, Fig. S2 and Tables S4-S6), and the natural history of SARS-CoV-2 follows published estimates (SI Appendix, Tables S1-S3). Transmission rates are driven by regional mobility, and the governing relationship between mobility and transmission is allowed to change daily to reflect the dynamic impacts of policy and behavior. The hospital stay duration is also allowed to vary as standards of care and healthcare strain impact the COVID-19 hospital experience (106,107).
The model structure is diagrammed in SI Appendix, Fig. S1, and we present the stochastic formulation below. For each age and risk group, we build a separate set of compartments to model the transitions between the states: susceptible (S), exposed (E), presymptomatic infectious (P^Y), preasymptomatic infectious (P^A), symptomatic infectious (I^Y), asymptomatic infectious (I^A), symptomatic infectious that are hospitalized (I^H), recovered (R), and deceased (D). The symbols S, E, P^Y, P^A, I^Y, I^A, I^H, R, and D denote the number of people in that state in the given age/risk group, and the total size of the age/risk group is the sum of these compartments. Transitions between compartments are governed using the tau-leap method (108,109) with key parameters given in SI Appendix, Tables S1-S3. The stochastic model for individuals in age group a and risk group r is specified by binomial transition draws,
where B(n, p) denotes a binomial distribution with n trials, each with probability of success p; γ_A, γ_Y, and γ_H(t) are the recovery rates for the I^A, I^Y, and I^H compartments, respectively; σ is the exposed rate; ρ_A and ρ_Y are the pre(a)symptomatic rates; τ is the symptomatic ratio; π is the proportion of symptomatic individuals requiring hospitalization; η is the rate at which hospitalized cases enter the hospital following symptom onset; ν is the mortality rate for hospitalized cases; and μ(t) is the daily instantaneous rate at which terminal patients die. F_{a,r} denotes the force of infection for individuals in age group a and risk group r, where A and K describe the age and risk groups, respectively; ω_A, ω_Y, ω_{PA}, and ω_{PY} are the relative infectiousness of the I^A, I^Y, P^A, and P^Y compartments, respectively; and φ_{a,i} is the mixing rate between age group a and age groups i ∈ A. We define the time-dependent transmission rate β(t) as a function of mobility as β(t) = β(0) · e^{b_1(t)·PC1(t) + b_2(t)·PC2(t) + Z(t)}, where PC1 and PC2 describe the first and second principal components from our mobility data as described below, ψ = 0.97, and N(μ_N, σ_N) denotes a normal distribution with mean μ_N and SD σ_N. Finally, we allow the duration in the hospital for individuals who survive, γ_H(t), and those who pass away, μ(t), to vary in time, where Z_μ(0) = 0, Z_γ(0) = 0, and ψ_γ = 0.99. To run the SEIR model without mobility, we set PC1(t) = 0 and PC2(t) = 0 for all t, so β(t) = β(0) · e^{Z(t)}.
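To give a flavor of the binomial tau-leap updates described above, the sketch below implements a deliberately reduced S-E-I-R chain without the paper's age/risk structure, symptomatic/asymptomatic split, or hospital compartments; the rates and initial conditions are illustrative assumptions, not the fitted Austin parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_step(state, beta_t, sigma=1 / 2.9, gamma=1 / 4.0, dt=1.0):
    """One tau-leap update of a simplified S-E-I-R chain using binomial draws,
    with per-step transition probability 1 - exp(-rate * dt)."""
    S, E, I, R = state
    N = S + E + I + R
    new_E = rng.binomial(S, 1 - np.exp(-beta_t * I / N * dt))  # S -> E
    new_I = rng.binomial(E, 1 - np.exp(-sigma * dt))           # E -> I
    new_R = rng.binomial(I, 1 - np.exp(-gamma * dt))           # I -> R
    return (S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R)

state = (2_200_000, 50, 20, 0)  # roughly MSA-sized population, toy seeding
for _ in range(30):
    state = tau_leap_step(state, beta_t=0.35)
print(state)
```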
Mobility Trends. We used mobility trends data from the Austin MSA to inform the transmission rate in our model. Specifically, we ran a principal component analysis (PCA) on eight independent mobility variables provided by SafeGraph (19), including 1) home dwell time and visits to 2) universities, 3) bars, 4) grocery stores, 5) museums and parks, 6) medical facilities, 7) schools, and 8) restaurants. All metrics are provided at the census block group (CBG) and aggregated to the five county metropolitan regions (Bastrop, Caldwell, Hays, Travis, and Williamson Counties). For each CBG, SafeGraph provides the daily average home dwell time and number of reporting devices. We estimate average home dwell time in the MSA by averaging across CBGs weighted by the number of reporting devices. For all other visitation metrics, we sum the total visits for the specific indicator across all CBGs within the MSA. We baseline each metric according to prepandemic mobility by calculating the average value for the metric in the MSA from January and February of 2020 and dividing all subsequent values of that metric by the prepandemic baseline. We carry out a PCA on the eight baselined metrics using all data up to the day the projections are made, which captures almost as much variation in mobility as a more granular sliding window PCA (SI Appendix, Fig. S7). We use the first two principal components as covariates for a regression as described in the modeling equations for β(t). Daily 7-d averages for the raw mobility data can be seen in SI Appendix, Fig. S6.
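A minimal sketch of the mobility dimensionality reduction, using randomly generated stand-ins for the eight baselined SafeGraph metrics, might look as follows; the metric names and values are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical daily MSA-level metrics already divided by their Jan-Feb 2020
# baselines (1.0 = pre-pandemic level).
days = pd.date_range("2020-03-01", periods=365, freq="D")
metrics = ["home_dwell", "universities", "bars", "grocery",
           "museums_parks", "medical", "schools", "restaurants"]
X = pd.DataFrame(np.random.uniform(0.3, 1.2, size=(len(days), len(metrics))),
                 index=days, columns=metrics)

pca = PCA(n_components=2)
scores = pca.fit_transform(X - X.mean())  # columns give PC1(t) and PC2(t) covariates
print(pca.explained_variance_ratio_)
```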
Model Fitting. We obtained daily hospital admit, discharge, census, and death data for the Austin MSA from Austin Public Health. We assumed all sources of data were negative binomially distributed around their predicted values from the SEIR stochastic model with dispersion parameter k. We chose informative but relatively dispersed priors for certain parameters for stability in parameter estimation and to prevent the model from overfitting data through large perturbations to time-dependent variables. A full explanation of the likelihood for the model can be found in SI Appendix. We estimated ψμ, σμ, and σγ and fixed the remaining parameters as described in SI Appendix, Tables S1-S3. Fitting was carried out using the iterated filtering algorithm made available through the mif2 function in the pomp package in R (110)(111)(112). This algorithm is a stochastic optimization procedure; it performs maximum likelihood estimation using a particle filter to provide a noisy estimate of the likelihood for a given combination of the parameters. For each parameter combination, we ran 300 iterations of iterated filtering with a cooling fraction of 50% every 60 steps, each with 3,500 particles. This iterated filtering was run 50 times, and the maximum likelihood estimate (MLE) among these 50 was selected. We calculated smoothed posterior estimates for all of the states within the model through time (including β(t) and other time-dependent parameters which are technically state variables in our model formulation). We estimated these smoothed posteriors as follows: 1) We ran 1,000 independent particle filters at the MLE, each with 2,500 particles. For each run, l, of particle filtering, we kept track of the complete trajectory of each particle, as well as the filtered estimate of the likelihood, L l . 2) For each of the 1,000 particle filtering runs, we randomly sampled a single complete particle trajectory, giving us 1,000 separate trajectories for all state variables. 3) We resampled 1,000 trajectories from these 1,000 trajectories with probabilities proportional to L l to give a distribution of state trajectories.
The result can be thought of as an empirical Bayes posterior distribution; that is, a set of 1,000 smoothed posterior draws from all state variables, conditional on the MLEs for the model's free parameters. This smoothed posterior distribution is how we calculate summary statistics for our time-varying state variables. Our estimates for β(t) are converted to R(t) estimates as described below, and model estimates for the instantaneous discharge rates for surviving (γ_H(t)) and dying (μ(t)) patients can be found in SI Appendix, Fig. S13.
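Step 3 of this procedure is an importance-weighted resampling of whole trajectories. A small sketch of that step, with toy trajectories and log-likelihoods, is given below (in Python rather than the R/pomp tooling actually used).

```python
import numpy as np

rng = np.random.default_rng(1)

def resample_trajectories(trajectories, log_likelihoods):
    """Resample trajectories with probability proportional to their filtered
    likelihoods; log-likelihoods are shifted by their maximum for stability."""
    log_l = np.asarray(log_likelihoods, dtype=float)
    weights = np.exp(log_l - log_l.max())
    weights /= weights.sum()
    idx = rng.choice(len(trajectories), size=len(trajectories), replace=True, p=weights)
    return [trajectories[i] for i in idx]

# Toy example: four trajectories of a single state variable and their log-likelihoods.
trajs = [np.arange(5) + k for k in range(4)]
print(resample_trajectories(trajs, log_likelihoods=[-10.2, -9.8, -11.5, -9.9])[:2])
```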
Making Projections. Our model fitting procedure provides MLE for all of the key parameters (e.g., the SD governing the random walk of the transmission rate) in the model alongside smoothed posteriors for the state variables (e.g., the number of individuals in each compartment of the model or the daily transmission rate). We sample from the smoothed posterior distribution to obtain a distribution of initial state conditions for the projections. We initialize 1,000 projections with those initial state conditions and run the stochastic model forward according to the MLE of the fixed parameters. In this way, we capture two sources of uncertainty in our parameter estimates: 1) uncertainty in the underlying state of the community at the time the projections are made and 2) uncertainty in how behavior might change in the future as captured by the random walk function in our transmission rate.
Projection Model Comparison. We compare projections from the SEIR epidemiological model with projections from statistical null models provided by the forecast package in R (68). For the random walk model, we use an ARIMA model of order (p = 0,d = 1,q = 0) (68), and we use the Hyndman-Khandakar algorithm for automatically determining the order of an ARIMA model for the Auto ARIMA model (68). We fit the models to all available data up to the date the projection is made, and project forward with the fitted model.
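The two statistical baselines can be reproduced in outline with standard time-series tooling; the sketch below (Python/statsmodels rather than the R forecast package used by the authors) fits a random walk as ARIMA(0,1,0) and, in place of automatic order selection, a fixed low-order ARIMA on a synthetic census series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily COVID-19 hospital census series (illustrative only).
y = pd.Series(np.cumsum(np.random.normal(2.0, 5.0, size=120)) + 100.0,
              index=pd.date_range("2020-06-01", periods=120, freq="D")).clip(lower=0)

# Random walk baseline: ARIMA(0,1,0); the point forecast is flat at the last value.
rw = ARIMA(y, order=(0, 1, 0)).fit()
print(rw.get_forecast(steps=21).conf_int().tail(3))

# A fixed low-order ARIMA standing in for automatic order selection
# (e.g., pmdarima.auto_arima), which is omitted here for brevity.
arima = ARIMA(y, order=(1, 1, 1)).fit()
print(arima.forecast(steps=21).tail(3))
```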
Time-varying reproduction number (Rt).
To estimate the time-varying reproduction number (Rt), we apply the next-generation method to our daily estimated smoothed posterior distributions for β(t) with the MLE values of the estimated parameters and the fixed parameters listed in SI Appendix, Tables S1-S3 (113). Reporting rates. We estimate the reporting rates by comparing our estimates for daily incidence with daily reported case counts for the Austin MSA (Bastrop, Caldwell, Hays, Travis, and Williamson Counties) as provided by The New York Times (8). To roughly estimate changing reporting rates, we lag the case data by 11 d to account for the lag between infection and case reporting (73,78). In estimating the maximum and minimum reporting rates, we exclude case data for February 2021, because reporting was impacted by a severe weeklong winter freeze and the reporting of a large number of backlogged cases (49)(50)(51). Estimating Austin COVID-19 seroprevalence. COVID-19 seroprevalence estimates are not available for the Austin metropolitan region, but the CDC has conducted biweekly Texas seroprevalence estimates since the summer of 2020 (48). We adjust the Texas seroprevalence estimates to account for the heterogeneous burden of the pandemic across the state. Specifically, we assume that Austin seroprevalence can be estimated as I_Austin(t) = I_Texas(t) · D_Austin(t)/D_Texas(t), where I_Texas is the seroprevalence estimate provided by the CDC for the state of Texas, and D indicates the per capita mortality rate for Austin or the state of Texas as provided by The New York Times (8). As carried out in ref. 48, we shift all time-dependent estimates to their corresponding date of infection, so seroprevalence estimates are shifted to 7 d before the first sampling day to account for the time it takes to become seropositive following infection, and mortality data are shifted 20 d to account for the delay between infection and mortality (114). We then compare the corrected estimate for I_Austin(t) with the daily cumulative estimated infections from the model.
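The two bookkeeping calculations in this subsection, the lagged reporting-rate ratio and the mortality-scaled seroprevalence, are sketched below with synthetic inputs; the 11-day lag follows the text, while the series values and the exact scaling numbers are illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Synthetic daily series for the Austin MSA (illustrative values only).
idx = pd.date_range("2020-04-01", periods=200, freq="D")
est_incidence = pd.Series(np.random.gamma(9.0, 60.0, size=200), index=idx)
reported_cases = (0.4 * est_incidence
                  + np.random.normal(0.0, 20.0, size=200)).clip(lower=0)

# Reporting rate: cases shifted back 11 days (infection -> report) over estimated infections.
reporting_rate = (reported_cases.shift(-11) / est_incidence).rolling(14).median()
print(reporting_rate.dropna().tail(3))

# Austin seroprevalence approximated by rescaling the statewide estimate with the
# ratio of per-capita mortality (both shifted toward dates of infection).
def austin_seroprevalence(i_texas: float, d_austin: float, d_texas: float) -> float:
    return i_texas * (d_austin / d_texas)

print(austin_seroprevalence(i_texas=0.14, d_austin=0.0008, d_texas=0.0011))
```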
Estimating the time-varying relationship between mobility and transmission.
Our model estimates the time-varying transmission rate as β(t) = β(0) · e^{b_1(t)·PC1(t) + b_2(t)·PC2(t) + Z(t)}, with b_1(t) and b_2(t) governing the relationship between the mobility data and the transmission rate. Since transmission is governed by a combination of b_1(t), b_2(t), and Z(t), an increase in one may be compensated by a decrease in another without significantly changing the overall transmission rate. Thus, we cannot easily estimate the contribution of each in isolation. Instead, we estimate the time-varying relationship between mobility and transmission through a comparison between our fitted model and a counterfactual scenario in which b_1(t), b_2(t), and Z(t) are fixed at their average initial estimated values (b̄_1, b̄_2, and Z̄). Specifically, we estimate b̄_1, b̄_2, and Z̄ as the averages of the respective parameters over the first 4 wk of hospitalization data from the fitted model (from March 13, 2020 to April 10, 2020), and calculate the expected transmission rate based on this initial relationship and subsequent mobility data as β̄(t) = β(0) · e^{b̄_1·PC1(t) + b̄_2·PC2(t) + Z̄}.
β̃(t) can be thought of as the counterfactual transmission rate if the initial relationship between mobility and transmission had remained constant over the course of the pandemic. We estimate the reduction in mobility-driven transmission that is unexplained by mobility levels as
1 − β(t)/β̃(t).
We provide a point estimate for the overall reduction in mobility-transmission risk on February 14, 2021 relative to early in the pandemic, and provide a sensitivity analysis with respect to the start and duration of the baseline period (SI Appendix, Fig. S8).
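The counterfactual comparison can be illustrated with a short R sketch; the mobility components and fitted coefficient series below are synthetic stand-ins, and the exponential form and reduction definition follow the reconstructed equations above:

```r
# Compare the fitted transmission rate with a counterfactual that freezes the
# mobility-transmission relationship at its average over an initial baseline window.
set.seed(1)
days  <- 1:300
PC1   <- cumsum(rnorm(300, 0, 0.02))      # synthetic mobility principal components
PC2   <- cumsum(rnorm(300, 0, 0.01))
b1_t  <- 0.8 + 0.001 * days               # synthetic fitted time-varying coefficients
b2_t  <- 0.3 - 0.0005 * days
Z_t   <- -0.1 + cumsum(rnorm(300, 0, 0.005))
beta0 <- 0.5

baseline <- days <= 28                    # first 4 wk define the baseline averages
b1_bar <- mean(b1_t[baseline]); b2_bar <- mean(b2_t[baseline]); Z_bar <- mean(Z_t[baseline])

beta_fit <- beta0 * exp(b1_t * PC1 + b2_t * PC2 + Z_t)        # fitted beta(t)
beta_cf  <- beta0 * exp(b1_bar * PC1 + b2_bar * PC2 + Z_bar)  # counterfactual beta(t)

reduction <- 1 - beta_fit / beta_cf       # reduction unexplained by mobility levels
tail(reduction, 1)                        # point estimate at the end of the series
```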
Data Availability. All code and healthcare time-series data used in this study are publicly available and have been deposited in GitHub (https://github.com/UT-Covid/SEIR-Austin). | 2022-02-03T06:23:52.444Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "3bd1ef3d9338b6fd49c152f3f3d9a051e358a08d",
"oa_license": "CCBY",
"oa_url": "https://www.pnas.org/content/pnas/119/7/e2111870119.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "47c0da108c374fcc66b09980b561430de1db9110",
"s2fieldsofstudy": [
"Computer Science",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17021074 | pes2o/s2orc | v3-fos-license | MDM4 Overexpressed in Acute Myeloid Leukemia Patients with Complex Karyotype and Wild-Type TP53
Acute myeloid leukemia patients with complex karyotype (CK-AML) account for approximately 10–15% of adult AML cases, and are often associated with a poor prognosis. Except for the approximately 70% of CK-AML patients with biallelic inactivation of TP53, the leukemogenic mechanism in the nearly 30% of CK-AML patients with wild-type TP53 has remained elusive. In this study, 15 cases with complex karyotype and wild-type TP53 were screened out of 140 de novo AML patients and the expression levels of MDM4, a main negative regulator of the p53 signaling pathway, were detected. We ruled out mutations in genes associated with a poor prognosis of CK-AML, including RUNX1 and FLT3-ITD. The mRNA expression levels of full-length MDM4 (MDM4FL) and the short isoform of MDM4 (MDM4S) were elevated in CK-AML relative to normal karyotype AML (NK-AML) patients. We also explored the impact of MDM4 overexpression on the cell cycle, cell proliferation and the spindle checkpoint of HepG2 cells, a human cancer cell line with normal MDM4 and TP53 expression. The mitotic index and the expression of p21, BubR1 and Securin were all reduced following Nocodazole treatment. Moreover, karyotype analysis showed that MDM4 overexpression might lead to aneuploidy or polyploidy. These results suggest that MDM4 overexpression is related to CK-AML with wild-type TP53 and might play a pathogenic role by inhibiting the p53 signaling pathway.
Introduction
Acute myeloid leukemia patients with complex karyotype (CK-AML) account for approximately 10-15% of adult AML, and the incidence increases with age. CK-AML is characterized by chemoresistance, higher rates of refractory disease, and poor prognosis [1][2][3]. However, the molecular mechanisms mediating leukemogenesis in CK-AML patients have remained elusive. A series of large-sample studies shows that nearly 70% of CK-AML cases carry TP53 mutations and have biallelic inactivation of TP53 [4,5]. p53 plays an important role in spindle damage-induced mitotic arrest in proliferating T cells [6], and myeloid progenitors that have lost p53 exhibit aberrant self-renewal, thereby promoting AML [7]. Yet the question remains as to the leukemogenic mechanisms in the nearly 30% of CK-AML patients without TP53 alterations.
MDM4 is a negative regulator of p53; by binding p53, it occludes the transcriptional activation domain and thereby inhibits p53 function [8]. The short isoform of MDM4 (MDM4S) is one of the MDM4 alternative splicing isoforms and results from the exclusion of exon 6 and termination of translation in exon 7. MDM4S is essentially a truncated protein that mainly consists of the p53-binding domain. MDM4S has been reported to bind and inhibit p53 more efficiently than full-length MDM4 (MDM4FL) [9].
Several recent studies suggest that an increased MDM4S/MDM4FL ratio may serve both as a more effective biomarker for p53 pathway attenuation in cancers than p53 gene mutation and as a poor prognostic indicator [10,11]. The molecular mechanisms of myeloproliferative neoplasm (MPN) conversion into AML were examined in 330 cases [12]. Among the 22 patients whose disease transformed to AML, 10 (45.5%) cases had evidence of a p53-related defect mediated by gains (amplification) of chromosome 1q (which contains the potent p53 inhibitor MDM4) or TP53 gene mutations. These reports suggest that overexpression of MDM4 may be involved in the leukemogenic mechanisms of CK-AML patients without TP53 alterations. This question has not been fully explored to date.
In this study, we detected the expression levels of MDM4S and MDM4FL in CK-AML patients with wild-type TP53. We also measured cell proliferation, the cell cycle, and the expression levels of proteins related to the p53 pathway and the spindle checkpoint, and analyzed karyotypes in an MDM4-overexpressing tumor cell line with wild-type TP53. We used these approaches to investigate the possible pathogenesis of MDM4 overexpression in CK-AML patients lacking TP53 mutations.
Ethics Statement
This study complies with the Declaration of Helsinki, and has been approved by the Ethics Committee of Shanxi Medical University. The written informed consent was obtained from all patients and from the legal guardians in the case of minors.
Patients
Bone marrow samples were collected at the time of diagnosis from 140 non-M3 de novo AML patients. The fusion genes RUNX1/RUNX1T1, PML/RARα and CBFβ/MYH11 were confirmed to be negative in all patients at the time of enrollment.
Karyotype analysis
Conventional cytogenetics was performed at the time of diagnosis in 140 patients. Bone marrow cells were cultured in RPMI 1640 medium with 10% fetal bovine serum and penicillin-streptomycin for 24 hours, followed by treatment with 0.01 mg/ml colcemid for 60 min. Cells were harvested and placed in 0.075 M KCl for 15 min. After several changes in methanol-acetic acid fixative, slides were prepared by hot-plate drying. Metaphase chromosomes were banded by the trypsin-Giemsa or Phosphate R technique, and karyotyped according to the International System of Human Cytogenetic Nomenclature (ISCN 2005).
PCR and Gene sequencing
Exons 3-9 of the TP53 gene and exons 3-9 of RUNX1 were amplified by PCR from genomic DNA and sequenced directly in all cases with complex karyotype. TP53 deletions were detected by interphase FISH in complex karyotype cases. FMS-related tyrosine kinase 3 internal tandem duplication (FLT3-ITD) analysis was performed as published [13] in CK-AML patients with wild-type TP53 and NK-AML patients.
Real-time RT-PCR
For quantitative RT-PCR, cDNA was prepared using the PrimeScript 1st Strand cDNA Synthesis Kit (TaKaRa, Shiga, Japan) and used in quantitative real-time PCR reactions with SYBR Premix Ex Taq (TaKaRa) and 0.5 mM of forward and reverse primers.
Cell culture
HepG2 and 293T cell lines were obtained from the Institute of Cell Biology, Chinese Academy of Sciences, Shanghai, China. Cells were maintained in DMEM (Wuhan Boster, Biotechnology Ltd., Wuhan, China) supplemented with 10% fetal bovine serum (FBS; Gibco, Carlsbad, CA, USA), 100 U/ml penicillin, and 100 mg/ml streptomycin (Sigma, St. Louis, MO, USA). Nocodazole (Sigma) was dissolved in DMSO and used at either 0.1 mg/ml or 1 mg/ml.
Cell cycle and cell proliferation assay
Cells stably expressing MDM4FL, MDM4S or vector control were cultured overnight, 0.1 mg/ml Nocodazole was added the following day, and cells were incubated for an additional 18 hours. Cells were stained with propidium iodide (PI) and cell cycle stage was determined by flow cytometry (FCM). Cell proliferation was analyzed using the MTT assay. After 4 h incubation with MTT reagent, cells were lysed with DMSO for 10 min at 37 °C and absorbance was measured at 570 nm. The average percentage is shown for three independent HepG2 control, MDM4FL or MDM4S-expressing pools.
Western-blot analysis
After treatment with 1 mg/ml Nocodazole for 18 hours, total protein was extracted from approximately 5-10 × 10^6 control, MDM4FL or MDM4S-expressing cells, and stored at -80 °C before use. Lysates (30 mg) were resolved by 8-12% SDS-PAGE and gels were transferred to nitrocellulose membranes. Membranes were blocked with 5% nonfat milk in PBST for 1 h, followed by incubation with primary antibody overnight at 4 °C with gentle rotation. Membranes were washed twice with PBS containing 0.2% Tween 20 and incubated with appropriate secondary antibodies for 1 h at room temperature with gentle rotation. Membranes were then washed twice with PBST and incubated with SuperSignal West chemiluminescent substrate for detection.
Mitotic chromosome and karyotype analysis
Chromosome spreads were prepared from control, MDM4FL and MDM4S-expressing cells, and stained with Giemsa. Images were acquired with Motic high-quality scientific-grade CCD cameras (Hong Kong). Metaphase cells (75 per sample) from control, MDM4FL and MDM4S-expressing pools were scored for chromosome number. Three independent chromosome counts were obtained for each data set, and the rank sum test was used to compare chromosome number dispersion. The Kruskal-Wallis test was used to compare the medians of the three ranked variables. All statistical analyses were performed using SPSS 16.0 (IBM, Chicago, IL, USA) and P < 0.05 was considered significant.
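For readers working in R rather than SPSS, an equivalent comparison of the chromosome count distributions could be run as follows; the counts are illustrative values loosely reflecting the ranges reported below, not the study's raw data:

```r
# Kruskal-Wallis test across the three cell pools, followed by pairwise rank sum tests.
ctrl   <- c(45, 52, 70, 81, 88, 94, 110, 120)
mdm4fl <- c(45, 86, 95, 102, 108, 150, 200, 284)
mdm4s  <- c(26, 73, 90, 100, 102, 140, 180, 206)

counts <- c(ctrl, mdm4fl, mdm4s)
group  <- factor(rep(c("control", "MDM4FL", "MDM4S"), each = length(ctrl)))

kruskal.test(counts ~ group)                                         # overall comparison
pairwise.wilcox.test(counts, group, p.adjust.method = "bonferroni")  # pairwise rank sum tests
```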
CK-AML patients with wild-type TP53 had a poorer prognosis than NK-AML patients
This study cohort included 15 CK-AML patients with wild-type TP53, with a male/female ratio of 1.14 (8:7) and a median age of 59 years (range, 17-80 years); seven patients (46.7%) were ≥60 years. Two patients (13.3%) had WBC counts greater than 100 × 10^9/L. One patient was classified as M0, four as M2, five as M4 and five as M5 according to the FAB classification. Karyotype analysis showed monosomy 5 (-5) (n = 2) and monosomy 7 (-7) (n = 4). Of the 15 patients monitored for therapy response and survival, four achieved complete response (CR) and two achieved partial response (PR). The median survival time was 292 days (range, 66-738 days). The clinical characteristics of the 15 CK-AML karyotypes are provided in Table 1. The overall survival (OS) of NK-AML patients was significantly higher than that of CK-AML patients (P = 0.001) (Figure 1).
TP53 mutations and deletions were screened by genomic PCR sequencing and interphase FISH in 24 CK-AML cases, and 15 CK-AML cases had wild-type TP53. In order to rule out other gene mutations associated with a poor prognosis of CK-AML, we detected RUNX1 mutations in the 15 CK-AML patients and FLT3-ITD mutations in 131 de novo AML cases (15 patients with wild-type TP53 and 116 NK-AML patients). Among the 15 CK-AML patients (Table 2), the melting curve showed a single peak, suggesting specific amplification of the product (Fig. 2).
Metaphase arrest was reduced and cell proliferation activity was increased in MDM4-expressing cells
HepG2 cells stably expressing MDM4FL, MDM4S or vector control were cultured overnight, 0.1 mg/ml Nocodazole was added the following day, and cells were incubated for 18 hours. The percentages of M phase cells for control, MDM4FL and MDM4S-expressing cells were 51.94%, 33.35% and 35.61%, respectively. Compared with the control, there were fewer M phase cells among MDM4FL and MDM4S-expressing cells (P < 0.05) (Fig. 3A). We next examined the percentage of G0/G1 cells at different time points after Nocodazole treatment. Before Nocodazole treatment, the percentage of G0/G1 cells in all three lines was approximately 40-60%. Following Nocodazole treatment for 8 h, the percentage of G0/G1 cells in all three cell lines decreased sharply, and then gradually increased with prolonged treatment. At 18 h, the percentages of G0/G1 cells in MDM4FL and MDM4S-expressing cells were higher than that in control cells (P < 0.05) (Fig. 3B). Finally, we examined cell proliferation.
p21 expression levels decreased in MDM4-expressing cells
To explore whether MDM4 overexpression inhibited the activity of the p53 pathway, p53 and p21 expression levels were examined in the MDM4-overexpressing cell model. Our data showed that, compared with control, the p53 expression level decreased in MDM4FL-expressing cells (P < 0.05), but it did not decline significantly in MDM4S-expressing cells (P > 0.05). However, p21 expression levels decreased in both MDM4FL and MDM4S-expressing cells compared with control (P < 0.01) (Fig. 4A-B).
BubR1 and Securin expression levels decreased in MDM4-expressing cells
The spindle checkpoint proteins BubR1 and Securin were assessed by western blot in control, MDM4FL or MDM4S-expressing cells. The results showed that the expression levels of BubR1 and Securin in MDM4FL and MDM4S-expressing cells decreased following Nocodazole treatment. However, control cells exhibited increased Securin levels, consistent with previous reports [14] that APC activity is required to destabilize Securin (Fig. 4C-E).
Polyploidy and aneuploidy in MDM4FL and MDM4S-expressing cells
We then monitored chromosome number, premature sister chromatid separation and polyploidy or endoreduplication in control, MDM4FL and MDM4S-expressing cells. Karyotype analysis showed that prematurely dissociated sister chromatids prior to anaphase, polyploidy or endoreduplication were observed in MDM4FL or MDM4S-expressing cells, but not in control cells (Fig. 5). Chromosome number data are expressed as medians (25th and 75th percentiles). The median chromosome numbers were 81 (52, 94) (range 45-120), 102 (86, 108) (range 45-284), and 100 (73, 102) (range 26-206) for control, MDM4FL and MDM4S-expressing cells, respectively (Kruskal-Wallis evaluation, P < 0.05). Therefore, we conclude that at least one of these chromosome number distributions had a different ranking relative to the others. Boxplot analysis suggests that the MDM4S and MDM4FL cells most likely have different distributions from control cells. The chromosome numbers in the MDM4S and MDM4FL groups spanned a much wider range. There were several singular and outlier values in MDM4FL or MDM4S-expressing cells that were not found in control cells (Fig. 6).
Discussion
About 70% of CK-AML cases contain p53 mutations, and are often associated with poor prognosis [4,5]. Cell cycle regulation is closely related to the transcriptional activation of p53. Several studies have shown that Nocodazole, a spindle inhibitor, when applied to p53-/- mouse fibroblasts, causes them to become polyploid because of endoreduplication. This suggests an important role for p53 in regulating the spindle checkpoint in mice [15][16][17]. p53 dysfunction leads to decreased p21 expression and a weakened spindle checkpoint. A cell with a chromosome aberration and a weakened spindle checkpoint will continue to proliferate and exhibit aneuploidy or a complex karyotype [18]. In this study, we ruled out mutations of some genes related to a poor prognosis of CK-AML, including RUNX1 [19] and FLT3-ITD, in 15 CK-AML patients lacking TP53 mutation. These results implied that there might be other important molecular events involved in the leukemogenic mechanisms that occur in CK-AML patients with wild-type TP53.
MDM4 is a negative regulator of p53 that exerts its effect by binding p53. MDM4 has several transcript variants [20], with the MDM4S transcript arising from exon 6 deletion, resulting in a truncated protein containing only the p53-binding domain. It has been reported that the affinity of MDM4S for p53 is approximately 10-fold higher than that of MDM4FL [21]. High levels of MDM4S mRNA expression are associated with short treatment-free survival [11], and its overexpression was significantly correlated with an unfavorable prognosis in soft-tissue sarcoma patients [10,22]. Our results showed that MDM4FL and MDM4S expression levels were elevated in CK-AML patients relative to NK-AML patients. We thus speculate that MDM4 overexpression may be involved in the leukemogenic mechanisms of CK-AML patients with wild-type TP53.
To test this speculation, we tried to find a leukemic cell line with wild-type p53 in the catalog of the American Type Culture Collection (ATCC). However, all myeloid cell lines either contain mutant p53 or do not express p53 [23][24][25][26]. Because the purpose of our experiments was simply to investigate whether MDM4 overexpression would influence the p53 signaling pathway in cancer cells with normal p53, we decided to choose another appropriate cancer cell line to continue the study. The HepG2 cell line expresses wild-type p53, normal levels of MDM4, and low levels of MDM4S [27]. These characteristics were appropriate for our experiments. MDM4-expressing HepG2 cells displayed a reduced mitotic index following Nocodazole treatment, suggesting a failure in a subset of cells to undergo mitotic arrest through a functional spindle checkpoint. Additionally, MDM4-expressing cells had reduced levels of p21, an important effector molecule downstream of p53. This indicates that overexpression of MDM4FL or MDM4S inhibits the p53 signaling pathway.
BubR1 is a critical component of the spindle checkpoint. BubR1 performs several roles during mitosis and ensures accurate chromosome separation [28]. Securin is one of the main substrates of APC/C [29]. The expression levels of BubR1 and Securin decreased in MDM4-expressing cells following Nocodazole treatment, suggesting that APC may be active in these cells because of a spindle checkpoint decline. However, following Nocodazole treatment, control cells had increased levels of Securin. These results indicate proper functioning of the spindle checkpoint and an inactive APC in control cells. Cells that continue to proliferate with an attenuated spindle checkpoint should missegregate chromosomes and become aneuploid. Previous reports indicate that Securin loss can lead to karyotype changes in cell lines [30]. Therefore, it is possible that the spindle checkpoint and APC activity, through BubR1 and Securin downregulation, contribute to the attenuation of cell cycle checkpoints.
Suppression of BubR1 results in a dysfunctional spindle checkpoint and leads to abnormal mitosis and aneuploidy [31]. CK-AML has been defined as the presence of at least five clonal aberrations, or at least three abnormalities in the absence of t(8;21), inv(16)/t(16;16), and t(15;17) [32]. A complex karyotype, like aneuploidy, may result from chromosome missegregation during mitosis. Our results suggest that MDM4 overexpression may cause aneuploidy or polyploidy. We have not observed an association between specific chromosomal abnormalities and MDM4 overexpression because we have only 15 CK-AML patients with wild-type TP53. Although it is not well known whether there is a causal relationship between MDM4 overexpression and aneuploidy, these data raise the possibility that MDM4 overexpression plays a role in CK-AML pathogenesis. It will be necessary to evaluate more patients, to further explore the molecular mechanisms of MDM4 overexpression, and to develop targeted therapies for CK-AML patients. At least in theory, restoration of p53 function is a potential therapeutic approach in leukemia. Bista M et al. [33] reported that SJ-172550, an inhibitor of the interaction between MDM4 and p53, may be a new option for the treatment of CK-AML. Their results suggest that the combination of an MDM4 inhibitor and traditional chemotherapy for refractory CK-AML may be worth evaluating.
MDM4 expression levels were elevated in CK-AML patients relative to NK-AML patients; MDM4-overexpressing HepG2 cell lines had a reduced mitotic index and reduced p21, BubR1 and Securin expression levels following Nocodazole treatment; and MDM4-overexpressing cells were aneuploid or polyploid. Based on the data presented in this study, we speculate that the leukemogenic mechanism of CK-AML without TP53 alterations is partly due to inhibition of the p53 signaling pathway and weakening of the spindle checkpoint by MDM4 overexpression. MDM4 may be a novel therapeutic target in the treatment of CK-AML patients with wild-type TP53. | 2016-05-12T22:15:10.714Z | 2014-11-18T00:00:00.000 | {
"year": 2014,
"sha1": "554a2ea4cbd8633341614e630838664c77708b74",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0113088&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "554a2ea4cbd8633341614e630838664c77708b74",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
231919102 | pes2o/s2orc | v3-fos-license | Power formulas for mixed effects models with random slope and intercept comparing rate of change across groups
We have previously derived power calculation formulas for cohort studies and clinical trials using the longitudinal mixed effects model with random slopes and intercepts to compare rate of change across groups [Ard & Edland, Power calculations for clinical trials in Alzheimer's disease. J Alzheim Dis 2011;21:369-77]. We here generalize these power formulas to accommodate 1) missing data due to study subject attrition common to longitudinal studies, 2) unequal sample size across groups, and 3) unequal variance parameters across groups. We demonstrate how these formulas can be used to power a future study even when the design of available pilot study data (i.e., number and interval between longitudinal observations) does not match the design of the planned future study. We demonstrate how differences in variance parameters across groups, typically overlooked in power calculations, can have a dramatic effect on statistical power. This is especially relevant to clinical trials, where changes over time in the treatment arm reflect background variability in progression observed in the placebo control arm plus variability in response to treatment, meaning that power calculations based only on the placebo arm covariance structure may be anticonservative. These more general power formulas are a useful resource for understanding the relative influence of these multiple factors on the efficiency of cohort studies and clinical trials, and for designing future trials under the random slopes and intercepts model.
Introduction
Ref. [1] have previously described sample size formulas for longitudinal studies with study subject dropout for the mixed model repeated measures analysis comparing change from baseline to last visit across groups. Missing data due to study subject dropout in clinical trials and cohort studies is common and reduces statistical power to detect treatment effects or differences in change across groups. We here derive power formulas for longitudinal studies with study subject dropout for a different model, the mixed effects model with random slopes and intercepts comparing mean slope across groups. We demonstrate how power formulas under this model can be used to power a future trial of arbitrary design (arbitrary number and interval between follow-up observation) regardless of the design of pilot study informing power calculations. We expand and generalize previously published mixed effects model power formulas (e.g. [2,3]) to fully accommodate differences in length and interval between longitudinal observations, different allocation ratios, and different study subject attrition rates. We also derive a formula that accommodates different covariance structures across groups. Differences in covariance are typically ignored, but may be critical to clinical trials, where changes over time in the treatment arm reflect the normal background variability in progression observed in the placebo control arm plus variability in response to treatment, meaning that power calculations based only on the placebo arm covariance structure may be anticonservative. To our knowledge, this is the first presentation of power formulas for the mixed effects model with random slopes and intercepts that accommodates differences in model variance parameters across groups. We note that a substantial literature describes many of these features for mixed model repeated measures analyses assuming compound symmetric or autoregressive covariance of repeated measures [1,[3][4][5]. While compound symmetric and autoregressive covariance structures are mathematically more tractable, in our experience these models are not appropriate for repeated measures of chronic progressive conditions. We demonstrate by example that compound symmetric and autoregressive covariance structures typically are not appropriate for modeling chronic progressive conditions. In the interest of clarity, in this paper we focus exclusively on the model with covariance structure imposed by random slopes and intercepts most appropriate for chronic progressive outcome measures.
Background, the mixed effects model
The parameterization of the mixed effects model with random slopes and intercepts used in this derivation is the familiar Laird and Ware mixed effects model parameterization with estimation and hypothesis testing by restricted maximum likelihood (REML). We use the notation of [6] to represent within-group longitudinal observations y_i on subject i as
y_i = X_i α + Z_i b_i + e_i, (1)
where α are the fixed effect intercept and slope describing the mean longitudinal trajectory, b_i ~ N(0, D) are random, subject-specific intercepts and slopes, and e_i ~ N(0, R_i) is residual variation about the individual trajectories. When convenient, we will represent the elements of D as σ²_b0 (the random intercept variance), σ²_b1 (the random slope variance), and σ_b0b1 (their covariance). In the derivation below, X_i and Z_i are subject-specific design matrices, each composed of a column of ones and a column of times at which measurements y_i were made. To simplify presentation we maintain large sample normality assumptions in all that follows, and we do not consider covariates beyond t_i. Consistent with prior literature [2,3], we assume that data are missing at random and that the covariance parameters are known.
Ref. [7] showed that V(α̂), the asymptotic variance of the maximum likelihood estimate of α, is independent of α̂ and derived its value. Under model (1), y is normally distributed with mean Xα and variance-covariance V. The likelihood function is
L(α) ∝ |V|^(-1/2) exp{-(1/2)(y - Xα)'V^(-1)(y - Xα)}. (2)
The log likelihood, apart from a constant, is
ℓ(α) = -(1/2) log|V| - (1/2)(y - Xα)'V^(-1)(y - Xα). (3)
By the √n-consistency and asymptotic efficiency of MLE, α̂, the maximum likelihood estimate of α, follows
α̂ ~ N(α, I(α)^(-1)), (4)
where I(α) is the information matrix, which equals -E(∂²ℓ/∂α_h ∂α_k). For the log likelihood (3), after taking the partial derivative and expectation,
I(α) = X'V^(-1)X. (5)
Thus the asymptotic variance of α̂ is
V(α̂) = (X'V^(-1)X)^(-1). (6)
We can further simplify this as
V(α̂) = (Σ_i X_i'V_i^(-1)X_i)^(-1), (7)
where
V_i = Cov(y_i) = Z_i D Z_i' + R_i. (8)
In particular, the lower right diagonal of V(α̂) is the variance of the mean slope estimate, which is required for sample size formulas to power clinical trials comparing mean slope in treatment versus control. The components of V(α̂) can be estimated by REML [6]. Two specific cases of Eq. (7) are useful for illustrative purposes. If we are dealing with balanced data, then X_i and V_i are constant across subjects, and Eq. (7) reduces to simply
V(α̂) = (1/n)(X_i'V_i^(-1)X_i)^(-1). (9)
A similar clinical trial with missing observations due to missed clinical exams or study subject dropout would not have constant V_i and X_i, but instead would have a finite set of design and variance matrix pairs. Letting k index this set, the variance of the fixed effect estimates for a clinical trial with missing data is then equal to
V(α̂) = (1/n)(Σ_k p_k X_k'V_k^(-1)X_k)^(-1), (10)
where the n_k are counts of subjects in each set and sum to n, and p_k = n_k/n.
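A minimal R sketch of Eqs. (7)-(9), computing the variance of the mean slope estimate from one design/covariance pair; the variance parameters are illustrative assumptions:

```r
# Build V_i = Z_i D Z_i' + sigma_e^2 I for a balanced design and extract the
# slope element of (X_i' V_i^-1 X_i)^-1.
t_obs <- seq(0, 1.5, by = 0.25)              # visit times in years
X     <- cbind(1, t_obs)                     # design matrix (intercept, time); Z_i = X_i here
D     <- matrix(c(25, 1, 1, 16), 2, 2)       # assumed random intercept/slope covariance
s2e   <- 14                                  # assumed residual variance

V      <- X %*% D %*% t(X) + s2e * diag(length(t_obs))
Valpha <- solve(t(X) %*% solve(V) %*% X)     # per-subject contribution, Eqs. (7)-(8)

Valpha[2, 2] / 100                           # variance of mean slope with n = 100 (Eq. (9))
```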
Power formula, balanced design with no dropout
For the balanced design with no dropout, standard power formulas apply. E.g., for equal allocation to arms, the per-arm sample size to detect a difference in mean slope Δ between treatment and control is
n = 2 (z_{1-α/2} + z_{1-β})² [(X_i'V_i^(-1)X_i)^(-1)]_{22} / Δ². (11)
This formula can be used given an estimate of V_i = Cov(y_i) obtained from pilot data or a previously completed trial of comparable design. A more generally applicable formula can be derived given the usual assumption of independent residual errors, R_i = σ²_ε I. Under this assumption the lower right element of (X_i'V_i^(-1)X_i)^(-1) is σ²_b1 + σ²_ε / Σ(t_j − t̄)² (Appendix A), and Eq. (11) reduces to
n = 2 (z_{1-α/2} + z_{1-β})² (σ²_b1 + σ²_ε / Σ(t_j − t̄)²) / Δ², (12)
where Σ(t_j − t̄)² is the sum over the measurement time vector t = (t_1, t_2, …, t_m)' of the squared differences t_j minus mean time.
Equation (12) is more generally applicable because it only requires estimates of σ²_ε and σ²_b1, which can be obtained by REML fit to longitudinal pilot data of arbitrary design. That is, future studies can be powered using prior study data that do not necessarily have the same duration or interval between follow-up as the planned future study [9]. Equation (12) also provides a heuristic illustration of the influence of study design on power: longer trials or trials with more longitudinal observations increase power by reducing the influence of σ²_ε on overall variance.
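A small R helper implementing the closed-form Eq. (12); the function name and default arguments are illustrative:

```r
# Sample size per arm for detecting a slope difference delta under the random
# slopes and intercepts model with independent residual errors (Eq. (12)).
n_per_arm <- function(delta, sigma_b1, sigma_e, t, alpha = 0.05, power = 0.80) {
  z <- qnorm(1 - alpha / 2) + qnorm(power)
  slope_var <- sigma_b1^2 + sigma_e^2 / sum((t - mean(t))^2)
  2 * z^2 * slope_var / delta^2
}

# e.g., an 18-month trial with visits every 3 months (illustrative parameter values):
ceiling(n_per_arm(delta = 1, sigma_b1 = 4, sigma_e = 3.7, t = seq(0, 1.5, 0.25)))
```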
Power formula, balanced design with dropout
Another important example, following Lu et al., is the case of study subject dropout during a cohort study or clinical trial, also referred to as study subject attrition (SSA). SSA implies a subset of the dropout patterns indexed by k in Eq. (10), restricting to the m − 1 longitudinal dropout patterns composed of subjects whose last visit is at t_k, k = 2 through m inclusive. Given the independent residual errors assumption and equal allocation to arms, under SSA the per-arm sample size is calculated by
n = 2 (z_{1-α/2} + z_{1-β})² [(Σ_k p_k X_k'V_k^(-1)X_k)^(-1)]_{22} / Δ², (13)
where the sum is over the m − 1 dropout patterns defined by SSA, p_k and (X_k'V_k^(-1)X_k) are as in Eq. (10), and the V_k are matrices with off-diagonal elements (u, v) equal to
σ²_b0 + (t_u + t_v) σ_b0b1 + t_u t_v σ²_b1,
and diagonal elements equal to this quantity plus σ²_ε. As before, the parameters σ²_b0, σ_b0b1, and σ²_b1 of D and the residual error variance σ²_ε are estimated by REML fit to representative prior longitudinal data. Power formulas accommodating study subject attrition such as Eq. (13) and [1] are technically anticonservative because they ignore information lost by the occasional missed interim visit, although this bias is typically small. If missing interim visit data are a concern, then applying Eq. (13) over all sets of missing data patterns will ensure true nominal type I error rates are maintained.
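Eq. (13) can be evaluated with a few lines of R; the dropout proportions and variance parameters below are illustrative:

```r
# Sample size per arm under study subject attrition: weight each dropout
# pattern's information matrix by its proportion p_k and invert the sum.
t_obs <- seq(0, 1.5, by = 0.25)                    # planned visit times (years), m = 7
p_k   <- c(0.05, 0.05, 0.05, 0.05, 0.05, 0.75)     # proportions with last visit at t_2..t_m
s2b0 <- 25; sb01 <- 1; s2b1 <- 16; s2e <- 14       # sigma^2_b0, sigma_b0b1, sigma^2_b1, sigma^2_e

info <- matrix(0, 2, 2)
for (k in seq_along(p_k)) {
  tk <- t_obs[1:(k + 1)]                           # visits observed under pattern k
  Xk <- cbind(1, tk)
  Vk <- s2b0 + outer(tk, tk, "+") * sb01 + outer(tk, tk) * s2b1 + s2e * diag(length(tk))
  info <- info + p_k[k] * t(Xk) %*% solve(Vk) %*% Xk
}
slope_var <- solve(info)[2, 2]

delta <- 1
z <- qnorm(0.975) + qnorm(0.80)
ceiling(2 * z^2 * slope_var / delta^2)             # subjects per arm under SSA
```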
Power formula, unequal allocation, unequal study subject attrition, and unequal variance across groups
Formulas (12) and (13) assume that variance parameters and study subject attrition rates are the same in the two groups being compared and that the number of subjects in each group is equal. We may require a formula that accommodates different study subject attrition rates across groups, and/or unequal allocation to groups [1]. It would also be useful to have a formula that accommodates different variance parameters across groups.
Letting Term_1 and Term_2 indicate the values [(Σ_k p_k X_k'V_k^(-1)X_k)^(-1)]_{22} calculated separately for group 1 and group 2, and given the independent identically distributed residual error assumption, the sample size for group 1 can be calculated by
N_group1 = (z_{1-α/2} + z_{1-β})² (Term_1 + λ Term_2) / Δ², (14)
where λ is the sample size ratio across groups (N_group2 = N_group1 / λ). The derivation of Eq. (14) is straightforward, and follows from the observation that the variance of the difference in fixed effects slope estimates equals the sum of the individual slope estimate variances. Factoring out 1/N_group1 from this sum leaves the quantity (Term_1 + λ Term_2), and power as a function of N_group1 follows.
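Given the two per-group terms, Eq. (14) is a one-line computation; the numbers below are illustrative:

```r
# Group-1 sample size with allocation ratio lambda (N_group2 = N_group1 / lambda);
# term1 and term2 are the per-group values of [(sum_k p_k X_k' V_k^-1 X_k)^-1]_22.
n_group1 <- function(delta, term1, term2, lambda, alpha = 0.05, power = 0.80) {
  z <- qnorm(1 - alpha / 2) + qnorm(power)
  z^2 * (term1 + lambda * term2) / delta^2
}

ceiling(n_group1(delta = 1, term1 = 24, term2 = 30, lambda = 2))
```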
Modeling under the unequal variance across groups assumption
Using Eq. (14) with unequal variance parameters to power a study presumes that the analysis plan for the study explicitly models the covariance structure of the two groups. For most applications, including clinical trials, σ²_ε is assumed constant across groups. Sample syntax explicitly modeling the remaining, within-group random effects parameters determining the covariance structure of repeated measures is included in Appendix B.
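One possible way to allow the random intercept/slope covariance to differ by treatment arm in lmer is to give each group its own random-effects term via indicator columns; this is only an illustrative sketch with simulated data, not the authors' Appendix B syntax:

```r
library(lmerTest)

# Simulate a small two-arm longitudinal data set (illustrative values only).
set.seed(1)
dat <- expand.grid(ID = 1:60, TIME = seq(0, 1.5, 0.25))
dat$GROUP <- as.numeric(dat$ID > 30)
b0 <- rnorm(60, 0, 5)                      # random intercepts
b1 <- rnorm(60, 0, 4 + 2 * (1:60 > 30))    # larger slope SD in the treated group
dat$Y <- 20 + b0[dat$ID] + (-4 + b1[dat$ID] + 1 * dat$GROUP) * dat$TIME +
  rnorm(nrow(dat), 0, 3.7)

# Indicator columns let each group carry its own random intercept/slope covariance.
dat$g0 <- as.numeric(dat$GROUP == 0)
dat$g1 <- as.numeric(dat$GROUP == 1)

fit <- lmer(Y ~ GROUP * TIME +
              (0 + g0 + g0:TIME | ID) +
              (0 + g1 + g1:TIME | ID),
            data = dat)
summary(fit)   # the GROUP:TIME coefficient tests the difference in mean slopes
```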
Example
Given representative pilot data it is a simple matter to estimate the variance terms required for the power formulas. For example, Table 1 is the output from a mixed effect model fit to longitudinal ADAS-cog scores observed in the ADCS trial of a folic acid/B6/B12 compound to slow the progression of Alzheimer's disease [10] (n = 330 subjects and m = 7 observations per subject) using the software provided with the standard mixed effects model text Mixed-Effects Models in S and S-PLUS [11]. The correlation of repeated measures estimated by the random slopes and random intercepts REML model fit (Table 2) mirrors the empirical correlation calculated from the same sample data, confirming that this model well represents the covariance structure of longitudinal repeated measures of a chronic progressive condition. In contrast, the commonly assumed compound symmetric and autoregressive covariance structures are constant on the diagonals and inconsistent with these longitudinal data of a chronic progressive condition.
From Table 1, the estimated standard deviation of slopes σ̂_b1 is 3.964 and the estimated standard deviation of residual errors σ̂_ε is 3.705 (Table 1). Assuming equal variance across arms, and using these values in Eq. (12), the sample size required to detect a 25% slowing of cognitive decline (Δ = 0.25 × 4.06) with 80% power and a type I error rate of 5% for an 18 month trial with observations every three months is 360 subjects/arm. For comparison, a 24 month trial with observations every three months would require 296 subjects per arm using Eq. (12). Note that it is not necessary for the design of the pilot study (i.e., the number of observations and interval between observations) to match the design of the future trial; we only require that there are sufficient pilot data to estimate the variance parameters σ²_b1 and σ²_ε.
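Plugging the Table 1 estimates into Eq. (12) reproduces the sample sizes quoted above; a short R check:

```r
# Reproduce the worked example: 18- and 24-month designs with quarterly visits.
sigma_b1 <- 3.964          # SD of random slopes (Table 1)
sigma_e  <- 3.705          # residual SD (Table 1)
delta    <- 0.25 * 4.06    # 25% slowing of the 4.06 points/year decline
z        <- qnorm(0.975) + qnorm(0.80)

n12 <- function(t) 2 * z^2 * (sigma_b1^2 + sigma_e^2 / sum((t - mean(t))^2)) / delta^2

ceiling(n12(seq(0, 1.5, 0.25)))  # 18-month design: ~359-360 subjects per arm
ceiling(n12(seq(0, 2.0, 0.25)))  # 24-month design: ~296 subjects per arm
```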
Validation by computer simulation
To evaluate the performance of Eqs. (12) through (14) we performed computer simulations assuming data following the model fit obtained in the Example above. We first performed simulations assuming a clinical trial with a balanced design with six post-baseline time points, with no loss to follow-up, and with equal variance within arms, consistent with Eq. (12). Simulating a series of clinical trials with sample size from 200 to 600 subjects per arm, with effect size equal to a 25% reduction in the mean rate of decline observed in placebo (25% of the mean 4.06 points per year rate of decline observed in the pilot data (Table 1)), and with 10,000 simulations per sample size, we found that simulated power closely tracks the power predicted by Eq. (12) (top line, Figure 1). To validate the power formula for data with study subject attrition described in Eq. (13), we simulated data under equivalent conditions, except that for each simulation we randomly dropped 5% of the initial sample from each arm at t_2 through t_7. We similarly found that simulated power closely tracks the power predicted by the Eq. (13) power formula (bottom line, Figure 1). Study power decreases when there is study subject attrition (Figure 1).
To validate the power formula for data with unequal allocation to groups described in Eq. (14), we simulated data with 5% study subject attrition at each follow-up visit as above, but let the allocation ratio λ vary from one to two. Simulated power closely tracks the power predicted by the Eq. (14) power formula (Figure 2). Predictably [12,13], power is maximized when λ equals one, and declines as the allocation ratio deviates from one (Figure 2).
To validate the Eq. (14) power formula when covariance structures differ across groups, we simulated data as done in the top line of Figure 1, but increased σ_b1 by 50% in one of the groups. Simulated power closely tracks the power predicted by the Eq. (14) power formula (Figure 3). The top line from Figure 1 is included in Figure 3 for reference. Figure 3 illustrates the potential for anticonservative power calculations in the clinical trial setting when variance parameters used in power calculations are informed by prior placebo arm data and assumed to be constant across arms.
Discussion
There are limitations to the Laird and Ware model as parameterized in Eq. (1), because this model depends on the assumption that mean trajectories are linear as a function of time. This assumption may be violated, particularly in clinical trials of treatments with potential acute treatment effects beyond simple alteration of the rate of disease progression. In this circumstance, mixed model repeated measures analysis [1] or model-robust alternatives such as generalized estimating equations [14] would be preferred. In our experience the linearity assumption is often appropriate for chronic progressive conditions, especially when the interval of observation under study is small relative to the full trajectory of disease. We further note that the formulas presented here assume variance parameters are known, as is typical of the power formula literature [1-3, 5, 15]. However, variance parameters may be uncertain if the sample size in pilot studies used to estimate the variance parameters is small or if pilot data are not perfectly representative of the future investigation being powered. There is a literature on characterizing power when variance parameter estimates are uncertain (e.g. [16]). However, these methods apply to narrow applications that do not include random effects models. We recommend sensitivity analyses using a range of plausible variance parameters to ensure that planned future investigations are adequately powered. If the prior data informing power calculations are available, sensitivity analyses may be informed by bootstrap estimates of the uncertainty of variance parameter estimates (e.g., [17]). We have also used computer simulations to explore the adequacy of pilot study sample size to inform future trials in other applications [18]. The formulas derived here are useful for determining the relative efficiency of different study designs using the mixed effects model to test for differences in mean rate of change between groups. We have described how efficiency can vary by the number and interval between observations, the study subject attrition rate, the allocation ratio, and by differences in variance parameters between groups. Increasing the length of observation or the number of observations increases statistical power, although with diminishing returns depending on the magnitude of the residual error variance of the outcome measure under study (see Eq. (12)). Study subject attrition can also meaningfully impact statistical power and should be accounted for in study design (see Eq. (13) and, e.g., Figure 1).
Regarding recruitment allocation ratios, if all other conditions are equal across groups, then altering the allocation ratio from one-to-one reduces statistical power for a given study sample size [12]. Altering the allocation ratio has been proposed to improve statistical power when there are differential attrition rates across clinical trial arms [1]. More commonly, allocation ratios are altered to increase the probability of randomization to the active treatment in the hope of increasing clinical trial recruitment rates. While this approach may increase recruitment rates, it also implies more subjects will have to be recruited to achieve target statistical power, and trade-offs between clinical trial cost and time to completion should be considered carefully when planning a trial with unequal randomization to arms [13].
Finally, we describe how statistical power depends on variance parameters, which may vary across groups (Eq. (14)). This consideration is typically overlooked, but may be especially relevant to clinical trials, where rate of progression in the active treatment arm is a function of both underlying variability in rate of progression and variability in response to treatment. Given that response to treatment is unlikely to be constant across subjects, we can anticipate that the variance of random slopes in the treatment arm will be larger than the variance in the control arm if there is a treatment effect. Hence, power calculations based only on the covariance within placebo data will be anticonservative. Typically, pilot data for clinical trials are from placebo arm data of a previous trial or a registry trial with no treatment arm. A conservative power calculation assumption under these circumstances would be to use an inflation factor for σ²_b1 within the treatment arm in (14) to be more likely to achieve nominal power in the planned trial.
Formulas (12), (13), and (14) are implemented in the R package longpower [19], and will be useful tools for planning future cohort studies and clinical trials as well as for comparing the influence of the many factors affecting the efficiency of such investigations. Areas of additional research include modifying power calculation methods in anticipation of evolving guidelines on statistical analysis plans for clinical trials in the presence of missing not at random data [20], and generalizing power formulas to more directly address the stochastic nature of covariance parameter estimates typically used in practice.
Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.
Appendix A
To derive the variance term in Eq. (12), we need to find the bottom right corner of (X_i'V_i^(-1)X_i)^(-1). As derived by [21],
(X_i'V_i^(-1)X_i)^(-1) = D + σ²_ε (X_i'X_i)^(-1).
Substituting and collecting terms,
[(X_i'V_i^(-1)X_i)^(-1)]_{22} = σ²_b1 + σ²_ε / Σ(t_j − t̄)².
Appendix B
The random effects model with random slopes and intercepts can be performed with the lmer function within the R package lmerTest [22]. To test for differences in slopes between groups under the assumption of equal covariance structure in the two groups, the lmer model call is lmer(Y ~ GROUP * TIME + (TIME | ID)), where ID indexes individual subjects, GROUP is a 0, 1 variable indicating placebo (0) and active treatment (1), and TIME are times of repeated observations on the dependent variable Y. | 2021-02-15T06:16:09.776Z | 2021-01-18T00:00:00.000 | {
"year": 2021,
"sha1": "5415ac99d1adc8ed4d94f3f4f3d953c9df6ba2a1",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/ijb-2020-0107/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "512008b0fda339b23635ea35d907a2988bee6a9d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249804298 | pes2o/s2orc | v3-fos-license | A comprehensive review on Nepalese wild vegetable food ferns
Ferns are used as traditional and fascinating foods in many countries. They are also considered to possess important ethnomedicinal values; however, ferns remain an underutilized plant resource among both scientific and local communities. Pharmacognostical studies reveal that ferns and fern-allies possess several biological activities including antibacterial, antiviral, antifungal, antimalarial, antidiarrheal, anthelmintic, analgesic, anti-inflammatory, antidiabetic, anticancer, neuroprotective, nephroprotective, hepatoprotective, antifertility, etc. Flavonoids and terpenoids are the major secondary metabolites present in ferns. Ugonins, isolated particularly from Helminthostachys zeylanica, have been found to possess diverse bioactivities. Ptaquiloside, a norsesquiterpene glucoside found in Pteridium revolutum, Dryopteris cochleata and Polystichum squarrosum, is one of the hazardous metabolites of ferns and is responsible for their toxic effects. Alkaloids are reported to be present in ferns; however, the qualitative data are uncertain. Some fern metabolites, such as cyanogenic glycosides and terpenoids, are considered to possess defensive activity against animal attacks. Some ferns are also used for manuring and as a biological alternative to pesticides. Nepalese have consumed at least 33 species of ferns and fern-allies belonging to 13 families and 20 genera as cooked vegetable foods. The aim of this review is to compile the information available on their distribution, ethnomedicinal values, pharmacognosy, pharmacology and phytochemistry.
Introduction
Nepal is a small landlocked country that lies on the lap of the great Himalaya in the north. It is sandwiched between China and India. Despite occupying only 0.1% of the global area, the country is naturally blessed and enriched with cultural, geographical and biological diversities [1]. Its cultural diversities are interlinked with more than 60 ethnic groups speaking more than 100 languages. Geographically, it has large altitudinal variations ranging from lowland (altitude nearly 60 m in the south) to the highest peak of the earth (Mt. Everest, 8,848.86 m in the north) within a span of just about 200 km of aerial expansion from south to north. Its physiographic regions comprise (1) Tarai, (2) Siwaliks and Chure, (3) Mahabharat, (4) Midlands or central hills, (5) Himalayas, and (6) Inner Himalayas and Tibetan marginal mountains [2]. The country has 118 different types of ecosystems harboring 3.2% of the world's flora including 5.1% of gymnosperms, 3.2% of angiosperms, 5.1% of pteridophytes, 8.2% of bryophytes, 2.5% of algae, 2.6% of fungi and 2.3% of lichens [1].
For food and livelihood security, Nepalese significantly depend on cultivated and wild vegetables [3]. In Nepal, it has been estimated that >650 species are used as food materials, and >1,000 species possess medicinal values [4]. Recently, we have listed 318 wild plant species (excluding mushrooms, fruits, spices and condiments, seeds and pulses) that are consumed by Nepalese as vegetables [5]. Wild vegetables are an indispensable constituent of the human diet and are nature's gift to mankind. They are not only cheap but also supply minerals, vitamins and certain hormone precursors. Among them, wild edible ferns play an important role in the diet, and they are also a source of income requiring no investment.
Wild plants are globally used by human societies for food, medicine, ornament, defence and industrial purposes. Ferns and fern-allies have unique features in their appearance, natural habitat, food value and ethnomedicinal properties. Ferns possess a wide range of medicinal activities; unfortunately, they are poorly investigated. Comprehensive reviews on the phytochemicals and bioactivities of ferns and fern-allies are rare. It is commonly considered that plants grown in harsh environmental conditions usually possess remarkable bioactivity. Therefore, this review article is intended to increase scientific attention to fern species, particularly those grown at high altitudes of Nepal. Consequently, an extensive literature search was conducted. Dissertations, books, journal articles and conference proceedings on ethnobotanical surveys in Nepal were accessed in libraries to generate a list of wild vegetable food ferns of Nepal. Taxonomic names mentioned in the original articles were verified using several e-flora databases. An online literature search was conducted in Google Scholar and PubMed using the terms ferns, pteridophytes and the individual names of the fern species. Cited references in the published research articles were also traced to obtain information on the pharmacognosy, phytochemicals and bioactivities of fern species grown across the globe.
Ferns and fern-allies (pteridophytes)
Ferns and fern-allies (pteridophytes) are considered a very ancient plant group. They are the first vascular land plants, which evolved during the Devonian and Carboniferous periods of the Paleozoic era [6]. Pteridophytes dominated the earth's vegetation in the beginning of the Mesozoic era, about 280-230 million years ago [7,8]. Ferns are consumed as vegetables in many countries across the globe and they are also used in traditional Chinese medicines [9,10]. However, surprisingly, compared to other vascular plants, pteridophytes remain under-explored in both ethnobotanical and pharmacological aspects. In the last decade, some natural compounds have been isolated from different fern species and some pharmacognostical studies have begun [11]. Initially, ferns were used ethnomedicinally for some ailments on the basis of traditional knowledge, and the major studies on pteridophytes were primarily focused on taxonomic identification.
Ferns are seedless vascular plants and they reproduce by sporulation. They share many features with mosses and algae, but are usually differentiated from them by having xylem and phloem to transport water and nutrient materials. Like other vascular plants (gymnosperms and angiosperms), they possess stems, fronds, pinnae and roots. The life cycle of a fern is referred to as alternation of generations, with two different stages: the gametophyte phase (sexual) and the sporophyte phase (asexual) (Figure 1). Motile male gametes (antherozoids) are produced from antheridia and non-motile female gametes (egg cells) are borne singly in archegonia. Fusion of the two gametes results in the formation of a zygote, which develops into the sporophyte (diploid) after mitotic divisions. Fern sporophytes are free-living, independent of the gametophyte (prothallus), dominant, and grow to a much larger size. The mature sporophyte bears sori on the underside of the blade. Sporangia release a number of non-motile spores that germinate and grow by mitotic divisions into haploid gametophytes.
It has been estimated that there are around 403,000 plant species on earth, including phanerogams and cryptogams [12]. About 13,271 species of pteridophytes are distributed worldwide, which forms nearly 3% of the world flora [13]. The Plant List has listed 35 plant families and 568 genera of pteridophytes [14]. About 63 families, 230 genera and 2,600 species of pteridophytes have been reported in China [15]. From India, 33 families, 130 genera and 1,267 species of ferns and fern-allies are reported [16]. About 37 families and 687 species in Japan [17]; 801 species in Taiwan [18]; 39 families, 144 genera and 1,100 species in the Philippines [19]; 1,165 species in Malaysia [20]; and 670 species of pteridophytes in Thailand [21] have been reported. In Nepal, ferns are called "Unyu/Oony" and they are distributed between 60 and 4,800 m above sea level (asl) [22,23]. Thapa et al. [24] have listed 535 species of pteridophytes belonging to 35 families and 102 genera from Nepal. Recently, a book Ferns and Fern-allies of Nepal [25] has been published including an annotated checklist and critical account of 550 species and an additional 32 sub-species, altogether 582 taxa of ferns and fern-allies of Nepal belonging to 32 families and 99 genera. In Nepal, Deparia boryana, Diplazium esculentum, Dryopteris cochleata, Ophioglossum vulgatum, Pteridium revolutum, Diplazium maximum, Diplazium spectabile, Diplazium stoliczkae, Polystichum squarrosum, Pteris biaurita, Tectaria gemmifera, etc. are popularly consumed as vegetables, and they are also sold in the markets [5,26]. The popularity of fern consumption in Nepalese communities is due not only to their unique taste but also to the belief that they have high nutritional content, such as vitamin C and iron [27]. Indeed, it is considered that the nutritional value and mineral content of wild edible plants are richer than those of commercial vegetables [28]; therefore, their consumption for nutritional purposes should be encouraged. At the same time, their limited availability and vulnerability are serious issues.
History, expeditions and literature on Nepalese pteridophytes
Species Plantarum, originally published in 1753 by Carl Linnaeus, is the first botanical work that applied the binomial nomenclature system for the taxonomic description of 5,940 plant species, and it recognized 15 fern genera [29]. The pioneering plant collection and taxonomic study of Nepalese pteridophytes first began in the early 19th century when the Scottish botanist Francis Buchanan (later known as Francis Hamilton) collected 433 plant specimens including 34 species of Nepalese pteridophytes in 1802-1803 and published An Account of the Kingdom of Nepal [30,31,32]. Subsequent historical book publications, namely, Tentamen Florae Nepalensis Illustratae by Wallich [33], Prodromus Florae Nepalensis by Don [34], The Flora of British India by Hooker [35], Index Filicum by Moore [36], A Priced Catalogue of Hardy Exotic and British Ferns by Sim [37], Catalogue of the Plants of Kumaon by Duthie [38], Notes from a Journey to Nepal by Burkill [39] and A Plantsman in Nepal by Lancaster [40], have documented many Nepalese ferns. Some foreign authors have also contributed to the taxonomic identification of pteridophytes of Nepal [41,42,43,44].
After the establishment of the Botanical Survey and Herbarium Office by the Nepal Government in 1961 (later renamed the National Herbarium and Plant Laboratories), Nepalese botanists started collecting plants and preserving herbarium specimens, and they have since published several articles and books [45]. Earlier contributions were made by Gurung [46,47,48,49] and Manandhar [50,51,52,53], followed by others [24,54,55,56,57]. A major contribution to plant collection and taxonomic studies of Nepalese pteridophytes, including Sikkim Himalayan ferns, has been made by Christopher Fraser-Jenkins, a Welsh pteridologist, who has resided for 40 years in Kathmandu, Nepal. Formerly, he was a research fellow at the Natural History Museum, London, and at the Royal Botanic Garden, Edinburgh. Later, he worked in conjunction with the National Herbarium and Plant Laboratories, Godawari, Nepal. He has spent more than five and a half decades studying Himalayan pteridophytes and has published Ferns and Fern-allies of Nepal, Vols. 1-3 [58,59,25]; Annotated Checklist of Indian Pteridophytes, Parts 1-3 [60,61,62]; and a series of other publications [63,64,65].
A number of works on the exploration, taxonomic identification and documentation of Nepalese pteridophytes, along with rare molecular studies, have been carried out; however, not only Nepalese but also global edible pteridophytes are poorly investigated for their phytochemical constituents and pharmacological effects. Therefore, exploitation of both edible and non-edible fern flora deserves serious attention from scientific communities [66].
A list of wild edible ferns of Nepal
From a survey of the literature on Nepalese ferns published by various authors, we have generated a list of 33 edible fern species (Table 1). The taxonomic identification of the species followed Kramer and Green [67], as modified by Fraser-Jenkins [65], and was cross-verified in the e-databases of The Plant List [68] and the Flora of China [69]. Cornopteris decurrenti-alata and Matteuccia intermedia rarely occur in Nepal, while Blechnum orientale, Osmunda japonica, Coniogramme intermedia and Pteris wallichiana are common species [31]. These species are used as fern foods in China [70]. We could not find proper documentation of these pteridophytes as wild edible ferns of Nepal; therefore, they are omitted from the list. Pteridophytes that are consumed for their medicinal values but not eaten as salads or cooked vegetables are also not included in the table [71].
Distribution, ethnomedicine, pharmacognosy and phytochemistry of the wild edible ferns
It has recently been reported that there are 582 taxa of pteridophytes in Nepal [72]; however, Nepalese researchers have not focused on ethnopteridological studies of traditional medicinal knowledge, food safety and the phytoconstituents present in pteridophytes. At the same time, surprisingly, very few scientific studies have been carried out globally in the areas of the chemical constituents and pharmacological activities of pteridophytes [71]. In this section, we compile the available information on the distribution, ethnomedicinal uses, pharmacognosy and phytochemistry of the wild edible ferns listed in Table 1. The abbreviations C (central), N (north), E (east), W (west) and S (south) are used to indicate the distribution of the fern species. The edible fern distribution in China and India is specified by province, as available.
Ethnomedicine.
The tender shoot is used as demulcent, stomachic and laxative [92]. The powder or paste is used to treat dysentery [93]. Rhizome is used in abdominal spasm [75]. Root paste is applied externally to treat burns, injury and wounds [76].
Pharmacognosy.
The ethyl acetate extract inhibits lipid formation by 35% at 100 μg/mL in the 3T3-L1 cell model [92]. Antioxidant as well as anticancer activities have been explored [92,94]. [95]. Aerial parts are used to treat fever, pain, glandular swelling, diarrhea, dermatitis, wounds and measles. The rhizome is considered anthelmintic, antidysenteric, antidiarrheal and pest repellent [96]. It is also used in cough, asthma, fever, dyspepsia and stomachache. A decoction of the rhizome and young leaves is useful in haemoptysis and constipation. Rhizomes are also used in scabies and boils [53]. A paste of leaf and stem is applied externally to treat cuts and wounds [97].
Ethnomedicine.
Fronds are used in dysentery, skin diseases and infertility [134,135,136]. The leaf is used for abdominal pain, constipation and sore throat [135]. The rhizome is used as a purgative. [137,138] and antibacterial [138] activities of the aerial parts are reported. Anti-HIV activity has been shown by the rhizome extracts [139,140].
Ethnomedicine.
It is an ornamental fern that is planted for indoor decoration and borders [151]. It is traditionally used for animal bedding [52]. The rhizomes are astringent and useful in diarrhea, intestinal inflammation and wounds. The rhizomes are also used for making breads and brews. The plant juice is considered antibacterial.
5.4.1.4. Pharmacognosy. It contains poisonous cyanogenic glycosides, which are antithiamine and carcinogenic. It has been demonstrated that feeding of the plant induces urinary bladder tumors, ileal sarcomas, pulmonary adenomas and leukemia [152,153]. However, Yoshihira et al. performed toxicity tests and found that none of the plant extracts and fractions could induce tumors under the conditions studied [154].
Several sesquiterpenes have been isolated from the fronds, including pterosins A-G, I-J, K-L, N, O, Z, etc. [156,157,158,159]. Thiaminases 1 and 2 have been isolated, which cause vitamin B1 deficiency in animals [160].
Interestingly, L. japonicum grown under metal-enriched conditions can accumulate copper in the cell wall pectin [193].
Ethnomedicine.
Fronds are used as a tonic and styptic [253]. It is used for wounds and old skin diseases [52,254]. Habitat. Moist, exposed grassy areas.
Flavonoids, glycerides and amino acids are the constituents of O. vulgatum [265]. Its paste with turmeric is applied to cuts and wounds [163,270,271]. It is also used as a tonic and styptic [272], and it is also reported to be toxic [163].
Habitat.
Moist, shady and exposed areas, and roadsides.
Ethnomedicine.
The rhizome and frond decoction is used to treat chronic disorders [278]. To relieve body pain, the rhizome paste is applied [279]. Frond juice and paste are applied on cuts and bruises [53,280].
5.12.1.3. Ethnomedicine. Rhizome decoction is given to treat stomachache and gastrointestinal disorders [291]. Root juice is used to treat diarrhea and dysentery [142]. Leaf decoction is given to treat asthma and bronchitis [253]. Leaf paste is applied to stings of honeybees, centipedes, etc. The whole plant is used for eczema, scabies and jaundice [254,292].
Altitudinal zones-based categorization of Nepalese wild edible ferns
The distribution of ferns varies with altitudinal zone. Nepal is conventionally divided into three altitudinal zones, namely (a) Tarai (plain area), (b) Pahad (hilly area) and (c) Himal (mountainous area). Accordingly, based on the altitudes at which the species are found, the Nepalese wild edible vegetable ferns can be categorized into three groups:
Inhabitations-based distribution of Nepalese wild edible ferns
Ferns inhabit virtually all places where flowering plants are found [32]. Humid, moist and shady forests are most suitable for their growth. Based on the species found in different ecological habitats, the edible vegetable ferns of Nepal are categorized into epiphytes, lithophytes, terrestrials, climbers and hydrophytes, as given below. Notably, some species can be found in more than one habitat.
Epiphytes
These species grow on the bases, trunks and branches of trees, and are covered with mosses and liverworts. The rainy season, hillside evergreen forest and moist forest are most suitable for their growth. At middle elevations, up to an altitude of 2,000 m, epiphytes are sparsely distributed. Examples of epiphytes are N. cordifolia, B. lanuginosum and P. vittata.
Terrestrials
They grow in evergreen and semi-evergreen forests that are enriched with humus and organic nutrients. They also grow in shady areas, on stream banks, moist hill slopes and shaded roadsides.
Climbers
They grow on humus-rich soil. They climb other trees growing inside shaded forests, at forest edges and in tropical areas. Examples of climber species include S. palustris, L. flexuosum and L. japonicum.
Hydrophytes
They are water-loving species. They occur on the wet or marshy borders of ponds, lakes and waterfalls. They also occur in paddy fields. Examples of hydrophytes include M. quadrifolia and C. thalictroides.
Medicinal ailments categorization of Nepalese wild edible ferns
Based on the pharmacognosy of the extracts of ferns and fern-allies reported by several authors, the categorization of Nepalese wild edible vegetable ferns by the medicinal ailments they are used for is summarized in Table 2.
Important phytochemicals present in the wild edible ferns
The major bioactive compounds present in the ferns and fern-allies are flavonoids, terpenoids, steroids and alkaloids.
Terpenoids
Ptaquiloside (Figure 5) is one of the main hazardous metabolites found in ferns and is considered responsible for toxicological problems in ruminant and non-ruminant animals, and thereby in humans through milk and meat [150]. The amounts of ptaquiloside have been estimated in P. revolutum, D. cochleata and P. squarrosum (leaves) collected from different parts of India [108]. P. revolutum is considered carcinogenic [152]. It has been demonstrated that feeding of bracken fern induces urinary bladder tumors, ileal sarcomas, pulmonary adenomas and leukemia [153]. On the other hand, Yoshihira et al. have isolated several sesquiterpenes, namely pterosins A-G, J-L, N, O, Z, pterosides A-C, etc., from the fronds of Japanese bracken fern and performed cytotoxicity tests (Table 3) [154]. They mentioned that although these indanone derivatives showed some cytotoxicity to HeLa cells, none of them could induce tumors under the conditions studied.
Lygodinolide (Figure 5), present as a major constituent in L. flexuosum, is considered responsible for its wound healing property [181,182].
Conclusion
Taxonomic and ethnobotanical studies on the ferns of Nepal are well explored; however, chemical and biological investigations on Nepalese ferns and fern-allies are limited. This review briefly outlines the historical background of pteridological exploration in Nepal, updates the number of ferns consumed as vegetables by Nepalese people, outlines their ethnomedicinal uses, and compiles information on their pharmacognosy, pharmacology and phytochemistry reported from elsewhere. In Nepal, some fern species, primarily collected from nearby forests, are sold in markets. Some species are consumed by rural Nepalese as a food of last resort, and there is no proper transfer of information to the next generation about ferns as food materials and their potential medicinal values. C. decurrenti-alata, M. intermedia, B. orientale, O. japonica, C. intermedia and P. wallichiana are available in Nepal, but there is no documented report of their consumption by Nepalese people, although they are consumed in China. The traditional ethnopharmacological knowledge of ferns and fern-allies is neglected both by locals and by the scientific community.
Future recommendation
Most of the ferns and fern-allies collected as vegetable food by Nepalese people grow in their natural habitat. The lack of transfer of traditional ethnomedicinal knowledge to the new generation and unsustainable collection practices are making them vulnerable. Only in the recent past have a very few species of economic value been cultivated on local farms. From the literature evidence, we believe that investigations of these under-studied ferns can provide new research opportunities and potent bioactive phytochemicals.
Parts of some species, such as P. revolutum, P. squarrosum, C. thalictroides and P. vittata, are considered toxic, and thus warrant immediate further investigation. P. revolutum contains cyanogenic glycosides. P. squarrosum contains prunasin, which can release HCN. P. vittata hyperaccumulates arsenic. P. revolutum and P. squarrosum can induce carcinogenesis. On the other hand, the antioxidant capacity of extracts of D. boryana, D. esculentum, D. maximum, S. palustris, W. unigemmata, D. cochleata, P. squarrosum, M. quadrifolia, N. cordifolia, P. biaurita, T. gemmifera, T. zeylanica, etc. has been reported, indicating a possible role in cancer chemoprevention. Further investigation of these plant materials to isolate and identify effective anticancer agents is highly desirable.
Rhizomes of C. spinulosa and O. reticulatum are traditionally used against snake bites. Ugonins, isolated in particular from the rhizomes of H. zeylanica, show diverse bioactivities. Recently, ugonin J has been reported to be effective against coronavirus disease 2019 (COVID-19). Hence, not only the vegetative parts of the ferns but also other parts should be investigated in the search for potential drug candidates.
Author contribution statement
All authors listed have significantly contributed to the development and the writing of this article.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability statement
Data included in article/supp. material/referenced in article.
Declaration of interests statement
The authors declare no conflict of interest.
Additional information
No additional information is available for this paper.
"year": 2022,
"sha1": "85fff2fd3182eb94c17991cc1527dd3b58e1fc89",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "57e5b28461784d50cba6823398a229d08914edf8",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Saudi clinical practice guideline for the management of overweight and obesity in adults
Objective: To assist healthcare providers in evidence-based clinical decision-making for the management of overweight and obese adults in Saudi Arabia. Methods: The Ministry of Health, Riyadh, Kingdom of Saudi Arabia assembled an expert Saudi panel to produce this clinical practice guideline in 2015, in collaboration with the methodological working group from McMaster University, Hamilton, Canada, using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach, which describes both the strength of recommendation and the quality of evidence. Results: After identifying 11 questions, corresponding recommendations were agreed upon as guidance for the management of overweight and obese adults. These included strong recommendations in support of lifestyle interventions rather than usual care alone, individualized counseling interventions rather than generic educational pamphlets, physical activity rather than no physical activity, and physical activity in addition to diet rather than diet alone. Metformin and orlistat were suggested as conditional recommendations for the management of overweight and obesity in adults. Bariatric surgery was recommended, conditionally, for the management of obese adults (body mass index of ≥40 or ≥35 kg/m2 with comorbidities). Conclusions: The current guideline includes recommendations for the non-pharmacological, pharmacological, and surgical management of overweight and obese adults. In addition, the panel recommends research priorities regarding lifestyle interventions and economic analysis of drug therapy within the Saudi context, as well as the long-term benefits and harms of bariatric surgery.
In the Kingdom of Saudi Arabia (KSA), obesity (defined as body mass index (BMI) of ≥30 kg/m2) and overweight (defined as BMI of 25-29.9 kg/m2) represent an alarming threat to population health based on their high prevalence. 1 Both genders are affected, with some differences: obesity is more prevalent among females, while overweight is higher among males. 1 The high prevalence is a real concern, especially since obesity and overweight are well-known risk factors for several life-threatening conditions including type 2 diabetes, coronary artery disease, hypertension, and certain cancers, in addition to impaired quality of life. Obesity and its comorbidities are multifactorial (including genetic, environmental, psychological, social, and cultural factors), requiring multiple approaches to population management in various settings with input from a range of stakeholders.
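The BMI cut-offs above translate directly into a simple classification rule. The following minimal sketch (Python, not part of the guideline) illustrates how a BMI computed as weight divided by height squared maps onto the categories used in this document; the function and category names are illustrative assumptions.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)

def classify_bmi(value: float) -> str:
    """Map a BMI value onto the categories used in this guideline.
    Values below 25 kg/m^2 are outside the guideline's scope and are
    labelled generically here."""
    if value >= 30.0:
        return "obese"          # BMI >= 30 kg/m^2
    if value >= 25.0:
        return "overweight"     # BMI 25-29.9 kg/m^2
    return "not overweight"

# Example: 95 kg at 1.70 m gives a BMI of about 32.9, classified as obese.
value = bmi(95, 1.70)
print(round(value, 1), classify_bmi(value))
```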
The management of obesity is composed primarily of lifestyle interventions. These interventions are multicomponent treatments that involve promoting healthy lifestyle habits, dietary interventions, dietary counseling, physical exercise training, as well as psychological and behavioral interventions. Pharmacotherapies are often an adjunct to lifestyle interventions, especially in those who struggle to lose weight with lifestyle interventions alone. They can also help patients maintain weight loss. Surgical management of obesity is considered in patients who meet certain criteria. 2 Previous guideline statements available on this topic include: "Management of obesity: Saudi Clinical Guideline"; 3 "Summary of updated National Institute for Health and Care Excellence (NICE) guidance"; 4 "Canadian Task Force on Preventive Health Care, Obesity in Adults 2015;" 5 and "US Preventive Services Task Force (USPSTF), Obesity in Adults Screening, and recommendations 2012." 6 Every population has unique cultural, environmental, and lifestyle profiles that need to be addressed within the current guideline for managing obese and overweight individuals in the KSA.
Rationale for KSA obesity guideline. Reduction in obesity is an important public health consideration for the KSA, and this guideline considers the diversity of lifestyle, pharmacological, and surgical management strategies that contribute to the current development of a broad national strategy to combat obesity. The Ministry of Health (MOH) in KSA launched an evidence-based program to produce clinical practice guidelines (CPG) for the management of common diseases in KSA. Obesity was among the topics given priority in this program, given its highly negative impact on the health of individuals and of society as a whole. Compared with other guideline statements published for obesity, there is agreement among the Saudi Arabian, NICE, and USPSTF recommendations regarding lifestyle, exercise, medication, and surgical management; the Canadian guideline does not make management recommendations, but advises screening for obesity using BMI, as does the USPSTF.
The Saudi Center for Evidence Based Health Care (EBHC) of the MOH coordinated the development of clinical practice guidelines between the methodological team from McMaster University and local clinical expert panel members in Saudi Arabia. Local clinical experts from multiple disciplines were recruited through Saudi specialist societies, together with independent experts. Guidelines were based on pre-selected available evidence syntheses. Twelve topics for wave 2 were selected by the EBHC through consultation with local stakeholders and based on the selection criteria defined by the McMaster team. Guideline 14 can be found at: http://www.moh.gov.sa/depts/Proofs/Pages/Guidelines.aspx. 7 The obesity expert Saudi panel members formally prioritized the questions addressed within this guideline. An existing systematic review on the management of obesity from 2013, published by the National Health and Medical Research Council of Australia, was updated for all selected questions. Systematic searches were also conducted for information on patients' values and preferences, as well as costs and resource use specific to the Saudi context. These systematic reviews formed the basis of recommendations following the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach. Evidence profiles were developed to prepare GRADE evidence-to-decision frameworks, which allowed the guideline panel to follow a structured consensus process and to transparently document decisions made during the meeting. External peer review was conducted by a methodological expert independent of the guideline development process.
As a quality measure prior to publication, the final report has been externally peer reviewed by a methodological expert who has not been involved in this guideline development.
The guideline is considered as guidance to general practitioners, family doctors, allied health professionals, and other relevant specialists. In addition, policy makers may refer to recommendations and judgments made in this guideline. As such it is expected to exert a beneficial impact in the area of management for overweight, obesity, and associated comorbidities and mortality.
Recommendations were developed by the Saudi expert panel members and facilitated by McMaster methodologists. Panel members deliberated over prepared evidence profiles for each key question, and reached consensus on recommendations while documenting their decision making processes following the GRADE evidence to decision framework.
Methods. This CPG is part of a second wave of a larger initiative by the Saudi MOH to ensure quality and consistency of care across the KSA. The MOH's EBHC, in collaboration with the McMaster University guideline group, worked to publish and disseminate CPGs with the aim of improving the quality and safety of health care in the KSA. Through this program, the Obesity Research Center at King Saud University, Riyadh, KSA was contacted to nominate expert Saudi panelists in the field of obesity management. A brief description of the methods used to develop recommendations is provided here; details are available in a separate publication. 7 Topic selection. Topics for this guideline were selected by the panel members and all healthcare questions were prioritized using a formal online consensus process.
Literature search. We updated an existing systematic review on the management of obesity in adults from 2013, published by the National Health and Medical Research Council (NHMRC), Canberra, ACT, Australia. 8 Questions were grouped into categories of pharmacological, non-pharmacological, and surgical approaches to the management of obesity in adults. For each question, the McMaster guideline-working group (led by AM for this guideline) updated the search strategy to identify new studies, or new systematic reviews and updated meta-analyses when relevant. The McMaster group also conducted systematic searches for contextual information necessary to develop the full guideline for the KSA, including searches for information on patients' values and preferences, and costs and resource use specific to the Saudi setting (Appendix 1*).
Evidence to decision. For each question, one evidence profile was developed as well as an evidence-to-decision (EtD) table following the GRADE approach. 9,10 Profiles and tables were shared with the panel members. The guideline panel was invited to provide additional information, particularly when published evidence was lacking.
Recommendations. Final recommendations were formulated during an in-person meeting of the guideline panel members and McMaster guideline working group members in Riyadh on March 17th and 18th 2015. The GRADE evidence-to-decision framework was followed. This allowed a structured consensus process and transparent documentation of all decisions made during the meeting. Potential conflicts of interests of all panel members were managed according to the World Health Organization (WHO) rules. 11 Interpreting recommendations. Grading the quality of evidence. To facilitate the interpretation of these guidelines, the GRADE working group defines the quality of evidence as the degree of confidence that the estimate of an effect is adequate to support a particular decision, or recommendation. 9 We assessed the quality of evidence using the GRADE approach.
Quality of evidence is classified as "high", "moderate", "low", or "very low" based on panel decisions on methodological characteristics of the available evidence for a specific health care problem. The definition of each category is as follows: • High: We are very confident that the true effect lies close to that of the estimate of the effect. • Moderate: We are moderately confident in the effect estimate. The true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different. • Low: Our confidence in the effect estimate is limited. The true effect may be substantially different from the estimate of the effect. • Very low: We have very little confidence in the effect estimate. The true effect is likely to be substantially different from the estimate of effect.
Grading the strength of recommendations. The GRADE working group defines the strength of recommendation as the degree to which we can be confident that desirable effects of an intervention outweigh undesirable effects. According to the GRADE approach, the strength of a recommendation is either strong, or conditional (also known as weak) and has explicit implications. 12 Understanding the interpretation of these 2 grades, either strong or conditional, of the strength of recommendations is necessary for sound clinical decision-making ( Table 1).
Results. The guideline addresses 11 questions in the management of obesity. These questions were divided into 3 sections: I. non-pharmacological management (questions 1-8), II. pharmacological management (questions 9-10), and III. surgical management (question 11). Upon reaching a recommendation, the panel members made a consistent judgment regarding obesity as a priority problem due to the high prevalence of obesity in KSA. Similarly, with respect to values and preferences, panel members agreed that there was probably no important uncertainty in how much people value the main outcomes (namely, mortality, cardiovascular disease, weight loss, and change in BMI).
No cost effectiveness studies were identified specific to Saudi Arabia. Studies from other countries were identified in the literature update for orlistat and bariatric surgery. 13,14 Panel members provided their estimate of average unit costs for the specific intervention in Saudi Arabian Riyal (SR).
I. Non-pharmacological management: Question 1. Should lifestyle interventions compared to other interventions be used for overweight and obese adults?
Lifestyle interventions are the cornerstone of obesity treatment. These interventions are multi-component treatments that involve promoting healthy lifestyle habits, dietary interventions, dietary counseling, physical exercise training, as well as psychological and behavioral interventions. Pharmacotherapeutic agents often supplement lifestyle interventions; however, they are not considered part of the lifestyle interventions. 15 The question was primarily based on the NHMRC systematic review published in 2013. 8 The updated literature search identified a Canadian systematic review and meta-analysis. 15 The benefits of lifestyle interventions clearly outweigh the harms, and the resources required are small. As such, the option was judged to be cost-effective. The option is both feasible and acceptable and has no impact on health inequities. 15 Generation of local evidence for lifestyle modification is recommended (the research evidence used in this guideline involves behaviors that may not be applicable to the Saudi population, which could affect outcomes). An individualized package of lifestyle interventions should be prescribed to each patient according to his/her comorbidities.

Table 1 - Interpretation of strong and conditional (weak) recommendations.
For patients. Strong recommendation: Most individuals in this situation would want the recommended course of action and only a small proportion would not. Formal decision aids are not likely to be needed to help individuals make decisions consistent with their values and preferences. Conditional (weak) recommendation: The majority of individuals in this situation would want the suggested course of action, but many would not.
For clinicians. Strong recommendation: Most individuals should receive the intervention. Adherence to this recommendation according to the guideline could be used as a quality criterion or performance indicator. Conditional (weak) recommendation: Recognize that different choices will be appropriate for individual patients and that you must help each patient arrive at a management decision consistent with his or her values and preferences. Decision aids may be useful in helping individuals make decisions consistent with their values and preferences.
For policy makers. Strong recommendation: The recommendation can be adapted as policy in most situations. Conditional (weak) recommendation: Policy making will require substantial debate and involvement of various stakeholders.

Recommendation 1.
The panel recommends lifestyle intervention rather than usual care alone in overweight and obese adults (strong recommendation, moderate quality evidence).

Question 2. Should intensive lifestyle interventions compared to usual care be used for overweight and obese adults?
Intensive lifestyle interventions (ILI) involve more intensive dietary, physical, and behavioral counseling, delivered by multidisciplinary teams of nutritionists, physicians, behavioral therapists, and exercise trainers. [16][17][18] A low calorie diet (800-1200 Kcal/day) or very low calorie diet (<800 Kcal/day) is typically included in the ILI. Also included are moderate to intense physical activity, consisting of at least 30 minutes of activity a day or the equivalent of consuming 1800-2500 Kcal/week, and individualized behavioral goal setting, delivered at weekly or bi-monthly visits for one to several years. Intensive lifestyle interventions are therefore reserved for populations at high risk of obesity. Outcomes for this intervention are consequently longer-term, such as mortality and cardiovascular events, which are direct patient-important endpoints. [16][17][18] The question was primarily based on the Finnish Diabetes Prevention Study (DPS) and the Look AHEAD (Action for Health in Diabetes) trial. [16][17][18] The updated literature search identified no new studies. Most panel members thought that the benefit in terms of prevention of diabetes and associated cost of care outweighs the downsides. The panel judged the option of ILI to be probably acceptable to key stakeholders and the feasibility to vary and perhaps not be possible on a population level. The panel judged the feasibility to be possible in selected settings where human and financial resources are available, as barriers include the resources and availability of health care professionals to support intensive lifestyle modification. The panel judged the provision of ILI to probably increase health inequity. Required resources were judged by the expert panel to not be small and probably not cost-effective. In addition, generation of local evidence for lifestyle modification is recommended (the research evidence used in this guideline involves behaviors that may not be applicable to the Saudi population, which could affect outcomes).

Recommendation 2.
The panel suggests using intensive lifestyle modification rather than usual or minimal care in overweight and obese adults (conditional recommendation, moderate quality evidence).
Remarks. This recommendation pertains to those who are at higher risk for obesity-related comorbidities, such as diabetes, as they would benefit more from intensive lifestyle interventions. Well-organized and standardized programs dedicated to lifestyle intervention will be required for implementation.

Questions 3 & 4. Should physical activity and diet compared to diet, or physical activity alone be used for overweight and obese adults?
Reduced-energy diets and increased energy expenditure through physical activity are the main components of lifestyle intervention, which is the first-line treatment of choice for obesity management. Although evidence supporting the effectiveness of physical activity alone on weight loss is disappointing, studies do support the effectiveness of physical activity for preventing weight gain and the incidence of diabetes. Additional potential benefits include: improved mobility, physical function (strength), decreased joint pain (associated with arthritis), decreased cardiovascular risk, and improved bone density. Studies informing these questions are derived from the NHMRC systematic review and include 2 Cochrane reviews 19,20 and 4 more recent randomized controlled trials (RCTs). [19][20][21][22][23][24] The updated literature search identified no new studies. Desirable consequences clearly outweigh undesirable consequences in most settings, and no harmful outcomes were identified. However, the panel judged that the desirable anticipated effects of exercise or diet alone are probably not large. As for resource use, out-of-pocket expenses for access to and travel to indoor recreation centers are prohibitive (1000-3000 SR per month), and hot weather is a barrier to outdoor exercise. For diet, costs are associated with the use of specialized diets. Purchasing low-energy diet items to replace meals may be costly for individuals, and their use requires frequent monitoring by healthcare professionals. The relevant healthcare professional to monitor use may be a general practitioner, dietician, or specialist nurse, depending on access to the type of provider. Therefore, health professional visit fees would also need to be considered among the resources required. A per visit fee for such providers was estimated by the expert panel as follows: 300 SR for a general practitioner, 250 SR for a diabetic educator or nutritionist, and 200 SR for a behavioral specialist. The panel judged that exercise and diet would be both feasible to implement and acceptable to most stakeholders. The panel also judged that there is no important uncertainty or variability in how much people value this outcome.

Recommendations 3 & 4.
The panel recommends physical activity rather than no physical activity in overweight and obese adults (strong recommendation, low quality evidence).
The panel recommends physical activity in addition to diet rather than a diet alone in overweight or obese adults (strong recommendation, low quality evidence).
Question 5. Should nutrition and physical activity counseling compared to health education pamphlets be used for overweight and obese adults?
While counseling for nutrition and physical activity represents the core component of lifestyle interventions for obesity management, this brief and simple intervention (education pamphlets) is intended to be delivered by primary care physicians within the constraints of limited office time and limited behavioral counseling skills. The emphasis on brevity and less intensive behavioral/psychotherapeutic aspects may favor tolerability for both patients and practitioners, as evidenced by the lower dropout rate (13%) compared to traditional lifestyle interventions (>20%). 25 The question was based on one moderate-sized trial, 25 derived from the NHMRC systematic review. 8 The updated literature search identified no new studies. The panel judged the cost of providing nutrition and physical exercise information to probably not be small, as physicians must provide time to discuss tailored information and to prepare individualized information. However, the panel did judge the option to be cost effective. The panel judged the provision of nutrition and physical activity information to be acceptable and probably feasible, with no impact on health inequities.
There is health benefit without downsides other than the cost of implementation and there is no doubt among members that an individualized approach in overweight and obese individuals is better than a generic approach. More research on the methods of individualized interventions is required.
Recommendation 5.
The panel recommends individualized counseling interventions rather than generic educational pamphlets in overweight or obese adults (strong recommendation, low quality evidence).
Question 6. Should iso-caloric low-fat compared to moderate-fat diet be used for overweight and obese adults?
People with diabetes are advised to reduce fat intake in order to decrease their risk of cardiovascular disease. The practicality of adherence to very low-fat diets, which are usually less appetizing, and hence the benefits of such diets for reducing weight and improving cardiovascular risk, are a matter of debate. Three studies [26][27][28] informing this question are derived from the NHMRC systematic review. 8 There was no meta-analysis provided in the original review, but this analysis was undertaken for the current report to estimate the effect on weight reduction and lipid profile. One of the RCTs reported additional outcomes of systolic and diastolic blood pressure 26 that were not evaluated by the other trials. 27,28 The updated literature search identified no new studies. Individual dietary programs to create an energy deficit may be more cost-effective than broad general practitioner advice if delivered by an accredited practicing dietitian. Costs are associated with the use of specialized diets. Purchasing very low-fat diet items to replace meals may be costly for individuals, and their use requires frequent monitoring by healthcare professionals. The relevant healthcare professional to monitor use may be a general practitioner with special training, a dietitian, or a specialist nurse, depending on access to the type of provider. The panel judged the use of low-fat diets to be both acceptable and feasible, and the impact on inequity to be not applicable. In addition, the panel judged that, as far as how much people value this outcome, there is no important uncertainty or variability. The panel judged the balance between desirable and undesirable consequences to be uncertain due to lack of information on undesirable effects. Therefore, the panel suggests that RCTs be carried out with adequate follow-up duration that compare iso-caloric diets with fat content lower than 20%, approximately 20%, and approximately 30%.
Recommendation 6.
The panel makes no clinical recommendation regarding iso-caloric low-fat versus moderate-fat diets. The panel suggests randomized controlled trials be carried out with adequate follow-up duration that compare iso-caloric diets with fat content lower than 20%, approximately 20% and approximately 30% (low quality evidence).
Remarks. Panel members judged that there was not enough evidence to choose one option over another. If any diet is used, fat content should be determined according to the Acceptable Macronutrient Distribution Range (AMDR) and fatty acids subtypes should be defined (saturated fatty acids, trans fatty acids, and Omega 3 and 6 fatty acids) in order to evaluate benefits or harms.
Question 7. Should portion-controlled diet compared to non-portion controlled diet be used for obese and overweight adults?
The meals of a portion-controlled diet are expected to improve long-term adherence to diet since they are easily incorporated into individuals' lifestyles owing to their commercial availability and convenient design. This question addresses the effect of a portion-controlled diet against a standard diet and was based on a single long-term (36 months) RCT identified in the NHMRC systematic review. 29 The updated literature search identified no new studies. Costs are associated with the use of specialized diets. The monthly resources required for commercially prepared portion-controlled diets were judged by the panel to most likely not be small. As for home-made portion-controlled diets, due to the lack of data on resources required, the panel was uncertain whether such resources are small. Purchasing diet items to replace meals may be costly for individuals, and their use requires frequent monitoring by healthcare professionals. The relevant healthcare professional to monitor use may be a general practitioner, dietician, or specialist nurse, depending on access to the type of provider. Therefore, health professional visit fees would also need to be considered (their estimated costs were previously mentioned in questions 3 and 4). The balance between desirable and undesirable consequences is closely balanced or uncertain, as it is not clear whether the desirable anticipated effects are large and whether the overall undesirable effects are small. The panel judged the option to be both feasible and acceptable to key stakeholders to implement. The panel also judged that increased associated costs would likely cause health inequities to increase. The panel judged that there is no important uncertainty regarding the variability in how much people value this outcome. More research on portion-controlled diet strategies for weight loss is suggested. Standardization of the diet, if made at home, may be a barrier to successful implementation.
Question 8. Should psychotherapy-cognitive behavioral therapy (CBT) compared to no cognitive behavioral therapy be used for overweight and obese adults?
Multicomponent lifestyle interventions that include diet, exercise, and behavior modification are a common strategy for weight loss, associated with moderate weight reduction. 15 Psychotherapy is a core component of behavioral modification. In order to understand the role of psychotherapy, this question focuses on cognitive behavioral therapy (CBT), which is an established psychotherapeutic treatment of choice for weight loss. 4 Cognitive behavioral approaches offer individuals the opportunity to identify behavioral and thinking patterns that relate to their particular weight problems. 30 Cognitively oriented weight programs have been developed since the 1970s to reach the growing number of overweight men and women. More recently, RCTs evaluating the impact of psychotherapy have identified CBT as superior for reducing binge-eating compared to other psychotherapies. 31 The findings in the NHMRC systematic review (2013) were the reference for this question, and the group did not find new studies in the literature related to CBT. There are probably large beneficial effects of CBT, and other than the high required resources, there are no anticipated adverse consequences. The panel judged that the resources are probably not small in Saudi Arabia, because psychologists or well-trained primary care physicians are required for this intervention and training in psychotherapy would be required. The panel judged the provision of CBT to be cost effective for overweight and obese adults. However, the panel was concerned about applying this recommendation to complex populations with suspected or confirmed eating disorders, who would require specialized psychiatric assessment. In addition, there is a need to conduct a systematic review of observational studies in Saudi Arabia among both men and women to assess the role of CBT in the management of obesity.
Recommendation 8.
The panel suggests CBT rather than no such therapy in overweight and obese adults (conditional recommendation, low quality evidence).
Remarks. This recommendation pertains to general obese populations. Individuals with suspected or confirmed eating disorders or depression require specialized psychiatric assessment and management. Cognitive behavioral therapy, as interpreted in this intervention, is delivered by a health care worker with special competence in CBT and therefore requires consideration in terms of implementation and the need for health professional training. Psychotherapy should not be a substitute for psychiatric assessment in any individual with a suspected or confirmed eating disorder or depression.
II. Pharmacological management
Question 9. Should metformin compared to no metformin be used for overweight and obese adults?
The cornerstone of obesity treatment is lifestyle change. In view of the low success rate in achieving weight loss and the even lower success rate for maintaining this weight loss, drug therapy for obesity in conjunction with lifestyle changes is often used. [3][4][5]15,32 Evidence suggests that metformin therapy alone contributes to weight loss, 15 although its use for obesity is considered off-label in most jurisdictions. Evidence informing this question is derived from the NHMRC review and includes a meta-analysis on insulin-sensitizing drugs for weight loss in women pooled across 8 trials, 33 as well as the Diabetes Prevention Program Outcomes Study. 34 Our literature update also identified one RCT on hypertensive patients using low-dose metformin. 35 The panel judged the resources required for metformin to be small and the cost effectiveness to be uncertain. The monthly cost for an adult taking 850 mg BID is estimated to be 20 SR. The panel judged metformin to be feasible and acceptable, and that benefits may be larger in patients with pre-diabetes and those with risk factors for diabetes. Because the desirable anticipated benefits of metformin are not large, and individuals with pre-diabetes and other diabetes risk factors may experience larger benefits, the panel judged that desirable consequences probably outweigh undesirable consequences in most settings. The undesirable side-effects vary. Future RCTs are required on unselected obese and overweight populations that report all patient-important outcomes (namely, quality of life, function, morbidity, and mortality) rather than surrogate outcomes only. Economic analysis in the KSA health care system is also recommended.
Recommendation 9.
The panel suggests metformin in obese or overweight adults (conditional recommendation, low quality evidence).
Question 10. Should orlistat compared to no orlistat be used for overweight and obese adults?
Orlistat is a lipase inhibitor, which prevents absorption of approximately 25% of fat consumed and is the main anti-obesity drug approved for long-term treatment of obesity. 36 Evidence suggests that orlistat therapy alone contributes to weight loss. 8 Its high safety profile is implied by its over-the-counter availability in some jurisdictions, such as the US. This question was informed by a meta-analysis (of 11 trials) included in the NHMRC review, and a Cochrane review (4 RCTs) also derived from the NHMRC, in which the adverse events, mortality, and myocardial infarction rates are summarized. 13,37 The panel judged that, overall, the benefits of orlistat probably outweigh the anticipated important adverse events and that implementation is feasible and acceptable. The panel recommends that patients be advised to expect adverse events, to avoid fatty meals, and to consider vitamin supplementation (since orlistat decreases absorption of fat-soluble vitamins). Regarding health inequities, the panel judged that there is a probable increase in health inequities; therefore, in situations with limited resources it would also be reasonable not to use orlistat. The panel judged the resource use associated with orlistat to be small and probably cost effective, and recommended that economic analyses in the Saudi Arabian context be undertaken. Studies investigating whether and when to use multivitamin supplementation with orlistat are also recommended.
Recommendation 10.
The panel suggests orlistat in obese and overweight adults (conditional recommendation, moderate quality evidence).
Question 11. Should bariatric surgery compared to non-surgical therapies be used for overweight and obese adults?
Bariatric surgery in general carries a risk of morbidity and peri-operative mortality. It is therefore considered when other treatments have failed. Risks associated with bariatric surgery include bleeding (0.5%), thromboembolic events (0.8%), wound complications (1.8%), deep infection-abscess or leak (2.1%), pulmonary complications (6.2%), miscellaneous complications (4.8%), cholecystitis, and mortality (0.52%). 38 Large observational studies of bariatric surgery confirm its effectiveness for major weight reduction and improvement in comorbidities, which are reported in small randomized trials. 39 Our literature update identified a meta-analysis of bariatric surgery within the 2014 Cochrane review by Colquitt. 38 Nevertheless, the comparison of sleeve gastrectomy to medical therapy, which is the comparison of interest to the panel members, was not included in this review. Panel members identified the RCT by Schauer et al, 40 2014, which specifically evaluated sleeve gastrectomy in comparison to medical therapy, as important for consideration in the Saudi context, and this RCT informs this question. The panel judged that health inequities would probably increase in relation to surgical interventions, that implementation considerations need to address pre-operative screening requirements by trained physicians for evaluation of comorbidities and other causes of obesity, including eating disorders and depression, and that postoperative lifelong follow-up by interdisciplinary teams (trained physician, surgeon, clinical nutritionist, psychotherapist) is required to prevent and manage dietary deficiencies and other complications. These health professional resource requirements represent additional implementation considerations. The panel judged that this option is feasible and acceptable to implement. All in all, and in most settings, the desirable consequences of this surgical intervention were judged by the panel to probably outweigh the undesirable ones. Anticipated beneficial effects are large, and risks are probably small. Costs are judged to be not small but probably cost-effective. The intervention is acceptable and feasible. Certain points were clearly identified by the panel: the data are limited to sleeve gastrectomy, there are associated inequities, and implementing this intervention needs consideration of screening resources and integrated postoperative follow-up care. Long-term evaluation of the benefits and complications related to bariatric surgery is required, as well as evidence from studies involving obese individuals with lower BMI (30-35 kg/m2).
Remarks. This recommendation pertains to individuals with larger BMI since anticipated benefits are larger in the setting of individuals who are at higher health risk due to obesity when considering risks associated with surgery. It also considers implementation requirements of interdisciplinary teams to prevent and manage lifelong dietary deficiencies, operative complications and weight management.
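As a hedged illustration of the BMI thresholds cited for bariatric surgery in this guideline (BMI ≥40 kg/m2, or ≥35 kg/m2 with comorbidities), the sketch below encodes only that rule; the function name and the idea of a boolean comorbidity flag are our own simplifications, and the requirements for failed non-surgical therapy, pre-operative screening, and lifelong follow-up noted above are deliberately not modelled.

```python
def meets_bariatric_bmi_criteria(bmi: float, has_comorbidities: bool) -> bool:
    """BMI-based criterion only, as stated in this guideline's conclusion:
    BMI >= 40 kg/m^2, or >= 35 kg/m^2 with obesity-related comorbidities
    (e.g., type 2 diabetes). This is not a clinical decision tool."""
    return bmi >= 40.0 or (bmi >= 35.0 and has_comorbidities)

# Usage: the same BMI can qualify or not depending on comorbidity status.
print(meets_bariatric_bmi_criteria(36.5, has_comorbidities=True))   # True
print(meets_bariatric_bmi_criteria(36.5, has_comorbidities=False))  # False
```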
Discussion. Obesity is considered a disease, and as such national and international efforts have to be intensified for its prevention and management. Weight loss results in numerous health benefits even if it is modest (5-10% of body weight); however, greater weight loss produces greater health benefits. 41 The magnitude of the health problem of overweight and obesity is enormous in the KSA, and therefore the management of overweight and obesity in adult Saudis became an essential part of the larger initiative of the MOH to establish a program of rigorous development of guidelines. The ultimate goals are to provide guidance for clinicians and other healthcare decision makers and to reduce unnecessary variability in clinical practice across the Kingdom.
Clinicians, patients, third-party payers, institutional review committees, other stakeholders, or the courts should never view these recommendations as dictates. As described in other guidelines following the GRADE approach, no guideline or recommendation can take into account all of the often-compelling unique features of individual clinical circumstances. Therefore, no one charged with evaluating clinicians' actions should attempt to apply the recommendations from these guidelines by rote or in a blanket fashion.
The panel members of this CPG emphasized the local context, patients' values and preferences, and culture. This is expected to increase acceptance by patients and relevant health care providers. The recommendations in this guideline share some similarities with other internationally available recommendations. Previous reports, such as the American Heart Association/American College of Cardiology/The Obesity Society guidelines 2 and the Endocrine Society CPG on the pharmacological management of obesity, 32 all share the Saudi CPG's focus on diets, exercise, and behavioral approaches for obesity in adults.
Lifestyle interventions, which are considered the cornerstone of obesity management, are multicomponent treatments that involve promoting healthy lifestyle habits, dietary interventions, dietary counseling, physical exercise training, as well as psychological and behavioral interventions. Although the current guideline focuses on lifestyle modifications as a means of managing overweight and obesity, the panel members recognize that permanent weight loss can be difficult to achieve with lifestyle interventions alone. Therefore, pharmacological and surgical approaches for weight management were included in the current guideline.
The panel decided to focus on 2 of the most commonly used medications to promote weight loss in KSA, orlistat and metformin, and to postpone the inclusion of other US FDA-approved medications due to the lack of Saudi studies on their use. Although the Endocrine Society has recently published a CPG on the pharmacological management of obesity, in which several medications commonly prescribed in the United States were discussed, they encouraged additional scrutiny of medications available in the United States by the European Medicines Agency and the funding of additional long-term clinical trials in the European Union and elsewhere to study the safety and efficacy of these medications. 32 As regards the surgical approach in the management of obesity, the panel identified that data are limited to sleeve gastrectomy techniques. Implementation considerations need to address pre-operative screening requirements by trained physicians for the evaluation of comorbidities and other causes of obesity, as well as post-operative lifelong follow-up by interdisciplinary teams (trained physician, surgeon, clinical nutritionist, psychotherapist), which are required to manage body weight and to prevent and manage dietary deficiencies and other complications.
It should be noted that this guideline did not address all the questions related to obesity and overweight management. During the initial phase of this project, 15 questions were prioritized as potentially relevant. However, only 11 questions were addressed for various reasons. For instance: the question of: "Should laparoscopic adjustable gastric band surgery rather than no laparoscopic adjustable gastric band surgery be used in obese adults?" was not addressed as the panel members agreed that this procedure was not relevant in the Saudi context as it has been replaced by other surgical procedures. Further questions were also not addressed such as: "Should intensive lifestyle intervention rather than group education sessions be used in overweight and obese adults?" and "Should motivational interviewing rather than no motivational interviewing be used in overweight and obese adults?" since group education and motivational interviewing were considered by the panel members to be a part of the intensive lifestyle modification already addressed in a separate question.
Developing a CPG with a rigorous approach is a challenging task. 42 To ensure the neutrality and applicability of the recommendations reached in these CPGs, and to avoid under-representing any pertinent angle or point of view, the EBHC of the MOH ensured that pressing health issues were addressed, that a wide range of specialties was represented in the expert panel, and that there was collaboration with the McMaster Working Group, an internationally renowned specialized institution employing well-designed and standardized guideline development methods.
The following questions were identified by the panel members as priority to be answered in the future update of this guideline: 1. Should more novel anti-obesity drugs compared to classical anti-obesity drugs be used in overweight and obese adult patients? 2. Should specific types of bariatric surgery be offered to obese Saudi patients, as compared to the most commonly performed surgery? 3. Should herbal and traditional/cultural medicine (whether locally produced or imported) be encouraged in overweight and obese adult patients? 4. What are optimal intervention strategies for the prevention of obesity? 5. What are optimal strategies for the management and prevention of obesity in children and adolescents?
Due to the lack of data from Saudi Arabia in certain areas, the panel was not able to reach some recommendations. Therefore, the panel has highlighted that research is needed in certain fields, such as the use of portion-controlled diets, the generation of local evidence for lifestyle modification applicable to the Saudi population, and assessment of the methods of individualized interventions as opposed to generic approaches. Other specific proposals were agreed upon by the panel members, such as performing a systematic review of observational studies in Saudi Arabia among both men and women to assess the role of cognitive behavioral therapy in the management of obesity; designing RCTs with adequate follow-up duration that compare iso-caloric diets with fat content lower than 20%, approximately 20%, and approximately 30%; and conducting RCTs on unselected obese and overweight populations that report all patient-important outcomes (namely, quality of life, function, morbidity, and mortality) rather than surrogate outcomes only. In addition, economic analysis in the Saudi Arabian health care system, studies investigating whether and when to use multivitamin supplementation with orlistat, and long-term evaluation of the benefits and complications related to bariatric surgery, as well as evidence from studies involving individuals with lower BMI (30-35 kg/m2), are additional examples of recommended studies.
In conclusion, the current Saudi CPG addresses the management of obesity and overweight in adults in Saudi Arabia. For the non-pharmacological management of overweight and obese adults, the panel members strongly recommend lifestyle intervention rather than usual care alone, individualized counseling interventions rather than generic educational pamphlets, physical activity rather than no physical activity, and physical activity in addition to diet rather than diet alone. Some conditional recommendations were also reached, such as cognitive behavioral therapy rather than no such therapy, and using intensive lifestyle modification rather than usual or minimal care. As for the pharmacological management, metformin and orlistat were suggested as conditional recommendations for the management of overweight and obesity in adults. Finally, bariatric surgery was recommended, conditionally, for the management of obese adults (BMI ≥40 or ≥35 kg/m2 with comorbidities).
"year": 2016,
"sha1": "ac81c44f67de5462377949d52fbc56eae34aa434",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.15537/smj.2016.10.14353",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2962f4a650fc6dbc250b0692a5fe858948dd95f3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Etiologies of the Hearing Loss and Their Impacts at the Patients Worn Hearing Aid in the International Center of Hearing Correction in Abidjan (ICHC)
Objective: To determine the etiologies of hearing loss and their impacts in patients fitted with hearing aids at the International Center of Hearing Correction in Abidjan (ICHC). Material and Method: This was a cross-sectional, analytical study carried out at the ICHC from July 1999 to June 2010. It concerned the files of patients fitted with hearing aids at the center. Patients initially fitted at another center but followed at the ICHC were excluded. Data were collected from the patients' medical files and concerned etiology, age, type and degree of hearing loss, as well as prosthetic gains and satisfaction. Results: Fifteen etiologies were listed, the three most frequent being meningitis (17.9%), presbyacusis (17.5%) and chronic otitis media (12.1%). For the seven most frequent etiologies, the hearing loss was severe or profound in 87.6% of cases on the right and 82.8% on the left. The prosthetic pure tone gain was significant in cases of sound trauma and sudden hearing loss. The prosthetic speech reception threshold gain was only significant in cases of presbyacusis and sudden hearing loss. Satisfaction with the hearing aids was significant in cases of presbyacusis, sudden hearing loss and sound trauma. Conclusion: The main etiologies were meningitis and presbyacusis. The tonal and speech prosthetic gains were significant in cases of sudden hearing loss.
Introduction
The hearing aid is one of the first technical and therapeutic advances for managing hearing loss. Hearing impairment is a social handicap. Three hundred and fifty million people in the world, or 7% of the world population, are affected by hearing disorders according to the WHO [1]. Only 20% of these patients can hope to improve their hearing acuity by medical and surgical means that restore the biological mechanism of hearing. The other patients must make do with a hearing aid [2]. Hearing loss may be tied to ear disease alone or be part of a general disease. In the latter case, the hearing aid may be only one of the means of care. It is also possible to prevent some etiologies. According to Luzia, in the city of Salvador, state of Bahia (Brazil), the main etiological factor responsible for hearing loss in the evaluated population (53 children and adolescents) was maternal rubella, accounting for 32% of the cases of hearing loss, followed by pyogenic meningitis with 20%. Otitis media represented 4% and ototoxicity 2%. Bilateral sensorineural hearing impairment was present in 62% of the population. Thirty subjects (56%) had profound hearing loss. A hearing aid was worn by 58% of the population [3]. Hearing loss can decrease the ability to communicate. Research has indicated the existence of a critical period within the first years of a child's life for speech acquisition. Lack of proper auditory stimulation in childhood may preclude the complete development and maturation of central auditory pathways [4]. In Ivory Coast, there are no data concerning the etiologies of hearing loss in patients fitted with hearing aids. Such a study allows screening of the indication for a hearing aid or orientation toward a cochlear implant, according to our practice. This motivated the present work, which aimed to determine the etiologies of hearing loss and their impact in patients fitted with hearing aids.
Patients and Method
This was a cross-sectional, analytical study carried out at the International Center of Hearing Correction (ICHC) from July 1999 to June 2010, a period of 10 years. It concerned patients fitted with hearing aids at this center, of all ages and both sexes, regularly followed and evaluated with respect to the prosthetic pure-tone gain, the prosthetic speech reception threshold gain and satisfaction. Prosthetic follow-up was performed in the first month and then every fourth month during the first year. Beyond the first year, controls were carried out according to the patient's needs, and the prosthetic evaluation was updated at each control. Patients fitted in another center and secondarily followed at the ICHC for adjustment problems or maintenance of their hearing aid were excluded. Data were collected on an investigation card from the patients' medical files. The studied parameters were etiology, age, type of hearing loss, degree of hearing loss, prosthetic pure-tone gain, prosthetic speech reception threshold gain and patient satisfaction. These parameters were obtained from the medical files kept at the ICHC and updated after each control. The degree of hearing loss was determined from the average hearing loss according to the classification of the International Bureau of Audiophonology (BIAP) [5]. The prosthetic pure-tone gain was the difference between the tonal hearing threshold of the ear without the hearing aid (unaided) and the tonal hearing threshold of the ear with the hearing aid (aided) [6]. The prosthetic speech reception threshold gain was the difference between the speech comprehension threshold in the unaided and the aided condition; it was determined from the intelligibility curve by measuring, in decibels at the 50% ordinate, the distance separating the unaided (pathological) curve from the aided curve [6]. The level of satisfaction was obtained from the following items: advice on use; appreciation of device quality; clarity; sound sensation; adaptation to the sound environment; ease of use of the device; assessment of the result achieved with the device; conversation in a quiet environment; listening to the television; use of the phone; satisfaction of the patient's relatives; occasional whistling (Larsen effect); conversation in a noisy environment; conversation in a car; conversation in a group; time the device is worn; and frequency of battery renewal. Each item was scored as follows: 0 = not satisfied, 1 = little satisfied, 2 = satisfied, and the points were totalled. This allowed patients to be stratified into: satisfied (2/3 to 3/3 of the total score), little satisfied (1/3 to 2/3) and not satisfied (0 to 1/3).
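As a rough illustration of the arithmetic just described, the sketch below (Python) computes a prosthetic pure-tone gain as the unaided/aided threshold difference and stratifies a satisfaction questionnaire total into the three bands; the item scores and thresholds are hypothetical examples, and the band boundaries follow our reading of the stratification described here, not the study's own software.

```python
# Illustrative sketch (not the study's software). All numbers are made up.

def pure_tone_gain(unaided_threshold_db, aided_threshold_db):
    """Gain = unaided (not-worn) threshold minus aided (worn) threshold, in dB."""
    return unaided_threshold_db - aided_threshold_db

def satisfaction_band(item_scores):
    """Each item scored 0 (not satisfied), 1 (little satisfied) or 2 (satisfied).
    The total is stratified into thirds of the maximum possible score."""
    total = sum(item_scores)
    maximum = 2 * len(item_scores)
    if total >= 2 * maximum / 3:
        return "satisfied"
    if total >= maximum / 3:
        return "little satisfied"
    return "not satisfied"

# Hypothetical patient: average unaided threshold 85 dB HL, aided 55 dB HL,
# and 16 questionnaire items answered.
print(pure_tone_gain(85, 55))                       # 30 dB of prosthetic gain
print(satisfaction_band([2, 2, 1, 2, 1, 2, 2, 1,
                         2, 2, 1, 1, 2, 2, 1, 2]))  # "satisfied"
```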
The investigation cards were processed and analyzed with the EPI INFO software. The statistical association between the different etiologies and age was assessed with Pearson's chi-square test. The same test was used to determine the influence of these etiologies on the prosthetic pure-tone gain, the prosthetic speech reception threshold gain and patient satisfaction. The significance threshold for this test was set at 5%. The ethical aspects of the study were approved by the ethics and compliance committee.
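For readers unfamiliar with the test mentioned above, the minimal sketch below (Python with SciPy) shows how a Pearson chi-square test of association between an etiology and age group would be run at the 5% threshold; the contingency counts are invented for illustration, and EPI INFO remains the software actually used in the study.

```python
# Minimal sketch of a Pearson chi-square test of association.
# The counts below are invented for illustration only.
from scipy.stats import chi2_contingency

# Rows: etiology present / absent; columns: children / young adults / elderly.
table = [[46, 44, 6],
         [88, 156, 196]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
print("significant at 5%" if p_value < 0.05 else "not significant at 5%")
```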
Results
536 patients were listed: 329 men and 207 women, i.e. a sex ratio of 1.5. The average age of the patients was 36 years (range: 2 to 92 years) with a standard deviation of ±23.94. There were 134 children, or 25% of cases. Behind-the-ear hearing aids were used in 69% of cases (n = 370), intra-auricular hearing aids in 29.5% of cases (n = 158) and hearing spectacles in 1.5% of cases. The hearing aid fitting was bilateral in 62.7% of cases (n = 336) and unilateral in 37.3% of cases (n = 200, including 160 cases of bilateral hearing loss). Fifteen etiologies were observed, led by meningitis in 17.9% of cases, presbyacusis in 17.5% of cases and chronic otitis media in 12.1% (Figure 1). For the comparative analysis, the seven most frequent etiologies were selected: meningitis, chronic otitis media, presbyacusis, ototoxicity, sudden hearing loss, sound trauma and cranial trauma. According to Table 1, 34.2% of etiologies among children were related to meningitis and 18% to chronic otitis media. Among young adults, meningitis was noted in 22% of cases, followed by chronic otitis media in 13.7%, ototoxicity in 12.5% and sound trauma in 12.1% of cases. Finally, presbyacusis represented 55% of etiologies among the elderly. Meningitis, chronic otitis media, presbyacusis, ototoxicity, sudden hearing loss and sound trauma were significantly age-related (Table 1). The different etiologies produced sensorineural hearing loss in 76.1% of cases, mixed hearing loss in 20.8% of cases and conductive hearing loss in 2.9% of cases. For the seven most frequent etiologies, the degree of hearing loss was severe or profound in 87.6% of cases on the right and 82.8% on the left; the association was significant for chronic otitis media on the right and for sound trauma on both sides (Table 2 and Table 3). The prosthetic pure-tone gain was significant among patients who presented with sound trauma or sudden hearing loss (Table 4). The prosthetic speech reception threshold gain was significant among patients who presented with presbyacusis or sudden hearing loss (Table 5). Patients were satisfied, little satisfied and unsatisfied in 68.4%, 22.7% and 8.7% of cases, respectively. Satisfaction was significant in patients fitted for hearing loss due to meningitis, presbyacusis, sound trauma, cranial trauma and sudden hearing loss (Table 6).
Limits
This is a retrospective study, so additional information about the history of certain patients was not available. Some hearing losses could not be sufficiently investigated to identify an etiology, owing to the lack of means of exploration.
The hearing loss of patients fitted at the ICHC was essentially acquired. The main etiologies were meningitis and presbyacusis. Among children and young adults, the etiologies were dominated by meningitis and chronic otitis media, and meningitis was significantly associated with children. According to Ag Mohamed [7], meningitis represented 54.3% of the etiologies of hearing loss among children and young people at the school for young deaf persons in Bamako in 1996. It also represents the main cause of hearing loss among children in Western countries, probably because of the harmful action of bacterial toxins on the inner ear during meningitis. This hearing loss can be aggravated in case of central involvement [8]. This underlines the need to continue raising awareness among the population of the importance of preventing this condition through vaccination. Further studies must nevertheless be undertaken to determine whether current cases of meningitis are caused by other bacteria not covered by current vaccines. Chronic otitis media, ototoxicity, sound trauma and sudden hearing loss were significantly related to young adults. The after-effects of chronic otitis media led to sensorineural hearing loss in 52% of cases in Ali's series [9]. In this condition, the severity of the sensorineural hearing loss is a function of disease duration, which exposes the inner ear to toxins derived from inflammatory mediators during chronic otitis media or from ototoxic ear drops used for a previous treatment. These toxins can penetrate the inner ear through the round window and lead to an irreversible impairment of bone conduction [10].
Anne Heuschkel reported a predominance of sudden hearing loss among older subjects (median age 60 years) in a series of 490 patients [11]. The absence of subsidies for hearing aids would explain the large number of young people in our series, since they need their full capacities to find or keep a job. Among older subjects, the main etiology was presbyacusis. It affected 30% of the population over 60 years of age in France [12] and is the most frequent form of hearing loss [13]. The increase in its prevalence may reflect increasing life expectancy, but it favors social isolation and depression in these patients. The hearing loss noted at the ICHC was essentially severe and profound sensorineural hearing loss. The hearing aid was well suited to severe hearing loss. In profound hearing loss it can nevertheless be the first step of care, providing auditory stimulation while awaiting cochlear implantation in cases of deaf-muteness or meningitis. Technical and financial limitations justified the choice of the hearing aid in profound hearing loss. We noticed that a prosthetic pure-tone gain was most frequent in patients with hearing loss secondary to meningitis and presbyacusis, but this gain was significant only in patients with sound trauma and sudden hearing loss. The prosthetic speech reception threshold gain was frequent in cases of hearing loss secondary to meningitis, presbyacusis, chronic otitis media and sudden hearing loss, but was significant only in cases of presbyacusis and sudden hearing loss. This may reflect neuronal plasticity linked to hearing rehabilitation and favored by the sound environment maintained by the hearing aid. Such plasticity would allow reorganization and reactivation of the neurons coding the hearing zones in decline due to a cochlear lesion, thereby improving perception and auditory discrimination. This neuronal plasticity would be established faster in cases of sudden hearing loss [13], which would explain why the prosthetic pure-tone gain and the prosthetic speech reception threshold gain were significant in sudden hearing loss. In sum, satisfaction after hearing aid fitting was also significant in patients suffering from hearing loss due to presbyacusis, sudden hearing loss and sound trauma.
Conclusion
The main etiologies were meningitis and presbyacusis. The prosthetic pure-tone gain and the prosthetic speech reception threshold gain were significant in cases of sudden hearing loss. Patient satisfaction was significant in cases of sudden hearing loss, presbyacusis and sound trauma. In cases of non-satisfaction, a cochlear implant is needed.
Table 1. Distribution of etiologies according to age.
Table 2. Distribution of etiologies according to the right-ear hearing loss.
Table 3. Distribution of etiologies according to the left-ear hearing loss.
Table 4. Distribution of etiologies according to the prosthetic pure-tone gain.
Table 5. Distribution of etiologies according to the prosthetic speech reception threshold gain.
Table 6. Distribution of etiologies according to satisfaction. | 2018-12-27T14:12:56.200Z | 2018-11-08T00:00:00.000 | {
"year": 2018,
"sha1": "1128585aafc64ad65d8833234cbcc33850679b85",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=88466",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1128585aafc64ad65d8833234cbcc33850679b85",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213507665 | pes2o/s2orc | v3-fos-license | Study of modifiable risk factors associated with learning abilities in MBBS students
Introduction: A healthy body is the basis for a healthy mind and sound learning. Factors affecting physical health can influence one's learning abilities. BMI, neck girth, breakfast eating habits and resting blood pressure are surrogates of physical health; when they err, different learning skills may go haywire. Materials and Methods: A cross-sectional study was carried out on first-year medical students in a medical college in the western part of central India. 127 complete entries were evaluated with the Metacognitive Awareness Inventory. Obesity was assessed by BMI and neck circumference, and resting blood pressure was measured with a standing mercury sphygmomanometer. Breakfast eating behaviour was assessed by a semi-structured, close-ended questionnaire. The data were coded and analysed with SPSS version 20. The level of significance was set at p < 0.05. Results: On Pearson correlation analysis it was observed that the metacognitive awareness inventory score (r = -0.191, p < 0.05), regulation of cognition (r = -0.197, p < 0.05), and information management strategy (r = -0.347, p < 0.01) were negatively correlated with BMI. Breakfast eating behaviour also showed a relationship with knowledge about cognition (r = 0.191, p < 0.05) and comprehension monitoring (r = 0.180, p < 0.05), whereas blood pressure showed a negative relationship with information management strategy (r = -0.244, p < 0.01). Conclusion: Obesity-favouring factors and increased BP correlated strongly with diminished metacognition, whereas daily breakfast eating had a strong relationship with improved knowledge about cognition and comprehension skills.
Introduction
Metacognition is the ability to examine how one processes thoughts and feelings. This ability encourages students to understand how they learn best. It also helps them develop the self-awareness skills that become important as they grow older. 1 Any health condition, whether physical or cognitive, has many modifying factors. The metacognitive health of an individual is no exception: it is influenced by modifiable factors such as breakfast eating habits, BMI, resting blood pressure and other obesogenic indicators.
A gap of about 10 to 12 hours between dinner and breakfast leads to low blood glucose levels, and habitually missing breakfast acts as an antagonistic factor that adversely affects cognitive performance. 2 Breakfast eaters tend to have higher basal metabolism and less craving for food, which reduces the likelihood of obesity. A number of studies have reported that skipping breakfast lowers cognitive function and work efficiency. 3 Overweight and obesity, as indicated by increased body mass index (BMI), have been found to be associated with a higher risk of cognitive decline. Obesity is also an important risk factor for vascular disease and can therefore influence executive function via the vascular pathway. 4 The harmful effects of raised blood pressure have long been recognised (since the 1960s). A study on the psychomotor speed of air traffic controllers and pilots demonstrated reduced performance in individuals with hypertension. 8 Blood pressure is likely to underlie the association between obesity and cognition. 6 Both high and low blood pressure have been linked with cognitive decline and dementia. People with high blood pressure show reduced abstract reasoning (executive dysfunction), slowing of mental processing speed, and memory deficits. 7 Most of the vascular alterations induced by hypertension contribute to cognitive impairment by leading to hypoperfusion, ischemic and haemorrhagic stroke, and white matter injury. 10 Considering the above factors, the present study was designed to assess the relation of metacognition with breakfast eating habits, obesogenic health indicators and resting blood pressure in first-year MBBS students.
Hypothesis
It was hypothesized that learning through metacognitive processes would correlate with breakfast eating habits, obesity-predisposing anthropometry and resting blood pressure in new entrants to a medical college.
Aim and Objectives
To assess whether any correlation exists between learning skills, as measured by the Metacognitive Awareness Inventory and its subcomponents, and variables such as breakfast eating behaviour, obesity indicators and resting blood pressure among first-year MBBS entrants.
Materials and Methods
A cross-sectional study was conducted among first-year MBBS students from June to August 2019 after obtaining permission from the competent authorities. Participants were given all procedural details of the study before informed consent was sought.
The study was conducted in two phases. In the first phase, the Metacognitive Awareness Inventory (MAI) questionnaire was handed over to the consenting participants and adequate time (25 minutes) was allotted for completing it. Through the MAI questionnaire, different learning skills of the participants were evaluated. The MAI does this through two major components: "knowledge about cognition" and "regulation of metacognition". The "knowledge about cognition" component consists of statements on declarative knowledge, procedural knowledge and conditional knowledge, whereas "regulation of metacognition" covers a wide perspective involving planning or goal setting, organizing and managing information, and the monitoring that sharpens one's intellect.
In the second phase of the study, anthropometric information was collected. Body weight was measured with a portable weighing scale with a maximum capacity of 125 kg and a margin of error of ±100 g; individuals were asked to remove shoes and heavy clothing before weighing. Height was measured with a stadiometer (SECA 213) without shoes. BMI was calculated as weight in kilograms divided by height in metres squared. The BMI guideline for Asian Indian populations was adopted: a BMI below 18.5 kg/m2 was considered underweight, 18.5-22.9 kg/m2 normal, 23.0-24.9 kg/m2 overweight, and above 25 kg/m2 obese. Neck circumference (NC) was measured with a standard calibrated measuring tape just below the level of the Adam's apple; the normal cut-off value of NC was < 37 cm for males and < 34 cm for females. BP was measured on the left arm in a sitting position with the subject in a relaxed state, using a mercury sphygmomanometer (Diamond Deluxe BP apparatus, Pune, India) with the pre-supplied small adult arm cuff (22 x 11 cm). Two BP readings were taken five minutes apart and averaged to estimate the final blood pressure. The JNC-8 guidelines were used for interpretation of the BP recordings.
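As a worked illustration of the classifications just described, the sketch below (Python) applies the stated Asian Indian BMI cut-offs, the sex-specific neck-circumference cut-offs, and the averaging of two blood-pressure readings; the example measurements are hypothetical, not participant data.

```python
# Illustrative sketch of the anthropometric classifications described above.
# Example values are hypothetical, not participant data.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Asian Indian cut-offs used in the study
    if value < 18.5:
        return "underweight"
    if value < 23.0:
        return "normal"
    if value < 25.0:
        return "overweight"
    return "obese"

def neck_obese(nc_cm, sex):
    # Normal is < 37 cm (male) or < 34 cm (female); values at or above flag obesity
    return nc_cm >= (37 if sex == "male" else 34)

def resting_bp(readings):
    # Two readings taken five minutes apart; the average is reported
    systolics = [r[0] for r in readings]
    diastolics = [r[1] for r in readings]
    return sum(systolics) / len(systolics), sum(diastolics) / len(diastolics)

b = bmi(70, 1.72)                          # about 23.7 kg/m2
print(round(b, 1), bmi_category(b))        # overweight
print(neck_obese(38, "male"))              # True
print(resting_bp([(128, 82), (124, 80)]))  # (126.0, 81.0)
```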
Statistical analysis
Data entry and statistical analysis were done using SPSS v20. The results are presented as descriptive and inferential analyses.
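Although the analysis in this study was performed in SPSS, the two-tailed Pearson correlations and the 0.05/0.01 significance flags used in the tables can be reproduced as in the minimal sketch below (Python with SciPy); the arrays are placeholder data, not the study dataset.

```python
# Minimal sketch of a two-tailed Pearson correlation with the significance
# flags used in the tables (* p < 0.05, ** p < 0.01). SPSS v20 was the actual
# software; the data below are placeholders.
from scipy.stats import pearsonr

bmi = [21.4, 27.9, 19.8, 24.3, 30.1, 22.5, 26.0, 18.9, 25.7, 23.1]
mai_score = [210, 168, 225, 190, 150, 205, 175, 230, 180, 195]

r, p = pearsonr(bmi, mai_score)
flag = "**" if p < 0.01 else "*" if p < 0.05 else ""
print(f"r = {r:.3f}{flag}, p = {p:.3f} (two-tailed)")
```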
Results
There were 127 study participants, 69 males and 58 females, with a mean age of 19 years. Of the participants, 14.9% were obese and 17.3% overweight as per the adopted BMI criteria; using neck girth for the detection of obesity, 15.7% were found to be obese. A high percentage of participants, 83 (65.54%), were pre-hypertensive and 18 (16.5%) had stage I hypertension. Most subjects, 83 (65.3%), were daily breakfast eaters. The descriptive details of the explored variables with respect to age and gender are presented in Table 1. On exploratory analysis by Pearson's correlation, substantial inter-relationships between the studied modifiable factors were observed, thereby validating the construct selection; this is presented in Table 2 (*correlation significant at the 0.05 level, 2-tailed; **correlation significant at the 0.01 level, 2-tailed). Similarly, the MCAI demonstrated strong to very strong correlations among its constructs, thereby lending support to the internal validity of the instrument; this is depicted in Table 3.
After establishing the validity and reliability of the tools (α = 0.752 for the MCAI and 0.552 for the modifiable independent variables), their inter-relationships were studied. On Pearson's correlation, an increase in BMI had a negative effect on the overall metacognition score (r = -0.191*) as well as on regulation of cognition (r = -0.197*) and information management strategy (r = -0.347**). A similar result was observed between systolic blood pressure and information management (r = -0.244**). Breakfast skippers also showed lower scores on the knowledge about cognition (r = -0.191*) and comprehension monitoring (r = -0.180*) subscales than regular breakfast eaters. Table 4 presents the detailed exploratory analysis (*correlation significant at the 0.05 level, 2-tailed; **correlation significant at the 0.01 level, 2-tailed).
Discussion
A review of the literature exploring the present hypothesis had a low yield. The present discussion has therefore taken into account relevant studies exploring the hypothesized variables with related tools such as the "PGI memory scale" 11 , "P300 evoked potentials" 12 , "the Kaplan-Albert Neuropsychological Test Battery" 5 , "Visual Reproduction Immediate and Delayed Recall tests" 9 , and the "Mini-Mental State Examination (MMSE)". 6 Marwa Mohammed Yousif et al. in Sudan observed a significant relationship between eating behaviour and BMI. 13 A similar positive association between dietary adherence, BMI and blood pressure in an urban black population in South Africa was also found by other researchers. 14 It is no secret that obesity is an important correlate of hypertension, 15,16 and the prevalence of hypertension increases with rising BMI. 17,18 The correlations among the independent/exploratory variables in the present study thus toe the line established by previous researchers, as mentioned above (Table 2).
Awareness of metacognitive knowledge empowers learners by giving them an understanding of how they think. It further helps them to reflect on their strengths and weaknesses as learners and guides them toward remedial measures. Schraw and Dennison (1994) developed the Metacognitive Awareness Inventory (MAI) to assess metacognitive knowledge and metacognitive regulation, which they referred to as the knowledge of cognition factor and the regulation of cognition factor. The MAI consists of 52 questions tapping into these two components of metacognition. They found strong support for the knowledge of cognition and regulation of cognition components and showed that these two components were related. 19 The present study also demonstrated strong to very strong correlations among the different constructs of the adopted MCAI tool (Table 3).
Having established that the MCAI and its subsections, as well as the independent variables, demonstrated intra-group correlation, an attempt was made to find correlations between the dependent and independent variables. Participants with higher BMI scored poorly on the metacognitive awareness inventory (r = -0.191*), regulation of cognition (r = -0.197*), and information management strategy (r = -0.347**), which may imply that obese participants are poorer learners and less equipped to handle day-to-day situations. The Framingham Heart Study had shown that obesity is associated with accelerated cognitive decline in aging men. 14 A study conducted in southern Ethiopia demonstrated similar observations, where adolescents with normal BMI had good learning potential. 3 Participants with high systolic blood pressure had relatively weaker information-processing strategies (r = -0.244**) than those with normal systolic blood pressure. A study on the relationship between blood pressure and cognitive function showed that both hypertension and hypotension affect brain perfusion and worsen cognitive outcomes. 9 Breakfast eating frequency showed a negative correlation with knowledge about cognition and comprehension (r = -0.191* and -0.180*, respectively), i.e. less frequent eaters had poorer cognitive ability. A study on the perceptual development of adolescents in relation to nutritional status using the picture ambiguity test (PAT) reported a significant difference in performance between well-nourished and undernourished adolescents. 13 Another study, conducted on school children, reported that skipping breakfast can have adverse effects on cognitive performance. 14 Alternatively, breakfast skippers run the risk of becoming malnourished, which has been linked to delayed cognitive development. 3 All these observations lend support to our hypothesis.
Conclusion
The Metacognitive Awareness Inventory is a well-documented and scientifically proven comprehensive tool to assess the intellectual ability of a person. This study is the first in which the MCAI was explored in terms of common modifiable factors in the day-to-day life of medical students. Inculcation of a healthy lifestyle can be a profound booster of learning abilities; healthy eating and maintaining a normal BMI and blood pressure can put a professional student on an advantageous platform.
Strength and weakness
The MCAI was found to have high construct validity and internal consistency in the present context. Conducting the study in a systematic and phased manner with effective time capping minimized respondent/information bias. A follow-up study over subsequent years, relating these factors to academic performance, could further add to the present evidence. | 2020-01-30T09:04:42.368Z | 2020-01-15T00:00:00.000 | {
"year": 2020,
"sha1": "52b4c457abcfaea8ff6ba21921ec868ba1a66a55",
"oa_license": "CCBY",
"oa_url": "https://www.jchm.in/journal-article-file/10601",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cb7decb5da79b4884479a011e1c4442ccc7afb81",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
5291816 | pes2o/s2orc | v3-fos-license | Development of Cell-SELEX Technology and Its Application in Cancer Diagnosis and Therapy
SELEX (systematic evolution of ligands by exponential enrichment) is a process involving the progressive isolation of highly selective ssDNA/RNA from a combinatorial single-stranded oligonucleotide library through repeated rounds of binding, partitioning and amplification. SELEX-derived single-stranded DNA/RNA molecules, called aptamers, are selected against a wide range of targets, including purified proteins, live cells, tissues, microorganisms, small molecules and so on. With the development of SELEX technology over the last two decades, various modified SELEX processes have arisen. A majority of aptamers are selected against purified proteins through traditional SELEX. Unfortunately, growing evidence shows that aptamers selected against purified membrane proteins often fail to recognize their targets in live cells. Cell-SELEX can develop aptamers against a particular target cell line that discriminate this cell line from others. Therefore, cell-SELEX has been widely used to select aptamers for both the diagnosis and the therapy of various diseases, especially cancer. In this review, the advantages and limitations of cell-SELEX and of SELEX against purified proteins are compared, various modified cell-SELEX techniques are summarized, and the application of cell-SELEX in cancer diagnosis and therapy is discussed.
Introduction
Aptamers are single-stranded DNAs or RNAs that fold into unique 3D structures for interacting with specific targets. Compared with other ligands, aptamers' small size, facile chemical synthesis, excellent chemical stability, versatility in structural design and engineering, and low immunogenicity enable their wide use in cancer imaging and therapy applications [1]. In recent years, a growing number of aptamers have been exploited and applied in various areas including disease diagnosis, clinical therapy, analytical chemistry, food safety, bio-sensing and environmental toxicity detection [2]. SELEX (systematic evolution of ligands by exponential enrichment) technology, established by Tuerk and Ellington in 1990, is the method used to identify aptamers with high affinity and specificity [3,4]. With the optimization and development of SELEX methodology, a wide range of targets can be directly used in selection, including small molecules, proteins, viruses, bacteria, live cells, and even tissues [5]. Proteins are the most common targets used in SELEX for the identification of aptamers [6]. However, it is rather difficult to obtain enough high-purity recombinant human proteins with native conformation from the various in vitro expression systems, especially for transmembrane proteins and intracellular proteins. Therefore, in order to solve these problems, researchers have been working on developing new methods for aptamer selection.
In 1998, Morris and Jensen [7] first used human red blood cell membranes as a complex mixture target to select aptamers through cell-based SELEX methodology (cell-SELEX), which provided an in vitro protocol for isolating high-affinity aptamers against a complex mixture of potential targets. Unlike other SELEX methods, cell-SELEX selects aptamers against a whole cell, so molecular targets on the cell surface are in their native state and represent their natural folded structures [8]. In the past two decades, aptamers have been developed for a wide variety of live cells and other complex systems, especially live cancer cells [9].
In this review, we first provide a comprehensive description of the development of cell-SELEX methodology and explain its advantages and limitations by comparison with traditional SELEX and protein-SELEX. We then expound the cell-SELEX strategy and procedure in detail and discuss recent progress in the exploitation of new technologies based on cell-SELEX. Finally, an overview of the application and prospects of aptamers selected by cell-SELEX in cancer diagnosis and therapy is given, which should be helpful for understanding and facilitating the application of cell-SELEX technology in the future.
Advantages of Cell-SELEX
The emergence of cell-based screening methods has greatly enriched the range of screening targets and expanded the potential applications of aptamers. Cell-SELEX has become the first choice for developing aptamers that recognize particular biomarkers on the cancer cell surface for both diagnostic and therapeutic purposes. For developing membrane-protein aptamers against particular diseases using traditional protein-SELEX, prior knowledge of the protein targets is necessary in the first place, and sufficient recombinant membrane protein of high purity is needed. However, because of post-translational modifications, membrane proteins expressed in prokaryotic or some eukaryotic systems often cannot fold into the correct 3D structure that is formed under physiologic conditions. This causes the low solubility and low yield of membrane proteins expressed in in vitro expression systems, which limits their application [5]. Cell-SELEX overcomes the difficulties of obtaining purified recombinant membrane proteins [10]. In cell-SELEX, aptamers are developed against molecules on the cell surface without the requirement for prior knowledge of the molecular targets [11]; therefore, protein purification is also not necessary before the selection.
Large transmembrane molecules are functionally important molecules involved in many biological processes, such as signal transduction, cell adhesion and migration, cell-cell interactions, and communication between the intra- and extra-cellular environments [12]. Yet membrane-protein aptamers developed through protein-based SELEX may not be able to selectively recognize and interact with their corresponding targets in their native cellular context, which can result in failure of the biomedical application [5]. In cell-SELEX, all molecules on the cell surface are in their native states and therefore represent their natural folded structures and distribution. All post-translational modifications are left intact, so aptamers will bind to the truly folded conformation of the proteins [8]. Therefore, cell-SELEX eliminates the risk that identified aptamers would bind only to the purified proteins but not recognize the native form of the proteins on living cells.
Moreover, cell-SELEX can also be used to discover new biomarkers on the surface of particular cancer cells. Prabodhika et al. [13] presented a strategy for identifying proteins whose expression levels are changed in a diseased cell using cell-specific aptamers. They used selected aptamers that showed different recognition patterns with different leukemia cell lines to capture and enrich the target receptor proteins and, by mass spectrometry, identified the receptor as the membrane-bound immunoglobulin heavy mu chain in Burkitt's lymphoma cells [13]. This study, which discovered new biomarkers on particular cancer cells, demonstrates that specific aptamers can be developed by cell-SELEX and in turn used as probes to identify targets. The cell-SELEX strategy, as well as the aptamers selected from cell-SELEX, offers valuable tools to isolate disease-specific protein targets and facilitate the discovery of clinically important biomarkers. They open a door to the development of "personalized" medicine and novel biological probe technologies [14].
Limitations of Cell-SELEX
Although cell-SELEX has great potential in the biomedical field, several technical limitations remain and must be addressed in future optimizations. Firstly, cell condition is very important in aptamer selection. The presence of dead cells in a suspension leads to non-specific uptake and binding of oligonucleotides, which has a negative impact on the whole selection process. Several approaches have been proposed to remove dead cells and thereby decrease the possibility of obtaining nonspecific aptamers through cell-SELEX. For example, Raddatz et al. [15] implemented fluorescence-activated cell sorting (FACS) in the cell-SELEX procedure to separate aptamers bound to vital suspension cells (Ramos Burkitt's lymphoma B cells). Aptamers were incubated together with vital and dead cells, and only aptamers bound to calcein-AM-stained vital cells were collected. Meltem et al. [16] also established a method to remove dead cells from the cell suspension: the suspension was centrifuged after detachment of the cells with EDTA to discard a large number of dead cells, which remained in the supernatant, and the remaining dead cells were then removed by magnetic depletion using dead-cell removal microbeads. The amount of dead cells could be reduced to 5.2% using the optimized method. All these approaches efficiently optimized the selection strategies for the generation of cell-specific aptamers.
Since the cell surface components are very complex, additional counter-selections against non-target cells are essential to improve the specificity of the aptamers, which makes the operation more complex [5]. This has a negative economic impact on selection, as more time and cost are required. Automated SELEX can generate aptamers with the required qualities within several days, and multiple targets can be handled at the same time, which is efficient and labor-saving [17]. Moreover, selection cycles can be conducted automatically without any direct intervention during the whole selection process [18]. Various attempts at automated in vitro selection have been made in the past few years. Companies such as Aptasol (York, UK) and Vivonics (Sudbury, MA, USA) have been committed to the development of rapid selection systems, such as automated high-throughput selection and one-step selection [19]. With flow cytometry and high-throughput sequencing technology successfully adopted into the cell-SELEX procedure, it is believed that cell-SELEX will be further developed to shorten the selection period, improve the success rate, and accelerate high-affinity aptamer identification in the near future.
Another difficult step is the successful identification of the targets of the aptamers generated by cell-SELEX. On the one hand, cells are complex targets for aptamers and the cell surface components are very heterogeneous. On the other hand, some aptamers are not only capable of selectively binding to membrane proteins on the target cell surface but can even internalize into the target cells [20]. These features increase the difficulty of target identification. Biomarker discovery is also a pressing task in molecular medicine. Tan's laboratory developed their own procedure for target membrane protein identification and have identified several novel protein targets using this approach [9]. We hope that more methods will be applied to discover the targets of aptamers selected from cell-SELEX in the future.
Furthermore, it is known that the cell surface carries a net negative charge, and thus repulsion occurs between the DNA polyanion and the cell surface [21]. Therefore, it can be difficult to generate nucleic acid aptamers that bind to the cell surface. In addition, in order to avoid masking membrane proteins on the cell surface, target cells for cell-SELEX cannot be fixed, so the efficiency of separating bound complexes from unbound nucleic acids is low [22]. Currently, there is still no fully effective way to overcome these problems, but more efforts will be made to optimize cell-SELEX technology to eliminate the negative effects of these limitations.
Cell-SELEX Strategy and Procedure
The main steps of cell-SELEX are similar to those of traditional SELEX, which include incubation, partitioning, and amplification. More details are illustrated in Figure 1. To generate aptamers that can specifically target cancer cells, the cell-SELEX protocol includes positive selection and negative selection. The negative selection step is necessary to remove sequences that bind to normal cells and to improve the specificity of the candidate aptamers. Aptamers are chemically synthesized, short single-stranded DNA (ssDNA) or RNA molecules.
Here we take ssDNA aptamers as an example to describe the process of cell-SELEX. Firstly, a single-stranded oligonucleotide library with a high diversity of random sequences is synthesized and incubated with the target cells. After washing, the DNA sequences bound to the target cell surface are eluted from the cells by heating the cell-ssDNA complexes at 95 °C and collected by centrifugation. The recovered pool is then incubated with the negative control cells (control cells that do not express the target biomarker), and all ssDNA sequences that bind to the negative control cells are removed. The unbound sequences are amplified by PCR using a biotinylated reverse primer, leading to enrichment of specific binders to the target. Streptavidin magnetic beads are used to capture the biotinylated antisense strand, and the unlabeled sense ssDNA is separated with NaOH. In general, a steady increase in the binding affinity of the aptamer candidates is observed as the selection rounds progress. The enrichment of the selected pools is monitored by flow cytometry binding assays. Finally, the enriched pools are sequenced and representative DNA aptamers are chosen for subsequent characterization [8]. Several factors need to be taken into consideration in cell-SELEX.
(1) Design of the oligonucleotide library for SELEX. Four factors are involved: the type of randomization, the length of the random sequence region, the chemistry of the pool, and the utility of the constant regions [6]. Aptamers can be RNA or single-stranded DNA. The original report of SELEX used a randomized RNA pool; RNA SELEX generally involves in vitro transcription and is more complex than the amplification process of DNA SELEX, so subsequent studies gradually used single-stranded DNA pools to generate DNA aptamers for a wide variety of targets instead. In general, RNA has better structural diversity owing to the 2'OH group, whereas DNA is more stable, cheaper and easier to produce [23]. No significant differences in specificity or binding ability have been observed between the two types. Nucleic acid libraries offer great sequence diversity and ease of screening. The length of the random region of the starting library is normally between 20 and 40 bp [11]. Modified nucleotides can also be included in the library, which may greatly broaden the range of possible sequences and enhance in vivo stability or nuclease resistance [24]. The design of the conserved primer regions is also important, as improper design introduces non-specific products in PCR. Primers can be designed using appropriate software, such as the Integrated DNA Technologies tools, according to standard primer design considerations, including a reasonable annealing temperature, proper G-C content, and the absence of primer heterodimers and self-dimers [8] (a small illustrative sketch of such checks is given after this list).
(2) Cell-SELEX uses whole live cells as targets, and therefore good cell culture maintenance is very important. Cell overgrowth leads to more cell death and can cause alterations in cell morphology and protein expression. The elimination of dead cells significantly enhances the success rate of screening. The choice of cell lines depends on the purpose of the selection and the goals to be achieved. Cell-SELEX has commonly been performed using cultured cancer cell lines, such as human hepatocarcinoma cells [25], prostate cancer cells [26], and cancer stem cells [27]. With these cell lines, aptamers that can differentiate between two different cancers or between cancer and normal cells have been generated [8].
(3) Negative SELEX is used to improve the selectivity of the aptamers by excluding the fraction of oligonucleotides that can bind to similar non-target cells. Negative selection followed by positive selection filters out sequences against molecules existing on the surface of both the target and the control cell lines. These steps should be repeated several times to enrich the aptamer pool for the target cells [28].
(4) Proper separation methods need to be chosen for ssDNA regeneration. Several methods have been reported to generate ssDNA from double-stranded PCR products, including asymmetric PCR, denaturing high-performance liquid chromatography (DHPLC), lambda exonuclease digestion, size separation on a denaturing urea-polyacrylamide gel using unequal primers with chemical modification, and magnetic separation with streptavidin-coated beads. Asymmetric PCR uses different amounts of forward and reverse primers [29]; when the limiting primer has been used up, an excess of ssDNA is produced in each cycle. However, because an unequal molar ratio of the two primers is used in the PCR reaction, the diversity of ssDNA in the enriched oligonucleotide pools may be reduced. In the DHPLC method [30], one of the two primers used in the PCR is biotinylated and the other is normal. This method uses a DNA wave fragment analysis system incorporating denaturing reverse-phase ion-pair high-performance liquid chromatography (RP-IP DHPLC) technology. Under denaturing conditions, owing to the increased hydrophobicity of the biotin moiety attached to one of the strands, the retention times of the two strands in HPLC differ, so the ssDNA species can be separated from the PCR products by DHPLC; this is, however, an expensive and instrument-dependent method. In the lambda exonuclease digestion method [31], a phosphorylated reverse primer is used in the amplification; the phosphorylated strand is then digested by lambda exonuclease and the remaining sense strand is obtained. Its drawback is that incomplete digestion may leave contaminating dsDNA in the reaction mixture. In size separation using unequal primers with chemical or structural modification [32], the reverse primer is designed with a chemical terminator or a GC-rich stem-loop structure at its 5'-end, so that strands of unequal size are created and subsequently separated on a denaturing urea-polyacrylamide gel; this can achieve a high recovery rate and purity of ssDNA. Currently, the most commonly used method for the isolation of ssDNA from dsDNA is magnetic separation with streptavidin-coated beads [33]: a biotinylated reverse primer is used for PCR amplification, the biotinylated PCR products are immobilized onto streptavidin-coated beads, and the unmodified ssDNA is rapidly separated from the biotinylated strands by alkaline denaturation.
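As referenced in point (1) above, the following minimal sketch (Python) illustrates the kind of basic checks involved in library and primer design: the theoretical diversity of the random region, GC content, a rough Wallace-rule melting temperature, and a crude 3' self-complementarity check. The primer sequence is hypothetical and the Tm formula is a coarse approximation for short oligos; this is not the workflow of any cited design software.

```python
# Illustrative checks for library and primer design. The Wallace rule
# (Tm ≈ 2*(A+T) + 4*(G+C)) is a rough approximation, and the primer is hypothetical.

def library_diversity(random_length):
    """Number of distinct sequences a fully random region of this length allows."""
    return 4 ** random_length

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def self_dimer_3prime(seq, n=4):
    """Very crude check: does the 3' end match anywhere on the reverse complement?"""
    return seq[-n:] in revcomp(seq)

primer = "ATACCAGCTTATTCAATT"            # hypothetical forward primer
print(f"{library_diversity(40):.2e}")     # ~1.2e24 possible 40-nt random regions
print(f"GC = {gc_content(primer):.0%}, Tm ≈ {wallace_tm(primer)} °C")
print("possible 3' self-dimer" if self_dimer_3prime(primer) else "no obvious 3' self-dimer")
```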
New Methods Derived from Cell-SELEX
With the application of aptamer technology to tumor cell detection and treatment, cell-SELEX technology has gradually developed and its range of targets has gradually expanded [34]. Currently, a variety of new screening methods based on cell-SELEX have emerged to improve the success rate of aptamer screening. Here we provide a brief description of some of these new cell-based SELEX methods.
TECS-SELEX
In 2005, Cerchia et al. [35] reported for the first time the screening of aptamers that specifically inhibit the receptor tyrosine kinase RET (Rearranged during Transfection) using a RET-overexpressing cell line. In that study, PC12/MEN2A cells that over-express human RET were used as targets. The identified sequences bound to target cells with apparent Kd values ranging from 30 to 70 nM, while showing no tight binding to the parental PC12 cells. Moreover, the aptamer obtained by this whole-cell SELEX strategy not only recognized the extracellular domain of RET but also blocked RET downstream signaling and subsequent molecular and cellular events. Subsequently, Ohuchi et al. [36] developed a novel SELEX procedure, named TECS-SELEX, in which a recombinant protein displayed on the cell surface is directly used as the selection target. Using this method, they isolated RNA aptamers against the transforming growth factor-β (TGF-β) type III receptor expressed on Chinese hamster ovary (CHO) cells. One of the RNA aptamers has a dissociation constant in the 1 nM range and competed with TGF-β for binding to the cell-surface receptor in vitro. The development of TECS-SELEX provides a useful, novel way to isolate aptamers against any cell-surface protein of interest and is especially useful when the purified protein target cannot easily be obtained.
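Apparent dissociation constants such as those quoted above are typically estimated by fitting a one-site saturation binding model to titration data (for example, fluorescence signals from flow cytometry at increasing aptamer concentrations). The sketch below (Python with NumPy/SciPy) fits Kd to synthetic data and is illustrative only; the concentrations and signal values are invented.

```python
# Illustrative one-site binding fit, B = Bmax * [L] / (Kd + [L]), to synthetic
# titration data; real data would come from flow cytometry or filter binding.
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc_nM, bmax, kd_nM):
    return bmax * conc_nM / (kd_nM + conc_nM)

conc = np.array([1, 5, 10, 25, 50, 100, 250, 500], dtype=float)        # nM
signal = np.array([3.0, 13.5, 24.0, 45.0, 67.0, 88.0, 113.0, 124.0])   # arbitrary units

(bmax, kd), _ = curve_fit(one_site, conc, signal, p0=[150.0, 50.0])
print(f"Bmax ≈ {bmax:.0f} a.u., apparent Kd ≈ {kd:.0f} nM")
```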
FACS-SELEX
In 2010, Mayer et al. [21] published a detailed protocol based on fluorescence-activated cell sorting (FACS) to select aptamers that target subpopulations of cells. In this method, a fluorescently labeled aptamer library is incubated with the target cells, and a FACS device is used to simultaneously differentiate and separate the cells that have bound aptamers, which is sensitive, efficient and high-throughput. The bound aptamers are then eluted, purified and amplified. Cell membrane integrity is a prerequisite for the success of the SELEX process; thus, cells undergoing apoptosis or necrosis, which may take up nucleic acids non-specifically during the SELEX process, need to be eliminated. FACS can exclude the impact of dead cells by gating a region of interest on membrane-intact cells in a forward versus side scatter dot plot, thereby reducing artifacts and improving separation efficiency. The protocol provides a state-of-the-art approach for identifying aptamers that selectively target any cells under investigation. In 2014, Kim et al. [37] performed FACS-SELEX to isolate aptamers against EpCAM, a transmembrane glycoprotein detected in most adenocarcinomas and cancer stem cells. The selected anti-EpCAM aptamer EP166 could distinguish cells expressing EpCAM from negative control cells.
3D Cell-SELEX
3D cell-SELEX is a novel strategy for selecting specific nucleic acid ligands against spheroid cells in 3D cell culture, which mimics the tissue microenvironment in vitro [22]. The 3D cell culture is performed by the magnetic levitation method (MLM): a magnetic field is applied on top of the culture plate to promote magnetic levitation and generate levitated spheroid cells during the incubation period. Compared with 2D cell culture, this method gives more homogeneous exposure of the extracellular domains of membrane proteins to the aptamers and allows better cell-aptamer interaction. Integrating a negative selection against a non-tumor cell line into the first round of 3D cell-SELEX, Aline's lab showed that they could obtain the aptamer A4 as a specific ligand for prostate tumor cells after nine selection cycles, with a dissociation constant in the nanomolar range.
Hybrid-SELEX
Traditional cell-SELEX provides low aptamer enrichment efficiency because off-target surface biomarkers/molecules are co-expressed on the cells of interest. To overcome this obstacle, Hicke's lab [38] introduced hybrid-SELEX, which combines cell-based SELEX with purified protein-based SELEX, to develop Tenascin-C-specific RNA aptamers in 2001. In hybrid-SELEX, both the purified protein and cells bearing the same protein on their surface are used as targets. After a certain number of rounds of cell-SELEX, two additional rounds of selection with the purified target protein were performed as a crossover SELEX experiment to enrich TN-C aptamer representation in the cell aptamer pools. Comparison of the screening results showed that the two rounds of crossover selection on the purified target protein improved the affinity of the cell-SELEX pool by 50-fold. The initial cell-SELEX process in hybrid-SELEX aims to select aptamers against the target in its native state on the cell surface, and the additional rounds of selection with the purified target then enrich the high-affinity aptamers, which are rare in the cell aptamer pool [39].
Cell-Internalization SELEX
Cell-internalization SELEX is a cell-based selection process for the identification and characterization of cell-internalizing RNA aptamers for delivering siRNA drugs into the cytoplasm of target cells. In this selection strategy, the key steps are recovering the internalized RNA sequences for amplification and discarding unbound or surface-bound RNAs in the iterative selection, thereby enriching RNAs that are internalized by the target cell. The principal obstacle to RNAi-based therapeutics is cellular uptake, and conjugating cell-internalizing RNA aptamers with siRNAs as a delivery tool has been proposed to address this problem [40]. In 2012, Giangrande's lab successfully obtained aptamers that internalize into cells expressing specific cell-surface receptors (e.g., HER2 or TrkB) [41]. As an example, aptamers that specifically identified and were internalized by HER2-expressing cells were covalently linked to siRNAs targeting the anti-apoptotic gene Bcl-2. They demonstrated that, when applied to HER2-expressing breast cancer cells, the HER2 aptamer-Bcl-2 siRNA conjugates selectively internalize into HER2(+) cells and silence Bcl-2 gene expression. Moreover, Bcl-2 silencing sensitizes these cells to chemotherapy (cisplatin), suggesting a potential new therapeutic approach for treating breast cancers with HER2(+) status [42]. These studies indicate that this technology might promote the widespread use of RNA-based reagents for targeted therapeutic applications.
Applications of Cell-SELEX in Cancer Diagnosis and Therapy
In the past 20 years, aptamers have attracted much attention because of their high specificity, high affinity and promising applications in medical diagnosis and disease treatment. Cell-SELEX has facilitated the development of aptamer-based diagnostic and therapeutic technology for cancer research. It can select aptamers not only against particular designated biomarkers but also against unknown biomarkers on the surface of cancer cells.
Application of Cell-SELEX Aptamers in Cancer Diagnosis
Using cell-SELEX technology, Shangguan et al. identified a DNA aptamer targeting the T-cell acute lymphoblastic leukemia cell line CCRF-CEM for acute lymphoblastic leukemia diagnosis. The selected aptamer sgc8c could specifically recognize leukemia cells in human bone marrow aspirates from real clinical specimens [43]; sgc8c therefore holds great promise for developing specific molecular probes for cancer diagnosis. In 2010, Sefah et al. [44] developed a panel of DNA aptamers against the cultured colorectal cancer cell lines DLD-1 and HCT 116 by cell-SELEX. The selected aptamers have high affinity and selectivity for identifying specific biomarkers associated with colorectal cancers. Subsequently, on the basis of these previous studies, Suwussa et al. [45] created pattern recognition of different cancer cells using aptamer-conjugated magnetic nanoparticles (ACMNPs). In their study, the aptamers sgc8c (targeting CCRF-CEM cells), KDED2a-3 (targeting DLD-1 cells) and KCHA10 (targeting HCT 116 cells) were separately conjugated to dispersed magnetic nanoparticles to form stable nanoassemblies for cancer cell detection and diagnosis. The specificity and sensitivity of the method were demonstrated by detection in cell mixtures and in complex biological media, including fetal bovine serum, human plasma, and whole blood. sgc8c-ACMNPs could successfully detect target cells in mixtures in which the ratio of target to non-target cells was as low as 1:100. KDED2a-3- and KCHA10-ACMNPs also demonstrated strong specificity for their targets, and none of these ACMNPs showed any interaction with the normal cell line. This pattern-recognition strategy shortened the incubation time and increased specificity in unpurified native samples. These proof-of-concept studies of magnetic relaxation switches (MRSw) demonstrated a practical way to detect cancer cells and to perform comprehensive cancer cell profiling. Once bound to nanoparticles, the aptamers can target cells and provide a multivalent effect. Furthermore, a new cellular molecular profile can be created through an array of ACMNPs to help clinicians accurately identify cancer cells at the molecular and single-cell level. All these advantages, together with the simple operation of magnetic relaxation instruments, make ACMNP-based nanosensors potential approaches for the early diagnosis and effective screening of cancer.
TTA1, AS1411 and MUC-1 are further successful aptamers reported to bind specifically to cancer cells or cancer tissues [46-48]. Aptamer TTA1 was selected to bind the extracellular matrix protein tenascin-C of cancer cells. Aptamer AS1411, identified by anti-proliferation selection, binds to nucleolin in the plasma membrane of cancer cells. The MUC-1 aptamer, selected by protein-SELEX, targets mucin (MUC-1), which is highly expressed by the majority of human adenocarcinomas. In 2009, Kang et al. [49] conjugated these three aptamers (TTA1, AS1411 and MUC-1) to quantum dots to demonstrate multiplex detection of cancer cells using quantum-dot (QD)-conjugated aptamers. For confocal microscopic analysis of multiplex imaging of cancers, healthy and cancer cell lines were incubated with each QD conjugate. QD-AS1411 showed strong fluorescence on the cellular membranes of HeLa, C6, PC3 and NPA cells. QD-TTA1 was clearly visualized at 605 nm on C6 cells, but showed very weak fluorescence signals with the PC3, HeLa and NPA cell lines. QD-MUC-1 showed relatively higher fluorescence signals on the membranes of the C6 and HeLa cell lines than on other cancer cells, reflecting the different cellular expression of nucleolin, tenascin-C and mucin in different cancers. These experiments proved that the probes can detect and differentiate particular types of cancer cells and produce a visible fluorescence signal in the presence of target cells. The QD-aptamer conjugates thus offer a promising tool to assess diverse gene expression in a single cell through the display of different colors. However, the multiplex imaging approach must solve problems associated with the biosafety of QDs and the biostability of aptamers prior to clinical application.
HER2 is a receptor tyrosine kinase belonging to the epidermal growth factor receptor (EGFR or ErbB) family. Overexpression of HER2 is associated with many cancers, including ovarian, lung, gastric and oral cancers. Therefore, monitoring HER2 expression is conducive to the early diagnosis of cancer [50]. In 2015, Javed et al. [51] developed a new in vitro assay to detect human epidermal growth factor receptor 2 (HER2) protein. The method was based on affinity dissociation of carbon nanotube (CNT)-wrapped anti-HER2 ssDNA aptamers. First, they selected an anti-HER2 ssDNA aptamer (H2) using cell-SELEX. The selected anti-HER2 aptamer was then packed around CNTs (previously coupled with magnetic microbeads) by physical wrapping. The resulting magnetic microbeads, coated with uniform layers of CNTs, were isolated by applying an external magnetic field. The magnetic nature of the microbeads (MBs) allowed extensive washing of the MB-CNTs for effective removal of unattached CNTs and thereby minimized the surfactant effect. The high affinity and specificity of the aptamers for HER2 protein on tumor cells enables sensitive and accurate diagnosis of cancers with HER2 overexpression. The results demonstrated that the developed assay can be an effective approach for detecting native forms of disease biomarkers in free solution or in biological samples for accurate diagnosis.
Furthermore, aptamers targeting cancer-specific proteins, such as vascular endothelial growth factor (VEGF) [52], pigpen [53], prostate-specific membrane antigen (PSMA) [54], and receptor tyrosine kinases (RTK) [55], have been developed and studied. Cell-SELEX technology can not only generate aptamers that recognize such unique features with very high affinity and specificity, but can also readily discover molecular signatures of cancer cells. With their high specificity and recognition ability, aptamers used as probes can correctly distinguish different cell types and even subtypes of cancer cells. Also, in comparison with antibody-based techniques, aptamer-based probes are less labor-intensive and more cost-effective. More importantly, aptamer probes recognize their targets in the native state, creating a true molecular profile of the diseased cells, which is important for clinical application. In addition, the ease of site-specific chemical modification of aptamers makes it possible to conjugate them to gold nanoparticles or quantum dots for colorimetric or fluorescence detection of cancer cells, and aptamers can also be conjugated to gold nanorods for signal enhancement. The high sensitivity of these aptamer-based devices means they can be applied for the early diagnosis of cancer when the concentration of cells is relatively low [56]. Therefore, the use of a panel of probes has a clear advantage over single biomarker-based assays in clinical practice, providing much more information for accurate disease diagnosis and prognosis.
Application of Cell-SELEX Aptamers in Cancer Therapy
Aptamers generated via cell-SELEX can specifically recognize cell-surface biomarkers without prior knowledge of their molecular signature. Therefore, aptamers may also serve as carriers for drug payloads, or as part of aptamer-functionalized nanoparticles that release drugs in specific target regions, thereby achieving targeted drug delivery. More importantly, owing to their target-specific binding properties, aptamers have fewer off-target toxicity effects. Prostate-specific membrane antigen (PSMA) is considered an excellent prostate tumor cell marker expressed on the surface of prostate cancer cells. Lupold et al. [57] used the purified protein as the target and identified an aptamer, A10, that recognizes the extracellular domain of PSMA. Farokhzad et al. [58] synthesized a bioconjugate of docetaxel (Dtxl)-encapsulated PLGA-b-PEG nanoparticles connected to the RNA aptamer A10 for targeted delivery. They examined its efficacy for targeted delivery of Dtxl to prostate cancer cells and found that the conjugate is capable of specifically binding to tumor cells over-expressing PSMA, and is then engulfed by the tumor cells to exert cellular toxicity and achieve targeted therapy. This targeted therapy also showed significantly greater anti-tumor efficacy and reduced toxicity in vivo in a nude mouse model compared with Dtxl alone [59]. In 2013, Taghdisi et al. [60] synthesized a PEG-Apt-Epi complex to achieve targeted delivery of epirubicin to cancer cells using a PEGylated A10 aptamer. Epirubicin (Epi), an anthracycline, is one of the main chemotherapy agents used in the treatment of a variety of tumors. Flow cytometry analysis and MTT assays showed that the PEG-Apt-Epi complex delivery system was able to specifically deliver and internalize Epi into LNCaP cells, and thereby reduce the non-specific cytotoxic effects of Epi through targeted delivery.
In addition, aptamer-based delivery has been reported for chemotherapy drugs such as doxorubicin, docetaxel, daunorubicin and cisplatin, toxins such as gelonin, various photodynamic therapy agents, and a variety of small interfering RNAs [61]. These reports demonstrate the potential utility of nanoparticle-aptamer bioconjugates for cancer therapy. Aptamer-based delivery may enable the transport of drugs across a range of biological barriers, including epithelial and endothelial barriers, and facilitate the delivery of drugs to intracellular sites of action, enhancing the efficacy and safety of therapeutics. With the development of aptamer selection technologies and nanomedicine, aptamer-functionalized nanoparticles are being explored as promising platforms for targeted therapy, and we expect them to advance from preclinical into clinical development for further evaluation.
Meanwhile, aptamers with efficient targeting ability for cancer cells and tissues not only provide a promising way to deliver drugs, but can also themselves be used as anti-cancer drugs. To date, several aptamers have been evaluated in clinical trials for the treatment of different types of cancer. The most frequently studied aptamer for cancer treatment is AS1411. AS1411 is an unmodified guanosine-rich 26-mer DNA strand. It binds to the external domain of nucleolin, a protein over-expressed on the surface of cancer cells and responsible for cell survival, growth, and proliferation. AS1411 has been tested in several cancer cell lines, including prostate, breast, lung, pancreatic, renal cell carcinoma, ovarian, cervical and colon lines [62]. It displayed anti-proliferative activity in almost every cancer cell type tested in vitro. However, the exact mechanism of action of AS1411 is not fully understood. The phase I clinical study of AS1411, completed in 2006, found that AS1411 can specifically target nucleolin without serious toxicity, making it the first-in-human and first-in-class anticancer aptamer drug. A phase II clinical trial in acute myeloid leukemia found that AS1411 has therapeutic efficacy in acute myeloid leukemia patients. However, in a phase II evaluation for renal cell carcinoma, only one of the 35 patients enrolled and treated responded to treatment [63].
Additional anti-cancer aptamers are being studied in the preclinical setting (Table 1). These aptamers share some common mechanisms underlying their anti-cancer effects. One mechanism is to block signaling pathways by inhibiting kinases, phosphatases, or carboxypeptidases, etc., stopping downstream activation and signaling for tumor growth; an example is the RET aptamer D4. Aptamer D4 could not only recognize the extracellular domain of RET, but also inhibit RET phosphorylation and block RET downstream signaling and the subsequent molecular and cellular events [35]. Aptamers that have anti-tumor activity could be promising prognostic tools in cancer therapy. Another mechanism is to bind to proteins that are closely connected with tumor development. At present, most aptamers under investigation for cancer therapy function by inhibiting target functional membrane proteins. For example, the CD44 protein is responsible for the migratory ability of cells. An anti-CD44 DNA aptamer can form complexes with CD44 protein to inhibit the migration of breast cancer cells [64]. The potential therapeutic efficacy of these aptamers will be evaluated in vivo in subsequent studies. Meanwhile, more aptamers with promising anti-tumor efficacy in preclinical studies are expected to be evaluated in clinical trials, to facilitate the development of aptamer-based drugs for cancer therapy in the near future.
Future Perspectives
Aptamers have been widely used in biomarker discovery and detection, cancer imaging, cancer therapy and several other fields based on cell-SELEX technology [76]. As recognition elements, aptamers have several advantages, including efficient and cost-effective chemical synthesis, easy and controllable modification with functional moieties to meet various clinical requirements, large-scale commercial production, non-toxicity and limited immunogenicity [77][78][79]. Cell-SELEX is also a promising technology because it selects aptamers against whole live cells without prior knowledge of the resident proteins on the cell surface. Cell-SELEX aptamers are therefore ideal tools for preferential binding to diseased cells, especially cancer cells. More importantly, cell-SELEX technology paves the way for the selection of tumor- and cell-specific aptamers that target distinct subpopulations of cells within the heterogeneous mixtures of cells present in primary tissues or body fluids. This method may also be a first step toward the use of aptamers for individualized diagnostic and medical applications, because the selection process will be accessible to clinical laboratories for the assessment of high-affinity and specific cell-targeting agents, which are promising diagnostic or biomedical research tools, and is thus a useful alternative to antibody-based assays.
However, the development of aptamers derived from cell-based selection for cancer diagnosis and therapy is still at an early stage. Several challenges need to be addressed before it becomes widely applicable. First, to design and engineer nucleic acid aptamers as chemical antibodies for application in more disease states, more aptamers need to be screened. Although much effort has been made to develop novel, automated, or high-throughput SELEX systems, this is still difficult to achieve in cell-based selection. Tumor cells comprise many different types (including subtypes) and usually show complex molecular characteristics. To resolve the aptamer molecular fingerprints that may indicate different stages of disease and different subtypes of cancer, SELEX technology still needs further research in terms of the screening and identification of tumor markers, including screening methods, labeling techniques and tracing. In addition, the structures, folding patterns, binding affinities, and regulation of protein and/or cell functions by aptamers need to be explored. Pharmacokinetics, toxicity and off-target effects also remain to be resolved [80]. | 2017-05-12T00:39:19.992Z | 2016-12-01T00:00:00.000 | {
"year": 2016,
"sha1": "f31b5739f6f840a344c8ed1ad48410537b4801db",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/17/12/2079/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f31b5739f6f840a344c8ed1ad48410537b4801db",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
249096193 | pes2o/s2orc | v3-fos-license | Oral hypofunction and association with need for daily assistance among older adults in long‐term care
Abstract Background Oral hypofunction (OHF) is related to occlusal status and bite force. It has specific symptoms and varying degrees of severity. Objectives OHF was determined with five signs. The relationships between OHF and need for assistance in oral hygiene, moving, eating and occlusal status in older adults living in long‐term care (LTC) were examined. Methods A comprehensive clinical oral examination was conducted on 393 residents who lived in LTC in Helsinki, Finland. The five signs to determine OHF were mouth dryness, visible food residue on oral or denture surfaces, ability to keep the mouth open during examination, clearness of speech, and diet of pureed or soft food. Score points of 0–2 were given for each sign, and the sum was categorised as mild, moderate or severe OHF. Participants were divided into three groups accordingly, and occlusal status was determined based on contact units. In addition, nurses collected background information on number of medications and level of cognition. Need for assistance was based on oral hygiene, moving and eating. Results Of participants (n = 319), 21% showed severe and 41% moderate OHF. Occlusal status differences between the OHF groups were significant. OHF severity associated linearly with increased severity of cognitive impairment and increased need for assistance in oral hygiene, eating and moving. Conclusions OHF score based on the five signs can be used to determine OHF severity. OHF was common and associated with occlusal status, cognitive impairment and need for assistance in oral hygiene, moving and eating in older adults living in LTC.
tongue and muscles, 5 and contact area between opposing teeth. 6 Some studies have included decline in occlusal force and motor ability of the tongue as part of OHF, a functional pathophysiological condition consisting of several deteriorated oral functions. 7,8 Also, a few studies have investigated whether impaired mastication has a relationship with decline in systemic conditions such as dysphagia, frailty and sarcopenia. [9][10][11] According to recent studies, impaired oral function leads to a gradual deterioration of dietary habits and malnutrition. 12 Impaired function per se can also affect the structure, composition, size and shape of edible food. 13 In addition to a clinical examination, questionnaires have been developed to screen for oral frailty and OHF. [14][15][16] No consensus exists on the definition for either OHF or oral frailty. 1,7,17,18 Daily activity has been assessed by several indices. The most commonly used are activity of daily living (ADL) and instrumental activities of daily living (IADL), designed as scoring systems for evaluating independent living at home. [19][20][21] Other popular indices are oral health impact profile (OHIP) 22 and geriatric oral health assessment index (GOHAI). 23,24 In addition, it has been pointed out that poor oral health and ADL are associated with cognitive impairment. 25 Moreover, low ADL has been found to be associated with decreased chewing ability and cognitive functioning. 26 Our aim was to determine the OHF of older adults living in LTC with five signs presented in earlier studies-mouth dryness, visible food residue on oral or denture surfaces, ability to keep the mouth open during examination, clearness of speech and diet of pureed or soft food. Furthermore, we present a scoring system to categorise the severity of OHF that is precise and easy to use in clinical practice by the staff of LTC facilities. 4,5,13,27,28 Finally, we examined the relationships between OHF and occlusal status, cognition, need for assistance in oral hygiene, moving and eating in older adults in LTC in Helsinki, Finland. 19,29 We hypothesised that OHF severity can be determined by using the score of five signs and that severe OHF in older adults living in LTC is associated with occlusal status, cognitive impairment and need for assistance in oral hygiene, moving and eating.
| MATERIALS AND METHODS
Older adults in this oral health study had participated in a previous nutrition study. 30 Registered nurses collected the following data on residents using a standardised questionnaire and medical records: demographic factors, length of residence, cognitive disease (no or mild/moderate/ severe), number of medications and need for assistance with oral hygiene, moving and eating.
In the clinical oral examination, data on occlusal status, number of teeth and dentures in use were collected. Also, the following five signs describing OHF were examined: mouth dryness, 31 visible food residues on oral or denture surfaces, ability to keep the mouth open during oral examination, clearness of speech and from the questionnaire, food consistency. These five signs were available for 319 older adults; those who lacked one or more entries needed for OHF determination were excluded from the analysis (n = 74) ( Table 1).
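To make the scoring procedure described above concrete, a minimal sketch is given below: each of the five signs receives 0-2 points and the sum is banded into a three-grade severity group. The band cut-offs shown are illustrative assumptions only, as the exact thresholds are not stated in this excerpt.

```python
# Illustrative sketch of the five-sign OHF score described in the text.
# Each sign is scored 0-2 points; the sum is banded into severity groups.
# NOTE: the band cut-offs below are assumptions for illustration, not the
# values used in the FINORAL study.

SIGNS = ["mouth_dryness", "food_residue", "mouth_opening",
         "speech_clarity", "food_consistency"]

def ohf_score(sign_points: dict) -> int:
    """Sum the 0-2 point scores of the five signs (range 0-10)."""
    return sum(sign_points[s] for s in SIGNS)

def ohf_group(score: int) -> str:
    """Band the summed score into a three-grade severity group
    (illustrative cut-offs)."""
    if score <= 2:
        return "Gr1: no/mild OHF"
    elif score <= 5:
        return "Gr2: moderate OHF"
    return "Gr3: severe OHF"

# Example: a resident with dry mouth (2), some food residue (1), limited
# mouth opening (1), unclear speech (2) and a pureed diet (2).
points = {"mouth_dryness": 2, "food_residue": 1, "mouth_opening": 1,
          "speech_clarity": 2, "food_consistency": 2}
print(ohf_score(points), ohf_group(ohf_score(points)))
```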
A study participant was determined as dentate if she/he had at least one visible tooth or root remnant left. The tooth was determined as a whole natural tooth if it had a crown of dental material, filling material or a fixed prosthetic crown. Removable dentures were determined to be partial or complete. The occlusal contact units were determined visually as follows after the subject had been asked to bite the teeth together: contact between natural teeth (natural contact unit), contact between natural tooth and removable dentures (mixed contact unit), contact between removable dentures (denture contact unit) or no contact.
| STATISTICS
Ordinal variables were expressed as n (%) and continuous variables as mean (standard deviation, SD). The linearity across the OHF groups was evaluated using the chi-square test for linear-by-linear association. The association between severe OHF determined with the five signs (dependent variable) and categories of occlusal status (independent variables, each separately included in the analysis dummy-coded as 0 or 1) was determined with an unadjusted model and with a confounder-adjusted (age as a continuous and sex as a categorical covariate) binary logistic regression model.
All statistical analyses were performed with SPSS Statistics 25 (IBM Japan). Differences were considered significant at p < 0.05.
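As a rough illustration of the age- and sex-adjusted binary logistic regression described above, the sketch below (using the statsmodels package) regresses a severe-OHF indicator on a dummy-coded occlusal status category with age and sex as covariates; the data file and column names are hypothetical placeholders.

```python
# Sketch of the confounder-adjusted binary logistic regression described
# in the text. The data file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("finoral_residents.csv")                  # hypothetical file
df["severe_ohf"] = (df["ohf_group"] == 3).astype(int)      # dependent variable
df["no_occlusion"] = (df["occlusal_status"] == "none").astype(int)  # dummy 0/1
df["female"] = (df["sex"] == "female").astype(int)         # categorised covariate

X = sm.add_constant(df[["no_occlusion", "age", "female"]])  # age kept continuous
model = sm.Logit(df["severe_ohf"], X).fit()

print(model.summary())
print("Adjusted OR, no occlusion:", np.exp(model.params["no_occlusion"]))
```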
Number of teeth was similar in all groups. The highest proportion (37%) of participants in Gr1 had mixed or denture contact units (natural tooth/removable denture or denture/denture contact units). In Gr3, with severe OHF, the most common finding (42%) was edentate participants or those without occlusal contact units. The occlusal status differences between the OHF groups were significant (Table 2).
The manifestation of the five signs of OHF as a percentage in each study group can be seen in Figure 1. The manifestation percentage of all five OHF signs scored with 2 points increased linearly from Gr1 to Gr3, and the absence of signs (0 points) showed the reverse linear trend (p-value for linearity through study groups for each sign < .001).
| DISCUSSION
In this study, we aimed to find a simple and effective tool to identify signs that could describe the severity of OHF and its relationship with occlusal status, daily need for help and cognitive impairment in LTC facilities. We found that OHF based on five signs (mouth dryness, visible food residues on oral or denture surfaces, ability to keep the mouth open during oral examination, clearness of speech and diet of pureed or soft food) is common among older LTC residents.
Clinically, the clearest difference between severe OHF and no or mild OHF seemed to emerge with three signs: unclear speech, ability to keep the mouth open during examination and a soft-food diet.
Furthermore, OHF was strongly associated with occlusal status and correlated with cognitive impairment, and the need for daily assistance in oral hygiene, moving and eating. The present findings support the study hypothesis that OHF severity can be determined by using the score of five signs.
Masticatory ability and efficiency depend on dental status, location and number of remaining teeth, and bite force, which is determined by jaw muscle mass, activity and coordination. 2 Masticatory muscle strength and bite force are used to evaluate chewing or muscle function. 32 Masticatory muscle weakness and associated detectable signs are related to muscle size, accuracy of function and maintenance of mastication activity. 8 Furthermore, a recent study concluded that selected signs could be used to determine oral frailty and that oral frailty is associated with Fried's frailty phenotype. 11 There is no consensus on the definitions of OHF, masticatory performance and oral frailty, although a few indices have been created.
The underlying signs also vary from study to study. 1,[14][15][16] To our knowledge, no method or assessment score exists that could be used both in dental practice and in long-term care facilities for older adults, and that would be beneficial to caretakers of older persons, although scores or assessment methods exist at least for research purposes. 14,16 Expecting older adults to reliably carry out complex biting or swallowing tests or to fill out questionnaires is unreasonable, and assigning a diagnosis should be simple and practical for medical staff in the course of daily care. 1,14,16 Sensory and motor function of the tongue for allocation and transportation of the food bolus is necessary for effective masticatory performance. 5 The pressure caused by the movement of the tongue is the main factor in the formation of the food bolus. 33 The accumulation of food residues and microorganisms on the surfaces of the oral cavity or dentures indicates a decrease in motor function. 34 Mastication has been found to be associated with, for example, cognitive activity, food intake and some activities of daily life. 35,36 Fruits, vegetables, nuts and meat in meals are considered to be difficult to chew, and the intake of these is affected by masticatory function. 37 Our findings are consistent with the earlier study and suggest that OHF may lead to the selection of softer and easier-to-chew food. 36 One of the signs in the OHF score was clearness of speech.
Coordinated movements of the masticatory organs are part of speech, and oral dexterity represents a person's articulatory oral motor skill. 18 Tongue-lip motor function is said to be a major component of clear articulation.

TABLE 2 Demographics and findings of the clinical oral examination (%) in the FINORAL study of older adults with different grades of oral hypofunction (OHF) living in long-term care in Helsinki, Finland (N = 319), in study groups 1-3 (Gr1-3)

Mouth dryness has been investigated as one of the reasons for OHF. 7,39 Saliva aids in swallowing, oral cleansing, speech, digestion and taste. 40,41 In our study, neither mouth dryness nor food residues correlated with the functional capabilities of a person, but participants with severe OHF had the highest prevalence of poor scores for both signs relative to the other groups, in line with earlier studies. [40][41][42] Masticatory muscles undergo skeletal muscle atrophy and weakening. The association of oral health with need for daily assistance and with cognitive impairment has been evaluated using ADL, GOHAI and OHIP. 25 Also, the need for assistance in walking has been used to reflect mobility limitations in older people with cognitive disease. 25,43 Questionnaires like OHIP and GOHAI require a lot of time and adequate cognitive ability from an older adult to be able to answer them properly. 23,24 Moreover, such indices as IADL and ADL are used to determine future residency, but they are not appropriate for our aim of evaluating daily assistance need in LTC. [20][21][22] A practical and useful way to identify problems related to the need for daily assistance in an older adult is observation of ordinary daily tasks such as maintenance of oral hygiene, moving and eating. Most of our participants were unable to understand the guidelines and instructions of complicated experiments or questionnaires.
Our findings confirmed a close relationship between the need for daily assistance and, in particular, three of the five analysed signs of OHF in older residents in LTC. In addition, among our participants, the severity of cognitive impairment and the need for assistance increased linearly from no OHF to severe OHF, consistent with earlier research. 25,44,45 Our results, in accordance with previous publications, strongly suggest that OHF is associated with the need for assistance in oral hygiene, moving and eating. [46][47][48] Low chewing capacity has been associated with lower ADL, poor cognitive function, depression and low food intake in community-dwelling older people. 26 Number of teeth did not significantly differ between the study groups, contradicting earlier findings. 49 In contrast, according to our findings, the status of occlusion was associated with OHF. Edentulousness and reduced occluding pairs have traditionally been perceived as part of masticatory impairment, and our categorisation of OHF follows the same pattern. 39,50 In the age- and sex-adjusted logistic regression model, severe OHF was associated with both no occlusion (OR 3.1) and fewer than 10 natural contact units (OR 2.7), further supporting our scoring system. The results suggest that the number of natural teeth does not determine OHF or its severity. The number and quality of occlusal contacts are more significant. Also noteworthy is the role of occlusal contact units created with a removable denture among those who had natural tooth/denture or denture/denture occlusal units. We conclude that occlusal contact units provided by a removable denture are more important in maintaining occlusion than has been assumed.
A strength of our study is that no oral examinations as comprehensive as ours have previously been carried out among older adults living in LTC.
| CONCLUSION
According to our findings, oral hypofunction can be identified by using five signs combined into a three-grade severity score.
CONFLICT OF INTEREST
The authors have no conflict of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions. | 2022-05-28T06:22:56.683Z | 2022-05-27T00:00:00.000 | {
"year": 2022,
"sha1": "c82379cbcfb92db6764a336e021c94c03a49628d",
"oa_license": "CCBYNC",
"oa_url": "https://helda.helsinki.fi/bitstream/10138/346658/1/J_of_Oral_Rehabilitation_2022_Oura_Oral_hypofunction_and_association_with_need_for_daily_assistance_among_older.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "25dbef81c7fbea64366cf0855832fa1aefb837a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201283049 | pes2o/s2orc | v3-fos-license | A critical evaluation of predictive models for rooted soil strength with application to predicting the seismic deformation of rooted slopes
This paper presents a comparative study of three different classes of model for estimating the reinforcing effect of plant roots in soil, namely (i) fibre pull-out models, (ii) fibre break models (including Wu and Waldron's Model (WWM) and the Fibre Bundle Model (FBM)) and (iii) beam bending or p-y models (specifically Beam on a Non-linear Winkler-Foundation (BNWF) models). First, a prediction model for root reinforcement based on pull-out being the dominant mechanism was proposed for different potential slip plane depths. The resulting root reinforcement values were then compared with those derived from the other two types of model. The estimated rooted soil strength distributions were then incorporated within a fully dynamic, plane-strain continuum finite element model to assess the consequences of the selection of rooted soil strength model on the global seismic stability of a vegetated slope (assessed via accumulated slip during earthquake shaking). For the particular case considered in this paper (no roots were observed to have broken after shearing), the root cohesion predicted by the pull-out model is much closer to that of the BNWF model, but is largely over-predicted by the family of fibre break models. In terms of the effects on the stability of vegetated slopes, there exists a threshold value of root cohesion beyond which the critical slip plane bypasses the rooted zones rather than passing through them. Further increase of root cohesion beyond this value has minimal effect on the global slope behaviour. This implies that significantly over-predicted root cohesion from fibre break models, when used to model roots with non-negligible bending stiffness, may still provide a reasonable prediction of overall behaviour, so long as the critical failure mechanism already bypasses the root-reinforced zones.
Introduction
Understanding and quantifying the mechanical effect of vegetation on steep slopes began approximately 50 years ago with direct shear tests performed on soil blocks containing roots (Wu 1976, 2013; Stokes et al. 2014). Since then, various approximate models for predicting root reinforcement of soils have been introduced. Generally, these models can be classified into two types: (i) continuum approaches, which consider the root-soil matrix as a homogenous material of increased strength Δτ (also described as root cohesion, c′r), or (ii) soil-root interaction approaches, which consider roots as structural elements embedded in the soil.
Continuum approaches involve laboratory tests or numerical simulations of representative elements of rooted soil, with the strength being represented as a Mohr-Coulomb failure envelope or yield surface, as conventionally used for non-vegetated soil and in the same way as for fibre-reinforced sands (e.g. Michalowski and Čermák 2003; Zaimoglu and Yetimoglu 2012; Wood et al. 2016). This approach is convenient where the dimensions and spacing of the reinforcement are small and behaviour can be homogenised statistically; otherwise, tests are difficult to perform, time consuming and expensive. For the latter approach, the soil-root interaction properties can be estimated from axial root properties, which can be determined from axial tension or pull-out tests on the roots (e.g. Van Beek et al. 2005; Docker and Hubble 2008; Fan and Su 2008; Mickovski et al. 2009; Sonnenberg et al. 2010; Loades et al. 2010; Comino et al. 2010; Schwarz et al. 2011; Boldrin et al. 2017). The additional resistance within the soil due to the presence of roots may then be introduced into stability calculations either as boundary forces (Greenwood et al. 2004; Greenwood 2006) or used to evaluate c′r for use in the Mohr-Coulomb failure envelope equation (Waldron 1977; Wu et al. 1979; Pollen and Simon 2005).
For the latter approach, the most widely used soil-root interaction models are the family of fibre break models, commonly known as Wu and Waldron's Model (WWM, Wu 1976;Waldron 1977;Wu et al. 1979) and the Fibre Bundle Model (FBM, Pollen and Simon 2005). Both models assume that roots are highly flexible with negligible bending stiffness and will break (structurally) in tension during soil shear deformation, so that the additional strength provided by the roots is a function of root properties only (i.e. tensile strength of roots, root density and root orientations); however, an indirect effect of the soil properties is incorporated in the way these influence the growth of the roots and therefore the aforementioned root properties. The major difference between the two models lies in the ability of FBM to model progressive failure as the weakest roots within the root system break first (Thomas and Pollen-Bankhead 2010), with load shared between different diameters of root by either: (i) equal load applied to individual roots regardless of root dimension, (ii) load apportioned by root diameter or (iii) load apportioned by root cross-sectional area.
For plants with larger structural roots, where root bending rather than axial breakage may be dominant, considering the roots as flexible cables (fine roots) or bending beams (coarse/structural roots) subject to lateral loading provides an alternative means of estimating root reinforcement, e.g. using p-y models, as reported by Duckett (2013), Liang et al. (2015) and Meijer et al. (2019). Such models use a set of transverse force-displacement (p-y) springs, which may be highly non-linear, to model the root-soil interaction in bending. They are computationally efficient (at least compared to continuum-based finite element simulations) and implicitly incorporate the effects of soil properties as well (even where these may vary along the length of a root); however, further development would be required to generalise such analyses into analytical or finite difference-based models which are simple to use in practice.
For plants with shallower, fine and fibrous root systems, axial fibre breakage models may not always work (e.g. Loades et al. 2010), as roots are pulled out of the soil before breaking. Waldron and Dakessian (1981) reported a pull-out-based method for estimating root reinforcement; however, this model has not been widely adopted due to its dependence on root strain, which is relatively difficult to estimate in practice. Schwarz et al. (2010) proposed a more complicated pull-out-based model, the Root Bundle Model (RBM), which incorporated features of the root geometry (e.g. root length, root diameter, root branching pattern and root tortuosity) and mechanics (e.g. maximum tensile strength, Young's modulus, root-soil interfacial friction). Such a model contributes to understanding the pull-out behaviour of roots; however, it has seldom been used in engineering practice or implemented in numerical codes since its development, owing to the complexity of its input parameters (Wu et al. 2015).
In contrast, Wu (2013) proposed that the WWM (and, by extension, the FBM) could be adapted to use the pull-out capacity rather than the breakage tensile strength (whichever is lowest), to capture potential root pull-out. Further studies are, however, required to confirm this, as little information is available regarding direct comparison between breakage strength and pull-out strength for different root species in different soil media (Schwarz et al. 2011; Sonnenberg et al. 2011; Kamchoom et al. 2014). There is some preliminary evidence that analytical models developed in the field of piling engineering may also be an efficient way of estimating the pull-out capacity of roots.
This study will develop further the estimation of root reinforcement from pull-out capacity, based on a series of laboratory pull-out tests of root analogue segments in sandy soil in which the confining stresses were varied to simulate segments of root up to 1.5 m below ground level. The individual roots are modelled as straight vertical analogue elements, as is typically assumed in prediction models (e.g. WWM and FBM). The root shear strength contributions so determined will then be compared with those derived from the different classes of fibre break model and a beam bending (p-y) model, to evaluate the implications of selecting a particular method for quantifying root reinforcement for slope stability calculations. This will be achieved by incorporating the root reinforcements suggested by the different methods within a fully dynamic non-linear finite element (FE) model of a slope subject to earthquake-induced instability. Seismically induced slip is a convenient way of assessing the stability of a slope, as instants during the earthquake where the factor of safety is less than 1.0 result in deformation which can be measured accurately, in contrast to trying to directly 'measure' the factor of safety of a slope under static conditions. The FE simulations are validated against a physical model test of the slope conducted in a geotechnical centrifuge and previously reported by Liang et al. (2015).
Laboratory testing of root analogue pull-out
Soil
A uniformly graded fine sand (HST 95 Congleton silica sand) was used throughout this study, as this was previously used in the centrifuge slope test. Cohesionless soil (sand) is used as it is possible to pluviate this (dry) around the analogues, while also being an analogue for coarse-grained field soils that we have observed previously in field applications (e.g. Meijer et al. 2018). It is a specific fraction of the sand extracted at Bent farm, Congleton, Cheshire. The sand had a mean particle size D50 = 0.16 mm, minimum dry bulk density ρmin = 1462 kg/m3, and maximum dry bulk density ρmax = 1795 kg/m3. This sand has been widely used in previous geotechnical research at the University of Dundee (e.g. Al-Defae et al. 2013). For all tests, the sand was air pluviated dry, resulting in a density of ρ = 1636 ± 8 kg/m3 and a relative density of 55-60%. The critical state friction angle of the sand is φ′cs = 32° across a range of relative densities (9-93%) and effective confining stresses (5-200 kPa), as measured in direct shear tests (Al-Defae et al. 2013).
Model root analogues
Root analogues have previously often been made of either rubber or wood as contrasting analogue materials (e.g. Mickovski et al. 2007, 2010; Sonnenberg et al. 2011), with material properties (strength and stiffness) which bracket the typical mechanical properties of plant roots. Neither of these materials is ideal; however, Liang et al. (2015) pioneered the use of 3D printing to fabricate root analogues from Acrylonitrile Butadiene Styrene (ABS) plastic, which can exhibit mechanical behaviour representative of real roots of woody species such as trees and shrubs, as shown in Fig. 1 (after Liang et al. 2015), based on uniaxial tensile testing within an Instron 4204 loading frame. The 3D printed analogues can be considered as a stack of fibres aligned unidirectionally, and such a structure is very similar to the cellular structure of real roots, with overlying layers of tissue. Among these, the xylem layers, which consist of long, cylindrical cells joined end to end and provide a unidirectional fibre orientation, play a significant role in the mechanical behaviour, driving the characterisation of tensile strength (Karam 2005). As a result, the analogues model tensile strength particularly well, including a dependence on diameter (power function, as commonly used to fit measured data for field roots), and while they are stiffer than field roots, they are a closer representation than either wood or rubber analogues. In Fig. 1, the 'real root' data were collated from the literature (Mora et al. 2009; Warren et al. 2009; Mickovski et al. 2009; Mao et al. 2012).
Throughout this study, straight vertical elements with 150-mm anchorage length were used to simulate individual roots. These rods, with diameters of 1.6 mm, 3 mm and 12 mm, represent roots 1.5 m long in the 1:10 scale centrifuge test summarised later. A steel hook was attached to the top of each root analogue used in the pull-out tests (either using epoxy-resin adhesive for smaller analogues or a screw in the larger ones) so that the Instron 5985 loading frame used to perform the pull-out tests could 'pick-up' the root with minimal disturbance using a horizontal bar.
Model preparation and test procedure
Vertical high-density polyethylene (HDPE) plastic tubes, of 150-mm inner diameter and 500 mm in length, were used as model containers. These tube dimensions (DPipe > 8Droot) ensure that any boundary effects of the container on the pull-out resistance are minimised, according to previous suggestions for piles (Randolph 1981, 2003). The soil was initially pluviated to a depth of 140 mm, at which point a single analogue was pushed vertically into the soil by 20 mm. Pluviation was then continued until the analogue was completely surrounded by sand. This was identical to the procedure used in the centrifuge test. In the centrifuge test, the root analogues represented roots 1.5 m deep, at which depth the confining vertical effective stress would be approximately 24 kPa. To simulate higher confining stresses in the 1:10 scale tests, slotted circular surcharge weights which just fitted within the tubes were added on the soil surface. The slot allowed both easy placement around the analogue after pluviation and also root pull-out through the gap, while minimising stress non-uniformity over the surface area. Four levels of confining stress (q = 0 kPa, 4 kPa, 8 kPa, 12 kPa) were considered in this study, representing 0 m, 0.25 m, 0.50 m and 0.75 m below ground level. The maximum stress applied was limited by the number of weights which could be stacked within the tube.
The root analogues were pulled out at a speed of 10 mm/min using the aforementioned load frame (Instron 5985L7706, Instron Inc., UK). The capacity of the load cell used was 30 kN with an accuracy of 1mN.
Interpretation using the beta (β) method
Assuming a root analogue acts as a miniature pile with capacity provided by interface friction along the root 'shaft', the uplift capacity (Fp) of a segment of length (L) and constant diameter (D) will be given by:

Fp = πD ∫ K tan δ′ σ′v(z) dz (integrated over the anchorage length L)   (1)

where K is the coefficient of lateral earth pressure, δ′ is the friction angle mobilised at the root-soil interface, z is the depth, and σ′v(z) is the vertical effective stress at depth z. In the tests, with a surcharge q applied at the soil surface, σ′v(z) = q + γz, where γ is the unit weight of the soil. The beta method takes its name from the combination of the coefficient of lateral earth pressure and interface friction angle into a single parameter (K tan δ′ = β) multiplying the vertical effective stress. From the pull-out tests, Fp was measured and used to back-calculate the value of β representing that of a root segment at a depth of z = q/γ + L/2 ≈ q/γ. The overall resistance of a 1.5-m-long prototype root in the centrifuge could then be obtained using Eq. (1) and the distribution of β with depth as measured from the tests.
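A minimal sketch of this back-calculation is shown below: the measured peak pull-out force is divided by the shaft area multiplied by the integrated vertical effective stress to obtain β, assuming σ′v(z) = q + γz as above. The unit weight corresponds to the reported density of 1636 kg/m3; the example force is an illustrative assumption, not a measured value from this study.

```python
import math

def beta_from_pullout(F_p, D, L, q, gamma=16.05):
    """Back-calculate beta = K * tan(delta') from a measured peak pull-out
    force F_p (kN), assuming sigma'_v(z) = q + gamma*z over the anchorage
    length. gamma in kN/m^3, q in kPa, D and L in m."""
    # Integral of sigma'_v over the anchorage length: q*L + gamma*L^2/2
    integral_sigma_v = q * L + 0.5 * gamma * L ** 2
    return F_p / (math.pi * D * integral_sigma_v)

def pullout_capacity(beta, D, L, q, gamma=16.05):
    """Forward prediction of Eq. (1) for a segment of constant diameter."""
    return math.pi * D * beta * (q * L + 0.5 * gamma * L ** 2)

# Illustrative example (values are assumptions, not test data): a 3 mm
# diameter, 150 mm long analogue under an 8 kPa surcharge.
print(beta_from_pullout(F_p=0.004, D=0.003, L=0.15, q=8.0))
```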
Finite element modelling
General assumptions
Finite element analyses were performed using the geotechnical engineering FE code PLAXIS 2D 2015. Two-dimensional plane-strain dynamic analyses were conducted in the time domain in order to model the subsequent seismic response of a vegetated slope. The constitutive model used was the 'Hardening Soil model with small-strain stiffness' (HS small; Schanz et al. 1999), which can simulate the non-linear stress- and strain-dependent behaviour of geo-materials and also the limiting stiffness exhibited at very small strains (G0). This specific constitutive model has previously been verified to be effective at simulating the dynamic behaviour of the HST95 sand used in this study (see Al-Defae et al. 2013). A summary of the adopted values of the model parameters for this sand, at the density used in both the pull-out tests and the centrifuge test, is listed in Table 1.
The FE mesh employed is shown in Fig. 2. The slope is 2.4 m high from toe to crest, with a further 0.8 m of soil underneath, which represents one side of a small-height embankment, such as might support road or rail infrastructure at the crest. The slope angle is 27°, representing a 1:2 slope. The soil is modelled with 15-noded triangular elements with 12 Gaussian points.
The boundary conditions (see Fig. 2b) were modelled as an extension of both the left and right boundaries of centrifuge model (see Fig. 2a) to represent a semi-infinite soil condition provided by the Equivalent Shear Beam (ESB) container within centrifuge tests (See Zeng and Schofield 1996). The boundary conditions along the bottom boundary of the mesh are fixity in the vertical direction and prescribed values of acceleration as a function of time in the horizontal direction, while viscous boundaries, which allow lateral deformation in reaction to normal stress and incorporate nonreflecting elements, were applied along the vertical sides of the model in both directions based on the method described by Lysmer and Kuhlemeyer (1969).
The input accelerations consisted of eight successive earthquake motions, from three different historical events with distinct peak ground acceleration (PGA), duration and frequency content, as simulated in the centrifuge test. The sequence of motions is summarised in Table 2. Further details about these motions can be found in Liang et al. (2015).
Root-soil matrix modelling
Two types of physical models were used to represent three types of natural tree root systems in the centrifuge tests (see Liang et al. 2015). Among them, the plate and heart root systems were modelled as an idealised group of straight vertical rods (see Fig. 3b); this may be a reasonable simplification for these two types of root system, where the main vertical roots which cross the shear plane grow downwards from the main horizontal lateral roots. This idealisation is used in the existing analytical models described earlier. However, this idealisation may not be suitable to simulate a tap root system, where lateral roots are interlocked by the main tap roots. For this specific root system, a 3D root cluster model (see Fig. 3a) was also used. It should be noted that the straight vertical rods group model (see Fig. 3b) was arranged to have the same cross-sectional distribution at the level of the middle of the 3D root cluster model, which was designed to identify the corresponding root morphology effect. In the following section, the idealised group of straight vertical rods will be used to assess the suitability of different models for root reinforcement, and the 3D root cluster model will be shown as a reference for comparison. Figure 4 shows a comparison of the measured root reinforcement (in terms of additional soil shear strength due to roots, c′r) provided by the 3D printed root analogues used in this study with in situ Direct Shear Apparatus (DSA) test data on 14 young trees and 1 shrub. In this figure, the data on the additional shear strength provided by both the straight vertical rods group model and the 3D root cluster model were obtained from large laboratory DSA tests across different shear planes and confining normal stresses (Liang et al. 2015). As can be seen in Fig. 4, most of the available in situ test data were concentrated in the top 0.5 m due to the limitations of the shear apparatus. In addition, measured root reinforcement data deeper than 1 m for mature trees (S. mexicana, E. camaldulensis and M. ericifolia, after Shields and Gray 1992; Abernethy and Rutherfurd 2001) are also shown in Fig. 4 for comparison. These data points were obtained using a root reinforcement model (WWM) applied to field data of root distribution and root tensile strength.
The comparison clearly indicates that the printed root analogues provide root reinforcement highly comparable with field roots. It should be noted here that the straight vertical rods group model demonstrated a generally higher magnitude of root reinforcement with depth compared to the 3D root cluster model. This is not unexpected, as the individual roots of the straight vertical rods group model had a greater anchorage length in the lower layer of soil.
The spatial distribution of root groupings used in this study is shown in Fig. 2, representing the areas rooted in the centrifuge test (see later). The root-soil matrix was modelled using a composite set of soil blocks (see Fig. 2) with a distinct additional soil shear strength due to roots, c′r, added to the HST95 soil properties in these zones (e.g. Li et al. 2016, Temgoua et al. 2016). This parameter was determined for the various fibre break and bundle models (WWM and FBM), for the fibre pull-out model using the results of the previously described laboratory tests, and for a recently published beam bending model (Liang et al. 2015). It should be mentioned here that the determination of the size of the zone of root group influence (the width of the blocks in Fig. 2) is essential for accurately predicting the global slope performance. Liang et al. (2015) suggested using the actual extreme boundary circumscribing the root analogues (also known as the critical rooted zone), and this was employed in the modelling (Fig. 3b). Roots can be stretched by 10-20% of their length before failure while most soils fail at strains around 2% (Tobin et al. 2007). When a root-soil system is subjected to shear loading, the soil will typically reach peak strength ahead of the roots, with the roots then providing enhanced shear strength until they themselves fail. The value of c′r used for the WWM simulations described here assumes that all of the roots' strengths are mobilised and broken simultaneously, with the contribution of the roots to shear strength being:

c′r = Rθ Σn (Trn · RARn)   (2)

where Trn is the tensile strength of a root of size (n), RARn is the root area ratio (root cross-sectional area as a fraction of the cross-sectional area of the critical rooted zone) of roots of this size, and Rθ is a root orientation factor, calculated using

Rθ = cos θ tan φ′ + sin θ

where φ′ is the effective stress friction angle of the soil, generally taken as the critical friction angle (see Table 1) in practice (e.g. Wu 1976; Wu et al. 1979; Docker and Hubble 2008), and θ is the angle of shear distortion of the (vertical) root, estimated by

θ = tan⁻¹(x/Z)

where x is the shear displacement at failure (peak shear resistance) and Z is the thickness of the shear zone (Fig. 5a). Wu (1976) found that Rθ was fairly insensitive to normal variation in θ and φ′ (40-70° and 25-40°, respectively), with values ranging from 0.92 to 1.31. Hence, a constant value of 1.2 is used in practice to replace Rθ.
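As a simple numerical illustration of the WWM expressions above, the sketch below evaluates c′r for a small set of roots; the diameters, tensile strengths and rooted-zone area are placeholder values, not the analogue properties used in this study.

```python
import math

def wwm_root_cohesion(diams_mm, tensile_strengths_kpa, zone_area_m2, R_theta=1.2):
    """Wu/Waldron root cohesion c'_r = R_theta * sum(T_rn * RAR_n).
    diams_mm: root diameters (mm); tensile_strengths_kpa: breakage strength
    of each root (kPa); zone_area_m2: cross-sectional area of the critical
    rooted zone (m^2)."""
    c_r = 0.0
    for d, T in zip(diams_mm, tensile_strengths_kpa):
        area = math.pi * (d / 1000.0) ** 2 / 4.0      # root CSA (m^2)
        rar = area / zone_area_m2                     # root area ratio
        c_r += T * rar
    return R_theta * c_r                              # additional strength (kPa)

# Placeholder example: four 1.6 mm and two 3 mm roots in a 0.4 m diameter zone
zone_area = math.pi * 0.4 ** 2 / 4.0
print(wwm_root_cohesion([1.6] * 4 + [3.0] * 2,
                        [30e3] * 4 + [20e3] * 2, zone_area))
```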
In the FBM, c′r is also a function of RAR and tensile strength, but the roots are considered to break progressively rather than simultaneously. When some roots break, the total shear force is redistributed among the remaining roots, with this being apportioned to each root in a bundle according to one of three potential assumptions (after Pollen and Simon 2005): after the jth root has broken, the force carried by each remaining root n is either (i) F/(N − j), i.e. equal load regardless of root dimension, (ii) F·Dn/ΣDi, i.e. load apportioned by root diameter, or (iii) F·Dn²/ΣDi², i.e. load apportioned by root cross-sectional area (with the sums taken over the remaining intact roots). Here n is the root number ordered from strongest to weakest, n ∈ [1, N]; N is the total number of roots; j is the weakest root removed at each simulation step, j ∈ [1, N]; and Trj is the tensile strength of the weakest remaining root. For the above three assumptions, the breaking order of each root can be evaluated by Trj, Trj·Dj and Trj·Dj², for load apportioned by root CSA, root diameter and root number, respectively (Mao et al. 2012).
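The progressive-failure bookkeeping of the FBM can be sketched as follows for the equal-load-sharing assumption: roots break in order of increasing capacity, the load is redistributed among the survivors, and the peak total force over the breakage sequence is retained. This is a schematic implementation of that one sharing rule, not the exact code used in the study.

```python
import math

def fbm_peak_force(diams_mm, tensile_strengths_kpa):
    """Peak force (kN) a bundle can carry under the FBM with equal load
    sharing: roots break in order of increasing capacity T_rj * (pi*D_j^2/4),
    and the maximum bundle force over the breakage sequence is retained."""
    caps = sorted(T * math.pi * (d / 1000.0) ** 2 / 4.0   # capacity of each root (kN)
                  for d, T in zip(diams_mm, tensile_strengths_kpa))
    peak = 0.0
    n_remaining = len(caps)
    for cap in caps:                      # weakest remaining root governs
        # total bundle force when this root reaches its capacity
        peak = max(peak, cap * n_remaining)
        n_remaining -= 1                  # root breaks, load redistributed
    return peak

# Placeholder example (same hypothetical bundle as above):
print(fbm_peak_force([1.6] * 4 + [3.0] * 2, [30e3] * 4 + [20e3] * 2))
```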
The WWM and FBM assume that root breakage represents the maximum tension a given root can carry; however, in reality, some roots will slip through the soil before breakage if the interface shear strength between the root and soil is low. A root may either pull up from the stable subsoil below the shear plane, or pull downwards through the slipping soil mass, depending on the depth of the slip surface (i.e. the length of root in each part of the soil). The root is considered as a series of segments such that for any given element (Fig. 5(b)) below the shear plane, the tensile force and shaft resistance are in equilibrium:

Ti−1 Ar = Ti Ar + Fi

where Ti−1 is the tensile stress generated at the top section of the ith element, Ti is the tensile stress generated at the bottom section of the ith element, li is the length of the ith element, Fi is the shaft resistance provided to the ith element by the surrounding soil (obtained from β, the element perimeter πD and li, as in Eq. (1)), and Ar = πD²/4 is the root cross-sectional area. The maximum tensile stress Tup is generated at the slip plane, while the tensile stress reduces to zero at the tip of the root. Integration of the stresses on these elements gives:

Tup = (1/Ar) Σ Fi (summed over the elements below the slip plane)

where Tup is the maximum tensile stress within the root. In the same way, the maximum tensile stress that can be generated can also be estimated through integration over the root elements above the slip surface:

Tdown = (1/Ar) Σ Fi (summed over the elements above the slip plane)

Equation (2) can then be modified for determination of the root cohesion:

c′r = Rθ Σn (Tn · RARn)   (12)

where Tn is the minimum tensile stress generated within the roots:

Tn = min(Tup, Tdown, Tr)   (13)

If Tn = Tup, the part of the root below the slip plane will be pulled up from the soil below; if Tn = Tdown, the part of the root within the slipping mass will be pulled down from above the slip plane; if Tn = Tr, the pull-out strength is high enough that the root will break.
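A sketch of the pull-out-limited calculation follows: for each root crossing a slip plane at depth z_slip, the mobilisable tensile stress is the least of the breakage strength and the stresses that can be anchored by shaft friction above and below the plane, and these minima are substituted into the WWM-type sum of Eq. (12). The β value, geometry and strengths below are placeholders, and a single depth-independent β is assumed for simplicity.

```python
import math

def anchored_stress_kpa(beta, D, z_top, z_bot, gamma=16.05):
    """Tensile stress (kPa) that shaft friction can mobilise over a root
    segment between depths z_top and z_bot (m), assuming sigma'_v = gamma*z."""
    shaft_force = math.pi * D * beta * 0.5 * gamma * (z_bot**2 - z_top**2)  # kN
    return shaft_force / (math.pi * D**2 / 4.0)

def pullout_limited_cohesion(roots, z_slip, zone_area_m2, beta, R_theta=1.2):
    """roots: list of (diameter m, total length m, breakage strength kPa).
    Returns c'_r (kPa) using T_n = min(T_up, T_down, T_r) for each root."""
    c_r = 0.0
    for D, L, T_r in roots:
        T_down = anchored_stress_kpa(beta, D, 0.0, z_slip)   # anchorage above slip plane
        T_up = anchored_stress_kpa(beta, D, z_slip, L)       # anchorage below slip plane
        T_n = min(T_up, T_down, T_r)
        c_r += T_n * (math.pi * D**2 / 4.0) / zone_area_m2
    return R_theta * c_r

# Placeholder example: slip plane at 0.5 m through 1.5-m-long prototype roots
zone_area = math.pi * 0.4**2 / 4.0
roots = [(0.016, 1.5, 20e3)] * 3 + [(0.03, 1.5, 15e3)] * 2
print(pullout_limited_cohesion(roots, z_slip=0.5, zone_area_m2=zone_area, beta=0.5))
```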
Results and discussion
Pull-out of model root analogues
Figure 6 shows selected pull-out resistance-displacement relationships measured for the model root analogues at varied confining stress. The pull-out resistance increased rapidly initially and reached a peak at ≈ 2 mm displacement. After mobilisation of the peak resistance, the pull-out resistance reduced steadily due to a combination of strain softening at the soil-root interface and the reduction of interface area as the root is pulled out, as suggested by Mickovski et al. (2010). The behaviour is very similar to that of wood and rubber analogues pulled out of similar soil reported by Mickovski et al. (2007), and it was also very similar to that of an unbranched primary root of Pea (Pisum sativum L.) (Hamza et al. 2004) and a tap root of Sunflower (Helianthus annuus L.) (Ennos 1989). Figure 7a shows that the maximum pull-out resistance of the roots increased with diameter for all confining stresses. The maximum pull-out resistance was not linearly proportional to the diameter (or surface area). This indicates that the root-soil interface shear strength varied with diameter.
The back-calculated β value for pull-out tests using Eq. (1) is shown in Fig. 8. For a given diameter, two distinct values were presented depending on the effective embedded depth of root (corresponding to the confining stress level during the tests). For stresses representing shallow embedment (up to 0.15 m below the ground surface), the calculated β value was approximately 3 times larger than those for stresses representing elements of the root deeper than 0.15 m. This may be attributable to high dilation at very low stress. The interface friction angle δ' is a function both of root roughness and soil properties. API codes for piling (API 2000) recommend to estimate it as a function of soil peak friction angle ϕ', that is where k is a dimensionless coefficient to account for root roughness and soil particle size. For a given diameter in this study, the k value should be constant. Bolton (1986) proposed a model to quantify the effects of dilation at low confining stress in terms of an increase in the soil friction angle ϕ' pk , as shown below: where A is the dimensionless factor to account for strain type: A = 3 for triaxial strain; A = 5 for plane strain. I R is given by where I D is the relative density of the sand, σ'(z) is the mean confining stress in the soil at a known depth z; Q and R are fitting parameters that depend on the intrinsic sand characteristics, which can be simplified to 10 and 1 respectively when 0 < I R < 4 (Bolton 1986), while at very low confining stress level (I R > 4), Chakraborty and Salgado (2010) suggested to use7.1 + 0.75 ln p ′ (for plane strain) and 1, respectively (after Liang and Knappett 2017b). The lateral earth pressure coefficient is also incorporated within β. In this study, the root-soil model was prepared using air pluviation around the root analogues so that the lateral earth pressure coefficient K is initially approximately equal to the undistributed lateral earth pressure coefficient K 0 : During uprooting, the movement of the root causes the surrounding soil to dilate and also experience lateral compression strain; as a result, K increases from K 0 until it reaches the limiting passive state, K p (see Fig. 9, after Knappett and Craig 2012): Fig. 6 Results of pull-out tests on root segments with varied confining stress: a 1.6-mm-diameter roots; (b) 3-mm-diameter roots; (c) 12-mm-diameter roots The soil lateral earth pressure coefficient should fall between K p and K 0 , and an upper bound of K p and lower bound of K 0 were considered.
The expected β value was calculated as a function of depth using Eqs. (14)-(18) and a root roughness coefficient k between 0 and 1, as shown in Fig. 10. When k is close to 1 (fully rough interface), β shows a significant increase near the ground surface and is relatively constant at depths below 0.25 m, which explains the measured β values in Fig. 8.
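The sketch below reproduces the form of this β(z) prediction: the peak friction angle is obtained from Bolton's relation at the local confining stress, the interface angle is taken as k·φ′pk, and β is bounded by K0 and Kp multipliers. The simplified Q = 10, R = 1 branch is used, with IR simply capped at 4 rather than applying the Chakraborty and Salgado adjustment mentioned above; the parameter values are assumptions for illustration.

```python
import math

def beta_profile(z, k=1.0, I_D=0.57, phi_cs_deg=32.0, gamma=16.05, A=5):
    """Predict lower/upper bound beta = K*tan(delta') at depth z (m),
    following Eqs. (14)-(18): Bolton's dilatancy correction (plane strain,
    Q = 10, R = 1) plus K_0 and K_p bounds on the earth pressure coefficient."""
    sigma_v = max(gamma * z, 1.0)                   # kPa; avoid ln(0) near surface
    I_R = I_D * (10.0 - math.log(sigma_v)) - 1.0    # Eq. (16), simplified Q and R
    I_R = max(min(I_R, 4.0), 0.0)                   # simple cap in lieu of low-stress Q
    phi_pk = math.radians(phi_cs_deg + A * I_R)     # Eq. (15)
    delta = k * phi_pk                              # Eq. (14)
    K0 = 1.0 - math.sin(phi_pk)                     # Eq. (17)
    Kp = (1.0 + math.sin(phi_pk)) / (1.0 - math.sin(phi_pk))  # Eq. (18)
    return K0 * math.tan(delta), Kp * math.tan(delta)

for depth in (0.05, 0.25, 0.5, 1.0, 1.5):
    lo, hi = beta_profile(depth)
    print(f"z = {depth:4.2f} m : beta between {lo:.2f} and {hi:.2f}")
```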
Prediction of rooted soil strength
The predicted pull-out resistance (POR) of the 1.5-m-long root analogues, for any possible slip plane position with depth, was determined using the measured β values from the previous section (Eq. (1)) and compared with the breakage force, so that c′r for the group of roots allowing pull-out could subsequently be found. As shown in Fig. 11, all three types of roots would be directly pulled out without any breakage, regardless of the potential slip plane depth. However, for natural roots, any branches or tortuosity may increase the pull-out resistance of a single root (e.g. Mickovski et al. 2007; Schwarz et al. 2010). Once the pull-out resistance is high enough to exceed the breakage force, roots will break before pull-out. The derived root cohesion considering the shear plane at different depths (with the corresponding change of confining stress) was then determined using Eqs. (12) and (13) and the data in Fig. 11, and validated against the measured values. This is shown in Fig. 12 with a comparison to the root cohesions calculated using fibre break models (WWM and FBM) and the previously proposed beam-on-nonlinear-Winkler-foundation (BNWF) model presented by Liang et al. (2015). The BNWF model appears to demonstrate the best match with the limited DSA test data points. In contrast, using the root cohesion distribution that assumes pull-out of roots under-predicts the additional resistance contributed by the root analogues, while the fibre break models (WWM and FBM) significantly over-predict it. The corresponding failure mechanisms of these three models are shown schematically in Fig. 13. It should be noted that, following shearing, no roots were observed to fail when subjected to a shear displacement of around 50 mm (see Liang et al. 2015); the measured angles of shear distortion θ ranged from 30° to 60°, with resulting Rθ of 1.04-1.18; hence, the value of 1.2 recommended by Wu et al. (1979) could provide a reasonable representation of Rθ and was used in the calculation. The root with a diameter > 10 mm (the central root in Fig. 3b) was not included in the calculation for WWM or FBM, considering its rotational behaviour during soil slip (Genet et al. 2008; Stokes et al. 2009; Mao et al. 2012). Despite not including the 12-mm-diameter root, the WWM and FBM models appear to significantly over-estimate the additional strength contributed by the root analogues. This overestimation is not surprising, as no roots were observed to break. The implications of this wide range of predicted root strength contributions were assessed by simulating each of the distributions shown in Fig. 12 within the FE simulations.
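For context on why the WWM and FBM bound the problem from above, the sketch below contrasts the two fibre-break estimates of root cohesion using their commonly cited forms (simultaneous breakage for WWM; sequential breakage of the weakest root under global load sharing for FBM). These are generic textbook forms using the Rθ = 1.2 factor mentioned above, not necessarily the exact implementation of this study, and the input values are purely illustrative.

```python
# Hedged sketch contrasting the two fibre break models referred to above.
# Inputs: per-root tensile strength T_r (kPa) and root area ratio RAR (-).
def wwm_cohesion(T_r, RAR, R_theta=1.2):
    """WWM: all roots assumed to break simultaneously, c_r = R_theta * sum(T_r * RAR)."""
    return R_theta * sum(t * a for t, a in zip(T_r, RAR))

def fbm_cohesion(T_r, RAR, R_theta=1.2):
    """FBM (global load sharing): roots break sequentially, weakest first;
    the mobilised resistance is the peak over the breaking sequence."""
    roots = sorted(zip(T_r, RAR), key=lambda ra: ra[0])   # weakest first
    peak = 0.0
    for j, (t_j, _) in enumerate(roots):
        surviving_area = sum(a for _, a in roots[j:])     # roots not yet broken
        peak = max(peak, R_theta * t_j * surviving_area)
    return peak

# Illustrative values only (not measured analogue properties):
T_r = [30e3, 20e3, 10e3]   # kPa
RAR = [1e-4, 3e-4, 8e-4]   # root area ratio per diameter class
print(wwm_cohesion(T_r, RAR), fbm_cohesion(T_r, RAR))
```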
Finite element simulations
Figure 14 shows a comparison of the measured (from centrifuge tests) and simulated (from numerical simulation) settlement at the crest of the model slope under the eight successive earthquake motions. The presence of root analogues caused a significant reduction in permanent slope movement compared to the fallow case, regardless of root morphology. Interestingly, both the centrifuge tests and the numerical simulation results consistently suggest that the slope reinforced by the 3D root cluster model exhibited a greater reduction of crest settlement than the one reinforced by a group of straight vertical rods (see Fig. 14). This is not consistent with the variation of the root cohesion profiles shown in Fig. 4, for which the straight root case is generally higher. It may therefore not be suitable to relate the root reinforcing effects solely to root cohesion. Apart from root cohesion, another key parameter that may affect the root reinforcing effect is the lateral extent of the root system, namely the diameter of the critical rooted zone. As shown in Fig. 3, the diameter of the critical rooted zone of the straight root group (0.4 m) is much smaller than that of the 3D root cluster model (1.0 m). Thus, more attention should be paid to the lateral extent of root systems when assessing the overall performance of a vegetated slope in engineering practice. Further parametric studies are needed to more fully quantify the influence of the diameter of the critical rooted zone.
The implication of using commonly made model simplifications is examined in Fig. 14 through a direct comparison with the test result for the slope reinforced with the straight root group. Using the root cohesion distribution determined from the bending (BNWF) model presented by Liang et al. (2015) provides the best match to the measured response in the centrifuge test. In contrast, using the root cohesion distribution that assumes uplift (pull-out) of roots over-predicts the crest settlement compared with the measured value, as this model does not derive any additional resistance from the bending of the root analogues. Interestingly, when using the significantly over-predicted strength derived from the WWM (see Fig. 12), a good match is obtained. This indicates that once the root contribution to shear strength is sufficiently high, continuing to increase the magnitude of root reinforcement may not have a significant effect on the subsequent response. However, the key issue then becomes the determination of this critical 'minimum' shear strength, for which it is essential to be able to model root-soil interaction correctly. Turning to the failure mechanism of the slope, Fig. 15 shows the mobilised shear strain inside the slope after the earthquake sequence. The presence of roots causes the slip plane to move, depending on the root contribution to shear strength. At low root cohesion (Fig. 15b), the increased cohesion is small enough that the critical mechanism passes through the rooted zone (as is conventionally assumed), though the additional shear strength within the rooted zone decreases the total shear strain mobilised.
When the rooted soil strength is high enough to buttress the movement of the sliding block, the slope is separated into several small sliding blocks, as shown in Fig. 15c, and this improves the overall performance of the slope. Further increase of the root cohesion has minimal effect on slope deformation (Fig. 15d), as the soil is already failing in the unreinforced area between the zones reinforced with root analogues.
Implications for engineering application
A key finding of this study is that where roots are clustered on slopes (e.g. beneath individual shrubs or trees), there exists a threshold shear strength distribution beyond which increasing the strength of the rooted zone (e.g. by stronger or more roots) will not provide any further benefit to stability as the critical failure mechanism will already have moved to bypass the stronger zones and fail through the weaker unreinforced zones. It is therefore important that root strength contributions are not simply added to the resistance of the fallow shear plane, with the implication that indefinitely increasing c r ' will result in ever greater stability. Such a finding is important for the selection of suitable species to protect slopes against natural hazards.
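The conventional practice cautioned against here, namely simply adding c′r to the shear strength on the failure plane, can be illustrated with a textbook infinite-slope factor of safety in which stability appears to grow without bound as root cohesion increases. The formula and parameter values below are a generic, dry-slope illustration (not the centrifuge model's geometry), included only to show the extrapolation that the threshold finding above invalidates.

```python
# Sketch of the conventional "add c_r' to the slip-plane strength" approach that the
# paragraph above cautions against: an infinite-slope factor of safety in which the
# root cohesion term keeps increasing FS indefinitely. Parameter values are illustrative.
import math

def infinite_slope_fs(c_soil, c_root, gamma, z, slope_deg, phi_deg):
    """Dry infinite-slope factor of safety with root cohesion simply added to c'."""
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    tau_available = c_soil + c_root + gamma * z * math.cos(b) ** 2 * math.tan(phi)
    tau_mobilised = gamma * z * math.sin(b) * math.cos(b)
    return tau_available / tau_mobilised

for c_root in (0.0, 5.0, 20.0, 100.0):   # kPa
    print(c_root, round(infinite_slope_fs(0.0, c_root, 18.0, 1.0, 35.0, 32.0), 2))
# FS keeps rising with c_root, whereas the centrifuge and FE results above show a
# plateau once failure bypasses the rooted zones.
```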
In the experiments presented here, the use of the BNWF model to determine the rooted soil strength appeared to result in the best predictions of the slope behaviour. The pull-out model under-predicted the reinforcement, as the root must deform in shear to generate the axial strains needed to reach the pull-out strength; the model also neglects the resistance due to bending of the roots, which results in an under-prediction of the reinforcing effect on the slope. The WWM and FBM models over-predicted the reinforcement as they similarly rely on the roots deforming to generate breakage tensile stresses; however, because these stresses are very high in these structurally competent root analogues, the bending resistance of the analogues means that the soil will fail in shear around the analogues before sufficient tensile stress can be achieved (a feature which is captured in the BNWF model via the limiting strength of the soil springs). For woody root species such as trees and shrubs, structural and coarse roots occupy the majority of the total root mass; for example, Parr and Cameron (2004) observed that a spruce tree had a total of 82,500 roots, of which coarse roots (> 5 mm) comprised 62%.
FBM/pull-out models may continue to be suitable for modelling fibrous root systems where the roots are of such small diameter that they have negligible resistance in bending. The smallest analogues used in this study were 1.6-mm diameter and none of these were observed to break within the centrifuge test under substantial shear deformation. These analogues are notably much finer than the critical diameter (10 mm) used by Mao et al. (2012) to predict the root cohesion of natural diverse mountain forests via a fibre break model. Further study is required to determine the combination of root diameter and soil conditions below which the bending resistance is negligible and fibre break models can be employed.
It was also observed in this study that the lateral extent of the root system plays an important role, sometimes even more important than the magnitude of root cohesion, in improving the overall performance of vegetated slopes (see Fig. 14 and the detailed discussion in the "Finite element simulations" section). This has important implications for the engineering use of vegetation to protect natural or man-made slopes against landslides: it may be advisable to select plant species for their propensity for lateral spread and deep rooting, rather than species with the strongest possible roots, so as to maximise the reinforcing effect given by the vegetation. For the field investigation of vegetated sites, it would be desirable to use hand-held devices for rapid testing of root strength properties at multiple locations (e.g. Meijer et al. 2016, 2018) to better quantify the distribution of rooted soil strength with position, rather than relying on only a limited number of highly detailed yet slow and expensive in situ tests (e.g. large-size direct shear box tests).
Conclusions
The family of fibre break models (i.e. both WWM and FBM) predicted much higher root cohesion than the pull-out models proposed in this study. In all of the experiments, root analogues with diameters ranging from 1.6 to 12 mm did not show any breakage upon vertical pull-out under confining pressures ranging between 0 and 12 kPa. However, the pull-out models under-predicted resistance compared to a BNWF (root bending) model, which demonstrated the best fit to the test results. When the root cohesion estimated by the different predictive models was applied in a dynamic slope analysis, there appears to exist a threshold value of enhanced rooted soil shear strength in the concentrated zones around the root analogues, above which the position of the critical slip plane would bypass the rooted zones, rather than passing through them. Interestingly, further increase of root cohesion beyond this threshold value had very limited influence on the global slope behaviour. This implies that the significantly over-predicted root cohesion obtained from the family of fibre break models for roots that have non-negligible bending stiffness may still provide a reasonable prediction of the overall behaviour, so long as the critical failure mechanism already bypasses the root-reinforced zones. However, these models may over-estimate the stability improvement if the actual rooted soil strength is low.
Fig. 15 Comparison of failure mechanism between fallow slope and root-reinforced slope with cohesion derived from different analytical models: (a) fallow slope; (b) rooted slope with cohesion from fibre pull-out model; (c) rooted slope with cohesion from beam bending model; (d) rooted slope with cohesion from fibre break model (WWM)
Acknowledgements
The James Hutton Institute receives funding from the Scottish Government (Rural & Environmental Services & Analytical Services Division). The authors would also like to express their sincere gratitude to Dr. Gary Callon at the University of Dundee for his assistance in printing the model root analogues and undertaking the test programme. The first author would like to acknowledge the financial support of the China Scholarship Council.
Notation
A, dimensionless factor accounting for strain type; c′, cohesion of soil; c′r, additional soil shear strength due to roots; D, diameter; Dj, diameter of the weakest remaining root; Dn, diameter of root size n; Dpipe, inner diameter of plastic tube; Droot, diameter of root; D50, particle diameter at which 50% is smaller; Eref50, triaxial secant stiffness (at 50% of deviatoric failure stress in drained triaxial compression); Erefoed, oedometric tangent stiffness (in compression); Erefur, unloading-reloading stiffness; Fdown, pull-down force; Fi, shaft resistance provided to the ith element; Fp, uplift capacity; Fup, pull-up force; g, acceleration due to gravity (= 9.81 m/s2); G0, small-strain shear modulus; Eref0, reference small-strain shear modulus; i, number of element; ID, relative density; IR, relative dilatancy index; j, weakest root removed at each simulation; k, dimensionless coefficient accounting for root roughness and soil particle size; K, coefficient of lateral earth pressure; K0, coefficient of earth pressure at rest; Ka, coefficient of active earth pressure; Kp, coefficient of passive earth pressure; li, length of ith element; L, length of pile; m′, power-law index for stress level; n, root number ordered from strongest to weakest; N, total root number; p, reaction from soil due to the deflection of pile; q, bearing pressure; Q, fitting parameter for relative dilatancy index; R, fitting parameter for relative dilatancy index; Rf, ratio of deviatoric failure stress to asymptotic limiting deviatoric stress; Rθ, root orientation factor; RAR, root area ratio; RARn, root area ratio of root size n; Tdown, tensile stress below the shear plane within the root; Ti-1, tensile stress generated at the top section of the ith element; Ti, tensile stress generated at the bottom section of the ith element; Tup, tensile stress above the shear plane within the root; Tr, ultimate tensile strength; Trj, tensile strength of the weakest remaining root; Trn, tensile strength of a root of size n; Tn, maximum tensile stress within the root; x, shear displacement at failure; y, deflection; z, depth of soil; Z, thickness of the shear zone; β, drained interface strength parameter; δ′, root-soil interface friction angle; εs,0.7, shear strain; ρmax, maximum dry bulk density; ρmin, minimum dry bulk density; γ, unit weight; γunsat, unsaturated unit weight; γsat, saturated unit weight; θ, angle of shear distortion; Δτ, additional soil shear strength due to reinforcement; σ′, effective confining stress; σ′v, vertical effective stress; ϕ′, effective angle of friction; ϕ′cs, critical state angle of friction; ϕ′pk, (secant) peak angle of friction.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2019-08-23T14:43:24.135Z | 2019-08-22T00:00:00.000 | {
"year": 2019,
"sha1": "7770f031921afe5ab539f55b8f4477ff9ea6feac",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10346-019-01259-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "420f685b073ee39786be36e545d6257d1b67447d",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
13106554 | pes2o/s2orc | v3-fos-license | Delayed Surgical Treatment of Distal Biceps Tendon Rupture – A Case Report
Abstract Traumatic rupture of the distal biceps tendon is rare. Conservative treatment can result in reduced flexion and supination power with reduced function. This case report emphasizes the need for prompt surgical treatment and describes the possible complications of delayed surgical intervention.
Introduction
Traumatic rupture of the biceps tendon is rare. When rupture does occur, it usually involves the long head at its proximal insertion. Distal biceps tendon rupture accounts for only about 3% of all biceps tendon injuries (1). Tendon ruptures can occur at any age; however, most patients are middle aged, ranging from 30 to 60 years (2). The biceps brachii muscle flexes the elbow and supinates the forearm. It comprises a long head, which originates from the glenoid fossa, and a short head, which arises from the coracoid process, with a distal insertion on the radial tuberosity. The main mechanism of injury to the biceps is either eccentric contraction or resisted flexion of the elbow due to heavy lifting or a fall onto an outstretched hand (3). The patient usually hears or feels a "pop", and a deformity of the muscle contour of the upper arm develops. The distal tendon is normally easily palpable at the antecubital fossa. Failure to recognize a tendon rupture and treat it appropriately could result in muscle atrophy and loss of function (2).
Case Report
A 61-year-old, left-hand-dominant office worker presented to the emergency department with a painful left arm. He was a non-smoker with no previous history of biceps muscle pain or tendonitis; he was fit and well and a keen recreational tennis player. Five days previously, he had tried to use his left arm to catch a motorcycle that was toppling over. He instantaneously felt a 'pop' over his left upper arm and an episode of extreme pain which quickly subsided. The pain then steadily increased over the next five days, at which point he presented to the emergency department. On examination, although there was no apparent bruising, the patient had an obvious "reverse Popeye sign", a prominence in the proximal arm due to proximal displacement of the biceps muscle belly, indicating rupture of the distal biceps tendon. The patient was very tender in the antecubital fossa on palpation and a palpable defect was noticed; this defect was accentuated on active flexion. The patient was able to flex and extend the elbow, with pain, through a range of motion of 0 to 90 degrees, with a Medical Research Council (MRC) power score of 4/5. Active pronation was 30 degrees and active supination was 10 degrees, with an MRC power score of 3/5. The patient had full sensation in all dermatomes of the arm and was vascularly intact. A broad arm sling was applied and the patient was given the next available orthopaedic trauma clinic appointment, four days later. At that review he was advised conservative treatment with analgesia and mobilization as pain allowed.
After discussion with his friends and relatives, the patient returned to the orthopaedic clinic one week later enquiring about the possibility of operative treatment, and was subsequently referred to an upper limb surgeon for a specialist opinion. Unfortunately, the patient had already booked and paid for a holiday prior to the injury and decided to go on vacation in the meantime, so it was not until a further two weeks had passed that he was seen again in clinic. By this time, four weeks after the initial injury, it was decided to attempt operative fixation using a suture anchor as described by Galatz (4).
In the operating room, the patient was positioned supine on an arm board. The patient was adequately prepped and draped and a tourniquet was inflated to 250 mm Hg. A single S-shaped incision was made over the antecubital fossa and careful superficial and deep dissection down to the biceps tendon was performed. The distal part of the biceps tendon was found to be completely ruptured and there was no remnant of the tendon on the radial tuberosity. The tendon was short and frayed, and it was difficult to advance the biceps muscle distally. Despite releasing the adhesions around the tendon and applying a suture to the tendon stump, the stump remained 2 cm away from the radial tuberosity even with the elbow flexed to 90 degrees. The original operative plan was therefore abandoned and the stump of the biceps tendon was sutured onto the brachialis with a polydioxanone (PDS) suture, with the elbow in 40 degrees of flexion.
Layered closure was performed and the patient was placed in a backslab for three weeks. He was advised passive range of movement, avoiding the last 30 degrees of extension, for the next three weeks, and then to commence a full range of movement with the aid of physiotherapy. At follow-up three months after surgery, although the wound had healed well and the patient was neurovascularly intact, he still had an obvious "reverse Popeye" deformity. The patient was able to fully flex and extend the elbow from 0 to 140 degrees actively, with MRC power 5/5. However, supination remained at 10 degrees with reduced power of 3-4/5. The patient had returned to his office work and could perform his activities of daily living, but he could not return to playing recreational tennis because of the limited supination. After detailed discussion with the patient, because he had fully recovered flexion and extension, had returned to work full time and was able to resume his activities of daily living, it was decided not to undertake any further surgical intervention in the form of a tendon graft.
Discussion
Prompt surgical repair (ideally within 3 weeks) of a ruptured biceps tendon is usually the preferred treatment (5). Although conservative treatment can be used in the initial stages following injury (6), patients treated conservatively have been shown to have up to a 30% reduction in flexion power and a 40% reduction in supination power (2). In delayed repair, as in this case, the tendon is often retracted and frayed, making it difficult to bridge the gap from the tendon stump to the radial tuberosity. In this event, a tenodesis can be performed, whereby the surgeon sutures the tendon stump onto the brachialis, thereby restoring flexion power but not supination. If supination is also to be restored, then a tendon graft needs to be performed to bridge the gap. Potential autograft sources include the iliotibial band, tensor fascia lata or Achilles tendon. Allografts can also be used in distal biceps tendon repair (7); however, they may be difficult to obtain in terms of harvesting the graft from donors and cryopreservation. They may also be more expensive than autografts, and may carry a risk of disease transmission and tissue rejection; therefore, they are not routinely used in our hospital.
Conclusions
This case emphasizes the importance of prompt surgical treatment in suspected distal biceps tendon rupture. | 2016-10-11T02:19:10.865Z | 2012-10-10T00:00:00.000 | {
"year": 2012,
"sha1": "387256f459d0d1e1ca49d8726c62e1e54e613e83",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5812/traumamon.7146",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "387256f459d0d1e1ca49d8726c62e1e54e613e83",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18129863 | pes2o/s2orc | v3-fos-license | Quantum heat engine in the relativistic limit: The case of a Dirac particle
We studied the efficiency of two different schemes for a quantum heat engine, by considering a single Dirac particle trapped in an infinite one-dimensional potential well as the "working substance". The first scheme is a cycle composed of two adiabatic and two isoenergetic reversible trajectories in configuration space. The trajectories are driven by a quasistatic deformation of the potential well due to an external applied force. The second scheme is a variant of the former, where the isoenergetic trajectories are replaced by isothermal ones, along which the system is in contact with macroscopic thermostats. This second scheme constitutes a quantum analog of the classical Carnot cycle. Our expressions, as obtained from the Dirac single-particle spectrum, converge in the nonrelativistic limit to some of the existing results in the literature for the Schrödinger spectrum.
I. INTRODUCTION
A classical heat engine consists of a cyclic sequence of reversible transformations over a "working substance", typically a macroscopic mass of fluid enclosed in a cylinder with a mobile piston at one end [1,2]. The two most famous examples are the Otto and Carnot cycles. In particular, the classical Carnot cycle comprises four stages, two isothermal and two adiabatic (iso-entropic) ones. Ideal quasi-static and reversible conditions are achieved by assuming that an external force, which differs only infinitesimally from the force exerted by the internal pressure of the fluid, is applied to the piston in order to let it move extremely slowly [1,2]. On the other hand, the isothermal trajectories are performed by bringing the fluid contained by the cylinder into thermal equilibrium with external reservoirs at temperatures T C < T H , respectively.
A quantum analogue of a heat engine involves a sequence of transformations (trajectories) in Hilbert space, where the "working substance" is of quantum mechanical nature [3][4][5][6][7][8][9][10][11]. One of the simplest conceptual realizations of this idea is a system composed of a single particle trapped in a one-dimensional infinite potential well [3][4][5][6][9]. The different trajectories are driven by a quasi-static deformation of the potential well, due to the application of an external force. Two different schemes of this process have been discussed in the literature, in the context of a non-relativistic particle whose energy eigenstates are determined by the Schrödinger spectrum [3,4,10]. In this paper, we shall revisit these approaches and study the performance of the corresponding heat engine for a single Dirac particle. Since Dirac's equation describes the spectra of relativistic particles, results obtained for this case should reduce to the corresponding ones from Schrödinger's equation in the non-relativistic limit. As we shall discuss below, the transition between the relativistic and non-relativistic regimes is determined by the ratio λ/L, with λ = 2πħ/(mc) the Compton wavelength of the particle and L the width of the potential well. The non-relativistic limit corresponds to the regime where λ/L ≪ 1, while evidence of the underlying relativistic nature of the spectrum manifests in terms of finite corrections in powers of λ/L. Another limit of theoretical interest is the "ultra-relativistic" case of massless Dirac particles, where λ/L → ∞. An important realization of the latter case in solid-state systems is provided by conduction electrons in the vicinity of the so-called Dirac point in graphene [12][13][14][15].
II. A DIRAC PARTICLE TRAPPED IN A ONE-DIMENSIONAL INFINITE POTENTIAL WELL
The problem of a Dirac particle in the presence of a one-dimensional, finite potential well V (x) is expressed by the Dirac Hamiltonian operator [16,17], Here,α i = 0σ î σ i 0 ,β = I 0 0 −I are Dirac matrices in 4 dimensions, withσ i the Pauli matrices. The domain of this operator is D(Ĥ) = H, with H = L 2 (R) ⊕ L 2 (R) ⊕ L 2 (R) ⊕ L 2 (R) ≡ L 2 (R, C 4 ) the Hilbert space of (complex-valued) 4-component spinorŝ ψ(x) = (φ 1 , φ 2 , χ 3 , χ 4 ), where each component φ i , χ j ∈ L 2 (R) is therefore a square-integrable function in the unbounded domain R. For a finite potential well, of the form V (x) = V 0 Θ(|x| − L/2) with 0 < V 0 < ∞, it is well discussed in the classical literature [16,18] that confinement is possible only if the energy E of the particle inside the well is in the interval |E − V 0 | < mc 2 , which corresponds to an exponentially vanishing probability current outside (x > L/2 or x < −L/2) the confining region. If, on the contrary, the energy is such that E − V 0 < −mc 2 , then so-called Klein tunneling occurs: The particle can tunnel through the barrier with a finite probability current, but paradoxically with an antiparticle character. This behavior and some of its consequences is denominated Klein's paradox [16][17][18]. We remark that this effect occurs when attempting to confine the particle with a finite potential well. The mathematical and physical pictures are rather different when considering the singular limit of an infinite potential well, The singular character of the infinite potential well, exactly as in the more familiar Schrödinger case [19], requires a different mathematical statement of the problem: one needs to define a self-adjoint extension [16,[19][20][21] of the free particle Hamiltonian In general, the domain ofĤ 0 and its adjointĤ † 0 verify D(Ĥ 0 ) ⊆ D(Ĥ † 0 ) [16]. However, physics requires forĤ 0 to be self-adjoint. The self-adjoint extension is obtained by imposing appropriate boundary conditions [16,[19][20][21] on the spinors at the boundary ∂Ω of the finite domain Ω, as discussed in detail in Appendix A. In particular, the condition of simultaneous vanishing of the four components of the spinor at the boundaries x = ±L/2 is not compatible with self-adjointness and, moreover, leads to the trivial null solutionψ(x) = 0 ∀x ∈ Ω. Instead, as shown in Appendix A, the mathematical condition for self-adjointness corresponds to a vanishing probability current at the boundary ∂Ω of the domain Ω = [−L/2, L/2], j 1 (x = ±L/2) = 0, with j 1 (x) = cψ † (x)α 1ψ (x). The physical interpretation of this mathematical condition is rather obvious: the particle is indeed "trapped" inside the infinite potential well, since there is a zero probability current through the boundary walls. This approach has been used in the past, for instance, to model confinement of hadrons in finite regions of space [22,23]. Summarizing, the eigenvalue problem for the self-adjoint extension of the free Dirac Hamiltonian Eq.(3) representing particles "trapped" inside the infinite potential well is given bŷ subject to the boundary conditions As shown explicitly in Appendix A, there is a whole family of eigenfunctions of Eq. (4) in Ω = [−L/2, L/2], satisfying the boundary conditions Eq.(5). This has been investigated for instance in [20,21]. 
On the other hand, a fundamental discrete symmetry of the Dirac Hamiltonian is its invariance under parity [16][17][18] (P : x → −x, p x → −p x ), [Ĥ 0 ,P ] = 0, which corresponds to a mirror spatial reflection by leaving the spin direction invariant. It is straightforward to show that under parity, the spinor transforms as [16][17][18] Pψ(x)P −1 = e iφβψ (−x). On the other hand, the probability density defined as ρ( Here, the explicit covariant notation γ i =βα i (i=1,2,3), γ 0 =β was invoked. Under parity, the space components of a Lorentz 4-vector invert sign (j i → −j i , i = 1,2,3), whereas the time component j 0 remains invariant [16][17][18], and hence ρ(−x) = ρ(x). For the physically acceptable eigenfunctions of the self-adjoint extension of the Dirac Hamiltonian Eqs.(4),(5), we thus demand that this symmetry is satisfied ∀x ∈ Ω = [−L/2, L/2]. As shown in detail in Appendix A, this leads to a quantization of the wavenumbers, k ≡ k n = nπ/L, for n ∈ N. The spinoreigenfunctions obtained from this analysis arê with associated discrete energy eigenvalues, Here, we have subtracted the rest energy, and λ = 2π /(mc) is the Compton wavelength. The positive sign corresponds to the particle solution [17]. The spectrum predicted by Eq. (7) , as well as the probability density obtained from Eq.(6), can be compared with the corresponding Schrödinger problem, whose eigenvalues are given by Here, a single eigenvalue for the energy is obtained and, moreover, it scales as n 2 , in contrast with the Dirac particle case where a richer scaling with n is observed. In particular, in the regime λ/L ≪ 1, we have: corresponding to the non-relativistic limit. Beyond this regime, relativistic corrections depending on the finite ratio λ/L are observed.
Another interesting limit of Eq. (7) corresponds to a massless Dirac particle with λ → ∞, where the spectrum reduces to a linear dependence on n (see the reconstruction below). This situation may be of interest in graphene systems, where conduction electrons in the vicinity of the so-called Dirac point can be described as effective massless chiral particles, satisfying Dirac's equation in two dimensions [12][13][14][15].
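The display equations referenced as Eqs. (6)-(11) are not shown explicitly in the surrounding text. For reference, a hedged reconstruction of the single-particle spectra for the quantized wavenumbers k_n = nπ/L, with the rest energy subtracted, is given below; the limiting behaviours agree with those quoted in the text, although the paper's own equation numbering and notation may differ in detail.

```latex
% Reconstructed (assumed) forms of the Dirac and Schrodinger spectra and their limits.
\begin{align*}
  E_n^{D}(L) &= \sqrt{\Bigl(\frac{n\pi\hbar c}{L}\Bigr)^{2} + m^{2}c^{4}} - mc^{2}
             = mc^{2}\left[\sqrt{1+\Bigl(\frac{n\lambda}{2L}\Bigr)^{2}}-1\right],
             \qquad \lambda=\frac{2\pi\hbar}{mc},\\
  E_n^{S}(L) &= \frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}}
             = \frac{mc^{2}}{2}\Bigl(\frac{n\lambda}{2L}\Bigr)^{2},\\
  E_n^{D}(L) &\simeq E_n^{S}(L)\Bigl[1-\tfrac{1}{4}\bigl(\tfrac{n\lambda}{2L}\bigr)^{2}+\dots\Bigr]
             \quad (\lambda/L\ll 1),
  \qquad
  E_n^{D}(L) \to \frac{n\pi\hbar c}{L} \quad (\lambda/L\to\infty).
\end{align*}
```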
III. A SINGLE-PARTICLE QUANTUM HEAT ENGINE
As the "working substance" for a quantum heat engine, let us consider a statistical ensemble of copies of a single-particle system, where each copy may be in any of the possible different energy eigenstates. We therefore say that the single-particle system is in a statistically mixed quantum state [24]. The corresponding density matrix operator is ρ = Σ_n p_n(L) |ψ_n(L)⟩⟨ψ_n(L)|, with |ψ_n(L)⟩ an eigenstate of the single-particle Hamiltonian Eq. (3), corresponding to the spinors defined by Eq. (6). This density matrix operator is stationary, since in the absence of an external perturbation [24] iħ ∂_t ρ = [Ĥ, ρ] = 0. Here, the coefficient 0 ≤ p_n(L) ≤ 1 represents the probability for the system, within the statistical ensemble, to be in the particular state |ψ_n(L)⟩. Therefore, the {p_n(L)} satisfy the normalization condition Tr ρ = Σ_n p_n(L) = 1.
In the context of Quantum Statistical Mechanics, entropy is defined according to von Neumann [24,25] as S = −k_B Tr(ρ ln ρ). Since in the energy eigenbasis the equilibrium density matrix operator is diagonal, the entropy reduces to the explicit expression S = −k_B Σ_n p_n(L) ln p_n(L) (Eq. (12)). In our notation, we emphasize explicitly the dependence of the energy eigenstates {|ψ_n(L)⟩}, as well as of the probability coefficients {p_n(L)}, on the width of the potential well L. The ensemble-average energy of the quantum single-particle system is E = ⟨Ĥ⟩ = Σ_n p_n(L) E_n(L) (Eq. (13)). For the statistical ensemble just defined, we conceive two different schemes for a quantum analogue of a thermodynamic heat engine. The first one, which we shall refer to as the Isoenergetic cycle, consists of four stages of reversible trajectories: two iso-entropic and two iso-energetic ones, as originally proposed by Bender et al. [3,4] in the context of a Schrödinger particle. During the iso-energetic trajectories, the ensemble-average energy Eq. (13) is conserved, while during the iso-entropic ones, the von Neumann entropy defined by Eq. (12) remains constant. We distinguish this first scheme from the quantum Carnot cycle to be discussed next, where the iso-energetic trajectories in Hilbert space are replaced by isothermal processes. During these stages, the system is brought into thermal equilibrium with macroscopic reservoirs at temperatures T_C ≤ T_H, respectively.
IV. THE ISOENERGETIC CYCLE
The Isoenergetic cycle, a scheme for a quantum heat engine originally proposed by Bender et al. [3,4] in the context of a single Schrödinger particle, is composed of two isoentropic and two isoenergetic trajectories. In particular, during the isoenergetic trajectories, the "working substance" must exchange energy with an energy reservoir [5,6]. A possible practical realization of this cycle was proposed in the context of non-relativistic particles in Ref. [5], where the working substance exchanges energy with single-mode radiation in a cavity, which acts as an energy reservoir.
The system trajectories in Hilbert space are assumed to be driven by reversible quasi-static processes, in which the walls of the potential well are deformed "very slowly" by an applied external force, such that the distance L is modified accordingly. Along these trajectories, the total change in the ensemble-average energy of the system is given by dE = Σ_n p_n(L) dE_n(L) + Σ_n E_n(L) dp_n(L) (Eq. (14)). The first term in Eq. (14) represents the total energy change due to an iso-entropic process, whereas the second term represents a trajectory where the energy spectrum remains rigid.
Let us consider first the iso-entropic process, where {p n (L)} = cnt. We remark that this represents a strong sufficient condition for the entropy to remain constant along the trajectory, but is not a necessary one [26]. Under quasi-static conditions, the external force driving the change in the width of the potential well is equal to the internal "pressure" of the one-dimensional system, F = −(∂E/∂L) S . Therefore, the work performed by the system against the external force, when the width of the potential well expands from L = L 1 to L = L 2 , is given by For the case of a Dirac particle, the work performed under iso-entropic conditions is given after Eq.(15) and Eq. (7) by Notice that our sign convention is such that, for an expansion process L 2 > L 1 , the work performed by the system is negative [2], indicating that the ensemble-averaged energy is decreasing during expansion, as in a classical ideal gas.
Let us now consider an iso-energetic process, that is, a trajectory in Hilbert space defined by the equation dE = 0. The solution to this equation, for L ∈ [L 1 , L 2 ], is given along with the normalization condition Eq. (11). Clearly, by definition an iso-energetic process satisfies with δW 1→2 ≡ (δE) {pn(L)}=cnt. and δQ 1→2 ≡ (δE) {En(L)}=cnt. . In the former equation, the integral along the trajectory L 1 → L 2 gives The first term W 1→2 corresponds to the mechanical work performed by the system against the external force, which drives the change in the width of the potential well at constant energy. The second term Q 1→2 = −W 1→2 corresponds to the amount of energy exchanged by the system with the environment, in order to rearrange its internal level occupation. This equation is in analogy with the first law of thermodynamics for macroscopic systems, when considering a reversible process over a classical ideal gas which is being compressed/expanded at constant internal energy conditions. The first term has a precise correspondence with the mechanical work for expansion/compression, whereas the second is in correspondence with the heat exchanged by the gas with the environment in order to satisfy total energy conservation. According to the previous analysis, the heat exchanged by the system with the environment along the isoenergetic process is given by Evidently, Eq.(17) combined with the normalization condition Eq. (11) are not enough to uniquely define the coefficients p n (L) along the iso-energetic trajectory. An exception is the case when the energy scale of all the processes involved is such that only transitions between the ground state (n = 1) and the first excited state (n = 2) are possible. In this effective two-level spectrum, combining Eq.(17) with the normalization condition Eq.(11), the trajectory for the iso-energetic process is described by the following relation with p 2 (L) = 1 − p 1 (L) after the normalization condition Eq. (11). The heat exchanged by the system with the environment during an iso-enegetic trajectory connecting the initial and final states L 1 → L 2 , for the case of a Dirac particle, is given by the expression where E D n (L) was defined in Eq. (7). For the effective two-level system previously described, we shall conceive a cycle, as depicted in Figure 1, which starts in the ground state with p 1 (L A ) = 1. Then, the system experiences an iso-energetic expansion from L A → L B > L A . Then, it experiences an iso-entropic expansion from L B → L C > L B , then an iso-energetic compression L C → L D < L C , and finally it goes back to the initial ground state through an iso-entropic compression L D → L A .
We shall assume that the final state after the isoenergetic process L_A → L_B corresponds to maximal expansion, that is, the system ends completely localized in the excited state n = 2. In this condition, Eq. (21) reduces to a simpler form (Eq. (23)). The condition of total energy conservation between the initial and final states connected through an iso-energetic process, for maximal expansion, leads to the equation E^D_1(L_A) = E^D_2(L_B) (Eq. (24)), where p_1(L_A) = p_2(L_B) = 1. Therefore Eq. (24), given the Dirac spectrum Eq. (7), implies that L_B/L_A = 2.
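For the Dirac spectrum, this energy-balance condition can be checked explicitly. The derivation below uses the reconstructed form of the spectrum given earlier (an assumption about notation, not a quotation of the paper's Eq. (24)):

```latex
% Worked check that maximal isoenergetic expansion from n = 1 to n = 2 requires L_B = 2 L_A.
\begin{align*}
  E_1^{D}(L_A) = E_2^{D}(L_B)
  &\;\Longrightarrow\;
  mc^2\!\left[\sqrt{1+\Bigl(\tfrac{\lambda}{2L_A}\Bigr)^{2}}-1\right]
  = mc^2\!\left[\sqrt{1+\Bigl(\tfrac{2\lambda}{2L_B}\Bigr)^{2}}-1\right]\\
  &\;\Longrightarrow\;
  \frac{\lambda}{2L_A}=\frac{\lambda}{L_B}
  \;\Longrightarrow\; L_B = 2L_A .
\end{align*}
```

The ratio is independent of λ, so the same result holds in both the Schrödinger (λ/L → 0) and the massless (λ/L → ∞) limits.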
The heat released to the environment along this first stage of the cycle, given in Eq. (25), is calculated from Eq. (22).
The next process is an iso-entropic expansion, characterized by the condition p_2(L_B) = p_2(L_C) = 1. We shall define the expansion parameter α ≡ L_C/L_B > 1. The work performed during this stage, with L_B = 2L_A (as discussed before), is calculated from Eq. (16). The cycle continues with a maximal compression process from L_C = 2αL_A to L_D = αL_A under iso-energetic conditions. The energy conservation condition is in this case similar to Eq. (24), with p_2(L_C) = p_1(L_D) = 1. The heat exchanged by the system with the environment along this process is obtained by applying Eq. (22), with E^D_n(L) as defined in Eq. (7). The last path along the cycle is an adiabatic process, which returns the system to its initial ground state with p_1(L_D) = p_1(L_A) = 1. The work performed during this final stage is obtained by applying Eq. (16). It is interesting to check that the work along the two iso-entropic trajectories cancels, that is, W_B→C + W_D→A = 0. Therefore, the efficiency of the cycle is defined by the ratio given in Eq. (29).
Here, we have defined the function θ(L) = 1 + (λ/2L)^2, with λ = 2πħ/(mc) the Compton wavelength. It is important to remark that in the non-relativistic limit λ/L → 0, the expression in Eq. (29) reduces to the known Schrödinger limit. In the "ultra-relativistic" case of a massless Dirac particle, λ/L → ∞, the efficiency converges to a length-independent limit. The trend of the efficiency between both limits is shown in Figure 3. It is worth remarking that, since the expansion parameter α = L_C/L_B > 1, the efficiency in the strict non-relativistic (Schrödinger) regime λ/L → 0 (Eq. (31)) is the highest possible one. This is also clear from the asymptotics of the curves displayed in Fig. 3, where the "ultra-relativistic" limit corresponding to massless Dirac particles (λ/L → ∞) indeed represents the least efficient regime for a fixed expansion parameter α.
V. THE QUANTUM CARNOT CYCLE
In this section, we shall discuss the quantum version of the Carnot cycle, as applied to the statistical ensemble of Dirac single-particle systems under consideration. The thermodynamic cycle which defines the corresponding heat engine is composed of four stages or trajectories in Hilbert's space: Two isothermal and two iso-entropic processes.
In the first stage, the system is brought into contact with a thermal reservoir at temperature T H . By keeping isothermal conditions, the width of the potential well is expanded from L A → L B . Since thermal equilibrium with the reservoir is assumed along this process, the von Neumann entropy of the system achieves a maximum for the Boltzmann distribution [24,25] with β = (k B T ) −1 , and the normalization factor is given by the partition function Here, the second expression, as shown in Appendix, is the continuum approximation to the discrete sum, valid in the physically relevant regime λ ≪ L. Here, K 1 (x) is a modified Bessel function of the second kind. From a similar analysis as in the previous section, we conclude that the heat exchanged by the system to the thermal reservoir is given by In the second line, we have done integration by parts, and we made direct use of the definition Eq.(32) of the partition function. The final result follows from substituting the explicit expression for the partition function Eq.(34). Similarly, during the third stage of the cycle, the system is again brought into contact with a thermal reservoir, but at a lower temperature T C < T H . Therefore, the probability distribution of states in the ensemble is p n (L, β C ), as defined in Eq.(33), but with T C instead of T H . The heat released to the reservoir during this stage is given by the expression The second and fourth stages of the cycle constitute iso-entropic trajectories. In order to analyze these stages, we shall derive the "equation of state" for the statistical ensemble of single-particle systems. When substituting the Boltzmann distribution p n (β, L) = [Z(β, L)] −1 exp(−βE D n (L)) into the expression for the von Neumann entropy Eq.(12), we obtain the relation Here, E = Ĥ is the ensemble-average energy, as defined by Eq. (13). The equation of state is obtained from Eq.(37) as In the last line, we have used the explicit analytical expression Eq.(34) for the partition function to calculate the derivative. The equation of state Eq.(38) reflects that the ensemble of systems behaves as a one-dimensional ideal gas. This is not surprising, since the ensemble-average energy is given by where we have substituted explicitly the expression for the partition function, and in the final step we defined z = β mc 2 . Here, K 1 (x) is a modified Bessel function of the second kind. Eq.(39) shows that the ensemble average energy of the system is a function of the temperature solely, from which the ideal gas equation of state follows as a natural consequence. We can thus define the "specific heat" at constant length, which after Eq.(39) is given by where z = βmc 2 . It is interesting to remark that, based on the asymptotic behavior of the modified Bessel functions of the second kind K n (z) ∼ π/2z −1/2 exp(−z) + O(z −1 ), the specific heat defined in Eq.(40) presents the asymptotic limit C L → k B /2 when k B T ≪ mc 2 . This is the well known result for a classic non-relativistic ideal gas in one dimension. This feature and the general temperature dependence of the ensemble specific heat is displayed in Fig. 6. The change in the ensemble averaged energy of the system, for a general process, is dE = T dS − F dL. 
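The explicit expressions for the partition function (Eq. (34)) and the equation of state (Eq. (38)) are not displayed in the text above. A hedged reconstruction, obtained from the continuum limit described in Appendix B and consistent with the one-dimensional ideal-gas behaviour stated above (assumed notation; not the paper's verbatim equations), is:

```latex
% Reconstructed (assumed) continuum partition function, force, and ensemble energy.
\begin{align*}
  Z(\beta,L) &\simeq \frac{2L}{\lambda}\int_{0}^{\infty}\!dx\;
      e^{-\beta mc^{2}\left(\sqrt{1+x^{2}}-1\right)}
      = \frac{2L}{\lambda}\,e^{z}K_{1}(z), \qquad z=\beta mc^{2},\\
  F &= \frac{1}{\beta}\Bigl(\frac{\partial \ln Z}{\partial L}\Bigr)_{\beta}
     = \frac{k_{B}T}{L}
     \;\;\Longrightarrow\;\; FL = k_{B}T \quad\text{(one-dimensional ideal gas)},\\
  E &= -\Bigl(\frac{\partial \ln Z}{\partial \beta}\Bigr)_{L},
     \quad\text{a function of temperature only, since } L \text{ enters } Z \text{ only as a prefactor.}
\end{align*}
```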
Since the ensemble-average energy is a function of temperature only, the differential equation for an iso-entropic trajectory (dS = 0) is [27] Separating variables, after some algebra we obtain Integrating Eq.(42) between initial conditions (z 0 , L 0 ) and final conditions (z, L), we have Here, we made use of the identity for the modified Bessel function of the second kind K ′ 1 (z) = −K 0 (z)−z −1 K 1 (z), with z = βmc 2 . It is interesting to check that in the non-relativistic limit z ≫ 1, given the asymptotic behavior of the modified Bessel function of the second kind K n (z) ∼ π/2z −1/2 exp(−z) + O(z −1 ), Eq.(43) reduces to LT 1/2 = cnt. for the iso-entropic trajectory. On the other hand, in the "ultra-relativistic" limit of a massless Dirac particle with z → 0, the iso-entropic trajectory is given by the equation LT = cnt.
We are now in conditions to discuss the second and fourth stages of the Carnot cycle. The second stage of the process corresponds to an iso-entropic trajectory, parameterized in differential and integral form by Eqs.(42) and (43), respectively. The work performed by the system during this process is given by Here, in the second line we have used the differential equation defining the iso-entropic trajectory, Eq.(42). Evaluating the integral in Eq.(44), we explicitly obtain The fourth and final stage of the cycle also corresponds to an iso-entropic trajectory L D → L A , and the work performed by the system against the external applied force is obtained similarly as in Eq.(44), Clearly, after Eqs.(45) and (47), we have W B→C + W D→A = 0, and hence the contribution of the work along the iso-entropic trajectories vanishes. From the equation for the iso-entropic trajectory Eq.(43), we conclude that the length ratios are determined by the temperatures of the thermal reservoirs, From Eq.(47), we also obtain L A /L B = L D /L C . Substituting this relation in the expression for the efficiency of the cycle, from Eq.(35) and Eq.(36) we have Therefore, we have recovered the expression for the efficiency identical to the classical Carnot cycle.
VI. CONCLUSIONS
By considering as a "working substance" the statistical ensemble for a Dirac single-particle system trapped in a one-dimensional potential well, we have analyzed two different schemes for a quantum heat engine. The first, that we have referred to as the Isoenergetic cycle, consists of two iso-entropic and two isoenergetic trajectories. We obtained an explicit expression for the efficiency of this cycle and showed that our analytical result, in the nonrelativistic limit λ/L → 0, reduces to the corresponding one for a Schrödinger particle, as reported in the literature [3]. Our results also indicate that the efficiency for the Isoenergetic cycle is higher in the non-relativistic region of parameters, that is for L ≫ λ, when comparing at the same compressibility ratios α = L C /L B > 1. An exception is the case of massless Dirac particles, with λ = ∞, where it is not possible to achieve non-relativistic conditions. This is of potential practical interest for graphene systems, where conduction electrons are indeed described as massless chiral Dirac fermions [12][13][14][15].
As a second candidate for a quantum heat engine, we discussed a version of the Carnot cycle, composed by two iso-thermal and two iso-entropic trajectories. In order to achieve iso-thermal conditions, we consider that the single-particle system is in thermal equilibrium with macroscopic reservoirs at temperatures T C < T H , respectively. Therefore, the statistical ensemble under consideration is described by the density matrixρ = e −βĤ /Z, with Z[β, L] = Tre −βĤ the partition function. We showed that the statistical properties of the ensemble are such that an equation of state can be defined, as well as a specific heat, in analogy with a classical ideal gas in one dimension. We obtained the equation for the iso-entropic trajectory, which in the non-relativistic limit k B T ≪ mc 2 reduces to the classical result LT 1/2 = cnt. On the other hand, we also showed that in the "ultra-relativistic" limit of a massless Dirac particle, as for instance conduction electrons in graphene, the iso-entropic trajectory is defined by the equation LT = cnt. We also showed that the efficiency for the Quantum Carnot cycle satisfies the same relation that the classical one in terms of the temperatures of the thermostats, that is η = 1 − T C /T H . Therefore, thermodynamics is remarkably robust: The Carnot limit holds classically just as it does in the quantum regime, even in the quantum-relativistic limit.
In general, the domain ofĤ 0 and its adjointĤ † 0 verify D(Ĥ 0 ) ⊆ D(Ĥ † 0 ) [16]. On the other hand, physics requires forĤ 0 to be self adjoint. This condition is mathematically obtained by looking for a self-adjoint extension ofĤ 0 , upon imposing appropriate boundary conditions for the spinors at the boundary ∂Ω of the finite region Ω. Therefore, for any pair of spinors in the domain of the self-adjoint extension of the free Hamiltonian, we request for their inner product to satisfy Here,n is a unit vector normal to the surface ∂Ω. We have used integration by parts, and applied the generalized divergence theorem to transform the volume integral over the region Ω into an integral over the boundary ∂Ω.
Notice that, in particular, condition Eq.(50) implies that for allψ(x) ∈ D(Ĥ 0 ), This mathematical condition has a rather obvious physical interpretation, since the probability current vanishing at the boundary ∂Ω implies that the particle is indeed "trapped" inside the finite region Ω. We remark that this mathematical approach, with analogous boundary condition, has been used in the past to model confinement of hadrons in a finite region of space (so-called "bag model") [22,23]. Therefore, the eigenvalue problem for the self-adjoint extension of the free Dirac Hamiltonian in the onedimensional region Ω = [−L/2, L/2], must be subject to the boundary conditions It is convenient to write Eq.(52) as a pair of coupled differential equations for each of the two components of the spinor, From Eq.(54), we obtain This equation determines the "small" component χ of the spinor in terms of the "large" component φ. The last is obtained upon substitution of Eq.(55) into Eq.(54), as the solution to the eigenvalue problem For the one-dimensional region we are concerned Ω = [−L/2, L/2], we have φ = φ(x), and Eq.(56) becomes Here, we defined the wavenumber The general solution to Eq.(57) is We shall consider a spin-polarized particle, so we select the spin up solutions by setting A 2 = 0, B 2 = 0. With this choice, from Eq.(55) we obtain χ, which combined with Eq.(59) gives the general solution for the fourcomponent spinor By requesting for the condition of vanishing current at the boundaries Eq.(53), we obtain which implies |A 1 | = |B 1 |. We express this condition in the form with θ the relative phase between the two (complex) amplitudes. Therefore, we obtain an entire family of (spin polarized) eigenfunctions for the self-adjoint extension of the Dirac operator Among all the possible phases θ in Eq.(63), the appropriate physical solution should converge to the solution of the equivalent Schrödinger problem in the nonrelativistic limit k ≪ mc. In this former case, the Schrödinger (S) wave functions are of the form ϕ S n (x) = A sin(nπ(x − L/2)/L), with corresponding probability density ρ S (x) ∝ sin 2 (nπ(x − L/2)/L). Therefore, we set the phase difference θ = kL + π, where we have defined A ≡ 2iA 1 e ikL/2 . Clearly, the "small" component of the spinor in Eq.(64) vanishes in the non-relativistic limit k ≪ mc, thus leading to a probability density ρ(x) =ψ † (x)ψ(x) ∝ sin 2 (k(x−L/2)), in agreement with the Schrödinger result. Eq.(64) satisfies the condition j 1 (x) = j 1 (±L/2) = 0, where the probability current is continuous and vanishes everywhere, in particular at the boundaries of the confining region.
A fundamental discrete symmetry of the Dirac Hamiltonian is its invariance under parity [16][17][18] (P : x → −x, p x → −p x ), [Ĥ 0 ,P ] = 0, which corresponds to a mirror spatial reflection, by leaving the spin direction invariant. It is straightforward to show that under parity, the spinor transforms as [16][17][18]Pψ(x)P −1 = e iφβψ (−x). On the other hand, the probability density defined as ρ(x) =ψ † (x)ψ(x) = c −1 j 0 (x) is the time-component of the Lorentz 4-vector j µ = cψ(x)γ µψ (x), withψ(x) ≡ γ 0ψ † (x), and where the explicit covariant notation γ i = βα i (i=1,2,3), γ 0 =β was invoked. Under parity, the space components of a Lorentz 4-vector change sign, whereas the time component remains invariant [16][17][18], and hence ρ(−x) = ρ(x). Therefore, we demand for the eigenfunctions of the self-adjoint extension of the Dirac Hamiltonian to possess this inversion symmetry, whose solutions are Therefore, the spectrum of the self-adjoint extension of the Dirac Hamiltonian representing a single particle trapped in the one-dimensional infinite potential well is discrete, and given by By introducing the definition of the Compton wavelength λ = 2π /(mc), the spinor-eigenfunction Eq.(64), with k n quantized by Eq.(69), can be written as Eq. (6) in the main text. By subtracting the rest energy from Eq.(70), one obtains Eq.(7) for the Dirac particle spectrum.
Appendix B
We here discuss the continuum approximation to the discrete partition function Eq.(32).
The spacing between two consecutive values is given by ∆x = x n+1 − x n = λ/(2L), and hence the expression for the partition function can be written as For physically relevant sizes of the potential well, we shall have λ/L ≪ 1, which allow us to take the continuum limit in the sense of a Riemann sum for the previous equation, Here, in the second line we have made the substitution x = sinh(t), and K 1 (x) is a modified Bessel function of the second kind. | 2013-01-11T17:55:57.000Z | 2012-07-26T00:00:00.000 | {
"year": 2012,
"sha1": "d1ca8e23300d37d47ff8dd4a0bdb9ceafdf1e743",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1207.6149",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d1ca8e23300d37d47ff8dd4a0bdb9ceafdf1e743",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
220516696 | pes2o/s2orc | v3-fos-license | Identification of potential crucial genes and key pathways in osteosarcoma
Background The aim of this study is to identify the potential pathogenic and metastasis-related differentially expressed genes (DEGs) in osteosarcoma through bioinformatic analysis of the Gene Expression Omnibus (GEO) database. Results Gene expression profiles of GSE14359, GSE16088, and GSE33383, comprising in total 112 osteosarcoma tissue samples and 7 osteoblast samples, were analyzed. Seventy-four normal-primary DEGs (NPDEGs) and 764 primary-metastatic DEGs (PMDEGs) were screened. VAMP8, A2M, HLA-DRA, SPARCL1, HLA-DQA1, APOC1 and AQP1 were identified as being continuously upregulated during the oncogenesis and metastasis of osteosarcoma. The enriched functions and pathways of the NPDEGs include processing and presentation of antigens, activation of MHC class II receptors and phagocytosis. The enriched functions and pathways of the PMDEGs include mitotic nuclear division, cell adhesion molecules (CAMs) and focal adhesion. With the protein-protein interaction (PPI) network analyzed by the Molecular Complex Detection (MCODE) plug-in of Cytoscape software, one hub NPDEG (HLA-DRA) and 7 hub PMDEGs (CDK1, CDK20, CCNB1, MTIF2, MRPS7, VEGFA and EGF) were eventually selected, and the most significant pathways in the NPDEG module and the PMDEG module were enriched in the processing and presentation of exogenous peptide antigen via MHC class II and in nuclear division, respectively. Conclusions By integrated bioinformatic analysis, numerous DEGs related to osteosarcoma were screened, and the hub DEGs identified in this study may serve as potential biomarkers for osteosarcoma. However, further experimental studies are still necessary to elucidate the biological function and mechanism of these genes.
Introduction
Osteosarcoma is the most prevalent primary bone malignancy and the 8th most frequent type of malignancy, and it disproportionately affects children and young adults [1]. In recent decades, the improvement in osteosarcoma treatment (surgery and chemotherapy) has substantially increased the long-term survival rate (approximately 60-70%) of children and young adult patients with osteosarcoma without distal metastasis [2,3]. However, the etiology remains unknown, and this hampers the prevention and early diagnosis of osteosarcoma. Therefore, it is extremely necessary to explore the mechanisms behind the occurrence and progression of osteosarcoma.
In recent years, developments in molecular biology have provided some new insights into potential diagnostic and therapeutic biomarkers for osteosarcoma [4]. Genome-wide molecular profiling, which reveals molecular changes in tumorigenesis and progression, has been proved to be an efficient approach to identify key genes [5,6]. However, it requires considerable time and funding to obtain clinical biological samples and subsequently conduct high-throughput genetic detection and analysis. Cumulative studies in the past have shown that re-analyzing gene datasets of previous experiments from online databases is a feasible way to identify biologically and clinically relevant biomarkers (genes) [7][8][9][10][11], and some of those biomarkers (genes) have even been found to play important roles in osteosarcoma. For instance, by conducting bioinformatics analysis on three datasets deposited in the GEO database (GSE36001, GSE19276 and GSE16088), Pan Liu et al. [10] revealed that tumor protein p53 (TP53), mitogen-activated protein kinase 1 (MAPK1), estrogen receptor 1 (ESR1), notch homolog protein 3 (NOTCH3) and caspase 1 (CASP1) might potentially be important osteosarcoma-associated genes. Among them, mutant TP53 was subsequently reported to be associated with poor survival of osteosarcoma patients, because it can increase cell proliferation, migration, and chemoresistance in osteosarcoma [12]; MAPK1 has been confirmed to be highly expressed in osteosarcoma cells, and can be downregulated by the osteosarcoma-related tumor-suppressive miR-511 [13]. Based on this, regulation of MAPK1 receptor expression may be a novel approach to treat osteosarcoma. Not long ago, Notch3 overexpression was also confirmed to be associated with metastasis and poor prognosis in osteosarcoma patients [14]. All these examples suggest that bioinformatics analysis is a feasible approach to identify specific genes that may provide valuable clues for investigating the pathogenesis of osteosarcoma.
The current study aims to investigate the crucial genes and key pathways potentially involved in osteosarcoma tumorigenesis and development. To achieve this, we integrated bioinformatics analysis based on Gene Expression Omnibus (GEO) datasets. The data obtained indicate that some genes might continue to participate in the occurrence and metastasis of osteosarcoma.
The three gene expression profiles above were downloaded from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/) [18] for identification of DEGs. Detailed information of all datasets included is listed in Table 1.
Data preprocessing
The analysis of raw probe-level data (.CEL files) was performed using the robust multiarray average algorithm (RMA) in the Affy package of R [19]. After background correction and quantile normalization, the expression values were obtained. For genes covered by multiple probe sets, the average of the probe-set values was taken as the expression value of that gene [20].
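A minimal R sketch of this preprocessing step, under stated assumptions, is given below. The CEL-file directory and the probe-to-symbol map (probe2symbol) are hypothetical placeholders, since the actual annotation depends on the array platform; the averaging of multiple probe sets per gene mirrors the description above.

# Sketch (assumptions noted above): RMA preprocessing of raw .CEL files with affy.
library(affy)

raw  <- ReadAffy(celfile.path = "./GSE_CEL")   # placeholder directory of .CEL files
eset <- rma(raw)                               # background correction, quantile
                                               # normalization, summarization (log2)
expr <- exprs(eset)                            # probe sets x samples matrix

# Collapse multiple probe sets mapping to one gene by averaging their values.
# 'probe2symbol' is a hypothetical named vector: names = probe-set IDs, values = gene symbols.
gene_expr <- aggregate(as.data.frame(expr),
                       by  = list(gene = probe2symbol[rownames(expr)]),
                       FUN = mean)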
Identification of DEGs
Identification of DEGs was performed using the LIMMA package of R [21]. Adjusted P-values (adj. P-values) were adopted to avoid false-positive results. Using the Benjamini-Hochberg method [22] via the multtest package in R, an FDR (that is, adjusted p-value) < 0.05 and |log2 fold-change (FC)| > 1 were used as the cut-off criteria, as previously reported [7][8][9][10][11]. The online tool EHBIO ImageGP (http://www.ehbio.com/ImageGP), operated by EHBIO Gene Technology (Beijing) Co., Ltd. (Beijing, China), was applied to generate the volcano plots and Venn diagrams used to visualize the identified DEGs.
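For illustration, a minimal limma sketch of this screening step is shown below. The expression matrix expr_mat, the group factor, and the contrast labels are assumed, illustrative names, and the Benjamini-Hochberg adjustment is taken directly from topTable rather than the multtest package mentioned above.

# Sketch (assumed object names): DEG screening with limma, FDR < 0.05 and |log2FC| > 1.
library(limma)

# 'expr_mat' is an assumed log2 expression matrix (rows = genes, columns = samples);
# 'group' is an assumed factor with levels "normal" and "tumour" matching the samples.
design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)

fit  <- lmFit(expr_mat, design)
cont <- makeContrasts(tumour - normal, levels = design)
fit2 <- eBayes(contrasts.fit(fit, cont))

tab  <- topTable(fit2, number = Inf, adjust.method = "BH")  # BH-adjusted p-values (FDR)
degs <- subset(tab, adj.P.Val < 0.05 & abs(logFC) > 1)      # cut-off criteria described above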
Functional enrichment analysis
GO (Gene Ontology) function and KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway enrichment analyses of the DEGs were performed using the clusterProfiler package of R [8]. The GO and KEGG terms with FDR < 0.05 were regarded as significant functions and pathways.
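A minimal sketch of the enrichment step follows, assuming human annotation (org.Hs.eg.db) and a character vector of DEG symbols; the exact parameters used by the authors are not reported, so the cut-offs below are illustrative.

# Sketch (assumed inputs and parameters): GO and KEGG enrichment with clusterProfiler.
library(clusterProfiler)
library(org.Hs.eg.db)

deg_symbols <- c("HLA-DRA", "SPARCL1", "AQP1")   # illustrative DEG symbols from the text
ids <- bitr(deg_symbols, fromType = "SYMBOL",
            toType = "ENTREZID", OrgDb = org.Hs.eg.db)

ego   <- enrichGO(gene = ids$ENTREZID, OrgDb = org.Hs.eg.db,
                  ont = "ALL", pAdjustMethod = "BH", qvalueCutoff = 0.05)
ekegg <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa",
                    pAdjustMethod = "BH", qvalueCutoff = 0.05)

head(as.data.frame(ego)); head(as.data.frame(ekegg))   # terms with FDR < 0.05 retained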
Protein-protein interaction network construction and module analysis
The Search Tool for the Retrieval of Interacting Genes (STRING; http://string.embl.de/) is a database of known and predicted protein-protein interactions (PPIs) [23]. Based on the STRING online tool, PPIs of the DEGs were constructed with a confidence score ≥ 0.7. Subsequently, the PPI network was visualized by means of Cytoscape software (version 3.7.2). Furthermore, the Molecular Complex Detection (MCODE) [24] plug-in in Cytoscape software was applied to explore the significant modules in the PPI network, with the advanced options set as degree cutoff = 2, K-Core = 2, and Node Score Cutoff = 0.2. Given that it is difficult to conduct enrichment analysis on small gene sets with the clusterProfiler package of R, GO function enrichment analysis of the DEGs in each module was instead conducted using the ClueGO [25] and CluePedia [26] plug-ins of Cytoscape software. The GO terms with FDR < 0.05 (Benjamini-Hochberg method) were regarded as significant functions.
Identification of DEGs between normal osteoblasts and osteosarcoma samples
According to the screening criteria, this study enrolled three datasets (Table 1).
Functional enrichment analysis of DEGs between normal tissue and osteosarcoma samples
To further investigate the biological functions of the 74 NPDEGs, GO and pathway analyses were performed using the clusterProfiler package of R. GO analysis (Supplementary Table S2) showed that the NPDEGs between osteosarcoma and normal tissue samples were clustered in 82 significant biological process (BP) categories. As shown in Fig. 2a (top ten BP categories), most were clustered in antigen processing and presentation. NPDEGs were clustered in 43 significant cellular component (CC) categories. As shown in Fig. 2b (top ten CC categories), the most significant CC category was MHC class II protein complex. NPDEGs were clustered in 4 significant molecular function (MF) categories. As shown in Fig. 2c, the most significant MF category was MHC class II receptor activity. KEGG analysis identified 28 significant pathways, such as phagosome and antigen processing and presentation (Fig. 2b, Supplementary Table S2).
Identification of DEGs between primary and metastatic osteosarcoma samples
Based on the screening criteria, only the GSE14359 dataset was selected for identifying genes differentially expressed between primary and metastatic osteosarcoma samples. There were 764 primary-metastasis DEGs (PMDEGs, 309 up-regulated and 455 down-regulated) in GSE14359 (Fig. 3a and Supplementary Table S3). Interestingly, seven overlapping up-regulated DEGs (VAMP8, A2M, HLA-DRA, SPARCL1, HLA-DQA1, APOC1 and AQP1) were identified between the NPDEGs and PMDEGs (Fig. 3b, Table 2), whereas there were no overlapping down-regulated DEGs (Fig. 3c). This suggests that these genes may be consistently involved in the oncogenesis and metastasis of osteosarcoma.
Functional enrichment analysis of DEGs between primary and metastatic osteosarcoma samples
GO analysis (Supplementary Table S4) showed that the 764 PMDEGs were clustered in 162 significant BP categories. As shown in Fig. 4a (top ten BP categories), the most significant BP category was mitotic nuclear division.
PMDEGs were clustered in 57 significant CC categories. As shown in Fig. 4b (top ten CC categories), the most significant CC category was extracellular matrix (ECM). PMDEGs were clustered in 16 significant MF categories. As shown in Fig. 4c (top ten MF categories), the most significant MF category was alpha-amylase activity. KEGG analysis identified 25 significant biological pathways, such as cell adhesion molecules (CAMs) and focal adhesion (Fig. 4d, Supplementary Table S4).
PPI (protein-protein interaction) network and module analysis
The PPI network was subsequently analyzed, and proteins were selected based on a combined score ≥ 0.7 in the STRING analysis. There were 49 nodes and 91 interactions among the 74 NPDEGs (Fig. 5a, Supplementary Table S5). In addition, one significant module with a score = 5 was screened out via MCODE, and HLA-DRA was the hub gene with the highest degree of connectivity (Table 3). GO analysis with ClueGO showed that the most significant BP category in this module was the antigen processing and presentation of exogenous peptide antigen via MHC class II (Fig. 5b, Supplementary Table S6). There were 521 nodes and 2415 interactions among the 764 PMDEGs, and three significant modules with a score ≥ 10 were screened out via MCODE (Fig. 6a, Supplementary Table S7). Module 1 (score = 32.5) included 36 genes, with CDK1, CDK20 and CCNB1 as hub nodes (Table 3). GO analysis with ClueGO showed that the most significant BP category in this module was nuclear division (Fig. 6b, Supplementary Table S8). Module 2 (score = 13.8) included 14 genes, with MTIF2 and MRPS7 as hub nodes (Table 3). The most significant BP category was mitochondrial translation (Fig. 6c, Supplementary Table S8). Module 3 (score = 12) included 12 genes, with VEGFA and EGF as hub nodes (Table 3). The most significant BP category was platelet degranulation (Fig. 6d, Supplementary Table S8).
Discussion
This study has gained some insights into gene expression modules in osteosarcoma at a genome-wide scale by analyzing three osteosarcoma datasets. A panel of 74 NPDEGs was identified as associated with osteosarcoma tumorigenesis, and 764 PMDEGs were identified as associated with osteosarcoma metastasis. In addition, it was noticed that seven genes (VAMP8, A2M, HLA-DRA, SPARCL1, HLA-DQA1, APOC1 and AQP1) were continuously upregulated during the oncogenesis and metastasis of osteosarcoma, which suggests that these genes may act as oncogenes and be consistently involved in the pathophysiological process of osteosarcoma. This study further obtained major histocompatibility complex, class II, DR alpha (HLA-DRA) as the hub NPDEG from the top module with MCODE score = 5, and 7 hub PMDEGs (CDK1, CDK20, CCNB1, MTIF2, MRPS7, VEGFA and EGF) from three top modules with MCODE score > 10. These may be pivotal genes involved in the pathophysiological process of osteosarcoma.
Among these 8 hub genes, HLA-DRA, which is correlated with the processing and presentation of peptide antigen via MHC class II, continued to be up-regulated during osteosarcoma oncogenesis and metastasis. Apparently, HLA-DRA may have a "driving" function in osteosarcoma. It has been shown to be a predictor of metastasis in osteosarcoma [27]. Although there is as yet no report on the function and mechanism of HLA-DRA in osteosarcoma, previous studies have shown that HLA-DRA is involved in the evasion of viruses from the immune system [28] and in Alzheimer's disease [29]. Pan Y et al. also listed HLA-DRA as one of the crucial genes in the regulatory network of osteosarcoma they constructed from the dataset GSE28424 [30]. These data indicate that the function of HLA-DRA in osteosarcoma is worthy of attention, especially with respect to whether it plays a pathophysiological role in osteosarcoma through the processing and presentation of peptide antigen via MHC class II. The functional enrichment analysis results suggest that HLA class II alleles may be a major contributing factor in osteosarcoma. HLA-DQA1 is also an HLA class II variant that has been reported to be associated with osteosarcoma risk [31]. A deeper understanding of these genes' immunologic contribution to the etiology of osteosarcoma may be helpful for selecting rational therapeutic targets.
SPARCL1 is an ECM remodeling gene. It modulates extracellular calcium by binding to collagen I [32,33], which may point to a potential role in osteosarcoma cell metastasis. Although Zhao SJ et al. [34] reported that SPARCL1 was downregulated in osteosarcoma by epigenetic methylation of promoter DNA, and that SPARCL1 could suppress osteosarcoma metastasis and recruit macrophages by activating canonical WNT/β-catenin signaling through stabilization of the WNT-receptor complex, this study, on the contrary, found that SPARCL1 continued to be upregulated during osteosarcoma development and metastasis. This contradiction is worth further confirmation through the collection of clinical samples and expression analysis. Aquaporin 1 (AQP1) is a water-selective transporting protein in cell membranes, and it has been found to be overexpressed in various tumors and to promote metastasis and neo-angiogenesis. AQP1 can promote osteosarcoma cell proliferation, adhesion, invasion and tumorigenesis by targeting the TGF-β signaling pathway and focal adhesion genes [35], and can recruit human BM-MSCs into the osteosarcoma microenvironment [36]. These reports strongly support the analysis result of the current study and confirm that AQP1 is an oncogene and metastasis promoter in osteosarcoma. Although few previous studies have examined the expression and role of VAMP8, APOC1 and A2M in osteosarcoma, work in other tumors has well established that these genes are important tumor-related regulatory factors [37][38][39]. However, due to the specificity of osteosarcoma in pathogenesis and the signaling pathways involved, additional work is needed to extend the current observation and to clarify the potential causal mechanisms underlying the deregulation of these genes in osteosarcoma. With regard to the other hub genes identified, cyclin-dependent kinase 1 (CDK1) and cyclin-dependent kinase 20 (CDK20) belong to the serine/threonine protein kinase family. Cyclin B1 (CCNB1) is a pivotal protein responsible for control of the cell cycle at the G2/M (mitosis) transition. All three genes are involved in the cell cycle and cell growth. Reduction of CDK1 activities is crucial for the survival of osteosarcoma cells [40]. Overexpression of CCNB1 can increase the growth rate of osteosarcoma cells and increase their sensitivity to paclitaxel [41]. Several drugs have been reported to inhibit cell proliferation or induce cell cycle arrest and apoptosis in human osteosarcoma by downregulating CCNB1 and CDK1 [42][43][44][45]. Both mitochondrial translational initiation factor 2 (MTIF2) and mitochondrial ribosomal protein S7 (MRPS7) are proteins implicated in mitochondrial translation. In this study, we identified MRPS7 and MTIF2 as hub genes involved in the metastasis of osteosarcoma. The mitochondrial translation pathway plays essential roles in programmed cell death. The implication of the mitochondria-mediated intrinsic pathway in human osteosarcoma has been observed [46], and inhibition of mitochondrial translation has been reported to be effective and selective in targeting osteosarcoma [47]. Therefore, the protein synthesis involving MRPS7 and MTIF2 within the mitochondrion might also have a potential connection with the development of osteosarcoma. Vascular endothelial growth factor A (VEGFA) is a classic angiogenic factor, which facilitates endothelial proliferation, migration and new vessel formation [48]. VEGFA has been reported to be very important in evaluating angiogenesis in osteosarcoma [49].
Inhibition of VEGFA can successfully suppress osteosarcoma growth, metastasis and angiogenesis [50]. All of this highlights its therapeutic value in osteosarcoma. Indeed, the VEGFA pathway has been prioritized for the development of antiangiogenic therapies in osteosarcoma [51]. Epidermal growth factor (EGF) promotes epithelial-mesenchymal transition, metastasis, and progression of osteosarcoma by activating the MAPK and PI3K/AKT pathways, which can be blocked by the EGFR-specific inhibitor gefitinib [52]. Thus, EGF-targeting agents should be evaluated to prevent osteosarcoma progression.
Among the 74 NPDEGs identified, notable dysregulation of gene expression was observed clustered in immune-related diseases, phagocytosis, and antigen processing and presentation. Bone resorption is accomplished by osteoclasts, which can be seen as highly specialized macrophages [53]. Thus, the bone microenvironment represents a unique compartment of the immune system, in which immunological cytokines form part of an intercellular crosstalk that is relevant to the development of osteosarcoma [54,55]. Osteosarcoma cells control the recruitment and differentiation of immune infiltrating cells and establish a local immune-tolerant environment that is favorable to tumor growth [56]. This is in agreement with the current demonstration that the NPDEGs in osteosarcoma are clustered in multiple immune diseases and T helper cell differentiation. Besides, osteoblasts can express major histocompatibility complex II (MHC class II) to present antigen [4]. Thus, deregulation of genes involved in antigen presentation may be an early event in osteosarcoma oncogenesis. MHC II is only expressed on the surface of antigen-presenting cells (APCs), such as macrophages, dendritic cells and B cells. APCs present exogenous or endogenous peptides to helper T cells by binding MHC-II to the peptides, thus signaling that the body is being invaded [57]. Previous studies have shown that osteosarcoma cells can express moderate to high levels of Herpes virus entry mediator on the tumor [58], and osteosarcoma cells can [59]. Therefore, during the process of malignant transformation, osteosarcoma cells express some antigenic substances, which are recognized by APCs and presented to helper T cells via MHC-II. In this way, APCs help connect innate and adaptive immunity to the tumor. This suggests that MHC-II mediates immune responses in the tumor microenvironment; thus, it could be an alternative target for novel immune therapies, and targeting antigen presentation may be clinically valuable for early intervention. Among the 764 PMDEGs, notable dysregulation of gene expression was observed in well-known metastasis-related pathways including CAMs, focal adhesion and ECM-receptor interaction. It was also found that the tumor necrosis factor (TNF) signaling pathway, which is always activated in human osteosarcoma cells [60], was significantly correlated with osteosarcoma metastasis. Hence, disrupting the function of the TNF signaling pathway might be a potential strategy for chemotherapy of advanced osteosarcoma [61]. Interestingly, the cell cycle is also a key signal involved in osteosarcoma metastasis. Previous reports have revealed that the cell cycle and apoptosis are two major dysregulated events in human malignancy cells [62]. The evolution of cancer is a complex process. Potentially oncogenic proliferative signals can couple to the induction of apoptosis, which restricts subsequent clonal expansion and neoplastic evolution. However, tumor progression occurs when these growth-inhibitory mechanisms are thwarted by compensatory mutations. Deregulated cell proliferation and the obligate compensatory suppression of apoptosis provide a minimal 'platform' that is necessary to support further neoplastic progression, which in turn propels the tumor cell and its progeny into uncontrolled expansion and invasion [62].
The limitations of this study should also be recognized. First of all, when analyzing the DEGs, in view of the complexity of the datasets, it was impossible to take into account all important factors, for example the different ages, races, regions and cell lineages of the samples, as well as the tumor stage and classification of the patients. Secondly, according to the results, all seven genes that are continuously dysregulated during the oncogenesis and metastasis of osteosarcoma are in fact up-regulated, yet the mechanism of this upregulation remains unclear; more evidence is therefore required to establish its biological foundation. Finally, this study mainly focuses on analyzing the expression levels of genes involved in tumorigenesis and metastasis. Some of these genes have been reported as biomarkers for osteosarcoma, while the roles of HLA-DRA, MTIF2, MRPS7 and CDK20 should be further systematically investigated in actual diseased tissues, cell lines and animal models.
In conclusion, this study identified several DEGs that may be involved in the carcinogenesis and metastasis of osteosarcoma through comprehensive bioinformatics analyses, and unveiled a series of hub genes and pathways. However, further experimental studies are needed to elucidate the biological function and underlying mechanism of these genes in osteosarcoma.
Alvarado-Dela Cruz K, Pascual M & Luna-Dizon ME. The Clinical Profile and Outcome of Children with Dengue Encephalitis at the Philippine Children's Medical Center: A Retrospective Study from January 2011-June 2017
BACKGROUND: Dengue, a mosquito-borne flavivirus infection, is hyperendemic in the Philippines. One of its rare complications is dengue encephalitis, characterized by altered sensorium, elevated liver enzymes, and high dengue-specific antibody titers. Although dengue was previously considered non-neurotropic, it presents with an increasing incidence of neurologic manifestations. OBJECTIVE: To describe the clinico-demographic profile and outcome of laboratory-confirmed dengue encephalitis.
INTRODUCTION
Dengue is a major health problem in most tropical and subtropical areas 1 and is the most rapidly spreading mosquito-borne viral disease in the world. In the last 50 years, its incidence has increased 30-fold, with increasing geographic expansion to new countries and, in the present decade, from urban to rural settings. An estimated 50 million dengue infections occur annually, and approximately 2.5 billion people live in dengue-endemic countries. 2 The number of dengue cases reported annually to WHO has increased from 0.4 to 1.3 million in the decade 1996-2005, reaching 2.2 million in 2010 and 3.2 million in 2015. 6,7 In 2013 dengue was estimated to be responsible for approximately 3.2 million severe cases and 9000 deaths. 3 Severe dengue is a leading cause of serious illness and death among children in some Asian and Latin American countries. 3 Dengue encephalitis is an extremely rare manifestation of severe dengue disease. 12 In the Philippines, where dengue is hyperendemic, the incidence of dengue cases has shown an increasing trend in recent years. From January 1 to December 31, 2018, the number of suspected dengue cases reported nationwide was 42% higher compared to the same time period in 2017, with case fatality rates increasing from 30% in 2015 to 55% in 2018. 37 The clinical spectrum of dengue fever ranges from asymptomatic infection to severe dengue and dengue shock syndrome. In 2009, WHO adjustments in the classification of the disease resulted in the recognition of two main presentations of dengue, referred to as dengue fever and severe dengue. Neurological dengue is classified as a form of severe dengue. 38,39 Although dengue virus is classically considered non-neurotropic, neurological manifestations of dengue have been documented in recent years. 12 Murthy has classified the spectrum of neurological manifestations seen in dengue into 3 categories: 1) those related to the neurotropic effect of the virus, such as encephalitis, meningitis, myositis and myelitis; 2) those due to the systemic complications of infection, such as encephalopathy, stroke and hypokalemic paralysis; and 3) post-infectious complications, such as encephalomyelitis, optic neuritis and Guillain-Barré syndrome. 11 A prospective case-controlled study conducted by Cam et al. on 5,400 cases of dengue hemorrhagic fever (DHF) in Vietnam showed that dengue infection causing encephalitis results in significant morbidity in terms of neurologic sequelae. Dengue-associated encephalopathy accounted for 0.5% of all cases. The mortality rate among children with dengue-associated encephalopathy was 22%. 14 Dengue encephalitis patients usually present with altered sensorium, elevated liver enzymes and high antibody titers at the time of admission. 10 Acute encephalitis, defined by the presence of an inflammatory process of the brain in association with clinical evidence of neurologic dysfunction, is a serious and potentially debilitating condition, which may lead to adverse outcomes of prolonged neurologic sequelae or death.
The incidence of dengue has grown dramatically around the world in recent decades. Neurologic involvement occurs in 4%-5% of confirmed dengue. 40 Dengue infection in patients with suspected central nervous system (CNS) infection is noted to range from 4.2% in southern Vietnam 28 to 13.5% in Jamaica 33 whereas, the incidence of dengue among patients with clinical manifestations of encephalitis-like illness ranges from 18% 41 to 22%. 33 Among confirmed neurological dengue cases studies have documented encephalitis to be the presenting clinical manifestation in 52% 33 to 56%. 28 The annual incidence of dengue encephalitis is most likely underestimated, especially in developing countries because of problems with pathogen detection. In the Philippines, the Epidemiology Bureau of the Department of Health established the Philippine Integrated Disease Surveillance and Response (PIDSR) system in 2007, under which the surveillance on Acute Encephalitis Syndrome (AES) and Bacterial Meningitis (BM) falls.
An integrated surveillance for Acute Meningitis-Encephalitis Syndrome (AMES) was established in 2014 as a combination of both AES and BM, that collates data on both conditions. Presently, there are no local studies describing the clinicodemographic profiles and outcomes of dengue encephalitis cases in the Philippines.
This study aimed to provide clinicodemographic profiles and outcomes of pediatric cases of dengue encephalitis; to provide epidemiological data of such in the Philippines for better case detection, prognostication, prevention, counseling of patients and family members, public health interventions, work-up, and subsequent monitoring. Furthermore, the results of the study may be used as a baseline for further studies on dengue infection.
This study described the clinicodemographic profiles and outcomes of children with dengue encephalitis in a tertiary hospital in the Philippines from January 2011 to June 2017. Specifically, this determined the clinicodemographic features of children with dengue encephalitis in terms of the following: age, gender, geographic location, nutritional status, presenting features, clinical signs, history of previous dengue infection and subsequent outcome, receipt of dengue vaccine, presence of co-morbidities, laboratory examinations, and/or imaging techniques (complete blood count, ALT, AST, glucose, serum electrolytes, BUN, Creatinine, PT and PTT, Dengue NS1, Dengue IgG, IgM, CSF IgM-capture ELISA, EEG, Chest X-ray, Cranial ultrasound, Cranial CT scan and/or MRI).
Another objective was to determine the outcome of patients with dengue encephalitis in terms of: (a) Full Recovery (with complete resolution of neurologic signs and symptoms), (b) Partial Recovery (with partial resolution of neurologic signs and symptoms), (c) Presence of neurologic sequelae, or (d) Death.
METHODOLOGY
This is a retrospective observational study, that used purposive sampling to retrieve and review hospital charts of laboratory-confirmed dengue encephalitis cases aged 0-18 years.
Inclusion Criteria
All of the following criteria were fulfilled prior to study enrolment: (1) Children aged 0-18 years, (2) admitted at a tertiary hospital in the Philippines from January 2011 to June 2017, and (3) patients who fulfill the clinical case definition in AMES surveillance, and are laboratory confirmed cases of acute dengue encephalitis.
Exclusion Criteria
Patients with any of the following were excluded from the study: (1) bacterial, tuberculous, fungal, parasitic, other viral or immune etiology; (2) encephalomyelitis (e.g., acute disseminated encephalomyelitis); or (3) no samples submitted for routine CSF and serum analyses.
The Child Neurology census from January 2011 to June 2017 was reviewed, revealing 3,124 probable cases of central nervous system (CNS) infection, 209 of which had a final discharge diagnosis of encephalitis/encephalopathy; these records were retrieved for review. Of these, 18 cases were eventually discharged as dengue encephalitis/encephalopathy. The Acute Meningoencephalitis Syndrome (AMES) Surveillance reports from January 2011 to June 2017 were retrieved from the National Reference Laboratory to search for laboratory-confirmed cases of dengue encephalitis. Of the 18 cases discharged as dengue encephalitis/encephalopathy, 16 had dengue-specific IgM antibody in the CSF or serum sample detected by dengue NS1 or IgM-capture ELISA. In addition, four patients who were initially treated as cases of dengue encephalitis were excluded due to the presence of Japanese encephalitis-specific IgM (3 cases) or Chikungunya virus IgM (1 patient) in the CSF. A total of 14 cases were included in the study.
Definition of terms
An Acute Meningoencephalitis Syndrome (AMES) Surveillance case is a person with sudden onset of fever and at least one of the following: change in mental status (including altered consciousness, confusion or inability to talk), new onset of seizures (excluding simple febrile seizures), or neck stiffness and other meningeal signs.
Dengue encephalitis cases are suspected dengue patients who meet the clinical case definition of AMES surveillance and have laboratory-confirmed dengue infection (defined by the presence of dengue-specific IgM antibody in serum or CSF detected by dengue NS1 or IgM-capture ELISA, in the absence of co-infection with other etiologic agents).
Approval was obtained from the hospital's Institutional Review Board (IRB). Hospital charts of patients who fulfilled all of the inclusion, and none of the exclusion criteria were retrieved and reviewed.
The following clinico-demographic data were recorded in the study-defined patient data sheet: age, gender, geographic location, nutritional status, clinical history, receipt of dengue vaccine, presenting features, clinical signs, duration of hospital stay, co-morbidities, management, as well as laboratory and imaging examinations done. Clinical outcomes were categorized as follows: full recovery, partial recovery from neurologic changes, presence of neurologic sequelae, or death. An attempt to retrieve and review the outpatient follow-up charts was made, but none could be located. Neuroimaging and electroencephalogram results obtained after hospital discharge were located and subsequently reviewed.
Descriptive statistics were used to summarize the clinical characteristics of the patients. Frequency and proportion were used for nominal variables, median and range for ordinal variables, and mean and standard deviation for interval/ratio variables. All valid data were included in the analysis. Missing variables were neither replaced nor estimated. STATA 12.0 was used for data analysis.
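For illustration only, the same descriptive summaries can be sketched in R (the authors used STATA 12.0); the data frame cases and its columns are hypothetical stand-ins for the study-defined patient data sheet.

# Sketch (hypothetical data frame): descriptive statistics as described above, in R.
cases <- data.frame(
  sex        = c("M", "F", "M", "F"),      # nominal variable
  outcome    = c(1, 2, 1, 3),              # ordinal variable (illustrative coding)
  age_months = c(0.5, 12, 72, 120)         # interval/ratio variable
)

table(cases$sex); prop.table(table(cases$sex))            # frequency and proportion
median(cases$outcome); range(cases$outcome)               # median and range
mean(cases$age_months, na.rm = TRUE); sd(cases$age_months, na.rm = TRUE)  # mean and SD
# Missing values are neither replaced nor estimated; na.rm simply drops them from summaries.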
RESULTS
During the period covered by the study, 14 cases of laboratory-confirmed dengue encephalitis were recorded. All patients were referred to the intensive care unit. Of the 14 patients enrolled, 9 were managed as severe dengue, 4 as neonatal sepsis with CNS infection, and 1 as viral encephalitis, unspecified. Eleven patients had one or a combination of the following concomitant illnesses, all of which were managed during the hospital stay: pneumonia (7), clinical sepsis (4), generalized epilepsy with global developmental delay (1), necrotizing fasciitis of the chest (1), and patent ductus arteriosus (1). All patients presented with fever followed by nonspecific signs and symptoms (Table 2). There was gastrointestinal bleeding in two children, with hematemesis in one and coffee-ground material in the orogastric tube in another. Three patients (21%) became jaundiced, with no evidence of hepatomegaly. Enlargement of the liver was noted in 2 patients (14%) (Table 2). The time that elapsed from the onset of the febrile period until the onset of the neurological changes ranged from 1 to 5 days (median of 2 days). More than half (57%) of the children developed a decrease in sensorium. The youngest patient exhibited spasticity, nuchal rigidity, and a bulging anterior fontanel. A Babinski reflex and hyporeflexia were noted in one 10-year-old patient. Seizures, mostly generalized (n=7), were recorded in 71% of patients, and were the most common reason for hospital admission (Table 3). Upon admission, more than half (57%) of the children had depressed hemoglobin for age (Table 4), 5 (83%) of whom were within the normal range for weight based on nutritional assessment at the time of confinement. Only one (7.1%) patient, aged 6 years, developed hemoconcentration, evidence of plasma leakage due to increased vascular permeability 19.
Among those tested, the majority had elevated ALT (8 of 10) and AST (5 of 6). One patient with consistently normal BUN registered high creatinine levels (maximum of 114.92 μmol/L). Hypokalemia was noted in half of the children in whom serum electrolytes were measured; other results were mostly normal. Three had high glucose, while one had hypoglycemia. Partial thromboplastin time was prolonged in 40% of 10 children, and PT INR in 50% of these. Half of the patients showed radiologic evidence of pneumonia, and 3 (21%) showed pleural effusion. All patients had CSF analysis done, 8 of which were collected during the first week of illness. Pleocytosis for age was seen in only one patient. CSF white blood cell (WBC) counts ranged from 0-8 cells x 10^6/L (median 2.14 cells x 10^6/L), all with 100% lymphocytic predominance. Other findings included slight hypoglycorrhachia (14.3%) and a mild increase in the protein level (14.3%) in 2 patients (43 and 45%, respectively). The majority of the patients (71.4%) had normal CSF analysis. 14 The signs and symptoms, as well as the characteristic laboratory markers for severe dengue, were not seen in the majority of our patients with dengue encephalitis. A study done by Mufazzar in 2006 supports this finding, as he found that not all patients with dengue encephalitis develop complications of severe dengue. 28 Antenatal and post-partum dengue infection secondary to vertical transmission has been documented in neonates in several earlier reports 23,24. Interestingly, this study found four neonates who had dengue-specific IgM via serology, three of whom also had dengue IgM in the CSF. None of these neonates were suspected to have an acute dengue infection during the hospital admission, and they were instead treated as cases of neonatal sepsis. Review of the patients' clinical course revealed that all four neonates fulfilled the minimum criteria for probable dengue. CSF analysis was done because of the consideration of concomitant CNS infection, and samples were sent to AMES surveillance for analysis. The results of the AMES surveillance were not known during the hospital stay of the patients, and all four neonates were discharged. Three neonates fully recovered, while one still showed signs of fair suck, with improved activity upon discharge. Three out of four neonates demonstrated dengue IgM in the CSF; the exception also showed full recovery upon discharge. Two of the four neonates' mothers had an unremarkable maternal history. One mother was febrile upon delivery due to a urinary tract infection, and the mother of the neonate with partial recovery expired 5 days after delivery due to preeclampsia and an unknown febrile illness. During the patients' hospital stay, there was no mention of whether the mother was worked up for the possibility of having acute dengue. It is yet to be established what the poor prognostic factors are for neonates presenting with dengue encephalitis, as there are limited studies regarding this.
Neurologic manifestations due to dengue have been well reported, and has previously been thought to result from the multisystem derangement that occurs in severe dengue infection, with liver failure, shock and coagulopathy causing cerebral insult as opposed to encephalitis defined by a localized invasion of the CNS. Recent studies, however 10,11,13,14,21 , describe a possible direct neurotropic effect of dengue virus. The incidence of dengue with neurologic complications is unclear, with calculations ranging from 0.5% 14 to 6.2% 26 of DHF cases. Kankirawatana et al. states that 18% of children with suspected encephalitis in a Thai hospital were found to have dengue infection. 27 In the absence of a definitive histological examination of the brain, dengue encephalitis is exemplified by the identification of dengue specific antibodies or dengue antigen in the CSF. Detection of IgM in CSF is indicative of viral replication in CNS, but the titer is generally lower and short-lived when compared with serum, making it an unreliable marker. It is because of this that in previous studies, patients were considered as cases of dengue encephalitis when there is serologic evidence of dengue infection, coupled with focal neurologic manifestations or neuroimaging abnormalities. This consideration has also been employed in this study.
In previous studies, the mechanisms of CNS infiltration that have been proposed are (1) virus-induced, cytokine-mediated breakdown of the blood-brain barrier, (2) infiltration of virus-infected macrophages, or (3) direct invasion by the virus itself. In accordance with these recent reports, we found that 5 (35.7%) of 14 patients had dengue-specific IgM in the CSF, indicating a localized infection of the CNS. These patients consisted of 3 neonates and 2 children. Of the 3 neonates, 2 recovered completely prior to discharge, with hospital stays of 8 and 41 days, respectively. One neonate with IgM-positive CSF exhibited fair activity prior to discharge. The two other children with IgM-positive CSF both stayed at the hospital for 27 days; one was discharged with minimal verbal output and occasional disorientation, and one exhibited a focal deficit and was also considered to have hypoxic-ischemic encephalopathy.
The clinical manifestations and findings in this study were consistent with those reported in the literature and reviews of dengue encephalitis. Fever was present in all cases. Following nonspecific signs and symptoms, decreased sensorium and new-onset seizures were the most common neurologic manifestations, the latter being the most common reason for consult and subsequent hospital admission. Elevation in liver enzymes, dengue-related nephropathy, glucose and electrolyte derangements, elevated prothrombin time, prolonged activated thromboplastin time, and signs of plasma leakage were seen in some of our patients. It has been well recognized that cerebral dysfunction may result from these findings, and they may account for some of the neurologic manifestations seen. Interestingly, hemoconcentration was not observed in the cases seen in this study. The paucity of subjects prevents the investigators from concluding that a correlation exists between this observation and severe dengue in general. CSF analysis of the patients showed the following: a minority with slight hypoglycorrhachia and pleocytosis, all with absolute (100%) lymphocytosis, findings which were consistent with viral encephalitis in general. The most common EEG and neuroimaging findings were likewise consistent with dengue encephalitis. 10,11,13,14,21,27,35 Most patients manifested generalized or focal background slowing on EEG, and neuroimaging findings ranged from normal, to evidence of cerebral edema, to changes consistent with acute meningoencephalitis. Testing for correlations between established factors for poor prognosis noted in some of the patients, such as extremes of age, under- or over-nutrition, presence of co-morbidities, signs of plasma leakage, and hepatic involvement, and patient outcome could not be done due to the very limited number of subjects.
Among the Flaviviridae, antigenic cross-reactivity appears to involve a group-reactive antigen shared by all members. In patients with previous Japanese encephalitis, these circulating low-titer antibodies may show cross-reactivity with dengue virus. This was evident in the cases seen in this study, as four patients who were initially treated as cases of dengue encephalitis were excluded from this study due to the presence of Japanese encephalitis-specific IgM (3 cases) or Chikungunya virus IgM (1 patient) in the CSF.
On the basis of previous reports 10,11,13,14,21 and of the findings of this study, dengue infection encompasses an expanding clinical spectrum that rarely involves encephalitis due to a direct viral neurotropism.
Mortality due to dengue encephalitis varies from 5% 22 to 22% 14 in previous studies. The reported morbidity and mortality due to dengue encephalitis itself is low, with most survivors recovering fully. 10,34,35 Documented sequelae from encephalitis include weakness, spasticity 35 and focal spasms. 36 Encephalitis accompanied by postinfectious neurological manifestations, however, may have a prolonged recovery. Our study limited the investigation to laboratory-confirmed dengue encephalitis in the absence of co-infection with other viruses in the CNS, and only a single mortality was observed. The single mortality observed in this study was a 1-year-old male with dengue IgM antibody detected in the serum, whose immediate cause of death was dengue shock, presenting as generalized seizures and hypotension. Neurologic manifestations were observed in 6 (42.9%) of the patients upon discharge, ranging from mild to severe. The presence of long-term or permanent neurologic sequelae cannot be inferred since the only follow-up data available were the follow-up EEGs of two patients, which showed improvement in the generalized background slowing in one, and a normal EEG in another patient taken 3 weeks from discharge. It would be interesting to know the long-term outcome of each patient using an established outcome scoring system on subsequent follow-up consults, so as to determine whether neurologic changes present on discharge would lead to eventual recovery or deterioration. This exercise, however, is beyond the scope of this study.
According to the World Health Organization (WHO), the real burden of dengue encephalitis is underreported. Although CSF analysis for dengue is locally available, and is government subsidized in sentinel hospitals under the national surveillance program, the relative contraindication of performing an invasive procedure in the context of a clinically unstable patient with thrombocytopenia, and the cost of the test in private institutions restricts definitive laboratory confirmation of dengue encephalitis. Clinical dengue infection in the presence of focal neurologic findings is suggestive of the disease, however, laboratory confirmation via CSF analysis is necessary to determine whether the encephalitis is due to dengue neurotropism, or a systemic consequence of severe disease itself.
Due to the potential risk for significant morbidity and mortality, it is recommended that dengue encephalitis be highly considered in patients with severe dengue so that prompt case detection and appropriate management ensue. The small sample size, heterogeneity of clinical profile, and patient response are probably responsible for outcome variations.
CONCLUSION AND RECOMMENDATIONS
In conclusion, dengue encephalitis is emerging as an important, albeit rare entity that should be entertained as a differential diagnosis in dengue patients with neurologic manifestations in all age groups. Likewise, it should be included in the differential diagnosis of any CNS infection in an endemic country, as evidenced by the 4 neonates managed as neonatal sepsis but turned out to be positive for dengue IgM.
It is recommended that prospective studies be done on this subject, as we recognize the limitations of a retrospective study. Likewise, long-term follow-up of patients should be performed for prognostication.
The importance of the nurse in caring for the Kangaroo method: Integrative literature review
Aim: to identify the importance of the nurse in the care involved in the kangaroo method. METHOD: This is an integrative literature review, developed through the following steps: delimitation of the guiding question, establishment of inclusion and exclusion criteria, search for and selection of primary studies in the databases, data evaluation and analysis, and presentation and interpretation of results. Results: the results highlight the importance of nursing care in carrying out the kangaroo method, since the nurse plays an important role in this process, being responsible for promoting care, encouraging the family to be present, and acting during the stages of the kangaroo method. CONCLUSION: The nurse's role in the kangaroo method is essential for teaching the stages of the method while the mother and child go through this process, thus ensuring the effectiveness of this strategy.
Due to the high infant mortality rate, the kangaroo method was initially conceived in Colombia in 1979 at the Instituto Materno Infantil de Bogotá by Dr. Reys Sanabria and Dr. Hector Martinez, as a proposal to improve the care provided to preterm newborns in that country (Brazil, 2015).
In this context, the policy of Humanized Attention to the Low-Weight Newborn - Kangaroo Method emerged, regulated by Ordinance GM No. 693 of July 5, 2000, and later revised as Ordinance No. 1,683 of July 12, 2007 (Brazil, 2017). This policy brought the qualification of global care to the newborn, benefiting the integral development of the child and the family bond and, as a consequence, reducing neonatal mortality rates (Sales et al., 2018).
The Kangaroo Method (MC) is a strategy divided into three stages, aiming at the humanization and participation of parents in neonatal care; the first stage begins in the prenatal period of high-risk pregnancy, followed by admission of the newborn (NB) in the neonatal ICU; in the second stage the baby remains continuously with its mother and the kangaroo position is performed most of the time and; the third stage is characterized by monitoring the child and the family in the outpatient clinic and at home until reaching the weight of 2,500g (Heck et al, 2016;Brazil, 2015).
The research presented by Balduino (2018) addresses the nursing team that will be in charge of this assistance, acting as the mediator between caring for and teaching family members, so that they come to participate in strategies such as early skin-to-skin contact between the preterm newborn and its family, individualized care, partnership with the family, and encouragement of breastfeeding (Brazil, 2015).
The nurse has a primary role in caregiving assistance, making it possible to welcome the family and develop balance in the environment where the kangaroo method will be carried out, in order to strengthen the affective bonds between mother and child and guarantee integral, qualified care (Brazil, 2017; Sales et al., 2018).
The role of the nurse in the MC is essential for teaching the stages of the method while the mother and child go through this process, thus ensuring the effectiveness of this strategy. In this context, the study by Tarcísio (2010) reinforces the need for frequent training of nurses in relation to the MC; the author also observes that failing to pass this training on to professionals will probably contribute to the low effectiveness rates of the second and third stages of the method.
The interest in this study is justified by the importance of the nurse's role in the practice of the MC and its limits in delivering the care required by the method for the newborn, emphasizing that mothers have a fundamental role during the stages of the method. In this sense, this study aims to identify the importance of the nurse in the care involved in the kangaroo method.
II. METHOD
This is an integrative literature review, a methodological approach that allows the inclusion of experimental and non-experimental studies for a complete understanding of the phenomenon analyzed (Souza, Silva & Carvalho, 2010). It was developed through the following steps: delimitation of the guiding question, establishment of inclusion and exclusion criteria, search for and selection of primary studies in the databases, data evaluation and analysis, and presentation and interpretation of results (Crossetti, 2012; Soares et al., 2014). In view of this, we sought to answer the guiding question: What is the importance of the nurse in the care of the newborn in the performance of the kangaroo method? The search for primary studies in the databases was carried out in September 2019, covering articles published in scientific health journals indexed in the Scientific Electronic Library Online (Scielo), Virtual Health Library (VHL) and Latin American and Caribbean Literature in Health Science (Lilacs) databases, using the descriptors nursing care, kangaroo method and premature newborn; Table 1 shows the crossings performed to find as many articles as possible. The selection of primary studies in the databases was followed by reading and analysis of the studies found in accordance with the inclusion criteria, and was divided into three stages. In the first stage, 1,032 articles were identified, of which 504 were duplicates and were eliminated. The second stage comprised 528 articles with titles and abstracts available according to the filters used in the research, of which 289 were selected.
Only then, after this careful evaluation, were the full texts of 239 articles read; 225 were discarded because they did not fit the inclusion criteria or were not in accordance with the guiding question of this work, so that after this whole process a final sample of 14 articles was obtained. Figure 1 shows this selection.
Fig.1: Flowchart of identification, selection and inclusion of Studies
Source: Authors, 2020.
III. RESULTS
Finally, based on the articles found in the literature after the database searches concerning the nurse in charge of the care involved in the kangaroo method for the premature newborn, 3 categories were used for the analysis process: the importance of the benefits of the kangaroo method for the recovery of the premature newborn; the assistance of the nurse in the care involved in the kangaroo method; and the welcoming carried out by the nurse in the accomplishment of the kangaroo method. The three categories are presented in Tables 2, 3 and 4.
The main benefits attributed to the kangaroo method include: reduction of hypothermia, sepsis, hospital stay and mortality risk; positive impact on the cognitive and motor development of premature infants; maintenance of stability during transport of premature infants; and maintenance of vital signs at physiological levels, even when performed in newborns under mechanical ventilation who are hemodynamically stable.
MENEZES, 2017 12
Tiradentes University, International Nursing Congress Benefits from the mother kangaroo method for low birth weight.
The kangaroo method brings numerous benefits, perceived and reported by the mothers themselves, such as the construction of the bond, the approach with the baby favors growth and development, allows quiet sleep, in addition to the security that the Method provides for mothers in the care of the baby and the pleasure in consolidating the maternal role.
The experience of the kangaroo method as lived by mothers in a public maternity ward in Maceió / Alagoas
The low-weight newborn who is caressed, touched and wrapped in the lap will feel more welcomed and safe in the mother's lap, because this method contributes to a smooth transition to extrauterine life, with the mother playing an indispensable role in the care and treatment of the baby throughout the stages of the MC, especially in the kangaroo ward. The kangaroo method is also important because it encompasses the family, which is stimulated to early contact with the baby, forming a bond, and all of this is extremely relevant to the recovery of the newborn.
In summary, the several benefits of the Kangaroo Method for both mother and baby are shown in Table 3. Regarding the autonomy perceived by mothers about the Kangaroo Method, there was a strengthening of the mother-child and family bond, improving the clinical picture and the baby's development and thus contributing to hospital discharge, with the nurse always establishing effective communication in all stages of the Kangaroo Method.
CRUZ, 2017 16 Unime
Kangaroo Method: The importance of the family in skin-to-skin contact of the preterm newborn.
Promote a humanized and safe approach through skin-to-skin contact (kangaroo position).
SILVA; CRISPIM; FIGUEIREDO, 2017 17
UniSALESIANO Neonatal Intensive Care Unit: mothers' perception of lived experiences and the importance of nursing care and orientation.
Stimulate participation in all the activities developed during the stages of the kangaroo method.
Rev enferm UERJ
The maternal care promotion to the premature newborn: the perspective of problematizing education in health.
To promote newborn care through the kangaroo method, the nursing team should establish effective communication with mothers in order to instrumentalize and empower them to participate in the care of their child in an autonomous way, with the mother being gradually introduced into the care process.
In relation to the results observed in Table 4, regarding the assistance of the nurse in the care of the premature newborn, this professional needs to promote the participation of the family in carrying out the kangaroo method in all its stages. The conversation with mothers about the Kangaroo Method process is evident from the first stage, which they follow with special care: caring for the family, guiding the participation of the mother and father in the care of the newborn, stimulating support for breastfeeding, and discussing the experiences of mothers and the difficulties they face due to the health conditions of their newborn.
BEATRIZ Lelis et., 2018 20
Rev enferm UFPE Motherly welcome in the context of prematurity Nursing has the responsibility of welcoming relatives, focusing on the figure of fathers and special care for mothers.
Relationship of Kangaroo Position Duration and Mother-Preterm Child Interaction at Hospital Discharge
The welcome during the kangaroo method and the interaction and communication of the team with the mothers are of fundamental importance so that the emotional experiences of this period are better elaborated and the suffering of the mothers minimized.
NUNES, Natália P, 2015 22
Rev Bras Promoção Saúde Maternal perception of the experience in the neonatal intensive care unit.
The reception of the multidisciplinary team to the mother who suffers at this moment, should be considered as relevant by the professionals, because the problems experienced can interfere in the realization of the Kangaroo Method, often, due to lack of information, the mothers are dominated by feelings of distrust, despair, fear and incomprehension in relation to the clinical picture of the baby.
IV. DISCUSSION
In this research, specifically, the importance of the nurse in the care of the kangaroo method was identified, besides observing the benefits of this practice, since the mothers have a fundamental role in the realization of the method.
In a study by Viana, Cunha & Leão (2018), the benefits of the Kangaroo Method for the newborn were analysed: positive impacts on cognitive and motor development could be identified, together with improved weight gain and a shorter length of stay, thereby avoiding hospital infections. In the present research it was possible to observe the benefits perceived by mothers regarding the Kangaroo Method; likewise, the study of Menezes (2017) points out positively that, in the mothers' perception, the mother-child bond was strengthened, favouring the growth of the newborn and allowing their involvement in the care of their baby.
According to Araujo et al. (2016), if the underweight newborn is caressed, touched and held in the lap, he will feel more welcomed and safe, because this tool is a technology that provides a smooth transition to extrauterine life, with the mother playing an indispensable role in the care and treatment of the baby across the stages of the Kangaroo Method, especially while in the ward.
Reinforcing this discussion, it is worth noting the statement of Balduino (2018): in that study it was observed that the method also helps to include the family by encouraging early contact with the baby, increasing bonding and affection and improving the recovery of the newborn.
The results of the approaches above reinforce that the importance of the nurse alongside the mothers in directing care to the newborn is evident, because this professional needs to seek strategies that promote the well-being and protection of the baby and must have the technical capacity, based on scientific evidence, to guide the care (Sales et al., 2018). The kangaroo method is therefore intended to include family participation in this process and to guide mothers on the importance of exclusive breastfeeding (Klossoswski et al., 2016).
In this way, humanized assistance is promoted by stimulating skin-to-skin contact through the kangaroo position (Cruz, 2017). Since the method poses challenges for nurses regarding the best way to teach mothers the care of their babies, these professionals should always demonstrate the importance of all stages of the process and seek to mediate all this care (Silva, Crispim & Figueiredo, 2017), emphasizing to mothers the importance of their participation in the recovery of their child in this care process. The studies of Pereira et al. (2018) show that it is important for the nurse to welcome mothers and their premature newborn, because nurses are responsible for passing on relevant information about the stages of the kangaroo method and the importance of each one for the effectiveness of this strategy, encouraging breastfeeding and discussing and sharing the experiences lived in order to overcome this stage (Lelis et al., 2018).
This welcome and interaction need to be effective, because if the nurse does not pass on information about the stages of the Kangaroo Method, mothers will be overcome by feelings of distrust and despair, since they are already sensitized by seeing the clinical state of their children (Nunes et al., 2017; N. P. Nunes, 2015). If they are correctly guided, however, these feelings will not dominate them and they will understand that they are part of this circle of care.
There are countless tasks carried out by nurses throughout this assistance with the Kangaroo Method, through which they contribute to the success of this tool, guaranteeing teaching and learning for the mothers by means of appropriate guidance about the strategy and its benefits.
V. CONCLUSION
The present study demonstrates the importance of the nurse's assistance in the care involved in the kangaroo method, since this professional has an important role in the process, being responsible for promoting care and encouraging the family to be present and active during the stages of the kangaroo method.
In addition to encouraging the practice of breastfeeding, improving the clinical condition and strengthening the affective bond between mother and newborn, the studies analysed confirm the importance of the nurse as a mediator who favours the care and benefits of the Kangaroo Method for the mother-child pair, contributing to hospital discharge. Therefore, all the findings of this research are relevant to the performance of the Kangaroo Method, especially in relation to the welcome, the way this professional approaches the mother from admission until hospital discharge, and the transmission of the information needed to carry out the Kangaroo Method strategy, so that mothers acquire confidence in performing the kangaroo position. | 2020-07-30T02:02:44.330Z | 2020-07-29T00:00:00.000 | {
"year": 2020,
"sha1": "ee0c2d6f1b46ace65ac0f75b05adc69c12117375",
"oa_license": "CCBY",
"oa_url": "https://ijaers.com/uploads/issue_files/42IJAERS-0720206-Theimportance.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f29d5cdc84e1f1b30398679ef480fc0332622727",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Psychology"
]
} |
253014404 | pes2o/s2orc | v3-fos-license | Design and Validation of qPCR-Specific Primers for Quantification of the Marketed Terfezia claveryi and Terfezia crassiverrucosa in Soil
The desert truffle crop is a pioneer in southeastern Spain, a region where native edible hypogeous fungi are adapted to semiarid areas with low annual rainfall. Terfezia claveryi Chatin was the first species of desert truffle to be cultivated, and cultivation has been increasing in recent years as an alternative rainfed crop in the Iberian Peninsula. However, its behaviour in the field has not yet been investigated. For this purpose, specific primers were designed for the soil DNA quantification of both T. claveryi and Terfezia crassiverrucosa, and a real-time qPCR protocol was developed using the ITS rDNA region as a target. Moreover, a young desert truffle orchard was sampled for environmental validation. The results showed the highest efficiency, 89%, for the TerclaF3/TerclaR1 primer pair, and the minimal fungal biomass that could be reliably detected was set at 4.23 µg mycelium/g soil. The spatial distribution of fungal biomass was heterogeneous, and there was no direct relationship between the quantity of winter soil mycelium and the location/productivity of desert truffles. This protocol could be applied to tracking these species in soil and to understanding their mycelial dynamics in plantations and wild areas.
Introduction
Terfezia claveryi Chatin is an edible mycorrhizal hypogeous fungus belonging to the Pezizaceae family that establishes mycorrhizal symbiosis with some plants of the Helianthemum genus [1]. Its natural habitats are arid and semiarid environments with low annual rainfall, mild winters, and warm summers, mainly encompassing countries of the Mediterranean geographical region [2,3]. T. claveryi was the first desert truffle to be cultivated [4], and it is known to be one of the most appreciated desert truffle species on the market [5], together with other known desert truffles (mainly Terfezia boudieri Chatin, Tirmania nivea (Desf.) Trappe and Tirmania pinoyi (Maire) Malençon [6]). In addition, desert truffles are not only an important economic resource, but also have high nutritional value and antioxidant properties [7,8], including bioactive compounds with potential health benefits such as antimicrobial, anti-inflammatory, hepatoprotective, and antitumor activities [9][10][11][12].
Recently, the area cultivated with the desert truffle T. claveryi has increased in semiarid areas of Spain [5,13], becoming an alternative agricultural crop thanks to its low water requirements [14]. Until now, some abiotic factors or agroclimatic parameters associated with plant management and the control of fungal fruiting have been studied [14][15][16]. Although this knowledge of mycorrhizal plant phenology could help to stabilise annual fluctuations in ascocarp production [17], there are still large fluctuations within the same plantation, resulting in productive and non-productive areas or "patches" [18]. The analysis of ecology, phenology, and interannual fluctuations in mycelial development is also essential for the proper management of mycorrhizal plants producing truffles or mushrooms [19][20][21].
Soil properties such as pH (acid or alkaline) and the host plant species lead to the fruiting of different species of desert truffle [1,5]. In recent years, several studies on the genus Terfezia have been published to clarify and update the phylogenetic relationships among the new species and those already described within the genus [46][47][48][49][50][51][52][53][54]. These studies showed intraspecific genetic variations in the nrDNA-ITS sequence of Terfezia spp., including the identification of some cryptic species [55][56][57], in which only molecular data are required and used for species identification [20,56].
Traditionally T. claveryi and Terfezia crassiverrucosa Zitouni-Haouar, G. Moreno, Manjón, Fortas & Carlavilla have been collected and marketed together in alkaline soils, because no key differences in distribution, host plant, macroscopy, taste, and flavour characteristics can be found [49]. In fact, they are species very similar morphologically and phylogenetically [49]. Consequently, both species share their habitat in plantations and wild areas and have been called "turmas" indistinctly by gatherers. For this reason, and from now on, when the term turmas is used in this study, we refer to both marketed Terfezia species in Spanish alkaline soils (T. claveryi and T. crassiverrucosa). The internal transcribed spacer (ITS) region from ribosomal DNA (ITS1-5.8S-ITS2) has extensively been used as a universal DNA barcode marker for Fungi [58]. This region was selected to design specific primers for the detection and quantification of T. claveryi and T. crassiverrucosa DNA in soil by real-time quantitative PCR (qPCR). Thus, the objectives of this study are as follows: (a) design and check a set of specific primers for the quantification of DNA of these turmas in soil by qPCR approach; and (b) apply this strategy to determine how mycelium is distributed and spread in a desert truffle plantation. A suitable management of the cultivation and watering according to the recommendations described in [5,13,14,17,59] was followed.
Environmental Sampling
In total, 36 soil samples were collected in February 2020 at an equal distance from the surrounding plants and at a depth of 10-15 cm. They were maintained at 10 °C until they were transported to the laboratory and kept at −20 °C until processing. Before DNA extraction, soil samples were dried at room temperature for 24-48 h. As detailed in Figure 1, 18 samples were from the three rows planted in 2016, 12 from the two rows planted in 2018, and 6 from the one row planted in 2019. There was a separation of 2-2.5 m between the samples within each row, and a distance of 1-1.5 m from one row to the next. Moreover, a soil sample was taken as a negative control from a non-productive area outside the plantation, free of H. almeriense mycorrhizal plants. During the fruiting season in spring 2020, 3, 19, and 6 ascocarps of similar weights were collected from the 2016, 2018, and 2019 planting areas, respectively.
Soil DNA Extraction
Soil samples were carefully sieved through 500 µm mesh to remove any root fragments, stones, or plant material debris. Then, genomic DNA was extracted in duplicate from 0.25 g of each sample, previously well homogenized, using the DNeasy PowerSoil Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. All DNA was eluted in 100 µL of elution buffer (10 mM Tris) and stored at −20 °C until processing. The concentrations of DNA extractions were measured using a NanoDrop ND-2000 Spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA), and the quality was examined by 260/280 nm and 260/230 nm optical density ratios.
In the same way, DNA extracted from a mixture of 113.1 mg T. claveryi active mycelium (T7 strain), from a pure culture in MMN-O liquid medium [60], and 0.1543 g of negative control soil (twice autoclaved), was used for the generation of the standard curve.
Design of Specific Primers for Turmas
ITS-rDNA (ITS1-5.8S-ITS2) sequences of T. claveryi, T. crassiverrucosa and other desert truffle species from GenBank and RefSeq databases (Table S1) were used for primers design by two different web-based software programs: ABI PRISM Primer Express v3.0.1 (Applied Biosystems, Waltham, MA, USA) and ProbeFinder v2.50 (Universal ProbeLibrary, UPL, Assay Design Center) (Roche Molecular Systems, Pleasanton, CA, USA). Multiple sequence alignments were carried out using the MUSCLE algorithm [61] to delimit specific regions for optimal primer selection using MEGA X: Molecular Evolutionary Genetics Analysis across computing platforms v10.0.5 [62].
Direct PCR amplifications from dried ascomata of fungal reference materials (Table 1) were performed in a FlexCycler (Analytik Jena GmbH, Jena, Germany) according to the protocol described by Bonito [67]. Each 25 µL reaction volume was amplified with the ITS1F-ITS4 primer pair [68,69] and was composed of 0.4 mM of each primer, 0.2 mM of each dNTP, 2.0 mM MgCl2, 50 mM KCl, 20 mM Tris-HCl (pH 8.4), 0.04% BSA and 1.25 U of Taq DNA polymerase (Invitrogen). The thermal cycler parameters were: initial denaturation for 2 min at 94 °C; 40 cycles consisting of 30 s at 94 °C, 30 s at 55 °C, and 1 min at 72 °C; and a final extension for 5 min at 72 °C. PCR products were purified using the EZNA Cycle-Pure Kit (Omega Bio-Tek), according to the manufacturer's instructions, and sequenced at the Molecular Biology Service of the University of Murcia. In order to check in vitro specificity, DNA extracts of different species of desert truffles were used as templates (Table 1) under qPCR conditions.
Quantitative Real-Time PCR Conditions
A standard curve was generated from 1/10 dilutions of purified DNA standard (amounts of T. claveryi mycelium in soil) with nuclease-free water. Then, the efficiency of the real-time PCR was calculated for each selected primer pair from the value of the slope of the calibration curve [70], as E = (10^(−1/slope) − 1) × 100, and the primer concentration was optimised in the range of 50 to 200 nM for the chosen combination of primers. In addition, the minimum amount of mycelium detected by this qPCR protocol was established.
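As an illustration of this calculation, the short Python sketch below fits a calibration line to an assumed 10-fold dilution series and derives the slope, R², and amplification efficiency with the same formula; the Ct values shown are hypothetical placeholders, not the measured data.
import numpy as np

def standard_curve_efficiency(log10_quantity, ct_values):
    # Fit Ct = slope * log10(quantity) + intercept and derive the efficiency
    slope, intercept = np.polyfit(log10_quantity, ct_values, 1)
    predicted = slope * np.asarray(log10_quantity) + intercept
    ss_res = np.sum((np.asarray(ct_values) - predicted) ** 2)
    ss_tot = np.sum((np.asarray(ct_values) - np.mean(ct_values)) ** 2)
    r_squared = 1 - ss_res / ss_tot
    efficiency = (10 ** (-1 / slope) - 1) * 100   # E = (10^(-1/slope) - 1) x 100
    return slope, intercept, r_squared, efficiency

# Hypothetical 10-fold dilution series of the mycelium standard (10^-1 to 10^-5)
log10_q = [-1, -2, -3, -4, -5]
ct = [22.1, 25.8, 29.4, 33.0, 36.7]   # illustrative Ct values only
print(standard_curve_efficiency(log10_q, ct))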
Real-time SYBR-Green-dye-based PCR amplification was carried out for in vitro tests and experimental samples in 96-well plates using a QuantStudio 5 Flex (Applied Biosystems, Waltham, MA, USA) instrument. Each amplification was performed in 10 µL reaction volumes containing 5 µL of Power SYBR Green PCR Master Mix (2×) (Thermo Fisher Scientific), 0.1 µL of each primer at 10 µM, 3.8 µL of nuclease-free water, and 1 µL of 1/5 diluted DNA template. The thermal cycling protocol was 50 °C for 2 min and 95 °C for 10 min at the hold stage, followed by 40 cycles of 95 °C for 15 s and 60 °C for 60 s at the PCR stage. After that, melting curve analysis was used to remove from the analysis those samples with non-target sequences and secondary structures. Three replicates for each standard DNA dilution, for each sample, and for a no-template control (NTC) were included in each run. Then, CT (cycle threshold) values were automatically converted to quantities of turmas mycelium in soil (mg mycelium/g soil) by QuantStudio Design & Analysis software v1.4.
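Conceptually, converting CT values to soil mycelium amounts (done here automatically by the instrument software) is simply an inversion of the fitted standard curve. The sketch below assumes hypothetical slope, intercept, and replicate Ct values purely for illustration.
import numpy as np

def ct_to_mycelium(ct, slope, intercept):
    # Invert Ct = slope * log10(quantity) + intercept to estimate mg mycelium / g soil
    return 10 ** ((np.asarray(ct) - intercept) / slope)

slope, intercept = -3.62, 18.4          # assumed standard-curve parameters
sample_ct = [24.9, 25.1, 25.0]          # triplicate Ct values for one soil sample
print(ct_to_mycelium(sample_ct, slope, intercept).mean())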
Statistical Analysis
Statistical analyses were performed using the stats package in the R software environment (https://www.R-project.org/; accessed on 20 May 2022) [71]. Soil mycelium data were evaluated by Grubbs' test to determine whether one of the values was a significant outlier from the rest (https://www.graphpad.com/quickcalcs/Grubbs1.cfm; accessed on 20 May 2022). Differences among groups of samples were compared using Kruskal-Wallis tests with the kruskal.test function. When the test was significant, post hoc analysis was performed using the dunnTest function in the FSA package [72]. Correlations between soil-detected mycelium and harvested truffles were analysed by Poisson regression using the glm function.
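For readers who prefer Python over R, an equivalent analysis can be sketched with SciPy and statsmodels; the group arrays and counts below are placeholders rather than the study data, and the post hoc Dunn test from the R FSA package is omitted here for brevity.
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Hypothetical soil mycelium values (mg mycelium / g soil) grouped by planting year
year_2016 = np.array([0.8, 1.2, 2.5, 0.3, 4.1, 0.9])
year_2018 = np.array([0.5, 3.2, 1.1, 0.2, 2.8, 0.7])
year_2019 = np.array([1.9, 0.4, 0.6, 2.2, 1.3, 0.8])

# Kruskal-Wallis test across groups (analogue of kruskal.test in R)
h_stat, p_val = stats.kruskal(year_2016, year_2018, year_2019)
print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {p_val:.4f}")

# Poisson regression of ascocarps harvested on winter soil mycelium
# (analogue of glm(..., family = poisson) in R); values are illustrative only
mycelium = np.array([0.8, 1.2, 2.5, 0.3, 4.1, 0.9, 0.5, 3.2, 1.1])
ascocarps = np.array([1, 0, 2, 0, 3, 1, 0, 2, 1])
poisson_fit = sm.GLM(ascocarps, sm.add_constant(mycelium),
                     family=sm.families.Poisson()).fit()
print(poisson_fit.pvalues)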
In Silico Primer Screening
The ITS rDNA region is the fragment most commonly used for fungal species identification and as a target for soil fungal diversity studies; however, it shows varying intraspecific variability across fungal groups and high length polymorphism [58,73,74]. In addition, even though many mycologists advocate the LSU region as an alternative, the ITS region shows greater efficiency in species discrimination [58]. The consensus sequence was generated from the independent turmas sequences (Table 1) aligned with MEGA X software. This sequence was used as a DNA template, resulting in three sets of designed primers (Table 2) based on in silico analyses. The specificity of the primers and the amplicons produced was also confirmed against the sequences of the GenBank and RefSeq databases (Table S1). ITS regions from multiple alignments of turmas and desert truffle sequences showed only short and limited sections located within the ITS2 region suitable for the optimal design of specific primers. This made it difficult to obtain primers automatically, and only the primer set TerclaF1/R1 was generated by ProbeFinder software. Moreover, some of the considerations for proper primer composition made the design even more complicated, because when SYBR Green dye is used as a fluorescence marker, the presence of primer dimers, the formation of secondary structures, or non-specific amplifications may produce false signals [64,75]. All this forced the manual design of the primer sets TerclaF2/R2 and TerclaF3/R1, following the parameters already set as closely as possible.
Selection and Validation of qPCR-Specific Primers
In vitro specificity was also confirmed for the three sets of primers designed, and no amplification was found for other fungal species (Table 1). However, the set TerclaF3/R1 provided lower Ct values than the sets TerclaF1/R1 and TerclaF2/R2 with the same amount of turmas DNA template. Particular care was taken regarding non-specific amplification of other desert truffle species (T. albida, T. grisea, T. eliocrocae, Picoa sp. and Geopora sp.), because they can share the habitat and the host plant with turmas [1,2,47,56,76]. Moreover, other Terfezia species from acid soils under non-Helianthemum sp. host plants were tested for cross-validation.
Serial 10-fold dilutions of the standard DNA sample were prepared, and a calibration curve was constructed from the 10⁻¹ to 10⁻⁵ dilutions for the three sets of primers designed. The results showed the highest efficiency for primer set TerclaF3/R1, 89% (Figure 2), followed by primer sets TerclaF2/R2 and TerclaF1/R1 (64% and 58%, respectively). Moreover, coefficients of determination (R²) were always greater than 0.99 in all curves. Finally, the primer combination chosen was TerclaF3/R1 for the optimal real-time qPCR assay using SYBR Green fluorescence dye, and it was used for subsequent analyses. In addition, the primer concentration was adjusted to 100 nM, and PCR inhibitors were observed when using undiluted soil DNA extracts as DNA template. Thus, 1/5 dilutions of each soil DNA extraction were sufficient to avoid inhibition in the qPCR reactions. This was an important check point in order to prevent a drop in the efficiency of the samples analysed [70].
The minimal fungal biomass that could be reliably detected was set at 4.23 µg mycelium/g soil, because below this value, reproducibility was lost (Figure 2). Sensitivity levels were different to a greater or lesser degree for other ectomycorrhizal fungi, due to the different strategies used for standard DNA and calibration curve. The detection limit for extraradical mycelium of the edible fungi L. deliciosus, Rhizopogon roseolus and Rhizopogon luteolus was 10-fold lower (0.48 µg mycelium/g soil) from the DNA extraction of fresh mycelium in soil [40]. However, in a previous study, L. deliciosus was detected at up to 2 µg mycelium/g soil [39], and B. edulis was detected at around 39 µg mycelium/g soil [44]. Later, minimal quantities of L. deliciosus and B. edulis fungal biomass were detected: 1 and 4 µg mycelium/g soil, respectively [42]. In cases where pure in vitro culture of mycelium is difficult to achieve, such as in Tuber species [77], immature ascocarps have been used for standard DNA extraction [32][33][34]38]. Gryndler et al. [37] linked ITS rDNA copies in the PCR product with the biomass of T. aestivum mycelium for absolute quantification; however, this method has been questioned for comparison studies because there is a large variability in the number of copies of this gene between fungal species [78].
Real-time qPCR protocols could also be affected by the DNA extraction process, in which the quality of the experiment varies depending on the amount of DNA obtained and contaminants co-extracted [79,80]. However, researchers have commonly added control soil to the extraction DNA procedure in order to generate site-specific calibration curves [32,35]. Furthermore, although TaqMan-based qPCR assays that include hydrolysis probes avoid the detection of non-specific products, SYBR-Green-dye-based techniques have shown the same high-performance results when appropriate qPCR protocols are followed [63,81].
Spatial Dynamic of Turmas Mycelium in a Desert Truffle Orchard
A four-year-old desert truffle orchard was sampled for environmental validation of the selected primer pair, TerclaF3/R1. Mycorrhizal plants, inoculated with T. claveryi spores, were planted in three different years (2016, 2018, and 2019) (Figure 1); therefore, mycelia of three different ages could be cohabiting. Soil samples were collected in winter, before the fruiting season (spring) of desert truffles in the Mediterranean area [1]. Moreover, winter is the stage of maximum physiological activity of the plant over the year [15]. H. almeriense shows a high photosynthetic rate and gas exchange together with vigorous vegetative growth and flower bud production [15].
The mycelial distribution in the plantation is shown in Figure 3, in which a high variability in soil fungal biomass between samples can be appreciated. The range of fungal biomass detected and quantified was from 0.079 to 4.798 mg mycelium/g soil, and mycelium was undetected in only 2 of the 36 samples. The specificity was also confirmed by checking melting curves after the PCR cycles. No differences in soil fungal biomass were found between planting years (Kruskal-Wallis chi-squared = 0.7417, df = 2, p-value = 0.6901) (Table 3). However, significant differences were found between the sampling points (Kruskal-Wallis chi-squared = 12.188, df = 5, p-value = 0.0323) (Table 3). In contrast to the idea of finding a pattern over the years, we detected heterogeneous mycelial spreading in the plantation, which does not seem to depend on the year of planting. In winter, turmas mycelium may concentrate in those areas where the plant requires nutritional support, either because of sub-optimal soil conditions or because of increased plant needs. Moreover, agroclimatic parameters may also have an effect on mycelial development.
In accordance with the desert truffle life cycle [82,83], the first rainfalls of late summer and early autumn promote primordia formation, which is associated with high production of ascocarps in the following spring [14,15]. As in the genus Tuber [84,85], T. claveryi exhibits a heterothallic lifestyle, which requires the combination of mycelia with different mating-type genes in order to form fruiting bodies [82]. Thus, another reason for the heterogeneity of the soil mycelium detected in our case study could be related to the frequency of mating-type mycelia across the plantation, but there are still no such studies in desert truffles and, thus far, no clear evidence has been found in the genus Tuber either. The detection of both mating types was correlated with productive trees in black truffle plantations [86,87], but in other studies mating-type frequency was distributed randomly across the plantation and was not significantly related to the black truffle ascocarps harvested [88,89].
The amount of fungal biomass in winter showed no significant relationship with the number of ascocarps harvested by planting year (3, 19, and 11 ascocarps collected in 2016, 2019, and 2018, respectively; Poisson regression p-value = 0.573). It seems that the middle-aged section accumulated more desert truffle fruiting bodies. Although the plantation was very successful in coming into production in the first year after planting, it did not reach its maximum productivity, which would occur around the eighth year [14]. In natural and cultivated black truffle experiments, significant differences were found between mycelial abundance and productive areas [31][32][33][87]. T. magnatum mycelium was significantly higher in the areas surrounding fruiting points [35]. Other ectomycorrhizal fungi, such as L. deliciosus and B. edulis, showed no correlation between the productivity of the different plots and the soil fungal biomass [42], but the soil mycelium was strongly related to climatic parameters. The results derived from this assay should be analysed with caution, because long-term annual studies across seasons are necessary to explain the mycelial behaviour of desert truffles in soil, as discussed above for other mycorrhizal fungi.
Although the plantation of our study was irrigated and the competing vegetation had been eliminated, other strategies, such as mechanical tilling practices [90][91][92], should be investigated in order to maintain a youthful plantation and avoid the loss of crops after a few years. Therefore, it is necessary to explore the mycelial dynamics over time, because the plantation may be at risk of ageing, with mycelium and mycorrhizae displaced and no ascocarp production.
Conclusions
In conclusion, the primers designed within the ITS region are sufficiently accurate to develop a real-time qPCR protocol for the quantification of fungal biomass of T. claveryi and T. crassiverrucosa in soil samples. The TerclaF3/R1 primer set was tested and validated for the SYBR-Green-based qPCR assay. Moreover, the preliminary study of soil samples from a desert truffle plantation showed no correlation between winter soil fungal biomass and truffle productivity in spring. However, the amount of soil fungal mycelium seemed to tend to decrease over the years, indicating that, after a certain time, the plantation could become unproductive. In-depth knowledge of mycelial dynamics over the years would help us to develop proposals for plantation management to extend the useful life of plantations.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jof8101095/s1, Table S1: Accession numbers from the GenBank and RefSeq databases (NCBI) of the species used for turmas primer design. Tables 1 and S1. | 2022-10-20T08:57:01.160Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "91949ad394b392fe8d0639e3927e8f386ae5862a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2309-608X/8/10/1095/pdf?version=1666016648",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a129f59c10ced21b4f903fa66b607384abdd5e2",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18257766 | pes2o/s2orc | v3-fos-license | Re-circulating Phagocytes Loaded with CNS Debris: A Potential Marker of Neurodegeneration in Parkinson's Disease?
Diagnosis and monitoring of diseases by measurement of biochemical markers has most commonly been performed on samples of peripheral blood. However, no such markers are available for clinical use in the major diseases of the central nervous system (CNS). In Parkinson's disease, circulating biomarkers would find clinical utility in early diagnosis and also in monitoring of disease progression. Of particular interest is early diagnosis, as this would create a window of opportunity for treatment with neuroprotective drugs. We have developed a novel strategy for monitoring disease activity in the CNS based on the recognition that tissue injuries incite inflammation and recruitment of phagocytes that engulf debris. We postulated that some of these debris-laden phagocytes may return to the peripheral blood, where their cargo of CNS proteins could be measured. If CNS antigens can be measured in PBMCs, this may be an indicator of active neurodegeneration, as the debris engulfed by phagocytes is completely degraded within days. To make this approach more specific to Parkinson's disease, we probed PBMC lysates for neuromelanin as a marker of degeneration within the substantia nigra. We performed a proof-of-principle study in ten subjects with early PD and ten age- and sex-matched controls. The biomarkers neuromelanin, Tau protein, UCH-L1 and HPCAL-1 were measured in PBMC lysates from these two groups. Mean levels of neuromelanin and Tau protein were elevated in PD compared with controls, and the differences were highly statistically significant in both cases. Mean levels of UCH-L1 and HPCAL-1 were elevated in PD over controls, but the differences did not quite reach significance. These results suggest that this is a promising new approach for diagnosis and monitoring of PD and potentially other CNS diseases.
Introduction
PD is associated with a loss of neurons, particularly in the Substantia Nigra (SN), whose neurons produce dopamine. Loss of dopamine results in inappropriate neuronal signaling, causing many of the clinical characteristics of PD, particularly those related to motor function. Neuronal degeneration is not limited to the SN, though it is grossly the most evident site. PD is a chronic neurodegenerative disorder involving loss of neurons in various regions of the brain. It is clinically characterized by resting tremor, bradykinesia (a slowness in the execution of movement), and rigidity, often accompanied by postural instability. Motor symptoms represent the most widely recognized clinical diagnostic of PD. Since the appearance of these symptoms correlates with a high degree of neural degeneration [1,2], it is too late to expect any potential neuroprotective therapies to be effective. To facilitate neuroprotective therapies it will be necessary to identify individuals that are developing PD in a prodromal (pre-motor) phase of the disease. This would identify a window of opportunity for prevention of overt PD development.
Non-motor clinical symptoms, such as hyposmia, REM behavior disorder and constipation, precede motor symptoms. While these factors could be used in combination to identify a high-risk prodromal stage, they are not diagnostic in themselves and could represent a variety of other pathologies. Of these symptoms, olfactory assessment does seem to offer a relatively high degree of both sensitivity and specificity, and as such could represent a low-cost, easy identification of at-risk individuals appropriate for further assessment [1,2]. There are currently no fluid-based biochemical markers for Parkinson's disease in clinical use.
Tissue injuries usually result in some degree of inflammation involving infiltration with phagocytic cells that engulf and remove debris from the site of injury [3]. We have postulated that some of these debris-laden cells may return to the peripheral blood circulation, and that by probing for components of the debris we may have a biomarker strategy to monitor neurodegeneration. Furthermore, as the function of these phagocytes is to break down the debris, detection of debris components in re-circulating phagocytes indicates a process that is actively happening, because the debris is undetectable after a few days [4].
Here we report the results of a pilot study using this approach to detect CNS components in PBMC lysates from recent onset Parkinson's disease and age and gender matched apparently healthy controls.
Blood
Blood samples from ten apparently healthy controls and ten Parkinson's disease patients were obtained from Sanguine Biosciences (Sherman Oaks, CA) for this pilot study. Blood samples were drawn with informed consent under an IRB-approved protocol. Early-stage Parkinson's disease patients were selected by duration since diagnosis (less than 3 years) or early stage of disease based on the Hoehn and Yahr Scale [5]. Apparently healthy controls were age- and gender-matched to the Parkinson's patients selected (Table 1).
Antibodies and Peptides
Biotinylated 4B4 peptide was synthesized by the Tufts University Core Facility peptide synthesis service (Tufts University College of Medicine; Boston, MA). The 4B4 sequence was previously reported [6,7]. Rabbit anti-human Tau protein was purchased from Dako Corp. (Carpenteria, CA). Mouse monoclonal anti-UCH-L1 was purchased from Santa Cruz Biotechnology, Inc. (Dallas, Texas) and rabbit anti-Hippocalcin-like 1 was purchased from Life Span Bioscience Inc. (Seattle, WA). HRP-conjugated streptavidin was purchased from Thermo Scientific (Rockford, IL). HRP-conjugated anti-mouse and anti-rabbit IgG were purchased from Santa Cruz Biotechnology Inc. (Dallas, Texas).
Melanin Staining Kit
A Fontana-Masson stain kit was obtained from American Master Tech Scientific Inc. (Lodi, CA).
Staining Frozen Sections
Fontana-Masson staining was performed according to the manufacturer's instructions. 4B4 peptide was diluted in PBS + 1% BSA and incubated on frozen sections for 60 minutes. The sections were then washed 3 times with PBS before incubation with a 1:500 dilution of HRP-streptavidin (KPL; Gaithersburg, MD) in PBS + 1% BSA for 60 minutes. The sections were then washed 3 times with PBS before incubation with TMB insoluble substrate (EMD Millipore; Billerica, MA). Sections were counterstained with Nuclear Fast Red and coverslips mounted prior to microscopy. The stained sections were visually scored and photomicrographs taken.
Human Peripheral Blood Mononuclear Cells
PBMCs were prepared from whole blood samples collected in BD Vacutainer® Cell Preparation Tubes (CPT™) with sodium citrate as anticoagulant (Becton, Dickinson and Co.; Franklin Lakes, NJ), collecting two tubes per patient. The tubes were centrifuged at 3000 rpm for 30 minutes. The PBMCs from each tube were collected and washed two times in 45 mL Phosphate Buffered Saline (PBS), by centrifugation at 1500 rpm for 15 minutes for each wash. The supernatant was aspirated and discarded. The remaining pellets were recombined and hypotonically lysed with 500 μL deionized water. The suspension was brought to isotonic with 10× PBS, and 10 μL protease arrest™ (G-Biosciences; St. Louis, MO) was added. Aliquots were stored at −80 °C. All samples were frozen for less than one month prior to assay.
Protein Quantification
Protein concentrations in PBMC lysates were measured using the Bradford assay (Bio-Rad Laboratories; Hercules, CA) with Bovine Serum Albumin (BSA) as the standard. Standards of 5 to 25 micrograms per mL of protein were prepared in 0.8 mL of PBS. Unknowns were diluted as necessary in PBS. 0.2 mL Coomassie Brilliant Blue dye reagent (Bio-Rad) was added to each tube, vortexed and incubated 5 minutes. Absorbance was measured at 595 nm. Unknowns were quantified by interpolating absorbance against the standard curve. HRP-conjugated anti-mouse IgG and HRP-conjugated anti-rabbit IgG were purchased from Santa Cruz Biotechnology (Dallas, Texas).
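The interpolation step of the Bradford assay can be expressed as a simple linear fit of the BSA standards followed by inversion; the sketch below uses hypothetical absorbance readings and a hypothetical dilution factor, not values from this study.
import numpy as np

# Hypothetical BSA standards (micrograms per mL) and their A595 readings
std_conc = np.array([5, 10, 15, 20, 25])
std_a595 = np.array([0.11, 0.22, 0.32, 0.43, 0.54])

slope, intercept = np.polyfit(std_conc, std_a595, 1)   # A595 = slope * conc + intercept

def protein_conc(a595, dilution_factor=1.0):
    # Interpolate an unknown's absorbance against the standard curve,
    # correcting for any dilution made before the assay
    return ((a595 - intercept) / slope) * dilution_factor

print(protein_conc(0.37, dilution_factor=10))   # e.g., a 1:10 diluted PBMC lysate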
4B4 Peptide Binding Assay
Human PBMC lysate was normalized to 5 μg/mL protein concentration in 0.5 M NaOH and adsorbed directly onto a polystyrene plate. Wells were coated with 100 μL each of lysate dilution, in duplicate or triplicate, then incubated at 60 °C for two hours, followed by 30 minutes at 80 °C. Wells were washed four times with deionized water and blotted. Wells were blocked with 200 μL per well of 1% BSA/0.1 M Glycine in PBS and incubated for 2 hours at room temperature or 4 °C overnight. Wells were washed four times with PBS and blotted. 100 μL per well of 0.1 μg/mL biotinylated 4B4 peptide diluted in 1% BSA-PBS was added and incubated for 2 hours at room temperature. Wells were washed four times with PBS and blotted. 100 μL per well of streptavidin-HRP diluted 1:500 in 1% BSA-PBS was added and incubated 1 hour at room temperature. Wells were washed four times with PBS and blotted. 100 μL per well of TMB substrate solution was added and incubated 30 minutes at room temperature. 100 μL per well of TMB stop solution was added and the plate was read using a BioTek ELx800 plate reader at 450 nm.
Direct ELISA
Tau and Hippocalcin-like 1
PBMC lysate was adjusted to 5 μg/mL protein concentration in PBS and adsorbed onto a polystyrene plate. Wells were coated with 100 μL each of lysate dilution, in duplicate or triplicate, then incubated for two hours at room temperature. Wells were washed four times with PBS and blotted. Wells were blocked with 200 μL per well of 1% BSA/0.1 M Glycine in PBS and incubated for 2 hours at room temperature or 4 °C overnight. Wells were washed four times with PBS and blotted. 100 μL per well of 1:100 antibody (rabbit anti-human Tau or anti-human HPCAL1) diluted in 0.1% BSA/0.05% Tween 20 in PBS was added and incubated for 2 hours at room temperature. Wells were washed four times with PBS and blotted. 100 μL per well of 1:5000 anti-rabbit-IgG-HRP diluted in 1% BSA-PBS was added and incubated 1 hour at room temperature. Wells were washed four times with PBS and blotted. 100 μL per well of TMB substrate solution was added and incubated 30 minutes at room temperature. 100 μL per well of TMB stop solution was added and the plate was read using a BioTek ELx800 plate reader at 450 nm.
UCH-L1
PBMC lysate was adsorbed onto a polystyrene plate and blocked as described above. 100 μL per well of 1:100 mouse anti-human UCH-L1 diluted in 0.1% BSA/0.05% Tween 20 in PBS was added and incubated for 2 hours at room temperature. Wells were washed four times with PBS and blotted. 100 μL per well of 1:5000 anti-mouse-IgG-HRP diluted in 1% BSA-PBS was added and incubated 1 hour at room temperature. Wells were washed four times with PBS and blotted. Color development and measurement were as described above.
Statistical Analyses
Results from the peptide binding assay and ELISA assays were analyzed for statistical significance using the Graphpad QuickCalcs t-test calculator (Graphpad Software Inc.; La Jolla, CA).
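The same unpaired, two-tailed t-test can be reproduced outside GraphPad; the sketch below uses hypothetical A450 readings for ten PD and ten control lysates simply to show the call, not the study's measured values.
import numpy as np
from scipy import stats

# Hypothetical ELISA A450 readings for one marker (not the study data)
pd_values = np.array([0.82, 0.95, 0.71, 1.10, 0.88, 0.60, 0.92, 1.05, 0.45, 0.50])
control_values = np.array([0.40, 0.38, 0.52, 0.45, 0.41, 0.36, 0.48, 0.43, 0.39, 0.44])

t_stat, p_val = stats.ttest_ind(pd_values, control_values)   # unpaired, two-tailed
print(f"t = {t_stat:.3f}, p = {p_val:.4f}")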
Results
Acetone-fixed frozen sections of human SN were stained for neuromelanin granules with the Fontana-Masson stain (Figure 1, left) and revealed black staining of neuromelanin-containing cells in the SN. Staining of these sections with biotinylated 4B4 peptide revealed a similar distribution of blue-stained granules (Figure 1, right).
PBMC lysates of PD subjects and controls were probed for the presence of CNS antigens by peptide binding assay (for neuromelanin) and ELISA (Figure 2). Eight of 10 PD subjects had levels of 4B4 binding above the control subjects (Figure 2, top left); p = 0.0082 (Table 2). Seven of 10 PD PBMC lysates had levels of Tau protein above the highest control (Figure 2, top right); p = 0.0047 (Table 2). Hippocalcin-like 1 was elevated above the highest control value in 6 of 10 PD subjects (Figure 2, bottom left); p = 0.0521 (Table 2), and UCH-L1 was elevated above the highest control value in 5 of 10 PD subjects (Figure 2, bottom right); p = 0.066 (Table 2).
Discussion
The 4B4 peptide has previously been shown to bind to microbial melanins and eumelanin in human skin [6,7], and we have now demonstrated, by histochemical staining, that it also binds to neuromelanin in the substantia nigra of the human brain. This enabled us to use the 4B4 peptide as a reagent for detection of neuromelanin in phagocytes within PBMCs. We found, in this study, that the mean 4B4 binding levels to PBMC lysates in recently diagnosed PD subjects and age- and gender-matched controls were statistically significantly different by t-test, and that eight of the ten PD subjects had a 4B4 binding level that was above the highest control. This would imply that 4B4 binding may be a useful marker of PD.
What of the two PD subjects that were negative? Is it possible that they were misdiagnosed and have an atypical presentation of essential tremor? A DAT scan could answer this question, but that data was not available as these blood samples were obtained from a commercial bio-banking source. This does, however, raise the intriguing possibility that a simple and inexpensive blood test may be able to differentiate PD from other movement disorders and aid in obtaining a correct diagnosis. It is also of interest to note that the two PD subjects that were in the control range for neuromelanin also measured in the control range for Tau protein, HPCAL1 and UCH-L1.
We also analyzed these lysates for the presence of other CNS antigens. The Tau protein has been implicated in the pathogenesis of neurodegenerative diseases [8] and may be present in phagocytosed neuronal debris. We found that the mean levels of Tau protein in PBMC lysates in recently diagnosed PD subjects and age- and gender-matched controls were statistically significantly different by t-test, and that seven of the ten PD subjects had a Tau protein level that was above the highest control. This would also imply that Tau protein in re-circulating phagocytes may be a useful marker of PD.
Hippocalcin-like 1 is a calcium-sequestering protein and a member of the neuron-specific calcium-binding protein family localized to the brain and retina [9]. Hippocalcin-like 1 may contribute to the calcium-dependent regulation of rhodopsin phosphorylation and may be of relevance for neuronal signaling in the CNS. We analyzed these PBMC lysates for the presence of Hippocalcin-like 1 protein. We found that the mean levels of Hippocalcin-like 1 protein in PBMC lysates in recently diagnosed PD subjects and age- and gender-matched controls were not quite statistically significantly different by t-test; however, if only two significant figures are considered, the p-value is 0.05, which is conventionally accepted as statistically significant. Six of the ten PD subjects had a Hippocalcin-like 1 protein level that was above the highest control. This would also imply that Hippocalcin-like 1 protein in re-circulating phagocytes may be a useful marker of PD.
Ubiquitin carboxy-terminal hydrolase L1 (UCH-L1) is a de-ubiquitinating enzyme. UCH-L1 expression is specific to neurons and to cells of the diffuse neuroendocrine system and their tumors. It is present in all neurons (accounting for 1-2% of total brain protein) [10]. The UCH-L1 gene has been associated with both PD and Alzheimer's disease [11][12][13].
We found that the mean levels of UCH-L1 protein in PBMC lysates in recently diagnosed PD subjects and age- and gender-matched controls were not quite statistically significantly different by t-test. Five of the ten PD subjects had a UCH-L1 protein level that was above the highest control. This would similarly imply that UCH-L1 protein in re-circulating phagocytes may be a useful marker of PD.
These results imply that debris-loaded macrophages return to the peripheral blood circulation. This may be a direct re-entry via local capillaries. Evidence that this may be so was reported in a rodent model of retinal degeneration [14], in which it was observed by electron microscopy that macrophages loaded with photoreceptor debris were re-entering local capillaries.
Monocytes/macrophages are recruited to sites of injury by chemotaxis along gradients of secreted chemokines, and this must be occurring in PD. The recent report that severity of PD is associated with the circulating level of the chemokine CCL5 is of extreme interest [15]. In this study, CCL5 was measured in sera of PD subjects at different Hoehn-Yahr stages. Regression analysis revealed an association of CCL5 level with PD progression, although it was not a strong association. The regression coefficient was 0.362, indicating that the association is responsible for about one third of the variation in the data, and was extremely statistically significant (p = 0.001). An earlier study did not find any association of PD severity with serum levels of the chemokines CCL3, CCL11, CCL24, CXCL8 and CXCL10 [16]. Consequently, progression of PD is likely to be due to multiple mechanisms, but infiltration of immune cells under the influence of CCL5 may play a significant but not exclusive role.
Conclusion
The prevalence of elevation of marker levels in PBMC lysates over controls was highest for 4B4 binding, at 80%; however, the lower prevalence of the other markers does not disqualify them from consideration, as they may make useful contributions to a biomarker panel and algorithm that could potentially be used for both early diagnosis of PD and monitoring of disease progression. | 2016-10-11T02:19:10.865Z | 2015-02-12T00:00:00.000 | {
"year": 2015,
"sha1": "ec8628fde69010302d7b5231f8cdc2cdd87c001d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3934/medsci.2015.1.26",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "ec8628fde69010302d7b5231f8cdc2cdd87c001d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
54121927 | pes2o/s2orc | v3-fos-license | Proteomics of Bronchoalveolar Lavage Fluid Reveals a Lung Oxidative Stress Response in Murine Herpesvirus-68 Infection.
Murine herpesvirus-68 (MHV-68) productively infects mouse lungs, exhibiting a complex pathology characteristic of both acute viral infections and chronic respiratory diseases. We sought to discover proteins differentially expressed in bronchoalveolar lavage (BAL) from mice infected with MHV-68. Mice were infected intranasally with MHV-68. After nine days, as the lytic phase of infection resolved, differential BAL proteins were identified by two-dimensional (2D) electrophoresis and mass spectrometry. Of 23 unique proteins, acute phase proteins, vitamin A transport, and oxidative stress response factors Pdx6 and EC-SOD (Sod3) were enriched. Correspondingly, iNOS2 was induced in lung tissue by seven days post-infection. Oxidative stress was partly a direct result of MHV-68 infection, as reactive oxygen species (ROS) were induced in cultured murine NIH3T3 fibroblasts and human lung A549 cells infected with MHV-68. Finally, mice infected with a recombinant MHV-68 co-expressing inflammatory cytokine murine interleukin 6 (IL6) showed exacerbated oxidative stress and soluble type I collagen characteristic of tissue recovery. Thus, oxidative stress appears to be a salient feature of MHV-68 pathogenesis, in part caused by lytic replication of the virus and IL6. Proteins and small molecules in lung oxidative stress networks therefore may provide new therapeutic targets to ameliorate respiratory virus infections.
Introduction
Respiratory virus infections have the potential to cause significant lung pathology including acute respiratory distress syndrome (ARDS). In addition to the continual burden of disease from respiratory viruses such as influenza types A and B, respiratory syncytial virus (RSV), parainfluenza viruses, adenovirus, recently emerged coronaviruses responsible for Middle East (MERS-CoV) and severe acute (SARS-CoV) respiratory syndromes, H5N1 and H7N9 pathogenic avian influenza viruses, pandemic swine-origin (H1N1) influenza, and human metapneumovirus target the human lungs [1][2][3][4][5][6][7]. Co-morbid, underlying pulmonary medical conditions including asthma, chronic obstructive pulmonary disease (COPD), and tuberculosis (TB) are associated with severe respiratory virus infections [8][9][10][11]. Moreover, chronic pulmonary diseases such as asthma, COPD, and For a monkeypox virus infection model in macaques [42] and three pathogens in mouse infection models, RSV [43], Staphylococcus aureus [44,45], and Klebsiella pneumoniae [46], proteomics analyses of BAL have identified inflammatory proteins and revealed commonalities in infectious pulmonary pathophysiology. Analysis of mouse BAL by IEF/2DE showed a suppression of antioxidant and oxidative stress proteins during RSV infection [43]. However, no analysis using differential IEF/2DE proteomics in MHV-68 infection of the mouse lung have been published to date.
As many human viruses infect the lung, understanding the proteins present in BAL using MHV-68 as a model may uncover novel aspects of the mammalian host's response to pulmonary viral infections. Using proteomics, we have identified mouse BAL proteins that are differentially up-regulated by virus infection and overexpression of an immunomodulatory cytokine (IL6). Proteins involved in the acute phase response, oxidative stress responses, and vitamin A signaling were salient in the MHV-68 infected lung. Interestingly, these proteins are induced by nine days post-infection (d.p.i.), as the initial phase of MHV-68 infection resolves and lytic replicating virus is cleared from the lungs by T-cell mediated host responses [20,23]. The experimental protocol herein demonstrates the feasibility of differential BAL proteomics to characterize less abundant, highly regulated host factors in BAL fluid.
Viruses and Cell Cultures
Wild-type (WT) MHV-68, MHV68/IL6, and red fluorescent protein (RFP)/MHV-68 viruses in this study were all titered by plaque overlay assay on BHK21 cells as previously described [47,48]. Recombinant viruses were generated by co-transfection of MHV-68 genomic DNA and a PCR-generated cDNA encoding the gene to be inserted, flanked by sequences corresponding to the MHV-68 genome. MHV68/IL6 virus was generated by homologous insertion of murine cDNA encoding interleukin-6 (IL6) driven by a cytomegalovirus (CMV) immediate early (IE) promoter-enhancer into an intergenic locus near the 5′ end of the MHV-68 genome [49]. The RFP/MHV-68 virus was generated in a similar manner, whereby a cDNA encoding RFP driven by the CMV IE promoter-enhancer was inserted into the ORF28 locus. The ORF28 gene is dispensable for infection of cultured cells and Mus musculus models of MHV-68 infection [50]. Recombinant viruses were selected by plaque purification; viral DNA was purified and screened for cDNA insertion into the expected loci by PCR and restriction fragment digestion followed by Southern blotting, as has been described [48,49,51]. During lytic infection in NIH3T3 cells, expression of IL6 from the MHV68/IL6 virus was confirmed by Western blotting and ELISA; for RFP/MHV-68, expression of RFP was observed by epifluorescent microscopy. To probe for reactive oxygen species (ROS), murine NIH3T3 or human A549 cells were infected with RFP/MHV-68 at a multiplicity of infection (m.o.i.) of 1 or 5, and at 4 h or 20 h post-infection (h.p.i.) cells were rinsed in cold 1× phosphate-buffered saline (PBS), incubated for 5 min at 37 °C in the dark in 1× PBS containing 5 µM 5/6-carboxy-2′,7′-difluorodihydrofluorescein diacetate (H2DF2DA), a compound that exhibits superior photostability compared to other fluorescein derivatives (Invitrogen, Carlsbad, CA, USA), washed in 1× PBS, and then imaged in an epifluorescent microscope. The ROS-inducing compounds H2O2 or paraquat (10 µM) were employed as positive controls for H2DF2DA fluorescence. For examining ROS effects on viral titer, NIH3T3 cells were infected with RFP/MHV-68 (m.o.i. = 0.25) in the absence or presence of 1 mM soluble glutathione (GSH) or 2-25 µM paraquat in media. After 20 h, culture supernatants were diluted 1/2, 1/10, or 1/100 and used to re-infect fresh NIH3T3 cells, and RFP fluorescence was observed 20 h.p.i. by epifluorescence microscopy using a Zeiss Axiovert epifluorescence microscope (C. Zeiss AG, Oberkochen, Germany). Animal experiments were approved by the Institutional Animal Care and Use Committee of the University of California, Los Angeles (IACUC protocol number #1999-058; approved 1 Jan. 1999; renewed 2004). Twelve-week old male C57/BJ6 mice (Charles River Laboratories, Wilmington, MA, USA) were anesthetized with 0.1 mL (100 mg/kg) ketamine by intraperitoneal (i.p.) injection, and then inoculated with 20 µL DMEM (6 mice) or infected intranasally (i.n.) with 5 × 10⁵ pfu of WT MHV-68 (6 mice) or MHV68/IL6 (6 mice) virus diluted in 20 µL DMEM. Mice in each experimental group were housed separately until sacrifice at 6 or 9 d.p.i., when, at each timepoint, 3 mice in each experimental group were anesthetized and sacrificed under anesthesia by i.p. injection of 0.1 mL ketamine. Mice were subsequently dissected for bronchoalveolar lavage (BAL) three times with 1.4 mL sterile 1× PBS via a rounded 21G syringe inserted by tracheotomy and affixed with suturing thread. Separately, 2 more mice in the DMEM group and in each infected group were sacrificed at 7 d.p.i.
for whole-lung harvest with snap-freezing of tissue in liquid N2, and determination of viral titer and gene expression as described [47,49]. BAL fluid was centrifuged immediately (2000× g, 15 min, 4 °C) to separate the soluble supernatant phase from the cell/debris pellet. Supernatants were kept at −80 °C until processing. Cell pellets were resuspended in 50 µL 0.5% FBS DMEM containing 1 mM EDTA, and monocytes in 5 µL aliquots were counted by trypan blue exclusion test with a hemocytometer. Aliquots of cell fractions (5 µL) were also analyzed by thin smear on poly-lysine coated glass slides followed by fixation and eosin/hematoxylin staining with Hema3 (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions, and light microscopy (Olympus Corp., Tokyo, Japan).
BAL Fluid Processing
Aliquots of BAL fluid supernatants were used for Sircol collagen assay, viral DNA detection by quantitative PCR, and protein detection by Western blotting. For each experimental infection (MHV-68 and MHV68/IL6), one of the three BAL samples containing significant numbers of erythrocytes was deselected. To have sufficient protein for resolution by proteomics methods, the remaining 2 BAL fluid supernatants were pooled for each experimental condition, precipitated in 95% acetone at −20 °C for 2 h, centrifuged (4 °C, 15 min, 20,000× g), and then resuspended in binding buffer. To reduce abundant immunoglobulins and albumin, Aurum column binding and elution (Bio-Rad, Hercules, CA, USA) was done according to the manufacturer's instructions. Eluents were re-concentrated and desalted by 4:1 acetone (95%, ice-cold) precipitation for 2 h, centrifuged (4 °C, 15 min, 20,000× g), and resuspended in isoelectric focusing (IEF) buffer. A Bradford assay was used to quantify protein concentration prior to and post-processing [52], and SDS-PAGE with SYPRO-Ruby staining was used to observe depletion of abundant albumin bands.
Sircol Collagen Assay
For the lung tissue collagen assay, 0.05 g lung tissue was homogenized in 0.5 M acetic acid (1 mL) containing 7.5 mg pepsin, and rotated for 24 h at 4 °C. Samples were briefly centrifuged to pellet debris, and 100 µL of each supernatant was assayed for collagen by Sircol assay as described by the manufacturer (BioColor Ltd., Carrickfergus, Belfast, UK). For measuring soluble collagen in BAL fluid, aliquots of 25 µL were subjected to Sircol assay. Collagen concentration was determined by absorbance at 540 nm in a spectrophotometer and titration according to standard curves generated for lung tissue and BAL fluid.
Catalase Assay and Immunoblotting
NIH3T3 cells were lysed in passive lysis buffer and protein content was normalized by a Bradford assay as described previously [52], and lysates were subjected to a catalase activity assay (Sigma, St. Louis, MO, USA) according to the manufacturer's instructions. A standard curve was generated with controlled quantities of H2O2. H2O2 treatment for 24 h yielded only a minimal induction of catalase activity in this assay. For Western blots, protein lysates separated by SDS-PAGE were Western blotted and probed with specific polyclonal anti-ORF65/M9 antisera or anti-catalase antibody (Calbiochem, San Diego, CA, USA) with HRP-linked secondary antibody and electrochemiluminescent detection as described [48].
PCR
For quantitative RT-PCR, total RNA was isolated from mouse lungs at 7 d.p.i. and reverse transcribed into cDNA as described [47,49]. Primers to specific murine genes (described in Supporting Information Table S2) were used to amplify transcript cDNA, and relative transcript copies were determined by the ΔΔCT method with an actin internal control using SYBR Green (Applied Biosystems, Carlsbad, CA, USA) real-time detection on a LightCycler thermocycler (Roche, Indianapolis, IN, USA). Significance of relative gene expression was determined by an unpaired, 2-tailed t-test. Viral DNA representing viral genome copy number was determined for each mouse BAL sample by qPCR with primers specific to the MHV-68 genomic ORF65/M9 or ORF57 loci as previously described [48,51].
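A minimal sketch of the ΔΔCT calculation used for relative expression, with an actin internal control; all Ct values below are hypothetical and serve only to show the arithmetic.
def fold_change_ddct(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    # delta-delta-Ct: fold change = 2^-[(Ct_target - Ct_actin)_sample - (Ct_target - Ct_actin)_reference]
    return 2 ** -((ct_target - ct_actin) - (ct_target_ref - ct_actin_ref))

# Hypothetical Ct values: a target transcript in infected vs. mock-infected lung
print(fold_change_ddct(ct_target=24.0, ct_actin=17.5,
                       ct_target_ref=28.5, ct_actin_ref=17.8))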
IEF, 2D-PAGE, Spot Mapping and Densitometry
Eluted BAL proteins (300 µL) were resuspended in IEF buffer containing ampholytes covering the pH 3-10 range (Bio-Rad, Hercules, CA, USA). Samples were passively loaded on rehydrated, immobilized 11 cm nonlinear pH 3-10 gradient IPG strips (Bio-Rad, Hercules, CA, USA) and then focused by pI for 18 h, ramping over 6 h to a maximum of 70,000 V-h in a Protean IEF Cell (Bio-Rad, Inc., Hercules, CA, USA). Strips were re-equilibrated for 30 min in DTT and then iodoacetamide buffers, and proteins were separated by mass in a denaturing 8-16% gradient Criterion 2D-PAGE (Bio-Rad, Inc., Hercules, CA, USA). Two-dimensional gels were briefly incubated in 10% methanol/5% acetic acid, rinsed in ddH2O, and stained for 3 h with SYPRO-Ruby (Invitrogen, Carlsbad, CA, USA). Gels were imaged under UV light and analyzed to identify differentially-expressed protein spots. Proteins resolved in the pH 4-7 range were sufficiently separated for spot mapping across gels using an integrated ProteomeWorks PD Quest 7.1 imager and software (Bio-Rad, Carlsbad, CA, USA) with manual spot validation. Spots were quantified by peak cross-sectional densitometry using ImageQuant (GE Healthcare, Piscataway, NJ, USA), and normalized to an average of oxytocin-receptor (spot 13) and a common major form of eluted albumin (spot 5) relative to gel image background density. A total of 89 abundant differential spots across the three experimental conditions were excised and in-gel digested with Trypsin Gold MS (Promega, Madison, WI, USA), and alkylated peptides were extracted, dried and stored at −80 °C as described previously [53] for mass spectrometry identification.
Mass Spectrometry
Tryptic peptide digests of proteins were separated on a reverse phase column and identified by tandem micro-LC/MS-MS and, in some cases, by MALDI-TOF mass spectrometry, with sample handling as described previously [54,55]. BSA (5 pmol) digested in Trypsin Gold was used to generate positive control spectra for the LC/MS-MS and MALDI-TOF experiments, respectively. Briefly, MS-MS spectra were captured on an AB Sciex Qstar quadrupole XL hybrid TOF LC/MS-MS (Applied Biosystems, Foster City, CA, USA) with tandem peptide ion fragmentation running in Information Dependent Acquisition (IDA) mode. Peptide and fragment a-, b-, and y-series ion spectra were analyzed by Mascot software (Matrix Sciences, Boston, MA, USA) with peptide tolerance set at <0.5 Da, MS/MS tolerance <0.8 Da, charge states +1/+2/+3/+4, one missed tryptic cleavage allowed, and oxidation of Cys and Met, with peptide identification by search against the predicted mouse proteome at the NCBI and EBI reference databases. From tryptic digests of excised spots, 44 yielded peptide data identifying 23 unique proteins. Positive identification cutoffs were determined on a case-by-case basis with expectation scores < 10^-2 (p < 0.05, for 20 hits) or p < 0.1 (for 3 hits), considering multiple peptide hits and supporting MALDI-TOF data in assignment. Another 7 spots did not meet a significance cutoff or poorly matched the predicted pI and MW, including annexin A5, a hemoglobin fragment, triose phosphate isomerase, matrix metalloproteinase 8, serpin B3d, and collagens I and VI (p > 0.10). For MALDI-TOF, aliquots of peptide digests were mixed with a 200× proportion of α-cyano FHSA matrix dissolved in 70% acetonitrile and 0.1% TFA and spotted, with laser ionization and data capture with a low mass gate (500 Da) on an AB Sciex Voyager MALDI-TOF running PD Quest software (Applied Biosystems, Foster City, CA, USA). MALDI peptide data were searched against the mouse proteome using Aldente software [56], with predicted pI and molecular mass data estimated from 2D-PAGE spots.
Bioinformatics Analyses
Functional enrichment among the set of proteins discovered in enriched BAL fluid was analyzed by Ingenuity Pathways Analysis (IPA 7.6, Ingenuity Systems Corp., Redwood, CA, USA) as described [57]; IPA categories were tested for significance with a Benjamini-Hochberg correction for false discovery [58]. BAL protein functions were also analyzed by Database for Annotation, Visualization, and Integrated Discovery (DAVID) algorithms [59] to assess significant Gene Ontology (GO), InterPro (IPR), and Protein Information Resource (SP_PIR) annotations. A subnetwork of oxidative stress-associated molecules was discovered and extracted using IPA with manual literature curation to construct a network model [60]. Amino acid sequences of human and mouse TNFAIP8 family proteins were obtained from the UniProt database, and CLUSTAL multiple sequence alignments were performed and formatted using MAFFT FFT-NS-2 v5.731 [61].
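The Benjamini-Hochberg adjustment applied to the IPA categories is the standard false-discovery-rate correction, illustrated by the minimal R sketch below; the raw p-values are hypothetical placeholders, not the enrichment results of this study.

# Hypothetical raw enrichment p-values for several functional categories
p_raw <- c(0.0001, 0.0010, 0.0028, 0.0094, 0.0390, 0.2100)
p_adj <- p.adjust(p_raw, method = "BH")   # Benjamini-Hochberg adjusted p-values
data.frame(p_raw, p_adj, significant = p_adj < 0.05)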
Recovery and Characterization of BAL Fluid from Mouse Lungs Infected with MHV-68
An unexpected observation regarding MHV-68 infection of laboratory mice was that MHV-68 infection could exacerbate pulmonary fibrosis [15,27,62-65]. Thus, we also sought to identify proteins induced by MHV-68 infection accompanied by co-expression of murine IL6, a pro-fibrotic cytokine. C57/BJ6 mice were inoculated intranasally (i.n.) with DMEM, or infected with a high titer of WT MHV-68 or a recombinant MHV-68 virus co-expressing murine interleukin-6 (IL6) from a constitutive promoter. To discover secreted or extracellular proteins in virus infection of the mouse lungs, we developed an experimental procedure to analyze the BAL fluid proteome (Figure 1A). At six d.p.i. and nine d.p.i., BAL fluid was collected and analyzed for cells, protein, soluble collagen, and viral DNA content. At six d.p.i., MHV68/IL6 showed significantly more soluble type I collagen in BAL fluid than WT MHV-68 infection; by nine d.p.i., soluble type I collagen was significantly higher in BAL fluid from both WT and MHV68/IL6 infection in comparison to the uninfected control (Figure 1B). Subsequent analysis focused on the nine d.p.i. timepoint, for which soluble type I collagen levels in BAL were similar between the WT virus and MHV68/IL6, and good resolution of proteins by IEF/2DE was achieved. Both protein concentration and mononuclear cellularity in BAL fluid were substantially higher in infected vs. uninfected mice at nine d.p.i., and viral DNA was detected (Figure 1C). MHV-68 viral capsid antigen ORF65/M9 was also present in clarified BAL fluid (Figure S1B). Differences in extracellular virion DNA or DNA from damaged cells in the lung (Figure S1A), which are not a direct measurement of infectious virus titer, were not significant (p > 0.1, two-tailed t-test). Numbers of BAL mononucleocytes recovered (Figure 1C) were similar to a previous report of phenotypic characterization of mononuclear cell infiltrates (Figure S1C), chemokines, and cytokines in MHV-68 infection of the lungs [20]. (Figure 1 legend, in part: BAL fluid from C57/BJ6 mice inoculated as above was processed to remove cells, salts, abundant serum proteins, and immunoglobulins; average protein concentration measured by Bradford assay (±1 s.d.) differed significantly (p < 0.05, unpaired, 2-tailed t-test) between MHV-68 and DMEM and between MHV68/IL6 and DMEM; average viral DNA copy number (×10^5 copies) in BAL, measured by qPCR, and average mononuclear cellularity (×10^5 cells), measured by trypan blue hemocytometry, are shown for 3 mice per condition, with no significant differences; IEF = isoelectric focusing.)
Differential Proteomics Analysis of BAL Fluid
Recovered BAL fluid from nine d.p.i. was pooled for each experimental condition (three mice in DMEM, or two mice for WT MHV-68 and MHV68/IL6, respectively), after processing to remove cells and reduce abundant immunoglobulins, albumin, and salts. Reduction of the most abundant proteins in biofluids is a common approach for reducing proteome complexity to enrich less abundant but biologically interesting proteins [66]. Enriched BAL proteins were analyzed by comparative 2D gel electrophoresis (IEF/2DE) display ( Figure 2). WT MHV-68 and MHV68/IL6 infection induced a considerably more complex proteome than DMEM-inoculated control mice. Prominent constitutive and differentially-expressed orthologous proteins were mapped and identified by LC/MS-MS and/or MALDI mass spectrometry. We mapped 39 spots in the pH 3-7 range, and identified 23 unique proteins, of which 20 proteins had high significance scores (p < 0.05), for example peroxiredoxin 6 (Pdx6, spot #15 in Figure 2; see Figure S2 for example of detailed peptide LC/MS-MS data), and three proteins (A2MP, OxtR, and Tnfaip8l2) had marginal scores (p < 0.1) for at least two peptide matches (Supporting Information Table S1). Even though viral ORF65/M9 antigen was detected by Western blot in clarified BAL fluid supernatants ( Figure S1), peptides matching MHV-68 virion proteins [53] in enriched BAL fluid data did not reach the significance cutoff (p > 0.10). Of the proteins identified, 13 were induced by WT MHV-68 infection (six strongly), and five were markedly upregulated in the context of MHV68/IL6 ( Figure 2D). Another four proteins showed a reduced abundance in the context of either virus infection in comparison to DMEM-treated or uninfected control mice ( Figure 2D and Table S1).
Functions of Proteins Induced by MHV-68 in Lungs
While this survey is not a comprehensive list of BAL proteins [34,67], the proteins identified fell into four broad functional groups according to Gene Ontology (GO) classification and the scientific literature (Table 1): (i) acute phase response (APR) and inflammation, (ii) oxidative stress response, (iii) phospholipid metabolism and signaling, and (iv) molecular transport and serum proteins. Most of these proteins have been implicated in inflammation or lung diseases, and many have been identified in proteomics studies of BAL from human patients with ARDS [35,36,68], acute lung diseases [33], IPF [37], or proteomics analysis of serum from patients with severe acute respiratory syndrome (SARS) caused by SARS-coronavirus [69]. The MHV68/IL6 virus induced three antioxidant (thioredoxin-like 4B, peroxiredoxin 2, superoxide dismutase 3) and two acute phase (α2-macroglobulin, CRABP2) proteins substantially more than WT MHV-68. Accordingly, oxidative stress [70,71] and acute phase responses [72,73] have been shown to be regulated by IL6.
(Figure 2 legend, in part: BAL was collected from mouse lungs 9 d.p.i., pooled, and processed to enrich for less abundant proteins as described in Methods; eluted BAL proteins were separated by isoelectric focusing followed by 2D-PAGE and SYPRO-Ruby staining; proteins resolved in the pH 4-7 range were sufficiently separated for spot identification (red numbers), excision, and tryptic digestion for protein identification by MALDI and/or LC/MS-MS; spots were quantified by densitometry, normalized as described in Methods, and fold induction over orthologous spots in mock (DMEM) treatment is indicated in panel D, with significant fold induction (>2.0) of MHV68/IL6 over WT MHV-68 specified.)
Functional Enrichment Analysis
To gain a more systematic understanding of protein functions induced in response to MHV-68 infection of the lung, we undertook bioinformatics analyses to identify functional enrichment for the 20 of 23 significant or marginally significant proteins identified in MHV-68 BAL. Albumin, a reference serum protein, and protein fragments (Hydin and OxtR) were not included. Among BAL proteins, significantly enriched functional categories included physiological stress (p = 0.0010), oxidative stress annotations (p < 0.0001), and acute phase response (p = 0.0028) (Figure 3). Oxidative stress response proteins included reactive oxygen species (ROS) and detoxifying enzymes (peroxiredoxins, thioredoxins, and superoxide dismutase). Acute phase response (APR) proteins overlapped with other enriched functions, including physiological stress response and signaling (p = 0.039). Among signaling proteins, vitamin A (retinoic acid) binding was a significant function (p = 0.0094) for three proteins that were also induced in APR: CRABP2, transthyretin (TTR), and plasma retinol binding protein (RBP4). Proteins induced by WT MHV-68 infection were predominantly in the stress response, acute phase, signaling, and oxidative stress categories. Exogenous expression of IL6 in the context of MHV-68 infection primarily induced oxidative stress and APR proteins as well as vitamin A binding protein CRABP2; RBP4 was weakly induced in the context of IL6. Finally, signaling and APR proteins (calcyclin, Clara cell protein 10, and haptoglobin) were found to be less abundant in WT MHV-68 compared to the control ("suppressed by MHV68", Figure 3) but clearly not oxidative stress proteins.
Acute Phase and Oxidative Stress Gene Expression in the MHV-68 Infected Lung
To further investigate acute phase and oxidative stress responses in lung tissue, the expression of known host genes involved in these pathways was studied by quantitative real-time (qRT) PCR. Two mice for each condition were infected with WT MHV-68 or MHV68/IL6, or mock (DMEM) inoculated, and RNA was extracted from total lung homogenates for qRT-PCR. By seven d.p.i., APR/vitamin A transport genes RBP4 and TTR were upregulated approximately four- and two-fold, respectively, with significantly higher induction by MHV68/IL6 for RBP4 (Figure 4). MHV-68 infection also generally induces oxidative stress genes in the lung by seven d.p.i. (Figure 4), including genes encoding the lung antioxidant proteins Pdx6, EC-SOD, glutathione peroxidase 3 (Gpx3), and thioredoxin 1 (Trx1), as well as inducible nitric oxide synthase (iNOS), a pro-inflammatory protein that is capable of generating reactive oxygen species (ROS) and reactive nitrogen species (RNS) as a byproduct of production of the NO messenger [74].
Lytic MHV-68 Infection Induces Oxidative Stress in Cultured Fibroblasts
To investigate the autonomous contribution of the MHV-68 lytic phase to oxidative stress in infected cells, we studied the role of the MHV-68 lytic phase in the production of ROS in murine NIH3T3 fibroblasts and human lung epithelioid A549 cells. As a control, we treated uninfected NIH3T3 cells with hydrogen peroxide, and induction of ROS was evident from the oxidative green fluorescence of H2DF2DA (Figure 5A, upper panel). To examine whether ROS is induced by MHV-68 infection, sub-confluent NIH3T3 or A549 cells were infected with a recombinant MHV-68 virus expressing red fluorescent protein (RFP) from the ORF28 late locus, and then stained with H2DF2DA at 20 h.p.i. The majority of infected NIH3T3 or A549 cells (indicated by red fluorescence) exhibited moderate to bright green H2DF2DA oxidative fluorescence (Figure 5A, lower two panels), and roughly 15% of infected NIH3T3 cells exhibited bright H2DF2DA oxidative fluorescence, indicative of a high level of ROS, by 20 h.p.i. (Figure 5A). ROS can lead to the generation of peroxides such as H2O2, which are reduced by the multi-subunit catalase enzyme. Indeed, catalase enzyme (Figure 5B) and catalase activity (Figure 5C) were upregulated in NIH3T3 cells infected with WT MHV-68 (m.o.i. = 1) by 24 h.p.i. In contrast, only basal catalase activity was observed early during MHV-68 infection, at two h.p.i. and six h.p.i., analogous to simply adding excess H2O2 (Figure 5C). ROS did not accumulate by four h.p.i. even in a high titer infection (m.o.i. = 5), but did by 20 h.p.i. (Figure S3), suggesting that cytotoxicity associated with the late lytic cycle of virus infection is required for the generation of ROS. However, ROS induction itself seems to have little effect on MHV-68 infection; modulating cellular redox potential in infected NIH3T3 cells with sub-lethal doses of the oxidative stress inducer paraquat only weakly enhanced lytic expression of RFP/MHV-68, and quenching ROS with soluble glutathione had little discernible effect (Figure S3).
An Oxidative Stress Response Network Induced in Mouse Lungs by MHV-68 Infection
To gain a deeper understanding of the pathways induced in response to MHV-68 infection of the lung and co-expression of IL6, we used Ingenuity Pathways Analysis (IPA) to extract an oxidative stress and inflammatory response network centered on redox proteins identified in this study. While the number of proteins identified in our proteomics study (Table S1) was insufficient for de novo network discovery [60], analog curation using data from the Ingenuity KnowledgeBase and NCBI EntrezGene allowed synthesis of a model depicting regulatory interactions (i.e., activation, inhibition, etc.) among key molecules (Figure 6). A striking feature of the model is the multi-directional interaction between antioxidant proteins and transcriptional regulatory factors such as COX-2, NF-kB, and iNOS, all of which have been found to be important to gammaherpesvirus infections [25,52,75].
Discussion
We have undertaken a differential proteomics analysis of BAL fluid to gain insight into the pulmonary molecular pathology of respiratory virus infections in mice. As a tractable animal model of gammaherpesvirus infection, the pathogenesis and immune response to MHV-68 in the mouse lung has been a subject of considerable recent inquiry [16,[18][19][20][21]23,47,64]. In such studies, BAL fluid has been used to analyze immune cell infiltration, cytokine/chemokine profiles, and chemotaxis activity, for example [15,20]. We found molecules in BAL fluid that provide additional insight as virus-induced lung injury is resolved, uncovering molecular details (i.e., host factors and pathways) of the host response to a virus in a quantifiable manner. Proteins induced by nine d.p.i. included oxidative stress response proteins, acute phase proteins, signaling molecules and transporters (Table 1). Functional category analyses indicated that redox, acute phase, and vitamin A proteins were significantly enriched in the subset of BAL proteins we identified (Figure 3), suggesting that these processes are induced by nine d.p.i. in MHV-68 infection of the mouse lung.
Effects of Co-Expressing IL6
As suggested by experiments in IL6-deficient mice [30], IL6 may play a role in inflammation and the development of lung pathology during MHV-68 infection rather than impacting viral replication. Instead of comparing BAL proteins between WT and IL6-deficient mice, we took a different approach to examine the effects of IL6 by using an MHV68/IL6 virus that over-expresses this cytokine. Co-expression of IL6 in MHV-68 infection of the mouse lung resulted in no significant difference in either replication kinetics or whole-lung viral titers in comparison to the wild-type virus. However, MHV68/IL6 induced a subset of BAL proteins, including redox, acute phase, and vitamin A signaling/transport molecules (Figure 2D), as well as type I collagen (Figure 1B). IL6 gene expression is induced by NF-kB heterodimers, and IL6 in turn signals through an IL6 (CD126) receptor-gp130 co-receptor complex on a subset of B-cells to NF-IL6, a pro-inflammatory transcription factor [76,77]. In the lung, IL6 regulates natural killer (NK) cells responding to MHV-68 infection [30]. KSHV, a human gammaherpesvirus, encodes a viral IL6 homologue that can signal through the gp130 co-receptor found on a range of B-cells, independently of the cellular IL6 receptor [78]. The KSHV lytic transactivator replication and transcription activator (RTA) also activates the human IL6 promoter [79]. In addition, KSHV microRNAs (miRNAs) specifically induce IL6 and IL10 in macrophages [80]. Besides querying the effects of supranormal IL6 levels on infected lung pathophysiology, inclusion of the MHV68/IL6 virus in our BAL proteomics analysis allowed for the development of an analytical IEF/2DE method for differential protein discovery (Figure 2), demonstrating the potential utility of this approach for querying viral mutants.
Oxidative Stress Response Proteins Are Induced in MHV-68 Infection of the Mouse Lung
In the BAL proteome, host antioxidant and oxidative stress response proteins were upregulated by MHV-68 infection (Table 1), including Pdx6, EC-SOD, and a paralogue of GST (GSTm1). In whole lung tissue, genes encoding iNOS, extracellular glutathione peroxidase (Gpx3), and a thioredoxin (Trx1), were also induced ( Figure 4). Co-expression of IL6 in MHV-68 infection further upregulated EC-SOD ( Figure 2D), and induced another peroxiredoxin (Pdx2) and a thioredoxin paralogue (TXNL4B). Induction of antioxidant proteins suggests a pathophysiological response in the lungs to oxidative stress. Induction of oxidative stress has been found in experimental virus infections in vivo, including RSV in mice [43,81], and influenza virus infections in human epithelial cells, mice [82,83] and macaques [57]. Antioxidant proteins can protect lung tissue from oxidative damage, detoxify oxidized phospholipids, and reduce virus-associated ALI [83][84][85]. Oxidative stress induced in respiratory virus infections can also have pleiotropic effects on lung gene expression and inflammatory processes such as cytokine and chemokine production [57,82,86].
Sources of Oxidative Stress
We found that MHV-68 infection of cultured NIH3T3 fibroblasts or lung-derived A549 cells induces ROS and catalase activity (Figure 5). Similarly, cultured cells infected with respiratory syncytial virus (RSV), rhesus monkey rhadinovirus (RRV, another gammaherpesvirus), or HSV-1 show increased oxidative stress [43,87-89]. Two genes upregulated by MHV-68, COX-2 [52] and iNOS (Figure 4), are capable of directly generating ROS as reaction byproducts [74,90]. Lytic MHV-68 infection proceeds under conditions of oxidative stress, as we found that treating infected cells with paraquat did not inhibit but rather mildly enhanced RFP/MHV-68 virus infection (Figure S3). In vivo, mice treated with an NSAID targeting COX-2 showed no differences in MHV-68 titers compared with controls [52]. In contrast, cytotoxic T-lymphocyte (CTL) immune control of MHV-68 is impaired in mice deficient in iNOS, resulting in lethality [25]. While viral infection of type I and II lung epithelial cells likely contributes directly to the induction of oxidative stress, there may be other contributing factors in the alveolar microenvironment, such as degranulation of activated innate immune effector cells (alveolar macrophages, natural killer cells, and neutrophils). For example, the Ncf1/NADPH oxidase complex in neutrophils also significantly contributes to ROS and oxidation of phospholipids in lungs insulted with H5N1 highly pathogenic avian influenza (HPAI) [70].
Oxidative Damage to Surfactant Phospholipids
Pulmonary surfactant lipids and proteins have roles in antiviral defense and inflammatory and immune responses against respiratory viruses such as influenza A viruses, RSV, and adenovirus [91]. Conversely, oxidative damage to phospholipids is implicated in ALI caused by viruses such as HPAI, SARS-CoV [70], and RSV [81]. Oxidized phospholipids that accumulate in HPAI and SARS-CoV infections also likely contribute to ALI and hypercytokinemia ("cytokine storm") by signaling through toll-like receptor 4 (TLR4) and TRIF/TICAM1 in macrophages, activating NF-kB and inducing IL6 [70]. We found PLA2G12A, a secreted phospholipase A2 enzyme, highly upregulated in BAL from MHV-68 infected mice at nine d.p.i. (Table 1 and Figure 2D). Phospholipase A2 enzymes are involved in the degradation of damaged (oxidized) surfactant phospholipids including dipalmitoyl phosphatidylcholine (DPPC), a process often upregulated in lung injury. Phospholipase A2 is inhibited by abundant surfactant proteins, including surfactant protein A (SP-A) [92] and Clara cell protein 10 (CC10), which was downregulated in BAL from MHV-68 infection (Figure 2D). Another protein induced in BAL, Pdx6, may reduce oxidized phospholipids, including DPPC, that have been modified by ROS, allowing lipid recycling in type II epithelial cells or macrophages in the lungs [93-95]. Accordingly, surfactant protein expression is altered in chronic MHV-68 infection of interferon gamma receptor (IFNGR)-null mice that display a pathology reminiscent of IPF [15]. These findings suggest a role for lung surfactant lipids and lipid-associated proteins in the pathogenesis of MHV-68 in the lungs.
Comparison to other Respiratory Diseases and Role of Nrf2
In contrast to MHV-68 infection, expression of the antioxidant (oxidative stress response) proteins SOD1, GPx1, Pdx6, GSTmu1, and catalase was suppressed during RSV infection in the lungs of mice and human patients [43]. The antioxidant transcription factor Nrf2 was also suppressed in RSV infection, while Pdx2 was induced, as in MHV-68 infection. The importance of the Nrf2-mediated response was illustrated in knockout mice, whereby Nrf2 protected lung cells from bronchopulmonary injury by RSV and influenza A virus [81,82]. Interestingly, a close association between oxidative stress and pro-fibrotic inflammation in the lung, marked by elevated Nrf2 expression, is well-established in human patients with IPF and/or interstitial pneumonia [96,97]. Human Pdx2 in particular has also been found to be upregulated in UIP/IPF lung tissue, particularly in alveolar macrophages [98]. Thus, the role of Nrf2 in the induction of antioxidant defenses and pro-fibrotic pathophysiology of MHV-68 is in need of further investigation.
Modeling a Complex Relationship
A network of molecules intersects with these antioxidant proteins, including NF-kB, IL6, COX-2, iNOS, Nrf2, and small molecules, suggesting multiple points at which MHV-68 infection might generate an oxidative stress in the lungs ( Figure 6). For example, while it is suspected that NFκB can be activated by oxidative stress in viral infections [81,82], the relationship between NF-kB signaling and gammaherpesvirus pathogenesis is complex and poorly understood. It has been reported that NFκB is activated by MHV-68 lytic replication [63], while paradoxically, NF-kB activation can also inhibit the initiation of MHV-68 lytic replication [75]. Regulation of NF-kB is an enriched function among cellular proteins interacting with MHV-68 proteins [99], and the lytic protein ORF73 promotes ubiquitination and degradation of p65/RelA [100]. Likewise, inhibition of NF-kB leads to upregulation of ROS and reactivation of latent KSHV by activating expression of the lytic transactivator protein, RTA [101]. Moreover, experimental inhibition of NF-kB blocks chemokine responses and development of pulmonary fibrosis in the lung in MHV-68 infection [63]. Functional genomics studies may provide additional molecular insight into virus-host interactions controlling MHV-68 infection and induction of oxidative stress [99].
Acute Phase Response
Appearance of acute phase proteins in the serum is a hallmark of systemic inflammation resulting from infections, including bacterial sepsis, pneumonia, and human immunodeficiency virus (HIV-1) [102-105]. The release of acute phase proteins from liver hepatocytes or other tissues is dependent on IL6 and other cytokines [106,107]. The finding of acute phase proteins in BAL from MHV-68 infected mice indicates a systemic response to intranasal MHV-68 infection, or leakage of serum proteins into the pleural interstitial and alveolar lumen, consistent with previous findings of UIP pathology in MHV-68 infection [16]. We found acute phase-related proteins in BAL at nine d.p.i., including α1-antitrypsin (A1AT6), α2-macroglobulin (A2MP), α1-acid glycoprotein 1B (A1AG1/AGP), haptoglobin, and the vitamin A transport molecules CRABP2, TTR, and RBP4 (Table 1 and Figure 3). AGP is immunomodulatory, induced in experimental pulmonary tuberculosis [108] and influenza [72] in mice. A1AT6 is a protease inhibitor induced by MHV-68, while the endopeptidase haptoglobin is suppressed (Figure 2D); along with type I collagen accumulation (Figure 1B), an anti-proteolytic lung tissue remodeling environment is apparent. IL6 has also been suggested to enhance the production of acute phase response proteins in virus infections [106]. Indeed, the protease inhibitor A2MP is induced in MHV68/IL6 infection (Figure 2D), consistent with higher type I collagen deposition. Finally, vitamin A (retinoic acid, RA) is a signaling molecule carried in the serum as all-trans retinol by an RBP4 and TTR dimer complex [109]. Vitamin A inhibits HSV-1 [110] and KSHV [111] replication in cell culture. Vitamin A can activate the immune response to infection in the respiratory tract, for example, by enhancing Th2 responses and IgA secretion in influenza virus infection of mice [112]. In the lungs, vitamin A counteracts IL6 and protects against bleomycin-induced fibrotic lung injury [113,114]. The immunoregulatory function of vitamin A is not understood in MHV-68 infection.
Other Immunomodulatory Proteins in BAL Fluid
One gene encoding an immune modulator, Tnfaip8l2, was induced in BAL by MHV-68 infection by nine d.p.i. (Table S2). While LC/MS-MS peptide data identification of this protein was of marginal significance (p < 0.1), the gene encoding Tnfaip8l2 was also weakly upregulated in lung tissue by seven d.p.i. (Figure 4). TNFα-interacting protein 8 members, including Tnfaip8l2 (TIPE2), form a conserved gene family in humans and mice ( Figure S4) involved in immune homeostasis. TIPE2 downregulates inflammatory responses mediated by toll-like receptors (TLR), T-cell receptors (TCR), and NFκB signaling, which in turn promotes Fas-mediated apoptosis in lymphoid cells [115]. Except for Tnfaip8l2 and a weak match to annexin A5, we did not find other cell death regulators in BAL at nine d.p.i. Cell death in MHV-68 infection has been found to be mediated by CD8+ CTL in the lung [23], while a viral Bcl2 encoded in the MHV-68 genome blocks internal cell death mechanisms such as autophagy in infected cells [116].
Limitations of This Study
Our BAL processing protocol enriched for differentially-expressed proteins remaining after the reduction of abundant high-MW macromolecules, including albumin, immunoglobulins, and serum proteins ( Figure 1C). While this approach allowed us to resolve less-abundant proteins, it likely missed potentially interesting proteins associated with the removed macromolecules. We also did not detect a high diversity of cytokines in post-processed BAL, possibly because of their relatively low abundances, interactions with antibodies or albumin, or failure to isolate highly basic proteins in the purification schema. Variant protocols enriching different BAL fractions, or using different BAL solvents, and new mass spectrometry technologies are in development.
Experimental MHV-68 Infection of the Mouse as a Model for Lung Diseases
The induction of and responses to oxidative stress appear to be a common theme in the pathophysiology of interstitial lung diseases, including infections such as MHV-68 (this study and [15]), SARS-coronavirus [69], influenza A virus [70,82], RSV [43], and in chronic diseases such as COPD [117] and IPF [96]. Interestingly, a number of the gene products we identified as differentially regulated by MHV-68 infection in BAL were also found to be associated with these diseases (Table 1). Differential proteomics analysis of mouse BAL fluid opens a new window into understanding the pathogenesis of MHV-68 and other respiratory viruses, and MHV-68 models of chronic lung diseases such as IPF [15]. The proteins identified herein are potential biomarkers for pulmonary virus infections generating high levels of oxidative stress and aggravating other pathophysiological responses, such as acute phase ( Figure 3) and surfactant lipid damage. We propose continuing application of differential BAL proteomics in conjunction with whole-lung genomics and proteomics analyses [57,118] to integrate a systems understanding of immune responses and virus-induced pathological changes to the pleura.
Supplementary Materials:
The following are available online at http://www.mdpi.com/1999-4915/10/12/670/s1: Table S1: Proteomics identification of proteins in murine BAL fluid from MHV-68 infection; Table S2: RT-PCR primers used in this study; Figure S1: Analysis of viral and cellular components in BAL fluid; Figure S2: One of the LC/MS-MS spectra identifying mouse Pdx6; Figure S3: Dynamics of ROS generation in MHV-68 infection of cultured cells; Figure S4: Tnfaip8l2 is a member of a conserved gene family in human and mouse.
252566034 | pes2o/s2orc | v3-fos-license | Resistance profiles to antifungal agents in Candida albicans isolated from human oral cavities: systematic review and meta-analysis
Aim: The antifungal susceptibility profile of Candida spp. isolated from the human oral cavity was assessed with meta-analyses of observational studies that collected samples from the oral cavities of human subjects. Material and methods: Isolated Candida albicans were tested by E-test®, disk diffusion test, microdilution and macrodilution, Sensititre YeastOne, and/or FungiTest. Search strategies were conducted on the MEDLINE, Embase, CINAHL, Dentistry and Oral Sciences, Central, Scopus, and LILACS databases, and gray literature sources. Articles were initially screened by title and then by abstract. Articles that met the conditions for inclusion were read in full, followed by data extraction. A descriptive analysis was conducted of each study, and the data were tabulated. A first meta-analysis was conducted to assess resistance to the antifungals regardless of systemic comorbidities. An additional stratified analysis was conducted by systemic comorbidity groups for the outcome "resistance" to the antifungals. Results: When not grouping Candida albicans isolates by systemic conditions, the lowest resistance rates to the antifungals tested were observed for amphotericin B, nystatin, flucytosine, and caspofungin. In contrast, the highest resistance rates were observed for miconazole and econazole. There was a high degree of heterogeneity and low resistance in general in all analyses, except for the "several associated comorbidities" group, which had high resistance rates. Conclusions: Clinical C. albicans isolates had low antifungal resistance. Clinical relevance: The presence of concomitant systemic comorbidities appears to be an essential factor that should be considered when evaluating resistance to antifungals for oral isolates. Supplementary Information: The online version contains supplementary material available at 10.1007/s00784-022-04716-2.
Introduction
Oral candidiasis is a common fungal infection. In the majority of cases, these lesions are caused by the yeast Candida albicans [1]. Candida is an opportunistic microorganism and its growth increases in the presence of certain local and/or systemic factors [2]. The incidence of this microorganism increases as immune system function declines [3]. Individuals who have poor oral hygiene, xerostomia, removable dentures, human immunodeficiency virus (HIV) infection, or who have been exposed to radiotherapy of the head and neck are more susceptible to oral candidiasis. In 2019, Quindós et al. listed dysbiosis, poor oral hygiene, the anatomic changes linked to aging, dysplasia, smoking and excessive alcohol consumption, endocrine disorders, immunodeficiency in general (not only secondary to HIV, but also due to chemotherapy and neoplasms), and treatment with corticosteroids as potential facilitators of colonization by this microorganism. Other predisposing factors for oral candidiasis include malnutrition, malabsorption, and eating disorders. More specifically, it is said that a diet rich in carbohydrates contributes to the development of oral candidiasis. The following deficiencies have also been linked to increased risk: iron, zinc, magnesium, selenium, folic acid, and vitamins (A, B6, B12, and C) [4].
When treating oral candidiasis, it is first necessary to identify predisposing factors and, if present, treat them. After this initial intervention, the patient's immunological status, the specific characteristics of the oral candidiasis (clinical presentation, etiology, susceptibility to antifungals, location, dissemination), and the pharmacological characteristics of the available antifungals (administration, metabolism, clearance, interactions with other drugs, and toxicity) should all be considered [4]. Topical treatment (nystatin or miconazole) is the first choice for mild cases, which generally respond well to this approach [4]. Systemic treatment should be considered if there is fungal dissemination or resistance to topical treatment [5]. A systematic review and meta-analysis conducted by Fang et al. evaluated the efficacy of antifungal drugs against oral candidosis in randomized controlled trials. The authors concluded that itraconazole (capsules or oral solution), miconazole (tablets and oral gel), clotrimazole, fluconazole, ketoconazole, nystatin, and amphotericin B can significantly improve the mycological cure rate when compared with placebo. They also observed that fluconazole exhibited better results than the other antifungals tested [6].
A wide range of drugs is available for the treatment of oral candidiasis, and resistance profiles should be analyzed before making a treatment decision. There is a large body of literature reporting on resistance, especially to azoles. Resistance can occur via the following mechanisms: activation of efflux pumps; mutation of the ERG-11 gene; dysregulation of ERG-11 gene expression; and changes affecting the ergosterol biosynthesis pathway [7,8]. Recently, the ongoing COVID-19 pandemic has drawn attention to the emergence of fungal infections in critically ill, mechanically ventilated COVID-19 patients. Invasive fungal infections increase mortality among coronavirus patients who do not receive antifungal treatment, compared with those who do [9]; immediate diagnosis and treatment are therefore essential for clinical success. Non-albicans species appear to comprise the group of microorganisms most frequently involved in superinfection cases [10].
The objective of this systematic review and meta-analysis was to trace the antifungal resistance profile of C. albicans strains isolated from the oral cavity of human subjects.
Population, exposure, comparator, and outcomes (PECO) question
A systematic review was conducted according to the items specified on the PRISMA 2020 checklist (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) [11].
The review protocol was registered on the PROSPERO database (Protocol number CRD42020208245). The research question formulated using the PECO strategy was as follows, "What rates of resistance to antifungal agents are reported in studies that have isolated fungi of the genus Candida from the oral cavities of humans?" Data were collected on April 21, 2020.
Eligibility criteria
The study included observational studies that collected samples from the oral cavities of humans and isolated Candida albicans fungi (the PROSPERO protocol was altered: the original registration had stated the Candida genus) and conducted tests for susceptibility to the antifungal agents nystatin, amphotericin b, fluconazole, ketoconazole, miconazole, itraconazole, and others (this was another change to the protocol after registration on PROSPERO: the list of antifungals analyzed was expanded to match the findings of the studies reviewed) using the E-Test®, disk diffusion, and/or microdilution and macrodilution methods (this was another change to the protocol after registration on PROSPERO: the list of laboratory tests analyzed was expanded to match the findings of the studies reviewed). Descriptive literature reviews, letters to the editor, in situ studies, animal model studies, and studies undertaken with extracted teeth or with samples from removable dentures were excluded. Additionally, the authors of studies for which the full text was not available were contacted, but the study was excluded if the full text was not forthcoming.
Search strategy and information sources
Electronic searches were run on MEDLINE (via the PubMed search engine), Embase, CINAHL, Dentistry and Oral Sciences, Central, Scopus, and LILACS databases and in the grey literature, from database inception to April 20, 2020. No publication language filters were applied. Figure 1 illustrates the search strategy used for the MEDLINE database, via the PubMed search engine. The same strategy was also used for the other databases, modified as appropriate. Free or controlled vocabulary search terms were employed (MeSH/ TextWord) as appropriate for each database. The search strategies used on the other platforms are shown in the supplementary material for this study.
Study selection and data extraction
The Zotero 5.0.87 program was used to manage and organize databases constructed with the results of the database searches. The initial selection included many duplicate titles identified by the strategy, which were excluded from the analysis.
In stage 1, two independent reviewers (F. M. and S. Q. S. K.) selected articles by title and, in cases of doubt as to whether an article should be included, the abstract was read. In case of disagreement, a third examiner (T. S. D. P.) decided whether the article should be included. The kappa test was used to determine the agreement between reviewers in the initial evaluation of titles and abstracts (α = 5%; SPSS V. 18.0.0 software, SPSS Inc., Chicago, IL, USA). According to the agreement criterion suggested by Landis and Koch (1977) [12], kappa values < 0.40 represent reasonable agreement; values from 0.41 to 0.60 reflect moderate agreement; values from 0.61 to 0.80 demonstrate substantial agreement; and kappa values from 0.81 to 1.00 are considered indicative of excellent agreement.
In stage 2, all of the articles selected in stage 1 were analyzed to check that they met the inclusion criteria established in the study protocol. Those that did not were excluded.
In stage 3, all of the articles that had not been excluded after stage 2 were assessed for study quality against the inclusion and exclusion criteria and data were extracted. The entire selection process was conducted independently by two different examiners (S. Q. S. K. and P. M. L.). Any disagreements between examiners were adjudicated by a third evaluator (T. S. D. P.), independently, who decided whether the article would be included in the review or not, and proceeded to the next stage. Reasons for exclusion of studies in stage 2 were noted. The data extracted were input to a spreadsheet (article title, author, year of publication, objective, number of study participants, prior exposure to antifungal or antibacterial agents, mean age of study participants, presence and type of underlying disease or systemic comorbidities, presence or type of localized diseases, use of removable dentures, sample collection site, estimate for sample size calculation, number of C. albicans isolated, method employed to determine susceptibility to antifungal agents, method employed to determine identification of C. albicans, antifungal agents tested, and absolute and relative values for strains resistant to each antifungal agent). Data were individually extracted by two evaluators (P. M. L. and S. Q. S. K.). Disagreements between them were adjudicated by a third evaluator (T. S. D. P. or F. M.).
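For illustration, the Cohen's kappa statistic used to quantify reviewer agreement in stage 1 can be computed from a 2 × 2 cross-tabulation of the two reviewers' include/exclude decisions, as in the minimal R sketch below; the counts are hypothetical and do not reproduce the screening data of this review.

# Hypothetical screening decisions: rows = reviewer 1, columns = reviewer 2
tab <- matrix(c(300, 40,
                35, 2800),
              nrow = 2, byrow = TRUE,
              dimnames = list(reviewer1 = c("include", "exclude"),
                              reviewer2 = c("include", "exclude")))
po <- sum(diag(tab)) / sum(tab)                      # observed agreement
pe <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # agreement expected by chance
kappa <- (po - pe) / (1 - pe)                        # Cohen's kappa
c(percent_agreement = 100 * po, kappa = kappa)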
The percentage of resistant strains was calculated for all of the antifungals tested in each study. Groups of microorganisms with intermediate susceptibility profiles were defined as susceptible.
Meta-analysis
Meta-analyses were performed using the meta and metafor packages in RStudio (Version 1.4.1717 © 2009-2021). Analyses were conducted with the random effects model. Combined Candida albicans resistance rates were estimated as percentages ((number of resistant strains/total number of strains tested) × 100) with 95% confidence intervals. The generalized linear mixed model (GLMM) for proportions was used, following Schwarzer et al. [13], and the maximum likelihood method was used to estimate the between-study variance.
Resistance rate outcomes were calculated for each antifungal and illustrated with forest plots. The degree of heterogeneity was assessed with the I² and τ² statistics. Additionally, a subset analysis was conducted with data broken down by systemic comorbidities (acute lymphoblastic leukemia, diabetes, head and neck cancer, HIV/AIDS, cancer, more than one associated comorbidity, kidney disease, immunocompromise, oral cavity and respiratory tract infection, organ transplantation, and Candida bloodstream infections).
All of the studies were included in the meta-analysis, except for subgroups for which there was only one study per comorbidity (oral cavity and respiratory tract infection, acute lymphoblastic leukemia, and immunocompromise) and cases in which an antifungal was only tested for resistance once (luliconazole, lanoconazole, fluconazole and itraconazole combined, and miconazole, itraconazole, ketoconazole, and fluconazole combined).
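As a concrete illustration of the pooling approach described above, the following minimal R sketch shows how per-study resistance proportions could be combined with the meta package using logit-transformed proportions, a GLMM, and maximum-likelihood variance estimation; the three-study data frame is a hypothetical placeholder and is not drawn from the included studies.

library(meta)
# Hypothetical per-study counts: resistant strains (r) out of strains tested (n)
df <- data.frame(study = c("Study A", "Study B", "Study C"),
                 r = c(2, 0, 5),
                 n = c(40, 25, 60))
m <- metaprop(event = r, n = n, studlab = study, data = df,
              sm = "PLOGIT",      # logit-transformed proportions
              method = "GLMM",    # generalized linear mixed model
              method.tau = "ML")  # maximum-likelihood estimate of tau^2
summary(m)  # pooled proportion with 95% CI, plus I^2 and tau^2
forest(m)   # forest plot of study-level and pooled resistance rates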
Analysis of study quality and risk of bias
Risk of bias was not assessed because no instrument was found that was applicable to this type of study, since the outcome analyzed consists of the results of standardized laboratory tests performed on Candida albicans isolates collected from the oral cavities of human beings.
Results
The results of the search strategy are illustrated in Fig. 2. The final results of searches on Central, CINAHL, Dentistry and Oral Sciences, Embase, LILACS, PubMed, and Scopus yielded 92, 61, 178, 1399, 240, 785, and 1031 studies, respectively. None of the studies found in the gray literature was included. Some studies were indexed on more than one database, producing 390 duplicates. Application of the inclusion and exclusion criteria resulted in exclusion of 2713 articles during the title/abstract assessment phase and 158 articles when the full texts were read. The inter-rater reliability during study selection was moderate (k = 0.487 [CI 95% 0.424-0.551]; P < 0.0001, percentage agreement = 93.2%). The studies excluded during this stage were those that did not test C. albicans resistance/susceptibility; did not classify results by C. albicans collection sites; that only tested natural extracts; that extracted C. albicans samples from dentures; that did not isolate C. albicans from the oral cavity; that only tested 1 strain of C. albicans; that did not define the number of C. albicans strains; that obtained C. albicans samples from dental abscesses or root canals; that evaluated the anti-cariogenic effect of the drugs tested; that did not report the C. albicans resistance rates; that combined resistant C. albicans strains with those with intermediate resistance; that did not divide groups of healthy patients from those with systemic comorbidities; that tested subtherapeutic doses of the antifungals; that selected certain strains of C. albicans by convenience for laboratory tests other than susceptibility, according to the study objective; and that did not exhibit the results clearly and combined results for C. albicans strains with those for other species.
The authors were contacted at two points during the study. Contact details for authors of 33 articles for which the full text could not be accessed were obtained by searching the internet and used to request the full text. Only 2 of these authors replied. When there was any doubt with relation to the method used to identify Candida albicans in the studies, requests for clarification were sent by email. Requests were sent to 8 researchers, only 3 of whom replied. A total of 88 articles were included in the analysis of resistance. For the meta-analysis, groups of participants from each study broken down by their comorbidities were analyzed when there was also a control group.
Brazil was the country in which the highest number of studies was conducted (29 studies), followed by Iran (6 studies) and the USA (4 studies). The results for year of publication revealed a very wide date range (1984 to 2020). Data extracted from the studies enabled analysis of 27 groups of samples from patients without systemic comorbidities and 90 groups of samples from patients with associated systemic comorbidities. It is important to point out that a given study may have been counted more than once if it analyzed a control group and a group with comorbidities or administered different treatments to different groups.
The antifungals tested appeared in the following order of frequency: fluconazole (101 groups of samples), amphotericin B (56 groups of samples), and ketoconazole. The most frequently applied tests of antifungal susceptibility used in isolation were as follows: microdilution or macrodilution tests, 76; E-tests®, 15; and disk diffusion tests, 14. Two studies employed more than one analytical method, but separately, i.e., certain antifungals were tested with one method and others with another. Studies that employed more than one test of susceptibility for the same strains and did not observe or report agreement between results were excluded. Some alternative tests were also used, such as Sensititre YeastOne and FungiTest (1 study each). Table 1 lists overall rates of Candida albicans resistance to the antifungals studied. In general, there was a high degree of heterogeneity (illustrated by I²). The lowest rates of resistance observed in the analysis of the antifungals tested, regardless of the presence of systemic conditions, were for amphotericin B, followed by nystatin, flucytosine, and caspofungin. In contrast, the highest rates of resistance were observed for miconazole and econazole. Table 2 lists rates of Candida albicans resistance to antifungals, by presence or absence of systemic comorbidities. Once more, there is a high level of heterogeneity (illustrated by I²) in these analyses.
In the "HIV" group, the lowest rates of resistance were observed for amphotericin B, caspofungin, and nystatin, followed by ketoconazole. The highest rates of resistance in the same group were for itraconazole and fluconazole. In the "diabetes" group, the antifungal with the lowest resistance was flucytosine and the highest resistance was to itraconazole. In the "head and neck cancer" group, the lowest resistance was found for amphotericin B and the highest resistance was to fluconazole. In the group "cancer, multiple sites," the lowest rates of resistance were observed to amphotericin B, nystatin, and caspofungin, and the highest resistance was observed to econazole and miconazole. Only 3 antifungals were tested for the "organ transplantation" group. Resistance to amphotericin B and micafungin was lowest and resistance to fluconazole was highest. In the "kidney disorders" group, there was no resistance reported to any of the antifungals tested (amphotericin B, fluconazole, itraconazole, and voriconazole).
In the "candidemia" group, no resistance to fluconazole was detected but there were strains resistant to voriconazole. In the "several comorbidities" group, high rates of resistance were observed to all of the antifungals tested: ketoconazole, fluconazole, and itraconazole exhibited ascending rates of resistance in that order. Finally, rates of resistance were low in the "no systemic comorbidities reported" group, with the lowest rates observed for amphotericin B, flucytosine, and fluconazole and the highest rates for ketoconazole and a combination of miconazole with itraconazole. Figures illustrating the meta-analyses for all of the antifungals studied and for all of the associated comorbidities are presented in full as part of the supplementary material to this article.
Discussion
Treatment of oral candidiasis requires consideration of predisposing factors, the severity of clinical status, and the patient's systemic complications, in addition to requiring pharmacological knowledge about available antifungals in order to define the type of treatment to be adopted, whether topical or systemic [4]. It is necessary to isolate strains from patients and monitor their profile of susceptibility to the antifungal agents available and compile these results by conducting systematic reviews and meta-analyses. The objective of the present study was to determine rates of Candida albicans resistance reported by observational studies that isolated these microorganisms from the oral cavities of humans and tested their susceptibility to antifungal agents using laboratory methods. The study also considered the presence of systemic conditions that could modulate this outcome. Topical and systemic treatments were not differentiated, to ensure the clarity of the resistance results.
In view of the volume of data retrieved from the literature by the original search strategy, it was decided to assess the susceptibility of the species Candida albicans only, rather than all Candida species, as had been proposed in the original protocol registered on the PROSPERO database. Different Candida species exhibit varying degrees of susceptibility to the antifungal agents most commonly administered in clinical practice. For example, while C. krusei is intrinsically resistant to fluconazole, C. glabrata exhibits reduced dose-dependent susceptibility compared with other species of Candida [14]. Moreover, Candida albicans accounts for the majority of isolates from samples from oral cavity infections. In 2016, Hertel et al. collected 958 samples from patients, in which C. albicans was the most prevalent species, accounting for 76.8% of isolates [15]. Wright and colleagues corroborated this finding: Candida albicans was clearly and significantly the microorganism with the greatest colonization density when compared to the other species isolated in the study (which included C. glabrata, C. famata, C. parapsilosis, C. krusei, and C. tropicalis, among others) [16]. In 2017, Lewis and Williams also confirmed that C. albicans is the pathogen most frequently isolated from human oral cavity specimens, present in 80% of samples and the most often identified in both health and disease [17]. The results of the searches for sources related to C. albicans conducted for the present study returned a total of 2713 studies for preliminary analysis (title/abstract), which confirms the relevance of studying the C. albicans resistance profile. Although it is indispensable to extend research to other species of Candida, the volume of data produced could make interpretation difficult, since many different species of the genus Candida can be found in the oral cavity and would be tested against many different antifungals (19 in total, including combinations of antifungals) in patients with/without associated comorbidities (11 in total). Future studies will therefore be conducted to analyze these data.
The microdilution method is considered the gold standard for assessing fungal susceptibility [18]. In the present systematic review, there was no standardization of the methods used to assess susceptibility. The studies analyzed used microdilution or macrodilution, E-test®, disk diffusion, Sensititre YeastOne, and FungiTest, in addition to comparing tests against each other. In 2002, Silva et al. compared the broth macrodilution and E-test® methods by determining the minimum inhibitory concentrations (MICs) of four antifungal agents for 59 clinical isolates from the oral cavities of patients with AIDS and an initial diagnosis of candidiasis [19]. These authors observed agreement between methods for C. albicans, in contrast with other species assessed in the study, for which agreement was lower, such as itraconazole for C. krusei (66.7%) and fluconazole, ketoconazole, and amphotericin B for C. tropicalis (75%) [19]. The E-test® has been suggested as an alternative to the broth dilution method established by the Clinical and Laboratory Standards Institute (CLSI) because of its greater practicality. In 1995, Wanger et al. confirmed that the E-test® is equivalent to the method proposed by the CLSI for testing the susceptibility of yeasts and has superior capacity for detecting resistance to amphotericin B [20]. In 2012, Junior et al. compared the disk diffusion method with the method proposed by the CLSI and observed that agreement between the methodologies exceeded 97%, albeit with a limited number of strains. These authors argue that the disk diffusion method can be employed within the laboratory routine, because it is inexpensive and is easier to conduct than macrodilution and microdilution tests, although it does not provide individual MIC values for each strain [21]. Cutoff points for Candida albicans have not been defined for the antifungals miconazole and ketoconazole, so studies assessing these drugs base their results on cutoff points adopted in epidemiological studies. Therefore, since this review included studies that employed different methods of susceptibility analysis, the outcome was defined as the numeric relative frequency of resistance as reported by the researchers, and crude MIC data for each antifungal agent were not employed.
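Because the pooled outcome is the relative frequency of resistance rather than raw MICs, each primary study first converts MIC measurements into susceptible/resistant calls against a breakpoint. The sketch below illustrates that conversion; the MIC values and the breakpoint are hypothetical placeholders, not CLSI breakpoints or data from the included studies.

```python
# Classify isolates from MICs and report the relative frequency of resistance.
# Both the MIC list and the breakpoint below are hypothetical placeholders,
# not CLSI breakpoints or data extracted from the included studies.
mics_ug_per_ml = [0.25, 0.5, 1.0, 64.0, 2.0, 8.0, 0.125, 32.0]
resistance_breakpoint = 8.0   # "resistant" if MIC >= breakpoint (illustrative rule)

n_resistant = sum(1 for mic in mics_ug_per_ml if mic >= resistance_breakpoint)
relative_frequency = n_resistant / len(mics_ug_per_ml)

print(f"{n_resistant}/{len(mics_ug_per_ml)} isolates resistant "
      f"({100 * relative_frequency:.1f}%)")
```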
Certain aspects that limited extraction of data for the systematic review and their inclusion in the subsequent meta-analysis should be considered. Silva et al. (2002) compared the E-test® and broth macrodilution methods to test the susceptibility of oral C. albicans isolates to a range of antifungals [19]. Only the results for resistance to itraconazole achieved agreement between the results of both tests and were included in the meta-analysis. The results for fluconazole differed for the same strains when different tests were used and were therefore excluded. The data on resistance to amphotericin B and ketoconazole were not presented clearly, introducing doubt, and were also excluded. Kostiala and Kostiala (1984) investigated resistance of C. albicans isolated from the oral cavity to the antifungals amphotericin B, nystatin, clotrimazole, ketoconazole, miconazole, and econazole using broth microdilution [22]. They also conducted susceptibility tests for the same isolate to flucytosine using microdilution and disk diffusion. Since it was impossible to ascertain whether the results were duplicated, it was decided to exclude the data for this agent from the meta-analysis.
The quality of the studies included was not evaluated because there is no validated instrument for assessing the quality of observational studies that considers the specific aspects involved in studies with clinical and laboratory components, specifically those related to microbiology. According to the STROBE document's recommendations on how to correctly report observational studies, it is important to calculate the sample size and report it in the methodology [23]. Unfortunately, these data were not reported in the majority of the articles included since the analyses were based on laboratory results. STROBE also recommends that the characteristics of participants should be described (demographic, clinical, and/or social variables). This item was also omitted in many of the studies included in the systematic review and meta-analysis. Several studies merely stated that the samples were from the oral cavities of humans, without specifying any participant characteristics.
Oral candidiasis is related to impaired host immunity, and C. albicans, a fungal species highly abundant in the oral cavity, is the species most frequently implicated in oral candidiasis, which justified the exclusion of other species. In a literature review published in 2020, Bhattacharya et al. discussed the molecular mechanisms of action of a number of antifungals and the mechanisms of resistance of Candida. With relation to the antifungals studied, these authors listed two important drug classes used to treat candidiasis: azoles and polyenes. Azoles are more frequently administered to treat Candida infections. They target the enzyme 14α-demethylase (Erg11p), which is important in the biosynthesis of ergosterol, the principal sterol component of fungal cell membranes. Polyenes also target ergosterol, in the plasma membrane, and are fungicidal. These authors explain that resistance to azoles is an emerging problem that causes therapeutic failure and is the result of several different mechanisms, such as overexpression of membrane transporters, altered ergosterol biosynthesis, altered sterol import, genome plasticity, and altered azole import. They also comment on resistance to other drugs [24].
In 2019, Prasad et al. also described other mechanisms of C. albicans resistance which, they argue, are new survival strategies developed by the microorganism that have been discovered in recent years. In their literature review, they report that these microorganisms evolved to respond to a range of environmental stresses (thermal, oxidative, osmotic, changes to pH, and nutrient limitations) [25]. The frequency with which they acquire resistance varies according to the class of antifungal. For example, in 2013, Vincent et al. reported that resistance to polyenes is extremely rare because of the fitness costs associated with the development of resistance [26]. In contrast, in 2005, Anderson claimed that resistance to azoles is much more prevalent because of their fungistatic nature, which results in powerful selection of surviving populations [27]. In the analysis ignoring systemic conditions conducted in the present study, amphotericin B and nystatin exhibited the lowest rates of resistance. This result confirms the position of Vincent et al., since both drugs are polyenes. In turn, the highest rates of resistance were for agents in the azole class (econazole and miconazole), which agrees with Anderson. However, the fact that there were high rates of resistance to econazole does not have major clinical implications, since this drug is not often used to treat oral candidiasis; rather, it is prescribed for dermatological disorders [28]. In contrast, miconazole is often administered for topical treatment of oral candidiasis [3]. In 2012, Vasquez and Sobel pointed out that this drug had been used to treat superficial fungal infections safely and effectively for approximately 40 years [29].
Considering the antifungals tested, it is important to point out that flucytosine is not used in any of the oral candidiasis treatment protocols, which means that the data related to its resistance profile are irrelevant to clinical applications. In addition, the results of the present study showed that amphotericin B was the antifungal with the lowest rates of in vitro resistance among oral isolates of Candida albicans. The first-choice route of amphotericin B administration is intravenous. Amphotericin B is almost entirely insoluble in water and has a high molecular weight. These characteristics result in low gastrointestinal permeability and instability in the stomach, contributing to its low bioavailability when orally administered [30,31]. It has a broad spectrum of action and good activity against Candida species, although a few non-albicans Candida samples may be resistant [32]. The adverse effects of intravenously administered amphotericin B include vascular, respiratory, thoracic, mediastinal, renal, and urinary disorders [33]. Xiao et al. (2022) evaluated the effectiveness of topical application of antifungals commonly used in treating oral candidiasis in a systematic review and meta-analysis. Among the studies included, the topical formulations of amphotericin B analyzed comprised oral suspension (0.5 g, three times a day, for 14 days) and lozenges (10 mg, four times a day, for 30 days). Fluconazole and amphotericin B demonstrated similar results in clinical response, mycological cure, the incidence of adverse reactions, and relapse rates. The authors also indicated that the results might be influenced by the reduced number of studies included, the differences in patient age, the dosage, the course, and the frequency of drug administration [34]. Drew and Perfect emphasized that published data regarding the administration of antifungals by alternative routes are scarce and restricted to uncontrolled case reports or studies with small sample sizes [35]. Fichtenbaum et al. reported that in a group of patients with HIV infection or CDC-defined AIDS, amphotericin B oral suspension had limited efficacy for treating fluconazole-refractory oral candidiasis. Despite the low in vitro resistance rates in Candida albicans oral isolates, amphotericin B would not be the first choice to treat oral candidiasis, especially through alternative routes [36].
Regardless of the drug class employed to treat candidiasis, knowledge of the mechanisms of resistance to antifungals, and an understanding of resistance as an evolving problem, is a prerequisite for dealing with it and accelerating the development of new therapeutic strategies [37]. The present study also calculated rates of resistance by subsets, which has not been described in the literature previously. These subsets were formed based on the studies' reporting of systemic conditions affecting the patients from whom their samples were isolated. The literature suggests that oral candidiasis is associated with the use of removable dentures, based on a series of factors, such as poor hygiene, advanced age, polypharmacy, and impaired host immunity [38,39]. This is not a systemic factor but a local one. Data related to the use of removable dentures were collected and compiled in tables but were not treated as inclusion or exclusion criteria. The decision was taken to limit the bibliographic review to studies that collected samples from the oral cavity, excluding those that had tested isolates from removable dentures. There is a possibility that C. albicans could undergo phenotypic changes due to nutritional limitations and especially due to the formation of biofilms [40,41]. Moreover, cleaning dentures with a toothbrush has been shown to be effective for reducing palate inflammation and preventing and reducing infection by Candida [42].
With regard to systemic factors, Samaranayake et al. found that oral candidiasis associated with AIDS is often reported before the first manifestations of AIDS in the patient [43]. In 2014, Garcia-Cuesta et al. listed the following systemic predisposing factors: hormonal disorders, immunological disorders, endocrine disorders, psychological disorders, xerostomia, drug treatments, and alcohol consumption [44]. Thompson et al. also reported that oral candidiasis is one of the most common clinical complications in patients with HIV and can be observed in up to 90% of patients with this systemic condition [45].
In 2019, Quindós et al. reported that colonization by Candida occurs from birth and is greater at extreme ages (babies, children, and the elderly). Among adults, colonization is facilitated by the use of removable dentures, on which difficult-to-eradicate biofilms form, or by the presence of oral changes such as xerostomia, leukoplakia, and oral lichen. They also confirmed that colonization is greater among patients who are given certain medications, such as antibiotics, corticoids, or chemotherapy, and in diabetic patients, hospitalized patients, and people infected by HIV [4].
Systemic conditions are directly linked to the proliferation of Candida and the development of candidiasis. This occurs because Candida is an opportunistic microorganism [2]. In the present study, a series of different comorbidities were analyzed. The subset that exhibited the highest rates of resistance was the subset with several associated comorbidities. In contrast, resistance rates were low in the group with "no systemic comorbidities reported." Among the other subsets, the one with the highest rate of resistance was "cancer, multiple sites," with resistance to econazole. It was not possible to perform exact comparisons between comorbidities because there was no standardization between the antifungals assessed in the different studies. Therefore, the presence of concomitant systemic comorbidities appears to be an important factor to take into consideration when assessing resistance to antifungals in patients.
Conclusion
This systematic review has shown that the majority of the drugs available are effective for treatment of oral lesions caused by C. albicans. It suggests that nystatin may be the topical treatment of choice if systemic comorbidities can be ruled out, since it was the antifungal with the lowest rates of resistance. For cases of disseminated candidiasis and/or in patients in whom topical treatment has been ineffective, amphotericin B would be the recommended antifungal to be used via the intravenous route. The presence of concomitant systemic comorbidities appears to be an important factor that should be considered when evaluating resistance to antifungals. The subset that exhibited the highest rates of resistance, regardless of the antifungal tested, comprised people with a range of different health issues. In these cases, combinations of antifungals should be considered. The resistance assessment test most used in previous studies was microdilution (the gold standard), confirming its importance at the laboratory level. Compilation and analysis of published data by meta-analysis enables healthcare professionals to choose medications based on robust scientific evidence. Regardless, it is the responsibility of the prescribing professional to assess each case individually. The recommendations on drug selection suggested in this paper are based entirely on microbiological aspects and do not consider other important individual aspects that must be taken into account when making prescribing decisions. | 2022-09-29T06:17:33.835Z | 2022-09-27T00:00:00.000 | {
"year": 2022,
"sha1": "5f49fcd809af84c9c92fdf1b7a70ca37b30d3f74",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00784-022-04716-2.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "818fa168754aa51b876227cdbe9cb6a297416d2a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249821767 | pes2o/s2orc | v3-fos-license | Tuberculosis Aortitis and Mycotic Pseudo-aneurysm of the Infra-renal Aorta after Intravesicular BCG Therapy
ABSTRACT We report a patient who presented with rapidly expanding, symptomatic tuberculous aortitis and a mycotic pseudo-aneurysm of the infra-renal aorta after intravesical BCG chemotherapy for bladder cancer, which required emergency open aneurysm repair. His case highlights this rare complication of intravesical BCG treatment: haematogenous seeding causing tuberculous aortitis and mycotic pseudo-aneurysm formation of the infra-renal aorta. It also illustrates successful treatment with emergency open surgery, local debridement of the mycotic pseudoaneurysm, in-situ surgical reconstruction using a custom bovine-wrap interposition graft to create a neo-aorta, and multi-agent anti-tuberculous chemotherapy.
CASE REPORT
A 65-year-old man presented to the Emergency Department with a six-week history of generalised lower abdominal pain radiating posteriorly to his sacroiliac joint. He had a past history of bladder cancer (grade 2 pT1 transitional cell carcinoma) and had undergone planned curative Trans-Urethral Resection of Bladder Tumour (TURBT) surgery with adjuvant intravesical BCG treatment, receiving 2 cycles of treatment over the previous year. A recognised but rare complication of intravesical BCG treatment is the development of iatrogenic tuberculosis (TB).
CT Abdomen and Pelvis with contrast was performed in the Emergency Department on the day of presentation, which showed a large thick-walled saccular pseudo-aneurysm of the distal infra-renal abdominal aorta with radiological appearances consistent with a mycotic pseudo-aneurysm and signs of impending rupture. The clinical working diagnosis was tuberculous aortitis with mycotic pseudoaneurysm formation of the infra-renal aorta. Within 18 hours, the patient underwent emergency open surgery, with local debridement of the mycotic pseudoaneurysm and para-aortic tissue and in-situ surgical reconstruction using a custom bovine-pericardial-wrap interposition tube-graft to create a neo-aorta (Figure 2), with no intra-operative complications. Aortic tissue and fluid samples were taken intra-operatively and sent for microbiological investigations. The patient was treated in conjunction with an infectious diseases specialist, initially with empirical antibiotic therapy.
Microbiology results from both blood culture samples and intra-operative samples confirmed tuberculosis (TB), with acid-fast bacilli seen on tissue sample processing. The Infectious Diseases specialist commenced the combination chemotherapy regime Voractiv (rifampicin, isoniazid, pyrazinamide and ethambutol). On the presenting CT, signs of impending rupture had been evident from a concerning defect in the anterior wall of the aorta, alongside localised reactive changes of periaortic fat stranding and para-aortic lymphadenopathy. The patient made an uncomplicated recovery from surgery and was discharged home following a 17-day stay in hospital.
DISCUSSION
Currently, for high-risk, non-muscle-invasive bladder cancers (NMIBC), defined by NICE Guidance 1 , following a transurethral resection of a bladder tumour (TURBT) patients are offered the choice of intravesical BCG therapy 2 or a radical cystectomy. In this case, the patient opted for intravesical BCG therapy. In addition to the BCG therapy, the patient received a single dose of mitomycin C, given within 24 hours of resection, which is the standard of care and reduces recurrence risk by 40-50% with a very low incidence of adverse events 3 . BCG therapy reduces the likelihood of recurrence and progression of bladder tumours that have been managed with transurethral resection 4 . It is thought that the anti-tumour effect of BCG therapy results from a T-cell-mediated inflammatory reaction. Commonly, following this therapy, patients can experience symptoms such as dysuria, increased urinary frequency and fevers.
The incidence of mycotic aortic aneurysms is low, with 0.9%-1.3% of aneurysms resulting from an infective cause 5 . The most commonly associated pathogens are Staphylococcus aureus (most common), Streptococcus spp., Salmonella and Escherichia coli. The incidence of mycotic aneurysms as a result of intravesical BCG therapy is therefore very rare indeed; upon review of the published literature, we found a total of only 35 cases dating back to 1988, including this patient. The interval between final instillation and first presentation of infection in our patient was just over a year, which compared to similar cases is a relatively short duration, the median in these cases being 17 months 6 .
Our patient's only presenting symptom was abdominal discomfort of six weeks' duration; he did report weight loss but stated that this was intentional. The most common presenting symptoms of mycotic aneurysms are pain (79%) and fever (48%), and whilst pain was a major feature in our case, fever was not reported and the patient was apyrexic on admission.
It is thought that there are a number of ways that the causative Tuberculous Bacilli can reach the aortic wall in order to cause the initial infection and aortitis. A risk factor for complications of BCG therapy is commencing the treatment temporally close to the surgical bladder wall trauma after TURBT. It is widely recommended that BCG therapy should not start for several weeks following procedures that could damage the bladder and grant bacteria access to the blood stream 7 . Once bacilli are in the blood stream, and are haematologically spread, they may infect the aorta by entering through damage to the vessel wall secondary to atherosclerosis, or through the vasa vasorum and invading the adventitia or media. Alternatively, the bacilli can infect the vessel by means of a contiguous focus such as a lymph node or paraspinal abscess 8 .
Other than positive blood cultures and aortic tissue taken intra-operatively which grew acid-fast bacilli, there was no evidence of systemic TB in this case. Treatment by open surgery remains the gold standard for mycotic pseudoaneurysm of the infra-renal aorta, as it offers an opportunity for surgical debridement of infected tissue and tissue sampling for microbiological assessment with confirmation of the diagnosis. Where possible, anatomical in-situ reconstruction of the aorta is preferred, using autologous vein or, as in this case, a custom bovine-pericardium wrap fashioned into a tube-graft, which is remarkably versatile and resistant to infection. When in-situ reconstruction is not feasible, an alternative approach is aortic ligation and extra-anatomic bypass by axillo-bi-femoral bypass grafting. Endovascular repair with a covered stent-graft can offer a temporary solution, particularly in extremis or in ruptured aneurysms, but infection typically persists and, despite long-term antimicrobial therapy, the majority suffer late infective complications and death.
This case highlights a rare complication of intravesical BCG therapy for bladder cancer: the development of a life-threatening tuberculous aortitis and rapidly expanding pseudoaneurysm of the infra-renal abdominal aorta. Untreated, rupture and death would have resulted, but we are pleased to report that after open surgical repair and adjuvant combination anti-tuberculous chemotherapy this man made an uncomplicated recovery. It is important for clinicians to consider the possibility of a mycotic aneurysm in patients with a history of bladder cancer managed with BCG therapy. This allows relevant microbiological investigations to be conducted so that a positive diagnosis can facilitate emergency surgical treatment complemented by targeted antimicrobial therapy for a successful outcome. | 2022-06-19T05:12:34.238Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "5c698609b7089816f603bdf030c0cd2def8c122c",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5c698609b7089816f603bdf030c0cd2def8c122c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3307266 | pes2o/s2orc | v3-fos-license | Docking, thermodynamics and molecular dynamics (MD) studies of a non-canonical protease inhibitor, MP-4, from Mucuna pruriens
Sequence and structural homology suggests that the MP-4 protein from Mucuna pruriens belongs to the Kunitz-type protease inhibitor family. However, biochemical assays showed that this protein is a poor inhibitor of trypsin. To understand the basis of the observed poor inhibition, thermodynamics and molecular dynamics (MD) simulation studies on binding of MP-4 to trypsin were carried out. Molecular dynamics simulations revealed that temperature influences the spectrum of conformations adopted by the loop regions in the MP-4 structure. At an optimal temperature, MP-4 achieves maximal binding, while above and below the optimum temperature its functional activity is hampered due to unfavourable flexibility and relative rigidity, respectively. The low activity at normal temperature is due to the widening of the conformational spectrum of the Reactive Site Loop (RSL), which reduces the probability of formation of stabilizing contacts with trypsin. The unique sequence of the RSL enhances flexibility at ambient temperature and thus reduces its ability to inhibit trypsin. This study shows that temperature influences the function of a protein through modulation of the structure of the functional domain of the protein. Modulation of function through the appearance of new sequences that are more sensitive to temperature may be a general strategy for the evolution of new proteins.
Results
Effect of physicochemical parameters on stability of purified MP-4. MP-4 was found to be a dominant protein in Mucuna pruriens seed proteome and was purified from 60% ammonium sulfate fractionation using gel filtration chromatography (Sephacryl-200 column) (Fig. 1a). Further, purity of MP-4 was ensured through analysis of 12% SDS-PAGE (Fig. 1a inset). This monomeric protein was further analysed for its stability.
The effect of temperature and pH on MP-4 was analysed through circular dichroism (CD) spectroscopy. The data showed that MP-4 is stable from 20 °C to 60 °C, as no significant changes in the spectra were observed, while beyond 60 °C a minor change was seen in the spectrum (Fig. 1b). The spectral peak started shifting towards the 220 nm wavelength, indicating the initiation of unfolding of the protein.
Various suitable buffers were used in order to check the effect of pH on the stability of MP-4. Even at high pH (pH 10), no significant change in molar ellipticity values was observed (Fig. 1c). On the contrary, a marked change in the spectrum was seen at pH 3. This implies that the structural transition had started between pH 5 and pH 3. Interestingly, the protein precipitated on lowering the ionic strength. These analyses indicated that MP-4 is stable across a wide range of temperature and pH conditions.
Kinetics and thermodynamics of MP-4 trypsin binding.
Having established the role of temperature and pH on the stability of MP-4, kinetics and thermodynamics were studied to analyse optimum binding between MP-4 and trypsin. Binding experiments were carried out on a CM5 SPR chip from 15 °C to 35 °C at intervals of 5 °C. Kinetic rate constants and changes in free energy at equilibrium were calculated over the same temperature range. A typical sensorgram is shown in Fig. 2a; results from 1:1 two-state fitted models are tabulated in Table 1 and shown in Fig. 2b. At 25 °C, MP-4 binds to trypsin with k_ass 13.8 × 10⁴ M⁻¹ s⁻¹ and k_diss 65.3 × 10⁻² s⁻¹, resulting in 4.73 μM affinity with negligible error in the fitting, as reflected by the sensorgram and χ². It was observed that upon increase of temperature from 15 °C to 35 °C, there was a 4-fold increase in the association rate constant and an approximately 2-fold increase in k_diss, which resulted in a decrease of the equilibrium dissociation constant (K_D) for trypsin to roughly 0.45 of its value (Table 1). The temperature sensitivity of MP-4 binding for trypsin was further examined. Towards this purpose, the enthalpy and entropy contributions to the free energy of the association and dissociation phases, as well as the free energy at equilibrium, were calculated through SPR experiments. The increase in temperature from 15 °C to 35 °C was accompanied by a decrease in K_D from 17.76 μM to 8.01 μM for trypsin. The data also indicate that the increases in k_ass and k_diss are mainly due to the increase in temperature. K_D versus temperature and the natural log of K_D versus the inverse of temperature for trypsin are shown in Fig. 2c, and values are tabulated in Supplementary Table S1. These values were further used in calculating the changes in enthalpy (∆H), entropy (T∆S) and Gibbs free energy (∆G) during the association and dissociation phases (Supplementary Table S1). The relative contributions of these modules signify the potential modes of binding. Supplementary Table S2 illustrates the change in Gibbs free energy at equilibrium upon binding of trypsin as a function of temperature. The contributions of enthalpy and entropy changes to the Gibbs free energy for the association and dissociation phases, as well as at equilibrium, were estimated through Arrhenius analysis (natural log of K_D versus inverse of temperature) (Fig. 2c and d) 19 . This indicates the absence of steric hindrance at the interaction sites. While the change in entropy (T∆S) over the range of temperatures is not very significant and is positive for this complex, it is possible that solvent molecules at the interface were squeezed out during complex formation (Supplementary Table S2). It is remarkable to note that, at all temperatures, entropy is highly favourable, more than compensating for the marginally unfavourable enthalpy. This suggests that binding can primarily be driven by hydrophobic interactions involving the nonpolar groups.
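The conversion from measured K_D values to the equilibrium free energy, enthalpy and entropy summarized above follows standard relations. The sketch below applies them to hypothetical K_D values (not the measured ones) using the linear van't Hoff approximation and ignoring the heat-capacity correction used in the full analysis.

```python
import numpy as np

R = 8.314  # universal gas constant, J mol^-1 K^-1

# Hypothetical K_D values (mol/L) at several temperatures (K); illustrative only.
temps = np.array([288.15, 293.15, 298.15, 303.15, 308.15])
kd = np.array([17.8e-6, 13.5e-6, 10.2e-6, 9.0e-6, 8.0e-6])

# Free energy of association at each temperature (1 M standard state);
# the dissociation free energy has the opposite sign.
dG = R * temps * np.log(kd / 1.0)            # J/mol, negative for micromolar K_D

# Linear van't Hoff fit: ln(K_D) = (dH/R)*(1/T) - dS/R (constant-enthalpy approximation).
slope, intercept = np.polyfit(1.0 / temps, np.log(kd), 1)
dH = slope * R                                # binding enthalpy, J/mol
dS = -intercept * R                           # binding entropy, J/(mol K)

for T, g in zip(temps, dG):
    print(f"T = {T:.1f} K  dG = {g / 1000:+.1f} kJ/mol")
print(f"van't Hoff: dH = {dH / 1000:+.1f} kJ/mol, dS = {dS:+.1f} J/(mol K)")
```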
Docking and molecular dynamics simulation. The results of the thermodynamics experiments are in accordance with studies on bioactive molecules, i.e. the binding affinity of MP-4 for trypsin is high at physiological temperatures (25 °C and 30 °C) but drops at lower and higher temperatures. To decipher the cause of such behaviour at the molecular level, docking experiments followed by molecular dynamics simulation and interaction studies were performed. The docked complex whose binding energy was closest to the experimental value was chosen as the best structure for subsequent analysis.
Free MP-4 (PDB id: 5DSS) and trypsin (present as a complex in PDB id: 1AVW) molecules were subjected to 200 ns explicit solvent molecular dynamics simulation and clustered to choose dominant conformational states for docking (Supplementary Fig. S1). In the case of MP-4, clusters with ≥ 5% population size were screened and frames from four clusters were chosen for further screening (Supplementary Table S3). In the case of trypsin, frames from the dominant cluster were chosen for subsequent analysis (Supplementary Table S3). Of these, the 7 best combinations (termed complexes henceforth) were fed into the HADDOCK server for docking (Supplementary Table S4). Successfully docked complexes were analyzed based on Z-score and HADDOCK score. Surface areas and binding energies of the complexes were calculated through the PISA server. The best energy values obtained were −11.7 Kcal/mol and −11.4 Kcal/mol (Supplementary Table S4). Interaction studies performed on the best MP-4-trypsin model showed that only one residue (Arg71) of the RSL of MP-4 was able to make electrostatic interactions with three residues (Phe39, Ser192 and Gly190) of the trypsin active site. There were also limited hydrophobic interactions between Leu70, Ile69 and Phe75 of MP-4 and the trypsin molecule (Phe91, Gly93, Leu96, Gly209, and Met101) in the complex. We also docked MP-4 with trypsin taking into account the flexibility essential for interaction, in a rationally designed experiment on the HADDOCK server. Based on past studies, residues 187-192, 207-210 and 218-221 of trypsin and residues 67 to 72 of the RSL of MP-4 were kept fully flexible, as these comprise the catalytic pocket 20 . Initially, ~200 MP-4-trypsin energy-minimized structures were obtained from an optimized run. The top selected model had a ΔG of −17.56 Kcal/mol, which is closer to the experimental value (−20 Kcal/mol at 25 °C) (Supplementary Table S5). Interaction studies of this complex showed many electrostatic and hydrophobic contacts at the interface (Supplementary Table S6). Since both the interaction studies and the binding energy of this docked complex yielded better results (Supplementary Table S5), we proceeded with this model for further analysis, as the values justify its selection as the best model.
The best docked structure of MP-4 and trypsin is shown in Fig. 3a and the electrostatic interactions are shown in Fig. 3b. The analysis showed that only two residues of the RSL, Gln68 and Thr73, are involved in electrostatic interactions, while two other residues of MP-4, Asp77 and Thr78, interact with the non-catalytic pocket of trypsin. Four residues, Ile66, Ile69, Pro71 and Thr73, showed hydrophobic interactions and are not favoured by the charged catalytic pocket. These interactions are further compared with known strong trypsin inhibitor complexes such as porcine pancreatic trypsin/soybean trypsin inhibitor and trypsin with bovine pancreatic trypsin inhibitor. Intensive interactions are made by the P1 residue of the RSL in these two complex structures, i.e. arginine and lysine, while in the case of MP-4 the P1 position is occupied by isoleucine. The interaction details for the MP-4 RSL with trypsin are tabulated in Supplementary Table S5. It is interesting that the strong inhibitors mentioned above show high affinity, with K_D values of 4.8 × 10⁻¹⁰ M and 6.08 × 10⁻¹⁴ M, as compared to 4.7 × 10⁻⁶ M for the MP-4-trypsin complex. This is because of the deeply buried nature of the P1 residues in the porcine pancreatic trypsin/soybean trypsin inhibitor and trypsin/bovine pancreatic trypsin inhibitor complexes.
In the docked model, the RSL of the MP-4 crystal structure sits over the catalytic part of trypsin and buries an intermolecular area of 1029.5 Å² with trypsin. For the strong trypsin inhibitor complexes, porcine pancreatic trypsin/soybean trypsin inhibitor (1AVW) and trypsin with bovine pancreatic trypsin inhibitor (4Y0Y), intermolecular buried areas of 870.3 Å² and 727.9 Å², respectively, were calculated through PISA, with extensive and effective interactions of P1 with the catalytic pocket of trypsin. For strong inhibitors, a buried surface area in the typical range of 600-900 Å² is commonly observed 21 . Structures of the RSL of MP-4, 1AVW and 4Y0Y with trypsin are shown in Supplementary Figure S2. These analyses clearly suggest that, although the intermolecular surface area in the MP-4-trypsin complex is high, only limited interactions are present. This essentially implies the weak inhibitory nature of this protease inhibitor. The thermodynamics data suggest that, despite being a weak inhibitor, MP-4 behaves similarly to other enzyme-inhibitor systems at the optimal temperature identified from the biochemical and SPR experiments.
Conformational/interaction features of MP-4-trypsin complex revealed through MD simulation.
In order to evaluate the activity as a function of temperature, the dynamics of the complex were further examined by subjecting it to a 500 ns molecular dynamics simulation at two different temperatures (15 °C and 25 °C). Conformational ensembles obtained from the 15 °C and 25 °C runs were extracted and analysed using the CPPTRAJ 22 programme. The trajectory of the MP-4-trypsin complex was stable at an RMSD of 3.0-4.0 Å at 15 °C and 3.5-4.5 Å at 25 °C (Fig. 4). Frames from both runs were clustered with a Cα radius of 2 Å from the centroid using the kclust utility of the MMTSB 23 toolkit in AMBER14 24 . The number of clusters indicates the inherent flexibility of the molecule to adopt different conformational states permissive for optimal binding. Nine conformational clusters were obtained for the simulation at 15 °C (Fig. 4a) and 11 conformational clusters were obtained for the simulation at 25 °C (Fig. 4b), indicating a wider conformational landscape for interaction at the higher temperature. This was also reflected in the B-factor analysis, as residue mobility was higher at 25 °C than at 15 °C (Fig. 5).
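The cluster counts quoted above come from the kclust/MMTSB implementation. A self-contained approximation of the same idea (pairwise Cα RMSD after optimal superposition, greedy grouping within a 2 Å radius) is sketched below on synthetic coordinates, so the cluster numbers it prints bear no relation to the reported ones.

```python
import numpy as np

def kabsch_rmsd(p, q):
    """RMSD between two (N, 3) coordinate sets after optimal superposition."""
    p = p - p.mean(axis=0)
    q = q - q.mean(axis=0)
    u, s, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(u @ vt))
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    diff = p @ rot - q
    return np.sqrt((diff ** 2).sum() / len(p))

def centroid_cluster(frames, radius=2.0):
    """Greedy clustering: a frame joins the first cluster whose seed is within `radius` (A)."""
    clusters = []  # each cluster is a list of frame indices; the first index is the seed
    for i, frame in enumerate(frames):
        for members in clusters:
            if kabsch_rmsd(frames[members[0]], frame) <= radius:
                members.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Synthetic "trajectory": 50 frames of 120 Calpha atoms with random thermal noise.
rng = np.random.default_rng(0)
reference = rng.normal(size=(120, 3)) * 10.0
frames = [reference + rng.normal(scale=0.8, size=reference.shape) for _ in range(50)]

clusters = centroid_cluster(frames, radius=2.0)
print(f"{len(clusters)} clusters; sizes: {[len(c) for c in clusters]}")
```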
Individual representative structures of each cluster were analyzed for binding energy, interface surface area, and the nature of interactions. At 15 °C the binding energy was in the range of −4 to −8.9 Kcal/mol, with an interface surface area in the range of 653 to 900 Å². Due to the smaller interface surface area, only limited electrostatic and hydrophobic interactions were seen. On the contrary, at 25 °C binding energies were in the range of −2.6 to −9.6 Kcal/mol, with a surface area of 900 to 1100 Å². The MP-4-trypsin interface at 25 °C showed extensive hydrogen bonds and salt bridges along with hydrophobic interactions. The conformation of the RSL was further analyzed, which showed significant differences between clusters. Interestingly, the RSL is relatively more rigid at 15 °C (Fig. 6a) in contrast to that at 25 °C (Fig. 6b). This is also reflected in the number of interactions of the RSL of MP-4 with the trypsin molecule. At 25 °C, most of the interactions of the RSL are achieved through Arg72, Ile69, Leu70, Pro71, Arg67, Thr73, Gln68, and Arg67, while at 15 °C interactions are limited to Arg67, Leu70, Arg72, and Ile74. Therefore, the RSL across the trajectory at 15 °C exhibited geometrical restrictions. The conformational window, however, was broad at 25 °C, suggesting that the molecule searches all possible conformational space and different bonding patterns to find an optimal geometry for interaction at that temperature (Fig. 6b).
The free energy of binding between MP-4 and trypsin was calculated for both trajectories using MM-GB/SA. The free energy (ΔG) of binding across the trajectory at 15 °C was −31 Kcal/mol, whereas at 25 °C it was −36 Kcal/mol (Fig. 7). These values are comparable with the experimental values, as ΔG was lower at 25 °C than at 15 °C, suggesting higher affinity at an optimal temperature, similar to other receptor-ligand complexes. However, due to the continually changing set of RSL interactions at 25 °C, a stable complex could not be formed. Hence, the MD simulations corroborate the weak inhibitory nature of MP-4.
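As a minimal sketch of how per-frame MM-GB/SA estimates are turned into a trajectory-averaged ΔG of binding, the snippet below averages placeholder per-frame energies under the single-trajectory approximation; the numbers are synthetic and are not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames = 5000

# Placeholder per-frame free-energy estimates (kcal/mol) for the complex and the
# separated partners taken from the same trajectory (single-trajectory approximation).
# Each term would normally be E_MM + G_solv from MM-GB/SA post-processing.
g_complex = rng.normal(-9800.0, 15.0, n_frames)
g_receptor = rng.normal(-7200.0, 12.0, n_frames)
g_ligand = rng.normal(-2565.0, 10.0, n_frames)

# Binding free energy per frame, its trajectory average and standard error.
dg_bind = g_complex - (g_receptor + g_ligand)
mean = dg_bind.mean()
sem = dg_bind.std(ddof=1) / np.sqrt(n_frames)

print(f"<dG_bind> = {mean:.1f} +/- {sem:.1f} kcal/mol over {n_frames} frames")
```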
Discussion
Kunitz-type protease inhibitors (KTPI) are extremely stable molecules. Although not a classical KTPI, our study shows MP-4 to be stable even at pH 10 and a temperature of 60 °C. The structure of MP-4 18 showed that MP-4 adopts the trefoil fold similar to other KTPIs. The MP-4 structure consists of 12 antiparallel strands connected by long loops and two internal disulfide bonds between Cys45-Cys90 and Cys145-Cys152 residues. Studies suggest that the presence of intra-molecular disulfide bonds 25 , a hydrophobic core and buried polar groups are the probable reasons for their stability, which is very reasonable for the MP-4 structure as well 18 . Besides, like other Kunitz-type protease inhibitors, the interaction of MP-4 with its cognate proteases is governed by a positive enthalpy. Thermodynamic analysis of MP-4 has shown that the enthalpy (∆H) is positive (approximately 4 Kcal/mol) over the range 15-35 °C.
A rise in temperature is expected to enhance the conformational mobility of loop regions in proteins. Our approach of combining molecular dynamics simulations with thermodynamic experiments shows that the behaviour of MP-4 as a function of temperature follows this conventional rule. Typically, it means that at low temperature the spectrum of accessible conformations is limited, while at higher temperature the molecule becomes too flexible, preventing effective interactions. Only at an intermediate temperature is the molecule able to fine-tune its microenvironment to bind incoming ligands. Kinetics and energetics studies performed for MP-4 with the trypsin enzyme in the range of 15-35 °C show optimal binding at 25-30 °C that decreases at low and high temperatures, as expected 26 . The data show that conformational flexibility is limited at low temperature and the molecule tends to be in a relatively frozen state. However, an ambient temperature (25 °C in this case) facilitates thermal motion in the molecule, allowing it to search the conformational space for preferential binding, thereby increasing its binding affinity, in agreement with the thermodynamics results. Consistent with previous reports on serine protease inhibitor complexes, the majority of the interacting residues are confined to the RSL of MP-4, and it has the same kind of surface protrusions that facilitate blockage of enzyme activity 27,28 . However, even at the optimal temperature of 25 °C, a stable interaction could not be achieved due to the continually changing bonding pattern of MP-4 with trypsin across the trajectory. This could be a probable reason for MP-4 behaving as a weak inhibitor even at the optimal temperature. Apart from this, sequence variability of the RSL is observed between other STIs (SPYRIRFI) and MP-4 (IREILPRTI). As a consequence, the MP-4 RSL does not make many hydrophilic and hydrophobic interactions in the complex vis-à-vis other STI complexes. The limited RSL intra-chain interactions observed in most KTPIs are, however, even fewer in the case of MP-4, due to which it cannot be held in the right conformation for inhibitory activity. Hence, the network of interactions critical for the higher affinity and inhibitory activity of protease inhibitors is absent in MP-4. The interactions formed between MP-4 and trypsin are such that the enzyme dissociates with ease and can bind to its cognate substrate to achieve proteolysis.
The effect of temperature on the conformational flexibility of functionally important regions is difficult to measure using biochemical or biophysical tools. The combination of thermodynamics and MD simulations employed in the present study represents a viable approach to ascertain the temperature-dependent conformational variability of functionally important regions. The MP-4 protein is a weak protease inhibitor even though it exhibits high structural homology to KTPIs. The rationally designed experiments using a combination of thermodynamics and further validation by MD simulation have helped decipher how a protease inhibitor whose biochemical and thermodynamic properties are coherent with those of other protease inhibitors is still a weak inhibitor.
Our study validates the kinetics of binding reported for various proteins in general and enzymes in particular. However, such an observation has not been reported for the MP-4-trypsin complex, nor even for the well-known STI-trypsin complex. We have for the first time demonstrated the kinetics associated with the dynamics of MP-4-trypsin using different physical parameters. Coupling of in-vitro and in-silico experiments has provided new insights regarding optimal binding in this system. The same principles and diverse biophysical techniques can be applied to other protein-protein interaction behavioural studies to assess factors associated with optimal physiological interaction in a more realistic scenario. A favourable conformation of the ligand is required for efficient binding with the receptor, and the active conformation may be dependent on physical factors such as temperature. The rise of new function in proteins may be due to mutations that change the temperature-dependent flexibility of functional regions. Drug resistance is a major public health problem of global relevance. We believe that engineering flexibility in small-molecule inhibitors using the aforementioned approach will enhance conformational adaptability and prevent loss of affinity due to mutations in the target protein.
Methods
Purification. Mucuna pruriens seeds were purchased from M/S Shidh Seeds Sales Corp. (Dehradun district, India). The partial characterization of the seed proteome and purification of one of the dominant proteins were previously standardized and reported by Kumar et al. 18 . Briefly, initial purification was done using pH-based protein extraction (50 mM sodium acetate buffer, pH 5.0). Gel filtration chromatography (GFC) of the 60% ammonium sulfate fraction was carried out using a Sephacryl 200 preparative column (Amersham Pharmacia Biotech Inc) in 50 mM phosphate buffer, pH 7.2, with 140 mM NaCl, at a flow rate of 1 ml/min, with detection at 280 nm. The last peak on the GFC chromatogram was identified as the MP-4 protein on the basis of retention time (4.5 hrs) on the GFC column (85 cm bed height, GE Healthcare), and homogeneity was evaluated on 12% SDS-PAGE. Protein concentration was estimated by a BCA protein assay (Pierce Biotechnology) using BSA (Sigma) as the standard.
Stability studies at various physiological conditions. Circular dichroism (CD) was performed on a Jasco-700 spectropolarimeter equipped with a Jasco PTC-348W temperature controller in the far-UV region (200-250 nm). 24 μM of protein was dissolved in 20 mM phosphate buffer, pH 7.2. Spectra were recorded at temperatures ranging from 20 °C to 100 °C at intervals of 20 °C in a 1 mm path length quartz cuvette (200 μl, Hellma). The scanning speed was 20 nm min⁻¹ and each spectrum was recorded as an average of 5 scans. The effect of pH on the secondary structure was further analysed by performing the experiment with a range of suitable buffers (citrate, pH 3-6; phosphate, pH 7-8; glycine, pH 9-10) with the same protein concentration at 37 °C. Spectra (CD mdeg) were converted into mean residue molar ellipticity [θ]MRW (deg cm² dmol⁻¹) and analysed using the JASCO spectral analysis program.
Preparation of the SPR sensor chip. All affinity measurements were carried out on a BIAcore T200 system (GE Healthcare). 2 µM MP-4 (ligand) was dissolved in 10 mM sodium acetate, pH 4.0, and amine-coupled to a CM5 (carboxymethylated) certified-grade sensor chip using an equal mixture of EDC/NHS (N-ethyl-N-(dimethylaminopropyl)carbodiimide; N-hydroxysuccinimide) (BIAcore amine coupling kit). Approximately 178 RU was achieved on immobilization, and this low level of immobilization ensured the minimization of mass transport effects. The protein was immobilized at a flow rate of 5 μl min⁻¹ for 120 seconds and the unreacted active sites were blocked with 1 M ethanolamine (BIAcore amine coupling kit). The running buffer was composed of 10 mM HEPES, pH 7.4, containing 150 mM NaCl, 3 mM EDTA and 0.005% surfactant P20 (GE Healthcare). One flow cell was not immobilized with protein; only EDC/NHS and ethanolamine were applied to it, and this flow cell was treated as a control for monitoring any non-specific binding. Sensorgrams were fitted to a model describing 1:1 binding between analyte (A) and ligand (L) [A + L ↔ AL]; the data were also found to fit more appropriately a two-state (conformational change) model [A + L ↔ AL ↔ AL*] [29][30][31] . The equilibrium dissociation constant (K_D = kd/ka) was calculated from the fitted sensorgrams. The chi-square (χ²) value was strictly monitored during data fitting; χ² is the standard statistical value that indicates the signal-to-noise ratio, or the closeness of the fit between the experimental data and the model. Typically, χ² values should be less than 10 percent of Rmax.
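For reference, the closed-form 1:1 Langmuir model against which such sensorgrams are fitted can be written in a few lines. The rate constants and analyte concentration below are placeholders of the same order of magnitude as the reported values, not the fitted parameters themselves.

```python
import numpy as np

def sensorgram_1to1(t_assoc, t_dissoc, ka, kd, conc, rmax):
    """Simulate a 1:1 (Langmuir) SPR response: A + L <-> AL.

    Association phase:  R(t) = Req * (1 - exp(-(ka*C + kd) * t))
    Dissociation phase: R(t) = R0  * exp(-kd * t)
    """
    req = rmax * ka * conc / (ka * conc + kd)     # steady-state response
    r_assoc = req * (1.0 - np.exp(-(ka * conc + kd) * t_assoc))
    r0 = r_assoc[-1]
    r_dissoc = r0 * np.exp(-kd * t_dissoc)
    return r_assoc, r_dissoc

# Placeholder parameters, illustrative only.
ka, kd = 1.4e5, 0.65          # M^-1 s^-1 and s^-1
conc, rmax = 5e-6, 100.0      # analyte concentration (M), saturation response (RU)

t1 = np.linspace(0, 60, 601)  # 60 s association phase
t2 = np.linspace(0, 60, 601)  # 60 s dissociation phase
assoc, dissoc = sensorgram_1to1(t1, t2, ka, kd, conc, rmax)

print(f"K_D = kd/ka = {kd / ka:.2e} M, plateau response ~ {assoc[-1]:.1f} RU")
```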
Calculation of thermodynamics parameters. Equilibrium dissociation constant (K D ) values for each
set of experiments were further used for calculation of the free energies of reaction between analyte and ligand: ∆Geq = −RT ln(K_D/c°), where R is the universal gas constant (8.314 J mol⁻¹ K⁻¹), T is the temperature in Kelvin, and c° is the standard state concentration (1 mol l⁻¹). ∆Heq was derived by directly fitting the experimental data to the integrated form of the van't Hoff equation, K_D = K_D° · e^[(∆H°/R)(1/T° − 1/T)] · e^[(∆Cp·T°/R)(1/T° − 1/T)] · (T°/T)^(∆Cp/R), where ∆Cp refers to the heat capacity change at constant pressure and the degree symbol refers to parameter values at a reference temperature. The entropy change was then calculated from ∆Geq = ∆Heq − T∆Seq. Molecular Docking. The HADDOCK web server v.2 is a robust global docking program based on molecular mechanics (MM) and performs rigid-body energy minimization, simulated annealing and water refinement 32 . The program searches the conformational space of the two protein partners in the system and docks the molecules in low-energy orientations, either involving all-atom structures or through some degree of coarse-graining. The advantage of HADDOCK is that one can explicitly define backbone flexibility and hence provide conformational plasticity while retaining the biochemical information.
MP-4 structure (PDB id-5DSS) was docked to trypsin in-silico by using HADDOCK web server. MP-4 structure was aligned to soybean trypsin inhibitor (STI) structure in soybean trypsin inhibitor-trypsin complex (PDB id-1AVW). The coordinates of STI were substituted by the coordinates of MP-4 in the complex. In the defined docking mode, residues 187-192, 207-210 and 218-221 of trypsin were kept as active residues because these stretches of amino acids are generally involved in interaction in the other known complexes. Similarly, residues from 67 to 72 of MP-4 were defined as active flexible residues. Approximately 200 structures in 5 clusters were obtained from HADDOCK server, which represented 95% of the water-refined models. The top cluster of MP-4-trypsin was selected primarily on the basis of Z-score and change in free energy, Kcal/mol (∆G). PISA server (http://www.ebi.ac.uk/msd-srv/prot_int/) was used to calculate total buried surface area, nature of interactions and residues involved in interactions at docking site 33 . Visualization and preparation of structure figures were done using PyMOL 34 .
Molecular Dynamics (MD) simulation. The best model of MP-4-trypsin complex obtained from
HADDOCK server was assessed for various parameters and the complex was manually analysed for steric clashes with neighbouring residues. PROCHECK 35 was used for analysing the stereochemistry of the residues. On the basis of minimal clash and better values of Z-score, ∆G and stereochemistry, the best model was selected for molecular dynamics simulation.
Molecular Dynamics Simulations were carried out for free MP-4 and trypsin for 200 ns and for the selected best docked model for 500 ns, at 15 °C and 25 °C, using the ff12SB force field of Amber 14 24,36,37 . The structures were explicitly solvated with a TIP3P water box extending 10 Å from the outermost atoms of the molecules. Periodic boundary conditions were used and the net negative charge was neutralized with Na⁺ counterions using the tleap program of AMBER 14 38 . Prior to production dynamics, temperature and pressure were equilibrated at 25 °C and 15 °C and at 1 atmosphere of pressure. At constant volume, equilibration was performed for 50 ns. The coordinates were saved every 10 ps during the simulation, such that the 50 ns trajectories consisted of 5000 MD sub-structures. All simulations were carried out with sander and the CUDA version of PMEMD with NVIDIA K20X GPU support from Amber 14, accessible from a RHEL workstation. All simulations were repeated to verify the AMBER results.
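The production runs above were prepared and run with AMBER's tleap and sander/PMEMD. As a rough, engine-agnostic illustration of the same preparation steps (TIP3P solvation with ~10 Å padding, neutralisation with Na+, minimisation, then constant-temperature dynamics), the sketch below uses the OpenMM Python API instead; the input file name, force-field files, and step counts are assumptions for illustration only and do not reproduce the authors' protocol.

```python
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import (PDBFile, Modeller, ForceField, Simulation,
                        PME, HBonds, DCDReporter)

# Illustrative input file; the study used its own docked MP-4-trypsin model.
pdb = PDBFile("mp4_trypsin_docked.pdb")
forcefield = ForceField("amber14-all.xml", "amber14/tip3p.xml")

# Solvate with ~10 A of TIP3P padding and neutralise the net charge with counterions.
modeller = Modeller(pdb.topology, pdb.positions)
modeller.addSolvent(forcefield, model="tip3p",
                    padding=1.0 * unit.nanometer, neutralize=True)

system = forcefield.createSystem(modeller.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1.0 * unit.nanometer,
                                 constraints=HBonds)

# 2 fs time step, 298 K thermostat (~25 C); repeat at 288 K for the 15 C run.
integrator = LangevinMiddleIntegrator(298 * unit.kelvin, 1.0 / unit.picosecond,
                                      0.002 * unit.picoseconds)
simulation = Simulation(modeller.topology, system, integrator)
simulation.context.setPositions(modeller.positions)

simulation.minimizeEnergy()
simulation.reporters.append(DCDReporter("traj.dcd", 5000))  # save a frame every 10 ps
simulation.step(25_000_000)  # one illustrative 50 ns segment at a 2 fs time step
```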
Analysis of MD Trajectories. Trajectories were analysed using the CPPTRAJ program. 500 intermediate snapshots were extracted from the 500 ns AMBER trajectories at an interval of 1 ns. The extracted frames were clustered for comparative analyses of the conformational ensembles obtained from the 15 °C and 25 °C simulations. Clustering was performed with the kclust utility in the MMTSB (Multiscale Modeling Tools for Structural Biology) suite, with a clustering radius of 2.0 Å from the centroid for the backbone Cα atoms 23 . Various structural properties were evaluated, including the backbone root-mean-square deviation (RMSD) and H-bond analysis. Mobility in the molecules at the two temperatures was investigated by estimating the flexibilities of residues in terms of root-mean-square fluctuation (RMSF) and B-factors using the CPPTRAJ module of AMBER14 for frames extracted at 100 ps intervals. Molecular Mechanics Generalized Born Surface Area (MM-GBSA) 39 was used to measure the binding energy (∆G) of the complexes at 25 °C and 15 °C. The 5000 snapshots obtained throughout the 500 ns simulations were used for the MM-GBSA free energy calculations. The method has been used as a reliable technique for accurate calculation of the free energy of binding (∆Gbind), computed as the difference between the free energy of the complex and the free energies of the unbound receptor and ligand. | 2018-02-16T23:04:04.366Z | 2018-01-12T00:00:00.000 | {
"year": 2018,
"sha1": "0a86928924d518315b9c6050b5eb21913d2befbf",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-18733-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6b921940f4791b8df526464b2e4b026c2e60b4be",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
262096044 | pes2o/s2orc | v3-fos-license | Difference in microbiome compositions of healthy peri-implant sulcus and peri-implantitis sulcus from the same patient
Objective The objective of this study is to compare the microbiome of healthy peri-implant sulcus (C) and peri-implantitis sulcus (U) from the same patient and analyze the difference in the microbiome composition. Materials and methods DNA samples of subgingival biofilms from 10 C (control group) and 10 U (uncontrolled group) sites were sent to the Microbiome Center at the Korea Research Institute of Biomedical Science and analyzed using 16S rRNA gene amplification and sequencing (MiSeq, Illumina) and the Human Oral Microbiome Database (HOMD). Results At the phylum level, Firmicutes and Proteobacteria were more abundant in group C, while Firmicutes and Bacteroidetes were dominant in group U. At the genus level, the core peri-implant microbiome was Streptococcus in group C. On the other hand, the core peri-implant microbiome was Porphyromonas, especially P. gingivalis, in group U. Conclusion In this study, the microbiome composition of peri-implantitis sulcus was different from that of healthy peri-implant sulcus from the same patient. The peri-implantitis microbiome was pathogen-enriched and was similar to the microbiome associated with periodontitis.
Introduction
Dental implant placement has become a popular treatment method for rehabilitating edentulous patients. Despite its high success rate, implant treatment has complications, such as peri-implantitis, a bacterial biofilm-associated pathological condition characterized by inflammation and bone loss [1]. Bacteria associated with this destructive disease are part of the normal oral microbiota [2], since bacteria from saliva and supragingival plaque on the implant surface colonize the peri-implant sulcus [3]. The microbiome of a healthy peri-implant sulcus is characterized by a low ratio of anaerobic to aerobic species and few periodontal pathogens [4]. However, under certain ecological shifts, bacteria associated with inflammation become dominant and pathogenic, acting in concert [2].
In order to understand and treat peri-implantitis, it is important to analyze changes in the peri-implant microbiome caused by the ecological shift. In this study, one of the culture-independent methods, 16s ribosomal RNA (rRNA) gene sequencing, was used to characterize the peri-implant microbiome. Unlike other close-ended molecular approaches, 16s rRNA Illumina sequencing identifies untargeted but potentially relevant taxa by an open-ended characterization of the microbiome under study [5]. Thus, this method can identify high levels of microbial diversity. In addition, 16s rRNA gene sequencing is cost-effective since higher sequence quality can be obtained at a much lower cost per sequence [5]. The detailed analysis of the peri-implant microbiome helps not only understand the etiology of peri-implantitis but also develop individualized and targeted treatment strategies, which are more effective than generic treatments such as mechanical debridement and antiseptic application [6].
The purpose of this study was to compare the microbiome compositions of healthy peri-implant sulcus and peri-implantitis sulcus from the same patient using 16s rRNA gene sequencing and analyze changes in the peri-implant microbiome.
Patient selection
This study was approved by the Institutional Review Board of the Catholic University of Korea, Uijeongbu St. Mary's Hospital (UC18OESI0147) and funded by the Catholic University of Korea, Uijeongbu Institute for Clinical Medicine. The study was in compliance with the STROBE guideline. Eleven study subjects were selected according to the following criteria: patients who had at least one 1-year checkup appointment after placing implant fixtures and installing fixed prostheses, patients who had at least one healthy implant and one peri-implantitis implant in posterior regions, healthy male/female adults aged under 80, and patients who understood the research objective and provided informed consent. Exclusion criteria were (i) pregnancy, (ii) diabetes, (iii) HIV infection, (iv) currently taking systemic immunosuppressant medications, (v) currently taking any bisphosphonate, and (vi) antibiotic therapy within 3 months before sampling.
Fig. 1. Sample size and power calculations using G power.
In this study, the taxa count distributions between sample groups were compared. Distance matrices identifying the beta diversity between samples were analyzed, and the distribution of distances within groups was then compared to the distribution of distances between groups. Sample size and power calculations for the design of this study were performed using data from the pilot study to obtain appropriate intra- and between-group distance distributions (Fig. 1).
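To illustrate the within-group versus between-group distance comparison described above, a minimal sketch is given below. It uses simulated taxa counts and the Bray-Curtis metric (consistent with the dissimilarity coefficient used later in the statistical analysis), not the study's actual data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Illustrative taxa-count matrix: rows are samples (first half group C,
# second half group U), columns are taxa. Not the study's actual data.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=20, size=(20, 50)).astype(float)
rel_abund = counts / counts.sum(axis=1, keepdims=True)

# Bray-Curtis distance matrix (beta diversity between samples).
dist = squareform(pdist(rel_abund, metric="braycurtis"))

group = np.array(["C"] * 10 + ["U"] * 10)
within, between = [], []
for i in range(len(group)):
    for j in range(i + 1, len(group)):
        (within if group[i] == group[j] else between).append(dist[i, j])

print(f"mean within-group distance:  {np.mean(within):.3f}")
print(f"mean between-group distance: {np.mean(between):.3f}")
```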
Observation categories and measurement instruments
History taking of patients was done to check antibiotic use within the past 3 months and systemic health condition. In a clinical appointment, the pocket depths of six sites (mesio-, mid-, disto-buccal and mesio-, mid-, disto-lingual) at selected implants were measured using sterilized perio probes. Sterilized endodontic paper points were placed in four sites (mesial, distal, buccal, lingual) of the peri-implant sulcus to obtain subgingival plaque. This sampling technique has been an internationally popular method for microbial culture studies [7]. The subgingival plaque was acquired from healthy peri-implant sulcus and diseased peri-implantitis sulcus of the same patients.
Healthy peri-implant sulcus showed no local infection, swelling, fistula, BOP, or peri-implant mucositis. Implants with peri-implantitis were selected based on the definitions presented in the 2017 World Workshop on the Classification of Periodontal and Peri-Implant Diseases and Conditions [8]. In the clinical setting, bleeding on probing was used to detect soft tissue inflammation, and progressive bone loss was identified by the comparison of periapical radiographs taken after implant prosthesis installation and checkup appointments. BOP, increase in probing depth of peri-implant sites, and radiographic bone loss were used to determine case definitions for peri-implantitis.
In this study, the definitions of peri-implantitis were as follows: bleeding on probing with or without suppuration, mean probing depth of 5-6 mm or more, bone loss higher than 3-4 mm, and configurations of intrabony (Class I) defect or suprabony (Class II) defect or both combined on periapical radiographs.
Research methods
In a dental clinic at the Catholic University of Korea, Uijeongbu St. Mary's Hospital, the subgingival plaque was sampled from healthy and diseased peri-implant sulcus of patients who had more than one implant with peri-implantitis among implant prostheses under more than a year of functional loading. Supragingival plaque of selected implants was removed using sterilized gauze, and subgingival plaque was sampled from four sites (mesial, distal, buccal, lingual) of the implant sulcus by placing sterilized paper points for 10 s. Acquired samples were put in cryo tubes, which were then stored in a liquid nitrogen tank. The stored samples were sent to the Korea Research Institute of Biomedical Science for DNA extraction and analysis (Fig. 2) [9]. Research bias was avoided since DNA analysis of the collected data was done by the independent research institute.
Statistical analysis
Analysis of the composition of microbiomes (ANCOM) was used to compare microbiomes from 10 healthy (C) and 10 unhealthy peri-implantitis (U) sulcus. Nonparametric two-sample t-tests using Monte-Carlo permutations compared alpha diversity measures of 10 C and 10 U sites. Nonparametric significance tests, permutational multivariate analysis of variance (PERMANOVA), and analysis of similarities (ANOSIM) based on Bray-Curtis dissimilarity coefficients were used to analyze differential taxonomic groups between C and U sites. The relative abundance of the top hits was analyzed using the pairwise Wilcoxon signed-rank test (significant at p < 0.05 after Bonferroni's correction).
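As a concrete illustration of the pairwise Wilcoxon signed-rank comparison with Bonferroni correction, a minimal Python sketch follows; the paired abundances and the number of taxa tested are hypothetical, not the study's values, and PERMANOVA/ANOSIM would normally come from a dedicated package such as scikit-bio.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired relative abundances (%) of one genus in the same
# 10 patients: healthy (C) vs peri-implantitis (U) sites. Hypothetical values.
strep_C = np.array([25.1, 18.3, 22.7, 19.9, 24.0, 20.5, 23.2, 17.8, 21.4, 19.0])
strep_U = np.array([ 9.2,  6.5,  8.8,  7.1, 10.3,  5.9,  8.0,  6.7,  9.5,  7.4])

n_taxa_tested = 10                          # number of top taxa compared (assumed)
stat, p = wilcoxon(strep_C, strep_U)        # paired, nonparametric test
p_bonferroni = min(1.0, p * n_taxa_tested)  # Bonferroni correction

print(f"Wilcoxon statistic = {stat:.1f}, corrected p = {p_bonferroni:.4f}")
```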
Clinical outcomes
One of 11 patients originally enrolled in the study dropped out, so 10 samples were collected from healthy (C) and peri-implantitis (U) implant sulcus of 10 patients. Healthy peri-implant sulcus showed no local infection, swelling, fistula, BOP, or peri-implant mucositis. The mean values of probing depth were less than 4 mm. Peri-implantitis sites showed BOP and mean probing depth values of ≥5-6 mm (Table 1). Also, bone loss was higher than 3-4 mm in some or all sites. In addition, configurations of intrabony defect, suprabony defect, or both combined were observed on periapical radiographs of peri-implantitis sites.
Analysis of alpha diversity
10 samples from healthy (C) and peri-implantitis (U) sulcus of 10 patients were analyzed. Target read count, Shannon species diversity, and Good's coverage of library were similar between groups C and U (Table 2).
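The alpha-diversity quantities reported in Table 2 follow standard definitions; a short sketch with hypothetical per-taxon read counts (not the study's data) is:

```python
import numpy as np

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over observed taxa."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def goods_coverage(counts):
    """Good's coverage = 1 - (singleton taxa / total reads)."""
    counts = np.asarray(counts)
    singletons = np.sum(counts == 1)
    return 1.0 - singletons / counts.sum()

# Hypothetical per-taxon read counts for one sample (not the study's data).
sample = [120, 85, 40, 22, 9, 3, 1, 1, 1]
print(f"Shannon H' = {shannon_diversity(sample):.2f}")
print(f"Good's coverage = {goods_coverage(sample):.3f}")
```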
Phylum level
At the phylum level, when the samples of healthy (C) peri-implant sulcus and unhealthy (U) peri-implantitis sulcus were compared, the composition of Bacteroidetes increased in group U except in subject 2. Also, the composition of Proteobacteria decreased in group U except in subjects 2 and 7 (Fig. 3).
Genus level
At the genus level, when the samples of group C and U were compared, the composition of Streptococcus decreased in group U, and that of Porphyromonas increased in group U for all subjects (Fig. 5).
Mean relative abundance was compared. At the genus level, the microbiome of group C was mainly composed of Streptococcus (21.2%), while Streptococcus made up only 7.9% of the group U microbiome. Neisseria accounted for 4.9% of group C but only 1.0% of group U. On the contrary, the microbiome of group U was mainly composed of Porphyromonas (13.2%), specifically Porphyromonas gingivalis (10.4%) (Fig. 6).
Discussion
In this study, the microbiome of healthy (C) peri-implant sulcus constitutes Firmicutes and Proteobacteria at the phylum level and Streptococcus at the genus level. The microbiome of unhealthy (U) peri-implantitis sulcus is composed of Firmicutes and Bacteroidetes at the phylum level and Porphyromonas, especially P. gingivalis, at the genus level. The study by Sanz-Martin et al. reported that healthy peri-implant sites were colonized by Proteobacteria and Actinobacteria phyla. Also, Streptococcus (phylum Firmicutes) and Neisseria were abundant in healthy peri-implant sites [5]. On the other hand, peri-implantitis sites harbored genera Porphyromonas (phylum Bacteroidetes), Treponema (phylum Spirochetes), and Filifactor (phylum Firmicutes). In addition, these diseased sites contained higher levels of classic pathogens, the so-called 'red complex' (Porphyromonas gingivalis, Tannerella forsythia, Treponema denticola). The results of this study align with previous findings by Sanz-Martin et al. [5].
The microbiome of healthy peri-implant sites mainly consists of Gram-positive cocci, non-motile bacilli, and few Gram-negative anaerobic species [12]. As inflammation progresses, bone loss occurs, and the peri-implant pocket deepens. Peri-implantitis is characterized by bleeding on probing and a deep peri-implant pocket, which causes the ecological shifts favoring anaerobic bacteria over aerobic species due to the low oxygen condition. In most cases, the composition of the peri-implantitis microbiome is similar to that of the periodontitis microbiome, dominated by Gram-negative bacteria. Gram-negative, black-pigmented, motile, and anaerobic species are commonly found in deep periodontal pockets [13]. The majority of the peri-implantitis 'checkerboard' studies based on the traditional culture studies of periodontitis sites confirmed that the peri-implant pocket shares a similar microbial profile with the periodontal pocket. The cluster of red complex (P. gingivalis, T. forsythia, T. denticola) inhabited peri-implantitis sites more abundantly than healthy sites [14]. The change of bacterial profile from group C to group U in this study supports these previous findings.
This study sampled biofilms from healthy peri-implant and diseased peri-implantitis sites of the same patient. It was reported that microbiome compositions of peri-implant submucosa differed greatly among individuals, and these inter-subject variations even outweighed differences between healthy and peri-implantitis sites [15]. Therefore, comparing the different groups of samples from the same individuals can be crucial to eliminate the effects of confounding factors caused by individual differences [16]. For this advantage, the studies done by Song et al. and Ganesan et al. compared pairs of healthy and diseased implants from the same patients to analyze differences in microbiome and gene expression [16,17]. Thus, it is evident that in this study, the difference in bacterial compositions of healthy peri-implant and peri-implantitis sulcus is due to the ecological shift favoring anaerobes over aerobes caused by the deep peri-implant sulcus and not due to host difference.
This study has limitations in that it has a small sample size and only compares the microbiome compositions of healthy and peri-implantitis sulcus. In further studies, sample sizes should be larger to approximate the population more closely. Also, it will be more informative to include the microbiome composition of a transitional stage from healthy to peri-implantitis, such as peri-implant mucositis. In addition to that, it will be interesting to compare the microbiome compositions of healthy and diseased periodontal pockets and healthy peri-implant and diseased peri-implantitis sites from the same patient.
Conclusion
In this study, Firmicutes and Proteobacteria phyla and the Streptococcus genus colonized healthy peri-implant sulcus, whereas Firmicutes and Bacteroidetes phyla and Porphyromonas, especially P. gingivalis, were abundant in unhealthy peri-implantitis sulcus. In conclusion, the microbiome composition of unhealthy peri-implantitis sulcus was different from that of healthy peri-implant sulcus from the same patient. The peri-implantitis microbiome was anaerobe- and pathogen-enriched and was similar to the microbiome associated with periodontitis.
Table 1
Sample characterization and clinical outcomes from healthy (C) and peri-implantitis (U) sulcus.
Table 2
Alpha diversity. 1) Number read count: the number of nucleotides sequenced by Next Generation sequencing. 2) Shannon species diversity: microorganism diversity index. 3) Good's coverage of library: detection ratio of microorganisms in a sample. | 2023-09-22T15:06:24.145Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "ee330798816968faae7467db0da7983e76dc5a3e",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844023075114/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "169f64b2616e719ccb3a2f2d80b4189b3b961b39",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260889560 | pes2o/s2orc | v3-fos-license | Influence of Isomeric Composition and Sample Handling on the Liquid Density of Hydrofluorethers Measured by Vibrating Tube Densimeter at 0.1 MPa
Hydrofluoroethers (HFEs) represent a new family of promising engineering fluids suitable for technical cleaning and cooling of electronic and magnetic devices or as admixtures in refrigerant blends. Here, we report accurate data for the liquid density at 0.1 MPa and temperatures from 273.15 K to 343.15 K for a series of five HFEs, namely, HFE-7000, HFE-7100, HFE-7200, HFE-7300, and HFE-7500. A highly sensitive vibrating tube densimeter with a borosilicate glass U-tube calibrated according to the procedure by Prokopová et al. (J Chem Thermodyn 173:106855, 2022) provided density data with an expanded uncertainty (k = 2) of 0.13 kg·m−3. Influences such as sample degassing, water content, or sample temperature before its dosing into the densimeter are discussed. Thanks to the high sensitivity of the used densimeter, an unexpected shift in the density of different HFE-7100 and HFE-7200 liquid samples was detected. Unlike other HFEs, HFE-7100 and HFE-7200 are mixtures of two hardly separable isomers, which were so far considered having identical thermophysical properties. Utilizing nuclear magnetic resonance spectroscopy, the ratio of n-isomer and iso-isomer was inspected for various liquid samples. In the range of iso-isomer mole fraction from 0.61 to 0.77, the new measurements revealed density differences of more than 5 kg·m−3 in case of HFE-7100 and of about 3 kg·m−3 in case of HFE-7200. Consequently, for some applications, the properties of different HFE isomers cannot be considered identical. The Rackett-type correlation for the saturated liquid density was fitted using the new and the literature data.
Introduction
The fluorinated ethers, also known under the commercial name Novec TM introduced by the 3M company [1], are receiving increasing attention from researchers and engineers as promising alternatives to chloro-fluoro-hydrocarbons over the past two decades. Hydrofluoroethers (HFEs) are excellent heat transfer fluids suitable for cooling of electronic and magnetic devices, cleaning and blowing agents, or low-boiling components in refrigerant blends. In comparison with other commonly used engineering fluids, HFEs cause zero ozone depletion and have relatively low global warming potential (GWP) and high dielectric constant. However, it should be noted that since HFEs belong to per- and polyfluoroalkyl substances (PFASs), their wider use may be limited in the future. For an effective and reliable use of fluorinated ethers, the accurate description of their thermodynamic properties such as density, vapor pressure, excess molar volume, and transport properties such as viscosity or surface tension is crucial. Several research groups have recently provided valuable experimental data on some of these properties [2][3][4][5] and developed predictive models mostly based on the SAFT-type equations of state [6][7][8]. Nevertheless, the introduced models have limited ranges of validity and for some applications unsatisfactory accuracy. The available property data are still rather limited in order to develop and verify accurate models valid over wide temperature and pressure ranges such as empirical multiparameter equations of state [9] as included in NIST's REFPROP package [10].
An interesting phenomenon that has not been addressed in detail is the isomeric composition of some HFE fluids, namely, HFE-7100, which is a trade name of a binary isomer mixture of 1,1,1,2,2,3,3,4,4-nonafluoro-4-methoxybutane and 1,1,1,2,3,3-hexafluoro-3-methoxy-2-(trifluoromethyl)propane, and HFE-7200, being a trade name of a binary isomer mixture of 1-ethoxy-1,1,2,2,3,3,4,4,4-nonafluorobutane and 1-ethoxy-1,1,2,3,3,3-hexafluoro-2-(trifluoromethyl)-propane. Since the isomers of the HFE-7100 and HFE-7200 mixtures are similar substances with the same functional groups, the relative volatility of the substances can be expected to be close to 1. As a result, the isomers are difficult to separate from each other, e.g., by gas-liquid chromatography [11]. Therefore, most producers, including the 3M company, declare HFE-7100 and HFE-7200 as mixtures of two inseparable isomers with essentially identical properties. Consequently, the thermophysical properties of these fluids such as the boiling point or liquid density are reported as for a pure substance [12,13]. On the other hand, our new experiments slightly undermine this simplification. Sensitive measurements carried out with the vibrating tube densimeter (VTD) revealed density differences of up to 6.4 kg·m−3 and 3.1 kg·m−3 between different batches of HFE-7100 and HFE-7200, respectively. These differences are remarkably larger compared to other single-isomer HFE fluids, i.e., HFE-7000, HFE-7300, and HFE-7500, which showed only small differences in densities of up to 0.6 kg·m−3 between the different production batches. The detected five to ten times higher discrepancies in the liquid density point to the effect of isomeric composition of HFE-7100 and HFE-7200.
The main goal of this work is to provide reference liquid density data at 0.1 MPa for a series of HFE fluids, specified in Table 1, and to shed light on the influence of HFE isomeric composition. The composition of the mixture may change during sample handling due to different evaporation of individual components. We have therefore tried to trace various experimental effects such as temperature during the sample handling or different level of degassing that can influence the sample composition and the measured liquid density. Accurate data for liquid density are necessary for the next step of description of HFE properties, e.g., for the development of multiparameter equations of state [9].
Experimental
A highly sensitive vibrating tube densimeter with a borosilicate glass U-tube was used for the measurement of liquid density at barometric pressure for a series of different HFE samples. The new data were collected in the temperature range from 273 K to the vicinity of the normal boiling point.
Materials
Description of all samples of five different HFE fluids and one pure isomer 1,1,2,3,3,3-hexafluoro-1-methoxy-2-(trifluoromethyl)propane, here called HFE-7100-iso, including their purity and water content is summarized in Tables 1 and 2. No further purification was applied except for the degassing of a 5 ml sample in a 10-ml plastic syringe by applying repeatedly a slight vacuum for about 10 s. Samples used for the reference measurement of liquid density at 0.1 MPa were degassed four times. Further details on the sample degassing and the change of sample composition in case of two-isomer liquids due to evaporation are given in the Results section. The water content was measured utilizing the Karl-Fischer coulometric titrator (Mettler Toledo C30) or taken from analytical certificates. It is recommended to have at least 50 μg of water in the sample in case of coulometric analysis. For very dry samples, for which the measured water content was under the detection limit (UDL) of the titrator, a water mass fraction lower than 10 × 10−6 was assumed. With regard to the values given in Table 2, the HFE samples can be considered almost water-free. All samples were stored in the refrigerator at a temperature of 278 K. The samples were taken directly from the bottle with a sterile syringe, quickly degassed and dispensed into the VTD U-tube or further processed in order to intentionally alter the composition of isomeric mixtures.
Vibrating Tube Densimeter
Vibrating tube densimeters are very sensitive instruments for the measurement of fluid densities ranging from gases to dense liquids. When carefully calibrated, standard combined uncertainties in the order of ± 0.10 kg·m−3 can be achieved [15-17] for the high-pressure instruments equipped with the metal U- or V-shaped vibrating tubes. The low-pressure units with glass tubes can attain even one order of magnitude lower standard combined uncertainties of ± 0.010 kg·m−3 [18-20].
In this work, the vibrating tube densimeter of Anton Paar, model DMA 5000 M [21], with a borosilicate glass U-tube was used. With a resolution down to 0.001 kg·m−3, the instrument belongs to the most sensitive VTDs. The densimeter was operated and calibrated according to the procedure described in previous work [20]. The calibration technique, inspired by the approach of Fritz et al. [22], employs a series of repeated measurements with ultra-pure water and dry air. In short, the fluid density can be determined from the following equation (Eq. 1), where PQ = τ/τref is the relative period of oscillation, A and B are temperature-dependent parameters obtained from the reference measurements with water and air over the temperature range from 273 K to 363 K, and ΔD0(T) = D0,sample(T) − D0,air(T) is the damping difference of the measured sample and dry air at a given temperature T. V1 and V2 are the damping coefficients reflecting the influence of sample viscosity.
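Eq. 1 itself is not reproduced in this text. For orientation only, the sketch below shows the classical two-fluid (water/air) working relation of a vibrating tube densimeter at a single temperature, rho = A·tau² + B; the actual Eq. 1 additionally works with the relative period PQ and the damping corrections V1·ΔD0 and V2·ΔD0² described above, which are omitted from this simplified sketch. All numerical values are hypothetical.

```python
def calibrate_two_fluids(tau_water, tau_air, rho_water, rho_air):
    """Classical two-point VTD calibration at one temperature:
    rho = A * tau**2 + B, with A and B fixed by the water and air references."""
    A = (rho_water - rho_air) / (tau_water**2 - tau_air**2)
    B = rho_water - A * tau_water**2
    return A, B

# Hypothetical oscillation periods (arbitrary units) and reference densities
# at 298.15 K; real values come from the instrument and reference equations.
A, B = calibrate_two_fluids(tau_water=2.6530, tau_air=2.4870,
                            rho_water=997.05, rho_air=1.18)
tau_sample = 2.7425
print(f"sample density = {A * tau_sample**2 + B:.2f} kg/m^3")
```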
The HFE samples were measured in the range from T = 273 K to temperatures 4 K to 60 K below the normal boiling point, depending on the sample volatility. Due to the rather high evaporation of most HFE samples, both the inlet and outlet openings of the VTD U-tube were loosely closed with teflon plugs to guarantee constant sample composition during the whole measurement. Several experimental setups leading to the prevention of evaporation of the sample from the U-tube were tested. The differences in sample densities obtained with the U-tube plugged and unplugged were below 0.030 kg·m−3, i.e., they lie within the expanded uncertainty (k = 2) of the density data of 0.13 kg·m−3. Each sample measurement was followed by water measurements and at least every 10 days by dry air measurements to check the calibration parameters and stability of the VTD over time. The final densities of the HFE samples were calculated based on the temperature dependent parameters A and B obtained from a measurement campaign of at least 5 water and 3 air measurements.
Results and Discussion
Tables 3, 4, 5, 6, and 7 summarize the experimental data for the liquid density of the five HFE fluids. The density data represent the average values of two to three measurements with different samples from each batch specified in Table 2. As the measurements were carried out with the U-tube inputs loosely plugged, the liquid pressure can be considered slightly higher than the reported barometric pressure measured with a high-precision external pressure gauge Druck DPI 142 (GE, USA). An internal Pt100 temperature probe located close to the tip of the U-tube was inspected as described in previous work [20].
Isomeric Composition of HFE-7100 and HFE-7200
Tables 8 and 9 provide the density data of different HFE-7100 and HFE-7200 samples depending on the mole fraction of the iso-isomer at three temperatures of 283 K, 298 K, and 303 K. New data are compared with the literature sources containing the information on the composition of the isomeric mixture. The composition dependence of the liquid density at 0.1 MPa of all known samples is depicted in Fig. 3. As can be seen, the density of both liquids gradually increases with the higher content of the iso-isomer. In the typical composition range with the mole fraction of iso-isomer from 0.6 to 0.8, the detected density difference reaches 5 kg·m−3 and 3 kg·m−3 in case of HFE-7100 and HFE-7200, respectively. These differences considerably overreach the expanded uncertainty of most experimental data, which are typically in the range of 0.1 to 1 kg·m−3. It should be noted that the influence of isomeric composition of HFE-7100 and HFE-7200 can be neglected in most engineering applications. On the other hand, it needs to be considered in the development of accurate property models such as empirical multiparameter equations of state [9]. Moreover, due to favorable properties such as low hygroscopicity, HFE fluids are viewed as good candidates for calibration liquids, e.g., for accurate density measurement. If so, we recommend focusing predominantly on the single-isomeric HFE fluids.
Temperature Correlation for Saturated Liquid Density of HFEs
The new density data obtained at 0.1 MPa together with the low-pressure data taken from the literature were fitted with the Rackett-type correlation [26]. The correlation, given in the form of Eq. 2, provides good predictions for the saturated liquid density of all five HFEs.
In Eq. 2, parameter A approximates the critical density ρcrit and B can be considered as the critical compressibility factor Zcrit. We note that Vetere [26] employed a universal value of exponent N of 2/7 that can be used for less described liquids. In general, various values for N, usually in the range from 0.1 to 0.5, are used when representing low-pressure experimental data, e.g., as reported by Outcalt et al. [4,27].
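The displayed form of Eq. 2 did not survive extraction here. Based on the description of A (approximately the critical density), B (approximately the critical compressibility factor), and the exponent N, the standard Rackett-type form reads as follows; this is a reconstruction consistent with that description, not a verbatim copy of the paper's equation:

\rho_{\mathrm{sat}}(T) = A \, B^{-\left(1 - T/T_{\mathrm{crit}}\right)^{N}}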
Except for HFE-7000, exponent N was set equal to 0.30, which provides good representation of all available density data while achieving acceptable reproduction of the critical point estimated by Aminian et al. [28]. Table 10 summarizes the parameters of Eq. 2 for the five selected HFEs obtained from the minimization of the following objective function, with Nexp standing for the number of density data points, Ui for the expanded uncertainty (k = 2), and wi for the weight lying between 0 and 1. The weight was equal to 1.0, except for the data points from the previous work [6] whose weight was set to 0.1; more details are given in the discussion of Figs. 4, 5, 6, 7, and 8. Average values of the standard deviation of the correlated parameters are also provided. Critical temperature Tcrit was taken from Aminian et al. [28]. Values of parameter A are approximately 4 % higher than the critical density ρcrit estimated recently by Aminian et al. [28]. It shall be emphasized that correlation (2) is based on experimental data in the temperature range typically from 273 K to 363 K. However, it is expected to provide reasonable prediction for the saturated liquid density also at higher temperatures as it follows the temperature trend toward the critical point density.
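The objective function is likewise missing from this extraction; a typical weighted least-squares form consistent with the quantities described would be S = Σ wi [(ρexp,i − ρcalc,i)/Ui]². A minimal Python sketch of such a fit of the Rackett-type form, using illustrative data points, an assumed critical temperature, and arbitrary starting values (none of which are the paper's actual inputs), could be:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative inputs (not the paper's data): temperatures (K), measured
# saturated-liquid densities (kg/m^3), expanded uncertainties U_i (k=2),
# and per-point weights w_i as described in the text.
T = np.array([273.15, 293.15, 313.15, 333.15])
rho_exp = np.array([1470.0, 1430.0, 1388.0, 1344.0])
U = np.full_like(rho_exp, 0.13)      # expanded uncertainty, kg/m^3
w = np.ones_like(rho_exp)            # weights between 0 and 1

T_crit = 437.7                       # assumed critical temperature, K
N = 0.30                             # fixed exponent, as in the paper

def rackett(params, T):
    A, B = params                    # A ~ critical density, B ~ Z_crit
    return A * B ** (-(1.0 - T / T_crit) ** N)

def residuals(params):
    # Weighted residuals; the sum of their squares is the objective function.
    return np.sqrt(w) * (rho_exp - rackett(params, T)) / U

fit = least_squares(residuals, x0=[550.0, 0.26])
A_fit, B_fit = fit.x
print(f"A = {A_fit:.1f} kg/m^3, B = {B_fit:.4f}")
```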
Table 10 also provides the values for the expanded relative deviation of Eq. 2 from the correlated experimental data. The relative deviations for HFE-7100 and HFE-7200 are considerably larger compared to the other three single-isomeric HFEs. This indicates greater scatter of the density data due to the two-isomer composition of HFE-7100 and HFE-7200. The blue dash-dotted lines correspond to correlations given as linear functions of temperature in 3M datasheets [12,13,[29][30][31]. As can be seen, the 3M correlations deviate from the available experimental data by several tenths of a percent. The largest discrepancies can be seen in case of HFE-7100 as shown in Fig. 5. It should be noted that the 3M correlations were developed before most of the experimental data were published and as such should be viewed as preliminary engineering correlations. Out of recent experimental data, the measurements by Rausch et al. [2], carried out with a VTD under saturated conditions, were found to show the best internal consistency over the wide temperature range from 273 K to 363 K for all five selected HFEs. We note that our previous data collected with the single-sinker buoyancy method [6] show slightly different temperature slopes than other data for most HFEs. We suspect that this discrepancy occurred due to possible temperature gradients and convection in the relatively large liquid sample with a volume of around 100 ml. On the other hand, the buoyancy method is not influenced by the sample viscosity, which may cause substantial errors in the data obtained with a VTD [19]. The good agreement between the buoyancy method and the new VTD data in the vicinity of the laboratory temperature of 298 K, where the possible temperature gradients were the lowest, provides additional verification of the viscosity correction employed in Eq. 1 for relatively low-viscosity HFE fluids. Furthermore, the agreement of both measuring techniques close to 298 K and with the data by Rausch et al. [2] for most of the single-isomer HFE samples confirms the VTD calibration according to Prokopová et al. [20], which was extrapolated to rather high densities from 1300 to 1700 kg·m−3. A preliminary, i.e., so far not officially published, multiparameter equation of state by Zhou and Lemmon [34] is available in the REFPROP package v. 10 [10] for
HFE-7000. The predictions of the equation of state for the saturated liquid density are represented by the orange solid line in Fig. 4 and show quite good agreement with the new measurements in the temperature range from 273 K to 303 K, although our new data were not used to fit this equation of state. At higher temperatures, the equation shows an increasing deviation from the Rackett-type correlation (2), which follows the trend of the data by Rausch et al. [2].
Fig. 8 Relative deviation of the Rackett-type correlation (2) for HFE-7500 from the densities taken from literature [2,3,6,36,38,41], new data, and 3M correlation [31]. Black dashed lines indicate the expanded relative deviation of Eq. 2.
Another preliminary multiparameter equation of state has been recently introduced for HFE-7100 (denoted also as RE449mccc) in the supplement of a publication by Huber and Lemmon [37]. Unlike in the case of HFE-7000, the equation of state follows the trend of the density data by Rausch et al. [2] over the entire temperature range as shown in Fig. 5. Similarly as shown in Fig. 3a, the influence of the isomeric composition of HFE-7100 on the experimental data can be clearly seen. The liquid density systematically increases with the increasing mole fraction of iso-isomer in the samples. The data with similar iso-isomer mole fraction of around 60 % to 62 % by Cendon et al. [24], Pineiro et al. [23], and Rausch et al. [2] agree well with each other. The new data with the mole fractions from 67 % to 100 % are shifted to higher densities. For some of the data sources, the isomeric composition could not be traced. However, from the trend of the data shown in Fig. 5, one can assume that the iso-isomer mole fraction could be around 60 %, 69 %, and 76 % in case of the data by Qi et al. [32], Tanaka [35], and Vinš et al. [6], respectively. Unfortunately, in case of the first batch (no. 25584) measured in this work, the isomeric composition was not investigated either. One can assume that the iso-isomer mole fraction was approximately 75 %. Based on private communications, the samples measured by Muñoz-Rujas et al. [5] should have the iso-isomer mole fraction of around 52 %. However, the data show remarkably good agreement with other data with the iso-isomer mole fraction around 60 %. The possible composition of the samples employed by Shiflett and Yokozeki [36] is hard to judge due to the larger scatter of data.
A similar dependence of liquid density on isomeric composition can be seen in Fig. 6 for the other two-isomer component, HFE-7200. A gradual increase in density can be seen with the increasing mole fraction of the iso-isomer. The data by Rausch et al. [2] and Muñoz-Rujas et al. [25] with the iso-isomer mole fraction between 61 % and 62 % are in good agreement. The new data obtained in this work for the samples with an iso-isomer mole fraction between 67 % and 73 % are systematically higher by approximately 0.1 to 0.2 %. Based on the data shown in Fig. 6, one can again guess the isomeric composition of other data sources. The data by Fang et al. [38] and Pineiro et al. [23] seem to have an iso-isomer mole fraction of around 61 % due to the remarkably good agreement with the data by Rausch et al. [2] and Muñoz-Rujas et al. [25]. Our previous data obtained with the single-sinker buoyancy method [6] seem to have a mole fraction of around 72 %.
Figure 7 shows the density data for HFE-7300 compared to the Rackett-type correlation (2). As can be seen, all experiments are in very good agreement except for the different temperature slope in case of the single-sinker buoyancy data from the previous study [6], which was discussed above. The two batches of HFE-7300 specified in Table 2 show a slight mutual offset of about 0.5 kg·m−3. As the batches have comparable purity and low water content and were handled in the same manner, the difference is assumed to be a batch dependency similar to other studies, e.g., by Sommer et al. [40] for toluene. In case of HFE-7300, the 3M correlation [30] provides relatively good predictions over the temperature range from 275 K to 345 K.
The density data for HFE-7500 compared to correlation (2) are provided in Fig. 8. The new data for batch no. 21099 are in excellent agreement with the data by Rausch et al. [2] and Muñoz-Rujas et al. [41] over the whole temperature range. The densities of the other two batches are slightly shifted by about 0.4 to 0.5 kg·m−3, which is considered to be a batch dependency as observed for HFE-7300. The data by Lafitte et al. [3] and Fang et al. [38] show only slight discrepancies which are, except for a single point by Fang et al. at 293 K, fully within the interval of the expanded relative deviation of the Rackett-type correlation (black dashed lines in Fig. 8). The single-sinker buoyancy data [6] and the data by Shiflett and Yokozeki [36] show larger deviations. Both datasets [6,36] have rather high expanded uncertainties of around 0.5 to 4.0 kg·m−3. Their relevance was therefore lowered in the development of the density correlation (2). Besides, the weight wi of the single-sinker buoyancy data [6] was lowered to 0.1, as discussed above.
Other Effects Influencing Measurements with a VTD at 0.1 MPa
Due to the high sensitivity of the VTD used, additional factors affecting the density measurement could be examined. The high resolution of 0.001 kg·m−3 makes it possible to inspect, for example, the gradual influence of sample degassing or the evaporation of two-isomeric samples at elevated temperatures.
Sample Degassing
As mentioned in Section 2.1, the 5-ml liquid samples were degassed by repeatedly applying a slight vacuum in a 10-ml syringe. Our aim was to inspect the degree of degassing depending on the number of vacuum cycles and its influence on the measured liquid density. The samples, tempered inside a refrigerator at a constant temperature of 278 K, were taken from the 1-liter bottle, quickly degassed by different numbers of vacuum cycles in a syringe, and applied directly into the VTD U-tube. The liquid density was measured at three different temperatures from 293 K to 303 K. Figure 9 shows the observed variation of density depending on the number of degassing cycles for the two-isomer liquid HFE-7100 and the single-isomer liquids HFE-7000, HFE-7300, and HFE-7100-iso with regard to the non-degassed samples.
Independent of the isomeric composition, an almost identical density increase was observed for all samples taken from the bottle tempered at 278 K. A maximum increase of density of 0.75 kg·m−3 was observed after 10 degassing cycles, with the steepest change occurring within the first 4 cycles. The measured density did not change any further with increasing number of cycles for any of the samples, including HFE-7100. This indicates that the degassing technique did not affect the isomeric composition of the two-isomer sample and the increase in density corresponds only to sample degassing. To verify this, an additional experiment with two samples of the single-isomer liquids HFE-7000 and HFE-7100-iso tempered at the laboratory temperature of 297 K was performed. In this case, the degassing technique shows a similar trend, however, with a lower increase in density of around 0.5 kg·m−3. This is considered to be due to the lower solubility of air in HFEs at higher temperature.
These experimental tests showed that, regardless of the possible isomeric composition or the temperature at which the sample was degassed, the liquid density increased by approximately 0.40 kg·m−3 and 0.65 kg·m−3 after 4 degassing cycles (i.e., the standard degassing approach described in Sect. 2.1) at the degassing temperatures of 278 K and 298 K, respectively. We note that the additional change in density due to further degassing cycles lies within the expanded uncertainty of the data presented.
Sample Temperature Before Its Dosing into the VTD U-tube
Another experimental test focused on the temperature of the sample and its handling prior to dosing into the VTD U-tube. Application of the liquid sample from a bottle into the densimeter using a syringe typically takes about 1 to 3 min, including degassing, flushing the U-tube with fresh sample, and capping the U-tube inlet and outlet. The two-isomer mixture HFE-7100 (b.n. 24865) and single-component HFE-7300 (b.n. 20176) were measured at two different initial temperatures: once stored at the laboratory temperature of 298 K and once refrigerated at 278 K. The samples were not degassed in this case in order to speed up the sample handling. The density measurements at each storage temperature were repeated three times using the same procedure over the temperature range from 273 K to 323 K. The density of the refrigerated samples was approximately 0.025 kg·m−3 lower compared to the warmer samples. This tiny difference, lying well within the expanded uncertainty of the measured density, is believed to be due to the higher amount of air dissolved in the colder samples. It follows that the quick sample handling and the initial storage temperature do not affect the accuracy of the obtained density data. The effect of exposure time to ambient air during the sample handling was inspected using samples heated for 30 and 60 minutes in an uncovered beaker at a temperature of 308 K prior to dosing into the densimeter. For both HFE liquids, the density measured after 60 minutes was slightly higher compared to that after 30 minutes. The difference, which did not exceed a value of 0.040 kg·m−3, appears to be due to the longer release of air dissolved in the liquid samples to the environment. On the other hand, a considerable difference was observed between HFE-7100 and HFE-7300 when comparing the quickly handled samples described in the previous paragraph with samples heated to T = 308 K for tens of minutes. In the case of single-isomer HFE-7300, the density of the sample heated to T = 308 K was only 0.057 kg·m−3 higher than in case of the quickly dosed samples with the initial temperature of 298 K or 278 K. In contrast, the density of two-isomer HFE-7100 showed a noticeable increase of 0.275 kg·m−3 due to the heating of the sample in ambient air. Although it is difficult to confirm, it appears that this five times higher difference was caused by the change of composition during evaporation of the HFE-7100 samples rather than by a reduced content of dissolved air.
In summary, exposure of the sample to the ambient air must be kept as low as possible, especially when accurate and reproducible density measurements of liquid mixtures are to be performed using the sensitive barometric VTDs.
Conclusion
New data for the liquid density of five different hydrofluoroethers were measured using a vibrating tube densimeter Anton Paar, model DMA 5000 M. The density data were measured at approximately 0.1 MPa over the temperature range from 273 K to the vicinity of the normal boiling point with an expanded combined uncertainty (k = 2) of 0.13 kg·m−3. Due to the high sensitivity of the employed VTD and its careful calibration according to the procedure introduced in previous work [20], several interesting effects influencing the measured density were investigated on different batches of HFE samples. HFE-7100 and HFE-7200 are binary mixtures of two hardly separable isomers, whose thermophysical properties have been considered identical so far. However, new experiments showed a systematic shift in the liquid density of HFE-7100 and HFE-7200 depending on the mole fraction of the iso-isomer. In the typical range of iso-isomer mole fractions between 60 % and 100 %, the density varies by more than 5 kg·m−3 and 3 kg·m−3 in case of HFE-7100 and HFE-7200, respectively. Other experimental tests indicated the influence of sample handling before dosing into the VTD U-tube, such as degassing, storage temperature, and exposure time to ambient air.
Based on the literature data for the low-pressure and the saturated liquid density and the new measurements, the Rackett-type correlation (2) was developed. The correlation provides good estimates for the saturated liquid density of all five HFEs. The expanded (k = 2) relative deviation from the experimental data is lower than 0.2 % for all HFEs, except for HFE-7100, where the isomeric composition causes larger deviation. The new experimental data can be used in a possible improvement of the preliminary multiparameter equations of state for HFE-7000 [34] and HFE-7100 [37] implemented in REFPROP [10], in which case especially the influence of the isomeric composition of HFE-7100 should be taken into account.
19F NMR spectra (75.72 MHz, CDCl3) were measured using an 80 MHz instrument Spinsolve 80 ULTRA (Magritek, Germany) at room temperature. The chemical shifts (δ) are given in ppm. As depicted in Figs. 1 and 2, signals at −80.2 ppm (iso-) and −86.6 ppm (n-) were used to determine the isomeric composition of the samples.
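A small sketch of how an isomer mole fraction can be obtained from two integrated 19F signals is given below; the number of equivalent fluorine nuclei assigned to each signal and the integral values are assumptions for illustration only, not the assignments used in the paper.

```python
def isomer_mole_fraction(I_iso, I_n, nF_iso, nF_n):
    """Mole fraction of the iso-isomer from 19F NMR signal integrals.

    I_iso, I_n   : integrated intensities of the iso- and n-isomer signals
    nF_iso, nF_n : number of equivalent 19F nuclei contributing to each signal
                   (assumed values; they depend on which fluorine group the
                   -80.2 ppm and -86.6 ppm signals belong to)
    """
    mol_iso = I_iso / nF_iso
    mol_n = I_n / nF_n
    return mol_iso / (mol_iso + mol_n)

# Hypothetical integrals and nuclei counts, purely for illustration.
print(f"x_iso = {isomer_mole_fraction(I_iso=6.2, I_n=1.3, nF_iso=6, nF_n=3):.2f}")
```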
Figures 4, 5, 6, 7, and 8 compare the literature data and the new data for the liquid density at 0.1 MPa and the saturated liquid density of the five selected HFEs with the Rackett-type correlation (2). The horizontal black dashed lines indicate the expanded relative deviation of the correlation given in Table 10.
Fig. 9 Variation of density depending on number of degassing cycles in a syringe for samples stored at the laboratory temperature 298 K and refrigerated at 278 K; densities measured with VTD at 293 K (a) and at 303 K (b)
Table 1
List of investigated hydrofluoroethers
Table 2
Specification, purity, water content, and isomeric composition of measured samples
Table 3
Average density of HFE-7000 sample no. 20145 including the expanded uncertainties U(ρ) with k = 2 at the barometric pressure of 989.00 hPa
Table 4
Average density of HFE-7100 samples including the expanded uncertainties U(ρ) at an average barometric pressure of 986.19 hPa
Table 5
Average density of HFE-7200 samples including the expanded uncertainties U(ρ) at an average barometric pressure of 990.44 hPa
Table 6
Average density of HFE-7300 samples including the expanded uncertainties U(ρ) at an average barometric pressure of 982.27 hPa
Table 7
Average density of HFE-7500 samples including the expanded uncertainties U(ρ) at an average barometric pressure of 986.16 hPa
Table 9
Liquid density depending on the iso-isomer content in the HFE-7200 samples | 2023-08-15T13:36:01.490Z | 2023-08-14T00:00:00.000 | {
"year": 2023,
"sha1": "d2c50b75ca4ca1771cc3b24acae04dcdeeb9ec6d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10765-023-03247-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "82163a2238f3a05e9e01d96ad26eab46d6d4f889",
"s2fieldsofstudy": [
"Engineering",
"Chemistry"
],
"extfieldsofstudy": []
} |
259230890 | pes2o/s2orc | v3-fos-license | COVID-19 after rituximab therapy in cSLE patients
Childhood-onset systemic lupus erythematosus (cSLE) is an autoimmune disease associated with significant morbidity and mortality. Rituximab is a B-cell depleting therapy utilized in the treatment of SLE. In adults, rituximab has been associated with increased risk of adverse outcomes in patients who develop coronavirus disease 2019 (COVID-19). We aimed to assess the impact of prior rituximab treatment on clinical outcomes from Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection in children with SLE. To describe the impact of rituximab on outcomes from SARS-CoV-2 infection, we conducted a retrospective study of pediatric SLE patients in our center diagnosed with COVID-19 who had previously received rituximab between February 2019 and October 2022. Patients’ clinical characteristics, disease activity, and outcomes were assessed. Of the eight subjects assessed, five required hospitalizations for COVID-19, four required ICU admission, and two were seen in the emergency department for their symptoms. One patient ultimately expired from her illness. The median time between rituximab administration and COVID-19 diagnosis was 3 months. We assessed the clinical outcomes, including the need of ICU admission and fatal outcome, of COVID-19 in our cSLE patient population after rituximab administration. Approximately 60% of our patients required hospitalization for their illness, and seven out of eight patients required healthcare utilization to include hospitalization and/or emergency department visits.
Introduction
Rituximab, an anti-CD20 therapy, is used widely across a variety of autoimmune conditions, including childhood-onset systemic lupus erythematosus (cSLE). 1 Rituximab administration is associated with prolonged B-cell depletion and decreased humoral response. 2,3 It remains unclear, however, whether treatment with rituximab therapy increases the risk of severe infection from Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2).
Patients with persistent COVID-19 positivity by polymerase chain reaction (PCR) have been shown to have a more prolonged, often relapsing-remitting course, and clinically have worse outcomes. [4][5][6] Patients who have recovered from COVID-19 infection develop T-cell and B-cell memory, 7 the latter of which is impaired with rituximab therapy and can affect rates of re-infection with COVID-19. 8 Additionally, a study by Furlan et al. demonstrated that depletion and/or functional impairment of T-cells through disease modifying antirheumatic drugs may account for the failure of convalescent plasma in B-cell-depleted individuals with autoimmunity; T-cell depletion or functional impairment may be one of the reasons why patients treated with disease modifying antirheumatic drugs who have also received B-cell depleting therapies should be considered high risk for poor outcomes and mortality associated with COVID-19. 9 The clinical course of COVID-19 in patients with primary and secondary humoral immune deficiencies has been shown to be more severe, and although it has not been linked to higher prevalence of death, 4,6,10,11 the development of antibody response and viral clearance are affected by the time since exposure to rituximab. 12 A study by Levavi et al. 13 suggested that approximately 35% of adults receiving rituximab for non-malignant disease were admitted to the ICU for COVID-19 treatment. Given these findings, we sought to assess the impact of rituximab therapy on clinical outcomes from SARS-CoV-2 infection in our cSLE patient population.
Methods
We conducted a retrospective chart review of cSLE patients diagnosed with COVID-19 who had received rituximab from February 2019 to October 2022 and had followed up at the pediatric rheumatology clinic at Emory University and Children's Healthcare of Atlanta. All patients met at least 4 of the 17 Systemic Lupus International Collaborating Clinics (SLICC) classification criteria for systemic lupus erythematosus (SLE). For the SLICC criteria, this included at least one clinical and one immunologic criterion. 14 Nephritis was classified according to the International Society of Nephrology classification for lupus nephritis. 15 To provide for therapeutic effect, subjects were included if they had received rituximab between 1 and 8 months prior to positive COVID-19 testing. This time period was selected based on the reported duration of B-cell depletion after rituximab administration. 16 A confirmed case of COVID-19 was defined as a positive result on a reverse transcriptase polymerase chain reaction (RT-PCR) SARS-CoV-2 assay obtained by nasopharyngeal swab. Patients were included in the analysis regardless of the presence or absence of COVID-19-related symptoms at the time of RT-PCR testing. A retrospective chart review was conducted to evaluate clinical characteristics, epidemiological characteristics, disease and illness severity, and outcome. After consultation with the local institutional review board, no ethics board approval was required in accordance with the policy of our institution.
Results
We identified eight patients with cSLE treated with rituximab and subsequently diagnosed with COVID-19 at our center. Patient disease characteristics and epidemiologic data are summarized in Table 1. All patients were female. Of the eight subjects assessed, five required hospitalization to the general ward for COVID-19, four required ICU admission, and two were seen in the emergency department for their symptoms. One patient ultimately expired from her illness. The median time between most recent rituximab administration and COVID-19 diagnosis was 3 months. We have outlined patients' immunosuppressive regimens, clinical course, and COVID-19 treatments as well as time since last rituximab administration (Table 1).
Patient 1
A 16-year-old female originally presented at 14 years of age with symptoms of limited range of motion, swelling, and pain in her knees, along with intractable headache and fever. She was diagnosed with Systemic Lupus Erythematosus (SLE) with antinuclear antibody (ANA) and Smith antibody positivity and was subsequently started on mycophenolate mofetil, hydroxychloroquine, and prednisone at her home facility. She also developed worsening creatinine and proteinuria and subsequently started hemodialysis. Renal biopsy was not obtained prior to starting these therapies. She presented to our center in 2022 in order to establish care for her worsening disease, which included worsening creatinine and eye pain in the setting of retinal detachment newly diagnosed on recent ophthalmologic evaluation.
Additional workup on admission to our facility revealed microhemorrhage on brain imaging with concern for central nervous system (CNS) vasculitis, ground glass opacities on chest imaging consistent with pulmonary vasculitis, and left main coronary and left anterior descending artery dilation on echocardiogram. The patient had end-stage renal disease due to lupus nephritis, with renal biopsy revealing 100% glomerulosclerosis with tubular atrophy. She was admitted to our center for intravenous (IV) pulse dose methylprednisolone, cyclophosphamide, and rituximab infusions (last administered September 2022). Toward the end of her hospitalization in October 2022, she developed tachypnea and crackles on lung auscultation. She was found to be hypogammaglobulinemic with an IgG of 271 mg/dL (Table 1). Due to respiratory distress, she was transferred to the
Patient 2
A 17-year-old female originally presented at 14 years of age with autoimmune hemolytic anemia, hypocomplementemia, and elevated inflammatory markers along with positive ANA and anti-double-stranded DNA (dsDNA) antibodies, and was diagnosed with cSLE. The patient was originally maintained on prednisone, azathioprine, and hydroxychloroquine. She developed persistent headaches with vomiting and underwent brain magnetic resonance imaging (MRI), which showed leptomeningeal enhancement with concern for CNS disease and prompted initiation of rituximab therapy (last administered April 2022) along with transition from azathioprine to mycophenolate mofetil and subsequently mycophenolate sodium due to nausea. She also underwent lumbar puncture with concern for idiopathic intracranial hypertension, for which she started acetazolamide.
She developed cough, fever, fatigue, myalgia, nausea, and sore throat and had a positive at-home COVID-19 PCR test in August 2022. She was treated with nirmatrelvir/ritonavir because of prolonged headaches and fatigue when seen in clinic approximately 1 month after diagnosis. She did not require hospitalization. The patient was COVID-19 unvaccinated.
Patient 3
In December 2021 she developed a fever, chills, and sore throat with myalgia and cough, prompting presentation to the emergency department and a subsequent 3-day hospitalization at an outside facility. The patient was discharged and 3 days later had seizure activity and altered mental status. Her brain MRI was consistent with posterior reversible encephalopathy syndrome in the setting of hypertension, requiring admission to the PICU for a nicardipine drip. She had a positive COVID-19 PCR on admission and was started on remdesivir, IVIG 500 mg/kg given her hypogammaglobulinemia (IgG 142 mg/dL), as well as 5 days of dexamethasone. She clinically deteriorated and was empirically treated with vancomycin, cefepime, acyclovir, and micafungin in the setting of positive herpes simplex virus (HSV) serum testing. She was intubated due to altered mental status and hypoxic respiratory failure in the setting of acute COVID-19 infection. Later she required escalation to an oscillator with inhaled nitric oxide during her third week of hospitalization. Ultimately, the patient expired from pulmonary hemorrhage and disseminated intravascular coagulation despite these interventions in her fourth week of hospitalization. The patient was COVID-19 unvaccinated.
Patient 4
A 19-year-old female originally presented at 17 years of age with malar rash, vasculitic lesions, myalgias, and arthralgia of her bilateral wrists. She had hypocomplementemia, positive ANA, Coombs positive hemolytic anemia, lymphopenia, and positive anti-dsDNA antibody, and was diagnosed with SLE. She began methotrexate, prednisone, mycophenolate mofetil, and hydroxychloroquine for her disease but ultimately required escalation to rituximab in February 2020 given ongoing disease activity. In June 2020 she was admitted with diffuse body aches and initial concerns for a lupus flare, but was ultimately found to be COVID-19 PCR positive (recent family COVID-19 exposure). She lacked fevers, shortness of breath, cough, nausea, and vomiting during her illness. She was discharged approximately 24 h after admission on an increased prednisone dose (40 mg daily). The patient was COVID-19 unvaccinated.
Patient 5
An 18-year-old female originally presented at 16 years of age with polyarthritis, malar rash, serositis, lymphopenia, positive anti-myeloperoxidase, ANA, and anti-dsDNA antibodies, and Coombs-positive hemolytic anemia. She was ultimately diagnosed with SLE. She also presented with coronary artery vasculitis involving her left anterior descending artery and Class IV lupus nephritis, prompting treatment with pulse dose methylprednisolone, cyclophosphamide (six doses), and rituximab (three courses, last administered September 2021). She was otherwise maintained on hydroxychloroquine, mycophenolate sodium, and prednisone for disease control. She developed a cough, nasal congestion, headache, chest pain, and shortness of breath, prompting presentation to the emergency department in December 2021. She did not require hospitalization. Prior to this illness, she had received three doses of the COVID-19 vaccine (BNT162b2 [Pfizer/BioNTech]).
Patient 6
A 16-year-old female originally presented with headache, left facial numbness, and a brain MRI consistent with neuromyelitis optica. She had positive ANA, elevated erythrocyte sedimentation rate, positive anti-Ro/SSA antibody, and was diagnosed with SLE. She started pulse dose methylprednisolone as well as rituximab upon diagnosis, and rituximab was continued for maintenance therapy, ultimately receiving a total of six doses, last administered in July 2022. She was also maintained on hydroxychloroquine and mycophenolate mofetil. She was seen in the emergency department in August 2022 with cough, hemoptysis, and fever and tested positive for COVID-19 by PCR. A chest X-ray was obtained and she was diagnosed with right lower lobe pneumonia with concern for secondary bacterial infection. She was treated as an outpatient with a 7-day course of amoxicillin and did not require hospitalization. Patient was COVID-19 unvaccinated.
Patient 7
A 20-year-old female originally presented at 9 years of age with lymphadenopathy and arthritis of her shoulders and elbows. Subsequent lab workup revealed positive ANA, hypocomplementemia, and positive anti-dsDNA antibody with ultimate diagnosis of SLE. She began hydroxychloroquine and prednisone. Her course was complicated with development of Class IV-Class V lupus nephritis, and she underwent renal transplant in 2017. She was induced with cyclophosphamide and rituximab, with her last infusion prior to COVID-19 infection being in February 2020. She was maintained on tacrolimus, mycophenolate mofetil, prednisone, and hydroxychloroquine. In September 2020 she developed nasal congestion along with 1 week of fevers. Patient was admitted and tested positive for COVID-19 via PCR.
She was treated with IV ceftriaxone with concern for secondary bacterial pneumonia per chest X-ray on admission. Additionally, she was administered IVIG 500 mg/kg and was diagnosed with hypogammaglobulinemia (IgG 434 mg/dL). She required transfer to the PICU for increased respiratory support via high flow nasal cannula. She required a six-day total hospitalization, four of which were spent in the PICU. She received convalescent plasma due to worsening hypoxia and the presence of hypogammaglobulinemia in the setting of severe B-cell depletion after optimization of her prednisone dosing (60 mg daily) and lack of clearance of the virus. Patient had significant clinical improvement approximately 72 h after administration of convalescent plasma. Repeat COVID-19 PCR testing was not performed prior to discharge. Patient had received two doses of the COVID-19 vaccine (Pfizer).
Patient 8
A 19-year-old female originally presented at 16 years of age with fatigue, malar rash, and weight loss. She had positive ANA, positive anti-dsDNA antibody, and hypocomplementemia and was diagnosed with SLE. She received prednisone and hydroxychloroquine, along with rituximab, for disease control, with the last infusion in September 2019. In January 2021, she developed fever, chest pain, and shortness of breath after exposure to multiple ill family members and tested positive for COVID-19 by PCR. She was originally seen at an outside emergency department where a chest computed tomography scan showed concern for pneumonia. She was prescribed a 7-day course of azithromycin and discharged home. Approximately 1 week later, she re-presented to the emergency department due to persistent fevers and development of tachypnea. She was admitted to the floor but ultimately transferred to the PICU for increased respiratory support with high flow nasal cannula, which was discontinued after 5 days. She required an additional 2 days of nasal cannula on the General Ward prior to discontinuation of respiratory support. The patient received remdesivir for a total of 5 days as well as dexamethasone for 8 days prior to discharge home. She had received one dose of the COVID-19 vaccine (Pfizer).
Discussion
Since the start of the COVID-19 pandemic, there has been an attempt to identify which patients are at highest risk of poor outcomes from infection. Data from studies in adults have demonstrated that rituximab in particular may place patients at risk of severe infection. Our case series evaluated the severity of COVID-19 infection after rituximab therapy in children with cSLE. Patients treated with B-cell-depleting therapies often develop a failure to seroconvert after primary infection, independent of viral load and time to viral clearance, as well as impairment of vaccine response. 4,17,18 Studies have shown that rituximab therapy depletes memory B cells, which in turn may cause persistent hypogammaglobulinemia with the possibility of infection-related complications. 3 A study by Ihlow et al. 19 found that in adult patients deceased from COVID-19 there was B-cell depletion in either bone marrow or spleen, with complete plasma cell depletion and severe lymphocytopenia in the peripheral blood. The authors also found a tendency toward higher pulmonary SARS-CoV-2 ribonucleic acid load in COVID-19 patients with B-cell depletion in the setting of active disease. 19 In contrast, a study by Shuwa et al. 20 found alterations in B- and T-cell function in hospitalized adult patients with active COVID-19. Specifically, the authors found a propensity for pro-inflammatory (IL-6+) B-cell expansion in acute COVID-19, as well as increased expression of CD8+ T-cells and of perforin, granzyme, and CD107a during the acute disease process. Of note, in our patient cohort the patients with the lowest absolute total T-cell counts as well as IgG levels less than 300 mg/dL were overall associated with severe to critical disease (Table 1). Previous studies have shown that T-cell responses are impaired in severe SARS-CoV-2 infection, and T-cell immunity plays a vital role in the control of SARS-CoV-2. 21 Additionally, a study by Govender et al. 22 elucidated long-term alterations in T-cell populations associated with COVID-19 pathogenesis. This suggests that T-cell depletion and/or functional impairment in the setting of B-cell depletion and hypogammaglobulinemia with rituximab use may play a role in worse outcomes in the cSLE patient population, although this association needs to be evaluated with further studies.
According to the Centers for Disease Control, during March to February 2022, weekly COVID-related hospitalization rates for children across the United States were 14.5 per 100,000. 23 Additionally, monthly ICU admission rates were approximately 3.5 times as high during the Omicron predominance peak (10.6) compared to the Delta predominance peak (3.0). 23 In-hospital death occurred in 0.6% of total hospitalizations. 23 In comparison, approximately 60% of our patient cohort required hospitalization for their symptoms as a result of their immunocompromised status (Table 1). Of those admitted, 80% required ICU-level care and ultimately one expired (12.5% of our cohort) (Table 1). Overall, our cohort required both higher rates of healthcare utilization and higher-level care compared to national averages, although ultimately four patients recovered without need for hospitalization.
All patients in this series were taking additional immunosuppressive therapies, including mycophenolate mofetil, prednisone, and hydroxychloroquine. Three out of eight patients had been exposed to cyclophosphamide previously. While these medications do not have the prolonged bioavailability of rituximab, many of these medications were repeatedly administered over the course of their treatment. The impact of these ancillary therapies on immune response to COVID-19 remains unclear and should be investigated in future studies in children.
Within our cohort, three patients were diagnosed with superimposed bacterial or viral pneumonia in the setting of acute COVID-19 infection, with one ultimately expiring in the setting of disseminated intravascular coagulation. Adult literature reports that, while co-infection at COVID-19 diagnosis is uncommon, patients with community-acquired co-infections and hospital-acquired superinfections had worse outcomes. 24,25 Patients undergoing immunosuppressive therapy for underlying diseases including malignancy and autoimmune diseases were reported to have several opportunistic infections at the time of COVID-19 diagnosis, including HSV, Mycobacterium tuberculosis, and Toxoplasma gondii. 26 Specifically, among rheumatologic patients receiving rituximab, many required hospitalization with respiratory support due to clinical decompensation. 27 A report by Yarahmadi et al. indicated that, in a cohort of 13 adult patients who had received rituximab (including three with SLE, three with systemic vasculitis, five with rheumatoid arthritis, and two with Sjögren syndrome), eight patients were hospitalized and three ultimately died from acute respiratory distress syndrome. 27 As previously mentioned, there is limited data assessing COVID-19 outcomes in the childhood-onset SLE population prior to this report.
Limitations of our case series include the single-center nature of our review as well as the small sample size. Additionally, the relationship between the timing of therapeutic intervention, onset and duration of PCR positivity, and disease-related outcomes was not assessed in our review given limited follow-up PCR data, and should be investigated in future studies. Further studies are required to elucidate the relationship between concomitant or past medication exposure and specific disease-related processes on the clinical course of COVID-19 infection and outcome after rituximab administration. Future studies should assess the relationship between lupus disease activity, such as through SLE Disease Activity Index scoring as well as clinically distinct disease phenotypes, and COVID-19 severity, both for patients who have received rituximab and those who have not received B-cell-depleting therapies. Additionally, our study did not elucidate an association between COVID-19-related outcomes and vaccination status. Given previous reports of suboptimal immune responses in the setting of immunosuppression, additional studies should focus on serologic responses to COVID-19 vaccination in this particularly vulnerable patient population. 28
Conclusion
We assessed the clinical outcomes of COVID-19, including the need for ICU admission and fatal outcome, in our cSLE patient population after rituximab administration. Our review suggests that T-cell depletion and/or functional impairment in the setting of B-cell depletion and hypogammaglobulinemia with rituximab use may play a role in worse outcomes in the cSLE patient population, although this association needs to be evaluated with further studies. Further investigation of this vulnerable patient population with larger sample sizes is required to further understand the relationships between COVID-19 immunity and immunomodulatory therapies.
Ethics approval and consent to participate
This study was not considered Human Research by the Children's Healthcare of Atlanta Institutional Review Board.
Consent for publication
Telephone consent was obtained from the patients included in this publication, given the minimal risk of the study and for the convenience of the families.
"year": 2023,
"sha1": "68ebdff8e6d62da99d8011ace906eb98677cbc09",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/25151355231181242",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68ebdff8e6d62da99d8011ace906eb98677cbc09",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
Interference Characterization in Downlink Li-Fi Optical Attocell Networks
Wireless access to data using visible light, popularly known as light-fidelity (Li-Fi), is one of the key emerging technologies which promises huge bandwidths and data rates. In Li-Fi, the data is modulated on optical intensities and transmitted and detected using light-emitting diodes (LEDs) and photodiodes, respectively. A network of such LED access points illuminates a given region in the form of attocells. Akin to wireless networks, co-channel interference, or simply interference, is a major impediment in Li-Fi attocell networks. Also, when in such networks the field-of-view (FOV) of a photodiode is limited, the network interference distribution gets affected significantly. So, for any given network scenario, interference characterization is critical for good system design. Currently, there are no good closed-form approximations to interference in Li-Fi attocell networks that can be used for the analysis of signal-to-interference-plus-noise-ratio (or coverage), particularly for the case of limited FOVs. In this paper, using a technique from Fourier analysis, we provide a very close approximation to interference in one and two dimension Li-Fi attocell networks for any given finite inter-LED separation. We validate the interference approximation by providing theoretical error bounds using asymptotics and by performing numerical simulations. We show that our method of approximation can be extended to characterize interference in limited FOV scenarios as well.
I. INTRODUCTION
Light-Fidelity (Li-Fi) is being seen as one of the key emerging technologies to provide wireless access to data using visible light at high data rates [1]. In Li-Fi, the data is usually intensity modulated onto the visible light using light-emitting diodes (LEDs), also called downlink Li-Fi access points. The modulated intensities travel through an optical channel and are detected by a receiver photodiode (PD). Several experiments have been conducted [2], [3], [4], [5], [6] to determine the optical wireless channel model and how it behaves with the transmitted visible light intensities. The channel is usually modelled as a linear and time invariant system [7] and, as a result, time varying fading on the line-of-sight links is absent.
The LED access points are usually arranged in a regular geometry to form a Li-Fi attocell network. In such a network, the LEDs simultaneously transmit information packets on modulated intensities of different colours or light wavelengths. The LEDs transmitting on the same optical wavelength can be considered as co-channel interferers. Co-channel interference, or simply interference, in the downlink of such networks is one of the limiting factors which decreases the downlink system throughput. The interference experienced inside the attocell of the serving LED depends on the location of the user relative to the interferers and the field-of-view (FOV) of the PD¹. Additionally, the limitation of the FOV significantly affects the network interference distribution inside the serving attocell compared to the case when the FOV is π/2 radians. So, for both the scenarios of FOV, the characterization of interference and Signal-to-Interference-plus-Noise-Ratio (SINR) is critical to understand the system performance and for good system design. Moreover, a simple closed form characterization, for both the cases of FOV, can be further used for simple analytical computation of other metrics like probability of coverage and area spectral efficiency.
A. Related works and common approaches
In [8], [9], [10], the SINR has been used to analyze fractional frequency reuse and angle diversity schemes, where the interference is calculated by numerical techniques. The order or number of terms of the interference summation increases linearly with the size of the network and one has to resort to simulations for understanding the behaviour of the system. In [11], the downlink system performance and interference are analyzed in Li-Fi optical attocell networks. There, for a deterministic hexagonal geometry, the interference in an infinite attocell network is approximated by only the first layer of hexagonal interferers around the central attocell using the flower model approximation [12]. Similarly in [13], the interference is obtained as a finite summation over the six interferers in the first layer of the hexagonal LED arrangement. But in a Li-Fi attocell network, when the inter-LED separation reduces, more layers need to be considered in the interference approximation and hence the first layer approximation remains sub-optimal. Moreover, such approximations cannot be extended to any other deterministic lattice and any finite separation between the LEDs. Further, the analysis has been done only for the case of FOV = π/2 radians. (¹When the FOV < π/2 radians, an interfering LED which does not have a line of sight link within the PD's FOV range cannot be considered as a potential interferer.) In [14], the problem of orientation and FOV of the PD in Li-Fi networks has been discussed to derive closed form expressions for the channel gain characteristics and probability of coverage. But the characterization for both one and two dimension attocell networks and for any separation distance between the LEDs has not been shown. In [15], for the calculation of outage probability and SINR in a random deployment of LEDs, the interference is characterized by extracting its moments from its complementary function and an approximation similar to the one in [16]. But an explicit simple closed form expression for interference in a deterministic LED arrangement has not been provided.
B. Our approach and contributions
The contributions of this paper are as follows:
• We assume a regular arrangement of LEDs in both one and two dimensions. For such an arrangement of LEDs, a close approximation to interference has been proposed for any given finite separation between the LEDs. Here we assume that the FOV of the PD used in the network is π/2 radians. So, being a simple closed form expression, large scale network summations are shown to be circumvented using this characterization.
• The above results are generalised to characterize the interference when the photodiodes used in the environment have an FOV < π/2 radians.
• Theoretical error bounds have also been provided for the approximation using asymptotics, which give a clear idea of how good the approximation is for a given set of network parameters. The error bounds are validated through extensive numerical simulations.
This paper is arranged as follows. Section II describes the downlink system model and the arrangement of Li-Fi LEDs in both one and two dimension attocell network models. Section III is the main technical section of the paper, which describes our interference characterization (along with the FOV limitation case) in both one and two dimensions. The paper concludes with Section IV.
II. DOWNLINK SYSTEM MODEL
In this section, we describe the assumptions made for the line of sight channel model and derive the SINR at any location on the ground in such a communication scenario. Also, we describe the attocell network models considered in this study for both one and two dimensions. The attocell dimension and the attocell length both refer to the inter-LED separation in the network.
A. Propagation channel assumptions
The optical wireless channel is considered as a linear time invariant attenuation channel [17]. Further, for simplicity, the small scale path loss or fading due to multipath is neglected in this work. In Li-Fi, the baseband signal modulates the intensity of the optical signal, not the amplitude or phase. This is called intensity-modulation and direct-detection (IM/DD).
In [18], various modulation techniques for Li-Fi have been discussed and compared. In this study, we consider a single carrier method of IM/DD, namely non-return-to-zero on-off keying (NRZ-OOK)². Moreover, we neglect any non-linear effects of the LED during intensity modulation.
[Figure 1. Free space line-of-sight (LOS) light propagation geometry. The triangular shaped LED source is at a height h and distance d from the origin (0, 0) and is tagged to the PD at a distance z on the ground. The PD has a given field-of-view (FOV) θ_f. The free space LOS link from the LED to the PD is shown by the dashed line. The angles θ_{d,t} and θ_{d,r} are respectively the transmission angle at the LED and the incidence angle at the PD with respect to the normals drawn as dotted lines. We assume that the PD has no orientation towards the LED and its surface is parallel to the ground, so θ_{d,t} = θ_{d,r}. θ_h is the half power semi angle (HPSA) of the LED. In this figure, the distance D_d, on ground, between the LED and the PD is z + d. This is adapted from [9].]
Consider the free space Li-Fi downlink of an LED-PD communication scenario shown in Fig. 1 (dashed line). Let the light source be at an elevation height h and distance d from the origin (0, 0) and let the PD be at a distance z on the ground. θ_{d,t} is the transmission angle from the LED which is at a distance d from the origin and θ_{d,r} is the angle of incidence at the PD, from the same LED. We assume that the PD has no orientation towards the LED and its surface is parallel to the ground. So, we have θ_{d,t} = θ_{d,r}. θ_f denotes the FOV of the PD, which is the maximum angle up to which the received rays can be detected. θ_h denotes the half-power-semi-angle (HPSA) of the transmitter LED, which is the angle at which the optical power becomes half of the power at normal. Let A_pd be the light receiving cross sectional area of the PD. Let D_d (which equals z + d in Fig. 1, but not shown explicitly) be the distance on ground between the PD (located at a distance z on the ground from (0, 0)) and the LED (located at (d, h) from (0, 0)). From [9, Eqn. 1], the channel gain from the LED to the PD with a given FOV θ_f is
G_d(z) = [(m + 1) A_pd / (2π (D_d² + h²))] cos^m(θ_{d,t}) cos(θ_{d,r}) ρ(D_d) = [(m + 1) A_pd h^{m+1} / (2π (D_d² + h²)^{(m+3)/2})] ρ(D_d),   (1)
where m = −ln(2)/ln(cos(θ_h)) is the Lambertian emission order of the LED and ρ(D_d) is the FOV constraint function defined as ρ(D_d) = 1 if tan⁻¹(D_d/h) ≤ θ_f and ρ(D_d) = 0 otherwise.
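As a quick numerical illustration of the gain expression above, the following Python sketch evaluates the Lambertian LOS gain for a PD lying on the ground plane, including the FOV indicator ρ(·); the numerical values of h, θ_h, and A_pd are illustrative assumptions rather than values prescribed in this paper.

```python
import numpy as np

def lambertian_order(theta_h):
    """Lambertian emission order m from the half-power semi-angle theta_h (radians)."""
    return -np.log(2.0) / np.log(np.cos(theta_h))

def channel_gain(D, h, theta_h, A_pd, theta_f=np.pi / 2):
    """LOS channel gain for a ground-plane PD at on-ground distance D from an LED at height h.

    G = (m+1) A_pd h^(m+1) / (2*pi*(D^2 + h^2)^((m+3)/2)) * rho(D),
    with rho(D) = 1 when the incidence angle arctan(D/h) lies within the FOV theta_f.
    """
    m = lambertian_order(theta_h)
    rho = (np.arctan2(D, h) <= theta_f).astype(float)
    return (m + 1) * A_pd * h ** (m + 1) / (2 * np.pi * (D ** 2 + h ** 2) ** ((m + 3) / 2)) * rho

if __name__ == "__main__":
    h, theta_h, A_pd = 2.5, np.pi / 3, 1e-4      # assumed: 2.5 m height, 60 deg HPSA, 1 cm^2 PD area
    D = np.linspace(0.0, 3.0, 7)                 # on-ground distances in metres
    print(channel_gain(D, h, theta_h, A_pd))                 # full FOV (pi/2)
    print(channel_gain(D, h, theta_h, A_pd, theta_f=0.6))    # limited FOV zeroes out distant LEDs
```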
B. The SINR expression
Extending the above discussion, we consider the downlink of a Li-Fi attocell network in one dimension to derive the SINR expression. In the attocell network, all the LEDs, as data access points, illuminate a given region in the form of attocells. An attocell is the region of data coverage due to illumination on the ground (or surface) by a particular LED, where this LED becomes the nearest data source to a PD to be tagged upon, inside that region. The optical attocell dimensions are in the range of metres. The co-channel LEDs, which illuminate at the same visible light wavelength, interfere. We consider interference at the PD only due to line of sight LEDs, fixed at a height h and symmetrically arranged with uniform separation a in an infinite one dimension corridor as shown in Fig. 2.
[Figure 2. There are an infinite number of LEDs (circular dots) arranged at an equal interval a all along the corridor, installed at a height h. The rectangular dotted regions on ground depict the attocells corresponding to each LED above. The user PD (small cuboid) at (z, 0) (inside one of the attocells) receives data wirelessly from the tagged-LED corresponding to the attocell in which it is located; that attocell is highlighted as dash-dot. All other LEDs are co-channel interferers. The user PD is assumed to move only along the thick line on ground, i.e., the length of the corridor.]
We assume that all the LEDs operate at the same optical wavelength and transmit at the same average optical power P_o. So, all the LEDs other than the tagged-LED at (0, h) are interferers, as shown in Fig. 2. We calculate the SINR γ(z) at every PD location z inside the attocell. Let x_i(t) be the baseband signal, during the time slot t, from each i-th LED in the network before transmission. Let s_i(t) be the optical IM signal carrying the baseband signal x_i(t) during the time slot t. Using the gain expression in (1) and the geometry of the links in Fig. 1, we can write the gain from the LED at (ia, h) (whose on-ground distance to the PD is |z − ia|) as
G_{ia}(z) = [(m + 1) A_pd h^{m+1} / (2π ((z − ia)² + h²)^{(m+3)/2})] ρ(|z − ia|).   (2)
The signal current I(z, t) (in amperes), received at the PD at (z, 0) with responsivity R_pd during the time slot t, is given as
I(z, t) = R_pd Σ_i G_{ia}(z) s_i(t) + n(t).   (3)
In (3), n(t) is the noise current at the PD, which is modelled as additive white Gaussian noise with a noise power spectral density of N_o. If the total IM bandwidth of the receiver PD is W (which can be assumed to be the total system bandwidth), then the total receiver noise variance at the PD is σ² = N_o W. From [19], the average transmit optical power P_o for every i-th LED can be defined as P_o = E[s_i(t)], where E[·] is the expectation operator over the time slot t. The average received current at the PD from the i-th LED, after suffering through the channel gain, is R_pd P_o G_{ia}(z). So, γ(z) at user position z, given in (4), is the ratio of the received signal power from the tagged LED to the total interference power from the other LEDs plus the noise variance. Now, substituting for G_{ia}(z) from (2) into (4) and further rearranging the constants, we obtain (5), where the constant Ω collects R_pd, P_o, and the Lambertian gain constants.
C. Attocell network models
In this work, we consider two cases of lighting described below.
1) One dimension infinite corridor network: We consider an infinite length corridor, along which an infinite number of LEDs are arranged with uniform spacing a, as shown in Fig. 2. Importantly, we also assume that all the LEDs are Li-Fi capable and all transmit data at the same time along with illumination. The corresponding derivation for the SINR was shown in the previous subsection and was given in (5) and (6), where the interference term³ I_∞(z) is
I_∞(z) = Σ_{i∈Z, i≠0} ρ(|z − ia|) ((z − ia)² + h²)^{−β},   (7)
with β the exponent fixed by the Lambertian order m through the gain expression (2). Also, for a PD with an FOV θ_f = π/2 radians, I_∞(z) in (7) can be written as
I_∞(z) = Σ_{i∈Z, i≠0} ((z − ia)² + h²)^{−β}.   (8)
2) Two dimension infinitely spread square grid network: The two dimension network model is shown in Fig. 3. Let the user PD be located at distance z = √(d_x² + d_y²) from the origin inside the respective attocell of the LED. Here, the tagged-LED, considered at (0, 0, h), has an attocell symmetrically around it on the ground, as a square of dimension a. Similar to the one dimension model, importantly, we here too assume that all the LEDs are Li-Fi capable and all transmit data at the same time along with illumination. From the one dimension case, the same expression for the SINR can be extended to a two dimension scenario. Let the interfering LEDs, indexed by (u, v), be located at (ua, va, h).
[Figure 3. Infinite two dimension plane network: the dotted squares on the ground are the attocells corresponding to each LED above. The user PD (small cuboid) at (d_x, d_y, 0) (inside one of the attocells) receives data wirelessly from the tagged-LED corresponding to the attocell in which it is located; that attocell is highlighted as dash-dot. All other LEDs are co-channel interferers. Here we assume that the user PD can move anywhere on the ground plane.]
The SINR γ(d_x, d_y) at a distance z from the origin has the same form as (5), with the interference term I_∞(d_x, d_y) given by
I_∞(d_x, d_y) = Σ_{(u,v)∈Z², (u,v)≠(0,0)} ρ(D_{u,v}) ((d_x − ua)² + (d_y − va)² + h²)^{−β},   (9)
where D_{u,v} = √((d_x − ua)² + (d_y − va)²) is the on-ground distance to the LED at (ua, va, h). For a PD of FOV θ_f = π/2 radians, I_∞(d_x, d_y) in (9) can be written as
I_∞(d_x, d_y) = Σ_{(u,v)∈Z², (u,v)≠(0,0)} ((d_x − ua)² + (d_y − va)² + h²)^{−β}.   (10)
A closed form expression for the interference term in (7) and (9) (or (8) and (10) for FOV = π/2 radians) is required in both one and two dimension scenarios, which is discussed in the following section.
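Before moving to the closed-form characterization, it is useful to have a brute-force reference for the truncated sums I_n(z) and I_n(d_x, d_y) in (8) and (10). The sketch below evaluates them directly; the exponent β is kept as a free parameter, since its exact mapping to the Lambertian order m depends on whether an optical- or electrical-power SINR convention is used (the value β = m + 3 in the example is an assumption), and the remaining parameter values are illustrative.

```python
import numpy as np

def interference_1d(z, a, h, beta, n):
    """Truncated 1-D interference I_n(z): LEDs at i*a for i = -n..n, tagged LED (i = 0) excluded."""
    i = np.arange(-n, n + 1)
    i = i[i != 0]
    return np.sum(((z - i * a) ** 2 + h ** 2) ** (-beta))

def interference_2d(dx, dy, a, h, beta, n):
    """Truncated 2-D interference I_n(dx, dy): square window of (2n+1)^2 LEDs minus the tagged one."""
    u, v = np.meshgrid(np.arange(-n, n + 1), np.arange(-n, n + 1))
    mask = ~((u == 0) & (v == 0))
    d2 = (dx - u[mask] * a) ** 2 + (dy - v[mask] * a) ** 2
    return np.sum((d2 + h ** 2) ** (-beta))

def sinr(signal_d2, interference, h, beta, noise=0.0):
    """SINR with common constants absorbed: tagged-LED term over interference plus noise."""
    return (signal_d2 + h ** 2) ** (-beta) / (interference + noise)

if __name__ == "__main__":
    a, h, theta_h = 0.5, 2.5, np.pi / 3
    m = -np.log(2.0) / np.log(np.cos(theta_h))
    beta = m + 3                                  # assumed mapping of beta to the Lambertian order
    z = a / 2                                     # cell-edge user in the 1-D corridor
    I1 = interference_1d(z, a, h, beta, n=20)
    I2 = interference_2d(a / 2, a / 2, a, h, beta, n=20)
    print(I1, sinr(z ** 2, I1, h, beta))
    print(I2, sinr(2 * (a / 2) ** 2, I2, h, beta))
```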
III. INTERFERENCE CHARACTERIZATION
In this Section, we characterize the interference as a closed form approximation using the Poisson summation theorem [20], which is stated for reference:
Theorem 1 (Poisson summation). For a suitably well-behaved function q(x),
Σ_{i∈Z} q(i) = Σ_{w∈Z} Q(w),   (11)
where Q(w) = ∫_{−∞}^{∞} q(x) e^{−ι2πwx} dx is the Fourier transform of q(x).
In the following subsections, for the one and two dimension network models in succession, we first proceed with our interference characterization for FOV θ_f = π/2 radians. In the subsection that follows each of these, we show that our method of interference characterization using Fourier analysis can be extended to the case of FOV θ_f < π/2 radians as well.
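Before applying the theorem, a minimal numerical sanity check of its shifted and scaled form, Σ_i q(z + ia) = (1/a) Σ_w Q(w/a) e^{ι2πwz/a}, is given below using q(x) = 1/(x² + h²), whose Fourier transform (π/h)e^{−2πh|w|} is available in closed form; this test kernel and the parameter values are illustrative choices, not the kernel used in the theorems that follow.

```python
import numpy as np

h, a, z = 2.5, 0.5, 0.2                           # illustrative values

q = lambda x: 1.0 / (x ** 2 + h ** 2)             # test kernel
Q = lambda w: (np.pi / h) * np.exp(-2 * np.pi * h * np.abs(w))   # its Fourier transform

i = np.arange(-100000, 100001)
lhs = np.sum(q(z + i * a))                        # spatial-domain lattice sum

w = np.arange(-50, 51)
rhs = np.real(np.sum(Q(w / a) * np.exp(1j * 2 * np.pi * w * z / a)) / a)   # frequency-domain sum

print(lhs, rhs)   # the two sides agree up to the truncation of the spatial sum
```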
A. One dimension model with FOV = π/2 radians
We now look at the interference characterization using the above Poisson summation theorem.
Theorem 2. Consider a photodiode, with θ_f = π/2 radians, situated at a distance z (inside an attocell) from the origin, in an infinite one dimension corridor network of Li-Fi LEDs, emitting light with a Lambertian emission order m, installed at a height h with uniform inter-LED separation distance a. Then, for a wavelength reuse factor of unity, the interference I_∞(z) caused by the co-channel interferers at the photodiode is
I_∞(z) = √π Γ(β − 0.5) h^{1−2β} / (a Γ(β)) − (z² + h²)^{−β} + Σ_{w=1}^{∞} [2^{2−β} √(2π) h^{0.5−β} (2πw)^{β−0.5} / (a^{0.5+β} Γ(β))] K_{β−0.5}(2πhw/a) cos(2πwz/a),
where K_ν(·) is the modified Bessel function of the second kind.
Proof. The proof is provided in Appendix A.
In the next proposition, we quantify the error when the summation in the above infinite series is truncated after⁴ k terms, using the asymptotic notation⁵ O(·).
Proposition 1. From Thm. 2, for a finite integer k, the interference inside an attocell can be approximated by the closed form expression
I_∞(z) = Î_k(z) + O(e^{−2π(k+1)h/a}),   (12)
where
Î_k(z) = √π Γ(β − 0.5) h^{1−2β} / (a Γ(β)) − (z² + h²)^{−β} + Σ_{w=1}^{k} [2^{2−β} √(2π) h^{0.5−β} (2πw)^{β−0.5} / (a^{0.5+β} Γ(β))] K_{β−0.5}(2πhw/a) cos(2πwz/a).
Proof. The proof is provided in Appendix B.
Since in practice the number of LEDs is finite, we also look at I_n(z), i.e., the interference from a finite number of LEDs in (8). In Fig. 4, we observe that as the number of interferers n increases, the interference I_n(z) saturates to a constant value, which is I_∞(z). So, the approximation results in Prop. 1 hold true for a finite number of LEDs as well, even though the results are derived for an infinite corridor.
We now try to understand the interference characterization in Prop. 1 by taking a few theoretical examples and by further validation through numerical simulations. Firstly, in Prop. 1, the interference for any position z of the user inside the attocell always has a constant term
√π Γ(β − 0.5) h^{1−2β} / (a Γ(β)),
which represents the average spatial interference seen at all locations.
(⁴ Over-usage: this variable has been used twice in the paper, but in two completely disjoint and separate contexts. In the one dimension model, k represents the number of terms in the approximation that need to be considered. In the description of the two dimension model, we have used k to denote the frequency term. This is due to a lack of variables and in no way affects the understanding of the paper.)
(⁵ The asymptotic notation f(n) = O(g(n)) is defined as: there exist n₀ and k₁ > 0 such that f(n) ≤ k₁ g(n) for all n > n₀.)
[Figure 4. (One Dimension Model) Interference I_n(z) versus the number of interferers n.]
We see that the asymptotic error in (12) becomes exponentially small when h/a is large. Hence the interference can be well approximated with small values of k as long as the ratio h/a is large. Hence, considering only the k = 0 term, the interference can be approximated as Î_0(z). For example, if we consider a = 0.2 m and h = 2.5 m, leading to h/a = 12.5, we can choose k = 0 and have a theoretical error bound of O(e^{−25π}). This can be verified from Fig. 5. We see that all the terms from w = 1 have negligible contribution.
When h/a is not large, a few more terms (w) are necessary to improve the approximation accuracy. For example, in Fig. 6, when we consider h = 2.5 m and a = 0.5 m, leading to h/a = 5, the terms w = 0 and w = 1 are significant, with an error bound on the terms w > 1 of O(e^{−20π}). So, k = 1, i.e., Î_1(z), is a good approximation for this case. Further, in most practical cases the ratio h/a varies between 2.5 and 5. So, the above approximation Î_1(z) can be extended in general to this practically seen range of h/a because we still have a theoretical asymptotic error bound on the terms w > 1 of O(e^{−10π}).
For numerical validation, firstly, from Fig. 7, we see the tightness of this approximation for the above given range of h/a. We proceed by considering h = 2.5 m and plotting the interference w.r.t. the variation of a from 0.1 m to 1 m (i.e., h/a in the range of 25 to 2.5). We see that I_n(z) (for n = {4, 10, 20, 40}) and Î_1(z) are tightly bounded with each other, which validates our approximation. Now, from the above numerical validation for k = 1, we take a given value of a = 0.5 m and proceed for further numerical validation w.r.t. various system parameters h, θ_h, and z in Fig. 8, 9, and 10 respectively. The corresponding graphs for the approximation error ê = |I_n(z) − Î_1(z)| are respectively shown in Fig. 11, 12, and 13 for different numbers of interferers n. All the simulations are obtained using the parameter values given in Table I.
From Fig. 8 and its corresponding approximation error plot in Fig. 11, we observe that for any given height h, as the number of interferers increases, the error ê decreases. On the log axis, we observe a maximum error ê_max in the order of 10^{-4}, with respect to Î_1(z) that is in the order of 10^{-2}. This error further reduces as the number of interferers is increased. The same can be observed with the variation of HPSA in the graphs of Fig. 9 and the error plot in Fig. 12, where ê_max is in the order of 10^{-7} for Î_1(z) in the order of 10^{-3}. Again, this error reduces as the number of interferers increases. Similarly, in the graphs of Fig. 10 and Fig. 13, we observe ê_max in the order of 10^{-5} for Î_1(z) in the order of 10^{-2}. So, when compared with the interference values, these errors are small, which numerically validates the approximation Î_1(z).
As seen in the above example, Prop. 1 essentially implies that for a given value of h/a the approximation Î_k(z) is tight and very close to the actual interference I_∞(z) in (8), with an approximation error bounded by an exponential decay. So, this characterization can be summarized as I_∞(z) ≈ Î_k(z), with an exponentially small error for moderate to large h/a. This also implies that our characterization provides closed form analytical bounds for interference in finite LED networks.
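The sketch below compares the brute-force sum (8) with the truncated approximation Î_k(z) written out above, using the Fourier-transform pair and constant term reconstructed in Theorem 2 and Proposition 1; the choice β = m + 3 and the remaining parameter values are assumptions made only for illustration.

```python
import numpy as np
from scipy.special import kv, gamma

def I_direct(z, a, h, beta, n=500):
    """Brute-force 1-D interference sum over LEDs i = -n..n, i != 0 (theta_f = pi/2)."""
    i = np.arange(-n, n + 1)
    i = i[i != 0]
    return np.sum(((z - i * a) ** 2 + h ** 2) ** (-beta))

def Q_ft(w, h, beta):
    """Fourier transform of (x^2 + h^2)^(-beta) at frequency w > 0 (modified Bessel form)."""
    return 2 * np.pi ** beta * h ** (0.5 - beta) * w ** (beta - 0.5) \
        * kv(beta - 0.5, 2 * np.pi * h * w) / gamma(beta)

def I_hat(z, a, h, beta, k=1):
    """Truncated Poisson-summation approximation: dc term, minus tagged LED, plus k cosine terms."""
    dc = np.sqrt(np.pi) * gamma(beta - 0.5) * h ** (1 - 2 * beta) / (a * gamma(beta))
    corr = sum(2.0 / a * Q_ft(w / a, h, beta) * np.cos(2 * np.pi * w * z / a)
               for w in range(1, k + 1))
    return dc - (z ** 2 + h ** 2) ** (-beta) + corr

if __name__ == "__main__":
    a, h, theta_h = 0.5, 2.5, np.pi / 3
    beta = -np.log(2.0) / np.log(np.cos(theta_h)) + 3     # assumed beta = m + 3
    for z in (0.0, 0.1, 0.25):
        print(z, I_direct(z, a, h, beta), I_hat(z, a, h, beta, k=1))
```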
B. One dimension model with FOV θ_f < π/2 radians
We now look at the interference characterization when θ_f < π/2 radians. Here we show that the Fourier analysis method can be used to give a suitable interference approximation for such cases as well. The infinite summation in (8) becomes a finite summation when the FOV constraint function ρ(D_d) acts on every interferer. From the proof of Thm. 2, we can modify the function q(·) in (18) as
q′(x) = (x² + h²)^{−β} if |x| ≤ h tan(θ_f), and q′(x) = 0 otherwise.
The Poisson summation theorem can be used to obtain a similar result as in the previous subsection if the Fourier transform of q′(x) can be obtained:
Q′(w) = ∫_{−h tan(θ_f)}^{h tan(θ_f)} (x² + h²)^{−β} e^{−ι2πwx} dx.
Hence we have the following Lemma.
Lemma 1. For an FOV θ_f < π/2 radians and a finite integer k ≥ 1 we have
I_∞(z) ≈ Î′_k(z) = Q′(0)/a − q′(z) + (2/a) Σ_{w=1}^{k} Q′(w/a) cos(2πwz/a).   (13)
Proof. Follows from the Poisson summation theorem and approximations.
The constant term, evaluated at w = 0, is Q′(0)/a, which can be expressed in closed form in terms of the generalized hypergeometric function ₂F₁(·; ·; ·). As earlier, this represents the average spatial interference seen at all locations. The remaining coefficients Q′(w/a) can be evaluated simply by numerical integration.
We consider h = 2.5 m and a = 0.5 m, leading to h/a = 5, to numerically validate (13) for k = 1 over various values of θ_f and compare it with I_∞(z) in (7). In Li-Fi attocell networks, if the FOV θ_f < θ_o = tan⁻¹(a/h), the PD does not experience any interference. Here the ratio a/h = 0.2 and θ_o = 0.197 radians. So, in Fig. 14, we observe that both I_∞(z) and Î′_1(z) drop down to zero once θ_f < θ_o = 0.197 radians. Also, for θ_f > θ_o, both the graphs I_∞(z) and Î′_1(z) are tightly bounded, which numerically validates our proposition in Lem. 1 for k = 1. Also, as θ_f → 1.57 (= π/2) radians, the interference values converge to the earlier case of θ_f = π/2 radians.
[Figure 14. (One Dimension Model) (θ_f < π/2 radians) The variation of Î′_1(z) is drawn for a linear variation of the FOV θ_f of the receiver photodiode (PD). I_∞(z) from (7) (or I_n(z) for n = 20) is also drawn to validate the same. We consider a = 0.5 m, the half-power-semi-angle (HPSA) θ_h of the LED as π/3 radians, the height h of the LED as 2.5 m, and z = a/2.]
So, the approximation above in Lem. 1 is a good approximation for various practical parameter values based on the choice of k. As shown above, if we choose h = 2.5 m and a = 0.5 m, considering k = 1 is sufficient. When h/a becomes small, a few more terms are necessary to improve the approximation accuracy.
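A numerical sketch of the FOV-limited case follows: the truncated kernel q′(x) and its transform Q′(w) are evaluated by quadrature (as suggested above) and reused inside the same Poisson-summation structure as in the full-FOV case. The exact statement of Lemma 1, the value β = 4, and the FOV values are assumptions for illustration; because the hard FOV cutoff makes the spectrum of q′ decay slowly, a few extra cosine terms are used here.

```python
import numpy as np
from scipy.integrate import quad

def Qp(w, h, beta, theta_f):
    """Fourier transform of q'(x) = (x^2+h^2)^(-beta) * 1{|x| <= h*tan(theta_f)}, by quadrature."""
    L = h * np.tan(theta_f)
    # q' is even, so the transform reduces to a cosine integral over [0, L]
    val, _ = quad(lambda x: (x ** 2 + h ** 2) ** (-beta) * np.cos(2 * np.pi * w * x), 0.0, L)
    return 2.0 * val

def I_fov_direct(z, a, h, beta, theta_f, n=500):
    """Brute-force FOV-limited interference: only LEDs whose LOS falls inside the FOV contribute."""
    i = np.arange(-n, n + 1)
    i = i[i != 0]
    D = np.abs(z - i * a)
    keep = D <= h * np.tan(theta_f)
    return np.sum((D[keep] ** 2 + h ** 2) ** (-beta))

def I_fov_hat(z, a, h, beta, theta_f, k=6):
    """Poisson-summation sketch with the truncated kernel; more cosine terms help at small FOVs."""
    out = Qp(0.0, h, beta, theta_f) / a
    if abs(z) <= h * np.tan(theta_f):             # remove the tagged LED only if it is in the FOV
        out -= (z ** 2 + h ** 2) ** (-beta)
    for w in range(1, k + 1):
        out += 2.0 / a * Qp(w / a, h, beta, theta_f) * np.cos(2 * np.pi * w * z / a)
    return out

if __name__ == "__main__":
    a, h, beta, z = 0.5, 2.5, 4.0, 0.25           # beta assumed, as in the earlier sketch
    for theta_f in (0.3, 0.6, 1.0, 1.4):
        print(theta_f, I_fov_direct(z, a, h, beta, theta_f), I_fov_hat(z, a, h, beta, theta_f))
```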
C. Two dimension model with FOV θ_f = π/2 radians
We now extend the result to two dimensions.
Theorem 3. Consider a photodiode, with θ_f = π/2 radians, situated at a distance z = √(d_x² + d_y²) (inside an attocell) from the origin, in an infinite two dimension plane network of Li-Fi LEDs arranged as a regular square lattice of dimension a, emitting light with a Lambertian emission order m and installed at a height h. Then, for a wavelength reuse factor of unity, the interference I_∞(d_x, d_y) caused by the co-channel interferers at the photodiode is
I_∞(d_x, d_y) = (1/a²) Σ_{(w,k)∈Z²} Q(√(w² + k²)/a) e^{ι2π(w d_x + k d_y)/a} − (d_x² + d_y² + h²)^{−β},
where Q(s) = [2π^β / Γ(β)] h^{1−β} s^{β−1} K_{β−1}(2πhs) is the radial Fourier transform of (r² + h²)^{−β} and K_ν(·) is the modified Bessel function of the second kind.
Proof. The proof is provided in Appendix C.
In the next proposition, we quantify the error when the summation in the above infinite series is truncated after j × l terms, using the asymptotic notation O(·), similar to the one dimension case.
Proposition 2. From Thm. 3, for finite integers j ≥ 0 and l ≥ 0, the interference inside an attocell can be approximated by the closed form expression Î_{j,l}(d_x, d_y), obtained by restricting the frequency sum in Thm. 3 to |w| ≤ j and |k| ≤ l, i.e.,
Î_{j,l}(d_x, d_y) = (1/a²) Σ_{w=−j}^{j} Σ_{k=−l}^{l} Q(√(w² + k²)/a) e^{ι2π(w d_x + k d_y)/a} − (d_x² + d_y² + h²)^{−β},
with an approximation error that decays exponentially in h/a.
Proof. The proof is provided in Appendix D.
We give similar theoretical and numerical validations for the two dimension model as for the one dimension model.
Since in practice the number of LEDs is finite, we also look at I_n(d_x, d_y), i.e., the interference from a finite number of LEDs in (10). In Fig. 15, we observe that as the number of interferers n increases, the interference I_n(d_x, d_y) saturates to a constant value, which is I_∞(d_x, d_y). So, the approximation results in Prop. 2 hold true for a finite number of LEDs as well, even though the results are derived for an infinite plane.
We now try to understand the interference characterization in Prop. 2 by taking a few theoretical examples and by further validation through numerical simulations. Firstly, in Prop. 2, the interference for any position (d_x, d_y) of the user inside the attocell always has a constant term given as
π h^{2−2β} / ((β − 1) a²).
This term represents the average spatial interference seen at all locations.
[Figure 15. (Two Dimension Model) Interference I_n(d_x, d_y) versus the number of interferers n for different heights h of the LED installation. We consider a = 0.5 m, the half-power-semi-angle (HPSA) θ_h of the LED as π/3 radians, with d_x = d_y = 0.]
As in the one dimension model, the approximation error depends on the ratio h/a. For larger values of h/a, we can choose j = l = 0, leading to
Î_{0,0}(d_x, d_y) = π h^{2−2β} / ((β − 1) a²) − (d_x² + d_y² + h²)^{−β},
which can be verified from Fig. 16. We see that all the terms from w = k = 1 have negligible contribution.
When h/a is not large, a few more terms (w, k) are necessary to improve the approximation accuracy. For example, in Fig. 17, when we consider h = 2.5 m and a = 0.5 m, leading to h/a = 5, the terms w = k = 0 and w = k = 1 are significant, with an error bound, similar to that in one dimension, for the terms w > 1 and k > 1 of O(e^{−24π}). So, j = l = 1, i.e., Î_{1,1}(d_x, d_y), is a good approximation in this case. Further, in most practical cases the ratio h/a varies between 2.5 and 5. So, the above approximation Î_{1,1}(d_x, d_y) can be extended in general to this practically seen range⁷ of h/a because we still have a theoretical asymptotic error bound on the terms w > 1 and k > 1 of O(e^{−12π}). (⁷ For lower values of h/a, i.e., < 2.5, higher values of (j, l) may have to be considered to improve the approximation accuracy. Also, from the proof of Prop. 2, (j, l) should be chosen suitably for a good approximation.)
For numerical validation, firstly, from Fig. 18, we see the tightness of this approximation for the above given range of h/a. We proceed by considering h = 2.5 m and plotting the interference w.r.t. the variation of a from 0.1 m to 1 m (i.e., h/a in the range of 25 to 2.5). We see that I_n(d_x, d_y) (for n = {8, 15, 24, 35}) and Î_{1,1}(d_x, d_y) are tightly bounded with each other, which validates our approximation. Now, from the above numerical validation for j = l = 1, we take a given value of a = 0.5 m and proceed for further numerical validation w.r.t. various system parameters h, θ_h, and z = √(d_x² + d_y²) in Fig. 19, 20, and 21 respectively. The corresponding graphs for the approximation error ξ̂ = |I_n(d_x, d_y) − Î_{1,1}(d_x, d_y)| are respectively shown in Fig. 22, 23, and 24 for different numbers of interferers n. All the simulations are obtained using the parameter values given in Table I.
[Figure 18. (Two Dimension Model) Interference I_n(d_x, d_y) versus the inter-LED spacing a for different numbers of interferers n in the network, together with the proposed approximation Î_{1,1}(d_x, d_y). We consider the height h of the LEDs as 2.5 m, the half-power-semi-angle (HPSA) θ_h of the LED as π/3 radians, and the position of the receiver photodiode (PD) as d_x = d_y = 0.]
From Fig. 19 and its corresponding approximation error plot in Fig. 22, we observe that for any given height h, as the number of interferers increases, the error ξ̂ decreases. We observe a maximum error ξ̂_max in the order of 10^{-8}, with respect to Î_{1,1}(d_x, d_y) that is in the order of 10^{-3}. This error further reduces as the number of interferers is increased. The same can be observed with the variation of HPSA in the graphs of Fig. 20 and the error plot in Fig. 23, where ξ̂_max is in the order of 10^{-5} for Î_{1,1}(d_x, d_y) in the order of 10^{-1}. Again, this error reduces as the number of interferers increases. Similarly, in the graphs of Fig. 21 and Fig. 24, we observe ξ̂_max in the order of 10^{-7} for Î_{1,1}(d_x, d_y) in the order of 10^{-2}. So, when compared with the interference values, these errors are small, which numerically validates the approximation Î_{1,1}(d_x, d_y).
[Figure 19. (Two Dimension Model) Interference I_n(d_x, d_y) versus the height h of installation of the LED for different numbers of interferers n in the network, together with the proposed approximation Î_{1,1}(d_x, d_y). We consider a = 0.5 m, the half-power-semi-angle (HPSA) θ_h of the LED as π/3 radians, and d_x = d_y = 0.]
[Figure 20. (Two Dimension Model) Interference I_n(d_x, d_y) versus the half-power-semi-angle (HPSA) θ_h of the LED for different numbers of interferers n in the network, together with the proposed approximation Î_{1,1}(d_x, d_y). We consider the attocell length a = 0.5 m, the height h of the LED as 2.5 m, and d_x = d_y = 0.]
As seen in the above example, Prop. 2 essentially implies that for a given value of h/a the approximation Î_{j,l}(d_x, d_y) is tight and very close to the actual interference I_∞(d_x, d_y) in (10), with an approximation error bounded by an exponential decay. Hence, the above discussion can be summarized as I_∞(d_x, d_y) ≈ Î_{j,l}(d_x, d_y). Similar to the one dimension model, this also implies that our characterization provides closed form analytical bounds for interference in finite LED networks.
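For completeness, the two dimension counterpart of the earlier sketch is given below: the brute-force lattice sum (10) is compared against the truncated Poisson-summation form built from the radial Fourier transform of (r² + h²)^{−β} and the constant term πh^{2−2β}/((β − 1)a²) discussed above. As before, β = 4 and the remaining parameter values are assumptions made only for illustration.

```python
import numpy as np
from scipy.special import kv, gamma

def I2_direct(dx, dy, a, h, beta, n=150):
    """Brute-force 2-D lattice interference over a (2n+1)x(2n+1) window, tagged LED excluded."""
    u, v = np.meshgrid(np.arange(-n, n + 1), np.arange(-n, n + 1))
    mask = ~((u == 0) & (v == 0))
    d2 = (dx - u[mask] * a) ** 2 + (dy - v[mask] * a) ** 2
    return np.sum((d2 + h ** 2) ** (-beta))

def Q2(s, h, beta):
    """Radial 2-D Fourier transform of (r^2 + h^2)^(-beta); the s = 0 limit is the plain integral."""
    if s == 0.0:
        return np.pi * h ** (2 - 2 * beta) / (beta - 1)
    return 2 * np.pi ** beta / gamma(beta) * h ** (1 - beta) * s ** (beta - 1) \
        * kv(beta - 1, 2 * np.pi * h * s)

def I2_hat(dx, dy, a, h, beta, j=1, l=1):
    """Truncated Poisson-summation approximation on the square lattice, tagged LED removed."""
    acc = 0.0
    for w in range(-j, j + 1):
        for k in range(-l, l + 1):
            s = np.hypot(w, k) / a
            acc += Q2(s, h, beta) * np.cos(2 * np.pi * (w * dx + k * dy) / a)
    return acc / a ** 2 - (dx ** 2 + dy ** 2 + h ** 2) ** (-beta)

if __name__ == "__main__":
    a, h, beta = 0.5, 2.5, 4.0                    # beta assumed, as in the 1-D sketch
    for dx, dy in ((0.0, 0.0), (0.2, 0.1), (0.25, 0.25)):
        print(dx, dy, I2_direct(dx, dy, a, h, beta), I2_hat(dx, dy, a, h, beta))
```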
[Figure 21. (Two Dimension Model) Interference I_n(d_x, d_y) versus the radial position z = √(d_x² + d_y²) of the receiver photodiode (PD) inside the attocell for different numbers of interferers n in the network. We consider a = 0.5 m, the half-power-semi-angle (HPSA) θ_h of the LED as π/3 radians, and the height h of the LED as 2.5 m.]
D. Two dimension model with FOV θ_f < π/2 radians
We now look at the interference characterization when θ_f < π/2 radians. Here we show that the Fourier analysis method can be used to give a suitable interference approximation for such cases as well.
Hence we have the following Lemma.
Lemma 2.
For an FOV θ_f < π/2 radians and finite integers j ≥ 0 and l ≥ 0 we have
I_∞(d_x, d_y) ≈ Î′_{j,l}(d_x, d_y) = (1/a²) Σ_{w=−j}^{j} Σ_{k=−l}^{l} Q′(w/a, k/a) e^{ι2π(w d_x + k d_y)/a} − q′(d_x, d_y),   (15)
where q′(x, y) = (x² + y² + h²)^{−β} for √(x² + y²) ≤ h tan(θ_f) and 0 otherwise, and Q′(·, ·) is its two dimensional Fourier transform.
Proof. Follows from the Poisson summation theorem and approximations.
The constant term, evaluated at w = k = 0, is Q′(0, 0)/a². As earlier, this represents the average spatial interference seen at all locations. The coefficients Q′(w/a, k/a) can be obtained simply from numerical integration.
Similar to the one dimension model, we consider h = 2.5 m and a = 0.5 m, leading to h/a = 5, to numerically validate (15) for j = l = 1 over various values of θ_f and compare it with I_∞(d_x, d_y) in (9). In Li-Fi attocell networks, if the FOV θ_f < θ_o = tan⁻¹(a/h), the PD does not experience any interference. Here the ratio a/h = 0.2 and θ_o = 0.197 radians. So, in Fig. 25, we observe that both I_∞(d_x, d_y) and Î′_{1,1}(d_x, d_y) drop down to zero once θ_f < θ_o = 0.197 radians. Also, for θ_f > θ_o, both the graphs I_∞(d_x, d_y) and Î′_{1,1}(d_x, d_y) are tightly bounded, which numerically validates our proposition in Lem. 2 for j = l = 1. Also, as θ_f → 1.57 (= π/2) radians, the interference values converge to the earlier case of θ_f = π/2 radians for the two dimension model, giving similar validation results as in the one dimension model. So, the approximation above in Lem. 2 is a good approximation for various practical parameter values based on the choice of (j, l). As shown above, if we choose h = 2.5 m and a = 0.5 m, considering j = l = 1 is sufficient. When h/a becomes small, a few more terms are necessary to improve the approximation accuracy.
IV. CONCLUSION
In this work, the Poisson summation theorem has been used to provide a simple closed form approximation to co-channel interference in Li-Fi attocell networks for both one and two dimensions. We also show that the approximation has an error that is tight with respect to an exponential decay for a given set of system parameters. The advantage of this characterization is that it can be used to compute interference power with a high degree of accuracy for any given finite separation between the LEDs and to provide upper bounds for interference in finite attocell networks. Using this characterization, large scale network interference summations can be circumvented and important metrics like probability of coverage, area spectral efficiency, optimal LED spacing, etc., can be analytically computed in an easy way. Further, we show that our method of Fourier analysis can be extended to characterize interference when the user PDs have limited FOVs as well.
[Figure 25. (Two Dimension Model) (θ_f < π/2 radians) The variation of Î′_{1,1}(d_x, d_y) is drawn for a linear variation of the FOV θ_f of the receiver photodiode (PD). I_∞(d_x, d_y) from (9) (or I_n(d_x, d_y) for n = 24) is also drawn to validate the same. We consider a = 0.5 m, the half-power-semi-angle (HPSA) θ_h of the LED as π/3 radians, the height h of the LED as 2.5 m, and d_x = d_y = 0.]
APPENDIX A
PROOF OF THEOREM 2
Proof. From (7), for θ_f = π/2, we can write the interference term I_∞(z) as the lattice sum of a shifted kernel, as in (16). We scale and shift the function q(i) in (11) as q(z + ia) and use the time shifting and scaling properties of the Fourier transform to obtain (17). In (17), from (16), we consider a real and even function q(i) given as
q(x) = (x² + h²)^{−β}.   (18)
Correspondingly, its Fourier transform Q(w) = ∫_{−∞}^{∞} q(x) e^{−ι2πwx} dx will also be real and even [22, Prop. 4.3.3] and is given as
Q(w) = [2^{1−β} √(2π) / Γ(β)] h^{0.5−β} (2π|w|)^{β−0.5} K_{β−0.5}(2πh|w|).   (19)
Now, substituting (18) and (19) into (17) gives (20). We remove the redundant addition of the i = 0 term from both sides of (20), which refers to the signal power from the tagged LED source at the origin, and we get (21). Now, since the Fourier transform Q(w) is real and even, we can modify (21) as
I_∞(z) = √π Γ(β − 0.5) h^{1−2β} / (a Γ(β)) − (z² + h²)^{−β} + Σ_{w=1}^{∞} [2^{2−β} √(2π) h^{0.5−β} (2πw)^{β−0.5} / (a^{0.5+β} Γ(β))] K_{β−0.5}(2πhw/a) cos(2πwz/a),
where (a) follows from the fact that Q(w) is real and (b) follows from the fact that Q(w) is even, hence proving the theorem.
APPENDIX B
PROOF OF PROPOSITION 1
Using this result and the fact that, for large k or large h/a, the leading neglected term corresponds to w₁ = k + 1, we obtain the bound in (12), proving the proposition.
APPENDIX C
PROOF OF THEOREM 3
Proof. From (9), for θ_f = π/2, we can write the interference term I_∞(d_x, d_y) as in (26). For two dimensions, we can scale and shift the function q(u, v) in (11) as q(d_x + ua, d_y + va) and apply the two dimensional Poisson summation theorem from [25]. Now, from (26), q(u, v) can be expressed as a real and even function, given as q(u, v) = (u² + v² + h²)^{−β}.
Let Q(s) be the radial Fourier transform of q(r). We evaluate this using the Hankel transform [21] for two dimensions; for a radially symmetric function in two dimensions, it is given by Q(s) = 2π ∫_0^∞ r q(r) J_0(2πsr) dr, where J_0(·) is the Bessel function of the first kind of order zero.
"year": 2017,
"sha1": "5989000969210c83dd8b536132bc031565051b20",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1712.04694",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6b6512983429e9c379e3b765acb93ccf0f008fd4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
Bi-Objective Optimization Method and Application of Mechanism Design Based on Pigs' Payoff Game Behavior
It takes two design goals as different game players, and the design variables are divided into strategy spaces owned by the corresponding game players by calculating impact factors and by fuzzy clustering. By analysis of the behavior characteristics of the two kinds of intelligent pigs, the big pig's behavior is cooperative and collective, whereas the small pig's behavior is noncooperative; these behaviors are endowed to the corresponding game players. The two game players establish the mapping relationship between the game players' payoff functions and the objective functions. In its own strategy space, each game player takes its payoff function as a mono-objective for optimization and gives its best strategy given the other player's strategy. All the best strategies are combined into a game strategy set. With convergence and multiround game play, the final game solution is obtained. Taking the bi-objective optimization of the luffing mechanism of a compensative shave block as an example, the results show that the method can effectively solve bi-objective optimization problems with a preferred target, and the efficiency and accuracy are also good.
Introduction
Multiobjective optimization problems are very common in actual engineering design. The essential characteristics of multiobjective optimization are as follows: (1) there exist several objective interests; (2) the statuses of the various objectives are different and have conflicts. The solution methods are diverse; the latest research is as follows: Akbari and Ziarati [1] applied a novel bee swarm optimization method to obtain a uniformly distributed Pareto front. Ismail et al. [2] proposed a new self-organizing genetic algorithm for multiobjective optimization problems to obtain better values compared to the existing weighted-sum methods. Lee et al. [3] used a multiobjective fuzzy optimization method to obtain the optimal parameters of a rotor experimental apparatus. Ding et al. [4] proposed a new multiobjective optimization algorithm named KSVC-SPEA to effectively achieve the overall performance of an injection molding machine.
In recent years, considering the similarity between multiobjective design and games, game theory has been used to solve multiobjective design problems, especially practical problems in engineering fields. According to the different behaviors of each game player seeking benefit, games can be divided into noncooperative games and cooperative games. In a noncooperative game, each player benefits from competitive behavior patterns; the typical models are the Nash equilibrium game model and the Stackelberg oligopoly game model. A cooperative game is defined as one in which game players abide by a binding agreement, benefiting from cooperative behavior patterns. The typical binding agreements contain three types, known as the "self-interest but do not harm the others" competitive and cooperative game model, the "You have me, I have you" coalition cooperative game model, and the "all for one and one for all" unselfish cooperative game model. Regarding the use of noncooperative games to solve multiobjective design, Spallino and Rizzo [5] proposed a noncooperative game optimization method based on evolutionary strategy for the multiobjective design of composite laminates, which treated each game player as an equal body and eventually found a Nash equilibrium point through negotiation functions. Neng-gang et al. [6] established a multiobjective game design technology roadmap and key indicators based on the Nash equilibrium model and the Stackelberg oligopoly game model and successfully applied them to multiobjective optimization designs such as a gravity dam, the structure of an arch-arch ring, and the luff mechanism of a compensative sheave block. In the use of cooperative games to solve multiobjective design, Chen and Li [7] proposed a three-tier two-objective optimization method and applied it to concurrent product and process optimization in manufacturing; Neng-gang et al. [8] adopted a competitive-cooperative game model to conduct a multiobjective optimization design and obtained a good design. However, whether noncooperative or cooperative game methods are used to solve multiobjective design problems, once the game method is selected, the behavior modes of all players remain unchanged during the whole process. But this is an ideal situation; each player's behavior is diverse in many survival games in nature. Neng-gang et al. [9] proposed a mixed game model according to the diversity of behavior patterns caused by differences in the resources and endowments of each player. Through bionics of the survival and reproduction mechanisms of lizard species, a typical mixed game model is presented, which consists of both competitive behavior patterns and cooperative behavior patterns of "all for one and one for all" and "benefits oneself but does not harm other people". This method solves well the problem of the oneness of constructing payoff functions, but there exist two shortcomings: (1) it can only be applied to three or more objectives and cannot solve two-objective optimization problems; (2) it cannot solve "principal and subordinate" optimization problems, that is, optimization problems with target preference. To compensate for this deficiency and improve the game method for solving optimization problems, a bi-objective optimization method is proposed based on pigs' payoff behavior, which can be applied to two-objective optimization problems with target preference.
Pigs' Payoff Game Model
The American economist Nash, winner of the Nobel Prize in Economics, proposed the "Pigs' Payoff" game. It is shown in Figure 1 and is as follows: there are a big pig and a small pig in a pigsty. One side of the pigsty has a food slot and the other side has a food control button.
Whether it is the big pig or the small pig, a pig pays a 2-unit energy cost if it presses the food control button, and 10 units of food fall into the food slot in return. If the big pig arrives at the food slot first, the benefit ratio of the big pig to the small pig is 9 : 1. If the big pig and the small pig arrive at the food slot at the same time, the benefit ratio of the big pig to the small pig is 7 : 3.
If the small pig arrives at the food slot first, the benefit ratio of the big pig to the small pig is 6 : 4. The payoff matrix is shown in Table 1. On the premise that both the big pig and the small pig are intelligent, the final game result is that the big pig presses the button and the small pig does not press the button but chooses to wait [10].
From the result of the behavior, the strategy of waiting is a selfish, noncooperative behavior and the strategy of pressing the button is a collective, cooperative behavior. Hence, the two game players, the big pig and the small pig, adopt two different behavior modes and constitute a hybrid game mode. The equilibrium solution (4, 4) is a Pareto solution.
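This equilibrium can be checked mechanically from the payoffs implied by the description above (a 2-unit pressing cost, 10 units of food, and the 9 : 1, 7 : 3, and 6 : 4 splits). The short Python sketch below enumerates best responses to locate the pure-strategy equilibrium; it is an illustrative check, and the payoff entries are derived from the stated numbers rather than copied from Table 1.

```python
# Payoffs (big pig, small pig) implied by the description: pressing costs 2 units, 10 units of
# food drop, and the splits are 9:1 (big arrives first), 7:3 (together), 6:4 (small arrives first).
payoff = {
    ("press", "press"): (7 - 2, 3 - 2),   # both press and arrive together -> (5, 1)
    ("press", "wait"):  (6 - 2, 4),       # big presses, small waits at the trough -> (4, 4)
    ("wait",  "press"): (9, 1 - 2),       # small presses, big waits at the trough -> (9, -1)
    ("wait",  "wait"):  (0, 0),           # nobody presses, no food
}
strategies = ("press", "wait")

def best_response_big(s_small):
    return max(strategies, key=lambda s: payoff[(s, s_small)][0])

def best_response_small(s_big):
    return max(strategies, key=lambda s: payoff[(s_big, s)][1])

# Pure-strategy equilibria: profiles in which both strategies are mutual best responses.
equilibria = [(b, s) for b in strategies for s in strategies
              if b == best_response_big(s) and s == best_response_small(b)]
print(equilibria, [payoff[e] for e in equilibria])   # [('press', 'wait')] [(4, 4)]
```

The unique pure-strategy equilibrium found this way is (press, wait) with payoff (4, 4): waiting dominates for the small pig, and pressing is the big pig's best response to a waiting small pig, in line with the discussion above.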
The Technology Principle
The design variables are X = (x₁, x₂, ..., xₙ) ∈ Ωⁿ, and the objective functions to be minimized are
min F(X) = {f₁(X), f₂(X)}, subject to g_j(X) ≤ 0, j = 1, ..., q,
where n is the number of design variables, q is the number of constraint conditions, and Ωⁿ is the feasible space of the design variables. Meanwhile, the definition of the game is as follows: G_m represents one game. If G_m has 2 players (note that the number of players is equal to the number of objective functions), the sets of available strategies are denoted by S₁, S₂ and the payoff functions are u₁, u₂. Hence, the game with 2 players can be written as G_m = {S₁, S₂; u₁, u₂}. The basic idea of the bi-objective optimization method based on the game is as follows: (1) the 2 design objectives are seen as 2 players, and the design variables X are divided into strategy subsets S₁, S₂ of the corresponding players by certain technical methods; (2) according to the specific game model, mapping relationships are established between the payoff functions u and the objective functions F; (3) each player takes its own payoff function as its objective and obtains a single-objective optimal solution in its own strategy subset, so that this player obtains the best strategy versus the other player. The best strategies of all players form the group strategy in this round. The final equilibrium solution can be obtained through a multiround game according to the convergence criterion.
The payoff function u is closely related to the game model. The different behavior characteristics of the big pig and the small pig are assigned to the corresponding game players based on the pigs' payoff game behavior model; the payoff functions u are then constructed according to the corresponding behavior characteristics.
Game Player's Strategy Subset Computation
Fuzzy mathematics has been successfully used in related design fields through multidisciplinary research. It has been applied in filter design [11], T-S fuzzy systems [12,13], and T-S fuzzy stochastic systems [14], and abundant research results have been obtained. In this paper, the design variables are divided into each game player's strategy subsets S_1, S_2 by calculating impact factors and performing fuzzy clustering based on fuzzy mathematics.
Computation steps are as follows.
(1) Optimize the 2 single objectives separately and obtain the corresponding optimal solutions. (2) Divide every x_j into T fragments with step length Δx_j within its feasible space; Δ_ji is the impact factor of x_j on the objective f_i.
(3) To avoid the self-scaling effect of the different objective functions, the impact factors are made dimensionless, giving Δ_j (j = 1, ..., n), where Δ_j denotes the set of impact factors of x_j on all the players. The purpose is to classify highly similar samples into one class; this paper uses a similarity-degree approach to reflect the similarity relation between samples. Select any two samples Δ_k and Δ_l and analyze their similarity relation; a fuzzy relation function μ_i(Δ_k, Δ_l) is defined by a normal (Gaussian) distribution, where μ_i(Δ_k, Δ_l) is the fuzzy relation between Δ_k and Δ_l in the ith objective function.
The correlation degree of Δ_k and Δ_l is denoted r_kl. (4) Establish the matrix R = (r_kl) based on the r_kl values and perform fuzzy clustering on R. The classification results of Δ represent the classification results of X because of the one-to-one relationship between Δ = {Δ_1, Δ_2, ..., Δ_n} and X = {x_1, x_2, ..., x_n}.
(5) According to the fuzzy clustering, divide the design variables X into strategy subsets S_1, ..., S_m and assign each strategy subset to the corresponding player according to the average value of the impact factors. From a statistical viewpoint [15], when the numbers of design variables and objective functions are small, the variable set X can be divided directly into the strategy spaces S_1, S_2 according to the values of the impact factors; when they are large, fuzzy clustering is needed. In addition, based on experience, variables with strong correlation can first be grouped into one sample to reduce the complexity of the clustering analysis.
Input the system's classification control value M and the maximal sample number P; with each sample initially forming one class, the system is {Δ_1, Δ_2, ..., Δ_n}.
The steps of clustering are as follows.
(1) Calculate the correlation degrees r_kl and build the matrix R_0; note that r_kl = r_lk and r_kl > 0.
(2) Let the maximum value of matrix R_0 be r_ab, i.e., r_ab = max_{k,l∈{1,2,...,n}} r_kl, and classify Δ_a and Δ_b into a new class Δ_s; if the resulting sample number is larger than P, combine the pair corresponding to the second-largest value of R_0 instead.
(3) Combine Δ_c (c = 1, 2, ..., n; c ≠ a, c ≠ b) and Δ_s into a new classification system, calculate its correlation degrees, and build a new matrix R_1; the correlation degree of any class Δ_c with Δ_s is r_cs = min{r_ca, r_cb}.
(4) Repeat procedures (1), (2), and (3) until the number of classes in the system equals the control value M.
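As a concrete illustration of this subsection, the sketch below computes impact factors by sweeping each variable over T fragments, makes them dimensionless, builds a Gaussian similarity measure, and merges classes with the r_cs = min{r_ca, r_cb} rule until M classes remain. The sampling scheme, the Gaussian width, and the toy objective functions are assumptions made for illustration only, since the paper's exact formulas are not reproduced in this excerpt.

```python
import itertools
import numpy as np

# Toy bi-objective problem (illustrative only).
def f1(x): return (x[0] - 1.0) ** 2 + 0.1 * x[2] ** 2
def f2(x): return (x[1] + 2.0) ** 2 + 2.0 * x[2] ** 2

objectives = [f1, f2]
lower = np.array([-5.0, -5.0, -5.0])
upper = np.array([5.0, 5.0, 5.0])
n, m, T = 3, 2, 20          # design variables, objectives, fragments per variable
x_ref = (lower + upper) / 2.0

# Impact factor of x_j on f_i: mean absolute change of f_i while x_j sweeps its range
# (one plausible realisation of the "divide x_j into T fragments" description).
delta = np.zeros((n, m))
for j in range(n):
    grid = np.linspace(lower[j], upper[j], T)
    for i, f in enumerate(objectives):
        vals = []
        for g in grid:
            x = x_ref.copy(); x[j] = g
            vals.append(f(x))
        delta[j, i] = np.mean(np.abs(np.diff(vals)))

# Make the impact factors dimensionless per objective.
delta_norm = delta / delta.sum(axis=0, keepdims=True)

# Gaussian (normal-distribution) fuzzy relation, averaged over objectives,
# gives the correlation degree r_kl between samples Delta_k and Delta_l.
def r(k, l, sigma=0.25):
    mu = np.exp(-((delta_norm[k] - delta_norm[l]) ** 2) / (2 * sigma ** 2))
    return float(np.mean(mu))

# Agglomerative merging with r_cs = min(r_ca, r_cb) until M classes remain.
M = 2
classes = [[j] for j in range(n)]
while len(classes) > M:
    best = max(itertools.combinations(range(len(classes)), 2),
               key=lambda p: min(r(a, b) for a in classes[p[0]] for b in classes[p[1]]))
    a, b = best
    merged = classes[a] + classes[b]
    classes = [c for idx, c in enumerate(classes) if idx not in (a, b)] + [merged]

# Assign each class to the player whose objective it influences most (average impact).
for c in classes:
    player = int(np.argmax(delta_norm[c].mean(axis=0)))
    print(f"strategy subset S_{player + 1}: variables {[f'x{j + 1}' for j in c]}")
```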
Behavior Modes and Construction of Game Payoff Functions
The characteristic of the small pig is the competitive behavior mode, and its payoff function is built from its own normalized objective, where F is a reference value that eliminates the differences in magnitude between the objective functions; in this paper, the initial objective function value is chosen as F. The characteristic of the big pig is the cooperative behavior mode, and its payoff function is built as a weighted combination of the normalized objectives, where the weights satisfy Σ_{j=1}^{m} w_ij = 1 and the value of w_ii reflects the degree to which the player considers its own interest: the greater the value, the lower the degree of cooperation.
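The payoff-function formulas themselves are not reproduced in this excerpt; the sketch below shows one construction consistent with the description above, in which the small pig minimizes only its own normalized objective while the big pig minimizes a weighted sum of all normalized objectives with weights summing to one and w_ii controlling how selfish it is. The function names and default weight are illustrative assumptions.

```python
# One plausible construction of the two payoff functions (an assumption consistent
# with the text, not the paper's exact formulas). F_ref holds the initial objective
# values used to remove differences in magnitude between the objectives.

def make_small_pig_payoff(f_own, F_ref_own):
    """Competitive behavior: the player cares only about its own normalized objective."""
    return lambda x: f_own(x) / F_ref_own

def make_big_pig_payoff(objectives, F_ref, i, w_ii=0.5):
    """Cooperative behavior: weighted sum of all normalized objectives, weights sum to 1."""
    m = len(objectives)
    w = [(1.0 - w_ii) / (m - 1)] * m   # share the remaining weight equally
    w[i] = w_ii                        # larger w_ii -> lower degree of cooperation
    return lambda x: sum(wj * f(x) / ref for wj, f, ref in zip(w, objectives, F_ref))
```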
Algorithm Procedures and Flow Chart
(1) Obtain the strategy subsets S_1, S_2 attached to each player by calculating the impact factors and performing fuzzy clustering.
(2) The payoff function u_i of each player i (i = 1, 2) is constructed according to the characteristics of the small pig and the big pig described in Section 3.2 above. For each player i (i = 1, 2), solve for the optimal strategy s_i* ∈ S_i that minimizes its payoff. (5) Define the optimal strategy permutation s^(1) = s_1* ∪ s_2*, then judge the feasibility of s^(1). If g_k(s^(1)) ≤ 0 (k = 1, 2, ..., q) is not satisfied, return to step (3). Otherwise, compute the distance between s^(1) and s^(0), namely the Euclidean norm, and examine whether it satisfies the convergence criterion ‖s^(1) − s^(0)‖ ≤ ε (ε is a small parameter given in advance). If it does, the game is over; if not, let s^(1) replace s^(0) and return to step (4) to repeat. The algorithm chart is shown in Figure 2 (note: if the big pig stands for F_1, then the small pig stands for F_2, and if the big pig stands for F_2, then the small pig stands for F_1).
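A minimal, self-contained sketch of the multi-round loop described above is given below. The toy objectives, the variable split, the box constraints, the cooperative weight of 0.5, and the use of SciPy's general-purpose minimizer are all illustrative assumptions; the roles follow the rule that the preferred objective plays the small pig.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem; x1, x2 belong to player 1 and x3 to player 2 (assumed split).
def F1(x): return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2 + 0.1 * x[2] ** 2
def F2(x): return x[0] ** 2 + (x[2] + 1.0) ** 2

subsets = {1: [0, 1], 2: [2]}             # strategy subsets S1, S2 (variable indices)
bounds = [(-5, 5)] * 3                    # box constraints stand in for g_k(X) <= 0
F_ref = None                              # filled with the initial objective values below

def u1(x):  # small pig (preferred objective F1): competitive payoff
    return F1(x) / F_ref[0]

def u2(x):  # big pig (F2): cooperative payoff with w_22 = 0.5
    return 0.5 * F2(x) / F_ref[1] + 0.5 * F1(x) / F_ref[0]

payoffs = {1: u1, 2: u2}

def best_response(player, s):
    """Minimize the player's payoff over its own subset, other variables held fixed."""
    idx = subsets[player]
    def wrapped(z):
        x = s.copy(); x[idx] = z
        return payoffs[player](x)
    res = minimize(wrapped, s[idx], bounds=[bounds[i] for i in idx])
    out = s.copy(); out[idx] = res.x
    return out

s0 = np.array([0.0, 0.0, 0.0])            # initial feasible strategy permutation s^(0)
F_ref = [F1(s0), F2(s0)]
for rnd in range(100):
    s1 = s0.copy()
    for player in (1, 2):                 # each player replies on its own subset
        s1[subsets[player]] = best_response(player, s1)[subsets[player]]
    if np.linalg.norm(s1 - s0) <= 1e-4:   # convergence criterion ||s1 - s0|| <= eps
        break
    s0 = s1
print(f"equilibrium after {rnd + 1} rounds: x = {s1}, F1 = {F1(s1):.4f}, F2 = {F2(s1):.4f}")
```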
The Design Model
The luffing mechanism of a compensative sheave block is a working device that realizes the required load-radius (luffing) range and is widely used in hoisting machinery. In its working process there is a stability goal, namely that the goods should move along a horizontal path, and an economic goal, namely that less energy should be consumed; the design is therefore a multiobjective optimization problem. The luffing mechanism of the compensative sheave block is shown in Figure 3. The design variables are X = (x_1, x_2, x_3, x_4, x_5). The constraints are that the design variables must stay within their upper and lower limits and that the luffing range must not exceed the prescribed range. The objective functions are F_1 (the stability index) and F_2 (the economic index).
The Objective Function of the Stability Index
Consider the luffing geometry, where R is the luffing radius (amplitude) and α is the elevation angle. Taking the mechanism at the largest amplitude as the starting point, the rise of the load relative to the starting point at any time t is Δz(t) = L[sin(ωt) − sin α_1] + x_3[cos(α_1 + x_5) − cos(ωt + x_5)] (4.2), where ω = (α_2 − α_1)/T is the angular velocity, T is the total time of the luffing motion, α_1 is the elevation angle at R_max (the maximum luffing radius), and α_2 is the elevation angle at R_min (the minimum luffing radius).
The drop of the load relative to the starting point at any time t due to the release of the rope is Δl(t), where m_q is the number of wire ropes of the lifting pulley and m_b is the number of wire ropes of the compensation pulley. The deviation relative to the starting point at any time t is then Δh(t).
Thus, the objective function of the stability index, F_1, is defined in terms of the deviation Δh(t).
The Objective Function of the Economic Index
The energy consumption at any time t is P(t), where M_q(t) is the torque. For a system without frame balancing, it is computed with y_B = L cos(ωt) − x_4 sin(ωt + x_5) and z_B = L sin(ωt) + x_4 cos(ωt + x_5), where G is the weight of the goods, β = arctan[(z_B − x_2)/(y_B − x_1)], G′ is the weight of the arm frame, and ξ is the ratio of the distance of the center of gravity of the arm frame from point O in Figure 3 to the arm length L. Thus, the objective function of the economic index, F_2, is defined in terms of the energy consumption P(t).
Calculation Statement
The paper takes the luffing mechanism of the compensative sheave block shown in Figure 3 as the application object, with G = 31,360 N, G′ = 13,720 N, L = 14 m, f = 0.7 m, ξ = 0.5, r = 0.2 m, m_b = 6, m_q = 2, R_max = 12 m, R_min = 5.8 m, −0.2.946 kJ.
The impact factors are shown in Tables 2 and 3.
According to Table 3, because the maximum value is 90.695, S_a = S_1 = {x_1, x_2} is the strategy subset of F_1 and S_b = S_2 = {x_3, x_4, x_5} is the strategy subset of F_2.
Calculation Results
There are two cases. In Case 1, the big pig stands for F_2 and the small pig stands for F_1; in Case 2, the big pig stands for F_1 and the small pig stands for F_2.
Taking Case 1 as an example, the detailed calculation steps are as follows.
(1) Take the corresponding values of the initial design in the strategy subsets S_1, S_2 as the initial feasible strategies s^(0). (2) (a) Seek the optimal strategy s_1* ∈ S_1 that minimizes the payoff function u_1; (b) seek the optimal strategy s_2* ∈ S_2 that minimizes the payoff function u_2. (3) Define the strategy permutation s^(1) = s_1* ∪ s_2*, then judge the feasibility of s^(1). If s^(1) does not satisfy the constraint conditions, return to step (1). Otherwise, compute ‖s^(1) − s^(0)‖_2/5 and examine whether it satisfies the convergence precision ε (ε = 0.0001 in this paper). If it does, the game is over; if not, let s^(0) = s^(1) and return to step (2) to continue the iteration loop. (Note: for Case 2, u_1 is constructed according to the cooperative behavior mode and u_2 according to the noncooperative behavior mode.)
The compared results are shown in Table 6. (Note: the multiobjective fuzzy optimization method is adopted in [17] and the multiobjective Nash equilibrium game method in [6].) The comparison of the deviation trajectories for Cases 1 and 2 and Refs. [6,16,17] is shown in Figure 4.
According to Table 6, F_1 in Case 1 is better than in [6,16,17] and in Case 2, while F_1 in Case 2 is the worst. According to Figure 4, the deviation trajectory in Case 1 is better than in [6,16,17] and in Case 2. F_2 in Case 2 is better than in [6,16,17] and in Case 1, while F_2 in Case 1 is the worst. The results show that the proposed method can effectively solve bi-objective optimization problems with a preferred target, and that the multiobjective fuzzy optimization method [17] is an effective method when there is no preferred target (both its F_1 and F_2 are better than the realistic optimization results of [16]).
Three conclusions can be drawn from the results: (1) the game player with the noncooperative characteristic of the small pig has a greater advantage in pursuing its own interest than the player with the cooperative characteristic of the big pig; (2) if the designers have a target preference, they should assign the preferred target to the small pig side and the other target to the big pig side; (3) a satisfactory equilibrium solution can be obtained in fewer iteration rounds because the design variables are decomposed into the strategy subsets owned by the 2 game players.
To reveal the influence of w_ii on the final solutions, w_ii was set to 0.1, 0.3, 0.5, and 0.7, respectively. The results are shown in Tables 7 and 8. In Case 1, the big pig stands for F_2, and the greater the value of w_22 (i.e., the lower the degree of cooperation), the better the final value of F_2. In Case 2, the big pig stands for F_1, and the greater the value of w_11 (i.e., the lower the degree of cooperation), the better the final value of F_1. The two objectives can be regarded as two game players with strategy subsets named S_1, S_2, and the constraints of the multiobjective problem can be regarded as constraints in the game method. Through specific technical means, the design variables are divided into each game player's strategy subsets S_1, S_2, and two payoff functions u are constructed based on pigs' payoff game behavior.
The paper constructs a hybrid game mode and proposes the detailed solution steps.
(4) For optimization problems with a preferred target, the designers need to emphasize one design goal. Traditional methods for this problem include the weighting method (adjusting the weight of each goal), the hierarchical sequence method (adjusting the order in which the objectives are optimized), and goal programming. In this paper, a new bi-objective optimization game method based on pigs' payoff game behavior is proposed for solving optimization problems with a preferred target. Taking the bi-objective optimization of the luffing mechanism of a compensative sheave block as an example, the results show that the method can effectively solve bi-objective optimization problems with a preferred target (the designers need to take the preferred target as the small pig side and the other target as the big pig side), that its efficiency and accuracy are good, and that the solution is obtained in only a few game rounds.
Figure 2: The algorithm flow chart. (Key steps shown in the chart: randomly generate initial feasible strategies in each player's strategy set to form the strategy permutation s^(0); perform the two single-objective optimizations; s^(1) denotes the optimal strategy permutation.)
Figure 4: The comparison of deviation trajectory.
Table 1: The payoff matrix.
Table 2: The impact factors.
Table 3: The impact factors of the strategy subsets on the objective functions. With M = 2 and P = 3, because r_12 = 1.66393 is the maximum value of matrix R, x_1 and x_2 belong to one class; namely, S_a = {x_1, x_2} and S_b = {x_3, x_4, x_5}. The impact factors of the strategy subsets on the objective functions are shown in Table 3.
Table 4: The iterative process of Case 1.
F_2 = 28.255 kJ; the iterative process is shown in the table.
Table 5: The iterative process of Case 2.
Table 6: The compared results.
Table 7: The influence of w_22 on the final solutions in Case 1.
Table 8: The influence of w_11 on the final solutions in Case 2.
CD44 Targeted Nanomaterials for Treatment of Triple-Negative Breast Cancer
Simple Summary Triple-negative breast cancer (TNBC) is one of the most challenging tumors with aggressive behavior, low recovery rate, poor prognosis, high metastatic potential, and rapid relapse compared to other breast cancer subtypes. Conventional therapies currently have minimal effect on TNBC; thus, using combination therapies is a valid strategy to enhance drug activity and minimize the overall adverse effect. Therefore, combining drugs with a different mechanism of actions such as apoptosis inducers and JAK/STAT3 inhibitors improved TNBC cell lines killing activity in vitro and in vivo. To further improve the hydrophobic drug activity, CD44 targeted polymeric nanoparticles (CD44-T-PNPs) were utilized by encapsulating hydrophobic drug (CFM-4.16) in CD44-T-PNPs to enhance the drug solubility, tumor accumulation, and most importantly, enhance drug potency. Tagging our PNPs with Hyaluronic acid (HA) enhanced tumor accumulation, reduced off-target distribution, and improved therapeutic efficacy. Abstract Identified as the second leading cause of cancer-related deaths among American women after lung cancer, breast cancer of all types has been the focus of numerous research studies. Even though triple-negative breast cancer (TNBC) represents 15–20% of the number of breast cancer cases worldwide, its existing therapeutic options are fairly limited. Due to the pivotal role of the presence/absence of specific receptors to luminal A, luminal B, HER-2+, and TNBC in the molecular classification of breast cancer, the lack of these receptors has accounted for the aforementioned limitation. Thereupon, in an attempt to participate in the ongoing research endeavors to overcome such a limitation, the conducted study adopts a combination strategy as a therapeutic paradigm for TNBC, which has proven notable results with respect to both: improving patient outcomes and survivability rates. The study hinges upon an investigation of a promising NPs platform for CD44 mediated theranostic that can be combined with JAK/STAT inhibitors for the treatment of TNBC. The ability of momelotinib (MMB), which is a JAK/STAT inhibitor, to sensitize the TNBC to apoptosis inducer (CFM-4.16) has been evaluated in MDA-MB-231 and MDA-MB-468. MMB + CFM-4.16 combination with a combination index (CI) ≤0.5, has been selected for in vitro and in vivo studies. MMB has been combined with CD44 directed polymeric nanoparticles (PNPs) loaded with CFM-4.16, namely CD44-T-PNPs, which selectively delivered the payload to CD44 overexpressing TNBC with a significant decrease in cell viability associated with a high dose reduction index (DRI). The mechanism underlying their synergism is based on the simultaneous downregulation of P-STAT3 and the up-regulation of CARP-1, which has induced ROS-dependent apoptosis leading to caspase 3/7 elevation, cell shrinkage, DNA damage, and suppressed migration. CD44-T-PNPs showed a remarkable cellular internalization, demonstrated by uptake of a Rhodamine B dye in vitro and S0456 (NIR dye) in vivo. S0456 was conjugated to PNPs to form CD44-T-PNPs/S0456 that simultaneously delivered CFM-4.16 and S0456 parenterally with selective tumor targeting, prolonged circulation, minimized off-target distribution.
Figure 1. Momelotinib and CFM-4.16 synergistic combination.

Momelotinib

Momelotinib is a JAK-STAT inhibitor. The JAK-STAT pathway is a direct signaling pathway that transfers the signal from the extracellular space to the nucleus to control the expression of certain genes. Upregulation of the JAK-STAT pathway is a key player in different cancers, and its regulation is under clinical investigation for cancer therapy. The principal components of this pathway are the cytokine-receptor complex, JAK, and STAT proteins. Mechanistically, when the ligand binds to its corresponding JAK-associated receptor (1), the receptor arms are brought into proximity, which enables transphosphorylation between the two JAK molecules (2). The activated, phosphorylated JAK subsequently phosphorylates the receptor arms, which become the binding sites for the latent transcription factors STAT (3). After the STAT molecules bind to the receptor arms (4), they become ready for phosphorylation by JAK (5). Once phosphorylated, the two STAT monomers dimerize through a reciprocal phosphotyrosine-SH2 domain interaction (6). The STAT dimer is an active transcription factor that is translocated to the nucleus (7) and binds through its DNA-binding domain to a specific DNA sequence in the target gene promoters to control transcription of specific genes (8). Momelotinib antagonizes ATP binding to JAK1/2, leading to inhibition of the JAK-STAT pathway.

CFM-4.16 is a CARP-1/APC/C interaction inhibitor. APC/C is an E3 ubiquitin ligase responsible for tagging cell cycle proteins for proteasomal degradation for the metaphase/anaphase cell cycle transition, and an aberrant APC/C system is associated with cancer progression. Mechanistically, the process starts with latent ubiquitin (Ub) molecules present in the cells (1′), which are activated by Ub-activating enzymes (E1) in an ATP-dependent manner (2′). The activated Ub is transferred to a Ub-conjugating enzyme (E2) (3′), which conjugates the Ub molecules to the activated Ub ligase (E3)/APC/C. The E3/APC/C is under the control of CDK-1 (cyclin-dependent kinase-1), CARP-1, and CDC20 (cell division cycle protein 20). CDK-1 phosphorylates the APC/C, while CARP-1 binds to the APC2 subunit of APC/C for coactivation. The phosphorylated APC/C then binds to CDC20 to become fully activated (4′).
The (E2) then conjugates the Ub molecules to the activated Ub ligase (E3)/(APC/C) (5′). The activated APC/C system ubiquitinates the securin protein (a chaperone), marking it for proteasomal degradation (6′ and 7′). After degradation of securin, separase (separin) is activated and breaks down the cohesin protein between the two sister chromatids (8′). Breakdown of cohesin leads to the separation of the two sister chromatids in anaphase (10′). Thus, APC/C is responsible for maintaining a normal chromosome number and genetic stability. APC/C is also responsible for turning over S/M cyclins to terminate mitosis. The proteasome catalytic unit degrades the tagged protein into a small peptide chain and Ub. The Ub is reused, and the fate of the peptide chain depends on the cell's needs; either it is repurposed for protein synthesis or for energy production (9′). The simultaneous down-regulation of STAT3 and activation of APC/C is the underlying mechanism of their synergism.
Furthermore, the study has developed an apoptosis inducer (CFM-4.16) belonging to the CARP-1 functional mimetics (CFMs). CARP-1/CCAR1 (cell cycle and apoptosis regulator 1) is a peri-nuclear phospho-protein that plays a vital role in regulating cell proliferation and apoptosis pathways. Two E3 ubiquitin ligases govern the cell cycle: APC/C (anaphase-promoting complex/cyclosome) and SCF (Skp1, Cullins, F-box proteins). They tag various regulatory proteins with ubiquitin for degradation by the proteasome to control many cell functions such as the cell cycle, signal transduction, and DNA replication [14,15]. APC/C mediates the metaphase/anaphase cell cycle transition and is under the control of CDK-1 (cyclin-dependent kinase-1), CARP-1, and CDC20 (cell division cycle protein 20). CFM-4.16 is an inhibitor of the CARP-1-APC/C interaction at the APC2 subunit, leading to (1) interference with the cell cycle transition function of APC/C and (2) accumulation of CARP-1. Accumulated CARP-1 induces apoptosis by stimulating tumor suppressors, such as p53, caspase-9, and p38 MAPK, and by inhibiting oncogenes, such as c-Met and c-Myc. Knocking down CARP-1 resulted in apoptosis resistance, which indicates the importance of CARP-1 for cell proliferation and apoptosis [16,17], as shown in Figure 1.
Following the recommendation to apply simultaneous combination therapy when urgent mitigation of tumor burden is required, especially in advanced metastatic cancers, the study opted for co-administration of MMB and CFM-4.16 for malignant TNBC. In an attempt to surpass the outcome of monotherapy, the employed simultaneous combination paradigm pivoted on repurposing and combining conventional drugs, reaching the result that simultaneous combination overcomes multidrug resistance with increased survivability [8].
Due to their illustrated ability to improve the chemotherapy safety profile, enhance cancer targetability, allow burst or sustained drug release, and prevent premature drug degradation, nanoparticles (NPs) have lately been extensively investigated, particularly for their effectiveness in cancer therapy [18]. One NP cargo can be used for theranostic purposes or for a simultaneous combination. The bioavailability of NP-encapsulated drugs can be improved by anti-fouling or zwitterionic agents, as they delay renal filtration of the NPs and prevent non-specific accumulation in the reticuloendothelial system (RES) [18]. Many nanoformulations, such as Doxil®, Abraxane®, and Genexol-PM®, are successfully marketed for cancer therapy as they can overcome the limitations of conventional anticancer therapy [18].
In the study, the employed polymeric NPs (PNPs), one of multiple NP platforms, consist of the block copolymers D-alpha-tocopheryl polyethylene glycol succinate (vitamin E TPGS) and styrene-maleic acid (SMA) and showed improvement in drug solubility and in vitro and in vivo biodistribution. The biocompatibility and degradability of the chosen nanoplatform make it the right candidate for hydrophobic drug delivery. TPGS is an FDA-approved delivery carrier due to its inherently favorable features, as well as its ability to inhibit P-glycoprotein (P-gp) associated with multidrug resistance (MDR) [19]. SMA is well suited for clinical translation due to its low cost and ease of processing [20,21]. The PNPs surface can be decorated with targeting molecules to increase the bio-affinity to cancerous cells. Hyaluronic acid (HA) is the targeting ligand for the cluster of differentiation-44 (CD44) receptor. Normally expressed in embryonic cells, bone marrow, and connective tissue, CD44 is abnormally and extensively expressed in pancreatic, breast, and lung cancers, especially in stem cell subpopulations. CD44 expression indicates poor prognosis, metastasis, EMT-mediated chemotherapeutic resistance, and low survivability. For that matter, many contemporary studies support the potential benefits of combining chemo- or radiotherapy with CSC-targeting therapy to overcome tumor resistance and relapse. CD44 in breast cancer is associated with increased P-gp and Bcl gene expression responsible for MDR and apoptosis resistance. Based on the clinicopathological impact of CD44 in basal-type breast cancer, it is applied as a molecular diagnostic marker, prognostic tool, therapeutic target, and targeting ligand-receptor in various stages of clinical development. It is worth noting that whereas CD44 binds several ligands such as chondroitin, osteopontin, fibronectin, collagen, and serglycin/sulfated proteoglycan, HA remains the specific ligand for CD44 and all its isoforms. Moreover, HA is considered the main extracellular matrix component expressed by cancer and stromal cells [22]. HA is widely implemented in cancer therapies due to its intrinsic properties such as biocompatibility, biodegradability, safety, non-immunogenicity, non-inflammatory nature, anionic charge, simple linear structure, and ease of processing by modifying its functional groups such as the carboxy, hydroxy, and N-acetyl groups. Based on the above, tagging our PNPs with HA enhanced tumor accumulation, reduced off-target distribution, and improved therapeutic efficacy.
Cytotoxicity of the Individual Drugs and Their Combinations
Both MMB and CFM-4.16 exhibited dose-dependent cytotoxicity, with IC50 values of (4.2 and 3.4 µM) and (10.8 and 12.8 µM) in MDA-MB-231 and MDA-MB-468, respectively, at 72 h, as illustrated in Figure 2A,B. MMB potentiated the cytotoxicity of CFM-4.16 in the TNBC cell lines with marked dose reduction and cell killing, as illustrated in Figure 2 and Figure S1; the data were analyzed by COMPUSYN software. The isobolograms in Figure 2C show that the MMB + CFM-4.16 combination had many synergistic points with CI < 1, while points with CI > 1 are antagonistic and those with CI = 1 are additive. The MMB + CFM-4.16 combination showed effective synergism accompanied by a high fraction affected (Fa > 0.5) and CI < 1, as shown in Figure 2D. MMB also reduced the required concentration of CFM-4.16, as is evident from the DRI in Figure 2E. Combination points with DRI > 1 are favorable, DRI = 1 indicates no effect, and DRI < 1 is unfavorable. All the CI points of MMB + CFM-4.16 in both MDA-MB-231 and MDA-MB-468 are presented in Figure S1, and the MMB + CFM-4.16 combination was selected for further studies.

Non-Targeted SMA-TPGS Carrier and Targeted HA-SMA-TPGS Carrier

The synthesized non-targeted SMA-TPGS (NT-PNPs) and targeted HA-SMA-TPGS (CD44-T-PNPs) carriers were characterized by FTIR and 1H NMR spectroscopy, as shown in Figures S2 and S3. The reaction between TPGS and SMA generates the SMA-TPGS compound, which was confirmed in the IR spectra by the presence of four characteristic bands at 3500, 3000, 1750, and 1250 cm−1 due to the O-H, aromatic C-H, C=O, and C-O-C groups, respectively. The 1H NMR spectrum of SMA-TPGS shows the characteristic chemical shifts at 6.685–7.153 ppm (aromatic H peaks of SMA) and 4.014 ppm (CH2-O- of TPGS). HA, TPGS, and SMA underwent a three-pot reaction to produce a product identified as HA-SMA-TPGS. Its IR spectra exhibit bands at 3500, 3100, 1750, 1150, and 1050 cm−1 due to the O-H, aromatic C-H, C=O, C-N, and C-O-C groups, respectively; the N-H band overlaps with the O-H band in the IR spectrum. The 1H NMR spectrum of HA-SMA-TPGS shows the characteristic chemical shifts at 6.711–7.2 ppm (aromatic H peaks of SMA), 4.014 ppm (CH2-O- of TPGS), and 4.4–4.6 ppm (hyaluronic acid H in the sugar rings). The retention of the characteristic IR and 1H NMR peaks of the individual monomers (HA, SMA, TPGS) in the produced SMA-TPGS and HA-SMA-TPGS indicates successful coupling and confirms the formation of the conjugated polymers. The non-targeted and targeted conjugates self-assemble into PNPs in aqueous media due to the presence of the hydrophobic SMA polymer and the hydrophilic TPGS and HA polymers. The formed PNPs are water-soluble with a hydrophobic core, which can be physically or chemically loaded with hydrophobic drugs for parenteral administration.
Preparation and Characterization of CFM-4.16 Loaded Polymeric Nanoparticles (PNPs)
The TEM revealed that NT-PNPs and CD44-T-PNPs are spherical with a smooth surface. The DLS average particle size of NT-PNPs was 81.5 nm and of CD44-T-PNPs was 98.1 nm, with narrow polydispersity indices (PDI) of 0.176 and 0.169, respectively, consistent with the TEM results. The surface charge of NT-PNPs was 6.57 ± 2.94 mV and of CD44-T-PNPs was −7.25 ± 2.94 mV, as shown in Figure 3. CD44-T-PNPs had a larger particle size and a negative zeta potential, attributed to wrapping of the PNPs surface with HA. The water solubility of both formulations favors their parenteral administration. The loading content (LC%), encapsulation efficiency (EE%), and yield% of NT-PNPs and CD44-T-PNPs are presented in Table 1 as mean ± SD, n = 3.
Cellular uptake was assessed with Rhodamine B-loaded PNPs (Figure 4; blue fluorescence, Hoechst-stained nuclei; red fluorescence, Rhodamine B uptake signal). In MDA-MB-231, CD44-T-PNPs/Rhod showed 2.5-fold higher accumulation than NT-PNPs/Rhod (Figure 4A), while in MDA-MB-468 the CD44-T-PNPs showed 2.1-fold higher accumulation than NT-PNPs (Figure 4B). The higher cellular uptake of CD44-T-PNPs in both cell lines is probably due to receptor-mediated endocytosis following the HA/CD44 interaction. CD44-targeted nanomaterials could therefore be a useful tool for selective cytotoxicity in TNBC.
Targeted PNPs Combination has Exhibited Remarkable Anticancer Activity Compared to Free Drugs Against TNBC Cell Lines
The MMB + CFM-4.16 combination had a more cytotoxic effect than individual MMB and CFM-4.16, as confirmed in Figure 6. Interestingly, the T combo (CD44-T-PNPs + MMB) had a significantly more potent cytotoxic effect than the free combo, as shown in Figure 6: the T combo decreased cell viability by 2.5- and 2.16-fold in MDA-MB-231 and MDA-MB-468, respectively, compared to the free combo (MMB + CFM-4.16). Moreover, the results illustrated in Figure 7 show a decrease in proliferation capacity, an increase in cellular detachment, and a significant change in cellular morphology in the combination wells compared to the negative control. In the combination wells, especially the targeted one, the cells appear star-shaped with tapered ends and very low density. Co-treatment with MMB + CFM-4.16, either in free or in PNPs form, inhibited cell migration and wound closure at both 24 and 72 h.

The ability of our combinations to induce ROS-dependent apoptosis is shown in Figure 8A,B. Individually, the drugs exhibited a slight increase in ROS production in both MDA-MB-231 and MDA-MB-468. In MDA-MB-231, ROS generation was increased by 1.55-fold for MMB and to a greater extent for CFM-4.16, the free combo, the NT combo, and the T combo compared to the negative control. The NT and T carriers showed a slight attenuation in ROS production due to the antioxidant effect of the vitamin E in their composition [23]. These results suggest that the MMB + CFM-4.16 synergistic effect is due to the increase in ROS generation. Moreover, the T combo, with its unprecedented ROS production, alters the redox environment, promoting oxidative stress-induced cancer cell death.

In addition, caspase 3 and caspase 7, which are crucial for the execution phase of cellular apoptosis, were also investigated. Caspase 3 activation is a conclusive marker of the irreversible commitment to cellular apoptosis. The results revealed enhanced caspase 3/7 activity in the free combo compared to the individual drugs and control groups in both cell lines. In MDA-MB-231, caspase activity increased by 1.13-fold for MMB, 1.39-fold for CFM-4.16, 1.52-fold for the free combo, and 1.62-fold for the NT combo compared to negative control cells. In MDA-MB-468, caspase activity increased by 1.18-fold for MMB, 1.25-fold for CFM-4.16, 1.67-fold for the free combo, and 2.01-fold for the NT combo compared to negative control cells. Interestingly, the T combo markedly stimulated caspase 3/7 activity over the free combo, by 1.76-fold in MDA-MB-231 and 2.14-fold in MDA-MB-468, as shown in Figure 9.
Animal Studies
CD44 Receptors Are Overexpressed in Tumors of the TNBC-Bearing Mice Model
The expression of CD44 in ectopic tumor xenograft collected from the TNBC-bearing mice model was investigated by immunohistochemistry. The intense bright green fluorescence indicates the high expression level of CD44, as shown in Figure 11. Molecular characterization leads to the discovery of biomarkers and targeted therapy, which is the basis of personalized medicine. The association of CD44 with tumorigenesis induction, poor prognosis, aggressiveness, relapse, and chemotherapeutic resistance of TNBC has been the underlying rationale behind the study's choice of CD44 as an excellent biomarker for site-specific payload delivery to TNBC.
NIR Imaging and Biodistribution and Inducible DNA-DSBs
Theranostic PNPs are an emerging aspect of precision medicine. They consist of a targeting ligand, therapeutic agents, and imaging agents. Sufficiently accumulated in the tumor by enhanced permeability and retention (EPR) and receptor-mediated endocytosis, theranostic NPs are helpful for early diagnosis, image-guided surgery, and tracking of drug distribution, accumulation, sustained release, and efficacy. Being less toxic and cost-effective, NIR imaging in TNBC-bearing mice was opted for in this study. The results illustrated significant tumor homing of CD44-T-PNPs/S0456, followed by NT-PNPs/S0456, compared to control (free S0456) at both 24 and 72 h, as shown in Figure 12. In addition, CD44-T-PNPs/S0456 exhibited no off-target accumulation, especially in the liver, compared to NT-PNPs/S0456 at both 24 and 72 h, as shown in the whole-body and dissected-organ imaging in Figure 12. The high intensity of the CD44-T-PNPs/S0456 group in the kidney at 72 h revealed the following: (1) the hydrophilic nature of the formulation enabled its renal clearance; (2) the formulation had a prolonged, sustained property up to 72 h; (3) renally mediated excretion reduces liver toxicity; (4) the HA surface coating reduced NPs immunogenicity and elimination by the RES. The selective homing of the CD44-targeted PNPs profoundly supports their rational application and clinical translation as a theranostic platform for metastatic TNBC. To support the therapeutic efficacy, a TUNEL assay was performed, indicating that the T combo was significantly able to push the tumor cells into late-stage apoptosis when compared to the individual drugs, as shown in Figure 13.
Figure 13. The capability of the T combo to enhance the onset of apoptosis in TNBC-bearing mice via CD44/HA-mediated endocytosis. The T combo was able to drive the tumor to late-stage apoptosis, which is based on generating multiple DNA double-strand breaks (DSBs) with accessible 3′-hydroxyl (3′-OH) groups that were detected by TUNEL. Elevated apoptosis is indicated by brownish discoloration or dark brown spots. Note: T combo (CD44-T-PNPs + MMB), where MMB means momelotinib. Magnification is 40×.

Discussion
TNBC is one of the most challenging tumors, with aggressive behavior, a low recovery rate, poor prognosis, high metastatic potential, and rapid relapse compared to other breast cancer subtypes. The TNBC abbreviation derives from the deprivation of three types of receptors: estrogen receptors (ER), the progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) [24]. The discovery of these biomarkers by Perou in 2000 played an essential role in the development of targeted, personalized medicine. Due to the lack of these receptors, TNBC patients cannot benefit from the currently available receptor-targeted systemic therapy, making surgery and chemotherapy the only available options.
However, traditional chemotherapeutic agents' application has not been without potential side effects, suboptimal outcomes, and tumor resistance development. In addition, even though most of the chemotherapeutic agents have the capacity to attack fast-growing cancerous cells, they wipe out fast-growing healthy cells too, such as: bone marrow cells, hair follicle cells, and cells lining the gastrointestinal tract (GIT) [25][26][27]. In the same vein, despite the fact that most chemotherapeutic agents can trim tumor growth, its effect is not long-lasting and is followed by rapid proliferation and invasion, which is the reason for the development of chemotherapeutic resistance. Drug resistance is considered the main obstacle that breast cancer patients have to confront and is responsible for chemotherapeutic failure. The hurdle of drug resistance could be innate, acquired, or cross-resistance/MDR with different underlying mechanisms such as drug sequestration by P-gp, proliferation potential of mutated cancer stem cells (CSCs), altering drug target, modification of DNA repair strategies, altering drug detoxification, and invalid apoptotic regulators such as p53 [28,29]. There upon appears the urgent need for new approaches such as immunotherapeutic approach [30], new agents that do not exhibit cross-resistance such as ixabepilone [28], targeting cancer stem cells, and unique combination strategies.
The current study establishes a novel combination strategy based on the sensitization of TNBC to CFM-4.16 by MMB, which has proven an unequivocal capacity to overcome the therapeutic resistance that single agents are prone to, reduce systemic toxicity, and enhance the therapeutic index. Moreover, the water solubility of CFM-4.16 was enhanced by the SMA-TPGS carrier, as was its cellular uptake by using the CD44 targeting ligand in vitro and in vivo. It is worth noting that the possibility of using this cargo for theranostic purposes was likewise investigated.
PNPs were first nominated for cancer therapy in the early 1980s. PNPs able to increase the water solubility of hydrophobic drugs as they consist of a hydrophilic shell, which interacts with the external aqueous media, and a hydrophobic core, which acts as a depository for hydrophobic drugs [31]. This is in addition to their ability to enhance drug retention in tumor tissue by EPR effect as well as prolonging plasma half-lives by escaping renal elimination [18,32]. One of the polymeric micelles that have been FDA approved for breast cancer is Genexol-PM ® [33]. The study has opted for a carrier, which is a block copolymer consisting of SMA and Vitamin E-TPGS, decorated by HA as a targeting ligand. Established as biologically safe and immunostimulant [34,35] SMA was clinically approved for the treatment of hepatoma in Japan in 1993 [36,37]. Having a high glass transition temperature, it increases NPs stability and controls the drug release. Moreover, the carboxyl group of maleic acid on SMA's hydrophilic surface enables surface modification by conjugation to the targeting ligand, such as HA [20,21,38]. Vitamin E-TPGS is FDA approved drug adjuvant which is widely used as a pharmaceutical emulsifier, stabilizer, and permeation and bioavailability enhancer of hydrophobic drugs with a potent P-gp inhibition and apoptosis induction [19,39,40].
PNPs is a promising drug delivery to overcome poor solubility, limited selectivity, and systemic cytotoxicity. It is well documented that PNPs accumulate in the tumor site in a high concentration by utilizing EPR. The main challenge against passively delivered PNPs is that angiogenesis is not uniformly distributed throughout the tumor leading to a disproportional distribution of NPs by EPR [41]. The active targeting based on the microenvironmental difference between cancer and healthy cells is crucial to add selectivity to EPR and overcome its limitations. One such difference is the expression levels of CD44 [42]. CD44 is a cell surface glycoprotein that is overexpressed and intensively involved in TNBC carcinogenesis [22,43]. Basal epithelial, basal mesenchymal TNBC, and CSCs are enriched with the CD44 receptor [44]. Therapeutic failure and cancer relapse are mainly due to the inability to eradicate CSCs [13,45]. Targeting of cancer cells and CSCs will effectively reduce tumor burden and relapse. CD44 has a tremendous binding affinity to HA, which attracts our attention to develop HA-based NPs leading to an increase in the affinity of NPs to cancer cells and CSCs and selective toxicity due to HA-CD44 receptor-mediated endocytosis [42,43]. In our study, HA-PNPs have proved their significance in developing CD44-T-PNPs with high cellular uptake in vitro, preferential tumor accumulation in vivo, and theranostic potential by enveloping both CFM 4.16 and S0456. That came in agreement with HA-SMA-NMS that effectively delivered CDF to the aggressive CD44+ stem-like pancreatic cancer cells [46] and HA-TPGS-DOX that increased doxorubicin cytotoxicity in MCF-7/ADR [47]. Adding vitamin E-TPGS improved Genexol-PM uptake and PTX cytotoxicity due to the enhancement of membrane fluidity and MDR inhibition; the IC 50 was reduced by 4.4 folds compared to Genexol-PM [40].
Many physiochemical characters of NPs affect their cellular interaction, such as shape, size, surface charge, and hydrophobicity. The study has opted for spherical with a smooth surface, <100 nm, slightly anionic, and water-soluble, which have proven ideal for use in the drug delivery system (DDS). Most of the developed NPs applied for DDS are spherical due to their being easy manufacting.. Even many studies showed that rod or disc NPs have a more favorable effect than spherical, there are contradictory studies [48]. This contradiction is owing to differences in the material composite, tested cell lines, and analyzing techniques [49][50][51]. Another reason why spherical-shaped NPs have been preferred is their lower reactivity and toxicity than fiber-shaped NPs [52]. In addition, whereas the non-spherical ones tumble with the flow, spherical shaped NPs are known for their ease of motion [53]. Furthermore, PNPs' smooth surface have the ability to reduce the phagocytosis process [54] and deposition rate [55]. The size of NPs is a determent factor for its clinical application. Our PNPs are able to escape glomerular filtration (<5 nm) and trapping by RES (>150 nm) [56]. Also, it is in the favorable size range for cellular uptake and extravasation via EPR. As the cut-off size of the endothelial gaps in tumor blood vessels ranges from 200 nm to 1.2 µm based on tumor type; therefore, NPs ≤ 200 nm are widely used for passive and active targeting of the tumor [57]. Controlling NPs size is critical for reducing genotoxicity (∼10 nm) and cytotoxicity to healthy cells [58]. NPs surface charge plays a vital role in drug loading, circulation time, cellular uptake, cellular cytotoxicity, NPs stability, and clearance by RES. The cationic NPs have many safety concerns, as they are strongly attracted to the negatively charged cell membranes leading to destabilization of cell membranes, leakage of cytoplasm, and, subsequently, cell lysis. Damage includes the endothelial lining of blood vessels, RBCs, and healthy cells. Elimination of NPs by RES based on their zeta potential is a controversial issue. It is generally accepted that neutral or slightly anionic NPs are preferred for their safe parenteral administration, lower systemic toxicity, higher tumor accumulation, prolonged circulatory lifetime, and less off-target uptake [53,56,59,60]. Our hydrophilic PNPs are far more vulnerable to immune detection as the hydrophilic NPs repel opsonions that reduce their recognition by mononuclear phagocyte system (MPS) and increase their circulatory time [49,61].
NT-PNPs and CD44-T-PNPs have an adequate LC% with respect to the polymer amount compared to other polymeric NPs preparation [62]. As it is well established that TNBC has tremendous esterase and hyaluronidase activity [63,64]. Both PNPs are esterase-responsive while only CD44-T-PNPs are additionally hyalu-ronidase-responsive. The carbonic ester bonds and the glycosidic bonds in the NPs will be disassembled by intracellular esterase and hyaluronidase, followed by releasing the antineoplastic agent. This will increase tumor specificity, reduce off-target toxicity, prevent premature drug release and circulation stability [63]. Our PNPs physicochemical properties are consistent with the marketed Genexol-PM ® that had a smooth spherical shape, −4.36 mv,16.67% LC, and two folds reduction in IC 50 compared to free PTX in A549 [40].
The current study has illustrated a promising synergism supported by the in vitro and in vivo anticancer activity. The co-treatment of momelotinib (MMB) and CFM-4.16 increases the downregulation of P-STAT3 accompanied by the upregulation of CARP-1, especially in T-combo. STAT3, particularly phosphorylated by JAK2, is one of breast cancer clinical significance [11]. STAT3 enhances cell proliferation by activating cyclin-dependent kinases (CDKs) by upregulation of cyclin D2 and downregulation of p21. STAT3 induces transcription of hypoxia-inducible factor (HIF-1a). STAT3/HIF-1a axis plays a crucial role in adapting tumor cells to the hypoxic environment associated with cancer progression. Also, STAT3 leads to overexpression of VEGF responsible for angiogenesis. Tumor invasion and metastasis are under the regulation of STAT3 by different mechanisms such as; induction of transcription of matrix-degrading enzymes such as matrix metalloproteinase and activation of epithelial to mesenchymal transition (EMT). STAT3 inhibition, either by knocking down or by pharmacological inhibitors, was found to suppress tumor invasion and metastasis in vivo and in vitro [65]. Inhibition of the JAK/STAT pathway has increased the sensitivity of resistant breast cancer cells to doxorubicin [66]. Worth emphasizing is, JAK2 /STAT3 is a prerequisite for the maintenance and proliferation of CSC of breast cancer and developing of chemo-and radio-resistance [13,67]. So, the JAK2/STAT3 in CSC is a potential target for developing a successful strategy to improve breast cancer patients' therapeutic outcomes.
The study has revealed that the combination of MMB and CFM-4.16 has induced apoptosis via JAK2/STAT3 inhibition-mediated ROS generation. Recently it has been reported that P-STAT3 is inversely related to ROS production [68][69][70]. ROS is regarded as a double-edged sword in cancer cells as a slight increase of ROS leads to cancer initiation and progression, while high levels of ROS induce cell senescence. It is known that the level of cancer intrinsic ROS is relatively higher than that of normal cells; thus, increasing the ROS production by chemotherapy will effectively eradicate cancer cells, but it will be inadequate to trigger apoptosis in healthy cells with a low level of intrinsic ROS [71]. Redox imbalance induces apoptosis by disturbing mitochondrial membrane potential, enhanced mitochondrial membrane permeability to pro-apoptotic proteins, including cytochrome c. In the cytosol, cytochrome c binds to Apaf-1 to form an apoptosome, which in turn activates caspase-9. Overactivation of caspase-9, in addition to caspase-8, p38 MAPK, upregulation of CARP-1, and PARP cleavage, took place by CFM-4.16 [16,17]. Taken together, activate the caspase 3/7 cascade pathway that was confirmed in our results leading to DNA damage, cell shrinkage, and cellular detachment [72], which were proved by TUNEL, cellular viability, and morphology (Summarized in Figure S4).
Optical imaging by targeted NIR dye has been evolved to enable observation of cancer burden and progression under various therapeutic strategies and stages and rapid monitoring of molecular events occurring within cells. Rapid assessment of the therapeutic efficacy in vivo is highly needed. As in relatively slow-growing models, the caliper measurements are unable to detect the difference for several days. Also, in orthotopic, metastatic, and systemic models, the longitudinal measurements of tumor burden are not possible. In our attempt, we provided targeted theranostic NPs, which was able to deliver both CFM-4.16 and S0456 NIR dye to the tumor site with limited off-target distribution compared to the non-targeted one. The results also showed that CD44-T-PNPs had an excellent prolonged effect in vivo confirmed by delayed renal clearance to 72 h. Probably because HA surface modification could act as a protective coating that reduces NPs opsonization and immunogenicity, leading to escape catching by RES in the blood [73,74]. The water solubility of the NPs will overcome the low solubility of anticancer agents and provide safe bio-elimination of the NPs. To the best of our knowledge, there is no FDA approved theranostic for TNBC, which is still an essential need in the clinical setup [75]. The intrinsic properties of our PNPs pave its application as a treatment option for TNBC.
Materials (Cell Lines and Chemicals)
TNBC cell lines MDA-MB-231 and MDA-MB-468 have been used as in vitro and in vivo model for human TNBC overexpressing CD44 receptors. Both cell lines were cultured in high glucose DMEM medium with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin. Cell culture was maintained at 37 • C and 5% CO 2 conditions. Momelotinib was purchased from Adooq Bioscience (Irvine, CA, USA). CFM-4.16 were synthesized as described before [17,76]. SMA (M = 1.6 kDa) and sodium bicarbonate were purchased from Sigma-Aldrich (St. Louis, MO, USA). Vitamin E TPGS was purchased from Antares Health Products, Inc. (Jonesborough, TN, USA). Hyaluronic acid (MW = 13 kDa) was purchased from CosChemSupply (Los Angeles, CA, USA). EDC was purchased from CovaChem (Loves Park, Illinois, USA). All the other reagents used were of analytical grade. Cell culture DMEM, FBS, penicillin-streptomycin were purchased from GIBCO (Waltham USA, MA, USA).
Screening of In Vitro Cell Viability (MTT Assay) and Combination Study
The cytotoxicity of CFM-4.16 and momelotinib was screened using the noted concentrations of each drug. The cells were treated with the noted concentrations for 72 h, and viability was then measured by MTT assay, as previously described. The combination index (CI) and dose reduction index (DRI) were calculated with COMPUSYN software (COMPUSYN Inc., Paramus, NJ, USA). The CI is a quantitative measure of the degree of drug interaction; CI < 1, CI = 1, and CI > 1 indicate synergism, additivity, and antagonism, respectively. The DRI indicates by how many fold the dose of each drug in a synergistic combination can be reduced compared to the dose of each drug alone required to obtain the same effect. SMA-TPGS and HA-SMA-TPGS were prepared according to our previously reported method [38]. For the synthesis, 30 mg HA and 70 mg TPGS were dissolved in 50 mL deionized (DI) water, then 200 mg/5 mL NaHCO3 was added to the solution. The pH was adjusted to 8.9, and 105 mg SMA in 10 mL DMSO was added. The reaction was left overnight until the solution became clear. The only difference for the NT carrier is that HA was not initially added. Both SMA-TPGS (NT carrier) and HA-SMA-TPGS (T carrier) were purified in a dialysis bag (MWCO 2 kDa) for 24 h, lyophilized, and then characterized by FTIR and 1H-NMR.
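As a rough illustration of how a combination index and dose reduction index can be interpreted (a minimal sketch; the actual CI and DRI values in this work were computed with COMPUSYN, and the simple Chou-Talalay-style formula shown here is a common simplification, not necessarily the exact model used by that software; all doses are hypothetical):

```python
def combination_index(d1, d2, dx1, dx2):
    """d1, d2: doses of the two drugs used together to reach a given effect;
    dx1, dx2: doses of each drug alone that reach the same effect."""
    return d1 / dx1 + d2 / dx2

def dose_reduction_index(dose_alone, dose_in_combo):
    """How many fold the dose in the combination is reduced vs. the drug alone."""
    return dose_alone / dose_in_combo

def interpret_ci(ci):
    if ci < 1:
        return "synergism"
    if ci == 1:
        return "additivity"
    return "antagonism"

# Hypothetical example: 2 uM + 3 uM together match the effect of 6 uM or 10 uM alone.
ci = combination_index(2, 3, 6, 10)      # ~0.63 -> synergism
dri_drug1 = dose_reduction_index(6, 2)   # 3-fold dose reduction for drug 1
print(interpret_ci(ci), round(ci, 2), dri_drug1)
```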
Preparation and Characterization of CFM-4.16 Loaded Polymeric NPs (PNPs)
Loading of CFM-4.16 was carried out according to our reported method [76]. First, 50 mg of carrier polymer was dissolved in 50 mL of DI water. Then 15 mg/mL of CFM-4.16 dissolved in DMSO was added to the polymer solution. Then 20 mg of EDC was added, and the pH was adjusted to 5.0 and then to 11, each for 30 min. Finally, the pH was adjusted to 8.0, and the free CFM-4.16
Loading Capacity (LC%) and Encapsulation Efficiency (EE%)
The LC% and EE% of NT-PNPs and CD44-T-PNPs were measured by high-performance liquid chromatography (HPLC). The mobile phase consisted of acetonitrile 65%, methanol 20%, and 10 mM potassium dihydrogen phosphate (KH2PO4, pH 2) 15%, and the detection wavelength was 309 nm. Briefly, 1 mg of each formulation per 1 mL DI water was prepared, and then 100 µg, 50 µg, and 25 µg concentrations were prepared using the mobile phase as a diluent. The average of triplicate injections of each sample was applied to the standard curve equation; the LC%, EE%, and yield were then calculated as follows: Loading Capacity (LC%) = (amount of CFM-4.16 entrapped in the NPs / total amount of NPs) × 100 (Table 2). The effect of the combination on morphology and metastasis was studied in MDA-MB-231 cells. The cells were plated at 90% confluence in 6-well plates and treated with the indicated concentrations of the noted compounds for the selected time points. The cells were then fixed with 70% ice-cold ethanol for 10 min and stained with 0.4% crystal violet for 1 h. The stain was poured off, and the plates were washed and dried at room temperature. The wells were photographed at 10×. The only difference in the wound healing assay was that each well was scratched with a sterile 200 µL micropipette tip after 24 h of incubation. The wound margin was photographed with an EVOS FL Auto (Life Technologies) microscope at 10× magnification at 0, 24, and 72 h of treatment.
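A minimal sketch of how the loading metrics described above might be computed from an HPLC standard curve (the slope, intercept, and masses below are illustrative assumptions, not the values used in this study):

```python
def drug_amount_from_hplc(peak_area, slope, intercept):
    """Convert an HPLC peak area to drug amount using a linear standard curve."""
    return (peak_area - intercept) / slope

def loading_capacity(drug_in_nps_mg, total_nps_mg):
    """LC% = drug entrapped in the nanoparticles / total nanoparticle mass x 100."""
    return drug_in_nps_mg / total_nps_mg * 100

def encapsulation_efficiency(drug_in_nps_mg, drug_added_mg):
    """EE% = drug entrapped / drug initially added x 100."""
    return drug_in_nps_mg / drug_added_mg * 100

# Hypothetical numbers for illustration only.
drug_ug = drug_amount_from_hplc(peak_area=152000, slope=1500.0, intercept=2000.0)  # -> 100 ug
print(loading_capacity(drug_in_nps_mg=0.1, total_nps_mg=1.0))                      # 10.0 %
print(encapsulation_efficiency(drug_in_nps_mg=0.1, drug_added_mg=0.15))            # ~66.7 %
```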
Detection of ROS Generation
ROS generation was detected using H2DCFDA according to the manufacturer's instructions. Briefly, cells were plated at 70% confluency in 6-well plates (MDA-MB-231, 0.5 × 10^6 cells/well; MDA-MB-468, 1 × 10^6 cells/well). After 24 h, the cells were treated with the indicated concentrations of the noted compounds for 12 h, and H2O2 was used as a positive control. The media was then removed, and the wells were washed with PBS and stained with 5 µM H2DCFDA for 30 min. Finally, the wells were washed with PBS and imaged with an EVOS FL Auto (Life Technologies) microscope (10×) with 10 s elapsed time. The fluorescence intensity (excitation 485 nm; emission 530 nm) was measured with ZEN 2012 blue edition software, and the difference in ROS production was calculated with GraphPad Prism [77].
Caspase 3/7 Activity Assay
Caspase 3/7 activity was measured using the Caspase-Glo ® 3/7 assay (Promega, Madison, WI, USA) according to the manufacturer's recommendations. Cells were seeded in 96-well plates; the next day, cells were treated with the indicated concentrations of the noted compounds for 24 h. Then Caspase-Glo 3/7 reagent was added, and the plates were incubated for 60 min. Luminescence was measured using a microplate reader, and significance was calculated with GraphPad Prism.
Western Blot Analysis
The ability of our combination to restore the balance between the CARP-1 tumor suppressor and the STAT3 oncogene was determined by western blot. The cells were plated at 70% confluency (1.5 × 10^6 per 100 mm plate for MDA-MB-231 and 3 × 10^6 per 100 mm plate for MDA-MB-468). After 24 h, the cells were treated with the indicated concentrations of the noted compounds for 12 and 48 h. The cells were harvested and lysed in RIPA buffer with a protease and phosphatase inhibitor cocktail (Thermo Scientific, Waltham, MA, USA) for 15 min at 4 °C. The lysates were then centrifuged at 12,000 rpm at 4 °C for 15 min. The supernatant was collected, and the protein concentration was determined with the Protein Assay Kit (Thermo Scientific). After protein normalization, 20 µg of protein extract from each sample was separated by 8% SDS-polyacrylamide gel electrophoresis (SDS-PAGE), followed by wet transfer to a polyvinylidene difluoride (PVDF) membrane (Bio-Rad, Hercules, CA, USA) by standard procedures. Non-specific binding sites were blocked with 5% skimmed milk in 1× TBST for 1 h. Membranes were incubated with the noted dilutions of the primary antibodies, as shown in Table S1, overnight at 4 °C, followed by incubation with 1:10,000 horseradish peroxidase-conjugated anti-rabbit secondary antibodies for 2 h at RT. The antigen-antibody complexes were detected with the ECL chemiluminescence detection system (Amersham Biosciences, Little Chalfont, Buckinghamshire, United Kingdom) and exposure to X-ray film (X-Omat, Kodak, Rochester, NY, USA). The same membranes were then re-probed with an anti-GAPDH antibody as an internal control.
Animal Husbandry and Tumor Induction
Female mice were purchased from Jackson Laboratories, housed in a sterile environment on a standard 12 h light/dark cycle, and kept on a regular rodent diet and water. All animal procedures were approved by the Wayne State Animal Care and IACUC committee in accordance with NIH guidelines. Mice were subcutaneously injected in their right flanks with MDA-MB-231 cells (5.0 × 10^6 cells per mouse) suspended in Matrigel. Tumor growth was measured weekly in two perpendicular directions with a caliper. Tumor volumes were calculated using the formula 0.5 × a × b², where a is the longest axis and b is the perpendicular axis. Tumors were allowed to grow for one month until they became palpable, with an average size of 431.5 mm³.
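A small sketch of the tumor volume calculation quoted above (the caliper readings are made up for illustration, not measurements from this study):

```python
def tumor_volume(a_mm, b_mm):
    """Caliper-based estimate: V = 0.5 * a * b^2, with a the longest axis (mm)
    and b the perpendicular axis (mm). Returns volume in mm^3."""
    return 0.5 * a_mm * b_mm ** 2

# Hypothetical weekly caliper readings (mm) for one mouse.
weekly_measurements = [(8.0, 6.0), (10.5, 8.0), (12.0, 9.5)]
volumes = [tumor_volume(a, b) for a, b in weekly_measurements]
print(volumes)  # [144.0, 336.0, 541.5] mm^3
```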
CD44 Expression in TNBC Bearing Mice Model by Immunohistochemistry
CD44 expression was studied in tumors obtained from TNBC-bearing mice. The tumor was cut into 5 µm paraffin-embedded tissue sections. These sections were permeabilized with a solution of 500 µL Triton X + 25 g BSA + 500 mL PBS for 5 min, three times. The tumor sections were then blocked with 5% BSA for 1 h. The tumor area was circled with an immunopen and incubated with Alexa Fluor 488 anti-mouse/human CD44 antibody (BioLegend, San Diego, CA, USA) overnight in the refrigerator. The next day, sections were washed, and nuclei were stained with Hoechst 33342 (1 µg/mL) for 15 min. Sections were then rewashed and dried, followed by adding the mounting media and coverslips. Imaging was performed with a confocal microscope at 10× and with a 63× oil immersion lens, using the blue channel (352-461 nm) for Hoechst-stained nuclei and the green channel (488-519 nm) for the Alexa Fluor anti-CD44 antibody.
TUNEL Assay
One of the critical hallmarks of late apoptosis is extensive genomic DNA fragmentation. This process generates multiple DNA double-strand breaks (DSBs) with accessible 3′-hydroxyl (3′-OH) groups that allow apoptosis detection by TUNEL assay. For this study, animals were divided into four groups: negative control, momelotinib 5 mg/kg, CFM-4.16 15 mg/kg, and a T combo group that received 5 mg/kg momelotinib + 15 mg/kg CD44-T-PNPs. Animals received two doses every other day; then all animals were sacrificed, and the tumor tissues were sent to the Biobank core facility for paraffin-embedded tissue sectioning for the TUNEL assay. Elevated apoptosis after treatment is indicated by increased brown staining or dark-brown spots. This short-term study was carried out to assess the ability of the CD44-T-PNPs combo to enhance the onset of apoptosis compared with the individual drugs.
NIR Imaging and Biodistribution Study
The feasibility of using the NT-PNPs and CD44-T-PNPs as a theranostic tool was tested in a TNBC-bearing animal model. Animals were divided into three groups: free S0456 NIR dye, NT-PNPs/S0456 conjugate, and CD44-T-PNPs/S0456 conjugate. Conjugation of both formulations with S0456 NIR dye was carried out as follows: 10 mg of the already prepared NT-PNPs or CD44-T-PNPs was dissolved in a 1:1 mixture of chloroform and methanol. Then 1 mg of S0456 NIR dye dissolved in 1 mL DI water was added to the chloroform-methanol mixture, and the chloroform and methanol were evaporated on a Rotavapor (R-205, Buchi, Flawil, Switzerland). The solutions were dialyzed using a dialysis bag (MWCO 3.5 kDa) for 3 h and then lyophilized. The incorporation of S0456 NIR dye into the PNPs was measured with a UV spectrophotometer (Hitachi 2910) set to a wavelength of 789 nm and calculated from the standard curve equation. Mice were injected in the tail vein with 10 nmol/mouse of free S0456, NT-PNPs/S0456, or CD44-T-PNPs/S0456. Mice were imaged at 24 and 72 h post-injection using an In Vivo MS FX Extreme system (Carestream, San Diego, CA, USA); light source: 400 W xenon; monochrome interlined, fixed lens (10×), cooled (−60 °C) CCD camera; fluorescence was captured at 750-830 nm together with X-ray images. Fluorescence and X-ray images of each mouse were merged to demonstrate the localization of the NPs. Importantly, to understand the behavior and distribution of the PNPs in tumor vs. healthy tissues, NIR fluorescence organ biodistribution was assessed 72 h post-injection.
Conclusions
Despite the undeniable achievements in oncotherapy, the unsatisfactory survival rates of many cancers remain a concern and a challenge that scientific research has yet to overcome. Even though chemotherapy has proven successful as a therapeutic paradigm, numerous factors deny the possibility of depending solely on it. For instance, poor solubility and off-target biodistribution are among the many existing challenges that limit the efficacy of cancer chemotherapy. Not only do these problems prevent parenteral administration and lead to chemotherapeutic toxicity, the rapid emergence of chemotherapeutic resistance also leads to tumor relapse. Thereupon, in an attempt to address these challenges, NP drug delivery systems (DDS) and combination strategies are currently being investigated. In this study, the combination of momelotinib with the CD44-directed CFM-4.16 PNPs was able to eradicate cancer cells efficiently compared to the individual drugs. The intrinsic properties of the developed nanomaterials, such as biodegradability, water solubility, loading content, and cellular internalization, support their application as an excellent DDS for hydrophobic CFM-4.16 and hydrophilic S0456 NIR dye. The study yielded PNPs with a smooth spherical shape, acceptable size (<100 nm), a slightly negative charge, and selective tumor uptake. Upon administration, the CD44-T-PNPs ester backbone was hydrolyzed into non-toxic products with gradual release of the CFM-4.16 molecules and S0456 at the tumor site with low off-target distribution, which paves the way for their application as a promising theranostic tool.
Data Availability Statement:
The data presented in this study are available in this article (and supplementary material). | 2021-03-04T05:46:06.817Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "daae13d9ce1a0858dc69cf712f15f7ece88348fa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/13/4/898/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "daae13d9ce1a0858dc69cf712f15f7ece88348fa",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
81194826 | pes2o/s2orc | v3-fos-license | PREVALENCE OF LOW BACK PAIN AMONG BANKERS OF LAHORE , PAKISTAN
Banking has a great importance in any nation around the globe. It has helped in emerging the dynamic parts of the economy and guides a new dawn of development. Banks are one of the most important parts of any country. In this modern time money and its necessity is very important. A developed financial system of the country can ensure scope for attaining economic development. A modern bank provides valuable services to a country. To attain development there should be a good developed financial system to support not only economic but also the society. So, a modern bank plays a vital role in socioeconomic
INTRODUCTION
Low back pain (LBP) is a major work-related health problem. In musculoskeletal health care, low back problems are among the most expensive conditions [1]. Among all musculoskeletal problems, low back pain is the most common [2]. Along with low back pain, the associated musculoskeletal disability is also one of the most important causes of disability around the world [3]. Low back pain is further classified into three types: acute, subacute, and chronic low back pain. Low back pain that continues for less than six weeks is called acute low back pain. Low back pain that occurs between six weeks and three months is called subacute low back pain, and back pain that goes on for more than three months is known as chronic low back pain [4]. Incorrect sitting posture is a cause of pain in the lower back. Cervical spine pain or neck pain can also be caused by incorrect sitting posture [5]. More than 80% of the population will experience an occurrence of low back pain at some point in life [6].
Banking has a great importance in any nation around the globe. It has helped in emerging the dynamic parts of the economy and guides a new dawn of development. Banks are one of the most important parts of any country. In this modern time, money and its necessity are very important. A developed financial system of the country can ensure scope for attaining economic development. A modern bank provides valuable services to a country. To attain development there should be a good developed financial system to support not only the economy but also the society. So, a modern bank plays a vital role in socioeconomic development. In previously reported work, the prevalence of low back pain among bank employees was 37.4%. There are many factors that can trigger low back pain in bankers, some of them being sitting posture, job pressure, psychosocial stress, job tenure, and working hours. As there are limited studies done in our setup, we conducted this study to find out the prevalence of low back pain among bankers of Lahore, Pakistan.
METHODS
In this cross-sectional study, 164 bankers were conveniently included during the period of April 2017 to September 2017. Bankers between 22 and 58 years of age were selected and interviewed because most bankers fall within this age group. Questionnaires were distributed among bankers. A total of 250 questionnaires were distributed, of which 150 were given by hand and the rest were sent through email; 70 questionnaires were eliminated as they did not meet the inclusion criteria (subjects between 22 and 58 years). The prevalence of low back pain and problems in daily life activities due to low back pain were recorded using the Japanese Orthopedic Association Back Pain Evaluation Questionnaire [14].
The sample size of this study was 164, calculated using the Epitools sample size calculator software. The formula used for this purpose was n = Z²P(1−P)/e², where Z is the value from the standard normal distribution corresponding to the desired confidence level (Z = 1.96 for a 95% CI), P is the expected true proportion, and e is the desired precision (half the desired CI width). Data were collected from bankers working in different banks of Lahore, Pakistan. The study was completed within 6 months after the approval of the synopsis.
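A minimal sketch of this sample-size calculation (the prevalence and precision inputs below are illustrative assumptions, not necessarily the exact values entered into the Epitools calculator):

```python
import math

def sample_size(p, e, z=1.96):
    """n = Z^2 * P * (1 - P) / e^2, rounded up to the next whole subject.

    p : expected true proportion
    e : desired precision (half the desired CI width)
    z : standard normal value for the confidence level (1.96 for 95% CI)
    """
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# Assumed inputs: an expected prevalence of 37.4% (the figure cited in the
# introduction) and a precision of about 7.4% give a result close to the
# 164 bankers analysed in this study.
print(sample_size(p=0.374, e=0.074))  # 165
```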
Bankers who were working in government and private banks were included, i.e., Dubai Islamic Bank, Allied Bank, National Bank of Pakistan, Silk Bank, and Habib Bank Limited; other banks were excluded. Participants with any recent accident, lumbar spine fracture, tumor, or surgery were also excluded.
The variables were described using descriptive statistics, including frequencies and bar charts. The Japanese Orthopedic Association Back Pain Evaluation Questionnaire was used in this study. Consent forms were signed by all bankers before the questionnaire was given to them. Statistical analysis was done using SPSS version 21.
Ethical Approval: The study did not involve invasive procedures or personal identifying data. The participants were interviewed only to obtain information on their baseline characteristics. Therefore, it was not necessary to seek formal external ethical approval. A written consent form was given to the participating bankers.
The minimum age of respondents was 22 years and the maximum age was 58 years; the mean age of respondents was 30.46 ± 6.57 years. A total of 164 subjects were included in the study, with 113 (68.90%) males and 51 (31.10%) females. The prevalence of low back pain in bankers was 52.4%.
This cross-sectional study was conducted to find out the prevalence of low back pain among bankers of Lahore, Pakistan. The study shows that low back pain was more common in males than in females. There were 113 (68.90%) males and 51 (31.10%) females. A survey conducted in Kancheepuram district determined the occurrence of musculoskeletal disorders and related disabilities among bank staff. The annual occurrence of musculoskeletal disorders was 33.8%, while that of the related disability was 8.5%. For both the disorders and the related disabilities, the occurrence of regional musculoskeletal disorders was highest in the lower back, and earlier studies also stated similar results with greater occurrence [13,15]. The present study showed that the prevalence of low back pain among bank workers of Lahore, Pakistan is 52.44%, whereas the results of a study conducted among bank staff of Yazd city stated that the prevalence of low back pain was 18.6% [16]. Another study conducted in Tamil Nadu, India to find out the occurrence of musculoskeletal disorders among bank staff reported that the prevalence of low back pain was 51.8% [17].
Lower back, upper back, neck, shoulder, wrist, and hand were the regions where most of the regional musculoskeletal complaints occurred. The prevalence of low back pain is high among bankers. Due to the type of job, the prevalence of low back pain was found to be higher among men than women. It was found that, due to low back pain, bankers were not able to perform daily life activities. The present study also found that many bankers remain seated for a long time and change their posture to relieve low back pain.
This study is limited to Lahore city of Pakistan. We only determined the prevalence among bankers. More research is needed to investigate the strength or level of weakness of the low back muscles. | 2019-03-18T14:04:20.398Z | 2018-06-30T00:00:00.000 | {
"year": 2018,
"sha1": "613eacf9641bd85259bef3211341e003ef5fc69d",
"oa_license": "CCBYNC",
"oa_url": "https://www.kmuj.kmu.edu.pk/article/download/17948/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "613eacf9641bd85259bef3211341e003ef5fc69d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219574763 | pes2o/s2orc | v3-fos-license | Percutaneous endoscopic gastrostomy – Too often? Too late? Who are the right patients for gastrostomy?
Percutaneous endoscopic gastrostomy is an established method to provide nutrition to patients with restricted oral uptake of fluids and calories. Here, we review the methods, indications and complications of this procedure. While gastrostomy can be safely and easily performed during gastroscopy, the right patients and timing for this intervention are not always chosen. Especially in patients with dementia, the indication for and timing of gastrostomies are often improper. In this patient group, clear data for enteral nutrition are lacking; however, some evidence suggests that patients with advanced dementia do not benefit, whereas patients with mild to moderate dementia might benefit from early enteral nutrition. Additionally, other patient groups with temporary or permanent restriction of oral uptake might be a useful target population for early enteral nutrition to maintain mobilization and muscle strength. We plead for a coordinated study program for these patient groups to identify suitable patients and the best timing for tube implantation.
INTRODUCTION
The method of percutaneous endosocopic gastrostomy (PEG) as a tool for enteral nutrition was first described in 1980 by Gauderer et al [1] . Since then, PEG has evolved as the method of choice in patients with apparent or imminent long-term restriction of oral nutrition. Gastrostomy is easy to install percutaneously using translucency during gastroscopy. The tube needs some care, which is largely standardized and, if necessary, can be easily removed by simple gastroscopy.
When a technique comes of age, it is time to review its current practice as well as the indications for and complications of this intervention. Is enteral nutrition indeed superior to parenteral nutrition? Are patients who receive a gastrostomy appropriately chosen for this intervention? Do we need more data to assess the usefulness of PEG in certain situations?
ENTERAL VS PARENTERAL NUTRITION
There is ample evidence from experimental and clinical studies that enteral nutrition (orally or via a tube) confers many positive effects in comparison to parenteral nutrition. These effects include preservation of the intestinal mucosal barrier, reduction of intestinal and other infections and improvement of the overall prognosis of patients with long-term artificial nutrition [2][3][4][5][6][7] . Additionally, parenteral nutrition requires administration of lipid formulations via a port system, which promotes port infections and septic complications. In a meta-analysis comprising almost 4000 patients who had undergone surgery for gastrointestinal (GI) tumors, parenteral nutrition was associated with a significantly higher rate of infectious and noninfectious complications [8] . In a very recent Japanese study, enteral nutrition via PEG was associated with a significantly longer survival (median survival of 317 vs 195 d) compared to parenteral nutrition in older patients with dysphagia [9] . Therefore, as far as it is technically and functionally feasible, enteral nutrition is preferable to parenteral nutrition. This is also emphasized by the ESPEN guideline for ethical aspects of artificial nutrition, which recommends enteral over parenteral nutrition in order "to support intestinal functions to the greatest possible extent" [10] .
COMPLICATIONS AND TYPE OF ACCESS AND TUBE
Several large case series have investigated complication rates in PEG patients. Severe complications during or immediately after gastrostomy are rare (1.8%) and include bleeding, perforation and peritonitis [11] . Late complications occur in approximately 5% of patients and are mostly associated with nursing failures, leading to tube leakage or blockage, mucosal overgrowth of the retaining plate in the stomach ("buried bumper") or aspiration. Mild local infections at the tube insertion site have been reported in approximately 11% of cases [11,12] and require only local treatment. More recent studies have reported severe complications (acute and during feeding) in 3.8%-10% of PEG patients [13,14] . Patients with dementia did not have significantly more complications than those without dementia in one large study [15] , but this remains controversial.
To ensure maximal effect of enteral nutrition via tube feeding, before gastrostomy, basic considerations are necessary for each individual case to check suitability of the patient and the clinical situation for this intervention (see Table 1). These considerations should also encompass alternative interventions such as metal stents or surgical procedures.
The pull method is the standard procedure for gastrostomy and tube implantation. Since 2000, a push-/introducer-PEG method has also been possible; this method is extremely attractive for patients with pharyngeal or esophageal tumor stenosis precluding gastroscopic access to the stomach [16]. However, in our clinical experience as well as according to existing data, whenever possible, the pull-PEG method should be preferred due to lower complication rates and better handling [17,18].
Table 1 Basic considerations for percutaneous endoscopic gastrostomy implantation and typical access types
Basic considerations for PEG implantation
Is oral nutrition -for whatever reason -so inadequate that intervention is justified?
Is enteral nutrition likely to be necessary for at least 3 wk?
Is the intestine distal to the access path functional?
Are risk factors for complications absent?
Is the anatomy suitable for PEG?
Is compliance sufficient for PEG handling (feeding in (half) upright position, infection prophylaxis, mobilization of the PEG tube, etc.)?
Typical access types
Pull-PEG (Ponsky-Gauderer): After diaphanoscopy, primary puncture with a trocar followed by pulling the tube with a thread through the esophagus.
Push-/Introducer-PEG (Russell): With diaphanoscopy, primary gastropexy followed by direct introduction of a balloon-fixed tube.
PEG: Percutaneous endoscopic gastrostomy.
ACCEPTED INDICATIONS FOR GASTROSTOMY
Percutaneous endoscopic gastrostomy has been established as a treatment option for transient or permanent dysphagia due to neurologic disorders, e.g., stroke [19,20] . In the same way, patients with oncological diseases of the mouth and throat as well as the esophagus can benefit from a temporary PEG tube during multimodal therapy, especially during radiotherapy. Ensuring adequate nutrition allows the therapy to be carried out in a timely manner and at full dose by preventing weight loss and, thus, ultimately improves patient prognosis [21] ( Table 2).
DEMENTIA -THE MOST DOUBTFUL INDICATION FOR GASTROSTOMY
Patients with degenerative cerebral diseases, above all dementia, have increasingly received gastrostomies and, in some studies and regions, represented the largest group of tube-fed patients [22][23][24]. Given the lack of evidence for a benefit in this patient group, this issue has generated debate for decades. In a time of increasing economic health burden, a need to improve the efficiency of health care in an aging society, and health care workers often pressed for time, this development is understandable but must be viewed with great skepticism. Frequently, the indication for gastrostomy is the result of an acute deterioration in health status and/or an expression of a state of emergency in caring for these patients. Occasionally, cultural or religious reasons also play a role when relatives do not approve of limiting therapy, although the quality of life is already dramatically reduced and the prognosis is limited. Sometimes, gastrostomy is advocated because people caring for the patient, including their physicians, are unable to cope with difficult nursing and medical situations.
Comfort feeding [25] is propagated as an alternative to artificial nutrition, but this approach requires more human resources, is very cost-intensive, and probably cannot be executed in high numbers within today's care structures. From a practical point of view, it is understandable that gastrostomy is performed to keep processes and personnel structures within affordable limits in a nursing home, but this approach often does not meet the needs of the patient. Ultimately, gastrostomy, as well as long-term tube feeding, carries similar risks as other interventional measures [26,27]; additionally, it may deprive patients of the pleasures of tasting and of social contacts. Furthermore, advanced dementia patients tend to manipulate access points and tubes and are thereby prone to injure themselves. A risk-benefit analysis is therefore particularly important in any patient group and should be provided to the patient and/or his relatives.
The wish to support the nutrition of demented patients by tube feeding leads to a high rate of gastrostomies in patients with already advanced disease. Often these patients already suffer from progressive malnutrition and immobility. In many studies of demented patients, the complication rate of gastrostomy is unacceptably high [28,29]. We and others think that this is related more to patient factors than to an innate risk of the intervention [30]. This view is supported by data from studies showing that control patients (with no PEG) had very similar or even worse mortality [29,31], and patients with only mild dementia had a significantly higher benefit than those with advanced dementia [28]. We call this the PEG paradox: choosing the patients too late for the intervention leads to missing benefit and greater harm, including higher morbidity and mortality.
A Cochrane systematic review conducted in 2009 did not find a single randomized controlled trial that investigated the benefits of tube feeding in patients with dementia [32] . Consequently, recent guidelines do not encourage gastrostomy in patients with advanced dementia [33] , although clear and high-quality data in this clinical field are lacking. Table 3 shows the recent studies that examined the effects of tube feeding in patients with dementia [34][35][36][37][38][39] . Reviews and meta-analyses [40][41][42] mostly identified two severe problems of PEG studies in dementia patients. First, no randomized, prospective, properly controlled studies have been conducted. Most available studies have retrospective designs and suffer from a huge selection bias, and control groups are poor or unmatched. Second, in most studies, patients with dementia are not properly staged and are treated as a homogenous patient group. This prevents the identification of subgroups (e.g., patients with only mild to moderate dementia) that might benefit from enteral nutrition via tube feeding. Other problems include poor exclusion and inclusion criteria, inappropriate outcome measures and small sample sizes [42] .
NON-NEUROLOGICAL PATIENT GROUPS WITH POSSIBLE BENEFIT
In our opinion and clinical experience, there are other patient groups in clinical medicine that could benefit significantly from early gastrostomy. Even though it is hardly supported by study data, patients with chronic pancreatitis and a pronounced (postprandial) pain syndrome often benefit from tube feeding that prevents weight loss, maintains mobility and physical activity, and thus improves their quality of life. In our clinical experience, pulmonary cachexia in chronic obstructive pulmonary disease (COPD) patients can also be either avoided or alleviated by early PEG placement. Although COPD has been identified as a risk factor for early mortality in patients with a PEG tube for other indications [43], there is not a single study investigating the effect of early enteral nutrition in patients with COPD who manifest cachexia or are at risk for malnutrition. In many cancers, even cancers outside the GI tract such as lung, prostate, and hematological tumors, malnutrition is frequent [44] (Table 4). Early and consistent enteral nutrition can enable timely and dose-appropriate chemotherapy and thus improve prognosis, since weight loss is one of the main risk factors for premature death in many cancers [45][46][47]. At least for the quality of life endpoint, this has already been shown in several studies [48], but proof for hard endpoints such as overall survival is currently lacking.
It is also conceivable that patients with other severe diseases (such as ulcerative reflux disease or severe eosinophilic esophagitis) may also benefit from gastrostomy, even if they are young. However, supporting data are lacking. Therefore, physicians are often reluctant to consider gastrostomy in these otherwise healthy and, often, young patients. At present, such decisions must remain extremely individualized. To what extent an intermittent PEG system in this patient population can contribute to the maintenance of a certain body weight and, thus, help to avoid physical weakness should be the subject of future studies. Nevertheless, data regarding the prognosis of such patients with or without enteral nutrition are quite important and economically and individually relevant; for example, for employment biographies.
TIMING OF GASTROSTOMY
In the neurological field, gastrostomy also represents an important therapeutic option for patients with amyotrophic lateral sclerosis (ALS), depending on the overall situation and the preference of these patients [19], who are conscious until their death. Weight loss in these patients is present very often, even without dysphagia [49]. Recent data also indicate that the time of tube insertion should be earlier than in the current approach [50]. Patients with ALS had a significantly better survival if enteral nutrition was initiated before the presence of weight loss [49]. To date, this aspect of the "timing" of gastrostomy has been disregarded. Earlier continuous enteral nutrition has the potential to improve prognosis significantly and should be considered in future studies. "Early" in this respect would mean gastrostomy before the underlying disease (regardless of whether neurological or non-neurological) has caused significant malnutrition and weight loss accompanied by catabolism or restricted mobility. Here, the GLIM criteria can play an important role (with the underlying disease as etiologic criterion and a clear-cut anticipatory definition of the phenotypic criterion) [51]. Timing of the intervention by such criteria would improve patient selection and reduce the complication rate. With early gastrostomy, the prevalence of low albumin, higher age, and higher comorbidity (all risk factors for worse outcome [29]) would be lower in patients selected for this intervention. This may close the circle of argumentation in the case of patients with dementia; much more than before, gastroenterologists must also learn to assess patients with chronic degenerative cerebral diseases. These diseases will increase substantially during the next decades. In patients with very advanced stages of dementia with complete immobility, lack of speech production, and contractures, a gastrostomy probably does more harm than good. However, patients with early or moderate dementia, for whom we have not thought about enteral feeding so far, could possibly benefit from tube feeding.
Early tube feeding could prevent the progressive immobility of dementia patients and, thus, preserve their quality of life for longer. Data regarding these patients are extremely scarce (see discussion above), but a few subgroup analyses as well as some studies with better defined patient groups support this view [28,36,52] . In a large Japanese study, the selection of patients with early or moderate dementia increased the proportion of patients with a benefit as measured by the level of independent living four times as compared to patients with advanced dementia [28] .
However, in studies regarding nutritional support for dementia patients, no general benefits were obtained in cognitive tests [33] . Therefore, while dementia cannot be stopped, mobility and quality of life may be maintained longer. To date, due to this poor data situation, tube feeding and parenteral nutrition have only been recommended "to overcome a crisis situation" and "for a limited time" in the guidelines for this group of patients overall, and not at all or only as "very rare exception" for patients in late stages [33] .
CONCLUSION
In our opinion, we must therefore pay attention to the following: Patients with dementia in very advanced stages should no longer be treated with artificial nutrition of any kind. We must explain this to the relatives and referring doctors. We must draw their attention to the data that suggest more frequent and more severe complications in these patients than in less seriously ill patients, as well as to the missing benefit for these patients. On the other hand, we may have to think about tube feeding at an earlier stage for patients at nutritional risk due to temporary or chronic restrictions of oral feeding. These patients should be made more consistently aware of the possibility of a gastrostomy before weight loss or even catabolism has occurred. This can affect younger, otherwise completely healthy patients as well as dementia patients in an earlier, still mobile stage.
In summary, while there may not necessarily be a current under- or over-utilization of PEG, there is a need to improve patient selection. To achieve this goal, we need more prospective randomized controlled studies to better define the indications for PEG in the patient groups and conditions outlined above. | 2020-06-04T09:05:57.476Z | 2020-05-28T00:00:00.000 | {
"year": 2020,
"sha1": "bbdfb2048c52f23f733cc6b45752bbc5711d8a1d",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v26.i20.2464",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "141f7f95722b976193d5cbff7ef7190ec706d641",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239044183 | pes2o/s2orc | v3-fos-license | Naive Bayes modification for intrusion detection system classification with zero probability
Received Jan 25, 2021; Revised Apr 30, 2021; Accepted Jul 25, 2021
One of the methods used for detection in an intrusion detection system is the Naïve Bayes algorithm. However, Naïve Bayes has a problem: when one of the probabilities is 0, it causes inaccurate predictions, or even no prediction at all. This paper proposes two modifications of the Naïve Bayes algorithm. The first modification eliminates variables that have 0 probability, and the second modification changes the multiplication operations to addition operations. These modifications are only applied when the Naïve Bayes algorithm does not find any prediction result because of zero probabilities. The results of this research show that the precision, recall, and accuracy of the modifications tend to increase and are better than those of the original Naïve Bayes algorithm. The highest precision, recall, and accuracy are obtained from the modification that changes the multiplication operation to addition. The increase in precision reaches 4%, the increase in recall reaches 2%, and the increase in accuracy reaches 2%.
INTRODUCTION
Network and data security are some of the most important concerns for an agency at this time. Various types of attacks that occur through the internet against networks and data encourage agencies to implement various systems to detect and prevent them [1]. One system that is often used to detect attacks is the intrusion detection system (IDS). IDS is a system used to automate the process of detecting suspicious activity in the network and analyze the possibility of attacks in these activities [2], [3]. There are several methods used in IDS for detection, including anomaly detection and misuse detection. Anomaly detection compares the state of an existing activity with the state during normal activity, while misuse detection matches the activity pattern with a pattern contained in a previously defined database [4]. Apart from these two methods, several studies have been carried out to conduct detection, prediction, or classification using data mining algorithms [5]-[8]. One algorithm that can be used for prediction in IDS is Naïve Bayes [9], [10], which gives good accuracy.
The Naïve Bayes algorithm is a fairly good classification algorithm that is often used in various studies [11]-[13]. This algorithm can be used for simple classification with a fixed Y variable and also for text classification [14]-[16]. Laga and Sarno [17] showed that Naïve Bayes gave the best accuracy compared with other classification methods, such as KNN, SVM, and random forest. However, the Naïve Bayes algorithm still has a drawback: if the probability value of one of the variables is 0, the final comparison result can become 0, which can lead to inaccurate prediction results [15], [17]-[20]. Research [15], [17] overcomes zero probability with RB-Bayes, while research [20] uses Hybrid N-gram, and research [19], [20] uses multinomial Naïve Bayes.
Based on the previous research [15], [17]-[20], it can be seen that the prediction result for the testing data may not be found due to a zero probability. Therefore, it is necessary to modify the Naïve Bayes algorithm to overcome this problem. This paper proposes modifications of the Naïve Bayes algorithm to overcome zero probability in the dataset. In this research, the Naïve Bayes algorithm and several Naïve Bayes modifications are implemented in a web-based application, and we analyze whether the modifications made can improve the accuracy of attack prediction in IDS or not. The first modification eliminates variables that have a probability value of 0, while the second modification changes the calculation from multiplication to addition. Both of these modifications are applied when the Naïve Bayes algorithm does not find any classification result.
RESEARCH METHOD
The research method used in this paper is shown in Figure 1. Each stage is carried out sequentially.
Problem identification
Problem analysis is the initial stage for identifying a case or problem [21], [22]. This stage aims to determine the problems that exist in the Naïve Bayes algorithm, especially in predicting attacks in the network. The problems identified at this stage are that a probability value of 0 in Naïve Bayes can make the prediction results inaccurate, and that IDS have a limited ability to predict attacks in the network.
Figure 1. Research method stages: problem identification, data collection, data preprocessing, implementation, and testing.
Data collection
The data in this study came from the NSL-KDD 99 dataset. NSL-KDD 99 is a dataset resulting from the development and reduction of fundamental problems of the KDD 99 dataset. The datasets used are small training set.csv and KDDTest+.csv [23]. Some of the advantages of the NSL-KDD 99 dataset compared to the original KDD 99 dataset include:
a. The data contained in the training data are not redundant, so the classification results are not biased.
b. There is no data duplication in the testing data.
c. The amount of data in the training and testing data is reasonable, which makes it affordable to run experiments on the complete datasets without having to randomly select a small portion.
Data preprocessing
In this stage, several processes are carried out to prepare the data before classification is performed using the Naïve Bayes algorithm. The process includes:
a. data cleansing
b. feature selection
c. variable discretization
Implementation
At this stage, the application starts to be built according to the design made in the previous stage. The application is realized as a web application using the PHP programming language and a MySQL database.
Testing
The next stage after implementation is testing the system. This stage is carried out to test the Naïve Bayes algorithm and the modifications that have been made. The tests are divided into two parts, namely algorithm testing and testing of the precision, recall, and accuracy values.
RESULTS AND DISCUSSION
This section discusses the research that has been done, starting from the preprocessing stage and continuing with the application implementation and testing.
Preprocessing
In this stage, several processes are carried out to process the data before classification is performed using the Naïve Bayes algorithm. This stage is implemented because preprocessing can improve the accuracy of Naive Bayes [24]. The process includes:
Data cleansing
This stage is done to eliminate the data in the testing data whose Y variable is not contained in the training data, and to change the class label (Y variable) from the attack name to the attack type, so that the number of Y classes is lower and system performance is faster. Attack names and attack types can be seen in Table 1.
Feature selection
This stage aims to reduce the number of X variables so that there are not too many, and to improve the accuracy of the resulting predictions. The method used for feature selection is correlation-based feature selection (CFS). CFS chooses the X variables that have the highest correlation with the Y variable but the lowest correlation with each other. The feature selection process in this study was carried out using the WEKA tool, which produced 10 X variables out of the 41 existing X variables. The selected variables are: flag, src_bytes, dst_bytes, hot, logged_in, count, srv_serror_rate, diff_srv_rate, dst_host_diff_srv_rate, and dst_host_srv_diff_host_rate.
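A minimal sketch of the idea behind CFS, using the merit heuristic commonly associated with it (this illustrates the principle only and is not the exact WEKA implementation used in this study; the correlations are made up):

```python
import math

def cfs_merit(feature_class_corr, feature_feature_corr):
    """Merit of a feature subset: high average feature-class correlation,
    low average feature-feature correlation.

    feature_class_corr   : list of |corr(feature_i, Y)| for the subset
    feature_feature_corr : list of |corr(feature_i, feature_j)| for all pairs
    """
    k = len(feature_class_corr)
    r_cf = sum(feature_class_corr) / k
    r_ff = (sum(feature_feature_corr) / len(feature_feature_corr)
            if feature_feature_corr else 0.0)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

# Hypothetical correlations for a 3-feature subset; a higher merit is better.
print(cfs_merit([0.6, 0.5, 0.4], [0.2, 0.1, 0.3]))
```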
Variable discretization
This stage aims to change the variables in the dataset that are of the continuous type into discrete types. The discretization method used in this stage is supervised discretization, because the X variables correlate directly with the Y variable. The results of variable discretization are shown in Table 2.
Implementation
At this stage, the application starts to be built according to the design made in the previous stage. The application is realized as a web application using the PHP programming language and a MySQL database. Three algorithms are applied in the application, as follows.
Naive Bayes
Naïve Bayes is a simple probabilistic classifier based on Bayes' theorem, where each feature/variable is assumed to be independent of the others. Bayes' theorem was put forward by a British scientist named Thomas Bayes as a theorem for predicting future probabilities based on experience [24]. The Bayes theorem equation can be seen in (1): P(H|X) = P(X|H)·P(H)/P(X), where H is the class (hypothesis) and X is the observed evidence.
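A minimal sketch of categorical Naïve Bayes as described here, written in Python rather than the PHP used for the web application; the tiny dataset is invented for illustration and also makes the zero-probability problem visible:

```python
from collections import Counter, defaultdict

def train(rows, labels):
    """Estimate class counts and per-feature value counts from categorical data."""
    prior = Counter(labels)
    cond = defaultdict(Counter)               # (feature index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, y)][v] += 1
    return prior, cond, len(labels)

def score(sample, prior, cond, n):
    """Plain Naive Bayes: prior times the product of conditional probabilities."""
    scores = {}
    for y, ny in prior.items():
        p = ny / n
        for i, v in enumerate(sample):
            p *= cond[(i, y)][v] / ny         # 0 if the value never co-occurred with y
        scores[y] = p
    return scores

rows = [["tcp", "SF"], ["tcp", "S0"], ["udp", "SF"], ["tcp", "S0"]]
labels = ["normal", "dos", "normal", "dos"]
prior, cond, n = train(rows, labels)
print(score(["udp", "S0"], prior, cond, n))   # both classes score 0 -> no prediction
```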
Modification 1
From the example calculation done above, it can be seen that there is a problem where no prediction result is found because all the classes have a probability value of 0. Therefore, modification 1 removes the variables that have a probability value of 0, so that the probability of each class can be compared without any zero value.
Modification 2
In modification 2, the zero probability problem in Naïve Bayes is overcome by changing the multiplication operation into an addition, so that the probability result of each class is not 0.
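A sketch of the two fallback modifications described above, building on the training step and baseline scorer from the previous snippet; as in the paper, they are only used when the plain product leaves every class at zero (illustrative Python, not the authors' PHP implementation):

```python
def score_mod1(sample, prior, cond, n):
    """Modification 1: skip features whose conditional probability is 0."""
    scores = {}
    for y, ny in prior.items():
        p = ny / n
        for i, v in enumerate(sample):
            cp = cond[(i, y)][v] / ny
            if cp > 0:                         # zero-probability feature is dropped
                p *= cp
        scores[y] = p
    return scores

def score_mod2(sample, prior, cond, n):
    """Modification 2: replace multiplication with addition."""
    scores = {}
    for y, ny in prior.items():
        p = ny / n
        for i, v in enumerate(sample):
            p += cond[(i, y)][v] / ny
        scores[y] = p
    return scores

def classify(sample, prior, cond, n):
    # Plain Naive Bayes product first.
    plain = {}
    for y, ny in prior.items():
        p = ny / n
        for i, v in enumerate(sample):
            p *= cond[(i, y)][v] / ny
        plain[y] = p
    if all(v == 0 for v in plain.values()):    # fallback only when every class is 0
        plain = score_mod2(sample, prior, cond, n)   # or score_mod1
    return max(plain, key=plain.get)

print(classify(["udp", "S0"], prior, cond, n))
```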
Application implementation
The implementation of the previously created design resulted in a web-based application for testing the modifications made in this study. In this application, there is one admin user who acts as the data manager. Admins are required to log in before they can manage the data in the system. On the main page, several menus allow the admin to manage data, including training data, testing data, and the testing page. In the training data and testing data menus, there is a submenu, namely the view data menu shown in Figure 2. On the manage data page, the admin can input data either through the form provided or via CSV import using the import data button. In addition, the admin can also delete all entered data using the delete all button.
On the data view page, the admin can view, edit, and delete data that has been entered. On the data testing list, there is a button that can be used to start the classification process. The testing menu can be used to see the results of the classification process that has been carried out by the system. The test page views are shown in Figure 3.
Testing
Testing is a way to assess the quality of an algorithm [25]. This stage is carried out with two methods: algorithm testing and precision, recall, and accuracy testing.
Algorithm test
Algorithm testing is done by comparing the results of manual calculations with the calculations performed by the application. If the manual calculation result is the same as the result calculated by the application, the application has performed the calculation correctly. The comparisons made are for the Naïve Bayes algorithm, the Naïve Bayes algorithm with modification 1, and the Naïve Bayes algorithm with modification 2.
This test was carried out using 10 training data and 3 testing data. The calculation was done manually and with the application that was built. From the results obtained, the manual calculations and the calculations using the application gave the same results. This shows that the built application implements the Naïve Bayes algorithm and the two modifications correctly.
Precision, recall, and accuracy test
Testing precision, recall, and accuracy is done by calculating the precision, recall, and accuracy values of the Naïve Bayes algorithm and the two modifications made. Precision is the estimated proportion of predicted positive cases that are correct, as formulated in (2) [26], [27]: Precision = TP / (TP + FP). Recall is the estimated proportion of positive cases that are correctly identified, as shown in (3): Recall = TP / (TP + FN). Accuracy is the proportion of the total number of predictions that are correct, as shown in (4): Accuracy = (TP + TN) / (TP + TN + FP + FN), where: TP: True Positive, TN: True Negative, FP: False Positive, FN: False Negative. In this test, the testing data used has a fixed amount of 300 data, while the training data starts from 200 data up to 1200 data with the addition of 200 data for each test. This is done to analyze the precision, recall, and accuracy of the Naïve Bayes algorithm along with the applied modifications. The results of testing precision, recall, and accuracy can be seen in Figures 4, 5, and 6.
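A small sketch of these three metrics computed from confusion-matrix counts (the counts below are made up for illustration, not the study's results):

```python
def metrics(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, accuracy

# Hypothetical confusion-matrix counts for 300 test records.
p, r, a = metrics(tp=130, tn=120, fp=25, fn=25)
print(f"precision={p:.2%} recall={r:.2%} accuracy={a:.2%}")
# precision=83.87% recall=83.87% accuracy=83.33%
```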
Results analysis
The application built in this study has one actor, namely the administrator, who has the right to manage training data and testing data and to run classification testing. Testing of the application was done with two methods, namely algorithm testing and precision, recall, and accuracy testing. The precision, recall, and accuracy tests on the Naïve Bayes algorithm and the two modifications showed an increase with increasing training data. This is because increasing the amount of data can increase the chance of matching data, so that adding data can increase the precision, recall, and accuracy values. The maximum precision for Naïve Bayes is 76.83% with 1200 training data, while for the same training data, the precision of modification 1 is 79.83% and the precision of modification 2 is 80.83%. This shows an increase in precision for modifications 1 and 2, with the highest value obtained by modification 2. The maximum recall for Naïve Bayes is 85.52% for 1200 training data, whereas for the same training data, the recall of modification 1 is 86.52% and the recall of modification 2 is 87.52%. This shows an increase in recall for modifications 1 and 2, with the highest value obtained by modification 2. The maximum accuracy for Naïve Bayes is 87.33% for 1200 training data, while for the same training data, the accuracy of modification 1 is 88.33% and the accuracy of modification 2 is 89.33%. This shows an increase in accuracy for modifications 1 and 2, with the highest value obtained by modification 2. Based on the tests that have been done, it can be concluded that the modification that eliminates variables with a value of 0 and the modification that changes the multiplication operation to addition can both increase precision, recall, and accuracy. The highest precision, recall, and accuracy are obtained from the modification that changes the multiplication operation to addition. The increase in precision reaches 4%, the increase in recall reaches 2%, and the increase in accuracy reaches 2%.
CONCLUSION
Based on the tests that have been done, it can be concluded that the precision, recall, and accuracy of the Naïve Bayes algorithm and the two modifications increase with increasing training data. In addition, the modification that eliminates variables with a value of 0 and the modification that changes the multiplication operation to addition can both increase precision, recall, and accuracy. The highest precision, recall, and accuracy are obtained from the modification that changes the multiplication operation to addition, which avoids class scores of 0. Based on the results obtained, it is recommended that these modifications be further improved in subsequent studies to achieve better results. | 2021-10-20T15:11:54.062Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "f015d831928958371c341d24042308737047d4c5",
"oa_license": "CCBYSA",
"oa_url": "https://beei.org/index.php/EEI/article/download/2833/2335",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "01fc1ed6d51ff4f48d7b6b5faf11dfa49f3b3d89",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
117327430 | pes2o/s2orc | v3-fos-license | Analysis and discussion on several problems when testing the thickness of reinforcement cover of concrete component
The reinforcement cover of a concrete component plays a very important role in ensuring the durability of various types of structures and the effective anchorage between the steel reinforcement and the concrete. This paper discusses and analyzes the problems that occur when testing the thickness of the reinforcement cover of concrete components, so as to provide reference and help for related work.
Introduction
The concrete component is composed of two kinds of materials, namely concrete and steel reinforcement. The two materials work together so that the component has a better bearing capacity. The use of steel reinforcement makes up for the poor tensile strength of concrete, so that the reinforcement is effectively wrapped by the concrete, which has good compressive strength. Nowadays, the design value of the reinforced concrete cover is designed and constructed according to the requirements of GB50010-2010 (2015 Edition) "Design specification for concrete structures" in combination with the design drawings. The acceptance or inspection work is mainly based on the requirements of GB 50204-2015 "Acceptance specification for construction quality of concrete structures". In this paper, several problems encountered in the process of field testing are analyzed and studied.
Common problems
In actual projects, the upper (negative bending moment) reinforcement of a concrete component often sinks due to workers trampling on it during construction, and an insufficient position of the lower (positive bending moment) reinforcement and other construction defects cause large shifts or deviations of the steel reinforcement. These are the reasons why the thickness of the reinforcement cover fails to meet the design requirements in actual projects. So how should excessive deviations (smaller or larger) be analyzed and dealt with? What is the relationship between the deviation problem and the size deviation? How should components be selected and the qualification rate evaluated? What is the significance of using GB/T50784-2013 to test the performance of the cover? These common problems are discussed and analyzed in this paper. The thickness of the cover is related to the bending bearing capacity of the positive rectangular section in GB50010-2010 (2015 Edition) "Design specification for concrete structures", namely M = α₁f_c·b·x·(h₀ − x/2) + f′_y·A′_s·(h₀ − a′_s). The formula shows that there is a linear relationship between the bending bearing capacity of the positive section and the effective height of the section h₀, where h₀ = section height − cover thickness − radius of the steel reinforcement. Therefore, when the section height and the steel reinforcement are fixed, the smaller the cover is, the greater the bearing capacity of the section will be. However, it is not the case that the smaller the cover, the better: the bond between the concrete and the steel reinforcement and the durability should also be considered.
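A minimal sketch of how the cover thickness feeds into the effective depth and the flexural capacity, simplified to a singly reinforced rectangular section (the material values and bar layout below are illustrative assumptions for a C30/HRB400 slab strip, not the project values from Table 1):

```python
def effective_depth(h_mm, cover_mm, bar_dia_mm):
    """h0 = section height - cover - radius of the tension reinforcement (mm)."""
    return h_mm - cover_mm - bar_dia_mm / 2

def moment_capacity(fc, fy, As, b, h0, alpha1=1.0):
    """Singly reinforced rectangular section: x = fy*As/(alpha1*fc*b),
    M = alpha1*fc*b*x*(h0 - x/2). Units N and mm; returns kN*m."""
    x = fy * As / (alpha1 * fc * b)
    return alpha1 * fc * b * x * (h0 - x / 2) / 1e6

# Illustrative 1 m strip of a 100 mm slab with 10 mm bars at 200 mm spacing (As ~ 393 mm^2);
# increasing the cover from 15 mm to 23 mm visibly reduces the capacity.
for cover in (15, 23):
    h0 = effective_depth(100, cover, 10)
    print(cover, "mm cover ->", round(moment_capacity(fc=14.3, fy=360, As=393, b=1000, h0=h0), 2), "kN*m/m")
```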
Analysis of the influence of plate thickness and thickness of cover on bearing capacity.
One project has a 3-frame structure; the strength grade of the concrete is C30, the steel reinforcement is HRB400, the two-layer, two-direction reinforcement is 10@200, and the plate thickness is 100 mm. The allowable deviation of the plate thickness is from -5 mm to +10 mm, and the allowable deviation of the cover thickness is from -5 mm to +8 mm (the design value of the cover is 15 mm). The size of the outdoor air-conditioning board of the project is 800 mm (length) × 600 mm (width) × 100 mm (thickness) (see Table 1). The design values are as follows: the plate thickness is 100 mm, the cover thickness is 15 mm, and the flexural capacity is 8.5 kN·m/m. According to the table, when the plate thickness is 95 mm and the cover thickness is 23 mm, the bearing capacity is 6.6 kN·m/m, which is 78% of the design value. Therefore, in an actual project, it is necessary to test the thickness of the plate when the deviation of the cover thickness is large. Finally, according to the measured values of the plate thickness and the cover thickness, we can comprehensively determine whether the bearing capacity of the plate meets the design requirements. It is necessary to adopt measures to strengthen plates that cannot meet the design requirements. The commonly used strengthening methods for flexural components are bonded carbon fiber, bonded steel plates, and so on.
Analysis of the influence of cover thickness on the bearing capacity of the component
According to the table above, when the plate thickness is 95 mm and the cover thickness is 7 mm, the bearing capacity is 8.9 kN·m/m, which meets the bearing-capacity requirement; however, the cover thickness is then less than the diameter of the steel reinforcement, which does not meet the requirements of the standard. This situation requires durability treatment, for example with polymer mortar. If many plates with a cover thickness of less than 10 mm are found in the project, it is recommended to carry out a full inspection, record the unqualified points, and apply durability treatment to the unqualified parts so as to ensure the durability of the components.
In the test of the air-conditioning slab mentioned above (the design value of the cover at the negative-moment reinforcement is 20 mm), the measured values were 12, 15, 16, 18, 20, 22, and 20 mm. The value of 12 mm falls below the design value by more than the allowable negative deviation, which does not meet the requirements of the specification and requires durability treatment. Long-term outdoor components should strictly follow the requirements of the standard, and unqualified components must be dealt with once they are found: points with deviations toward a too-small cover need durability treatment, whereas points with excessively large deviations should be checked and analyzed, and components that do not satisfy the bearing-capacity requirement need to be properly assessed and treated.
Table of test methods for beam and plate components
According to JGJ/T 152-2008 "technical specification for reinforcement detection in concrete" and GB 50204-2015 "acceptance specification for construction quality of concrete structures", the types of components, the testing items and methods, and the matters needing attention are shown in Table 2. For beam components, it should be noted that the design values given are generally those for the outermost longitudinal reinforcement, whereas the acceptance specification requires the actual reinforcement to be tested against the design value of the cover of the outermost reinforcement together with the diameter of the stirrup.
Table for evaluating the qualification rate of beam and plate components
According to JGJ/T 152-2008 "technical specification for reinforcement detection in concrete" and GB 50204-2015 "acceptance specification for construction quality of concrete structures", the qualification criteria for the different types of components are shown in Table 3 and can be summarized as follows.
Plate:
- Qualified: the qualification rate is not less than 90%, and the deviations lie between -3.5 mm and +15 mm.
- Unqualified: any component with a qualification rate less than 90%, or with a deviation of less than -3.5 mm or more than +15 mm.
- Re-inspection: the qualification rate is less than 90% but not less than 80%, and the deviations lie between -3.5 mm and +15 mm; the same number of points is tested again, and the plate is qualified if the combined qualification rate of the two samplings is not less than 90%, otherwise it is unqualified.
Beam (non-cantilever and cantilever):
- Qualified: the qualification rate is not less than 90%, and the deviations lie between -2.5 mm and +12 mm.
- Unqualified: any component with a qualification rate less than 90%, or with a deviation of less than -2.5 mm or more than +12 mm.
- Re-inspection: the qualification rate is less than 90% but not less than 80%, and the deviations lie between -2.5 mm and +12 mm; the same number of points is tested again, and the beam is qualified if the combined qualification rate of the two samplings is not less than 90%, otherwise it is unqualified.
(A sketch of this evaluation logic is given below.)
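The following minimal Python sketch shows one way the acceptance logic summarized above could be expressed. The deviation limits are those quoted for Table 3; the routing of the 80-90% band to a re-inspection outcome is our reading of the overlapping rows, and the function and variable names are ours, not code from the standard.

```python
# Hypothetical sketch of the plate/beam cover acceptance logic from Table 3.

DEVIATION_LIMITS = {            # permissible range of measured deviation, mm
    "plate": (-3.5, 15.0),
    "beam": (-2.5, 12.0),       # non-cantilever and cantilever beams
}


def evaluate_component(kind, pass_rate, deviations):
    """Classify one component from its qualification rate and point deviations."""
    low, high = DEVIATION_LIMITS[kind]
    if any(d < low or d > high for d in deviations):
        return "unqualified"     # a single out-of-range point fails the component
    if pass_rate >= 0.90:
        return "qualified"
    if pass_rate >= 0.80:
        return "re-inspect: double the sample; combined rate must reach 90%"
    return "unqualified"


# Example: the air-conditioning slab measurements quoted above
# (design cover 20 mm, assumed allowable deviation -5 mm to +8 mm).
design, allowance = 20.0, (-5.0, 8.0)
measured = [12, 15, 16, 18, 20, 22, 20]
deviations = [m - design for m in measured]
rate = sum(allowance[0] <= d <= allowance[1] for d in deviations) / len(deviations)
print(evaluate_component("plate", rate, deviations))   # the 12 mm point fails
```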
Comparison and analysis of testing and acceptance criteria for the performance of reinforcement cover
According to clause 9.3.5 of GB/T 50784 "technical specification for on-site test of concrete structures", certain requirements must be met when testing structural performance. In this paper, the two evaluation methods are compared and analyzed using a project example. The measured values of the reinforcement cover thickness are shown in Table 4. Note to Table 4: the presumed value from the performance test is derived in accordance with the "technical specification for on-site test of concrete structures"; the test value of the concrete cover thickness is determined from the estimated upper and lower limit values together with 10% of the average value.
Table 5 shows that the presumed value of the cover-thickness performance of the reinforced concrete is not necessarily related to the measurement and evaluation based on GB 50204-2015. The performance test is an overall evaluation of all components in the project and emphasizes the numerical average for the four types of components; even if there are poor components, the project may still be presumed to meet the design requirements. The test according to GB 50204-2015, in contrast, mainly checks whether there are poor components and whether the qualification rate is adequate, and both requirements must be met, so it is a stricter basis for appraising a project. This paper suggests that a project should not only satisfy the qualification-rate requirement but also consider the presumed performance values of the four kinds of components, so as to reflect the overall cover-thickness performance of the project.
Conclusion
The thickness of the reinforcement cover cannot be ignored as an acceptance index. It is related to the bearing capacity and durability of the components and of the structure, so engineers and technicians must not neglect it; otherwise, unnecessary trouble may arise in the future. This paper analyzes the problems encountered when testing the thickness of the reinforcement cover. Through a project example, it analyzes the influence of the cover thickness on the bearing capacity and durability of the component and proposes that, for components with large cover deviations, the floor thickness should also be measured and the capacity calculated and analyzed. It puts forward a table of test methods for component covers and a table for evaluating the qualification rate of beam and plate components. It also proposes that project evaluation should consider not only the qualification rate but also the structural-performance test, to check whether the presumed value meets the design requirements. | 2019-04-16T13:27:44.413Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "3acf3060f84e296869b8f7ddbe793b94e1e2b2a4",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/128/1/012192",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "43782c22e558bd60d76167560e44fb65db49eed0",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
7322775 | pes2o/s2orc | v3-fos-license | Autonomic nervous alterations associated with daily level of fatigue
Background Fatigue is a common symptom in both sick and healthy people. We examined autonomic nervous alterations associated with fatigue to clarify the mechanisms underlying fatigue. Methods The study group consisted of 19 healthy participants who performed a 2-back test for 30 min as a fatigue-inducing mental task session. Before and after the session, they completed the advanced trail making test (ATMT) for 30 min for mental fatigue evaluation, subjective scales to measure fatigue sensation, and underwent electrocardiography to allow assessment of autonomic nerve activities. Results After the fatigue-inducing task, the total error counts on the ATMT tended to increase (P = 0.076); the ATMT total trial counts (P = 0.001), the subjective level of fatigue (P < 0.001), and the % low-frequency power (%LF) (P = 0.035) increased significantly; and the % high-frequency power (%HF) decreased compared with before the fatigue-inducing task, although this did not reach statistical significance (P = 0.170). LF measured in absolute units did not change significantly before and after the fatigue-inducing task (P = 0.771), whereas HF measured in absolute units decreased after the task (P = 0.020). The %LF and LF/HF ratio were positively associated with the daily level of fatigue evaluated using Chalder's fatigue scale. In addition, %HF was negatively associated with the fatigue score. Conclusions Increased sympathetic activity and decreased parasympathetic activity may be characteristic features of both acute and daily levels of fatigue. Our findings provide new perspectives on the mechanisms underlying fatigue.
Background
Many people experience fatigue after or during a prolonged period of activity [1]. Large community surveys have reported that up to half of the general adult population complains of fatigue [2,3]. In Japan, more than half of the general adult population complains of fatigue, and more than one third of the population suffers from chronic fatigue [4]. Acute fatigue is a normal phenomenon that disappears after a period of rest; in contrast, chronic fatigue is sometimes irreversible and the compensation mechanisms that are useful in reducing acute fatigue are not effective [5]. Therefore, it is important to clarify the mechanisms underlying fatigue, and in particular, long-term fatigue.
Fatigue-related alterations of autonomic nerve activities have been reported in patients with chronic fatigue syndrome (CFS) [6][7][8][9][10][11], multiple sclerosis [12][13][14], and primary biliary cirrhosis [9,15]. These reports suggest that changes in autonomic nerve activity are related to the mechanisms underlying fatigue. However, this relationship has been demonstrated only in patients with specific diseases and not in healthy subjects.
Recently, we demonstrated that decreased parasympathetic activity and increased sympathetic activity were induced in healthy volunteers following a 30-min fatigue-inducing mental task session [16]. As chronic or daily levels of fatigue can be evaluated using a paper-and-pencil questionnaire [17], the relationships between daily levels of fatigue and alterations of autonomic nerve activities may be identified. In addition, we can evaluate acute and daily levels of fatigue simultaneously in the same participants by using previously performed fatigue-inducing and fatigue-evaluating experiments [16]. The aim of the present study was to determine alterations in autonomic nerve activities associated with daily levels of fatigue as well as acute fatigue.
Participants
Nineteen healthy volunteers (mean age, 43.6 ± 10.1 years; 15 women and 4 men) were enrolled. None of the participants had a history of medical illness. Participants with a history of health problems, taking chronic medication or supplemental vitamins, and those who weighed < 40 kg [18][19][20][21][22] were excluded. Good health was assessed by physical examination, blood pressure, and heart rate. The protocol was approved by the Ethics Committee of Osaka City University, and all participants provided written informed consent.
Experimental design
The day before the experiment, participants finished dinner by 9:00 pm and then fasted overnight. The following morning, they had breakfast before the visit, and the experiment started at 10:00 a.m. after their arrival. Before the start of the experiment, a paper-and-pencil questionnaire was distributed to participants to evaluate their daily level of fatigue. As a fatigue-inducing mental task session, participants performed 2-back test [23] trials for 30 min [24], and as a fatigue-evaluating mental task, they performed the advanced trail making test (ATMT; [25]) for 30 min [24] before and after the fatigue-inducing task session. Just before and after the fatigue-inducing session, they recorded their subjective sensation of fatigue on a visual analogue scale (VAS) from 0 (no fatigue) to 100 (complete exhaustion) [26] and underwent electrocardiography (ECG) with their eyes closed for 1 min while sitting on a chair. VAS and ECG recordings were performed before the ATMT trials. This study was conducted in a quiet, temperature- and humidity-controlled environment. For 1 day before the visit, participants refrained from intense mental and physical activities and caffeinated beverages, consumed a normal diet, and maintained normal sleeping hours.
Questionnaire
A paper-and-pencil questionnaire was distributed to participants. The severity of daily level of fatigue was measured using Chalder's fatigue scale (Chalder et al. 1993), which has been previously used in Japanese participants [27]. The reliability and validity of the Japanese version of Chalder's fatigue scale to evaluate the severity of daily fatigue have been previously confirmed [27]. The fatigue scale consists of 11 questions using a 4-point (0-3) Likert scale that allows the following responses: 0 = less than usual; 1 = no more than usual; 2 = more than usual; 3 = much more than usual during the past several weeks. The total score for the 11-item fatigue scale ranges from 0 to 33, with higher scores indicating a greater degree of daily fatigue.
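A minimal scoring helper for the scale as described above might look as follows; the response labels are paraphrased from the item anchors and the example responses are invented.

```python
# Hypothetical scoring helper for the 11-item Chalder fatigue scale described
# above (each item scored 0-3, total range 0-33).

LIKERT = {
    "less than usual": 0,
    "no more than usual": 1,
    "more than usual": 2,
    "much more than usual": 3,
}


def chalder_total(responses):
    """Sum the 11 item scores; higher totals indicate greater daily fatigue."""
    if len(responses) != 11:
        raise ValueError("the scale has exactly 11 items")
    return sum(LIKERT[r] for r in responses)


example = ["more than usual"] * 4 + ["no more than usual"] * 7
print(chalder_total(example))  # 4 * 2 + 7 * 1 = 15
```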
Fatigue-inducing mental task
As a fatigue-inducing mental task, participants performed the 2-back test for 30 min [24]. During this task, one of four letters was presented on a display of a personal computer every 3 sec, and they had to judge whether the target letter presented at the center of the screen was the same as the one that had appeared 2 presentations before. If it was, they were to press the right mouse button with their right middle finger; if it was not, they were to press the left mouse button with their right index finger. They were instructed to perform the task trials as quickly and as correctly as possible. The results of each 2-back trial, that is, a correct response or error, were continuously presented on the display of the personal computer.
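The decision rule of the 2-back task can be summarized in a few lines of code. The four-letter set and the mapping of match/non-match to the right/left mouse button follow the description above, while the handling of the first two trials (for which no 2-back comparison exists) is an assumption.

```python
import random

LETTERS = "ABCD"


def expected_response(sequence, i):
    """Correct button for trial i of a 2-back sequence."""
    if i < 2:
        return "left"            # no letter two trials back yet (assumption)
    return "right" if sequence[i] == sequence[i - 2] else "left"


def score_run(sequence, responses):
    """Fraction of trials answered correctly."""
    correct = sum(expected_response(sequence, i) == r
                  for i, r in enumerate(responses))
    return correct / len(sequence)


seq = [random.choice(LETTERS) for _ in range(10)]
print(score_run(seq, [expected_response(seq, i) for i in range(10)]))  # 1.0
```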
Fatigue-evaluating mental task
As a fatigue-evaluating mental task, participants performed the ATMT for 30 min [24]. In this test, circles numbered from 1 to 25 were randomly placed on the display of a personal computer, and participants were required to use a computer mouse to touch these circles in sequence, starting with number 1. Tasks A, B, and C each ended when the 25th target was touched, and participants then continued directly with the next task (B, C, and A, in that order), repeatedly, for 30 min. The number of touches and the time taken were recorded. In task A of the ATMT, when they touched a target circle, it remained in the same position, but the color changed from black to yellow. The positions of the other circles remained the same. In task B of the ATMT, when they touched the first target circle, it disappeared, and circle number 26 appeared in a different position on the screen. The positions of the other circles remained the same. For example, touching circles 2, 3, and 4 resulted in their disappearance and the addition of circles 27, 28, and 29 on the screen, so that there were always 25 circles on the screen. In task C of the ATMT, when they touched the first target circle, it disappeared and circle number 26 appeared in a different position on the screen, and the positions of all the other circles changed at random. As in task B, there were always 25 circles on the screen. Participants performed tasks A, B, and C consecutively.
They were instructed to perform all task trials as quickly and as correctly as possible.
Electrocardiographic analyses
ECG was recorded using an Active Tracer AC301 (Global Medical Solution Inc., Tokyo, Japan), and the ECG was analyzed using MemCalc for Windows (Global Medical Solution Inc.). Data were analyzed offline after analogue-to-digital conversion at 250 Hz. R-R interval variability was measured as an indicator of autonomic nerve activity, and irregularities in the ECG recordings were excluded from the analyses. For the frequency-domain analyses of the R-R intervals, low-frequency power (LF) was calculated as the power within the frequency range of 0.04 to 0.15 Hz, and high-frequency power (HF) was calculated as that within the frequency range of 0.15 to 0.4 Hz. LF and HF were measured in absolute and normalized units; normalization was performed by dividing the absolute power by the total variance and then multiplying by 100. The %HF is vagally mediated [28][29][30], but the %LF originates from a variety of sympathetic and vagal mechanisms [28,31]. The LF/HF ratio represents the sympathetic to parasympathetic balance [32].
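As an illustration of the frequency-domain processing described above, the following sketch derives LF, HF, %LF, %HF, and the LF/HF ratio from a series of R-R intervals. The 4 Hz resampling rate, Welch's method, and the use of total spectral power as a stand-in for the "total variance" are our choices and may differ from what the MemCalc software actually does.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d


def hrv_frequency_domain(rr_ms, fs=4.0):
    """Derive LF, HF, %LF, %HF and LF/HF from R-R intervals (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                       # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)          # evenly spaced time grid
    tachogram = interp1d(t, rr, kind="cubic")(grid)  # resampled R-R series
    freqs, psd = welch(tachogram - tachogram.mean(), fs=fs,
                       nperseg=min(256, len(grid)))
    lf_band = (freqs >= 0.04) & (freqs < 0.15)
    hf_band = (freqs >= 0.15) & (freqs < 0.40)
    lf = np.trapz(psd[lf_band], freqs[lf_band])      # absolute LF power
    hf = np.trapz(psd[hf_band], freqs[hf_band])      # absolute HF power
    total = np.trapz(psd, freqs)                     # stands in for total variance
    return {"LF": lf, "HF": hf, "%LF": 100 * lf / total,
            "%HF": 100 * hf / total, "LF/HF": lf / hf}


# Synthetic 4-minute recording used only to show the call signature.
rng = np.random.default_rng(0)
rr_demo = 1000 + 40 * np.sin(2 * np.pi * 0.1 * np.arange(240)) + rng.normal(0, 20, 240)
print(hrv_frequency_domain(rr_demo))
```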
Statistical analysis
Values are shown as mean ± SD unless otherwise noted. Paired t-tests were used to evaluate differences before and after the fatigue-inducing mental task for the ATMT performance and VAS scores, and Wilcoxon's signed-rank tests were used for the heart rate variability indices. Pearson's correlation analyses were conducted to evaluate relationships between two variables. All P values were 2-tailed, and P values less than 0.05 were considered statistically significant. Statistical analyses were performed using the SPSS 17.0 software package (SPSS Inc., Chicago, IL).
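For readers who prefer open-source tooling, the comparisons named above map directly onto SciPy calls, as in the hedged sketch below; all data in it are invented placeholders, not values from the study.

```python
import numpy as np
from scipy import stats

pre_vas = np.array([30, 42, 25, 50, 38], dtype=float)       # hypothetical VAS scores
post_vas = np.array([55, 60, 40, 72, 66], dtype=float)
pre_lf = np.array([45.0, 52.1, 38.4, 61.0, 49.5])            # hypothetical %LF values
post_lf = np.array([50.2, 58.7, 41.0, 66.3, 55.1])
fatigue_score = np.array([10, 14, 8, 18, 12], dtype=float)   # hypothetical Chalder totals

# Paired t-test for task performance / VAS before vs. after the task
t_stat, p_t = stats.ttest_rel(pre_vas, post_vas)

# Wilcoxon signed-rank test for heart rate variability indices
w_stat, p_w = stats.wilcoxon(pre_lf, post_lf)

# Pearson correlation between two variables (e.g. %LF and the fatigue score)
r, p_r = stats.pearsonr(pre_lf, fatigue_score)

print(f"paired t: P={p_t:.3f}, Wilcoxon: P={p_w:.3f}, Pearson r={r:.2f} (P={p_r:.3f})")
```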
Results
Task performance, subjective level of fatigue, and ECG variables before and after the fatigue-inducing mental task are shown in Table 1. After the fatigue-inducing task, the total error counts of the ATMT during the fatigue-evaluating mental task tended to increase compared with before the fatigue-inducing task, although the difference did not reach statistical significance (P = 0.076). In addition, after the fatigue-inducing task, the total trial counts of the ATMT (the sum of the counts of circles touched in sequence) increased significantly (P = 0.001), indicating that participants became faster after the fatigue-inducing task; the subjective level of fatigue (P < 0.001) and %LF (P = 0.035) also increased significantly, whereas %HF decreased, although this did not reach statistical significance (P = 0.170). Although LF measured in absolute units did not change significantly before and after the fatigue-inducing task (P = 0.771), HF measured in absolute units decreased after the task (P = 0.020). The LF/HF ratio did not change significantly before and after the fatigue-inducing task (P = 0.805).
Relationships between Chalder's fatigue scale score and task performances on the ATMT before the fatigue-inducing mental task are shown in Figure 1. The total error and trial counts were not associated with the Chalder's fatigue scale score.
Relationships between Chalder's fatigue scale score and ECG variables before the fatigue-inducing mental task are shown in Figure 2. %LF and LF/HF ratio were positively associated with the Chalder's fatigue scale score, and %HF was negatively associated with the fatigue score.
Discussion
The present study showed that, after an acute fatigue-inducing mental task, subjective levels of fatigue, %LF, and (as a tendency) total error counts on the ATMT increased, and %HF decreased (although this did not reach statistical significance), compared with before the fatigue-inducing task. In addition, %LF and the LF/HF ratio were positively, and %HF was negatively, associated with the Chalder's fatigue scale score. These findings are consistent with the results of our previous report [27], in which decreased parasympathetic and increased sympathetic activities were induced after a 30-min fatigue-inducing mental task. The brain network, including the prefrontal cortex (PFC) and anterior cingulate cortex (ACC), has been shown to play an important role in the regulation of autonomic nervous activities [33]. Decreased parasympathetic activity and increased sympathetic activity are interpreted as a state of autonomic hypervigilance [34,35], and sympathoexcitatory subcortical circuits are normally under the inhibitory control of the PFC [34][35][36]. In addition, the ACC is related to the regulation of parasympathetic activity [37,38]. Because impaired selective attention, assessed by increased error counts on the ATMT [24], was observed after the fatigue-inducing task, and the selective attention process activates the PFC and ACC [39][40][41][42], acute mental load might introduce temporary dysfunctions in the PFC and ACC that cause decreased parasympathetic and increased sympathetic activities.
Figure 1 Relationships between Chalder's fatigue scale score and task performances. The task performances were assessed using total error counts (a) and total trial counts (b) of the fatigue-evaluating mental task. Linear regression lines, Pearson's correlation coefficients (R), and P values are shown.
Decreased parasympathetic nerve activity and increased sympathetic activity have also been observed in patients with CFS [10,11,43]. Because a bilateral reduction of grey-matter volume in the PFC [44], as well as decreased cerebral blood flow [45] and a reduction of serotonin transporters [46] in the ACC, have been reported in patients with CFS, decreased parasympathetic and increased sympathetic activities may be induced by chronic anatomical and/or functional alterations in the PFC and ACC in these patients. Hence, chronic fatigue is characterized by decreased parasympathetic and increased sympathetic activities, and the pathophysiological background may be explained by chronic alterations in the PFC and ACC.
Limitations
The present study has several limitations. First, the study included a small number of participants. In addition, we did not obtain information on factors such as smoking habits or lifestyle, and a great majority of the participants were women, possibly because we recruited the participants via advertisement. Studies involving a larger number of participants and more detailed information regarding the participants are needed to allow generalization of these results. Second, conclusions about cause-and-effect relationships cannot be made owing to the cross-sectional nature of the data. Third, heart rate variability indices are measures of autonomic modulation of the sinus node, not of autonomic tone, and they must be interpreted in light of the heart rate itself. Finally, the daily level of fatigue was evaluated using self-reports and, as such, was subjective. An objective biomarker for the daily level of fatigue has been developed: the number of salivary human herpesvirus (HHV)-6 DNA copies was decreased after a holiday of approximately 1 week [47]. The reliability and validity of the results of this study should be confirmed using this biomarker.
Conclusions
The present results provide evidence that increased sympathetic activity and decreased parasympathetic activity are associated with both the acute and daily level of fatigue. Because increased sympathetic activity and decreased parasympathetic activity have been reported in patients with CFS [10,11,43], these alterations of the autonomic nerve activities may be common characteristics of fatigue. Based on these findings, transitional mechanisms from acute fatigue to chronic fatigue and chronic fatigue to chronic fatigue-associated disease might be clarified. Our findings provide new perspectives on the mechanisms underlying fatigue. | 2014-10-01T00:00:00.000Z | 2011-10-27T00:00:00.000 | {
"year": 2011,
"sha1": "facf10bc065940b6ae5c935b1447a3baf0e25101",
"oa_license": "CCBY",
"oa_url": "https://behavioralandbrainfunctions.biomedcentral.com/track/pdf/10.1186/1744-9081-7-46",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "facf10bc065940b6ae5c935b1447a3baf0e25101",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
232364010 | pes2o/s2orc | v3-fos-license | MicroRNA-223-5p suppresses the progression of nasopharyngeal carcinoma by targeting DCLK1
The aim of the present study was to investigate the function of microRNA (miR)-223-5p in the malignant biological behavior of nasopharyngeal carcinoma (NPC) and elucidate the underlying molecular mechanism. The expression levels of miR-223-5p and doublecortin-like kinase 1 (DCLK1) were detected via reverse transcription-quantitative PCR analysis. Cell viability was evaluated using Cell Counting Kit-8 assay. Cell migration and invasion were measured via Transwell assays, while a luciferase reporter assay was conducted to identify the interaction between miR-223-5p and DCLK1. The results demonstrated that miR-223-5p expression was significantly downregulated, whereas DCLK1 expression was significantly upregulated in NPC tissues and cells. Moreover, both miR-223-5p overexpression and DCLK1 silencing markedly suppressed the progression of NPC. It was also observed that miR-223-5p directly targeted DCLK1 and decreased its expression. Furthermore, it was suggested that DCLK1 overexpression may partially reverse the suppressive effects of miR-223-5p on the progression of NPC. Collectively, the results of the present study indicated that miR-223-5p may suppress NPC progression by targeting DCLK1, thereby indicating a novel potential approach to the diagnosis and treatment of NPC.
Introduction
Nasopharyngeal carcinoma (NPC) is a type of cancer that originates from epithelial cells in the nasopharynx (1). According to GLOBOCAN 2012, 86,691 new NPC cases and 50,831 NPC-associated deaths were reported worldwide (2). Although NPC is relatively rare in the rest of the world, its prevalence is relatively high in Southern China and Southeast Asian countries (3). Epidemiological data indicate that the NPC incidence rate has gradually decreased and the mortality rate of NPC has significantly declined in recent years due to the advances in diagnostic and therapeutic strategies (4). However, the 5-year survival rate of patients with NPC remains very low due to recurrence and metastasis, particularly in cases with late-stage disease (5). Therefore, it is necessary to identify new and efficient methods for NPC treatment.
MicroRNAs (miRNAs/miRs) are short-chain non-coding RNAs that are 19-24 nucleotides in length. Previous studies have reported that miRNAs play key roles in various cellular functions, such as proliferation, metastasis, apoptosis and chemoresistance (6). miRNAs usually bind to the 3'-untranslated region (UTR) of their target mRNAs and inhibit translation (7). Numerous studies have revealed that miR-223-5p is significantly downregulated in cancer tissues and cells, suggesting its inhibitory effect on tumor progression. For example, miR-223-5p may inhibit the progression of non-small cell lung cancer by modulating E2F transcription factor 8 (8), suppress the malignant phenotype of prostate cancer cells via modulating ETS transcription factor ERG (9) and repress the aggressiveness of bladder cancer cells (10). However, the potential regulatory role of miR-223-5p in NPC has not been extensively investigated.
Doublecortin-like kinase 1 (DCLK1) is a kinase highly expressed in various types of cancer (11) and has been identified as a potential oncogene implicated in the progression of human cancers, such as pancreatic cancer (12), lung squamous cell carcinoma (13), ovarian clear cell carcinoma (14), intestinal tumors (15) and basal-like breast cancer (16). In addition, DCLK1 has been reported to contribute to the tumorigenic process of colorectal cancer by downregulating miR-200c (17). However, whether DCLK1 is implicated in NPC progression and the underlying mechanism remain to be investigated.
The present study was undertaken to investigate miR-223-5p and DCLK1 expression levels in NPC tissues and cells, and to determine the effects of these factors on the viability, migration and invasion of NPC cells. The aim was to elucidate whether miR-223-5p can suppress NPC progression by downregulating DCLK1 expression, in the hope of providing a novel approach to the diagnosis and treatment of NPC.
Materials and methods
Clinical tissues. A total of 28 paired samples of NPC and adjacent normal nasopharyngeal tissues were obtained from patients undergoing surgery at The Third Affiliated Hospital of Soochow University (Changzhou, China) between March 2016 and January 2019. All the tissues were reviewed by two pathologists blinded to the clinicopathological information and were immediately frozen in liquid nitrogen after surgical excision. None of the patients with NPC had received anticancer treatments prior to surgery. The present study was approved by the Ethics Committee of The Third Affiliated Hospital of Soochow University, and written informed consent was obtained from each patient enrolled in the study.
Cell culture and transfection. Human NPC cell lines (6-10B, 5-8F, SUNE-1, SUNE-2 and C666-1) and a human nasopharyngeal-derived epithelial cell line (NP69) were obtained from the Cell Bank of the Chinese Academy of Science. Cells were cultured in DMEM (Gibco; Thermo Fisher Scientific, Inc.) containing 12 U/l gentamicin and 100 ml/l inactivated FBS (Gibco; Thermo Fisher Scientific, Inc.) in a humidified atmosphere containing 5% CO 2 at 37˚C, and were passaged once every 2 days.
Cell Counting Kit (CCK)-8 assay.
After 48 h of transfection, the cells were plated into 96-well plates at 5,000 cells per well. Then, 20 µl CCK-8 reagent (Dojindo Molecular Technologies, Inc.) was added into each well after the cells were cultured for 0, 48, 72 and 96 h at 37˚C with 5% CO 2 . Subsequently, the cells were incubated at room temperature for 4 h. The optical density of each well was measured using a microplate reader (Bio-Rad Laboratories, Inc.) at a wavelength of 450 nm. The cell viability was considered to be directly proportional to the optical density of the cells in this assay.
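A minimal sketch of how relative viability can be derived from the measured optical densities is given below; the background (blank) subtraction and the replicate averaging are our assumptions rather than steps quoted from the protocol.

```python
# Hypothetical readout of a CCK-8 assay: absorbance at 450 nm is taken as
# proportional to the number of viable cells, and each condition is expressed
# relative to the control wells.

def relative_viability(od_treated, od_control, od_blank=0.0):
    """Return viability of the treated wells as a percentage of the control."""
    treated = sum(od_treated) / len(od_treated) - od_blank
    control = sum(od_control) / len(od_control) - od_blank
    return 100.0 * treated / control


print(relative_viability([0.82, 0.79, 0.85], [1.21, 1.18, 1.25], od_blank=0.08))
```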
Transwell assay. Cell migratory and invasive abilities were analyzed in 96-well Transwell chambers with an 8-µm pore membrane (Corning, Inc.) following the manufacturer's instructions. A total of 100 µl cell suspension (1x10^5 cells/ml) was added to the upper chamber, and 600 µl medium supplemented with 10% FBS was added to the lower chamber. After incubation at 37˚C overnight, non-migrating cells in the upper chamber were removed with a cotton swab, and the upper chamber was washed three times with PBS. Then, the cells were fixed with 4% paraformaldehyde at room temperature for 30 min and stained with 0.1% crystal violet solution at room temperature for ~20 min. In order to observe the migrating cells attached to the lower surface of the chamber, five fields of view were randomly selected and the cells were counted under a light microscope (Olympus Corporation; magnification, x200). For the detection of cell invasion, the upper surface of the chamber was pre-coated with Matrigel® for 1 h at room temperature, and the following experimental steps were the same as those described for the detection of cell migration.
Immunohistochemistry (IHC). The tissue expression of DCLK1 was evaluated via IHC based on the intensity and the proportion of positively stained cells as previously described (19). Sections were incubated with primary antibody against DCLK1 (1:1,000; cat. no. 62257; Cell Signaling Technology, Inc.) overnight at 4˚C, followed by incubation with goat anti-rabbit secondary antibody (cat. no. ab205718; 1:2,000; Abcam) at room temperature for 20 min. The sections were stained with 3,3'-diaminobenzidine and the nuclei were counterstained with hematoxylin for 5 min at room temperature. Images were captured under a light microscope (Olympus Corporation; magnification, x200).
Western blotting. Total protein was isolated from transfected cells using RIPA lysis buffer (Beyotime Institute of Biotechnology) and the protein concentrations were determined using a BCA kit (Thermo Fisher Scientific, Inc.). Subsequently, 10 µg protein/lane was separated via SDS-PAGE on 10% gels (Bio-Rad Laboratories, Inc.) and was electrotransferred to PVDF membranes. After blocking with 5% non-fat milk for 2 h at room temperature, the membranes were first incubated with primary antibodies against Notch receptor 1 (Notch1; 1:1,000; cat. no. ab52627; Abcam) and GAPDH (1:1,000; cat. no. ab8245; Abcam) overnight at 4˚C, followed by incubation with corresponding HRP-conjugated secondary antibody (1:1,000, cat. nos. ab6789 and ab205718; Abcam) for 1 h at room temperature. Blots were visualized with an enhanced chemiluminescent detection system (EMD Millipore). Protein expression was measured using Image-Pro ® Plus software (version 6.0; Media Cybernetics, Inc.).
Statistical analysis. All experiments were repeated at least 3 times. All the data collected in the experiments were analyzed with GraphPad Prism 7 (GraphPad Software, Inc.). Data are presented as the mean ± SD. The association between miR-223-5p or DCLK1 expression and clinicopathological characteristics of patients with NPC was analyzed using χ² tests. Both paired and unpaired Student's t-tests were used for comparisons between two groups, and one-way ANOVA followed by Tukey's post hoc test was conducted for comparisons among multiple groups. P<0.05 was considered to indicate a statistically significant difference.
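The named tests translate directly into SciPy/statsmodels calls; the sketch below uses an invented contingency table and invented expression values purely for illustration and is not the analysis script used in the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Chi-squared test of association between miR-223-5p expression (low/high)
# and a clinicopathological feature (e.g. distant metastasis yes/no).
table = np.array([[12, 4],   # low expression: metastasis yes / no
                  [3, 9]])   # high expression: metastasis yes / no
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, P = {p_chi2:.3f}")

# One-way ANOVA with Tukey's post hoc test across three transfection groups.
rng = np.random.default_rng(1)
values = np.concatenate([rng.normal(1.0, 0.1, 6),    # NC mimics
                         rng.normal(0.4, 0.1, 6),    # miR-223-5p mimics
                         rng.normal(0.9, 0.1, 6)])   # miR-223-5p + DCLK1
groups = ["NC"] * 6 + ["mimic"] * 6 + ["mimic+DCLK1"] * 6
samples = [values[np.array(groups) == g] for g in ("NC", "mimic", "mimic+DCLK1")]
f_stat, p_anova = stats.f_oneway(*samples)
print(f"ANOVA P = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, groups))
```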
Results
miR-223-5p and DCLK1 expression levels in NPC. miR-223-5p expression was measured via RT-qPCR, and the results demonstrated that miR-223-5p expression was decreased in NPC tissues and cell lines (6-10B, 5-8F, SUNE-1, SUNE-2 and C666-1), compared with that in adjacent normal nasopharyngeal tissues and NP69 human nasopharyngeal-derived epithelial cells (Fig. 1A and B). DCLK1 expression was also examined in the present study. The RT-qPCR results indicated that DCLK1 expression was significantly increased in NPC tissues and cell lines (6-10B, 5-8F, SUNE-1, SUNE-2 and C666-1) compared with that in adjacent normal nasopharyngeal tissues and NP69 cells (Fig. 1C and D). Moreover, the expression of DCLK1 in NPC tissues was higher compared with that in adjacent normal nasopharyngeal tissues, as determined via IHC (Fig. 1E). It was observed that low miR-223-5p or high DCLK1 expression was associated with distant metastasis and TNM stage, while there was no significant association with sex or age in patients with NPC (Table I). These results suggested that miR-223-5p expression was downregulated and DCLK1 expression was upregulated in NPC tissues and cell lines. 5-8F and 6-10B cells were selected for subsequent experiments due to their high DCLK1 expression.
DCLK1 silencing suppresses viability, migration and invasion of NPC cells.
To investigate the function of DCLK1 in NPC tumorigenesis, si-DCLK1 or si-NC were transfected into 5-8F and 6-10B cells. The transfection efficiency was confirmed via RT-qPCR (Fig. 3A). The CCK-8 assay results demonstrated that DCLK1 knockdown inhibited the viability of NPC cells (Fig. 3B). Moreover, the Transwell assay results further confirmed that DCLK1 silencing suppressed the migration and invasion of NPC cells (Fig. 3C and D).
Table I. Association between miR-223-5p or DCLK1 expression and clinicopathological characteristics in nasopharyngeal carcinoma.
miR-223-5p overexpression suppresses NPC cell viability, migration and invasion via the DCLK1/Notch1 signaling pathway.
To examine whether miR-223-5p overexpression suppressed cell viability, migration and invasion in NPC via DCLK1, 5-8F and 6-10B cells were transfected with NC mimics, miR-223-5p mimics, miR-223-5p + Ctrl and miR-223-5p + DCLK1, and then DCLK1 expression, viability, migration and invasion were measured in 5-8F and 6-10B cells. DCLK1 expression was found to be markedly increased in NPC cells transfected with DCLK1 overexpression plasmid (Fig. 5A). In addition, it was found that miR-223-5p overexpression not only decreased DCLK1 expression in 5-8F and 6-10B cells, but also inhibited the viability, migration and invasion of 5-8F and 6-10B cells. However, DCLK1 overexpression could partially reverse the suppressive effects of miR-223-5p overexpression on the viability, migration and invasion of 5-8F and 6-10B cells (Fig. 5B-E). A previous study reported that DCLK1 regulated tumor metastasis and epithelial-to-mesenchymal transition of gastric cancer cells via the Notch1 signaling pathway (22). Therefore, it was hypothesized that miR-223-5p and DCLK1 may regulate NPC progression via Notch1 signaling. The results suggested that overexpression of miR-223-5p decreased the protein expression of Notch1, which was partially restored following overexpression of DCLK1 in 5-8F and 6-10B cells (Fig. 5F). These results indicated that the effects of miR-223-5p and DCLK1 on NPC progression may be mediated via the Notch1 signaling pathway.
Discussion
Multiple factors, such as Epstein-Barr virus infection, environmental factors and genetic susceptibility genes, have been reported to contribute to the development of NPC (23). Since the two major causes of mortality among patients with NPC are recurrence and metastasis, it is crucial to further elucidate the molecular mechanism underlying NPC progression. As a result, additional personalized treatments may be developed and provided to patients with NPC. The present study demonstrated that miR-223-5p was downregulated in NPC tissues and cells, and acted as a tumor suppressor by suppressing the malignant behavior of NPC cells.
Abnormally expressed miRNAs may play key roles in NPC progression by facilitating cell proliferation, invasion and angiogenesis. For example, miR-216b was found to suppress NPC growth by downregulating KRAS expression (24), and miR-26a inhibited NPC cell proliferation and cell cycle by suppressing enhancer of zeste 2 polycomb repressive complex 2 subunit (25). miR-142-3p silencing may contribute to NPC progression via modulating zinc finger E-box binding homeobox 2 (26). Moreover, miR-223 was found to be involved in various types of cancer. For example, miR-223 expression was found to be suppressed in hepatocellular carcinoma and facilitated Stathmin1 expression (27). miR-223 also suppressed the progression of prostate cancer via regulating integrin subunit α3/integrin subunit β1 signaling (28). Additionally, miR-223 suppressed cell proliferation and migration in NPC via targeting MAF bZIP transcription factor B (29). The aforementioned studies indicated the important role of miR-223 in NPC progression. miR-223-5p is the passenger strand of the miR-223 duplex (10). Consistent with all these studies, the present study demonstrated that miR-223-5p expression was downregulated in NPC tissues compared with that in adjacent normal nasopharyngeal tissues. In the present study, RT-qPCR analysis revealed low miR-223-5p expression in NPC tissues and cells. The results of the CCK-8 and Transwell assays suggested that miR-223-5p overexpression may suppress cell viability, migration and invasion in NPC. Therefore, the present study demonstrated the inverse association between miR-223-5p expression and NPC tumor progression.
Previous studies have reported that aberrant DCLK1 expression may be closely associated with the malignant biological properties of tumors. For example, miR-613 was found to suppress the growth and invasion of human hepatocellular carcinoma by inhibiting DCLK1 (30), whereas miR-137 inhibited the malignant behavior of colon cancer via downregulation of DCLK1 (31). miR-424 also suppressed the viability and invasion of neuroblastoma cells by directly modulating DCLK1 (32). In addition, high expression of DCLK1 has been demonstrated to promote the progression of human pancreatic cancer (33). In the present study, DCLK1 was predicted as a functional target gene of miR-223-5p in NPC. Subsequently, DCLK1 expression was measured in NPC tissues and cells, and it was observed that DCLK1 was highly expressed in NPC. The effect of DCLK1 knockdown on NPC was similar to that of miR-223-5p overexpression. Moreover, DCLK1 overexpression was able to reverse the inhibitory effect of miR-223-5p on NPC cell viability, migration and invasion.
Furthermore, DCLK1 has been reported to be involved in the progression of human cancers via several signaling pathways. For example, Wang et al (34) reported that DCLK1 facilitated progression of breast cancer via the Wnt/β-catenin signaling pathway. Liu et al (22) indicated that DCLK1 promoted the epithelial-to-mesenchymal transition of gastric cancer cells through Notch1 signaling. The inhibition of the Notch1 signaling pathway is considered as an effective target for human cancer treatment (35). In the present study, it was observed that overexpression of miR-223-5p decreased the expression of Notch1, while this effect was partially reversed by the overexpression of DCLK1. Therefore, these results suggested that miR-223-5p and DCLK1 may regulate the tumorigenic process of NPC via the Notch1 signaling pathway.
In conclusion, the present study demonstrated that miR-223-5p expression was downregulated in NPC tissues and cells, and miR-223-5p functioned as a tumor suppressor in NPC. miR-223-5p overexpression may decrease the viability, migration and invasion of NPC cells, and suppress tumor progression via downregulating DCLK1. The present findings may improve our understanding of the mechanism involved in the progression of NPC mediated by miR-223-5p, and prompt further investigation of novel targeted therapies based on miRNA-mRNA networks for patients with NPC. | 2021-03-27T05:15:04.584Z | 2021-03-18T00:00:00.000 | {
"year": 2021,
"sha1": "178d10bd591c25164ea3390b8549ec88bbad450b",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ol.2021.12657/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "178d10bd591c25164ea3390b8549ec88bbad450b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258888762 | pes2o/s2orc | v3-fos-license | Reciprocal costimulatory molecules control the activation of mucosal type 3 innate lymphoid cells during engagement with B cells
Innate lymphoid cells (ILCs) are the counterpart of T helper cells in the innate immune system and share multiple phenotypes with T helper cells. Inducible T-cell costimulator (ICOS) is recognized on T cells and participates in T-cell activation and T and B-cell engagement in lymphoid tissues. However, the role of ICOS in ILC3s and ILC3-involved interactions with the immune microenvironment remains unclear. Here, we found that ICOS expression on human ILC3s was correlated with the activated state of ILC3s. ICOS costimulation enhanced the survival, proliferation, and capacity of ILC3s to produce cytokines (IL-22, IL-17A, IFN-γ, TNF, and GM-CSF). Via synergistic effects of ICOS and CD40 signaling, B cells promoted ILC3 functions, and ILC3-induced T-cell-independent B-cell IgA and IgM secretion primarily required CD40 signaling. Hence, ICOS is essential for the nonredundant role of ILC3s and their interaction with adjacent B cells.
INTRODUCTION
Innate lymphoid cells (ILCs) are heterogeneous lineage-negative innate immune cells that have emerged in the past 10 years. As the innate counterparts of T helper cells, ILCs are divided into ILC1s, ILC2s, and ILC3s; the different ILCs have different transcription factor requirements and cytokine-producing capacities, resulting in their involvement in different types of immune responses, especially in mucosal immunity. Among the three major subsets, ILC3s produce a large amount of lineage-specific interleukin (IL)-22, IL-17, and other products, such as granulocyte macrophage colony-stimulating factor (GM-CSF). As a result of their polyreactive phenotype, which includes major histocompatibility complex II (MHCII) [1,2], Toll-like receptors [3], and nonspecific molecules such as B-cell activation factor (BAFF) [4,5], ILC3s impact multiple immune cells in the microenvironment directly and indirectly. However, studies on the interaction between ILC3s and other immune cells, in particular the impact of the immune microenvironment on ILC3s, are still insufficient, and their results are unclear.
Inducible T-cell costimulator (ICOS) is primarily expressed on activated T cells and belongs to the CD28 coreceptor family [6]. ICOS ligand (ICOSL) is expressed on various immune and nonimmune cells, including B cells, dendritic cells (DCs), macrophages, and epithelial cells [7]. The pattern of ICOS and ICOSL expression determines the diversity of intercellular interactions initiated by ICOS and ICOSL interaction. For example, the ICOS and ICOSL interaction can not only mediate plasmacytoid DC (pDC)-induced IL-10 production by Foxp3+ T cells [8] but also promote follicular T helper (Tfh) cells and the B-cell response in lymphoid tissues [9,10]. Since ILCs were discovered, studies have identified the expression of ICOS [11,12] and ICOSL [11] on ILC2s, wherein it affects ILC2 homeostasis and the production of IL-5 and IL-13 [11]. Recently, ICOS has been detected on the surface of human ILC3s [13]; however, its role in ILC3 biology has not been extensively evaluated. Therefore, investigating the influence of ICOS on ILC3s and the specific role of ICOS signaling in mediating the crosstalk between ILC3s and the immune microenvironment is of great importance for the study of ILC3 immunological characteristics and the role of ILC3s in mucosal immunity.
ILC3s regulate B-cell homeostasis and function in different tissues [5]. In a T-cell-independent manner, human retinoic acid receptor-related orphan receptor-γt (RORγt)+ ILCs activate naïve B, marginal zone B, and plasma cells by expressing BAFF, CD40L, and Delta-like 2, a Notch ligand [4,5], further inducing IgA switching in lymphoid structures [14,15]. Through a T-cell-dependent process, ILC3s present antigens to Tfh cells to regulate IgA switching in B cells within gut-associated lymphoid tissues [16]. In general, the high colocalization with B cells, complicated phenotype, and tissue-resident characteristics of ILC3s jointly underlie the close relationship between ILC3s and B cells. However, because of the low proportion and absolute count of ILC3s, especially in humans, the influence of B cells on ILC3s and the mechanism of their interaction are still not thoroughly understood. Because B cells were the first recognized ICOSL-expressing cells [17] and considering the expression of ICOS on ILC3s, it is of great importance to explore the role of the ICOS/ICOSL pathway in the interaction between ILC3s and B cells.
Here, we found that ICOS was primarily expressed on ILC3s in human solid tissues, and ICOS expression was upregulated after stimulation with IL-7 + IL-2 + IL-1β + IL-23. ICOS stimulation by ICOSL improved the survival, proliferation, and activation of ILC3s, including increasing IL-22 and GM-CSF production and enhancing CD69 expression. ILC3 and B-cell interaction promoted ILC3 and B-cell survival and proliferation and facilitated the production of IL-22 by ILC3s and the secretion of immunoglobulins by B cells. Moreover, the effect of B cells on ILC3s was synergistically mediated by the ICOS and CD40 pathways, and the ILC3 support of B cells primarily required CD40 signaling. Thus, this study reveals the unique role of ILC3s in the mucosal immune system and the regulatory mechanism connecting ILC3s and adaptive immune cells.
Identification of costimulatory molecule ICOS expression on human ILC3s
To investigate ICOS expression on ILC subgroups, flow cytometry analysis combined with t-distributed stochastic neighbor embedding (t-SNE) analysis of lymphocytes isolated from noninflamed palatine tonsils was applied; the results showed that ILC3s were the predominant population within tonsillar ILCs (Fig. 1A-D). Despite containing distinctive clusters, tonsillar ILCs (lineage−CD127+) shared some characteristics with tonsillar T cells, including partial expression of ICOS and CD69 as well as low expression of CD40L (Fig. 1A). In contrast, the natural cytotoxicity receptor (NCR) NKp44 was expressed on ILCs but not on T cells (Fig. 1A). Among the three defined ILC subgroups, ICOS expression was detectable on ILC3s (lineage−CRTH2−CD117+), some ILC1s (lineage−CRTH2−CD117−), and some cells resembling ILC2s (lineage−CRTH2+CD117−), as previously reported [11] (Fig. 1B, Supplementary Fig. 1a-c). Additionally, ICOS-high tonsillar lymphocytes contained not only CD3+ and CD4+ cells but also a certain population of ILC3-like cells (Fig. 1E); furthermore, the percentage of ICOS+ ILC3s in the tonsil and lung was generally higher than that in peripheral blood (PB) (Fig. 1F), and this higher expression was accompanied by the expression of the ILC3 activation-related biomarker NKp44 [18,19] (Supplementary Fig. 1d).
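A hedged sketch of the kind of t-SNE embedding used for this analysis is shown below; the marker panel, the synthetic intensities, the arcsinh preprocessing, and the t-SNE parameters are illustrative assumptions, not the actual gating or software settings of the study.

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder marker panel and synthetic per-cell intensities standing in for
# gated live CD45+ events exported from the cytometer.
markers = ["CD3", "CD127", "CRTH2", "CD117", "NKp44", "ICOS", "CD69", "CD40L"]
rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=1.0, sigma=0.5, size=(5000, len(markers)))

# arcsinh transform is a common normalisation for cytometry data (cofactor 150).
transformed = np.arcsinh(intensities / 150.0)

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(transformed)
print(embedding.shape)   # (5000, 2): coordinates for the 2D t-SNE map
```

In practice the resulting two-dimensional coordinates are colored by the normalized intensity of each marker to reveal the T-cell and ILC clusters described above.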
Some known activation-associated molecules were detected on ICOS + ILC3s but to a lesser extent on ICOS − ILC3s (Fig. 1G). However, the expression of CD28 and the conventional checkpoint PD-1 was low on tonsillar ILC3s, even on ICOS + ILC3s (Fig. 1G). In contrast to ILC2s [11], ILC3s lacked ICOSL expression (Fig. 1G). Collectively, the costimulatory molecule ICOS may play a more indispensable role in ILC3 biology than in T helper cell or ILC2 processes.
ICOS+ ILC3s show more activation-associated and helper-like characteristics
To investigate the effect of ICOS on the characteristics of ILC3s, tonsillar ILC3s, characterized by RORγt (Supplementary Fig. 2a, b), were isolated and then activated in vitro. When stimulated by IL-7 + IL-2 + IL-1β + IL-23, the ICOS level on ILC3s further increased, and ILC3s were completely converted to ICOS+ ILC3s on Day 4. In contrast, despite some increase in expression, the degree of ICOS upregulation was not as remarkable after IL-7 + IL-2 stimulation as that after IL-7 + IL-2 + IL-1β + IL-23 stimulation (Fig. 2A). This result was confirmed using PB-derived ILC3s (Supplementary Fig. 2c). In line with this, ILC3s were enlarged after IL-7 + IL-2 + IL-1β + IL-23 stimulation, with inflated nuclei and cytoplasm and several vesicles within the cytoplasm (Fig. 2B), which were accompanied by enhanced ILC3 survival, activation, and cytokine (IL-22, IL-17A, TNF, IFN-γ, and GM-CSF) production (Fig. 2C, Supplementary Fig. 2d, e). These findings suggested that ICOS may act as an activation-associated molecule on the ILC3 surface.
In line with the ICOS expression results, ILC3s and CD4 + T cells showed remarkably similar cell size and morphology and a similar nuclear-cytoplasmic ratio, with oval-or bean-shaped single nuclei and oligoplasmic lymphoid morphology (Fig. 2I, J). To investigate their functional heterogeneity, ICOS −/+ ILC3s and ICOS −/+ CD4 + T cells were cultured at the same cell count and concentration. The results showed that the levels of cytokines secreted by ICOS + ILC3s, especially that of IL-22, were higher than those secreted by ICOS − ILC3s at Day 4 ( Fig. 2K) and Day 7 ( Supplementary Fig. 4a). The levels of some cytokines, such as IL-22, TNF, IFN-γ, and GM-CSF, produced by ICOS + ILC3s were higher than those produced by ICOS + CD4 + T cells. In contrast, the IL-17A-secreting capacity of ICOS + ILC3s was lower than that of ICOS + CD4 + T cells. These results indicated that tonsil-resident ICOS + ILC3s might play a unique role in the immune environment via specific secretion of cytokines even though there are fewer ICOS + ILC3s than T helper cells.
ICOS costimulation promotes the production of IL-10 by regulatory T cells (Tregs) [8], and ILCs contain a subgroup of IL-10 + ILCs, which are known as regulatory ILCs (ILCregs) [24]. Thus, we investigated whether ICOS + ILC3s have the characteristics of ILCregs. Neither ICOS − ILC3s nor ICOS + ILC3s had the ability to secrete IL-10; however, both ICOS -CD4 + T and ICOS + CD4 + T cells secreted some IL-10 ( Supplementary Fig. 4b), in agreement with previous studies [25,26], suggesting that ICOS + ILC3s and ILCregs [24] are two distinct and nonoverlapping ILC subpopulations. Collectively, the role of ICOS in ILC3s is heterogeneous, distinct from that in ICOS + T helper cells, and cell type-specific.
ICOS costimulation enhances ILC3 functions
To elucidate the influence of the ICOS/ICOSL interaction on human ILC3s, we used ICOS costimulation assays as previously reported [27,28]. The results showed that costimulation of ILC3s with precoated human recombinant soluble ICOSL protein (rsICOSL) enhanced the survival, proliferation, and activation of ILC3s, as indicated by an increased proportion of viable, ki-67+ and CD69+ ILC3s (Fig. 3A), even without NKp44+ and CD25+ ILC3 expansion (Supplementary Fig. 5a). In addition, the concentrations of IL-22, IL-17A, IFN-γ, TNF, and GM-CSF were significantly upregulated after ICOSL engagement (Fig. 3B). However, free rsICOSL failed to enhance cytokine production by ILC3s (Supplementary Fig. 5b), which indicated that the effect of ICOS costimulation on ILC3s primarily depended on ILC3 cell-to-cell contact via surface molecules with the ICOSL provider rather than via interaction with free ICOSL in the microenvironment.
[Figure 1 legend, displaced in the source text: t-SNE analysis of live tonsillar CD45+ cells; gating of CD3+ T cells and lineage−CD127+ ILCs and of the ILC1/ILC2/ILC3 clusters; frequencies and cell counts of ILC subsets (n = 11); gating of ICOS-high lymphocytes; ICOS expression and frequencies of ICOS+ ILC3s in tonsil, distal lung, and peripheral blood (n = 5-11); coexpression of ICOS and common biomarkers on tonsillar ILC3s; error bars indicate mean ± SEM.]
[Figure 2 legend, displaced in the source text: DEGs of ICOS− versus ICOS+ ILC3s (DEGseq, log2 fold change ≥ 1.5, Q ≤ 0.005, n = 3) shown as Z scores of FPKM; FACS profiles and frequencies of viable, ki-67+, NKp44+, CD69+, IL-22+, and IL-17A+ ILC3s after IL-7 + IL-2 + IL-1β + IL-23 treatment (n = 4-10); morphology of ILC3s versus CD4+ T cells and ICOS expression on tonsillar CD4+ and CD8+ T cells; supernatant cytokines (ELISA and CBA) from ICOS−/+ ILC3s and ICOS−/+ CD4+ T cells stimulated at the same cell concentration (n = 4-8); error bars indicate mean ± SEM; statistics by Student's paired t-test or one-way ANOVA with Tukey's multiple comparison test.]
Bidirectional promotion between tonsillar ILC3s and autologous B cells
B cells are not only the main provider of ICOSL [9,29] but also the predominant immune cells in the tonsil (Supplementary Fig. 6a). Resembling T cells, RORγt+CD3− ILC3s also colocalized with B cells in vivo (Fig. 4A). To study the bidirectional effects between ILC3s and B cells, autologous tonsillar ILC3s and B cells were sorted and cocultured at different ratios (Fig. 4B). The results showed that ILC3s and B cells promoted the survival of each other, and the survival of these cells was ratio-dependent (Supplementary Fig. 5b).
Regarding the influence of ILC3s on B cells, ILC3s promoted the survival and proliferation of B cells, particularly in the presence of oligonucleotide (ODN) 2006 (a class B CpG ODN that can be used for B-cell activation as a ligand of TLR9; Fig. 4C), and the level of detectable ICOSL on the surface of B cells was reduced (Fig. 4D), suggesting that ICOSL was shed due to interactions with ICOS+ ILC3s [30]. Because IL-1β induces the production of IgA, IgM, and IgG by B cells (Supplementary Fig. 7a), IL-1β was excluded from this ILC3-B-cell coculture system. Functionally, ILC3s promoted the secretion of IgA and IgM, but not IgG, by B cells; however, this impact was not remarkable in the absence of ODN2006 stimulation (Fig. 4E). As the ICOSL level is downregulated after interaction with ICOS [31], the ICOS signaling pathway may participate in the process of B-cell priming by ILC3s.
[Figure 3 legend, displaced in the source text: tonsillar ILC3s incubated in rsICOSL-precoated or uncoated wells with IL-7, IL-1β, and IL-23 for 4 days; flow plots and quantification of viable, ki-67+, and CD69+ ILC3s (n = 5-9); supernatant cytokines at Day 4 by ELISA and CBA (n = 9-14); ICOSL expression on the CD32/ICOSL and parental CD32 cell lines; cytokine levels from ILC3s cocultured with the CD32 or CD32/ICOSL line (n = 6-10); error bars indicate mean ± SEM; P values by two-tailed Student's paired t-test.]
In the supernatant of the ILC3 and B-cell coculture system, we found high IL-10 levels (Fig. 4H), which was consistent with previous findings that ILC3s induce B-cell differentiation into IL-10producing regulatory B cells (Bregs) [4]. We next determined whether IL-10 was generated by ILC3s. We found that IL-10 was undetectable in the supernatant of ILC3s regardless of stimulation by IL-7 + IL-2 + IL-1β + IL-23 or rsICOSL, but its expression was triggered when B cells were added. (Fig. 4H, I). Furthermore, the IL-10 in the ILC3-B-cell coculture system was mainly derived from B cells and not ILC3s (Fig. 4J). These results suggest that neither activation by IL-7 + IL-2 + IL-1β + IL-23 nor costimulation of ICOS triggers IL-10 production by ILC3s.
Reciprocal promotion of ILC3s and B cells partially requires the ICOS/ICOSL interaction
We next explored whether the reciprocal effects of ILC3s and B cells are contact-dependent or contact-independent. A Transwell assay showed that when cell contact was blocked, the B-cellinduced ILC3-specific production of IL-22, IL-17A, IFN-γ, TNF, and GM-CSF was eliminated (Fig. 5A). Similarly, the ILC3-induced upregulation of IgA and IgM secretion by B cells was inhibited (Fig. 5B). These observations were in line with the decreased IL-10 levels in B cells (Fig. 5C). Although some secreted extracellular factors, such as ILC3-derived BAFF and B-cell-derived IL-15, promote crosstalk between ILC3s and B cells [4,5], these results indicate that direct contact between ILC3s and B cells is essential for their reciprocal promotion.
To further investigate the role of ICOS signaling in the ILC3-B-cell interaction, we conducted blocking assays using an ICOS-neutralizing antibody followed by functional tests of ILC3s and B cells. ICOS blockade resulted in a partial decrease in IL-22, IL-17A, TNF, GM-CSF, and IFN-γ secretion by ILC3s (Fig. 5D) but little reduction in IgA, IgM, and IL-10 expression in B cells (Fig. 5E, F). These results suggest that ICOS is partially involved in promoting ILC3 activation during the interaction of ILC3s and B cells. Although IL-21 is involved in ICOS-mediated Tfh and B-cell interactions in lymphoid tissues, such as the tonsil [32,33], neither ICOS− ILC3s nor ICOS+ ILC3s produced IL-21 (Supplementary Fig. 7g). Collectively, the ILC3-B-cell interaction is partially ICOS signaling-dependent, and it may involve other molecules that induce contact between ILC3s and B cells.
B-cell-reinforced ILC3 activation requires the cooperation of ICOS and CD40 signaling
The costimulatory molecule CD40 generally supports the ICOS/ ICOSL interaction in T-cell-B-cell responses [9,34]. Thus, we investigated the role of CD40 signaling in the ILC3-B-cell interaction. Of note, CD40L expression was not detected on fresh tonsil-derived, lung-derived, or PB-derived ILC3s (Fig. 6A) but appeared on the ILC3 surface after activation by IL-7 + IL-2 + IL-1β + IL-23 (Fig. 6B). The percentage of induced CD40L + ILC3s was further increased after activation (Fig. 6C). Moreover, ICOS + ILC3s had a stronger potential for CD40L expression, and the percentage of CD40L + ILC3s increased after ICOSL engagement (Fig. 6D, E). During Tfh-B-cell responses, the ICOS/ICOSL interaction promotes the expression of CD40L on T cells [9,34,35], suggesting that ILC3s may function similarly to Tfh cells during interaction with B cells, and the effect of CD40 likely occurs later than the ICOS/ ICOSL interaction.
To determine the role of CD40 signaling in the ILC3-B-cell interaction, we used a CD40-neutralizing antibody in the ILC3 and B-cell coculture system to perform a CD40 blockade assay. The results showed that once ILC3s interacted with B cells, CD40L on ILC3s was undetectable (Fig. 6F), and CD40 blockade inhibited the ILC3-induced upregulation of IgA, IgM, and IL-10 levels in B cells (Fig. 6G, H). In addition, the B-cell-induced increase in ILC3 cytokine secretion was partially inhibited after CD40 blockade (Fig. 6I), suggesting that CD40 signaling mediated the interaction between the two cells.
To further investigate the relationship between ICOS and CD40 signaling in the ILC3-B-cell interaction, we prepared anti-ICOS and anti-CD40 antibodies in gradient concentrations. Regarding ILC3s, the inhibitory effect of anti-CD40 antibody on ILC3-produced IL-22 was dose-dependent in the absence of anti-ICOS antibody. In the presence of an optimal dose of anti-ICOS antibody plus an altered concentration of the anti-CD40 antibody, IL-22 was inhibited much more significantly; however, the dosedependence of the anti-CD40 antibody was partially weakened (Fig. 6J). In B cells, IgA and IgM expression was inhibited after CD40 blockade in a dose-dependent manner in the absence of anti-ICOS antibody. The anti-ICOS antibody acted synergistically with the anti-CD40 antibody; however, the inhibitory function of the anti-ICOS antibody was concealed in the presence of abundant anti-CD40 antibody (Fig. 6K). In summary, ICOS and CD40 signaling play a synergistic role in the B-cell-mediated promotion of ILC3s, with ICOS signaling acting as a major component. In addition, ILC3-induced enhancement of B-cell functions is primarily dependent on CD40 signaling.
DISCUSSION
ILC3s are equipped with not only APC-like features such as MHCII [16,22,23,36] and OX40L [37,38] expression but also helper-like characteristics such as expression of the costimulatory molecule CD40L [9,34,35], and thus, they participate in T-cell and APC regulation. The complex functions of ILC3s in the immune microenvironment are determined by their phenotype; however, the underlying mechanism is not completely understood, especially in humans. Our findings identified the expression of ICOS on ILC3s at the protein level in multiple human tissues. Although ICOS expression can also be found on ILC2s [11,39] and ILC1s, we failed to compare the ICOS-related differences among human ILC subsets because of the low absolute number of ILCs in the majority of tissues.
ICOS + ILC3s share many features with activated T helper cells, such as the expression of IL-2, IL-22, NKp44, NKp46, CXCR5, and PD-1; however, their features are not entirely the same [40]. For instance, CD28, a secondary signal involved in T-cell activation, commonly colocalizes with ICOS on T helper cells, but it is almost undetectable on solid tissue-or PB-derived ILC3s. In addition, RNA sequencing analysis revealed the expression of some costimulatory molecules in ICOS + ILC3s, such as LIGHT, TIGIT, and CTLA4 (Fig. 2D), so additional studies are required. The increased ICOS level in ILC3s was accompanied by enhanced survival and proliferation and specific cytokine production after IL-7 + IL-2 + IL-1β + IL-23 activation, indicating that ICOS can be considered an activation-related molecule for ILC3s. Although the ICOS − and ICOS + ILC3 profiles differed, they had similar phenotypes after stimulation with IL-7 + IL-2 + IL-1β + IL-23. In ICOS costimulation assays for ILC3s, suboptimal stimuli were identified as indispensable, similar to the case for ICOS costimulation of T cells [27,28]. This result suggests that ICOS signaling may be a secondary signal for ILC3 activation, assuming that IL-1β and IL-23 act as the primary activators.
B cells can acquire help from Th cells and ILC3s. In this study, the T-cell-independent B-cell help of ILC3s resembled the interaction of Tfh and B cells and required ICOS and CD40 signaling; however, the difference was that the B-cell-trophic cytokine IL-21 [32,33], which is related to Tfh cells, was undetectable in ICOS− and ICOS+ ILC3s. Further study is required to determine whether there is a competitive relationship between T-cell-dependent and ILC3 (helper ILC3 and LTi)-dependent B-cell switching and to identify any related factors [2]. In contrast to the promoting effects of ILC3s on B cells in the human spleen [5], we found that tonsillar ILC3s promoted the secretion of IgA and IgM but not IgG, indicating that in addition to secreting cytokines such as IL-22, ILC3s also regulate mucosal immunity by promoting IgA and IgM production by B cells.
To date, there have been few studies regarding the influence of other immune cells on ILC3s and the relevant mechanisms. Despite the low cell number and proportion in human tissues, we primarily focused on the effects of B cells on ILC3s during their interaction. Of note, in the absence of oppression by T cells [41], B cells promoted ILC3 survival, proliferation, and function, which was dependent on ICOS and CD40 signaling. In addition to B cells, ICOSL-expressing cells include DCs [42], macrophages [39], and somatic cells, such as human umbilical vein endothelial cells [43], and even ILC2s [11,39,44]. Our finding that ICOSL was undetectable on human tonsillar ILC3s contradicted the findings of ICOSL expression on murine ILC2s [11], thereby providing evidence for a difference between ILC2s and ILC3s [11,45]. Thus, the ICOS-mediated interactions between ILC3s and other cell types need further study. Considering that IL-1β and IL-23 are triggered by foreign antigens [38,41], together with the fact that the ICOS/ICOSL pathway mediates the inflammatory response [46,47], the role of the ICOS/ICOSL pathway in the interaction between ILC3s and B cells in diseases like infection and tumorigenesis needs to be further studied.
[Displaced figure legend fragment: results representative of five (J) or six (K) experiments; error bars, mean ± SEM; *P < 0.05, **P < 0.005, ***P < 0.0005, ****P < 0.0001, NS P ≥ 0.05; Student's paired t-test (C-F, H, J, and K) or matched one-way ANOVA with Tukey's multiple comparison test (G, I).]
ILC3s maintain a strong capacity for cytokine production even after prolonged stimulation in vitro, and ILC3s show obviously high levels of proliferation and activation once they are isolated from T cells. However, in human and murine tissues, ILC3s represent a small population [13,48,49] in both steady state [13,50] and disease [13,51]. Thus, it is possible that non-tissuespecific and broad-spectrum inhibitory factors around ILCs restrain the expansion of ILCs inside the body. These factors may have competitive effects against prominent T cells [41] or suppressive effects against immune regulatory cells, such as Tregs, Bregs, and ILCregs, [24] and may include cytokines that maintain immune homeostasis among multiple immune cells.
In conclusion, ICOS expression on human ILC3s functions as a secondary activation signal and contributes to ILC3 survival, proliferation, and production of cytokines, including IL-22 and IL-17A. The direct contact between autologous ILC3s and B cells induced by ICOS and ICOSL interaction facilitates the activation-associated state of ILC3s, which is synergistically mediated by CD40 signaling. ILC3 help in B-cell Ig class switching and differentiation primarily requires CD40 signaling and, to a lesser extent, ICOS signaling (Supplementary Fig. 8). These observations indicate the unique and nonredundant role of ILC3s in the immune microenvironment and their significant implications for studies investigating the T-cell-independent B-cell response during lymphoid tissue formation and Ig class switching-associated diseases, such as common variable immunodeficiency.
Human sample and tissue processing
Tonsil tissue samples were obtained from pediatric patients undergoing routine tonsillectomy at the First Hospital of Jilin University. Human distal lung biopsies (more than 5 cm from the tumor lesion) were obtained from lung cancer patients at the Second Hospital of Jilin University. Peripheral blood (PB) samples from healthy donors were obtained from the Blood Bank Center of Jilin Province, Changchun, China. All study participants provided informed written consent (Ethics Committee of the First Hospital of Jilin University, Clinical Trials and Research Approval No. 2020-426).
Fresh tonsil and lung tissues were mechanically dissociated on ice in RPMI-1640 containing 2% fetal bovine serum (FBS, Gibco, New Zealand) and then digested for 30-45 min at 37°C with 500 μg/mL type IV collagenase (Gibco, New York) and 5 μg/mL DNase I (Sigma-Aldrich, Shanghai) in culture medium. Subsequently, the cells were washed with PBS + 2% FBS and passed through a 70 μm filter to generate a single-cell suspension. Before antibody staining for cell sorting, tonsillar and PB mononuclear cells were isolated using Ficoll-Paque Plus medium (GE Healthcare, Uppsala) through density gradient centrifugation.
Before sorting using flow cytometry, tonsillar and PB mononuclear cells were depleted of T cells and B cells. Briefly, cells were labeled with biotinconjugated anti-CD3 and anti-CD19 antibodies (BioLegend, San Diego) followed by incubation with anti-biotin microbeads (Miltenyi, Germany). Cells were then passed through an LS column (Miltenyi) using a MultiMACS Separator (Miltenyi) according to the manufacturer's instructions. Subsequently, the ILC subsets from the microbead-unlabeled mononuclear cells were isolated using FACSAria (BD Biosciences, San Jose). The staining and gating were performed as described above. In addition, ICOS + T cells (CD3 + CD4 + CD19 − ) and B cells (CD3 − CD19 + ) were isolated with a FACSAria from microbead-labeled mononuclear cells. All cells utilized for experiments had a purity of higher than 95%.
ICOSL-expressing cell lines
Human ICOSL-expressing cells were generated using retroviral-mediated transduction. Briefly, using the pLent-EF1a-FH-CMV-copGFP-P2A-Puro lentivirus vector, the full-length coding sequence for human FC-GAMMARIIC (GenBank: U90939.1) plus human ICOSLG (NM_015259) or FC-GAMMARIIC alone was transfected into K562 cell lines. Subsequently, puromycin dihydrochloride (Santa Cruz Biotechnology, Dallas) was used to obtain CD32/ICOSL and CD32 control lines with a purity higher than 95%. The cell lines were prepared by Shandong Weizhen Biotechnology Co., Ltd. The cells were used in the ICOS costimulation of ILC3s. The cells were cultured in DMEM containing 10% FBS, and 2 µg/mL puromycin was added to maintain cell purity.
Transwell assay
To investigate whether cell-to-cell contact is critical for the reciprocal promotion of cytokine or immunoglobulin production during the ILC3-B-cell interaction, Transwell experiments were performed. In brief, sorted ILC3s were added into the upper chamber of a 24-well Transwell plate (6 × 10 4 cells/well), and sorted autologous B cells were added into the bottom chamber (3 × 10 5 cells/well) in the presence of 50 ng/mL IL-7, 50 ng/mL IL-23 and 1 μM ODN2006. The chambers were separated with a 0.4 μm pore membrane (Corning, Kennebunk). All experiments were performed in duplicate. After 4 days, the supernatants in both the upper and bottom chambers were harvested and pooled to detect cytokines and immunoglobulins using enzyme-linked immunosorbent assay (ELISA) and cytometric bead array (CBA) assays.
Giemsa staining and multiplexed immunofluorescence staining
Tonsillar ILC3s before or after activation and CD4+ T cells were resuspended in RPMI-1640 containing 10% FBS. Cells were then centrifuged, placed onto slides by centrifugation, and subjected to Giemsa staining. According to the manufacturer's instructions, Ridge-Giemsa solution A was added dropwise to cells on glass slides followed by incubation for 30 s. Cells were then washed with phosphate solution for 3 min, washed with distilled water to remove excess dye, dried, and observed under a microscope. The Giemsa staining kit used in this experiment was from BASO.
Detection of cytokines using ELISA and CBA assay
The ability of ILC3s and B cells to secrete cytokines or immunoglobulins was detected using ELISA and CBA kits. The supernatants of ILC3s or B cells after stimulation, Transwell culture and coculture were collected and stored at −80°C. IL-22 in the supernatants was detected using ELISA MAX™ Deluxe Set Human IL-22 (BioLegend, San Diego) according to the manufacturer's instructions, and optical densities were measured using a spectrophotometer. For IL-17A, IFN-γ, total TNF, GM-CSF, IL-10, IgA, IgM, and IgG detection, corresponding CBA Flex Sets (BD Biosciences, San Diego) were used to fluorescently label the beads bound with cytokines or immunoglobulin in the supernatant, and the concentrations were quantified using a BD LSRFortessa flow cytometer and the FCVP Array v3 system.
RNA sequencing and analysis
Tonsillar ICOS− and ICOS+ ILC3s from one donor (n = 3) were sorted by flow cytometry into an Eppendorf tube containing TRIzol (Invitrogen, 1 × 10^6 cells/mL) and quickly transferred to liquid nitrogen. Sequencing reads were obtained using the DNBSEQ-T7 platform, and RNA-Seq analysis was performed by the Beijing Genomics Institute. Sequencing reads were aligned to the GRCh38 human genome. To remove low-quality data, adapters were trimmed using Cutadapt 1, and low-quality bases were removed by ERNE2. To analyze differentially expressed genes (DEGs), the quality-checked reads were processed using Dr. Tom's platform (https://biosys.bgi.com). Only protein-coding genes were considered, and gene-level expression values were determined as fragments per kilobase million mapped (FPKM). All genes were analyzed with an established DEGseq analysis method (log2 fold change ≥ 1.5, Q ≤ 0.005). Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis was performed, and the significantly enriched terms were identified based on low P values.
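The DEG selection step described above amounts to thresholding each gene on fold change and adjusted significance. The short sketch below illustrates that filter on a hypothetical FPKM-derived table; the gene names, numeric values, and column names are invented for illustration and are not the study's data.

```python
import pandas as pd

# Hypothetical DEG table: one row per protein-coding gene, with an FPKM-derived
# log2 fold change (ICOS+ vs. ICOS- ILC3s) and an adjusted P value (Q).
deg_table = pd.DataFrame({
    "gene":   ["ICOS", "TNFSF14", "TIGIT", "CTLA4", "ACTB"],
    "log2fc": [3.2,     1.8,       1.6,     2.1,     0.1],
    "qvalue": [1e-6,    1e-4,      3e-3,    2e-3,    0.8],
})

# Thresholds reported for the DEGseq analysis: |log2 fold change| >= 1.5 and Q <= 0.005
degs = deg_table[(deg_table["log2fc"].abs() >= 1.5) & (deg_table["qvalue"] <= 0.005)]
print(degs["gene"].tolist())  # genes passing both cutoffs
```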
Statistical analysis
Student's paired t-test, matched and unmatched one-way analysis of variance (ANOVA) and Tukey's multiple comparison test were used to determine the significance of differences between ILC3s with or without activation or costimulation using Prism 6.0c (GraphPad). The P values are shown in the figures, and P ≤ 0.05 were considered significant. | 2023-05-26T06:17:51.169Z | 2023-05-25T00:00:00.000 | {
"year": 2023,
"sha1": "64a08a698406060433d7aeadface6b4b03aaf407",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41423-023-01041-w",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "8cda8200da9ac8a3e63b3d28bd8ceea73f937415",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237406825 | pes2o/s2orc | v3-fos-license | Developmental Angiogenesis Requires the Mitochondrial Phenylalanyl-tRNA Synthetase
Background: Mitochondrial aminoacyl-tRNA synthetases (mtARSs) catalyze the binding of specific amino acids to their cognate tRNAs and play an essential role in the synthesis of proteins encoded by mitochondrial DNA. Defects in mtARSs have been linked to human diseases, but their tissue-specific pathophysiology remains elusive. Here we examined the role of mitochondrial phenylalanyl-tRNA synthetase (FARS2) in developmental angiogenesis and its potential contribution to the pathogenesis of cardiovascular disease. Methods: Morpholinos were injected into fertilized zebrafish ova to establish an in vivo fars2 knock-down model. A visualization of the vasculature was achieved by using Tg (fli1: EGFP) y1 transgenic zebrafish. In addition, small interference RNAs (siRNAs) were transferred into human umbilical vein endothelial cells (HUVECs) to establish an in vitro FARS2 knock-down model. Cell motility, proliferation, and tubulogenesis were determined using scratch-wound CCK8, transwell-based migration, and tube formation assays. In addition, mitochondria- and non-mitochondria-related respiration were evaluated using a Seahorse XF24 analyzer and flow cytometry assays. Analyses of the expression levels of transcripts and proteins were performed using qRT-PCR and western blotting, respectively. Results: The knock-down of fars2 hampered the embryonic development in zebrafish and delayed the formation of the vasculature in Tg (fli1: EGFP) y1 transgenic zebrafish. In addition, the siRNA-mediated knock-down of FARS2 impaired angiogenesis in HUVECs as indicated by decreased cell motility and tube formation capacity. The knock-down of FARS2 also produced variable decreases in mitochondrial- and non-mitochondrial respiration in HUVECs and disrupted the regulatory pathways of angiogenesis in both HUVECs and zebrafish. Conclusion: Our current work offers novel insights into angiogenesis defects and cardiovascular diseases induced by FARS2 deficiency.
Mitochondrial phenylalanyl-tRNA synthetase, encoded by the nuclear gene FARS2, catalyzes the recognition and binding of Phe and mt-tRNA Phe in the mitochondria (5). Mutations in the FARS2 gene are associated with central nervous system (CNS) diseases, such as autosomal recessive spastic paraplegia (7), epileptic encephalopathy (8)(9)(10), and infantile mitochondrial Alpers encephalopathy (11)(12)(13). In addition, our group reported that a missense homozygous mutation [c.424 G > T (p.D142Y)] in the FARS2 gene was the underlying cause of hereditary spastic paraplegia in a Chinese family (7). Because CNS disorders are recognized as the major manifestations of FARS2 gene mutations, previous research into the potential molecular mechanisms involved in the pathogenicity of these mutations has focused on the CNS (7,8,10,(12)(13)(14)(15)(16)), and little is known about their effects on the cardiovascular system. Cardiovascular diseases (CVDs), including stroke, heart failure, coronary artery disease, cardiomyopathy, and hypertensive heart disease, are some of the leading causes of death worldwide (17)(18)(19). Nonetheless, the etiology of CVDs has not been well investigated on account of their multi-factorial causes, covering inherited and environmental factors (20). Endothelial cells (ECs) play an indispensable role in angiogenesis and vascular remodeling, and endothelial dysfunction occurs in the early stages of CVDs such as coronary artery disease (21,22). Angiogenesis, a process in which new blood vessels are formed from pre-existing vessels, is crucial for embryogenesis, tissue healing, and placental vascularization (23). In response to angiogenic stimuli, ECs differentiate into two distinct subtypes that perform characteristic functions: the tip cells extend the filopodia of the vascular branch frontlines, and the stalk cells extend the vascular branches behind the tip cells. Following the formation of the vascular network and blood perfusion, ECs are trans-differentiated into quiescent phalanx cells that line the new vessels (24)(25)(26). This complex process of EC specialization is regulated by multifarious signaling molecules, including paracrine and autocrine factors, as well as by oxidative respiratory metabolism. The mitochondria play an essential role in cellular oxidative respiration; however, although angiogenesis is an energy-intensive process, the respiratory metabolism in ECs is highly glycolytic and relies little on the mitochondria (27)(28)(29)(30)(31). Nonetheless, the mitochondria not only play a major role in aerobic oxidation but are also key intracellular structures that regulate several EC functions (32)(33)(34).
Abbreviations: mtARSs, mitochondrial aminoacyl-tRNA synthetases; FARS2, mitochondrial phenylalanyl-tRNA synthetase; siRNAs, small interference RNAs; HUVECs, human umbilical vein endothelial cells; ARSs, aminoacyl-tRNA synthetases; mtDNA, mitochondrial DNA; OXPHOs, oxidative phosphorylation system; Phe, phenylalanine; mt-tRNA Phe, mitochondrial phenylalanyl-tRNA; CNS, central nervous system; CVDs, cardiovascular diseases; ECs, endothelial cells; MOs, morpholinos; ISVs, intersegmental vessels; DLAVs, dorsal longitudinal anastomotic vessels; PAVs, parachordal vessels; CVP, caudal vein plexus; hpf, hours post-fertilization; OCR, oxygen consumption rate; ROS, reactive oxygen species.
While mitochondria-related metabolism resulting from angiogenic stimuli has been studied extensively (34,35), the functions of mitochondrial protein synthesis in angiogenesis are only partially understood.
Angiogenesis is regulated by a complex network of molecules. As one of the indispensable pathways regulating embryonic development, the Wnt signaling pathway regulates a variety of complex biological processes (36, 37). The high expression levels of Wnt signaling genes in ECs during vasculature development support the pivotal role of this pathway in angiogenesis (38,39). The Notch pathway, another evolutionarily conserved signaling system, is required for normal embryonic development, tissue homeostasis, and adult stem cell maintenance (40) and controls the specification of ECs in multiple vertebrates, such as chicken, zebrafish, and mice. Although the intracellular signaling pathways regulated by angiogenic stimulation have been investigated widely, the relationship between FARS2 and signaling transduction in angiogenesis is unknown. Here, to determine whether the FARS2 gene plays an essential role in developmental angiogenesis, we established two FARS2 deficiency models. In the in vivo model, Tg (fli1: EGFP) y1 transgenic zebrafish were treated with fars2-specific morpholinos (41). In the in vitro model, HUVECs were transfected with FARS2-specific small interference RNAs (siRNAs). By combining imaging, post-transcriptional manipulations of FARS2, and gene expression detection techniques, we found that FARS2 might participate in the pathological process of CVD by affecting the mitochondrial protein synthesis in ECs. Our data demonstrate a previously unanticipated role of FARS2 in coordinating the angiogenic process.
Zebrafish Care and Maintenance
Adult wild-type AB strain zebrafish were maintained at 28.5 • C on a 14-h light/10-h dark cycle. Five to six pairs of zebrafish were set up for natural mating every time. On average, 200-300 embryos were generated. The embryos were maintained at 28.5 • C in fish water (0.2% Instant Ocean Salt in deionized water). The embryos were washed and staged according to (41). The establishment and characterization of fli1a-EGFP transgenic lines have been described elsewhere (42). The zebrafish facility at Shanghai Model Organisms Center is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care International.
Quantitative Real-Time PCR
For zebrafish, total RNA was extracted from 30 to 50 embryos per group in Trizol (Roche) according to the instructions of the manufacturer. The RNA was reverse-transcribed using the PrimeScript RT reagent Kit with gDNA Eraser (Takara). The quantification of gene expression was performed in triplicate using Bio-rad iQ SYBR Green Supermix (Bio-rad) with detection on the Realplex system (Eppendorf). Relative gene expression quantification was based on the comparative threshold cycle method (2^−ΔΔCt) using ef1α as the endogenous control gene. The primer sequences are given in Supplementary Table 1.
For HUVECs, total RNA was extracted from cells using the AxyPrep™ Multisource Total RNA Miniprep Kit (Axygen, cat. #365). The total RNA was reverse-transcribed with PrimeScript™ RT Master Mix (Takara, cat. #RR036A). Real-time fluorescent quantitative PCR was performed with SYBR® Premix Ex Taq™ II (Takara, #RR820A) on the 7500 system (Applied Biosystems). The qRT-PCR conditions were as follows: 95 °C for 30 s, followed by 40 cycles of 95 °C for 5 s and 60 °C for 30 s. Relative gene expression quantification was based on the comparative threshold cycle method (2^−ΔΔCt) using GAPDH as the endogenous control gene (44). The primer sequences are given in Supplementary Table 1. All experiments were performed in triplicate and repeated three times independently.
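As an illustration of the comparative threshold cycle calculation described above, the following minimal Python sketch computes a relative expression value from Ct measurements. The Ct numbers, gene pairing, and function name are hypothetical and serve only to mirror the two normalization steps (to the housekeeping gene and to the control sample).

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Comparative threshold cycle (2^-ddCt) relative quantification.

    ct_target / ct_reference: mean Ct of the gene of interest and of the
    endogenous control gene (e.g., GAPDH or ef1a) in the treated sample;
    ct_*_ctrl: the same two values in the control (calibrator) sample.
    """
    d_ct_sample = ct_target - ct_reference             # normalize to the housekeeping gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control                 # normalize to the calibrator sample
    return 2.0 ** (-dd_ct)                             # fold change relative to the control

# Hypothetical Ct values for FARS2 vs. GAPDH in si-FARS2- and siCtrl-treated cells
fold_change = relative_expression(ct_target=26.8, ct_reference=18.1,
                                  ct_target_ctrl=25.0, ct_reference_ctrl=18.0)
print(f"Relative FARS2 expression (si-FARS2 vs. siCtrl): {fold_change:.2f}")
```

With the made-up values shown, the computed fold change of roughly 0.3 would correspond to a knock-down of about 70% relative to the control sample.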
Zebrafish Angiogenesis Studies
To evaluate blood vessel formation in zebrafish, fertilized one-cell fli1a-EGFP transgenic line embryos were injected with fars2-MO and control-MO. At 48 hpf, the embryos were dechorionated and anesthetized with 0.016% MS-222 (tricaine methanesulfonate, Sigma-Aldrich, St. Louis, MO). The zebrafish were then oriented on the lateral side (anterior, left; posterior, right; dorsal, top) and mounted with 3% methylcellulose in a depression slide for observation by fluorescence microscopy. The phenotypes of complete intersegmental vessels (ISVs) [i.e., the number of ISVs that connect the dorsal anastomotic vessels to the dorsal longitudinal anastomotic vessels (DLAVs)], caudal vein plexus (CVP), DLAVs, and parachordal vessels (PAVs) were quantitatively analyzed. A total of 10 animals from at least three independent MO injections in each group were used in this experiment.
Cell Culture and siRNA Transfection
Human umbilical vein endothelial cells (HUVECs, Sciencell cat. # 8000) were used from passages 3-9 and cultured in endothelial cell medium (ECM, Sciencell cat. # 1001) containing 500 ml of basal medium, 5% fetal bovine serum (FBS, Sciencell cat. #0025), 1% endothelial cell growth supplement (Sciencell cat. #1052), and 1% antibiotic solution (P/S, Sciencell cat. #0503) in 5% CO 2 at 37 • C. Then, 2 × 10 5 cells, 10 5 cells, and 10 4 cells per well were seeded in six-well, 12-well, and 96well plates for siRNA transfection. The cells were transfected with the following siRNAs: a FAM-labeled non-relevant control (50 nM), a non-relevant control (siCtrl, 50 nM), and FARS2 siRNA (si-FARS2, 50 nM) from Ribobio TM (Guangzhou, China). The specific target sequences of these siRNAs are listed in Supplementary Table 2. X-tremeGENE siRNA Transfection Reagent (Roche) was used to build cells in the transfection process. In brief, X-tremeGENE siRNA Transfection Reagent and siRNA were separately diluted in Opti-MEM (Gibco cat. # 31985070) and mixed for 15 min at room temperature. Then, the mixture was added into the plates. The evaluation of transfection efficiency and functional assays on HUVECs was performed at 48 h after transfection. The transfection efficiency was monitored by calculating the percentage of FAM-positive cells under a fluorescence microscope.
Cell Proliferation and Transwell-Based Cell Migration Assays
The HUVEC cell proliferation assay in vitro was evaluated by CCK8 assay (44). The HUVECs were seeded in a 96-well plate with 100 µl ECM per well at 24 h before transfecting with siRNAs. Then, 10 µl CCK8 (HanBio, cat. # HB-CCK8-500T, Shanghai, China) reagent was added to each well for 1 h at 48 h after transfection with siRNAs. We measured the absorbance at 450 nm to detect proliferation of cells. All experiments were performed in triplicate and repeated three times independently.
The HUVEC migration assay was as described previously (45). In brief, for one well of a 24-well plate, the HUVECs transfected for 48 h were re-seeded in the upside of the transwell chamber (Corning) with 500 µl basal medium; 700 µl ECM (containing 5% FBS) was added in the bottom of the well. After cultivating for 24 h, the chamber was wiped with a cotton swab. The cells were fixed with 4% paraformaldehyde, stained with crystal violet solution, and counted under a microscope (×20 objective). At least three different fields were averaged, and the experiment was repeated three times independently.
Scratch-Wound Migration Assay
The HUVEC scratch-wound migration assay was evaluated by wound-healing assay (46). Briefly, the cells were transfected with siRNAs for 48 h (cultured upon reaching 90-95% confluence) in a six-well plate with 2 ml ECM; the HUVECs were scratched with the head of a 200-µl tip. The motility of the cells into the wound was imaged under a microscope (×10 objective) at 0 and 6 h after wounding. The blank area in the wound was detected using Fiji Image J (NIH, Bethesda, MD, United States). All experiments were performed in triplicate and repeated three times independently.
Tube Network Formation on a Matrigel Matrix
The method of tube network formation was studied as described previously (47). After transfection for 48 h, 300 µl of HUVEC suspension (4 × 10 5 cells/ml) was re-seeded in a 24-well plate pre-coated with 289 µl Matrigel (10mg/ml, Corning cat. #354248) per well, which was polymerized by incubating in 37 • C for 30 min. Then, an Olympus microscope, with ×10 objectives, was used to take brightfield images of the 24-well plate. Fiji Image J (NIH, Bethesda, MD, United States) was employed to count the number of intersections in each field, and the total length of the structures was measured (48). At least three different fields were averaged, and the experiment was repeated three times independently.
Mitochondrial Stress Testing Using Seahorse Technology
We studied mitochondrial stress testing as described previously (49). Seahorse Bioscience XFp extracellular flux analyzer (Agilent) was used to measure the mitochondrial stress test of HUVECs. This device works by creating a sealed chamber to measure oxygen consumption by the mitochondria in real time in the microplates under various stimuli. Mitochondrial reagents (Seahorse Bioscience Cell Mito Stress Test Kit, Agilent cat. #103010-100) were optimized at 2 µg/ml oligomycin (complex V inhibitor), 5 µM FCCP (a respiratory uncoupler), and 2 µM rotenone/antimycin A (inhibitors of complex I and complex III). A total of 30,000 HUVECs transfected with siRNAs for 48 h were seeded into the seahorse cell culture plate per well with 500 µl ECM and cultured at 37 • C in 5% CO 2 humid atmosphere overnight. The sensor cartridge was incubated at 37 • C in a non-CO 2 incubator for 24 h before detection. The cell culture plate and sensor cartridge were placed on XFp extracellular flux analyzer for Mito Stress Test. After detection, all the data were normalized to the BCA quantification of each well. This synthetic bioenergy spectrum provides detailed information on the various components of the respiratory chain. In brief, six essential parameters of mitochondrial respiration function were calculated from the results: basal respiration, ATP production, proton leakage, maximum respiration, spare respiration capacity, and non-mitochondrial respiration.
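For readers unfamiliar with how the six parameters listed above are obtained from the raw trace, the sketch below shows one common way to derive them from per-phase OCR measurements. The point selections (last baseline reading, minimum after oligomycin, maximum after FCCP, minimum after rotenone/antimycin A) follow the conventions typically used for this assay, and the numeric OCR values and function name are illustrative assumptions rather than data from this study.

```python
def mito_stress_parameters(ocr_basal, ocr_oligomycin, ocr_fccp, ocr_rot_aa):
    """Derive Mito Stress Test parameters from an OCR trace (pmol O2/min).

    Each argument is a list of OCR measurements taken during one injection phase:
    baseline, after oligomycin, after FCCP, and after rotenone/antimycin A.
    """
    non_mito = min(ocr_rot_aa)                         # respiration insensitive to ETC inhibition
    basal = ocr_basal[-1] - non_mito                   # last baseline point minus non-mito OCR
    atp_linked = ocr_basal[-1] - min(ocr_oligomycin)   # drop caused by blocking ATP synthase
    proton_leak = min(ocr_oligomycin) - non_mito
    maximal = max(ocr_fccp) - non_mito                 # uncoupled (FCCP-driven) respiration
    spare_capacity = maximal - basal
    return {
        "non_mitochondrial": non_mito,
        "basal": basal,
        "ATP_production": atp_linked,
        "proton_leak": proton_leak,
        "maximal": maximal,
        "spare_capacity": spare_capacity,
    }

# Illustrative OCR values (already normalized to protein content per well)
params = mito_stress_parameters(
    ocr_basal=[118.0, 121.0, 120.0],
    ocr_oligomycin=[55.0, 52.0, 53.0],
    ocr_fccp=[195.0, 201.0, 198.0],
    ocr_rot_aa=[28.0, 26.0, 27.0],
)
for name, value in params.items():
    print(f"{name}: {value:.1f} pmol O2/min")
```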
Reactive Oxygen Species Assay
The intracellular reactive oxygen species (ROS) was analyzed by Reactive Oxygen Species Assay Kit (Beyotime cat. # S0033S, China). HUVECs (2 × 10 5 per well of six-well plates) were seeded and transfected with siRNAs for 48 h. Then, the cells were washed with PBS once; 1 ml of DCFH-DA (1:1,000 dilution) was added to each collecting tube in the dark and incubated at 37 • C for 30 min. The labeled cells were collected and analyzed by flow cytometry at 488 nm. All experiments were repeated three times independently.
Detection of ATP Levels
ATP dissolved in cells was detected by enhanced ATP assay kits (Beyotime cat. #S0027, China). According to the recommendations of the manufacturer, the standard curve was established and the concentration was detected by an enzyme reader (TECAN cat. #30086376, Switzerland). Finally, the ATP concentration was normalized by the BCA protein concentration method to eliminate the error caused by the difference of protein content. The quantification of the total ATP levels in HUVECs was conducted 48 h after transfection with siRNAs. All experiments were repeated three times independently.
Image Acquisition
For zebrafish, embryos and larvae were analyzed with a Nikon SMZ 18 fluorescence microscope and subsequently photographed with digital cameras. A subset of images was adjusted for level, brightness, contrast, hue, and saturation with Adobe Photoshop 7.0 software (Adobe, San Jose, California) to optimally visualize the expression patterns. Quantitative image analyses was processed using image-based morphometric analysis (NIS-Elements D4.6, Japan) and Fiji Image J (NIH, Bethesda, MD, United States). Ten animals for each treatment were quantified, and the total signal per animal was averaged.
For HUVECs, all the experiment images were taken with an Olympus IX73 fluorescence microscope. Quantitative image analyses were processed using image Fiji Image J (NIH, Bethesda, MD, United States).
Statistical Analysis
All data were presented as mean ± SEM. Statistical analysis and graphical representation of the data were performed using GraphPad Prism 8.3 (GraphPad Software, San Diego, CA). Statistical significance was performed using Student's t-test or ANOVA as appropriate. Statistical significance is indicated by an asterisk where P < 0.05; two asterisks, where P < 0.01; three asterisks, where P < 0.001; and four asterisks, where P < 0.0001.
Expression of fars2 Is Essential in the Early Stage of Zebrafish Embryo Development
Zebrafish (Danio rerio) is used extensively in angiogenesis studies because it undergoes rapid growth. The development of the vasculature in zebrafish can be divided into five major stages (50,51). Compared with humans, the fars2 gene was highly homologous, and the sequence similarity of Fars2 protein in zebrafish reached 71.39%. To explore its role in angiogenesis, we investigated the expression of fars2 during the embryonic development of zebrafish. The qRT-PCR analyses of total embryos revealed that fars2 transcription increased between 6 and 24 h post-fertilization (hpf) and then again between 72 and 96 hpf, which are the critical stages of vascular formation in zebrafish (Figure 1A). At 20 hpf, primary sprouts start to emerge bilaterally from the dorsal aorta at each vertical myoseptal boundary and then elongate dorsally, ramify, and interconnect along the dorsolateral roof of the neural tube to form paired dorsal longitudinal anastomotic vessels. The primary sprouts grow in a saltatory pattern, with numerous filopodia actively extending and retracting in all directions around the stretchy vessels (50,52). The 3-6 days post-fertilization stage is the key period for the establishment of the systemic circulation in zebrafish embryos (50).
To investigate the role of fars2 in zebrafish embryo development further, two specific MOs (ATG-MO and E3I3-MO) were designed to reduce its expression in vivo (Supplementary Figure 1A). Quantitative analyses performed after injecting one-cell fertilized ova with a non-specific control MO or the fars2-specific MOs confirmed the successful knockdown of fars2 by the latter (Supplementary Figures 1B,C). Approximately 26.2% of fars2 ATG morphants and 55.3% of E3I3 morphants presented an enlarged yolk sac, with the embryos displaying delayed growth and curved trunks (Figures 1B,C). The remaining embryos injected with the fars2 MOs all died ( Figure 1B).
Overall, these findings demonstrate that fars2 is expressed at high levels during the critical period of angiogenesis in zebrafish and that the loss of fars2 impairs embryonic development.
Morpholino-Induced Knock-Down of fars2 Delays Vascular Formation in Zebrafish
To examine its role in zebrafish developmental angiogenesis, fars2 was knocked down in Tg (fli1:EGFP) y1 transgenic zebrafish, which display a steady expression of EGFP within vascular ECs, allowing easy visualization of the vascular structures (41). The labeled ISVs and DLAVs showed regular development in the embryos injected with the control MO. By contrast, embryos injected with fars2-specific MOs displayed lower numbers of ISVs and ectopic sprouts (Figure 2A). The PAVs, the precursors to the lymphatic system, formed normally in control embryos, whereas fars2 morphants displayed deficient PAV formation (Figure 2A). In addition, the number of complete ISVs (Figure 2B) and the mean length of ISVs ( Figure 2C) were significantly lower in the fars2 morphants than in the controls.
During zebrafish angiogenesis, new vessels that arise from axial veins and dorsal aortas form a primitive circulatory loop (53,54). At 26-32 hpf, the posterior axial vein stretches ventrally and ultimately forms a "honeycomb-like" network named the CVP at 38 hpf. The shape of the CVP is produced by dorsal veins, ventral veins, and interlacing vessels (55,56). In embryos injected with the control MO, the CVP formed canonical honeycomb-like structures at the tail at around 50 hpf. By contrast, fars2 knock-down caused specific defects in CVP formation ( Figure 2D). Furthermore, the number of loops at the CVP was lower in the fars2 knock-down embryos than in the control embryos ( Figure 2E). Overall, these findings demonstrate that MO-mediated knock-down of fars2 disrupted the formation of ISVs, DLAVs, and the CVP during embryonic development in zebrafish.
Deficiency of FARS2 Impairs Cell Motility, Proliferation, Migration, and Tube Formation in HUVECs
To gain further insight into the function of FARS2 in angiogenesis, we established an in vitro FARS2 knock-down model using HUVECs and siRNAs. Western blot and qRT-PCR analyses confirmed the efficient knock-down of FARS2 by three different siRNAs (si-FARS2). Compared with those in cells transfected with a control siRNA (siCtrl), the expression levels of the FARS2 gene and protein were reduced by at least 30% following transfection with si-FARS2 (Supplementary Figure 2).
Scratch-wound assays, CCK8-based cell proliferation tests, and transwell-based migration assays revealed that the loss of FARS2 reduced the motility, proliferation, and migration capacity of HUVECs (Figures 3A-E). In addition, tubulogenesis was also reduced in cells transfected with si-FARS2 ( Figure 3F). Compared with those in cells transfected with siCtrl, the number of intersections in one field ( Figure 3G) and the total length of the tube structures ( Figure 3H) were lower following FARS2 silencing. To our knowledge, this is the first report of an in vitro FARS2 knock-down cell model created using siRNAs. Our findings demonstrate that the loss of FARS2 in HUVECs impairs cell motility, proliferation, invasion, and tube formation.
FARS2 Silencing Causes Mitochondrial Dysfunction in HUVECs
As the FARS2 gene encodes the mitochondrial phenylalanyl-tRNA synthetase, which is involved in the synthesis of mtDNA-coded OXPHOs subunits, we investigated mitochondrial respiration in HUVECs after FARS2 silencing. To this end, a Seahorse Bioscience XF24 analyzer was used to measure the rates of non-mitochondrial respiration, basal respiration, maximal respiration, proton leak, ATP production, and spare respiratory capacity (57) in HUVECs transfected with siCtrl or si-FARS2 for 48 h. Basal mitochondrial respiration, represented by the oxygen consumption rate (OCR), was lower in HUVECs transfected with si-FARS2 than in non-transfected HUVECs or those transfected with siCtrl (Figures 4A,B). Following the addition of oligomycin, an inhibitor of ATP synthase, ATP production and proton leak were lower in si-FARS2-treated cells than in siCtrl-treated cells (Figure 4B). FARS2 silencing also attenuated the OCR after the cells were treated with FCCP to maximize mitochondrial respiration (Figure 4B). In addition, after treatment with rotenone to inhibit the oxidative respiratory chain, the loss of FARS2 attenuated the OCR. Finally, the spare respiratory capacity, which was calculated based on the basal and maximal respiration values, was also lower in si-FARS2-treated cells than in siCtrl-treated cells (Figure 4B). The mitochondria produce ATP and are a main source of ROS. Reduced ATP production and increased levels of ROS are thought to occur as a result of mitochondrial dysfunction. Compared with the control cells, the FARS2-deficient HUVECs showed lower levels of total ATP and increased levels of ROS (Figures 4C,D). Overall, these results suggest that silencing of the FARS2 gene impairs mitochondria- and non-mitochondria-related respiration, leading to mitochondrial dysfunction in HUVECs.
Deficiency of FARS2 Impairs Angiogenesis by Disrupting the Notch and Wnt Signaling Pathways
To explore the potential molecular mechanisms underlying the suppression of angiogenesis following MO-mediated knockdown of fars2 in zebrafish, the expression levels of key genes in the Notch and Wnt pathways were examined using qRT-PCR. In zebrafish, fars2 deficiency upregulated the notch1b (a Notch receptor) and hey2 (a downstream gene in the Notch pathway) expression levels, indicating the activation of the Notch pathway ( Figure 5A). In addition, fars2 deficiency increased the expression level of dkk1b and decreased those of other downstream genes in the Wnt pathway, indicating an inhibition of Wnt signaling ( Figure 5A).
As seen in zebrafish, siRNA-mediated knock-down of FARS2 in HUVECs also activated the Notch signaling pathway by upregulating all four mammalian Notch receptors (NOTCH1-4) and three ligands (DLL1, 3, and 4) to varying degrees ( Figure 5B). In addition, the Wnt signaling pathway was inhibited after FARS2 silencing, as indicated by the downregulation of Wnt downstream genes (β-catenin, AXIN1, and AXIN2) and upregulation of the Wnt signaling inhibitor gene DKK1 (Figure 5B). Western blot analyses confirmed that the NOTCH1 and β-catenin protein levels were increased and decreased, respectively, following the siRNA-mediated knock-down of FARS2 (Figures 5C,D). Overall, these findings demonstrate that the loss of FARS2 affects angiogenesis by disrupting the Notch and Wnt signaling pathways.
DISCUSSION
The results presented here show that mitochondrial phenylalanyl-tRNA synthetase plays an essential role in angiogenesis both in vivo and in vitro. Our initial analysis of the expression pattern of fars2 during zebrafish embryonic development suggested that it plays a role in developmental angiogenesis. Subsequently, using MOs, we found that fars2 deficiency caused the delayed development of zebrafish embryos and impaired vascular formation, including those of ISVs, DLAVs, PAVs, and the CVP. Similarly, we found that siRNA-mediated knock-down of FARS2 in HUVECs impaired cell motility, proliferation, migration, and tube formation, confirming the role of FARS2 in angiogenesis. We also found that the loss of FARS2 led to mitochondrial dysfunction in HUVECs. Finally, we explored the possible mechanisms underlying the disruption of angiogenesis and found that FARS2 deficiency may disrupt the Notch and Wnt signaling pathways, both of which are involved in angiogenesis (Figure 6).
The lethality of defects in ECs to mammalian embryos confirms the pivotal function of the vasculature in development. During embryonic development, two essential processes, vasculogenesis and angiogenesis, form the vasculature consisting of arterial, venous, and lymphatic vessels. Vasculogenesis is defined as the de novo emergence of vessels through the differentiation of angioblasts. Angiogenesis describes new vascular formation after the proliferation of ECs from preexisting vessels (58)(59)(60). Much effort has been focused on investigating the key stages of vasculature development in mammalian embryos. The first sign of vascular formation occurs in the extraembryonic yolk sac blood island at the gastrulation stage as early as embryonic day 7.5. Subsequently, the blood island fuses to constitute the primary plexus, which leads to the establishment of the complex yolk sac vasculature (61,62). Next, under the influence of complex transcriptional regulation and critical signaling components of angiogenesis, the newborn vessels of the developing embryo specialize further and differentiate into arteries, veins, and capillaries. Our results presented here not only identify the phenotype of delayed embryonic development in zebrafish caused by fars2 deficiency but also preliminarily suggest that this phenotype may be caused by impaired angiogenesis.
[Displaced figure legend (Fig. 4, in part): OCR measured continuously throughout the experimental period, both at baseline and in the presence of the indicated drugs; (B) non-mitochondrial respiration, basal respiration, maximal respiration, proton leak, ATP production, and spare respiratory capacity in control and FARS2-deficient HUVECs; (C) effects of FARS2 knock-down on intracellular reactive oxygen species production by HUVECs; (D) quantification of total ATP levels in HUVECs 48 h after transfection with the indicated siRNAs. Measurements in triplicate (mean and SEM); *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.]
Although angiogenesis is an energy-intensive process, glycolysis is the primary energy-producing mechanism in vascular ECs, a feature that is attributable to their special physiological distribution and high levels of exposure to oxygen (27,28,63). Historically, the role of mitochondrial respiration in angiogenesis has been overlooked, that is, until the discovery of the essential role of mitochondrial fatty acid and amino acid oxidation pathways in angiogenesis (34). A growing body of evidence suggests that, by acting as important organelles that sense ambient oxygen concentrations and generate energy, the mitochondria play an integral role in controlling metabolism and in regulating the proliferation and survival of ECs during angiogenesis. The mutation of mitochondrial tRNA and aberrant tRNA metabolism induce mitochondrial dysfunction, leading to apoptosis and impaired angiogenesis in HUVECs (64). The mitochondrial permeability transition pore also plays a role in regulating mitochondrial metabolism in ECs and in the maintenance of vascular integrity (65). In addition, mitochondrial dynamics (44) and mitochondria-endoplasmic reticulum contacts (66, 67) are critical for the regulation of angiogenesis and vascular remodeling. In our current study, we found that the impairment of HUVEC proliferation, migration, and tube formation by FARS2 deficiency was caused by abnormal mitochondrial respiratory function.
[Displaced figure legend (Fig. 5, in part): expression levels of genes involved in the Notch/Wnt pathways in control and fars2 zebrafish morphants, determined by qRT-PCR (n = 6-10 individual embryos); (B) relative mRNA expression levels of Notch/Wnt pathway-related genes in HUVECs transfected with the indicated siRNAs for 48 h; (C) western blot analyses of NOTCH1 and β-catenin protein levels in HUVECs transfected with the indicated siRNAs for 48 h; (D) quantification of the western blotting data in (C). Measurements in triplicate (mean and SEM), representative of three independent experiments; *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001; ns, not significant.]
The hierarchical organization of ECs into tip cells (leading role) and stalk cells (trailing role) is required by angiogenesis.
Tip cells lead the sprouts toward the signaling sources of angiogenesis in tissues, and the tip cells are followed by stalk cells, which elongate the sprout (24,68). These processes are orchestrated by a complex molecular network, like Notch, Wnt, and VEGF/VEGFR. In tip cells, the activation of VEGFR2 induced the expression of DLL4 in response to VEGF from the signaling source (69). Then, DLL4 activates Notch in stalk cells to restrict branching. Studies in zebrafish and mice reveal that Notch is essential for restricting EC behavior to tip cells, reflected in the excessive sprouting of arteries in the absence of the Notch and the damage of angiogenesis in the activation of the Notch (70, 71). In ECs, Wnt signals could induce a Notch-like phenotype in a reciprocal feedback role, characterized by vascular remodeling and branching defects (39). Studies in mice reveal that Wnt is also required for angiogenesis, reflected in vascular defects after geneinactivation of the Wnt genes (72). In our study, the activation of Notch and the inhibition of Wnt caused by FARS2 deficiency might damage angiogenesis by breaking the determination of EC fate and disrupting the signaling system in ECs. In addition, we detected that the transcript of dll4 and notch1a had no significant changes in zebrafish, which was inconsistent with the results of HUVECs. However, the regulation of angiogenesis in vivo is an extremely complex process involving various network pathways. In fars2 deficiency in zebrafish, the upregulation of hey2 and notch1b could partially indicate the activation of Notch signaling pathway (73, 74), but no changes in dll4 and notch1a were potentially due to the crosstalk with other signaling pathways, like VEGF/VEGFR (75). Moreover, we are eager to explore the specific molecular mechanisms involved in these processes during future research.
Expanding research into brain science has produced a large amount of evidence showing that angiogenesis plays a neurotrophic role in neurodegenerative disorders such as Alzheimer's disease. The relationship between cerebrovascular abnormalities and cognitive decline is supported by the fact that Alzheimer's disease brains display vascular pathology, with microvasculature changes occurring before cognitive decline and preceding neurodegenerative changes (76-78). In addition, there is sufficient evidence to suggest that vascular endothelial growth factor-based gene or protein therapies could be used to treat amyotrophic lateral sclerosis patients (79). Although mutations in the FARS2 gene have a strong association with neurological diseases, the relationship between neural microvascular networks and disease phenotypes in patients with these mutations has not been characterized. Our study may provide new insights into the progression of neurovascular diseases and the diagnosis and treatment of FARS2 mutation-related genetic diseases.
In summary, using in vivo and in vitro knock-down models, we report that FARS2 is essential for angiogenesis. In this study, we focused on elucidating the phenotypes associated with angiogenic defects caused by FARS2 deficiency. However, the specific molecular mechanisms linking cardiovascular system defects to the impairment of mitochondrial respiratory function due to FARS2 deficiency have not been investigated thoroughly. In addition, the interaction between the pathogenesis of neurodegenerative diseases and impairment of angiogenesis caused by FARS2 defects requires further exploration.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by the Fourth Military Medical University.
AUTHOR CONTRIBUTIONS
BL contributed to the conceptualization, data curation, investigation, statistical analysis, visualization, and writing of the original draft. KC and FL contributed to the conceptualization, project administration, methodology, software, editing, and writing of the original draft. JZ contributed to the conceptualization, data collection, and writing of the original draft. XC contributed to the conceptualization, methodology, and writing of the original draft. TC, QC, YY, and WH contributed to the methodology, data collection, data validation, formal analysis, and resources. YW and LW contributed to the conceptualization, project administration, and writing (editing). All authors contributed to the article and approved the submitted version. | 2021-09-04T13:26:29.915Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "3a51fa4b3ac94054503d194e89f336a6064f9610",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2021.724846/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3a51fa4b3ac94054503d194e89f336a6064f9610",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52875402 | pes2o/s2orc | v3-fos-license | On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA
Non-convex optimization with global convergence guarantees has been gaining significant interest in machine learning research in recent years. However, while most works consider either offline settings in which all data is given beforehand, or simple online stochastic i.i.d. settings, very little is known about non-convex optimization for adversarial online learning settings. In this paper we focus on the problem of Online Principal Component Analysis in the regret minimization framework. For this problem, all existing regret minimization algorithms are based on a positive semidefinite convex relaxation, and hence require quadratic memory and SVD computation (either thin or full) on each iteration, which amounts to at least quadratic runtime per iteration. This is in stark contrast to a corresponding stochastic i.i.d. variant of the problem which admits very efficient gradient ascent algorithms that work directly on the natural non-convex formulation of the problem, and hence require only linear memory and linear runtime per iteration. This raises the question: can non-convex online gradient ascent algorithms be shown to minimize regret in online adversarial settings? In this paper we take a step forward towards answering this question. We introduce an adversarially-perturbed spiked-covariance model in which each data point is assumed to follow a fixed stochastic distribution, but is then perturbed by adversarial noise. We show that in a certain regime of parameters, when the non-convex online gradient ascent algorithm is initialized with a "warm-start" vector, it provably minimizes the regret with high probability. We further discuss the possibility of computing such a "warm-start" vector. Our theoretical findings are supported by empirical experiments on both synthetic and real-world data.
Introduction
Nonconvex optimization is ubiquitous in contemporary machine learning, ranging from optimization over sparse vectors or low-rank matrices to training Deep Neural Networks. While traditional (yet still highly active) research on nonconvex optimization focuses mostly on efficient convergence to stationary points, which in general need not even be a local minimum, let alone a global one, a more recent line of work focuses on proving convergence to global minima, usually under certain simplifying assumptions that on one hand make the nonconvex problem tractable, and on the other hand are sufficiently reasonable in some scenarios of interest. One of the most studied and well known nonconvex optimization problems in machine learning underlies the fundamental task of Principal Component Analysis (PCA), in which, given a set of N vectors in R^d, one wishes to find a k-dimensional subspace for k << d, such that the projections of these vectors onto this subspace are closest in square-error to the original vectors. It is well known that the optimal subspace corresponds to the span of the top k eigenvectors of the covariance matrix of the data points. Henceforth, we focus our discussion on the case k = 1, i.e., extracting the top principal component. Quite remarkably, while this problem is non-convex (since extracting the top eigenvector amounts to maximizing a convex function over the unit Euclidean ball), a well known iterative algorithm known as the Power Method (or Power Iterations), which simply starts with a random unit vector and repeatedly applies the covariance matrix to it (and then normalizes the result to have unit norm), converges rapidly to the global optimal solution. The convergence guarantee of the PM can also be shown to imply that the nonconvex projected gradient ascent method with random initialization and a fixed step-size also converges to the top principal component.
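To make the above concrete, here is a minimal Python sketch of the Power Method for extracting the top principal component; the variable names, iteration count, and test data are illustrative choices of ours, not taken from the paper.

import numpy as np

def power_method(C, num_iters=200, seed=0):
    """Return (approximately) the leading eigenvector of a symmetric PSD matrix C
    by repeatedly applying C to a random unit vector and re-normalizing."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(num_iters):
        w = C @ w
        w /= np.linalg.norm(w)
    return w

# Example usage: top principal component of a small random data set.
X = np.random.default_rng(1).normal(size=(500, 20))
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / len(X)          # empirical covariance matrix
w_top = power_method(C)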
In a recent line of work, the convergence of non-convex gradient methods for PCA was extended to a natural online stochastic i.i.d. setting of the problem, in which, given a stream of data points sampled i.i.d. from a fixed distribution, the goal is to converge to the top eigenvector of the covariance of the underlying distribution as the sample size increases, yielding algorithms that require only linear memory (i.e., do not need to store the entire sample or large portions of it at any time) and linear runtime to process each data point, see for instance [15,3,18,10,1,14,22].
In a second recent line of research, researchers have considered Online PCA as a sequential decision problem in the adversarial framework of regret minimization (aka online learning, see for instance the recent introductory texts [17,8]), e.g., [20,21,16,4,6,2]. In this framework, for each data point, the online algorithm is required to predict a unit vector (i.e., a subspace of dimension one; recall we are in the case k = 1) before observing the data point, and the goal is to minimize regret, which is the difference between the square-error of the predictions made and the square-error of the principal component of the entire sequence of data. Different from the i.i.d. stochastic setting, in this framework the data may be arbitrary (though assumed to be bounded in norm), and need not follow a simple generative model. Formally, the regret is expressed in terms of λ_1(·), which denotes the largest (signed) eigenvalue of a real symmetric matrix (a standard form of this expression is given below). Naturally, the arbitrary nature of the data in the online learning setting makes the problem much more difficult than the stochastic i.i.d. setting. Notably, all current algorithms which minimize regret cannot directly tackle the natural nonconvex formulation of the problem, but consider a well known (tight) convex relaxation, which "lifts" the decision variable from the unit Euclidean ball in R^d to the set of all d × d positive semidefinite matrices of unit trace (aka the spectrahedron). While this reformulation makes it possible to obtain regret-minimizing algorithms in the online adversarial settings (since the problem becomes convex), the resulting algorithms are dramatically less efficient than the standard nonconvex gradient methods. In particular, all such algorithms require quadratic memory (i.e., O(d^2)), and require either a thin or full-rank SVD computation of a full-rank matrix to process each data point, which amounts to at least quadratic runtime per data point (for non trivially-sparse data), see [20,21,16,4,6,2]. This phenomenon naturally raises the question: Can Nonconvex Online Gradient Ascent be shown to minimize regret for the Online PCA problem?
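For reference, a standard way to write this regret for the k = 1 case, consistent with the verbal description above (offered here as a reconstruction rather than a quotation of the paper's own display), is:

\[ \mathrm{regret}_N \;=\; \lambda_1\Big(\sum_{t=1}^{N} x_t x_t^{\top}\Big) \;-\; \sum_{t=1}^{N} \big(w_t^{\top} x_t\big)^2, \]

where w_t denotes the unit-vector prediction made before observing the data point x_t.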
While in this paper we do not provide a general answer (either positive or negative), we do take a step forward towards understanding the applicability of nonconvex gradient methods to the Online PCA problem, and to online nonconvex optimization in general. We introduce a "semi-adversarial" setting, which we refer to as the adversarially-perturbed spiked-covariance model, which assumes the data follows a standard i.i.d. stochastic distribution with a covariance matrix that admits a non-zero spectral gap; however, each data point is then perturbed by some arbitrary, possibly adversarial, vector of non-trivial magnitude. We view this model as a natural extension of the standard stochastic spiked-covariance model (which was studied extensively in recent years) due to its ability to capture arbitrary (adversarial) patterns in the data. Hence, we believe the suggested model might provide a much better approximation for real-world data streams. We formally prove that in a certain regime of parameters, which concerns both the spectral properties of the distribution covariance and the magnitude of adversarial perturbations, given a "warm-start" initialization which is sufficiently correlated with the top principal component of the distribution covariance, the natural nonconvex online gradient ascent algorithm guarantees an Õ(√N) regret bound with high probability. In particular, the algorithm requires only O(d) memory and O(d) runtime per data point. We further discuss the possibilities of computing such a "warm-start" vector (i.e., initializing from a "cold-start"). Finally, we present empirical experiments with both synthetic and real-world datasets which complement our theoretical analysis.
Assumptions and Results
In this section we formally introduce our assumptions and main result. As discussed in the introduction, since our aim is to make progress on a highly non-trivial problem, namely providing global convergence guarantees for a non-convex optimization algorithm in an online adversarial setting, our results do not hold for arbitrary (bounded) data, as is standard in convex online learning settings, but only for a more restricted family of input streams, namely those which follow a model we refer to in this paper as the adversarially-perturbed spiked-covariance model. Next we formally introduce this model.
Adversarially-Perturbed Spiked-Covariance Model
Throughout the paper we assume the data, i.e., the vectors {x_t}, t ∈ [N], satisfy the following assumption.
We now make a few remarks regarding Assumption 1. Item (1) assumes that the adversarial perturbations are bounded, which is standard in the online learning literature. Item (2) is also a standard assumption, used to apply standard concentration arguments for sums of i.i.d. random variables. Item (3), i.e., the assumption that the distribution has zero mean, while often standard, is not mandatory for our analysis to hold; however, since it greatly simplifies the analysis and results in a much wider regime of parameters to which our result is applicable, we make it.
To better understand item (4), it helps to think of δ(Q), V^2, and ε as quantities proportional to λ_1(Q), i.e., consider δ(Q) = c_δ λ_1(Q), V^2 = c_V λ_1(Q), ε = c_ε λ_1(Q), for some universal constants c_δ, c_V, c_ε ∈ (0, 1). Now, item (4) in the assumption boils down to the condition c_δ ≥ c_V + c_V^2 + c_ε, i.e., the eigengap in the covariance Q needs to dominate the adversarial perturbations in a certain way. We further discuss this assumption after presenting our main theorem, Theorem 1, in the following subsection.
Connection with stochastic i.i.d. models: note that when setting V = 0 in Assumption 1 (i.e., there is no adversarial component), our setting reduces to the well studied standard stochastic i.i.d. setting. In particular, in this case item (4) in Assumption 1 simply reduces to the standard assumption in this model that the covariance admits an eigengap bounded away from zero (δ(Q) ≥ ε). Hence, the model introduced above can be seen as a natural, yet highly non-trivial, extension of the standard stochastic model to a "more expressive" online adversarial model, that might serve as a better approximation for real-world data-streams in online-computation environments.
Algorithm and Convergence Result
For simplicity of the analysis we consider the data as arriving in blocks of length ℓ, where ℓ is a parameter to be determined later. Towards this end, we assume that N = Tℓ for some integer T, and we consider prediction in T rounds, such that on each round t ∈ [T], the algorithm predicts on all ℓ vectors in the t-th block, which we denote by x_t. It is important to emphasize that, while our algorithm considers the original data in blocks, it requires only O(d) memory and O(d) time to process each individual data point. Our algorithm, which we refer to as nonconvex online gradient ascent, is given below (see Algorithm 1).
Algorithm 1 Nonconvex Online Gradient Ascent for Online PCA
The following theorem states our main result.
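The per-block update of Algorithm 1 can be reconstructed from the analysis below (Lemma 7 characterizes ŵ_{t+1} as a single power iteration, initialized with ŵ_t, applied to the matrix W_{t+1} = ŵ_tŵ_t^⊤ + ηX_t). The following is therefore only a plausible minimal sketch with our own function and variable names, not the paper's verbatim listing, and the exact step details should be checked against the original.

import numpy as np

def nonconvex_oga(blocks, w1, eta):
    """Sketch of nonconvex online gradient ascent for Online PCA (k = 1).
    blocks: iterable of (ell, d) arrays, one per round; w1: unit warm-start vector;
    eta: learning rate. The prediction for a block is the current unit vector, and
    the update is one power iteration on w w^T + eta * X_t, i.e. w <- normalize(w + eta * X_t w)."""
    w = w1 / np.linalg.norm(w1)
    predictions = []
    for X_block in blocks:
        predictions.append(w.copy())
        X_t = X_block.T @ X_block        # sum of outer products over the block
        w = w + eta * (X_t @ w)          # (w w^T + eta X_t) w = w + eta X_t w, since ||w|| = 1
        w /= np.linalg.norm(w)
    return predictions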
Theorem 1. For large enough N, there exists an integer ℓ (a block length) such that applying Algorithm 1 with blocks of length ℓ, with an initialization ŵ_1 which satisfies a warm-start condition, i.e., is sufficiently correlated with x, the leading eigenvector of Q (as defined in Assumption 1), and with an appropriately chosen learning rate η, guarantees, with high probability, a regret of Õ(√N).
Theorem 1 roughly says that when the distribution covariance has a large-enough eigengap with respect to the adversarial perturbations (item 4 in Assumption 1), then non-convex OGA converges from a "warm-start" with Õ(√N) regret. Intuitively, the condition on the eigengap implies that the best-in-hindsight eigenvector cannot be far from x, the leading eigenvector of the distribution covariance, by more than a certain constant. Hence, Theorem 1 can be seen as an online "local" convergence result. Importantly, it is not hard to show that under the conditions of the theorem, the best-in-hindsight eigenvector can also be far from both the initial vector ŵ_1 and from x by a constant (and hence, in particular, both ŵ_1 and x can incur linear regret). Hence, while our setting is strictly easier than the fully adversarial online learning setting, it is still a highly non-trivial online learning setting. In particular, all previous algorithms that provably minimize the regret under the conditions of Theorem 1 require quadratic memory and quadratic runtime per data point.
Computing a "warm-start" vector
We now discuss the possibility of satisfying the "warm-start" requirement in Theorem 1.
First, we note that given the possibility to sample i.i.d. points from the underlying distribution D, it is straightforward to obtain a warm-start vector ŵ_1, as required by Theorem 1, by simply initializing ŵ_1 to be the leading eigenvector of the empirical covariance of a size-n sample of such points. It is not difficult to show, via standard tools such as the Davis-Kahan sin θ theorem and a Matrix-Hoeffding concentration inequality (see for instance the proof of the following Lemma 1), that for any (ε, p) ∈ (0, 1)^2 a sufficiently large sample (polynomial in 1/ε and logarithmic in d/p) suffices. If sampling directly from D is not possible, the following lemma, whose proof is given in the sequel, shows that with a simple additional assumption on the parameters δ(Q), λ_1(Q), V^2, it is possible to obtain the warm-start initialization directly using data that follows Assumption 1. Moreover, the sample size n required is independent of N, and hence using, for instance, the first n vectors in the stream to compute such an initialization deteriorates the overall regret bound in Theorem 1 only by a lower-order term.
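As an illustration of the first (direct-sampling) option, a warm-start vector can be computed as the leading eigenvector of an empirical covariance. The snippet below is a generic sketch; the sample size and names are placeholders, not the quantities prescribed by Lemma 1.

import numpy as np

def warm_start(samples):
    """Leading eigenvector of the empirical covariance of an (n, d) array of samples.
    No centering is applied here, since the model assumes a zero-mean distribution
    (item (3) of Assumption 1)."""
    C_hat = samples.T @ samples / len(samples)
    vals, vecs = np.linalg.eigh(C_hat)   # eigenvalues returned in ascending order
    return vecs[:, -1]                   # eigenvector of the largest eigenvalue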
Lemma 1 (warm-start). Suppose that, in addition to Assumption 1, it also holds that δ(Q) ≥ (16 λ_1(Q) V^4)^{1/3}. Then there exists a sample size n, independent of N, such that initializing ŵ_1 to be the leading eigenvector of the empirical covariance of the first n data points of the stream satisfies the warm-start condition of Theorem 1 with high probability.
Analysis
At a high level, the proof of Theorem 1 relies on the combination of the following three ideas: 1. We build on the fact that the Online PCA problem, when cast as online linear optimization over the spectrahedron (i.e., when the decision variable is lifted from a unit vector to a positive semidefinite matrix of unit trace), is online learnable via a standard application of online gradient ascent, which achieves an O(√N) regret bound (however, it requires a full SVD computation on each iteration to compute the projection onto the spectrahedron).
2. We prove that, under Assumption 1, the above "inefficient" algorithm, when initialized with a proper "warm-start" vector, guarantees that the projection onto the spectrahedron is always a rank-one matrix (hence, only a rank-one SVD computation per iteration is required).
3. Finally, we show that the nonconvex online gradient ascent algorithm, Algorithm 1, approximates sufficiently well the steps of the above algorithm (when the projection is rank-one), avoiding SVD computations altogether.
We introduce the following notation that will be used throughout the analysis. For all t ∈ [T], we let X_t denote the sum of the outer products xx^⊤ over the ℓ data points in the t-th block. Recall that we let Q denote the covariance matrix associated with the distribution D (as detailed in Assumption 1), and we let λ_1(Q), . . . , λ_d(Q) denote its eigenvalues in descending order. Also, we let x denote the leading eigenvector of Q, which under Assumption 1 is unique. We also define D_t := Q_t − ℓ · Q + M_t. Note that X_t = ℓ · Q + V_t + D_t. Intuitively, under Assumption 1, (1/ℓ)D_t converges to zero in probability as ℓ → ∞. We denote by S the spectrahedron, i.e., S := {W ∈ R^{d×d} | W ⪰ 0, Tr(W) = 1}, and we let Π_S[W] denote the Euclidean projection of a symmetric matrix W ∈ R^{d×d} onto S.
Our main building block towards proving Theorem 1 is to analyze the regret of a different non-convex algorithm for Online PCA. The meta-algorithm, Algorithm 2, builds on the standard convexification scheme for Online PCA, i.e., "lifting" the decision set from the unit ball to the spectrahedron, however, instead of computing exact projections onto the spectrahedron, it follows a nonconvex approach of only approximating the projection via a rank-one solution. We refer to it as a meta-algorithm, since for a given approximation parameter γ, it only requires on each iteration to find an approximate eigenvector of the matrix to be projected onto S.
Note that a straightforward implementation of Algorithm 2 with γ = 0 corresponds to updating ŵ_{t+1} via an accurate SVD of the d×(ℓ+1) matrix whose columns are ŵ_t and the (appropriately scaled) data points of the t-th block.
Algorithm 2 Approximate Non-convex Rank-one Online Gradient Ascent
1: input: unit vector ŵ_1, learning rate η > 0, approximation parameter γ > 0
2: for t = 1 . . . T do
3: play vector ŵ_t
4: update ŵ_{t+1} to a unit vector that γ-approximates the leading eigenvector of ŵ_tŵ_t^⊤ + ηX_t
Lemma 2. Let w ∈ R^d be a unit vector and let X ∈ R^{d×d} be positive semidefinite. Let w′ be the leading eigenvector of the matrix W := ww^⊤ + ηX, for some η > 0. If w^⊤Xw ≥ (λ_1(X) + λ_2(X))/2, then w′w′^⊤ = Π_S[W].
Proof. Recall that w′ denotes the leading eigenvector of W, and let y_2, . . . , y_d denote the other eigenvectors in non-increasing order of eigenvalue. It is well known that the projection of W onto S keeps the eigenvectors of W and shifts its eigenvalues onto the unit simplex; in particular, the projection equals the rank-one matrix w′w′^⊤ whenever λ_1(W) ≥ 1 + λ_2(W). On one hand, λ_1(W) is bounded from below by w^⊤Ww; on the other hand, λ_1(W) + λ_2(W) is bounded from above using Ky Fan's eigenvalue inequality. Thus, by combining Eq. (1), (2), (3), we arrive at a sufficient condition so that w′w′^⊤ = Π_S[W], which is equivalent to the condition w^⊤Xw ≥ (λ_1(X) + λ_2(X))/2.
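The rank-one condition in Lemma 2 is also easy to verify numerically. The sketch below implements the Euclidean projection onto the spectrahedron (eigen-decomposition followed by projection of the eigenvalues onto the probability simplex) and checks that, when w^⊤Xw ≥ (λ_1(X)+λ_2(X))/2 holds, the projection of ww^⊤ + ηX is indeed rank one; the test matrix and all names are illustrative.

import numpy as np

def project_spectrahedron(W):
    """Euclidean projection of a symmetric matrix onto {M : M >= 0, Tr(M) = 1}:
    keep the eigenvectors and project the eigenvalues onto the probability simplex."""
    vals, vecs = np.linalg.eigh(W)
    vals, vecs = vals[::-1], vecs[:, ::-1]            # sort eigenvalues descending
    css = np.cumsum(vals)
    ks = np.arange(1, len(vals) + 1)
    k = ks[vals - (css - 1.0) / ks > 0][-1]           # simplex-projection threshold index
    theta = (css[k - 1] - 1.0) / k
    new_vals = np.maximum(vals - theta, 0.0)
    return (vecs * new_vals) @ vecs.T, new_vals

rng = np.random.default_rng(0)
d, eta = 20, 0.05
A = rng.normal(size=(d, d))
X = A @ A.T                                           # a PSD matrix playing the role of X_t
w = np.linalg.eigh(X)[1][:, -1]                       # unit vector aligned with X's top eigenvector
lam = np.sort(np.linalg.eigvalsh(X))[::-1]
print("Lemma 2 condition holds:", w @ X @ w >= 0.5 * (lam[0] + lam[1]))
_, projected_vals = project_spectrahedron(np.outer(w, w) + eta * X)
print("rank of the projection:", int(np.sum(projected_vals > 1e-9)))  # expect 1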
Lemma 3. Suppose that on some iteration t of Algorithm 2 the iterate ŵ_t is sufficiently correlated with x. Then the projection of W_{t+1} := ŵ_tŵ_t^⊤ + ηX_t onto S is the rank-one matrix w_{t+1}w_{t+1}^⊤.
Proof. Denote W_{t+1} = ŵ_tŵ_t^⊤ + ηX_t. Using Lemma 2, it suffices to verify the condition ŵ_t^⊤X_tŵ_t ≥ (λ_1(X_t) + λ_2(X_t))/2. One side is bounded directly, and the other via Ky Fan's eigenvalue inequality (step (a)); combining Eq. (4), (5), (6) yields the required sufficient condition.
Lemma 4. Suppose that on some iteration t of Algorithm 2 it holds that (x^⊤ŵ_t)^2 ≥ 1/2. Then, for any learning rate η > 0, the correlation (x^⊤w_{t+1})^2 admits a lower bound in terms of the spectrum of W_{t+1} (this bound is invoked in the proof of Lemma 5).
Proof. Fix some iteration t. We introduce the short notation w = ŵ_t, w′ = w_{t+1}, and, for all i ≥ 2, y_i is the eigenvector of W_{t+1} associated with eigenvalue λ_i.
The proof proceeds by expanding (x^⊤w′)^2, rearranging, and applying Ky Fan's inequality together with the assumption that (x^⊤w)^2 ≥ 1/2; a second application of Ky Fan's inequality (step (a)) and plugging in both of the resulting inequalities gives the claimed bound.
Lemma 5. Suppose that, when applying Algorithm 2, the learning rate η, the approximation parameter γ, and the initialization ŵ_1 satisfy suitable conditions (these are the conditions referred to in the proof of Theorem 1). Then, for all t ∈ [T], w_{t+1}w_{t+1}^⊤ = Π_S[W_{t+1}].
Proof. In light of Lemma 3, it suffices to show that on each iteration t the correlation condition of Lemma 3 holds.
We prove this inequality indeed holds for all t ∈ [T ] via induction.
Note that for t = 1, this clearly holds by our assumption on ŵ_1. Suppose now the assumption holds for some t ≥ 1. In the following we let λ_i denote the i-th largest eigenvalue of the matrix W_{t+1} := ŵ_tŵ_t^⊤ + ηX_t. Note that under the induction hypothesis and Assumption 1, it in particular holds that (ŵ_t^⊤x)^2 ≥ 1/2, and hence we can invoke Lemma 4.
We consider two cases.
In the first case, using Lemma 4 we obtain the desired bound, where (a) follows since, under the induction hypothesis, we in particular have that w_{t+1}w_{t+1}^⊤ = Π_S[W_{t+1}] (see Lemma 3), which in turn implies that λ_1 ≥ 1 + λ_2 (see the proof of Lemma 2).
Moreover, a direct calculation shows that for any γ ≤ ε/(4λ_1(Q)) the claim indeed holds in the first case. In the second case, an application of Lemma 4 gives a bound in which (a) follows from our assumption on (ŵ_t^⊤x)^2 in this case, and (b) follows since Assumption 1 implies that max{δ(Q), V^2} ≤ λ_1(Q).
Thus, for any sufficiently small γ, the desired bound holds. Moreover, as before, we have a bound in which (a) follows since Assumption 1 implies that δ(Q)^2 − V^4 − V^2 λ_1(Q) ≥ 0, and in which the last inequality follows from our assumption that η ≤ 1/(ℓ(R+V)^2). Thus, for any γ satisfying the resulting requirement (which in particular holds for γ ≤ 9ηℓε/2), the claim holds, as needed.
Lemma 6 (Convergence of Algorithm 2). Consider an application of Algorithm 2 to a sequence of data which follows Assumption 1, and suppose that all conditions stated in Lemma 5 hold. Then the regret of Algorithm 2 is bounded in terms of η, γ, ℓ, and the data parameters, and for the choices made in the proof of Theorem 1 this bound is Õ(√N).
Proof. By an application of Lemma 5, it holds for all t ∈ [T] that w_{t+1}w_{t+1}^⊤ = Π_S[W_{t+1}]. Thus, using standard arguments, we obtain for all t ∈ [T] a per-round bound in which (a) follows since for any two unit vectors y, z it holds that ‖yy^⊤ − zz^⊤‖_F ≤ √2. Combining both of the above bounds and summing over all iterations, we obtain the regret bound; finally, note that under Assumption 1, the data-dependent terms are bounded for all t ∈ [T].
Convergence of Algorithm 1
Lemma 7. Consider some iteration t of Algorithm 1, and let w_{t+1} denote the leading eigenvector of the matrix W_{t+1} := ŵ_tŵ_t^⊤ + ηX_t. Then the iterate ŵ_{t+1} produced by Algorithm 1 approximates w_{t+1}.
Proof. Let us denote by y_2, . . . , y_d the (d−1) non-leading eigenvectors of the matrix W_{t+1}. Since both w_{t+1} and ŵ_{t+1} are unit vectors, the distance between them is controlled by their inner product. Note that by the update rule of Algorithm 1, the vector ŵ_{t+1} is the result of applying a single iteration of the Power Method, initialized with the vector ŵ_t, to the matrix W_{t+1}. Using standard arguments (see for instance Eq. (18) in [5]), and since ŵ_t is a unit vector, we can bound the approximation error of this single power iteration, where the first inequality follows from Weyl's inequality for the eigenvalues. Plugging into Eq. (9) and using the Davis-Kahan sin θ theorem (see for instance Theorem 2 in [7]), then plugging back into Eq. (10) and using the fact that η ≤ 1/(√2‖X_t‖) (see the bound on X_t in Eq. (8)), we can conclude the claim, where (a) and (b) follow from our assumption on η and the bound (8).
Lemma 8 (Matrix Hoeffding). Under the conditions of Assumption 1, it holds for all t ∈ [T] and for all ε > 0 that the deviation (1/ℓ)‖D_t‖ is at most ε except with exponentially small probability.
Proof. By a straightforward application of the Matrix Hoeffding inequality (see for instance [19]), we obtain such a tail bound for any fixed t ∈ [T]. Thus, the lemma follows from applying both of the above bounds with parameter ε/2 and noting that V^2 ≤ R^2.
We can now finally prove Theorem 1.
Proof. The proof follows from a straightforward application of the tools we have developed thus far. We assume for simplicity that N = T · ℓ for our choice of ℓ. Note that this is without loss of generality, since the remainder (N − ℓ · ⌊N/ℓ⌋) affects the bound in the theorem only via lower-order terms.
Let us define ε to be the quantity on the RHS of Eq. (7). Then, for a certain ℓ = O(R^4 ε^{-2} log(dT/p)), we have by an application of Lemma 8 that, with probability at least 1 − p, the deviation bound on (1/ℓ)D_t holds for all t ∈ [T] (here η is the chosen learning rate). Note that for a large enough N, all parameters ε, η, γ, ŵ_1 satisfy the conditions of Lemma 5 with probability at least 1 − p, and thus, by invoking Lemma 6, we obtain with probability at least 1 − p the stated regret bound, where (a) follows from plugging in the values of η and γ, and (b) follows since N = T · ℓ. The theorem now follows from plugging in the bound on ℓ.
Proof of Lemma 1 ("warm-start")
Proof. Let ŵ_1 be the leading eigenvector of the normalized empirical covariance X̄ of the sample, and use the notation Δ = X̄ − E[X̄]. Via the Davis-Kahan sin θ theorem (see for instance Theorem 2 in [7]), and using the short notation δ = δ(Q), we can bound the angle between ŵ_1 and x in terms of ‖Δ‖. Now, using the short notation λ_1 = λ_1(Q), the warm-start condition in Theorem 1 boils down to an inequality involving V^2. Solving this inequality for V^2 yields a solution interval; in particular, we obtain a feasible sub-interval which is equivalent to the requirement δ(Q) ≥ (16λ_1(Q)V^4)^{1/3}. We conclude the proof with the simple observation that, using a standard Matrix Hoeffding concentration bound (see for instance Lemma 8 in the sequel), it suffices to take a sample of size n that scales as (R+V)^4 log(d/p), up to a factor depending on λ_1, for the bound in (11) to hold with probability at least 1 − p.
Experiments
We test the following algorithms: Algorithm 2 with block size ℓ = 1, where ŵ_{t+1} is computed via a rank-one SVD (R1-OGA); a similar algorithm which uses a non-unit block size ℓ > 1 (BR1-OGA); the non-convex online gradient ascent, Algorithm 1, with unit block size ℓ = 1 (Nonconvex-OGA); and the convex online gradient ascent (equivalent to Algorithm 1, but using accurate Euclidean projections onto the spectrahedron) with unit block size (Conv-OGA). Since computing the exact projection for Conv-OGA via a full SVD is highly time consuming, we approximate it by extracting only the five leading components. Finally, we record the regret of the initial "warm-start" vector ŵ_1 (BaseVec), which serves as the initialization for all algorithms. For all datasets we plot, for each iteration t, the average regret against the leading eigenvector in hindsight (w.r.t. all data) up to time t.
We consider the following three datasets. Synthetic: a random dataset is constructed by generating Gaussian zero-mean data with a random covariance matrix Q with eigenvalues λ_i = 15 · 0.3^{i−1} for all i ∈ [d], and perturbing them using independent Gaussian zero-mean noise with a random covariance matrix V with eigenvalues μ_i = 3 · 0.3^{i−1} for all i ∈ [d], where we use d = 100. We set the number of data points to N = 10000, and we compute the initialization ŵ_1 for all algorithms by computing the leading eigenvector of a sample of size 100 (i.e., 1% of N) based on samples from the covariance Q only. For the algorithm BR1-OGA we set ℓ = 10. We average the results of 30 i.i.d. experiments. MNIST: we use the training set of the MNIST handwritten digit recognition dataset [13], which contains 60000 28x28 images, which we split into N = 59400 images for testing, while 600 images (i.e., 1% of data) are used to compute the initialization ŵ_1. For the algorithm BR1-OGA we set ℓ = 5. CIFAR10: we use the CIFAR10 tiny image dataset [12], which contains 50000 32x32 images in RGB format. We convert the images to grayscale and use N = 49900 images for testing, while 100 images (i.e., 0.2% of data) are used to compute the initialization. For BR1-OGA we set ℓ = 5.
The results for all three datasets are given in Figure 1. It can be seen that all algorithms improve significantly over the "warm-start" base vector. We also see that all algorithms attain low average regret, and in particular are competitive with Conv-OGA, which follows a convex approach (up to the approximation of the projection via a thin SVD).
To further examine the applicability of our theoretical approach, for all datasets we recorded, for algorithm BR1-OGA, the fraction of projection errors, i.e., the percentage of iterations t on which the projection of the matrix W_{t+1} = ŵ_tŵ_t^⊤ + ηX_t onto the spectrahedron S is not a rank-one matrix. The results are 6.24%, 0.26%, and 0%, for synthetic, MNIST and CIFAR10, respectively. These low error rates indeed support our theoretical analysis, which hinges on showing that under our data model (recall Assumption 1) and given a "warm-start" initialization, the projections of the matrices W_t in Algorithm 2 are always rank-one.
Discussion
In this paper we took a step forward towards understanding the ability of highly-efficient non-convex online algorithms to minimize regret in adversarial online learning settings. We focused on the particular problem of online principal component analysis with k = 1, and showed that under a "semi-adversarial" model, in which the data follows a stochastic distribution with adversarial perturbations, and given a "warm-start" initialization, the natural nonconvex online gradient ascent indeed guarantees sublinear regret. Our theory is further supported by empirical evidence. We hope this work will motivate further research on online nonconvex optimization with global convergence guarantees. Future directions of interest may include extending our analysis to an even wider regime of parameters and to extracting k principal components at once. It would also be interesting to determine whether, in the standard adversarial setting, online nonconvex gradient ascent achieves low regret, or, on the other hand, whether there exist instances on which it cannot guarantee non-trivial regret. Finally, moving beyond PCA, other online learning problems of interest that may benefit from a nonconvex approach include online matrix completion [9,11], and of course, provable online learning of deep networks. | 2018-09-27T12:35:17.000Z | 2018-09-27T00:00:00.000 | {
"year": 2018,
"sha1": "5ae5d5505b9af94163a12229d4ecd60ef0e416f9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0edcbd28e190396134855efced8981bbdec67499",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
14649155 | pes2o/s2orc | v3-fos-license | The Neutral ISM in Nearby Luminous Compact Blue Galaxies
We observed 20 nearby Luminous Compact Blue Galaxies (LCBGs) in HI and CO(J=2-1) with the GBT and JCMT. These ~L^star galaxies are blue, high surface brightness, starbursting, high metallicity galaxies with an underlying older stellar population. They are common at z~1, but rare in the local Universe. It has been proposed that intermediate redshift LCBGs may be the progenitors of local dwarf ellipticals or low luminosity spirals, or that they may be more massive disks forming from the center outward to become L^star galaxies. To discriminate among various possible evolutionary scenarios, we have measured the dynamical masses and gas depletion time scales of this sample of nearby LCBGs. We find that local LCBGs span a wide range of dynamical masses, from 4 x 10^9 to 1 x 10^11 M_solar (measured within R_25). Molecular gas in local LCBGs is depleted quite quickly, in 30 to 200 million years. The molecular plus atomic gas is depleted in 30 million to 10 billion years; however, ~80% of the local LCBGs deplete their gas in less than 5 billion years. As LCBGs are heterogeneous in both dynamical mass and gas depletion time scales, they are not likely to evolve into one homogeneous galaxy class.
Introduction
Luminous Compact Blue Galaxies (LCBGs) are ∼L⋆, blue, high surface brightness, high metallicity, vigorously starbursting galaxies with an underlying older stellar population [1,2]. They include a variety of morphological types, such as spiral, polar-ring, interacting/merging and peculiar galaxies. They have optical diameters of a few kpc, but are more luminous and more metal rich than the Blue Compact Dwarf Galaxies widely studied in the nearby Universe, e.g. [3,4].
When Jangren et al. [5] compared intermediate redshift LCBGs with local normal galaxies, they found that LCBGs can be isolated quantitatively on the basis of color, surface brightness, image concentration and asymmetry, with color and surface brightness giving the best leverage for separating LCBGs from normal galaxies. Specifically, LCBGs have B−V < 0.6, SBe < 21 mag arcsec^−2, and M_B < −18.5, assuming H_0 = 70 km s^−1 Mpc^−1. LCBGs are quite common at intermediate redshifts, but by z∼0 their number density has decreased by a factor of ten. At z∼1, they have a total star formation rate density equal to that of grand-design spirals at that time, but today they contribute negligibly to the star formation rate density of the Universe [6]. Therefore, LCBGs must undergo dramatic evolution. From studies at intermediate redshift, Koo et al. [7] and Guzmán et al. [1] suggest that some LCBGs may be the progenitors of local low-mass dwarf elliptical galaxies. Alternatively, Phillips et al. [8] and Hammer et al. [9] suggest that others may be more massive disks forming from the center outward to become local L⋆ galaxies.
In order to discriminate between the possible evolutionary scenarios it is essential to measure the dynamical masses of the galaxies: Are they as massive as implied by their high luminosities? It is also essential to measure their gas content for future star formation in order to constrain the amount of fading of their stellar populations. We have undertaken a survey of local LCBGs in H I and CO to address these questions. The H I provides a measure of the dynamical mass, while both H I and CO provide measures of the gas content: H I for long-term star formation and CO for the current burst of star formation.
Observations
The current sensitivity of telescopes limits us to detecting CO in LCBGs within ∼70 Mpc. We used the first data release of the Sloan Digital Sky Survey to select our sample of nearby LCBGs, using Jangren et al.'s [5] selection criteria. Out of the ∼one million galaxies in the first data release, only ∼100 are LCBGs, and only 16 are within 70 Mpc. To these 16 local LCBGs, we added four more from the literature, for a local LCBG sample of 20 galaxies.
Dynamical Masses
We find that local LCBGs span a wide range of dynamical masses, from 4 × 10^9 to 1 × 10^11 M⊙ (measured within R_25). Figure 1 compares the dynamical masses of local LCBGs with intermediate redshift LCBGs and local spiral galaxies of all Hubble types. Many local LCBGs are ∼ten times less massive than local galaxies of similar luminosities, as found for LCBGs at intermediate redshifts [8]. However, others are as massive as local galaxies of similar luminosities.
Gas Depletion Time Scales
We find that our 13 LCBGs detected in CO(J=2−1) have molecular gas masses ranging from 5 × 10^7 to 2 × 10^9 M⊙ (assuming a Galactic CO-to-H_2 conversion factor of 1.8 × 10^20 cm^−2 (K km s^−1)^−1 [10]). Note that these are most likely underestimates of the molecular gas masses since we are using CO(J=2−1). The fraction of molecular to atomic gas mass is small, ranging from 0.03 to 0.3, similar to local late-type spiral galaxies [11].
We estimated star formation rates from available IRAS data, using 60 and 100 µm fluxes as outlined in Kewley et al. [12]. The star formation rates for local LCBGs range from ∼1 to 15 M⊙ yr^−1. For comparison, local spirals of all types have star formation rates of ∼2 M⊙ yr^−1 [11], so these LCBGs do not have unusually high star formation rates. However, they do have very high specific star formation rates, the ratio of star formation rate to dynamical mass (within R_e). As seen in Figure 2, local LCBGs have specific star formation rates from ∼3 to 40 times those of local normal spirals [11]. The specific star formation rates of local LCBGs are in the same range as local H II (starbursting) galaxies [13].
We find that the molecular gas in local LCBGs is depleted quite quickly, in 30 to 200 million years. The molecular plus atomic gas is depleted in 30 million to 10 billion years; however, ∼80% of the local LCBGs deplete their gas in less than 5 billion years. Therefore, most LCBGs will not be able to sustain their current rates of star formation and will eventually fade.
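As a quick sanity check on how such time scales follow from the quoted quantities, the depletion time is simply the gas mass divided by the star formation rate, τ_dep = M_gas / SFR. For example, taking illustrative values from within the ranges quoted above (not entries from the paper's own tables), a molecular gas mass of 5 × 10^7 M⊙ consumed at 1.5 M⊙ yr^−1 gives τ_dep ≈ 3 × 10^7 yr, i.e., roughly 30 million years, consistent with the short end of the molecular depletion time scales reported here.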
Conclusions
Both in dynamical masses and gas depletion time scales, we find that local LCBGs have a wide range of characteristics and are unlikely to evolve into one galaxy class. They have dynamical masses consistent with a range of galaxy types, such as dwarf ellipticals, Magellanic (low-luminosity) spirals and normal spirals. The majority have atomic plus molecular gas depletion time scales less than five billion years; such galaxies may have masses, sizes and faded luminosities and surface brightnesses consistent with the brightest local dwarf ellipticals. A few local LCBGs have longer gas depletion time scales, approaching a Hubble time. These may fade very little, becoming spirals or Magellanic irregulars.
Fig. 1. Dynamical masses of the local LCBG sample; intermediate-redshift LCBGs [8] and local spiral galaxies [11] are indicated. Note that "Sm" indicates Magellanic or low-luminosity spirals.
Fig. 2. The specific star formation rate (ratio of star formation rate to dynamical mass within R_e) for the local sample of LCBGs. Their specific star formation rates are much higher than all types of local spirals (S); they are similar to local H II galaxies. | 2014-10-01T00:00:00.000Z | 2003-10-30T00:00:00.000 | {
"year": 2003,
"sha1": "d4bda0b9047afbd2bcac93ccfd99ad2999f97be0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/astro-ph/0310857",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d4bda0b9047afbd2bcac93ccfd99ad2999f97be0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
6332933 | pes2o/s2orc | v3-fos-license | A plant‐produced influenza subunit vaccine protects ferrets against virus challenge
Background Influenza A viruses are of major concern for public health, causing worldwide epidemics associated with high morbidity and mortality. Vaccines are critical for protection against influenza, but given the recent emergence of new strains with pandemic potential, and some limitations of the current production systems, there is a need for new approaches for vaccine development. Objective To demonstrate the immunogenicity and protective efficacy of plant‐produced influenza antigens. Method We engineered, using influenza A/Wyoming/3/03 (H3N2) as a model virus, the stem and globular domains of hemagglutinin (HA) produced in plants as fusions to a carrier protein and used purified antigens with and without adjuvant for ferret immunization. Results These plant‐produced antigens were highly immunogenic and conferred complete protection against infection in the ferret challenge model. The addition of plant‐produced neuraminidase was shown to enhance the immune response in ferrets. Conclusions Plants can be used as a production vehicle for vaccine development against influenza. Domains of HA can generate protective immune responses in ferrets.
Introduction
Influenza is a highly contagious disease that typically results in fever and respiratory symptoms with frequent complications that can lead to hospitalization and death, particularly in young children, adults over 65, and individuals with certain chronic underlying health conditions. 1 Annually, in the United States, there are some 30 million cases, 200 000 hospitalizations and 36 000 deaths from influenza, with an economic impact of $10 billion. 2 Outbreaks of influenza associated with type A virus subtypes H3N2 and H1N1 and type B virus occur almost annually in many countries, 3 and are caused by emerging new strains resulting from 'antigenic drift' in the envelope proteins hemagglutinin (HA) and neuraminidase (NA). 4 Drifted strains can evade the immune responses raised to previous infections or vaccinations and necessitate the almost annual revision of vaccine composition. 5 In addition, the periodic emergence of radically different virus strains possessing novel HA and NA antigens resulting from 'antigenic shift', and for which there is no prior immunity, 6 can lead to pandemics, as in 1918 caused by an H1N1 virus, when there were up to 50 million fatalities worldwide. 5 Currently, highly pathogenic H5N1 strains of avian origin are of particular public health concern and are panzootic among domestic and wild birds in Asia, Europe, and Africa. 7 Since 1997 these strains have shown the capacity to be transmitted to humans who have been in contact with infected poultry. So far, 353 cases of human H5N1 infection have been reported worldwide, with over 60% mortality. 8 Our major defense against infection with influenza viruses is immunization of individuals with an annually updated vaccine 9 that is currently produced in chicken eggs, with a global annual capacity of about 400 million doses, 10 a scale of production insufficient to combat a pandemic. Furthermore, at least 6 months is required between the identification of new virus strains to be included in the vaccine formulation and the manufacture of bulk quantities. 11 Uncertainties over the robustness of egg-based vaccine production are intensified even further by the emergence of H5N1 strains that are highly virulent to both chickens and eggs. 7 Thus, there is a need to develop alternative vaccine production systems capable of rapid turnaround and high capacity. Recombinant subunit vaccines should circumvent some of the concerns regarding our current dependence on egg-based production.
Antibodies to HA and NA play a key role in the immunogenicity and protective efficacy of influenza vaccines. 12 HA binds to the target cell receptor and consists of a stem domain (SD), which is relatively well conserved between strains of virus within an HA subtype, and a more variable globular domain (GD), which contains the majority of antigenic sites and epitopes that generate virus-neutralizing antibodies. 13 NA is present at around 20% the molar equivalent of HA, 14 and has been shown to contribute to immunity. 15 Thus, HA and NA are prime candidates for influenza subunit vaccine development.
Here we report the production and evaluation of domains of HA (SD and GD) of influenza A ⁄ Wyoming ⁄ 3 ⁄ 03 (H3N2) virus 16 expressed as fusions to the engineered thermostable enzyme, lichenase (LicKM), 17 and of NA (amino acids 38-469) from the same virus. All vaccine targets were produced using a plant-based transient expression system. 17 LicKM is derived from Clostridium thermocellum b-1,3-1,4-glucanase, which has previously been used as a carrier molecule for reporter gene expression in recombinant prokaryotic and eukaryotic systems. 18 When tested in ferrets, vaccine candidates containing these engineered plant-produced influenza HA and NA antigens were highly immunogenic, and were protective against infection following challenge with homologous H3N2 virus. This plant-based production system offers safety and capacity advantages, which, taken together with the protective efficacy data reported here, demonstrate the promise of this approach for subunit influenza vaccine development.
Production of influenza antigens in plants
To evaluate the feasibility of our approach for subunit influenza vaccine development, target antigens were selected from the previously epidemic A/Wyoming/3/03 (H3N2) virus 16 and engineered as fusions to LicKM. 17 The LicKM carrier molecule is based on the thermostable enzyme β-1,3-1,4-glucanase (LicB) of C. thermocellum. The original sequences of HA and NA were obtained from the National Institute for Biological Standards and Control. The sequences were based on egg-produced virus. HA and NA nucleotide sequences were optimized for expression in plants and synthesized by GENEART (Regensburg, Germany). During this optimization, no amino acid changes were introduced. Nucleotide sequences encoding amino acids 17-67 plus 294-532 of HA, which together comprise the SD, 19 were inserted into LicKM to give LicKM-SD, and nucleotide sequences encoding amino acids 68-293 of HA, comprising the GD, 19 were similarly inserted to give LicKM-GD. Sequence encoding the signal peptide of the Nicotiana tabacum pathogenesis-related protein PR1a 20 was included at the N-terminus of the fusions. Also, sequences encoding the poly-histidine affinity purification tag (6xHis) and the endoplasmic reticulum retention signal (KDEL) were included at the C-terminus. The LicKM fusions were introduced into the launch vector pBID4, 17 allowing for viral genome transcription from the cauliflower mosaic virus 35S promoter, followed by viral replication and target sequence expression from the tobacco mosaic virus coat protein subgenomic mRNA promoter. Recombinant viral vectors were introduced into Agrobacterium tumefaciens strain GV3101 by electroporation. 17 The sequence encoding amino acids 38-469 of NA from the same influenza virus strain was introduced into pBID4, without prior fusion to LicKM. As above, the signal peptide of PR1a was included at the N-terminus and 6xHis plus KDEL were included at the C-terminus. Suspensions of recombinant A. tumefaciens carrying launch vectors were introduced into Nicotiana benthamiana plants, a wild variety of tobacco, by inoculating leaves 6 weeks after sowing. Plants were grown in potting soil under 12 h light/12 h dark conditions at 21°C. Leaves were harvested 4 days after inoculation with LicKM-SD and LicKM-GD and 7 days after inoculation with NA to assure optimum accumulation of each target. Protein extracts were prepared by grinding leaves in 50 mM sodium phosphate buffer pH 7.0, 100 mM sodium chloride, 10 mM sodium diethyldithiocarbamate, and 10 mM β-mercaptoethanol, and recombinant antigens were enriched by ammonium sulfate precipitation followed by immobilized metal affinity chromatography and anion exchange chromatography, with dialysis after each step, to at least 80% purity. The purity of the final product was determined on a protein basis using a Coomassie gel. The average yield of LicKM-SD and LicKM-GD was estimated to be 100 mg/kg of fresh plant tissue, whereas the average yield of NA was estimated to be 300 mg/kg of fresh plant tissue.
In vitro characterization of plant-produced influenza antigens
The reactions of plant-produced antigens with reference antisera were assessed by ELISA analysis and immunoblotting. For ELISA, 96-well plates were coated with LicKM-SD, LicKM-GD or NA purified from plants, or with inactivated influenza A ⁄ Wyoming ⁄ 3 ⁄ 03 virus, and were incubated with sheep antiserum raised against purified HA of A ⁄ Wyoming ⁄ 3 ⁄ 03 virus, sheep antiserum raised against NIBRG-18 (H7N2) reassorted virus or sheep antiserum raised against NIBRG-17 (H7N1) reassorted virus. For immunoblot analysis, 100 ng of LicKM-SD and LicKM-GD, and inactivated influenza A ⁄ Wyoming ⁄ 3 ⁄ 03 corresponding to 100 ng of HA, were separated by SDS-PAGE, transferred to polyvinylidene fluoride membrane, and incubated with rabbit antiserum raised against LicKM or sheep antiserum raised against purified HA from A ⁄ Wyoming ⁄ 3 ⁄ 03 virus. NA activity was assayed according to the standard WHO protocol WHO ⁄ CDS ⁄ CSR ⁄ NCS 2002.5 Rev.1 21 The inhibition of NA activity was assessed by preincubating plant-produced NA with sheep antiserum raised against homologous [NIBRG-18 (H7N2)] or heterologous [NIBRG-17 (H7N1)] reassorted virus prior to conducting the NA assay.
Assessment of immunogenicity and efficacy of plant-produced antigens
The ferret challenge study was carried out under UK Home Office license as required by the UK Animal (Scientific Procedures) Act, 1986. Female, outbred fitch or albino ferrets, 4.5 months old, and weighing from 441 to 629 g at the initiation of the study, were maintained on high-density ferret LabDiet 5L15 (IPS Product Supplies, London, UK). The study consisted of five groups of eight animals per group. Experimental groups received VC1 + A, VC2, or VC2 + A (Table 1) by subcutaneous injection on days 0, 14, and 28. The negative control (NC) group received alum adjuvant alone under the same dosing regimen. The positive control (PC) group were infected intranasally with 0.5 ml of influenza A/Wyoming/3/03 virus at a concentration of 10^5.5 TCID50 per ml on day 0 only. Following immunization, animals were monitored daily for lesions or irritation, mobility, erythema, and general activity. Animals were challenged intranasally with 0.5 ml of influenza A/Wyoming/3/03 virus at a concentration of 10^5.5 TCID50 per ml 10 days after the final dose. Blood samples were collected from superficial tail veins at the time of vaccination and challenge and 4 days post-challenge. Nasal washes were collected on each of the 4 days post-challenge. Serum HI titers were determined for homologous influenza A/Wyoming/3/03 virus and heterologous influenza A/Sydney/5/97 (H3N2), A/California/7/04 (H3N2), and A/New Caledonia/20/99 (H1N1) viruses. Hemagglutination was visually assessed following incubation with turkey red blood cells. The microneutralization assay was carried out as described by Rowe et al., 22 except that serum samples from immunized ferrets were treated with receptor-destroying enzyme (Denka Seiken Co Ltd, Tokyo, Japan) prior to incubation with 2 × 10^3 TCID50 per ml of H3N2 influenza A/Wyoming/03/03 virus. Viral shedding was determined using a Madin-Darby Canine Kidney (MDCK) cell titration on the nasal wash samples. The endpoint of the MDCK cell titration assay was determined by performing a hemagglutination assay with turkey red blood cells. The Karber calculation was used to determine log10 TCID50 per ml for each sample. The inflammatory cell response was assessed in post-challenge nasal washes by staining with Trypan blue and counting leukocytes. Post-challenge, animals were kept under close surveillance for 4 days for clinical evidence of influenza infection. Animals were monitored for body temperature increase and weight loss. They were also assessed for clinical signs indicative of respiratory symptoms, comprising singular sneezing or nasal rattling (1 point) or excessive sneezing (2 points), purulent discharge from the external nares (1 point), decreased alertness, spontaneous activity or play (1 point), or no activity (2 points).
Results and discussion
In vitro characterization of plant-produced influenza antigens To evaluate the feasibility of our approach for producing immunoprotective influenza antigens we employed A. tumefaciens containing 'launch vectors' 17 engineered to express LicKM-SD, LicKM-GD, or NA. These were separately inoculated into N. benthamiana plants. Sequences encoding NA were not fused to LicKM so as to avoid potential interference with NA tetrameric structure formation and enzymatic activity that could be important for generating target-specific immune responses. Four to seven days after inoculation, recombinant proteins were recovered and characterized. Plant-produced LicKM-SD and LicKM-GD were detected by reference polyclonal sheep serum raised against HA purified from influenza A ⁄ Wyoming ⁄ 3 ⁄ 03 virus in an ELISA ( Figure 1A) and under denaturing conditions in an immunoblot ( Figure 1B). In both assays LicKM-SD was more strongly recognized by the reference serum than LicKM-GD, although polyclonal rabbit serum raised against LicKM recognized each fusion to a similar extent ( Figure 1B). This observation will be further studied. Plant-produced NA was also recognized by reference polyclonal sheep serum raised against reassortant H7N2 virus ( Figure 1C), and showed enzymatic activity that was inhibited by reference serum in a strain-specific manner ( Figure 1D).
Immunogenicity and protective efficacy in ferret challenge model
The ability of candidate influenza vaccines to elicit immune responses and confer protection in animal models is key to pre-clinical evaluation. 12 We assessed the immunogenicity and protective efficacy of the plant-produced antigens in ferrets, the accepted and well-validated animal model for influenza. 19,23 It should be emphasized, however, that there is no prior report of immunizing animals with plant-produced recombinant influenza antigens. Therefore, we adopted an immunization regimen and route of administration that would allow us to assess whether the plantproduced influenza antigens are immunogenic. The dose of antigen chosen for this study was based on prior reports in which plant-produced vaccine candidates induced protective immunity against relevant pathogens. 24,25 In the present study, three groups of eight ferrets were immunized subcutaneously by priming and boosting twice with candidate vaccine formulations (VC1 + A, VC2, and VC2 + A) containing combinations of plant-produced influenza antigens (Table 1). NC animals received alum adjuvant alone, and PC animals were given a single intranasal dose of live influenza A ⁄ Wyoming ⁄ 3 ⁄ 03 virus. No adverse effects were noted in any animals receiving plant-produced vaccine candidates. Hemagglutination-inhibition (HI) activity of sera from immunized animals is regarded as a strong correlate of protection, 12,25 and therefore, ferret sera were assessed for HI activity to A ⁄ Wyoming ⁄ 3 ⁄ 03 virus. No HI activity was observed in pre-immune sera from any animal, or in sera from NC animals ( Figure 2A). However, sera from all ferrets vaccinated with VC2 + A exhibited extremely high HI titers in the range of 1:320 to 1:2560 (mean titer 1273) following the first dose (Figure 2A). These titers are much higher than 1:40, regarded as the minimum HI titer consistent with protection in humans. 12,26 While more experimentation is needed, the data suggest that a single dose of VC2 + A could provide protection against virus challenge. Fewer responders and lower HI titers following the first dose were observed among animals that received VC1 + A (Figure 2A), a vaccine candidate lacking NA. This suggests Interestingly, five of the eight animals that received VC2 gave HI titers in the range of 1:160 to 1:1280 following the first dose, whereas commercial inactivated influenza vaccines in the absence of adjuvant typically induce very low HI titers. [27][28][29] Following the second dose of VC1 + A, VC2, or VC2 + A, sera from all ferrets had HI titers in the range of 1:640 to 1:2560, and these remained similarly high after the third dose ( Figure 2A). Again, sera from all of these animals had HI titers well in excess of 1:40. It is of interest that HI titers in sera from ferrets receiving two or three doses of any of the plant-produced vaccine candidates were equivalent to those in sera from intranasally infected PC animals (Figure 2A). Sera from ferrets immunized with all three vaccine candidates were also assessed for the presence of A ⁄ Wyoming ⁄ 3 ⁄ 03 virus neutralizing (VN) antibodies using a micro-neutralization assay. The VN titers correlated well with observed HI titers for each group (Figure 2B), and also for individual animals within the groups.
To assess the breadth of the HI response induced by the three vaccine candidates, sera from ferrets immunized with VC1 + A, VC2, or VC2 + A were tested against the heterologous H3N2 virus strains A ⁄ Sydney ⁄ 5 ⁄ 97 and A ⁄ California ⁄ 7 ⁄ 04. Immunization with all three candidates generated cross-reactive serum HI titers well in excess of 1:40 (Table 2), although these titers were two-to 32-fold lower than HI titers observed against homologous A ⁄ Wyoming ⁄ 3 ⁄ 03. These results suggest that the plant-produced vaccine candidates could provide some protective immunity against heterologous H3N2 strains. No HI activity (titers £10) was observed against influenza A ⁄ New Caledonia ⁄ 20 ⁄ 99 (H1N1) ( Table 2), indicating the H3 subtype specificity of the HI antibody responses generated by these vaccine candidates.
The protective efficacy of the plant-produced HA and NA antigens was assessed in the immunized ferrets by intranasal challenge with live egg-grown influenza A ⁄ Wyoming ⁄ 3 ⁄ 03 virus. The extent of viral infection following challenge was determined for each animal by monitoring the titer of virus shed in nasal washes for 4 days post-challenge. Clear evidence of protection was observed for animals receiving any of the candidate vaccine formulations. Only one animal that received any of the three candidate vaccine formulations showed detectable virus shedding, and even then at less than 10 2 TCID 50 . By contrast, animals in the PC group showed a low level of virus shedding, in the range of 10 2 to 10 3 TCID 50 ( Figure 3A) and animals in the NC group shed virus in the range of 10 6 to 10 7 TCID 50 ( Figure 3A). Following the challenge, animals were also observed for weight loss, body temperature, respiratory symptoms, and leukocyte count in nasal washes of ferrets. Weight loss, an indicator of the severity of influenza infections in ferrets, was greatly reduced in ferrets that received VC1 + A, VC2 + A, or the homologous virus, compared with those in the NC group ( Figure 3B). The reduction in weight loss for animals that received VC2 was less striking ( Figure 3B). The febrile response following challenge was also monitored as an indicator of infection. The rise in body temperature in ferrets immunized with VC2 + A was less than that observed for animals in the NC group ( Figure 3C). Furthermore, the mean peak of symptom scores, an index indicating the frequency of several influenza-related symptoms following challenge, was significantly reduced in animals that received the candidate vaccine formulations compared with those in the NC group ( Figure 3D). Similarly, counts of leukocytes in nasal washes of ferrets, taken as an indicator of upper respiratory tract infection, were significantly reduced in candidate vaccine recipients compared with animals in the NC group ( Figure 3E). Overall, the challenge study clearly indicated that the plant-produced HA and NA antigens confer a high degree of protective immunity in ferrets, showing promise for vaccine development. In future studies we will elucidate the minimum protective dose for the vaccine candidates, the protective role of LicKM-SD and LicKM-GD when administered individually, and the role of NA in further facilitating immune responses. The continual emergence of new influenza strains necessitates annual updating of the vaccine. 5 Egg-based production has served us well for decades in providing effective and safe vaccines. 30 However, given current concerns over the transmission of highly pathogenic avian influenza type A H5N1 strains from poultry to humans, 8 several alternative approaches are being pursued for influenza vaccine production. Animal cell cultures are the most advanced in development. 31 Mammalian cells are being applied to produce target vaccine strains, either directly 31 or following reverse engineering 32,33 and insect cells are being utilized to produce subunit vaccine candidates expressed from baculovirus vectors. 34 In recent years plants have emerged as systems for protein expression, and are being evaluated for commercial production of vaccine antigens. Here we used a 'launch vector' that combines an agrobacterial binary plasmid and plant RNA viral sequences, 17 and allows for the production of target antigens within a week of plant inoculation. 
As the vectors are introduced into non-genetically modified plants, there is no requirement for the development of production lines. Thus, new targets can be engineered into expression vectors, produced in plants, and purified for formulation within the time frame required for the annual influenza vaccine.
Conflict of Interest
InB contracted Retroscreen to perform aspects of the work. | 2017-08-31T10:04:33.840Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "79a7c41641b48449595f81fcae11e15e75d7b010",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1750-2659.2008.00037.x",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "79a7c41641b48449595f81fcae11e15e75d7b010",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
219947239 | pes2o/s2orc | v3-fos-license | Temperature dependence of photoluminescence properties of water-soluble CdS quantum dots
We investigated the size dependence of band-edge photoluminescence (PL) dynamics for CdS quantum dots (QDs). The temperature dependence of the PL-decay profiles of CdS QDs with an average diameter of 3.7–6.0 nm was measured. The PL-decay profiles became longer as the temperature increased. Further, it was found that the temperature dependence of the PL-decay profiles depends greatly on the QD size. These experimental results can be understood by considering that the magnitude of the splitting energy between the bright-and dark-exciton states depends on the QD size and becomes larger as the QD size becomes smaller.
Introduction
The observation of an optically passive state, the so-called dark-exciton state, is a characteristic photoluminescence (PL) property of semiconductor quantum dots (QDs) [1,2]. Theoretical and experimental studies indicate that the splitting energy (ΔST) between the optically active bright- and dark-exciton states, which is less than 1 meV in the bulk crystal, increases from several meV to several tens of meV owing to the quantum confinement effect in QDs [3,4]. CdSe QDs have been extensively studied as a model material for QD studies [5,6]. Crooker et al. quantitatively explained the temperature dependence of the PL dynamics using a three-state model comprising a ground state and two excited bright- and dark-exciton states [7]. Compared to CdSe QDs, it is theoretically predicted that CdS QDs have a larger ΔST value [8,9], and the contribution of dark excitons to the PL processes is considered to be large.
Previously, we prepared water-soluble CdS QDs in which the band-edge PL is observed as the main PL band [9]. In addition, film samples containing CdS QDs with an average grain size of 4.2 nm were prepared, and the temperature dependence of the PL dynamics was measured. We found that the dark-exciton state has a large influence on the PL processes, even at room temperature (RT), because ΔST has a value of 40 meV for CdS QDs with a diameter of 4.2 nm [9]. Since the value of ΔST largely depends on the QD size, it is important to investigate the size dependence of the PL dynamics. In this study, we investigate the size dependence of the band-edge PL dynamics in CdS QDs.
Experiments
CdS QDs were prepared by injecting H2S gas into aqueous solutions containing Cd(ClO4)2 and sodium hexametaphosphate (HMP). The sizes of the CdS QDs were controlled using a size-selective photoetching technique [9]. The QD surface was modified by the addition of Cd(ClO4)2 after adjusting the pH of the solutions to the alkaline region. The sample solution was mixed with polyvinyl alcohol (PVA). For PL measurements, the 325-nm line of a He-Cd laser was used as the excitation-light source. Third-harmonic-generation light (355 nm) from a laser-diode-pumped yttrium aluminum garnet laser with a pulse duration of 20 ns and a repetition rate of 10 kHz was used as the excitation light for measuring the PL-decay profiles. Figure 1 shows the absorption and PL spectra of CdS QD solution samples with mean diameters of 3.7, 4.9, and 6.0 nm. Note that band-edge PL is clearly observed in all the samples. This makes it possible to study the PL dynamics in detail. QD-dispersed PVA film samples were prepared to investigate the PL characteristics at low temperatures as well as their temperature dependence.
Results and discussion
The temperature dependence of the PL spectra was measured. Band-edge PL was observed as the main PL band at all temperatures ranging from 10 K to RT (not shown). The PL intensity at RT is ~60% of that at 20 K. Therefore, the influence of thermal quenching is small. These results suggest that the prepared CdS QDs are suitable for detailed studies of the temperature dependence of PL dynamics. Figure 2 shows the decay profiles of band-edge PL at 10, 100, and 200 K for CdS QDs with mean diameters of 3.7, 4.9, and 6.0 nm, respectively. In this temperature range, the PL-decay profiles become longer as the temperature increases; the temperature dependence of the 3.7-nm QDs is smaller than that of the other two larger QDs. The decay profile at 200 K has a slow decay time of the order of hundreds of nanoseconds. This long decay-time component suggests a contribution by the optically passive dark-exciton state to the PL processes [9]. Usually, as the temperature increases, the non-radiative-decay rate increases, and the PL intensity and PL decay time decrease; however, the observed PL-decay profiles became longer with increasing temperature. In Ref. [9], we proposed a three-state model consisting of a ground state and two excited states (a lower-lying bound-exciton state and a higher-lying dark-exciton state), to explain the anomalous temperature dependence of PL dynamics. With increasing temperature, the thermal population of the higher-lying dark-exciton state becomes larger, and the PL-decay time becomes longer.
The observed PL-decay profiles exhibit multi-exponential decay. To quantitatively discuss the PL-decay profiles, we used a combination of a mono-exponential function and a stretched-exponential function [9]: I(t) = A1·exp(−t/τ1) + A2·exp[−(t/τ2)^β]. As discussed in Ref. [9], the decay components, τ1 and τ2, correspond to the decay times of the bright-exciton state and that characterized by the three-state model, respectively. As an example, the fitting results of the PL-decay profiles in 4.9-nm CdS QDs are shown. The temperature dependence of the decay times τ1 and τ2 obtained from the analysis of the PL-decay profiles in each sample is shown in figures 3(a)-3(c). In CdS QDs with diameters of 4.9 and 6.0 nm, τ2 becomes longer with increasing temperature up to ~180 K and becomes shorter in the temperature region above 200 K. The temperature dependence of τ2 in the temperature range up to 180 K qualitatively corresponds to the temperature dependence of the decay time characterized by the three-state model [9]. The decay time τ1 becomes longer with increasing temperature in the higher-temperature region. Conversely, for smaller CdS QDs with a diameter of 3.7 nm, both τ1 and τ2 are nearly independent of temperature.
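As a side illustration (not taken from the paper), the fitting form sketched above can be implemented directly. The amplitudes A1 and A2, the stretching exponent β, and all numerical values below are arbitrary stand-ins for a measured PL-decay trace; only the functional form follows the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, a1, tau1, a2, tau2, beta):
    """Mono-exponential plus stretched-exponential PL decay."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-(t / tau2) ** beta)

# Synthetic decay trace standing in for a measured PL profile (times in ns)
t = np.linspace(0.1, 1000.0, 400)
true_params = (1.0, 25.0, 0.6, 300.0, 0.7)   # illustrative values only
signal = decay_model(t, *true_params)
noisy = signal * (1.0 + 0.02 * np.random.default_rng(0).standard_normal(t.size))

popt, _ = curve_fit(decay_model, t, noisy,
                    p0=(1.0, 10.0, 0.5, 100.0, 0.8),
                    bounds=(0, [10, 1e3, 10, 1e4, 1.0]))
a1, tau1, a2, tau2, beta = popt
print(f"tau1 = {tau1:.1f} ns, tau2 = {tau2:.1f} ns, beta = {beta:.2f}")
```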
The value of ΔST in 6.0-nm CdS QDs is ~9 meV [10], which is smaller than the thermal energy at RT. As a result, τ1, which reflects the decay time of the bright-exciton state, becomes longer with increasing temperature in the high-temperature region owing to the thermal-energy-assisted mixing of bright- and dark-exciton states, which was not considered in smaller-sized CdS QDs having a large ΔST [9]. Because ΔST in 3.7-nm CdS QDs is ~50 meV [10], there is no influence of the mixing of bright- and dark-exciton states on them. Therefore, τ1 and τ2 are considered to be constant regardless of the temperature.
Finally, we quantitatively discuss the temperature dependence of the PL-decay time. In Ref. [11], a simple approach to evaluate the distribution of the decay time and the statistical average PL-decay time 〈τ〉 from experimentally observed stretched-exponential PL decays was presented. Figure 3(d) shows the temperature dependence of 〈τ2〉 in 4.9-nm CdS QDs. Based on the three-state model, the temperature dependence of the decay time is expressed by the following equation [9]: τ(T) = [1 + (gDx/gBx)·exp(−ΔE/kBT)] / [1/τBx + (gDx/gBx)·exp(−ΔE/kBT)/τDx] (1). Here, 1/τDx (1/τBx) denotes the radiative-decay rate, gDx (gBx) denotes the density of states of the dark-exciton (bound-exciton) state, and ΔE corresponds to the energy difference between the dark- and bound-exciton states.
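For illustration only, equation (1) above (written here assuming the standard two-level thermal-average form) can be evaluated numerically. The parameter values are those quoted for the 4.9-nm sample in the following paragraph; the thermally activated non-radiative channel discussed there is omitted from this sketch.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def tau_three_state(T, tau_bx, tau_dx, delta_e, g_ratio):
    """Thermally averaged decay time of the three-state model, equation (1).

    tau_bx, tau_dx in ns; delta_e in eV; g_ratio = g_Dx / g_Bx."""
    boltz = g_ratio * np.exp(-delta_e / (K_B * T))
    return (1.0 + boltz) / (1.0 / tau_bx + boltz / tau_dx)

# Fit parameters quoted in the text for the 4.9-nm QDs
tau_bx, tau_dx = 25.0, 1600.0   # ns
delta_e, g_ratio = 8e-3, 90.0   # 8 meV, g_Dx/g_Bx = 90

for T in (20, 60, 100, 140, 180):
    tau = tau_three_state(T, tau_bx, tau_dx, delta_e, g_ratio)
    print(f"T = {T:3d} K   <tau2> ~ {tau:6.0f} ns")
```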
The broken curve in figure 3(d) represents the result calculated using equation (1) with τDx = 1600 ns, τBx = 25 ns, ΔE = 8 meV, and gDx/gBx = 90. This result demonstrates agreement with 〈τ2〉 becoming longer as the temperature increases up to ~180 K. Conversely, at temperatures higher than 180 K, the experimental result shows a decreasing behavior, while the calculated result continues to increase gradually. This discrepancy appears to be due to the influence of the non-radiative recombination process, i.e., thermal quenching at temperatures higher than 180 K. To consider the influence of the non-radiative recombination process, we assumed the following thermally active non-radiative process: 1/τnr(T) = 1/τnr(0)·exp(−Ea/kBT), where Ea represents the thermal-activation energy for the non-radiative process. The solid curve in the figure shows the result calculated using the parameters τnr(0) = 8 ns and Ea = 120 meV. This result quantitatively explains the temperature dependence of 〈τ2〉. Therefore, we can explain the PL-decay time becoming shorter in the high-temperature region (higher than 180 K) by considering the thermally activated non-radiative recombination process. | 2019-06-07T22:45:48.519Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "11c6a53c49fabb8eb90e8735b48d46193931f715",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1220/1/012029",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "13accf8564dfe9208b8466b5be8b5c636980bca0",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
260851012 | pes2o/s2orc | v3-fos-license | A FRACTIONAL DYNAMICS MODEL OF HEPATITIS B DISEASE SPREAD UNDER INFLUENCE OF CAMPAIGN AND TREATMENT
In this work, we present a fractional dynamic model to describe the spread of Hepatitis B disease in a human population under the influence of campaign and treatment parameters. It was shown that the stability of the disease-free equilibrium and the disease endemic equilibrium depends on the basic reproduction number. These results are in accordance with epidemic theory. A numerical example is given to demonstrate the validity of the results. The results show that the media campaigns and treatment increase susceptible subpopulations, reduce infectious ones, and increase recovered subpopulations; thus the model gives adequate information about the spread of the Hepatitis B virus.
INTRODUCTION
Hepatitis B is an inflammation of the liver that is caused by a variety of infectious viruses and leads to a range of health problems, some of which can be fatal. Usually this disease is transmitted from one person to another in different ways, e.g., through semen, blood, and vaginal secretions. Sexual transmission is also one of the dominant routes of hepatitis B virus transmission [1,2].
Mathematical modeling is a method to better understand the dynamics of hepatitis B virus transmission and evaluate the effectiveness of various control and prevention strategies. Several studies on the use of mathematical models to study the spread of hepatitis B can be seen in [3,4,5,6,7,8].
One of the well-known models of the spread of the hepatitis B virus is the SIR compartment model, in which the model is given in the form of a system of nonlinear differential equations; see [4,6,9] for a wide discussion. In this SIR model, the observed human population (N) is divided into three epidemiological compartments denoted by susceptible S(t), infectious I(t), and recovered individuals R(t); thus the total population at time t is given by N(t) = S(t) + I(t) + R(t).
The assumptions made in developing this model can be found in [9], and the various parameters involved in (1) are described in Table 1. The dynamics of the SIR model for hepatitis B spread are studied with the initial conditions S(0) = S0 ≥ 0, I(0) ≥ 0, R(0) ≥ 0, and the solution reaches the saturation level whenever I increases [9].
It is known that fractional order derivatives are generalizations of integer order derivatives, so modeling using fractional differential equations is a powerful method for studying the overall spread of the disease.
Motivated by the current study, in this manuscript we modified model (1) by replacing the first-order derivative with fractional-order derivatives and including the media campaign parameter (i.e., education about the threat of hepatitis B disease) µ1 and the hepatitis B treatment parameter for infected individuals µ2, with µ1, µ2 ∈ [0, 1), such that model (1) can be written as the following new model (2). In this new model, D^(γ) is the fractional-order derivative operator of Caputo type of order γ with 0 < γ < 1. We study the influence of the parameters µ1 and µ2 on each compartment by inspecting the stability behavior of the equilibrium points of the model (2). To the best of the authors' knowledge, this issue has not been addressed to date. Therefore the results of this research constitute a new contribution to the field of fractional-order epidemic dynamics.
SOME USEFUL RESULTS
In this section we recall several mathematical tools used in this study. Let y : [0, ∞) → R^n be an integrable vector function and γ ∈ (k − 1, k), k ∈ N. The Caputo fractional-order derivative of order γ is defined by D^(γ) y(t) = (1/Γ(k − γ)) ∫₀ᵗ (t − s)^(k−γ−1) y^(k)(s) ds (3), where Γ(·) is the Euler Gamma function [19]. Let us consider the general fractional-order dynamic system involving the Caputo derivative, D^(γ) y(t) = g(t, y(t)) (4), with suitable initial conditions y(t0) = y0, where y(t) is the state at time t of the system (4). Note that the system (4) may be linear or non-linear. If g is linear, the system (4) can be written as D^(γ) y(t) = A y(t) (5), where A is an n by n matrix.
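The discretization below is not taken from this paper; it is a minimal sketch of the widely used explicit Grünwald–Letnikov scheme for systems of the form (4) with 0 < γ < 1, included only to illustrate how such fractional models can be integrated numerically. The test right-hand side is an arbitrary stand-in, not model (2).

```python
import numpy as np

def gl_weights(gamma, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(gamma, j), computed recursively."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (1.0 + gamma) / j)
    return w

def solve_caputo(g, y0, gamma, h, n_steps):
    """Explicit scheme for D^(gamma) y = g(t, y), 0 < gamma < 1, using the fact that
    the Caputo derivative of y equals the Riemann-Liouville derivative of y - y(0)."""
    y0 = np.atleast_1d(np.asarray(y0, dtype=float))
    w = gl_weights(gamma, n_steps)
    y = np.zeros((n_steps + 1, y0.size))
    y[0] = y0
    for n in range(1, n_steps + 1):
        memory = sum(w[j] * (y[n - j] - y0) for j in range(1, n + 1))
        y[n] = y0 + h ** gamma * np.asarray(g((n - 1) * h, y[n - 1])) - memory
    return y

# Toy test: D^(gamma) y = -y, whose exact solution is the Mittag-Leffler function E_gamma(-t^gamma)
traj = solve_caputo(lambda t, y: -y, y0=[1.0], gamma=0.8, h=0.01, n_steps=500)
print(traj[::100, 0])
```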
One important property of the system (4) is the stability of its equilibrium points. When talking about stability, one is interested in the behavior of the solutions of (4) for t → ∞ [17,18]. The point y* is said to be an equilibrium point of the system (4) if g(t, y*) = 0. Note that an equilibrium point is a constant solution of the dynamic system (4).
(2) y* is said to be asymptotically stable if it is stable and lim t→∞ y(t) = y*.
Theorem 2.2. [19,20] The equilibrium point y* of the fractional-order linear system (5) with γ ∈ (0, 1) is asymptotically stable if |arg(λ)| > γπ/2 for every eigenvalue λ of the matrix A. Theorem 2.3. [19,20] The equilibrium point y* of the fractional-order nonlinear system (4) with γ ∈ (0, 1) is asymptotically stable if |arg(β)| > γπ/2 for all roots β of the equation det(J_{y*} − βI) = 0, where J_{y*} is the Jacobian matrix of system (4) around the equilibrium y*.
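As a small illustration (not part of the paper), the Matignon-type condition of Theorems 2.2 and 2.3 can be checked numerically once a Jacobian is available; the matrix below is an arbitrary stand-in for J_{y*} of model (2).

```python
import numpy as np

def is_asymptotically_stable(J, gamma):
    """Check Theorems 2.2/2.3: every eigenvalue beta of J must satisfy |arg(beta)| > gamma*pi/2."""
    eigvals = np.linalg.eigvals(J)
    return bool(np.all(np.abs(np.angle(eigvals)) > gamma * np.pi / 2))

# Toy Jacobian standing in for J evaluated at an equilibrium of model (2)
J = np.array([[-0.3,  0.0,  0.0],
              [ 0.1, -0.2,  0.0],
              [ 0.0,  0.1, -0.1]])

for gamma in (0.6, 0.8, 0.95):
    print(f"gamma = {gamma:4.2f}  asymptotically stable: {is_asymptotically_stable(J, gamma)}")
```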
ASYMPTOTIC STABILITY OF THE EQUILIBRIA
By following the procedure in [3], it is easy to show that the solution of the model under consideration is restricted to a biologically feasible region. It is well known in epidemiology that the dynamical behavior of the model depends on the basic reproduction number. By using the next generation method, the basic reproduction number R0 for the model (2) is obtained. In order to find the equilibrium points of the model (2), we must solve the steady-state equations. By assuming I = 0, one finds the disease-free equilibrium, denoted by E0, of the fractional-order Hepatitis B model (2). We will analyze the stability of this disease-free equilibrium point. First of all, the Jacobian matrix of the vector field corresponding to model (2) is evaluated around E0.
The stability of the disease-free equilibrium point E0 is given in the following theorem.
Theorem 3.1. The disease-free equilibrium point E0 is asymptotically stable if R0 < 1, and if R0 > 1 then it becomes unstable.
Proof. Clearly J_{E0} has three eigenvalues, and one can see that all of them satisfy the condition of Theorem 2.3 whenever R0 < 1. To find the disease endemic equilibrium point (denoted by E*) of the fractional-order hepatitis model (2), we solve the model (2) at steady state for S, I and R. One can observe that E* = (S*, I*, R*) is the disease endemic equilibrium point of the hepatitis model (2). The stability of the disease endemic equilibrium E* is given in the following theorem. Theorem 3.2. If R0 > 1, then the disease endemic equilibrium E* is asymptotically stable, and it becomes unstable when R0 < 1.
Proof. The Jacobian matrix of (2) around E* is computed first. One eigenvalue of J_{E*} has a negative real part. In order to find the remaining eigenvalues, we take the corresponding 2 × 2 matrix K. The eigenvalues of the matrix K are negative if trace(K) < 0 and det(K) > 0. By substituting (10), (11) into (13) and (14), and using the condition R0 > 1, one gets trace(K) < 0 and det(K) > 0. It is easy to check that the negativity of all eigenvalues of J_{E*} implies the asymptotic stability of E*. In order to show the validity of the results, let us consider the following numerical example.
CONCLUSION
We have presented the fractional SIR model for the dynamics of Hepatitis B virus spread. An example that illustrates the result has been presented. The analysis shows that the media campaigns (education) and treatment increase susceptible subpopulations, reduce infectious ones, and increase recovered subpopulations; thus the SIR model gives adequate information about the spread of the Hepatitis B virus. | 2023-08-13T15:11:52.551Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "692ed55d4a1fa704e7b0471d3b63ad76a4db51d2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.28919/cmbn/8085",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "51f22e1324d2193e43f37dff00273c4dd9b81a42",
"s2fieldsofstudy": [
"Medicine",
"Mathematics"
],
"extfieldsofstudy": []
} |
118645872 | pes2o/s2orc | v3-fos-license | Microscopic model of quantum butterfly effect: out-of-time-order correlators and traveling combustion waves
We extend the Keldysh technique to enable the computation of out-of-time-order correlators. We show that the behavior of these correlators is described by equations that initially display an exponential instability, which is followed by a linear propagation of the decoherence between two initially identical copies of an interacting quantum many-body system. At large times the decoherence propagation (quantum butterfly effect) is described by a diffusion equation with non-linear dissipation known in the theory of combustion waves. The solution of this equation is a propagating non-linear wave moving with constant velocity despite the diffusive character of the underlying dynamics. Our general conclusions are illustrated by detailed computations for specific models describing electrons interacting with bosonic degrees of freedom (phonons, two-level systems, etc.) or with each other.
Motivation
In a chaotic classical system a small perturbation leads to the exponential divergence of trajectories characterized by the Lyapunov time, 1/Λ. As a result, the observables in two copies of the system experiencing different perturbations quickly become uncorrelated. In a many-body system a local perturbation initially destroys the correlations locally; then the region where the correlations are destroyed quickly grows with time. Killing a butterfly in Ray Bradbury's story [1] leads to a perturbation that spreads until it reaches the size of the system (the Earth in this story). This phenomenon is known as the butterfly effect.
The concept of butterfly effect can be generalized to a closed chaotic quantum system even though such a generic system does not necessarily have a direct analogue of the Lyapunov divergence of trajectories, because quantum mechanics prohibits an infinitesimal shift of the trajectory. The convenient measure of the butterfly effect is provided by the out-of-time-order correlator (OTOC) that was first introduced by Larkin and Ovchinnikov [2], revived by Kitaev [3,4] and extensively discussed in a number of recent works [5,6,7,8]. The OTOC is defined by Eq. (1), where O(t) and Õ(t) are two local operators in the Heisenberg picture. Physically, it describes how much the perturbation introduced by Õ(0) changes the value of O(t). At large times A(t) goes to zero, because the state created by the consecutive action of the operators O(t)Õ(0) is incoherent with the state obtained when these operators act in a different order. 1 The anomalous time order in the correlator (1) implies evolution backward in time, so it is not measurable by direct physical experiments on one copy of the system in the absence of a time machine such as the one implemented in NMR experiments [9]. One can view the decrease of the OTOC with time as the consequence of the dephasing between two initially almost identical Worlds evolving with the same Hamiltonian. In this respect it is different from the problems of fidelity [10] and Loschmidt echo ([9] and references therein) that study evolution forward and backward with slightly different Hamiltonians. It is also different from the problem of the evolution of a particle along quasiclassically close trajectories appearing in studies of the proximity effect [2], weak localization [11], and quantum noise [12]. For physical systems the Hamiltonian is local, so that distant parts of a system do not interact directly with each other. In this case, one may further distinguish the case when the operators O and Õ act far from each other in real space. One expects that the correlator decreases only after the significant delay needed for the perturbation to spread over the distance separating these operators. When correlators of this type have decayed for any separation between the operators in real space, the coherence is completely lost. The decay of the OTOC at long times for all subsystems (i.e. for all separations) and for all operators O and Õ implies complete quantum information scrambling [13]. Note that the separation of the operators in space is equivalent to the separation into subsystems introduced in quantum information works. We are not going to discuss here the quantum information implications of the OTOC and the exact definition of quantum scrambling; we refer the reader to the literature that discusses its theory [14,15,16,17,18] and the possibility of its experimental measurement [19,20].
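Although the present paper studies the OTOC with field-theoretical methods, the bare definition can also be evaluated by brute force for a very small system. The sketch below is purely illustrative and is not one of the models treated here: it computes an infinite-temperature correlator of the common form Tr[O(t)Õ(0)O(t)Õ(0)]/2^N for a short mixed-field Ising chain, with chain length, couplings, and operator choices chosen arbitrarily.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, site, n):
    """Embed a single-site operator into an n-site Hilbert space."""
    ops = [id2] * n
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

n = 6                       # chain length (illustrative)
J, g, h = 1.0, 1.05, 0.5    # mixed-field Ising couplings (a generic chaotic point)
H = sum(-J * site_op(sz, i, n) @ site_op(sz, i + 1, n) for i in range(n - 1))
H += sum(-g * site_op(sx, i, n) - h * site_op(sz, i, n) for i in range(n))

O  = site_op(sz, 0, n)       # operator O at one end of the chain
Ot = site_op(sz, n - 1, n)   # operator O-tilde at the other end
dim = 2 ** n

for t in np.linspace(0.0, 6.0, 13):
    U = expm(-1j * H * t)
    O_t = U.conj().T @ O @ U                  # Heisenberg-picture O(t)
    A = np.trace(O_t @ Ot @ O_t @ Ot) / dim   # infinite-temperature OTOC
    print(f"t = {t:4.1f}   Re A(t) = {A.real:+.4f}")
```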
The goal of this work is to develop the analytic tools to study OTOC (1) for microscopic models that allow for the solutions for conventional correlators.
The technique that we develop is essentially a straightforward extension of the Keldysh technique. We apply our technique to three models that are basic in condensed matter physics: (i) electrons interacting with localized bosonic degrees of freedom (Einstein phonons, simplified two-level systems, etc.), (ii) electrons in a disorder potential, and (iii) electrons weakly interacting with each other. We find that in models (i) and (iii) the mathematical description of the OTOC is similar to the description of combustion waves. The small initial perturbation first grows exponentially while remaining local and then starts to propagate with a constant velocity and a well-defined front, despite the fact that the thermal transport in these models is always diffusive. The velocity of the front propagation is always slower than the electron Fermi velocity, and in some models it is parametrically slower. The apparently slow velocity of the front propagation implies that it does not necessarily saturate the Lieb-Robinson bound [21]. This conclusion of a constant velocity of the quantum butterfly propagation agrees with the result obtained in the holographic theory of black holes [6,22,23].
The plan of this paper is the following. In Section 2 we introduce the basic elements of our technique: the augmented Keldysh formalism that involves two forward and two backward paths. The state of the system in this formalism is described by the diagonal and off-diagonal Green functions in the augmented space. The diagonal functions describe the quasiparticle distribution function in each "world", the off-diagonal ones describe the coherence between the "worlds". In Section 2.3 we introduce two types of correlators that one can compute in this technique: the observables that can be measured directly in a physical experiment and the computables that one can only compute numerically (or measure given the time machine). In Section 3 we introduce the details of the microscopic models for which the anomalous correlator of type (1) will be computed. In Section 4 we derive the analogue of the kinetic equation for both diagonal and off-diagonal ones. In Section 5 we analyze the stability of the kinetic equations of Section 4 ignoring their spatial structure (i.e. in zero dimensional case) and show that the instability of the off-diagonal functions is described by non-linear ordinary differential equations. Section 6 generalizes these equations for the models with spatial structure for which they become similar to the equations describing the combustion waves. Section 7 describes the formation of the propagating front that follows from the non-linear diffusive equations derived in Section 6. The Section 8 studies the initial time period at which the state of the system is not yet accounted for by the diffusive equations and its match to the evolution at longer times. Finally, the Section 9 gives the summary of the results and discussions of possible extensions.
Augmented Keldysh technique
Anomalously ordered correlators such as the out-of-time-ordered A(t) introduced in Section 1, see Eq. (1), cannot be computed in conventional techniques that assume causal time evolution. To circumvent this difficulty we augment the standard Keldysh technique by introducing two forward and two backward evolutions shown in Fig. 1b.
In order to describe the augmented technique we recall the conventional Keldysh technique [24] first. There, the differently ordered correlators are given by Eq. (2), where α, β, γ, δ = ± denote the positions of the operators on the traditional Keldysh contour shown in Fig. 1a. Here all observable operators Ô and the interaction part of the Hamiltonian Ĥ_int are in the interaction representation, the averaging is done with a density matrix (that represents the initial conditions in the past), and the symbol T_{C_K} denotes ordering of the operators on the Keldysh contour, i.e., the operator referring to the position further down the contour is on the left in (2) (for fermion operators the change of order brings in a minus sign). One can see that by choosing the indices α, β, γ, δ = ± one can get different orders of operators but never the anomalous order required by Eq. (1). Figure 1: The traditional, C_K, (a) and the augmented (b), C_aK, Keldysh contours. Times t_{1,2} label the insertion of the operators for observable or computable quantities, see text.
Operators (fermionic or bosonic) are ordered according to their location on the contours C_K, C_aK.
The augmented contour C_aK allows anomalous orders of the operators such as in Eq. (1). For this contour the indices α, β, γ, δ can acquire four values, u±, d± (where u stands for the up and d for the down parts of the contour). The expression similar to Eq. (2) with the choice α = u+, β = d+, γ = u−, δ = d− becomes the out-of-time-order correlator A(t). Clearly, other combinations of indices will produce normal as well as abnormal correlators. Equation (3) is the essence of the augmented technique. Unitary evolution on the u/d segments of the contour can be viewed as the evolution of different worlds (we will use this term loosely throughout the paper) with the same Hamiltonian and the same initial conditions ("correlated worlds" initially). The correlator (3) can be viewed as the response at time t to the perturbation (source) at time 0. When the sources are located at the same up/down parts of the contour, the response is directly measurable; we shall refer to these sources as 'physical'. All the other sources will be referred to as 'unphysical'. Our ultimate objective is to describe how these "correlated worlds" become "uncorrelated worlds" provided that a small local perturbation is seeded differently in the two worlds (butterfly effect). 2 In the next few sections we generalize the rules and the results of the Keldysh technique to the augmented Keldysh technique. We will see that almost all rules carry through up to the kinetic equation, where the correlation function describes not only the occupation numbers but also the measure of the correlation between the different worlds.
Augmented space and Green function
Similarly to the usual diagrammatic technique, we introduce the 4 × 4 matrix of Green functions of Fermi or Bose fields. It is convenient to view this four-dimensional space as a direct product of the 2 × 2 Keldysh and 2 × 2 augmented spaces. Each operator (fermionic or bosonic) ψ(t) can be placed at four different points of the contour at time t; therefore it is enlarged into a four-dimensional vector, where (1), (2) are shorthand notations for the coordinates, times (and possibly spin) that specify the single-particle state: i ≡ (t_i, r_i, σ_i). In these notations one defines the 4 × 4 matrix Green function.
Observables and computables
We distinguish the correlators (observables) that can be in principle measured by a physics experiment and the ones that can only be studied in the rather artificial system that allows inversion of time directions. Because the latter can be more readily studied by numerical simulations, where the unitary evolution can be formally reversed 5 , we call them computables.
The example of an observable is given by the causal correlator (11) that describes the density response at time t to the perturbation at time 0. Indeed, the correlator (11) rewritten in terms of the original fields ψ has the form N_ρρ(t) = ⟨[ψ†(t, r)ψ(t, r), ψ†(0, r_0)ψ(0, r_0)]⟩; the usual rules of linear response imply that the density induced by the scalar potential applied at point r_0 at time t = 0 is given by −iN_ρρ(t). In fact this structure is general for the Keldysh technique: the physical perturbation comes with τ^K_0 while the observable comes with τ^K_1. In contrast, the out-of-time-ordered correlator provides the example of a computable. In this paper we focus on out-of-time-ordered correlators of the form (12), which becomes out-of-time-ordered for many combinations of the indices α, β, γ, δ. For instance, for α = u+, β = d+, γ = u−, δ = d− it provides an example of the general correlator (3) discussed in Section 2.1. It is convenient to separate, as we have done here by parentheses, the 'source' term provided by the product of two operators at time t = 0 and the 'response term' provided by two operators at time t ≈ t′ > 0. For fixed γ and δ the correlator (12) can be viewed as the Green function G_αβ(t, r; t′, r) computed in the states modified by the action of the operators ψ_γ(0, 0)ψ†_δ(0, 0). In particular, it satisfies the same identities as the Green function, Eqs. (13). After the Keldysh rotation in the indices α and β the correlator (12) acquires the same general form as the Green function (8).
Because of the identities (13) one can choose many equivalent forms of the out-of-time-ordered correlators that display unusual behavior. It will be more convenient for us to compute the symmetrized correlator defined by Eq. (14). The source term, Ŝ_0, can have many equivalent forms that distinguish the upper and down Worlds; we can choose, for instance, the form (15). This term destroys one particle in the down World and creates it back before the evolution in the upper World starts (notice that for the operators at t = 0, ψ†_{u−}(0) = ψ†_{d+}(0), see Fig. 1). As we shall see below, the final results depend very weakly on the particular form of the source term.
The response operator in this correlator is the sum of four terms, Eq. (16), that measures the product of the distribution functions and the correlations between the worlds. The minus sign in this equation is due to fermionic commutation rules.
As usual, any correlator allows for a pictorial representation to facilitate the basic structure of the theory and to be able to sum up the most important parts of the perturbative expansions up to infinite order. We develop the diagrammatic rules for the technique in the augmented space below in sections 2.6, 2.7, 2.8.
In the absence of the unphysical sources the two worlds remain perfectly correlated. The stability of this solution can be discussed in very general terms without the knowledge of the details of the microscopic model. In fact, the existence of the self-energy, Dyson equation, and the general thermodynamic relations are sufficient to prove that the perfectly correlated solution is unstable. We begin with these general considerations.
Dyson equation.
In any field theory that allows the separation of the Hamiltonian into bare (H_0) and interacting (H_int) parts, one can introduce the notion of the bare Green function, G_0, corresponding to the Hamiltonian H_0, the full Green functions (defined above), and the self-energies, Σ̂, that take into account the effects of the interaction on the bare Green functions. In the diagram technique the self-energy can be defined as the sum of all one-particle-irreducible diagrams (see Sections 2.6, 2.7, 2.8). The Green functions and self-energies obey the Dyson equation, which can be written in the two equivalent forms (17), where 1̂ is the unit operator in the space-time and the augmented Keldysh space, and the symbol ∘ implies the matrix multiplication in the augmented space and the convolution in space-time. The operator Ĥ_0 is diagonal in the Keldysh and augmented spaces with the diagonal elements defined by the equations Ĥ_0 G^R_0 = 1. The general structure of the Green functions (8) implies that the parts of the Green function that are retarded and advanced in Keldysh space remain diagonal in the augmented space. Because the bare retarded and advanced Green functions are diagonal in the augmented space, the self-energies Σ^{A,R}_{αβ} = δ_{αβ} Σ^{A,R}_α also remain diagonal, and they are given by the solution of the equations (18). As usual, the non-diagonal part of the Green functions in Keldysh space is not entirely determined by Eqs. (18): it also depends on the initial conditions. Its evolution is described by the homogeneous equations (19). Notice that both the diagonal and the non-diagonal parts G^K_{αβ} are controlled by the initial conditions. We emphasize that the diagonal components Σ^{A,R}_α, Σ^K_{αα} may depend on the diagonal components of the Green functions in the augmented space, G^K_{αα}, but not on the other diagonal (e.g. G^{A,R}_β, G^K_{ββ}, β ≠ α) or the non-diagonal (G^K_{αβ}) Keldysh components. This observation turns out to be the key to the description of the instability in the evolution of non-diagonal correlations, as we see in the next subsection.
Stability and instability.
Let us consider the fermionic Green function for the sake of concreteness. The Keldysh components of the Green function can be conveniently parametrized via G^K_{αβ} = G^R_α ∘ F_{αβ} − F_{αβ} ∘ G^A_β (20). For α = β this equation reduces to the conventional parametrization of G^K in terms of the quantum distribution functions F_uu and F_dd; for α ≠ β it gives the parametrization of the new functions Γ^K and Γ̄^K in terms of F_ud and F_du. Substituting Eq. (20) into Eqs. (19) and using Eqs. (18), we find that Eqs. (19) are satisfied for F_αβ solving the quantum kinetic equation (21). In the quasiclassical approximation the two terms in brackets correspond to the outgoing scattering processes (this term taken alone always leads to dissipation) whilst the last term corresponds to the incoming processes (this term taken alone always leads to instability).
In thermal equilibrium the Green functions depend only on the time difference. The diagonal parts of the electron self-energies are related by the fluctuation-dissipation theorem (FDT): Σ^K_{αα}(ε) = [Σ^R_α(ε) − Σ^A_α(ε)] n_0(ε), with n_0(ε) = tanh[(ε − µ)/2T] (22), where ε is the frequency conjugate to the time difference, µ is the chemical potential, and T is the temperature; both of them are determined by the initial conditions. For bosons one should replace n_0(ε) = tanh(. . .) by p_0(ω) = coth(. . .). For phonons (whose number is not conserved) the chemical potential is µ = 0. In equilibrium the left-hand side of Eq. (21) is zero; substituting Eq. (22) into (21) we see that [1 − F_uu(ε)]/2 has the meaning of the Fermi distribution function.
The FDT also implies that Eq. (21) has a generally stable solution. The only reason for this solution to become unstable is the metastability of the state that might happen on the unstable branch of the phase transition. However, even in this case, the ultimate fate of the system is a different equilibrium characterized by different Σ R,A α and with different µ and T that are found from the number of particle and energy conservation in the new spectrum. That solution would be stable again.
The stability of the thermal solution of Eq. (21) is guaranteed by Boltzmann H-theorem and the global conservation laws (energy and the number of particles). In the framework of Eq. (21) it means that for small deviations of F uu , F dd from the thermal distributions the outgoing terms dominate incoming ones in Eq. (21). This fact is far from trivial because both terms are generically non-linear.
The equation (21) allows for a solution similar to Eq. (23) for the off-diagonal components, Eq. (24). We will call this solution the "correlated worlds solution". In the Pauli-matrix notation given by Eq. (6), Eq. (23) and Eq. (24) can be compactified (for fermions) as Eq. (25), where, anticipating further applications, we allow the distribution function to depend not only on energy but also on the phase space variable and time. The precise definition of the notion of the semiclassical phase space is given in Sec. 4.
Notice that, in contrast to the solution of Eq. (21) in the diagonal sector, the stability of the correlated worlds solution is not guaranteed even if the solution Eq. (23) is stable. Indeed, for the diagonal (conventional) distribution function a small deviation from equilibrium results in a non-zero RHS of Eq. (21), in which outgoing terms always dominate incoming ones. Outgoing terms always imply relaxation, which leads to the stability of the equilibrium solution. For the non-diagonal distribution function, a small deviation from equilibrium results in the same incoming terms as for the diagonal distribution function but in smaller outgoing terms. A small deviation of the diagonal term leads to two contributions to the outgoing term: one due to the interaction-induced change in Σ^{A,R}_α, another due to the change in F_uu (F_dd) itself. Because Σ^{A,R}_α do not depend on the off-diagonal terms F_ud (F_du), the former contribution is missing for the off-diagonal terms. This makes the deviation of the outgoing term, which tries to restore equilibrium, smaller for the non-diagonal distribution function in the interacting system. Thus, for the off-diagonal distribution function the outgoing terms do not necessarily dominate the incoming ones for small deviations from the equilibrium. This results in a possible instability of the solution of Eqs. (24). The computations for the specific models below show that this instability is indeed present for electron-phonon and electron-electron interactions but not for impurity scattering.
The alternative solution (allowed by conservation laws for the off-diagonal components) is F_ud = F_du = 0 (26); this solution will be called the "uncorrelated worlds solution". The incoming term is second order (or higher) in F_ud (F_du); therefore it vanishes for small deviations from this solution. In contrast, the outgoing term is always linear in F_ud (F_du), and it dominates. Thus, the uncorrelated worlds solution is generally stable, and one expects that the correlated worlds solution (24) is not. The only exception is the electron scattering by impurities, which conserves the number of particles at each energy separately. In this case both outgoing and incoming terms are linear in all components of F_αβ and the previous arguments do not hold.
The meaning of the "uncorrelated worlds solution" (26) is the following. Unlike their diagonal counterparts, F ud (F du ) encode not only the distribution functions but also the overlap of the many-body wave-functions evolving at the upper and lower contour. Any decrease of this correlation diminishes the values of both F ud (F du ). The proposed instability is therefore nothing but the quantum butterfly effect, the decay of F ud (F du ) everywhere in the system results in the loss of the coherence between many body wave functions describing upper and down Worlds. The ultimate solution given in Eq. (26) corresponds to the complete destruction of the coherence between lower and upper contour.
The description of the evolution of the system from the "correlated worlds solution" (25) to the "uncorrelated worlds solution" (26) is the subject of the further sections.
Basic rules of the diagram technique: Green functions
The basic elements of the diagrammatic representation needed to compute the correlators in the augmented space are shown in Fig. 2. Notice that for keeping track of the Keldysh structure putting arrows on the Green function for the real fields (as it is done throughout this paper) is convenient but not necessary. In the absence of interactions the observable (11) and the computable (14) are given by the diagrams shown in Fig. 3. Introduction of the separate notation for the box (see Fig. 3 ) enables one to display the matrix structure of the interaction vertices as well.
Vertices
In order to develop the perturbation theory one needs to supplement the expression for the Green functions with the expression for the bare vertices. Because the unitary evolution in each sector is formally independent, these vertices do not couple different sectors of the augmented space. In the Keldysh space they have the usual structure.
To illustrate this point, more for the benefit of the readers familiar with the conventional Keldysh technique, let us consider the textbook [27] example of the perturbation theory for the electrons interacting with phonons. The lowest order contribution to the electron self-energy has the form (27) (formal general rules of the diagram technique will be summarized in the next subsection). Figure 3: The observable (11) and the computable (14) in the non-interacting problem. The diagonal components in the augmented space coincide with the ones for the regular technique. The non-diagonal ones are found by using Wick's theorem and noticing that the vertices by themselves do not mix different sectors of the augmented space. The matrix structure displayed by Eq. (27) can be further compactified by equation (28), where the 4 × 4 × 4 matrices Υ^γ_{αα} are given by Eqs. (29). Here 0̄ ≡ 1, 1̄ ≡ 0, n = (0, 1; 0, 1), n̄ = (1, 0; 1, 0)^T, and the sum over the index m = 0, 3 gives the sum τ^a_0 ⊗ τ^a_0 + τ^a_3 ⊗ τ^a_3, which is different from zero (and equal to 2) only for coinciding indices in the augmented space. Pictorially, this matrix structure can be summarized by Fig. 4, where the basic blocks (boxes and triangles) are again defined in Fig. 2. The appearance of the vector n in the formalism is due to the non-conservation of the number of particles (phonons), see Fig. 2c).
Note that the same vertex Υ describes the interaction of the electron with the disorder potential. The only difference is that there is no time dependence of the quenched disorder potential; thus the impurity line connecting different branches of the augmented Keldysh contour never decays. The general structure of the vertices for the electron-electron interaction is shown in Fig. 4a. Note that it is described by the same building blocks as the electron-phonon and electron-impurity interaction. Because the number of particles in the electron-electron interaction is conserved, the vector n does not appear in this case. The definition of the vertices has to be supplemented with the remaining bosonic Green functions, defined in Fig. 5.
Diagram technique: summary
We are now prepared to formulate the general rules of the diagram technique that operates with the blocks defined in sections 2.6, 2.7. In order to compute the correlator (observable or computable) one has: (i) to place the sources and the interaction vertices, (ii) to connect them by the Green function lines, (iii) to trace over the indices in the augmented Keldysh space, (iv) to integrate over the positions of the interaction vertices, (v) to multiply the result by (−1)^{N_FL}, where N_FL is the number of closed fermionic loops. As usual, in order to derive the physical properties at large scales one introduces the notion of the self-energy, defined as the sum of all one-particle-irreducible diagrams. For example, the self-energies for the electron-phonon, electron-electron and electron-disorder interaction are shown in Fig. 6.
Microscopic Models
The instability expected in section 2.5 is of kinetic nature. Its form depends on the detailed form of the kinetic equation and thus on the microscopic model on which the latter is based. In the following we describe the models that allow one to study the development of the instability in detail.
In all these models the main ingredient is the mobile electrons that form a Fermi sea. They are described by the quadratic Hamiltonian and characterized by the bare Green function, where the single-particle energy ξ_p is counted from the Fermi energy ε_F. The condition ξ_p = 0 defines the Fermi surface of the electrons. For electrons the operator H_0 introduced in Eqs. (17) acquires the corresponding form. Here we restored the units of ℏ for future convenience in developing the quasiclassical approximation later on. The three models for the electron interaction that we formulate below differ by their conservation laws. The primitive model of electron-phonon interaction (section 3.1) preserves the total energy of the system and the number of electrons but not the momentum of the system. The electrons in the impurity potential (section 3.2) do not form a translationally invariant system; however, the scattering by impurities conserves the energy of individual electrons, leading to an infinite number of conservation laws in this problem. The electron-electron interaction (section 3.3) preserves both the translational and Galilean invariance, so it conserves the total energy, momentum, and the particle number. 6
Figure 6: Self-energies for the (a) electron-phonon interaction, (b) electron-electron interaction, and (c) electron in the Gaussian disordered potential. The first-order self-energy for the electron-electron interaction is discarded as not related to the collisions but rather renormalizing the self-consistent spectrum for the deterministic motion. The inside lines for the Green function are solid, which means that the infinite series of the rainbow diagrams is summed. This approximation is known to lead to the quasi-classical Boltzmann equation and can be justified for weak interactions or small enough disorder strength (neglecting localization effects).
Electron-phonon interaction
The simplest interacting model is the one in which the electrons interact with dispersionless phonons with frequency ω_0 (Einstein phonons). To avoid an inconsequential consideration of the band structure, we simply assume (somewhat artificially) that all the points r are random and dilute, with density n_ph per unit volume. Notice that n_ph represents the density of phonon sites; the density of thermally excited phonons is the product of n_ph and the phonon occupation number. Bosons interact with the electrons via the coupling to the field u_r = λ(b_r + b†_r). Correlators of the field u are given by Green functions local in space. Poles at positive frequencies in these functions correspond to phonon emission and at negative frequencies to phonon absorption (while the physical energies of phonons are of course positive). These Green functions should be used as the basic elements of the diagram technique shown in Fig. 5a. Here and below we adopt the traditional convention in which the phonon frequency is denoted by ω, whilst ε is reserved for the electron energy. For phonons the operator H_0, introduced in Eq. (17), acquires the corresponding form. Similarly to the electrons, see Eq. (20), the Keldysh part of the phonon Green function can be parametrized analogously.
With this parametrization the form of the kinetic equation (21) for the phonons remains the same; the only difference is that their equilibrium distribution functions for the correlated world solution are given by Eq. (39) and for the uncorrelated world solution by Eq. (40). In thermal equilibrium p_ph = p_0(ω).
Electrons in disorder potential
The interaction with the quenched disorder potential is described by the corresponding Hamiltonian. After averaging over the disorder potential with the correlator ⟨U(r)U(r′)⟩ = V(r − r′), the translation invariance is restored for the averaged correlation functions, and the diagrams for the electron correlators become similar to those for the electron-phonon interaction that carries zero frequency, see Fig. 5.
Electron-electron interaction
The interaction between electrons is given by Eq. (41). The rules of the diagram technique are given in Fig. 4. In the discussion of the properties of this model we shall neglect the spin of the electrons. For completeness, we also mention that in the perturbation theory based on Eq. (41) the singular terms proportional to G^K(t, t) have to be understood as G^K(1, 1) → 2i⟨Ψ†(r_1)Ψ(r_1)⟩, and G^{R,A}(t, t) → 0. Such terms appear only in the Hartree-Fock contributions to the single-electron spectrum and not in the collisions interesting for us.
Quasiclassical descriptions
The kinetic equation (21) fully determines the evolution of the observables and the computables. However, it is not solvable in the general case. Substantial simplification occurs in the quasiclassical limit, in which equations (21) become local in time and phase space. This simplification is possible if the rate of the electron scattering is smaller than the relevant energy scales in the problem: the temperature for electron-electron or electron-phonon interactions, or the Fermi energy for electrons in a disorder potential. For the diagonal part of the kinetic equation this is well established and the theory of quantum corrections is well developed, see ref. [29] for a pedestrian introduction. In the following we shall assume that these conditions hold and that the quasiclassical kinetic equation follows for the diagonal terms. Under these conditions similar local equations hold for the non-diagonal Green functions despite the fact that these functions do not have a classical meaning.
We follow the standard procedure for the derivation of the quasiclassical equations for both diagonal and non-diagonal components. Any function of two coordinates and two times can be represented via a Wigner transformation, where t = (t_1 + t_2)/2, r = (r_1 + r_2)/2, t_− = t_1 − t_2, r_− = r_1 − r_2. In this section we chose to keep the Planck constant explicitly, so that the parameter for the semiclassical expansion is always displayed. Using this representation for the Green functions of electrons and employing Eq. (33), we get for the left-hand side (LHS) of the quantum kinetic equation (21) the expression (43), which coincides with the LHS of the classical Boltzmann equation for both diagonal and off-diagonal components of the Green functions. Similar arguments for the phonons lead to the LHS of the kinetic equation (44). The LHS of Eqs. (43), (44) represents the deterministic (Liouville) evolution corresponding to the unitary quantum dynamics, which is identical for both diagonal and non-diagonal components in the augmented space. The right-hand side (RHS) of the kinetic equation describes the non-reversible probabilistic parts, and it is different for different models. The equations describe the time evolution of the distribution functions. Here St^{···}_{el} and St^{···}_{ph} denote the collision integrals for the particles (electrons or phonons) scattered by other particles (denoted by ···). These collision integrals will be computed in the next section.
Collision integrals for the specific models
The RHS of the kinetic equation allows a number of simplifications in the leading order in ℏ. Furthermore, below we shall consider only the leading-order term in the interaction.
In the leading approximation one can replace the Green functions entering the self-energies by their quasiclassical forms and, with the same accuracy, simplify the phonon propagators. Note that the fact that D^K_{αα}(ω) is an odd function of frequency allows the simultaneous description of phonon emission and absorption processes by a single P_{αα}(ω > 0) > 0. Substituting these forms into the RHS of the kinetic equation, we obtain the collision integrals for the different models. In Eq. (47) we kept only the real part of the collision integral; we discuss the various approximations involved in its derivation in more detail in section 4.4.
Electron-phonon scattering.
We calculate the lowest order diagrams shown in Fig. 6. For our model one neglects the correlations between phonons at different space locations, i.e., the blobs for the fermionic loop in the self-energy shown in Fig. 6a) correspond to coinciding points (with density n_ph). This implies that the electron self-energy and collision integral contain an extra factor n_ph with respect to the phonon ones. In the diagonal sector we obtain the collision integral (we do not write down the spatial and time coordinates, as the semiclassical collision integrals are local in those variables), where we introduced the functions that determine the outgoing rate. This notation is useful as the same quantity enters the equations for the non-diagonal part. For the off-diagonal components (α ≠ β) we obtain St^{ph}_{el, αβ} = n_ph ∫ [dP_1 dQ_1/((2π)(2πℏ)^{d+1})] M(P; P_1, ω) [−L^{ph}_{el}(P_1, ω) F_{αβ}(P) + P_{αβ}(ω) F_{αβ}(P_1)], where we introduced the corresponding shorthand notation. The form factors M include the matrix elements, the conservation laws for the electrons and phonons colliding with each other, and their spectrum; the numerical factor 1/2 accounts for the difference of F, P from the physical distribution functions by a factor of two. We find it more convenient to keep these relations in a form which is the microscopic manifestation of the Boltzmann H-theorem. Equality is reached only for thermal distribution functions, for which it reduces to Eq. (52) and Eq. (53). These equations allow one to prove that the only stable solution of the kinetic equation is given by the thermal distribution functions, and all deviations from it decay (generally, exponentially). In contrast, the non-diagonal part does not satisfy any of these properties or conservation laws. As we already mentioned, this absence of conservation laws and of the H-theorem will be the key to understanding the instability of the thermal non-diagonal distributions for correlated worlds (25) and (39) and their subsequent evolution to non-correlated worlds (26), (40). The discussion of the instability will be done in Secs. 5 and 6. In the remainder of this section, we list the properties of the collision integrals for the other physical models of Sec. 3.
Electron-electron scattering.
We calculate the lowest order diagram shown in Fig. 6. In the diagonal sector we obtain St^{el}_{el, αα} = ∫ [dP_1 dP_2 dP_3/(2πℏ)^{3(d+1)}] M(P, P_1; P_2, P_3) [−L^{el}_{el, α}(P_1, P_2, P_3) F_{αα}(P) + …], where we denoted the outgoing rate L^{el}_{el, α} in analogy with the electron-phonon case. As before, the form factors M include the matrix elements, the conservation laws for the electrons colliding with each other, and their spectrum; the numerical factor 1/8 accounts for the difference of F from the physical distribution function by a factor of two and for the exchange symmetry of the final state. We find it more convenient to keep ε, ω as independent energies connected by the δ-functions in M with the physical spectrum, in order to have a symmetric form for the conservation laws, and to use the (d + 1)-dimensional momentum vectors P = (ε, p).
Similarly to the electron-phonon interaction, the collision integral (60) for the non-diagonal term is non-linear due to the incoming term. This leads to the instability of the thermal non-diagonal distribution.
Electron-impurity scattering
The collision integral for the electron-impurity scattering is linear in the distribution function. This implies that in the case of the impurity scattering the non-diagonal components of the Keldysh function have the same time evolution as the diagonal ones, so the solution in which they are equal to the thermal equilibrium distribution is stable. Note that electrons in an impurity potential constitute a chaotic system. In this respect it is not different from the models with electron-phonon and electron-electron interactions. Nevertheless, the non-diagonal components are stable, in contrast to the models with electron-phonon and electron-electron interactions. This results in a very different behavior of the out-of-time-ordered correlators in this system.
Additional remarks
It is worthwhile to emphasize that the basic form of the kinetic equation and the forthcoming conclusions are not limited to the lowest order self-energy calculation. In particular, taking into account the commutators of the self-energy with F_αβ results in well controllable corrections to the LHS of the kinetic equation and has the meaning of the self-consistent spectrum. The higher-order expansion improves the accuracy of the matrix elements in the collision integrals and also produces the real processes involving a larger number of particles. Neither of those complications seems to affect the basic relations of the diagonal and non-diagonal evolutions, and we will not be dwelling on them in this paper. The imaginary part of the non-diagonal elements of the collision integral neglected in Eq. (47) formally appears due to the difference between the distribution functions in the upper and down Worlds. This effect also disappears in the leading quasiclassical approximation and does not affect the instability discussed in the next sections. For instance, for the electron-phonon model this term vanishes for the two Worlds in equilibrium. Furthermore, it is zero if one World has an extra particle density that results in a spatially non-uniform chemical potential.
Instability of the augmented Keldysh functions in zero dimensional case
In this section we study the instability in the systems in which the spatial dependence of the correlation functions can be neglected.
Instability in electron-phonon model.
The Einstein phonon distribution function is characterized by just two numbers in each sector of the augmented space, namely the values of P_{αβ}(±ω_0). In thermal equilibrium the two diagonal components are given by Eq. (39). Because the instability in the non-diagonal sector does not affect the diagonal one (Sec. 2.4), for simplification we assume that the diagonal sector for both electrons and phonons is in equilibrium. This assumption is not essential, and the thermal function can be replaced by its non-equilibrium value without any technical complications. Because the phonon field is real, the non-diagonal sectors are related to each other by the symmetry P_ud(ω_0) = −P_du(−ω_0), and the state of the phonons is described by two parameters. The phonon scattering process does not depend on the electron momentum, so we need to keep only the energy, ε = ξ(p), dependence of the electron distribution function.
We also replaced ℏω_0 → ω_0, as the semiclassical expansion is already completed. In deriving Eqs. (62) we neglected the energy dependence of the electron density of states, ν; we also introduced short-hand notation and a dimensionless parameter η. Eqs. (62) are further simplified in the limits η ≪ 1 and η ≫ 1: in one limit the phonon relaxation is slow compared to that of the electrons, in the other the electron relaxation is the slower one. The limit in which the electron relaxation is slower seems to be the most relevant for physical situations (e.g. to describe electrons interacting with a low density of TLS), and moreover it will enable us to develop intuition for analyzing the more involved kinetics of electron-electron collisions. Thus we focus on this limit only. Because the relaxation of θ is much faster than that of f, we can solve Eqs. (62c,d) for θ, θ̄ in the stationary limit. The resulting Eqs. (67), together with Eqs. (63, 64), form the complete set of equations describing the time evolution of the non-diagonal components of the distribution function. However, they are still non-linear and non-local in energy space.
The further analysis is separated into two regimes: the "classical" one, ω_0 ≪ T, and the "quantum" one, ω_0 ≫ T. The difference between these regimes is expected on physical grounds: in the "classical" regime a large number of excitations is already present; therefore, one expects (and we will see that it is indeed the case) that the perturbation results in an evolution that leads to the uncorrelated fixed point, f = 0, f̄ = 0, with a characteristic time of the order of τ. In the quantum regime, one expects that the characteristic time is determined by an exponentially small number of excitations, and it becomes infinite at zero temperature.
Generally, one expects that in gapful systems at T = 0 a small perturbation cannot lead to any instability; in particular, these systems cannot be chaotic, so that the scrambling time is infinite. The exponential growth of the characteristic time at low temperatures in the gapless system found here implies a smooth crossover between the properties of the gapful and gapless systems at T = 0.
5.1.1. Classical limit (ω_0 ≪ T)
At high temperatures we can neglect ω_0 in I_± (63) and in the arguments of f in Eq. (67), and we can also approximate L = 2/y in Eq. (64). We see then that the form of the f(ε) and f̄(ε) dependences is not changed by the evolution. Therefore, we can look for the solution of Eq. (67) in the factorized form (68), f(ε, t) = φ(t) f_0(ε). We obtain that the function φ(t) obeys a first order differential equation that has an unstable fixed point at φ = 1 and a stable fixed point at φ = 0. The solution of this equation takes the form of a delayed decay with t_cl* = τω_0/(8T). This time dependence is typical of dissipative instabilities. The delay time, t_d, depends only logarithmically on the initial conditions, t_d = t_cl* |ln[1 − φ(0)]|, whilst the decay time t_cl* coincides with the classical scattering time of the electrons, which is inversely proportional to the density of phonon sites, n_ph, and to the phonon occupation number, T/ω_0. Note that this classical time, t_cl*, is less than the energy relaxation time in the limit ω_0 ≪ T.
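A minimal numerical sketch of this behavior is given below. The displayed equation for φ(t) is not reproduced here, so the logistic form dφ/dt = −φ(1 − φ)/t_cl* used in the sketch is an assumption chosen only to have the same two fixed points and the same qualitative delay-plus-decay structure; the prefactors and the exact form of the paper's equation may differ.

```python
# Minimal sketch (not taken from the paper): integrate an assumed logistic form
# d(phi)/dt = -phi*(1 - phi)/t_cl, which has the unstable fixed point phi = 1
# and the stable fixed point phi = 0 described in the text.
import numpy as np
from scipy.integrate import solve_ivp

t_cl = 1.0  # decay time t_cl* = tau*omega_0/(8T), in arbitrary units

def rhs(t, phi):
    return -phi * (1.0 - phi) / t_cl

for delta0 in (1e-3, 1e-6, 1e-9):                  # initial deviation 1 - phi(0)
    sol = solve_ivp(rhs, (0.0, 40.0 * t_cl), [1.0 - delta0],
                    dense_output=True, rtol=1e-10, atol=1e-13)
    t = np.linspace(0.0, 40.0 * t_cl, 4001)
    phi = sol.sol(t)[0]
    t_d = t[np.argmax(1.0 - phi > 0.5)]            # deviation becomes of order one
    print(f"delta0 = {delta0:.0e}:  t_d ~ {t_d:.2f},  "
          f"t_cl*|ln(delta0)| = {t_cl * abs(np.log(delta0)):.2f}")
```

Making the initial deviation smaller only adds a fixed logarithmic increment to the delay, while the subsequent decay proceeds on the scale t_cl* regardless of the initial condition, which is exactly the behavior described above.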
5.1.2. Quantum limit (ω_0 ≫ T)
In order to describe the behavior of the solution in the quantum limit, it is instructive to look at the results of the numerical solution of Eq. (67) at low temperatures, see Fig. 7. In contrast to the classical case, the function f(ε) does not preserve its shape during the time evolution. However, one observes that the behavior does not change at negative energies. At positive energies we observe a sequence of peaks at energies ε_n = (n + 1/2)ω_0 that are similar to the first peak at n = 0. This behavior can be qualitatively understood as follows.
At low temperatures Eq. (63) implies that I_- ≫ I_+ (see also Fig. 7b). If the terms proportional to I_+ are dropped, Eq. (67a) describes the drift of f(ε) to high energies together with relaxation. Similarly, Eq. (67b) describes the drift to negative energies and relaxation. The L-terms in Eqs. (67) lead to decay, so the frequency regime where these terms dominate cannot contribute to the instability. For frequencies |ε| > ω_0 the L-term in Eq. (67) is large, so the region responsible for the instability is |ε| < ω_0. In the absence of I_+, the advection term proportional to I_- removes perturbations from this region. Because the values of f(ε) at high energies do not feed back on low energies, the instability disappears. In order to see the instability it is therefore essential to keep the I_+ term in the region |ε| < ω_0. From Fig. 7b we see that the main contribution to I_± comes from the vicinity of the frequencies ±ω_0/2, so the integrals I_± can be approximated by I_- ≈ f_+^2 and I_+ ≈ f_-^2, where f_± = f(±ω_0/2) (we drop nonessential numerical factors). For this reason we focus on the time dependence of these two values of f(ε). In the equation for df_-/dt we can neglect the contribution of I_- f(3ω_0/2) because it is proportional to I_- ≪ 1. Then the integral equations (67) reduce to two ordinary differential equations, with L_0 = 2 exp(−ω_0/2T). At low temperatures the linear term in these equations becomes exponentially small; as a result the instability develops exponentially slowly. Solving for the product f_+ f_- we obtain a solution of the same delayed-decay form, with t_qu* = τ/(2L_0). Eqs. (73) should be compared with the solution (70): they describe a similar relaxation but with exponentially smaller rates. Although the behavior of the solution (73) is similar to the one in the high temperature limit, there is an important difference: the relaxation is determined only by a narrow advection region at low energies, whereas the high energy region plays the passive role of a sink. Furthermore, the non-linearity appears first at high energies, but this does not affect the fact that the dynamics is determined by the narrow region at low energies that sets the value of I_+.
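The crossover between the two regimes can be made concrete by evaluating the two time scales quoted above, t_cl* = τω_0/(8T) and t_qu* = τ/(2L_0) with L_0 = 2 exp(−ω_0/2T). The short sketch below does nothing more than tabulate these expressions (in units where τ = ω_0 = 1); the prefactors follow the extracted text and only the exponential versus power-law temperature dependence should be taken as the point of the illustration.

```python
# Evaluate the classical and quantum instability time scales quoted in the text:
#   t_cl* = tau*omega_0/(8*T)                          (classical regime, omega_0 << T)
#   t_qu* = tau/(2*L_0),  L_0 = 2*exp(-omega_0/(2*T))  (quantum regime,  omega_0 >> T)
# Units: tau = omega_0 = 1; temperatures are given in units of omega_0.
import numpy as np

tau, omega0 = 1.0, 1.0
print(" T/omega0    t_cl*/tau      t_qu*/tau")
for T in (10.0, 1.0, 0.5, 0.2, 0.1, 0.05):
    t_cl = tau * omega0 / (8.0 * T)
    L0 = 2.0 * np.exp(-omega0 / (2.0 * T))
    t_qu = tau / (2.0 * L0)
    print(f"{T:8.2f}   {t_cl:10.3f}   {t_qu:12.4e}")
```

Once T drops below ω_0, the power-law growth of t_cl* is replaced by the exponential growth of t_qu*, consistent with the statement above that the instability develops exponentially slowly at low temperatures and that the characteristic time diverges at T = 0.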
Instability for electron-electron interaction.
As for electron-phonon scattering, one can focus only on the energy dependence of the off-diagonal functions and assume that the diagonal functions correspond to equilibrium. As a result, the time evolution of the functions f, f̄ is described by equations similar to Eq. (67), where the quantity playing the role of 1/τ is the Fermi liquid relaxation rate and E*_F is a parameter built from the electron density of states and the interaction constant.⁸ The functional K(ε, f, f̄) is of second order in f and of first order in f̄. Formally these equations are similar to Eqs. (67) for the electron-phonon scattering; the only difference is that instead of one mode, Eqs. (76) contain an integral over frequencies. At large T the functionals K(f, f̄) and K̄(f, f̄) are dominated by the terms proportional to I_-(ω), which describe a drift to larger frequencies for f(ε) and to smaller frequencies for f̄(ε). The feedback that results in dissipation is due to energies of order T. The qualitative properties of these equations are thus captured by the simplified equations for the two characteristic values f_± = f(±T). These equations have exactly the same form as Eqs. (71, 72), with the important difference that L_0 ∼ 1. Thus their solution is given by the equations (70) with characteristic decay time t_* ∼ τ_FL.
It is very important that although the relaxation rate in a Fermi liquid becomes very large at high energies, the processes involving high-energy electrons do not contribute to the instability of the equations (74) for the non-diagonal parts. Instead, the instability is controlled by the same processes as the physical relaxation and has a characteristic time scale given by the electron-electron relaxation time at temperature T.
⁸ In a three-dimensional Fermi liquid E*_F ∼ E_F, where E_F is the Fermi energy; in a two-dimensional Fermi liquid it contains additional ln(E_F/T) factors [30], while in 1D models two-particle collisions do not lead to dissipation.
Equations for the spatial structure of the instability
As we have seen in section 5, the instability in zero-dimensional models is always controlled by equations similar to (62). This equation can be derived and solved analytically in the case of the electron-phonon model at high temperatures, but it provides a qualitative description in the other cases as well. To resolve the spatial structure of the instability we thus begin with the electron-phonon model at high temperatures. The presence of spatial structure changes the quantum kinetic equations (62) very little (apart from introducing the spatial dependence). Because the phonons in this model are local, the equations for θ and θ̄ contain the fermionic functions taken at the same spatial point. As in section 5.1, in the limit of low phonon density the phonon relaxation is fast, so we can solve for the local θ and θ̄. Performing the standard spatial gradient expansion in the LHS of the kinetic equation (45), we find that df/dt in (62) acquires an additional diffusion term. Parametrizing the solution by the ansatz (68) we obtain the final equation (77). This equation is the central result of this paper. As we argue below, it holds (with small modifications) for other models as well.
At non-zero temperature the electron-phonon interaction leads to diffusive motion of the electrons, characterized by the momentum relaxation time τ_tr, so that the diffusion coefficient is D_* = v_F^2 τ_tr/d, where v_F is the Fermi velocity and d is the spatial dimensionality. At high temperatures, ω_0 ≪ T, the transport relaxation rate is given by 1/τ_tr = λ^2 ν (n_ph T/ω_0), with n_ph T/ω_0 having the meaning of the thermal phonon density. In this case the energy relaxation of the electrons becomes parametrically slower than their momentum relaxation: 1/τ_e = (ω_0/T) λ^2 ν n_ph.
The diffusion approximation used to derive Eq. (77) can be rigorously justified only if the resulting gradient of φ is small on the scale of the mean free path, v_F τ_tr. This happens only if t_cl* ≫ τ_tr, which can occur if the electron diffusion is additionally slowed down by impurity scattering, so that 1/τ_tr exceeds the phonon contribution 1/τ_tr^(ph). Both the diffusion coefficient D_* and the time t_cl* depend on the local temperature T(r, t) and the electron density n(r, t). Those quantities are described by the standard diffusion and thermal diffusion equations for the diagonal components, and their solutions have to be used as input parameters for Eq. (77). This scheme gives the complete description of the quantum butterfly effect. Notice that, depending on the particular model, D_* may coincide with the particle or thermal diffusion coefficients or may differ from them by a numerical factor.
A very similar equation can be put forward for the model of the electron-electron interaction. Taking into account that the effective equation for the electron-electron interaction is formally the same as that for the electron-phonon case, we write Eq. (78). The only modification here is the appearance of the drift term v·∇φ, which is dictated by Galilean invariance for ξ_p = p^2/2m − ε_F. The macroscopic velocity v(r, t), the local temperature T(r, t), and the electron density n(r, t) are controlled by the usual equations of local hydrodynamics and thermal (entropy) diffusion [31]. It is possible to generalize Eq. (78) to the case of relativistic hydrodynamics. Based on Lorentz invariance one obtains Eq. (79), where covariant and contravariant components are related by the (arbitrary) metric tensor and u^i is the standard four-component velocity vector with the local constraint u_i u^i = 1.
To close the section, let us emphasize that the coefficients D * , t * do not affect the diagonal entropy production and do not enter the usual Onsager relations. It is unknown to us whether there is an analogue of the H-theorem that includes the non-diagonal distribution functions as well.
7. Spatial Propagation of the instability: combustion waves.
All equations of this type possess two stationary solutions (y = 0 and y = 1 in the case of FKPP); one of them is stable, the other is not. In particular, our Eq. (77) displays the instability of the solution φ(r) = 1, which evolves according to the following scenario. After being seeded at time t = 0 with a small deviation δφ(r) = 1 − φ(r) ≪ 1 in a region around the origin (i.e. δφ = 0 for r > R_c), the instability remains localized in the area where it was seeded (r < R_c) for a time t_d ∼ ln(1/δφ). After this initial period, the instability starts to grow spatially, forming a non-linear wave that moves with a well defined velocity v_cw. For Eq. (77) in 1D the solution φ_f(x − v_cw t) for the front moving with constant velocity v obeys Eq. (80). As is established in the theory of combustion [34], the value of the front velocity can be found from the study of the solution of Eq. (80) at x → ∞, where δφ → 0. At δφ ≪ 1 the solution of Eq. (80) behaves as δφ ∼ exp(−kx), with k obeying the constraint (81). For the initial conditions that correspond to δφ = 0 for r > R_c the solution quickly converges to the one moving with the minimal velocity allowed by the constraint (81). The presence of other solutions (with higher velocities) is due to the fact that for the (non-physical) initial conditions that differ from unity everywhere, the instability at large r might develop independently of the seed at small r. One concludes that the combustion wave moves with velocity v_cw = 4√(D_*/t_*). Note that for the electron-phonon and electron-electron models D_* ∼ t_* v_F^2 in the absence of electron-impurity and other elastic scattering, so that the front velocity is v_cw ∼ v_F. Because no perturbation (even an unphysical one) can propagate with a velocity larger than v_F, v_cw ≲ v_F.
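Since the dimensionless form of Eq. (77) is not reproduced in this extraction, the sketch below uses the standard FKPP normalization ∂_t φ = ∂_x²φ − φ(1 − φ), which has the same two fixed points and the same velocity-selection mechanism; with this normalization the selected front speed is 2 in units of √(D_*/t_*), and any difference from the prefactor quoted above simply reflects a different normalization of the reaction term. The grid, seed size and time step are illustrative choices.

```python
# Minimal sketch (assumed FKPP normalization, not the paper's equation or code):
#   d(phi)/dt = d^2(phi)/dx^2 - phi*(1 - phi)
# phi = 1 is the unstable "correlated" state, phi = 0 the stable "uncorrelated" one.
import numpy as np

nx, L, dt, nsteps = 2000, 400.0, 0.01, 15000
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
phi = np.ones(nx)
phi[x < 5.0] = 1.0 - 1.0e-3              # small localized seed, delta_phi(0) = 1e-3

times, front = [], []
for n in range(nsteps):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    phi += dt * (lap - phi * (1.0 - phi))
    phi[0], phi[-1] = phi[1], phi[-2]    # zero-flux boundaries
    if n % 1000 == 0:
        below = np.where(phi < 0.5)[0]   # region already "scrambled"
        front.append(x[below[-1]] if below.size else 0.0)
        times.append(n * dt)

# late-time front velocity from a linear fit to the front position
v = np.polyfit(times[len(times) // 2:], front[len(front) // 2:], 1)[0]
print(f"front velocity ~ {v:.2f}  (FKPP minimal velocity = 2 in these units)")
```

In this toy run the front first sits still for a delay of order ln(1/δφ) and then advances linearly; the measured slope approaches the minimal velocity from below, as expected for pulled fronts seeded by compactly supported initial conditions.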
In order to check the conclusions of the semi-quantitative analysis presented above, we have studied numerically the front propagation in the dimensionless equation and in the similar equation describing the evolution of both electrons and phonons, which corresponds to the situation in which the phonon dynamics is of the same order as the electron one (i.e. η = 1). We found that in all cases and in all dimensions (d = 1, 2, 3) the front quickly assumes a well defined shape and starts to move with a constant velocity. We note that this conclusion for the two-component (electron and phonon) systems is not obvious, because such equations are known to display more complex behavior in some cases.

We now apply the findings of the previous sections, namely the instability of the off-diagonal part of the Green function, to the computation of the out-of-time-ordered correlator (14). In the conventional theory the correlator of two operators at large separations in time or space factorizes, Eq. (84). The corrections to this factorization are given by the irreducible correlator, which decreases quickly with distance and time. In the electron models considered here the irreducible part is small in 1/(p_F r) and 1/(ε_F t). Furthermore, in a conventional theory one can evaluate both averages in the RHS of (84) against the background of the unperturbed states. The crucial difference of the two-Worlds theory is that the second term in this factorization is unstable. Thus, it is not correct to replace it by its value for the fully correlated, unperturbed state: a small deviation from this value at short distances grows quickly and eventually reduces it to zero in the whole system. Instead one should use for it the results of the solution of the equations for the Green functions discussed in the previous sections. In particular, for the response operator in correlator (14) we get R_{t,t}(r) = 2πν [ f(ε, t, r) − f̄(ε, t, r) ],
where we emphasized that the augmented distribution functions f and f̄ are in general functions of the position in space as well. The space-time dependence of these functions is determined by the equations derived in Sections 5-7. The average of the source operator is a constant factor; for the correlator (14) it is given by the total density of electrons, n_el. As discussed in Section 5, the time dependence of the augmented distribution functions is simplified in the high temperature regime of the electron-phonon model. In this case the form of the energy dependence of the augmented distribution function does not change with time; the time dependence shows up only in the factor φ(r, t): f(ε, t, r) = φ(r, t) f_0(ε). In this case we can write the final result for the Wigner transform of the out-of-time-ordered correlator in the closed form Ã_ρρ(ε, t, r) = 2πν n_0(ε) n_el φ(t, r), where φ(t, r) is the solution of the equations (77-79) appropriate for a particular model, with the initial conditions (89). Here δφ(r) ≃ δ̃(r)/n_el ≃ δ̃(r) p_F^{−d} describes the perturbation resulting from the introduction of one extra electron in the down World, which serves as the seed of the instability. Here δ̃(r) denotes the smeared δ-function that appears because the equations for the distribution function are valid only on time scales larger than the collision time τ_tr, so the addition of one particle at time t = 0 in the down World translates into a density spread over a distance l_tr ∼ v_F τ_tr in the initial conditions of Eqs. (77-79). As a result, the δ-function in Eq. (90) has to be replaced by δ̃(r), which is smeared over distances of the order of the mean free path, l_tr ∼ v_F τ_tr. The solutions of the equations (77-79) correspond to the propagation of a front, as illustrated by Fig. 8.
Note that the particular symmetric form (16) of the response operator computed here has the property that it vanishes at coinciding times and coordinates. This property disappears for less symmetric forms of the correlators, for instance if τ_1^a is replaced by, e.g., τ_- = (τ_1 − iτ_2)/2 in the definition of the response operator (16). In this case the first term in (85) disappears and, after integration over energies, we obtain the corresponding closed-form expression. At low temperatures for the electron-phonon model, and for the electron-electron interaction at any temperature, the energy dependence of the augmented distribution function changes with time as well (Sections 5.1.2, 3.3). In this case the equations for the augmented distribution function are more complicated, but the solution remains qualitatively similar.
In all cases, the augmented distribution function that controls the spatial and time dependence of the out-of-time-ordered correlator describes front propagation: the state of the system ahead of the front has not yet been affected by the perturbation, whilst the state of the system behind the front is characterized by exponentially vanishing correlations. The delay time t_d in these equations is controlled by the initial conditions (89) to Eqs. (77-79) or similar, and it depends only logarithmically on the strength of the initial perturbation. Equation (90) enables us to estimate this strength: indeed, δ̃(0) ≃ 1/(v_F τ_tr)^d, so we estimate δφ(0) ≃ 1/(p_F l_tr)^d and the corresponding delay time. It is worthwhile to notice that this expression is somewhat similar to the Ehrenfest time [11,35] appearing as the delay time for the quantum corrections in quantum chaos of a non-interacting system; in that one-electron problem a real instability does not occur. In a finite system of spatial size R the correlator (93) decreases exponentially to zero for all r < R after t_scr = t_d + R/v_cw. The time t_scr has the meaning of the time at which the two worlds become completely uncorrelated due to a local perturbation; this is also the time that it takes for the quantum information to be spread over the whole system (scrambling time). We see that although the propagation of the information is controlled by diffusion, it occurs with a constant velocity due to the non-linearity of the equations. The diffusion coefficient controls the velocity of this propagation.
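To give a feeling for the scales involved, the short estimate below plugs illustrative numbers (all of them assumptions, not values from the text) into the relations quoted above: the seed strength δφ(0) ≃ 1/(p_F l_tr)^d, the logarithmic delay time, and t_scr = t_d + R/v_cw with v_cw ∼ √(D_*/t_*).

```python
# Order-of-magnitude sketch of the delay and scrambling times discussed above.
# Every input number here is an illustrative assumption.
import numpy as np

d = 3                      # spatial dimensionality
pF_ltr = 1.0e2             # assumed quasiclassical parameter p_F * l_tr >> 1
t_star = 1.0               # instability time t_*, arbitrary units
D_star = 1.0               # diffusion coefficient in matching units

delta_phi0 = 1.0 / pF_ltr**d                 # seed from one extra particle
t_d = t_star * abs(np.log(delta_phi0))       # delay, logarithmic in the seed
v_cw = np.sqrt(D_star / t_star)              # front speed up to an O(1) factor
R = 1.0e3                                    # assumed linear system size
t_scr = t_d + R / v_cw                       # time to decorrelate the whole system
print(f"t_d   ~ {t_d:7.1f} t_*")
print(f"t_scr ~ {t_scr:7.1f} t_*")
```

Even for a modest quasiclassical parameter the delay is only a few tens of t_*, so for a large system the scrambling time is dominated by the ballistic term R/v_cw.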
The propagation with constant velocity (87,93) also indicates that in a chaotic many body system the entanglement entropy spreads ballistically despite the diffusive nature of the dynamics. This analytical result confirms the empirical conclusions reached in a number of numerical works.
As we discussed above, the conclusion that the quantum butterfly effect propagates linearly, controlled by combustion-like equations, is quite general. The details of the equations are sensitive to the microscopic model, but propagation similar to a combustion front occurs in all of them.
Discussion and conclusions
We developed a technique to study out-of-time-ordered correlators, such as Eq. (1), based on an extension of the Keldysh technique. Similarly to the standard Keldysh technique, the augmented technique enables the analytical study of systems in different limits, in particular obtaining the leading result in the quasiclassical approximation and systematic corrections to it. As in the Keldysh technique, the quasiclassical approximation is valid provided that the particle motion between collisions is quasiclassical, whilst the collisions themselves can be quantum.
We limited ourselves to the leading quasiclassical terms, which result in equations similar to the kinetic equation of traditional statistical mechanics. We found that they describe all (or most of) the non-trivial behavior of the out-of-time-ordered correlators. The major difference from the traditional kinetic equation is the appearance of the off-diagonal functions, superficially similar to the conventional distribution function. However, unlike the state occupation probabilities, these new functions also describe the overlap between two copies of the system. The kinetic equation for the off-diagonal functions is dramatically different from that for the diagonal functions: the outgoing term depends on both diagonal and off-diagonal functions, whilst the incoming term contains only off-diagonal ones.
The solution with initially unit overlap between the two copies becomes unstable when disturbed by a very small perturbation, a phenomenon known as the quantum butterfly effect. This instability is described at long times (longer than collision times) by non-linear diffusion equations similar to those appearing in combustion front propagation. After an initial transient the front of the propagating wave acquires a constant velocity and a shape that does not depend on the initial conditions (Section 7). In the electron models studied in this paper, the velocity of the front is of the order of (but less than) the Fermi velocity, which serves as a natural bound for the propagation speed. In the presence of impurity scattering the velocity of the front can become parametrically smaller than the Fermi velocity. The microscopic model of electrons interacting with a dilute set of oscillators solved in this work might provide a description of the loss of coherence in a set of two-level systems (TLS) that provide both elastic and inelastic scattering for electrons, with the latter becoming small at low temperatures.
Our work suggests a number of exciting developments. First, the quantum butterfly effect studied here can be viewed as the result of the gradual entanglement of the local degrees of freedom with a larger and larger part of the surrounding system, and thus is likely to be related to the propagation of entanglement entropy discussed extensively in recent works [36,37,38,39,7]. Our results would put these works on the firm ground of an analytical theory if the relation between non-diagonal correlators and entanglement entropy were established. We hope to return to this point in future works.
The quantum butterfly effect can be studied numerically and compared with the analytical theory developed here. Also, the destruction of the coherence between two copies of the system might be a useful tool to study the appearance of the arrow of time in systems described by unitary evolution. Finally, the destruction of quantum coherence between two copies of the system is a very important phenomenon for quantum information protocols that are based on the construction of initially perfectly entangled states of two (or more) interacting qubit systems, because a small perturbation to one of these systems would result in a spreading decoherence wave described by our equations.
The spatial and time scales of the effective non-linear diffusive equations that describe the instability of the coherent solution are sensitive to the details of the microscopic theory. Furthermore, their relation to the scales appearing in physical observables is not expected to be universal. Thus they might provide a new tool and a new way of thinking about microscopically different systems that display similar properties, such as conductivity.
Our formalism can be extended to the study of many-body localization by augmenting the formalism developed in the work [40]. This would provide an analytical approach to, and a qualitative understanding of, a problem for which only numerical results are currently available [41,42,43,44]. It might even help to describe the transition itself, and even the entanglement propagation in generic glassy systems. Moreover, the question of the propagation of the decoherence front in localized systems is similar to the problem of decoherence propagation in integrable systems. Note that according to [40,45], in localized and integrable systems the collision integral disappears, resulting in the suppression of the chaotic behavior that is responsible for the quantum butterfly effect.
Finally, the microscopic systems studied in this work are described by combustion equations that display only laminar solutions. However, combustion equations for systems with a few components are known to display a large variety of interesting behaviors: Turing instabilities [46] and Zhabotinsky cycles [47], to name just a few. It remains to be seen whether such solutions are realized in microscopic models as instabilities of the correlated-worlds solution. In particular, they might appear as solutions against the background of non-equilibrium states, such as the turbulent hydrodynamics of a normal or superfluid liquid.
Acknowledgement
We acknowledge extremely useful discussions with Alexei Kitaev and the hospitality of CTP CSIBS, Daejeon, Korea. Our research was supported by ARO grant W911NF-13-1-0431 and by the Russian Science Foundation grant # 14-42-00044. | 2016-09-14T13:03:57.000Z | 2016-09-05T00:00:00.000 | {
"year": 2016,
"sha1": "e73e418758c2616d129f6b14996c21778afdc4cf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1609.01251",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a5bc3059ad9f8ac498af4ab94d34196a67b1edd1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
84143982 | pes2o/s2orc | v3-fos-license | Fungicide tolerance of Trichoderma asperelloides and T. harzianum strains
Tolerance was developed in isolates of Trichoderma by exposing two strains of T. harzianum and three of T. asperelloides to increasing concentrations of chemical fungicides. These isolates were exposed to three fungicides: Captan, Thiabendazole and the mixture Captan-Carboxin. Some selected lines of these strains reached tolerance to Captan and partial tolerance to the mixture Captan-Carboxin. The biological and genetic changes in these tolerant lines were monitored by determining the relative growth rate of the fungus and the inhibition of Fusarium, and by analyzing genomic changes through UP-PCR. The results show that tolerance to fungicides can be developed without affecting the parameters of biological activity of these Trichoderma lines (growth and parasitism against Fusarium). Chemical tolerance to the fungicides was accompanied by changes at the DNA level (UP-PCR), mainly in the lines tolerant to Captan. This suggests that Trichoderma can survive in environments containing fungicide residues.
INTRODUCTION
A strategy for the biological control of plant diseases caused by soil-borne pathogenic fungi is the use of species of Trichoderma; this genus also includes species of economic importance for the industrial production of antibiotics and enzymes. In agriculture, these fungi improve plant growth and development and have biological control activity against other fungi and nematodes [1-4]. It has been found that the persistent use of fungicides could weaken this natural antagonistic activity [5]. However, Trichoderma has the capability of degrading xenobiotic compounds [6-8], and there are tolerant Trichoderma strains that can survive field concentrations of chemical fungicides. Several approaches can be used to obtain Trichoderma strains resistant to chemical fungicides: Goldman et al. [9] and Mukherjee et al. [10] have successfully obtained T. viride and T. pseudokoningii strains tolerant to chemical fungicides. The resistance of some fungi to chemical fungicides is due to genetic mutations, which reduce the susceptibility to the fungicides and decrease their efficacy [9,11-13].
Strategy for the Selection of Tolerant Trichoderma Lines to Chemical Fungicides
Before the selection experiments were started, the Trichoderma strains were grown in potato dextrose agar (PDA) supplemented with the chemical fungicides at increasing concentrations. The final concentrations used in the field are: Captan 1132.5 ppm, a 1:1 Captan-Carboxin mixture 2000 ppm, and Thiabendazole 450 ppm. The objective was to determine the natural fungicide tolerance of the five Trichoderma strains. The selection of lines tolerant to the chemical fungicides was performed by successive cultures of the Trichoderma strains in PDA supplemented with the corresponding fungicide at increasing concentrations. Five-mm-diameter disks from 10-day-old Trichoderma cultures were placed on PDA with the chemical fungicides, and mycelial growth was measured on days 1, 2, 3, 4, 5, 7 and 11. Trichoderma lines displaying more than 20 mm of growth were selected to be grown at the next chemical fungicide concentration in subsequent rounds of selection. Strains that grew 20 mm or more in diameter after 10 days of incubation continued in the selection media; strains that grew poorly (less than 20 mm in diameter) or did not sporulate were discarded. Tolerant strains were subjected to further selection experiments with increasing fungicide concentrations for as long as the Trichoderma lines were able to sporulate.
To evaluate the tolerance of the selected Trichoderma lines to the chemical fungicides, they were grown in 30 ml of liquid medium (yeast extract 2.5%, glucose 2.5%, NaNO3 0.2%) in 125-ml Erlenmeyer flasks supplemented with the fungicides, for four days at 28˚C and 125 rpm. This experiment was performed twice, and in each run two replicates were set up for each Trichoderma line.
Evaluation of the Antagonistic Activity and Growth of the Tolerant Trichoderma Selected Lines
The antagonistic activity of the selected tolerant Trichoderma strains was compared to that of the wild-type strains by placing a 3-mm-diameter disk from a 5- to 8-day-old Fusarium oxysporum culture on PDA. After 24 h, a 3-mm-diameter disk of the Trichoderma strain was placed 3 mm away from the plant pathogen. Each treatment was done in triplicate and incubated at 25 ± 1˚C under light. The antagonistic activity of the Trichoderma strains was estimated according to two criteria: the plant pathogen growth inhibition radius (IR) and the antagonism class system described by Bell et al. [15]. Mean growth rates and IR values were analyzed by ANOVA and Fisher's least significant difference (LSD) test to determine statistically significant differences.
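For readers who wish to reproduce this kind of comparison, the sketch below shows one way the ANOVA-plus-LSD analysis described above could be set up; the growth-rate values, group names and replicate counts are placeholders invented for illustration, not the measurements of this study.

```python
# Minimal sketch of the analysis described in the text: one-way ANOVA followed by
# Fisher's least significant difference (LSD) test at alpha = 0.05.
# The growth-rate values below are placeholders, NOT data from the study.
import numpy as np
from scipy import stats

groups = {
    "wild type":            [0.52, 0.55, 0.53],
    "Captan-tolerant":      [0.44, 0.46, 0.43],
    "Captan-Carboxin line": [0.49, 0.50, 0.48],
}
data = [np.asarray(v, dtype=float) for v in groups.values()]
names = list(groups.keys())

F, p = stats.f_oneway(*data)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Fisher's LSD: pairwise comparisons using the pooled within-group mean square error
n_total = sum(len(d) for d in data)
k = len(data)
df_error = n_total - k
mse = sum(((d - d.mean()) ** 2).sum() for d in data) / df_error
t_crit = stats.t.ppf(1 - 0.05 / 2, df_error)
for i in range(k):
    for j in range(i + 1, k):
        lsd = t_crit * np.sqrt(mse * (1.0 / len(data[i]) + 1.0 / len(data[j])))
        diff = abs(data[i].mean() - data[j].mean())
        verdict = "different" if diff > lsd else "not different"
        print(f"{names[i]} vs {names[j]}: |diff| = {diff:.3f}, LSD = {lsd:.3f} -> {verdict}")
```

Any statistics package offering one-way ANOVA and pairwise LSD comparisons would serve equally well; the only requirement of the procedure described above is that the pairwise tests use the pooled error term from the ANOVA.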
Identification of Molecular Characteristics of Trichoderma Fungicide Tolerant Selected Lines
The DNA of the fungicide-tolerant Trichoderma strains was analyzed through universal primer PCR (UP-PCR), a multi-site amplification technique [16,17]. The amplification patterns of these strains were compared to those of the wild-type strains. DNA extraction was performed from 200 mg of lyophilized fungal mycelium according to the method described in [18]. The PCR amplification mixture was composed of PCR buffer 1X, MgCl2 3 mM, dNTPs 0.2 mM, primer 1.6 µM, Taq DNA polymerase 1 U and 25 ng of DNA, with distilled water to a final volume of 25 µl. The following amplification program was used: initial denaturation at 94˚C for 2.5 min, followed by 30 cycles of 92˚C for 50 s, 53˚C for 90 s and 72˚C for 30 s, with a final extension at 72˚C for 3 min. The UP primers used were L-45 (5' GTAAAACGACGGCCAGT 3') and L-15 (5' GAGGGTGGCGGCTAG 3'). All amplification reactions were performed at least in duplicate. Amplification products were separated in 2% agarose, stained with ethidium bromide and visualized on a UV transilluminator. Additionally, a specific DNA fragment of the β-tubulin gene was amplified and used as a target to diagnose resistance to the fungicide Thiabendazole [19].
Selection of Tolerant Trichoderma Lines to Chemical Fungicides
After five rounds of selection, it was noticed that although 10 out of 15 Trichoderma lines used in the experiments met the mycelial-growth selection criterion of at least 20 mm of colony radius in 10 days, the speed of growth in all cases was lower than that of the wild-type strains (Table 1). Natural tolerance to the field dose of the chemical fungicide Captan (1132.5 ppm) was observed in all the T. asperelloides and T. harzianum strains evaluated in this study. In general, isolates of T. harzianum were less tolerant to the chemical fungicides than isolates of T. asperelloides. At the end of the 9 rounds of selection with the chemical fungicides, tolerance to Captan varied between 176% and 207% of the dose recommended for field application. Isolates of T. harzianum could not develop tolerance to the fungicide Thiabendazole or to the mixture Captan-Carboxin. T. asperelloides isolates T-19, T-84 and T-109 were able to grow and to sporulate in culture medium containing 75% of the dose recommended for field application (2000 ppm). In contrast, none of the evaluated strains was able to develop tolerance to the fungicide Thiabendazole at a concentration below 20 ppm. Selected tolerant strains cultured in liquid medium supplemented with the chemical fungicides Captan and Captan-Carboxin (Table 2) did not show differences from the wild-type strains grown without fungicides after four days of culture (data not shown).
Analysis of the growth rate (mm/h) of the fungicide-tolerant Trichoderma lines compared to the wild-type strains shows that this parameter was affected in 8 of the 10 selected tolerant lines. Statistical analysis indicates that the growth rate of six tolerant lines was lower than that of the wild-type strains (Table 3).
Antagonism Tests of Tolerant Trichoderma Lines to Chemical Fungicides
The antagonism test was performed with the plant pathogen Fusarium oxysporum and measured as the IR. It was observed that all tolerant Trichoderma strains kept their antagonism class 2, similar to the wild-type Trichoderma, but the T. asperelloides T-19 Thiabendazole line shifted to antagonism class 3 (Table 3). Comparison of the mean IR values displayed by the fungicide-tolerant Trichoderma lines indicated that some lines have IR values significantly higher than those of the wild-type strain, as in the case of T. harzianum T-7 selected with Captan and T. asperelloides T-19 selected with Captan-Carboxin, while in the other cases the IR was the same as or significantly lower than that of the wild-type strain (Table 3). Taking into account that one of the criteria used in the selection experiments was the ability of the tolerant lines to sporulate, the microscopic study performed indicates that all the selected Trichoderma lines kept this characteristic except for T. asperelloides T-19 exposed to Thiabendazole (data not shown).
Molecular Analysis
PCR analysis of the Captan-Carboxin lines and the wild-type Trichoderma strains showed different amplification patterns, such as the deletion or addition of DNA bands. DNA amplified with primer UP-L45 indicated that the T. asperelloides strains T-19 and T-84 selected with the fungicide mixture Captan-Carboxin contain the same genetic changes relative to the wild-type strains: both lost a 1400 bp DNA band, while bands of 1150, 500 and 450 bp were new in the fungicide-treated lines (Figure 1).
Although the PCR diagnostic test designed to identify Thiabendazole-susceptible/resistant genotypes indicated that there were no changes in the β-tubulin gene (Figure 2(A)), a change at the DNA level was observed when primer UP-L45 was used. This change is illustrated by the appearance of a new 400 bp band in both selected Trichoderma lines (Figure 2(B)). Treatment of the Trichoderma strains with the chemical fungicide Captan induced the largest changes at the DNA level among the fungicides tested; primer UP-L45 was used for detection (Figure 3).
DISCUSSION
T. asperelloides and T. harzianum contain strains that could be of importance in the biological control of plant pathogens [20-22]. The Trichoderma strains used in this study were isolated from different geographical areas and from different sources. All of them were naturally tolerant to the recommended concentration of the chemical fungicide Captan, and exposure of the Trichoderma strains to increasing concentrations of this fungicide allowed the selection of tolerant lines. Fungicide resistance is a stable, inheritable adjustment by a fungus to a fungicide, resulting in reduced sensitivity of the fungus to the fungicide; resistant isolates are less affected, or not inhibited at all, by application of the fungicide [23]. The fungicide can in fact still control sensitive isolates, so that naturally resistant isolates may become dominant in populations under the selection pressure of the fungicide. This phenomenon was evident in our assays, showing that Trichoderma has a natural ability to tolerate fungicides, which is called 'natural' or 'inherent' resistance. Resistance also arises as a response to the repeated use of the fungicide, or to the repeated use of another fungicide that is chemically related and/or shares the biochemical mechanism of antifungal action [24].
Ruocco et al. [25] explained that the ability of Trichoderma to withstand relatively high concentrations of a variety of synthetic and natural toxic compounds, including its own antibiotics, depends on efficient cell detoxification mechanisms supported by a complex system of membrane pumps. It is now well known that the genome of Trichoderma includes ABC (ATP-binding cassette) transporters, members of a protein superfamily that effluxes drugs from cells. Such transporters may provide a mechanism of protection against cytotoxic drugs and xenobiotic agents. The natural function of ABC transporters in plant-pathogenic fungi may relate to the transport of plant-defense compounds or fungal pathogenicity factors [26]. ABC transporters may explain the natural tolerance of Trichoderma to fungicides, and its ability to survive successfully in extreme environments.
Growth of the T. asperelloides and T. harzianum strains in liquid medium with the fungicides Captan and Captan-Carboxin confirmed that the selected lines had developed a mechanism to tolerate exposure to homogeneous concentrations of the chemical fungicides. Tolerance to the fungicide mixture Captan-Carboxin was obtained in the treated lines of T. asperelloides strains T-19, T-84 and T-109, while some degree of tolerance to Thiabendazole was obtained only with the T. asperelloides strains T-19 and T-84. These data suggest that detoxification mechanisms are restricted to particular strains and are not present in all specimens of a taxon.
In some cases, the growth rate and IR of the tolerant Trichoderma lines were affected by the exposure to the chemical fungicides. The antagonism capacity under in vitro conditions was negatively affected in only one of the 10 tolerant lines obtained. A similar phenomenon was found in Penicillium, where Imazalil-resistant and -sensitive strains showed no difference in spore production and radial growth [27]. In two cases the antagonistic capacity was superior in the tolerant lines (T. asperelloides T-109 Captan-Carboxin and T. harzianum T-53 Captan). Analogous results were obtained by Mukherjee et al. [10] with benomyl-tolerant mutant strains of T. pseudokoningii, which were superior to the wild type in biocontrol potential against S. rolfsii. A correlation between fungicide resistance and antagonistic activity is suggested by Marra et al. [28], who affirm that the upregulated expression of ABC transporter genes of T. atroviride during the three-way interaction with various plants and fungal pathogens possibly supports both antagonistic activity and root colonization.
DNA changes were observed in the T. asperelloides lines T-19 and T-84 treated with Thiabendazole (benzimidazole group) (Figure 2(B)). The results of the diagnostic test designed by Cañas (2004) indicated that there were no changes at the β-tubulin gene level. Nevertheless, benzimidazole resistance is conferred by point mutations in the β-tubulin gene in most phytopathogenic fungi. However, exceptions have also been noticed: introduced via site-directed mutagenesis, a mutation that confers benomyl tolerance in other fungi does not impart resistance in T. viride [29]. Kawchuk et al. [30] established that the amino acid sequences of the β-tubulin genes from several thiabendazole-resistant and -sensitive isolates of Gibberella pulicaris were identical; this analysis confirmed that the β-tubulin gene was not linked to thiabendazole resistance. These results suggest that there must be other genomic regions involved in the resistance to benzimidazoles, but the exact molecular mechanism of this resistance is still unknown.
Differences in the number of genetic changes observed in the Trichoderma strains treated with the chemical fungicides could be due to their mode of action or to the approach used for tolerance development. It has been described that protectant fungicides such as Captan induce mutations in several genes, contrary to systemic fungicides, which target a particular gene or gene product [9,11-13]. This coincides with our results, since many genetic changes were observed in the Captan-tolerant Trichoderma lines compared to the wild-type strains.
The results suggest that it is possible to develop Trichoderma lines tolerant to some chemical fungicides.
Most importantly, the changes associated with this tolerance in most cases do not negatively affect the antagonistic activity of the biological control strains, and in some cases the growth rate and the IR are even increased. The molecular study performed allowed us to recognize changes at the genomic level, which in most cases are not related to a loss of biological fitness of the fungal strains.
1 Antagonism class determined according to Bell et al. (1982) after 67 hours of culture on PDA; 2 Mean values followed by the same letter within each Trichoderma strain and column are not significantly different (LSD, α = 0.05).
Table 1. Mean growth values of selected Trichoderma asperelloides and T. harzianum isolates exposed to several concentrations of the chemical fungicides Thiabendazole, Captan-Carboxin and Captan, compared to the wild-type strains after 5 rounds of selection. Mean growth values in bold correspond to the Trichoderma isolates selected to continue in the fungicide-tolerance selection experiments.
Table 2. Maximum concentrations tolerated by Trichoderma asperelloides and T. harzianum strains after multiple increasing exposures to the chemical fungicides Thiabendazole, Captan-Carboxin and Captan, under laboratory conditions.
Table 3. Mean growth rates of Trichoderma strains and antagonism against Fusarium oxysporum of wild-type and selected fungicide-tolerant lines of Trichoderma asperelloides and T. harzianum. | 2019-03-20T13:15:24.905Z | 2011-08-10T00:00:00.000 | {
"year": 2011,
"sha1": "6c272c0b0dd134e8becf69cb219b364e75c40ee5",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=6689",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6c272c0b0dd134e8becf69cb219b364e75c40ee5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
119277638 | pes2o/s2orc | v3-fos-license | Bars and secular evolution in disk galaxies: Theoretical input
Bars play a major role in driving the evolution of disk galaxies and in shaping their present properties. They cause angular momentum to be redistributed within the galaxy, emitted mainly from (near-)resonant material at the inner Lindblad resonance of the bar, and absorbed mainly by (near-)resonant material in the spheroid (i.e., the halo and, whenever relevant, the bulge) and in the outer disk. Spheroids delay and slow down the initial growth of the bar they host, but, at the later stages of the evolution, they strengthen the bar by absorbing angular momentum. Increased velocity dispersion in the (near-)resonant regions delays bar formation and leads to less strong bars. When bars form they are vertically thin, but soon their inner parts puff up and form what is commonly known as the boxy/peanut bulge. This gives a complex and interesting shape to the bar which explains a number of observations and also argues that the COBE/DIRBE bar and the Long bar in our Galaxy are, respectively, the thin and the thick part of a single bar. The value of the bar pattern speed may be set by optimising the balance between emitters and absorbers, so that a maximum amount of angular momentum is redistributed. As they evolve, bars grow stronger and rotate slower. Bars also redistribute matter within the galaxy, create a disky bulge (pseudo-bulge), increase the disk scale-length and extent and drive substructures such as spirals and rings. They also affect the shape of the inner part of the spheroid, which can evolve from spherical to triaxial.
Introductory remarks
In the ΛCDM model, galaxies are formed in dark matter haloes, and, at early times, merge frequently with their neighbours. As time evolves (and redshift decreases), the rate of mergers decreases and the evolution of galaxies changes from being merger-driven to a more internally driven one. This change is progressive and the transition is very gradual. Generally, the internally driven evolution is on a much longer timescale than the mergerdriven one. It is now usually termed secular (for slow), a term introduced by Kormendy (1979), who made in that paper the first steps in linking this evolution with galaxy morphology.
In the sixties, and partly through the seventies as well, theoretical work on galaxy dynamics was mainly analytical. The working hypothesis usually was that potentials are steady-state, or quasi-steady-state. Thus, given a potential or type of potential, theoretical work would follow the motions of individual particles, or would study collective effects aiming for self-consistent solutions, by following, e.g., the Boltzmann equation (Binney & Tremaine 2008). In this way, the basis of orbital structure theory was set and a considerable understanding of many dynamical effects was obtained. The advent of numerical simulations, however, made it clear that galaxies evolve with time, so that a quasi-steady-state approach cannot give the complete picture. Secular evolution was the general subject of this series of lectures, which were given in November 2011 in the XXIII Canary Islands Winter School of Astrophysics. My specific subject was bar-driven secular evolution and was presented from the theoretical viewpoint, although I included in many places comparisons with observations. In this written version I concentrate on a few specific topics, such as the angular momentum redistribution within the galaxy, the role of resonances in this redistribution, and its consequences for bar evolution and boxy/peanut bulges. I will discuss elsewhere the effects of gas and of halo triaxiality and clumpiness. The main tool I used was N-body simulations, and, albeit to a somewhat lesser extent, analytic work and orbital structure theory. It is only by coupling several independent approaches that the answer to complex questions, such as the ones we have tackled, can be obtained.
Introductory material, useful for a better appreciation of some aspects of bar evolution, can be found in Binney & Tremaine (2008), while further related material can be found in the reviews by Athanassoula (1984 -on spiral structure), Contopoulos & Grosbøl (1989 -on orbits), Sellwood & Wilkinson (1993 -
Introduction
N -body simulations have clearly shown that bars form spontaneously in galactic disks. An example is given in Fig. 4.1, displaying the face-on (upper panels), side-on † (middle panels), and end-on ‡ (lower panels) views of the disk component at three different times during the formation and evolution.
The left-hand panel shows the initial conditions of the simulation, the righthand one a snapshot at a time near the end of the simulation, and the central panel a snapshot at an intermediate time. Before plotting, I rotated the snapshots so that the major axis of the bar coincides with the x axis.
Note that between the times of the central and right panels both the bar and the disk have grown considerably in size, and that in both snapshots an inner ring surrounds the bar. Note also that the initially thin disk becomes thick in the inner parts. Seen side-on, it first becomes asymmetric with respect to the equatorial plane and then puffs up to reach a peanut-like shape. Seen end-on, it displays a bulge-like central concentration. From the face-on and the side-on views we can infer that this concentration is simply the bar seen end-on. In a real galaxy, however, where knowledge about the two other views would be unavailable, this could be mistaken for a classical bulge, unless supplementary photometric and/or kinematic information is available. Athanassoula (2005b) showed that this error could occur only if the angle between the bar major axis and the line of sight was less than 5-10 degrees, i.e., within a rather restricted range of viewing angles.
† For the side-on view, the galaxy is viewed edge-on, with the direction of the bar minor axis coinciding with the line of sight. ‡ The end-on view is also edge-on, but now the line of sight coincides with the bar major axis.
Such bar formation and evolution processes had already been witnessed in the pioneering N-body simulations of the early seventies and onward (e.g., Miller et al. 1970; Hohl 1971; Ostriker & Peebles 1973; Sellwood 1980, 1981; Combes et al. 1990; Pfenniger & Friedli 1991). Although technically these simulations were not up to the level we are used to now (due to lower numbers of particles, lower spatial and temporal resolution, absence or rigidity of the halo component, a 2D geometry, etc.), they came to a number of interesting results, two of which are closely related to what we will discuss here. Ostriker & Peebles (1973), using very simple simulations with only 150 to 500 particles, came to the conclusion that haloes can stabilise bars. This number of particles is too low to describe adequately the bar-halo interaction and particularly its effect on the bar growth. It is thus no surprise that their result is partly flawed. Nevertheless, this paper, together with the subsequent one by Ostriker et al. (1974), gave a major impetus to research on dark matter haloes, focusing both observational and theoretical effort on them. A subsequent study, using 2D simulations with only 40 000 particles, showed that bars grow slower in hotter disks (i.e., in disks with larger velocity dispersions). It also confirmed a result which had already been found in analytical mode calculations (e.g., Toomre 1981), namely that a higher relative halo mass decreases the bar growth rate, so that bars grow slower in disk galaxies with a larger M_H/M_D ratio, where M_H and M_D are the halo† and disk masses, respectively. These results will be discussed further in Section 4.6.8. † There is some ambiguity in general about what is meant by the term 'halo mass'. In some cases it is the total halo mass, but in others it is the mass within a radius encompassing the relevant part of the simulated galaxy. In this case, since the simulations were 2D and therefore the halo rigid, only a small-sized halo was considered, so the two definitions coincide.
Orbits and resonances
Before starting on our quest for understanding the main bar formation and evolution processes, let me first give a brief and considerably simplified description of some basic notions of orbital structure theory. Readers interested in more thorough and rigorous treatments can consult Arnold (1989) and Lichtenberg & Lieberman (1992). Let me consider a very simple potential composed of an axisymmetric part (including all axisymmetric components) and a rigid bar rotating with a constant angular velocity Ω p . It is in general more convenient to work in a frame of reference which co-rotates with the bar, in order to have a time-independent potential (Binney & Tremaine 2008) and I will simplify things further by restricting myself to 2D motions. Any regular galactic orbit in this potential † can be characterised by two fundamental frequencies, Ω i , i = 1,2. In the epicyclic approximation these are Ω, the angular frequency of rotation around the galactic centre, and κ, the epicyclic frequency, i.e., the frequency of radial oscillations. We say that an orbit is resonant if there are two integers l and m such that lκ + m(Ω − Ω p ) = 0. (4.1) The most important resonances for our discussions here will be the Lindblad resonances (inner and outer) and the corotation resonance. The inner Lindblad resonance (hereafter ILR) occurs for l = −1 and m = 2. Therefore, in a frame of reference co-rotating with the bar, such orbits will close after one revolution around the centre and two radial oscillations ( Fig. 4.2).
Similarly, the outer Lindblad resonance (hereafter OLR) occurs for l = 1 and m = 2. For l = 0 we have the corotation resonance (hereafter CR), where the angular frequency is equal to the bar pattern speed, i.e., the particle corotates with the bar.
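As an illustration of how the resonance condition (4.1) is used in practice, the short sketch below locates the ILR, CR and OLR for an assumed flat rotation curve (v_c = const, a choice made purely for illustration; the circular speed and pattern speed values are likewise arbitrary). It uses the epicyclic relations Ω = v_c/r and κ² = r dΩ²/dr + 4Ω², which for a flat curve give κ = √2 Ω; the root-finding version generalizes directly to any tabulated Ω(r).

```python
# Sketch: locate the main bar resonances for an assumed flat rotation curve
# v_c(r) = v0 (an illustrative choice, not taken from the text).  With
# Omega = v_c/r and kappa^2 = r*d(Omega^2)/dr + 4*Omega^2, a resonance
# satisfies l*kappa + m*(Omega - Omega_p) = 0.
import numpy as np
from scipy.optimize import brentq

v0 = 220.0        # km/s, assumed flat circular speed
Omega_p = 40.0    # km/s/kpc, assumed bar pattern speed

def Omega(r):
    return v0 / r

def kappa(r):
    # for v_c = const:  kappa = sqrt(2) * Omega
    return np.sqrt(2.0) * Omega(r)

def resonance_radius(l, m, rmin=0.1, rmax=100.0):
    f = lambda r: l * kappa(r) + m * (Omega(r) - Omega_p)
    return brentq(f, rmin, rmax)

r_ilr = resonance_radius(-1, 2)   # inner Lindblad resonance
r_cr  = resonance_radius( 0, 2)   # corotation
r_olr = resonance_radius( 1, 2)   # outer Lindblad resonance
print(f"ILR: {r_ilr:.2f} kpc   CR: {r_cr:.2f} kpc   OLR: {r_olr:.2f} kpc")
```

For a more realistic rotation curve Ω − κ/2 need not be monotonic, and a given pattern speed can yield zero, one, or two inner Lindblad resonances.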
Contrary to regular orbits, chaotic orbits (often also called irregular orbits) do not have two fundamental frequencies and this property can be used to distinguish them from regular orbits with the help of what is often called a frequency analysis (Binney & Spergel 1982;Laskar 1990). Let us also briefly mention the so-called sticky orbits. Information on the dynamics and properties of such orbits can be found in Contopoulos (2002). Here we will only mention that, classified by eye, such orbits can be seen as being, say, regular over a given interval of time and then, within a relatively short time, turning to chaotic. Not too many years ago the existence and effect of non-regular orbits on the structure and dynamics of galaxies was generally neglected, but it is becoming progressively clear that this was wrong, so that such orbits are now known to play a considerable role in many fields of galactic dynamics.
By definition, resonant orbits close after a certain number of revolutions and a certain number (not necessarily the same) of radial oscillations, and are often referred to as periodic orbits. Several studies of such orbits in various bar potentials have been made in 2D cases† (e.g., Contopoulos & Papayannopoulos 1980; Athanassoula et al. 1983; Contopoulos & Grosbøl 1989). They show that, in the equatorial plane, the main supporters of the bar are a family of orbits elongated along the bar, named x_1 and having l = −1 and m = 2. Examples of members of this family can be seen in Fig. 4.3 here, or in Fig. 7 of Contopoulos & Papayannopoulos (1980), or Fig. 2 of Skokos et al. (2002a). In most cases there is another family of orbits with l = −1 and m = 2, but which are oriented perpendicularly to the bar and are named x_2. These play a crucial role in determining the gas flow in the bar and the morphology of the inner kpc region in the centre of the galaxy and will be discussed further by Isaac Shlosman (this volume). Finally, there are also two main families of periodic orbits at CR, examples of which can be seen, e.g., in Figs. 3 and 4 of Contopoulos & Papayannopoulos (1980). Periodic orbits can be stable or unstable, and this can be tested by considering another orbit very near the periodic one in phase space, i.e., with very similar values of positions and velocities. If the periodic orbit is stable, then the new orbit will stay in the immediate surroundings of the periodic one and 'wrap' itself around it. It can then be said that this new orbit is 'trapped' by the periodic one. († 3D cases will be discussed in Section 4.8.2.) Examples of trapped orbits can be seen in
The calculation of periodic orbits is straightforward, yet such orbits can reveal crucial information on galactic structure and dynamics. A good example is the work of Contopoulos (1980), who, with simple considerations on closed orbits, was able to show that bars cannot extend beyond their CR. Further work on periodic orbits coupled to hydrodynamic simulations gave an estimate of the lower limit to the bar length, and the ratio R of the corotation radius to the bar length was found to be in the range of 1.2 ± 0.2 (Athanassoula 1992a, 1992b). Note, however, that the lower limit is only an estimate, and not a strict limit like the upper one. Nevertheless, several other methods and works, including observational ones, gave results within the above-quoted range, as reviewed by Elmegreen (1996) and by Corsini (2011). Bars for which 1.0 < R < 1.4 are called fast, contrary to bars with R > 1.4, which are called slow.
Finally, a straightforward superposition (with some smoothing) of stable periodic orbits offers a very simple, yet most useful tool for studying morphological or kinematical structures in disk galaxies and has been successfully applied to bars, box/peanuts and rings (e.g., Patsis et al. 1997; Patsis et al. 2002, 2003; Patsis 2005; Patsis et al. 2010).
N-body simulations
The N-body simulations that we will discuss were tailored specifically for the understanding of bar formation and evolution in a gas-less disk embedded in a spherical spheroid. That is, the initial conditions were built so as to exclude, as much as possible, other instabilities, thus allowing us to focus on the bar. Such initial conditions are often called dynamical (because they allow us to concentrate on the dynamics), or simplified, controlled, or idealised (because they exclude other effects so as to focus best on the one under study). They allow us to make 'sequences' of models, in which we vary only one parameter and keep all the others fixed. For example, it is thus possible to obtain a sequence of models with initially identical spheroids and identical disk density profiles, but different velocity dispersions in the disk.
The alternative to these simulations is cosmological simulations, and, more specifically, zoom re-simulations. In such re-simulations a specific halo (or galaxy), having the desired properties, is chosen from the final snapshot of a full cosmological simulation. The simulation is then rerun with a higher resolution for the parts which end up in the chosen galaxy or which come into close interaction with it, and also after having replaced a fraction of the dark matter particles in those parts by gas particles.
Zoom simulations are more general than the dynamical ones because the former include all the effects that dynamical simulations have, deliberately, neglected. However, they do not allow us to build sequences of models and also have less resolution than the dynamical ones and necessitate much more computer time and memory. Furthermore, some care is necessary because cosmological simulations are known to have a few problems when compared with nearby galaxy observations, concerning, e.g., the number and distribution of satellites, the inner halo radial density profile, the formation of bulge-less galaxies, or the Tully-Fisher relation (see, e.g., Silk & Mamon 2012 for a review). Thus, the zoom re-simulations could implicitly contain some non-realistic properties, which are not in agreement with what is observed in nearby galaxies, and therefore reach flawed results. Moreover, since many effects take place simultaneously, it is often difficult to disentangle the contribution of each one separately, which very strongly hampers the understanding of a phenomenon. For example, it is impossible to fully understand the bar formation instability if the model galaxy in which it occurs is continuously interacting or merging with other galaxies. A more appropriate way would be to first understand the formation and evolution of bars in an isolated galaxy, and then understand the effect of the interactions and mergings as a function of the properties of the intruder(s).
Thus, zoom simulations should not yet be considered as a replacement for dynamical simulations, but rather as an alternative approach, allowing comparisons with dynamical simulations after the basic instabilities have been understood. A few studies using cosmological zoom simulations have already been made and have given interesting results on the formation and properties of bars (Romano-Díaz et al. 2008; Scannapieco & Athanassoula 2012; Kraljic et al. 2012).
A non-trivial issue about dynamical N-body simulations is the creation of the initial conditions. These assume that the spheroid and the disk are already in place and, most important, that they are in equilibrium. This is very important, since a system which is not in equilibrium will undergo violent relaxation and transients, which can have undesirable secondary effects, such as spurious heating of the disk or altering of its radial density profile (a simple numerical check of this is sketched after the list of methods below). At least three different classes of methods to create initial conditions have been developed so far. (a) In the case of multi-component systems, e.g., galaxies with a disk, a bulge and a halo, the components are built separately and then either simply superposed (e.g., Hernquist 1993), or the potential of the one is adiabatically grown in the other before superposition (e.g., Barnes 1988; Shlosman & Noguchi 1993; Athanassoula 2003, 2007; McMillan & Dehnen 2007). The former can be dangerous, as the resulting model can be considerably off equilibrium. The latter is strongly preferred to it, but still has the disadvantage that the adiabatic growing of one component can alter the density profiles of the others, which is not desirable when one wishes to make sequences of models. It is also not trivial to devise a method for assigning the velocities to the disk particles without relying on the epicyclic approximation (but see Dehnen 1999). Last but not least, this class of methods is not useful for complex systems such as triaxial bulges or haloes. (b) The Schwarzschild method (Schwarzschild 1979) can also be used for making initial conditions, but has hardly been used for this, because the application is rather time consuming and not necessarily straightforward.
(c) A very promising method for constructing equilibrium phase models for stellar systems is the iterative method (Rodionov et al. 2009). It relies on constrained, or guided, evolution, so that the equilibrium solution has a number of desired parameters and/or constraints. It is very powerful, to a large extent due to its simplicity. It can be used for mass distributions with an arbitrary geometry and a large variety of kinematical constraints. It has no difficulty in creating triaxial spheroids, and the disks it creates do not follow the epicyclic approximation, unless this has been imposed by the user. It has lately been extended to include a gaseous component (Rodionov & Athanassoula 2011). Its only disadvantage is that it is computer intensive, so that in some cases the time necessary to make the initial conditions is a considerable fraction of the simulation time.
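As an aside not in the original text, a quick global check of how close a set of initial conditions is to equilibrium is the virial ratio 2T/|W|. The Python sketch below (the units, particle numbers and direct-sum potential are my own illustrative choices) computes it for an arbitrary particle set; values far from unity warn that violent relaxation and transients are to be expected.

import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def virial_ratio(pos, vel, mass):
    """Crude equilibrium diagnostic: 2T/|W| should be close to 1 for a relaxed,
    self-gravitating N-body system (direct-sum potential energy, O(N^2))."""
    T = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))   # total kinetic energy
    W = 0.0
    for i in range(len(mass) - 1):                    # pairwise potential energy
        dr = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        W -= G * mass[i] * np.sum(mass[i + 1:] / dr)
    return 2.0 * T / abs(W)

# Example with a random (hence far-from-equilibrium) toy configuration:
rng = np.random.default_rng(0)
pos = rng.normal(size=(500, 3)) * 5.0    # positions [kpc]
vel = rng.normal(size=(500, 3)) * 50.0   # velocities [km/s]
mass = np.full(500, 1.0e8)               # masses [Msun]
print("2T/|W| =", virial_ratio(pos, vel, mass))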
I would also like to stress here a terminology point which, although not limited to simulations, is closely related to them. In general the dynamics of haloes and bulges are very similar, with of course quantitative differences due to their respective extent, mass and velocity dispersion values. For this reason, I will sometimes use in these lecture notes the terms 'halo' and 'bulge' specifically, while at other times I will use the word 'spheroid' in a generic way, to designate the halo and/or the bulge component. The reasons for this are sometimes historic (i.e., how it was mentioned in the original paper), or quantitative (e.g., if the effect of the halo is quantitatively much stronger than that of the bulge), or just for simplicity. The reader can mentally interchange the terms as appropriate.
On angular momentum exchange and the role of resonances: the analytic approach
Two papers are the pillars of the analytical work on angular momentum redistribution in disk galaxies, namely Lynden-Bell & Kalnajs (1972) and Tremaine & Weinberg (1984), while further useful information can be found in, e.g., Kalnajs (1971), Dekker (1974), Weinberg (1985, 1994), Athanassoula (2003), Fuchs (2004), Fuchs & Athanassoula (2005). In order to reach tractable analytic expressions, it is necessary to consider the disk and the spheroid components separately, and use different approximations in the two cases. For the disk we can use the epicyclic approximation (i.e., we will assume that the disk orbits can be reasonably well approximated by epicycles), while for the spheroid we will assume that the distribution function depends only on the energy, as is the case for spherical isotropic systems. The main results obtained in the papers listed above are:
(a) Angular momentum is emitted or absorbed mainly at resonances. It is, however, also possible to emit or absorb away from resonances if the potential is not stationary, but grows or decays with time. Nevertheless, the contribution of the non-resonant material to the total emission or absorption should remain small, unless the growth or decay of the potential is important.
(b) In the disk component, angular momentum is emitted from the ILR and at other l < 0 resonances and absorbed at the OLR and at other l > 0 resonances. It is also absorbed at CR, but, all else being equal, in lesser quantities than at the Lindblad resonances.
(c) The spheroid absorbs angular momentum at all its resonances.
(d) The global picture is thus that angular momentum is emitted from the bar region and absorbed by the CR and OLR in the disk, and by all resonances in the spheroid. Thus, angular momentum is transported from the inner parts of the disk to the part of the disk outside CR and to the spheroid resonant regions.
(e) For both the disk and the spheroid components it is possible to show that, for the same perturbing potential and the same amount of resonant material, a given resonance will emit or absorb more angular momentum if the material there is colder (i.e., has a lower velocity dispersion). Therefore, since the disk is always colder than the spheroid, it will absorb more angular momentum per unit resonant mass. Nevertheless, the spheroid is much more massive than the outer disk, so the amount of angular momentum it absorbs may exceed that absorbed by the outer disk.
(f) Since the bar is inside corotation, it has negative energy and angular momentum, and as it emits angular momentum it gets destabilised, i.e., it grows stronger. It is thus expected that the more angular momentum is emitted, the stronger the bar will become.
General comments
It is not possible to compare the analytical work mentioned in the previous section directly with observations, because each galaxy is observed only at a single time during its evolution, and neither angular momentum exchange nor individual orbits can be directly observed. One should thus include an intermediate step in the comparisons, namely N-body simulations. In these, it is possible to follow directly not only the evolution in time, but also the angular momentum exchange and the individual orbits, i.e., it is possible to make direct comparisons of simulations with analytical work. Furthermore, one can 'observe' the simulation results using the same methods as for real galaxies and make comparisons (Section 4.10). Simulations thus provide a meaningful and necessary link between analytical work and observations. In order to show that the analytical results discussed in Section 4.5 do apply to simulations it is necessary to go through a number of intermediate steps, i.e., to show
(a) that there is a reasonable amount of mass at (near-)resonance both for the disk and the spheroid components,
(b) that angular momentum is emitted from the resonances in the bar region and absorbed by all the spheroid resonances and the outer disk resonances,
(c) that the contribution of the spheroid in the angular momentum redistribution is important,
(d) that, as a result of this angular momentum transfer, the bar becomes stronger and slows down,
(e) that stronger bars are found in simulations in which more angular momentum has been exchanged within the galaxy,
(f) and that more (less) angular momentum can be exchanged when the emitting or absorbing material is colder (hotter).
This sequence of steps was followed in two papers (Athanassoula 2002, hereafter A02, and Athanassoula 2003, hereafter A03), whose techniques and results I will review in the next subsections, giving, whenever useful, more extended information (particularly on the techniques) than in the original papers, so that the work can be more easily followed by students and non-specialists.
Calculating the orbital frequencies
Our first step will be to calculate the fundamental orbital frequencies. Since we are interested in the redistribution of L z , we will focus on the angular and the epicyclic frequency. The epicyclic frequency κ can be calculated with the help of the frequency analysis technique (Binney & Spergel 1982; Laskar 1990), which relies on a Fourier analysis of, e.g., the cylindrical radius R(t) along the orbit. The desired frequency is then obtained as the frequency of the highest peak in the Fourier transform. The angular frequency Ω is more difficult to estimate, and in A02 and A03 I supplemented frequency analysis with other methods, such as following the azimuthal angle as a function of time.
Several technical details are important for the frequency analysis. It is necessary to use windowing before doing the Fourier analysis, to improve the accuracy. It is also necessary to keep in mind that some of the peaks of the power spectrum are not independent frequencies, but simply harmonics of the individual fundamental frequencies, or combinations thereof. Furthermore, if one needs considerable accuracy, one has to worry about the fact that in standard Fast Fourier Transforms the step dω between two adjacent frequencies is constant, while the fundamental frequencies Ω i will not necessarily fall on a grid point. Besides the inaccuracy thus introduced, this also complicates the handling of the harmonics.
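As a minimal illustration of the technique (not the actual code used in A02/A03), the Python sketch below applies a Hann window to a radius time series and returns the frequency of the highest non-zero peak; the returned value is a cyclic frequency, so it must be multiplied by 2π to obtain the angular frequency κ. The toy signal and time units are assumptions for the example.

import numpy as np

def dominant_frequency(r_t, dt):
    """Estimate the dominant frequency of a (regular) orbital time series R(t):
    remove the mean, apply a Hann window, take the highest non-zero peak."""
    n = len(r_t)
    windowed = (r_t - np.mean(r_t)) * np.hanning(n)
    power = np.abs(np.fft.rfft(windowed))**2
    freqs = np.fft.rfftfreq(n, d=dt)        # note: the frequency grid step is fixed
    return freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin

# Toy check: a pure epicyclic oscillation with kappa / (2 pi) = 0.05 cycles/Myr.
dt = 1.0                                    # time step [Myr]
t = np.arange(0.0, 4000.0, dt)
r_t = 8.0 + 0.5 * np.cos(2.0 * np.pi * 0.05 * t)
print("recovered frequency:", dominant_frequency(r_t, dt), "cycles/Myr")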
Frequency analysis can be applied to orbits in any analytic stationary galactic potential, thus allowing the full calculation of the resonances and their occupation (e.g., Papaphilippou & Laskar 1996; Carpintero & Aguilar 1998; Valluri et al. 2012). Contrary to such potentials, however, simulations include full time evolution, so that the galactic potential, the bar pattern speed, as well as the basic frequencies Ω and κ of any orbit are time-dependent. Thus, strictly speaking, the spectral analysis technique cannot be applied, at least as such.
It is, nevertheless, possible to estimate the frequencies of a given orbit at any given time t by using the potential and bar pattern speed at this time t (which are thus considered as frozen), as I did in A02 and A03. After freezing the potential, I chose a number of particles at random from each component of the simulation and calculated their orbits in the frozen potential, using as initial conditions the positions and velocities of the particles in the simulation at time t. It is necessary to take a sufficient number of particles (of the order of 100 000) in order to be able to define clearly the main spectral lines. It is also necessary to follow the orbit for a sufficiently long time (e.g., 40 pattern rotation periods), in order to obtain narrow lines in the spectrum. By doing so I do not assume that the potential stays unevolved over such a long time. What I describe here just amounts to linking the properties of a small part of the orbit calculated in the evolving simulation potential (hereafter simulation orbit) to an equivalent part of the corresponding orbit calculated in the frozen potential. The frequencies are then calculated for the orbit in the frozen potential and attributed to the small part of the simulation orbit in question (and not to the whole of the simulation orbit). This technique makes it possible to apply the frequency analysis method, as described in A02, A03 and above, and thus to obtain the main frequencies of each orbit at a given time. It is, furthermore, possible to follow the evolution by choosing a number of snapshots during the simulation and performing the above exercise separately for each one of them. The evolution can then be witnessed from the sequence of the results, one for each chosen time.
Material at resonance
Having calculated the fundamental frequencies as described in the previous section, it is now possible to plot histograms of the number of particles (or of their total mass, if particles of unequal mass are used in the simulation) as a function of the ratio of their frequencies measured in a frame of reference co-rotating with the bar, i.e., as a function of (Ω − Ω p )/κ. This can be carried out separately for the particles of the various components, i.e., the disk, the halo, and the bulge. It was first carried out in A02 and the results, for two different simulations, are shown in Fig. 4.4.
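A sketch of how such a histogram, or the mass fraction near a given resonance, could be built from the measured frequencies is given below; the tolerance, bin range and the idea of a single helper function are my own illustrative choices, not taken from A02.

import numpy as np

def resonance_histogram(omega, kappa, omega_p, mass=None, bins=200):
    """Mass-weighted histogram of the frequency ratio (Omega - Omega_p)/kappa;
    peaks at 0.5, 0.25, 0 and -0.5 mark the ILR, UHR, CR and OLR."""
    ratio = (omega - omega_p) / kappa
    return np.histogram(ratio, bins=bins, range=(-1.0, 1.0), weights=mass)

def resonant_fraction(omega, kappa, omega_p, ratio_value, tol=0.02, mass=None):
    """Mass fraction of particles within +/- tol of a given resonant ratio."""
    if mass is None:
        mass = np.ones_like(omega)
    ratio = (omega - omega_p) / kappa
    return mass[np.abs(ratio - ratio_value) < tol].sum() / mass.sum()

# e.g., resonant_fraction(omega, kappa, 40.0, 0.5) gives the ILR mass fraction.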
Before making this histogram, it is necessary to eliminate chaotic orbits. Their spectra differ strongly from those of regular orbits, consisting of a very large number of non-isolated lines. They of course always have a 'highest peak', but this has no physical significance and is not a fundamental frequency of the orbit. Eliminating chaotic orbits is non-trivial because of the existence of sticky orbits (see Section 4.3), for which the result of the classification as regular or chaotic may well depend on the chosen integration time. Thus, although for regular orbits it is recommended to use a long integration time in order to obtain narrow, well defined spectral peaks, for sticky orbits integration times must be of the order of the characteristic timescale of the problem. For instance, if the sticky orbit shifts from regular to chaotic only after an integration time of the order of, say, ten Hubble times, it will be of no concern for galactic dynamics problems and this orbit can for all practical purposes be considered as regular.
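Purely as a crude screening heuristic (not the classification actually used in the papers discussed here), one can flag orbits whose spectrum contains many comparable, non-isolated peaks; the threshold and line count below are arbitrary illustrative choices.

import numpy as np

def looks_chaotic(r_t, rel_threshold=0.05, max_lines=30):
    """Flag an orbit as possibly chaotic if many spectral bins carry power
    comparable to the highest peak; regular orbits concentrate their power in
    a few narrow lines (plus harmonics), chaotic ones do not."""
    windowed = (r_t - np.mean(r_t)) * np.hanning(len(r_t))
    power = np.abs(np.fft.rfft(windowed))**2
    power[0] = 0.0                       # ignore any residual mean
    return np.sum(power > rel_threshold * power.max()) > max_lines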
It is clear from Fig. 4.4 that the distribution of particles in frequency is not homogeneous. In fact it has a few very strong peaks and a number of smaller ones. The peaks are not randomly distributed; they are located at the positions where the ratio (Ω − Ω p )/κ is equal to the ratio of two integers, i.e., when the orbit is resonant and closes after a given number of rotations and radial oscillations. The highest peak is at (Ω − Ω p )/κ = 0.5, i.e., at the ILR. A second important peak is located at Ω = Ω p , i.e., at CR where the particle co-rotates with the bar. Other peaks, of lesser relative height, can be seen at other resonances, such as the −1/2 (OLR), the 1/4 (often referred to as the ultraharmonic resonance -UHR), the 1/3, the 2/3, etc. In all runs with a strong bar the ILR peak dominates, as expected. But the height of these peaks differs from one simulation to another and even from one time to another in the same simulation.
This richness of structures in the resonance space could have been expected for the disk component. What, however, initially came as a surprise was the existence of strong resonant peaks in the spheroid. Two examples can be seen in the right-hand panels of Fig. 4.4. In both, the strongest peak is at corotation, and other peaks can be clearly seen at ILR, at (Ω − Ω p )/κ = −0.5 (OLR) and at other resonances. As was the case for the disk, the absolute and relative heights of the peaks differ from one simulation to another, as well as with time.
Thus, the results of A02 that we have discussed in this section show that, both for the disk and the spheroid component, a very large fraction of the simulation particles is at (near-)resonance. Note that this result is backed by a large number of simulations. I have analysed the orbital structure and the resonances of some 50 to 100 simulations and for a number of times per simulation. The results of these, as yet unpublished, analyses are in good qualitative agreement with what was presented and discussed in A02, A03 and here.
Further confirmation was brought by a number of subsequent and independent analyses (Martínez-Valpuesta et al. 2006; Ceverino & Klypin 2007; Dubinski et al. 2009; Wozniak & Michel-Dansac 2009; Saha et al. 2012). These studies include many different models, with very different spheroid mass profiles or distribution functions, as well as disks with different velocity dispersions. Also different simulation codes were used, including the Marseille GRAPE-3 and GRAPE-5 codes (Athanassoula et al. 1998), gyrfalcON (Dehnen 2000, 2002), FTM (Heller & Shlosman 1994; Heller 1995), ART (Kravtsov et al. 1997), Dubinski's treecode (Dubinski 1996) and GADGET (Springel et al. 2001; Springel 2005). Note also that Ceverino & Klypin (2007) have used a somewhat different approach, and did not freeze the potential before calculating the orbits. Instead, they followed the particle orbits through a part of the simulation during which the galaxy potential (more specifically the bar potential and pattern speed) does not change too much. In this way they obtain a power spectrum with much broader peaks than in the studies that analyse the orbits in a sequence of frozen potentials. Nevertheless, the peaks are well defined and confirm the main A02 results, namely that they are located at the main resonances, without the use of potential freezing. Note also that this version of the frequency analysis is not suitable for deciding whether a given orbit is regular or chaotic, but is considerably faster in computer time than the one relying on a sequence of frozen potentials.
Angular momentum exchange
In A03 I used N-body simulations to show that angular momentum is emitted at the resonances within CR, i.e., in the bar region, and that it is absorbed at resonances either in the spheroid, or in the disk from the CR outwards, as predicted by analytic calculations. For this I calculated the angular momentum of all particles in the simulation at two chosen times t 1 and t 2 and plotted their difference, ∆J = J 2 − J 1 , as a function of the frequency ratio (Ω − Ω p )/κ of the particle orbit at time t 2 . An example of the result can be seen in Fig. 1 of A03. Note that particles in the disk with a positive frequency ratio, and particularly particles at ILR, have ∆J < 0, i.e., they emit angular momentum. On the contrary, particles in the spheroid have ∆J > 0, i.e., they absorb angular momentum, particularly at the CR, followed by the ILR and OLR. Further absorption can be seen at the disk CR, but it is considerably less than the amount absorbed by the spheroid. The amount of angular momentum emitted or absorbed at a given resonance is of course both model- and time-dependent, as were the heights of the resonant peaks (Section 4.6.3). On the contrary, whether a given resonance absorbs or emits is model-independent, and in good agreement with analytic predictions (Section 4.5).
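A sketch of the corresponding measurement on simulation data is given below; the array layout, units and bin range are assumptions made for illustration and this is not the actual analysis code of A03.

import numpy as np

def delta_Lz_by_resonance(x1, v1, x2, v2, mass, omega, kappa, omega_p, bins=50):
    """Bin the change in z-angular momentum between two snapshots (t1, t2) by the
    frequency ratio (Omega - Omega_p)/kappa measured at t2; negative bins mean
    net emission, positive bins net absorption of angular momentum.
    x1, v1, x2, v2: (N, 3) positions and velocities at t1 and t2."""
    Lz1 = mass * (x1[:, 0] * v1[:, 1] - x1[:, 1] * v1[:, 0])
    Lz2 = mass * (x2[:, 0] * v2[:, 1] - x2[:, 1] * v2[:, 0])
    dLz = Lz2 - Lz1
    ratio = (omega - omega_p) / kappa
    edges = np.linspace(-1.0, 1.0, bins + 1)
    idx = np.digitize(ratio, edges) - 1
    binned = np.zeros(bins)
    valid = (idx >= 0) & (idx < bins)
    np.add.at(binned, idx[valid], dLz[valid])   # sum dLz in each ratio bin
    return edges, binned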
Thinking of the bar as an ensemble of orbits, it becomes clear that there are many ways in which angular momentum can be lost from the bar region. The first possibility is that the orbits in the bar, and therefore the bar itself, will become more elongated. The second one is that stars initially on near-circular orbits just outside the bar region will lose angular momentum and become elongated and part of the bar, which will thus get longer and more massive. In both cases the bar will become stronger in the process. The third alternative is that the bar will rotate slower, i.e., its pattern speed will decrease. These three possibilities were presented and discussed in A03, where it was shown that they are linked and occur concurrently. Thus evolution should make bars longer, and/or more elongated, and/or more massive and/or slower rotating (A03). Simulations agree fully with these predictions and go further, establishing that all these occur concurrently, but not necessarily at the same pace.

4.6.5 Types of models

4.6.5.1 Models with maximum and models with sub-maximum disks

Since the spheroid plays such a crucial role in the angular momentum redistribution within the galaxy, it must also play a crucial role in the formation and evolution of the bar. Athanassoula & Misiriotis (2002, hereafter AM02) tested this by analysing the bar properties in two very different types of simulations, which they named MH (for Massive Halo) and MD (for Massive Disc), respectively. Both types have a halo with a core, which is large in the MD types and small in the MH ones. Thus, in MH models, the halo plays a substantial role in the dynamics within the inner four or five disk scale lengths, while not being too hot, so as not to impede the angular momentum absorption. On the contrary, in MD models the disk dominates the dynamics within that radial range.
The circular velocity curves of these two types of models are compared in Fig. 4.5. For the MD model (upper panel) the disk dominates the dynamics in the inner few disk scale lengths, while this is not the case for the MH model. MD-type models are what the observers call maximum disk models, while the MH types have sub-maximum disks. It is not yet clear whether disks in real galaxies are maximum, or sub-maximum, because different methods reach different conclusions, as reviewed, e.g., by Bosma (2002).
As shown in AM02 and illustrated in Fig. 4.6, the observable properties of the bars which grow in these two types of models are quite different. MH-type bars are stronger (longer, thinner and/or more massive) than MD-type bars. Viewed face-on, they have a near-rectangular shape, while MD-type bars are more elliptical. Viewed side-on, they show stronger peanuts and sometimes (particularly towards the end of the simulation) even 'X' shapes. On the other hand, bars in MD-type models are predominantly boxy when viewed side-on.
Thinking in terms of angular momentum exchange, it is easy to understand why MH-type bars are stronger than MD-type ones. Indeed, the radial density profile of MH-type haloes is such that, for reasonable values of the pattern speed, they have more material at resonance than do MD-types. Thus, all else being similar, there will be more angular momentum absorbed. This, in good agreement with analytical results, should lead to stronger bars. It should be stressed that the above discussion does not imply that all real galaxies are either of MH type or of MD type. The two models illustrated here were chosen as two examples, enclosing a useful range of halo radial density profiles, which could actually be smaller than what is set by the two above examples. Real galaxies can well be intermediate, i.e., somewhere in between the two. It is nevertheless useful to describe the two extremes separately, since this gives a better understanding of the effects of the spheroid.
Models of MH-or MD-type which also have a classical bulge can be termed MHB and MDB, respectively. The effect of the bulge in MD models is quite strong, so that the bars in MDB models have a strength and properties which are intermediate between those of MD and those of MH types (AM02). Furthermore, A03 and Athanassoula (2007) showed that an initially non-rotating bulge absorbs a considerable amount of angular momentum -thereby spinning up -and thus a bar in a model with bulge slows down more than in a similar model but with no bulge. All this can be easily understood from the frequency analysis, which shows that there are considerably more particles at resonance in cases with strong bulges (A03; Saha et al. 2012). On the other hand, the effect of the bulge on the bars of MH types is much less pronounced.
Models with cusps
The two models we have discussed above have a core, more or less extended. There is a further possibility, namely that the central part has a cusp. It has indeed been widely debated whether haloes have a cusp or a core in their central parts. Cosmological CDM, dark-matter-only simulations produce haloes with strong cusps. Thus, Navarro et al. (1996) find a universal halo profile, dubbed the NFW profile, which has a cusp with a central density slope (β = d ln ρ/d ln r) of −1.0, while Moore et al. (1999), with a higher resolution, find a slope of −1.5. Increasing the resolution yet further, Navarro et al. (2004) found that this slope decreases with decreasing distance from the centre, but not sufficiently to give a core. Finally, the simulations with the highest resolution (Navarro et al. 2010) argue for a lower central slope of the order of −0.7, but still too steep to be compatible with a core.
On the other hand, very extensive observational and modelling work (de Blok et al. 2001; de Blok & Bosma 2002; de Blok et al. 2003; Simon et al. 2003; Kuzio de Naray et al. 2006; de Blok et al. 2008; Oh et al. 2008; Battaglia et al. 2008; de Blok 2010; Walker & Peñarrubia 2011; Amorisco & Evans 2012; Peñarrubia et al. 2012) argues that the central parts of haloes should have a core, or a very shallow cusp, the distribution of inner slopes in the various observed samples of galaxies being strongly peaked around a value of ∼ 0.2. This discrepancy between the pure dark matter CDM simulations and observations may be resolved with more recent cosmological simulations which have high resolution and include baryons and appropriate star formation and feedback recipes. Indeed, such simulations start to produce rotation curves approaching those of observations (Governato et al. 2010; Macciò et al. 2012; Oh et al. 2011; Stinson et al. 2012). In order to stay in agreement with observations, I will here not discuss models with cusps. Readers interested in such models can consult, e.g., Valenzuela & Klypin (2003), Holley-Bockelmann et al. (2005), Sellwood & Debattista (2006), or Dubinski et al. (2009). Let me also mention that it is possible to study models with cusps using the same functional form for the halo density as for the MH- and MD-type models (AM02), but now taking a very small core radius, preferably of an extent smaller than the softening length.
The effect of the spheroid-to-disk mass ratio
In the two models we discussed above, it is clear that it is the one with the highest spheroid mass fraction within the disk region that makes the strongest bar. Is that always the case? The following discussion, taken from A03, shows that the answer is more complex than a simple yes, or no.
Assume we have a sequence of models, all with the same total mass, i.e., that the sum of the disk and the spheroid mass within the disk region is the same. How should we distribute the mass between the spheroid and the disk in order to obtain the strongest bar? What must be maximised is the amount of angular momentum redistribution, or, equivalently, the amount of angular momentum taken from the bar region. For this it makes sense to have strong absorbers, which can absorb all the angular momentum that the bar region can emit. Past a certain limit, however, there will not be sufficient material in the bar region to emit all the angular momentum that the spheroid can absorb, and it will be useless to increase the spheroid mass further. So the strongest bar will not be obtained with the most massive spheroid, but rather at a somewhat lower mass value, such that the equilibrium between emitters and absorbers is optimum and the angular momentum exchanged is maximum. For the models discussed in AM02 and A03, this occurs at a spheroid mass value such that the disk, in the initial conditions, is sub-maximum. In Section 4.9.3 we will discuss how disks may evolve from sub-maximum to maximum during the simulation.
Live versus rigid halo
In the previous sections I reviewed the very strong evidence accumulated so far showing that many particles in the simulations, both in the disk and the spheroid component, are on (near-)resonant orbits and that the angular momentum exchanged between them is as predicted by the analytic calculations, i.e., from the bar region outwards (Section 4.5). The next step should be to clarify the importance of the halo resonances in the evolution. For this we have to compare two simulations, one in which the halo resonances are at work and another where they are not, as was first done in A02, whose main results will be reviewed here.
Figures 4.7 and 4.8 each compare two models with initially identical disks. In other words, the particles in the disk initially have identical positions and velocities in the two compared simulations. The models of the haloes were also identical, but in one of the simulations (right-hand panels) the halo was rigid (represented only by the forces that it exerts on the disk particles) and thus did not evolve. In the other one, however, the halo was represented by particles, i.e., was live (left-hand panels). These particles move around as imposed by the forces and can emit or absorb angular momentum, as required. Figure 4.7 compares the disk evolution in the live and in the rigid halo when the model is of MH type. The difference between the results of the two simulations is stunning. In the case with a live halo a strong bar has formed, while in the case with a rigid halo there is just a very small inner oval-like perturbation. This shows that the contribution of the halo to the angular momentum exchange can play an important role, actually, in the example shown here, the preponderant role. Figure 4.8 shows the results of a similar experiment, but now in an MD-type halo. The difference is not as stunning as in the previous example, but is still quite important. In the live halo case the bar is considerably longer and somewhat thinner than in the case with a rigid halo.
It is thus possible to conclude that the role of the halo in the angular momentum redistribution is important. In fact in the MH-type models the role of the halo is preponderant, but it is still quite important even in the MD-types. It is thus strongly advised to work with live, rather than with rigid haloes in simulations.
Distribution of frequencies for MD- and MH-type models

Figure 4.4 displays the frequency histograms for two models, one MD-type (upper panels) and one MH-type (lower panels). The properties of these two types of models were discussed in Section 4.6.5.1, where their initial rotation curve, as well as their bar morphology, are also displayed.
It is now useful to compare the distribution of frequencies for the two simulations used in Fig. 4.4. Starting with the disk we note that the ILR peak is about 50% higher in the MH than in the MD model, while the CR peak is considerably lower. Also the MD model has an OLR peak, albeit small, while none can be seen in the MH one. For the spheroid, the strongest peak for both models is the CR one, which is much stronger in the halo than in the disk. It is, furthermore, stronger in the MH than in the MD model. Also the MH spheroid has a relatively strong ILR peak, which is absent from the MD one. On the other hand, the MD model has a much stronger OLR peak than the MH one.
All these properties can be easily understood. From Fig. 4.6 it is clear that the bar in the MH model is stronger than in the MD one, as discussed already in Section 4.6.5.1, and this accounts for the much stronger ILR peak for the disk of the MH model. Also, from the initial circular velocity curves (Fig. 4.5) it is clear that the halo of the MH model has much more mass than that of the MD model within the radial extent where one would expect the CR and particularly the ILR to be. This explains why the CR halo peak is stronger in the MH model and why the halo ILR peak is absent in the MD one. At larger radii the order of the halo masses is reversed, the MD halo being relatively more massive in the outer parts. This explains why the halo OLR peak is stronger for the MD halo than for the MH one.
Bar strength
4.6.8.1 Evolution of the bar strength with time

Figure 4.9 shows the evolution of the bar strength† with time, comparing an MH-type and an MD-type model. It clearly illustrates how important the differences between these two types can be, as expected. It also shows that, in both cases, one can distinguish several evolutionary phases.
By construction, both simulations start axisymmetric and this lasts all through what we can call the pre-growth phase. The duration of this phase, however, is about half a Gyr for the MD model, while for the MH one it lasts about 2 Gyr. The second phase is that of bar growth, and lasts considerably less than a Gyr for the MD model and much longer (about 2 Gyr) for the MH one. In total, we can say that the bar takes less than 1 Gyr to reach the end of its growth phase in the MD model, compared to about 4 Gyr in the MH one. This is in good qualitative agreement with what was already found using simpler 2D simulations. From this and many other such comparisons, it becomes clear that the presence of a massive spheroid can very considerably both delay and slow down the initial bar formation due to its strong contribution to the total gravitational force. After the end of the bar growth phase, both models undergo a steep drop of the bar strength. This is due to the buckling instability (Raha et al. 1991). The final phase, which can also be called the secular evolution phase, starts somewhat after 5 Gyr for the MH model and after about 3 Gyr for the MD one. The corresponding bar strength increase which takes place during this phase is much more important for the MH than for the MD model. By the end of the evolution, MH models have a much stronger bar than MD ones. As already mentioned, this is due to the more important angular momentum redistribution in the former type of models.

† The definition of bar strength is not unique. The one used in the analysis of simulations is usually based on the m = 2 Fourier component, but precisely how this is used varies from one study to another. We will refrain from giving a list of precise definitions here, as this would be long and tedious. Furthermore we will, anyway, only need qualitative information for our discussions here, which is the same, or very similar, for all definitions used in simulations. We will thus talk only loosely about 'bar strength' here and use arbitrary units in the plots (see also Section 4.10).
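To make the remark in the footnote concrete, a commonly used m = 2 based measure can be computed as in the sketch below; this is a generic illustration, not necessarily the exact definition used in the simulations discussed here. In practice it is usually evaluated in radial bins and the maximum (or an average) over the bar region is quoted.

import numpy as np

def bar_strength_A2(x, y, mass):
    """Normalised amplitude of the m=2 azimuthal Fourier component of the
    (face-on) disk mass distribution; one common proxy for the bar strength."""
    theta = np.arctan2(y, x)
    c2 = np.sum(mass * np.cos(2.0 * theta))
    s2 = np.sum(mass * np.sin(2.0 * theta))
    return np.sqrt(c2**2 + s2**2) / np.sum(mass)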
As in Section 4.6.5.1, let me stress that we are comparing two models which display strong differences. Real galaxies can be of either type, but, most probably, many are intermediate, in which case their bar strength evolution would also be intermediate between the two shown in Fig. 4.9. The effect of the spheroid on bar formation and evolution can be summarised as follows.
(a) In the early evolutionary stages, before and while the bar grows, the spheroid delays and slows down bar formation. This is due to the fact that the gravitational forcing of the spheroid 'dilutes' the non-axisymmetric forcing of the bar. Thus, this delay and slowdown occurs even in cases with a rigid spheroid, or with an insufficient number of particles (e.g., Ostriker & Peebles 1973).
(b) In the late evolutionary stages, e.g., when the secular evolution is underway, the presence of a massive and responsive spheroid will make the bar much stronger. This is due to the help of the spheroid resonances, which absorb a considerable fraction of the emitted angular momentum, thus inducing the bar region to emit yet more and (since it is within the CR and of negative energy) to become stronger. In order for this phase to be properly described, the spheroid has to be live and contain a sufficient number of particles for the resonances to be properly resolved (A02).
Velocity dispersion and bar strength
Analytical works for the disk and/or the spheroidal component (Lynden-Bell & Kalnajs 1972; Tremaine & Weinberg 1984; A03) predict that the hotter the (near-)resonant material is, the less angular momentum it can emit or absorb. This was verified with the help of N-body simulations in A03. Contrary to the spheroid mass, velocity dispersion has the same effect on the bar strength evolution both in the early bar formation stages and in the later secular evolution stages. In the early stages a high velocity dispersion in the disk slows down bar formation, as shown in earlier work and later in A03. This is illustrated also in Fig. 4.10, where I compare two MD-type models with different velocity dispersions.
The first one has a Toomre Q parameter (Toomre 1964) of 1.3 and the second one of 1.7. This difference has a considerable impact on the growth and evolution of the bar. In the former the bar starts growing after roughly half a Gyr, its growth phase lasts about 1.5 Gyr and the secular increase of the bar strength starts around 4.5 Gyr. For the latter (hotter) model the beginning and end of each phase are much less clear, so that one can only very roughly say that the bar growth starts at about 4 or 5 Gyr and ends at about 9 Gyr. During the later evolutionary stages, too, a high velocity dispersion works against an increasing bar strength because, as shown by analytic work and verified by N-body simulations, material at resonance will emit or absorb less angular momentum per unit mass when it is hot. Thus, increasing the velocity dispersion in the disk and/or the spheroid leads to a delayed and slower bar growth and to weaker bars. This has important repercussions on the fraction of disk galaxies that are barred as a function of redshift and on their location on the Tully-Fisher relation (Sheth et al. 2008, 2012).
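For reference, the Toomre parameter for a stellar disk is Q = σ_R κ / (3.36 G Σ); the sketch below evaluates it for illustrative, assumed solar-neighbourhood-like numbers, which are not taken from the models discussed here.

import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def toomre_Q(sigma_R, kappa, surface_density):
    """Toomre (1964) stability parameter for a stellar disk:
    Q = sigma_R * kappa / (3.36 * G * Sigma)."""
    return sigma_R * kappa / (3.36 * G * surface_density)

# Illustrative numbers (assumptions): sigma_R in km/s, kappa in km/s/kpc,
# surface density in Msun/kpc^2 (50 Msun/pc^2 = 5.0e7 Msun/kpc^2).
print(toomre_Q(sigma_R=35.0, kappa=37.0, surface_density=5.0e7))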
Bar strength and redistribution of angular momentum
One of the predictions of the analytic work is that there is a strong link between the angular momentum which is redistributed within the galaxy and the bar strength. One may thus expect a correlation between the two if the distribution functions of the disks and spheroids of the various models are not too dissimilar. This was tested in A03, using a total of 125 simulations, and was found to be true. Here we repeat this test, using a somewhat larger number of simulations (about 400 instead of 125) and a more diverse set of models, and again a good correlation is found. The result is shown in Fig. 4.11, where each symbol represents a separate simulation. It is clear that this correlation is tight, but still has some spread, due to the diversity of the models used. Note also that we have not actually used the total amount of angular momentum emitted from the bar region (or, equivalently, the amount of angular momentum absorbed in the outer disk and in the spheroid), but rather the fraction of the total initial angular momentum that was deposited in the spheroid, which proves to be a good proxy for the required quantity. Finally, note that the points are not homogeneously distributed along the trend. This has no physical significance, but is simply due to the way that I chose my simulations. Indeed I tried to study the MH-type and the MD-type models and was relatively less interested in the intermediate cases.
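The proxy mentioned here can be computed trivially from two snapshots; the sketch below uses made-up illustrative numbers (not the actual simulation results) and simply shows how one would quantify the correlation with bar strength across a set of runs.

import numpy as np

def absorbed_fraction(Lz_spheroid_final, Lz_spheroid_initial, Lz_total_initial):
    """Fraction of the total initial angular momentum deposited in the spheroid
    by the end of the run (the proxy used in the text)."""
    return (Lz_spheroid_final - Lz_spheroid_initial) / Lz_total_initial

# Toy example: correlate this proxy with the final bar strength over a set of runs.
frac = np.array([0.02, 0.05, 0.10, 0.18, 0.25])   # made-up illustrative values
A2 = np.array([0.15, 0.25, 0.35, 0.50, 0.60])     # made-up illustrative values
print("correlation coefficient:", np.corrcoef(frac, A2)[0, 1])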
Results from N-body simulations
Another prediction of the analytic work is that the bar pattern speed will decrease with time, as the bar strength increases (Section 4.5). This has been confirmed by a large number of N -body simulations (e.g., Little & Carlberg 1991a, b;Hernquist & Weinberg 1992;Debattista & Sellwood 2000;A03;O'Neil & Dubinski 2003;Martínez-Valpuesta et al. 2006;Villa-Vargas et al. 2009). The amount of this decrease was found to vary considerably from one simulation to another, depending on the mass as well as on the velocity distribution in the disk and the spheroidal (halo plus classical bulge) components, consistent with the fact that these mass and velocity distributions will condition the angular momentum exchange and therefore the bar slowdown.
There is a notable exception to the above very consistent picture. Valenzuela & Klypin (2003) found in their simulations a counter-example to the above, where the pattern speed of a strong bar hardly decreases over a considerable period of time. The code they use, ART, includes adaptive mesh refinement, and thus reaches high resolution in regions where the particle density is high. According to these authors, the difference between their results and those of other simulations is due to the high resolution (20-40 pc) and the large number of particles (up to 10^7) they use. Sellwood & Debattista (2006) examined cases where the bar pattern speed fluctuates upward. After such a fluctuation, the density of resonant halo particles will have a local inflection created by the earlier exchanges, so that bar slowdown can be delayed for some period of time. They show that this is more likely to occur in simulations using an adaptive refinement and propose that this explains the evolution of the pattern speed in the simulation of Valenzuela & Klypin (2003). Klypin et al. (2009) did not agree and replied that Sellwood & Debattista did not have the same adaptive refinement implementation as ART. Sellwood (2010) stressed that such episodes of non-decreasing pattern speed are disturbed by perturbations, such as a halo substructure, and thus are necessarily short-lived. He thus concludes that simulations where the pattern speed does not decrease have simply not been run long enough. At the other extreme, Villa-Vargas et al. (2009) find a similar stalling of the pattern speed for prolonged time periods when the simulation is run so long that the corotation radius gets beyond the edge of the disc. Dubinski et al. (2009) published a series of simulations, all with the same model but with increasing resolution. They use between 1.8 × 10^3 and 18 × 10^6 particles in the disk and between 10^4 and 10^8 particles in the halo. They also present results from a multi-mass model with an effective resolution of ∼ 10^10 particles. They have variable, density-dependent softening, with a minimum of the order of 10 pc. Their Fig. 18 shows clearly that the decrease of the pattern speed with time does not depend on the resolution and that it is present for all of their simulations, even the ones with the highest resolution, much higher than the one used by Valenzuela & Klypin. They conclude that 'the bar displays a convergent behavior for halo particle numbers between 10^6 and 10^7 particles, when comparing bar growth, pattern speed evolution, the DM [dark matter] halo density profile and a nonlinear analysis of the orbital resonances'. This makes it clear that, at least for their model, the pattern speed decreases with time for all reasonable values of particle numbers.

Figure 4.12 shows very schematically an interesting effect of the bar slowdown. The solid line shows the radial profile of Ω(r) for a very simple model with a constant circular velocity, but the following holds for any realistic circular velocity curve. Let us assume that at time t = t 1 the pattern speed is given by the dashed horizontal line and drops by t = t 2 to a lower value given by the dotted horizontal line, as shown by the vertical arrow. This induces a change in the location of the resonances. For example the CR, which at t 1 is located at 5 kpc, as given by the vertical dashed line, will move by t 2 considerably outwards to a distance beyond 6 kpc, as given by the dotted vertical line and shown by the horizontal arrow.
This also increases the region over which the bar is allowed to extend since, as shown by orbital theory (Contopoulos 1980 and Section 4.3), the bar size is limited by the CR. This schematic plot also makes it easy to understand how a 'fast' bar can slow down considerably while remaining 'fast'. As we saw in Section 4.3, a bar is defined to be 'fast' if the ratio R of the corotation radius to the bar length is less than 1.4. Thus, a bar that slows down and whose corotation radius increases can still have R < 1.4, provided the bar length increases accordingly. This occurs in a number of simulations, see, e.g., Dubinski et al. (2009).
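The point is easy to verify numerically for a toy flat-rotation-curve model like the one sketched in Fig. 4.12; the numbers below are illustrative assumptions, not values from a specific simulation.

import numpy as np

V_C = 220.0                      # flat circular velocity [km/s] (toy model)

def corotation_radius(omega_p):
    """For a flat rotation curve, Omega(r) = V_c / r, so R_CR = V_c / Omega_p."""
    return V_C / omega_p

def R_parameter(omega_p, bar_length):
    """R = R_CR / L_bar; bars with 1.0 < R < 1.4 are called fast."""
    return corotation_radius(omega_p) / bar_length

# A slowdown from 44 to 35 km/s/kpc moves the CR from 5 to about 6.3 kpc; if the
# bar grows from 4 to 5 kpc at the same time, it remains 'fast' (R < 1.4).
print(R_parameter(44.0, 4.0), R_parameter(35.0, 5.0))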
What sets the pattern speed value?
What sets the value of the pattern speed in a simulation (and thus also presumably in real galaxies)? The value of the pattern speed is set by the value of the corotation radius, which is in fact the borderline between emitters and absorbers. Thus, if the galaxy wants to maximise the amount of angular momentum it pushes outwards (i.e., the amount of angular momentum that it redistributes), it has to set this boundary, and therefore its bar pattern speed, appropriately. If the spheroid is massive, i.e., if it has sufficient mass in the resonant regions, then the bar can lower its pattern speed in order to have more emitters, since the absorbers are anyway strong. On the other hand if the spheroid is not sufficiently massive, then the bar should not lower its pattern speed overly, because it needs the absorption it can get from the outer disk. Thus, indirectly, it could be the capacity of the spheroid resonances to absorb angular momentum that sets the value of the bar pattern speed. This would mean that properties of the dark matter halo and of the classical bulge, such as their mass relative to that of the disk and their velocity dispersion at the resonant regions, will have a crucial role in setting the bar pattern speed.
Peanuts: input from simulations, orbits and observations
When bars form in N-body simulations they have a thin vertical density profile, similar to that of the disk. In other words, it is the in-plane rearrangement of the disk material that creates the bar, when initially near-planar and near-circular orbits become more elongated and material gets trapped around the stable periodic orbits of the x 1 family, as already discussed in Sections 4.3 and 4.6.4. This configuration, however, lasts for only a short while, after which the bar buckles out of the plane and becomes asymmetric with respect to the equatorial plane, as shown, e.g., in Combes et al. (1990) and Raha et al. (1991), and as is illustrated in the middle central panel of Fig. 4.1. This evolutionary phase, which can be called the asymmetry phase, is also very short-lived and soon the side-on view displays a clear peanut or boxy shape. During the peanut formation phase the strength of the bar decreases considerably (Combes et al. 1990; Raha et al. 1991; Debattista et al. 2004; Martínez-Valpuesta & Shlosman 2004; Martínez-Valpuesta et al. 2006; Athanassoula 2008a). Two examples of this decrease can be seen in Fig. 4.9, one for an MH-type simulation (where the bar strength decrease starts only at roughly 6 Gyr) and another for an MD-type simulation (where it already starts at roughly 3 Gyr). This decrease can sometimes be very important, so that it could get erroneously interpreted as a bar destruction.
These boxy/peanut structures had been observed in real galaxies many times, well before being seen in simulations. Due to the fact that they extend vertically well outside the disk, they were called bulges. More specifically, if they have a rectangular-like (box-like) outline they are called boxy bulges, and if their outline is more reminiscent of a peanut, they are called peanut bulges. Sometimes, however, this distinction is not made and the words 'boxes' or 'peanuts' are used indiscriminately, or the more generic term 'boxy/peanut' is used instead. A number of kinematical or photometrical observations followed and comparisons of their results with orbits and with simulations established the link of boxy/peanut bulges to bars (Kuijken & Merrifield 1995;Bureau & Freeman 1999;Merrifield & Kuijken 1999;Lütticke et al. 2000;Aronica et al. 2003;Chung & Bureau 2004;Athanassoula 2005b;Bureau & Athanassoula 2005;Bureau et al. 2006).
Peanut-related orbital structure
Considerable information on boxy/peanut structures can be obtained with the help of orbital structure theory. In 3D the orbital structure is much more complex than in 2D, as expected. Thus, the x 1 family has many sections (i.e., energy ranges) where its members are vertically unstable, and, at the energies where there is a transition from stability to instability, a 3D family can bifurcate (i.e., emerge). The orbits that are trapped around the stable l = 1, m = 2, n = 0 periodic orbits of this family can participate in the boxy/peanut structure (Patsis et al. 2003). They were discussed by Pfenniger (1984) and by Skokos et al. (2002a, b), who presented and described a number of relevant families. Since these orbits bifurcate from the x 1 and create vertically extended structures, they were named by Skokos et al. (2002a) by adding a v i ,i = 1, 2, ... after the x 1 , i.e., x 1 v i , i = 1, 2, ..., where i is the order of the bifurcation. Projected on the (x, y) plane, their shape is very similar to that of the members of the planar x 1 family. Good examples of such periodic orbits can be seen in Fig. 9 of Pfenniger (1984), or Fig. 7 to 10 of Skokos et al. (2002a).
Peanuts as parts of bars: shape and extent
Contrary to what has been very often said and written, boxy/peanut bulges are not bars seen edge-on. The correct statement is that boxy/peanut bulges are the inner parts of bars seen edge-on. The evidence for this was put together and discussed in Athanassoula (2005b) and I will only summarise it briefly here. Orbital structure theory shows that not all planar periodic orbits of the x 1 family are vertically unstable. In fact, the ones in the outer part of the bar are stable. Therefore, the outer part of the bar will stay thin and only the part within a given radius will thicken, so that the peanut will be shorter than the bar. This gives the bar an interesting form. As a very rough approximation, one can think of the bar as a rectangular parallelepiped box (like a shoe box), from the two smallest sides of which (perpendicular to the bar major axis) stick out thin extensions. Of course this is a very rough picture and the shape of the 'box' is in fact much more complex than a rectangular parallelepiped, while the extensions have shapes which are difficult to describe. The best is to look at an animation † where one can see a bar from a simulation, from various viewing angles.
How much longer is the bar than the peanut? The answer to this question is not unique and depends on which one of the x 1 v i families sets the end of the peanut, on the galactic potential and on the bar pattern speed (Patsis et al. 2003). Figure 4.13 gives an estimate of the ratio of boxy/peanut to thin bar length, for one of my simulations. In general, it is much easier to obtain an estimate of this ratio for simulations than for observed galaxies, because one can view snapshots from any desired angle. Thus, the length of the bar can be obtained from the face-on view (lower panel) as the major axis of the largest isophotal contour that has a bar shape. This of course introduces an uncertainty of a few to several percent, but is about as good as one can achieve with difficult quantities such as the bar length ‡. The size of the boxy/peanut part can be found from the edge-on view (upper panel). This also introduces an uncertainty, probably much larger than that of the bar length (AM02), but even so one can get reasonable estimates of the ratio of the two extents, and certainly make clear that the thin part of the bar can be much longer than the thick boxy/peanut part. Further discussion of this, and further examples can be found in Athanassoula (2005b), Athanassoula & Beaton (2006) and Athanassoula (2008a).
Estimates of the ratio of boxy/peanut to thin bar length can also be obtained from observations. Nevertheless, information for observed galaxies can be obtained from only one viewing angle and these estimates are less precise than the corresponding simulation ones. Figure 4.14 allows us to get an estimate for NGC 2654. Lütticke et al. (2000) made a cut along the major axis of this edge-on disk galaxy and from the projected surface density profile along it they obtained the thin bar length (compare with method (vi) of AM02). They also made cuts parallel to this, offset above or below it, and from them could obtain the extent of the boxy/peanut part. In this way, Lütticke et al. (2000) were able to measure the ratio of the extent of the thin part of the bar to the extent of the thick boxy/peanut part and show clearly that the former can be much longer than the latter.
The boxy/peanut system in the Milky Way
The bar shape described in the previous section has important implications for the structure of the Milky Way. It is now well established that our Galaxy is barred (e.g., de Vaucouleurs 1964;Blitz & Spergel 1991). The thick component which can be seen to extend outside the Galactic plane in the near-infrared COBE (COsmic Background Explorer) image is often referred to as the COBE /DIRBE (Diffuse Infrared Background Experiment) bar, or the thick bar. About ten years later, further evidence started accumulating and was initially interpreted as due to the existence of a second bar, longer than the first one and considerably thinner (Hammersley et al. 2000;Benjamin et al. 2005;Cabrera-Lavers et al. 2007;López-Corredoira et al. 2007;Cabrera-Lavers et al. 2008;Churchwell et al. 2009). This second bar has been named the Long bar. The existence of a second bar is very common in barred galaxies and about a fourth or a fifth of disk galaxies have both a primary or main bar and a secondary or inner bar (Erwin & Sparke 2002;Laine et al. 2002;Erwin 2011). However, the ratio of the lengths of the two presumed Milky Way bars is totally incompatible with what is observed in double-barred external galaxies (Romero-Gómez et al. 2011), and it would be very dangerous to assume that our Galaxy has morphological characteristics so different from those of external galaxies.
Fig. 4.14. Upper panel: Isophotes for the edge-on disk galaxy NGC 2654 in the near-infrared. Lower panel: Surface brightness profiles from cuts along, or parallel to, the major axis of this edge-on disk galaxy. From the cut along the major axis (uppermost curve), it is possible to obtain an estimate of the projected bar length (BAL on the plot). The size of the boxy/peanut bulge is obtained from cuts parallel to the major axis, but offset from it above or below the equatorial plane (BPL on the plot). (Figure 1 of Lütticke et al. 2000, reproduced with permission, © ESO.)
There are two very important clues that can help us understand the nature of the bar system in the Milky Way. The first one is that the COBE/DIRBE bar is thick while the Long bar is thin, their ratios of minor (z-) axis to major axis being of the order of 0.3 and 0.03, respectively. The second one is that the COBE/DIRBE bar is shorter than the Long bar by a factor of roughly 0.8. These clues, taken together with the discussion in Sections 4.8.2 and 4.8.3, point clearly to a solution where the thick COBE/DIRBE bar and the thin bar are just parts of the same single bar, the former being its thick boxy/peanut part and the latter being its outer thin part. This alternative was first proposed for our Galaxy by Athanassoula (2006, 2008b) and first tested by Cabrera-Lavers et al. (2007) using their red-clump giant measurements. This suggestion was disputed at the time, because a number of observations (Hammersley et al. 2000; Benjamin et al. 2005; Cabrera-Lavers et al. 2008) argued that the position angles of the COBE/DIRBE bar and of the Long bar are considerably different, with values between 15 and 30 degrees for the former and around 43 degrees for the latter.
Yet this difference in orientations is not a very strong argument. First, due to our location within the Galaxy, the estimates for the Galactic bar position angles are much less accurate than the corresponding estimates for external galaxies. Thus, Zasowski et al. (2012) find the position angle of the Long bar to be around 35 degrees, i.e., much closer to that of the COBE/DIRBE bar than the 43 degrees estimated in previous works. Second, if the shape of the outer isodensity contours of the thin part of the bar is, in the equatorial plane, more rectangular-like than elliptical-like - as is often the case in external galaxies (e.g., Athanassoula et al. 1990; Gadotti 2008, 2011) - the Long bar position angle will appear to be larger than it actually is. A third argument is that our Galaxy could well have an inner ring of the size of the bar. N-body simulations have shown that, in such cases, there is often within the ring a short, leading segment near the end of the bar, and such a segment, if mistaken for part of the bar, can further bias the measured position angle. Examples can be found in Fig. 2 in AM02 and in Fig. 3. In view of all the above comments, the small difference between the position angle of the COBE/DIRBE bar and that of the Long bar should not be a major concern. I thus still believe that my initial proposal, that the COBE/DIRBE and the Long bar are parts of the same bar, is correct.
Secular evolution of the disk and of its substructures
The presence of a bar induces not only the redistribution of angular momentum within the host galaxy (Sections 4.5 and 4.6), but also the redistribution of the material within it. The torques it exerts are such that material within the CR is pushed inwards, while material outside the CR is pushed outwards. As a result, there is a considerable redistribution of the disk mass.
Redistribution of the disk mass: formation of the disky bulge
It is well known that gas will concentrate in the inner parts of the disk under the influence of the gravitational torque of a bar, thus forming an inner disk whose extent is of the order of a kpc (Athanassoula 1992b; Wada & Habe 1992, 1995; Friedli & Benz 1993; Heller & Shlosman 1994; Sakamoto et al. 1999; Sheth et al. 2003; Regan & Teuben 2004). When this gaseous disk becomes sufficiently massive it will form stars, which should be observable as a young population in the central part of disks. Kormendy & Kennicutt (2004) estimate that the star formation rate density in this region is 0.1-1 M⊙ yr^-1 kpc^-2, i.e., one to three orders of magnitude higher than the star formation rate averaged over the whole disk. Such disks can harbour a number of substructures, such as spirals, rings, bright star-forming knots, dust lanes and even (inner) bars, as discussed, e.g., in Kormendy (1993), Carollo et al. (1998) and Kormendy & Kennicutt (2004). Furthermore, a considerable number of old stars are pushed inwards, so that this inner disk will also contain a considerable fraction of old stars (Grosbøl et al. 2004). Such disks are thus formed in N-body simulations even when the models have no gas, as seen, e.g., in AM02, or Athanassoula (2005b). Such inner disks are evident in projected surface luminosity radial profiles, as extra light in the central part of the disk, above the exponential profile fitting the remaining (non-central) part. Since this is one of the definitions for bulges, such inner disks have been linked to bulges. When fitting them with an r^{1/n} law - commonly known as Sérsic's law (Sérsic 1968) - the values found for n are of the order of, or less than, 2 (Kormendy & Kennicutt 2004 and references therein). They are thus often called disky bulges, or disk-like bulges (Athanassoula 2005b), or pseudobulges (Kormendy 1993; Kormendy & Kennicutt 2004).
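For reference, the r^{1/n} law can be evaluated in a few lines of code. The sketch below is purely illustrative and not taken from any fit discussed in this text: the function name and parameter values are invented, and b_n uses a standard approximation.

```python
import numpy as np

def sersic(r, i_e, r_e, n):
    """Sersic surface-brightness profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)).

    i_e is the intensity at the effective radius r_e; b_n uses a common
    approximation that is reasonable for n of order unity and above.
    """
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# Toy example: a disky-bulge-like profile with n < 2 (arbitrary units)
r = np.linspace(0.1, 5.0, 50)
profile = sersic(r, i_e=100.0, r_e=0.8, n=1.5)
```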
Redistribution of the disk mass: the disk scale-length and extent
Due to the bar torques and the resulting mass redistribution, the parts of the disk beyond corotation become more extended and the disk scale length increases considerably (e.g., Hohl 1971; Athanassoula & Misiriotis 2002; O'Neil & Dubinski 2003; Valenzuela & Klypin 2003; Debattista et al. 2006; Minchev et al. 2011). Debattista et al. (2006) showed that the value of the Toomre Q parameter (Toomre 1964) of the disk can strongly influence the size of this increase. Important extensions of the disk can also be brought about by flux-tube manifold spiral arms (Romero-Gómez et al. 2006, 2007; Athanassoula et al. 2009a, 2009b, 2010), as shown by Athanassoula (2012), who reported a strong extension of the disk size, by as much as 50%, after two or three episodes of spiral arm formation within a couple of Gyrs. Sackett (1997) and Bosma (2000) discuss a simple, straightforward criterion allowing us to distinguish maximum from sub-maximum disks. Consider the ratio S = V_d,max / V_tot, where V_d,max is the circular velocity due to the disk component and V_tot is the total circular velocity, both calculated at a radius equal to 2.2 disk scalelengths. According to Sackett (1997), this ratio has to be at least 0.75 for the disk to be considered maximum. Of course, in the case of strongly barred galaxies the velocity field is non-axisymmetric and one should consider azimuthally averaged rotation curves, or 'circular velocity' curves. Furthermore, in the case of strongly barred galaxies it is not easy to define a disk scalelength, so it is better to calculate S at the radius at which the disk rotation curve is maximum, which is a well-defined radius and is roughly equal to 2.2 disk scalelengths in the case of an axisymmetric exponential disk. After these small adjustments, we can apply this criterion to our simulations. In Section 4.6.5.1 we saw that the disks in MH models are sub-maximum at the beginning of the simulation, and in Section 4.9.1 that the bar can redistribute the disk material and, in particular, push material inwards and create a disky bulge. Is this redistribution sufficient to turn sub-maximum disks into maximum ones?
Redistribution of the disk mass: maximum versus sub-maximum disks
The answer is that this can indeed be the case, as has been shown previously and as illustrated in Fig. 4.15. This shows the Sackett parameter S and the bar strength as a function of time for one such simulation. Note that the disk is initially sub-maximum and that it stays so during the bar growth phase. Then the value of S increases very abruptly to a value larger than 0.75, so that the disk becomes maximum. After this abrupt increase the S-parameter hardly changes, although the bar strength increases considerably.
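As a concrete illustration of the criterion used here (with the adjustment of evaluating S at the peak of the disk rotation curve rather than at 2.2 scalelengths), the following sketch computes S from tabulated rotation curves. It is not code from the simulations described above; the curves in the example are invented toy profiles.

```python
import numpy as np

def sackett_parameter(r, v_disk, v_total):
    """Return the Sackett parameter S = V_d / V_tot and the radius where it is evaluated.

    S is computed at the radius where the (azimuthally averaged) disk rotation
    curve peaks; r, v_disk and v_total are 1-D arrays sampled at the same radii.
    """
    i_peak = np.argmax(v_disk)              # radius of the disk's peak circular velocity
    return v_disk[i_peak] / v_total[i_peak], r[i_peak]

# Illustrative use with made-up curves (not simulation output):
r = np.linspace(0.1, 15.0, 200)                        # radii in kpc
v_disk = 160.0 * (r / 3.0) * np.exp(1.0 - r / 3.0)     # toy disk curve peaking near 3 kpc
v_halo = 180.0 * r / np.sqrt(r**2 + 5.0**2)            # toy halo contribution
v_total = np.sqrt(v_disk**2 + v_halo**2)

s, r_peak = sackett_parameter(r, v_disk, v_total)
print(f"S = {s:.2f} at r = {r_peak:.1f} kpc ->",
      "maximum disk" if s >= 0.75 else "sub-maximum disk")
```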
Secular evolution of the halo component
The halo also undergoes some secular evolution, albeit not as strong as that of the disk. The most notable feature is that an initially axisymmetric halo becomes elongated in its innermost parts and forms what is usually called the 'halo bar', or the 'dark matter bar', although the word 'bar' in this context is rather exaggerated and 'oval' would have been more appropriate. This structure has been observed in a number of simulations (e.g., Debattista & Sellwood 2000; O'Neil & Dubinski 2003; Holley-Bockelmann et al. 2005; Berentzen & Shlosman 2006) and its properties have been studied in detail by Hernquist & Weinberg (1992), Athanassoula (2005a, 2007) and Colin et al. (2006). It is considerably shorter than the disk bar and its ellipticity is much smaller, while it rotates with roughly the same angular velocity. It is due to the particles in the halo ILR (Athanassoula 2003, 2007). A less clear-cut and certainly much more debated issue concerns the question of whether secular evolution due to a strong bar could erase the cusps predicted by cosmological simulations and turn them into cores, which would lead to agreement with observations. A few authors (e.g., Hernquist & Weinberg 1992; Weinberg & Katz 2002; Holley-Bockelmann et al. 2005; Weinberg & Katz 2007a, 2007b) argued that indeed such a flattening was possible, while a larger consensus was reached for the opposite conclusion (e.g., Sellwood 2003; McMillan & Dehnen 2005; Colin et al. 2006; Sellwood 2008). We refer the reader to these papers for more information.
Comparison with observations
Technically, comparison between observations and simulations is relatively straightforward. From the coordinates of the particles in the luminous components, and after choosing the viewing angles and taking into account the observational conditions, it is possible to obtain an image that can be output in the standard format used by observers, namely FITS (Flexible Image Transport System). This image can then be analysed as are observations, using standard packages, such as IRAF (Image Reduction and Analysis Facility). Similarly one can create data cubes, containing velocity information, which again will be analysed with the same software packages as observations. Taking into account the limitations of the instruments and more generally those due to observational conditions is an important feature here, as is the fact that it is the simulation data that must be transformed into observations and not the other way round.
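A minimal sketch of the first step of this pipeline is given below, assuming numpy and astropy are available; it bins particle positions into a surface-density map and writes a FITS file. The function name, pixel scale and units are arbitrary illustrative choices, and a realistic comparison would also add PSF convolution, noise and a mass-to-light conversion.

```python
import numpy as np
from astropy.io import fits  # standard FITS I/O

def particles_to_fits(x, y, mass, filename, npix=512, extent=20.0):
    """Project particle positions onto a 2-D surface-density map and save it as FITS.

    x, y are particle coordinates after rotating the snapshot to the chosen
    viewing angle; mass holds the particle masses; extent is the half-size of
    the map in the same length units as x and y.
    """
    edges = np.linspace(-extent, extent, npix + 1)
    image, _, _ = np.histogram2d(y, x, bins=[edges, edges], weights=mass)
    pixel_area = (2.0 * extent / npix) ** 2
    hdu = fits.PrimaryHDU(image / pixel_area)   # surface density per pixel
    hdu.header['BUNIT'] = 'mass / length^2'
    hdu.writeto(filename, overwrite=True)

# Example with random particles standing in for a snapshot:
rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 100_000)) * 3.0
particles_to_fits(x, y, np.ones(x.size), 'faceon.fits')
```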
There is, nevertheless, one subtle point concerning a limitation of dynamical simulations that should be kept in mind. It concerns the simulation time to be chosen for the comparison. As already mentioned, in dynamical simulations the disk is assumed to be in place and in equilibrium before the bar starts forming. On the contrary, in cosmological simulations the bar should start forming as soon as the relative disk mass is sufficiently high to allow the bar instability to proceed. One must add to this the uncertainty about when disks can be considered as being in place. All this taken together makes it very difficult to pinpoint the simulation time to be used for the comparisons. The best approach is to try a range of times and then describe how the fit evolves with time.
Summary and discussion
Angular momentum can be redistributed within a barred galaxy. It is emitted from the (near-)resonant stars in the bar region and absorbed by the (near-)resonant material in the spheroid and the outer disk. By following the orbits in a simulation and measuring their frequencies, it is possible to determine whether they are (near-)resonant or not, and, if so, at which resonance. For strong bar cases, the most populated disk resonance is the inner Lindblad resonance. Simulations confirm the theoretical prediction that this emits angular momentum, and that the corotation and outer Lindblad resonances absorb it. In the spheroid the three most populated resonances are the corotation, the outer Lindblad and the inner Lindblad resonance, and, in many cases, it is corotation that is the most populated. Again simulations confirm the theoretical prediction that angular momentum is absorbed at the spheroid resonances.
In order for bars to evolve uninhibited in a simulation, it is necessary that the angular momentum exchange is not artificially restrained, as would be the case if the halo in the simulation was rigid, e.g., represented by an axisymmetric force incapable of emitting or absorbing angular momentum. It is thus necessary to work with live haloes in simulations, and, more generally, to avoid the use of any rigid component.
Note also that the effect of the spheroid on bar growth is different in the early and in the late phases of the evolution. During the initial phases of the evolution, the spheroid, due to the strong axisymmetric force it exerts, delays and slows down the bar growth. Thus, bars will take longer to form in galaxies with a large ratio of spheroid-to-disk mass. On the other hand, at later stages, after the secular evolution has started, the spheroid can increase the bar strength by absorbing a large fraction of the angular momentum emitted from the bar region. Thus, stronger bars will be found in galaxies with a larger spheroid-to-disk mass ratio.
Contrary to the spheroid mass, the velocity dispersion in the disk always has the same effect on the bar growth. During the initial phases it slows down the bar growth. Thus, bars will take longer to form in galaxies with hot disks. During the secular evolution phase, a higher velocity dispersion in the disk component will make its resonances less active, since it decreases the amount of angular momentum that a resonance can emit or absorb. A similar comment can be made about the velocity dispersion of the spheroid (near-)resonant material. Thus, increasing the velocity dispersion in the disk and/or the spheroid will lead to less angular momentum redistribution and therefore weaker bars.
As the bar loses angular momentum, its pattern speed decreases, so that the resonant radii will move outwards with time. Since the corotation radius provides an absolute limit to the bar length, this increase implies that the bar can become longer. Indeed, this occurs in simulations. It is thus possible for the pattern speed to decrease while the bar stays 'fast', provided the bar becomes longer in such a way that the ratio R of corotation radius to bar length stays within the bracket 1.2 ± 0.2.
As the bar loses angular momentum it also becomes stronger, so that there is a correlation between the bar strength and the amount of angular momentum absorbed by the spheroid. In general, as bars become stronger they also become longer and their shape gets more rectangular-like. They redistribute mass within the disk and create the disky bulge (more often referred to as a pseudo-bulge) in the central region. They also increase the disk scalelength. All these changes brought about by the evolution can also strongly influence the form of the rotation curve and change an initially sub-maximum disk into a maximum one.
The strongest bars will be found in cases where the maximum amount of angular momentum has been redistributed within the galaxy, and not when the spheroid mass is maximum. A further parameter which is crucial in trying to maximise the angular momentum redistribution is the bar pattern speed. Indeed, this is set by the location of the corotation radius and therefore by the balance between emitters and absorbers in the disk.
When bars form they are vertically thin, but soon their inner parts puff up and form what is commonly known as the boxy/peanut bulge. This is well understood with the help of orbital structure theory. It gives a complex and interesting shape to the bar, which is vertically extended only over a radial extent from the centre to a maximum radius of the order of (0.7 ± 0.3) a_B, where a_B is the bar length, and very thin outside that range. This shape explains a number of observations and also argues that the COBE/DIRBE bar and the Long bar in our Galaxy are, respectively, the thick and the thin part of a single bar.
From the above it is thus possible to conclude that there is a continuous redistribution of angular momentum in disks with strong bars and that this drives a secular evolution. It is secular because the timescales involved are long, contrary to, e.g., a merging, which occurs in a very short time interval. | 2012-11-28T21:07:06.000Z | 2012-11-28T00:00:00.000 | {
"year": 2012,
"sha1": "694a54225af2748b041ee99afbe4393de383a4fa",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1211.6752.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "694a54225af2748b041ee99afbe4393de383a4fa",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
28708343 | pes2o/s2orc | v3-fos-license | Effects of Infant Pneumococcal Conjugate Vaccination on Serotype Distribution in Invasive Pneumococcal Disease among Children and Adults in Germany
This study describes the effects of the introduction of universal infant pneumococcal conjugate vaccination in 2006 on invasive pneumococcal disease (IPD) among children and adults in Germany with a focus on the dynamics of serotype distribution in vaccinated and non-vaccinated age groups. Over a period of 22 years (1992–2014), microbiological diagnostic laboratories from all over Germany have been sending isolates of IPD cases to the German National Reference Center for Streptococci on a voluntary basis. Streptococcus pneumoniae isolates were serotyped using Neufeld’s Quellung method. Among children <16 years, the proportion of PCV7 serotypes among isolates from IPD cases decreased from 61.8% before vaccination (1997–2006) to 23.5% in the early vaccination period (2007–2010; p = 1.30E-72) and sank further to 5.2% in the late vaccination period (2010–2014; p = 4.59E-25). Similar reductions were seen for the separate age groups <2 years, 2-4 years and 5-15 years. Among adults, the proportion of PCV7 serotypes decreased from 43.4% in the pre-vaccination period (1992–2006) to 24.7% (p = 3.78E-88) in the early vaccination period and 8.2% (p = 5.97E-161) in the late vaccination period. Both among children and among adults, the non-PCV7 serotypes 1, 3, 7F and 19A significantly increased in the early vaccination period. After the switch from PCV7 to PCV10/PCV13 for infant vaccination in 2010, serotypes 1, 6A and 7F significantly decreased. A decrease in serotype 19A was only observed in 2013–2014, as compared to 2010–2011 (children p = 4.16E-04, adults p = 6.98E-06). Among adults, serotype 3, which strongly increased in the early vaccination period (p = 4.44E-15), remained at a constant proportion in the late vaccination period. The proportion of non-PCV13 vaccine serotypes increased over the whole vaccination period, with serotypes 10A, 12F, 23B, 24F and 38 most significantly increasing among children and serotypes 6C, 12F, 15A, 22F and 23B increasing among adults. Eight years of childhood pneumococcal conjugate vaccination have had a strong effect on the pneumococcal population in Germany, both among the target group for vaccination as well as among older children and adults.
Introduction
The actual choice of vaccine made by the parents/physicians is reflected in the prescription data of different vaccine formulations for children under two years of age in Germany. Before 2009, only PCV7 was used. In 2009, 19.8% of prescriptions were PCV10, in 2010 this lowered to 5.6%, in 2011 to 2.6% and in 2014 only 1.9% of prescriptions were PCV10 (data from IMS Health, Germany, 'Verordnungsindex Pharma (VIP)').
For adults aged 60 years and older, the 23-valent polysaccharide vaccine (PPV23, SPMSD, Pneumovax) has been recommended since 1998, currently as a single dose. A further vaccination recommendation exists for all children and adults with an increased risk for pneumococcal disease due to underlying conditions. For children up to the age of four years, pneumococcal conjugate vaccine is recommended; for individuals older than 5 years, a vaccination with PCV13 or PPV23 is recommended [12].
This study describes the effects of the introduction of childhood pneumococcal conjugate vaccination on invasive pneumococcal disease among children and adults in Germany, focusing on the dynamics of serotype distributions in vaccinated and non-vaccinated age groups over a period of 22 years.
Study materials
The German National Reference Center for Streptococci (GNRCS) has conducted surveillance for IPD in Germany since 1992, using a laboratory-based approach. IPD cases were defined as Streptococcus pneumoniae isolates from blood, cerebrospinal fluid (CSF) or any other normally sterile body fluid. Microbiological diagnostic laboratories from all over Germany have been sending isolates of IPD cases to the GNRCS on a voluntary basis. In total, over 400 laboratories have participated, including large, nationally-operating commercial labs. Participating laboratories are located in all German federal states, and the number of laboratories per federal state correlates with the different population densities of the states. In the last 7 years of the study (2007-2008 to 2013-2014), reported cases varied between 0.7 per 100,000 inhabitants per year (Schleswig-Holstein) and 5.6 per 100,000 per year (Bremen) (Table in S1 Table).
Over the years, the surveillance system has been improved. In 2001, surveillance for adults was enhanced in North Rhine-Westphalia (22% of the German population), as well as in Bavaria and Saxony in 2006. On each occasion, all laboratories in the respective federal states were approached and asked to send in isolates. In 2007, a web-based surveillance system called PneumoWeb (www.rki.de/pneumoweb) was set up by the Robert Koch Institute in collaboration with the GNRCS. PneumoWeb enables the laboratories to report a case of IPD via an online system, and to directly print the corresponding information as a Case Report Form to send to the GNRCS, accompanied by the IPD isolate. The web-based system resulted in a large increase in reported cases for adults, whereas the number of cases for children remained at the same high level. For children, using our capture-recapture incidence calculations, we determined that before the vaccination recommendation, 40-50% of all IPD cases had a sample sent to the GNRCS. This percentage increased to 50-60% after vaccination introduction [18].
Characterization of isolates and serotyping
Species identification was performed using bile and optochin testing. In dubious cases, PCR analysis of several genes was performed (ply, lytA, sodA, 16S-rRNA). As a last resort, MLST was performed. Pneumococcal isolates were serotyped by Neufeld's Quellung reaction using type and factor sera provided by the Statens Serum Institut, Copenhagen, Denmark. Isolates were considered non-typeable when there was no reaction with any of the antisera.
Statistical methods
Cases were grouped per pneumococcal season (from July to June of consecutive years) because of known infection clusters during winter. For the analysis of vaccination effects, we defined three time periods. The pre-vaccination period from 1997-2006 comprises 9 pneumococcal seasons in which children were not vaccinated (for adults: 1992-2006, 14 seasons). The season 2006-2007 was considered a transition year in which pneumococcal conjugate vaccination was introduced, and was excluded from the analysis. The early vaccination period comprises the three seasons (2007-2008, 2008-2009 and 2009-2010) in which PCV7 was used, and the late vaccination period comprises four seasons (2010-2011, 2011-2012, 2012-2013 and 2013-2014) in which higher-valent vaccines (mainly PCV13) were used. To study the most recent effects of higher-valent vaccination, a direct comparison of the seasons 2010-2011 and 2013-2014 was made.
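For illustration only, the July-to-June season assignment can be expressed as in the short sketch below (Python is used here for convenience; the dates shown are hypothetical, not study data).

```python
from datetime import date

def pneumococcal_season(d: date) -> str:
    """Assign a date to its July-June pneumococcal season, e.g. 2009-07-15 -> '2009-2010'."""
    start_year = d.year if d.month >= 7 else d.year - 1
    return f"{start_year}-{start_year + 1}"

# Hypothetical isolate dates, for illustration only:
for d in [date(2006, 6, 30), date(2006, 7, 1), date(2010, 12, 24)]:
    print(d, "->", pneumococcal_season(d))
# 2006-06-30 -> 2005-2006, 2006-07-01 -> 2006-2007, 2010-12-24 -> 2010-2011
```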
Differences in proportions were tested by Fisher's exact test, with a two-sided p-value <0.05 considered statistically significant. Analyses were conducted using R (R Foundation for Statistical Computing, Vienna, Austria, 2014).
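The study's analyses were run in R; purely as an illustration of the test employed, the sketch below applies a two-sided Fisher's exact test to an invented 2x2 table of serotype counts (the numbers are not the study's data).

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: PCV7-serotype vs non-PCV7 isolates in two periods.
table = [[618, 382],   # pre-vaccination: PCV7, non-PCV7 (invented counts)
         [235, 765]]   # early vaccination: PCV7, non-PCV7 (invented counts)
odds_ratio, p_value = fisher_exact(table, alternative='two-sided')
print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.2e}")
```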
Ethical statement
An ethical approval was not required since the study was performed with Streptococcus pneumoniae isolates that resulted from routine microbiological diagnostic procedures as requested by the treating physician. No additional biological specimens were taken for the purpose of this study. Specimens were anonymized and only data on year and month of birth, sex, type of specimen and hospital/laboratory where the case was diagnosed were registered.
Results
From July 1992 until June 2014, a total of 3,853 isolates from invasive pneumococcal disease (IPD) among children (<16 years) and 20,382 isolates from IPD among adults (≥16 years) were received at the GNRCS. Of all isolates (24,235), 15.9% were from children under 16 years of age (11.7% from children under 5 years of age). 68.5% of the isolates were from adults over 50 years of age and 53.8% from adults over 60 years of age (Fig 1). The median age among isolates from children was 1 year (21 months), and 57.3% of isolates were from male patients (40.0% female, 2.7% gender unknown). Among adults the median age was 67 years, and 45.3% of isolates were from male patients (54.2% female, 0.5% gender unknown). All isolates from children were serotyped, whereas 20,104 isolates from adults were available for serotyping (Fig 2).
Effects of PCV on IPD among children
In the pre-vaccination period, PCV7 serotypes represented an average of 61.8% of all isolates among children. This proportion was reduced to 23.5% in the early vaccination period (p = 1.30E-72) and sank further to 5.2% in the late vaccination period (p = 4.59E-25). The reduction in the early vaccination period was highly significant for all PCV7 serotypes except for serotype 18C (p = 0.21). Reductions between the early and late vaccination period were much less significant, mainly due to the already very low numbers ( Table 1). The reduction of serotype 18C reached statistical significance in the late vaccination period (p = 4.36E-05). Similar reductions were seen for the separate age groups <2 years, 2-4 years and 5-15 years, though reductions in the two higher age groups are less significant due to lower numbers (Tables A-D in S1Text). Comparison of the season 2010-2011 with 2013-2014 shows that PCV7 serotypes have almost disappeared in all age groups.
Of the six extra serotypes included in PCV13 as compared to PCV7, four showed an increase in the early vaccination period: 1 (p = 1.24E-10), 3 (p = 3.86E-04), 7F (p = 3.82E-10) and 19A (p = 2.05E-07). The proportion of serotype 6A increased, but not significantly, whereas the number of serotype 5 isolates was too low for analysis. In the late vaccination period, a significant decrease was observed for serotypes 1, 6A and 7F. Serotypes 3 and 19A further increased, but the increase was no longer statistically significant. The increase in serotypes 19A and 3 persisted well into the late vaccination period. A decrease in serotype 19A was only observed in 2013-2014, as compared to 2010-2011 (p = 4.16E-04). However, when looking at the separate age groups, the impact of the higher-valent vaccination becomes very clear among children <2 years of age, with each of the six extra serotypes decreasing (Tables A-D in S1Text). The proportion of non-PCV13 vaccine serotypes among IPD in children in the pre-vaccination period was 15.6%. This proportion increased to 29.7% in the early vaccination period, and to 59.2% in the late vaccination period. The most significantly increasing serotypes were 10A, 12F, 23B, 24F and 38. Table 1 lists only those non-PCV13 serotypes which showed statistically significant changes. Table A in S1Text lists all non-PCV13 serotypes.
Non-PCV13 serotypes made up 27.7% of all isolates in the pre-vaccination period, increasing to 33.4% in the early vaccination period, to 52.7% in the late vaccination period and to 63.7% in the last season (2013-2014). The most significantly increasing serotypes among adults were 6C, 12F, 15A, 22F and 23B. Table 2 lists only those non-PCV13 serotypes which showed statistically significant changes. Table B in S1Text lists all non-PCV13 serotypes.
Serotype 19A
Among children <2 years of age, cases with serotype 19A increased in the early vaccination period (p = 3.03E-08). During the late vaccination period, 19A decreased as compared to the early vaccination period, but not significantly (p = 9.07E-01). A significant decrease in 19A cases was only observed when comparing 2010-2011 to 2013-2014 (p = 1.06E-03). Among children 2-4 years of age, 19A cases increased significantly in both the early and the late vaccination period, and a (non-significant) decrease was only seen in the last surveillance years (2010-2011 vs. 2013-2014; p = 2.36E-01). Among older children (5-15 years of age), neither the increase in 19A cases under vaccination nor the decrease in the later surveillance years reached statistical significance (Tables A-D in S1Text). Among adults, the dynamics of serotype 19A were similar: a significant increase in all age groups during both the early and the late vaccination periods. Only when comparing 2010-2011 to 2013-2014 was a significant decrease in reported cases with serotype 19A observed, and this was seen for all age groups except one, in which the decrease did not reach significance (p = 4.27E-01; Tables E-I in S1Text).
Serotype 3
In the early vaccination period, reported cases with serotype 3 significantly increased in all age groups, except for children 5-15 years of age, among whom a non-significant decrease was observed (p = 8.36E-01). In the late vaccination period, and also when comparing 2010-2011 to 2013-2014, no significant changes in serotype 3 levels were observed in any age group (Tables A-I in S1Text).
Serotype 1
The number of reported cases with serotype 1 increased in the early vaccination period in each separate age group, reaching statistical significance among children 0-1 years and 2-4 years, and among adults 16-49 years and 50-60 years. In the late vaccination period, a steep decrease in the number of reported serotype 1 cases was observed among all age groups, most significantly in one age group (p = 2.51E-07) and among >75 year olds (p = 9.42E-07) (Tables A-I in S1Text).
PPV23 serotypes among adults
In the pre-vaccination period, PPV23 serotypes were responsible for 87.4% of all IPD cases reported from adults (Table 2), with little variation in this percentage over the 14 pre-vaccination seasons (84.2%-90.6%). In the early vaccination period, this percentage decreased significantly to 84.1% (p = 3.38E-06), due to the significant decrease in PCV7 serotypes (p = 3.78E-88). The decrease became more significant in the late vaccination period (76.5%, p = 5.00E-29), caused by the continuing decrease in PCV7 serotypes (p = 5.97E-161) and the decrease in PCV13-non-PCV7 serotypes (p = 8.68E-04). Several of the serotypes included in PPV23 but not in PCV13 significantly increased, either over the whole vaccination period (22F, 33F), or only in the late vaccination period (8, 12F).
Serotypes in transition season 2006-2007
Two serotypes were found only once and only in the transition season 2006-2007. This concerned a case of serotype 10F in a child and a case of the new serotype 6G (see below) in an adult.
New serotypes
In the course of this study, two new serogroup 6 serotypes (6F and 6G) were discovered, which have been described by Melissa Oliver and our group [19].
Dynamics of serotype distribution over time
The development over time of the serotype distribution for seven different age groups is presented in Fig 3. Immediately after the start of PCV7 vaccination, a steep drop in reported cases with PCV7 serotypes was observed for children <2 years of age, resulting in almost no reported PCV7 cases in 2013-2014. Among 2-4 year olds, a similar decrease was observed, although it was slower. Among children aged 5-15 years, a decrease in PCV7 serotypes was only observed starting in 2010-2011. In all three age groups, the PCV13-non-PCV7 serotypes increased in the early vaccination period, but decreased in the late vaccination period. Non-PCV13 serotypes increased among all three childhood age groups, but to a lesser extent among older children (5-15 years of age). The increase in non-PCV13 serotypes was most pronounced for children <2 years and from 2011-2014. The total number of reported IPD cases has decreased in all childhood age groups in the vaccination period. Among adults, a reduction of PCV7 serotypes was observed in all four age groups starting from 2008-2009. PCV13-non-PCV7 serotypes increased considerably in all age groups in the early vaccination period, and decreased again in the late vaccination period. The decrease was much less pronounced among older adults (>75 years of age). Cases with non-PCV13 serotypes have increased in all four adult age groups and during the whole vaccination period (2007-2014). A reduction in the total number of reported IPD cases was not observed in any of the four adult age groups (2007-2014). Fig 4A shows the dynamics of the individual serotypes among children over the surveillance period (1997-2014). In the early vaccination period, PCV7 serotypes (blue) decreased, whereas PCV13-non-PCV7 serotypes (green and orange), as well as non-PCV13 serotypes (black, grey and purple), increased. In the late vaccination period, PCV13-non-PCV7 serotypes decreased, while the non-PCV13 serotypes continued to increase.
Discussion
Our study has several limitations. Since IPD is not a notifiable disease in Germany, isolates were sent in by clinical microbiological laboratories on a voluntary basis, which bears the risk of under-reporting. Furthermore, the systematic sampling of invasive isolates from adults (1992) and children (1997) was taken up at different times, and for adults, the surveillance included population-based studies in three German federal states: North Rhine-Westphalia, started in 2001; Bavaria, started in 2006; and Saxony, started in 2006. Over the years, the surveillance project has been continuously intensified, particularly after the recommendation for universal infant vaccination against pneumococci was issued. This recommendation unavoidably led to an increased awareness of IPD among clinical microbiologists and pediatricians. Finally, in 2007, the introduction of PneumoWeb, a web-based reporting system, contributed to an increased number of reported cases of IPD among adults, from 200-500 cases per season to over 2000 cases per season. The introduction of PneumoWeb did not lead to increased reporting among children, reflecting the good level of reporting already reached in previous years.
The impact of pneumococcal conjugate vaccination on IPD caused by PCV7 serotypes has been vast, with PCV7 serotypes having almost disappeared among children <2 years of age. Similarly, PCV7 serotypes have been strongly reduced among older children and adults, indicating herd protection. These effects are in accordance with reductions of IPD incidence among children published by our group [20] and with reports from other countries that have introduced pneumococcal conjugate vaccination [21]. Among the PCV13-non-PCV7 serotypes, serotypes 1, 3, 7F and 19A increased in the early vaccination period (in all age groups), when PCV7 was used. These serotypes apparently occupied the niche vacated by the PCV7 serotypes. This replacement was observed in other countries as well [10]. After the introduction of higher-valent vaccination, serotypes 1, 7F and 6A decreased in all age groups. This immediate effect of higher-valent vaccination was also observed in other countries [22][23][24][25]. The steep rise in serotype 19A cases, which occurred in the early vaccination period, was only reverted in 2013-2014. Effects on serotype 19A have been more direct in other countries [26]. An explanation for the delayed effect in Germany could be the late onset of the increase in 19A after the start of PCV7 vaccination [27]. Therefore, when higher-valent vaccines were introduced, 19A was still steeply increasing, and it took time for this increase to first diminish and then be reversed. Obviously, the decrease in non-targeted age groups appeared with even more delay.
A decrease of serotype 3 cases was observed among children <2 years of age, in the late vaccination period, but it did not reach significance. Case numbers of serotype 3 among children are very low, and therefore it is difficult to judge whether there is a vaccination effect. However, serotype 3 cases did not increase, which might have been expected if the vaccine had no effect on serotype 3 at all. Interestingly, a herd protection effect towards the older age groups was not observed either. Among adults, serotype 3 has increased strongly and is now the most prevalent serotype. Steens et al. report very low numbers of serotype 3 cases among children <2 years in Norway, and also did not observe an effect of PCV13 on serotype 3 in non-targeted age groups [24]. Harboe et al. report no change in serotype 3 following PCV13 vaccination in Denmark [25]. Miller et al. report a non-significant reduction of serotype 3 since PCV13 introduction, whereas Kaplan et al. report a 68% decrease in serotype 3 among patients from eight US children's hospitals, although numbers were low [28].
The significant increase of non-PCV serotypes shows that replacement is an issue after each PCV introduction, even though the net effect of vaccination remains positive. Several of the upcoming serotypes in this study (6C, 10A, 12F, 15A, 22F, 23B, 24F and 38), have been described in other studies as well [24,25]. The most strongly increasing serotypes, 23B and 15A, were also found to be increasing in Norway [24] and in Hong Kong [29], respectively.
The immediate, strong and lasting decrease in serotype 1 cases in all age groups in the late vaccination period is enigmatic. For other serotypes, the herd protection effects among non-targeted age groups came with a delay of about one year. For serotype 1, the observed reduction was as fast as in the directly vaccinated age groups. A reduction in serotype 1 was not observed among adults in Denmark [25], but it was seen in Norway, and appears to have occurred just as fast [24].
In the early vaccination period, an immediate decrease of the PCV7 serotypes was observed among children <2 years of age. Among 2-4 year old children, an immediate decrease was also seen, but it was slower. Among older children (5-15 years), and among adults, the decrease came with a considerable delay (2010-2011 and 2008-2009, respectively). This shows that herd protection often comes into effect with a time delay. For the PCV13-non-PCV7 serotypes, an increase in the early vaccination period, followed by a decrease in the late vaccination period was observed in all age groups. Again, the decrease occurred with a delay in the non-vaccinated age groups. Interestingly, the decrease was less strong among the oldest adults (>75 years).
The share of PPV23 serotypes among adult IPD cases remained the same over the entire pre-vaccination period, indicating little effect of PPV23 vaccination. This could be due either to limited effectiveness of the PPV23 vaccine or to low levels of vaccination among adults in Germany (31%, [17]). The PPV23 serotype share changed only slightly in the early vaccination period as compared to pre-vaccination times. This was due to the fact that all of the replacement serotypes (1, 3, 7F, 19A) were included in PPV23. In the late vaccination period, the share of PPV23 serotypes decreased sharply, as now only some of the replacement serotypes were included in PPV23 (8, 12F, 22F, 33F), whereas others were not (6C, 15A, 23B).
It is of interest that a total of 17 serotypes have never been detected during our 22 years of surveillance. Obviously, serotype 11E cannot be distinguished from 11A using antisera and therefore was not detected in our study [30]. The remaining 16 serotypes, however, are so rare that they are hardly ever (if at all) reported in any surveillance studies. The current epidemiological relevance of these serotypes therefore remains unclear.
In the course of this study, two new serotypes (6F and 6G) were detected [19]. Both isolates show point mutations in the capsular genes, resulting in new variants of the capsular polysaccharide. Whether these serotypes arose as a consequence of vaccination pressure remains unclear, since one of the variants already appeared in 2006, i.e. before the start of vaccination in Germany.
Conclusions
Eight years of childhood pneumococcal conjugate vaccination have had a strong effect on the pneumococcal population in Germany, both among vaccinated children as well as among nonvaccinated children and adults. Serotypes included in the vaccines have strongly diminished, but have not disappeared completely. Non-vaccine serotypes have gained importance, with several single serotypes occurring much more frequently than others. These phenomena stress the importance of continued surveillance in order to monitor the dynamics of the pneumococcal population under vaccination pressure and inform the development of higher-valent pneumococcal vaccines.
Supporting Information S1 | 2018-04-03T01:14:39.027Z | 2015-07-01T00:00:00.000 | {
"year": 2015,
"sha1": "c8fde17b48c0f558cd29c69810b89001b0cb1744",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0131494&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8fde17b48c0f558cd29c69810b89001b0cb1744",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
169455836 | pes2o/s2orc | v3-fos-license | Sustainable tourism and harmonious culture: a case study of cultic model at village tourism
The research aims to analyze an event model, the Culture and Tourism International Camp (Cultic), from two aspects: harmonious culture and sustainable tourism. The Indonesian government currently promotes village tourism by involving more villagers, so that villages become independent in their own development. The program has faced various obstacles, such as the erosion of local cultures caused by massive, money-oriented tourism development that pays little attention to environmental damage. One of the programs offered is a green tourism model for an event named Culture and Tourism International Camps (Cultic). The research was conducted in several stages. The first stage was the development of the model based on a theoretical study. The second stage was the implementation of the model with 85 participants. The third stage was the evaluation of the model through harmonious culture and sustainable tourism approaches. The data were collected through direct observation and a questionnaire. The results of the qualitative analysis indicate that the developed event model supports harmonious culture, especially with respect to the natural environment. The results of the quantitative analysis indicate that the participants enjoyed the activities, such as green food, natural materials, waste management, and the ecosystem program. Another finding is that the community strongly supports the concept of sustainable tourism.
Introduction
The concept of the tourism village in Indonesia is in line with the results of previous studies [35,14,22] in that it balances the economy, socio-culture and environment, with more emphasis on the culture. The three components go hand in hand in the tourism village, thus creating a sustainable tourism village. The government of Indonesia develops tourism villages to advance the villages and distribute income. One province selected as the pilot project, with a target of 100 tourism villages in 2018, is Bali Province [19]. The program has been conducted since 2013 and is based on the number of foreign tourists in Indonesia, 38.45% of whom went to Bali [15]. In 2015, the number of foreign tourists who came to Bali was 4,001,835 [15], whereas the number of national tourists was 7,147,100 [16].
The plan to develop tourism through a concept of numerous tourism villages will have impacts: new infrastructure development, a rise in the production of waste, the alteration of ecosystems, the introduction of exotic species of animals and plants, the loss of traditional habits, and an increase in the prices of goods and services (e.g., houses and labor around the tourist destination) [11]. Other impacts are rises in crime and drug abuse because of weakened social ties with the local people. Bali, a small island, is also inevitably facing these alarming issues. To reduce the negative impacts, sustainable tourism is the chosen concept, as stated by Bramwell and Lane [12]: sustainable tourism is a positive approach intended to reduce the tensions and friction created by the complex interactions between the tourism industry, visitors, the environment and the communities which are host to holidaymakers. Lane has added that sustainable tourism is a concept designed not to restrict tourism but to manage it in the interests of all three parties involved - the host habitats and communities, the tourists, and the industry itself [20]. It seeks a balance between development and conservation and the best form of tourism in terms of the relationship between ecology and culture. Walker & Moscardo explained that an ecotourism definition typically includes three key features: i) it is conducted in natural environments; ii) ecotourism businesses are required to be conducted such that they make positive contributions to all dimensions of sustainability in the places visited; and iii) there is an explicit focus on providing opportunities for tourists to learn about, understand and develop positive attitudes towards sustainability, both in the places visited and more generally in their lives beyond the particular tour [33].
The results of previous studies illustrate that sustainable tourism is increasingly important to implement in a tourism village in order to maintain the natural balance. Sustainable development in the tourism field is not limited to nature but is also oriented towards past and future community welfare [13]. Tourism is a power of the world economy, contributing 9 percent of world GDP (United Nations World Tourism Organization) [30]. The importance of tourism has led European experts to study the field. Postma stated that the development of a sustainable tourism industry in 2040 can be divided into four scenarios: back to the seventies, captured in fear, shoulders to the wheel, and unique in the world [26]. The scenarios explain how the roles of the economy and of resources facilitate the implementation of sustainable strategies to achieve community welfare [24]. The use of resources, such as culture, is the concern of experts developing sustainable tourism in Europe [26], followed by developing countries such as Indonesia. Culture is a resource with the power to develop tourism; for example, the harmonious culture implemented in Bali has been able to maintain Ubud Village as one of the best world destinations to the present day [3]. The harmonious culture, called tri hita karana (the three causes of happiness), is a sustainability concept and is recognized by the WTO for application in the tourism industry. The first element of the culture is maintaining harmony with the creator of nature and its contents through religious rituals, such as worship ceremonies. These activities become a tourism attraction [5]. The second element is maintaining harmony with other human beings naturally, without distinguishing one from another. This is implemented through the assimilation of culture and religion that respects humans from birth to death, for example, the ngaben funeral ceremony [8,4,7]. The third element is maintaining harmony with nature, where the community keeps the environment well since they believe that keeping nature is a form of devotion to God; therefore, the process of nature conservation is arranged in customary and religious rules [9]. Harmonious culture has an important role in building human character and achieving sustainable tourism, as well as in increasing the performance of a company [36]. This condition is maintained in Bali through the socialization of the culture via information technology, community meetings at the village banjars, and other meetings [10].
Community welfare is the goal of tourism development, and it is not easy to achieve because of the diverse resources owned by each village. Therefore, an innovative effort is needed, through research, to create products that are able to strengthen the tourism village. Based on previous studies, event tourism has an important role in advancing a destination [18], as it integrates various factors such as socio-culture, technology, economy, politics and ecology. Event activities conducted regularly in an area can turn the area into a new tourism destination and have impacts on the economy, socio-culture, and environment. Cultic (Culture and Tourism International Camps) is an event package created by the State Polytechnic of Bali for students, with the purpose of introducing the culture, nature, and use of technology in a tourism village [6]. The event is designed to promote tourism villages as world tourism destinations and to support sustainable tourism.
Previous studies on events have been limited to certain areas. No events integrated with the local culture had been conducted in a tourism village; therefore, this research is important for supporting sustainable tourism. The research aimed to study the Cultic event held in 2017 at Pinge Village. It was the first trial of the model before implementation in 100 villages in Bali. The event model is intended as a strategy for developing tourism villages that are currently under-developed. The event concept refers to sustainable tourism integrated with the harmonious culture. The participants were 85 students from various countries: Indonesia, Thailand, the Philippines, Papua New Guinea, Malaysia, France, Czechoslovakia, and Yugoslavia. The research used qualitative and quantitative studies in three stages. The first stage was adjusting a model developed by Santosa through direct observation of the program given to the guests and analyzing it with the harmonious culture concept [37]. The next stage was studying the tourists' perception of the program they followed, through a questionnaire. The last stage was studying the perception of Pinge villagers of the Cultic event through a sustainability concept developed by Zamfir & Corbos [35]. The number of respondents in the study was 160 heads of families in Pinge Village. Data were analyzed qualitatively and with descriptive statistics. The background of the research was that tourism villages are under-developed as tourism destinations, and previous studies indicated that sustainable tourism and event concepts are among the ways to advance such destinations. The study was based on several theoretical strands: sustainable tourism, the tourism village, events, and harmonious culture. The discussion was conducted through qualitative and descriptive statistical methods. The result of the discussion is that the Cultic event is a new model supporting green tourism that integrates the modern concept with a traditional concept rooted in harmonious culture.
Literature review
Sustainable tourism
The concept of sustainable tourism has been developed since the early 1990s as part of the sustainable development concept [32,35,14]. According to Dangi and Jamal [16], sustainable tourism is defined as tourism activity that emphasizes the current condition and the future impact on the economy, society, and environment, and that satisfies the needs of tourists, industry, the environment, and local communities (stakeholders). In addition to the three pillars, sustainable tourism also emphasizes the increasingly important role of the stakeholders in the sustainable development of a tourism destination area. The synergy among the three pillars of sustainable development and the stakeholders is important for the concept of sustainable tourism. Sustainable tourism covers all types of tourism: mass tourism, cultural tourism, mountain tourism, seaside tourism, spa tourism, business tourism, medical tourism, rural tourism, urban tourism and so on [35]. The principles of sustainable tourism are: (1) the local community should manage the tourism activities in their area; (2) the tourism should provide jobs for the community to improve their welfare; (3) international standards should be used as a reference; and (4) education and training should be conducted to improve the management of local tourism in order to protect the environment and nature [35].
Event Tourism
One program offered by the Center of Excellence of the State Polytechnic of Bali is the Culture and Tourism International Camps (Cultic). It is an event package sold to students at both national and international levels [6]. One of the activities was conducted at Pinge Village, where participants in the event gain knowledge of tourism products such as plowing, tracking, cooking classes, painting, and a workshop on the metegak culture [6]. An event can be beneficial for developing a tourism destination [18] and brings benefits to the economy, socio-culture, and environment. Hornga [39] investigated the relationships in a behavioral model of festival visitors, based on a major festival encouraging energy saving and carbon reduction (ESCR), using the 2010 Taipei International Flora Exposition (Taiwan) as a case study. The debate on sustainable tourism destinations is thereby shifted from an emphasis on ecotourism and eco-resorts towards sustainable urban tourism destinations. Five major antecedents to those categories are explored: habitual behavior, environmental attitudes, available facilities, a need to take a break from environmental duties, and a sense of tourist social responsibility. Existing habits were found to strongly influence all four urban pro-environmental behaviors. A range of policy recommendations for the tourism industry and public sector agencies is made, in terms of developing well-specified, well-sited and easy-to-find and easy-to-use environmental infrastructure assets such as recycling facilities and public transport, reducing implementation barriers, and formulating an overall pro-environmental image for the destination [23]. All of these previous studies show that a successful green concept for event implementation cannot stand alone but should be integrated with other factors, especially socio-cultural and economic ones. Therefore, this study also examined several supporting aspects for the green event model that was designed previously, so that the model can become one of the guidelines for sustainable tourism development on Bali Island in general.
Tourism Village
A tourism village is the development of a village that integrates attractions, accommodation, and supporting facilities, presented within the structure of community life [2,29]. Bali Province has formed 53 tourism villages, and 47 other villages are in the process of being formed as tourism villages; they are spread throughout the regencies and cities. The reasons behind the need to develop tourism villages are: (1) it is a relevant way to attract humanity- and culture-oriented tourists who also have environmental awareness; (2) it increases local community welfare by opening up higher-profit opportunities; and (3) it can stimulate the development of the village. The concept of the tourism village is similar to community-based tourism (CBT), promoted as a way to develop tourism where the social, environmental and economic needs of the local community are fulfilled through the tourism products offered [17,31]. CBT is a tool to achieve sustainable tourism [21]. It is a form of tourism aimed at involving and benefiting the local community, especially villagers. One example of the CBT concept is a tourism village where the villagers manage their own tourism potential through shared management and profit sharing [1]. The main principle of CBT is to raise the standard of living of the local community. The characteristics of CBT are: (1) the benefits are enjoyed by the local communities; (2) infrastructure is shared; (3) there is equality in receiving the benefits; (4) there is initiative to protect the environment; (5) outside companies can form joint ventures with the local communities; (6) the communities own and manage the enterprise; (7) although an enterprise may be owned by the private sector, the profit is for the local community; (8) tourism product networks are developed; (9) it is cooperative; and (10) private sectors are developed in the empowerment of village potential.
THK (Tri Hita Karana) Culture / Harmonious Culture
THK culture is a culture that originates from local wisdom. According to Sobirin [28], national cultures are formed for different reasons, since each country emerges from a different background. Therefore, various factors, such as ethnicity, economy, politics, religion, or language, contribute to the formation of the national culture. Schein [27] stated that organizational culture operates at three levels. The first level is artifacts: things modified by humans for certain purposes, which can be observed directly in the structure of an organization as well as in the processes conducted within it. Artifacts are the easiest element to capture when entering an organization, since they relate to what one sees, hears, and feels in the organizational environment. The second level is espoused beliefs and values: the supporting values consisting of the strategy, goals, and basic philosophy of the organization, which can be understood by exploring the organization and living within it for longer. The supporting values are usually expressed in writing and become the reference for every step made by members of the organization. The third level is underlying basic assumptions: the shared, implicit assumptions. Values, beliefs, and assumptions used by the founder are considered important for the success of an organization. THK is the product of subjective and interpretative human behaviors. Therefore, symbols are built through subjective understanding related to phenomena that have objective consequences. Regarding THK, parahyangan is analogous to the subsystem of value, pawongan is analogous to the subsystem of social relations, and palemahan is analogous to the subsystem of artifacts [34]. The culture is often called a harmonious culture, which plays a role in increasing financial performance [5,7]. Other studies have indicated the role of harmonious culture in contributing to a decrease in credit risk in microfinance institutions in Bali [8,4,9]. The harmonious culture developed in Bali is the foundation for developing tourism. Astawa and Sukawati [3] state that the harmonious culture implemented by the Ubud community has become a tourism attraction and a differentiator from other tourism products. The activities of harmonious culture are very important, so socialization to the communities is also important. Through information technology, the community's understanding of cultural values becomes stronger, which affects behavior in running a business [10]. Harmonious culture has also been used to measure financial performance, with the expectation that good implementation of harmonious culture will bring good financial performance [36].
Methodology
The research was experimental, using qualitative and quantitative analysis techniques, and was divided into three stages. The first stage was adjusting the developed model based on previous research [37]. Based on the initial survey results, the developed research model of the CULTIC event consisted of a green aspect and a cultural aspect, as presented in Figure 1.
Figure 1. Research Model of Cultic Event
The research used the green and cultural aspects to study the event activities over three days and two nights at Pinge Village. There were 85 participants in the event. The second stage was a review by the event participants of the program they had bought or followed. The participants completed a 5-point Likert scale questionnaire (5 = very happy, 4 = happy, 3 = quite happy, 2 = less happy, 1 = unhappy) that consisted of green food, natural materials, waste management, and ecosystem items. The third stage was a review of the benefits of the CULTIC event for the community of Pinge Village, which consisted of 160 heads of families. The questionnaire was developed based on previous research [35] and consisted of the following items: (1) the local community should manage the tourism activities in their area; (2) tourism should provide jobs for the community to improve their welfare; (3) international standards should be used as a reference; and (4) education and training should be provided to improve the management of local tourism to protect the environment and nature. The questionnaire used a 5-point Likert scale (5 = strongly agree, 4 = agree, 3 = fairly agree, 2 = less agree, 1 = disagree). The data collected were analyzed using descriptive statistics, and a focus group was conducted to validate the qualitative data.
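As a minimal illustration of the descriptive analysis used in the second and third stages, the sketch below tallies 5-point Likert responses into counts and percentages. The response values shown are hypothetical placeholders, not the study data.

```python
from collections import Counter

# Hypothetical 5-point Likert responses (5 = very happy ... 1 = unhappy)
responses = [5, 5, 4, 5, 3, 5, 5, 4, 5, 5]

counts = Counter(responses)
total = len(responses)

# Report the count and share of each scale point, as summarised in Tables 2 and 3
for scale in range(5, 0, -1):
    n = counts.get(scale, 0)
    print(f"scale {scale}: {n} responses ({100 * n / total:.2f}%)")
```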
Results and Discussion
This section explains the results of the qualitative study of the CULTIC activities in relation to the harmonious culture concept implemented by the community. The culture illustrates a harmonious condition in life. The concept maintains harmonious relationships with God, human beings, and the natural environment, implemented through religious rituals. The belief is that harmony among the three components brings happiness in life, while any imbalance brings problems and affects human life. This simple concept reinforces the development of sustainable or green tourism, although in reality the local culture is already considered green. Thus, education and real examples are needed in a green-concept tourism package. The quantitative study therefore illustrates the benefit of a sustainability-based attraction.
Cultic Event in Harmonious Culture
The Culture and Tourism International Camps (CULTIC) activity at Pinge Village was conducted over three days and two nights. The number of participants was eighty-five, consisting of twelve foreign students and seventy-three local students. The results of the qualitative study based on the developed model are explained in Table 1. Based on Table 1, it can be seen that the green event concept, conducted according to the developed model, fulfilled the concept of harmonious culture, or green culture, in Pinge Village, which places greater emphasis on the environmental and human aspects. The God aspect was addressed when participants made a canangsari, a symbolic offering for drawing closer to God, and when learning about the irrigation system (subak), during which the prayers conducted by farmers while working in the rice fields were also explained.
Analysis of Cultic Participants
The perception of the participants of the CULTIC activities was needed to develop a better model. The questionnaire was a five-point scale with the following meaning: 5 = very happy (SS), 4 = happy (S), 3 = fairly happy (CS), 2 = less happy (KS), and 1 = unhappy (TS). The results of the questionnaire distribution are presented in Table 2 (average row: 81 / 3 / 1 / - / -, total 85; source: processed data). Based on Table 2, it can be seen that, on average, 81 participants gave the highest rating, meaning that the environment-related activities in the CULTIC event suited their preferences. It was also found that, on average, three people gave the second-highest rating and one gave a lower rating. The favorite activity of the event was plowing, followed by introduction to the environment, introduction to subak, not using or carrying plastic tools, processing waste into compost, processing cow dung into biogas, eating together on a mat (metegak), introduction to types of local vegetables and fruits, making garbage bins from bamboo, cooking traditional foods, making canangsari using natural materials, and a cooking class using local materials at the village. On the whole, the activities at the event were well received by the participants.
The Benefit of Cultic Event
The implementation of the CULTIC event based on sustainable tourism covered the following principles: (1) the local community should manage the tourism activities in their area; (2) tourism should provide jobs for the community to improve their welfare; (3) international standards should be used as a reference; and (4) education and training should be provided to improve the management of local tourism to protect the environment and nature [35]. Based on this reference, a questionnaire was built consisting of the following items: the involvement of tourism awareness organizations at Pinge Village in the event; the involvement of the community in serving the guests; the result of event sales being given to the village; the implementation of international standards in serving the guests; and training in environmental conservation by the State Polytechnic of Bali. The questionnaire used a 5-point Likert scale: 5 = strongly agree, 4 = agree, 3 = fairly agree, 2 = less agree, and 1 = disagree. The results of the questionnaire distribution to 160 heads of families at Pinge Village are presented in Table 3. The rows recoverable from Table 3 (counts for strongly agree / agree / fairly agree / less agree / disagree) are: (3) the result of event sales was given to the village: 145 / 10 / 5 / - / -; (4) the implementation of international standards in serving the guests, adjusted to the village culture: 140 / 15 / 2 / 3 / -; (5) training in environmental conservation: 155 / 5 / - / - / -; Total: 725 / 49 / 23 / 3 / 0; Percentage: 90.60 / 6.10 / 2.90 / 0.40 / 0. The result of the data processing indicates that the community strongly agreed with the implementation of sustainable tourism, as shown by 90.60% of respondents answering 'strongly agree'. This result supports the model developed by the Polytechnic for packaging a sustainable event and indicates that it was an appropriate step. Programs aligned with the community are needed to strengthen the economic order of the nation and society in the long run.
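A quick arithmetic check of the reported column totals against their percentages is sketched below. It assumes 160 respondents answering five items each (800 responses in total), which is consistent with the totals recoverable from Table 3; small differences from the published figures would reflect rounding.

```python
# Column totals recovered from Table 3 (strongly agree ... disagree)
totals = {"strongly agree": 725, "agree": 49, "fairly agree": 23,
          "less agree": 3, "disagree": 0}

n_responses = sum(totals.values())  # 160 heads of families x 5 items = 800

for label, count in totals.items():
    print(f"{label}: {count} ({100 * count / n_responses:.2f}%)")
```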
Conclusion
The CULTIC event activities referred to the Tri Hita Karana culture, which puts forward harmony with God, human beings, and nature, and this gave strong legitimation to the implementation of the green event concept; therefore, the model test gained strong support from the community. This was shown by the study of the cultural activities, in which respect for nature was the majority activity, followed by activities between humans and with God. In practice, however, all elements should be present in these cultural activities; only their percentages differ. The participants of the CULTIC event had very high environmental awareness although they were young. This is a good condition for the development and conservation of nature through enjoyable recreation, so that the harmonious concept of THK culture is continued over time. The community had high awareness of the sustainability of tourism, as can be seen from their high involvement in the event and their commitment to maintaining cultural and natural conservation.
Acknowledgment
The authors would like to express their gratitude to the State Polytechnic of Bali for the research funding, as well as to the head of P3M, who provided motivation to complete the research. | 2019-05-30T23:43:46.206Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "54268232a31f1121deec904a7897e6ca69341937",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/953/1/012057/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4933661b15b79f484f9bffe4b0bfe0e51490c0d6",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
8364288 | pes2o/s2orc | v3-fos-license | Small Double-Stranded RNA Mediates the Anti-Cancer Effects of p21WAF1/CIP1 Transcriptional Activation in a Human Glioma Cell Line
Purpose This study was conducted to investigate the small double-stranded RNA (dsRNA) mediated anti-tumor effects of p21WAF1/CIP1 (p21) transcriptional activation in vitro in the human glioma SHG-44 cell line. Materials and Methods Human glioma SHG-44 cells were transfected with dsRNA using LipofectAMINE 2000 transfection reagent. Real-time PCR and Western blot analysis were conducted to detect p21 and survivin mRNA and protein levels, respectively. Cell proliferation was examined by MTT assay. Cell cycle distribution and apoptosis were detected by flow-cytometric analysis. Results We found that dsRNA targeting the p21 promoter (dsP21) significantly induced the expression of p21 at the transcript and protein levels, and reduced the expression of survivin. As well, dsP21 transfection significantly inhibited human glioma SHG-44 cell proliferation. Analysis of cell cycle distribution revealed that dsP21 transfection increased the accumulation of cells in the G0/G1 phase and reduced the accumulation of cells in the S phase. Further analysis revealed that dsP21 transfection led to an increase in both early and late stages of apoptosis in human glioma SHG-44 cells. Conclusion In the present study, p21 activation by RNA-induced gene activation (RNAa) induced anti-tumor activity in vitro in a human glioma SHG-44 cell line. The results suggested that RNAa could be used for human glioma treatment by targeted activation of tumor suppressor genes.
INTRODUCTION
Glioma is the most common type of malignancy that originates in the central nervous system. Some of these tumors are highly malignant and tend to spread and infiltrate into normal nerve tissue, which makes surgical removal very difficult. Moreover, these tumors are not sensitive to radiotherapy or chemotherapy. The prognosis for patients with high-grade gliomas is generally poor, especially for older patients. Survival rates are 42.4% at 6 months, 17.7% at 1 year, and 3.3% at 2 years in these patients, according to a population-based study. 1 Therefore, determining the pathogenesis of glioma and finding new methods are essential for improved clinical treatment of gliomas.

RNA-induced gene activation (RNAa) is a new mechanism of gene activation directed by small double-stranded RNA (dsRNA). [2][3][4][5] dsRNA are also referred to as 'small activating RNA' (saRNA) to distinguish them from small interfering RNA. 6 By targeting gene promoter regions, saRNA induce the demethylation of histones, leading to transcriptional gene activation. 7 Since the RNAa mechanism alters the chromatin structure, leading to robust and prolonged expression of the endogenous target gene, 2 it may be an attractive option for activating tumor suppressors in the treatment of cancer.

As a downstream mediator of tumor suppression, the p21 gene is linked to p53 expression and inhibition of cell cycle progression. 8 It is involved in cell growth, differentiation, aging and death processes, and is closely related to tumorigenesis. The p21 protein binds to cyclin-CDK2 or -CDK4 complexes and inhibits their activity. It is also an important regulatory protein of cell cycle progression. Previous studies have shown that decreased p21 expression may be involved in tumorigenesis or lead to poor prognosis of malignancy. [9][10][11] Although prior experiments have shown the anti-tumor effects of p21 activation via RNAa in many human cell lines, 12-15 no study has been done in human glioma cell lines.

Survivin is a member of the inhibitor of apoptosis protein family and has been implicated in anti-apoptosis, cell division, and cell cycle control. 16,17 One previous study has reported that survivin and p21 are functionally associated with each other. 18 Therefore, in this study, we attempted to investigate the anti-tumor effects of RNAa in human glioma SHG-44 cells and to examine survivin expression after dsP21-mediated p21 gene activation.

MATERIALS AND METHODS

Double-stranded RNA

The design of dsRNA was performed as described previously by Li, et al. 2 dsRNA targeting the p21 promoter at position 322 relative to the transcription start site [sense, 5'-CCAACUCAUUCUCCAAGUA(dT)(dT)-3'; antisense, 5'-UACUUGGAGAAUGAGUUGG(dT)(dT)-3'] was used to activate p21 expression. Control dsRNA (dsControl) lacking significant homology with any other human sequences (sense, 5'-UUCUCCGAACGUGUCACGUTT-3'; antisense, 5'-ACGUGACGUUCGGAGAATT-3') was used as a nonspecific control in this study. Synthetic dsRNA were green fluorescently labeled and manufactured by Genepharma Company, Ltd. (Shanghai, China).
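As an illustrative sanity check (not part of the original protocol), the sketch below verifies that the dsP21 antisense strand listed above is the reverse complement of the 19-nt RNA core of the sense strand, ignoring the dTdT overhangs.

```python
# Verify sense/antisense complementarity of the dsP21 duplex core (dT overhangs omitted)
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(rna))

sense_core = "CCAACUCAUUCUCCAAGUA"
antisense_core = "UACUUGGAGAAUGAGUUGG"

assert reverse_complement(sense_core) == antisense_core
print("dsP21 core strands are fully complementary")
```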
Cell culture and transfection
The human glioma cell line SHG-44 was purchased from the cell bank of China (Shanghai, China). SHG-44 cells were maintained in RPMI-1640 medium supplemented with penicillin G (100 U/mL), streptomycin (100 μg/mL), 2 mmol/L L-glutamine, and 10% fetal bovine serum. The cell line was incubated in a 37°C, 5% CO2 humidified incubator. The culture medium was changed every 48 h. The day before transfection, cells were plated in growth medium without antibiotics at a density of 50% to 60% (1×10^5/mL). Transfection of saRNA at a concentration of 50 nmol/L was carried out using LipofectAMINE 2000 reagent (Invitrogen, CA, USA) according to the manufacturer's instructions.
Protein isolation and Western blot analysis
Cells were washed with ice-cold phosphate-buffered saline (PBS) at 72 h after transfection and lysed with RIPA Buffer (Pierce, MA, USA). Cell lysates were clarified by centrifugation at 12000×g for 30 min at 4°C, and protein concentrations were determined using the BCA protein assay reagent (Pierce, MA, USA). Cell lysates were added to sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) sample buffer, separated by SDS-PAGE, and electrophoretically transferred to polyvinylidene difluoride membranes (Solarbio, Beijing, China). The membrane was probed with anti-p21 or anti-survivin antibodies (1:500; Bioworld Technology, Nanjing, China) and incubated at 4°C overnight. Next, primary antibodies were removed and the membrane was detected with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody (1:10000; Bioworld Technology, Nanjing, China) and enhanced chemiluminescence detection (ECL System, Pierce, MA, USA).
Transfection efficiency of the human glioma SHG-44 cells
Cells were plated in six-well plates at a density of 1×10^5 cells/mL and washed with ice-cold PBS twice at 72 h after transient transfection. Then, the total cell number and the number of fluorescent cells in the same field were counted under a phase contrast microscope, and transfection efficiency was calculated according to the following formula: fluorescent cells/total cell number × 100%. The transfection efficiency was 57.2% (Fig. 1).
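The efficiency calculation described above is simple enough to express directly; the cell counts in the sketch below are hypothetical and chosen only so that the formula reproduces a value close to the reported 57.2%.

```python
# Transfection efficiency = fluorescent cells / total cells x 100%
def transfection_efficiency(fluorescent_cells: int, total_cells: int) -> float:
    return 100.0 * fluorescent_cells / total_cells

# Hypothetical counts from one microscope field (illustrative only)
print(f"{transfection_efficiency(143, 250):.1f}%")  # -> 57.2%
```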
P21 up-regulated by saRNA in human glioma SHG-44 cells
Previous studies have demonstrated that dsRNA targeting the p21 gene promoter at position 322, relative to the transcription start site, can activate p21 expression. 2,14 In the present study, SHG-44 cells were transiently transfected with 50 nmol/L of dsP21 or a nonspecific control dsRNA for 72 h, and expression of p21 mRNA and protein was evaluated by real-time PCR and Western blotting, respectively. Expression of p21 mRNA in dsP21-transfected cells was significantly elevated compared to mock and dsControl treatments (Fig. 2A). Induction of p21 was also confirmed by Western blot analysis (Fig. 2B and C). As well, elevated levels of p21 protein were strongly correlated with increases in p21 mRNA expression in SHG-44 cells.
Cell proliferation assay
Cells were transfected with dsRNA for approximately 6 h. Following treatment, cells were plated in 96-well microplates at a density of 3000 cells in 200 μL of complete RPMI-1640 medium per well for the proliferation assay. Every 24 h, a batch of cells was stained with 20 μL of MTT [3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide] dye (5 mg/mL) at 37°C for 4 h, after which the culture medium was removed and 100 μL of dimethyl sulfoxide was added and mixed thoroughly for 10 minutes. Spectrometric absorbance at 490 nm was measured using a microplate reader.
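A minimal sketch of how daily MTT absorbance readings could be reduced to a growth curve is given below; the A490 values and the blank correction are invented for illustration, and the actual normalisation would follow the plate layout used in the study.

```python
# Hypothetical mean A490 readings per day for one treatment group (MTT assay)
a490 = {1: 0.21, 2: 0.35, 3: 0.52, 4: 0.74, 5: 0.98, 6: 1.20}
blank = 0.05  # hypothetical medium-only background

baseline = a490[1] - blank  # day-1 signal used as the reference point
for day, od in a490.items():
    relative_growth = (od - blank) / baseline
    print(f"day {day}: A490 = {od:.2f}, relative growth = {relative_growth:.2f}")
```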
Flow cytometric analysis for cell cycle and apoptosis
Cells were plated in six-well plates at a density of 1×10^5 cells/mL. The next day, transfection was carried out and cells were incubated for 6 h before the transfection medium was changed to fresh medium. Cells were harvested by trypsinization at 72 h, washed twice with pre-cooled PBS, fixed with cold 75% ethanol, and stained with propidium iodide (PI) in PBS. PI fluorescence intensity was measured by flow cytometry to assess cellular DNA content. Cells in the G0/G1, S, and G2/M phases of the cell cycle were determined from the flow cytometry data. Apoptosis assays were also conducted to analyse the effect of p21 activation in SHG-44 cells with an Annexin V-fluorescein isothiocyanate apoptosis assay kit (Baiao Bioengineering Co. Ltd., Beijing, China). Transfected cells were harvested, washed with pre-cooled PBS twice, resuspended in binding buffer, and stained with Annexin V and PI according to the manufacturer's instructions. Annexin V-stained cells indicate early apoptotic cells, whereas Annexin V- and PI-stained cells indicate late apoptotic cells. All of the samples were assayed in triplicate.
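Based on the staining definitions above (Annexin V-positive only = early apoptosis; Annexin V- and PI-positive = late apoptosis), a minimal gating sketch is shown below. The event values, thresholds, and the labelling of the Annexin V-negative/PI-positive quadrant are assumptions for illustration; real analysis would be performed on the cytometer's exported data.

```python
# Classify flow-cytometry events into quadrants from Annexin V / PI signals
def classify(annexin_v: float, pi: float,
             annexin_gate: float = 1e3, pi_gate: float = 1e3) -> str:
    if annexin_v >= annexin_gate and pi < pi_gate:
        return "early apoptotic"      # Annexin V+ / PI-
    if annexin_v >= annexin_gate and pi >= pi_gate:
        return "late apoptotic"       # Annexin V+ / PI+
    if annexin_v < annexin_gate and pi >= pi_gate:
        return "necrotic/damaged"     # Annexin V- / PI+ (assumed labelling)
    return "viable"                   # Annexin V- / PI-

# Hypothetical events (fluorescence intensities)
events = [(200.0, 150.0), (5e3, 300.0), (8e3, 4e3), (120.0, 2e3)]
for av, pi in events:
    print(classify(av, pi))
```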
Arrest of human glioma SHG-44 cells in the G1 phase induced by transfection with dsP21
Cell cycle analysis was conducted to investigate the cell cycle distribution of dsP21-transfected glioma SHG-44 cells.
The percentage of cells in the G0/G1 phase was significantly increased in the dsP21-transfected cells compared to the mock and dsControl cells. Transfection with dsP21 also caused a decrease in S-phase cells, but no change was found in the G2/M phase cell population (Fig. 5).
Survivin expression decreases following transfection with dsP21 in human glioma SHG-44 cells
Real-time PCR and Western blot analysis were conducted to examine the effect of p21 activation on survivin mRNA and protein. As shown in Fig. 6A, compared to mock and dsControl cells, a statistically significant decrease in survivin mRNA was observed when cells were transfected with dsP21. The decrease in survivin protein was further evaluated by Western blot analysis. The expression of survivin protein was significantly decreased in dsP21-transfected cells compared with both mock and dsControl treatments (Fig. 6B and C).
Human glioma SHG-44 cell proliferation is inhibited by p21 up-regulation in vitro
Because up-regulation of p21 leads to an inhibition of tumor growth, we examined the effect of p21 transcriptional activation on the proliferation of glioma SHG-44 cells in vitro. In this experiment, cellular proliferation was monitored by MTT assay daily for 6 days. The cell growth curve showed that, compared with mock and dsControl treatments, proliferation of dsP21-transfected cells was significantly inhibited in a time-dependent manner, while dsControl and mock cells showed no significant inhibition of proliferation (Fig. 3).
Apoptosis in human glioma SHG-44 cells is induced by transfection with dsP21
Apoptosis assays were used to investigate the effect of p21 up-regulation on the growth of human glioma SHG-44 cells.
The early and the late apoptosis rates of dsP21-transfected cells significantly increased compared to the mock and dsControl treatments, while there were no differences in apoptosis rates between the latter two types of cells (Fig. 4). Increases in both early and late apoptosis rates were seen.
DISCUSSION
A tumor suppressor gene is a gene that protects a cell from one step on the path to cancer. Inactivation of tumor suppressor genes is an important cause of tumorigenesis. Gene mutation, deletion, and structural chromosomal rearrangements are an important mechanism for the inactivation of tumor suppressor genes. 19 Previous studies have already confirmed that inactivation of p21 expression may be involved in tumorigenesis or lead to poor prognosis of malignancy. [9][10][11] Interestingly, other findings have found that increased p21 expression is associated with tumor progression or worse prognosis. [20][21][22][23] These studies suggest that p21 may act as an oncogene, either during tumor development or in the course of anti-cancer treatment.
Accordingly, there are questions as to whether p21 is a tumor suppressor or an oncogene. This discrepancy could be due to the status of p21 itself and/or to differences in the histological types of cancers that have been analyzed. 24 Hukkelhoven, et al. 25 confirmed that tyrosine phosphorylation contributes to the conversion of cdk inhibitors from tumor suppressive roles to oncogenic roles. Besson, et al. 26 reported that control of the subcellular localization of p21 could represent an important regulatory switch from a nuclear tumor suppressor to a cytoplasmic oncogene. In the current study, we demonstrated that p21 plays a tumor suppressive role in human glioma cell lines and that it may be a potentially desirable target for glioma treatment.
Many studies have reported that the use of dsRNA targeting gene promoters to activate expression of tumor suppressor genes inhibits tumor cell proliferation and migration, leading to cell cycle arrest and induction of apoptosis. 2,7,13,27 Matsui, et al. 28 reported that duplex RNA complementary to the promoter of the LDL receptor (LDLR) activated expression of LDLR and increased the display of LDLR on the surface of liver cells. Additionally, Chen, et al. 29 utilized RNAa mechanisms to increase the expression of VEGF to improve erectile function. As recent studies have suggested that RNAa depends on Argonaute (AGO) proteins, Chu, et al. 30 investigated the role of AGO1-4 in gene silencing and activation of the progesterone receptor gene. Their data indicated that expression of AGO2 is necessary for efficient gene silencing or activation: saRNA is loaded and processed by an AGO protein, which then guides it to its promoter target, which can be a non-coding transcript overlapping the promoter or the chromosomal DNA, and recruits histone-modifying enzymes to the promoter to activate transcription by causing permissive epigenetic changes. 6,28,30 Small saRNA-mediated gene activation offers a promising new approach for investigating gene function, and may serve as a novel strategy for the treatment of many diseases, especially tumors. We designed this experiment to examine whether induction of p21 by RNAa has an anti-tumor effect on human glioma cells, in an effort to explore novel therapeutic strategies for the treatment of human gliomas. In our study, we found the transfection efficiency of SHG-44 cells to be satisfactory, and activation of gene expression by RNAa may be a feasible therapeutic strategy for the treatment of gliomas. After transfection of dsP21 into SHG-44 cells for 72 h, the expression of p21 in SHG-44 cells was significantly increased compared to mock and dsControl treatments, according to real-time PCR and Western blotting results. Furthermore, induction of p21 protein expression led to a significant inhibition of SHG-44 cell proliferation. Moreover, p21 up-regulation induced the accumulation of cells in the G0/G1 phase and significantly increased the early and late apoptosis rates of dsP21-transfected cells.

RNAa-mediated overexpression of p21 in human glioma SHG-44 cells suppressed expression of survivin. Hence, this result suggests that survivin may serve as a downstream factor of p21 to promote cell cycle arrest and enhance apoptosis.

In conclusion, the present study demonstrated dsRNA-mediated gene activation in a human glioma cell line. Induction of p21 by RNAa exhibited anti-tumor activity in vitro in glioma SHG-44 cells by inhibiting cell cycle progression and inducing apoptosis. Further research should focus on revealing the exact mechanism of RNAa and developing potent reagents for laboratory and clinical therapeutic application.
"year": 2014,
"sha1": "34d8098f1fcbedbd692fa52da28a6699eb384e5f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3349/ymj.2014.55.2.324",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "34d8098f1fcbedbd692fa52da28a6699eb384e5f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
259642168 | pes2o/s2orc | v3-fos-license | Working-class student-hood and ‘job-readiness’: Affective relations of class, gender and employability policy in higher education
ABSTRACT Past decades have seen increased emphasis on graduate employability as a driver of higher education policy. In the Australian context, employability discourses in the public domain have become inflected with anti-intellectual sentiment, serving to reproduce the perception that the humanities and social sciences are of less value to graduates’ employability than are science, technology, engineering, mathematics and medicine. Against this backdrop, and with particular reference to the Job-ready Graduates Package, we investigate how diverse notions of employability shape student-hood for working-class female students who are largely engaged in the social sciences. Attending to affective dynamics, we show how employability imperatives ‘land’ for these students, individually, and as an ‘equity group’. While employability policies are typically positioned as a salve for class inequalities, they can also discredit educational and employment endeavours of working-class students, and reproduce class tensions. To enhance employability policies, there is a need to move beyond reductionist models of job-readiness, towards responding to the complexities of policy as enacted through lived relations. We propose attending to the variability of both identity and value positions and recognising the contribution of affect and emotion to this complex set of policy dynamics.
Introduction
In this paper, we consider the human capital assumptions that currently underpin contemporary higher education (HE) policy making and their social class impacts. The imagined, individualised HE student -who accrues education only for the purpose of gaining high-paid employment -is wholly naturalised in policy discourse across the West (Kalfa and Taksa 2015; Lumb and Matthew 2021; Moore and Morton 2017; Tholen and Phillip 2017). Globally, graduate employability 'has become synonymous with the ways in which the relationship between higher education and the economy is now understood' (Tomlinson 2017, 1). Indicative of the global nature of employability agendas, for over two decades, the United Nations has encouraged nation states to review education-to-work transition policies (Matherly and Tillman 2015). The aim of this paper is to consider how working-class humanities and social sciences (HASS) graduates receive employability imperatives, and what this reception implies for employability policy and the broader program of widening participation. We do so with reference to a recent restructuring of funding for Australia's HE that emphasises job-readiness, and prioritises science, technology, engineering, maths and medicine (STEMM) education.
We begin by introducing the policy context of increasing emphasis on employability, providing an outline of the Australian HE case. Against a backdrop in which there are pervasive calls for HE to better serve emergent knowledge economies, and a cultural landscape in which anti-intellectual post-truth narratives hold sway, we show that current concepts of employability have the potential to place undue pressure on working-class students. After outlining our theoretical and methodological approach, we analyse data generated through interviews with working-class women enrolled in postgraduate studies in Australia, asking how employability imperatives 'land', both individually, and as an 'equity group'. Our analysis is predicated on the assumption that for widening participation policy agendas to take effect, positive affective relations between universities and working-class communities must be generated (Walkerdine 2021, 64). Through our relational approach that emphasises working-class capacities, we close by suggesting that if widening participation goals are to be progressed, a reparative approach to employability is needed, one which attends to working-class values and positions.
Employability, anti-HASS sentiment and working-class dilemmas
As noted, there has been a global policy shift in HE towards employability. However, the Australian Government's Job-ready Graduates Package (JRGP) stands out as a significant national reformation. Introduced by the then Morrison government in 2020, a central aim of the JRGP is to increase the number of students in areas predicted to have good employment prospects. 1 These 'national priority' disciplines were predominantly science, technology, engineering, maths and medicine (STEMM) based. For these select fields, along with, for example, teaching and nursing, the government pays a greater contribution of course costs, reducing the loan debt students accrue. Meanwhile, government contribution amounts in other fields, including the social sciences, humanities and law, were decreased, leaving students to pay a greater proportion of the fees -more than double in some cases (Norton 2020). Three previous iterations of the legislation were voted down over a six-year period; however, emphasising both the COVID-19 pandemic and the risks of not keeping up with the technological changes needed for national productivity, a rhetoric of crisis was mobilised to see the policy pass into legislation (Molla and Cuthbert 2022). As such, the JRGP's instantiation links to the long-running, global crisis narrative that sits behind the re-orientation of HE towards serving technology and science education industries -the purported risk of getting left behind in a knowledge economy (Olssen and Peters 2005). The need for innovation to attend to the knowledge economy is consistently invoked, but innovation is frequently 'reduced to an economic barometer of big business ... with scientific innovation being co-opted to the growth process' (Craswell 2007, 380). Equating innovation and knowledge excellence only with scientific endeavours devalues a diversity of knowledge, including those associated with the humanities and social sciences (HASS).
The recent deepening of the 'culture wars' has brought on a second, quite different, ideological movement that similarly challenges the value of HASS knowledges. Post-truth discourses, where racism and sexism are espoused as common sense (Cover, Haw, and Thompson 2022; Ringrose 2018), position universities, and HASS disciplines in particular, as too progressive (Morris 2021), or as Blackmore (2022, 634) has it, as 'havens of left-wing activism'. Frunzaru et al. (2018) have attempted to quantify the challenge of anti-intellectualism facing universities today, which they link to narrow employability measures that emphasise job-specific skills. By assessing anti-intellectualism, materialism and employability together, they demonstrate that

the more students aspire to materialistic [instrumental] goals, the less they value learning for its own sake and the less they are interested in intellectual development. (Frunzaru et al. 2018, 388)

The pressure on universities to become labour market infrastructure (Hartmann and Komljenovic 2021), taking less of a role as a 'public intellectual' (Boden and Nedeva 2010, 50), risks producing 'a voluminous but docile cohort of worker/consumers' (ibid., p. 41). Instrumentalising education towards employability, shifting it away from the types of critical thinking HASS promotes, has particular significance for working-class students. As Wheelahan (2010, 9) puts it, by having greater access to HASS knowledge, the middle classes have access to 'theoretical abstract knowledge [which] provides them with the ability to ... think the unthinkable and the not-yet-thought'. In the current climate, in which there is hostility to HASS disciplines, how do we ensure people of all class backgrounds are able to access HASS education, and find value in it? How do we immunise HASS knowledge producers against deepening class divides?
We would like to explore the potential social class ramifications of the devaluing of HASS briefly via remarks made by Australia's former Prime Minister, Scott Morrison, which were reported locally under the headline 'PM explains: there are unis and there are unis' (Matchett 2022). Upon opening a new medical school and research centre at an Australian regional university, Morrison proclaimed that this was the type of university endeavour that:

sits at the heart of pretty much every successful economic regional plan you care to nominate anywhere in the world, let alone in Australia. But not any university that, you know, keeps itself separate from the rest of the community and walks around in gowns and looks down on everybody. And, you know, only looks at things that are [not] remotely interesting to anyone. It's a university that's very practical and understands the opportunities, whether it's in science or medicine or in any other areas or fields of enquiry and research, and is raising up a workforce and a generation of people that can actually transform the region in which they're living.
There are obviously two camps of universities and university workers delineated here. One camp is 'very practical and understands the opportunities'. The second is elitist and 'keeps itself separate from the rest of the community'. Via remarks such as these, knowledge-production with commercial potential is privileged (Lingard and Gale 2007; Ozga 2007). In this particular context, the economic disadvantages experienced in Australia's regions become a trojan horse for the promotion of science as the only (commercial) opportunities regional and rural, working-class students should aspire to. Emphasis is placed on the urgency to turn university inputs into economic outputs in the regions, where a 'transformation' is needed. A moral dilemma is implied. Gravitating towards 'too intellectual' HASS would be to go against 'transform[ing] the region in which they're living'. Concomitantly, and implicitly, the humanities and social sciences (HASS) are constructed as oppositional to economic success, and as forms of impractical, middle-classed navel-gazing. There is semblance here with Threadgold and Gerrard's (2022) analysis of the contemporary Australian context, where HE-educated people are often positioned within populist discourse as too intellectual, against both the national interest and 'ordinary' Australians.
Such messaging exacerbates existing pressure on working-class students to opt for trajectories that are perceived as more practical, and therefore, to engage in instrumental careerist strategies. While working-class students are motivated by a range of ideals (Scherer 2022), low SES students are more likely to be risk averse in their post-school planning, and to be centrally guided by concerns that studies will lead to permanent work (Raciti 2019). This goes some way to explaining why education, engineering, IT and business are already popular choices for disadvantaged students in Australia (Edwards and Coates 2011). However, contra to human capital-informed employability policies, which present a picture of students as universally oriented to 'maximisation of self-interest and the instrumental pursuit of HE for gaining well-paid careers' (Lumb and Matthew 2021, 114), the choices made by first-in-family students are influenced by a complex web of factors, including 'the strategic orientation of family capital, the profound stratification of schooling and universities, and the desperation to realise the investment of families in efforts to advance intergenerational social mobility' (Guzmán-Valenzuela et al. 2022, 945). Employability and anti-intellectual discourses introduce yet greater complexity to decision-making, by casting doubt on the viability of HASS pathways.
It is notable also that a widening of employability policies has been occurring. While employability policies have historically been geared towards undergraduate programmes, they are increasingly oriented to graduate offerings (McGagh et al. 2016). Accordingly, we pursue our interest in employability policy in the wake of increased scrutiny on employment outcomes in postgraduate education, where attention to social class inequalities is often overlooked (Grant-Smith, Irmer, and Mayes 2020). Rather than following the policies themselves (Craswell 2007; Cuthbert and Molla 2015), we attend to working-class graduate student (affective) responses to them.
Affective capacities and working-class subjectivities
In exploring graduate student responses to employability discourses, we attend to suggestions for the development of an 'ecology of classed relations' for higher education policy (Walkerdine 2021). We draw attention to the varied but distinct abilities of non-traditional students, orienting to questions of how 'social position and access to resources mediate what ends are felt to be possible and desirable' (Sellar and Gale 2011, 129, our emphasis). In this way, we hope that working-class capacities, rather than deficits, guide discussions of equity and mobility. Capacities are conceptualised here quite differently to the capabilities, skills and enterprising dispositions inferred under the rubric of graduate employability (e.g., Pool and Sewell 2007) and centrally concern attention to affect and emotion (Mulcahy and Martinussen 2023).
In line with enactment approaches to policy analysis (Ball, Maguire, and Braun 2012), including affective enactment approaches (Pitton and McKenzie 2022), we assume that policy is not 'given' to the student subject. Rather, policy acts in relation with students, in an emergent and contingent practice of materialisation. In the contexts of neoliberal governmentality, subjectivity 'is a key site of political struggle', including of refusal (Ball 2016, 1129). We attempt in our analysis to understand the affective responses and situated identities of higher education students in relation to contemporary employability policies, such as the Job-Ready Graduates Package. We ask, how are working-class women studying in postgraduate HASS-related disciplines responding to employability policy in the current Australian context?
Data, methodology and affective analytical approach
Data excerpts analysed in this article derive from a study exploring class relations in Australia and how they shape the experiences of women enrolled in postgraduate studies (domestically). The invitation to participate was also extended to those who had completed their postgraduate studies within the previous six months. Six of the twenty-five participants also fit the characteristics of 'early career academic', by having considerable responsibility for carrying out teaching or other research work, aside from their doctoral studies. Ethics approval was obtained from the Human Research Ethics Committee of The University of Melbourne (reference 2056680).
The research oriented to questions of how students from diverse social class histories come to feel in or out of 'place' at university. As such, despite the relatively small cohort, diversity was sought out. The advertisement made a call for financially disadvantaged, working class, or low socioeconomic postgraduate students, including ethnic minority women/womxn. Many participants understood themselves as transitioning between class categories, and for some, class was complicated by racialized identities. As 'Kelly', one queer, mixed-race Caribbean/Aboriginal woman put it: 'It is finding little pockets to put [things] in but you have to accept that sometimes ... I love the grey parts. We really sit in the grey parts - as Indigenous women, that's where we live'. Although the following list elides the rich, grey hues to which Kelly refers, the sample might be categorised as such: 5 participants identified as coming from a 'welfare-class' background in which their families' primary incomes were from government welfare services when they were growing up, 14 from a working-class background, and 6 from 'lower-middle'. Around half of the interviewees, including all those who identified as coming from a welfare-class background, described traumatic events that occurred in their family or environmental factors that are considered 'adverse childhood events' (Emerging Minds & ANU Australia 2022). Almost two-thirds identified as having a disability, most of which were mental-health related. The age range of participants ran from 28 to 60, with a median of 34. Half of the universities in which participants were enrolled are part of Australia's elite 'Group of Eight'; the remainder were based in universities that are in urban centres, but which service regional areas. One aspect of the sample in which there is less diversity is that of academic discipline, with 21 participants enrolled in, or having recently completed degrees in, HASS-related studies, in fields such as education, media studies, creative arts, literature, sociology and law. Included in this figure also are three participants who were enrolled in public health, and used critical feminist and Indigenous standpoint theories.
The interviews were designed to capture the hybridity, dilemmas and competing subjectivities relating to gender, ethnicity and social class that are often involved in the social mobility of working-class students (Lucey, Melody, and Walkerdine 2003). As such, participants were invited to repeat, biographically oriented interviews, and most took part in three interviews over the course of one year (2021). The analytic approach involved repeated reading of interview transcripts. Bringing different understandings of affect to bear, Author One predominantly read using an affective-discursive practice lens (Wetherell 2012) while Author Two mobilised materialist (Deleuze and Guattari 1987) concepts. Although underscored by different ontologies, when put into conversation with one another, these approaches assist in enlarging understandings of relations of class, gender and policy through exploration of moments of emergence, agencies as distributed and, most particularly, affect as political.
Wetherell's (2012) affective-discursive practice theory centres human meaning-making and subject formation, placing emphasis on the practical and collaborative nature of affective practices. Contrasting with materialist and posthuman accounts of affect, as sketched below, where affect 'overspills the individual' (Massumi 2017, 1), how people fluidly combine everyday identity practices is considered central to configurations of power. How does employability get translated into situated practices, moment to moment, in affective episodes? We understand that people will enact 'figuring and gathering' activities (Wetherell 2012, 139), as affects are organised into (employable) subjectivities. Employability identities will be constantly negotiated and their shapes will shift, depending on the interactional contexts they are produced for. Identities crafted to serve different, possible versions of employability will also be modulated by the affective routines and practices expected in particular settings as well as personal histories. Notably, as identity practices are considered resources for living, this figuring work is capacitating. Those with non-normative employment or higher education trajectories may still have enormous capacities to account for themselves (as 'good', or 'reasonable' etc.).
Materialist and posthuman accounts of affect (indicatively, Braidotti 2021; Deleuze and Guattari 1987) provide resources for meeting, if not necessarily resolving, issues of social inequality and institutional responsibility head on. We employ a social relational version of affect as developed through Deleuze's (1988) reading of Spinoza: affect as bodily 'capacities to affect and be affected'. We ask of a concept such as employability, not what it is, but what it can do in a specific context: what are its capacities? Affects signify a change of state of an entity and its capacities, human and otherwise (Deleuze and Guattari 1987, 256). Capacities can be increased or lessened, and it is this movement -the likelihood of leveraging its direction -that is key to their powers. When understood as increasing or decreasing powers to act, affect is 'directly political' (Massumi 2017, 1) and can be deployed to offset the normative meanings attaching to the discourse of employability mobilised through the JRGP.
One of the questions under investigation in the broader study was how participants' social class positions might change throughout the course of their HE engagements. Class scholars have long shown the 'hidden injuries' of class transitions (Sennett and Cobb 1973), bringing into view feelings of guilt and shame that often occur through social mobility (Mahony and Zmroczek 1997; Michell, Wilson, and Archer 2015). We follow in the footsteps of those who articulate the 'hybrid' shapes these transitional, psychosocial formations take (Lucey, Melody, and Walkerdine 2003) while attuning affectively to material dimensions of this formation (e.g. the 'pull' of place and space). In showing how class transitions are dealt with, we emphasise the multiplicity of affective regimes that participants are entangled in, such as race, geographies, gender, but also those relating to constructions of 'intellect', 'academic', or 'social justice'. The affective identity-work required to ameliorate competing conceptions of class for working-class, highly educated women also involves significant navigation of class tensions around notions of employability.
Working class student-hood in HE: affective explorations
Most participants in this sample have taken non-linear trajectories into their present studies and vocations. Only four of the twenty-five were studying or working in the same area as their undergraduate studies. One participant had made changes in their academic foci, but their route from school, into undergraduate and then postgraduate education was relatively sequential. The remaining twenty took significant breaks from their studies and most worked in a range of jobs, switching their vocations, for example, from marketing into nursing, journalism into history, business studies into teaching and banking into medicine. Kelly, who we referred to earlier, had studied at seven different universities, stating also 'I've had a million jobs in between'. Dominant temporalities of employability policy making conceive the future as 'something to be filled with employable subjects and new discoveries', and the present is imagined as 'emptied out' (Clegg 2010, 359). The rich diversity of work and study participation of this small sample is illustrative of the significant mismatch between future-oriented, human capital-influenced policies and lived experience. Dominant employability policy cannot account for the past and accruing emotional investments that are continually drawn on in decision-making.
We begin now to examine in detail the routine work of ameliorating the disparate affective regimes surrounding intellect and (employability) skills, alongside the production of social class. The data excerpt we begin with is again from Kelly, who is carrying out her doctorate in law. She was one of a few participants who expressed discomfort around the public nature of graduation ceremonies.

I've never been to a graduation ... I will probably go to my PhD but it's just embarrassing to me to do that ... I realise that I've also gotten a lot of criticism for my achievements. So, you can't big-note yourself, you can't talk about your qualifications, although you are expected to use them to support the people who are telling you NOT to big-note yourself. (laughing)

Potentially, regardless of educational policies, a statement like the one above can be expected in Australia. In a colonial setting where egalitarianism is highly valued, Kelly's high achievements may see her appraised as a 'tall poppy', sitting 'above her station' and needing to be brought down to size. However, Kelly's words, as well as those of other participants who described graduation ceremonies as deeply uncomfortable, are indicative of a cultural milieu where higher education is positioned contradictorily. In this 'ecology of classed relations' (Walkerdine 2021, 60), as Kelly's achievements become associated with multiple affective regimes, the resulting identity trouble requires that different capacities be put to use. Her legal expertise and understanding of colonial-centric bureaucracies may be needed at times, and animates an authority, provided through her education and professional standing. In other settings, the authority that comes with education must be downplayed. Kelly has succeeded in a wide range of professional roles and in her studies, in the face of multiple 'disadvantages' - Aboriginal, Black, queer, disabled and growing up in poverty. Yet, her achievements have not only been left uncelebrated at times, but have also become a source of shame. Equity policies in HE aim to raise aspirations of equity-seeking students, but without attention to the affective-discursive realm, the emotional costs that come with accomplishing those aspirations are omitted (Bunn, Jane Burke, and Threadgold 2022). Similarly, as we show below, the JRGP may come with additional costs that are hidden without attention to affective practices.
Although the JRGP restated intentions towards equity for all Australians, it gives particular attention to raising aspiration for regional and remote Australians (Molla and Cuthbert 2022). In our small sample, however, it was notable that regional participants felt the judgment of friends and community rather intensely. Olive, a casual lecturer and doctoral researcher in sociology, summarises it thus:

That's like 'you've got tickets on yourself, you think you're smarter than me', 'there's some snob with a good education thinking they know better than us', that kind of thing. And particularly because I live in a very, very agricultural, rural area there is a lot of antagonism to intellectuals and people who don't work with their hands, basically.
Chay, who is based in a remote area and studied a range of social science disciplines, and whose post-PhD work lies in public health policy, described some of her previous interactions with her siblings in this way:

They'd be like 'this is the trouble with the university. They just like fill your head with this crap' and 'blah, blah, blah, blah, blah, blah', as though I have no agency or decision-making, like thoughts of my own. Like somebody just like brainwashed me or something like that. And it was always amusing to me because they were always like, they would pretend as though going to university, getting more education, somehow made you ignorant.
Again, there is nothing particularly surprising about the described interactions; through recurring affective performances, both women are well used to refuting the derogatory perceptions. Accusations of being a snob become 'that kind of thing' or 'blah blah blah'. However, the current, elevated anti-HASS sentiment creates a social climate in which regional students in HASS become a 'sticky' surface (Ahmed 2004), and regional and remote areas have become sites where an everyday, familial politics of affect can play out (Massumi 2015).
In addition, Chay highlights a potentially gendered aspect of anti-HASS sentiment that is identifiable in other interviews, centred on naivety ('as though I have no ... thoughts of my own'). Eloise, who also works in public health and is based rurally, similarly claimed 'my mother tells people I'm an airhead', despite always achieving high academic success. The following excerpt emerged out of a conversation about feeling misunderstood by her mother and brother.
They believe that if you don't get out there and work, shovelling dirt, you know, working on the chain gang from dawn till dusk, you're not actually working. You know? That you're not putting in an effort and you wouldn't know what work (is) and you're a socialist.
Eloise went on to report her brother, who has a career in the trades, challenging her with, 'If you've got a [undergraduate] degree in economics why aren't you wealthy?', the implication being that her shift into social justice-inspired public health research lacks obvious utility, particularly economic success.
What the above suggests is that working-class students entering HASS today who reject the largely STEMM-focused pricing signals of the JRGP are engaging in a risky pathway. Those who choose to take it may become embodied sites of struggle over the meanings of HASS, which include its lack of usefulness (it doesn't lead to economically viable work) or its damaging properties (it makes people naive). Shaping these family debates is an interplay between associations with the intellectual middle classes and long-standing patterns of gendering of the labour market. The social sciences, and their purportedly too-intellectual content, sit in contrast to the 'recognizable training and career structures' associated with 'men's work' (Kenway 1993, 82). Working-class female-identifying students in HASS, perceived to be forging intellectual pathways, may simultaneously be deemed both too clever and an 'airhead'.
So far, we have discussed the affective patterning of gendered and classed dealings with anti-HASS sentiment in rather general terms. In what follows, the narratives of two doctoral researchers are examined with a view to providing greater specificity about patterns of subjectivity, where employability is read through a biographical lens.
'It's always been education with an outcome'
Here, we meet Isabel, a White woman in her 30s who was based at a regional campus, near where she and her husband grew up. The interview extracts are from the second and third interviews. Although Isabel has class-transitioned financially, she strongly identifies as working class, related in large part to her links to her regional locality, which she describes as 'very community minded'. Isabel is a 'first in family' student and finds it difficult to talk to some of her family members about her PhD studies, a media studies-centred project in English Literature. At this point in the conversation, Isabel is elaborating on a comment she made in an earlier interview, in which she described her PhD studies as 'fluffy', explaining, 'I feel like it's valueless or something in a practical sense'.
Education has been done [in my community], but it's always been 'education with an outcome', like it's that career oriented-and so that was . . . One of the big things when I first decided to do the PhD was 'well what do you get out of it?' And I was sort of going 'nothing'. Like I just was doing it because I want to do it. And they were all like 'why?' And I'm going 'because I enjoy it', you know? [...] And I think that was really-lots of people didn't get that at all. And then combined with the topic, as you say, which, you know, even my own parents would have issues with the things I say. And my sister. You know, not many people are fully onboard with all the feminist stuff. And then I was also bringing in race and ethnicity and you know, inherent racism and systemic racism, all that stuff that people are uncomfortable with anyway [...] The education part of it is valued to a point, but they definitely don't see the point of it if there is no tangible outcome.
As Isabel becomes a vector for competing ideologies, we see the affective dimensions of higher education policy framings. She centres her family's line of questioning around the purported impracticality of her PhD, but it aligns also with public and policy perspectives found elsewhere that treat doctoral studies as arcane (Sin and Tavares 2020). Isabel positions herself as an outsider or anomaly in her community ('lots of people didn't get that at all'). Resonant with Reay's (2013) autobiographical analysis of academic culture and feminine working-class subjectivity, Isabel's narrative illustrates an intermixing of privilege (engaging in doctoral studies because you will enjoy it), along with the isolation and an ongoing sense of disloyalty that can come with social mobility.
It is from this complex positioning work that Isabel's descriptor of her doctoral studies as 'fluffy' emerges. The inference of her non-present interlocutors is that studying for enjoyment's sake is frivolous ('What do you get out of it?' 'Why?'). Isabel reports acquiescing and replying that she will get 'nothing' out of her PhD. The topic of Isabel's PhD also adds to her semi-outsider status; living in regional Australia ill affords imagining the work possibilities leading on from a doctorate in English literature with a focus on media studies. Another set of meanings of 'fluffy' sticks to Isabel, and relates to her commitment to anti-sexism and anti-racism. The application of Isabel's social-justice orientated research could stand as a defence of its practical utility. Instead, these become unspeakable topics for her, constituting 'all that stuff that people are uncomfortable with anyway'.
Although Isabel struggles to maintain a sense of belonging in both working- and middle-class cultures, these negotiations may also be mobilising. Rather than interpreting Isabel's struggles as engendering only 'bad' affects -isolation, feeling misunderstood and acquiescing to economic and social pressures -we suggest that her uneasy fit with her culture of origin is to some degree a socialised affective response, a surplus effect of belonging to other than her home community (affect economy). From these data alone, we cannot trace out exactly how Isabel's status as regional and working-class pulls her towards anti-sexist/racist programmes of study, but perhaps a lived sense of dislocation is an informing factor. It is not an attachment to productive economics that pulls her towards wanting to 'actually transform the region in which [she's] living', but an embodied understanding of working-class viewpoints and values. Like other participants in the sample, she responds empathetically towards those who have not had the benefit of the critical awareness that is ideally cultivated in higher education. There was little inclination to judge: 'You know, not many people are fully onboard with all the feminist stuff. And then I was also bringing in race and ethnicity . . . all that stuff that people are uncomfortable with anyway'. In her employability decision-making, she is affectively responsive to opinions expressed by family and community members.
The juggling of dominating and counter narratives is present in another unfolding of this story. The excerpt below is from a subsequent interview undertaken just six months before Isabel was due to complete her doctoral studies. It came to light that Isabel had pulled out of the programme, enrolling instead in a Master of Teaching course. She explained that a large part of her motivation for switching was down to the lack of opportunities to continue casual tutoring at her university. Further, working part-time, being the mother of three children and doing her PhD in the evenings was causing significant stress. Isabel must engage material arrangements (Deleuze and Guattari 1987) that take in practical considerations concerning motherhood, work structures and study and work hours. These fall outside employability, as employability policy defines it -as job-ready skills. Isabel talked about how she imagined her family interpreting her new vocation: They are STILL not going to understand that it's FULL of humanities stuff. It's full of social justice, it's full of equity, you know? So essentially, I'm in the same field, I've just changed to a different age group. And that was also part of my motivation because I was like I'm going to be doing secondary [humanities teaching]. And to my mind, well, all the stuff that I was doing in the PhD I can implement in my teaching with real life, like, visible consequences, you know?
Isabel finds a more 'liveable' position, becoming legible to family and community through the profession of teaching. Her training in analysing creative works can now be carried out in a setting that is perhaps legible with regard to rurality. There may also be a pleasurable rebelliousness activated here. Her equity-related practice, and associated politics, will go undetected in a school classroom, in a way that was not possible when her family imagined her in the culturally-distant and pretentious setting of a university.
Isabel's experiences suggest that the demands to prove one's employability productivity are felt acutely in regional Australia. Economistic impulses would appear to trump commitments to social justice; Isabel must, it seems, dream less large if she is to thrive in her rural community. And yet, just as 'there are unis and there are unis' (Matchett 2022), there are economies and economies. The national economy and the instrumental employability ideology that promotes it is not the only field of practice in which Isabel is embedded. Her positive attachments to social justice become practices operating as affective economies (Ahmed 2004) where affects (empathy, care, loyalty) function as a form of capital, and Isabel's biography is part of the relational field: 'well, all the stuff that I was doing in the PhD I can implement in my teaching with real life, like, visible consequences'. Throughout both excerpts, Isabel is pulled in different directions, towards the 'practical' but within the 'intellectual' social science domain she has opted for. In the final excerpt, to follow, a sense of struggle is less apparent. We make use of one participant's own analysis of how employability imperatives 'land', and negatively impact the lives of working-class actors.
Becoming educated about the failings of employability
Taylor is in her late twenties, White, grew up in an urban centre on the outskirts of a large city, and is from a background of intergenerational disadvantage. Her research is around disadvantage and gender in schools, and she understands her participants to be survivors of a violent education system, as is she. In the excerpt below, Taylor is relaying a conversation she had with someone who was shocked at the way she was managing her superannuation.
I was talking to her about super[annuation] and she was like, 'Aren't you worried about retirement? You have to work a 9 to 5 to be able to look after yourself when you're old'. She was just spilling all of this really common language about, you know, justifications for why particular things happen or why things SHOULD be the way that they are and the idea that meeting two people who don't want to work a 9 to 5 and don't believe it is a good thing for people was just like, 'What the hell?' Like, blasphemy, right?! So, it's in those moments -I'm educated so I can advocate for my choices and I can advocate against, you know, I can defend my position and not feel like it's attacking my identity but that sort of language around, 'Well, you're not respectable in society. You're not valuable in society unless you fit this category and middle-class ideal' . . . So it's like if you talk to me about what poverty is, that's poverty of relationships, poverty of choices. You're so stuck in that cycle that you can't see that those things are really bad for you. You're working so much that you get depression or you have all this anxiety or you're carrying all these things with you. You go to counselling, that's more money and that's more time just to feel okay. That's not a nice life, in my view. So, of course, I wouldn't try to be aspiring to do that but I also know that by coming from chronic unemployed families, it's like I didn't have a thing that was like, 'Well, you're not good for society if you don't have a fulltime job'. I didn't carry that burden, like having to fit into that expectation. I was free from that. So my poverty gave me freedom in a lot of ways too because it's like, 'Well, society already said that I don't fit so I don't have the expectation to fit with them and the way that they want me to fit with them is actually really bad for my health'.
The focus of our analysis here is the middle-class assumptions built into employability policy such as the JRGP. Taylor provides a range of working-class responses to the middle-class emphasis on productive employability. In this rather complicated mix, her emergent sense of self as able to reject middle-classed attitudes towards money is grafted onto longstanding associations of chronic unemployment, and concomitant societal rejection. There is a parallel to be made with Sellar and Gale's (2011) sage analysis of inequities in higher education. They point out that widening participation policy fails because it invites working-class people into HE institutions, but ill affords their abilities to shape them. Similarly, Taylor has been invited to take up a middle-class life, complete with a respectable job in which poor quality of life and mental health present, with seemingly little recourse to negotiate its form.
The emphasis on productive employability may land differently across a range of heterogeneous working-class actors, affecting who comes to work so much 'that [they] get depression'. As Taylor notes, some can't rely on constructing their identity with reference to respectability discourses in the same way that she now can ('I'm educated so I can advocate for my choices'). Nevertheless, although class transitions are painful (Reay 2015), Taylor finds a way to capitalise on hers, for herself and others. In leveraging the direction of this transition away from middle-classed investments in productive employability, her powers to act are increased. Her critical view of herself as respectable, educated and having a specific knowledge of social discourses that only those who have endured environments of chronic intergenerational unemployment can have, becomes a tale of empowerment. Taylor encourages us to look beyond seeing only deficit in the chronically unemployed and to be open to the important lessons we may learn from them: 'You're so stuck in that cycle that you can't see that those things are really bad for you'. With reference to the end of her doctoral research looming, as well as her scholarship funding, she stated elsewhere in the same interview: You take a job because it pays well. That's not really on my prerequisite list and every time I've taken a job for the money or felt good about a job because of the way that it pays, I am thoroughly disappointed because other costs are not considered.
It might be imagined that the threat of post-PhD unemployment would be greater for someone who has been swimming against the tide of middle-classed culture for at least a decade -there is more ground to be lost. However, we see Taylor reach out to and strengthen her working-class knowledges and ties to her community, and through this, she refuses to see potential unemployment as a fall from grace. Similar to Isabel's clear affinities to her community, Taylor makes use of her education in the social sciences, and mobilises its authority, to reframe poverty as a freedom from the tyranny of (over)work, for both her community and herself.
Job readied student-hood and subjectivities: 'we really sit in the grey parts'
Bringing an affective lens to bear, we sought insight on how the policy shift towards employability, such as in the form of the Job-ready Graduates Package, 'lands' for working-class HASS graduate students. Addressing the question of these students' response to this policy, we commence the discussion with a query as to the ultimate goal of producing the job-ready graduate student. The mismatch between the sampled students' responses and the policy's avowed intent and purpose runs throughout the discussion, as does the 'thread' of working-class capacities, a less-told narrative, which is a chief focus here.
While working-class graduate students are targeted for needing to become job-ready, their financial disadvantage often requires that they are already, demonstrably, 'job-readied'. Many of the students sampled had built an extensive job portfolio prior to and during studies and were adept at transitioning across jobs and disciplines. Thus, Isabel switches from a PhD programme and casual tutoring at her university to a teacher preparation programme, and Taylor is weighing up her options for her next move. Changing track and taking 'a million jobs' over the course of studying calls into question the project of producing the job-ready graduate student as an ultimate goal. Recent research parallels our reservations, indicating that 'graduates in a wide range of disciplines, including arts, social sciences and humanities are highly employable and . . . attempts to drive students into some fields at the expense of others are misplaced' (Bisley 2022, para. 4). We want to note also, however, that while we have placed focus on working-class capacities, working 'a million jobs' while studying and negotiating the often painful consequences of social mobility (Abrahams and Ingram 2013; Friedman 2014) is something that middle-class students are typically more insulated from (Baglow and Gair 2019).
In line with our analysis, we suggest that the singular policy focus on job-ready skills is reductive and narrows possibilities for working-class students. The employable policy subject is produced through a wide range of factors: affects, cultural norms (concepts of employability), material needs (money), material practices (shifting from university to university), work conditions (casual work), commitments (caring responsibilities), ideals and more all bundle together in what can be called an alternative employability assemblage. Job readiness is not just a matter of 'jobs of national importance, such as teaching, nursing and STEM fields' (Department of Education Skills and Employment 2020, 7). It is 'profoundly ecological' (Walkerdine 2021, 70), spanning people, places and predilections. Attempting to pick 'winners among the disciplines for study to generate "job-ready" graduates' is flawed, both because it ignores the highly fluid nature of the workforce (Daly and Lewis 2020, 231), and because it belies the affective attachment many higher education students have to HASS disciplinary domains.
The integrity of employability policy, then, might be improved by moving away from the reductionist approach of the job-ready model, towards a more ecological approach that recognises heterogeneity. If the markers of employability are not expanded to account for the specificity of students' ethico-political and 'othered' identity positionings, we risk reproducing inequalities. An ecological approach would also better recognise that employability cannot be the sole, or even primary, responsibility of universities. National policies in many countries place universities at the locus of the problem of graduate employability, omitting the powerful role of employer behaviours and perceptions and labour market factors (Hartmann and Komljenovic 2021). Although we have used the Australian case, the need to challenge employability assumptions is therefore a global issue and concerns class dynamics that exceed universities.
In a number of ways, we have questioned the leitmotif of employability as a market, and taken-for-granted, good. As is the nature of policy, the Job-ready Graduates Package strives to set higher education students on a desired path. It has a telos. This path, however, is interpreted flexibly, worked around or not worked at all: 'meeting two people who don't want to work a 9 to 5 and don't believe it is a good thing for people, was just like, "What the hell?"' (Taylor). Sceptical about a neoliberal policy logic whereby 'market-based competition, personal choice, and human capital' are emphasised (Molla and Cuthbert 2022, para. 34, original emphasis), what the job readied students sampled attune to is a discourse of employment that provides opportunities for redressing social disadvantage and taking care of one's self: 'And to my mind, well, all the stuff that I was doing in the PhD I can implement in my teaching with real life, like visible consequences' (Isabel) and 'You're not valuable in society unless you fit this . . . middle-class ideal . . . You're so stuck in that cycle that you can't see that those things are really bad for you' (Taylor). Clearly, employability has a wider remit than the production of job-readiness.
Further to the issue of policy narrowing and governance is the inattention to what working-class students bring to higher education, their sensibilities and capacities. While there have been recent inroads towards greater recognition of the importance of connections between workplace learning and higher education (Römgens, Scoupe, and Beausaert 2020), the diversity of graduate students' prior life and work experiences remains neglected. Working-class students are expressly absent from the processes of policy formulation; working-class actors are not imagined as holders of specific knowledge that we might defer to, as is the case for other marginalised communities (Walkerdine 2021). In a recent submission to the new Australian Labour government, equity practitioners in higher education call for taking 'a holistic approach to supporting students from targeted equity groups' (Equity Practitioners in Higher Education Australasia 2022, 2), which includes reference to this knowledge. The problem of leaving out working-class student knowledge is also pertinent to the postgraduate cohort, where policy attention to widening participation concerns is significantly lacking (McCulloch and Thomas 2013). This omission is perhaps indicative of the common misconception that working-classed people who engage with higher education wish to become middle-classed (Loveday 2015, 571).
As is the case with social class, the role and contribution of affect in employability policymaking tends to go unacknowledged. As multiple pushes and pulls are wrestled with -necessary when one occupies a space in the 'grey parts' -the 'felt sense of the quality of life' (Williams 1961, 63) comes into sharp focus. The affective jostling between subject positionings such as 'employable, self-regarding subject' and 'employed (or not), other-regarding subject' provides insight into working-class student-hood in HE and how it matters to the students in question.
To provide expansive employment possibilities, we maintain that the space of affective jostling should be kept open towards displacing normative discourses of the job-ready student -a subjectivity that is too firmly set.
The shortcoming of a policy emphasis on employability, and a JRGP framing of such, is 'not that it gets things wrong, generally speaking, but that it leaves a gap' (Nissenbaum 2010, 9), with this gap being acutely felt by actors such as the study's participants, who are implicated in the directions set yet unable to influence the course that these directions take. Framed as members of 'equity groups', individuals in their respective social positions (classed, gendered and raced) are not legible in the JRGP reform policy and process. Fundamental to why our participants continue in their studies are their affective investments in the humanities and social sciences, which afford attentiveness to inequalities. Their wish to feel less confined by economic precarity may also play into this continuity. There is a certain security in knowing that employment can be had in national interest areas, noting that a logic of employability can be mobilised judiciously, and care-fully.
Altogether, while employability policies are positioned as a salve for class inequalities, they can serve to discredit educational and employment endeavours and reproduce class tensions. As Taylor's narrative well illustrates, affective aspects of employability are essential to refusing narrow definitions of employability, and are key to the reimagining of employability in the context of widening participation policies. Calls are being made currently (Bisley 2022; Molla and Cuthbert 2022) to reconsider the Job-ready Graduates Package. The new Federal Government is being asked to reimagine the future and purpose of Australian higher education and to frame equity issues less reductively. We propose that the integrity of employability policy can be advanced through moving beyond the reductionist model of job-readiness. Attending to the identity and values positions of those who enact policy, and to the affective dynamics of policy, invites a reparative and holistic approach. | 2023-07-11T17:35:57.942Z | 2023-06-27T00:00:00.000 | {
"year": 2024,
"sha1": "b520ae4501c36eac2b9f365825118d530eef4cf5",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/02680939.2023.2228755?needAccess=true&role=button",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "8f79c6fc14c9d0abcf1d2070983e3b7b7b1c62dc",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": []
} |
227053757 | pes2o/s2orc | v3-fos-license | Pseudo Entropy in Free Quantum Field Theories
Pseudo entropy is an interesting quantity with a simple gravity dual, which generalizes entanglement entropy such that it depends on both an initial and a final state. Here we reveal basic properties of pseudo entropy in quantum field theories by numerically calculating this quantity for a set of two-dimensional free scalar field theories and the Ising spin chain. We extend the Gaussian method for computing pseudo entropy to free scalar theories with two parameters: mass $m$ and dynamical exponent $z$. In addition to an area law behavior, this computation finds two novel properties of pseudo entropy which we conjecture to be universal in field theories: one is a saturation behavior and the other is the non-positivity of the difference between pseudo entropy and the averaged entanglement entropy. Moreover, our numerical results for the Ising chain imply that pseudo entropy can serve as a new quantum order parameter which detects whether two states are in the same quantum phase or not.
Recently, a new geometric connection between a minimal area surface and a novel quantity, called pseudo entropy, has been found via AdS/CFT [16]. Pseudo entropy is a generalization of entanglement entropy to a transition between an initial state $|\psi_1\rangle$ and a final state $|\psi_2\rangle$. First we introduce the transition matrix $\tau^{1|2} = |\psi_1\rangle\langle\psi_2| / \langle\psi_2|\psi_1\rangle$ (1). We divide the total Hilbert space $H_{tot}$ into two parts A and B, as we do to define entanglement entropy, and define the pseudo entropy as $S(\tau^{1|2}_A) = -\mathrm{Tr}[\tau^{1|2}_A \log \tau^{1|2}_A]$ with $\tau^{1|2}_A = \mathrm{Tr}_B[\tau^{1|2}]$ (2). Note that when $|\psi_1\rangle = |\psi_2\rangle$, this quantity is equal to the ordinary entanglement entropy. Even though the expression (2) looks like the von Neumann entropy, it takes complex values in general because $\tau^{1|2}_A$ is no longer hermitian. However, when we construct the initial and final states by a Euclidean path-integral with a real valued action, $S(\tau^{1|2}_A)$ turns out to be positive [16], which is the case we will focus on in this article. Moreover, it was found that the pseudo entropy for holographic CFTs can be computed as the area of a minimal surface in a time-dependent Euclidean asymptotically anti-de Sitter (AdS) background [16]. Such a time-dependent Euclidean space is dual to an inner product $\langle\psi_2|\psi_1\rangle$ via AdS/CFT [13]. In addition to the above importance in gravity, pseudo entropy has an intriguing interpretation from the quantum information viewpoint, as a measure of quantum entanglement for intermediate states between the initial and the final state [16]. In this letter we would like to pursue the next obviously important task, namely, to uncover basic properties of pseudo entropy in quantum many-body systems, including quantum field theories and condensed matter systems.
FREE SCALAR FIELD THEORY
Consider free scalar field theory in two dimensions as our first example. We take into account two parameters in the free scalar theory, which are the mass m and the dynamical exponent z. At z = 1, this describes the relativistic scalar field, while for z > 1, it is called Lifshitz scalar field, invariant under the Lifshitz scaling symmetry t → λ z t, x → λx. Its Hamiltonian is written as where φ and π are the scalar field and its momentum. In order to do concrete calculations, we consider its lattice regularization [17][18][19][20] given by the Hamiltonian: where N is the total lattice size. We define N A to be the lattice size of subsystem A. These models are straightforwardly generalized to higher dimensions [17,19]. It is known that we can calculate the entanglement entropy in free field theories from correlation functions on A when a quantum state is described by a Gaussian wave functional [21]. Though for pseudo entropy, we consider a transition matrix instead of a density matrix, we can remarkably extend this Gaussian calculation via an analytic continuation. This makes numerical computations of pseudo entropy possible, playing a major role below.
Two-point functions of φ and π restricted to A, namely $X_{ab}=\mathrm{Tr}[\tau^{1|2}\phi_a\phi_b]$, $P_{ab}=\mathrm{Tr}[\tau^{1|2}\pi_a\pi_b]$ and $R_{ab}=\mathrm{Tr}[\tau^{1|2}\phi_a\pi_b]$, constitute the $2N_A\times 2N_A$ correlator matrix Γ. As opposed to the standard case where $\tau^{1|2}_A$ is given by a hermitian density matrix $\rho_A$, we find that the matrix R takes complex values, though X and P are real symmetric matrices. Therefore, we consider a complexified symplectic transformation $Sp(2N_A,\mathbb{C})$ to bring Γ into a diagonal form (see appendix A for more details), where ν is a diagonal matrix whose diagonal components we write as $\nu_i = \frac{1}{2}\coth\frac{\epsilon_i}{2}$. Practically, we can obtain $\nu_i$ from the fact that the eigenvalues of the rearranged matrix $iJ\cdot\Gamma$, with J the symplectic form, are $\pm\nu_i$. In our interested examples below, $\nu_i$ and $\epsilon_i$ always take positive real values. Finally, the pseudo entropy is computed by the formula $S(\tau^{1|2}_A)=\sum_i[(\nu_i+\tfrac{1}{2})\log(\nu_i+\tfrac{1}{2})-(\nu_i-\tfrac{1}{2})\log(\nu_i-\tfrac{1}{2})]$. This Gaussian calculation of pseudo entropy can also be justified by a more direct approach, the operator method [22,23], as presented in appendix B. Though it has not been proven rigorously that performing the analytic continuation used in this Gaussian calculation is possible, we can directly derive the same formula by the operator method without using the analytic continuation. In our analysis, we take $|\psi_1\rangle$ and $|\psi_2\rangle$ to be ground states for various values of the mass m and dynamical exponent z, which we denote by $(m_1,z_1)$ and $(m_2,z_2)$. Let us first start with the relativistic setups $z_1=z_2=1$ and $m_1\neq m_2$. We take the total system to be a circle of length L and define the subsystem A to be an interval of length l on this circle. We write the UV cut off (lattice spacing) as ε, such that $L=N\epsilon$ and $l=N_A\epsilon$. Our numerical analysis reveals the general behavior of pseudo entropy $S(\tau^{1|2}_A)\simeq \frac{1}{3}\log[\frac{L}{\pi\epsilon}\sin\frac{\pi l}{L}]+f(m_1,m_2,L,l)$ (8), where the first term on the right hand side coincides with the known behavior of entanglement entropy in two dimensional CFT with the central charge c = 1 [3,6], while the second term is a constant term which depends on the relevant parameters. For confirmations of this behavior, refer to Fig.1, where the first logarithmic term in (8) gives a dominant l dependence for small masses.
FIG. 1. $S(\tau^{1|2}_A)$ as a function of the size of the subsystem $N_A$. We set N = 200 and $z_1=z_2=1$. The curves are $c_1\ln[(N/\pi)\sin(\pi N_A/N)]+c_0$, where $c_1\simeq 0.3333$ and $6.028<c_0<6.453$.
This shows that the leading logarithmic divergence, which is equivalent to the area law, is robust for the pseudo entropy. For small values of the masses, our numerical calculations determine the analytical structure of the function $f(m_1,m_2,L,l)$. In the almost massless limit $m_{1,2}L \ll 1$ we obtain the explicit expression (9), as we explain in appendix C. This logarithmic behavior is due to the zero mode of the scalar field, and the formula agrees with the known result for entanglement entropy in [24]. When the mass is small such that $m_{1,2}L \sim 1$ and $m_{1,2}l \ll 1$, we can find the l dependence (10), where the final term $f_0$ does not depend on l. This expression again reproduces the known $\frac{1}{2}\log(-\log(ml))$ term [25] in the entanglement entropy. Refer to appendix D for more details. Now we turn on the dynamical exponents $(z_1,z_2)$ to describe the Lifshitz scalar theory. When $z_1=z_2$, the pseudo entropy gets larger as the dynamical exponent increases, as in the upper graph of Fig.2. When we fix $z_1$ and increase $z_2$, the pseudo entropy approaches a certain finite value, as can be seen from the lower graph in Fig.2. We call this phenomenon saturation. The saturation occurs when we fix $|\psi_1\rangle$ and consider a limit where the entanglement of $|\psi_2\rangle$ gets larger. The two graphs in Fig.3 demonstrate the saturation when we take the two different limits $m_2\to 0$ and $z_2\to\infty$, respectively. This saturation in our free scalar field theory implies that the behavior of pseudo entropy is qualitatively set by the less entangled of the two states.
FIG. 2. The upper plot shows the pseudo entropy as a function of the subsystem size $N_A$ when we chose $m_1=10^{-3}$ and $m_2=10^{-5}$ for various values of $z_1=z_2$. The lower plot shows the pseudo entropy when we set $z_1=3$ and $m_1=m_2=10^{-5}$. We chose the total system N = 100.
FIG. 3. The upper graph shows the pseudo entropy as a function of $m_2$ when we set $z_1=1$ and $m_1=10^{-5}$. The lower graph depicts the pseudo entropy as a function of $z_2$ when we set $m_1=m_2=10^{-5}$. We chose $N_A=50$ and $N=\infty$.
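To make the above Gaussian prescription concrete, the following minimal NumPy sketch evaluates the pseudo entropy of an interval in the periodic chain. The lattice dispersion relation and the explicit per-mode transition-matrix correlators used below (⟨φφ⟩ = 1/(ω1+ω2), ⟨ππ⟩ = ω1ω2/(ω1+ω2), ⟨φπ⟩ = iω1/(ω1+ω2), obtained from a Gaussian wave-functional computation) are our own reconstruction and may differ in convention from the paper; the sketch illustrates the method rather than reproducing the quoted numbers exactly.

```python
import numpy as np

def omega(k, N, m, z):
    # Lattice dispersion of the periodic chain.  The Lifshitz form used here,
    # omega_k = (m^2 + 4 sin^2(pi k/N))^(z/2), is an assumption; the paper's
    # exact discretization may differ.
    return (m**2 + 4.0 * np.sin(np.pi * k / N) ** 2) ** (z / 2.0)

def transition_correlators(N, NA, m1, z1, m2, z2):
    # Two-point functions of phi and pi in the transition matrix, restricted to
    # the interval A = {0, ..., NA-1}.  Per momentum mode (our reconstruction):
    #   <phi phi> = 1/(w1+w2),  <pi pi> = w1*w2/(w1+w2),  <phi pi> = i*w1/(w1+w2).
    k = np.arange(N)
    w1, w2 = omega(k, N, m1, z1), omega(k, N, m2, z2)
    d = np.arange(NA)[:, None] - np.arange(NA)[None, :]
    phase = np.exp(2j * np.pi * d[..., None] * k / N)
    X = np.sum(phase / (w1 + w2), axis=-1) / N
    P = np.sum(phase * w1 * w2 / (w1 + w2), axis=-1) / N
    R = 1j * np.sum(phase * w1 / (w1 + w2), axis=-1) / N
    return X, P, R

def pseudo_entropy(N, NA, m1, z1, m2, z2):
    X, P, R = transition_correlators(N, NA, m1, z1, m2, z2)
    I = np.eye(NA)
    Rs = R - 0.5j * I                      # remove the canonical i/2 contact term
    Gamma = np.block([[X, Rs], [Rs.T, P]])
    J = np.block([[np.zeros((NA, NA)), I], [-I, np.zeros((NA, NA))]])
    nu = np.linalg.eigvals(1j * J @ Gamma)
    nu = np.sort(nu.real)[NA:]             # eigenvalues come in +-nu_i pairs; keep the positive half
    nu = np.clip(nu, 0.5 + 1e-12, None)    # guard against tiny numerical violations of nu >= 1/2
    return float(np.sum((nu + 0.5) * np.log(nu + 0.5) - (nu - 0.5) * np.log(nu - 0.5)))

# Example: interval of NA sites out of N, two nearly massless relativistic vacua
# (parameters in the spirit of Fig. 1).  Setting m1 = m2 and z1 = z2 reduces the
# computation to the ordinary entanglement entropy, which provides a consistency check.
print(pseudo_entropy(N=200, NA=50, m1=1e-3, z1=1, m2=1e-5, z2=1))
```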
From our numerical results, we can find one more basic property of pseudo entropy by introducing the difference $\Delta S_{12} \equiv S(\tau^{1|2}_A) - \frac{1}{2}[S(\rho^1_A)+S(\rho^2_A)]$ (11). If $|\psi_1\rangle$ and $|\psi_2\rangle$ are both very close to a state $|\psi_0\rangle$, so that the deviation from $|\psi_0\rangle$ is very small, then we can derive a first law like relation (see appendix E for a derivation), $S(\tau^{1|2}_A)-S(\rho^0_A)\simeq \mathrm{Tr}[\tau^{1|2}_A H_A]$ (12), as in the first law of entanglement entropy [26][27][28]. Here we introduced the modular Hamiltonian $H_A = -\log\rho^0_A - S(\rho^0_A)$, normalized such that $\langle\psi_0|H_A|\psi_0\rangle=0$. The linear combination (11) is special in that the first-order terms cancel out in the linear difference (12), leaving only the quadratic order in $\Delta S_{12}$. In general, this quadratic difference $\Delta S_{12}$ is not guaranteed to be positive definite. Indeed, we can confirm that both signs are possible even in a two qubit example, as discussed in appendix E. However, in all of our numerical results in the free scalar field theory (3), we observe its non-positivity $\Delta S_{12}\le 0$ when we vary the masses and dynamical exponents, as depicted in Fig.4. Also, in the small mass limit (9), this non-positivity is satisfied.
PSEUDO ENTROPY IN PERTURBED CFT
To investigate the behavior of pseudo entropy further, consider a perturbation of a two dimensional CFT. We assume that the subsystem A is an interval of length l and that the CFT is defined on $R^2$. The perturbation is expressed as $\lambda\int dt\,dx\,O(t,x)$, where O is a primary operator and λ is a small perturbation parameter. We choose $|\psi_1\rangle$ to be the original CFT vacuum and $|\psi_2\rangle$ to be the new vacuum obtained by this perturbation. Since one point functions vanish in a CFT, there is no O(λ) term in differences such as $S(\tau^{1|2}_A)-S(\rho^1_A)$. In particular, if we consider an exactly marginal perturbation, we find that the coefficient of the logarithmically divergent term is modified by an overall factor f(λ), as in (13). The conformal perturbation shows $f(\lambda)=1+g\lambda^2+O(\lambda^3)$ with g < 0 in the λ → 0 limit. We can also derive the same behavior from the holographic calculation of pseudo entropy in Janus solutions [29][30][31][32][33][34]. In this way, we can confirm $\Delta S_{12}\le 0$ for exactly marginal perturbations. Refer to appendix F for derivations of these results.
PSEUDO ENTROPY IN ISING MODEL
As another class of basic quantum many-body systems, we would like to consider the transverse field Ising spin chain. In the continuum limit near the critical point, this model is known to be equivalent to the two dimensional free fermion CFT [35]. Its Hamiltonian can be written as $H=-J\sum_i \sigma^z_i\sigma^z_{i+1}-h\sum_i\sigma^x_i$, where the spins are labeled by i = 0, 1, 2, · · · , N − 1 and $\sigma^z_i$ is the Pauli operator on site i with eigenvalues ±1. We impose the periodic boundary condition. Note that the quantum critical point is situated at J = h in the continuum limit, where J > h is the ferromagnetic phase, while J < h describes the paramagnetic phase.
We calculate the pseudo entropy $S(\tau^{1|2}_A)$ by choosing $|\psi_1\rangle$ and $|\psi_2\rangle$ to be the ground states for $(J,h)=(J_1,h_1)$ and $(J_2,h_2)$, respectively. The subsystem A is assumed to be a single interval with $N_A$ spins. We show numerical results in Fig.5 (we used the python package QuSpin [36] in our computation). From the numerical results, we can observe the saturation $S(\tau^{1|2}_A)\simeq\log 2$ in the $J_2\to\infty$ limit when $J_1>1$. Moreover, we can confirm that the difference (11) satisfies $\Delta S_{12}\le 0$ when $(J_1,h_1)$ and $(J_2,h_2)$ are in the same phase, i.e. $(J_1-h_1)(J_2-h_2)>0$. However, we can have $\Delta S_{12}>0$ when they belong to two different phases, i.e. $(J_1-h_1)(J_2-h_2)<0$. This implies that the sign of the difference $\Delta S_{12}$ can provide an order parameter which tells us whether the two states $|\psi_1\rangle$ and $|\psi_2\rangle$ are in the same phase or not. This result is also expected to hold when considering two ground states of 2D free Majorana fermion theories with different masses, as long as they belong to the same phase [37], since the free Majorana fermion can be obtained as a scaling limit of the transverse Ising chain after a Jordan-Wigner transformation.
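For small chains, the Ising results can be reproduced by brute-force exact diagonalization without any Gaussian machinery. The sketch below assumes the standard convention H = −J Σ σ^z_i σ^z_{i+1} − h Σ σ^x_i with periodic boundary conditions (plain NumPy is used here instead of QuSpin), builds the two ground states, forms the reduced transition matrix and evaluates ΔS_12; the parameter values are illustrative only.

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def site_op(op, i, N):
    # Embed a single-site operator at site i of an N-site chain.
    out = np.array([[1.]])
    for j in range(N):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

def ground_state(N, J, h):
    # H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i, periodic boundary conditions (assumed convention).
    H = np.zeros((2**N, 2**N))
    for i in range(N):
        H -= J * site_op(sz, i, N) @ site_op(sz, (i + 1) % N, N)
        H -= h * site_op(sx, i, N)
    return np.linalg.eigh(H)[1][:, 0]

def entropy(psi1, psi2, NA, N):
    # tau_A = Tr_B |psi1><psi2| / <psi2|psi1>, then S = -Tr[tau_A log tau_A].
    M1 = psi1.reshape(2**NA, 2**(N - NA))
    M2 = psi2.reshape(2**NA, 2**(N - NA))
    tau_A = (M1 @ M2.conj().T) / np.vdot(psi2, psi1)
    lam = np.linalg.eigvals(tau_A).astype(complex)
    lam = lam[np.abs(lam) > 1e-12]
    return np.sum(-lam * np.log(lam)).real   # eigenvalues are real and positive in the cases probed here

N, NA = 8, 4
g1 = ground_state(N, J=1.5, h=1.0)           # ferromagnetic side
g2 = ground_state(N, J=3.0, h=1.0)           # same phase, stronger coupling
S12 = entropy(g1, g2, NA, N)
dS = S12 - 0.5 * (entropy(g1, g1, NA, N) + entropy(g2, g2, NA, N))
print(S12, dS)                                # Delta S_12 <= 0 is expected for two states in the same phase
```

Choosing one coupling on each side of the critical point (e.g. J_1 > h_1 and J_2 < h_2) lets one probe the sign flip of ΔS_12 across phases described above.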
DISCUSSIONS
In this article we have uncovered basic properties of pseudo entropy in quantum field theories by focusing on numerical calculations in a class of free scalar field theories and the Ising spin chain. We conjecture that the properties we found for free scalar field theories, namely the area law, saturation and the non-positivity of $\Delta S_{12}$, will be universal for any quantum field theory. It will be an important future problem to study pseudo entropy in a broader class of field theories and test the above properties. Moreover, our results for the Ising spin chain imply that we can classify different phases in quantum many-body systems from calculations of pseudo entropy. This originates from our expectation that pseudo entropy helps us probe the difference in the structures of quantum entanglement between two states. One obvious future direction will be to analyze pseudo entropy in topological phases, to see if it can play the role of a topological order parameter.
Lifshitz Scalar Theories
We consider the following free scalar theories, which are invariant under the Lifshitz scaling symmetry in the massless limit (m = 0). In order to do concrete calculations we consider the regularized version of these theories on a lattice, known as Lifshitz harmonic lattice models, where we set M = K = 1 without loss of generality (see [17][18][19][20], where different information theoretic properties of these models have been addressed). The z = 1 case is the standard harmonic lattice model. The diagonalized Hamiltonian in generic dimensions takes the standard oscillator form, with a dispersion relation determined by (m, z). In the following we explain how to compute pseudo entropy in these theories, though the method applies more generally to any Gaussian state in quadratic theories.
Pseudo Entropy in Scalar Theories: Correlator Method
Standard correlator method is used to study entanglement and Renyi entropies is Gaussian states of quadratic theories. The idea is based on the fact that the spectrum of the reduced density matrix is fully determined with the two-point functions of the operators restricted into the subregion of interest. The idea is very similar in case of pseudo entropy, except that the notion of density matrix is replaced by the transition matrix. The transition matrix in the post-selection setup defines an analogue to the expectation value of these restricted operators on a Gaussian state as We consider the case when |ψ 1,2 are vacuum states with different (m, z) parameters in the Hamiltonian, namely with different dispersion relations. In this case we have a 1 |ψ 1 = a 2 |ψ 2 = 0. As will be described in the appendix B with more detail, these states are related to each other via where and ω (i) 's are determined by (m i , z i ) in (4).
With the above Bogoluibov transformations, we can determine |ψ 2 in terms of the eigenvectors of the number operator n 1 = a † 1 a 1 as The expectation values of the restricted operators in two dimensions on a translational invariant lattice are given by where r, s = 1, 2, · · · , N A . In this case as opposed to entanglement and Renyi entropies in static states the R correlators, which take pure imaginary values in our case, play a non-trivial role. In order to find a suitable transformation that brings the transition matrix to a diagonal form we need a transformation which preserves the commutation relations. To this end we consider a generalized vector of canonical variables, the fields and their conjugate momenta, as r = (φ 1 , · · · , φ N A , π 1 , · · · , π N A ) T . So the canonical commutation relations read where A, B = 1, 2, · · · , 2N , and we define a correlator matrix as We consider the following transformations between the creation and annihilation operators restricted to subregion A where from commutation relations we find These transformations lead to the following expressions for the correlators In case of dealing with density matrices, where the R correlators take real values, utilizing Williamson's theorem [1], for any symmetric positive definite Γ there always exists a symplectic transformation, S ∈ Sp(2N, R) such that Now that the R correlator is pure imaginary, although the original form of Williamson's theorem does not apply, we consider an analytic continuation of such a transformation, i.e. S ∈ Sp(2N, C) [2]. This continuation is non-singular in our criteria of interest, as we provide several justifications in the following appendix as well as in [37]. An easy way to work out { k } is to find the spectrum of (iJ · Γ) denoted by {ν k }, which gives a double copy of { k } as In the following appendix B, alternatively we use the operator method to directly prove that even without assuming any ansatz for the transition matrix (10), pseudo entropy can be directly read from the spectrum of (iJ · Γ).
Appendix B: Operator method for Pseudo Entropy
We calculate the pseudo entropy by using the operator method developed in [22,23]. First, we summarize the Bogoliubov transformation. Next, we calculate the pseudo entropy.
Bogoliubov transformation
We consider a real free scalar field in (d + 1) dimensional spacetime. As an ultraviolet regulator, we replace the continuous d-dimensional space coordinates x by a lattice of discrete points with spacing a. As an infrared cutoff, we allow the individual components of n ≡ x/a to assume only a finite number N of independent values −N/2 < n µ ≤ N/2. The Greek indices denoting vector quantities run from one to d. Outside this range we assume the lattice is periodic. The scalar field φ n and the conjugate momentum π n obey the canonical commutation relations We consider vacuum states |0 α (α = 1, 2) of Hamiltonians H α , where the index k also carries d integer valued components, each in the range of −N/2 < k µ ≤ N/2 and a −k . We expand φ n and π n as From (20), we obtain From (21), we obtain the Bogoliubov transformation, and a (2) where From a where We use the following notation, where O is an arbitrary operator. O 12 can be calculated as follows. First, we express O as a function of a where f k is an arbitrary complex function. By using (26), we obtain and From (31) and (32), we obtain Operator method for Pseudo Entropy We apply the operator method [22,23] of entanglement entropy to the pseudo entropy. We review the operator method to compute the Rényi entropy developed in [22]. We consider n copies of the scalar fields in (d+1) dimensional spacetime and the j-th copy of the scalar field is denoted by {φ (j) }. Thus the total Hilbert space, H (n) , is the tensor product of the n copies of the Hilbert space, H (n) = H ⊗ H · · · ⊗ H where H is the Hilbert space of one scalar field. We define the density matrix ρ (n) in H (n) as where ρ is an arbitrary density matrix in H. We can express Trρ n Ω as where where π (l) n is a conjugate momenta of φ n exist only in Ω and J (n+1) = J (1) . Notice that φ and π in (36) are operators and the ordering is important. This operator E Ω is called as the glueing operator. When ρ is a pure state, ρ = |Ψ Ψ|, the equation (35) becomes where The useful property of the glueing operator for calculating the pseudo entropy is the following property. From eq.(2.18) in [22], for n arbitrary operators F j (j = 1, 2, · · · n) on H, where F jΩ ≡ Tr Ω c F j . We consider the transition matrix, By using the property (39), we obtain where |0 (n) α = |0 α |0 α · · · |0 α , (α = 1, 2). In order to calculate Tr(τ 1|2 Ω ) n , we express E Ω as a function of a (1) k and a (1) † k and represent it as the normal ordered operator.
We decompose φ and π into the creation and annihilation parts, where The commutators of these operators are By using (44) and the Baker-Campbell- where N 1 (O) is the normal ordered operator of O with respect to φ 1± n and π 1± n , and We substitute (45) into (41) and obtain Tr(τ where and we have used (33). By using (26), we obtain whereX From (46), (47) and (49), we obtain where X mn ≡ X 1,mn +X mn = φ m φ n 12 P mn ≡ P 1,mn +P mn = π m π n 12 We perform the J and K integrals in (47) simultaneously. We rewrite S 12,JK in (53) as S 12,JK = S 12,J = (J (1)T , · · · , J (n)T , K (1)T , · · · , K (n)T )S n and, where δ 1,n+1 = δ n,0 = 1. We substitute (56) into (47) and perform the J and K integrals in (47) and obtain We can diagonalize S n with respect to the replica label l by Fourier transformation. We define a unitary matrix U lk = 1 √ n e i2πkl/n and obtain, where From (60) and (61), we obtain where For k = 0, we obtain det 2S n,k=0 = det P det P −1 = 1.
where we used the formula For k ≥ 1, we can rewrite S n,k as where and we used Q = − i 2 . In order to calculate n−1 k=1 det 2S n,k , we use the following formulas, (we show them in the next subsection), (1 − e i2πk/n ) = n, From (66), (67) and (69), we obtain where ν i is the eigenvalue of iJΓ and V is the number of the points of the subsystem. From the characteristic equation, we obtain 0 = det(x − iJΓ) = det(x − iJiJΓiJ) = det(x − ΓiJ) = det(x + (iJΓ) T ) = det(x + iJΓ), where x is an eigenvalue of iJΓ and we used (iJ) 2 = 1 and Γ = Γ T . So, if x is an eigenvalue of iJΓ, −x is also an an eigenvalue of iJΓ. So, we sort ν i as ν V +i = −ν i and obtain From (59) and (71), we obtain the pseudo (Rényi) entropy as, When R = 0, Γ is a positive-definite real matrix and we can show that −ν V +i = ν i ≥ 0 by using Williamson's theorem. So, when R = 0, eqs (72) and (73) are the same as ordinary entanglement (Rényi) entropy.
Appendix C: Almost Massless Regimes
In this section, we consider a periodic system with length L and 'almost massless' scalar fields with mass m i L 1. Let ρ i A be a reduced density matrix for an almost massless scalar field in a single interval A = [0, l]. It is known that the entanglement entropy for ρ i A is schematically given by where f (m i , L, l) is a non-trivial function which is negligible in our almost massless field and is not important for our present discussion. On the other hand, for the pseudo entropy for two almost massless scalar fields with mass m i and m j , we numerically confirmed where f 0 (m i , m j , L, l) is again a negligible function which is less important than the second term in (80). In particular, we have numerically studied the difference between the pseudo entropy and the averaged entanglement entropy, Interestingly, it can be well-approximated as the mass terms in the above, which does not depend on the system size. Notice that it is always negative in our almost massless regimes. It means that these mass terms (the second term of (79) and (80)) essentially explain the negativity of ∆S 12 . For massive regions, however, we cannot neglect the third terms of these equations and still observe the negativity of ∆S 12 . We have confirmed the same behaviour for the 2nd pseudo Renyi entropy. See FIG. 1. We stress that these are not evenly spaced and it can be perfectly explained by the equation (83). We have seen this agreement up to 16 digits. Notice that we did not see such an almost perfect coincidence for z = 1 case.
Lifshitz cases with z1 = z2 > 1 One can repeat the same analysis for z 1 = z 2 ≡ z > 1 cases and ask a z-dependence of the previous mass-terms. The answer is simply given by replacing m 1,2 L to (m 1,2 L) z . To be explicit, we have numerically confirmed We stress that the z-dependence of the pseudo entropy does not show up as an overall factor (see FIG. 2).
Appendix D: Massive Regimes
In this appendix, we study the pseudo entropy for massive scalar fields. In contrast to the previous almost massless regime explained in appendix C, our result is based on semi-analytic approach. We will leave the detail of the calculation in the end of this appendix. Based on our correlator method, we propose a mass-correction formula of the pseudo entropy for scalar fields as where Here S(τ 1|2 A l ) gives the pseudo entropy for a single interval A l = [0, l] between two vacua with different mass parameters m 1 and m 2 . The l 0 is just a reference point to get rid of irrelevant contributions. Note that this formula is a leading order approximation and only valid for the small interval, m 1 l, m 2 l 1. Under the appropriate limit with m 1 → m 2 , it reduces to the famous result for the entanglement entropy for a massive scalar field [25].
Notice that the f (m 1 , m 2 , l) is symmetric, i.e. f (m 1 , m 2 , l) = f (m 2 , m 1 , l) which is also guaranteed by our numerical results. On the other hand, we have to mention that the l-dependence of f (m 1 , m 2 , l) is not sensitive to the mass parameters very much.
For convenience, we define a regularized PE as which corresponds to the left hand side of (84) with l 0 = . In Figure 3, we plotted PE and regularized PE for fixed m 1 with various mass parameters m 2 . These figures numerically guarantee that the above mass-corrected formula is valid.
In the same way, we can also find the similar expression for 2nd pseudo Renyi entropy as, where Note that the mass correction part does not depend on the Renyi index as well as the ordinary entanglement entropy. To see the consistency with numerical results, please see the Figure 4. Psuedo Entropy 3. The PE and regularized PE for fixed m1 = 1.0 × 10 −3 with various mass parameters m2. As a reference, we also plot the entanglement entropy for CFT vacuum with c = 1 (orange curve). Note that this formula is valid only in the regime mil 1. Out of this regime, as we can see from the right-top figure, there is a small deviation.
Detail of the semi-analytic derivation
In what follows, we explain a semi-analytic derivation of the above mentioned mass-correction formula from our covariance matrix methods.
A key idea is to notice that the mass-dependence is an IR effect which can be read off from the low energy modes in the discretized models. Having this intuition, let us treat a single site on the lattice as our subsystem and only focus on the lowest energy mode in the dispersion relation. The similar approach has been accomplished in [4,5]. That is to say, we take the thermodynamic limit N → ∞ and approximate our dispersion relation as, where we recovered the lattice size which now formally coincides with the subsystem size l. In this limit, each component of the matrix becomes an integral form, where each ω (i) p follows the standard dispersion relation of a massive free scalar field, Following our prescription, we shall study the eigenvalue of our covariance matrix, We can formally expand each component with respect to the small . Physically, we have to assume m 1,2 1. Remind that now we can regard as a subsystem size l. In doing so, we obtain the leading contribution of interest, In particular, we can neglect the off-diagonal elements R 11 up to this order. It means that we can simply obtain the desired eigenvalue ν as ν X 11 P 11 1 16 + 1 8 Finally, we have obtained the analytic expression of the pseudo entropy as As a consistency check, it is symmetric under the mass exchange m 1 ↔ m 2 and reduces to the well-known formula by Casini and Heurta under the ordinary entropy limit m 1 → m 2 . As we have already seen in FIG. 3, this expression matches the numerical calculations.
In the similar way, one can also consider the similar analytic form for any n-th Renyi entropy. For example, if we consider the 2nd pseudo Renyi entropy, we obtain the same form as (99), which has the same form as (99) if we focus on the leading order contribution and is consistent with the numerical plots (see FIG.4).
Our approach nicely captures the leading order of mass-corrections. Finding more refined or exact analytical approaches would be an interesting future direction.
If we include the quadratic order (102), we have The final integral term is negative if τ 0 is non-negative and δτ is hermitian.
Consider two quantum states |ψ 1 and |ψ 2 which are both very close to a state |ψ 0 . In this case the deviation of the pseudo entropy τ ψ1|ψ2 A from S(ρ 0 A ) is found from the first law (104): Here H A is the modular Hamiltonian defined by H A = − log ρ A + S(ρ 0 A ) such that ψ 0 |H A |ψ 0 = 0. For example, we can regard |ψ 0 as the ground state of a given Hamiltonian and the two states |ψ 1 and |ψ 2 are excited states.
We would like to consider the sign of the difference: A ) is real valued, which we assume in the main context of this paper, (108) is identical to the twice of the difference The transition matrix deviates from ρ 0 where we noted ψ 2 |ψ 1 1 + O( 2 ). By using (106) repeatedly, this leads to (up to O( 2 )) where the linear O( ) terms do cancel. Here ∆ (2) S is the quadratic contribution from the last integral term in (105).
In particular, if we consider the special perturbation where |α = |β and then we find This shows that the above difference is proportional to | 1 − 2 | 2 . However the sign of the quadratic is not definite from the above analysis. Indeed we will find that it can be both negative and positive below. Nevertheless as we will see in appendix F and the main context of this article, the sign turns out to be non-positive for quantum field theories.
Perturbations in Two Qubit System
For the two qubit system, we choose two states parametrized by angles $\theta_1$ and $\theta_2$, where we assume $0\le\theta_1,\theta_2\le\pi/2$. The pseudo entropy is computed as in [16]. We are interested in a small perturbation $\theta_2-\theta_1=\delta\ll 1$. Then the interesting difference $\Delta S_{12}$ is a function of $\theta_1$ and $\delta$, which is plotted in Fig.5. It is not always negative. In particular, when the state $|\psi_1\rangle$ is highly entangled, the difference tends to be positive.
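This two-qubit statement is easy to verify directly. The sketch below assumes the parametrization |ψ_i⟩ = cos θ_i |00⟩ + sin θ_i |11⟩ (our reading of the setup; any Schmidt-like parametrization gives the same qualitative picture) and evaluates ΔS_12 for a weakly and a highly entangled |ψ_1⟩, illustrating that the sign of the difference can flip.

```python
import numpy as np

def state(theta):
    # |psi(theta)> = cos(theta)|00> + sin(theta)|11>   (assumed parametrization)
    psi = np.zeros(4)
    psi[0], psi[3] = np.cos(theta), np.sin(theta)
    return psi

def S(psi1, psi2):
    # Pseudo entropy of qubit A for tau = |psi1><psi2| / <psi2|psi1>.
    M1, M2 = psi1.reshape(2, 2), psi2.reshape(2, 2)
    tau_A = (M1 @ M2.conj().T) / np.vdot(psi2, psi1)
    lam = np.linalg.eigvals(tau_A).astype(complex)
    lam = lam[np.abs(lam) > 1e-12]
    return np.sum(-lam * np.log(lam)).real

for th1 in (0.1, 0.7):                     # weakly vs highly entangled |psi_1>
    th2 = th1 + 0.05                       # small perturbation delta
    dS = S(state(th1), state(th2)) - 0.5 * (S(state(th1), state(th1)) + S(state(th2), state(th2)))
    print(th1, dS)                         # the sign of Delta S_12 differs between the two cases
```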
Appendix F: Pseudo Entropy for Perturbed CFTs
Here we analyze the change of the pseudo entropy S(τ 1|2 A ) when we perturb a CFT vacuum by a primary operator in two dimension. We choose |ψ 1 to be the original CFT vacuum and |ψ 2 to be the vacuum in the perturbed theory. We will calculate this both from the field theoretic and holographic approaches.
CFT Perturbations
Consider a two dimensional CFT perturbed by a primary operator O(x) with the (chiral) conformal dimension h: To describe a transition matrix, we assume where x 1 is the coordinate of the Euclidean time. Note that this is chosen such that the initial state is the original CFT vacuum, while the final state is the ground state of the perturbed theory (111).
We introduce the complex coordinate (w,w) such that w = x 2 + ix 1 and choose the subsystem A to be 0 ≤ x 2 ≤ l at the time x 1 = 0. In this setup, we have Here Σ n is the n-sheeted Riemann surface obtained by gluing n complex planes along the cut A. The reduced transition matrix at λ = 0 coincides with the reduced density matrix for the CFT vacuum ρ 1 A . Thus by using the fact that the one-point function in a CFT vanishes and by expanding up to the quadratic order we have where Σ + n denotes the upper half of the surfaces Σ n , where the perturbation is restricted. The difference between the n-th Renyi pseudo entropy and the original n-th Renyi entropy is By taking the limit n → 1, we obtain the difference S(τ 1|2 A ) − S(ρ 1 A ) between the pseudo entropy and entanglement entropy.
The two point function on a complex plane Σ 1 reads To calculate the two point functions on Σ n , we perform the conformal map from Σ n into a complex plane R 2 : This gives where P + n is the image of Σ + n by the conformal map (117). Explicitly we have where We call n disconnected regions in Q n as n chambers.
In the actual computations, we need a UV regularization when z 1 and z 2 get closer. To have a universal treatment of such a cut off, we rewrite the latter w-integral in (114) in terms of z coordinate as follows where we introduced The region P + 1 is defined by i.e. 1 n fraction of P + n or a single chamber. It is useful to note lim z2→z1 G(z 1 , z 2 ) = 1, and |G(z 1 , z 2 )| ≤ 1 Then we can evaluate as follows where It is clear from G ≤ 1 (122) that this difference is positive which gives the non-positivity of the difference This is because the difference is bounded from below by setting G = 1. If we set G = 1, the second term in (123) is canceled by the contributions from the first term where z 1 and z 2 are in the same chamber. Therefore totally the contributions from the first term where z 1 and z 2 are in different chambers remain, which are clearly positive.
Exact Marginal Perturbation h = 1
Let us estimate the leading divergent contribution for the exactly marginal perturbation h = 1. Since such a divergence arises when z 1 z 2 , we set G 1 and obtain log Tr The divergences when z 1 z 2 are canceled out when both z 1 and z 2 are in the same chamber. Thus the leading logarithmic divergence arises where z 1 and z 2 are in the different chambers. In this case the divergence occurs in the two limits z 1 , z 2 → 0 or z 1 , z 2 → ∞ corresponds to the limits that the coordinate w 1 and w 2 both get closer to either of the two end points of the interval A.
Thus we get Since we can confirm that c n is positive and monotonically increasing function of n, the above difference is negative in the limit n → 1 and thus we obtain S(τ 1|2 A ) − S(ρ 1 A ) < 0. Since the exact marginal perturbation does not change the central charge c, the logarithmic divergence in S(ρ 1 A ) has the same coefficient as that of S(ρ 2 A ). Thus we can conclude that ∆S 12 = S(τ 1|2 A ) − S(ρ 1 A )/2 − S(ρ 2 A )/2 < 0 under an exactly marginal perturbation. In other words, we have the expression in (13), where the function f (λ) behaves as f (λ) = 1 + gλ 2 + O(λ 3 ). The coefficient g is negative because c 3 g = − dcn dn | n=1 .
Holographic Analysis
Janus solutions [29,30,[32][33][34] provides us with a full order answer to the exactly marginal perturbation of a CFT in any dimension. Below we would like to evaluate the holographic pseudo entropy from the minimal areas in Janus solutions.
The (d+1)-dimensional Janus solutions take the general form [equation omitted], where ds^2_{AdS(d)} is the d-dimensional AdS metric. We are interested in the holographic pseudo entropy at the time slice of the dual CFT defined by ρ = y = 0. The subsystem A sits on this (d−1)-dimensional time slice. We assume the Z_2 invariance h(ρ) = h(−ρ), so that the minimal surface Γ_A sits on the slice ρ = 0. Also we assume that both the future and past infinity describe two different CFT vacua |ψ_1⟩ and |ψ_2⟩ with the same central charge c. This requires h(ρ) ≃ ±2ρ in the limit ρ → ±∞. The coordinate x is the space direction of the dual interface CFT. The Euclidean time direction of the CFT is the (ρ, y) direction as usual. ρ = ∞ (and ρ = −∞) corresponds to the upper (and lower) half plane of the interface CFT.
When d = 2, we choose the subsystem A as an interval −l/2 ≤ x ≤ l/2 at t = 0 (i.e. the location of the Janus interface) and calculate its holographic pseudo entropy. Due to the Z_2 symmetry, Γ_A is the geodesic on the ρ = 0 slice. If we write the cutoff of y as ε̃, we have the following estimation of the holographic pseudo entropy [equation omitted], where c is the central charge of the CFT. Below we would like to work out whether this pseudo entropy is smaller than the original CFT entanglement entropy (109). We do not care about the difference between ε̃ and ε, as this only leads to a subleading difference. To argue that the difference (109) is non-positive, we need to confirm the condition (134). We can generalize this to any higher dimension d straightforwardly, and we can easily confirm that the difference is non-positive when (134) is satisfied. Below we would like to argue that (134) is always true for any (physically sensible) Janus solution in any dimension. For this we first impose a Euclidean version of the null energy condition [equation omitted], where N^µ is an arbitrary null vector. In our Euclidean setup (130), we choose [equation omitted]. The first one, N^(1), gives the trivial condition R_{µν} N^µ N^ν = 0, but the second one leads to [equation omitted]. In an explicit example of the 3D Janus solution of Einstein-dilaton theory [31], we have e^{2h(ρ)} = 1/2 + (√(1−2γ^2)/2) cosh(2ρ). | 2020-11-20T02:01:01.370Z | 2020-11-19T00:00:00.000 | {
"year": 2021,
"sha1": "7f9a204f9d49db7ea0d139155cbe452f68269142",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.126.081601",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "9d93a37d16201adc6efb18b3e7f999d7bfd0e088",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
231709393 | pes2o/s2orc | v3-fos-license | Leveraging End-to-End ASR for Endangered Language Documentation: An Empirical Study on Yolóxochitl Mixtec
“Transcription bottlenecks”, created by a shortage of effective human transcribers (i.e., transcriber shortage), are one of the main challenges to endangered language (EL) documentation. Automatic speech recognition (ASR) has been suggested as a tool to overcome such bottlenecks. Following this suggestion, we investigated the effectiveness for EL documentation of end-to-end ASR, which unlike Hidden Markov Model ASR systems, eschews linguistic resources but is instead more dependent on large-data settings. We open source a Yoloxóchitl Mixtec EL corpus. First, we review our method in building an end-to-end ASR system in a way that would be reproducible by the ASR community. We then propose a novice transcription correction task and demonstrate how ASR systems and novice transcribers can work together to improve EL documentation. We believe this combinatory methodology would mitigate the transcription bottleneck and transcriber shortage that hinders EL documentation.
Introduction
warned that half of the world's 7,000 languages would disappear by the end of the 21st century. Consequently, a concern with endangered language documentation has emerged from the convergence of interests of two major groups: (1) native speakers who wish to document their language and cultural knowledge for future generations; (2) linguists who wish to document endangered languages to explore linguistic structures that may soon disappear. Endangered language (EL) documentation aims to mitigate these concerns by developing and archiving corpora, lexicons, and grammars (Lehmann, 1999). There are two major challenges: (a) Transcription Bottleneck: The creation of EL resources through documentation is extremely challenging, primarily because the traditional method to preserve primary data is not simply with audio recordings but also through time-coded transcriptions. In a best-case scenario, texts are presented in interlinear format with aligned parses and glosses along with a free translation (Anastasopoulos and Chiang, 2017). But interlinear transcriptions are difficult to produce in meaningful quantities: (1) ELs often lack a standardized orthography (if written at all); (2) invariably, few speakers can accurately transcribe recordings. Even a highly skilled native speaker or linguist will require a minimum of 30 to 50 hours to simply transcribe one hour of recording (Michaud et al., 2014;Zahrer et al., 2020). Additional time is needed for parse, gloss, and translation. This creates what has been called a "transcription bottleneck", a situation in which the expert transcribers cannot keep up with the amount of recorded material for documentation.
(b) Transcriber Shortage: It is generally understood that any viable solution to the transcription bottleneck must involve native speaker transcribers. Yet usually few, if any, native speakers have the skills (or time) to transcribe their language. Training new transcribers is one solution, but it is timeconsuming, especially with languages that present complicated phonology and morphology. The situation is distinct for major languages, for which transcription can be crowd-sourced to speakers with little need for specialized training (Das and Hasegawa-Johnson, 2016). In Yoloxóchitl Mixtec (YM; Glottocode=yolo1241, ISO 639-3=xty), the focus of this study, training is time-consuming: after one-year part-time transcription training, a proficient native speaker, Esteban Guadalupe Sierra, still has problems with certain phones, particularly tones and glottal stops. Documentation requires accurate transcriptions, a goal yet beyond even the capability of an enthusiastic speaker with many months of training.
As noted, ASR has been proposed to mitigate the Transcription Bottleneck and create increasingly extensive EL corpora. Previous studies first investigated HMM-based ASR for EL documentation (Ćavar et al., 2016;Mitra et al., 2016;Jimerson and Prud'hommeaux, 2018;Cruz and Waring, 2019;Thai et al., 2020;Zahrer et al., 2020;Gupta and Boulianne, 2020a). Along with HMM-based ASR, natural language processing and semi-supervised learning have been suggested as a way to produce morphological and syntactic analyses. As HMM-based systems have become more precise, they have been increasingly promoted as a mechanism to bypass the transcription bottleneck. However, ASR's context for ELs is quite distinct from that of major languages. Endangered languages seldom have sufficient extant language lexicons to train an HMM system and invariably suffer from a dearth of skilled transcribers to create these necessary resources (Gupta and Boulianne, 2020b).
As we have confirmed in the present study, end-to-end ASR systems have shown comparable or better results than conventional HMM-based methods (Graves and Jaitly, 2014; Chiu et al., 2018; Pham et al., 2019; Karita et al., 2019a). As end-to-end systems directly predict textual units from acoustic information, they save much effort on lexicon construction. Nevertheless, end-to-end ASR systems still suffer from limited training data. Attempts with resource-scarce languages have reported relatively high character (CER) or word (WER) error rates (Thai et al., 2020; Matsuura et al., 2020; Hjortnaes et al., 2020). It has nevertheless become possible to utilize ASR with ELs to reduce significantly, but not eliminate, the need for human input and annotation to create acceptable ("archival quality") transcriptions.
This Work: This work represents end-to-end ASR efforts on Yoloxóchitl Mixtec (YM), an endangered language from western Mexico. The YMC corpus (specifically, we used material from the community of Yoloxóchitl, YMC, one of the four communities in which YM is spoken) comprises two sub-corpora. The first ("YMC-EXP", the expert-transcribed corpus) includes 100 hours of transcribed speech that have been carefully checked for accuracy. We built an ESPNet (Watanabe et al., 2018) recipe that shows the whole process of constructing an end-to-end ASR system using the YMC-EXP corpus. 2 The second corpus ("YMC-NT", the native-trainee corpus) includes 8+ hours of additional recordings not included in the YMC-EXP corpus. This second corpus contains novice transcriptions with subsequent expert corrections, which has allowed us to evaluate the skill level of the novice. Both the YMC-EXP and YMC-NT corpora are publicly available at OpenSLR under a CC BY-SA-NC 3.0 License. 3 The contributions of our research are: • A new Yoloxóchitl Mixtec corpus to support ASR efforts in EL documentation.
• A reproducible workflow to build an end-toend ASR system for EL documentation.
• A comparative study between HMM-based ASR and end-to-end ASR, demonstrating the feasibility of the latter. To test the framework's generalizability, we also experiment with another EL: Highland Puebla Nahuatl (Glottocode=high1278; ISO 639-3=azz).
• An in-depth analysis of errors in novice transcription and ASR. Considering the discrepancies in error types, we propose Novice Transcription Correction (NTC) as a task for the EL documentation community. A rule-based method and a voting-based method are proposed. 4 In clean speech, the best system reduces relative word error rate in the novice transcription by 38.9% .
Corpus Description
In this section, we first introduce the linguistic specifics for YM and YMC. Then we discuss the recording settings. Since YM is a spoken language without a standardized textual format, we next explain the transcription style designed for this language. Finally, we offer the corpus partition and some statistics regarding corpora size.
Linguistic Specifics for Yoloxóchitl Mixtec
Yoloxóchitl Mixtec is an endangered, relatively low-resource Mixtecan language. It is mainly spoken in the municipality of San Luis Acatlán, state of Guerrero, Mexico. It is one of some 50 languages in the Mixtec language family, which is part of a larger unit, Otomanguean, that Suárez (1983) considers "a 'hyper-family' or 'stock'." Mixtec languages (spoken in Oaxaca, Guerrero, and Puebla) are highly varied, resulting from approximately 2,000 years of diversification. YM is spoken in four communities: Yoloxóchitl, Cuanacaxtitlan, Arroyo Cumiapa, and Buena Vista. Mutual intelligibility among the four YM communities is high despite significant differences in phonology, morphology, and syntax. All villages have a simple segmental inventory but significant though still undocumented variation in tonal phonology. YMC (refering only to the Mixtec of the community of Yoloxóchitl [16.81602, -98.68597]) manifests 28 distinct tonal patterns on 1,451 identified bimoraic lexical stems. The tonal patterns carry a significant functional load in regards to the lexicon and inflection. For example, 24 distinct tonal patterns on the bimoraic segmental sequence [nama] yield 30 words (including six homophones). This ample tonal inventory presents challenges to both a native speaker learning to write and an ASR system learning to recognize. Notably, it also introduces difficulties in constructing a language lexicon for training HMM-based systems.
Recording Settings
There are two corpora used in this study. The first (YMC-EXP) was used for ASR training. The second (YMC-NT) was used to train the novice speaker (e.g., set up a curriculum for him to learn how to transcribe) and for Novice Transcription Correction. The YMC-EXP corpus comprises expert transcriptions used as the gold-standard reference for ASR development. The YMC-NT corpus has paired novice-expert transcription as it was used to train and evaluate the novice writer.
The corpus used for ASR development comprises mostly conversational speech in two-channel recordings (split for training). Each conversation is with two speakers and each of the two speakers was fitted with a separate head-worn mic (usually a Shure SM10a). Over two dozen speakers (mostly male) contributed to the corpus. The topics and their distribution were varied (plants, animals, hunting/fishing, food preparation, ritual speech). The YMC-NT corpus comprises single-channel field recordings made with a Zoom H4n at the moment plants were collected during ethnobotanical research. Speakers were interviewed one after another; there is no overlap. However, the recordings often registered background sounds (crickets, birds) that we expected would negatively impact ASR accuracy more than seems to have occurred. The topic was always a discussion of plant knowledge (a theme of only 9% of the YMC-EXP corpus). Expectedly, there were many out-of-vocabulary (OOV) words (e.g., plant names not elsewhere recorded) in this YMC-NT corpus. 5
Corpus Transcription
(a) Transcription Level: The YMC-EXP corpus presently has two levels of transcription: (1) a practical orthography that represents underlying forms; (2) surface forms. The underlying form marks prefixes (separated from the stem by a hyphen), enclitics (separated by an = sign), and tone elision (with the elided tones in parentheses). All these "breaks" and phonological processes disappear in the surface form. For example, the underlying be3e3=an4 (house=3sgFem; 'her house') surfaces as be3ã4. And be3e(3)=2 ('my house') surfaces as be3e2. Another example is the completive prefix ni1-, which is separated from the stem as in ni1-xi3xi(3)=2 (completive-eat-1sgS; 'I ate'). The surface form would be written nĩ1xi3xi2. Again, processes such as nasalization, vowel harmony, palatalization, and labialization are not represented in the practical (underlying) orthography but are generated in the surface forms. The only phonological process encoded in the underlying orthography is tone elision, for which parentheses are used.
The practical, underlying orthography mentioned above was chosen as the default system for ASR training for three reasons: (1) it is easier than a surface representation for native speakers to write; (2) it represents morphological boundaries and thus serves to teach native speakers the morphology of their language; and (3) for a researcher interested in generating concordances for a corpus-based lexicographic project it is much easier to discover the root for 'house' in be3e3=an4 and be3e(3)=2 than in the surface forms be3ã4 and be3e2.
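To make the relation between the two transcription levels concrete, the following minimal Python sketch (our illustration only; the function name and rules are assumptions, not the project's tooling) strips the morpheme boundaries and parenthesized elided tones that distinguish the underlying orthography from the surface form. It deliberately ignores the harder processes mentioned above (nasalization, vowel harmony, palatalization, labialization), so it reproduces be3e(3)=2 → be3e2 but not the nasal vowel in nĩ1xi3xi2.

import re

def underlying_to_surface(underlying):
    # Remove morpheme boundaries: '-' after prefixes and '=' before enclitics.
    s = underlying.replace("-", "").replace("=", "")
    # Remove elided tones, which the underlying orthography keeps in parentheses.
    s = re.sub(r"\(\d+\)", "", s)
    return s

# Examples from the text (segmental processes such as nasalization are not modeled):
# underlying_to_surface("be3e(3)=2")      -> "be3e2"
# underlying_to_surface("ni1-xi3xi(3)=2") -> "ni1xi3xi2"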
(b) "Code-Switching" in YMC: Endangered, colonialized Indigenous languages often manifest extensive lexical input from a dominant Western language, and speakers often talk with "codeswitching" (for lack of a better term). Yoloxóchitl Mixtec is no exception. Amith considered how to write such forms best and decided that Spanishorigin words would be written in Spanish and without tone when their phonology and meaning are close to that of Spanish. So Spanish docena appears over a dozen times in the corpus and is written tucena; it always has the meaning of 'dozen'. All month and day names are also written without tones. Note, however, that Spanish camposanto ('cemetery') is also found in the corpus and pronounced as pa 3 san 4 tu 2 . The decision was made to write this with tone markings as it is significantly different in pronunciation from the Spanish origin word. In effect, words like pa 3 san 4 tu 2 are considered loans into YM and are treated orthographically as Mixtec. Words such as tucena are considered "code-switching" and written without tones.
(c) Transcription Process: The initial time-aligned transcriptions were made in Transcriber (Barras et al., 1998). However, given that Transcriber cannot handle multiple tiers (e.g., transcription and translation, or underlying and surface orthographies), the Transcriber transcriptions were then imported into ELAN (Wittenburg et al., 2006) for further processing (e.g., correction, surface-form generation, translation).
Corpus Size and Partition
Though endangered, YMC does not suffer from the same level of resource limitations that affect most ASR work with ELs (Ćavar et al., 2016;Thai et al., 2020). The YMC-EXP corpus, developed for over ten years, provided 100 hours for the ASR training, validation, and test corpora. There are 505 recordings from 34 speakers in the YMC-EXP corpus, and the transcription for the YMC-EXP were all carefully proofed by an expert native-speaker linguist. As shown in Table 1, we offer a train-valid-test split where there is no overlap in content between the sets. The partition considers the balance between speakers and relative size for each part. As introduced in Section 2.2, the YMC-NT corpus has both expert and novice transcription. It includes only three speakers for a total of 8.36 hours. In the recordings of two consultants, the environment is relatively clean and free of background noise. The speech of the other individual, however, is frequently affected by background noise. This seems coincidental as all three were recorded together, one after the other in random order. But given this situation, we split the corpus into three sets: clean-dev (speaker EGS), clean-test (speaker CTB), and noise-test (speaker FEF; see Table 1).
The "code-switching" discussed in 2.3 (b) introduces different phonological representations and makes it difficult to train an HMM-based model using language lexicons. Therefore, previous work (Mitra et al., 2016) using the HMM-based system for YMC did not consider phrases with "codeswitching". To compare our model with their results, we have used the same experimental corpus in our evaluation. Their corpus (YMC-EXP(-CS)), shown in Table 1, is a subset of the YMC-EXP; the YMC-EXP(-CS) corpus does not contain "codeswitching" phrases, i.e., phrases with words that were tagged as Spanish origin and transcribed without tone.
ASR Experiments
3.1 End-to-End ASR
As ESPNet (Watanabe et al., 2018) is widely used in open-source end-to-end ASR research, our end-to-end ASR systems are all constructed using ESPNet. For the encoder, we employed the conformer structure (Gulati et al., 2020), while for the decoder we used the transformer structure to condition on the full context, following the work of Karita et al. (2019b). The conformer architecture is a state-of-the-art innovation developed from previous transformer-based encoding methods (Karita et al., 2019a; Guo et al., 2020). A comparison between the conformer and transformer encoders shows the value of applying state-of-the-art end-to-end ASR to ELs.
Experiments and Results
As discussed above, our end-to-end model applied an encoder-decoder architecture with a conformer encoder and a transformer decoder. The architecture of the model follows Gulati et al. (2020) while its configuration follows the aishell conformer recipe from ESPNet. 6 The experiment is reproducible using ESPNet.
As the end-to-end models are based on word pieces, we adopted CER and WER as evaluation metrics. They help demonstrate system performance at different levels of granularity. But because the HMM-based systems were decoded with a word-based lexicon, for comparison to HMM we only use the WER metric. To thoroughly examine the model, we conducted several comparative experiments, as discussed below.
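Both metrics are normalized edit distances; the only difference is the unit over which the alignment is computed. The short Python sketch below (ours, not taken from the released recipe) computes them in the usual way; whether spaces are stripped before computing CER is a convention and is marked as an assumption in the comments.

def edit_distance(ref, hyp):
    # Levenshtein distance with unit insertion, deletion, and substitution costs.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)]

def wer(ref, hyp):
    # Word error rate: align word sequences, normalize by reference length.
    return edit_distance(ref.split(), hyp.split()) / max(1, len(ref.split()))

def cer(ref, hyp):
    # Character error rate; removing spaces first is an assumption, not a rule.
    r, h = ref.replace(" ", ""), hyp.replace(" ", "")
    return edit_distance(list(r), list(h)) / max(1, len(r))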
(a) Comparison with HMM-based Methods:
We first compared our end-to-end method with the Deep Neural Network-Hidden Markov Model (DNN-HMM) methods proposed in Mitra et al. (2016). In that work, Gammatone Filterbanks (GFB), articulation, and pitch features are configured for the DNN-HMM model, and the baseline is a DNN-HMM model using Mel Filterbanks (MFB). In recent unpublished work, Kwon and Kathol developed a state-of-the-art CNN-HMM-based ASR model 7 for YMC based on the lattice-free Maximum Mutual Information (LF-MMI) approach, also known as the "chain model" (Povey et al., 2016). The experimental data for the above HMM-based models is the YMC-EXP(-CS) corpus discussed in Section 2.4. For the comparison, our end-to-end model adopted the same partition to ensure fair comparability with their results. Table 2 shows the comparison between the DNN-HMM systems and our end-to-end system on YMC-EXP(-CS). It indicates that, even without an external language lexicon, the end-to-end system significantly outperforms both the DNN-HMM baseline models and the CNN-HMM-based state-of-the-art model.
In Section 2.3 (b), we note that "code-switching" is invariably present in EL speech (e.g., YMC). Thus, ASR models built on "code-switching"-free corpora (like YMC-EXP[-CS]) are not practical for real-world usage. However, a language lexicon is available only for the YMC-EXP(-CS) corpus, so we cannot conduct HMM-based experiments with either the YMC-EXP or the YMC-NT corpus.
6 See Appendix for details about the model configuration.
7 See Appendix for details about the model configuration.
(b) Comparison with Different End-to-End ASR Architectures:
We also conducted experiments comparing models with different encoders and decoders on the YMC-EXP corpus. For a Recurrent Neural Network-based (E2E-RNN) model, we followed the best hyper-parameter configuration, as discussed in Zeyer et al. (2018). For a Transformer-based (E2E-Transformer) model, the same configuration from Karita et al. (2019b) was adopted. Both models shared the same data preparation process as the E2E-Conformer model. Table 3 compares different end-to-end ASR architectures on the YMC-EXP corpus. 8 The E2E-Conformer achieved the best results, with a significant WER improvement over the E2E-RNN and the E2E-Transformer models. The E2E-Conformer's WER on YMC-EXP(-CS) is slightly lower than that obtained for the whole YMC-EXP corpus, despite a significantly smaller training set in the YMC-EXP(-CS) corpus. Since the subset excludes Spanish words, "code-switching" may well be a problem to consider in ASR for endangered languages such as YM. (c) Comparison with Different Transcription Levels: In addition to comparing model architectures, we compared the impact of transcription levels on the ASR model. E2E-Conformer models with the same configurations were trained using both the surface and the underlying transcription forms, which are discussed in Section 2.3. We also trained separate RNN language models for fusion and unigram language models to extract word pieces for different transcription levels. Table 4 shows the E2E-Conformer results for both underlying and surface transcription levels. As introduced in Section 2.3, the surface form reduces several linguistic and phonological processes compared to the underlying practical form. The results indicate that the end-to-end system is able to automatically infer those morphological and phonological processes and maintain a consistently low error rate.
(d) Comparison with Different Corpus Sizes:
As introduced in Section 1, most ELs are considered low-resource for ASR purposes. To measure the impact of resource availability on ASR accuracy we trained the E2E-Conformer model on 10, 20, and 50 hours subsets of YMC-EXP. The results demonstrate the model performances over different sizes of resources. Table 5 shows the E2E-Conformer performances on different amounts of training data. It demonstrates how the model consumes data. As corpus size is incrementally increased, WER decreases significantly. It is apparent that the model still has the capacity to improve performance with more data. The result also indicates that our system can get reasonable performances from 50 hours of data. This would be an important guideline when we collect a new EL database.
(e) The Framework Generalizability: To test the end-to-end ASR systems' generalization ability, we conducted the same end-to-end training and test procedures on another endangered language: Highland Puebla Nahuatl (high1278; azz). This corpus is also open access under the same CC license. 9 It comprises 954 recordings that total 185 hours 22 minutes, including 120 hours transcribed data in ELAN and 65 hours still only in Transcriber and not used in ASR training. 10 Table 6 shows the performance of three different end-to-end ASR architectures on Highland Puebla Nahuatl. For this language the E2E-Conformer again offers better performances over the other models. Table 7 shows the E2E-Conformer performances on different amounts of training data for Highland Puebla Nahuatl. We can observe that 50-hour is a reasonable size for an EL, which is similar to the experiments in Table 5. These experiments indicate the general ability to consistently apply end-to-end ASR systems across ELs.
Novice Transcription Correction
Finally, this paper presents novice transcription correction (NTC) as a task for EL documentation. That is, in this experiment we explore not only the possibility of using ASR to enhance the accuracy of a YM novice transcription, but also the possibility of combining the novice transcription and ASR to achieve results that surpass those of either component alone. Below we first analyze patterns manifested in novice transcriptions. Next, we introduce two baselines that fuse ASR hypotheses and the novice transcription for the NTC task.
Novice Transcription Error
As mentioned in Section 1, transcriber shortages have been a severe challenge for EL documentation. Before 2019, only the native speaker linguist, Rey Castillo García, could accurately transcribe the segments and tones of YMC. To mitigate the YMC transcriber shortage, in 2019 Castillo began to train another speaker, Esteban Guadalupe Sierra. First, a computer course was designed to incrementally teach Guadalupe segmental and tonal phonology.
In the next stage, he was given YMC-NT corpus recordings to transcribe. Compared to the paired expert transcription, the novice achieved a CER of 6.0% on clean-dev, defined in Table 1. However, it is not feasible to spend many months training speakers with no literacy skills to acquire the transcription proficiency achieved by Guadalupe in our project. Moreover, even with a 6.0% CER, there are still enough errors so as to require significant annotation/correction by the expert, Castillo. The state-of-the-art ASR system (e.g., E2E-Conformer) shown in Table 3 gets an 8.2% CER on the cleandev set, more errors than the novice CER. So for YMC, ASR is still not a good enough substitute for a proficient novice. As Amith and Castillo worked with the novice, they saw a repetition of types of errors that they worked to correct by giving the novice exercises focused on these transcription shortcomings. The end-to-end ASR, however, has demonstrated a different pattern of errors. For example, it developed a fair understanding of the rules for suppleting tones, marked by parentheses around the suppleted tones. Rather than over-specify the NTC correction algorithm, we first analyzed the error-type distribution using the Clean-dev from the YMC-NT corpus, as shown in Table 8.
Novice-ASR Fusion
Rapid comparison of the types of errors for each transcription (novice and ASR) demonstrated consistent patterns and has led us to hypothesize that a fusion system might automatically correct many of these errors. Two baseline methods are examined for the fusion: a voting-based system (Fiscus, 1997) and a rule-based system.
The voting-based system follows the definition in (Fiscus, 1997) that combines hypotheses from different ASR models with novice transcription.
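ROVER proper builds a word transition network through incremental alignment before voting; as a much-reduced illustration of the voting step only (our sketch, with an assumed tie-breaking rule in favor of the novice transcription), suppose the hypotheses have already been aligned to a common length, with the empty string marking a gap:

from collections import Counter

def vote(aligned_hyps, prefer=0):
    # aligned_hyps: equal-length token sequences; "" marks a gap in that hypothesis.
    output = []
    for slot in zip(*aligned_hyps):
        counts = Counter(slot)
        best, best_count = counts.most_common(1)[0]
        # Tie-break in favor of a preferred source, e.g. the novice transcription.
        if counts[slot[prefer]] == best_count:
            best = slot[prefer]
        if best:
            output.append(best)
    return output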
The framework of rule-based fusion is shown in Figure 1. The rules are defined over different linguistic units: words, syllables, and characters. They assume a hierarchical alignment between the novice transcription and the ASR hypotheses, and they are applied to the transcription from the word to the syllable to the character level. The rules are developed based on continual evaluation of the novice's progress; thus they will be different, but discoverable, for other novice transcribers. Syllable Rules: If a novice syllable is tone initial, use the corresponding ASR syllable. If the novice and the ASR have identical segments but different tones, use the ASR tones. When an ASR syllable has CVV or CV'V, and its corresponding novice syllable has CV, 11 use the ASR syllable (CVV or CV'V). If the tone from either transcription system follows a consonant (except a stem-final n), use the other system's transcription. Character Rules: If the ASR output has a hyphen, equal sign, parentheses, or glottal stop which is absent from the novice transcription, then always trust the ASR and maintain the aforementioned symbols in the final transcription. We apply the edit distance (Wagner and Fischer, 1974) to find the alignment between the ASR model hypothesis {C_1, ..., C_n} and the novice transcription {C'_1, ..., C'_m}. L_I, L_D, and L_S are introduced in the dynamic program as the insertion, deletion, and substitution losses, respectively. In the naive setting, L_I and L_D are both set to 1, and L_S is set to 1 if C_i is different from C'_j and 0 otherwise. This setting is computationally efficient. However, it does not consider how the contents mismatch between C_i and C'_j. Therefore, we adopt a hierarchical dynamic alignment. In this method, the character alignment follows the naive setting, while L_S(C_i, C'_j) for syllable alignment is defined as the normalized character-level edit distance between C_i and C'_j, where |C_i| denotes the length of a syllable. Similarly, L_S(C_i, C'_j) for word alignment is defined based on the syllable alignment.
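A minimal sketch of one level of this alignment is given below. It is our own illustration, not the released implementation: normalizing the substitution cost by the longer of the two units is an assumption (the exact normalization in the displayed formula is not reproduced in this excerpt), and word-level alignment would reuse the same recursion with this syllable-level cost plugged in as the substitution loss.

def char_edit(a, b):
    # Unit-cost Levenshtein distance between two strings.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(a)][len(b)]

def align_units(asr_units, novice_units, l_ins=1.0, l_del=1.0):
    # Align two sequences of units (e.g., syllables). The substitution loss is the
    # character-level edit distance normalized by the longer unit (an assumption).
    def l_sub(a, b):
        return char_edit(a, b) / max(len(a), len(b), 1)
    n, m = len(asr_units), len(novice_units)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0], back[i][0] = i * l_del, "del"
    for j in range(1, m + 1):
        d[0][j], back[0][j] = j * l_ins, "ins"
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            choices = [
                (d[i - 1][j - 1] + l_sub(asr_units[i - 1], novice_units[j - 1]), "sub"),
                (d[i - 1][j] + l_del, "del"),
                (d[i][j - 1] + l_ins, "ins"),
            ]
            d[i][j], back[i][j] = min(choices)
    # Backtrace to recover aligned unit pairs; None marks a gap on one side.
    i, j, pairs = n, m, []
    while i > 0 or j > 0:
        op = back[i][j]
        if op == "sub":
            pairs.append((asr_units[i - 1], novice_units[j - 1]))
            i, j = i - 1, j - 1
        elif op == "del":
            pairs.append((asr_units[i - 1], None))
            i -= 1
        else:
            pairs.append((None, novice_units[j - 1]))
            j -= 1
    return d[n][m], list(reversed(pairs))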
Experimental Settings
The novice transcription, the E2E-Transformer model, and the E2E-Conformer model were considered as baselines for the NTC task. To evaluate the system for reduced training data, we also show our results of E2E-Conformer trained with a 50-hour subset. For the end-to-end models, we adopted the trained model from Section 3 with the same decoding set-ups. To test the effectiveness of the hierarchical dynamic alignment, we tested the data with two fusion systems, namely Fusion1 and Fusion2. The Fusion1 system used the naive settings of edit distance, while the Fusion2 system adopted the hierarchical dynamic alignment. Both fusion systems adopt rules defined in Section 4.2. Two configurations for voting-based methods were tested. The first "ROVER" combined three hypotheses (i.e., the E2E-Transformer, the E2E-Conformer, and the Novice). In contrast, the "ROVER-Fusion2" combined the Fusion2 system with the above three.
Results
As shown in Table 9, voting-based methods and rule-based methods all significantly reduce the novice errors for clean speech. 12 However, for the noise-test, the novice transcription is the most robust method. For overall results, the ROVER system (model I) has a lower WER, while the ROVER-Fusion2 system (model J) reaches a lower CER. Model J significantly reduces specific errors, including tone errors (25%), enclitic errors (50%), and parentheses errors (87.5%). In addition, models D, F, and H indicate that the system could still reduce clean-environment novice errors using ASR models trained with a 50-hour subset of the YMC-EXP corpus.
As we discussed in Section 4, novice and ASR transcriptions manifest distinct patterns of error and thus can be used to complement each other. Table 9 shows that our proposed rule-based and voting-based fusion methods can potentially eliminate the errors that come from the novice transcriber, and it can mitigate the transcriber shortage problems based on these fusion methods. However, we should note that a noisy recording condition would negatively affect a fusion approach as ASR does poorly under such conditions (>23% CER), and for practical purposes, the novice transcription alone (<8.5%) is much more accurate. In such conditions we should rely on the novice transcriber alone.
Conclusion and Future Work
This work presents an open-source endangered language corpus in Yoloxóchitl Mixtec and a comparative and reproducible study on various approaches to end-to-end ASR. We demonstrate that end-toend approaches are feasible and present comparable results over conventional HMM approaches, which require resources such as language lexicons not necessary with end-to-end ASR. Additionally, we propose novice transcription correction as a potential task for ASR in EL documentation. We examine two methods to approach this task. The first is a rule-based approach that uses hierarchical dynamic alignment and linguistic rules to perform novice-ASR hybridization. The second is a votingbased method that combines hypotheses from the novice and end-to-end ASR systems. Empirical studies on the YMC-NT corpus indicate that both methods significantly reduce the CER/WER of the novice transcription for clean speech.
The above discussion suggests that a useful approach to EL documentation using both human and computational (ASR) resources might focus on training each system (human and ASR) for particular transcription tasks. If we know from the start that ASR will be used to correct novice transcriptions in areas of difficulty, we could train an ASR system to maximize accuracy in those areas that challenge novice learning.
phone-based. The transcriptions are mapped to surface representations and then to phones (a total of 197 phones, as each tone for a given vowel, is a different phone). There are 22,465 total entries in the lexicon. The chain model is trained with a sequence-level objective function and operates with an output frame rate of 30 ms, three times longer than the previous standard. The longer frame rate increases decoding speed, which in turn makes it possible to operate with a significantly deeper DNN architecture for acoustic modeling. The best results were achieved with a neural network based on the ResNet architecture (Szegedy et al., 2017). This consists of an initial layer for Linear Discriminative Analysis (LDA) transformation and subsequent alternating 160-dimensional bottleneck layers, adding up to 45 layers in total. The DNN acoustic model is then compiled with a 4-gram language model into a weighted finite-state transducer for word sequence decoding. | 2021-01-27T02:16:19.982Z | 2021-01-26T00:00:00.000 | {
"year": 2021,
"sha1": "8c4d1e81c277f71cd9e3c9a0af356203c7948dca",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2021.eacl-main.96.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "3264e1e756cf5e72631d69db1ce64f9149df97bc",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
17420140 | pes2o/s2orc | v3-fos-license | Increased nuclear stiffness via FAK-ERK1/2 signaling is necessary for synthetic mechano-growth factor E peptide-induced tenocyte migration
We have previously reported that a synthetic mechano-growth factor (MGF) C-terminal E-domain with 25 amino acids (MGF-C25E) promotes rat tenocyte migration through the FAK-ERK1/2 signaling pathway. However, the role of the nucleus in MGF-C25E-promoted tenocyte migration and the molecular mechanisms involved remain unclear. In this study, we demonstrate that MGF-C25E increases the Young’s modulus of tenocytes through the FAK-ERK1/2 signaling pathway. This increase is not accompanied by an obvious change in the expression of Lamin A/C but is accompanied by significant chromatin condensation, indicating that MGF-C25E-induced chromatin condensation may contribute to the increased nuclear stiffness. Moreover, DNA methylation is observed in MGF-C25E-treated tenocytes. Inhibition of DNA methylation suppresses the elevation in chromatin condensation, in nuclear stiffness, and in tenocyte migration induced by MGF-C25E. The inhibition of the focal adhesion kinase (FAK) or extracellular signal regulated kinase 1/2 (ERK1/2) signals represses MGF-C25E-promoted DNA methylation. It also abolishes chromatin condensation, nuclear stiffness, and cell migration. Taken together, our results suggest that MGF-C25E promotes tenocyte migration by increasing nuclear stiffness via the FAK-ERK1/2 signaling pathway. This provides strong evidence for the role of nuclear mechanics in tenocyte migration and new insight into the molecular mechanisms of MGF-promoted tenocyte migration.
domain 6,7 . MGF has been shown to activate satellite cells in muscles, resulting in hypertrophy or regeneration. It has also been shown to function as a neuroprotectant in brain ischemia [7][8][9] . Moreover, previous studies demonstrated that MGF promotes bone-defect healing and induces more blood vessels in bone regeneration around the defective areas 6,10 . However, it remains unclear whether MGF has the potential to accelerate tendon repair.
The important role of cell movement in multiple biological processes such as embryonic development, immune response, wound healing and tissue renewal makes it one of the most fundamental cellular activities. In the early stages of tendon healing, tenocytes gradually move to the wound and proliferate while secreting collagens and glycoproteins for regeneration 11 . Cell movement is a complex process requiring motor proteins and coordinated structural changes in multiple cellular components 12,13 . Research about cell movement has predominantly focused on the cytoskeleton, adhesion complexes and signaling molecules. Recently it has been found that nuclear shape, size, stiffness and plasticity may play an important role in cell movement 14,15 .
Cell movement can lead to nuclear changes at three levels: transcription of specific genes, the shape of the nucleus and the localization of the nucleus within the cell 16 . Transcription of genes encoding proteins, such as plasma membrane receptors, focal adhesion proteins and cytoskeletal proteins, are involved in the migration process 17,18 . Generally, cell movement can lead to translocation of the cell body across two-dimensional (2D) surfaces, through basement membranes or three-dimensional (3D) interstitial tissues 19 . During migration through 3D tissues, the stiffness and density of the surrounding extracellular matrix (ECM) or the cells themselves present a physical challenge to the moving cell body. The cell movement through 3D tissues is strongly dependent on the deformation of the cells. The nucleus is the largest and stiffest organelle, so its size and relative stiffness can pose a major obstacle for cell migration through narrow openings in the ECM or inside layers of cells. However, cell movement is often, but not always, associated with changes in nuclear shape. In 2D surfaces, the cells often spread out significantly, resulting in more disk-shaped nuclei and a height of only a few micrometers 20 . Cell migration is associated with cytoskeleton-mediated relocation of the nucleus within the cells, and an increase in nuclear stiffness can enhance the ability of cytoskeletal elements to relocate the nucleus inside the cell 16 .
In our previous studies, we demonstrated that MGF-C25E promotes rat tenocyte migration and invasion via the FAK-ERK1/2 signaling pathway 21,22 . However, little is known about the role of the nucleus in MGF-C25E-promoted tenocyte migration and the molecular mechanisms involved. In this study, we aimed to explore the effect of MGF-C25E on nuclear mechanics of tenocytes and its roles and mechanisms in the process of MGF-C25E-promoted tenocyte migration.
MGF-C25E increases stiffness of the nucleus via the FAK-ERK1/2 signaling pathway. Nuclei
were isolated from MGF-C25E-treated tenocytes with or without 10 μM PF573228 (a selective inhibitor of FAK) or 50 μM PD98059 (a specific inhibitor of MEK). The Young's modulus of the nucleus was assessed by atomic force microscopy (AFM). The AFM results showed that the Young's modulus of tenocyte nuclei exposed to MGF-C25E was apparently higher than that of the control, showing that the nucleus was stiffer after exposure to MGF-C25E. Moreover, the inhibition of the FAK or ERK1/2 signal suppressed the MGF-C25E-induced increase in the Young's modulus (Fig. 2). These results demonstrate that MGF-C25E increases the stiffness of tenocyte nuclei via the FAK-ERK1/2 signaling pathway.
[Figure legend: (A) Exons 1 and 2 (black) serve as leader exons, either of which can be spliced to exon 3 and exon 4 (light gray). The mRNA variants with exon 4 spliced to exon 6 (white) are designated as IGF-1 Ea, while those variants with exon 4 spliced to exon 5 (gray) and exon 6 are designated as IGF-1 Eb in rat or IGF-1 Ec in human. Exon 4 spliced directly to exon 5 is designated as IGF-1 Eb, which exists only in humans. (B) The translation of the MGF mRNA sequence generates prepro-MGF. This is followed by the removal of the signal peptide and the liberation of MGF. A 49-bp insert of exon 5 in MGF mRNA results in E-domains distinct from those of the other variants.]
MGF-C25E has no effect on Lamin A/C expression. To determine whether the MGF-C25E promotes Lamin A/C expression, which contributes to regulation of nuclear stiffness and the involved signaling molecules, tenocytes were exposed to MGF-C25E for 24 h with or without FAK inhibitor or ERK 1/2 inhibitor. The expression of Lamin A/C was detected by qRT-PCR, western blot, and immunostaining. We found that MGF-C25E had no significant effect on Lamin A/C expression in tenocytes both at the mRNA (Fig. 3A) and protein levels ( Fig. 3B-E). Blockage of the FAK-ERK1/2 signaling pathway did not affect the expression of Lamin A/C (Fig. 3). These results suggest that Lamin A/C does not contribute to the MGF-C25E-induced increase in tenocyte nuclear stiffness.
MGF-C25E increases chromatin condensation via the FAK-ERK1/2 signaling pathway. Chromatin
condensation is another contributor to the stiffness of the nucleus. To determine whether MGF-C25E affects chromatin condensation, we adapted the well-established chromatin DNaseI sensitivity assay to test chromatin condensation. Measurement of the relative nuclear size is a reliable indicator of DNaseI sensitivity and can therefore be used to assess the relative levels of chromatin condensation 23,24 . As shown in Fig. 4A, the mean size of the nuclei without DNaseI treatment was approximately 162 μ m 2 . When the cells were treated with 50 U/mL DNaseI, the mean size of the nuclei was decreased to approximately 140 μ m 2 . An increase in the DNaseI concentration (100 U/mL) further reduced the nuclear size, and the mean size of nuclei was only approximately 118 μ m 2 (Fig. 4A,B).
To further verify that the in situ DNaseI-sensitivity assay was indeed adequate for assessing the relative levels of chromatin condensation, we compared the DNaseI sensitivity of cells treated with the histone deacetylase (HDAC) inhibitor trichostatin A (TSA). TSA is known to cause decondensation of chromatin 25 . As shown in Fig. 4C,D, the size of the nuclei in TSA-treated cells significantly decreased when compared to that of the control cells. The accelerated decrease in the nuclear size of TSA-treated cells indicated that DNaseI digested the chromatin more rapidly. Moreover, TSA significantly decreased the Young's modulus of tenocyte nuclei, resulting in softer nuclei in comparison with the non-TSA-treated nuclei (Fig. 4E). Treatment with TSA also caused notably lower wound closure of tenocytes compared with the control (without TSA treatment) (Fig. 4F,G). These results indicate that TSA-induced decondensation of chromatin results in reduced nuclear stiffness and decreased migration of tenocytes.
Next, the in situ DNaseI-sensitivity assay was used to compare the chromatin condensation state in MGF-C25E-treated tenoctyes with or without PF573228 or PD98059. As shown in Fig. 5, the mean size of the nuclei of MGF-C25E-treated cells significantly increased compared to that of the control cells. The inhibition of the FAK or ERK1/2 signal repressed the MGF-C25E-induced increase in the nuclear size (Fig. 5A, B). These results demonstrate that MGF-C25E promotes chromatin condensation through the FAK-ERK1/2 signaling pathway in tenocytes.
MGF-C25E elevates DNA methylation via the FAK-ERK1/2 signaling pathway. DNA methylation
has been implicated in chromatin condensation and nuclear organization. To reveal the possible role of DNA methylation in MGF-C25E-promoted chromatin condensation, tenocytes were exposed to MGF-C25E for 24 h with or without PF573228 or PD98059, and the levels of DNA methylation were detected by immunostaining. As shown in Fig. 6, the DNA methylation level increased significantly in MGF-C25E-treated tenocytes. The inhibition of the FAK or ERK1/2 signals suppressed MGF-C25E-induced DNA methylation (Fig. 6A,B), suggesting that MGF-C25E induces DNA methylation via the FAK-ERK1/2 signaling pathway in tenocytes. DNA methylation increases chromatin condensation, nuclear stiffness, and tenocyte migration. To further examine the role of MGF-C25E-induced DNA methylation in chromatin condensation, in nuclear stiffness, and in migration of tenocytes, DNA methylation was inhibited using the methylase inhibitor 5'-deoxy-5'-methylthioadenosine (MTA). MTA (0.05 μ M) suppressed the elevation in the DNA methylation levels induced by MGF-C25E (Fig. 7A,B). MTT analysis showed that MTA at a concentration of 0.05 μ M had no effect on tenocyte viability (Fig. 7C). Furthermore, inhibition of DNA methylation also inhibited MGF-C25E-induced chromatin condensation (Fig. 8A,B). AFM assays showed that blockade of DNA methylation abolished the MGF-C25E-promoted nuclear stiffness (Fig. 8C). Wound healing assays further showed that blockade of DNA methylation abrogated the MGF-C25E-promoted migration of tenocytes (Fig. 8D,E). These results demonstrate that MGF-C25E-induced DNA methylation plays a crucial role in regulating chromatin condensation, nuclear stiffness, and migration in tenocytes.
Discussion
We have previously demonstrated that MGF-C25E promotes the migratory potential of in vitro cultured tenocytes via the activation of the FAK-ERK1/2 signaling pathway. In this study, we report that MGF-C25E promotes tenocyte migration by inducing DNA methylation and by increasing chromatin condensation and nuclear stiffness via the FAK-ERK1/2 signaling pathway. This new finding is a close link for MGF-C25E signal transduction from the cytoplasm into the nucleus of tenocytes, and it uncovers the important roles of nuclear mechanics in MGF-C25E-promoted tenocyte migration.
At present, there is disagreement as to the effect of nuclear stiffness on cell migration. Gerlitz and Bustin hold that the cells with stiffer nuclei are easier to move 16,23 . The stiffness of the nucleus may affect the outcome of forces applied to it by the cytoskeleton. Forces applied to a highly elastic nucleus would be dispersed into many directions, making it harder to push the nucleus and control its migration towards a specific cellular location. Conversely, forces applied to a stiffer nucleus would stay more focused, making it easier to regulate its shape and direction of migration 16 . However, some studies suggest that a more deformable nucleus may confer a significant advantage for cell migration 26,27 . Remarkably large nuclear deformations are observed during the migration of stem cell-like progenitor cells in brain tissue 28 . Discher et al. considered that stem cell nuclei, which are more plastic than those of fully differentiated cells, are highly contractile and can generate significant cytoskeletal stress, representing a potential driving force for cell motility 27,29 . In our study, using a scratch wound assay, we found that MGF-C25E can promote tenocyte migration by increasing nuclear stiffness in 2D surfaces. This result agrees with Gerlitz and Bustin's finding 24 , suggesting that a stiffer nucleus is better for a cell to migrate in 2D surfaces.
The nuclear lamina is usually considered to be a major contributor to the mechanical properties of the nucleus 30 . In our study, MGF-C25E clearly increases nuclear stiffness; however, the surprising finding of this study is that there is no significant change in Lamin A/C expression in MGF-C25E-treated tenocytes, suggesting that MGF-C25E-increased nuclear stiffness is independent of Lamin A/C. The nuclear lamina assembles underneath the nuclear envelope and consists of A-type lamins (mainly lamins A and C), B-type lamins (lamins B1 and B2 in somatic cells), and lamin-associated membrane proteins, which connect lamins to both intranuclear chromatin and the cytoskeleton 31 . B-type lamins are essential for viability but have no effect on nuclear stiffness 30 ; A-type lamins form thick layers that provide rigidity 27,32,33 . However, Discher et al. demonstrated that the stiffness of the nuclear lamina is a barrier to 3D migration, but does not affect 2D migration 27 . Moreover, the stiffness of the nucleus is defined by a number of factors, such as the nuclear membranes, nuclear lamina and chromatin structure 23,32,34 . Discher et al. demonstrated that changes in the global condensation level of chromatin have a significantly larger effect on the stiffness of the nucleus than changes in Lamin A/C 28 . In our study, MGF-C25E did not affect Lamin A/C expression, but it clearly promoted chromatin condensation, indicating the key contribution of MGF-C25E-promoted chromatin condensation to nuclear stiffness.
In the mammalian genome, DNA methylation is an epigenetic mechanism that involves a change of chromatin structure, DNA conformation, DNA stability and the mode of interaction between DNA and protein 35 . There has been recent interest in characterizing links between DNA methylation and chromatin condensation. Gerlitz and Bustin have proven that DNA methylation promotes chromatin conformation in murine B16-F1 cells 23 . In our study, we found that MGF-C25E promotes DNA methylation. We also found that inhibition of DNA methylation inhibits the elevation of chromatin condensation induced by MGF-C25E. Moreover, AFM assays showed that blockade of DNA methylation inhibits the MGF-C25E-increased nuclear stiffness. Our study provides evidence of a link for DNA methylation to chromatin condensation and nuclear mechanics.
The FAK-ERK1/2 signaling pathway plays an important role in numerous biological behaviors of a cell. ERK1/2 is a member of the family of mitogen-activated protein kinases (MAPK) that are activated in response to the signals originating from integrins and growth factor receptors. ERK1/2 activation has been implicated as a regulatory 38 . Moreover, other studies showed that DNA methyltransferases (Dnmts) could be regulated by the ERK1/2 signal 39,40 . DNA methylation is catalyzed by a family of Dnmts that transfer a methyl group from S-adenosyl methionine (SAM) to the fifth carbon of a cytosine residue to form 5mC 35 . FAK is a focal adhesion-associated protein kinase involved in cellular adhesion and spreading processes. Cells lacking the tyrosine kinase FAK have larger and more numerous adhesions and migrate poorly 41 . Although there is no direct study showing that the FAK signal can regulate DNA methylation or Dnmts, ERK1/2 is a well-known downstream effector of FAK, and some studies have shown that the FAK signal plays a role in cells through the ERK1/2 signal 42,43 . Moreover, FAK is a primary signaling mediator of dynamic changes in actin cytoskeletal reorganization 44 . Nuclear shaping by the cytoskeleton has been described in several systems 45,46 . We previously demonstrated that the FAK-ERK1/2 signaling pathway is necessary for MGF-C25E-promoted tenocyte migration. In the current study, we prove that MGF-C25E induces DNA methylation, increases chromatin condensation, enhances nuclear stiffness, and promotes migration in tenocytes via the FAK-ERK1/2 signaling pathway. These findings clarify a mechanism for MGF-C25E signal transduction from the cytoplasm into the nucleus in MGF-C25E-promoted tenocyte migration.
Conclusion
In this study, we have demonstrated that MGF-C25E promotes tenocyte migration by increasing nuclear stiffness through DNA methylation and chromatin condensation via the FAK-ERK1/2 signaling pathway. Our results identify the molecular mechanisms involved in MGF-C25E-triggered cellular signaling in tenocyte migration, and also provide strong evidence for the role of nuclear mechanics during cell migration. These findings are helpful towards a better understanding of the role of MGF in tendon wound healing, and they may serve as the basis for a potential therapeutic strategy in tissue repair and regeneration of the injured tendon.
Materials and Methods
Primary culture of rat tenocytes. Male Sprague-Dawley rats (Laboratory Animal Center, the Third Military Medical University, China) weighing 150-200 g were used as the source for tenocytes in this study. All of the procedures were approved by the Chongqing Science and Technology Commission, Chongqing, China. The methods were carried out in accordance with the approved guidelines. The tenocytes were harvested from the Achilles tendons of rats through aseptic procedures. Briefly, each tendon tissue was cut into pieces that were approximately 1.5-2.0 mm 3 in volume, which were separately placed into culture flasks. Then, 3 mL of Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS, Hyclone, Logan, UT, USA), 100 U/mL penicillin, and 100 μ g/mL streptomycin was added to the flasks. The culture flask was then placed upside down in an incubator with 95% air and 5% CO 2 at 37 °C for 12 h to promote the attachment of the tissues and was then inverted. After the tenocytes migrated from the explants, the culture medium was replaced every three days. After reaching 80-85% confluence, the tenocytes were subcultured with 0.25% trypsin-0.02% EDTA at a dilution rate of 1:2. Cells from passages 2 or 3 were used for the experiments.
Scratch wound migration assay. The migration was assessed through a wound healing assay. The tenocytes were seeded at a density of 4 × 10^4 cells/well in a 24-well culture plate. After reaching 80-85% confluence, the tenocytes were synchronized by serum starvation for 12 h. The confluent monolayer was then scratched with a 10-μL pipet tip and washed twice with PBS. The wells were filled with 1 mL of serum-free DMEM with or without MGF-C25E (Phoenix Pharmaceuticals, Burlingame, USA). To test whether chromatin decompaction inhibited the rate of cell migration, we examined the effect of the levels of DNA methylation on the rate of cell migration. The cells were incubated with the histone deacetylase (HDAC) inhibitor trichostatin A (TSA) (Beyotime, Jiangsu, China) or the methylase inhibitor 5'-deoxy-5'-methylthioadenosine (MTA) (Sigma-Aldrich, St. Louis, MO, USA) for 60 min at 37 °C before scratching. Images of the wounds were acquired at 0, 24, and 48 h through a microscope (Olympus, Japan). Using Image J, the levels of wound closure were assessed by calculating the ratio of the closure area to the initial wound area as follows: Wn = (A0 − An)/A0 × 100%, where Wn represents the percentage of wound closure, An represents the residual wound area at the measured time point (h), and A0 represents the initial wound area. Isolation of the nucleus. Nuclei were isolated from the tenocytes as previously reported 47 . Briefly, confluent cells were washed with PBS (Dulbecco's phosphate-buffered saline without Ca 2+ or Mg 2+ , Sigma-Aldrich, St. Louis, MO, USA) and treated with an ice-cold low-ionic-strength extraction solution consisting of 2.5 mM triethanolamine (Sigma-Aldrich, St. Louis, MO, USA), 1 mg/mL leupeptin (Sigma-Aldrich, St. Louis, MO, USA), and 1 mg/mL pepstatin (Sigma-Aldrich, St. Louis, MO, USA) in distilled water for 10 min. The cells were then treated with 0.05% NP-40 in PBS (hereafter, containing 1 mg/mL leupeptin and 1 mg/mL pepstatin and kept at 4 °C) for 1 min. The dorsal side of many cells was broken away, and the nuclei were popped out by shaking the dish gently under a phase-contrast microscope. A small volume (approximately 1 mL) of the supernatants containing the isolated nuclei was aspirated with a pipette and suspended in 10 mL PBS. The solution was centrifuged at 400 × g for 10 min at 4 °C with two changes of PBS. The isolated nuclei were seeded on sterilized 24-mm coverslips coated with poly D-lysine (Sigma-Aldrich, St. Louis, MO, USA) in a six-well culture plate. The culture plate was then placed in an incubator with 95% air and 5% CO 2 at 37 °C for 3 h to favor the attachment of the nuclei.
Atomic force microscopy (AFM) analysis. The stiffness of isolated nuclei was measured using an AFM (JPK, Berlin, Germany) mounted on an inverted microscope (Leica, Solms, Germany) at 37 °C. A soft silicon nitride quadratic pyramid tip (0.02 N/m) was used at an angle of 17.5°. A single nucleus with normal morphology was identified using an optical microscope, and the AFM cantilever probe was positioned on the nucleus region. The cantilever was descended toward the nucleus at a ramp speed of 3 μ m/s. The force-distance curves were collected and analyzed using the JPK Imaging Process Software to obtain the Young's modulus (E).
RNA preparation and Real-time Polymerase Chain Reaction (qRT-PCR).
Total RNA was isolated from tenocytes using the RNeasy minikit (BioTeke, Beijing, China) according to the instructions of the manufacturer. A 500-ng aliquot of each sample of total RNA was reverse-transcribed in 20 μL using a Reverse Transcriptase kit (TaKaRa, Japan). Levels of the Lamin A/C gene were determined by real-time PCR (qRT-PCR).

In situ DNaseI-sensitivity assay. The tenocytes were seeded at a density of 4 × 10⁴ cells/well in a 24-well culture plate. After reaching 80-85% confluence, the cells were synchronized by serum starvation for 12 h and treated with serum-free DMEM with or without MGF-C25E for 24 h. The cells were then washed once with PBS and lysed in CSK buffer supplemented with 0.2% Triton X-100 and protease inhibitor cocktail (Sigma-Aldrich, St. Louis, MO, USA) at room temperature for 5 minutes. Next, the samples were incubated in CSK buffer supplemented with 0.1% Triton X-100, protease inhibitor cocktail and DNaseI (Sigma-Aldrich, St. Louis, MO, USA) at the indicated concentrations at room temperature for 20 minutes. The remaining DNA was stained using DAPI at a concentration of 1 μg/mL in CSK buffer supplemented with 125 mM ammonium sulfate and protease inhibitor cocktail for 10 minutes. Following fixation in methanol at −20 °C for 5 minutes, the cells were washed in CSK buffer and photographed using a fluorescence microscope (Olympus, Japan). The area of the nucleus was measured using ImageJ software.
MTT assay. The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide) assay was performed to measure the proliferation of tenocytes. Tenocytes were seeded at a density of 3 × 10³ cells/well in a 96-well culture plate. After adherence, tenocytes were serum-starved overnight and then treated with MTA. Then, 5 mg/mL MTT reagent (Sigma-Aldrich, St. Louis, MO, USA) was added into each well at the indicated treatment time. After incubation at 37 °C for 4 h, the medium was removed and the formazan crystals in the cells were solubilized with 200 μL of dimethyl sulfoxide (DMSO). The formazan was then quantified using a microplate reader (Model 680, Bio-Rad, Hercules, CA, USA).
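The excerpt states that formazan absorbance was read on a microplate reader but does not describe the downstream normalization. A common convention, assumed here purely for illustration, is to subtract a blank well and express each treated well relative to the mean of the untreated control; the readings below are hypothetical.

```python
# Minimal sketch, assuming a conventional MTT normalization (blank subtraction,
# then viability relative to the untreated control); this workflow is not
# specified in the excerpt and the numbers below are hypothetical.
import numpy as np

def mtt_viability(sample_od, control_od, blank_od):
    """Return percent viability of each sample well relative to the control mean."""
    sample = np.asarray(sample_od, dtype=float) - blank_od
    control = np.mean(np.asarray(control_od, dtype=float)) - blank_od
    return sample / control * 100.0

if __name__ == "__main__":
    blank = 0.06                       # medium + MTT, no cells (hypothetical)
    control = [0.82, 0.79, 0.85]       # untreated tenocytes, OD readings (hypothetical)
    mta_treated = [0.55, 0.58, 0.52]   # MTA-treated wells (hypothetical)
    v = mtt_viability(mta_treated, control, blank)
    print("viability per well (%):", np.round(v, 1), "| mean:", round(float(v.mean()), 1))
```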
"year": 2016,
"sha1": "b6b30c47e5663c9a395a926ae1e61566e4f15355",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep18809.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b6b30c47e5663c9a395a926ae1e61566e4f15355",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Histology of Interstitial Lung Disease in Common Variable Immune Deficiency
Interstitial lung disease (ILD) is an important non-infectious complication in several primary immune deficiencies. In common variable immune deficiency (CVID) it is associated with complex clinical phenotypes and adverse outcomes. The histology of ILD in CVID is heterogeneous and mixed patterns are frequently observed within a single biopsy, including non-necrotising granulomatous inflammation, lymphoid interstitial pneumonitis, lymphoid hyperplasia, follicular bronchiolitis, organizing pneumonia, and interstitial fibrosis; ILD has to be differentiated from lymphoma. The term granulomatous-lymphocytic interstitial lung disease (GLILD), coined to describe the histopathological findings within the lungs of patients with CVID with or without multisystem granulomata, is somewhat controversial as pulmonary granulomata are not always present on histology and the nature of infiltrating lymphocytes is variable. In this mini review we summarize the literature on the histology of CVID-related ILD and discuss some of the factors that may contribute to the inter- and intra- patient variability in the histological patterns reported. Finally, we highlight areas for future development. In particular, there is a need for standardization of histological assessments and reporting, together with a better understanding of the immunopathogenesis of CVID-related ILD to resolve the apparent heterogeneity of ILD in this setting and guide the selection of rational targeted therapies in different patients.
INTRODUCTION
Common variable immune deficiency (CVID) is the most common of the primary immunodeficiency (PID) syndromes, with a prevalence of between 1 in 25,000 and 1 in 50,000, depending on the population (1,2). It is characterized by low serum levels of IgG, IgA, and/or IgM, and poor specific antibody production (3). There is no definitive diagnostic test, so diagnosis requires the exclusion of secondary hypogammaglobulinaemia, combined immune defects, and, where appropriate, Mendelian disorders (4,5). Up to 70% of patients suffer from variable noninfectious complications reflecting broader immune dysregulation, including autoimmunity, most commonly autoimmune cytopaenias; lymphocytic infiltration and/or granulomatous inflammation, which can affect the lungs, gastrointestinal tract, spleen, skin or liver; or malignancy, in particular lymphoma (6,7). Importantly, while bacterial infections are significantly reduced by adequate replacement therapeutic IgG, disease-related complications are not, and these complications are associated with substantially increased mortality (7)(8)(9).
INTERSTITIAL LUNG DISEASE IN COMMON VARIABLE IMMUNE DEFICIENCY
Clinical Significance of CVID-Related ILD

ILD is among the more frequent non-infectious complications of CVID, reported in 15%-60% of patients (7,9,14,(21)(22)(23). Clinical symptoms and high-resolution computed tomography (HRCT) findings of ILD can appear before or after CVID diagnosis (24,25). The pathogenesis of CVID-related ILD is presumed to be unrelated to bacterial infections because it can be seen in the absence of bronchiectasis and is not significantly associated with a history of pneumonia (21). Patients with ILD have distinct clinical and immunological phenotypes in keeping with immune dysregulation, in contrast to those without ILD or those with bronchiectasis alone (6,9,14,16,21,26,27). Furthermore, there is no current histological or molecular evidence for chronic bacterial, EBV or CMV viral infections as triggers for inflammation (16,(28)(29)(30), though granulomas in other PIDs, such as those with DNA repair defects, show evidence of vaccine-derived rubella virus (31). Other related complications, including splenomegaly, autoimmune cytopaenias, persistent lymphadenopathy and lymphoproliferation, but not necessarily granulomata, occur more frequently in patients with CVID-related ILD, supporting at least a role for intrinsic immune dysregulation driving these varied features (6,9,16,21,27,32,33).
Since CVID-related ILD causes significant morbidity, can be progressive and contributes to mortality, there is an urgent need for effective treatments (8,9,34,35). Because the mechanism(s) underlying CVID ILD have not been elucidated, immunosuppressive treatments have been tried with varying success, including corticosteroids, ciclosporin, methotrexate, sirolimus, cyclophosphamide, hydroxychloroquine, anti-TNF agents, mycophenolate mofetil, abatacept, rituximab and azathioprine (16,34,(36)(37)(38). Corticosteroids are often used first-line; however, response may be short-lived or incomplete, there are significant side effects associated with protracted use, and a proportion of patients are refractory (16,34,36,39). Success with rituximab, both in combination with azathioprine or mycophenolate mofetil and as monotherapy, has been reported, although controlled trials and long-term outcome data are lacking (40)(41)(42)(43). Elevated levels of B-cell activating factor (BAFF), a cytokine that promotes the maturation and survival of B-cells, within the serum and lungs of patients with CVID-related ILD drive B-cell hyperplasia and may account for disease progression in a small proportion of patients (15), with invasive B-cells in inappropriate germinal centers (28,44).
Nomenclature
Various terminologies are used for CVID-related ILD, reflecting a lack of consensus regarding the naming of this complication and its heterogeneous nature (45). Lymphoid interstitial pneumonitis was first reported in patients with antibody deficiency in 1973 (46). Since then, various histopathological entities have been reported within lung biopsies of CVID ILD patients, from those caused by polyclonal lymphocytic inflammation to well-formed granulomata, organizing pneumonia, or pulmonary fibrosis, often with mixed pathology within individual patient biopsies (7,9,16,27,33,35,44). "Granulomatous-lymphocytic interstitial lung disease" (GLILD), first proposed in 2004, is often used as an overarching term to describe CVID ILD with lymphocytic infiltrates and/or granulomata (9,45). However, the accuracy of this term has been called into question. Since not all patients have pulmonary granulomata, it does not fully capture the heterogeneity of the histopathology and similar histological patterns fulfilling a GLILD diagnosis are found in non-CVID PIDs (33,47).
Investigations for CVID-Related ILD
Non-invasive investigations for CVID-related ILD include elevated serum IgM, decreased class-switched memory B-cells and absolute/relative numerical abnormalities of T-cell populations (15,16,34,35,48). Alongside rising IgM levels, BAFF, soluble IL-2 receptor and b2microgloblin have also been proposed as serum biomarkers for disease activity (15,34,49). Lung function tests, particularly the diffusion capacity for carbon monoxide (DLCO), are useful in monitoring for disease progression and response to treatment, but can lack the sensitivity required for diagnosis, particularly early in the disease course (14,28,34,35,37). HRCT is highly sensitive for the detection of CVID ILD, including at an early stage before symptoms or abnormal pulmonary function have developed (14,33,34). Radiographic findings are mixed and include lymphadenopathy, ground glass opacification, nodularity, septal thickening and consolidation (21,33,50). The use of CT combined with positron emission technology (PET) has also been reported as useful to identify sites of active disease, guide biopsy sampling, and monitor response to treatment (41). In selected cases, particularly, but not restricted to, pediatric presentations, genetic testing may be warranted. For example, patients with mutations in CTLA4, LRBA, TACI, KMT2D, XIAP, RAG1, and NFKB1 have been found within so called "CVID" cohorts, and ILD is a common feature of other monogenic PIDs (34,39,(51)(52)(53)(54)(55)(56)(57). A molecular diagnosis enables other therapeutic approaches such as CTLA-4 fusion proteins abatacept and belatacept for the inflammatory associations of CTLA-4 and LRBA deficiency (58,59). Invasive investigations include assessment of bronchoalveolar lavage fluid for infection and lymphocyte phenotyping, often used to avoid possible complications of biopsy (60), or biopsy of lung tissue under imaging for histopathological assessment.
Importance of Histopathological Assessment of Lung Tissue
Histological assessment of affected lung tissue is essential if features of ILD are present on HRCT. Imaging alone is not sufficient because radiographic patterns of parenchymal lung disease do not correlate with pathological features (33). It has been suggested that tissue from more accessible organs could be used in lieu of lung biopsy (34); however, patients with granulomata at other sites do not necessarily display granulomata within areas of ILD, indicating that other organs do not necessarily serve as a proxy for the lung (33). Importantly, histological assessment contributes to the exclusion of differential diagnoses, including infection and lymphoma, and can provide prognostic information, since interstitial fibrosis has been associated with poorer outcomes (7,(17)(18)(19)(20)33). Currently, it is common practice to subject lung biopsy specimens to hematoxylin and eosin (H&E) staining, immunohistochemical staining for CD3, CD4 and CD20/19, and staining for EBV and CMV infection (37,44). Understanding the pathological processes at play and the phenotype of infiltrating immune cells can help rationalize the selection of therapeutics used for CVID ILD (40)(41)(42)(43).
We have reviewed the published literature of large series (>10 cases) for detailed histological findings of CVID ILD, the most recent being Larsen et al. (46). It is not always possible to know which patients were included in previous reports so only the most recent from each center is used unless marked ( Table 1). Variations including the methods used for both biopsy and reporting are discussed in Section 4.
HISTOLOGICAL PATTERNS OF ILD IN CVID
The histological abnormalities reported in CVID ILD vary and overlap extensively. Similar patterns can also be found in numerous other lung diseases, making diagnosis challenging (44). Using a similar structure as Rao et al. (44), we summarize the commonly reported lung biopsy findings, each of which we discuss in turn ( Table 1).
Granulomata
The granulomata reported in CVID ILD can vary from poorly- to well-circumscribed, with an apparent predilection for the former (28,33,44). Non-infectious CVID granulomatous lung disease shares some histological features with sarcoidosis and hypersensitivity pneumonitis; thus, clinical and radiological correlation is important in distinguishing these conditions (44,62). "Poorly-formed granulomata" have been found within areas of pulmonary lymphoid hyperplasia and are difficult to define, as such assessments are very subjective; additionally, granulomata can be found throughout the lung parenchyma (28,44). It is worth reemphasizing that granulomata are not reported in all cases of CVID-related ILD, with frequencies ranging from 0% to 94% depending on the individual study (Table 1) (7,33,44,47). This suggests that there may be more than one pathological process in CVID-ILD (33,47) and that the generalized use of the overarching term "GLILD" to refer to all CVID-related ILD can be misleading.
Pulmonary Lymphoid Hyperplasia
Lymphoid proliferation has been designated as the "cardinal" feature of CVID ILD, and different patterns of pulmonary lymphoid hyperplasia (PLH) have been described, including follicular bronchiolitis, lymphocytic interstitial pneumonitis (LIP), lymphocytic infiltrates, and nodular lymphoid hyperplasia (28,38,40,44,47). In one case series where severity was assessed, PLH tended to be toward the moderate to severe end of the spectrum, with peribronchiolar and interstitial lymphocytic inflammation (44). These patterns often occur together and are rarely found in isolation (33,44). Follicular bronchiolitis and/or LIP are found in around half of the cases reviewed (Table 1), and this is also in keeping with a recent review where 20/46 patients had some form of lymphoid infiltration, though not always specified (7).

Notes to Table 1: where publications reported findings in differing terms, efforts were made to group similar findings on the basis of similar histological terms; NS, detail for a given finding not specified; *, the inclusion of previously published cases in the paper could not be completely excluded; **, reported on CT, not on histology.
Organizing Pneumonia
Organizing pneumonia (OP), intra-alveolar buds of granulation tissue with myofibroblasts and connective tissue, is reported in a substantial number of histological specimens, although to varying degrees between studies (Table 1). Cryptogenic organizing pneumonia (COP) is also found in CVID patients and is an important differential diagnosis when OP is the predominant finding on biopsy (40,44). However, Rao et al. demonstrated the potential for misdiagnosis of CVID ILD when isolated COP was found on limited biopsy samples obtained by bronchoscopy. OP can have many aetiologies. Larsen et al. reported that in their cohort OP was accompanied by a "dense lymphoid infiltrate", which was not seen in biopsies from other causes of OP (47). Therefore, on the basis of their cohort of 34 patients with CVID and 4 with IgAD, these authors propose that the combination of these two findings should suggest CVID or IgA deficiency rather than another etiology.
The lack of overlap between OP and pulmonary fibrosis (1/19 cases) in our cases might indicate separate pathological entities; however, significant overlap was described by Rao et al. (11/16 cases) (33,44), who suggested evolving pathology.
Pulmonary Fibrosis
Pulmonary fibrosis is described in a quarter of CVID ILD cases ( Table 1); however, similar to OP, one case series accounts for most of these cases (44), where the majority of patients had some degree of fibrosis. In contrast, Ho et al. found 6.3% of cases where "extensive pulmonary fibrosis" was the "predominant" finding at the time of biopsy; however, it was not reported whether it was a feature in other biopsies to a lesser degree (7).
Interstitial fibrosis in CVID ILD, together with lymphoproliferation, may resemble some of the patterns of idiopathic interstitial pneumonia, particularly when fibrosis is significant (44). Only two studies looked specifically for architectural remodeling, and one of these found it to be associated with significant interstitial fibrosis (33,44). The presence of fibrosis is a poor prognostic factor; prospective clinical studies are needed to justify earlier treatment (33).
Immunohistochemistry
Immunohistochemical staining of the lymphocytic infiltrate has produced discordant findings in the cases where it has been performed. CD20+ B-cells were found in a small proportion of cases, in follicles with T-cells circumscribing them, although T-cells are also reported more diffusely and in areas without B-cells (28,33,44). Rao et al. found a predominance of CD4+ T-cells within lymphoid infiltrates and also observed the presence of B-cell follicles surrounded by CD4+ T-cells (44). We recently reported a predominance of T-cells in most cases (Figure 1A), either CD4+ or CD8+; only one of six had germinal centers within B-cell follicles (Figure 1B) (33). Maglione et al. reported actively proliferating germinal centers in some of their patients with B-cell follicles (28). It is important to differentiate these from pulmonary MALToma, as found in two patients in the Oxford series (33).
We suggested that since the predominant T-cells were either CD4+ or CD8+, this pointed to different pathological entities (33). Chase et al. hypothesized that the inflammatory infiltrate, including B- and T-cells, might contribute to progressive ILD and pulmonary fibrosis, something that therapy directed against B- and T-cells might possibly prevent (40). Similarly, Maglione et al. suggested B-cells may be responsible for leukocyte accumulation in their role as antigen-presenting cells and producers of chemokines and/or cytokines, making them a therapeutic target (28).
ADDRESSING THE HETEROGENEITY OF HISTOPATHOLOGICAL FINDINGS IN CVID-RELATED ILD
There is a large amount of histopathological heterogeneity in biopsies from CVID-related ILD cases, both from one patient to the next, as well as between different case reports ( Table 1). We discuss possible reasons for this in respect to the underlying pathophysiology, the patient populations reported, and factors relating to obtaining and interpreting lung biopsies.
Pathophysiology: A Spectrum of Disease, Separate Diseases, or a Shared Endpoint for Several Diseases?
Since the pathophysiology of CVID ILD is unknown, it is not surprising that there is no explanation for the degree of heterogeneity in the histology (33,44). CVID-related ILD (or GLILD) was originally defined as a "conglomeration of pulmonary histopathologic abnormalities seen in a subset of patients with CVID" (44). The divergent findings may represent a "spectrum" of a single disease (44) or several different pathologies, in addition to the primary antibody deficiency. Another hypothesis is that CVID ILD represents a common "pulmonary reaction pattern" (or "morphological common endpoint") not only for CVID but also for other PIDs in which similar clinical, radiographical, and histological features have been described (44,47). None of these hypotheses are mutually exclusive; it may be that the small numbers and the absence of international standardization frustrate the recognition of distinct pathological patterns.
Patient Populations
Geography may influence the variability observed, with different genetic influences in particular populations. It is interesting that three of the large CVID-related ILD case series, one from the UK and two from the USA, show the most divergence, despite a conscious effort on the part of the former to adhere to similar definitions used previously. Differences in clinical practice, including diagnosis, cannot be totally discounted. Some series are restricted to patients with spontaneous (non-familial) CVID in adults and others include patients diagnosed in childhood. Since no diagnostic details are given, the exclusion of combined immune deficiencies involving T-cell immunity as well as B-cell failure (5), or known mutations in monogenic disease (e.g. CTLA4, LRBA, KMT2D, XIAP, RAG1, NFKB1) (34,39,(51)(52)(53)(54)(55)(56)(57)63) is unclear.
Biopsy-Related Factors: Technique, Timing, Treatment, and Interpretation
The method by which a biopsy has been obtained may have a significant impact on the clinical conclusions reached (61). Given that several different biopsy techniques have been used across the cases reported, this may be a contributing factor to some of the variation between cases, though in almost all series so far, imaging was used to obtain the biopsy.
A further consideration is the timing of the biopsy with respect to disease progression, but most patients do not undergo repeat biopsies. It is likely that, once present, pulmonary fibrosis and possibly organizing pneumonia may progress (33).
Another potential contributing factor is whether the biopsy was performed prior to or following corticosteroid or immunosuppressive treatment. These drugs could plausibly alter the patterns observed or mask them entirely, particularly those related to inflammation. While some authors have clearly documented when such drugs were used before biopsies were performed (33), this is not always the case, so firm conclusions cannot be drawn.
In the absence of standardized reporting, interpretation of the biopsy introduces considerable potential for variation. Although some authors have tried to mirror the approach pioneered by others and/or have a second, independent pathologist review the histology, some degree of both intra- and inter-operator variability is inevitable when faced with an uncommonly encountered pathological entity (33,40).
CONCLUSIONS AND FUTURE DIRECTIONS
In summary, there is considerable heterogeneity in the histopathological findings within individual patients, between patients and between study centers; these findings include lymphoid hyperplasia, granulomata, organizing pneumonia and pulmonary fibrosis. The term "GLILD" is best avoided as not all patients have pulmonary granulomata (32,46), and its use may mask the histopathological complexity and/or multiple pathological processes (33,47). Possible explanations include differences in the timing of sampling with respect to the disease process or treatments, and genetic, geographical and environmental factors (7,33,44,47). Finally, inconsistencies between studies in how histological specimens are obtained, treated, immuno-stained and described have contributed (33), highlighting an urgent need for standardization of histopathological reporting to allow fairer comparisons to be made between distinct studies. The ability to compare separate studies is of paramount importance when dealing with a rare disease entity.

FIGURE 1 | Lung biopsies from patients with common variable immune deficiency (CVID)-related interstitial lung disease (ILD). (A) Patient 1: (i) lung biopsy section stained with hematoxylin and eosin (H&E), showing loss of alveolar spaces and many lymphocytes infiltrating the interstitium; (ii) staining for CD4+ cells, which predominate, sometimes in nodules; (iii) scanty CD8+ cells (33). No granulomata or organizing pneumonia. (B) Patient 2: (i) lung biopsy section stained for CD3+ cells, showing that T-cells surround follicles and are additionally found in discrete nodules; (ii) the follicles consist of CD20+ cells, with only scattered CD20+ B-cells in other areas. No granulomata or organizing pneumonia.
We need to expand our understanding of the etiology and immunopathogenesis of ILD in CVID, to provide more accurate prognostication and select appropriate treatments. Future studies will incorporate detailed cellular phenotypic, proteomic, transcriptomic and genomic dissection of CVID-ILD, to shed further light on pathogenesis, identify disease-relevant biomarkers and better guide treatment selection.
AUTHOR CONTRIBUTIONS
FD and DM prepared the first draft of the manuscript. All authors contributed to editing of subsequent versions and reviewed and authorized the final version. HC and SP played a supervisory role. All authors contributed to the article and approved the submitted version.
"year": 2020,
"sha1": "97cc0caeb183db62e2d0e33d42df095a6251962b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2020.605187/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "97cc0caeb183db62e2d0e33d42df095a6251962b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Creating an Evidence Base to Support the Development of a Holistic Approach to Working with Children and Young People in Derbyshire: A Local Authority Case Study on the Integration of Social Pedagogy in Children and Young People's Services
To cite this article: Chavaudra, N., Moore, N., Marriott, J. & Jakhara, M. (2014). Creating an Evidence Base to Support the Development of a Holistic Approach to Working with Children and Young People in Derbyshire: A Local Authority Case Study on the Integration of Social Pedagogy in Children and Young People’s Services. International Journal of Social Pedagogy, 3(1), 54-61. Available online: http://www.internationaljournalofsocialpedagogy.com
Improving outcomes, particularly those relating to educational attainment for children in care, remains a ubiquitous challenge for local government. Some European countries use social pedagogy as a conceptual framework to improve the outcomes for children. As part of its aspiration to embed a holistic mind-set for staff and carers working with children, Derbyshire County Council has practiced social pedagogy within its children's residential homes since 2010, resulting in positive changes for staff and young people. In 2013 the University of Derby was commissioned to scope the content of the Council's workforce development approach, to explore the idea that social pedagogy is a promising approach, not just in children's homes but also in wider services. The scoping project included surveys and interviews with a range of children's services workers, including those from social work, child and family support, residential and fostering services. The research identified that, where social pedagogy underpins the activities offered to vulnerable children and those in residential care settings, the outcomes for these groups are improved. There is a growing appetite for a programme of workforce development in social pedagogy, however any such programme should be inclusive and offered at different levels. Furthermore, the principles and concepts should be embedded in the existing roles of a range of practitioners working with children and young people. Ongoing research with Derbyshire's children's services workforce will contribute to a growing body of evidence, which supports the development and application of social pedagogy to improve the experiences of children and young people in the county.
Background
Improving outcomes, particularly those relating to educational attainment for children in care, remains a ubiquitous challenge for local government. However, robust evidence relating to the effectiveness of interventions aimed at improving educational outcomes for children in care is lacking (Brodie et al., 2009). A conceptual framework which has been adopted in some countries in Europe to improve the outcomes for children is that of social pedagogy, which has its origins in the nineteenth century. Lyons & Hueglar (2011) note that the development of social pedagogy across Europe has followed different traditions, making it a difficult concept to define for a UK audience, so they define it broadly as an element of education that includes informal learning processes that contribute to human development. Petrie et al (2006) describe it as 'education in its broadest sense' and 'bringing up' (p. 3) children in a way that addresses the whole child. Kyriacou et al (2009) describe social pedagogy as referring 'to the theory and practice underpinning the work of those professionals involved in supporting the personal development, social education and overall welfare and care of the whole child' (p. 101). Others have attempted to define it according to the areas of practice that it represents. From a more continental perspective, Eichsteller & Holthoff (2011) argue it contains four areas of practice: a multi-dimensional and holistic understanding of well-being; learning from a standpoint of the 'competent' or 'rich' child, where education does not impose but facilitates children's capacity to think for themselves; authentic and trusting relationships between professionals and young people that acknowledge and work with both the authoritative and the affectionate, as well as retaining a sense of the private; and empowerment, or promoting active engagement in one's own life and within society, which is fundamentally concerned with children's rights and developing the skills for living in a democracy. Berridge et al (2011) note that researchers from a European tradition indicate that social pedagogy is not an approach or profession or a set of techniques that can be easily learnt but a perspective that pervades all areas of practice involving the welfare of children. This provides a distinction between social pedagogy and many other approaches to workforce development - its emphasis on values provides an ethical framework for organisations and for practice. As argued by Coussée et al (2010), 'social pedagogy can contribute to a set of shared values and skills across the child-centred sectors' (p. 794). Social pedagogy is also a conceptual framework closely linked to mentoring in that it underpins personal empowerment and change management (McGowan et al, 2009; Morgan, 2012). Paget et al (2007) concluded that social pedagogy as a conceptual framework should underpin the work of all individuals working with children and young people, particularly where mentoring or supportive relationships are being developed. In this sense, social pedagogy could apply equally to youth workers, teachers, child care and play workers, community workers, youth justice, counsellors and guidance workers as well as to members of social care teams and foster carers. They also note that social pedagogy can underpin work with adults as well as with children and young people.
The appeal of social pedagogy for English children's services is well demonstrated by Petrie et al (2006) in their exploration of the social pedagogic approach for workforce training and education. Within the study, they compare children's residential care in England with that of Germany and Denmark. Their findings suggest that staff turnover, recruitment and retention caused the greatest concern in England and the least in Denmark. Their findings also suggest that better life chances, such as lower rates of teenage pregnancy, of engagement in criminal activity and of young people not in education, employment or training, were associated with the professionalised workforce and the reflexive and child-centred approach of Denmark and Germany. Social pedagogy appeals to the ambitions of children's services departments both in terms of outcomes for children and young people and in terms of addressing cultural and procedural barriers to service improvements. However, research into the impact of a pilot which employed European social pedagogues as workers in English children's residential homes concluded that this had not always resulted in social pedagogical approaches permeating practice throughout the homes, and as a consequence the impact had been limited (Berridge et al, 2011). As such, future social pedagogy developments would require a deeper permeation of practice and organisational culture to achieve a greater impact.
In searching for a more effective approach to improving the outcomes for children in Derbyshire and specifically those in residential homes, members of Derbyshire County Council who remained committed to the potential of social pedagogy secured resources to invest in a programme of workforce development. This article explores the findings from a scoping exercise, which considered a number of options for the delivery of social pedagogy training and development in the Derbyshire County Council Children and Younger Adults Directorate. It also presents the outcomes of this research in terms of policy and aspiration.
Context
Derbyshire County Council has aspired to embed a holistic mind-set for all staff and carers working with children. As part of this aspiration, social pedagogy has been practiced within Derbyshire's children's residential homes since 2010. A pilot, which offered accredited social pedagogy training to nine staff, resulted in new insights into professional practice and a commitment to share new and emerging good practice. As a consequence, in-house training was delivered to over 200 Derbyshire Local Authority workers by the nine members of staff who attended the pilot training. Derbyshire County Council has attributed a number of changes to these training initiatives, including a reduction in staff absence, improved living and working environments and improved responses from children in residential care - for example, runaway episodes from residential homes reduced from 205 in 2010/11 to 76 in 2012/13. The changes have provided the evidence needed to secure commitment for further investigation and resources.
In 2012, Derbyshire County Council was successful in becoming a Creative Council as part of a national programme led by NESTA (National Endowment for Science, Technology and the Arts) and the Local Government Association (LGA). Through this initiative, social pedagogy was identified as one of the most promising approaches to solving some of the most challenging problems for local government. A recent position statement from the Association of Directors of Children's Services (ADCS, 2013), acknowledges that 'where authorities have persevered with the approach initial resistance has been superseded by a light bulb moment when it became clear that this approach can bring improved outcomes for young people and a better, more satisfying, working experience for staff' (p. 5). This mirrors the experience in Derbyshire and supports the idea that social pedagogy is a promising approach, not just in children's homes but also in wider services (Bowyer and Wilkinson, 2013). As part of the Creative Councils journey, the University of Derby was commissioned by Derbyshire County Council to scope the content of the Council's workforce development approach to social pedagogy.
Scoping workforce development in social pedagogy
The scoping project involved a range of research approaches, including a desk-based review of literature, conversations with practitioners, managers and foster carers, and an online survey. In all, 209 people participated in the research. Participants included 34 managers in total, and foster carers contributing through telephone interviews (6), a focus group (12) and a survey (63).
The results from the research demonstrated overwhelmingly that there was an interest in and commitment to developing knowledge and skills in the area of social pedagogy across all of the participant groups. However, a number of interesting insights emerged during the research and were used to inform the policy and approaches adopted by Derbyshire County Council.
12 of the 80 participants in the research found the term social pedagogy difficult to understand, despite being receptive to some of the key concepts that it describes. The findings of the scoping study therefore suggested that the use of the term social pedagogy should be carefully considered when developing any future strategy. Views were divided about the use of the term. Whilst it is important for professionals to be able to embed their practice in understood and accepted theory and concepts, it is also possible to recognise that using unfamiliar language is not helpful when developing and embedding a new approach across a range of stakeholders with different levels of education and professional experience. It was concluded that some care should be taken to maximise positive engagement by choosing to communicate the concept using clear and widely understood language.
The research noted that embedding social pedagogy across the workforce will have the most impact if there is a training strategy and framework which includes non-accredited and accredited training for all levels of workers, including the integration of social pedagogical ideas and concepts in any induction activities.
One question that the scoping study addressed was what form of accreditation might best support the development of the workforce. The research indicated that a level 4 accredited award (equivalent level to the first year of a university degree) in social pedagogy would have the greatest appeal to the maximum number of staff. However, it was also noted that the strategy would need to attend to progression routes into and beyond this level of learning to ensure that the entire children and young people's workforce could be engaged in it.
Embedding social pedagogy in the Derbyshire County Council approaches
The outcome of the scoping exercise was a renewed commitment to social pedagogy as the underpinning conceptual framework for work with children and young people across Derbyshire and particularly those who are considered vulnerable and for those in residential care. This section outlines the policy and practice implications of the decision to embed social pedagogy in the work of the Directorate.
An increasing body of knowledge and practice of social pedagogy in the UK accepts that it is undesirable, impossible even, to achieve the transfer of whole systems of training, qualification and practice from our European counterparts to the UK - a country without a distinct social pedagogy tradition (Kornbeck, 2009). The transfer of policies from one country to another can be fraught with difficulties due to the different social, cultural and economic contexts which exist (Sultana, 2009). However, research points to ways in which social pedagogy as a model could fit with the development of training and services for England (Petrie et al, 2009). Derbyshire is exploring this opportunity, with the ambition of creating a workforce, not of social pedagogues, but of social pedagogy practitioners: staff and carers with an understanding of the principles of social pedagogy, a grasp of the relationship between theory and practice and the capability and passion to continue to learn and reflect on its application to their own working life.
Some contemporary thinking in the UK identifies the need not for social pedagogues as a separate professional group as in European settings, but a UK model which recognises the specific cultural nuances and is expressed as a mind-set adopted by all rather than a specialist profession (Paget et al, 2007;DCSF, 2008;Stevens, 2010). Given the diversity of concepts associated with social pedagogy, Cameron and Moss (2011) identify that the term pedagogies may be more appropriate.
The need for further research and exploration in this area is also identified, to explore further the potential for social pedagogy to inform policy and practice in the UK, and that work with researchers, academics, practitioners and social pedagogues is required to develop a model of social pedagogy that can work in the UK context (Smeeton, 2011).
The embedding of social pedagogy within Derbyshire will be enabled through a programme of workforce development. This will include growing the currently provided in-house two-day social pedagogy training to include a bespoke degree level module in social pedagogy by the University of Derby. Training will be targeted at all those who work with children and young people in care, including foster carers, social workers, residential staff, youth workers, personal advisors and family support workers, with 100 people from across the spectrum of the workforce completing the course in the first two years.
In addition to workforce development, a programme of research will be undertaken to determine any inconsistencies, ambiguities and contradictions in the behaviour of staff, provide the opportunity to explore interests and understand relationships for those developing learning and practice in social pedagogy.
As the approach develops and is evaluated, the Council will be mindful of key issues from the literature. These are likely to include 'Haltung' (Eichsteller, 2010?), broadly understood as a personal and professional, ethical stance or attitude. In some UK explorations of social pedagogy approaches, this has been found a useful concept to describe the shift in practitioners' personal resourcefulness (Smith, 2012). It will also consider and explore social pedagogy within the framework of workforce development, which forms part of the local authority's strategy to improve outcomes for children in care by enabling changes that influence the lives of young people. The evaluation will also explore the extent to which practitioners feel that 'it is about how to work with the process of change, how to find resources together with the client to think of oneself in new and ground-breaking ways' (Storø, 2012, p. 26).
In addition, the knowledge and application of the concept of the Common Third will be examined in order to understand how practitioners are using such theory within their social pedagogical practice. As Storø (2012) identifies, social pedagogy is not only about doing -the action must be connected to theory, and the trained professional should always have the possibility, and the obligation, to consult theory to find the best possible action in every situation.
The cultural approach to risk in local practice and provision will also be explored, as it has been acknowledged that social pedagogy has the potential to challenge bureaucratic and risk-averse practice (Berridge et al, 2011). This is particularly pertinent for children's services at present following the Munro review (2011). The findings of the review indicate that anxiety is an important factor in determining the practice of those working in social care. The false hope of eliminating risk has contributed to an increase in defensive practice, which can result in the interests of children and young people being overlooked. The Munro report (2011) notes that it is a major challenge to all involved in child protection to make the system less 'risk averse' and more 'risk sensible' (p. 60).
Also, any sense of organisational cultural change will be explored in terms of how practitioners view the profile and importance of education in support for young people in care (Hämäläinen, 2012), together with how policy, regulatory and inspection frameworks nationally and workforce development strategy and policy locally either support or create barriers to the successful application of social pedagogical approaches.
Over coming years, the learning from the Derbyshire approach can complement, support and inform the UK social pedagogy knowledge base. By sharing the learning journey with programmes such as the Head, Heart, Hands demonstration programme led by the Fostering Network, UK social pedagogy can evolve collaboratively, in response to emerging understandings of how approaches can be most effectively deployed. Practice methods and beliefs are inextricably linked with the social, economic, political and cultural contexts in which developments of social pedagogy exist. As such, exploring the role of the local authority organisation, its culture and its leadership, including management and supervision, will form a critical element of the learning to be obtained from this, as from other, critical evaluations of UK programmes. In doing so, hopefully, the understanding of and commitment to social pedagogy across the UK, particularly in a time of economic challenge for local authorities, will continue to grow.
Conclusion
Derbyshire County Council's Children and Younger Adults Directorate has been undergoing a social pedagogy learning journey since 2010. Local research has identified that, where social pedagogy underpins the activities offered to vulnerable children and those in residential care settings, the outcomes for these groups are improved. Research suggests that there is a growing appetite for a programme of workforce development in social pedagogy; however, any such programme should be inclusive and offered at different levels. The principles and concepts should be embedded in the existing roles of a range of practitioners working with children and young people. As a result of these insights a new accredited programme is being developed, which will be offered to 100 practitioners drawn from across the range of the children's and young people's workforce. This new approach will be the focus of new research which monitors the impact of the training on the behaviours of practitioners and the outcomes for children. By researching the impact of the training and development strategy, Derbyshire will be able to contribute to a growing body of evidence supporting the development and application of social pedagogy to improve the experiences of children and young people in the county.
"year": 2014,
"sha1": "6b76632fd97be06d73c82f67c27023e3ccc1b21b",
"oa_license": "CCBY",
"oa_url": "https://www.scienceopen.com/document_file/0fe8a6cb-d13d-451f-853e-3f440cee1a5d/ScienceOpen/Article6.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0511831f397e2cc011783c8c451129f3c8a2d2c1",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Information transfer fidelity in spin networks and ring-based quantum routers
Spin networks are endowed with an information transfer fidelity (ITF), which defines an absolute upper bound on the probability of transmission of an excitation from one spin to another. The ITF is easily computable, but the bound can be reached asymptotically in time only under certain conditions. General conditions for attainability of the bound are established, and the process of achieving the maximum transfer probability is given a dynamical model, the translation on the torus. The time to reach the maximum probability is estimated using the simultaneous Diophantine approximation, implemented using a variant of the Lenstra–Lenstra–Lovász (LLL) algorithm. For a ring with uniform couplings, the network can be made into a metric space by defining a distance (satisfying the triangle inequality) that quantifies the lack of transmission fidelity between two nodes. It is shown that transfer fidelities and transfer times can be improved significantly by means of simple controls taking the form of nondynamic, spatially localized bias fields, opening up the possibility for intelligent design of spin networks and dynamic routing of information encoded in them, while being more flexible than engineering fixed couplings to favor some transfers, and less demanding than control schemes requiring fast dynamic controls.
Introduction
Efficient and controllable transport of information is crucial for information processing, both classical and quantum. While bosonic channels [1] are the most attractive option for long-distance communication, efficient on-chip interconnectivity in a quantum processor based on atomic, ionic or quantum dot-based qubits, or quantum spintronic devices [2], will require direct information transport through networks of coupled solid-state qubits. Such networks can be modeled via interacting spins and are therefore generally referred to as spin networks. Initiated by Bose's [3] seminal work, spin networks have received considerable attention in recent years (see review articles [4,5] and references therein). Most of the work has focused on information transmission through linear chains as prototype quantum wires, starting with unmodulated chains [3] and later perfect state transfer in chains with fixed, engineered couplings [6,7], and finally controlled state transfer in spin chains, e.g., via adiabatic passage [8], ac modulation to achieve renormalization of the couplings between adjacent qubits [9], single-node bang-bang controls [10] or global dynamic controls [11]. Perfect state transfer in more general networks has also been considered, and some interesting results for complete graphs were obtained in [12].
Nonetheless, the information-theoretic properties of spin networks are not fully understood. Information encoded in excitations of a network of coupled spins propagates, even under ideal conditions when quantum coherence is maintained, in a nonclassical way determined by the Schrödinger equation. Under the best possible circumstances, this propagation of excitations determines the information transfer fidelity (ITF) between various nodes of the network. Perfect state transfer between two nodes can only be achieved when the information transfer fidelity between the respective nodes is unity. However, this condition is not sufficient. For example, while it is satisfied for the end nodes of a chain with uniform couplings [13], such chains are usually not considered to admit perfect state transfer except for chains of length two or three.
This raises the question of the attainability of the upper bound given by the information transfer fidelity. Attainability in general also depends on time constraints, i.e., attainable in what time, and on the margin of error we are willing to accept. In practice, some margin of error is unavoidable, and the real question of interest is therefore not whether we can achieve, e.g., perfect (i.e., unit fidelity) state transfer in time t_f, but rather whether we can achieve state transfer with a fidelity 1 − ε, where ε is an acceptable margin of error, in a reasonable amount of time. We may be willing to accept a slightly increased margin of error for a significant reduction in the transfer time. In this work, we are interested in such fundamental questions for spin networks subject to coherent dynamics in general, and specifically simple configurations such as a circular arrangement of spins (or spin ring for short), which could serve as basic building blocks for more complex architectures.
After introducing some basic definitions and basic results in Sect. 2, the concept of asymptotic ITF, i.e., maximum information transfer fidelity attainable absent constraints on the transfer times, between nodes in a network of spins is introduced in Sect. 3. Conditions for attainability of the bounds are derived using dynamic flows on tori and the simultaneous Diophantine approximation, computationally implemented using the Lenstra-Lenstra-Lovász (LLL) algorithm. Under certain conditions, the information transfer infidelity induces a metric that captures how close two nodes in a spin network are from an information-theoretic point of view. This information transfer geometry is investigated in Sect. 4. Finally, in Sect. 5, we investigate how the information transfer geometry of a network can be changed by means of simple controls in the form of fixed biases applied to individual nodes and how this principle could be employed for dynamic routing in a spin network with ring topology without the requirement of fast-switching controls.
Basic definitions and results
We consider networks of N spins arranged in some regular pattern with either XX or Heisenberg interaction [14], specified by a Hamiltonian of the form

H = (1/2) Σ_{i<j} J_ij (σ_i^x σ_j^x + σ_i^y σ_j^y + η σ_i^z σ_j^z).   (1)

We specifically focus on networks with XX coupling (η = 0) and Heisenberg coupling (η = 1), although most of the concepts and analysis in the following are not limited to these types of coupling. J_ij is the strength of the coupling between spin i and spin j. The factor σ_i^{x,y,z} is the Pauli matrix along the x, y, or z direction of spin i, i.e., σ_i^{x,y,z} = I_{2×2} ⊗ · · · ⊗ I_{2×2} ⊗ σ^{x,y,z} ⊗ I_{2×2} ⊗ · · · ⊗ I_{2×2}, where the factor σ^{x,y,z} occupies the ith position among the N factors and σ^{x,y,z} is one of the single-spin Pauli operators σ^x = (0 1; 1 0), σ^y = (0 −ı; ı 0), σ^z = (1 0; 0 −1). The system Hilbert space H on which H acts is conveniently taken as (C²)^⊗N ≅ C^(2^N). We can abstract the network of spins as a graph G = (V, E), where the vertices represent the spins and the edges indicate the presence of couplings. A particular configuration considered in this paper is that of spin rings, i.e., spin networks defined by a circular arrangement of spins, described by a J-coupling matrix with circulant, nearest-neighbor structure:

J_{i,i+1} = J_{i+1,i} ≠ 0 for i = 1, . . . , N − 1, J_{N,1} = J_{1,N} ≠ 0, and all other entries zero.   (2)

The term J_{N,1} represents the coupling energy between the two ends, spins 1 and N, closing the ring. For networks with uniform couplings, i.e., all nonzero couplings have equal strength J (in units of Hz), we can set J = 1 by choosing time in units of J^{−1}.
Single excitation subspace
Although many of the results in the following sections are more widely applicable, we primarily concern ourselves here with the single excitation subspace of the network [5], spanned by the N single excitation quantum states {|i⟩ : i = 1, . . . , N}, where |i⟩ = |↑↑ · · · ↑↓↑ · · · ↑⟩ with ↓ in the ith position, indicating that spin i carries the excitation. The natural coupling among the spins allows the excitation at i to drift toward an excitation at j with an information transfer fidelity (ITF) that can be quantified by the maximum transition probability p_max(i, j). This concept will be precisely defined in the next section, but in this introductory exposition, we could think of "maximum" as the process of giving the transition from spin i to j the correct amount of time so that it is most likely to occur. The concepts behind these ideas lie at the foundation of quantum mechanics as embodied in the Feynman path integral. These concepts reveal that, contrary to classical least-cost-path routing that follows a single path from a source to a destination in a classical network, quantum networks follow all possible paths from the state |i⟩ to the state |j⟩.
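As a concrete illustration of the single excitation picture, the sketch below builds the full 2^N-dimensional Hamiltonian of a small uniform XX ring via Kronecker products, using the normalization of Eq. (1) as reconstructed above (the 1/2 prefactor is an assumption of that reconstruction), projects it onto the basis states |i⟩ defined in this subsection, and prints the resulting N×N single-excitation matrix; under this normalization the off-diagonal entries equal the ring couplings.

```python
# Sketch: build the full Hamiltonian of a uniform XX ring (eta = 0) via Kronecker
# products, then project onto the single-excitation states |i> (spin i flipped down).
# The 1/2 prefactor follows Eq. (1) as reconstructed in this excerpt and is an assumption.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(single, site, n):
    """Embed a single-spin operator at position `site` in an n-spin register."""
    mats = [I2] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ring_hamiltonian(n, J=1.0, eta=0.0):
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        j = (i + 1) % n                      # nearest neighbours on the ring, incl. J_{N,1}
        H += 0.5 * J * (op_at(sx, i, n) @ op_at(sx, j, n)
                        + op_at(sy, i, n) @ op_at(sy, j, n)
                        + eta * op_at(sz, i, n) @ op_at(sz, j, n))
    return H

def single_excitation_states(n):
    """Columns are |i>: all spins up except spin i (up = (1,0), down = (0,1))."""
    up, down = np.array([1, 0], complex), np.array([0, 1], complex)
    states = []
    for i in range(n):
        v = np.array([1.0 + 0j])
        for s in range(n):
            v = np.kron(v, down if s == i else up)
        states.append(v)
    return np.column_stack(states)

if __name__ == "__main__":
    n = 4
    H = ring_hamiltonian(n)
    V = single_excitation_states(n)
    H_single = V.conj().T @ H @ V            # N x N single-excitation block
    print(np.round(H_single.real, 6))        # off-diagonal entries equal J = 1 on ring edges
```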
Eigendecomposition of the Hamiltonian
Restricted to the single excitation subspace H̃ ≅ C^N, the eigendecomposition of the Hamiltonian reads H̃ = Σ_k λ_k Π_k, where λ_k for k = 1, . . . , Ñ ≤ N are the distinct real eigenvalues and Π_k are the projectors onto the corresponding eigenspaces.
For a ring of N spins with uniform XX coupling (J = 1), the single excitation Hamiltonian is the circulant matrix C_N whose only nonzero entries are those coupling nearest neighbours on the ring, (C_N)_{i,i+1} = (C_N)_{i+1,i} = (C_N)_{1,N} = (C_N)_{N,1} = 1, where the subscript N is utilized to indicate that the system has N spins. For uniform Heisenberg coupling, the Hamiltonian is the same except for the addition of a multiple of the identity, which simply shifts the eigenvalues by a constant and does not affect the eigenvector structure or differences between eigenvalues. The eigenvalues and eigenvectors of circulant matrices are well known and shown in Table 1; in particular, λ_k = 2 cos(2πk/N). The N single excitation eigenvalues are conveniently parameterized by an integer k running from 0 to N − 1 or 1 to N with the cyclic condition that λ_0 = λ_N. The following lemma regarding the eigenvalues will be helpful later.
For a ring with uniform coupling, the single excitation spectrum has the following structure:
-For N even, there are (1/2)N − 1 distinct pairs of double eigenvalues and two single eigenvalues ±2, giving a total of Ñ = (N + 2)/2 pairwise distinct eigenvalues. If N is divisible by 4, then the spectrum again has a total of Ñ = (N + 2)/2 pairwise distinct eigenvalues, including a double eigenvalue at 0 (for k = (1/4)N, (3/4)N).
-For N odd, we have λ_{N−k} = λ_k for k ≠ 0, so there are (N − 1)/2 distinct pairs of double eigenvalues and a single eigenvalue +2, giving a total of Ñ := (N + 1)/2 distinct eigenvalues.
-In either case, the number of pairwise distinct eigenvalues is Ñ = ⌊N/2⌋ + 1.
Moreover, the eigenvalues of C_N and C_{N−1} are interlaced.
Proof The listed items are trivial. The last claim is the Cauchy interlacing property [16].
For a double eigenvalue λ_k = λ_{N−k}, denote the projection onto the corresponding eigenspace as Π_k := |v_k⟩⟨v_k| + |v_{N−k}⟩⟨v_{N−k}|, where the eigenvectors can be chosen as in Table 1. Moreover, for the single eigenvalue λ_0 = +2, define Π_0 := |v_0⟩⟨v_0| to be its eigenprojection. If N is even, the single eigenvalue λ_{N/2} = −2 has its eigenprojection denoted as Π_{N/2} := |v_{N/2}⟩⟨v_{N/2}|. If, in addition, N is divisible by 4, denote the eigenprojection of the double eigenvalue λ_{N/4} = λ_{3N/4} = 0 as Π_{N/4} := |v_{N/4}⟩⟨v_{N/4}| + |v_{3N/4}⟩⟨v_{3N/4}|. With this notation, the Hamiltonian restricted to the single excitation subspace can be written as H = Σ_k λ_k Π_k. The above can easily be extended to the Heisenberg case by globally shifting the eigenvalues by 1.
Maximum transfer fidelity and attainability
Let |i⟩ be a quantum state in the single excitation subspace with the excitation localized at spin i. The quantum mechanical probability of transition from state |i⟩ to state |j⟩ in an amount of time t is given by p_t(i, j) = |⟨j| e^{−iHt} |i⟩|² = |Σ_k e^{−iλ_k t} ⟨j|Π_k|i⟩|², where we choose energies in units of ℏ/J, allowing us to assume ℏ = 1 and omit ℏ in the following. This formula is a corollary of the Feynman path integral [17,18]. To circumvent the difficulty posed by the time dependence of this probability, we proceed as in [13] and define the maximum transition probability p_max(i, j), also referred to as information transfer fidelity (ITF), as the upper bound obtained from the triangle inequality, p_max(i, j) := (Σ_k |⟨i|Π_k|j⟩|)² (4). Clearly, p_max(i, j) ≤ 1. Observe that instead of taking the sum of the absolute values of all ⟨i|Π_k|j⟩ terms, we could take the sum of the absolute values of some partial sums of such terms and derive other upper bounds. Note that the upper bound is valid for any spin network, no matter how many spins, no matter how many multiple eigenvalues, no matter the topology. Since the upper bound depends only on the eigenvectors of the Hamiltonian and since those are continuously dependent on the strengths of the couplings, the upper bound is continuous relative to the J_{ij}.
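As an illustration of the ITF bound defined above, the following sketch computes p_t(i, j) and p_max(i, j) directly from the eigenprojections of the single-excitation Hamiltonian (reusing xx_ring_hamiltonian from the sketch above). The grouping tolerance and the helper names are our own choices.

```python
import numpy as np

def eigenprojections(H, tol=1e-9):
    """Distinct eigenvalues lambda_k of H and the projectors Pi_k onto their eigenspaces."""
    evals, evecs = np.linalg.eigh(H)
    values, projections = [], []
    k = 0
    while k < len(evals):
        idx = [k]
        while k + 1 < len(evals) and abs(evals[k + 1] - evals[k]) < tol:
            k += 1
            idx.append(k)
        V = evecs[:, idx]
        values.append(evals[idx[0]])
        projections.append(V @ V.T)   # basis-independent projector onto the (possibly degenerate) eigenspace
        k += 1
    return np.array(values), projections

def p_t(H, i, j, t):
    """Transition probability |<j| exp(-iHt) |i>|^2."""
    lam, Pi = eigenprojections(H)
    amp = sum(np.exp(-1j * l * t) * P[j, i] for l, P in zip(lam, Pi))
    return abs(amp) ** 2

def p_max(H, i, j):
    """ITF upper bound (sum_k |<i|Pi_k|j>|)^2."""
    _, Pi = eigenprojections(H)
    return sum(abs(P[i, j]) for P in Pi) ** 2

H5 = xx_ring_hamiltonian(5)
print(p_t(H5, 0, 1, 10.0), p_max(H5, 0, 1))
```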
Attainability of bounds
The ITF p_max(i, j) is an upper bound on p_t(i, j), which acquires its full significance if p_max is achievable, that is, if there exists a sequence of time samples {t_{i,j}(n) : n ∈ N} such that lim_{n→∞} p_{t_{ij}(n)}(i, j) = p_max(i, j). Observing that the absolute value in Eq. (4) will absorb any global phase factor, the attainability condition is that there exists t ∈ [0, ∞) such that e^{−iλ_k t} = s_k(i, j) e^{iφ}, ∀k = 0, . . . , Ñ − 1, (5) where s_k(i, j) := Sgn(⟨i|Π_k|j⟩) ∈ {0, ±1} is a sign factor and φ is a global phase, which is arbitrary but must be the same for all k's. Eigenspaces with s_k = 0 (where the (i, j) dependency is suppressed to avoid clutter) have no overlap with the initial and/or target state and do not contribute to the sum. We shall refer to them as dark-state subspaces. They can be ignored, and we can restrict ourselves to the set K ⊆ {0, 1, . . . , N − 1} of indices k for which s_k ≠ 0. The physical interpretation of K is the set of eigenspaces Π_k H that have nontrivial overlap with the initial and target states. Noting that s_k = ±1 for k ∈ K, and exp[−i(π/2)(s_k − 1)] = 1 for s_k = 1 and exp[−i(π/2)(s_k − 1)] = −1 for s_k = −1, we can write s_k = exp[−i(π/2)(s_k − 1) − 2πi n_k], where n_k ∈ Z is an arbitrary integer. Inserting this into (5), taking the logarithm and dividing by −i yield λ_k t = (π/2)(s_k − 1) + 2π n_k − φ. This condition is not directly useful as φ can be arbitrary, but we obtain meaningful constraints if we subtract the equations in a pairwise manner, with k ≠ ℓ: (λ_k − λ_ℓ) t = (π/2)(s_k − s_ℓ) + 2π (n_k − n_ℓ). (8) We can also write the attainability constraints more explicitly: (λ_k − λ_ℓ) t / π − (1/2)(s_k − s_ℓ) = 0 mod 2, for k, ℓ ∈ K. These conditions are necessary and sufficient for attainability. They are physical, only involving differences of the eigenvalues, which are observable and independent of arbitrary phases. Vanishing left-hand sides in the above are not an issue, as we are only looking at the differences, which are nonzero by definition as λ_k, k ∈ K, are the distinct eigenvalues of H.
Observe that all of the equations are compatible. Indeed, adding Eq. (8) for (k, ) and ( , m) yields (8) for (k, m). Naturally, these equations are redundant, but we obtain a set of linearly independent equations if we exclude the dark-state subspaces and restrict ourselves to a suitable subset of equations, e.g., (k i−1 , k i ) or (k 0 , k i ) for K = {k 1 , k 2 , . . . , K N }.
Example 1 (Dark States for Rings.)
For ring systems with uniform XX coupling, the distinct eigenvalues are λ_k = 2 cos(2πk/N). For eigenvalues of multiplicity 1, which occur for k = 0, and k = (1/2)N if N is even, ⟨i|Π_0|j⟩ = 1/N ≠ 0 and ⟨i|Π_{N/2}|j⟩ = (1/N)(−1)^{i−j} ≠ 0; therefore, there are no dark states associated with these eigenvalues. For eigenvalues with multiplicity 2, ⟨i|Π_k|j⟩ = (2/N) cos(πn/2) with n = 4k(i − j)/N for k = 0, . . . , (N − 4)/2; therefore, there are dark states if and only if n is an odd integer. This can happen only if N is divisible by 4. The same holds for rings with uniform Heisenberg coupling as they have the same eigenspace structure and the differences between eigenvalues are the same.
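The dark-state condition can be checked numerically. Reusing the helpers sketched above, the following loop prints the overlaps ⟨i|Π_k|j⟩ for adjacent spins of an N = 8 ring, for which the doubly degenerate zero eigenvalue should yield a dark subspace; the ring size and spin pair are arbitrary illustrative choices.

```python
H8 = xx_ring_hamiltonian(8)
lam, Pi = eigenprojections(H8)
i, j = 0, 1                       # adjacent spins
for l, P in zip(lam, Pi):
    print(f"lambda = {l:+.3f}   <i|Pi_k|j> = {P[i, j]:+.4f}")
# the projector belonging to lambda = 0 has vanishing overlap: a dark-state subspace
```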
Simultaneous attainability and flows on the torus
Excluding dark-state subspaces, restricting (8) to a subset S ⊆ K × K of linearly independent equations, and setting ω k, = (λ k − λ )/π , the attainability conditions become The left-hand side of the above is the solution of the flow on the torusẋ = ω k , with x(0) = 0. In this dynamic formulation, the question is whether the flow starting at x(0) = 0 passes through the point with coordinates 0 or 1, depending on whether s k = s or s k = s , respectively. It is well known [19, Prop.1.5.1] that the flow starting at an arbitrary x(0) (which includes x(0) = 0) passes arbitrarily close to an arbitrary point on the torus if and only if the ω k, 's are linearly independent over the rationals Q. This property of the flow getting arbitrarily close to an arbitrary point from an arbitrary initial condition is very strong and referred to as minimality. Observe that for the flow to be minimal it suffices that starting at x(0) = 0, it gets arbitrarily close to any point. Obviously, minimality is sufficient but not necessary for attainability, as the latter only requires the flow to pass arbitrarily close to a specific point on the torus, while minimality guarantees that the flow can get arbitrarily close to any point. Recall that Eq. (8) refers to a specific but arbitrary transfer |i → | j , as the signs depend on i, j. We could consider all Eq. (8)'s for all i = j and ask the question as to whether there exists a unique t such that attainability holds for all i = j. We refer to this stronger version of attainability as simultaneous attainability.
If for a given pair (i, j) there are at least three nondark eigenspaces corresponding to s d , s m , s n ∈ {±1}, then there must exist a pair, say (m, n), with s m − s n = 0. In this case, setting t = 2τ/ω mn for τ ∈ N ensures that the (m, n) Eq. (9) holds exactly and the remaining attainability equations become The left-hand side θ k τ of the preceding equation is the solution of the translation on the torus, that is, x(τ + 1) = x(τ ) + θ k mod 2 with initial condition x(0) = 0. By [19,Prop.1.4.1], the translation on the torus can come arbitrarily close to any point iff the elements in the set {1} ∪ {θ k : (k, ) ∈ S 0 } are linearly independent over Q. As before, the linear independence is sufficient, but not necessary for attainability. Note that we can in principle always reorder the eigenvalues so that the reference transition is (m, n) = (1, 2).
It should be noted that the attainability criteria above apply to any spin network. For specific types of networks, we can derive more explicit criteria.
Example 2 (Attainability Condition for Rings.) Given the formula for the eigenvalues for homogeneous rings, λ k = 2 cos(2π k/N ), elementary trigonometry shows that There are N = N /2 + 1 eigenspaces and N − 1 independent transition frequencies ω k . Choosing the subset of linearly independent equations S = {(k, k + 1) : k = 0, . . . ,N },N := N − 2, with the ordering of the eigenspaces as defined above, the attainability conditions can be written as If s m = s m+1 , then setting t = 2τ/ω m,m+1 for τ ∈ N ensures ω m,m+1 t = 2τ = 0 mod 2 and the attainability conditions become Notice that the signs of the projections of the initial state |i and target state | j , s k = j|Π k |i , depend on the choices of the latter, and it may happen that the signs s k are alternating, s k+1 = −s k for all k. In this case, the problem can easily be rectified by reordering the eigenvalues, e.g., so that s 0 = s 1 with the new ordering.
Hence, the p_max(i, j) are not simultaneously attainable, although p_max(i, j) may be attainable for some (i, j).
More generally, for a ring with N even, there are (1/2)N transition frequencies. Example 5 (Rational Dependence for Odd Rings.) Similarly, we can easily verify that for a ring with N = 9 spins, the transition frequencies are not rationally independent, as we have, e.g., sin(7π/9) − sin(5π/9) + sin(π/9) = 0, and thus the corresponding transition frequencies satisfy an integer linear relation. In general, rational independence of the transition frequencies for homogeneous rings does not hold when N is not prime.
Simultaneous Diophantine approximation
Instead of checking rational independence of {1} ∪ {θ_{kℓ} : (k, ℓ) ∈ S_0}, a less conservative approach is to proceed, either analytically or computationally [20,21], via the simultaneous Diophantine approximation [22][23][24][25] by finding integers p_{kℓ}, q such that |θ_{kℓ} q − p_{kℓ}| ≤ c/q^ε for all (k, ℓ) ∈ S_0, with constants c, ε > 0. With τ = q, the above yields approximate attainability conditions whose error is bounded by c/τ^ε. In the single-dimensional case, the solution is well known to be given by the continued fraction expansion of θ. Truncating the continued fraction expansion yields convergents, i.e., rational fractions p/q, with errors bounded as |θq − p| ≤ 1/q, which is optimal among all rational approximations with denominators less than or equal to q. The major hurdle in extending this result to the multi-dimensional case is that there is an incompatibility between the unimodular property of the multi-dimensional continued fraction (MCF) solution and optimality.
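For the single-dimensional case mentioned above, the convergents are easy to generate; the sketch below computes them for an arbitrary real number and prints the approximation errors |θq − p|, illustrating the 1/q bound. The function name and truncation length are illustrative choices.

```python
import math
from fractions import Fraction

def convergents(theta, n_terms=10):
    """Continued-fraction convergents p/q of theta; each satisfies |theta*q - p| <= 1/q."""
    a, x = [], theta
    for _ in range(n_terms):
        ai = int(math.floor(x))
        a.append(ai)
        frac = x - ai
        if frac < 1e-15:
            break
        x = 1.0 / frac
    convs = []
    p_prev, p, q_prev, q = 1, a[0], 0, 1          # standard three-term recurrence
    convs.append(Fraction(p, q))
    for ai in a[1:]:
        p, p_prev = ai * p + p_prev, p
        q, q_prev = ai * q + q_prev, q
        convs.append(Fraction(p, q))
    return convs

theta = math.sqrt(2)
for c in convergents(theta, 8):
    print(c, abs(theta * c.denominator - c.numerator))
```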
Nevertheless, the celebrated Dirichlet box principle shows that there are multidimensional approximations with c = 1 and ε = 1/Ñ, where in the present context Ñ = |S_0|. Moreover, there are infinitely many integer solutions q to the simultaneous Diophantine approximation; in other words, as τ is allowed to become arbitrarily large, the above error can be made arbitrarily small. The constant c = 1 can hardly be improved, as for c < 1 there are "badly approximable vectors" θ ∈ R^Ñ, defined by lim inf_{q→∞} q^{1/Ñ} d(θq, Z^Ñ) > 0, such that the simultaneous Diophantine approximation has only finitely many solutions [22,26, Sec. 5]. If, however, c is allowed to depend on Ñ, refined bounds (c < 1) can be derived on c(Ñ) due to the existence of infinitely many solutions [23]. Specializing the approximation to Ñ = 2, it can be shown [27] that the bound can be improved down to c = 8/13, along with ε = 1/2. In the one-dimensional case, Hurwitz's theorem says that one can take c = 1/√5 and ε = 1. Generally speaking, the Dirichlet approximation can only be improved slightly and at the expense of considerable extra difficulties; we will therefore work exclusively with the Dirichlet approximation in the following.
Assuming we have obtained a Dirichlet-good simultaneous Diophantine approximation, the approximate attainability conditions become The difficulty is to find, if it exists, a simultaneous Diophantine approximation of Dirichlet accuracy that satisfies the above conditions on the numerators. The following example demonstrates that it is not, in general, possible to achieve the even/odd conditions (12) on the numerators p k without compromising on the accuracy of the Diophantine approximation. To be more specific, arbitrary accuracy can still be achieved with Conditions (12), but a larger denominator is required to achieve the same level of accuracy in the presence of the constraints.
Example 6 (Simultaneous Diophantine Approximation with Constraints.)
In Example 3, the flow on the torus for a ring with N = 5 was found to be minimal, implying that we can get arbitrarily close to an arbitrary point on the torus. By the preceding argument, this guarantees existence of simultaneous Diophantine approximations of arbitrary accuracies with prescribed even/odd numerators. Furthermore, it is readily found that [21]. Given two convergents ordered as p n−1 /q n−1 < p n /q n , one can easily squeeze a semi-convergent between them as follows: The semi-convergent has even numerator and has the accuracy of the convergents p n−1 /q n−1 and p n /q n but at the cost of doubling the denominator. To prove that the semi-convergents provide approximations of arbitrary accuracy, it suffices to show that there are infinitely many n's such that p n−1 /q n−1 < p n /q n . This is a corollary of the unimodular property of continuous fractions, saying that p n−1 q n − p n q n−1 is alternately ±1.
We propose a general iterative method to deal with the even/odd constraints. To simplify the notation, let θ ∈ RN , p ∈ ZN , be column vectorizations of the θ k s, p k s, respectively, whereN := |S 0 |. We want to come up with a Dirichlet-good approximation, θ ≈ p/q, where p ∈ ZN , q ∈ N, with even/odd constraints on the numerators p i . By "Dirichlet-good," we mean that the infinity error is bounded as θq − p ∞ ≤ c/q 1/N , where c is a constant independent ofN and q. The idea is to iteratively scale θ by (the inverse of) a diagonal matrix of positive rational numbers, θ = Y (n) −1 θ , compute a Dirichlet-good approximation ofθ using, e.g., the Dirichlet box principle, or the LLL-algorithm, or Lagarias' multi-dimensional continued fractions (MCFs), and then revise the scaling to meet the even/odd constraints, with the hope that the procedure will converge. Write the Dirichlet-good approximation θ ≈p/q and manipulate it as follows: In other words, Y (n)p/q is a Dirichlet-good approximation of θ , provided max i Y (n) ii can be dominated by a bound independent of q andN . Because the initial choice of Y (n) is arbitrary, it is not guaranteed that Y (n)p has the correct even/odd property. Nevertheless, we could revise Y (n) to meet those properties. If a componentp i comes out to be odd and needs to be even, we choose Y (n + 1) ii = 2. If the algorithm has converged, that is Y (n + 1) = Y (n), then the bound becomes Conversely, ifp i comes out to be even with 2 d in its prime number decomposition, we take Y (n + 1) ii = 1/2 d and, at convergence, the bound on the ith component becomes Then this procedure is repeated with the scalingθ = Y (n + 1)θ , in the hope that it converges.
(Weighted) LLL-algorithm
Even though Theorem 1 guarantees that under convergence conditions, Dirichlet-good simultaneous Diophantine approximations can be manipulated so as to yield numerators that have prescribed even/odd properties, we are still left with the problem of coming up with simultaneous Diophantine approximations in the first place.
One of the first computational solutions to the simultaneous Diophantine approximation was the so-called LLL-algorithm by Lenstra, Lenstra and Lovász [21,24,30]. An alternative algorithm based on geodesic multi-dimensional continued fraction expansion was proposed by Lagarias [31]. Both approaches proceed by reduction in the basis of the lattice generated by the columns of where s ↓ 0 is a scaling parameter. Observing that B(s)( p, q) T = ( p − θq, sq) T , it follows that a short vector in the lattice B(s)ZN +1 yields a good approximation. The numerator of this good approximation could be "fixed" by the procedure of Sect. 3.3 to satisfy the even/odd requirement. However, it is proposed to combine the two procedures into a single one-computation of a good approximation from a short lattice vector and fixing the numerator-by introducing a nonuniform diagonal scaling and work on the lattice Λ(s, X ) generated by the columns of Note that for s = 1 and X = x IN ×N , we recover the scaling of [21]. Like the algorithm of Sect. 3.3, this procedure is not guaranteed to be successful, but if it is, it yields solutions guaranteed to be optimal relative to some criterion. The LLL-algorithm produces a basis of short Euclidean norm vectors b * (s, The b * (s, X ) 1 vector is very close to the shortest one. A refined version of the LLLalgorithm captures the genuinely shortest vector of the lattice Λ(s, X ) as follows: Given the reduced basis {b * (s, X ) i : i = 1, . . . ,N + 1}, it can be shown that the shortest (in the sense of the Euclidean norm) lattice vector is to be sought among all . Lagarias' theorem [24, Lemma 5] then implies that a shortest Euclidean norm vector of the lattice is a best X -weighted Diophantine approximation. Observing that and taking s ↓ 0, it becomes clear that a short vector in the lattice B(s, X )ZN +1 provides a good X -weighted Diophantine approximation: With the shortest vector, we construct the best approximation, that is, the approximation that minimizes in the same way as for the good approximation. Before proceeding any further, we take care of a technicality: As one would expect, the simulations also suggest that q grows without bound as s decreases to zero. For the weighted LLL-algorithm, we can prove the following:
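To make the lattice formulation concrete, the sketch below implements a textbook (unweighted) LLL reduction and reads a simultaneous Diophantine approximation p/q off a short vector of the lattice generated by the columns of B(s); the weighted variant would replace the identity block by the diagonal X. This is an illustrative sketch, not the authors' implementation; the scaling parameter s, tolerances and function names are our own choices.

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Textbook Lenstra-Lenstra-Lovasz reduction of the rows of `basis`."""
    B = np.array(basis, dtype=float)
    n = B.shape[0]

    def gram_schmidt(B):
        Bs, mu = np.zeros_like(B), np.zeros((n, n))
        for i in range(n):
            Bs[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
                Bs[i] = Bs[i] - mu[i, j] * Bs[j]
        return Bs, mu

    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):                      # size reduction
            r = round(mu[k, j])
            if r != 0:
                B[k] -= r * B[j]
                Bs, mu = gram_schmidt(B)
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1                                          # Lovasz condition holds
        else:
            B[[k, k - 1]] = B[[k - 1, k]]                   # swap and backtrack
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

def simultaneous_diophantine(theta, s=1e-6):
    """Good simultaneous approximation p/q of the vector theta, read off a short
    vector of the lattice generated by the columns of B(s) = [[I, -theta], [0, s]]."""
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    rows = np.zeros((n + 1, n + 1))
    rows[:n, :n] = np.eye(n)
    rows[n, :n] = -theta
    rows[n, n] = s
    reduced = lll_reduce(rows)
    b = min((v for v in reduced if abs(v[-1]) > s / 2), key=lambda v: v @ v)
    if b[-1] < 0:
        b = -b
    q = int(round(b[-1] / s))
    p = np.rint(b[:n] + theta * q).astype(int)              # lattice vector is (p - theta*q, s*q)
    return p, q

p, q = simultaneous_diophantine([np.sqrt(2), np.sqrt(3)])
print(p, q, np.abs(np.array([np.sqrt(2), np.sqrt(3)]) * q - p))
```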
Theorem 2 For the weighted LLL-algorithm to solve (p̂(s), q̂(s)) = arg min_{(p,q) ∈ Z^Ñ × N} ‖(p − θq, sq)‖_{X⊕1}, where ‖·‖_{X⊕1} is the Euclidean norm weighted by the direct sum of X and 1, we have lim_{s↓0} q̂(s) = ∞.
Proof Assume that there exist s min and q max such that ∀s ≤ s min , we have q ≤ q max . Consider (15) for any 0 < s ≤ s min . By contradicting hypothesis,q(s) ≤ q max . The above yields a Diophantine approximation of θ but not the optimal one as s = 0. Now define (p,q) = arg min Observe that there exists a lower bound δ min such that δ(s) ≥ δ min > 0 as p(s) − θq(s) X cannot reach its minimum sinceq(s) ≤ q max . Now, consider the original , s min . With this choice, we have Then we have The above is clearly a contradiction to the optimality of (p(s),q(s)).
Note that the result appears trivial from Eq. (14) except that the behavior of the last component of the first vector of the reduced basis has not yet been explored in the weighted case.
Comparison between the weighted LLL-algorithm, X (θq − p), and the one of Sect. 3.3, Y −1 (θq − Yp), indicates that a good choice of the weighting might be X = Y −1 . This is only a guiding idea, as X = Y −1 would mean that Yp = p, that is, Y × Dirichlet numerator(Y −1 θ) = Dirichlet numerator(θ ), which does not hold exactly.
For practical computation of the time steps τ = q, we must find numerators p_k that fulfill the odd/even constraints using the LLL-algorithm. The nonuniform variant introduced above makes it simpler to find suitable parameters X and s, but a search is still required. To automate the search, we use a standard genetic algorithm to find weight vectors X with a user-defined s that minimize the number of parity constraint violations of the p_k. This works well in most cases, requiring only a few iterations (typically up to 5) for reasonably sized populations (about 200). We suggest that the standard crossover and mutation operators could be adjusted to improve the performance of the search. In particular, increasing the likelihood of changing the X values corresponding to numerators p_k that violate a constraint and increasing the likelihood of retaining X values for which the corresponding p_k do not violate the constraints may improve performance.
with corresponding constraints s = (1, 1), which means that the numerators p k in the simultaneous Diophantine approximation of θ must both be odd.
Applying the classical LLL-algorithm to solve the simultaneous Diophantine approximation for θ yields rational approximations of very high accuracy, as shown in Fig. 1a. However, most of the resulting approximations p_k/q do not satisfy the parity constraints. Using the weighted LLL-algorithm and varying the diagonal scaling vector X enables us to find solutions of arbitrary accuracy, as shown in Fig. 1b, all of which satisfy the parity constraints for the numerators p_k. For instance, with p = (170921, 307989) and q = 192028, we obtain the transfer time t_f = 2q/ω_12 = 7.1308 × 10^5 (in units of J^{−1}) and a corresponding transfer fidelity that is within 1 − p_{t_f}(1, 3)/p_max(1, 3) = 2.41 × 10^{−6} of the maximum transfer fidelity p_max(1, 3).
The previous example illustrates how we can use the weighted LLL-algorithm to find optimal transfer times that yield very high transfer fidelities, and how we can control the margins of error and ensure the parity constraints are satisfied by adjusting the scaling parameter and diagonal weights in the algorithm.
Estimate of time to attain maximum probability
Our objective is to find an upper bound on the amount of time t it takes to achieve p_t(i, j) ≥ p_max(i, j) − ε_prob. The approach is to translate the specification on the probability ε_prob into a specification on the infinity norm ‖Da‖_∞ of the simultaneous Diophantine approximation error, where Da = θq − p.
Proceeding from (4), recalling that Sgn ( i|Π k | j ) =: s k = e −ıπ(s k −1)/2−2πın k , where n k is some integer, we obtain In the second equation, the sum over k has been replaced by a sum over k ∈ K as states with i|Π k | j = 0 do not contribute to the sum. The third equality stems from the fact that for fixed , e ı π 2 (s −1)+2πın is a global phase factor that is absorbed by the absolute value.
Next, we introduce the attainability condition (8), which is only approximately satisfied using the simultaneous Diophantine approximation. The idea is to expose the gap between the left-hand side and the right-hand side of (8) when t is constrained to emerge from the Diophantine approximation: where it is observed that ∈ K is arbitrary. The last step is to relate the left-hand side of (18) to the simultaneous Diophantine approximation error. Define where S is the subset of linearly independent attainability equations chosen. By definition, any constraint c k := ω k t − 1 2 (s k − s ) = 0 mod 2 with ω k = (λ k − λ )/π can be written as a linear combination of constraints with (k , ) ∈ S, c k = (k , )∈S b k c k , with coefficients b k ∈ {0, ±1}. Furthermore, given ω mn ∈ S with s m = s n and setting t = 2τ/ω mn with τ ∈ N and θ k = 2ω k /ω mn , we can write the constraints as c k = θ k τ − 1 2 (s k − s ) for (k, ) ∈ S 0 . Given a Diophantine approximation that satisfies the parity constraints, we have c k = Da (k, ) mod 2 for (k, ) ∈ S, and c k ≤N Da ∞ for (k, ) / ∈ S, whereN = |S| − 1 is the number of independent constraints reduced by 1. Thus, we have From the above string of inequalities, it follows that for the attainability accuracy prob to be reached, it is sufficient to take Although conservatively derived, the above formula is consistent with the tightly derived simulation results in Fig. 7 (right). We now summarize the situation we have reached: Theorem 3 For homogeneous rings, the ITF specification p t (i, j) ≥ p max (i, j) − prob is achieved at time t = 2q/ω mn (in 1/J units) if q is chosen so that simultaneous Diophantine approximation error Da (q) := p−θq has its infinity norm satisfying (20) and ω mn is the reference transition with respect to which θ = (θ k ) was defined in (10).
There are many simultaneous Diophantine approximation schemes. If we retain the Dirichlet-good one with even/odd constraints on the numerators, under the assumption that the algorithm of Sect. 3.3 converges, the error bound is Da ∞ ≤ 2/q 1/N , and we obtain the further sufficient condition 2|K | sin πN A minimum q that guarantees prob is easily extracted from the above inequality where the latter approximation uses sin(x) ≈ x and is valid if x = 1 4|K | prob 1. As an example will soon show, contrasting the above with numerical simulations of Eq. (4) reveals that the bound O NN is very conservative, mainly because the continuous-time dynamics on the torus was converted to a discrete-time dynamics. The conservativeness is somewhat mitigated by the dimension reduction achieved by the elimination of dark states and symmetries that reduce the number of relevant eigenspaces. For example, for homogeneous ringsN ≈ 1 2 N rather than N . Further improvement of the scaling behavior could be achieved by utilizing tighter simultaneous Diophantine approximations [13, Th.2], [23], but at the expense of significantly complicating the notation. The reward for the conservativeness of this bound is that it is quite general for rings with uniform coupling and their ITF attainable by the algorithm of Sect. 3.3, as it depends on neither the eigenvalues nor the odd/even pattern. Furthermore, it becomes very general for any network subject to the mild modification of replacingN by N and |K | by N . frequencies and there are no dark subspaces. Hence, |K | = 3 and we haveN = 1 independent θ . In this case, our conservative bound implies that we can get within prob of the maximum transition probability in time 12π/ prob .
In practice, simulations suggest that we can achieve very high fidelities in much shorter times. Figure 2 shows that we can achieve > 99.99% of the maximum transfer fidelity for any two nodes with distance 1 in time t = 77.28, and transfer between two nodes with distance 2 in time t = 125 (in units of 1/J ). Notice that the maximum distance between any two nodes in a homogeneous ring of size N = 5 is 2, and hence any transfer can be achieved to within 0.01% of the maximum possible in time t ≤ 125.
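A brute-force scan in the spirit of Fig. 2 can be reproduced with the helpers sketched earlier: evaluate p_t between adjacent nodes of the N = 5 ring on a time grid up to t = 130 (in units of 1/J) and locate the best time. The grid resolution is an arbitrary choice.

```python
H5 = xx_ring_hamiltonian(5)
lam, Pi = eigenprojections(H5)
overlaps = np.array([P[1, 0] for P in Pi])            # <j|Pi_k|i> for adjacent nodes (0-based 0 and 1)
ts = np.linspace(0.0, 130.0, 130001)
probs = np.abs(np.exp(-1j * np.outer(ts, lam)) @ overlaps) ** 2
best = probs.argmax()
print(f"max p_t on [0, 130] = {probs[best]:.6f} at t = {ts[best]:.2f}; "
      f"ITF bound p_max = {p_max(H5, 0, 1):.6f}")
```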
As observed before, for rings of size N = 6, the primary transition frequencies are not rationally independent, implying that we do not have simultaneous attainability. Indeed, Fig. 3 (left) shows that the bound p max (1, 2) is not attainable. Lack of simultaneous attainability does not imply that all bounds are not attainable. Indeed Fig. 3 (right) suggests near-perfect transfer between nodes of distance n = 2.
We can use simulations combined with the LLL-algorithm to estimate the minimum times required to achieve various transfers with a certain maximum error probability. The results for rings of size N = 5 and N = 7, which satisfy the rational independence conditions for simultaneous attainability, as shown in Fig. 4, suggest a power-law scaling. Finally, comparing the scaling of the transfer times for rings of different size in Fig. 5(left) suggests that we have similar scalings for both N = 5 and N = 7 although the constant is larger for N = 7. The scaling behavior for various transfers for a chain of size N = 7 in Fig. 5(right) is similar but more complicated, and the transfer times required to get close to the upper bounds appear to be significantly longer.
Transfer time versus decoherence time
In general there is a trade-off between the error ε_prob and the transfer time t_f required to achieve p_{t_f}(i, j) = 1 − ε_prob. For actual physical realizations of quantum networks, decoherence is generally a limiting factor. In this case, the relationship between the error probability and the expected transfer time can be useful in estimating what error probabilities can be achieved based on the coherence time of the network t_coh.
For instance, in Example 9, we showed that for a ring of size N = 5, we can achieve 99% of the maximum transfer probability between any two nodes in time T ≤ 125 in units of the inverse coupling rate J^{−1}, and we could therefore expect to closely approximate these transfer fidelities, provided the coherence time of the system is on the order of 100 J^{−1} or more. More generally, Figs. 4-5 suggest that we have the power law t_f = c ε_prob^{−α} (in 1/J units), at least for certain types of networks such as rings. In this case, for the algorithm to work it is necessary that the transfer time remain below the coherence time, t_f ≤ t_coh. This means that realistically the attainable error probabilities ε_prob are limited, and we can expect ε_prob ≥ (c/t_coh)^{1/α}; the algorithm of Sect. 3.3 could be used to construct a simultaneous Diophantine approximation compatible with this requirement. Combining Theorem 3 and Eq. (21) also yields an upper bound on the transfer times for which the effect of decoherence should definitively be negligible, although we would like to stress here that this bound is excessively conservative due to the approximations made. Given a concrete physical realization of a quantum network with a specific decoherence model, this information could be used to derive tighter time-dependent bounds on the transfer fidelities and realistic transfer times.
Information transfer (in-)fidelity metric and geometry
In this section, we come back to an issue raised in Sect. 3-namely that the upper bound derived in Eq. (4) can be justified by the fact that it induces a metric on the set of vertices. Unlike the results in the previous sections, most of the results in this section apply specifically to rings, although numerical simulations suggest that similar results may hold for other homogeneous spin networks such as chains.
Definition and motivation of ITF prametric
To develop a geometric picture, we can view a spin network as a pre-metric, or more precisely a prametric space 2, endowed with the prametric that quantifies the information transfer infidelity (ITI). To fix terminology, recall that given a graph G = (V, E), or any set of points V for that matter, a prametric [33, p. 666], [32, p.23] is a function d : V × V → [0, ∞) such that d(i, i) = 0 for all i ∈ V. To derive a suitable prametric on the vertex set V = {|i⟩ : i = 1, . . . , N} from the probability p_max data, we take inspiration from a similar situation in sensor networks [34], where V is the set of sensors and a Packet Reception Rate PRR(i, j) is defined as the probability of successful transmission of the packets from sensor i to sensor j. After symmetrization of the packet reception rate, a prametric (in fact, a semi-metric [35][36][37]) can be defined as d(i, j) = − log PRR(i, j). Should there be a violation of the triangle inequality, say, d(i, j) > d(i, k) + d(k, j), then the distance between i and j is redefined as d(i, k) + d(k, j). The importance of the metric is that it provides a notion of network curvature, which has a dramatic impact on the traffic flow [38,39] in a paradigm that extends to quantum chains [14]. Following sensor network intuition [34], we define d(i, j) := − log p_max(i, j). Obviously, d(i, j) ≥ 0 and, as will be shown in Theorem 4, d(i, i) = 0. We could define the time-stamped prametric by d_t(i, j) = − log p_t(i, j), except that in general d_t(i, i) ≠ 0. To remedy this situation, we could define d(i, j) = inf_{t≥0} d_t(i, j) = − log sup_{t≥0} p_t(i, j). Since, by Cauchy-Schwarz, p_t(i, i) ≤ 1 and p_{t=0}(i, i) = 1, we have sup_{t≥0} p_t(i, i) = 1 and hence d(i, i) = 0. This alternate prametric definition is equivalent to the earlier one when p_max is attainable, but it reveals that this prametric makes the network of finite diameter (sup_{i,j} d(i, j) < ∞) as N → ∞, as Theorem 4 will show. This has the unfortunate consequence of preventing a genuine large-scale analysis. As Sect. 5 will show, a bias rectifies this problem (see also [14]).
Generally, this information transfer infidelity prametric is not a proper distance satisfying the triangle inequality, but for certain networks such as rings with uniform coupling this prametric will be shown to define a proper distance.
This quantum mechanical (pra)metric is quite different from the usual Euclidean distance d E of the spins in the spintronic device. In particular, two spins that are physically close in the medium may be far quantum mechanically, and conversely. If two spins are quantum mechanically far, control is necessary to enable transmissions that are too weak or forbidden by the natural quantum mechanical couplings. This control of information can be viewed as the problem of controlling the quantum mechanical geometry of the network.
ITF distance geometry of homogeneous spin rings
It could be argued that a prametric is sufficient if we are solely interested in assessing the difficulty of communication or fidelity of information transfer between nodes in a network. However, a proper metric allows us to investigate other geometric properties such as the curvature of the network with regard to the ITF.
A prametric d : V × V → [0, ∞) is a pseudo-metric if, in addition, (i) d(i, i) = 0, (ii) d(i, j) = d(j, i), and (iii) the triangle inequality d(i, j) ≤ d(i, k) + d(k, j) holds. A metric or distance is a pseudo-metric that has (iv) the separation property: d(i, j) = 0 if and only if i = j.
Theorem 4 For a spin ring (V_N, E_N) of N uniformly distributed spins with XX or Heisenberg couplings, d_N(i, j) := − log p_max(i, j) has the following properties: 1. For N odd, (V_N, d_N) is a metric space. 2. For N even, (V_N, d_N) is a pseudo-metric space that becomes metric after antipodal point identification. 3. If N = p or N = 2p, where p is a prime number, then the distances on the space of equivalence classes of spins are uniform, i.e., d_N(i, j) = c_N for i ≠ j. Otherwise, the distances are nonuniform. 4. In all cases, lim_{N→∞} d_N(i, j) = 2 log(π/2) for i ≠ j mod (N/2).
Proof To show that (V_N, d_N) is a pseudo-metric space, we need to verify that (i) d_N(i, i) = 0, (ii) d_N(i, j) = d_N(j, i), and (iii) the triangle inequality holds. For a metric space, we must further have (iv) d_N(i, j) ≠ 0 unless i = j.
(i) is clearly satisfied as the projectors onto the eigenspaces are a resolution of the identity, Σ_k Π_k = I, and thus for any unit vector |i⟩ we have Σ_k |⟨i|Π_k|i⟩| = Σ_k ‖Π_k|i⟩‖² = 1. (ii) follows from |⟨i|Π_k|j⟩| = |⟨j|Π_k|i⟩|. The proof of the remaining properties relies on the circulant matrix property of the Hamiltonian H in the single excitation subspace, as shown in Eq. (3) and Table 1.
Observe in Table 1 the double eigenvalues λ k = λ N −k , except for k = 0 and k = 1 2 N if N even. From Table 1, each of these double eigenvalues has two general complex conjugate eigenvectors. These general eigenvectors need not be orthogonal, but observing that v k |v = δ k and v k |v * k = 0, where v * k denotes the complex conjugate, it follows that defines an orthonormal basis ofH. Furthermore, in the basis in whichH is circulant, we have |i = e i , where {e i : i = 1, ..., N } is the natural basis of C N .
Summing over all eigenspaces k = 0, . . . , N /2 gives For N = 2N + 1, it is easy to see that p max (i, j) = 1 if and only if i = j, hence (iv).
For N = 2N + 2, on the other hand, we also have cos( 2π k N/2 N ) = | cos(π k)| = 1, and thus d(i, j) = 0 for i − j = 1 2 N , i.e., the distance vanishes for antipodal points, and thus d(i, j) is at most a pseudo-metric. However, noting that we can identify antipodal points | j and | j + N + 1 , let d be defined on the set of equivalence classes [| j ] for j = 1, . . . , N + 1 instead. (The antipodal identification preserves the ring structure.) At this stage, d is a semi-metric [35,36,40], that is, it satisfies all axioms of a metric except the triangle inequality.
To prove the triangle inequality, we show that j). The definition (4) of p max rewritten in terms of the eigenvectors of H using (27)-(29) gives , 0} is rewritten explicitly in terms of the eigenvectors rather than as in Sect. 3 and β k = s k (m, j). Setting The final equality follows because the LHS and thus the RHS are known to be real and positive. Furthermore, as ρ N is a root of unity, |ρ N | = 1, and recalling |α k | = |β k | = 1, 0, where the last inequality allows for the presence of dark states. Again we have ρ , and as the LHS above is known to be real, we know that we must have γ k = γ N −k . Hence, we can again collect exponential terms pairwise to obtain cosines, which gives for N = 2N + 1: For N = 2N + 2, we simply replace γ 0 by γ 0 + γ N +1 above to obtain This proves (iii) and hence parts (1) and (2) If N is not p or 2 p, then N and (i − j) will have factors (which can be canceled) in common for some (i − j) but not for others and hence we will obtain different distances.
To establish (4), letting N → ∞, it is easily seen that the dependency on i, j is eliminated, provided i = j mod ( 1 2 N ). Hence, taking the norm of the above and then − log(·), it follows that at the infinite ring limit, the distance is uniform for shows that lim N →∞ d N (i, j) = 2 log π 2 ≈ 2 × 0.4516 for i = j mod (N /2).
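As a numerical illustration of Cases 3 and 4, the snippet below (reusing the helpers sketched earlier) tabulates d_N(i, j) = −log p_max(i, j) for the N = 7 ring; since 7 is prime, the off-diagonal distances should coincide, and they can be compared with the limiting value 2 log(π/2) ≈ 0.9032. The ring size is an arbitrary illustrative choice.

```python
N = 7
H = xx_ring_hamiltonian(N)
D = np.array([[0.0 if i == j else -np.log(p_max(H, i, j)) for j in range(N)]
              for i in range(N)])
print(np.round(D, 4))                                     # uniform off-diagonal distances for prime N
print("asymptotic value 2*log(pi/2) =", round(2 * np.log(np.pi / 2), 4))
```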
Case 3 of Theorem 4 allows for a very specific geometrization of the quantum ring in terms of constant curvature spaces. Define the n-sphere of curvature κ as S n κ := {x ∈ R n+1 : x 2 = 1/κ}. We have the following corollary: Notes: In the above, "irreducibly embeddable" means that the embedding cannot happen into a lower-dimensional constant curvature space. By convention, cos −1 takes values in [π/2, π].
Note that this corollary deals with embeddability of the vertices only; however, edges can be mapped isometrically as arcs of great circles on either the sphere of curvature (31) or that of curvature (32). Also note that the symmetry of the simple p = 3 case of the circle S 1 circumscribed to a equilateral triangle is misleading, as in very high dimension ( p → ∞), Eq. (32) yields 1/ √ κ =: R → c p π/2 , that is, all vertices are mapped to the half-sphere of radius R.
Regarding N = 2 p in Case 3, we could first do the antipodal identification on the combinatorial ring (V 2 p , E 2 p ), leading to a (V p , E p ) ring, and then embed (V p , E p ) as in the preceding corollary.
Regarding Case 4 when N is odd, define := max i = j |d N (i, j) − 2 log(π/2)|. Then the metric space (V N , d N ) can be mapped isometrically on the sphere S N −2 κ of radius d ∞ / cos −1 −(N − 1) −1 up to an additive distortion not exceeding , that is, the embedding is quasi-isometric [42, 7.2.G]. The case of an even N is dealt with as before using antipodal identification. The geometry of a genuinely infinite ring (N = ∞ rather than N → ∞) is completely different and is left to future work.
The N even case can be dealt with in a different way. Rather than doing, first, a combinatorial antipodal identification (i = j if i − j = 0 mod ( 1 2 N )) and, then, mapping the quotient space V N / ∼ to the sphere, we could map the combinatorial antipodal points to geometrical antipodal points on the sphere S N −2 κ with the understanding that geometrical antipodal points on the sphere are identified to yield the real projective space RP N −2 . A slight generalization of (32) of Corollary 1 together with 4 of Theorem 4 yields an irreducible embedding of (V N , d N ) into the sphere of cur- . On the other hand, RP N −2 is usually endowed with the standard curvature 1 metric of diameter π/2. To sum up: Corollary 2 For N even, there is an embedding V N → RP N −2 , which is quasiisometric for the scaled distance d N cos −1 − 1 N −1 / 4 log π 2 on V N and the curvature 1 distance on PR N −2 . Furthermore, for N → ∞ the distortion becomes vanishingly small.
Control of information transfer fidelity
To overcome intrinsic limitations on quantum state transfer or speed up transfer, one can either try to engineer spin chains or networks with nonuniform couplings [6,7], or introduce dynamic control to change the network topology [9][10][11].
Our analysis above shows that engineering the couplings is not strictly necessary. For an XX or Heisenberg-type chain with uniform nearest-neighbor couplings, for example, it can easily be verified that the information transfer fidelity between the end spins is unity, and attainability of the bounds means that we can achieve arbitrarily high state transfer fidelities between the end spins if we wait long enough. Engineering the couplings, however, can speed up certain state transfer tasks such as state transfer between the end spins at the expense of others.
A more flexible alternative to fixed engineered couplings is to apply control to change the network geometry and hence speed up state transfer as well as enable some transfers that either were forbidden or had poor ITF. One way this can be achieved is to apply static electromagnetic bias fields to change the energy-level splittings between the spin-up and spin-down states for different nodes in the graph, as suggested e.g., in [12]. To see how the application of such bias fields can alter the transfer fidelities and network geometry, consider a simple, concrete example of a single bias field ζ applied to node in a spin ring with uniform coupling. First, due to translation invariance, we can always relabel the nodes so that the biased node is node N . Then, assuming XX coupling, the Hamiltonian on the single excitation subspace becomes where it is observed that we have the decomposition H (ζ ) where C N is the N × N circulant matrix defined above and E N ,N is a N × N matrix which is zero except for a 1 at position (N , N ).
Physically, applying a large bias field to the N th node in the ring results in a large detuning that effectively eliminates this node from the ring and breaks the ring open, leaving a chain of length N − 1. Hence, in the limit ζ → ∞, we expect the transition fidelities for the first N − 1 nodes to approach those for a chain of length N − 1 while the transition fidelities between the first N − 1 nodes and the final (biased) node approach 0. We now reformulate this intuitively obvious result in precise mathematical language.
Next, we look at the eigenvectors and rewrite the eigenvector equation as Consider first the first k = N equations. Since lim ζ →∞ λ k (ζ ) exists and is finite, it follows from the bottom eigenequation that ζ |v k (ζ ) N remains bounded as ζ → ∞. Therefore, lim ζ →∞ |v k N = 0. Since λ k (∞) is a unique eigenvalue of T N −1 , it follows that lim ζ →∞ |v k (ζ ) 1:N −1 is the corresponding eigenvector of T N −1 . It remains to show that with this |v k 1:N −1 the bottom eigenequation can be made to hold. This is easily achieved by defining By the lemma, for k even, we have lim ζ →∞ ζ |v k (ζ ) N = 0, and therefore the k < N eigenequation holds with |v k (ζ ) N going to zero faster than 1/ζ . For k odd, |v k (ζ ) N goes to zero as c/ζ , where c = 0 is some constant. By the root-locus result, for ζ large enough, all eigenvalues are distinct, and we have p (ζ,N ) where the second equality is understood as the ζ → ∞ limit. To complete the proof, it therefore remains to look at |v N (ζ ) . The last k = N eigenequation easily implies that ζ |v N (ζ ) 1:N −1 remains bounded as ζ → ∞. Therefore, lim ζ →∞ |v N (ζ ) 1:N −1 = 0. To normalize the eigenvector, we take lim ζ →∞ |v N (ζ ) N = 1. The latter together with (35) proves the theorem.
Thus, we have a systematic way to compute the asymptotic transfer probability of a ring with high bias from the transfer probability of a chain without bias.
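This limiting behavior can be checked numerically with the helpers sketched earlier, assuming the bias enters as H(ζ) = C_N + ζ E_{N,N}: as ζ grows, the ITF between nodes 1 and 8 of an N = 9 ring should approach the value for the end spins of an unbiased chain of length 8, i.e., approach 1. The bias values below are arbitrary illustrative choices.

```python
N = 9
H0 = xx_ring_hamiltonian(N)
for zeta in (0.0, 10.0, 100.0, 1000.0):
    Hb = H0.copy()
    Hb[N - 1, N - 1] += zeta          # static bias detuning node N (index N-1)
    print(f"zeta = {zeta:7.1f}   p_max(1, 8) = {p_max(Hb, 0, 7):.4f}")
```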
Example 10 (Dynamic Routing.) As an illustration of how these results can be used, consider a ring of size N = 9. The maximum transfer fidelities between nodes i = j for this ring without bias are quite low, 0.4094 and 0.4444. However, applying a large bias to node 9 changes the maximum transfer fidelities. In particular, the maximum transfer fidelity between nodes 1 and 8, 2 and 7, 3 and 6 and 4 and 5 now approaches 1. Figure 6 shows a visual representation of the transfer fidelities for the ring without bias (left) and with bias (right). This result is consistent with Theorem 5, as using Lemma 2, it is easily verified that p (8) The example also shows that a finite bias is sufficient to enable almost perfect state transfer in practice, despite the fact that the ring only becomes a chain in the limit when an infinite bias is applied to node 9. We also used the LLL-inspired algorithm to estimate the transfer time as a function of the infidelity of the transfer. We note here that it was crucial to use the weighted LLL-algorithm to generate a range of simultaneous Diophantine approximations, which generally did not satisfy the parity constraints on the numerators, and to use the idea of combining approximations to satisfy the constraints. With this approach, we were able to find solutions satisfying all of the parity constraints on the numerators over a wide range of infidelities to estimate the transfer times required as a function of the tolerated infidelity. The results, shown in Fig. 7 (left), suggest that high fidelities are indeed attainable for modest biases, and the apparent linearity of the data in the bilogarithmic plot still suggests a polynomial scaling. However, the actual transfer times are significantly higher in this case than in previous examples. We point out here that our algorithm is not guaranteed to find the shortest possible time although Fig. 7 (right) shows that there is a good correlation between the Diophantine approximation error and the observed infidelity of the transfer, as already anticipated by Eq. (20). Furthermore, the algorithm enables us to estimate necessary transfer times far beyond the regime accessible by brute-force numerical simulations.
This example shows how a dynamic routing scheme can be implemented to transfer information from any node in a ring to any other node with fidelity approaching unity by simply applying bias fields to different nodes. For transfer between nodes 1 and 8, 2 and 7, 3 and 6, or 4 and 5, it suffices to apply a large bias to node 9. If we wish to transfer information from node 1 to 4, then translation invariance of the ring allows us to shift the labels by 2, so that node 1 becomes 3 and 4 becomes 6, and applying a bias to the new node 9 will enable the transfer.
Further reflection shows that we can achieve maximum transfer fidelities approaching unity for transfer between any pair of nodes in a ring of size N , provided N is odd by simply biasing the node in the middle between the pair of spins. This is because in this case N − 2 is odd, so there must be an odd number of spins along one path around the ring and an even number between the spins around the other. By applying the bias in the middle of the path with an odd number of spins, we asymptotically reach a chain with N − 1 (even) spins. In this chain, the transfer probability between spins mirrored at the center is 1, which is specifically true for the source and target spin with an even number of spins between them in the chain.
If N is even instead, then the situation is more complicated. If there is an odd number of spins between source and target along the ring, then applying a bias at the middle creates an odd chain where source and target are connected with probability 1 as they are at mirror-symmetric positions in the ring. If there is an even number of spins between source and target, then applying a single bias cannot achieve perfect information transfer as the spins can never be at mirrored positions in the odd chain (which are the only ones in the chain perfectly connected). There are, however, multiple solutions to apply a bias at two spins that can asymptotically generate a suitable chain.
In practice, it may be possible and even preferable to simultaneously apply biases to several nodes instead of a single node to shape the overall potential landscape. This case is more difficult to treat analytically, but preliminary results [43] suggest that numerical optimization can be used in this case to optimize the applied biases to achieve significant reductions in the transfer times and the magnitude of the required bias fields, as well as to deal with practical issues such as leakage of the bias fields, i.e., the tendency of a bias applied to one node to also affect nearby nodes.
Conclusion
The concept of maximum transfer fidelity for information transfer between nodes in a network of interacting spins was introduced, and criteria for attainability of the bounds in terms of the transition frequencies of the network were given. Attainability was shown to be related, theoretically, to minimality of a linear flow and, computationally, to a translation on a torus. This last connection enabled us to derive upper bounds on the time required to realize transfer fidelities within prob of the maximum transfer fidelity, for arbitrary prob > 0, via the simultaneous Diophantine approximation. Algorithms were discussed to find the required approximations.
The ultimate aim of this analysis is to understand the intrinsic limitations of information transfer in spin networks and utilize this understanding to engineer networks with favorable bounds on the information transfer fidelities and dynamic attainability properties, so that high spin transfer fidelities can be attained in short times, enabling fast transfer and minimizing the effects of noise and decoherence. An advantage of our approach of combining general ITF bounds and asymptotic attainability conditions with an algorithm to estimate the time required to achieve transfer within a set margin of error, compared to engineering the spectrum of the network Hamiltonian to admit perfect state transfer, for example, is that the latter condition is generally a too strong requirement, as in practice there are always margins of error. Therefore, it makes more sense to ask how much time is required to achieve a certain transfer fidelity for a given acceptable margin of error , and try to optimize the network topology, couplings or biases to achieve the best possible transfer times for the acceptable margins of error.
The general results were applied specifically to regular spin structures such as rings with uniform coupling. In this case, the information transfer infidelity prametric induced by maximum transfer fidelity takes on full significance as it can be shown to be a proper metric defining an information transfer infidelity geometry for the network, which is significantly different from the physical network geometry. The analysis shows that the intrinsic transfer fidelities for simple networks such as rings are often attainable asymptotically, but the times required to achieve high fidelities can be very long. The intrinsic bounds on the ITFs and transfer times can be favorably changed, however, by simple Hamiltonian engineering such as applying spatially distributed static bias fields. In particular, it was shown how such simple controls can be used to alter the information transfer fidelities and geometry of a network. It was demonstrated how this idea can be applied to enable or disable information transfer between a pair of nodes in the network. Simple bias controls are sufficient to direct information flow between nodes. By changing the biases, different transfers can be targeted, and thus a spin ring with fixed couplings can be turned into a simple quantum router for information encoded in excitations of a spin network.
Directions for future work include optimizing information transfer in spin networks via optimal control to achieve faster and more efficient dynamic routing in more complex spin networks. While this work focused on transfer of a single excitation, the concepts and analysis can also be applied to the case of encoding and simultaneous transfer of multiple excitations. This is interesting as it could increase the information transmission capacity of the network. Finally, although simulation results for similar spin systems suggest that some degree of intrinsic robustness of state transfer and the ability to mitigate the effects of noise, decoherence or fluctuations in the couplings via control [10,11,44], the sensitivity of transfer fidelities with regard to noise and deleterious effect of the environment need to be investigated for specific physical realizations of spin networks. | 2017-05-12T04:59:01.859Z | 2014-08-16T00:00:00.000 | {
"year": 2015,
"sha1": "50f1fb9d9ade99622efc12af88026107b46fecf2",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11128-015-1136-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "f69e31f0889a06f80708317cbfce3f4594d31717",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics",
"Mathematics"
]
} |
266900591 | pes2o/s2orc | v3-fos-license | Approaching expert-level accuracy for differentiating ACL tear types on MRI with deep learning
Treatment for anterior cruciate ligament (ACL) tears depends on the condition of the ligament. We aimed to identify different tear statuses from preoperative MRI using deep learning-based radiomics with sex and age. We reviewed 862 patients with preoperative MRI scans reflecting ACL status from Hunan Provincial People’s Hospital. Based on sagittal proton density-weighted images, a fully automated approach was developed that consisted of a deep learning model for segmenting ACL tissue (ACL-DNet) and a deep learning-based recognizer for ligament status classification (ACL-SNet). The efficacy of the proposed approach was evaluated by using the sensitivity, specificity and area under the receiver operating characteristic curve (AUC) and compared with that of a group of three orthopedists in the holdout test set. The ACL-DNet model yielded a Dice coefficient of 98% ± 6% on the MRI datasets. Our proposed classification model yielded a sensitivity of 97% and a specificity of 97%. In comparison, the sensitivity of alternative models ranged from 84 to 90%, while the specificity was between 86 and 92%. The AUC of the ACL-SNet model was 99%, demonstrating high overall diagnostic accuracy. The diagnostic performance of the clinical experts as reflected in the AUC was 96%, 92% and 88%, respectively. The fully automated model shows potential as a highly reliable and reproducible tool that allows orthopedists to noninvasively identify the ACL status and may aid in optimizing different techniques, such as ACL remnant preservation, for ACL reconstruction.
and errors due to overload and fatigue. Most prior work has focused on classifying injuries 13,14, detecting abnormalities 17,18 and automatically segmenting ligaments with a convolutional neural network (CNN) 19. Radiomics employs advanced computational approaches to convert medical images into quantitative features 20,21,24 and provides new perspectives to aid in the diagnosis, recognition, treatment response and prognosis of diseases 22,23. Deep learning methodologies predicated on image signal intensity encounter limitations in directly integrating data pertinent to the morphology of anterior cruciate ligament (ACL) tissues and the gray level co-occurrence matrix (GLCM) attributes. These attributes are crucial in determining the status of the ligament, a process routinely employed by senior radiologists in clinical assessments. This gap highlights a need for advanced algorithms that can effectively incorporate both intensity and textural feature analyses for more comprehensive and accurate ACL evaluations.
To our knowledge, although several studies have shown the importance of preoperative MRI classifications, no prior studies have used deep learning and radiomics approaches to recognize ACL tear types from preoperative MR images 2,6,25 .Thus, we proposed a novel method for constructing a deep learning-based radiomics with sex and age to precisely recognize the status of the ACL that integrates the following: (i) a system that recognizes abnormalities using deep learning, (ii) a CNN for automated ligament tissue segmentation and (iii) a deep learning-based classifier for ligament status recognition that incorporates 2D images, 2D tissue shape and GLCM radiomics features, age and sex.
MRI datasets and ethical approval
Approval of the study protocol and a waiver of informed consent were obtained from the Hunan Provincial People's Hospital Ethics Council. All methods were performed in accordance with the relevant guidelines and regulations. Researchers collected the preoperative knee MR images of all subjects who underwent knee arthroscopy from January 2019 to August 2022. Patients who met any of the following criteria were excluded: age less than 18 years; tumors, chronic ACL tears, partial ACL tears, multiple knee ligament tears or bone fractures around the knee; and diseases that could affect the quality of the ACL, such as metabolic arthritis and knee pigmented villonodular synovitis. The conditions of all subjects were confirmed by arthroscopic pathology, which was considered the reference standard for diagnosis.
The participant enrollment process is shown in Fig. 1. Among the 1023 initial participants, 862 were finally recruited after we excluded 51 patients whose data were acquired after surgery as well as 13 patients with distorted high signals and 97 with occlusion of the ACL on the MR images. Among the 862 participants, 324 had intact ACLs and 538 had ACL tears on their baseline MR images. Table 1 depicts the classification of ligament status for the study cohort. According to ligament status, the participants were randomly allocated into a development set (n = 772) and a holdout test set (n = 90). The development set was further divided into a training set (n = 692) and a validation set (n = 80). Prior to training, the images underwent standardized preprocessing, including normalization and standardization. Data augmentation techniques (such as random clipping, flipping, shifting, tilting, and scaling) were applied to improve the network's generalization capability during training; a minimal augmentation pipeline is sketched below. During training, regularization methods such as dropout and weight decay were utilized, and early stopping was used throughout model development and validation to guard against overfitting.
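A minimal augmentation and normalization pipeline of the kind described above could look as follows in Keras; the specific parameter values are placeholders, since the exact settings used in the study are not reported.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder augmentation parameters: flips, shifts, tilts and scaling as described.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalization
    rotation_range=10,        # tilting
    width_shift_range=0.1,    # shifting
    height_shift_range=0.1,
    zoom_range=0.1,           # scaling
    horizontal_flip=True,     # flipping
)
val_datagen = ImageDataGenerator(rescale=1.0 / 255)   # no augmentation for validation data
```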
Deep learning for ACL detection
The ACL status recognition system was developed on a Dell XPS 8930 server (hexa-core 3.20 GHz processor, 16 GB RAM and one NVIDIA GeForce GTX 2080 video card) and implemented in Python (version 2.7, Python Software Foundation, Wilmington, Del). The deep learning models were coded using the Keras 26 framework with the TensorFlow-GPU 1.15 27 backend.
The schema consists of three separate components (see Fig. 2a). In our study, the segmentation masks were generated using a convolutional neural network (CNN) based on the U-Net architecture 28,29 , which is particularly well suited to medical image segmentation because of its efficiency in learning from a limited amount of data. This network was trained to identify ACL tissue in the MR images, as depicted in Fig. 2b.
The U-Net-based CNN, specifically adapted for this study, takes 512 × 512 × 3 MR images as input and outputs the segmentation of the whole ACL tissue. The segmentation process is automated and leverages edge detection algorithms inherent to the CNN, which have been optimized to recognize the complex anatomy of the knee and the specific texture of ACL tissue in MR images.
The segmentation process performed by the U-Net is not manual but automated; however, it is supervised in the sense that the network was initially trained on a dataset of MR images where the ACL tissue had been manually delineated. This training enables the network to learn the characteristic patterns of the ACL and apply this knowledge to new, unseen images to produce accurate segmentation masks.
Representative examples of the segmentation results produced by our deep learning model, referred to as ACL-DNet, are provided in Fig. 3. These examples showcase the model's ability to accurately delineate the ACL tissue, which is a testament to the robustness of the feature set developed through this process.
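For readers unfamiliar with the U-Net idea, a heavily reduced Keras sketch of an encoder-decoder with skip connections is shown below; the depth, filter counts, optimizer and loss are illustrative assumptions and do not reproduce ACL-DNet.

```python
# Reduced U-Net-style sketch for binary ACL segmentation (illustrative only).
from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(512, 512, 3)):
    inp = layers.Input(shape=input_shape)
    # Encoder: two convolution stages with downsampling.
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck.
    b = layers.Conv2D(64, 3, activation="relu", padding="same")(p2)
    # Decoder: upsample and concatenate the skip connections.
    u2 = layers.concatenate([layers.UpSampling2D()(b), c2])
    d2 = layers.Conv2D(32, 3, activation="relu", padding="same")(u2)
    u1 = layers.concatenate([layers.UpSampling2D()(d2), c1])
    d1 = layers.Conv2D(16, 3, activation="relu", padding="same")(u1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)   # per-pixel ACL mask
    return Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```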
ACL characterization and selection
Image processing was performed using the open-source Python package pyradiomics (version 2.1.1; https://pyradiomics.readthedocs.io/en/latest/) to extract MR radiomics features 30 (see Fig. 2c). The segmented ACL mask was overlaid onto each MRI sequence, and the extracted tissue segmentations and a standard atlas were used to extract 21 features of interest for each study (see Fig. 3). Quantitative imaging features (e.g., 2D shape and GLCM) were extracted and then thresholded to obtain qualitative features (e.g., mesh surface, pixel surface, perimeter, maximum diameter, autocorrelation, joint average and cluster prominence). Unsupervised clustering, Spearman correlation analysis, univariate analysis and feature selection algorithms (FSAs) 31 were used to reduce the dimensionality of the radiomics features (see Fig. 4). In the Spearman correlation analysis, the threshold was set to 0.9. Features with p < 0.05 in the univariate analysis were retained for further analysis. Features were then scored based on the ranks provided by a random forest algorithm with a tenfold cross-validation strategy, enabling us to reduce their dimensionality and select highly discriminative features for the differential task 32 . Of these, 6 2D shape features (20%) were selected because of their significant correlation with ACL status (Wilcoxon rank-sum test, p < 0.05). The complex heatmap (see Fig. 4a) shows clustering of these features in different clusters of patients.
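A sketch of how such an extraction and selection pipeline might look with pyradiomics, SciPy and scikit-learn is given below; the study list, the binary intact-versus-torn grouping used in the rank-sum test, and the random-forest settings are simplifying assumptions, not the authors' code.

```python
# Illustrative radiomics extraction and selection (a sketch, not the authors' pipeline).
import numpy as np
import pandas as pd
from scipy.stats import ranksums
from radiomics import featureextractor            # pyradiomics
from sklearn.ensemble import RandomForestClassifier

extractor = featureextractor.RadiomicsFeatureExtractor(force2D=True)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("shape2D")      # mesh/pixel surface, perimeter, ...
extractor.enableFeatureClassByName("glcm")         # autocorrelation, cluster prominence, ...

rows = []
for image_path, mask_path in studies:              # hypothetical list of (MRI, ACL mask) paths
    result = extractor.execute(image_path, mask_path)
    rows.append({k: v for k, v in result.items() if not k.startswith("diagnostics")})
features = pd.DataFrame(rows).astype(float)

# Drop one feature of every highly correlated pair (|Spearman rho| > 0.9).
corr = features.corr(method="spearman").abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
features = features.drop(columns=[c for c in upper.columns if (upper[c] > 0.9).any()])

# Keep features that separate intact vs torn ACLs (rank-sum p < 0.05), then rank
# the survivors by random-forest importance to pick the discriminative subset.
torn = np.asarray(y) > 0                           # y: hypothetical status labels (0 = intact)
kept = [c for c in features if ranksums(features[c][~torn], features[c][torn]).pvalue < 0.05]
rf = RandomForestClassifier(n_estimators=200).fit(features[kept], y)
ranking = sorted(zip(kept, rf.feature_importances_), key=lambda t: -t[1])
```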
Deep learning approaches for developing a differential diagnosis system
For each patient, the image inputs are combined with quantitative inputs (6 radiomics features and 2 clinical factors) using the pretrained VGG16 33 as a backbone network to calculate a probability for each diagnosis (ACL-SNet, shown in Fig. 2d). The last fully connected layer at the top of VGG16 is removed, and global max pooling takes the maximum value of each feature map to transform it into a raw value. The pretrained VGG16 first learns the relevant features of the image inputs as a 'warm up'. At the same time, the radiomics features are automatically extracted from the ligament tissue masks. We used the highly informative pretraining signatures combined with clinical factors and radiomics features as the input of the diagnostic model, which was constructed with 2 dense layers, 1 dropout layer (rate 0.25) and a softmax classifier and fine-tuned on the input (a 1D vector of features). The purpose of this study was to decode general preoperative phenotypes present in multiple ACL statuses and encapsulate the differential mapping between features and diseases.
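The following Keras sketch illustrates this kind of fusion head on a pretrained VGG16 backbone; the dense-layer widths and the optimizer are assumptions, and the code is not the authors' ACL-SNet implementation.

```python
# Illustrative ACL-SNet-style fusion: VGG16 image branch with global max pooling,
# concatenated with 8 numeric inputs (6 radiomics features + age + sex).
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

image_in = layers.Input(shape=(512, 512, 3))
numeric_in = layers.Input(shape=(8,))

backbone = VGG16(weights="imagenet", include_top=False)   # pretrained 'warm-up' features
img_feat = layers.GlobalMaxPooling2D()(backbone(image_in))

x = layers.concatenate([img_feat, numeric_in])
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.25)(x)
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(6, activation="softmax")(x)             # intact + five tear types

model = Model([image_in, numeric_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```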
Performance comparison with radiologists
To compare the performance of the DL system with that of clinical experts, the MRI data were independently and blindly presented to a senior and a junior radiologist with 9 and 5 years of experience in diagnosis, respectively. The radiologists were given the same MR images and clinical factors available to the DL system and were informed of the equal distribution of diagnoses across patients. One 4th-year orthopedics resident watched the video recordings of the knee arthroscopy surgeries and checked the operation records to give the final diagnosis, which was used as the gold standard in this study.
Evaluation metrics
In our study, we utilized a range of metrics to assess the performance of our deep learning model in the detection and characterization of anterior cruciate ligament (ACL) injuries from MRI images. With TP, FP, TN and FN denoting the true-positive, false-positive, true-negative and false-negative counts, these metrics are defined as follows:

Sensitivity = TP / (TP + FN), Specificity = TN / (TN + FP), Accuracy = (TP + TN) / (TP + FP + TN + FN).
The confusion matrix was used to detail true positives, false positives, true negatives, and false negatives, providing a comprehensive view of classification accuracy. The area under the curve (AUC), with 95% confidence intervals (CIs) calculated from 2000 iterations, offered an aggregate measure of the model's performance across all classification thresholds, with class probabilities converted to decisions by the 'argmax' function. Additionally, a radiomics classifier, developed using a random forest algorithm and tenfold cross-validation on the development set, was employed to evaluate the differential value of the quantitative radiomics features. A logistic regression-based model was also created to analyze the clinical factors. The differential performance of these components, including a conventional VGG16 model with image inputs only, was examined on the test set. Comparisons of the AUCs between the DL-based system and the readers' evaluations were conducted to ascertain any significant disparities in performance.
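A minimal sketch of how these quantities can be computed from the model outputs (one-vs-rest sensitivity and specificity per class, overall accuracy, and a multiclass AUC) is shown below; the bootstrap CI estimation is omitted, and the function is an illustration rather than the authors' evaluation script.

```python
# Illustrative computation of the reported metrics (sketch, not the authors' code).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, y_prob):
    """y_true: arthroscopy-confirmed status (0..5); y_prob: per-class probabilities."""
    y_pred = y_prob.argmax(axis=1)                  # 'argmax' decision rule from the text
    cm = confusion_matrix(y_true, y_pred)           # rows: true status, cols: predicted
    accuracy = np.trace(cm) / cm.sum()

    # One-vs-rest counts give per-class sensitivity and specificity.
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - (tp + fn + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)

    # Multiclass AUC (one-vs-rest); the 95% CI would come from bootstrap resampling.
    auc = roc_auc_score(y_true, y_prob, multi_class="ovr")
    return accuracy, sensitivity, specificity, auc
```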
Performance of DL approach
ACL-DNet segmentation was trained for 30 epochs, producing a Dice coefficient of 0.98 ± 0.06. To compare our system with previous image segmentation applications 34,35 , we evaluated two state-of-the-art segmentation DL models: the pyramid scene parsing network (PSPNet 34 ), a commonly used semantic segmentation network, and a deep convolutional encoder-decoder architecture for image segmentation (SegNet 35 ), as shown in Table 2.
The pretrained conventional VGG16 was trained for 87 epochs with image inputs. Of the 21 radiomics features, the 6 2D shape and GLCM features that showed significant differences according to ACL status were used, along with age and sex, as the quantitative inputs to ACL-SNet. After some of the weights were transferred from the pretrained conventional VGG16, ACL-SNet was established following 46 epochs of fine-tuning. Our model achieved an accuracy of 98.8% on the test set. The performances of the conventional VGG16, the radiomics classifier, and the predictor using sex and age are summarized in Table 3; they achieved accuracies of 89.3%, 75.3%, and 68.1%, respectively, on the test set. The automated hybrid model was superior to its individual components in ACL status identification across all datasets.
The ACL-SNet system gave a correct differential diagnosis for 88 of the 90 (97%) patients in the test set. There was no difference in diagnostic accuracy between the ACL-SNet system and the senior radiologist on the same set of patients (mean percentage correct across participants, 94%; [95% CI 0.08, 0.32]; p = 0.12). The ACL-SNet system performed better than the junior radiologist and the orthopedics resident (67-75 of 90 items correct [74-83%]; mean percentage correct across participants, 78.5%; [95% CI 0.06, 0.25]; p < 0.003). Comparisons of the ACL-SNet system to the senior radiologist demonstrated similar findings: the AI system's performance was similar to that of the clinical expert on all measures and was consistently better than that of the junior radiologist and the orthopedics resident in the differential diagnoses (see Fig. 5a).
Evaluating the clinical experts and DL approach
We further evaluated the strengths and weaknesses of the DL system and of the clinical experts using confusion matrices. When evaluated in terms of the differential diagnosis with respect to the true status, the DL system was found to perform well for most diagnoses but poorly for others (e.g., Type 2 and Type 5 tears, as shown in Fig. 6a). The clinical experts, meanwhile, made errors on a number of different diagnoses (see Fig. 6b); however, the DL system and the clinical experts made different types of errors. White boxes may indicate a higher frequency of certain types of misclassification. The senior radiologist, junior radiologist, and orthopedics resident each showed different misclassification patterns. For example, these patterns may indicate that the junior radiologist frequently confused type 2 and type 3 injuries, whereas the orthopedics resident may have had difficulty distinguishing between intact ACLs and type 1 injuries. The senior radiologist has more experience and showed fewer errors overall, but specific confusions may still occur, such as between type 4 and type 5 lesions.
For the majority of cases, the DL system correctly identified the ACL status (88 of 90 patients); for these correctly diagnosed cases, the DL system assigned an average probability of 97% to the most likely predicted status. In contrast, in the few cases where the DL system failed to predict the correct ACL status (2 of 90 patients), the system's confidence in the incorrect status was only 3%. This indicates that the system is significantly less certain about its false predictions.
Discussion
We developed a deep learning model to differentiate ACL statuses. Our model incorporates information from the 2D ACL image signal intensity, 2D ligament shape features, sex and age into one model, in addition to DL-based automated ACL segmentation and a fully automated pipeline. Although maximal ligament repair is the standard treatment regardless of complete ACL tear status, preoperative prediction of ACL status is still helpful in guiding treatment and selecting the appropriate style of operation. Deep learning and radiomics are representative quantitative methodologies for medical image analysis that extract high-dimensional signal features and compute numeric information.
A number of studies have applied artificial intelligence and radiomics to diagnosing diseases [36][37][38] . Rauschecker et al. 36 developed an AI system that integrated AI-extracted radiomics features into a probabilistic differential diagnosis using Bayesian inference via data-driven and domain-expertise methodologies. Choi et al. 37 developed a composite CNN and radiomics approach to predict the isocitrate dehydrogenase (IDH) mutation status of gliomas from preoperative MR images. Here, we constructed a hybrid system as a fusion of methods with complementary strengths that integrates DL-based identification, radiomics features and clinical factors to explore whether the differential accuracy of clinical experts could be improved.
Several studies 13,[39][40][41] have used deep CNNs to diagnose ACL injuries. Chang et al. 13 reported a multiple-CNN approach for the detection of complete ACL tears. Tran et al. 39 built a deep learning-based ACL tear detector. Awan et al. 40 presented a new deep learning technique for localizing the ACL tear region in MRI images. However, all of these previous studies utilized proton density and T2-weighted sequence MRI scans for the detection of ACL tears. In contrast, our use of conventional MRI is significant because it reflects a modality more commonly available in clinical practice.
Several studies have shown the importance of preoperative MRI classification of ACL tears, revealing that this classification can affect the choice of surgical technique during the preoperative assessment. MRI classification, for instance, can predict the success rate of the primary ACL repair technique 2,6 ; specifically, 90% of type I tears and 88% of type II tears could be repaired, while only 14% of type III tears could be repaired with this technique 2 . For some advanced ACLR techniques, such as remnant preservation, augmented remnant repair, repair with bioactive composite scaffolds, and remnant tensioning, preoperative MRI classification is also crucial because all of these techniques involve the remnant ACL and require a sufficient remnant length [8][9][10][11][12]25 . The remnant preservation technique showed better results only in patients with remnant lengths > 20%, which means that patients with MRI classifications of type 1 and type 5 tears may not be suitable for this method 42 . Additionally, the biological internal bracing technique can only be used for patients classified with a type 1 or type 2 ACL tear, and the relevant MRI assessment must be completed before surgery 43 .
In this study, our proposed approach was tailored to address certain limitations of previous methods, particularly those related to incorporating ACL tissue morphology and textural features into the analysis. While image signal intensity provides valuable information, it does not encompass all of the diagnostic features a senior radiologist would consider. The DL system was investigated as a way to fuse the perceptual and cognitive information contained in radiologic images. First, a CNN was trained on MR images to detect the ACL. Then, quantitative image and ACL features were explicitly derived using image processing methodologies. Finally, this information was fused with a small number of clinical factors using a pretrained classification model to yield the differential diagnosis and classification, which can benefit surgeons' preoperative decision-making regarding the ACL reconstruction technique.
The combination of DL-based ACL identification, radiomics features and clinical factors enhanced the identification performance in our study and consistently yielded better performance than the use of single modalities. The deep learning system achieved high diagnostic performance in recognizing the status of the ACL, with an AUC of 0.99. Furthermore, there was no statistically significant difference between the proposed system and clinical experts with different levels of experience in recognizing ACL status.
Our study has several limitations. Primarily, it employs a cascaded system of two separate deep learning models rather than a unified end-to-end network. While this dual-model structure adds training complexity, it offers the advantage of modular adaptability for diverse applications. Additionally, the study relies exclusively on conventional MR images, omitting advanced techniques such as perfusion and diffusion-weighted imaging. Although this limits the scope of detectable features, conventional MRI remains more accessible and practical. Furthermore, our dataset does not encompass rare or unique ACL tear types, a constraint that may affect the model's comprehensive diagnostic capabilities. To address these gaps, future research could explore synthetic data augmentation, transfer learning, and few-shot learning methods to enhance the model's performance with limited data samples. We also propose developing multicenter collaborations to accumulate a more varied dataset, including rare ACL tear types, thereby broadening the model's learning spectrum and diagnostic applicability.
In conclusion, we developed a model utilizing deep learning and radiomics techniques that can reliably identify the status of the ACL and classify ACL tears using a fully automated process based on MR imaging. Our model has the potential to be used in the clinic for the noninvasive characterization of ACL tissue to support personalized treatment planning.
Figure 2 .
Figure 2. Image demonstrating the overview of the deep learning system. (a) Schematic of the deep learning network demonstrating the proposed architecture with the complete set of features used by the deep learning system to differentiate the status of the ACL, divided into three categories: clinical, signal and radiomics. (b) Schematic of the U-Net architecture used for ACL signal detection (ACL-DNet). (c) Multiple quantitative features are calculated for every ligament in every patient, including those shown in this example. These features are stored, providing a rich quantitative description of the tissue. For differential diagnosis, the features are thresholded and then probabilistically fused in a deep learning network. (d) Illustration of the hybrid approach for recognizing the ACL status by maximizing the synergy between the image features from the pretrained weights and the numeric inputs (ACL-SNet). ACL = Anterior cruciate ligament, GLCM = Gray Level Co-occurrence Matrix, MCC = Maximal Correlation Coefficient, PD = Proton Density, SPAIR = Spectral Attenuated Inversion Recovery.
Figure 3 .
Figure 3. Representative example of ACL tissue segmentation. (a) The original grayscale images and segmentation of the ACL of a 19-year-old male patient, which was confirmed as intact with an arthroscopic hook probe. (b) The original grayscale images and segmentation for an 18-year-old male patient with a confirmed type 2 ACL tear. (c) The original grayscale images and segmentation for a 46-year-old female patient with a confirmed type 4 ACL tear. (d) The original grayscale images and segmentation for a 37-year-old female patient with a confirmed type 5 ACL tear. Red indicates the ground truth; blue indicates the prediction by ACL-DNet.
The development and test set split resulted in 772 individuals in the development set and 90 in the test set, the latter comprising 34 intact ACLs, 9 type 1 tears, 12 type 2 tears, 22 type 3 statuses, 5 type 4 tears, and 8 type 5 tears. No significant difference was found between the two sets.
Figure 4 .
Figure 4. Representations of radiomic features. (a) Unsupervised clustering of participants on the x-axis and radiomics feature expression on the y-axis reveals that clustered patients have similar radiomics expression patterns. (b) An example showing no correspondence with radiomics expression patterns. (c) Correlation coefficient matrix between radiomics variables. (d) Ranks and scores identifying important radiomics features.
Figure 5 .
Figure 5. Graphs showing the performance of the hybrid deep learning system and that of the clinical experts. (a) Performance is estimated as the percentage correct in the differential diagnosis across 90 test patients (6 statuses). Each circle represents a group, and the horizontal line represents the mean across each group. The horizontal dashed line is the performance of the ACL-SNet system. Error bars represent 95% binomial probability confidence intervals. (b) Receiver operating characteristic (ROC) curves for the ACL-SNet system (blue) and the clinical experts (other colors). The ACL-SNet system has an area under the ROC curve (AUC) similar to that of the radiologists (black and yellow).
Figure 6 .
Figure 6. Confusion matrices depicting the errors in differential diagnosis by the DL system and the clinical experts for each disease. In general, the true status is depicted along the x-axis, and the system-/expert-diagnosed status is depicted along the y-axis, with the color bar showing the number of participants whose true status was identified as the corresponding status on the y-axis. A perfect recognition algorithm would result in yellow squares along the diagonal from top left to bottom right. White rectangles on the DL system's confusion matrix represent mistakes made by the system.
Table 2 .
Performance of ACL-DNet vs Alternative DL Systems in ACL Segmentation.
Table 3 .
Performance of the Conventional VGG16, Radiomics Classifier and Predictor in Identifying ACL Status. | 2024-01-11T06:17:20.205Z | 2024-01-10T00:00:00.000 | {
"year": 2024,
"sha1": "f563bc7f2018168f93323013223f31df4ea174ad",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-024-51666-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29189d76337284125d7685831ed432fab286ceb6",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
335836 | pes2o/s2orc | v3-fos-license | Distributed Deep Q-Learning
We propose a distributed deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is based on the deep Q-network, a convolutional neural network trained with a variant of Q-learning. Its input is raw pixels and its output is a value function estimating future rewards from taking an action given a system state. To distribute the deep Q-network training, we adapt the DistBelief software framework to the context of efficiently training reinforcement learning agents. As a result, the method is completely asynchronous and scales well with the number of machines. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to achieve reasonable success on a simple game with minimal parameter tuning.
I. INTRODUCTION
Reinforcement learning (RL) agents face a tremendous challenge in optimizing their control of a system approaching real-world complexity: they must derive efficient representations of the environment from high-dimensional sensory inputs and use these to generalize past experience to new situations. While past work in RL has shown that with good hand-crafted features agents are able to learn good control policies, their applicability has been limited to domains where such features have been discovered, or to domains with fully observed, low-dimensional state spaces [1]- [3].
We consider the problem of efficiently scaling a deep learning algorithm to control a complicated system with high-dimensional sensory inputs. The basis of our algorithm is a RL agent called a deep Q-network (DQN) [4], [5] that combines RL with a class of artificial neural networks known as deep neural networks [6]. DQN uses an architecture called the deep convolutional network, which utilizes hierarchical layers of tiled convolutional filters to exploit the local spatial correlations present in images. As a result, this architecture is robust to natural transformations such as changes of viewpoint and scale [7].
In practice, increasing the scale of deep learning with respect to the number of training examples or the number of model parameters can drastically improve the performance of deep neural networks [8], [9]. To train a deep network with many parameters on multiple machines efficiently, we adapt a software framework called DistBelief to the training of RL agents [10]. Our new framework supports data parallelism, thereby allowing us to potentially utilize computing clusters with thousands of machines for large-scale distributed training, as shown in [10] in the context of unsupervised image classification. To achieve model parallelism, we use Caffe, a deep learning framework developed for image recognition that distributes training across multiple processor cores [11].
The contributions of this paper are twofold. First, we develop and implement a software framework that supports model and data parallelism for DQN. Second, we demonstrate and analyze the performance of our distributed RL agent. The rest of this paper is organized as follows. Section II introduces the background on the class of machine learning problems our algorithm solves. This is followed by Section III and Section IV, which detail the serial DQN and our approach to distributing the training. Section V discusses our experiments on a classic video game, and some concluding remarks and future work are given in Section VI.
II. BACKGROUND
We begin with a brief review of Markov decision processes (MDPs) and reinforcement learning (RL).
A. Markov decision process
In an MDP, an agent chooses action a_t at time t after observing state s_t. The agent then receives reward r_t, and the state evolves probabilistically based on the current state-action pair. The explicit assumption that the next state depends only on the current state-action pair is referred to as the Markov assumption. An MDP can be defined by the tuple (S, A, T, R), where S and A are the sets of all possible states and actions, respectively, T is a probabilistic transition function, and R is a reward function. T gives the probability of transitioning into state s' by taking action a in the current state s, and is often denoted T(s, a, s'). R gives a scalar value indicating the immediate reward received for taking action a in the current state s and is denoted R(s, a).
To solve an MDP, we compute a policy π* that, if followed, maximizes the expected sum of immediate rewards from any given state. The optimal policy is related to the optimal state-action value function Q*(s, a), which is the expected value when starting in state s, taking action a, and thereafter following actions dictated by π*. Mathematically, it obeys the Bellman recursion

Q*(s, a) = R(s, a) + γ Σ_{s'∈S} T(s, a, s') max_{a'∈A} Q*(s', a'),

with discount factor γ. The state-action value function can be computed using a dynamic programming algorithm called value iteration. To obtain the optimal policy for state s, we compute π*(s) = argmax_{a∈A} Q*(s, a).
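As a concrete illustration of tabular value iteration, the following short Python sketch assumes T and R are available as dense arrays, which is precisely what the RL setting below does not assume; the stopping tolerance and discount value are illustrative.

```python
# Minimal tabular value iteration over Q(s, a) (illustrative only).
import numpy as np

def value_iteration(T, R, gamma=0.99, tol=1e-6):
    """T[s, a, s2]: transition probability, R[s, a]: immediate reward."""
    n_states, n_actions, _ = T.shape
    Q = np.zeros((n_states, n_actions))
    while True:
        V = Q.max(axis=1)                  # V*(s) = max_a Q*(s, a)
        Q_new = R + gamma * T.dot(V)       # Bellman backup over all (s, a)
        if np.max(np.abs(Q_new - Q)) < tol:
            return Q_new
        Q = Q_new

# Greedy optimal policy: pi*(s) = argmax_a Q*(s, a)
# policy = value_iteration(T, R).argmax(axis=1)
```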
B. Reinforcement learning
The problem reinforcement learning seeks to solve differs from the standard MDP in that the state space and transition and reward functions are unknown to the agent. The goal of the agent is thus to both build an internal representation of the world and select actions that maximizes cumulative future reward. To do this, the agent interacts with an environment through a sequence of observations, actions, and rewards and learns from past experience.
In our algorithm, the deep Q-network builds its internal representation of the environment by explicitly approximating the state-action value function Q* via a deep neural network. Here, the basic idea is to estimate

Q*(s, a) = max_π E[ R_t | s_t = s, a_t = a, π ],

where π maps states to actions (or distributions over actions) and R_t denotes the (discounted) future return, with the additional knowledge that the optimal value function obeys the Bellman equation

Q*(s, a) = E_{s'∼E} [ r + γ max_{a'} Q*(s', a') | s, a ],

where E is the MDP environment.
III. APPROACH
This section presents the general approach adapted from the serial deep Q-learning in [4], [5] to our purpose. In particular, we discuss the neural network architecture, the iterative training algorithm, and a mechanism that improves training convergence stability.
A. Preprocessing and network architecture
Working directly with raw video game frames can be computationally demanding. Our algorithm applies a basic preprocessing step aimed at reducing the input dimensionality. Here, the raw frames are converted to grayscale from their RGB representation and down-sampled to a fixed size for input to the neural network. For this paper, the preprocessing function φ applies this preprocessing to the last four frames of a sequence and stacks them to produce the input to the state-action value function Q.
We use an architecture in which there is a separate output unit for each possible action, and only the state representation (i.e., the preprocessed four-frame sequence) is an input to the neural network. The outputs correspond to the predicted Q-values of the individual actions for the input state. The main advantage of this type of architecture is the ability to compute Q-values for all possible actions in a given state with only a single forward pass through the network. The exact architecture is presented in Appendix A, but a brief outline is as follows. The neural network takes as input a sequence of four frames preprocessed as described above. The first few layers are convolutional layers that apply a rectifier nonlinearity, which was empirically observed to model real/integer-valued inputs well [12], [13], as is required in our case. The remaining layers are fully connected linear layers with a single output for each valid action. The number of valid actions varies with the game application. The neural network is implemented in Caffe [11], a versatile deep learning framework that allows us to define the network architecture and training parameters freely. Because Caffe is designed to take advantage of all available computing resources on a machine, we can easily achieve model parallelism using this software.
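A minimal sketch of the preprocessing function φ is shown below; the 84 × 84 target resolution is an assumption borrowed from the original DQN work, since the exact size is not stated here, and OpenCV is used only for convenience.

```python
# Sketch of phi: gray-scale, down-sample, and stack the last four frames.
import numpy as np
import cv2

def preprocess(frame, size=(84, 84)):
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)   # drop color
    small = cv2.resize(gray, size)                   # fixed input resolution
    return small.astype(np.float32) / 255.0

def phi(last_four_frames):
    """Stack four preprocessed frames into an (84, 84, 4) state tensor."""
    return np.stack([preprocess(f) for f in last_four_frames], axis=-1)
```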
B. Q-learning
We parameterize the approximate value function Q(s, a; θ) using the deep convolutional network described above, where θ are the parameters of the Q-network. These parameters are iteratively updated to minimize the squared-error loss

L(θ) = E_{s,a∼ρ(·)} [ (y − Q(s, a; θ))^2 ],

where y = E_{s'∼E} [ r + γ max_{a'} Q(s', a'; θ̂) | s, a ] is the target computed with the target model θ̂, γ is the discount factor, and ρ(s, a) is the "behavior distribution" (exploration policy). The minimizers of the Q-network loss function are sought by gradient descent on θ with learning rate α.
For computational expedience, the parameters are updated after every time step, i.e., with every new experience. Our algorithm also avoids computing full expectations, training instead on single samples from ρ and E. This results in the Q-learning update

θ := θ + α ( y − Q(s, a; θ) ) ∇_θ Q(s, a; θ),    (1)

with y = r + γ max_{a'} Q(s', a'; θ̂) for the sampled transition. The procedure is an off-policy training method [14] that learns the policy a = argmax_a Q(s, a; θ) while following an exploration policy, or behavior distribution, selected by an ε-greedy strategy.
The target network parameters θ̂ used to compute y in Eq. (1) are only updated with the Q-network parameters every C steps and are held fixed between individual updates. These staggered updates stabilize the learning process compared with the standard Q-learning process, where an update that increases Q(s_t, a_t) often also increases Q(s_{t+1}, a) for all a and hence also increases the target y. Such immediate updates could potentially lead to oscillations or even divergence of the policy. Deliberately introducing a delay between the time an update to Q is made and the time the update affects the targets makes divergence or oscillations more unlikely [4], [5].
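The sampled update and the C-step target refresh can be sketched as follows in modern TensorFlow notation; the paper's implementation used Caffe, so this only illustrates the logic, and the variable names are assumptions.

```python
# Sketch of one Q-learning update with a periodically refreshed target network.
import tensorflow as tf

def q_update(q_net, target_net, optimizer, batch, gamma=0.99):
    s, a, r, s_next, done = batch                     # numpy arrays from replay memory
    # y = r + gamma * max_a' Q(s', a'; theta_hat), held fixed between target syncs.
    q_next = target_net(s_next).numpy().max(axis=1)
    y = (r + gamma * (1.0 - done) * q_next).astype("float32")
    with tf.GradientTape() as tape:
        q_sa = tf.gather(q_net(s), a, batch_dims=1)   # Q(s, a; theta) for taken actions
        loss = tf.reduce_mean(tf.square(y - q_sa))    # squared Bellman error
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return loss

# Every C parameter updates: target_net.set_weights(q_net.get_weights())
```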
C. Experience replay
Reinforcement learning can be unstable or even diverge when a nonlinear function approximator such as a neural network is used to represent the value function [15]. This instability has several causes. One source of instability is the correlations present in the sequence of observations. Another is the fact that small updates to Q may significantly change the policy and therefore change the data distribution. Finally, the correlations between Q and its target values can cause the learning to diverge. [4], [5] address these instabilities with a mechanism called experience replay that randomizes over the data, thereby removing correlations in the observation sequence and smoothing over changes in the data distribution.
In experience replay, the agent's experience at each time step is stored as a tuple e_t = (s_t, a_t, r_t, s_{t+1}) in a dataset D_t = (e_1, . . . , e_t) pooled over many game instances (defined by the start and termination of a game) into a replay memory. During the inner loop of the algorithm, we apply Q-learning updates, or minibatch updates, to samples of experience drawn at random from the replay dataset.
This approach demonstrates several improvements over standard Q-learning. First, each step of experience is potentially used in many weight updates, thus allowing for greater data efficiency. Second, learning directly from consecutive samples is inefficient due to the strong correlations between the samples. Randomizing the samples breaks these correlations and reduces the update variance. Last, when learning on-policy the current parameters determine the next data sample that the parameters are trained on. For instance, if the maximizing action is to move left then the training samples will be dominated by samples from the left-hand side; if the maximizing action then changes to the right then the training distribution will also change. Unwanted feedback loops may therefore arise and the method could get stuck in a poor local minimum or even diverge.
With experience replay, the behavior distribution is averaged over many of its states, smoothing out learning and avoiding oscillations or divergence in the parameters. Note that when learning by experience replay, it is necessary to learn off-policy because our current parameters are different to those used to generate the sample, which motivates the choice of Q-learning.
In practice, our algorithm only stores the last N experience tuples in the replay memory. It then samples uniformly at random from D when performing updates. This approach is limited because the memory buffer does not differentiate important transitions and always overwrites with recent transitions owing to the finite memory size N. Similarly, the uniform sampling gives equal importance to all transitions in the replay memory. A more sophisticated sampling strategy might emphasize transitions from which we can learn the most, similar to prioritized sweeping [16].
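A minimal replay memory with the behavior described above (fixed capacity N, oldest transitions overwritten, uniform sampling) can be written as follows; this is an illustrative sketch rather than the authors' implementation.

```python
# Minimal replay memory: bounded buffer with uniform random sampling.
import random
from collections import deque

class ReplayMemory:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # recent transitions overwrite the oldest

    def add(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)   # uniform, no prioritization
```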
IV. DISTRIBUTED DEEP Q-LEARNING

Algorithm 1 and Algorithm 2 define the distributed deep Q-learning algorithm. In this section we discuss some important points about parallelism and performance.
Algorithm 1 Worker k: ComputeGradient
state: replay dataset D_k, game state s_t, target model θ̂, target model generation
  Fetch model θ and iteration number n from server.
  if n ≥ C then
    Select action a_t = max_a Q(φ(s_t), a; θ) w.p. 1 − ε, a random action otherwise
  Execute action a_t and observe reward r_t and frame x_{t+1}
  Append s_{t+1} = (s_t, a_t, x_{t+1}) and preprocess
  Send ∆θ to the parameter server.
A. Data parallelism
The serial deep Q-learning algorithm uses stochastic gradient descent (SGD) to train the Q-network. SGD is an inherently sequential algorithm, but various attempts have been made to parallelize it effectively. We implement a variant of Downpour SGD that takes advantage of data parallelism [10]. Unlike the "vanilla" version of Downpour SGD, our RL agents actively add experiences to the memory replay dataset (see Fig. 1). A parameter server stores a global copy of the model. Each worker node is responsible for 1) fetching the latest model, θ, from the server, 2) generating new data for its local shard, 3) computing a gradient using this model and a mini-batch from its local replay memory dataset, and 4) sending the gradient ∆θ back to the server. These operations constitute a single worker iteration. All workers perform these iterations independently, asynchronously fetching from and pushing to the parameter server. Upon receiving a gradient, the parameter server immediately applies an update to the global model. The only synchronization is a write-lock on the model as it is being written to the network buffer.
For typical sizes of Q-networks, it takes much longer to compute a mini-batch gradient than it does to perform a parameter update. Therefore we can train on more data in the same amount of time by simply adding worker nodes (eventually this breaks, as we discuss later). Since each mini-batch is supposed to be drawn uniformly at random from some history of experiences, we can keep completely independent histories on each of the worker nodes and sample only from the local dataset. This allows our algorithm to scale extremely well with the size of the training data.
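The asynchronous pattern can be illustrated with a toy in-process version in which threads stand in for worker machines and a dummy gradient function replaces the mini-batch computation; the real system instead runs Caffe workers on separate machines coordinated through Spark, so everything below is a simplified assumption.

```python
# Toy in-process illustration of the asynchronous parameter-server pattern.
import threading
import numpy as np

class ParameterServer:
    def __init__(self, theta, alpha=0.01):
        self.theta, self.alpha, self.n = theta, alpha, 0
        self.lock = threading.Lock()            # write-lock on the global model

    def fetch(self):
        with self.lock:
            return self.theta.copy(), self.n

    def push(self, grad):
        with self.lock:                          # update applied immediately on arrival
            self.theta -= self.alpha * grad
            self.n += 1

def worker(server, compute_gradient, iterations):
    for _ in range(iterations):
        theta, n = server.fetch()                # may be slightly stale; that is accepted
        server.push(compute_gradient(theta))     # stands in for a local mini-batch gradient

server = ParameterServer(np.zeros(10))
threads = [threading.Thread(target=worker, args=(server, lambda th: 0.1 * th, 100))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```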
B. Model parallelism
Google's implementation of Downpour SGD distributes each model replica across multiple machines. This allows them to scale to very large models. Our implementation, on the other hand, assumes that the model fits on a single machine. This places a strict upper bound on the size of the model. However, it allows us to easily take advantage of hardware acceleration for a single node's computation. We use the Caffe deep learning framework to perform the gradient computation on each of the worker machines. Caffe allows us to take advantage of fast BLAS implementations, using the worker's CPU or GPU resources.
In this sense, the work done by a single node is also parallelized-either across multiple CPU cores or many GPU cores. Pushing the computation down to the GPU yields a substantial speed up, but it further constrains the size of the model. The GPU memory must not only hold the model and batch of data, but also all the intermediate outputs of the feedforward computation. This memory limit is often approached in practice, especially for lower end GPUs. The CPU's memory is much more accommodating and should be able to hold any reasonably large model. In the case where the worker computation must be done on the CPU due to memory constraints, the advantages of distributed deep Q are even more drastic.
C. Communication pattern
The server must communicate with all workers since each requires the latest model at every iteration. Each worker, in turn, communicates with the server to send a gradient, but does not need to communicate with any other worker node. This is similar to a one-to-all and all-to-one communication pattern, which could benefit from the allreduce primitive. However, all of the communication happens asynchronously, breaking the allreduce communication pattern. Further, to minimize the "staleness" of the model used by a worker for its gradient computation, it should fetch the latest version of the model directly from the server, not in bit-torrent fashion from its peers.
D. Scalability issues
As we scale up the number of worker nodes, certain issues become increasingly important in thinking about the performance of the distributed deep Q-learning algorithm.
1) Server bottleneck: With few machines, the speed of training is bound by the gradient computation time. In this regime, the frequency of parameter updates grows linearly with the number of workers. However, the server takes some finite amount of time, τ, to receive a gradient message and apply a parameter update. Thus, even with an infinite pool of workers, we cannot perform more than 1/τ updates per second.
This latency, τ, is generally small compared to the gradient computation time, but it becomes more significant as the number of workers increases. Suppose a mini-batch gradient can be computed in time T. A pool of P workers will, on average, serve up a new gradient every T/P. Thus, once we have P = T/τ workers, we will no longer see any improvement by adding nodes. This is potentially alarming, especially since both T and τ grow linearly with the model size (i.e., the ratio T/τ is constant). However, one way to improve performance beyond this limit is to increase the batch size. Note that this increases the single-worker computation time T but does not affect the server latency τ. Another option is to use a powerful machine for the server and further optimize our server-side code to reduce τ.
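As a purely illustrative numerical example, if a mini-batch gradient takes T = 200 ms and a server update takes τ = 2 ms, throughput saturates at roughly P = T/τ = 100 workers; doubling the batch size roughly doubles T, and hence the saturation point, at the price of fewer (but less noisy) updates per second.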
2) Gradient staleness: As the frequency of parameter updates grows, it becomes increasingly likely that a gradient received by the server was computed using a significantly outdated model. This increases the noise in the parameter updates and could potentially slow down the optimization process and lead to convergence problems. In practice, using adaptive learning rate methods such as RMSProp or AdaGrad, we do not see such issues. However, as the number of workers increases, this could become a significant problem and should be examined carefully.
E. Implementation details
We rely on a slightly out-of-date version of Caffe (which is included as a submodule) as well as Spark for running the multi-worker version of distributed deep Q-learning. Our implementation does not make heavy use of Spark. However, Spark does facilitate scheduling of gradient updates, coordinating the addresses of all machines involved in the computation, and shipping the necessary files and serialized code to all of the worker nodes. We also made some progress towards a more generic interface between Caffe and Spark using MemoryDataLayers and shared memory. For this, please see the shmem branch of the GitHub repository.
F. Complexity analysis

1) Convolutional neural network: To analyze our model complexity, we break our neural network into three components. The first part consists of the convolutional layers of the convolutional neural network (CNN). The complexity of this part is O(d^2 F k^2 N^2 L_C), where d is the frame width, F the frame count, k the filter width, N the filter count, and L_C the number of convolutional layers. The second part consists of the fully connected layers, with complexity O(H^2 L_B), where H is the node count and L_B is the number of hidden layers. Finally, the "bridge" between the convolutional layers and the fully connected layers contributes a complexity of O(H d^2 N). The total number of parameters in the model, p, is thus O(d^2 F k^2 N^2 L_C + H^2 L_B + H d^2 N). We use the variable p to simplify our notation.
2) Runtime: Consider a single worker and its interaction with the parameter server. The run-time for a single parameter update is the time to compute a gradient, T , plus the time to perform the update, τ. Both T and τ are O(p), but the constant factor differs substantially. Further, the server takes at least time τ to perform an update, regardless of the number of workers.
Thus the run-time for N parameter updates using k worker machines is O(Np/k) if k < T/τ, or O(Nτ) otherwise.
3) Communication cost: Each iteration requires both a model and a gradient to be sent over the network. This is O(p) data. We do this for N iterations. Thus the total communication cost is O(N p).
V. NUMERICAL EXPERIMENTS
To validate our approach, we apply it on the classic game of Snake and empirically demonstrate the performance of our algorithm.
A. Snake
We implemented the Snake game in Python. To generate experiences, our program preprocesses the frames of the game and feeds them into our neural network (see Fig. 2). In our implementation, the Spark driver sends a script to each worker that spawns a local game instance and a neural network. Using the neural network, the worker generates experience tuples by interacting with the game. The game board is an n × n array. The snake starts with a body length of two and gains an additional body length when it eats an "apple." Ingesting an apple awards the agent one point. At game termination, conditioned on the snake hitting itself, the agent loses one point. The goal of the game is to maximize the score by having the snake eat more apples without dying. Each worker sends its gradient to the server after every gradient computation and receives the latest model from the server periodically, as detailed in Section IV. Figure 3 shows the experiment runtimes with different model sizes, which correspond to different game frame sizes. The legend is as follows: "comms" refers to the time required to send the model parameters (either way) between the parameter server and one worker; "gradient" refers to the compute time required for a worker to calculate a gradient update; "latency" refers to the time required by the parameter server to update its weights with one set of gradients. In our experiments, the training rate was compute-bound by the gradient calculations of each worker.
B. Computation and communication
Note that the gradient calculation line is two orders of magnitude larger than the other two lines in the figure. Note also that the upper bound on the number of updates per second is inversely proportional to the number of parameters in the model, since the single parameter server applies updates serially and cannot do so faster than a rate linear in the model size. Thus we observe that as the number of workers and the model size increase, the update latency could become the bottleneck of the learning process. To prevent such bottlenecks, we can increase the minibatch size for each gradient update. This modification would increase the compute time required by each worker machine and therefore reduce the rate at which gradient updates are sent to the parameter server. Additionally, the gradient estimates would be better because of the larger minibatch size.
C. Distributed Performance
To validate our work, we collected results from two experiments, each run for a total of 120,000 gradient updates. The first experiment used a serial implementation as documented in [4], [5]. The second experiment was run with two workers for the same number of updates. As shown in Fig. 4, the two-worker model exhibited a much faster learning rate than the single-worker model. In fact, the average reward over time scales linearly with the number of workers: at every time stamp, the average reward obtained in the two-worker experiment is roughly twice that of the single-worker experiment. This trend suggests that the performance of our distributed algorithm scales linearly in the initial training phase.
VI. CONCLUSION AND FUTURE WORK
We have developed a distributed deep Q-learning algorithm that can efficiently train a complicated RL agent on multiple machines in a cluster. The algorithm combines the sequential deep Q-learning algorithm developed in [4], [5] with DistBelief [10], accelerating the training process via asynchronous gradient updates from multiple machines, with speed increasing linearly in the number of RL agents. Future work includes scaling up the experiments and studying how issues such as model staleness from having more worker machines affect convergence rates. We will also compare our work with Gorila, a distributed deep learning architecture similar to ours that is predated by the first version of our paper [17].
SUPPLEMENTARY MATERIAL
All code for this project can be found, together with documentation, at https://github.com/kjchavez/distributed-deep-q.
APPENDIX
A. Network architecture

Figure 5 visualizes the exact architecture of our deep neural network, as implemented in Caffe. Note that unlike the "vanilla" DQN agent developed in [4], [5], our variant was designed as a single RL agent containing both the target and the actively training neural networks. This feature was enabled by special layers provided in the Caffe framework.
"year": 2015,
"sha1": "4eb082956ea3f9b2d83936c41893e385d8cf8918",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fcd13107bdd1df39ad0587def6410e216a1cff33",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
110238884 | pes2o/s2orc | v3-fos-license | Performance Enhancement of Optical CDMA by Differential-Phase Method for Radio-over-Fiber Transmissions
The study proposes a differential-phase optical code-division multiple-access (OCDMA) network for radio-over-fiber (RoF) transmissions, and its characteristics are numerically analyzed. The network coder/decoders (codecs) are structured on the basis of arrayed-waveguide-grating (AWG) routers with complementary Walsh-Hadamard (CWH) signature codes. In the proposed system, the network requires only two AWG routers to accomplish spectral encoding at the radio base station (RBS) and decoding at the control station for the complementary keying, thus resulting in a simpler and lower-cost system. Performance is analyzed in the presence of the dominant noise in spectral-coding OCDMA networks, phase-induced intensity noise (PIIN). With the proposed AWG-based OCDMA and the differential-phase scheme, it is possible to establish interference-free RoF systems with low crosstalk beat noise.
Introduction
The millimeter-wave radio-over-fiber (RoF) system has recently drawn much attention for the realization of broadband radio access services. This is because RoF technology can resolve the problem of scarce available radio frequency (RF) resources [1][2][3][4][5]. It will become an important network access scheme alongside fiber-to-the-home (FTTH) technology. In a typical RoF network configuration, microcells are connected by optical fibers between radio base stations (RBSs) and a central control station (CS).
Optical code-division multiple access (OCDMA) is one candidate access technique for RoF networks that allows multiple users in local area networks (LANs) to access the same fiber channel asynchronously without delay or scheduling. This property is also powerful for RoF access because of its asynchronous access, flexibility, and transparency to various radio air interfaces.
In general, traditional incoherent OCDMA uses unipolar sequences as signature codes, and the coding is usually performed in the time domain. However, suppressing multiple-access interference (MAI) is difficult, and the code length must be long to support many simultaneous users. In recent research, the spectral amplitude coding (SAC) scheme [4][5][6][7][8][9][10][11] for OCDMA has become more popular because of its MAI elimination and the low-cost components, incoherent optical sources and optical filters, required to build it. Many code families can be used in SAC-OCDMA networks, such as maximal-length sequence (M-sequence) codes [8], Walsh-Hadamard codes [9][10][11], modified quadratic congruence (MQC) codes [6], and modified PN codes [4].
SAC-OCDMA network codecs can be constructed with fiber Bragg grating (FBG) devices [6,7], but the physical size of the FBG arrays becomes impractical when the number of network users is large. Another implementation option uses arrayed waveguide grating (AWG) routers as codecs in the OCDMA network [11]. This approach needs mirrors and circulators to encode the data, which increases power loss and system cost. The two-dimensional (2D) wavelength/time spreading OCDMA system in [12] implements multidimensional codes utilizing AWG multiplexers and fiber delay lines. This scheme is limited by multiple-access interference (MAI) and is not easy to implement for an analog RoF network. Another digital OCDMA network with AWG codecs has been proposed [8]; unfortunately, it suffers increased crosstalk beat noise [13,14] when the number of active users becomes large. In spectrum-based coding OCDMA systems, the number of simultaneous active users is limited by the beat noise, also called phase-induced intensity noise (PIIN), which accumulates at the balanced photodetector (PD) during the decoding process. In this study, we propose an interference-free RoF system structured with AWG routers [15] and differential-phase intensity modulators coded with complementary Walsh-Hadamard (CWH) codes in the transmitter, and a balanced photodetector scheme in the receiver. The scheme reduces the physical size of the coder and is constructed without any sampling technique or aliasing canceller. The carrier-to-noise ratio (CNR) of the proposed scheme with the CWH code is superior to that of conventional SAC with M-sequence and Walsh-Hadamard codes by about 7.7 dB in the RoF system.
The remainder of this paper is organized as follows. Section 2 describes the spectral encoding scheme using the CWH code. Section 3 presents the system encoder and decoder. Section 4 evaluates the performance of the proposed system in terms of CNR and bit-error-rate under PIIN. Finally, conclusions are presented in Section 5.
In the proposed differential-phase scheme using the CWH code, RBS k sends codeword C_k (carrying the in-phase radio signal) and its complement codeword C̄_k (carrying the out-of-phase radio signal) at the same time. By exploiting the orthogonality of the C_k and C̄_k codes to obtain phase diversity of the radio signal, the CNR increases by approximately 7.7 dB compared with traditional single-phase coding techniques. Each RBS requires only two AWG routers and combiners to implement spectral encoding and decoding, respectively. Therefore, the fiber radio system using AWG routers with CWH codes can be realized.
System Description
The proposed SAC-OCDMA system utilizes broadband light sources (BLSs) and AWG routers. Complementary keying is employed for each radio signal by directing the light from the incoherent sources to the input port of the AWG encoder; the AWG router output ports are then connected to combiners according to codeword C_k and its complement C̄_k to generate the amplitude spectrum of the transmitted radio signal.
Figure 1 shows the proposed differential-phase transmitters and AWG encoders. In each RBS transmitter, differential-phase intensity modulation is performed when the radio signal of each RBS is used for analog or digital modulation. The BLS spectrum is filtered to one free spectral range (FSR) of the AWG router. The codeword C_k, which carries the in-phase radio signal of RBS k, and its complement C̄_k, which carries the out-of-phase radio signal, are transmitted when the BLS is directed into the first input port of the AWG router in transmitter k. The advantage of this complementary encoding is that it is accomplished with only one AWG router for each RBS.
After the encoding process, the coded spectra of C_k and C̄_k become the optical carriers of the in-phase and out-of-phase radio signals, respectively, via the differential-phase intensity modulator. All coded optical signals of the RBSs are then collected by the star coupler and broadcast to the CS.
The radio signal s(t) at the transmitter has the form s(t) = Re[ a(t) e^{j2πf_c t} ], where f_c is the carrier frequency of the radio signal and a(t) is the complex envelope with bandwidth B. The received optical signal spectrum S is the summation of the transmitted signal spectra of all RBSs, weighted by the modulation index m and the kth RBS's radio signal s_k(t).
The AWG router-based decoders are shown in Figure 2. The star coupler is connected to the decoder's AWG router, which distributes the received signals to the balanced PDs of each CS decoder to realize the differential decoding process. Taking CS decoder k as an example, the connections from the output ports of the AWG router to the combiners are determined by the codeword C_k and its complement C̄_k. The received signal S coming from the star coupler is connected to the AWG router directly; this reduces the loss associated with a splitting device and also suppresses the crosstalk beat noise of the AWG router [8] because only one input port is used. The balanced PD of decoder k receives S·C_k in the upper arm and S·C̄_k in the lower arm. After the correlation subtraction S·C_k − S·C̄_k is performed in the balanced PD, the radio signal of RBS k is extracted and the other RBSs' interference is rejected.
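The interference rejection can be checked numerically with a small sketch: unipolar Walsh-Hadamard codewords and their complements are generated, and the balanced-detector correlation S·C_k − S·C̄_k is evaluated for the desired and an interfering RBS. The code length N = 8 is an illustrative choice and the code is not the authors' simulation.

```python
# Illustrative check of the differential-phase correlation properties.
import numpy as np
from scipy.linalg import hadamard

N = 8                                   # code length (illustrative)
H = (hadamard(N) + 1) // 2              # unipolar (0/1) Walsh-Hadamard chips
codes = H[1:]                           # drop the all-ones row
comps = 1 - codes                       # complementary codewords

k, l = 0, 3                             # desired RBS k, interfering RBS l
# Decoder k computes S.C_k - S.C_bar_k for each spectral component in S:
inphase  = codes[k] @ codes[k] - codes[k] @ comps[k]   # in-phase carrier:  +N/2
outphase = comps[k] @ codes[k] - comps[k] @ comps[k]   # out-of-phase:      -N/2
mai      = codes[l] @ codes[k] - codes[l] @ comps[k]   # other RBS:          0
print(inphase, outphase, mai)           # -> 4, -4, 0 for N = 8
```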
Performance Analysis and Discussion
The performance of the proposed OCDMA system is limited by shot, thermal, and PIIN noises, particularly when the received power is high. PIIN results from the beating of incoherent light fields during direct detection by square-law PDs, and its magnitude depends essentially on the states of polarization (SOPs) and the spectra of the optical signals.
To simplify the current system performance analysis, the following assumptions are made.
(i) The power spectral density (PSD) of each light source is ideally flat over the optical bandwidth [ν_0 − Δν/2, ν_0 + Δν/2], where ν_0 is the central optical frequency and Δν is the common optical source bandwidth in hertz.
(ii) Sufficient chip time width delays exist between the arrivals of successive pulses.
(iii) Each coding chip has an identical spectral width of Δν/N, where N denotes the code length.
(iv) The chip streams from each RBS are synchronous.
The light source spectrum of each RBS is assumed to be unpolarized and ideally flat over a bandwidth Δν Hz with magnitude P/Δν, where P is the effective power from a single source at the receiver. The instantaneous PSD of the received optical signals at the star coupler can then be written in terms of the unit step function u(ν) and the code patterns of the active RBSs.
The instantaneous PSDs of the upper and lower PDs of the CS decoder follow from the code connections described above. In (7), the single-sideband instantaneous PSD of the source is written as a function of the optical frequency ν and time t.
The input current to the band-pass filter (BPF) can be written as

i(t) = i_signal(t) + i_MAI(t) + i_PIIN(t) + i_shot(t) + i_thermal(t),

where the terms denote the desired signal, the multiple-access interference, the PIIN noise, the shot noise, and the thermal noise, respectively. In the proposed RoF system, the differential-phase intensity modulator is employed to suppress the noise caused by nonlinearity during the optical-electrical conversion process. We consider the worst case of the proposed system, in which all RBSs transmit the maximum radio power and therefore produce the largest PIIN power; PIIN then becomes the dominant noise in the proposed RoF system.
To simplify the calculation, the radio signals are assumed to be non-modulated carriers with a known autocorrelation function. The variance of the photocurrent caused by PIIN is determined by the average photocurrent, the noise-equivalent electrical bandwidth of the receiver, and the coherence time of the source, together with the degree of polarization (DOP), which is defined from the Stokes parameters S0, S1, S2, and S3 that describe the state of polarization (SOP). The bracket ⟨⋅⟩ in (12) denotes the average value of the parameter over wavelength, time, or space. It is well known that the DOP depends not only on the light source but also on the distance traveled by the optical signal in long-haul network transmissions.
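The expressions referenced above (the PIIN variance, the source coherence time, and the DOP) were lost in extraction. The standard forms used in incoherent-source PIIN analyses are sketched below; the notation (G(ν) for the source PSD, I for the average photocurrent, B for the receiver noise-equivalent bandwidth, and S0-S3 for the Stokes parameters) is introduced here for illustration rather than copied from the original.

```latex
% Standard forms (illustrative; not copied from the original paper):
% PIIN variance of a photocurrent of mean I over an electrical bandwidth B,
\langle i_{\mathrm{PIIN}}^{2} \rangle = \tfrac{1}{2}\, B\, I^{2}\, \tau_c \left( 1 + P^{2} \right),
% coherence time of an incoherent source with PSD G(\nu),
\tau_c = \frac{\int_{0}^{\infty} G^{2}(\nu)\, d\nu}{\left[ \int_{0}^{\infty} G(\nu)\, d\nu \right]^{2}},
% and degree of polarization from the averaged Stokes parameters.
P = \frac{\sqrt{\langle S_{1} \rangle^{2} + \langle S_{2} \rangle^{2} + \langle S_{3} \rangle^{2}}}{\langle S_{0} \rangle}.
```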
Since the noises at the upper and lower PDs are independent, the total noise power in the output photocurrent is the sum of their individual contributions, from which the PIIN power of the differential-phase system present in the decoder photocurrent follows. In SAC-OCDMA systems with a flat PSD of the light sources within the coded bandwidth, performance is mainly limited by PIIN [6][7][8][9][10][11], because light from the incoherent sources interferes at the PDs, especially when the received power is large.
Finally, the CNR accounting for the effect of PIIN can be calculated for both the conventional single-phase and the proposed differential-phase modulation schemes, in terms of the two average photocurrents of the upper and lower photodiodes at the CS decoder. Figure 3 plots the variation of the CNR with the number of simultaneous RBSs as a function of the code length for several codes. It is clear that the CNR performance of differential-phase SAC-OCDMA with the CWH code is superior to that of conventional single-phase SAC-OCDMA with the Hadamard code and the M-sequence code by about 7.7 dB in the large-active-RBS scenario. The incoherent broadband sources used here have a linewidth of 60 nm and a center wavelength of 1550 nm, and the noise-equivalent electrical bandwidth of the receiver is 80 MHz (for a bit rate of 155 Mb/s).
The CNR of the conventional single-phase system degrades more significantly than that of the proposed differential-phase system, particularly with a large number of RBSs. The reason is that the PIIN effect becomes much larger in the SAC-based OCDMA system: when a large number of RBSs transmit their coding patterns simultaneously, more wavelengths beat together during direct detection by the square-law PDs. Moreover, the performance of the SAC-OCDMA system cannot be improved by increasing the code length.
A common unpolarized amplified spontaneous emission (ASE) source can be used in the current differential-phase system because the scheme considers only the source power, not its phase or polarization. However, for long-haul transmissions over the RoF network, the DOP effect must be addressed. In general, the CNR can be improved by positioning a scrambler in front of the balanced photodetector to eliminate the polarization-dependent effect of the detector. The scrambler theoretically removes the polarization sensitivity of the photodetector in the proposed RoF scheme; hence the average values of S1, S2, and S3 in (12) approach zero, and the DOP is significantly decreased. To analyze the BER performance, a variation of the DOP (i.e., the DOP varying in the range 0 to 1) was assumed to represent the influence of polarization following long-haul transmission.
As shown in Figure 4, the BER performance of the proposed differential-phase scheme is characterized by an upper bound at a DOP of 1 for the worst case and a lower bound at a DOP of 0 for the ideal case, under the same assumptions as in Figure 3. Compared with the worst case of an average DOP of 1, the CNR of the proposed differential-phase scheme is improved by about 7.7 dB when the number of RBSs is 120. We can also see that, even under a high-SOP condition, the proposed differential-phase system still performs better than the single-phase system over the transmission links. By assuming all the interference terms to be Gaussian distributed, the conditional BER for amplitude shift keying (ASK) modulation can be calculated as BER = 0.5 erfc(√(CNR/8)); from this, the relation between the BER and the number of active RBSs as a function of the code length is obtained. Figure 5 plots the variation of the BER with the number of active RBSs as a function of the code length for the differential- and single-phase systems. It can be seen that the BER of the conventional OCDMA network using the single-phase scheme is worse than that of the differential-phase technology, particularly under conditions with a large number of RBSs. The reason is that when a large number of RBSs transmit their coding patterns simultaneously, more wavelengths beat together during direct detection by the square-law PDs, PIIN becomes the dominant noise degrading the BER performance, and hence the performance of the SAC-OCDMA system cannot be improved by increasing the code length of the Hadamard codes. At a BER of 10^−5, the differential-phase scheme can support more than 19 additional active RBSs compared with the single-phase scheme. After the BPF process, the transmitted signal power becomes the dominant issue for improving system performance.
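As a quick numerical illustration of the ASK relation quoted above, the sketch below converts a CNR given in dB to the corresponding BER; the CNR values used are placeholders rather than results taken from the paper.

```python
# Illustrative sketch: BER of ASK detection from the CNR, using
# BER = 0.5 * erfc(sqrt(CNR / 8)) as quoted in the text.
# The CNR values below are placeholders, not results from the paper.
import numpy as np
from scipy.special import erfc

def ber_from_cnr_db(cnr_db: float) -> float:
    """Convert a carrier-to-noise ratio in dB to the ASK bit-error rate."""
    cnr_linear = 10.0 ** (cnr_db / 10.0)
    return 0.5 * erfc(np.sqrt(cnr_linear / 8.0))

if __name__ == "__main__":
    for cnr_db in (15.0, 20.0, 25.0):  # hypothetical CNR values
        print(f"CNR = {cnr_db:4.1f} dB -> BER = {ber_from_cnr_db(cnr_db):.2e}")
```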
Conclusion
The study proposes an AWG router-based OCDMA network embedded with a signal phase diversity scheme for RoF systems. The MAI and the crosstalk beat noise effect in the AWG routers can be suppressed by the designed codec structure. In the ideal system constructed with a flattened source, each RBS requires only two AWG routers for the spectral encoding and decoding processes; thus, the filter mismatch between network coders and decoders can be mitigated. Also, unlike an FBG-based system, the AWG router scheme exhibits no round-trip time delay problem. In addition, the AWG router-based codecs have no accumulation of insertion loss when the total number of RBSs is increased. The CNR and BER of the proposed system are numerically analyzed by taking the dominant noise of PIIN into account. The results show that the CNR of the proposed differential-phase system is superior by about 7 dB to that of other conventional single-phase OCDMA schemes in RoF systems. The trade-off of the complementary codes in this study between system complexity and performance can be considered for different RoF links; hence, the system flexibility is increased. In conclusion, the proposed system achieves a higher performance than a conventional RoF OCDMA scheme and can be implemented using a simple configuration comprising conventional low-cost BLSs and compact optical components, rendering the overall system both cheap and straightforward.
Figure 1: The proposed RBS transmitters and AWG encoders.
Figure 2: The proposed CS receiver and AWG decoders.
Figure 3: CNR versus number of active RBSs for different code families. | 2018-12-28T19:19:59.146Z | 2013-11-18T00:00:00.000 | {
"year": 2013,
"sha1": "1d5525acf4201178353b58a71107d0a97deccb20",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2013/901871.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1d5525acf4201178353b58a71107d0a97deccb20",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
248003964 | pes2o/s2orc | v3-fos-license | Thymic Microenvironment: Interactions Between Innate Immune Cells and Developing Thymocytes
The thymus is a crucial organ for the development of T cells. T cell progenitors first migrate from the bone marrow into the thymus. During the journey to become a mature T cell, progenitors require interactions with many different cell types within the thymic microenvironment, such as stromal cells, which include epithelial, mesenchymal and other non-T-lineage immune cells. There are two crucial decision steps that are required for generating mature T cells: positive and negative selection. Each of these two processes needs to be performed efficiently to produce functional MHC-restricted T cells, while simultaneously restricting the production of auto-reactive T cells. In each step, there are various cell types that are required for the process to be carried out suitably, such as scavengers to clean up apoptotic thymocytes that fail positive or negative selection, and antigen presenting cells to display self-antigens during positive and negative selection. In this review, we will focus on thymic non-T-lineage immune cells, particularly dendritic cells and macrophages, and the role they play in positive and negative selection. We will also examine recent advances in the understanding of their participation in thymus homeostasis and T cell development. This review will provide a perspective on how the thymic microenvironment contributes to thymocyte differentiation and T cell maturation.
INTRODUCTION
The thymus is an essential organ for T cell development (1). It is home to many cell types, such as stromal and immune cells, which not only aid in T cell development, but are also integral to thymus homeostasis (2)(3)(4). During T cell development, bone marrow-derived early thymic progenitors (ETPs) first seed the thymus, where they receive Notch signals from cortical thymic epithelial cells (cTECs) and are signaled to enter the T-lineage differentiation pathway (5). These early progenitor T cells are double negative (DN) for CD4 and CD8 expression and their T cell receptor (TCR) genes have not yet undergone V(D)J rearrangement (6). At this stage, DN cells rearrange their γ, δ, and β TCR gene loci, and following successful TCRβ gene assembly gain CD4 and CD8 expression, a checkpoint termed β-selection, and advance to the CD4 and CD8 double positive (DP) stage. Cells that properly rearrange their γδ TCRs mature into the γδ-T cell lineage (7). However, the majority of cells become DP cells, and following rearrangement of their TCRα gene loci are subjected to positive selection, which is conducted by cTECs presenting peptide self-antigens on their major histocompatibility complex (MHC) class I and MHC class II molecules to DP cells (8).
Proper TCR-MHC interactions determine whether DP cells are allowed to differentiate to the next stage of αβ-T cell development. Conversely, DPs with non-functional TCR-MHC interactions undergo death by neglect, which occurs for over 95% of DPs (9,10). Following positive selection, DPs migrate to the thymus medullary region and undergo negative selection against strong TCR-MHC interactions. This process, which helps to ensure self-tolerance, is conducted by medullary thymic epithelial cells (mTECs), which under the regulation of autoimmune regulator (AIRE) express a vast array of self-antigens, with help from other thymic antigen presenting cells (APCs), such as dendritic cells (DCs) (Figure 1) (11)(12)(13). The purpose of this process is to eliminate potential self-reactive T cells, which could lead to autoimmune diseases if released into the periphery. In total, it is estimated that only 3-5% of developing thymocytes become mature CD4 or CD8 single positive (SP) T cells and exit the thymus (14).
The two-step selection process is repeated every day in the thymus and is only diminished during thymus aging or due to external injuries, such as irradiation and inflammatory stress (12,15,16). One necessary aspect of the selection process, which is critical to ensure that randomly generated TCRs are both able to properly interact with self-MHC and not lead to autoimmunity, is the need to eliminate a vast number of potentially useless or harmful cells on a continuous basis. Due to the daily massive cell death during T cell selection, thymic homeostasis needs to be strictly maintained by other cell types. Thymic macrophages are immune cells that are crucial for clearing apoptotic thymocytes in the thymus. Remarkably, thymic macrophages only make up 0.1% of all cells in the thymus (17). This suggests that they are highly efficient in efferocytosis since there are over 50 million DPs generated in a mouse thymus every day, a majority of which are likely destined for cell death and need to be cleared by thymic macrophages (13). These cells have also been shown to play a role in maintaining thymus homeostasis and thymus repair after injuries (18). As for the negative selection process, thymic DCs are also present in the medulla and have been shown to play a pivotal role in T cell selection alongside mTECs to curtail the generation of self-reactive T cells and promote central tolerance (19). In this review, we will focus on these two important cell types in the thymus, DCs and macrophages, by examining their developmental origin, localization, function, and recent advances on their role in T cell selection and thymus repair post injury.
THYMIC DENDRITIC CELLS
DCs in the thymus make up 0.5% of thymus cellularity and are mainly composed of three different groups: plasmacytoid DCs (pDCs), CD8 + SIRPa - DCs (CD8 + DCs), and CD8 - SIRPa + DCs (SIRPa + DCs) (20). SIRPa + DCs and pDCs are migratory DCs that develop in the bone marrow and migrate from the periphery to the thymus; while a small fraction of CD8 + DCs originate intrathymically from a common T/DC progenitor, the majority of CD8 + DCs develop outside the thymus (21)(22)(23)(24)(25)(26). Typically, mature SIRPa + DCs are located in the cortico-medullary perivascular space, CD8 + DCs are located within the medulla, and pDCs are located at the cortical-medullary junction (CMJ) (27)(28)(29). A recent paper published by Sarah Teichmann's group, using single-cell (sc) RNA-sequencing (seq) of human thymus cells, identified a new subtype of DCs, which they named activated DCs (aDCs) due to their high expression of costimulatory molecules (30). These aDCs could be further clustered into aDC1, aDC2, and aDC3 subsets, where aDC1 and aDC2 expressed gene profiles similar to CD8 + DCs and SIRPa + DCs, respectively, while the aDC3 cluster expressed lower levels of costimulatory molecules compared with the other aDCs, suggesting that these are post-activated aDCs. The distinct gene expression profiles of the different aDC subsets suggest they are derived from different DC populations. This new aDC subtype is located at the center of the medulla and uniquely expresses LAMP3 and CCR7, which are not found in other DC subtypes in the thymus. Their data also showed that aDCs can recruit naïve and regulatory T cells (Tregs) into the thymus medulla through CCR7:CCL19 and CCR4:CCL17/CCL22 interactions, respectively. Interestingly, some aDCs also expressed AIRE, which validated other groups' previous findings (31,32). It has been proposed that AIRE can regulate intercellular transfer of self-antigens from mTECs to thymic DCs to promote thymic tolerance (32,33). Combined with their high costimulatory molecule expression and their interaction with developing T cells, these aDCs may play a role in T cell negative selection; however, functional analyses are needed to further determine the exact role that aDCs may play in T cell selection. Furthermore, whether these aDCs share a common developmental origin with CD8 + DCs and SIRPa + DCs, or whether aDCs merely represent an activated stage of conventional DCs in the thymus, requires further elucidation.
THYMIC DENDRITIC CELLS ON T CELL SELECTION
Thymic DCs are known to express high levels of class I and II MHC molecules (34). It has been well established that thymic DCs play a role in central tolerance and clonal deletion during T cell development (35). In particular, SIRPa + DCs have been shown to transport antigens from the blood and induce Treg development in mice (36). Further validating this point, Dominik Filipp's group recently found a novel CD14 + SIRPa + monocyte-derived DC (moDC) subset in the thymus that was important for the generation of Tregs (37). While moDCs expressed some genes overlapping with SIRPa + DCs, they also expressed high levels of monocyte-associated genes (Mafb, Apoe, and Csf2ra), which are absent in the SIRPa + DC subset, indicating that moDCs are likely a distinct population. Their findings suggested that the TLR9/MyD88 pathway induced mTECs to express chemokines that promoted the recruitment of moDCs to the thymus. These moDCs could also acquire antigens from mTECs. However, whether these or other DCs are able to transfer self-antigens expressed by medullary fibroblasts, which were recently shown to express TRAs that contribute to central tolerance, was not addressed (38,39). Of note, in MyD88ΔTEC mice, which conditionally lack MyD88 in mTECs, there was a decrease in the moDC population in the thymus, leading to an impaired generation of Tregs, and those Tregs that were generated displayed reduced suppressive capacity. The same group also found that specific DC subsets in the thymus have a preference for antigen transfer from different TEC subsets (40). Notably, moDCs were the most efficient in antigen transfer compared with all other thymic DC subsets, and moDCs were able to acquire antigens from multiple mTECs. However, the mechanism by which these cells acquire self-antigens for T cell negative selection remained unclear.
Attempting to answer the above questions, Charles J. Kroger et al. illustrated how thymic DCs can acquire MHC molecules from TECs through intercellular transfer (41). By coculturing thymic DCs from NOD mice with TECs from BALB/c mice that express H2-D d (an MHC class I antigen) and IE d (an MHC class II antigen), the authors found that thymic DCs, compared with splenic DCs, had a higher efficiency in acquiring H2-D d and IE d . The capacity for MHC molecule uptake by thymic CD8 + DCs and SIRPa + DCs was similar. However, this intercellular transfer ability was only found between thymic DCs and TECs, and not with other APCs, such as B cells, when cocultured with thymic DCs. Using qRT-PCR, the authors identified that this intercellular antigen transfer process was correlated with the unique expression of the epithelial marker EpCAM only in DCs found in the thymus. Thymic DCs were previously thought to acquire EpCAM protein from TECs, whereas this paper showed that both thymic CD8 + DCs and SIRPa + DCs can express EpCAM, with SIRPa + DCs expressing the highest level of EpCAM compared with all other DC subtypes in the thymus (42). This intercellular transfer ability in DCs is organ specific and is regulated differently between the different subsets of DCs in the thymus. This was shown when the authors blocked PI3K signaling: the transfer of MHC antigens to CD8 + DCs was reduced, while transfer to SIRPa + DCs was not impacted. This work provided new insights into how thymic DCs can specifically acquire antigens from neighboring TECs in the thymus, and showed that the mechanisms of antigen transfer in the thymic DC subtypes are regulated by different pathways. Further studies can be done to determine the exact mechanism that regulates intercellular antigen transfer between TECs and SIRPa + DCs in the thymus, since these DCs are known to play a role in the generation of Tregs.
Because a majority of thymic DCs are derived in the periphery and migrate to the thymus, they also have the capacity to carry antigens from the periphery to the thymus for T cell selection (35). However, the specific molecules that each thymic DC subtype carries remain unclear. A recent paper from Ulrich von Andrian's group found a new population of DCs that expresses CX 3 CR1 in both humans and mice, which they named transendothelial DCs (TE-DCs) (43). Using multi-photon intravital microscopy, they found that these TE-DCs are located between the microvessels and the thymus, where they can transport blood-borne proteins into the thymus and use them for T cell selection (Figure 2). They also reported that these TE-DCs are a heterogeneous population of DCs, the majority of which are composed of SIRPa + DCs, followed by pDCs. Only a small fraction of TE-DCs was identified as CD8 + DCs. This finding was supported by previous research that looked at the origin of thymic DCs and showed that SIRPa + DCs and pDCs were migratory DCs from the periphery, while CD8 + DCs can be intrathymically derived. This new antigen transport system by CX 3 CR1 TE-DCs depends on the CX 3 CR1 ligand CX 3 CL1, which is expressed by thymus endothelial cells. Recent work by Gretchen Diehl's group also showed that CX 3 CR1 + DCs can capture microbial antigens, present these antigens to developing T cells, and induce microbial-specific T cell expansion (44). Altogether, these findings introduced a new model for T cell selection by thymic DCs in which a specialized subset of CX 3 CR1 + DCs, located at the microvessels, actively takes up blood-borne antigens and transports them into the thymus for T cell selection. However, whether these CX 3 CR1 + DCs have a distinct developmental origin and what signals are responsible for the polarization of CX 3 CR1 + DCs remain unclear.
FIGURE 2 | Localization of dendritic cell and macrophage subsets in the thymus. There are 6 subsets of dendritic cells (DCs) and 2 subsets of macrophage (MФ) in the thymus. SIRPa + DCs and pDCs are located closely to the cortical-medullary junction (CMJ), CD8 + DCs, activated DCs (aDCs), and CD14 + SIRPa + moDCs (moDCs) are located within the medullary region, and transendothelial DCs (TE-DCs) are located between the microvessels in the thymus. Timd4 + macrophages are located within the cortex and uniquely express Spic and Vcam1, while CX 3 CR1 + macrophages are located at the CMJ expressing Runx3 and antigen presenting genes, such as H2-Q7.
THYMIC DENDRITIC CELLS POST INFECTION
It has been shown that the generation of mature T cells from the thymus is attenuated during and after infections (45,46). Since a majority of thymic DCs come from the periphery, whether migratory DCs play a role in thymus damage post infection was unclear. A recent publication by Haojie Wu et al. showed that mature DCs from the circulation can enter the thymus and induce thymus involution through the Notch signaling pathway (47). Upon activation by antigens such as lipopolysaccharide and ovalbumin, DCs have been shown to enhance Jagged1 expression (48,49). Their work showed that these activated, Jagged1-expressing DCs can bind to Notch3-expressing mTECs, and this interaction through the Notch signaling pathway induces apoptosis in mTECs. This in turn led to the disruption of SP cell generation in the thymus. However, this finding needs to be validated in disease models, such as post viral infection. Nonetheless, this work provided a new perspective on thymic atrophy upon infection driven by activated DCs, suggesting that DCs in the thymus may play a deleterious role during an infection; such infection-induced atrophy has previously been thought to be critical for preventing the thymus from inducing self-tolerance against virally encoded antigens. It would also be interesting to test whether blocking DC infiltration into the thymus post infection could prevent thymic atrophy.
THYMIC MACROPHAGES
During T cell development, cells that do not pass positive or negative selection undergo apoptosis (50). It is estimated that over 95% of cells undergo apoptosis in the thymus every day (50,51). However, when isolating cells from the thymus of healthy adult mice, one typically finds that nearly all the thymocytes are live cells, suggesting that apoptotic cells within the thymus are actively and effectively cleared (52,53). The clearing of apoptotic cells is done by intrathymic macrophages (9,30,50,(54)(55)(56). For many years, macrophages in the thymus have not been well characterized nor understood, due to technical limitations in analyzing these cells and performing functional studies. There are only a few well known macrophage markers that have been found to be expressed on thymic macrophages (ED1 and ED2 in rats, CD68, F4/80 and CD11b in mice) making it difficult to study the origin of these thymic macrophages and identify their heterogeneity in the thymus (57)(58)(59)(60). With the advent of scRNA-seq technology, characterizing small cell populations, and performing ontogeny analysis on thymic macrophages have become possible.
A recent publication by Tyng-An Zhou et al. identified two macrophage subsets (Timd4 + and CX 3 CR1 + ) in the thymus of adult mice using scRNA-seq (Figure 2). Both populations of thymic macrophages were found to develop during embryonic life, and the authors found that Timd4 + thymic macrophages were derived from CX 3 CR1 + cells during embryogenesis. The two subsets of thymic macrophages showed distinct gene expression profiles: Timd4 + thymic macrophages expressed high levels of SpiC, MafB, and Vcam1, showing high similarity to the transcriptomic landscape of spleen red pulp macrophages (61,62), while CX 3 CR1 + thymic macrophages had high expression of Runx3 (which is important for cytotoxic CD8 + T cell development) and of genes involved in antigen presentation (B2m, H2-M2, H2-K1, and H2-Q7) (63)(64)(65)(66). These two tissue resident macrophage subsets found in the thymus agree with recent findings from Slava Epelman's group, who showed that Timd4 + and CX 3 CR1 + tissue resident macrophages are found across many organs (heart, liver, lung, kidney, and brain) in mice (67).
The distinct gene profile for these two subsets of thymic macrophages suggested they may have different functions within the thymus. Using immunofluorescence to examine thymic histological sections, Zhou et al. found that Timd4 + macrophages are found mainly in the cortex, while CX 3 CR1 + macrophages are localized in the CMJ. In combination with their transcriptomic profile, this suggests that Timd4 + thymic macrophages are the main cells performing efferocytosis of apoptotic thymocytes. Their findings were also supported by Catherine C. Hedrick's group who demonstrated that Timd4 + F4/ 80 + thymic macrophages have the highest phagocytic efficiency compared with other macrophage subsets, and that the depletion of these macrophages accelerated thymic involution, suggesting an important role in thymic homeostasis (68).
Conversely, CX 3 CR1 + thymic macrophages may play a role in T cell negative selection. This is supported by their location at the CMJ, which is where negative selection initiates, as positively selected thymocytes migrate into the medulla. Combined with their gene expression profile and migratory ability, these thymic macrophages may have the potential to carry self-antigens through blood vessels and present them to developing T cells for negative selection and tolerance induction. However, further studies need to be performed to validate their potential functions in vivo (69).
THYMIC MACROPHAGE IN T CELL SELECTION
As the findings from Zhou et al. suggest, thymic macrophages may play a role in T cell selection through their antigen presenting ability. Other groups have shown that Timd4 + cells in the thymus can also present MHC-I peptides and induce negative selection of CD8 + T cells (70,71). However, as these authors mentioned, Timd4 can also be expressed on thymic DCs; thus, it is difficult to distinguish whether Timd4 + thymic macrophages are the true players in culling self-reactive CD8 + T cells and whether they play a defining role in presenting antigens to developing T cells during negative selection. These data contrast with the scRNA-seq results presented by Zhou's group, in which CX 3 CR1 + thymic macrophages, by their location and gene expression profile, were suggested to have a higher probability of presenting self-antigens for negative selection.
Vijay K. Kuchroo's group generated Timd4 -/- mice and found that Timd4-deficient mice had hyperactive T and B cells, as well as displaying an impairment in efferocytosis by peritoneal macrophages (70). However, the absolute number of thymocytes in Timd4-deficient mice did not differ from that of control wild-type mice, which contrasts with other groups' findings, in which the depletion of thymic macrophages led to an acceleration of thymic involution and hence a decrease in the size of the thymus (68,71). This could be attributed to compensation by other phagocytes in the thymus of Timd4 -/- mice to maintain thymus homeostasis. This has been evidenced in other organs, where depletion of a specific subset of tissue resident macrophages left empty niches that infiltrating monocytes or other tissue resident macrophages quickly occupied, performing functions similar to those of the original tissue resident macrophages (72)(73)(74). Thus, whether thymic macrophages play a role in T cell selection remains to be elucidated.
THYMIC MACROPHAGE DURING THYMUS INJURY
In addition to efferocytosis, phagocytosis, and antigen presentation, tissue resident macrophages have been shown to play a crucial role in tissue repair across many organs (75)(76)(77). After tissue injury, tissue resident macrophages can secrete cytokines (IL-10 and TGFβ), growth factors (FGF, TGFα, and PDGF), and exosomes to promote cell differentiation and suppress inflammation (78). Depletion of tissue resident macrophages in the heart and liver was shown to impair organ healing (67,76,(79)(80)(81). However, whether thymic macrophages can play a similar role in thymus repair is still unclear.
One clinically relevant source of injury to the thymus is irradiation, a process that some cancer patients are subjected to as part of their treatment (82,83). The rate of recovery is crucial, as the thymus is integral for generating T cells that form an immune response. Several groups have sought new approaches to improve thymic recovery post irradiation treatment (84)(85)(86). A recent publication by Gen Yamada's group used a MafB/green fluorescent protein knock-in (MafB +/GFP ) mouse to demonstrate that MafB-expressing cells in the thymus play a crucial role in thymus repair after irradiation. When comparing thymus recovery post irradiation between MafB +/+ and MafB +/GFP mice, the authors found that there was a decrease in immature TECs (Krt5 + FoxN1 + ) generated in the MafB +/GFP thymus. The organization of the medulla was also found to be abnormal post-irradiation injury: mTECs in the MafB +/GFP thymus formed only one prominent medullary compartment, while the MafB +/+ thymus maintained multiple medullary compartments after recovery. Since MafB is a common marker used to identify macrophage populations, it stands to reason that a majority of the cells expressing MafB in the thymus are likely macrophages (17,18,87). This new finding showed that thymic macrophages may play a role in thymus repair, potentially by engulfing apoptotic cells and controlling inflammation in the thymus. These results also suggested that, post thymic injury, macrophages are important for the repair of the thymus architecture and for supporting the regeneration of thymic endothelial cells. However, exactly which of the two thymic macrophage populations plays a role in thymus repair after injury remains unclear. Further studies are needed to assess the role of the two thymic macrophage subsets, Timd4 + and CX 3 CR1 + , in clinically relevant injury models.
CONCLUSION
The thymus is a sophisticated organ that is important for generating T cells, which play a critical role in immune function. As a result, severe consequences can arise if thymic homeostasis is not properly regulated. This demands a thorough understanding of the thymus environment that induces and supports T cell development. Although the T cell selection process conducted by TECs has been well studied, whether thymic DCs and macrophages are important players in T cell development, selection, and thymus homeostasis remains to be further elucidated. With scRNA-seq technology, several groups have been able to identify new populations of DCs in the thymus (aDCs, TE-DCs, and CX 3 CR1 + DCs), each of which appears to serve distinct functions. Macrophage heterogeneity in the thymus was also elucidated using this technology, and we can now appreciate that there are two macrophage populations in the thymus, Timd4 + and CX 3 CR1 + . However, many questions remain, such as: which thymic macrophage subset plays a role in thymus repair? Do thymic macrophages play a role in the negative selection of T cells and, if so, which subset? By addressing these questions, we can pave the way for new clinical therapies aimed at repairing the thymus after injury.
AUTHOR CONTRIBUTIONS
HW wrote the manuscript. JCZ-P wrote and edited the manuscript. All authors contributed to the article and approved the submitted version. | 2022-04-08T13:13:11.691Z | 2022-04-08T00:00:00.000 | {
"year": 2022,
"sha1": "d2ae83b6fb1859434fe92005b7b6381e21544fc4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "d2ae83b6fb1859434fe92005b7b6381e21544fc4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252517285 | pes2o/s2orc | v3-fos-license | Multi-Omics Analysis of GNL3L Expression, Prognosis, and Immune Value in Pan-Cancer
Simple Summary Guanine nucleotide-binding protein-like 3-like (GNL3L) is a novel GTP-binding nucleolar protein. In this study, we analyzed the expression, prognosis, and immune roles of GNL3L in pan-cancer from multiple omics analyses. The final results showed that GNL3L is differentially expressed in a variety of cancers, plays a prognostic role, and has good immune value. Moreover, GNL3L may affect the occurrence of cancer through processes such as ribonucleoprotein, ribosomal RNA processing, and cell proliferation. At the same time, we established an esophageal cancer (ESCA) prediction model with strong predictive ability and proved that GNL3L can significantly affect the proliferation ability of esophageal cancer cells through clone formation assays. In conclusion, GNL3L is an important biomarker. Abstract Guanine nucleotide-binding protein-like 3-like protein (GNL3L) is a novel, evolutionarily conserved, GTP-binding nucleolar protein. This study aimed to investigate the expression, prognosis, and immune value of GNL3L in pan-cancer from multiple omics analyses. Firstly, the expression and prognostic value of GNL3L in pan-cancer were discussed using the TIMER2 database, the GEPIA database, the cBioportal database, COX regression analysis, and enrichment analysis. The association of GNL3L with tumor mutational burden (TMB), tumor microsatellite instability (MSI), mismatch repair (MMR) genes, and immune cells was then analyzed. Finally, an esophageal cancer (ESCA) prediction model was established, and GNL3L clone formation assays were performed. The final results showed that GNL3L is differentially expressed in the vast majority of cancers, is associated with the prognosis of various cancers, and may affect cancer occurrence through processes such as ribonucleoprotein, ribosomal RNA processing, and cell proliferation. At the same time, it was found that the correlation between GNL3L and TMB, MSI, MMR, and various immune cells is significant. The established ESCA prediction model had a strong predictive ability, and GNL3L could significantly affect the proliferation of esophageal cancer cells. In conclusion, GNL3L may serve as an important prognostic biomarker and play an immunomodulatory role in tumors.
Introduction
For decades, because of its high mortality rate, cancer has placed a serious burden on the world [1]. Cancer research has been a focus of scientific study, and researchers are constantly studying cancer from multi-omics perspectives such as imaging, spectroscopy, and genetics [2]. Cancer has emerged as a major public health issue around the world, and the incidence and mortality of cancer are increasing every year [3]. Therefore, finding informative biomarker genes is an important means of preventing and treating cancer. However, genetic studies of a single cancer suffer from problems such as small sample sizes, low statistical efficiency, and poor reproducibility.
Immunohistochemistry Staining
The HPA (https://www.proteinatlas.org/ (accessed on 15 February 2022)) database is based on proteomic, transcriptomic, and systems biology data, which can map tissues, cells, organs, etc. To continue to assess differences in GNL3L protein expression levels, immunohistochemical images of 20 tumors were studied from the HPA database.
Cancer Immune and Molecular Subtyping
The TISIDB database (http://cis.hku.hk/TISIDB/index.php (accessed on 15 February 2022)) collects a large number of human cancer datasets and can be used in human cancer immune and molecular subtype analysis [16]. Therefore, the TISIDB database was used to investigate the correlation between GNL3L and molecular and immunological subtypes of 33 cancers.
Genetic Variation and CNA Variation Analysis
The genetic variation in the GNL3L gene was analyzed using the cBioPortal database (https://www.cbioportal.org/ (accessed on 15 February 2022)) [17]. At the same time, the cBioPortal database was used to download CNA data for 33 tumors, and R language was used for correlation analysis. R4.1.0 was used for data collation and analysis.
Correlation Analysis of Tumor Mutation Burden, Tumor Microsatellite Instability, and Mismatch Repair Gene Expression
TMB refers to the number of DNA mutations carried by a tumor that may lead to the production of neoantigens [20]. MSI refers to the phenomenon of changes in the length of microsatellite sequences due to insertion or deletion mutations during DNA replication [21]. MMR is an important DNA repair mechanism that can accurately identify and repair base mismatches generated during DNA replication or recombination [22]. Somatic mutation data for 33 cancers were downloaded from the TCGA database, and the MAF files were analyzed using the R package "maftools" to calculate tumor TMB and MSI values. MMR detection mainly detects the expression of four proteins, MLH1, PMS2, MSH2, and MSH6, in cancer tissues. The correlation between the GNL3L gene and the expression levels of the four MLH1, PMS2, MSH2, and MSH6 protein genes was analyzed using the TCGA database.
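As an illustrative sketch of the TMB computation described above (not the paper's exact R/maftools pipeline), the snippet below counts non-synonymous somatic variants per sample from a MAF-like table and divides by an assumed exome size of 38 Mb; the column names, file name, and the 38 Mb constant are assumptions.

```python
# Illustrative TMB sketch (not the paper's R/maftools pipeline).
# Assumes a MAF-like table with columns "Tumor_Sample_Barcode" and
# "Variant_Classification"; the 38 Mb exome size is an assumption.
import pandas as pd

NONSYNONYMOUS = {
    "Missense_Mutation", "Nonsense_Mutation", "Frame_Shift_Del",
    "Frame_Shift_Ins", "In_Frame_Del", "In_Frame_Ins",
    "Splice_Site", "Translation_Start_Site", "Nonstop_Mutation",
}
EXOME_SIZE_MB = 38.0  # assumed capture size used to normalize counts

def tumor_mutational_burden(maf: pd.DataFrame) -> pd.Series:
    """Return non-synonymous mutations per megabase for each tumor sample."""
    nonsyn = maf[maf["Variant_Classification"].isin(NONSYNONYMOUS)]
    counts = nonsyn.groupby("Tumor_Sample_Barcode").size()
    return counts / EXOME_SIZE_MB

# Example usage (hypothetical file name):
# maf = pd.read_csv("TCGA_ESCA.maf.tsv", sep="\t", comment="#")
# tmb = tumor_mutational_burden(maf)
```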
Immune Cell Correlation Analysis
Two different algorithms were used. The CIBERSORT algorithm calculates the proportions of many types of immune cells in each sample based on the LM22 signature, after which correlation analysis with GNL3L was performed for each sample [23,24]. The ssGSEA algorithm is an extension of the GSEA method: ssGSEA defines an enrichment score that represents the absolute enrichment of a gene set in each sample within a given dataset, ranking and normalizing the gene expression values of a given sample to generate the enrichment score [25]. The raw data for the two algorithms were obtained from the pan-cancer data of the TCGA database, and R4.1.0 was used for data processing and analysis.
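A minimal sketch of the downstream correlation step is shown below: given per-sample immune-cell fractions (e.g., a CIBERSORT-style output) and GNL3L expression, it computes a Spearman correlation per cell type. The data-frame layout and names are assumptions, not the paper's actual files.

```python
# Minimal sketch of the per-cell-type correlation step (layout assumed):
# `fractions` is a samples x cell-types matrix of immune-cell proportions,
# `gnl3l` is a Series of GNL3L expression indexed by the same samples.
import pandas as pd
from scipy.stats import spearmanr

def correlate_gene_with_fractions(gnl3l: pd.Series,
                                  fractions: pd.DataFrame) -> pd.DataFrame:
    """Spearman correlation of one gene's expression with each cell type."""
    shared = gnl3l.index.intersection(fractions.index)
    rows = []
    for cell_type in fractions.columns:
        rho, p = spearmanr(gnl3l.loc[shared], fractions.loc[shared, cell_type])
        rows.append({"cell_type": cell_type, "rho": rho, "p_value": p})
    return pd.DataFrame(rows).sort_values("rho", ascending=False)
```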
Related Gene Enrichment Analysis
The top 100 proteins associated with GNL3L were obtained using the STRING website (https://string-db.org/ (accessed on 15 February 2022)), a protein-protein interaction (PPI) network was constructed, and the resulting interaction network was visualized in the Cytoscape software [26,27]. Meanwhile, the top 100 genes associated with GNL3L expression were obtained using the GEPIA2 database. The Pearson correlation coefficients between these 100 genes and GNL3L are all between 0.62 and 0.74, and the p values are all less than 0.05. The genes corresponding to the 100 proteins in the PPI network and the top 100 genes obtained from the GEPIA2 database were intersected, and a correlation analysis was then performed on the intersected genes to draw a chord diagram. Finally, enrichment analysis was performed on all genes in the two datasets obtained above [28,29]. In addition, the cellular localization and function of GNL3L were analyzed using the GeneCards database [30].
Establishment of ESCA Prediction Model
Cox regression was utilized to investigate the related characteristics of survival and prognosis in ESCA patients, and the related factors were used to construct a nomogram and establish ESCA prediction models [31]. Finally, the calibration curve and ROC curve were utilized to verify and estimate the predictive accuracy of the nomogram [32].
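A minimal sketch of the Cox modelling step is shown below; the paper's analysis was performed in R, so the lifelines package is used here purely for illustration, and the column names are assumptions (covariates are assumed to be numerically encoded).

```python
# Minimal Cox regression sketch using the `lifelines` package (illustration
# only; the paper's analysis was performed in R). Assumed column names:
# OS time, OS event flag, numerically encoded TNM stages, and GNL3L group.
import pandas as pd
from lifelines import CoxPHFitter

def fit_cox(df: pd.DataFrame) -> CoxPHFitter:
    """Fit a multivariate Cox model on survival time, event, and covariates."""
    cph = CoxPHFitter()
    cph.fit(df[["OS_time", "OS_event", "T_stage", "N_stage",
                "M_stage", "GNL3L_high"]],
            duration_col="OS_time", event_col="OS_event")
    return cph

# Example usage (hypothetical file):
# clinical = pd.read_csv("esca_clinical_with_gnl3l.csv")
# model = fit_cox(clinical)
# model.print_summary()  # hazard ratios and p-values per covariate
```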
Cell Culture and Clone Formation Assay
Esophageal cancer cell lines (KYSE30 and KYSE150) were obtained from the Shanghai Cell Bank (Shanghai, China), and all cell lines were cultured in DMEM high-glucose medium supplemented with 10% fetal bovine serum, 100 units/mL penicillin, and 100 µg/mL streptomycin at 5% CO 2 . For clone formation assays of GNL3L overexpression and knockdown, KYSE30 and KYSE150 cells were seeded into 6-well plates (1000 cells/well). After 14 days of culture, the cell clones that had formed were examined, fixed with 4% paraformaldehyde, and stained with 0.1% crystal violet, and colonies containing at least 50 cells were counted for analysis.
The GTEx dataset was utilized to assess the differential expression of GNL3L between normal and malignant tissues for malignancies lacking matched normal tissues in the TIMER2 database. The GEPIA database analysis results show that GNL3L is highly expressed in DLBC (diffuse large B-cell lymphoma), LAML (acute myeloid leukemia), LGG (low-grade glioma of the brain), and TGCT (testicular cancer) tumor tissues (Figure 1B).
Using the UALCAN database for the protein expression analysis of the CPTAC dataset, the total GNL3L protein acquired from the CPTAC dataset showed raised protein expression levels of GNL3L in tissues of breast cancer, clear cell RCC, colon cancer, lung adenocarcinoma, ovarian cancer, and UCEC compared to normal tissues ( Figure 1C).
Next, the differences in GNL3L protein expression levels were analyzed using immunohistochemical images from the HPA database. Results from the HPA database showed that GNL3L protein expression was significantly higher in stomach cancer, testis cancer, melanoma, lung cancer, skin cancer, liver cancer, breast cancer, ovarian cancer, endometrial cancer, lymphoma, renal cancer, and pancreatic cancer than in normal tissue (Figure 2). Figure 2. Immunohistochemical images of differential expression of GNL3L protein in 12 tumors.
GNL3L Expression Associated with Molecular Subtypes and Clinical Stages in Human Cancers
Moreover, the expression of GNL3L was investigated in several molecular subtypes and clinical phases. Using the TISIDB database, the expression of GNL3L was found to be significantly different among dissimilar molecular subtypes of ACC, BRCA, COAD, ESCA, KIRP, LGG, LIHC, PRAD, and STAD ( Figure 3A). Then, the TCGA database GNL3L clinical data were analyzed, and the findings revealed that GNL3L expression varied depending on the clinical phase of COAD, HNSC, KIRC, KIRP, MESO, and SKCM ( Figure 3B).
Genetic Variation and CNA Alterations of GNL3L in Human Cancer
Cancer development and immunological tolerance are influenced by genetic and epigenetic alterations. GNL3L genetic variations and CNA changes were investigated further using the cBioPortal; GNL3L changes comprised mutations, amplifications, structural variations, deep deletions, and numerous alterations, according to the findings. Endometrial cancer, cutaneous melanoma, bladder cancer, and esophageal cancer all have mutations as the most prevalent form of change. GNL3L mutations were found in about 6.6% of UCEC patients; the "amplified" type of alterations accounted for most of the alterations in UCS (uterine carcinosarcoma) cases, with a frequency of about 3.51%; and the "deeply deleted" type of alterations accounted for most of the alterations in ESCA, with a frequency of about 2.2% ( Figure 4A). The analysis showed that the main type of genetic variation in GNL3L was missense mutation. R369H site alteration was found in two COADs and one UCEC ( Figure 4B). The missense mutation at the R369H site may lead to the abnormal structure and function of GNL3L protein in the body, which may cause disease. This mutation is harmful. The 3D structure of the GNL3L protein containing the R369H site is shown ( Figure 4C).
Prognostic Analysis of GNL3L Expression in Pan-Cancer
Univariate OS, PFI, and DSS analysis of data from 33 cancer types showed that GNL3L has different prognostic values in dissimilar types of cancer.
Prognostic Analysis of GNL3L Expression in Pan-Cancer
Univariate OS, PFI, and DSS analysis of data from 33 cancer types showed that GNL3L has different prognostic values in dissimilar types of cancer.
Correlation of GNL3L with TMB, MSI, and MMR
After determining the prognostic value of GNL3L, the association between GNL3L and TMB, MSI, and MMR in 33 cancers was discussed. A TMB and MSI correlation radar
Correlation of GNL3L with TMB, MSI, and MMR
After determining the prognostic value of GNL3L, the association between GNL3L and TMB, MSI, and MMR in 33 cancers was discussed. A TMB and MSI correlation radar plot showed that GNL3L is associated with TMB in BRCA, LGG, LUAD, SARC, STAD, THCA, and THYM, and in ACC, BRCA, CESC, DLBC, HNSC, KIRC, LUSC, PRAD, SARC, and THCA, it is related to MSI ( Figure 8A,B). The heatmap of the correlation between GNL3L and MMR showed that GNL3L is co-expressed and significantly associated with PMS2 in 32 cancers except for ESCA; GNL3L is associated with MSH6 in 28 cancers except for ESCA, LAML, UCS, DLBC, and ACC, where it is co-expressed and significantly correlated; GNL3L was co-expressed with MSH2 in 32 cancers except for UCS; GNL3L was co-expressed with MLH1 in 26 cancers except for LUSC, READ, LAML, STAD, UCS, CHOL, and MESO, which are significantly correlated ( Figure 8C).
Correlation between GNL3L and Immune Microenvironment
Following the discovery of GNL3L's predictive usefulness, the researchers investigated the link between GNL3L and tumor-infiltrating immune cells in 33 malignancies. The CIBERSORT method was used to assess the components of the tumor immune cell microenvironment in 33 malignancies from the TCGA. Clustered heatmaps based on the correlation between GNL3L and immune cells showed that GNL3L is positively correlated with T cell CD4 memory, part of the majority of cancers, especially ESCA (Spearman
Correlation between GNL3L and Immune Microenvironment
Following the discovery of GNL3L's predictive usefulness, the researchers investigated the link between GNL3L and tumor-infiltrating immune cells in 33 malignancies.
The CIBERSORT method was used to assess the components of the tumor immune cell microenvironment in 33 malignancies from the TCGA. Clustered heatmaps based on the correlation between GNL3L and immune cells showed that GNL3L is positively correlated with memory CD4 T cells in the majority of cancers, especially ESCA (Spearman r = 0.26, p = 0.001), PAAD (Spearman r = 0.31, p = 3.75 × 10 −5 ), and DLBC (Spearman r = 0.30, p = 0.043). GNL3L is negatively correlated with regulatory T cells (Tregs) in the majority of cancers; however, a positive correlation was found in ESCA (Spearman r = 0.25, p = 0.001) and LAML (Spearman r = 0.17, p = 0.043). GNL3L was also negatively correlated with CD8 T cells in the majority of cancers; however, it is positively correlated in UVM (Spearman r = 0.23, p = 0.045) (Figure 9A).
GNL3L-Related Gene Enrichment Analysis
In order to further study the molecular mechanism of GNL3L in carcinogenesis, we first obtained the top 100 proteins related to GNL3L using the STRING tool, constructed a PPI network, and expressed the network of these gene interactions in the Cytoscape software ( Figure 10A). Subsequently, the GEPIA2 database was used to obtain the top 100 genes associated with GNL3L expression. Then, the genes corresponding to the 100 proteins in the PPI and the top 100 genes obtained in the GEPIA2 database were intersected, and a Venn diagram was drawn to obtain the four most common genes in the above two datasets: WDR43, DDX18, WDR36, and HEATR1 ( Figure 10B). Then, a gene expression correlation analysis between GNL3L and the above four genes was performed, and a chord diagram was drawn ( Figure 10C). In it, the line in the figure represents the correlation information between two genes; red represents positive correlation; green represents negative correlation; and the thicker the line, the higher the correlation strength. The correlation coefficient can be calculated with the disc scale. It can be seen from the figure that the correlation coefficients between the five genes are all between 0.7 and 0.85, and the correlation is high. To further investigate the function and pathway enrichment analysis of GNL3L, KEGG and GO enrichment analyses were performed using the 197 genes above. The KEGG pathway analysis showed that GNL3L is associated with the ribosome pathway in eukaryotes. In addition, GO analysis showed that most of these genes were associated with ribonucleoprotein, rRNA processing, rRNA metabolic process, and ncRNA metabolic processes in the BP class; pre-ribosomal, telomerase holoenzyme complex, and small subunit process groups in the CC class; and catalytic activity, helicase activity, and ATPase activity acting on RNA in the MF class ( Figure 10D). To further explore the association between GO analysis results and the cellular localization and functions of GNL3L, the cellular localization and functions of GNL3L gene were analyzed using the GeneCards database. The GeneCards database showed that GNL3L is subcellularly localized to the nucleus and nucleolus, and it is essential for ribosomal pre-rRNA processing and cell proliferation. Through GO analysis results and the cellular localization and functions of GNL3L, it can be seen that GNL3L may affect the occurrence of cancer through ribosomal RNA processing.
Establishment of ESCA Prediction Model
In all the above analyses, the results indicated that GNL3L expression is significantly correlated with prognosis and immune cell infiltration in various cancers. Especially in ESCA, GNL3L showed high expression, poor prognosis, and significant correlations with molecular subtype, immune subtype, immune cells, and the OS, PFI, and DSS curves. Therefore, to further investigate GNL3L and understand its prognostic role in ESCA, we tested whether the GNL3L gene is an independent prognostic factor using univariate and multivariate Cox regression analysis. Univariate Cox regression analysis revealed that M (p < 0.001), N (p < 0.001), T (p = 0.013), and the GNL3L gene (p = 0.038) are associated with survival prognosis in ESCA patients ( Figure 11A). A multivariate Cox regression analysis using only the statistically significant features of the univariate Cox model followed. The results showed that M (M1 vs. M0 p < 0.001), N (N1 vs. N0 p = 0.003, N2 vs. N0 p = 0.001), T (T4 vs. T1 p = 0.041), and the GNL3L gene (high vs. low p = 0.021) are all independent risk factors for poor OS in ESCA patients ( Figure 11B). Among them, T staging indicates the size and extent of the primary tumor; N staging indicates the status of regional lymph node metastasis; and M staging indicates the presence or absence of distant metastasis. These results confirmed that the GNL3L gene can be used as an independent prognostic factor for ESCA patients.
A nomogram based on the results of the multivariate Cox analysis for predicting the probability of survival in ESCA patients was then constructed ( Figure 11C) to further improve prediction accuracy. To assess the predictive risk potential of this nomogram, a Kaplan-Meier analysis was performed by stratifying all patients based on the median risk score derived from the nomogram, which incorporates M0, N1, and T3 staging and GNL3L gene expression levels ( Figure 11D); high-risk patients exhibited markedly worse OS than low-risk patients. To validate the prediction model, ROC curves and calibration plots were drawn to evaluate the predictive performance. The results showed AUC values of 0.726, 0.782, and 0.833 for the ROC curves at 1 year, 2 years, and 3 years, respectively ( Figure 11E). The calibration curve showed good agreement between the nomogram's predictions and the actual observations ( Figure 11F). These results further strengthen the clinical significance of GNL3L expression in relation to M0, N1, and T3 staging, indicating that the model has good predictive power for survival outcomes in ESCA patients.
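The risk-stratified survival comparison described above can be sketched as follows (illustration only; the variable names and data layout are assumptions): patients are split at the median nomogram risk score and the two groups are compared with a log-rank test.

```python
# Illustrative sketch of median risk-score stratification and log-rank test
# (variable names and data layout are assumptions, not the paper's files).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_risk_groups(df: pd.DataFrame) -> float:
    """Split patients at the median risk score and return the log-rank p-value."""
    high = df["risk_score"] >= df["risk_score"].median()
    result = logrank_test(df.loc[high, "OS_time"], df.loc[~high, "OS_time"],
                          event_observed_A=df.loc[high, "OS_event"],
                          event_observed_B=df.loc[~high, "OS_event"])
    # Optional: plot the two Kaplan-Meier curves for visual comparison
    for label, mask in (("high risk", high), ("low risk", ~high)):
        KaplanMeierFitter().fit(df.loc[mask, "OS_time"],
                                df.loc[mask, "OS_event"],
                                label=label).plot_survival_function()
    return result.p_value
```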
Analysis of Clone Formation Assays of GNL3L Overexpression and Knockdown
By means of transient transfection, we increased or decreased the expression of the GNL3L gene in two esophageal cancer cell lines (KYSE30 and KYSE150) and performed clone formation assays. In the GNL3L overexpression experiment, the numbers of cell clones in the KYSE30-Vector (control), KYSE30-GNL3L-OE, KYSE150-Vector, and KYSE150-GNL3L-OE groups were 101.667 ± 7.638, 160 ± 26.458, 105 ± 13.229, and 192.667 ± 40.266, respectively. The number of clones in the overexpression group of each cell line was significantly higher than in the corresponding control group (KYSE30, t = 3.669, p = 0.0214; KYSE150, t = 3.583, p = 0.0231) ( Figure 12A,C). In the GNL3L knockdown experiment, two sequences, KO1 and KO2, were designed to knock down GNL3L. The numbers of clones in the control, GNL3L-KO1, and GNL3L-KO2 groups were 100.667 ± 19.629, 46.333 ± 10.599, and 36.667 ± 4.726 for KYSE30, and 103 ± 7, 59.333 ± 5.132, and 43.667 ± 4.041 for KYSE150, respectively. The number of clones in the knockdown groups of both cell lines was significantly reduced compared with the control group (p = 0.002054, p = 0.000031) (Figure 12B,D).
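The KYSE30 overexpression comparison reported above can be reproduced directly from the stated summary statistics with a two-sample t-test, as sketched below; this assumes the reported values are mean ± SD with n = 3 per group and equal variances.

```python
# Sketch: reproduce the reported KYSE30 overexpression comparison from the
# summary statistics in the text (assumes mean +/- SD and n = 3 per group).
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=160.0, std1=26.458, nobs1=3,    # GNL3L-OE
                            mean2=101.667, std2=7.638, nobs2=3,   # Vector
                            equal_var=True)
print(f"t = {t:.3f}, p = {p:.4f}")  # approximately t = 3.669, p = 0.021
```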
The results of two clone formation assays showed that the clone formation ability of esophageal cancer cells was significantly increased after the high expression of GNL3L, and the clone formation ability of esophageal cancer cells was significantly reduced after the low expression of GNL3L. Furthermore, it can be seen that GNL3L significantly affects the proliferation ability of esophageal cancer cells.
Discussion
Previous studies have shown that green fluorescent protein (GFP) can localize to the nucleolus without changing the conformation or function of the original protein when fused to a nucleolar localization signal (NoLS) [33]. Rao et al. tagged GNL3L with GFP, observed the subcellular localization of GNL3L-GFP in Cos-7 cells by fluorescence microscopy, and found that it was mainly localized in the nucleolus and partially distributed in the nucleoplasm; the same results were obtained in U2OS cells [34]. Therefore, GNL3L is mainly distributed in the nucleolus and partially in the nucleoplasm. Later studies showed that GNL3L is a nucleocytoplasmic shuttling protein that can move between the nucleolus, nucleus, and cytoplasm. Other researchers found that GNL3L plays an important role in a variety of cancers, as well as in cell proliferation and ribosome synthesis [35][36][37]. However, we did not find any literature on a pan-cancer analysis of GNL3L. In this context, we analyzed the prognostic and immunological value of GNL3L in pan-cancer.
In this study, we performed multi-omics analyses of the genome, transcriptome, proteome, and epigenome. First, the expression level of the GNL3L gene was examined using TIMER2 and GEPIA. The results showed that GNL3L is highly expressed in BLCA, BRCA, CHOL, COAD, ESCA, GBM, HNSC, LIHC, LUAD, LUSC, READ, STAD, UCEC, DLBC, LAML, LGG, and TGCT, whereas GNL3L expression is low in KICH, KIRC, KIRP, SKCM, and THCA. GNL3L protein expression levels were then investigated using the UALCAN and HPA databases, and immunohistochemical images in various cancers were examined. This indicates that GNL3L is differentially expressed in the majority of cancers and is a potential biomarker. The relationship between GNL3L expression and molecular subtypes and clinical stages in cancer was discussed next, leading to a study of potential mechanisms of action. The results suggested that GNL3L is associated with both molecular subtypes and clinical stages in most cancers and may play a role in cancer growth and progression. Studies have shown that the vast majority of cancers are caused by genetic mutations that alter cancer cells and enhance their ability to outgrow surrounding normal cells, which is also a focus of research on carcinogenesis [38,39]. In the current study, genetic variation and copy-number alterations of GNL3L were further explored using cBioPortal. The results indicated that GNL3L is genetically altered in a variety of cancers, and that the types of variation are mainly mutations, structural variants, and deep deletions. The prognostic relevance of GNL3L expression in cancer was also analyzed: univariate OS, PFI, and DSS analyses of 33 cancer types showed that GNL3L has different prognostic values in different cancer types.
In recent years, with the rise of immune checkpoint inhibitors, traditional tumor treatment strategies have changed greatly. In theory, the greater the number of tumor mutations, the more likely neoantigens are to be generated [40]. Therefore, TMB is an important indicator for predicting the efficacy of immunotherapy, and MSI is also an important immune marker. The results of this study show that GNL3L expression is associated with TMB and MSI in various cancers, so high GNL3L expression may affect the response to immune checkpoint inhibitors. At present, a common immunohistochemical approach is to detect the expression of the MMR genes MLH1, MSH2, MSH6, and PMS2 in tumor tissue [41]. If any of these proteins is completely absent, the tumor is interpreted as dMMR. Therefore, to continue the discussion of the correlation of GNL3L expression with immunotherapy, we analyzed the correlation of GNL3L with these four genes in 33 cancers. The heatmap showed that in UCS GNL3L was positively correlated only with PMS2, whereas in the remaining cancers it was significantly co-expressed with at least two of these genes. Therefore, we reasoned that GNL3L could be a novel biomarker associated with immune checkpoint inhibitors.
The tumor immune microenvironment has recently become a prominent topic in tumor research [42], yet the effect of GNL3L on the immune microenvironment has rarely been studied. The link between GNL3L and tumor-infiltrating immune cells was therefore studied across 33 malignancies. The results of the CIBERSORT and ssGSEA algorithms showed that GNL3L is significantly associated with immune cells such as memory B cells, CD56-bright natural killer cells, CD56-dim natural killer cells, and type 2 T helper cells in the majority of cancers. In most cancer types, there are significant differences in GNL3L expression across immune subtypes, which suggests that GNL3L is a novel immune-related biomarker. These findings help to define the role of GNL3L in the immune microenvironment and in pan-cancer analysis.
To further investigate the molecular mechanism of GNL3L in carcinogenesis, a PPI network was constructed and GNL3L-related genes were subjected to functional and pathway enrichment analyses. KEGG pathway analysis indicated that GNL3L influences ribosome biogenesis in eukaryotes. GO analysis showed that GNL3L-related genes are associated with ribonucleoprotein complex biogenesis, rRNA processing and metabolic processes, and ncRNA metabolic processes in the BP class; with the preribosome, telomerase holoenzyme complex, and small-subunit processome in the CC class; and with catalytic activity acting on RNA, helicase activity, and ATPase activity in the MF class. The gene enrichment analysis therefore suggests that GNL3L is closely related to metabolic processes and ribosome synthesis.
Meanwhile, we found that GNL3L expression is significantly correlated with prognosis and immune cell infiltration in several cancers, such as ESCA, KIRC, LGG, and SARC. In ESCA, high GNL3L expression was associated with shorter survival in the OS, PFI, and DSS analyses compared to patients with low GNL3L expression, showed significant differences across molecular and immune subtypes, and was significantly associated with most immune cell types. In addition, previous studies have suggested that GNL3L may be a potential prognostic marker for ESCA [8,37]. To analyze the prognostic and clinical value of GNL3L in cancer more deeply, a predictive model for ESCA was finally constructed. A multivariate Cox-based nomogram for predicting the probability of survival in ESCA patients was built by combining GNL3L with the clinical characteristics that were statistically significant in the univariate Cox analysis, and the ROC curves indicated that the model had good predictive performance. For example, a patient at the M0, N1, and T3 stages with high GNL3L expression would receive a total of approximately 110 points, with predicted OS rates of approximately 76.0%, 47.0%, and 27.0% at 1, 2, and 3 years, respectively. These results indicate that the model has good predictive ability and some clinical value. Finally, to further study the effect of GNL3L expression on esophageal cancer, we performed a clone formation assay after GNL3L overexpression and knockdown. The results show that GNL3L significantly affects the proliferative ability of esophageal cancer cells.
In conclusion, the expression analysis showed that the GNL3L gene is differentially expressed in the vast majority of cancers and is correlated with molecular subtypes and clinical stages. GNL3L protein expression is significantly higher in various cancers than in normal tissues. We also found that GNL3L alterations are mainly missense mutations in most cancers. The OS, PFI, and DSS analyses showed that GNL3L has different prognostic values in different cancers. In the immune analyses, GNL3L was significantly associated with TMB, MSI, MMR genes, and various immune cells in multiple cancers. To further analyze the prognostic and clinical value of GNL3L, we constructed a multivariate Cox-based risk nomogram for predicting the survival probability of ESCA patients and found that the model has good predictive ability. Finally, we performed clone formation experiments after GNL3L overexpression and knockdown, which showed that GNL3L significantly affects the proliferative ability of esophageal cancer cells. Together, these analyses illustrate the expression, prognostic, and clinical value of GNL3L, demonstrating the importance of GNL3L expression in the early detection and prognosis of multiple cancers. Therefore, GNL3L can serve as a potential pan-cancer biomarker.
Conclusions
Although we integrated multiple databases and performed a comprehensive and systematic analysis of GNL3L, this study still has some limitations. First, public databases were used, and the quality of data collection and the procedures employed to obtain the data may differ from one source to the next, which may affect the conclusions of some analyses. Second, this study was limited to data analyses of GNL3L across cancers and a single biological experiment, the clone formation assay. More experimental work is needed to determine the precise function of GNL3L in cancer occurrence and prognosis.
In conclusion, we analyzed the expression characteristics, enrichment results, and prognostic value of GNL3L in pan-cancer and established an ESCA prediction model. We also performed clone formation assays to further support the functional relevance of GNL3L. These results demonstrate the importance of GNL3L expression in the detection and prognosis of cancer. Therefore, GNL3L can be used as a potential prognostic immune biomarker.
"year": 2022,
"sha1": "9ea2d598668e87253d3392cd4b01032c763929d5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/14/19/4595/pdf?version=1663930149",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d43ebc9e5eb96eabbcbaf9f5e8fadaf030a7ac5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comment on "The phase diagram of the multi-matrix model with ABAB-interaction from functional renormalization"
Recently, [JHEP 20 131 (2020)] obtained (a similar, scaled version of) the ($a,b$)-phase diagram derived from the Kazakov--Zinn-Justin solution of the Hermitian two-matrix model with interactions \[\mathrm{Tr\,}\Big\{\frac{a}{4} (A^4+B^4)+\frac{b}{2} ABAB\Big\}\,,\] starting from Functional Renormalization. We comment on something unexpected: the phase diagram of [JHEP 20 131 (2020)] is based on a $\beta_b$-function that does not have the one-loop structure of the Wetterich-Morris Equation. This raises the question of how to reproduce the phase diagram from a set of $\beta$-functions that is, in its totality, consistent with Functional Renormalization. A non-minimalist, yet simple truncation that could lead to the phase diagram is provided. Additionally, we identify the ensemble for which the result of op. cit. would be entirely correct.
Main claim and organization
We prove the following: For a Hermitian two-matrix model including the 'ABAB-interaction' vertex $\frac{b}{2}\,\mathrm{Tr}(ABAB)$, the only one-loop, one-particle irreducible (1PI) diagram of order $b^2$ that has a connected external leg structure (that is, such that it contributes to a connected-boundary correlation function) is the diagram displayed in (1.1) [not reproduced here]. Its external leg structure is the cyclic word ABBA, represented by the corresponding ribbon vertex [picture not reproduced here]. This implies that the $\beta_b$-function given in [1, Eq. 3.41] (quoted in Section 2 below) cannot be correct. Different infrared regulators might lead to different coefficients (containing non-perturbative information), but the one-loop structure in Functional Renormalization should be evident; this means that the coefficient of $b^2$ in $\beta_b$ must vanish. Next, in Section 2, these statements are presented in detail in Claims 2.1, 2.2 and 2.3 (at the risk of being redundant) and proven. A short (albeit rich in examples) user's guide to colored ribbon graphs (Section 2.1) prepares the core of this comment (Section 2.2). In Section 3 we further propose a more generous truncation to obtain the phase diagram. We compute in the large-$N$ limit without further notice.
Context, definitions, examples and proofs
The two-matrix model in question has the partition function given in eq. (2.1) [not reproduced here]. Its exact solution yields [2, Fig. 4] a phase diagram of right-angled trapezoidal form for the couplings $(a, b)$, called there $(\alpha, \beta)$, as well as in [1]. A consistent phase diagram of trapezoidal form (i.e. predicting $\approx 1/10$ for both critical values, whose exact values are $a_\star = b_\star = 1/4\pi$) is one of the main results presented in [1], which addressed the model (2.1) using Functional Renormalization. The phase diagram [1, Fig. 1] obtained from $(\beta_a, \beta_b)$ follows from a correct expression for $\beta_a$ but also from $\beta_b = (\eta_A + \eta_B + 1)\,b - \tfrac{2}{5}\,[(5-\eta_A) + (5-\eta_B)]\,b^2$, which is [1, Eq. 3.41]. We prove here that this $\beta_b$-function is incompatible with the well-known one-loop structure [4] of the Wetterich-Morris Functional Renormalization Group Equation [5,6]. While the behavior $\beta_g = g^2 + \dots$ is indeed common for other quartic operators $gO$, this does not happen for $b\,\mathrm{Tr}(ABAB)$. The rest of the section introduces the terminology (Section 2.1) and proves the claims in detail (Section 2.2).
2.1. Colored ribbon graphs in multi-matrix models by example. Feynman graphs turn out to be useful also for 'non-perturbative [4] renormalization'. For sake of accessibility to a broader readership, we provide in this section an (incomplete) user's guide to graphs in multi-matrix models. The representation of the integrals of matrix models using ribbon graphs (or fat graphs), famous due to 't Hooft [7], is of paramount importance both in physics and mathematics; applications are also worth mentioning [8,9] (see in particular its relation to discrete surfaces or maps studied by Brezin-Itzykson-Parisi-Zuber [10]). The theory of ribbon graphs can be formulated in an extremely precise way [11], but for the purpose of this comment, the most important feature is that their vertices have a cyclic ordering-this is typically depicted with the aid of a disk with some thick strips (half-edges) disjointly attached to it. Each strip represents a matrix, and these are adhered (in our convention, clockwise) to the vertex, as in the following picture: where A, B, C, D are matrices of the same ensemble 4 . Sometimes the disk is omitted, as we often do below, and the coupling constant (here g), too. The rotation of the vertex is conform with the cyclicity of the trace, but one is not allowed to reflect the picture. This way, ribbon vertices, unlike ordinary ones, are sensitive to non-cyclic reorderings of the half-edges (see e.g. [12, Fig. 1 and eq. (18)]). The representation of the interaction (2.1b) in terms of ribbon (or fat) vertices reads 5 : Since the above graphical representation will be used only as a cross-check, we ignore the symmetry factors (strictly, we should put a root on one edge of each interaction vertex) and also absorb the couplings in the vertices. The cyclic ordering means, in particular, that , which faithfully represents the obvious: TrpABABq " { TrpABBAq.
(2.3)
Edges are also fat (double lines) and consist of pairings of half-edges (which for Hermitian matrix models cannot have net 'twists'). To emphasize that these arise from propagators, we shade the edges. While for a one-matrix model all edges are equal (for instance, in the graph (2.4) arising in a quartic one-matrix model [not reproduced here]), the new feature in multi-matrix models is the coloring of the edges. In the ABAB-model (which contains the $A^4$ and $B^4$ interactions) the graph (2.4) is possible in green (implying only the $A$ matrix) or in red (only $B$), but a coloring of the graph (2.4) in such a way that it contains (at least one occurrence of) the ABAB-vertex is not possible. An example of a ribbon graph with two such vertices is shown in the following picture [not reproduced here], in which the vertices $v_i$, the edges or propagators $e_i$, and the faces $f_i$ are depicted (a face is a boundary component of the ribbon graph; here we have three bounded faces, $f_1, f_2, f_3$, and one unbounded, $f_4$). The colored ribbon graphs of multi-matrix models, just as their uncolored version, have a topology determined by the Euler number $\chi$ (where $\chi = \#\text{vertices} - \#\text{edges} + \#\text{faces}$; in this case clearly $\chi = 2$, corresponding to a spherical topology; this turns out to be important, since the graph amplitudes scale with the matrix size $N$ as $\sim N^\chi$). If we had an octic interaction vertex $g\,\mathrm{Tr}(ABABABAB)$, one of the (several) possible diagrams is the following [not reproduced here]. One can verify, by starting at any point of the boundary of the diagram (say, at the point $P$ in the picture) and following the arrows, that when one comes back to $P$, one has already visited the whole boundary once. Therefore, this fat graph has a single face. In that picture, the arrow at a 45° angle emphasizes the criterion for travelling along the circle that defines the boundary of the face: when one arrives at the disk (marked with the coupling $g$) one picks the closest single line, regardless of the color, for the cyclic order at the vertex determines precisely which one goes next. The counting of vertices (one), faces (one) and edges (four) exhibits its non-planarity (i.e. it cannot be drawn on a sphere; the best one can do is draw it without intersections on a surface of genus 2), but below we will find only planar diagrams.
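For concreteness, the Euler-number bookkeeping for the two examples just discussed can be written out explicitly; the edge count of the two-vertex vacuum graph (four propagators) is inferred from the stated $\chi = 2$ rather than read off the omitted picture.
\[
  \chi \;=\; \#\text{vertices} \;-\; \#\text{edges} \;+\; \#\text{faces} \;=\; 2 - 2g ,
\]
\[
  \text{two-vertex vacuum graph:}\quad \chi = 2 - 4 + 4 = 2 \;\Rightarrow\; g = 0 \quad (\text{planar, amplitude} \sim N^{2}),
\]
\[
  \text{octic one-vertex graph:}\quad \chi = 1 - 4 + 1 = -2 \;\Rightarrow\; g = 2 \quad (\text{non-planar, amplitude} \sim N^{-2}).
\]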
So far, all examples we presented are vacuum graphs. We now consider also graphs having half-edges that are not contracted, as in A face of a ribbon graph is said to be unbroken if no (uncontracted) half-edges are incident to it; the face is otherwise said to be broken, precisely by the incident half-edges. (Thus, vacuum graphs can only have unbroken faces.) In the graph (2.7), the faces f 1 and f 2 are unbroken, while the unbounded face f 3 is broken by the two red half-edges pointing northeast and northwest.
Given a (colored) ribbon graph one can forget the cyclic ordering of its vertices (and if present, the coloring of its edges) and thus obtain a graph in the ordinary sense. For instance, Notice that to the uncontracted half-edges of the ribbon graphs one associates leaf (degree one vertex). A connected (colored) ribbon graph is said to have a one-loop structure if its underlying ordinary graph has a one-loop structure, that is, if the latter has a first Betti number 7 equal to 1; alternatively, if it has one independent cycle. Therefore (2.9) above has a one-loop structure, but neither of the following has it: for the upper ribbon graph has no loops and the other has three. A connected (colored) ribbon graph is one-particle irreducible (1PI) if its underlying ordinary graph is 1PI. An ordinary graph is 1PI if it is neither a tree-a graph for which there is a unique path between any two given vertices-nor it can be disconnected by removing exactly one edge. The (ordinary) graph in (2.10a) is a tree and it can be disconnected by cutting either the straight or the curved edge. Therefore the fat graph in (2.10a) not 1PI. On the other hand, the ordinary graph in (2.10b) is neither a tree (for there exist vertices connected by more than one path) and if one removes any edge, it remains connected. Therefore its primitive ribbon graph in the left of (2.10b) is 1PI. The next example is not at tree, (2.11) but removing one propagator (the green one in position p p ) disconnects it, so it is not 1PI.
We now introduce the last concept. The external leg structure of a ribbon graph with one broken face is obtained by reading off clockwise the cyclic word formed by the matrices (associated with the half-edges) that break that face, going around the boundary-loop exactly once. This process is known in the matrix field theory literature [13] (illustrated in [14,Sec. 5]) and a generalization to multi-matrix models requires to additionally list the half-edges respecting the coloring. We illustrate this concept, reusing graphs previously drawn: ‚ Concerning the graph (2.7): it has an external leg structure BB, since going along the f 3 face, one meets twice a red line.
‚ The graph (2.9) has external leg structure ABBA.
‚ The ribbon graph in (2.10a) has external leg structure AAABAABA.
‚ That in (2.10b) has external leg structure AA.
An external leg structure determines in a natural way a new interaction vertex of matrix models just by 'taking its trace'. The cyclicity of the external leg structure yields well-definedness. In the list of external leg structures 1 through 4 above, these correspond to 1. $\mathrm{Tr}(B^2)$, 2. $\mathrm{Tr}(ABBA)$, 3. $\mathrm{Tr}(AAABAABA)$ and 4. $\mathrm{Tr}(A^2)$. The next example, shown in (2.12) [not reproduced here], probably explains why we restricted ourselves to graphs with a single broken face. The face inside the red loop yields $A^2$; the same for the outer face. Thus, the external leg structure of (2.12) is the disjoint union of $A^2$ with $A^2$. Graphs with more than one broken face will not appear below, since these lead to multi-traces, in the case of (2.12) to $\mathrm{Tr}(A^2)\times\mathrm{Tr}(A^2)$, and these multi-trace interactions are not considered in the article we are commenting on (but are treated in [15]). The idea of a more general setting is depicted in Figure 1.
We are done with the terminology. In the following section we prove the claims.
Proofs.
In the Functional Renormalization Group parlance, one says that the RG-flow generates the interaction vertices that arise from the external leg structure, whenever these come from a one-loop graph. The one-loop condition is a consequence of the 'supertrace' present in Wetterich Equation . This makes some of the graphs listed above uninteresting from the viewpoint of renormalization, as they do not have a one-loop structure. Those which do, have also some drawbacks: (2.9) has a one-loop structure, and is a graph generated by the RG-flow in the ABAB-model, but that sextic vertex is set to zero in the truncation that [1] considers. On the other hand, the graph (2.12) is a Feynman graph containing only vertices of the ABAB-model, but it leads to a double-trace, and so on.
This hopefully slowly starts to convince the reader that the 1PI and the one-loop conditions heavily restrict the graphs that play a role in the FRG; this is the flavor of the proofs below, where in fact, we obtain certain kind of uniqueness. Consider the following graph: It has a broken face and an unbroken one, and when one glues a disk along the boundary this happens: 8 Without giving a full description (for that we refer to [15]) we recall that R N is the infrared regulator; that Γ p2q N is the Hessian of the interpolating effective action Γ N ; and that t " log N is the RG-time. Proof. In G one has a single broken face (it can be recognized in the drawing (2.13) as the face that is unbounded). Starting at any point one reads off at the boundary of such face the word ABBA. One could also read off BBAA, AABB or BAAB, depending on where one chooses to start. Yet, nothing changes, since the external leg structure is cyclic by definition.
Claim 2.2.
The unique (connected) one-loop 1PI graph of order $b^2$ having a connected external leg structure is the graph G defined by (2.13).
In this statement the connectedness of the external leg structure means that G has a single broken face.
Proof. By assumption, the graph has two interaction ABAB-vertices; let us name the two copies $V_1$ and $V_2$. The 1PI assumption constrains the number $p$ of propagators implied in the graph to $p > 1$ (indeed, since the graph should be connected and has two interaction vertices, a propagator should connect them; should the graph have a single propagator, then removing it would yield a disconnected graph, contradicting the 1PI assumption). On the other hand, the one-loop condition implies $p < 3$ (if $p \geq 3$ then more loops are formed). Again, since the graph in question is 1PI, the $p = 2$ propagators implied in the graph connect $V_1$ with $V_2$. If they connect two equal colors, we get a disconnected external leg structure, i.e. either the graph (2.12) or its $A \leftrightarrow B$ (i.e. green $\leftrightarrow$ red) version. Therefore, by assumption, the two propagators connecting $V_1$ with $V_2$ must have different colors. The only such graph having also a connected external leg structure is G defined by (2.13).
It is a Quantum Field Theory folklore result (see e.g. [16] and [17, Sec. 3.3]) that the effective action is the generating functional of 1PI graphs. Since neither the cyclicity of the vertices nor the coloring of thick edges has influence on the 1PI property, the effective action in multi-matrix models generates 1PI colored ribbon graphs; thus, that folklore result remains true in this setting. Further, the Functional Renormalization Equation governs the interpolating effective action, hence the 1PI condition is imposed from the outset. We have, in view of this: Claim 2.3. For the particular operator $\frac{b}{2}\,\mathrm{Tr}(ABAB)$, a quadratic term $b^2$ in $\beta_b$ is not possible in Functional Renormalization (in other words, the coefficient of $b^2$ in $\beta_b$ vanishes; in notation, $[b^2]\beta_b = 0$).
Proof. Wetterich-Morris Equation imposes the one-loop structure on any non-linear term in the coupling constants (i.e. on any non-vertex) appearing in each β-function. For the β bfunction, in particular, a second condition is that the external leg structure must be ABAB. These two conditions are mutually exclusive. Indeed, by Claim 2.2, the only such Opb 2 q graph is G, which, by Claim 2.1, has an external leg structure ABBA.
Remark 2.4. Some closely related, but not essential points:
‚ The correlation functions of matrix [13] and tensor field theories are indexed by boundary graphs [18]. The terminology makes sense graph-theoretically but also geometrically in both the matrix and tensor field [14] contexts. In the case of matrix models, boundary graphs coincide with what we call here external leg structure. In matrix models, there are as many β-functions as correlation functions, hence the importance of the external leg structure. The map defined by 'taking the boundary' seems also to play a role in other renormalization theories, like the Connes-Kreimer Hopf algebra approach [19]. In that theory for matrices (a related construction appears in [20]) taking the boundary seems to be the residue map in terms of which one can define the coproduct of the Hopf algebra (also true [21] for the Ben Geloun-Rivasseau tensor field theory [22]).
‚ Other graphs might appear for real symmetric matrices, but the ribbons corresponding to Hermitian matrices remain untwisted, which played a role in the uniqueness of the graph above. It is not exaggerated to stress the reason for this rigidity, which is explained by the difference in the propagators of matrix models in the ensembles $\{M \in M_N(\mathbb{K}) \mid M^\dagger = M\}$ for different fields $\mathbb{K}$, cf. eq. (2.14) [not reproduced here].
A proposal to obtain the Kazakov-Zinn-Justin phase diagram
We recall that in [1] the truncation is minimal. Thus, the running operators are also those in the bare action in eqs. (2.1a)-(2.1b).
In that truncation [1], if one computes correctly, β b " b (the value of the vertex); this is not wrong, but only says that such truncation threw away useful information. In order not to 'waste' the b 2 term, we propose to add the operator that captures it, so the new effective action reads: where bar on quantities means unrenormalized and Z is the (now common) wave function renormalization. The ABBA-operator also respects the original Z 2 -symmetries: pA, Bq Þ Ñ pB, Aq, pA, Bq Þ Ñ p´A, Bq , pA, Bq Þ Ñ pA,´Bq .
Now, b 2 does appear, but in the β c -function. In fact β c " b 2`c2`2 ac, ignoring the contribution (9 c) of the vertex. This relation was obtained with the ('coordinate-free') method presented in [15,around Corollary 4.2]. Removing the symmetry condition (g AAAA " a " g BBBB ) we initially imposed on the couplings for A 4 and B 4 , and writing in full g w for the coupling of TrtwpA, Bqu, being w a cyclic word in A, B, one has 11 βpg ABBA q´g ABBA p2η`1q " g AAAAˆgABBA`gBBBBˆgABBÀ pg ABAB q 2`p g ABBA q 2 , (3.3a) obtained without the use of graphs. The missing coefficients (hidden in " and implying the anomalous dimension η "´N B N Z) are regulator-dependent and contain non-perturbative information, but the essential point is that now each β-function has a transparent one-loop structure; to wit, the RHS of (3.3a) corresponds (respecting the order in that sum 12 ) with``(
3.3b)
It is also clear that the cyclic external leg structure of each of these terms is ABBA. Since it is easy to confuse the order of the letters, we stress that this is already an extended version of the ABAB-model to exemplify the one-loop structure of another coupling constant. But expressions (3.3) also give the (by Claim 2.2 unique) β-function where the b 2 actually has to sit.
Adding the ABBA operator modifies the flow (for, now, $\beta_b - b(1 + 2\eta) \simeq bc$), but in order to get the desired fixed points, higher-degree operators might still be required. As pointed out in the paragraph before [1, Sec. 3.3] when addressing higher-degree operators, $\mathrm{Tr}[(AB)^3]$ is indeed forbidden. However, there are degree-six operators that do preserve the symmetries (3.2), concretely $\mathrm{Tr}(ABABAA)$ or $\mathrm{Tr}(ABABBB)$, and contribute to the $\beta_b$-function (see [15, Thm. 7.2], where the RG-flow has been computed adding these operators), thus enriching the truncation.
Remark 3.1. Some closing points one could learn from [1]: ‚ Notice that if we could somehow make ABAB indistinguishable from ABBA, then the β b -function [1,Eq. 3.41] would be, in that case, correct; see (2.3). This happens for a sub-ensemble of pairs of Hermitian matrices A, B such that AB is Hermitian (for then, A and B commute). ‚ Notice that the graph (3.4) would yield the b 2 term needed in [1,Eq. 3.41]. However, this graph is not possible, since the Ising operator σ TrpABq, depicted with the bicolored bead and responsible for 'changing color', is not in the truncation behind that equation; moreover, if added, it violates two symmetries in (3.2), on top of b 2 being screened by σ 2 (that graph is a one-loop containing four operators: two Ising, two ABAB, alternated). Nevertheless, it seems plausible that the contamination of the running operators (i.e. considering operators that are not unitary invariant) might effectively lead to a color-change, as in (3.4).
Conclusion
We showed that the connection that [1] established between the Functional Renormalization Group (FRG) and a phase diagram-identified there with [2, Fig. 4]-relies on a β-function that does not have the one-loop structure of the Functional Renormalization Equation. In Section 3 above, we proposed to extend the minimalist truncation of [1] in order to find a FRG-compatible set of β-functions. Accomplishing this proposal would provide a sound bridge, in the intention of [1], between the FRG and Causal Dynamical Triangulations [23] through the ABAB-model [24,25]. Finally, we provided in Remark 3.1 the condition one would need to add in order for the β_b-function given by [1, Eq. 3.41] to be correct.
"year": 2021,
"sha1": "a9efd2eb1bcc19e307d25ba027aeeadc80d8e306",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP07(2021)042.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "d0050bcbbd44399873d136c9d9c57e6807267f38",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Oral malignant melanoma detected after resection of amelanotic pulmonary metastasis
INTRODUCTION Solitary pulmonary metastasis from oral malignant melanoma is very rare. PRESENTATION OF CASE We describe an 84-year-old patient with a lung nodule that was diagnosed as malignant melanoma after video-assisted thoracoscopic resection. Because primary pulmonary malignant melanoma is extremely rare, the tumor was thought to be a metastasis from an occult primary lesion. A detailed physical examination revealed a black tumor in the oral cavity, and this was suspected to have been the primary. Resection of the hard palate tumor and dissection of the cervical lymph nodes were performed. The patient was simply followed up without further therapy at his request, and he died one year after surgery due to bleeding from a pleural metastasis of malignant melanoma. DISCUSSION Primary melanoma of the oral cavity is rare, accounting for 0.5% of all oral cancers and 0.8–1.8% of all melanomas. Because of the absence of symptoms in the early stage of the disease and the presence of the tumor in relatively obscure areas of the oral cavity, the diagnosis is unfortunately often delayed. In view of the rarity of primary lung melanoma, when a lung tumor is diagnosed as malignant melanoma, a detailed physical examination of the entire skin and mucosa, including the oral cavity, is necessary. CONCLUSION Oral malignant melanoma is very rare, but the oral cavity should be examined when a pulmonary nodule is diagnosed as malignant melanoma.
Introduction
Primary melanoma of the lung is exceptionally rare. We present a resected case of malignant melanoma of the lung that was diagnosed as a metastatic tumor from an oral malignant melanoma after a post-surgical physical examination that included the oral cavity.
Presentation of case
An asymptomatic 84-year-old male presented at our hospital because a nodular lesion of 8 mm in diameter had been detected in the left upper lobe of the lung by routine chest computed tomography during follow-up for ischemic heart disease (Fig. 1). He had undergone subtotal gastrectomy for early gastric cancer, radiation treatment and hormone therapy for prostate cancer, and coronary artery stenting due to severe multiple coronary artery stenosis. He had a history of smoking with a pack-year rate of 35. Retrospective examination revealed a 6 mm nodular lesion on a chest CT film obtained 4 months previously. Positron emission tomography demonstrated only slight uptake of FDG at the position of the nodular shadow, and no other abnormal findings were evident (Fig. 1). Although FDG uptake in a left cervical lymph node was increased, this uptake was presumed to be nonspecific, reflecting nonspecific lymph node swelling. Because the nodule had increased in size over 4 months, malignant disease was suspected, and we performed video-assisted thoracoscopic surgery. Because of the patient's poor cardiac function, wedge resection was performed. The resected nodule had a smooth surface, and the cut surface was white with brown pigmented deposits. Intraoperative histological examination gave a diagnosis of undifferentiated carcinoma. The postoperative course was uneventful, and the patient was discharged from hospital 3 days after surgery. Histological examination demonstrated a gray-white tumor with brown spots and proliferation of spindle and epithelioid cells with abundant cytoplasm and atypia. The tumor involved the visceral pleura and was exposed at the pleural surface histologically. Immunohistochemical examination was negative for CAM5.2, CK7, TTF-1, Napsin A and calretinin, and positive for vimentin, HMB-45 and S100. From these results, the tumor was diagnosed as malignant melanoma (Fig. 2). Because primary pulmonary malignant melanoma is extremely rare, the tumor was thought to be a metastasis from an occult primary lesion. A whole-skin examination showed no abnormal findings. However, examination of the oral cavity revealed slightly elevated, black lesions with irregular boundaries, approximately 20 mm in diameter, on the palatal mucosa (Fig. 3). The patient had not been aware of this lesion. An incisional biopsy was performed under local anesthesia, and the tumor was diagnosed as malignant melanoma. Resection of the hard palate tumor and dissection of the cervical lymph nodes were performed at the department of otolaryngology of our institute. Although the melanoma was at an advanced stage, the patient was simply followed up without further chemotherapy or immunotherapy at his request, and he died one year after surgery due to bleeding from a pleural metastasis of malignant melanoma.
Discussion
We have described a patient with a lung nodule that was diagnosed as malignant melanoma by histological examination. Because primary melanoma of the lung is exceptionally rare, with only about 30 cases reported in the English literature, and the lung is the most common site of metastasis from malignant melanoma, 1 the present lung tumor was thought to have metastasized from an occult primary tumor. A detailed physical examination revealed a black tumor in the oral cavity, and this was suspected to have been the primary. Shimmyo et al. also reported a case of malignant melanoma in which the primary lesion was detected 8 months after resection of a lung metastasis. 2 In view of the rarity of primary lung melanoma, physical examination of the entire skin and mucosa, including the oral cavity, was necessary. Although positron emission tomography (PET) scanning is useful for detecting malignant diseases, we were unable to detect the primary tumor by PET preoperatively in this case. PET scanning detected the lung tumor but not the oral tumor. Even if PET does not demonstrate abnormal uptake, detailed examination of the entire body is necessary.
Oral mucosal melanoma is a rare neoplasm. Primary melanoma of the oral cavity accounts for 0.5% of all oral cancers, and 0.8-1.8% of all melanomas. 3 Because of absence of symptoms in the early stage of the disease and the presence of the tumor in relatively obscure areas of the oral cavity, the diagnosis is unfortunately often delayed. Aggressive resection with complete removal of the tumor is hindered due to the presence of teeth and bone in the affected region. Oral malignant melanoma is aggressive, and the abundant blood supply of the oral cavity may permit blood vessel invasion and hematogenous dissemination early in the disease course. Compared with cutaneous and ocular melanoma, oral malignant melanoma has a poor prognosis with a reported 5-year survival rate of 10-25%. 1,4,5 In the present case, the patient had no symptoms and the disease was not detected until pulmonary metastasis had been found. As a result, survival period was short, and the patient died after only one year.
For diagnosis of malignant melanoma, especially amelanotic melanoma, immunohistochemical examination is useful. Positive immunostaining for S100 and HMB45 is reported to have high sensitivity and specificity for malignant melanoma. 1,3,4 In this case, the lung tumor was amelanotic, but was diagnosed as malignant melanoma on the basis of positive immunostaining for S100 and HMB45.
Surgery is the mainstay of treatment for oral melanoma. 1 In the present case, we performed surgery for the oral lesion despite the advanced stage of disease with pulmonary metastasis, because no other distant metastasis was detected, resection of a solitary lung metastasis has been reported to improve the prognosis, 6-8 and surgical resection of oral tumor may prevent any decrease in the quality of life due to pain or eating disorder resulting from tumor progression. This patient suddenly died due to bleeding from a pleural metastasis one year after surgery, but no recurrent tumor was found in the oral cavity and normal oral food intake was possible until the time of death.
The optimal postoperative treatment for malignant melanoma has not been determined. 5 The present patient was 84 years old and he declined chemotherapy, being followed up without further treatment in spite of his advanced disease. Several reports have indicated that immunotherapy and chemotherapy are effective for malignant melanoma. 1,4 It is expected that further improvements in the treatment of malignant melanoma will be made, such as targeted drug delivery directed against the cancer specific antigen(s).
Conclusion
We have presented a case of amelanotic melanoma of the lung that was proved to be a metastatic tumor from an oral malignant melanoma by detailed physical examination including the oral cavity. Oral malignant melanoma is very rare, but the oral cavity should be examined when a pulmonary nodule is diagnosed as malignant melanoma.
Conflict of interest
None declared.
Funding
None.
Ethical approval
Written informed consent was obtained from the patient for publication of this case series and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Author contributions
Katsunari Matsuoka: data collection and writing the paper.
"year": 2013,
"sha1": "204e8556b68ff144745f6f2168a84c59baf12e03",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ijscr.2013.10.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "204e8556b68ff144745f6f2168a84c59baf12e03",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Osteoarthritis Data Integration Portal (OsteoDIP): A web-based gene and non-coding RNA expression database
Objective OsteoDIP aims to collect and provide, in a simple searchable format, curated high throughput RNA expression data related to osteoarthritis. Design Datasets are collected annually by searching “osteoarthritis gene expression profile” in PubMed. Only publications containing patient data and a list of differentially expressed genes are considered. From 2020, the search has expanded to include non-coding RNAs. Moreover, a search in GEO for “osteoarthritis” datasets has been performed using ‘Homo sapiens' and ‘Expression profiling by array’ filters. Annotations for genes linked to osteoarthritis have been downloaded from external databases. Results Out of 1204 curated papers, 63 have been included in OsteoDIP, while GEO curation led to the collection of 28 datasets. Literature data provides a snapshot of osteoarthritis research derived from 1924 human samples, while GEO datasets provide expression for additional 1012 patients. Similar to osteoarthritis literature, OsteoDIP data has been created mostly from studies focused on knee, and the tissue most frequently investigated is cartilage. GEO data sets were fully integrated with associated clinical data. We showcase examples and use cases applicable for translational research in osteoarthritis. Conclusions OsteoDIP is publicly available at http://ophid.utoronto.ca/OsteoDIP. The website is easy to navigate and all the data is available for download. Data consolidation allows researchers to perform comparisons across studies and to combine data from different datasets. Our examples show how OsteoDIP can integrate with and improve osteoarthritis researchers’ pipelines.
Introduction
High throughput data have become essential for unbiased investigations of important biomedical research questions in the last two decades. The type, amount and quality of data analyzed has progressively increased, providing biomedical researchers with the resources needed to create a molecular view of the system being studied gradually closer to a "whole picture". Every set of data collected contributes to advance our understanding of complex diseases, including osteoarthritis (OA).
Nonetheless, it is only one stroke in a much bigger painting, mostly because OA is not homogeneous but rather represents a spectrum of diseases: patients with the same disease may respond differently to the same treatment and the disease may progress differently due to patient heterogeneity. Individual biological assays can detect a high number of molecules (for example proteins, microRNAs, metabolites, gene expression quantification) or their status (for example mutation, methylation or post-transcriptional modification), but in a cell all these molecules act in concert, and individual patients are characterized by a combination of these perturbations. To fathom how they influence health and disease state, we need to combine them using the corresponding networks (microRNA:gene, protein:protein, pathways, etc.). Furthermore, even considering only one type of data, different datasets can include patients with the same disease, but might be trying to answer different questions, or include patients with different clinicopathological features. However, even when the clinical question and the clinicopathological features are the same, technical and biological heterogeneity can lead to different results, most of which are still valuable, but represent small (and sometimes redundant) pieces of the disease's molecular puzzle [1].
Intuitively, data integration across datasets and molecules is key to gather more information and depict a more complete picture. Many disease-specific databases already harness this potential, especially for cancer (the most well-known being cBioPortal, that includes, among others, the huge collection of data from TCGA [2]).
In OA, many proteomics, gene and non-coding RNA expression, methylation, metabolomics and genome-wide association studies (GWAS) have been performed (reviewed in Ref. [3]). Resources collecting and annotating omics data in OA, though, are limited. SkeletalVis collects and re-analyzes transcriptomics datasets linked to skeletal diseases, including, but not focusing on, OA [4]. A researcher can use the database to calculate fold change and related enrichment analyses in one dataset at a time, or compare different datasets one gene or one signature at a time. OATargets collects data on model organisms and the effect of gene manipulation on their OA phenotypes [5]. In this paper we present OsteoDIP, a database collecting and annotating OA-specific omics data from human studies.
Data collection
OsteoDIP primarily focuses on genes found to be linked to OA using high-throughput techniques (for example microarray or RNAseq). To collect such data, we performed this exact search in PubMed: "osteoarthritis gene expression profile". Only papers that included patient diagnosis annotation for at least 4 patients and that provided the list of differential genes were collected. The search was first performed in October 2016, with annual updates. At each update cycle, we verify whether any paper has been retracted and, in such case, remove it from the database. In 2020, the search was extended to include also noncoding RNA molecules, using the same collection of articles as the ones used in the recent review [6]. Gene symbols are updated every release to the latest HGNC version [7], microRNAs to the latest miRBase [8] and long non-coding RNAs to the latest LNCipedia version [9].
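A programmatic equivalent of this annual query and of the inclusion checklist is sketched below in R. The rentrez package is our choice for illustration only; the authors describe a manual PubMed search, and the checklist function is a hypothetical helper, not part of OsteoDIP.

library(rentrez)   # NCBI E-utilities client

# Reproduce the annual PubMed query programmatically
res <- entrez_search(db = "pubmed",
                     term = "osteoarthritis gene expression profile",
                     retmax = 2000)
res$count      # number of candidate papers to screen
head(res$ids)  # PubMed IDs to feed into manual curation

# Curation criteria from the text, expressed as a simple checklist per paper
keep_paper <- function(has_diagnosis_annotation, n_patients, provides_gene_list) {
  has_diagnosis_annotation && n_patients >= 4 && provides_gene_list
}
keep_paper(TRUE, 10, TRUE)   # TRUE: such a paper would be included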
With OsteoDIP, we provide a platform and initial curation with the aim to make available high-quality data for translational research in osteoarthritis. To ensure both high data quality and coverage, we opened the platform for contributed curation. For example, OsteoDIP includes 18 low-throughput protein expression and SNP data sets and radiographic biomarkers data from 5 studies curated and provided by Dr. Stuart Faulkner, Center for the Advancement of Sustainable Medical Innovation, Oxford University.
GEO data collection and normalization
Datasets were curated from GEO [10] using 'osteoarthritis' as a search term and 'Homo sapiens' and 'Expression profiling by array' as filters. For curation, the same criteria used for data collection were applied. Moreover, we checked that no overlap among samples (based on GSM identifier) was present. Series matrix files were then downloaded for the datasets of interest, and expression and clinical data were separated. For each expression table, if probeset to gene symbol mapping was not provided, it was created using the annotations present in GEO platform pages. For each clinical table, data was consolidated to be searchable in the database (for example, sex was transformed to F and M for all datasets, replacing "Female", "female", "Male" or "male").
If raw data was available, normalization was performed using R 4.0.3 [11] with packages limma 3.46.0 [12], affy 1.68.0 [13] or oligo 1.54.1 [14]. Most of the datasets were RMA-normalized, but the full list of datasets and their normalization methods is available in Supplementary Table 1.
Annotation
OsteoDIP provides, for each gene of interest present in at least one of the curated papers, a set of annotation data from external databases. Disease annotation is collected from DisGeNET [15], protein secretion data from The Human Protein Atlas [16] and MetazSecKB [17], SNPs from the GWAS Catalog [18], and human protein-protein interaction (PPI) data from the Integrated Interactions Database (IID) ver. 2020-11 (with interaction annotations for synovial macrophages, chondrocytes, growth plate cartilage, synovial membrane, or articular cartilage) [19]. We also provide the number of conserved PPIs per species: conserved PPIs are determined by mapping experimentally detected human PPIs to orthologous protein pairs in 17 other species. Mappings are based on 1:1 orthologs from Ensembl [20] release 100.
Database
The web interface to the OsteoDIP database is implemented in the Java Server Faces framework running on IBM WebSphere Application Server (ver. 9.0). The backend storage deployed IBM DB2 database (ver. 11.1) engine. For performance improvement, the WebSphere and DB2 are placed on different virtual instances of IBM P770 and P750, running AIX (ver. 7.2). The OsteoDIP Data Integration Portal (DIP) is freely available at http://ophid.utoronto.ca/OsteoDIP, and online documentation provides more details for every page.
Use cases
Descriptive analyses. Descriptive analyses of OsteoDIP have been performed using search results and tables obtained from the website at the "Matrix" page. Top deregulated genes were extracted from the matrix of all deregulated molecules in OsteoDIP, and the microRNAs targeting them were obtained from mirDIP [21] (http://ophid.utoronto. ca/mirDIP) using the threshold "very high". Human PPI data among top deregulated genes were obtained from the Integrated Interactions Database (IID) ver. 2020-11 (http://ophid.utoronto.ca/iid).
OA signature. An OA signature was collected from Ref. [22]. Gene names were used to query the GEO page in OsteoDIP, and the downloaded sets of tables (clinical, expression and normalized expression data) were used in R 4.0.3 to calculate fold changes and moderated t-test p-values using the limma package ver. 3.44.3. Datasets were used if they had patients and healthy controls, at least 3 independent samples per group, and patients were not receiving treatments. Genes that had a significant p-value were plotted using ggplot2 ver. 3.3.3.
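A minimal sketch of this fold-change/moderated t-test step with limma is shown below; the expression matrix and group labels are simulated placeholders standing in for the downloaded OsteoDIP tables, and object names are ours.

library(limma)

set.seed(3)
# Placeholder log2 expression matrix (genes x samples) and OA/healthy labels
expr  <- matrix(rnorm(2000), nrow = 200,
                dimnames = list(paste0("gene", 1:200), paste0("s", 1:10)))
group <- factor(rep(c("healthy", "OA"), each = 5), levels = c("healthy", "OA"))

design <- model.matrix(~ group)          # second coefficient = OA vs healthy
fit    <- eBayes(lmFit(expr, design))
res    <- topTable(fit, coef = 2, number = Inf)   # logFC and moderated-t p-values
head(res[, c("logFC", "P.Value", "adj.P.Val")])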
MicroRNA review. A recent review listed the microRNAs that have a protective or destructive role in OA [24]. We collected the microRNAs, converted them to miRbase v.22 IDs using miRBaseConverter 1.12 in R 4.3.0, and searched for them in the microRNA search page of OsteoDIP. Furthermore, we investigated the targets of such microRNAs present in the network depicted in Fig. 2 of the review. All the genes were searched for in OsteoDIP. We next investigated the overlap between the two lists of microRNAs and the microRNAs targeting top deregulated genes from OsteoDIP. Hypergeometric distribution test was performed in R 4.0.3. A network depicting the microRNAs from the two lists targeting OA genes has been built using NAViGaTOR 3.0.14 [25]. Conservation of the network's protein:protein interactions in different species was obtained from IID ver. 2020-11.
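The hypergeometric overlap test mentioned here reduces to a one-sided phyper() call in R; the counts below are placeholders (the universe size, in particular, is an assumption, since the paper does not state it).

# Enrichment of an overlap by the hypergeometric distribution
universe_size <- 2300   # e.g. all mature human microRNAs considered (assumed)
list_a_size   <- 701    # microRNAs targeting the top deregulated genes (mirDIP)
list_b_size   <- 60     # e.g. reviewed protective + destructive microRNAs (assumed)
overlap       <- 16     # microRNAs present in both lists (assumed)

p_value <- phyper(overlap - 1, list_a_size,
                  universe_size - list_a_size, list_b_size,
                  lower.tail = FALSE)
p_value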
Hip and knee cartilage comparison. To investigate possible differences between hip and knee expression, we analyzed the matrices including all the protein coding genes that were deregulated in cartilage hip OA and the ones deregulated in cartilage knee OA. Overlap and differences between the two sets of genes were calculated in R 4.0.3. Pathway enrichment analysis for each specific set (genes deregulated only in hip or genes deregulated only in knee) was performed in pathDIP 4.1 [26] selecting BioCarta, EHMN, HumanCyc, INOH, IPAVS, NetPath, Panter_Pathways, PID, REACTOME, Signalink2.0, SIGNOR 2.0, Spike, STKE, systems.biology.org, Uniprot_Pathways and Wikipathways as sources. Enrichment was conducted using the sources separately, and only pathways with adjusted p-value (False Discovery Rate, BH method) < 0.01 were retained.
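The set operations and the FDR filtering used in this comparison are straightforward in base R; the gene symbols and raw p-values below are placeholders for illustration, and the actual enrichment was run in pathDIP rather than in R.

# Overlap of deregulated gene lists between joints (placeholder symbols)
hip_genes  <- c("OGN", "COL5A1", "APOD", "GENE_X")
knee_genes <- c("TNFAIP6", "COL5A1", "APOD", "GENE_Y", "GENE_Z")

shared    <- intersect(hip_genes, knee_genes)   # deregulated in both joints
hip_only  <- setdiff(hip_genes, knee_genes)
knee_only <- setdiff(knee_genes, hip_genes)

# Benjamini-Hochberg adjustment and the FDR < 0.01 filter applied to pathway p-values
raw_p <- c(1e-6, 3e-4, 0.004, 0.02, 0.3)
fdr   <- p.adjust(raw_p, method = "BH")
names(fdr) <- paste0("pathway", seq_along(fdr))
fdr[fdr < 0.01]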
MALAT1. MALAT1 targets were retrieved from LncTarD [27]. Of the retrieved targets, microRNAs were then used to query mirDIP (threshold "very high") to obtain microRNA:gene interactions. Only gene targets of the microRNAs or of MALAT1 that were present in OsteoDIP were retained. A network was created using NAViGaTOR 3.0.14. Pathway enrichment analysis of gene targets was performed in pathDIP 4.1 using the same databases listed above. Enrichment was computed using the combined pathway source databases, and only pathways with adjusted p-value (False Discovery Rate, BH method) < 0.01 were retained. Each gene was then annotated with the pathway from the enriched list with the lowest p-value.
Data
We have considered 1204 papers as of December 2020 (Fig. 1A), 63 of which have been collected in OsteoDIP after necessary exclusions. While many papers have been excluded due to our curation choices (for example non-original data, model organism data), it stands out that 135 papers were excluded because the data were not available, highlighting once more how the lack of data sharing affects curation efforts worldwide. As visible in Fig. 1B, OsteoDIP reflects the literature distribution, with most papers focusing on knee OA, and more frequently on cartilage tissues.
GEO curation led to the collection of 28 datasets. Consolidated clinical data from such datasets is available at the OsteoDIP GEO Clinical page, which shows that the datasets provide expression data for 1012 patients with heterogeneous types of data. Curation revealed that less than half of the datasets (13/28) include age annotation for the samples and only one (GSE15227) includes grade. The collection of papers, on the other hand, provides a view of OA obtained from 1924 samples (from different comparisons, but most frequentlyin 39 papers -OA samples are compared to healthy controls). Four studies are present both in the curated and in the GEO dataset pages (PMID: 16508983, 24229462, 29258882, 29973527).
Molecules deregulated in at least one study include 8905 genes, 402 lncRNAs, 56 microRNAs and 58 circRNAs. The distribution of gene deregulation and its direction is shown in Supplementary Fig. 1.
Use cases
OsteoDIP can be used for different types of studies, for example:
- Study specific genes of interest, where they have been published, to which conditions they have been linked, and which of their interactions are conserved across species.
- Study genes linked to specific tissues and/or joints and/or comparisons. We show this in the "Hip and knee cartilage comparison" case.
- Study specific noncoding RNAs of interest, where they have been published, to which conditions they have been linked and, in the case of microRNAs, which genes they target. Two cases are shown: "MALAT1" and "microRNA review".
- Perform analyses on OA-related datasets, using consolidated annotation data that facilitates comparisons across datasets. We provide an example in "OA signature".
Descriptive analyses. The most frequently downregulated molecule is the gene APOD (9 studies), while the most frequently upregulated is the gene COL5A1 (14 studies). Searching for APOD in OsteoDIP, we can see that it is secreted, and that it interacts with 108 other proteins. Of these interactions, 84 are conserved in mouse, 83 in cat and guinea pig, and 82 in cow and rat, suggesting these would be the best animal models to study APOD's effect on OA. Of the 108 interactors, 52 are listed as deregulated in at least one study in OsteoDIP. Similarly, searching for COL5A1, we can see that it is secreted as well, and that it is annotated with a score of 0.73 for the disease Ehlers-Danlos syndrome type 1. There are 317 COL5A1 protein interactions reported in IID. Of the 317 interactors, 256 are annotated with at least one study where they are deregulated, but the highest number of conserved interactions is only 43, in mouse and pig.
We then explored top deregulated molecules. Focusing on those deregulated in at least 8 studies, we identified 138 genes that are connected by 624 PPIs, 607 of which are annotated with synovial or cartilage specific tissues. Table 1 shows that most PPIs are conserved across mammals, with cat being the species with the highest number of conserved PPIs.
Fig. 1. (A) shows the curation process, from the starting point of 1204 papers to the final one of 63 collected for OsteoDIP. Numbers show how many studies were excluded and the reason for exclusion. "Non patients" refers to studies where data were not collected from human samples (but rather using, for example, model organisms). "Non-HT" refers to non-high-throughput studies (i.e., studies exploring only one or a few genes or proteins). "Non-OA" refers to studies not focused on osteoarthritis (for example, studies that mention the disease in the paper but study some other disease). "Non available" refers to studies where the data are not available or the data/paper are in a language other than English. "Non applicable" refers to studies that include too few samples or to papers focused on OA but that do not include signatures (for example, reviews). "Non original" refers to papers that re-analyze previously published data. (B) shows the number of studies and datasets that belonged to each category, and how many contributed each different joint and tissue. N.s. = not specified.

MicroRNA review. Using mirDIP, we identified 701 microRNAs that target the top genes described above. OsteoDIP includes 41 of these microRNAs, providing further evidence of their importance to OA. To further annotate the microRNAs, we looked at the overlap between them and the protective and destructive microRNAs [24]. A hypergeometric test provides evidence that the top genes are significantly targeted by the reviewed microRNAs (p-value 5.485815e-08 for destructive microRNAs and 3.682239e-10 for protective ones). A network built using such overlapping microRNAs and their gene targets present in OsteoDIP shows 18 genes targeted only by protective microRNAs, among which the most downregulated gene is CHI3L1, a gene that supports OA progression by facilitating ECM degradation through MMP9 and that degrades key proteins such as proteoglycan, collagen and osteonectin [28]. Furthermore, 10 genes are targeted only by destructive microRNAs, among which the most downregulated gene is NQO1, an antioxidant enzyme regulated by Nrf2 and involved in preventing cartilage degradation [29] (Fig. 2).
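The enrichment claim above rests on a standard hypergeometric overrepresentation test. The sketch below shows the computation with SciPy; the counts are placeholders, not the actual OsteoDIP or mirDIP figures:

```python
from scipy.stats import hypergeom

# Placeholder counts (not the real OsteoDIP/mirDIP figures):
M = 19000  # protein-coding genes in the background universe
n = 1850   # genes targeted by the reviewed (protective/destructive) microRNAs
N = 138    # top deregulated genes being tested
k = 45     # overlap: top genes that are also targets of the reviewed microRNAs

# Probability of observing an overlap of at least k by chance.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"hypergeometric enrichment p-value: {p_value:.3e}")
```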
We then looked for the microRNAs present in the review: 6 cartilage-protective (hsa-miR-24-3p, hsa-miR-27a-3p, hsa-miR-27b-3p, hsa-miR-193b-3p, hsa-miR-210-3p and hsa-miR-30a-5p) and 10 cartilage-destructive (hsa-miR-139-5p, hsa-miR-181a-5p, hsa-miR-23a-3p, hsa-miR-34a-5p, hsa-miR-4454, hsa-miR-203a-3p, hsa-miR-223-3p, hsa-miR-302b-3p, hsa-miR-381-3p and hsa-miR-483-5p) are found in OsteoDIP, all annotated as deregulated in only one paper each. Among the 997 genes identified in the review as targeted only by cartilage-destructive microRNAs, 390 were found to be deregulated in one or more studies. Interestingly, RNF34, highlighted in the text for its cartilage-destructive link, has been shown to be downregulated in two studies comparing OA knee cartilages to healthy controls. 833 of 1854 genes targeted only by cartilage-protective microRNAs were also found to be deregulated in at least one study. HAS3 has been highlighted for its link to protective microRNAs, and it was found downregulated in one study where normal knee synovial tissue was compared to inflamed areas of OA knee. Almost 47% of all the targets of the reviewed microRNAs are described as deregulated in at least one OA study, suggesting a strong connection between the microRNAs, the genes and OA.
OA signature. A recent blood diagnostic signature has been shown to separate OA from healthy samples [22]. A researcher might be interested to know whether the identified genes have a role in OA pathogenesis; thus, we tested in which datasets such genes were differentially expressed in OA compared to healthy individuals. Using the 4 listed genes, we collected from OsteoDIP their expression from 28 GEO datasets. All 4 genes were present, and 6 datasets passed our filtering criteria. Table 2 shows the number of samples and the distribution of expression in healthy and OA samples in each dataset. As expected, there was variability across the datasets, and IL18 and SRSF2 were the only genes with significant differential expression (Fig. 3). SRSF2 was differential in a dataset that studied meniscal tissue (GSE98918) and IL18 in a dataset that examined synovial tissues (GSE82107). Interestingly, the two datasets with the strongest differential expression for the genes of interest, GSE82107 and GSE143514, were both derived from synovial tissues, while the remaining datasets were derived from meniscus, synovial fibroblasts and cartilage tissues. This suggests a possible connection between the synovial tissue OA molecular landmark and the OA blood signature genes. This also further highlights the benefit of diverse samples and their rich annotation.

Hip and knee cartilage comparison. Joint-specific OA pathogenesis has been hypothesized [31], but not many studies compare data from and mechanisms related to different joints [32][33][34][35][36]. In OsteoDIP, as in the literature, most studies focus on knee, but other joints are available for comparison. For example, we compared the genes deregulated in at least one study using knee cartilage (6793 genes) and hip cartilage (1248 genes) samples. 859 genes are in common between the two sets, and include genes frequently linked to OA like OGN (the most deregulated in hip) [37], TNFAIP6 (the most deregulated in knee) [38], and collagen genes (COL5A1 among the most deregulated in both joints) [39]. 389 genes are deregulated only in studies focused on hip cartilage, while 5936 are deregulated only in studies that focus on knee cartilage. Pathway enrichment analysis finds 1592 pathways for knee-specific genes and 103 for hip-specific genes. Of these, 15 pathways had no gene overlap with any other pathway for hip-specific genes, while 116 pathways had no overlap for knee-specific genes. No pathways were in common between the two sets. Nine of the 15 hip-specific pathways are metabolic. Metabolic differences in the synovial fluid of knees and hips have been described, in particular for N-acetylated molecules, glycosaminoglycans, citrate and glutamine [35]. Pathway enrichment results of non-overlapping pathways are available in Supplementary Table 3.

MALAT1. 786 out of 795 long non-coding RNAs present in OsteoDIP have been annotated as deregulated only in one study. Of the few long non-coding RNAs identified in multiple studies, MALAT1 is the top deregulated (three studies). We obtained experimental interactions of MALAT1, and used the target microRNAs to predict gene targets in a sequence MALAT1 → microRNA → gene. Filtering out genes absent in OsteoDIP, we created the network in Fig. 4: five MALAT1 targets are also targets of the microRNAs, creating regulation loops. One microRNA (hsa-miR-9-5p) is present in OsteoDIP and 4 microRNAs (hsa-miR-9-5p, hsa-miR-127-5p, hsa-miR-145-5p, hsa-miR-146-5p) have been linked in the literature to OA via MALAT1 regulation.
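The MALAT1 → microRNA → gene chain described above is a two-step set expansion followed by a filter against OsteoDIP, with regulation loops flagged where a gene is both a direct MALAT1 target and a target of one of its microRNAs. A minimal sketch over hypothetical toy mappings (not the real LncTarD, mirDIP or OsteoDIP data):

```python
# Hypothetical toy mappings; real inputs would come from LncTarD, mirDIP and OsteoDIP.
malat1_targets = {"hsa-miR-9-5p", "hsa-miR-145-5p", "CDKN1A"}
mirna_targets = {
    "hsa-miR-9-5p": {"MMP13", "CDKN1A"},
    "hsa-miR-145-5p": {"SMAD3"},
}
osteodip_genes = {"MMP13", "SMAD3", "CDKN1A"}

edges, loop_genes = [], set()
for mirna in sorted(malat1_targets & mirna_targets.keys()):
    edges.append(("MALAT1", mirna))
    for gene in sorted(mirna_targets[mirna] & osteodip_genes):
        edges.append((mirna, gene))
        if gene in malat1_targets:   # gene is both a MALAT1 target and a microRNA target
            loop_genes.add(gene)     # -> candidate regulation loop

print(edges)
print("regulation loops:", loop_genes)
```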
In OA papers [40-45], MALAT1 is shown to affect proliferation as well as ECM degradation. Pathway enrichment analysis of the OsteoDIP targets of the 4 microRNAs finds 27 enriched pathways, mainly linked to ECM, collagen formation and degradation, and integrin signaling (available in Supplementary Table 4).
Discussion
Data curation provides a scientific asset to any research topic, but curation remains challenging due to frequent data unavailability, missing and limited clinical and biological annotation, limited biological assays, or improper informatics workflows [46]. In OA, many high-throughput studies have been and are being conducted to identify molecular pathogenesis paths, characterize patient heterogeneity and predict new (and effective) OA treatments. One feature of high-throughput studies is the amount of data collected, which can relate to any set of molecules (i.e., the entire genome, proteome, transcriptome, or metabolome). Obtaining high-throughput data is expensive, though, leading to the application of these methods to only a reduced number of patient samples. While these data are still valuable, patient heterogeneity, lack of standard formats and annotation for data release (when available), different assays, and different questions being investigated can impede comparisons across studies. Still, each study provides a step toward a more complete picture of OA molecular background, patient subtyping, and precision treatment. One aim of data curation is to collect and integrate multiple datasets that are scattered across different locations (such as different databases or, as in our case, different publications), and to annotate them with the same ontology and rigour. To satisfy this aim, we curated and collected all available literature on gene expression studies, and we curated the most recent studies on non-coding RNA expression. A second aim of data curation is to consolidate the data so that they are comparable across studies and, if possible, patients, and to annotate them with relevant information, such as tissue, disease, interactions and pathways. To this aim, all the data collected have been annotated with standard labels so that they are easily searchable but also, being structured data, amenable to computational analyses.
As highlighted with the examples and use cases, OsteoDIP can support diverse translational research projects in OA. If a researcher has already identified molecules of interest (as in the microRNA example), OsteoDIP can provide the researcher with literature and multiple annotations for protein-coding genes, all in one database. Such annotations can assist with simple tasks, e.g., providing context for genes of interest, or can provide the basis for further research steps. For example, knowing what part of a PPI network involving the proteins of interest is conserved across species (and which species) can suggest the best animal model in which to test a specific mechanism, investigate OA pathogenesis or validate a hypothesized treatment. It is well known that there is not a single animal model that mimics all molecular and clinicopathological aspects of human OA (or indeed any complex disease), and that different models need to be used to answer different questions [47]. Thus, it would be useful to predict beforehand which model organism best recapitulates the relevant biological context required for in vivo studies and pre-clinical validation.
Some genes of interest could be queried across different datasets and their gene expression compared (as in the signature genes example). If needed, the expression could be linked to structured clinical data to provide support for external validations, reducing the time a researcher would need to spend to find the same type of data in more generic data repositories, and the time needed to consolidate data that, due to the lack of standard labels in many databases, is usually quite different from one dataset to another. In our example, we attempted to link genes present in an OA diagnostic blood signature to a possible OA mechanism in other tissues, but other researchers could be interested in using the same data to investigate and validate prognostic or predictive signature performance. Most gene expression signatures do not generalize to new data; over 150,000 studies have reported gene signatures, but fewer than 100 are in clinical use [48]. The best way to find an effective signature is to validate its performance across many independent datasets. Such testing and analysis can answer several key questions about a candidate signature: does the signature reflect biological mechanisms or technical artifacts, does it work across independent cohorts, and if not, does it work in specific subsets of samples. Testing a signature in more than one dataset greatly reduces the chances that it is based on technical artifacts, such as a protocol for gathering or processing data. Testing on many datasets with heterogeneous samples, including different age groups and both sexes, can indicate whether the signature can easily generalize to new cohorts. A signature may not work in all datasets, because many datasets include multiple conditions, such as disease status, drug treatments and comorbidities, that all greatly affect gene expression. It is also important to pay attention to sample independence, which can strongly affect the results obtained. If a signature works in at least a few datasets, it may be possible to determine the context where the signature is effective; but it is equally valuable to know which patient cohorts cannot be reliably analyzed due to signature bias.
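The cross-dataset validation logic sketched in this paragraph can be prototyped quickly: fix the signature, score each sample, and report a per-cohort metric. The example below uses a naive mean-expression score and AUC; the gene list and cohorts are placeholders, not the published signature or GEO series:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

signature = ["IL18", "SRSF2", "GENE3", "GENE4"]  # placeholder gene list

def score_dataset(expr, labels):
    """expr: dict gene -> per-sample expression array; labels: 1 = OA, 0 = healthy."""
    present = [g for g in signature if g in expr]        # ignore genes missing from a platform
    score = np.mean([expr[g] for g in present], axis=0)  # naive mean-expression signature score
    return roc_auc_score(labels, score)

# Placeholder cohorts standing in for independent GEO datasets.
datasets = {
    "cohort_A": ({"IL18": np.array([2.1, 3.5, 1.0, 0.8]),
                  "SRSF2": np.array([1.9, 2.8, 1.1, 0.9])},
                 np.array([1, 1, 0, 0])),
    "cohort_B": ({"IL18": np.array([1.2, 1.3, 1.1, 1.0]),
                  "SRSF2": np.array([2.5, 2.2, 0.7, 0.6])},
                 np.array([1, 1, 0, 0])),
}

for name, (expr, labels) in datasets.items():
    print(name, "AUC =", round(score_dataset(expr, labels), 2))
```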
Finally, researchers who have OA molecular questions but no genes to start with could query OsteoDIP to identify genes frequently associated with a feature of interest, or compare genes associated with different characteristics (as in the joint-specific OA and top deregulated genes examples).
We designed OsteoDIP to be flexible, so that many types of searches could be performed, open access, so that any kind of data collected and stored in OsteoDIP is immediately available to researchers, and modular, so that any kind of data of interest for the OA community can be included and the types of curation provided expanded.
Author contribution
CP participated in conception and design of the study, curated the data, performed analyses and data interpretation, drafted the article, and approved the final version of the manuscript. MA and RL created and update the database, collected annotation data, update nomenclature, drafted the article, and approved the final version of the manuscript. ZA collected and normalized GEO data, revised critically the article, and approved the final version of the manuscript. MK provided PPI conservation data, participated in data analysis and interpretation, revised critically the article, and approved the final version of the manuscript. CV participated in conception and design of the study, revised critically the article, obtained funding, and approved the final version of the manuscript. IJ (juris@ai.utoronto.ca) participated in conception and design of the study, obtained funding, revised critically the article, approved the final version of the manuscript, and take responsibility for the integrity of the work as a whole, from inception to finished article. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
2641709 | pes2o/s2orc | v3-fos-license | The role of 'omics' in the quest to eliminate human filariasis
Lymphatic filariasis (LF) is a disease affecting approximately 120 million people in over 73 countries [1] and caused by infection with a group of filarial nematodes transmitted by mosquito vectors. Wuchereria bancrofti is responsible for approximately 90% of the disease worldwide, while the remaining cases are due to Brugia malayi and B. timori [2]. These filarial nematodes have important social and economic impact causing high morbidity and serious illnesses resulting in social stigmatization, marginalization, and loss of work for the afflicted [3]. In 2000, The Global Programme to Eliminate Lymphatic Filariasis (GPELF) was launched with the objective to eliminate this disease as a public health problem by 2020 [4]. The eradication of lymphatic filariasis relies on mass drug administration (MDA) using the three drugs currently available for treatment: diethylcarbamazine (DEC), albendazole, and ivermectin. GPELF also has made significant progress in many countries, delivering, between the years 2000 and 2014, 5.6 billion treatments to more than 763 million people living in 61 countries. It was estimated that this directly prevented 36 million clinical cases and saved 175 million disability adjusted life years (DALYs) [5]. However, it is unlikely that LF will be eliminated by the target year of 2020 as 55 of the 73 countries considered to be endemic for LF in 2015 still require MDA [6]. Moreover, GPELF has lagged in Sub-Saharan Africa where only 2 of 35 LF-endemic countries have stopped MDA and started post-MDA surveillance. Notably, recent studies show that single-dose combination therapy with the three antifilarial drugs (ivermectin/albendazole/diethylcarbamazine, or IDA) appears to be superior to current regimens used in the elimination programs, which may help accelerate LF elimination in Africa. Although it has not yet been tested, IDA may also be useful for treating onchocerciasis [7,8]. However, reports of drug resistance to ivermectin and albendazole [9,10] as well as serious concerns about using DEC in Sub-Saharan Africa because of ocular adverse events after DEC treatment of onchocerciasis in the past, makes the discovery of novel drugs against onchocerciasis imperative [11]. The debilitating eye and skin disease known as onchocerciasis is caused by Onchocerca volvulus; it is the world’s second-leading infectious cause of blindness in humans with 99% of cases in Sub-Saharan Africa alone. Current estimates put 120 million people at risk and 17 million already infected, of which 1.2 million suffer from vision impairment or blindness [12,13]. While the focus of the control efforts has been to alleviate morbidity and lost productivity, onchocerciasis has more recently been targeted for elimination [14,15]. The three past and present onchocerciasis control programs; OCP, the Onchocerciasis Control Programme; APOC, African Programme for Onchocerciasis Control; and OEPA, the Onchocerciasis Elimination Program for the Americas, rely on annual or biannual MDA of ivermectin, a therapy effective at killing microfilariae but not adult worms, with the goal of interrupting disease
transmission. However, as the programmatic goals shifted from reducing public health impact to active elimination by 2025, sole reliance on ivermectin is threatened by contraindications in areas co-endemic for loiasis, an inability to break transmission in some foci, and the emergence of drug resistance. Even successes in Latin America and small foci in Africa [16][17][18][19][20][21][22] must now be weighed against the fact that since 1995 only a 31% reduction in the incidence of onchocerciasis has been achieved in Africa [13]. APOC in 2015 predicted that 1.15 billion treatments will be needed until 2045 to achieve elimination [23]. Other neglected tropical disease experts doubt that onchocerciasis can ever be eliminated through MDA with ivermectin alone [24], especially given that MDA of ivermectin cannot be used in 11 Central African countries co-endemic with Loa loa infections due to the risk of severe adverse events [6,25]. Moreover, many areas of sub-Saharan Africa do not implement onchocerciasis MDA programs in areas of hypoendemicity, which could lead to reintroduction in areas undergoing MDA [26]. Of equal concern is the potential emergence of ivermectin-resistant O. volvulus, limiting the long-term effectiveness of MDA [27][28][29] and, in time, undermining gains achieved by onchocerciasis control programs. Complicating the resistance issue is that ivermectin is not administered to children <5 years old, and a macrofilaricidal drug, doxycycline, cannot be given to children <9 because of the limiting indications for these drugs. These children are then not only vulnerable to infection but also become reservoirs for transmission [30]. For these reasons, in 2014 APOC called for the development and testing of new O. volvulus technologies, including the development of novel macrofilaricidal drugs, vaccines, and diagnostic biomarkers of infection [31].
The last decade of 'omics' data for filarial worms
In 2007, the same year PLoS NTD was launched, the first parasitic nematode genome was published with the draft genome of B. malayi [32,33]. In 2009, a review in PLoS NTD focused on helminth genomics and its implications for human health [34], predicting that new sequence information would revise what we knew of the host-parasite, vector-pathogen, and filaria-symbiont relationships. At that time, the genomes of B. malayi and its endosymbiont, Wolbachia (wBm) [35] were available along with expressed sequence tags or EST datasets of other filarial parasites, enabling the construction of a microarray containing 18,104 elements derived from B. malayi (15,412), O. volvulus (1,016), W. bancrofti (872) and Wolbachia (wBm, 804 genomic elements) genomic information. This microarray was used in many studies to analyze expression profiles during development and after drug treatments [36][37][38][39][40][41][42]. With new sequencing technologies, RNAseq has now become the more common tool to study stage-specific expression profiles of filarial worms [43,44] and the effects of known drugs on the worm's transcriptome [45][46][47].
In 2008, the secretome of adult B. malayi worms was profiled through the proteomic characterization of the excretory-secretory (ES) products. The goal was to identify proteins that potentially influence infection by down-modulating host immune responses. Interestingly, among the more prominent novel products identified in the ES were a set of 11 small transthyretin-like proteins, which have been identified as potential vaccine candidates against other human helminth infections [48]. This analysis also identified novel proteins not previously suspected to be involved at the host-parasite interface, and thus provided important new insight on the biology of the filarial parasite [49]. The next year, a large-scale proteomic analysis also characterized the ES products of other stages such as L3, L3 to L4 molting worms, and microfilariae. Importantly, this analysis confirmed the presence of 274 "hypothetical" ES proteins inferred from gene prediction algorithms applied to the B. malayi genome. Moreover, it verified the enrichment of the previously characterized immunomodulatory proteins such as ES-62, leucyl aminopeptidase, MIF-1, serpin, glutathione peroxidase, and galectin in the ES of microfilariae and fertile adult females as compared to the adult males. It also revealed that many Wolbachia-specific proteins, most of which are metabolic enzymes, were released in the ES. These analyses expanded our knowledge of secreted proteins that could play a role in host-parasite interactions [50,51].
In 2011, another large-scale proteomic characterization of almost all the major mammalian stages of B. malayi was performed, resulting in the identification of more than 61% of the products predicted from the B. malayi draft genome as well as 63% of the Wolbachia proteome [52,53]. Analysis of protein families and domains coupled with stage-specific gene expression from microarray and RNASeq data highlighted the important pathways that benefit the parasite during its development in the host. Gene set enrichment analysis identified extracellular matrix proteins and those with immunologic effects as enriched in the microfilariae and L3 stages. Sex- and stage-specific protein expression identified those pathways related to parasite differentiation and demonstrated stage-specific protein expression by the B. malayi endosymbiont Wolbachia as well. Like most nematodes, filarial parasites have a fully formed digestive tract; however, its functionality was not completely clear. The tissue-specific proteomic analysis of the body wall, digestive tract, and reproductive tract of B. malayi clearly indicated enrichment in transporters within the digestive tract, suggesting that the intestine of adult filarial parasites is functional and important for nutrient uptake or waste removal. In addition, it revealed the presence of 27 possible vaccine candidates sequestered within the digestive tract with a high degree of homology to W. bancrofti or O. volvulus; these could possibly represent "hidden antigens" with low risk of prior allergic sensitization [54].
In 2016, the first high-quality genome assemblies with reconstruction of whole chromosomes for both O. volvulus [55] and B. malayi (WormBase.org) [Manuscript in preparation] were obtained. The transcriptomic and proteomic profiles of both O. volvulus and its Wolbachia endosymbiont (wOv) in the major vector and the human host stages (L1, L2, L3, molting L3, L4, adult male and adult female) were also recently described [56]. This allowed the identification of stage-specific pathways important to the parasite's adaptation to its human host during its early development.
It is clear that the recent advances in the sequencing of filarial genomes, transcriptomes and proteomes, as well as those of their bacterial endosymbionts, have contributed greatly to a better understanding of the biology of these parasites. A recent review [64] focuses on many pathways that have just come to light as being important for the establishment of the filarial parasites in their definitive host as well as those that might contribute to the intricate host-parasite interactions in each parasite system. Examples include the discovery of novel immunomodulators, and a family of proteases and protease inhibitors essential for development and tissue migration that are also able to manipulate the immune system of their host. The analysis also points to the potential stage-specific provisioning of metabolic products by Wolbachia to the filarial worms. The hope is that a better understanding of the specific mechanisms that define the mutualistic interplay between the filarial parasites and their Wolbachia endosymbiont will facilitate the identification of pathways that could be targeted to ultimately kill the adult worms. Such novel macrofilaricidal drugs would complement present MDA efforts to control and eventually eliminate filariasis.
The next sections focus on how recent genomic advances for the filariae have also enabled the identification of novel drug targets, potential vaccine candidates and additional diagnostic biomarkers of O. volvulus infection. These represent three additional tools that will eventually support elimination of filarial infections.
Drug repurposing and chemogenomic screening
As current microfilaricidal drugs appear to be insufficient for the control and elimination of these parasitic infections, new drugs will be required. Present efforts focus on the screening of libraries of drugs, including repurposed FDA-approved drugs, against adult Brugia and Onchocerca worms in vitro and the selection of those that are effective for additional pre-clinical development and testing in small animal models [65][66][67]. While the primary focus is development of a macrofilaricidal drug candidate for the treatment of onchocerciasis, it is expected that parallel screening of the closely related filarid, Brugia, will also yield drug candidates for the treatment of lymphatic filariasis. An example is the discovery of auranofin as a potent anti-filarial drug [68]. Auranofin is an FDA-approved gold compound (2,3,4,6-tetra-O-acetyl-1-thio-beta-D-glucopyranosato-S (triethylphosphine) gold) that has been used to treat rheumatoid arthritis for over 25 years. In the study described by Bulman et al. [68], a library of over 2,000 FDA-approved compounds was first screened on B. malayi adult female worms, and only auranofin was highly effective in inhibiting adult Brugia motility. It was then also shown to inhibit molting of O. volvulus and to kill adult O. ochengi worms. Additional studies will need to be conducted to determine efficacy with short treatment regimens in vivo using animal models and to obtain pharmacokinetic data before moving on to clinical development. Another approach is to screen repurposed and approved drugs from the human pharmacopoeia. This can be most easily done to target the Wolbachia endosymbionts of filarial worms, since macrofilaricidal effects have been observed when there is at least 90% reduction in the bacterial load [69,70], achievable with certain antibiotics.
Chemogenomic screening is a novel approach that uses a "chokepoint" analysis to identify essential drug targets. Reactions are determined to be essential if they either consume a unique substrate or produce a unique product. Based on the study reported by Taylor et al. [71], a comparative analysis of nematode genomes yielded 487 genes conserved among all nematode species studied, of which 169 encoded chokepoint enzymes. A similar comparative analysis of the nematode proteomes yielded 477 chokepoint enzymes, 24 of which were found only in parasitic worms. Notably, this study identified several already known anthelmintic drugs as well as novel candidate targets, 7 of which were tested in Caenorhabditis elegans, with 3 leading to a detrimental phenotype. One of these three drug-like compounds, Perhexiline, was also deleterious to two parasitic nematodes, Haemonchus contortus and O. lienalis, that exhibit different forms of parasitism and tropism in their final host. This study clearly illustrates that experimentally testing compounds that are already available and known to target proteins orthologous to nematode chokepoint proteins may lead to the identification of novel anthelminthics.
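Chokepoint detection as defined above (a reaction that is the sole consumer of some substrate or the sole producer of some product) is straightforward to express in code. A toy sketch; the reactions and metabolites are illustrative only, not a filarial metabolic reconstruction:

```python
from collections import defaultdict

# Toy reactions: name -> (substrates, products).
reactions = {
    "R1": ({"A"}, {"B"}),
    "R2": ({"B"}, {"C"}),
    "R3": ({"B"}, {"D"}),
}

consumers, producers = defaultdict(set), defaultdict(set)
for name, (subs, prods) in reactions.items():
    for m in subs:
        consumers[m].add(name)
    for m in prods:
        producers[m].add(name)

chokepoints = {
    name
    for name, (subs, prods) in reactions.items()
    if any(consumers[m] == {name} for m in subs)    # unique consumer of a substrate
    or any(producers[m] == {name} for m in prods)   # unique producer of a product
}
print(sorted(chokepoints))  # expected: ['R1', 'R2', 'R3'] for this toy network
```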
Filarial genomic, transcriptomic and proteomic data were also used in a computational target-based approach to screen FDA-approved drugs across all World Health Organization Anatomical Therapeutic Classes (WHO ATC) [55,63]. Sixteen O. volvulus enzymes and proteins involved in ion transport and neurotransmission were found to likely be good drug targets [55]. As some of these proteins are the targets of already approved human drugs that have not yet been tested on the filarial parasites, it would be of interest to verify if they could be repurposed as new therapies for filarial infections. Another computational approach, Flux Balance Analysis, calculates the flow of metabolites through metabolic networks constructed based on available enzyme annotation data to better understand the metabolic potential of a pathogen. This method was used to reconstruct the metabolism of two filarial nematodes, O. volvulus and L. loa, comparing essential reactions with specific interest in identifying those contributed by Wolbachia in the case of O. volvulus as L. loa does not harbor Wolbachia. Such analyses revealed that O. volvulus likely benefits from Wolbachia contributions to fatty acid metabolism, heme synthesis, and purine and pyrimidine metabolism. This analysis also pointed to the possibility that with the same gene complement, O. volvulus and L. loa may do things differently as it relates to certain pathways. For example, in purine salvage, Wolbachia provides an alternate pathway for O. volvulus, whereas L. loa depends exclusively on adenine import. This demonstrates how the presence of Wolbachia can change the metabolic chokepoints in filaria and serves to identify selective drug targets [55].
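Flux Balance Analysis itself reduces to a linear program: maximize a chosen objective flux subject to steady-state mass balance (S·v = 0) and flux bounds. A toy illustration with SciPy; the stoichiometric matrix is invented and far smaller than any real O. volvulus or L. loa reconstruction, and a genome-scale analysis would normally go through a dedicated solver interface such as COBRApy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: metabolites x reactions stoichiometric matrix (steady state: S @ v = 0).
S = np.array([
    [1, -1,  0],   # metabolite A: produced by uptake, consumed by conversion
    [0,  1, -1],   # metabolite B: produced by conversion, consumed by "biomass"
])
bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds for the three reactions
c = np.array([0, 0, -1])               # maximize biomass flux v3 (linprog minimizes c @ v)

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)         # expected: [10, 10, 10]
```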
High-throughput immunomic screening of O. volvulus vaccine candidates
There are two types of vaccines that would be necessary for the efficient control of onchocerciasis: (a) a prophylactic vaccine to be used in children <5 years old to block new infections and the accumulation of adult worms, thus reducing microfilarial densities in the skin, pathology and transmission; and (b) a therapeutic vaccine that would be used in older children and adults who already carry adult worms, to potentially impair the fertility of female parasites, suppress the release of nodular microfilariae from the female worm and/or kill them once released, reducing the accumulation of skin microfilariae and thus interrupting the transmission cycle [72]. In both cases, the recipient of the vaccine benefits from a reduction in the only O. volvulus parasite stage that causes disease, the microfilaria. Importantly, the entire community also benefits since the microfilaria is the stage transmissible to insect vectors, further protecting areas where local elimination may have already been achieved from recurrence of transmission. Vaccines may also lower the number of annual MDA rounds with ivermectin, forestalling drug resistance and ensuring the success of the existing MDA. Most of the present O. volvulus vaccine candidates were discovered by screening expression libraries with various antibody probes [73]. Two O. volvulus vaccine proteins, Ov-103 and Ov-RAL-2, are promising candidates for a prophylactic vaccine [74], and their homologues in B. malayi were shown to also induce protection against infection with B. malayi infective stage larvae [75]. The disappointing results of clinical trials for several infectious diseases highlight the current limitations of vaccine candidate selection approaches, which often fail to exclude at an early stage antigens with poor immunogenicity or low safety profiles in humans [76,77]. One approach for identifying novel vaccine candidates is immunomics [78][79][80][81][82], which allows high-throughput profiling of the host antibody responses to genome-wide candidate parasite antigens. Using this approach with putatively immune human sera and sera from infected individuals [56], six new potential vaccine antigens were identified by screening antibody responses (IgG1, IgG3 and IgE) against an O. volvulus recombinant protein array containing 362 proteins. Four of these antigens are highly expressed during the early stages of larval development in the human host and thus could be tested for efficacy in a prophylactic vaccine. The 2 other proteins are highly expressed by the microfilariae and are specifically recognized by sera from protected individuals who never developed a patent infection. This opens new possibilities for developing a safe anti-transmission or therapeutic vaccine. To the best of our knowledge, this is the first and only occasion in which genome-wide stage-specific expression data from O. volvulus have been exploited to discover novel vaccine candidates in an unbiased manner. Future studies using the diffusion chamber mouse model for O. volvulus will confirm whether these antigens do indeed protect against infection by L3s or against microfilariae [83].
Identifying novel O. volvulus biomarkers of infection
Gold standard diagnosis using blood films or skin snips has become less relevant as mass drug distribution programs for the control of filarial infections have expanded. The spectrum of programmatic processes (mapping, mass drug interventions, monitoring and evaluation, and surveillance) requires different approaches, as different questions are asked at each stage [84]. Infection intensity may refer to adult worm burden or microfilarial load in the skin of O. volvulus-infected individuals. However, the relationship between microfilarial load, as assessed by quantification of microfilaridermia by skin snip in onchocerciasis, and the total adult parasite burden is at best semi-quantitative. Moreover, the current toolbox for diagnosis and surveillance of onchocerciasis, as well as other helminthic infections, is limited because many of the available tools suffer from lack of sensitivity and specificity and/or are cost-prohibitive [85,86].
Given the constraints of achieving elimination using MDA with ivermectin alone, and concerns about recrudescence in areas of previous onchocerciasis control, more efficient tools are needed for diagnosis and monitoring of current and future control measures using emerging technologies that are field-deployable or suitable for low-resource settings. The development of better diagnostic tools is greatly needed for post-treatment surveillance where transmission of infection has been brought under control, the certification phase, and for mapping prevalence in meso- and hypo-endemic areas that had heretofore been ivermectin-naive. Transcriptome and proteome data have helped in the discovery of new biomarkers for O. volvulus infection. Using immunomics and the O. volvulus protein array used for the discovery of vaccine candidates, we identified 7 previously unrecognized biomarkers of active patent infection (OVOC10469, OVOC10602, OVOC11950, OVOC3261, OVOC5127, OVOC8491, OVOC9988), based on IgG4 responses in infected individuals [56]. Future assays, such as a luciferase immunoprecipitation system (LIPS) immunoassay, will help validate if such highly antigenic O. volvulus proteins can be used as specific and sensitive biomarkers of patent infection.
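Selecting biomarker antigens from a protein array of this kind is, at its core, a per-antigen comparison of reactivity between infected and control sera. A minimal sketch ranking antigens by a rank-sum test on IgG4 signal; the values are simulated and the antigen IDs are placeholders, not the published array data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
antigens = [f"OVOC{i}" for i in range(1, 6)]               # placeholder antigen IDs
infected = rng.normal(loc=2.0, scale=0.5, size=(20, 5))    # simulated IgG4 signals
controls = rng.normal(loc=1.0, scale=0.5, size=(20, 5))
infected[:, 1:] -= 1.0                                     # leave only antigen 0 reactive

results = []
for j, name in enumerate(antigens):
    stat, p = mannwhitneyu(infected[:, j], controls[:, j], alternative="greater")
    results.append((p, name))

# Lowest p-values correspond to candidate biomarkers of patent infection.
for p, name in sorted(results):
    print(f"{name}: p = {p:.2e}")
```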
In another recent study [87], an integrated approach was used to identify adult female O. volvulus antigens to be explored for developing serodiagnostic tests.
Key learning points for the role of 'omics' in the quest to eliminate human filariasis
• Although there has been much progress in the research and control of human filariasis, major obstacles remain that challenge the global public health community and for which fundamental and applied research is urgently needed.
• Genomes of filarial parasites are becoming increasingly available and through the related advances in transcriptomics and proteomics promise to revolutionize the field of helminth filarial biology and help unravel new targets for control and novel diagnostic tools.
• Without accurate annotation and the development of novel functional genomic tools, these data will not be truly valuable to support the better understanding of the filarial biology, host-parasite interactions and symbiosis.
• Knowledge of factors controlling host-parasite interactions can ultimately support identification of vulnerable pathways to be targeted by novel interventions and help avert unintended consequences of intervention (e.g., increased transmission, and/or morbidity).
• Knowledge of how parasite population genetic structure will change under chemotherapeutic pressure is essential to understand the evolutionary implications of intervention.
• The processes that initiate and sustain immune regulation on the one hand, or lead to pathogenesis on the other and the effects upon them of prolonged anthelmintic intervention remain incompletely understood.
• Twenty-four publications cited in this review were disseminated through publication in PLOS NTD.
Using data from the O. volvulus genome, proteome, and transcriptome, 241 immunoreactive proteins were identified. These included most of the major diagnostic antigens described over the past 25 years, validating the approach, plus 33 new proteins with great promise as serodiagnostic antigens. These candidates, as well as those identified using the immunomics approach together with the extensive pan-omics dataset generated in the studies described above, will facilitate the development of novel diagnostic tools. When combined with present and future secretome datasets, there is hope that additional tests focused on antigen detection assays in body fluids could also be developed.
Conclusion
The abundance of genomic, transcriptomic, and proteomic data has already provided novel biological insight into filarial nematodes, and led to the identification of novel drug targets, vaccine candidates and biomarkers of infection. We should expect that upcoming exploitation of these various novel datasets will further our understanding of these unique parasites and their interaction with the final host, ultimately helping us reach the goal envisioned by WHO to eliminate filarial infections for good [88].
Key papers in the field of 'omics' in filariasis
"year": 2017,
"sha1": "39b6623da2d7f020539322df3da5f7fe8cf749e8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0005464&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "39b6623da2d7f020539322df3da5f7fe8cf749e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
259530242 | pes2o/s2orc | v3-fos-license | Scrub Typhus and Dengue Co-infection in an Adolescent Girl: A Diagnostic Challenge
Scrub typhus and dengue fever are common infectious diseases in tropical regions, and both have overlapping clinico-epidemiological and laboratory features, which often pose a diagnostic challenge. This case report discusses a 15-year-old girl from the Indian subcontinent who presented with acute undifferentiated febrile illness (AUFI) without typical features of any of the common tropical infections. She was diagnosed with co-infection of scrub typhus and dengue fever using laboratory tests with good diagnostic accuracy. The patient was managed on an ambulatory basis, treated with oral doxycycline, and showed symptomatic improvement within 48 hours. Co-infections in endemic areas present a significant diagnostic and therapeutic challenge. This case report highlights the importance of considering co-infections in the differential diagnosis of AUFI, especially during the post-monsoon period, and the use of highly sensitive and specific tests for the diagnosis of co-infections.
Introduction
Scrub typhus and dengue are common infectious diseases in tropical regions, including the Indian subcontinent. They share similar clinico-epidemiological and laboratory features such as fever, rash, thrombocytopenia, and hepatic dysfunction. Both involve similar underlying pathophysiological mechanisms including endotheliopathy, capillary leak, and third spacing. Scrub typhus has been reported to have a community seroprevalence of 34.2% in India, and is responsible for 25.3% of cases of acute undifferentiated febrile illness (AUFI), with a high incidence of multiple organ dysfunction (17.4%) and case fatality (6.3%) [1].
Dengue seroprevalence in the general population and case fatality rate among laboratory-confirmed patients has been reported to be 56.9% and 2.6%, respectively, and the prevalence of laboratory-confirmed dengue infection among clinically suspected patients is 38.3% [2]. Ahmed et al. reported a 16% prevalence of various co-infections in hospitalized patients with AUFI in North India [3]. While co-infections are rarely reported, they pose a significant challenge in diagnosis and management. In this report, we discuss the case of an adolescent girl with co-infection of scrub typhus and dengue, who presented without any typical features of either infection and was managed on an ambulatory basis with an uneventful recovery. We highlight the importance of considering co-infections in endemic areas, especially in the post-monsoon season, and the use of sensitive and specific tests for diagnosis.
Case Presentation
A 15-year-old girl presented, in the post-monsoon season, with a high-grade fever of 13 days duration with multiple daily spikes, associated with chills, myalgia, malaise, and decreased appetite. She also gave a history of pain in the left knee for three days, especially while walking. There was no history of rash, cough, breathlessness, vomiting, loose stools, pain abdomen, jaundice, burning micturition, decreased urine output, seizures, or bleeding manifestations. There was no history of recent travel or any febrile illness in the past six months. She was moderately nourished and well-oriented. At presentation, she recorded a temperature of 98.5°F, pulse rate of 141 beats/minute, and blood pressure of 90/60 mmHg. The patient had tachycardia, but there were no other features of shock. Blood pressure was appropriate for age, and peripheral perfusion was good. On examination, there was no pallor, icterus, lymphadenopathy, oedema, rash, or eschar. The abdomen was soft and non-tender, and no organomegaly was noted. There was no local rise in temperature, erythema, tenderness, or restricted movements of the left knee. Other systemic examinations revealed no abnormality.
With a syndromic diagnosis of AUFI, infectious causes were considered most likely; she was investigated for all the common tropical infections prevalent in the region like dengue, scrub typhus, malaria, and enteric fever [4]. Non-infectious causes like rheumatological conditions or malignancies were other possibilities considered. On evaluation, she was found to have microcytic hypochromic anaemia (haemoglobin 10.8g/dL, mean corpuscular volume (MCV) 67.9 fL, mean corpuscular haemoglobin (MCH) 21.6pg, mean corpuscular haemoglobin concentration (MCHC) 31.8g/dL), with normal platelet (201,000/µL) and leukocyte (4750/µL; neutrophils 69.4%, lymphocytes 23.3%) counts. C-reactive protein was elevated (96 mg/L) (Table 2). We further confirmed the dengue serotype as DEN-2 by the RealStar® Dengue Type RT-PCR Kit 1.0 (altona Diagnostics GmbH, Hamburg, Germany) [7]. Based on these reports, a diagnosis of scrub typhus and dengue co-infection was made. She was managed on an ambulatory basis as she did not have any complications or organ dysfunction. Oral doxycycline was initiated for scrub typhus, along with adequate hydration and symptomatic treatment. She showed improvement within 48 hours and recovered without any complications with a seven-day course of doxycycline.
Discussion
Scrub typhus and dengue are both common causes of AUFI (fever of less than two weeks duration without any localizing features of infection) in tropical regions like India. Both are vector-borne diseases with peak incidence during the post-monsoon period. Scrub typhus is a rickettsial infection transmitted by trombiculid mites, while dengue is a viral infection transmitted by Aedes aegypti mosquitoes. They often present a diagnostic challenge due to overlapping clinical and laboratory features, including fever, rash, oedema, thrombocytopenia, and hepatic dysfunction [1,2]. While concurrent infections with multiple pathogens may be common in the tropics, co-infections of scrub typhus and dengue are not frequently reported, likely due to differences in vectors, their breeding habits, and biting behaviours. Nonetheless, the increasing incidence of scrub typhus in all regions of India emphasizes the importance of considering co-infections in the differential diagnosis of AUFI, especially during the post-monsoon period.
Concurrent infections with multiple pathogens may have an atypical presentation or protracted course, making diagnosis and management difficult. Previous studies have reported conflicting results regarding the severity of co-infections compared to mono-infections. Basheer et al. reported six cases of scrub typhus and dengue co-infections in adults from South India, with greater tachycardia, hypotension, thrombocytopenia, transaminitis, hypoalbuminemia, and lengthier hospital stay compared to either infection alone [8]. However, Ahmed et al. reported that co-infections were associated with milder clinical manifestations, organ dysfunction, and severity compared to mono-infections [3]. Jose et al. from South India reported scrub typhus co-infection in 51 of 606 dengue patients aged 0-14 years [9]. However, all these were retrospective studies and did not consider the sensitivity and specificity of the diagnostic tests used and possible serological cross-reactivity.
Despite presenting with AUFI during the post-monsoon period, our patient did not exhibit typical features of scrub typhus or dengue infection, including rash, oedema, eschar (a dark scab-like region at the site of the chigger bite), thrombocytopenia, or transaminitis. However, tachycardia was observed, which was disproportionate to the temperature. The co-infection was confirmed using highly sensitive and specific tests, leaving no doubt about the diagnosis. The scrub typhus IgM micro enzyme-linked immunoassay (ELISA) used in this case is known to have possible serological cross-reactivity with typhoid fever [5]; however, this was ruled out by a negative Widal test. The dengue IgM ELISA kit does not have any serological cross-reactivity [6]. Most of the previous reports of scrub typhus and dengue co-infections were in patients who were critically ill or had multi-organ dysfunction, and serological cross-reactivity could not be ruled out in the earlier reports. Our patient did not have any complications or organ dysfunction in spite of the co-infection. She was managed on an ambulatory basis, and the initiation of appropriate antimicrobial therapy likely contributed to the uneventful recovery of the patient.
Co-infections of scrub typhus and dengue can pose diagnostic and therapeutic challenges in endemic areas.
Our case highlights the importance of maintaining a high index of suspicion and conducting a comprehensive evaluation of patients with AUFI, particularly during the post-monsoon period when the incidence of vector-borne diseases is high. It is possible that co-infections are more prevalent than reported and are often overlooked. Therefore, searching for multiple etiologies should be a part of the initial routine diagnostic workup for patients with AUFIs [4]. The accuracy of the diagnostic tests used, as well as the potential for serological false-positivity with cross-reacting and pre-existing antibodies, should also be taken into account.
"year": 2023,
"sha1": "a000890f6e2f2276cce7dfcd9ecad79ffe94c376",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/159777/20230623-18031-u33tnh.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "06b3cd686bb4b5b0ded88e35715cabf9d7b71bda",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
57013172 | pes2o/s2orc | v3-fos-license | Photoacoustic tomography for human musculoskeletal imaging and inflammatory arthritis detection
With the capability of assessing high resolution optical contrast in soft tissues, photoacoustic imaging (PAI) can offer valuable structural and functional information of human joints, and hold potential for diagnosis and treatment monitoring of inflammatory arthritis. Recent studies have demonstrated that PAI can map 2D and 3D morphology of the cartilage, synovium, vascularity, and bone tissue in human peripheral joints. Initial trials with patients affected by inflammatory arthritis have also suggested that PAI can detect the hemodynamic properties in articular tissues as well as their changes due to active inflammation. This review focuses on the recent progress in technical development of PAI for human musculoskeletal imaging and inflammation detection. PAI can provide non-invasive and non-ionizing serial measurements for monitoring of therapeutic interventions with the potential for higher sensitivity than existing imaging modalities such as ultrasound. However, further investigation is needed to validate the value of PAI in rheumatology clinical settings.
Introduction
Joint disorders caused by disease and injury are among the leading causes of activity impairment, work disability, reduced quality of life, and high health-care costs. Among joint diseases, arthritis occurs in 23% of the adult population (approximately 54 million people) in the United States [1][2][3] and has been the most common cause of disability for the past 15 years [4]. The major clinical manifestations of rheumatoid arthritis (RA) and osteoarthritis, which are the representative arthritis diseases, are abnormal and damaged cartilage, synovial, and bone tissues, resulting in severe mobility impairment of joints. Currently, the best-established method of assessing joint damage in RA and osteoarthritis is medical imaging [5], such as computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound (US) imaging [6,7]. Imaging studies of the joint are helpful in preclinical applications and clinical diagnosis, with the choice of imaging modality depending on the clinical manifestations, diagnostic considerations, and the capabilities of specific imaging modalities. Advanced functional imaging for early diagnosis and highly sensitive assessment of the treatment outcome is invaluable in clinical applications.
Inflammatory arthritis, such as RA, is associated with proliferation of synovial tissue and destruction of articular cartilage. Synovial angiogenesis is an important early symptom in the development and perpetuation of inflammatory arthritis [8,9]. Angiogenesis from a combination of hypoxia and high metabolic demand increases the number of synovial vessels [10], which drives synovial infiltration and hyperplasia. The presence of hypervascularized synovial tissue is directly associated with disease activity. MRI and US are the established advanced imaging modalities which help with the diagnosis of arthritis by providing visualization of joint vascularity, synovitis and joint erosions. MRI, because of its superb image contrast, excels at assessing soft tissue changes related to inflammation in and around the joints, cartilage damage, and bone marrow edema underlying active erosion. However, use of MRI is limited by its high cost, which is especially a concern for frequent monitoring studies. In addition, MRI may not work for patients with implanted devices. US imaging offers dynamic assessment, high resolution for anatomical imaging, and high sensitivity in identifying blood flow. It is also easily available and affordable, hence widely accepted in clinical evaluation of inflammatory arthritis [11,12]. However, since it relies on measuring the speed of the blood flow relative to the probe, Doppler US is intrinsically more sensitive to the faster blood flow in relatively large vessels, while angiogenic microvessels with slow flow speeds, which are more relevant to inflammation, are often missed. Moreover, US imaging is not available for evaluating hypoxia, another important physiological biomarker of inflammatory arthritis [13].
Photoacoustic imaging (PAI) has evolved as a non-ionizing, noninvasive, powerful and low cost imaging modality with the unique capability of presenting high sensitivity optical contrast in deep biological tissue with excellent detail [14][15][16][17]. This emerging optical imaging technology, which has temporal and spatial resolution comparable to US imaging, has been developed and trialed in various preclinical and clinical applications [18][19][20][21][22][23]. The optical absorption contrast in the visible to near-infrared (NIR) region presented by PAI is intrinsically sensitive to the contents of oxygenated and deoxygenated hemoglobin [24,25]. Therefore, PAI offers great potential in identifying and characterizing soft-tissue inflammation based on the detection of hemodynamic changes. For inflammatory arthritis, both hyper-vascularization and hypoxia, two physiological hallmarks reflecting the increased metabolic demand and relatively inadequate oxygen delivery of the inflammatory synovial tissue, can potentially be assessed by this functional imaging modality. In this paper, we review the studies and applications that have been focused on musculoskeletal imaging and inflammation detection. Earlier studies on animal models have demonstrated the feasibility of PAI in describing joint tissue structures, as well as morphological and functional changes in the joints affected by chronic or acute inflammation [26,27]. Other former studies have focused on the development of PAI enhanced by various optical contrast agents toward the goal of molecular level imaging of arthritis [28,29]. Although the results from these studies on animal models are encouraging, the research on human subjects is crucial to understand the potential values and limitations of this new technique for clinical applications. This review, therefore, focuses on the recent progress in technical development for imaging of human joints and the initial studies on patients affected by inflammatory arthritis.
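Because PA contrast at several wavelengths reflects the mixture of oxygenated and deoxygenated hemoglobin, functional readouts such as oxygen saturation (sO2) are commonly derived by linear spectral unmixing. A minimal two-wavelength sketch; the extinction coefficients below are rough illustrative magnitudes, not calibrated constants, and light fluence correction is assumed to have been applied upstream:

```python
import numpy as np

# Approximate molar extinction coefficients at 750 nm and 850 nm
# (illustrative magnitudes only; real values should come from published tables).
eps = np.array([[1405.0,  518.0],    # 750 nm: [HbR, HbO2]
                [ 691.0, 1058.0]])   # 850 nm: [HbR, HbO2]

# Simulated per-pixel PA amplitudes at the two wavelengths (fluence-corrected).
pa = np.array([3.2, 4.1])

# Solve eps @ [C_HbR, C_HbO2] = pa for the two chromophore concentrations.
c_hbr, c_hbo2 = np.linalg.solve(eps, pa)
so2 = c_hbo2 / (c_hbr + c_hbo2)
print(f"estimated sO2 = {so2:.2f}")
```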
2. Imaging of joints using home-built system with single transducer(s)

In one of the earliest studies, Wang et al. developed a tomographic PAI system for cross-sectional imaging of human finger joints, and initially tested the performance of this system on human fingers harvested from an unembalmed cadaver [30]. The target imaging planes in the proximal interphalangeal (PIP) and the distal interphalangeal (DIP) joints of the digits were scanned circumferentially with an unfocused single transducer (XMS-310, Panametrics) working at a center frequency of 10 MHz. The circular scan of photoacoustic (PA) signal over 240 steps covered the entire 2π angle around the imaged joint. The applied laser beam, tuned to 720-nm wavelength, illuminated the target joint with 10 mJ/cm² light fluence. Fig. 1(b) and (c) show example 2D cross-sectional images of the human fingers at the levels of the PIP and DIP joints, respectively. Based on the endogenous optical absorption contrast, various tissues including aponeurosis, phalanx, skin, subcutaneous tissue, tendon, and volar plate can be identified, and have been confirmed by the corresponding anatomical photographs from the same joints, as shown in Fig. 1(d) and (e). This early study on ex vivo human finger joints demonstrated the feasibility of PAI in presenting articular morphology using safe laser energy and without involving contrast agents. This early work was followed by Sun et al., who developed a 3D tomographic PAI system for human finger joints in vivo [31]. To acquire a 3D image of the finger joint, PA signals were collected along a spherical scanning surface around the joint, realized through the combined rotations of two rotary stages. Later, this system was further optimized by using a cylindrical scanning geometry instead of the spherical scanning geometry [32]. The scanning along a cylindrical surface was achieved through a circular scan around the cross section of the digit plus an axial scan along the digit, as shown in Fig. 2(a). PA signals were scanned by two transducers (V320-SU, Olympus) with a diameter of 19 mm, focal length of 25.1 mm, and central frequency of 7.5 MHz, with the concept of a virtual detector also being utilized. The 720-nm laser light delivered through four optical fiber bundles illuminated the surface of the finger with an estimated light fluence of 2.8 mJ/cm². Fig. 2(c) shows the PA images at three different cross-sections in a human DIP joint in vivo. With an estimated lateral and axial resolution of 70 μm and 240 μm, respectively, the phalanx and tendons in the human finger joint can be recognized, which was confirmed through comparison with the MRI image from the same joint, as shown in Fig. 2.

Aiming to visualize the vascularity across the interphalangeal joints, van Es et al. developed a home-built PAI system utilizing 32 transducers (Imasonic), as shown in Fig. 3(a) and (b) [33]. These transducers, working at 6.25 MHz central frequency with bandwidth over 80%, were driven by a 32-channel pulser/receiver (Lecoeur-Electronique) sampling at 80 MS/s. The 32 transducers covered 85° of a circle with a radius of curvature of 40 mm, enabling in-plane resolution of 100 μm. Delivered by six optical fiber bundles, the average light fluence on the finger surface was 6.8 mJ/cm². To acquire a cross-sectional image, the transducers and optical fibers were fixed to the water tank, which was rotated around the finger.
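Tomographic scans of this kind are often reconstructed with a simple delay-and-sum (back-projection) scheme, in which each detector position contributes the signal sample whose time of flight matches the detector-to-pixel distance; this is offered here only as a generic illustration, not as the reconstruction actually used in the cited systems. A simplified 2D sketch with synthetic, empty data:

```python
import numpy as np

c = 1.5e6          # speed of sound in soft tissue, mm/s
fs = 40e6          # sampling rate, Hz
n_det, n_t = 240, 2000
radius = 25.0      # scan radius, mm

# Detector positions on a circle and synthetic A-lines (zeros as a stand-in).
angles = np.linspace(0, 2 * np.pi, n_det, endpoint=False)
det_xy = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
sinogram = np.zeros((n_det, n_t))        # replace with measured PA signals

# Image grid covering a 20 x 20 mm region around the joint.
xs = np.linspace(-10, 10, 200)
X, Y = np.meshgrid(xs, xs)
image = np.zeros_like(X)

for d in range(n_det):
    dist = np.hypot(X - det_xy[d, 0], Y - det_xy[d, 1])     # mm
    idx = np.clip((dist / c * fs).astype(int), 0, n_t - 1)  # time-of-flight sample index
    image += sinogram[d, idx]                               # delay-and-sum accumulation
```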
Multiple slices along a healthy human finger were acquired by stepping the imager through various heights while the finger remained stationary in the water. PA images of eight cross-sections in the PIP and DIP joints of the finger are shown in Fig. 3(c)-(j), where (c#), (e#), (g#), and (i#) are enlarged images showing more detail in the areas marked by the dashed squares. These PA images, acquired at an 805 nm laser wavelength, show rich blood vessels with diameters between 100 μm and 1.5 mm. In Fig. 3(k) and (l), two B-scan US images along an axial section and a sagittal section, respectively, show the tissue structures in the finger as well as the locations where the eight cross-sectional PA images were acquired. This study on a normal volunteer indicates the capability of PAI in mapping spatially distributed blood vessels in human fingers.
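The per-pulse fluences quoted in this section (10 and 2.8 mJ/cm² at 720 nm, 6.8 mJ/cm² at 805 nm) can be sanity-checked against the laser-safety limit for skin. The following is a minimal sketch, assuming the commonly cited ANSI Z136.1 single-pulse skin maximum permissible exposure (MPE) of 20 mJ/cm² for 400-700 nm, rising as 20·10^(0.002(λ−700)) mJ/cm² between 700 and 1050 nm; it ignores the additional repetitive-pulse and average-power criteria that a full safety analysis would include.

```python
def mpe_skin_single_pulse(wavelength_nm):
    """Approximate ANSI Z136.1 single-pulse skin MPE (mJ/cm^2) for ns pulses.

    400-700 nm: 20 mJ/cm^2; 700-1050 nm: 20 * 10**(0.002*(lambda - 700)) mJ/cm^2.
    Wavelengths outside 400-1050 nm are not handled in this sketch.
    """
    if 400 <= wavelength_nm <= 700:
        return 20.0
    if 700 < wavelength_nm <= 1050:
        return 20.0 * 10 ** (0.002 * (wavelength_nm - 700))
    raise ValueError("wavelength outside the range covered by this sketch")

# Per-pulse fluences reported in the studies above (wavelength in nm, fluence in mJ/cm^2)
reported = [(720, 10.0), (720, 2.8), (805, 6.8)]
for wl, fluence in reported:
    limit = mpe_skin_single_pulse(wl)
    print(f"{wl} nm: {fluence:.1f} mJ/cm^2 vs MPE {limit:.1f} mJ/cm^2 -> "
          f"{'within' if fluence <= limit else 'exceeds'} single-pulse limit")
```

All three reported fluences fall well below this single-pulse limit, consistent with the authors' statements about safe laser energy.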
3. Imaging of normal joints using linear array probe
Although PAI of human finger joints using home-built systems based on single transducers has been validated successfully by several groups, a common problem is the limited imaging speed due to the need for a mechanical scan of the detector(s) around the target joint. Moreover, these home-built systems, despite providing satisfactory PA image quality, usually cannot enable concurrent US imaging. Clearly, realizing the PAI function through a linear array driven by a commercial US unit would provide many advantages and could accelerate the clinical acceptance of this novel imaging modality. With the dual-modality arrangement, US and PA images of the same joint can be obtained simultaneously, using the same system and resulting in naturally co-registered images. Since US is an established tool for musculoskeletal imaging, the US images could be used to guide the PAI procedure and help to interpret the PA images. More importantly, by using a linear array driven by a commercial-grade medical US system, the development of PAI can be accelerated by taking advantage of state-of-the-art US technologies, e.g., a large number of parallel channels facilitating real-time image acquisition and display.
In a study by Xu et al., PA imaging was achieved on a commercial US unit (z.one, Zonare Inc.) and used for imaging of human finger joints in vivo [34]. Unlike most of the previous studies, in which human fingers were imaged via cross sections, the imaging of finger joints in this study was performed along either the coronal or the sagittal middle planes, which is the convention followed in clinical US imaging of human peripheral joints for the diagnosis of inflammatory arthritis. As shown in Fig. 4, the PA signals from the illuminated joint were acquired using a linear array (L10-5, Zonare Inc.) with 128 elements working in the frequency range of 5-10 MHz. The laser light at a 740-nm wavelength was coupled into a bundle of optical fibers and delivered to the human finger with an estimated light fluence on the skin surface of 4 mJ/cm². Both PA and US images of normal PIP joints of volunteers were acquired and compared, with example results shown in Fig. 4(d)-(g). Despite relying on different contrast mechanisms, the PA and US images from the same joints showed similar structures: both could delineate the contours of the tendons and bones with comparable spatial resolution.
Although the results from this initial work involving a linear array driven by a clinical US unit are encouraging, the capability of this commercial-grade US unit was not fully utilized in the imaging experiment due to limited access to the functions of the US unit. First, although the PA signals were acquired at high speed using the linear array, the PA image reconstruction and display were completed offline on a standalone PC connected to the US unit. Second, to achieve a sufficient signal-to-noise ratio (SNR) for PAI, the PA signals from the target joints had to be averaged over 90 laser pulses, which further reduced the imaging speed.
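The trade-off between averaging and acquisition speed can be made explicit: under the usual assumption of uncorrelated noise, averaging N pulses improves the SNR by roughly sqrt(N), while the effective frame rate drops to the laser pulse repetition frequency divided by N. The short sketch below assumes a 10 Hz repetition rate, as used by the lasers described in this section; the specific numbers are for illustration only.

```python
import math

def averaging_tradeoff(n_avg, prf_hz=10.0):
    """Return (SNR gain in dB, effective frame rate in Hz) for n_avg-pulse averaging.

    Assumes uncorrelated noise, so SNR improves by sqrt(n_avg).
    """
    snr_gain_db = 20 * math.log10(math.sqrt(n_avg))
    frame_rate = prf_hz / n_avg
    return snr_gain_db, frame_rate

for n in (1, 10, 90):
    gain, fps = averaging_tradeoff(n)
    print(f"{n:3d} averages: +{gain:4.1f} dB SNR, {fps:5.2f} frames/s")
```

With 90-pulse averaging and a 10 Hz laser, the SNR gain is roughly 19.5 dB, but the effective frame rate falls to about 0.11 frames per second, which illustrates why single-shot sensitivity is so valuable for real-time dual-modality imaging.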
In another study performed by the same research group [35], real-time PA and US dual-modality imaging of human finger joints was achieved using a linear array probe (CL15-7, ATL) driven by a research US platform (V1, Verasonics). The array probe, with 128 elements, an 11.25 MHz center frequency, and a 75% −6 dB bandwidth, scanned the finger joints along sagittal sections. Powered by a GPU card (GeForce GTX 690, 3072 CUDA cores, NVIDIA) in the controlling PC, the US platform was able to perform signal acquisition, image reconstruction, and display for both PA and US imaging in a truly real-time manner. To facilitate accelerated parallel computation, the back-projection algorithm for PA image reconstruction was optimized, which not only reduced the computational cost but also made the program executable on the GPU card. This imaging system, with the PA and US functions fully integrated, was tested for its performance in imaging normal human finger joints in vivo, with example results shown in Fig. 5. Using the same laser, which delivered a light fluence of 4 mJ/cm² on the skin surface at a wavelength of 720 nm, this system achieved an imaging frame rate of 10 Hz, which was limited by the pulse repetition rate of the laser.
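At its core, this kind of back-projection reconstruction is a delay-and-sum operation: for each image pixel, the one-way acoustic time of flight to every array element is computed, the corresponding RF samples are gathered, and their sum forms the pixel value. The NumPy sketch below illustrates the idea for a linear array; the aperture, sampling rate, and speed of sound are illustrative assumptions rather than the parameters of the system described above, and a GPU implementation would map the same per-pixel gather onto parallel threads.

```python
import numpy as np

def pa_delay_and_sum(rf, elem_x, fs, c, x_grid, z_grid):
    """Naive delay-and-sum PA back-projection for a linear array.

    rf      : (n_elem, n_samples) RF data with t = 0 at the laser pulse
    elem_x  : (n_elem,) lateral element positions in meters (array at z = 0)
    fs      : sampling rate in Hz
    c       : speed of sound in m/s
    x_grid, z_grid : 1-D arrays defining the image grid in meters
    returns : (len(z_grid), len(x_grid)) reconstructed image
    """
    n_elem, n_samples = rf.shape
    X, Z = np.meshgrid(x_grid, z_grid)                 # pixel coordinates (nz, nx)
    img = np.zeros_like(X)
    for e in range(n_elem):
        # one-way time of flight from every pixel to element e
        dist = np.sqrt((X - elem_x[e]) ** 2 + Z ** 2)
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samples - 1)
        img += rf[e, idx]                              # gather and accumulate
    return img

# Illustrative parameters only (not those of the Verasonics system above)
fs, c = 40e6, 1540.0
elem_x = np.linspace(-9.6e-3, 9.6e-3, 128)             # 128 elements, ~19 mm aperture
rf = np.random.randn(128, 2048)                        # placeholder RF data
x_grid = np.linspace(-8e-3, 8e-3, 160)
z_grid = np.linspace(2e-3, 25e-3, 230)
image = pa_delay_and_sum(rf, elem_x, fs, c, x_grid, z_grid)
```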
4. Imaging of joints affected by arthritis
As one of the earliest attempts at in vivo detection of arthritis in human finger joints using PAI, Xiao et al. developed a system involving spherical scanning with eight 1-MHz transducers (Valpey Fisher) [36]. As shown in Fig. 6(a), the transducers were equally spaced along a 210° arc arm and could be rotated around the target joint for either 2D or 3D imaging. Pulsed light at an 805 nm wavelength, with a repetition rate of 10 Hz and a pulse width of < 10 ns, was guided via an optical fiber and illuminated the target joint with a light fluence on the skin surface of around 10 mJ/cm². In the initial trial, six subjects, including two osteoarthritis (OA) patients and four healthy controls, were enrolled. All participants were white females with a mean age of 61 years (range 45-71). The left second DIP joint of each subject was imaged and clinically examined by a rheumatologist prior to the experimental scanning. Fig. 6(b)-(g) presents the 2D images along the coronal sections of the DIP joints for the six subjects examined. In each image, the bones, with the highest absorption, could be delineated from the adjacent tissues, and the joint space could also be identified (marked by the arrow). Compared with the healthy joints, the OA joints showed elevated absorption coefficients in the joint cavities and narrowed joint spaces. The average absorption coefficients of cartilage (red) and fluid (blue) in the OA joints, as quantified from the PA images, were also compared to those from the normal joints, as presented in Fig. 6(h). This initial trial on OA patients, although involving only a single laser wavelength and a limited number of human subjects, led to some encouraging findings.
Using the system shown in Fig. 3, van Es et al. also performed an initial study on a 42-year-old female patient affected by early rheumatoid arthritis [37]. This home-built system, equipped with 32 transducers distributed along an arc, driven by 32 independent channels, and using six optical fiber bundles for light delivery, was designed specifically for cross-sectional imaging of human finger joints in vivo. The patient had an inflamed right third PIP joint with signs of synovitis on color Doppler US. Fig. 7 shows the PA image from the inflamed PIP joint. Laser light at a wavelength of 800 nm illuminated the finger with a light fluence of less than 5.6 mJ/cm². To obtain the result in Fig. 7, twelve views around the finger were taken with 20 averages per view, and the acquisition of each 2D cross-sectional PA image required 1 min. In the PA image, a collection of small blood vessels could be distinguished at the dorsal side. This region, marked with a circle, was situated 4-6 mm from the surface, between the skin and the bone, and was believed to correspond to the synovial membrane dorsal to the joint space. This group of small thread-shaped and point-shaped blood vessels was thought to be associated with arthritis; however, this was not validated in this initial study on the arthritis patient.

Based on the imaging system described in Fig. 5, Jo et al. recently performed an initial clinical trial on patients affected by inflammatory arthritis [38]. The dual-modality system allowed simultaneous 2D PA and US imaging of human finger joints. Two functional biomarkers, hyperemia (increased blood content) and hypoxia (decreased blood oxygen saturation) in joint tissues, were explored as measures to differentiate arthritic from normal joints.
By performing PAI at a single wavelength (580 nm), the spatially distributed hemoglobin content reflecting hyperemia in the synovial tissue of the metacarpophalangeal (MCP) joints of 16 patients was imaged and compared to the results from 16 healthy controls, with example results shown in Fig. 8. For each joint, the PA image in pseudo-color was superimposed on the US image scanned using the same system. Taking advantage of the excellent sensitivity of PAI to blood and the superior performance of US in delineating joint structures, hyperemia and its relative position in the joint could be visualized. In the PA images from joints affected by arthritis, strong signals in the areas next to the phalanges were apparent alongside the expected strong signals from the skin and phalanges. These additional signals were attributed to hyperemia and were confirmed by the Doppler US images of the same joints acquired using a commercial US unit (Z.ONE PRO, Zonare). The hyperemia detected by PAI was further quantified, and Student's t-tests were conducted to assess whether the PA measurements of hyperemia could differentiate the arthritic joints from the normal ones. The statistical analyses demonstrated significant differences between the arthritic and the normal joints.
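The quantification step can be sketched as follows: a summary PA measure (for example, the mean amplitude within a region of interest delineated on the co-registered US image) is computed per joint, and the patient and control groups are compared with a two-sample t-test. The ROI definition and all numbers below are assumptions for illustration, not data from the study.

```python
import numpy as np
from scipy import stats

def mean_pa_in_roi(pa_image, roi_mask):
    """Mean PA amplitude inside a binary region of interest, e.g. the synovial
    area delineated on the co-registered US image."""
    return float(pa_image[roi_mask].mean())

# Illustrative example: one synthetic PA frame and a rectangular ROI
rng = np.random.default_rng(0)
pa_image = rng.random((230, 160))
roi_mask = np.zeros_like(pa_image, dtype=bool)
roi_mask[100:140, 40:120] = True                      # hypothetical synovial ROI
print(f"mean PA in ROI: {mean_pa_in_roi(pa_image, roi_mask):.3f}")

# Group comparison with Student's t-test (placeholder per-joint summaries)
arthritic = rng.normal(loc=1.8, scale=0.4, size=16)   # 16 patient joints
controls = rng.normal(loc=1.0, scale=0.3, size=16)    # 16 control joints
t_stat, p_value = stats.ttest_ind(arthritic, controls)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```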
In addition, by conducting PAI of each joint using two laser wavelengths (576 nm and 584 nm), decreased hemoglobin oxygenation (i.e., hypoxia) in synovium as another physiological biomarker of synovitis was assessed. Marked differences in blood oxygen saturation levels between the arthritis and the normal joints were detected. The result from this initial trial on human subjects is encouraging, suggesting that PAI, as a complement to musculoskeletal US, may enable the assessment of additional physiology biomarkers of inflammatory arthritis in vivo.
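Two-wavelength estimation of oxygen saturation of this kind reduces to a small linear unmixing problem: the fluence-compensated PA amplitude at each wavelength is modeled as proportional to μa(λ) = εHbO2(λ)·CHbO2 + εHb(λ)·CHb, and sO2 = CHbO2 / (CHbO2 + CHb). The sketch below assumes wavelength-independent fluence and uses placeholder extinction coefficients at 576 nm and 584 nm; real values should be taken from tabulated hemoglobin spectra, and fluence correction is required in practice.

```python
import numpy as np

def estimate_so2(pa_amplitudes, eps_hbo2, eps_hb):
    """Estimate blood oxygen saturation from PA amplitudes at two or more wavelengths.

    pa_amplitudes      : (n_wl,) fluence-compensated PA amplitudes at one pixel
    eps_hbo2, eps_hb   : (n_wl,) molar extinction coefficients of HbO2 and Hb
    Returns sO2 as a fraction in [0, 1].
    """
    E = np.column_stack([eps_hbo2, eps_hb])           # unmixing matrix (n_wl, 2)
    c_hbo2, c_hb = np.linalg.lstsq(E, np.asarray(pa_amplitudes), rcond=None)[0]
    return c_hbo2 / (c_hbo2 + c_hb)

# Placeholder extinction coefficients at 576 nm and 584 nm (illustrative only;
# use tabulated hemoglobin spectra in practice)
eps_hbo2 = np.array([55000.0, 34000.0])
eps_hb = np.array([40000.0, 42000.0])
print(f"sO2 ~ {estimate_so2([1.00, 0.72], eps_hbo2, eps_hb):.2f}")
```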
5. Conclusion and discussion
Although inflammatory arthritis is a prevalent and often disabling disorder, the development of effective therapy is hindered by the lack of objective outcome measures. While patient-reported quality-of-life measures and markers of inflammation are still used in diagnosis and treatment decisions, there is a significant need for robust joint imaging technology. Early initiation and timely escalation of anti-inflammatory therapies prevent disease progression, preserve mobility and function, and maintain quality of life. This targeted approach requires diagnostic technologies sensitive enough to detect pathological and functional change in response to treatment. Specifically, the imaging should enable early detection and accurate grading of subtle inflammation in intra-articular and juxta-articular soft tissues at the early stage of arthritis, well before irreversible bone destruction and remodeling occur; the imaging should also help with early identification of non-responders, so that treatment modification can be implemented earlier, minimizing unnecessary exposure to the potent side effects of drugs and reducing costs by limiting ineffective treatment. Novel soft-tissue imaging technologies such as PAI, which feature sensitivity comparable to MRI [39][40][41] but at significantly reduced cost, could help meet this need.

Recent preliminary trials of PAI on human subjects, including those affected by inflammatory arthritis, have presented encouraging results, suggesting that the diagnostic information from this emerging technology could be comparable to, or better than, that from the imaging technologies currently used in the clinical management of inflammatory arthritis and other musculoskeletal disorders, such as MRI and US. Besides its non-invasive and non-ionizing nature, the advantages of PAI in inflammatory arthritis include excellent soft-tissue contrast and intrinsically high sensitivity for characterizing both blood volume and blood oxygen saturation. PAI, either stand-alone or combined with state-of-the-art US imaging techniques, could provide clinicians with a powerful and easy-to-use tool for screening, diagnosis, and treatment monitoring of arthritis. Both the technical development and the clinical trials of PAI can benefit greatly from combining this technology with clinically approved advanced diagnostic US imaging, which currently serves as a gold-standard equivalent in clinical diagnosis. By combining PA with US, important and unique functional information, such as hypoxia and increased blood volume/hemoglobin (even in the absence of any appreciable flow), can be added to the pathological information already available, such as synovial thickening, erosions, and subtle increases in neovascularity. Combining the identification of underlying pathological findings with the associated functional changes in a single scan will facilitate more definitive early diagnosis and more comprehensive disease assessment than conventional musculoskeletal US alone.

Recent studies of PAI of human joints have focused on the small joints of the fingers. Besides the fact that these joints are smaller in size and can therefore be scanned in their entirety with high-frequency US probes, the peripheral joints of the hands and feet are usually among the earliest to be affected by some forms of inflammatory arthritis and are widely accepted as the best markers of overall joint damage [42].
The imaging depth of PAI, however, should be sufficient for the study of larger human joints, such as the knee, ankle, and wrist, especially when working in the optical spectral window of 700-950 nm. To explore this feasibility, future trials should also involve these larger joints, which are already routinely scanned with US imaging.
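A rough sense of the available optical penetration can be obtained from diffusion theory, where the effective attenuation coefficient is μeff = sqrt(3·μa·(μa + μs')) and the 1/e optical penetration depth is 1/μeff. The sketch below uses generic soft-tissue values near 800 nm as assumptions (μa ≈ 0.1 cm⁻¹, μs' ≈ 10 cm⁻¹); actual joint tissues vary, and PA signals can typically be detected at several times this depth because sensitive acoustic detection partially compensates for the exponential light attenuation.

```python
import math

def penetration_depth_cm(mu_a, mu_s_prime):
    """Diffusion-theory 1/e optical penetration depth in cm.

    mu_a       : absorption coefficient in cm^-1
    mu_s_prime : reduced scattering coefficient in cm^-1
    """
    mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))
    return 1.0 / mu_eff

# Generic soft-tissue values near 800 nm (assumed, for illustration only)
print(f"1/e penetration depth ~ {penetration_depth_cm(0.1, 10.0):.2f} cm")
```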
Given the lack of bedside availability and the high cost of MRI, there is a clinical need for a reliable point-of-care tool to assess disease activity and response to therapy in patients with inflammatory arthritis. This requires the documented outcome measures to be robust, precise, practical, reliably reproducible, and sensitive to early inflammatory change. PAI has great potential to become this clinical tool. To promote the acceptance of PAI in clinical settings, future development should focus on collecting convincing clinical data documenting the accuracy of the core outcomes of PAI (e.g., assessment of blood volume and hypoxia) in the management of arthritis. Another hypothesis is that PA in combination with US should be able to confidently detect the treatment response earlier than US alone by quantifying both functional and pathological hemodynamic changes in soft articular tissues. To examine these hypotheses, blinded tests on large numbers of patients, both pre- and post-treatment, objectively comparing the performance of PAI plus US against US alone, could be conducted using clinical scores of disease activity and/or MRI findings as the gold standard. PAI image quality and resolution could be improved by using detectors working at higher frequencies. The value of volumetric information for the diagnosis and treatment assessment of arthritis is very promising and should also be explored by comparing 3D and 2D PAI findings. We also expect that the sensitivity of PAI based on endogenous tissue contrast may be comparable to that of contrast-enhanced MRI, so that PAI could be developed as a label-free, low-cost surrogate for MRI in arthritis imaging. To the best of our knowledge, there have been no studies comparing the diagnostic accuracy of PAI and MRI; a future study comparing the two modalities might be critical for establishing PAI as a standard diagnostic tool for arthritis.

Fig. 7. (a) Photograph of the third PIP joint of a patient affected by early rheumatoid arthritis. (b) Cross-sectional PA image at the location indicated in (a). A magnified view of the indicated region shows small thread-shaped and point-shaped blood vessels at large depth (4-6 mm), which are thought to be associated with arthritis. Adapted with permission from Ref. [37].

Caption fragment (likely from Fig. 8): (f) The averaged intensity of the pseudo-color pixels in the joint area (p < 0.001). Adapted with permission from Ref. [38].
Conflicts of interest
The authors declare that there are no conflicts of interest.
"year": 2018,
"sha1": "e5b96e3a14f77ae01c4653cd28face4fe3df72ad",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.pacs.2018.07.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e5b96e3a14f77ae01c4653cd28face4fe3df72ad",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.